Welcome to NeTCoM 2019

11th International Conference on Networks & Communications

November 23 ~ 24, 2019, Zurich, Switzerland



Accepted Papers
A Hybrid Model for Evacuation Simulation and Efficiency Optimization in Large Complex Buildings

Hao Yuan, Guo Yu, Yifan Ma, Jieneng Chen, Xiongda Chen, Tongji University, China

ABSTRACT

Traditional methods for simulating pedestrian flow include the Cellular Automaton, the Artificial Potential Field, and others. This paper refines the traditional Cellular Automaton and combines it with an adapted Ant Colony model and the Artificial Potential Field to simulate the evacuation process within large buildings. The work applies the model to the Louvre to estimate the total evacuation time on one floor and, after systematic analysis, identifies the bottlenecks along the evacuation routes, demonstrating the applicability and flexibility of the model.
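
For illustration, the sketch below implements a bare-bones floor-field cellular automaton of the general kind the abstract describes: a BFS distance field toward the exit plays the role of the potential field, and agents greedily step one cell per time step. The floor plan, agent count, and update rule are invented for the example and are not the authors' model.

```python
# Minimal floor-field cellular automaton sketch (illustrative assumptions
# throughout: toy floor plan, 8 agents, greedy single-step update rule).
from collections import deque
import random

WALL = '#'
grid = [
    "##########",
    "#........#",
    "#..##....#",
    "#........#",
    "#####E####",
]
H, W = len(grid), len(grid[0])

# Static field: BFS distance of every walkable cell to the exit, playing the
# role of the artificial potential field that pulls agents toward the exit.
exit_cell = next((r, c) for r in range(H) for c in range(W) if grid[r][c] == 'E')
dist = {exit_cell: 0}
q = deque([exit_cell])
while q:
    r, c = q.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W and grid[nr][nc] != WALL and (nr, nc) not in dist:
            dist[(nr, nc)] = dist[(r, c)] + 1
            q.append((nr, nc))

random.seed(0)
free = [p for p in dist if grid[p[0]][p[1]] == '.']
agents = set(random.sample(free, 8))          # place 8 agents at random

steps = 0
while agents:
    steps += 1
    occupied = set(agents)
    for a in sorted(agents):                  # deterministic update order
        moves = [(a[0] + dr, a[1] + dc)
                 for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))
                 if (a[0] + dr, a[1] + dc) in dist]
        best = min(moves, key=lambda p: dist[p])   # steepest descent on the field
        if best == exit_cell:                 # reached the exit: leave the building
            occupied.discard(a)
        elif best not in occupied:            # step only into an empty cell
            occupied.discard(a)
            occupied.add(best)
    agents = occupied
print(f"all agents evacuated after {steps} time steps")
```

The step count where agents pile up in front of narrow passages is precisely the kind of bottleneck signal the abstract refers to.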

KEYWORDS

Evacuation Simulation Model, Cellular Automaton, Artificial Potential Field, Ant Colony, Large Complex Buildings.


Quality Model to the Adaptive Guidance

Hamid Khemissa1 and Mourad Oussala2, 1Computer Systems Laboratory, Faculty of Electronics and Informatics, Computer Science Institute, USTHB: University of Science and Technology Houari Boumediene, Algiers, Algeria and 2Laboratoire des Sciences du Numérique de Nantes (LS2N), Faculty of Sciences, Nantes University, France

ABSTRACT

The need for adaptive guidance systems is now recognized for all software development processes. The new needs generated by the mobility context of software development require these guidance systems to adapt, in both quality and capability, to possible variations of the development context. This paper deals with the quality of adaptive guidance in satisfying the developer’s guidance needs. We propose a quality model for adaptive guidance that offers a more detailed description of the quality factors of guidance service adaptation. This description aims to assess the quality level of each guidance adaptation factor and thereby to evaluate adaptive guidance services as a whole.

KEYWORDS

Quality model, Guidance System Quality, Adaptive Guidance, Plasticity.


Collaborative and Fast Decryption Using Fog Computing and a Hidden Access Policy

Ahmed Saidi1, Omar Nouali2 and Abdelouahab Amira3, 1,2,3Department of Computer Security, Research Center for Scientific and Technical Information, Algiers, Algeria and 1,3Faculty of Exact Sciences, Université de Bejaia, 06000 Bejaia, Algeria

ABSTRACT

Nowadays, IoT (Internet of Things) devices are everywhere and are used in many domains, including e-health, smart cities, and vehicular networks. Users rely on IoT devices such as smartphones to access and share data anytime and from anywhere. However, the use of such devices also introduces many security issues, including in data sharing. For this reason, security mechanisms such as ABE (Attribute-Based Encryption) have been introduced in IoT environments to secure data sharing. Nevertheless, Ciphertext-Policy ABE (CP-ABE) is resource intensive in both the encryption and the decryption processes, which makes it ill-suited for IoT environments where devices have limited computing resources and low energy. In addition, in CP-ABE the privacy of the access policy is not assured, because the policy is sent in clear text along with the ciphertext. To overcome these issues, we propose a new approach based on CP-ABE that uses fog devices to reduce bandwidth and partially delegates data decryption to them. It also preserves the privacy of the access policy by adding false attributes to the policy. We discuss the security properties and the complexity of our approach, showing that it ensures the confidentiality of the data and the privacy of the access policy while improving on the complexity of existing approaches.
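
The policy-hiding ingredient can be illustrated without a full CP-ABE implementation. The sketch below mixes false "decoy" attributes into an (AND-only) access policy and attaches only salted hashes to it, so the policy is never exposed in clear text; the attribute names, hashing scheme, and threshold test are illustrative assumptions, not the paper's construction.

```python
# Conceptual sketch of hiding an access policy with false attributes.
# A real deployment would build this on top of a CP-ABE scheme, which this
# sketch deliberately does not implement.
import hashlib
import os

def hide_policy(real_attrs, decoys, salt=None):
    """Return (salt, hashed attribute set) mixing real and false attributes."""
    salt = salt or os.urandom(16)
    hashed = {hashlib.sha256(salt + a.encode()).hexdigest()
              for a in real_attrs + decoys}
    return salt, hashed

def satisfies(user_attrs, salt, hidden_policy, threshold):
    """A user (or a fog node acting for it) counts how many of its own
    attributes appear in the hidden policy, without learning the decoys."""
    hits = sum(hashlib.sha256(salt + a.encode()).hexdigest() in hidden_policy
               for a in user_attrs)
    return hits >= threshold

real = ["doctor", "cardiology", "hospital:A"]
salt, hidden = hide_policy(real, decoys=["nurse", "hospital:B"])
print(satisfies(["doctor", "cardiology", "hospital:A"], salt, hidden, len(real)))  # True
print(satisfies(["nurse", "hospital:B"], salt, hidden, len(real)))                 # False
```

An observer of the ciphertext sees only opaque hashes and cannot tell which attributes are real and which are decoys, which is the privacy property the abstract targets.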

KEYWORDS

Fog Computing, Access Control, Attribute-Based Encryption, Decryption Outsourcing


Managed Cloud Operations

Andrei Petrescu and Mihai Carabas, University POLITEHNICA of Bucharest, Splaiul Independentei 313, Bucharest, Romania

ABSTRACT

In today’s fast-moving world, advances in technology occur at a rapid rate. Keeping up is difficult but mandatory, and we must find solutions that make the process easy. Among these technologies, cloud computing is one of the fastest evolving. We explore the tools that help us reach our goal and discuss the main subject of our paper, namely keeping up to date with the latest releases of OpenStack private cloud technology. We also present our results and how we found the best solution for the context of this paper.

KEYWORDS

Cloud, OpenStack, Cinder, Nova, Keystone, Glance, Heat


Semantic Document Classification based on Strategies of Semantic Similarity Computation and Correlation Analysis

Shuo Yang1, Ran Wei2, Hengliang Tan1 and Jiao Du1, 1School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, China and 2Department of Computer Science, University of California, Irvine, California, USA

ABSTRACT

Document (text) classification is a common task in e-business, supporting users in activities such as document collection, analysis, categorization, and storage. Semantic analysis can help to improve the performance of document classification. Although semantics has been considered in the design of previous methods for automatic document classification, it deserves more attention as the number of content-rich electronic documents, forum posts, and blogs online keeps growing, since better automatic classification can reduce human workload by a great margin. This paper proposes a novel semantic document classification approach aiming to resolve two types of semantic problems: (1) the polysemy problem, using a novel semantic similarity computing strategy (SSC), and (2) the synonym problem, through a novel strong correlation analysis method (SCM). Experiments show that our strategies help to improve the performance of the baseline methods.
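
A minimal sketch of the embedding machinery such similarity strategies build on: documents are mapped to vectors (here, the average of toy word embeddings) and compared by cosine similarity, so synonyms with nearby embeddings score as similar even when they share no surface words. The 3-dimensional vectors below are invented for illustration and stand in for real pre-trained embeddings.

```python
# Toy embedding-based document similarity (illustrative vectors, not real
# pre-trained embeddings; the paper's SSC/SCM strategies are more involved).
import numpy as np

word_vec = {
    "car":    np.array([0.90, 0.10, 0.00]),
    "auto":   np.array([0.85, 0.15, 0.05]),   # near-synonym of "car"
    "fast":   np.array([0.20, 0.80, 0.10]),
    "stock":  np.array([0.00, 0.10, 0.90]),
    "market": np.array([0.10, 0.00, 0.95]),
}

def doc_embedding(tokens):
    """Average the vectors of known tokens (a common baseline)."""
    vecs = [word_vec[t] for t in tokens if t in word_vec]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

d1 = doc_embedding("fast car".split())
d2 = doc_embedding("fast auto".split())    # synonym: no shared content word
d3 = doc_embedding("stock market".split())
print(cosine(d1, d2))   # high: semantically close despite different words
print(cosine(d1, d3))   # low: unrelated topics
```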

KEYWORDS

semantic document classification, semantic similarity, semantic embedding, correlation analysis, machine learning


Automated Generation of Computer Graded Unit Testing-Based Programming Assessments for Education

Sébastien Combéfis1,2 and Guillaume de Moffarts2, 1ECAM Brussels Engineering School, Brussels, Belgium and 2Computer Science and IT in Education ASBL, Louvain-la-Neuve, Belgium

ABSTRACT

Automatic assessment of code, in particular to support education, is an important feature that several Learning Management Systems (LMS) include, at least to some extent. Several kinds of assessments can be designed, such as “fill in the following code”, “write a function that…”, or “correct the bug in the following program” exercises. One difficulty for an instructor is to create such programming exercises, that is, to write the statement and provide all the information the platform needs to grade the assessment. Another difficulty appears when the instructor wants to reuse the exercises on another LMS, since they have to be re-encoded for the other platform, possibly with a completely different way of describing and configuring the exercise. This paper presents a tool that automatically generates programming exercises in several programming languages from one single description. The generated exercises can then be automatically graded by the same platform, providing intelligent feedback to support the learner. This paper focuses on and details unit testing-based exercises, and provides insights into new kinds of exercises that the platform could generate in the future with some additional development.
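
The grading side can be illustrated with a small sketch: run a named function from the submitted source against a list of (arguments, expected) test cases and return a score with feedback. The exercise format, function name, and feedback wording are assumptions for the example, not the tool's actual exercise description format.

```python
# Minimal unit-testing-based grader sketch (illustrative format; a real
# grader would sandbox the submission rather than exec it directly).

def grade(submission_src, func_name, tests):
    """Run `func_name` from the submitted source against (args, expected) tests."""
    namespace = {}
    try:
        exec(submission_src, namespace)        # in production: sandbox this!
        func = namespace[func_name]
    except Exception as exc:
        return 0, [f"Submission failed to load: {exc}"]
    passed, feedback = 0, []
    for args, expected in tests:
        try:
            got = func(*args)
        except Exception as exc:
            feedback.append(f"{func_name}{args} raised {exc!r}")
            continue
        if got == expected:
            passed += 1
        else:
            feedback.append(f"{func_name}{args}: expected {expected}, got {got}")
    return round(100 * passed / len(tests)), feedback

student_code = "def add(a, b):\n    return a + b"
score, notes = grade(student_code, "add", [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)])
print(score, notes)    # 100 [] for this correct submission
```

The collected feedback strings are the raw material for the "intelligent feedback" a platform can return to the learner.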

KEYWORDS

Code Grader, Programming Assessment, Code Exercise Generation, Computer Science Education


Lifetime Extension of Ad Hoc Wireless Network with Shadowing Effects

Amir J. Majid, Ph.D., College of Engineering, University of Science and Technology of Fujairah, UAE

ABSTRACT

A lifetime extension algorithm is implemented for ad hoc wireless networks with shadowing effects and simulated on the Matlab platform. The main aim is to maximize the lifetimes of sensors that cover a number of targeted zones, by sharing their subsets according to their minimum coverage failure probabilities, while accounting for shadowing effects in the network environment, for which the Path Loss Model (PLM) is used in the analysis.
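
For concreteness, the sketch below computes the standard log-normal shadowing path loss model and the coverage failure probability it induces, i.e. the probability that the received power at distance d falls below a sensitivity threshold. All numeric parameters (path loss exponent, shadowing deviation, sensitivity) are illustrative assumptions, not the paper's values.

```python
# Log-normal shadowing PLM and the resulting coverage failure probability.
import math

def path_loss_db(d, d0=1.0, pl0=40.0, n=3.0):
    """Mean path loss at distance d (metres): PL(d) = PL(d0) + 10 n log10(d/d0)."""
    return pl0 + 10.0 * n * math.log10(d / d0)

def coverage_failure_prob(d, tx_dbm=0.0, rx_min_dbm=-85.0, sigma_db=6.0):
    """P[received power < sensitivity] under zero-mean Gaussian shadowing
    with standard deviation sigma_db: Q((Pr_mean - Pmin) / sigma)."""
    pr_mean = tx_dbm - path_loss_db(d)
    margin = (pr_mean - rx_min_dbm) / sigma_db
    return 0.5 * math.erfc(margin / math.sqrt(2.0))   # Gaussian Q-function

for d in (10, 20, 30):
    print(f"d = {d:2d} m -> failure probability {coverage_failure_prob(d):.3f}")
```

These per-target failure probabilities are the quantities by which sensor subsets can be compared and shared to extend network lifetime.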

KEYWORDS

ad hoc, failure probability, PLM, shadowing, sensor lifetime, WSN


Measurement and Characterization of the Stationary Noise in Narrowband Power Line Communication

Raja Alaya and Rabah Attia, Tunisian Polytechnic School, University of Carthage, Tunisia

ABSTRACT

Understanding the interference scenario in power line networks is a key step in characterizing the power line communication (PLC) system. This paper focuses on the characterization and modelling of the stationary noise in narrowband PLC. Measurement and analysis of noise are carried out in the Tunisian outdoor Low Voltage (LV) power line network in the frequency band below 500 kHz. Based on existing models and measurement results, a parametric model of the noise is proposed, and the model parameters are statistically studied.
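
A sketch of the parametric-fitting step: estimate the power spectral density of a (here synthetic) noise trace with Welch's method and fit a simple decaying model in dB, N(f) = a + b·exp(-f/f0). Both the synthetic trace and the chosen functional form are assumptions for illustration; the paper fits its model to noise measured on the Tunisian LV network.

```python
# Estimate a noise PSD and fit a parametric background-noise model to it.
import numpy as np
from scipy.signal import welch, lfilter
from scipy.optimize import curve_fit

fs = 1_000_000                                   # 1 MS/s capture (assumption)
rng = np.random.default_rng(1)
white = rng.normal(size=fs // 10)                # 100 ms synthetic record
trace = lfilter([1.0], [1.0, -0.95], white)      # one-pole filter: PSD falls with f
trace += 0.05 * rng.normal(size=white.size)      # flat instrument noise floor

f, pxx = welch(trace, fs=fs, nperseg=4096)
keep = f <= 500_000                              # band of interest (< 500 kHz)
f, pxx_db = f[keep], 10 * np.log10(pxx[keep])

def model(f, a, b, f0):
    """Background-noise PSD in dB: floor `a` plus a term decaying with f."""
    return a + b * np.exp(-f / f0)

(a, b, f0), _ = curve_fit(model, f, pxx_db, p0=(-65.0, 30.0, 100_000.0))
print(f"fit: a = {a:.1f} dB, b = {b:.1f} dB, f0 = {f0 / 1e3:.1f} kHz")
```

Repeating the fit over many measured traces yields the statistics of (a, b, f0) that a parametric noise characterization reports.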

KEYWORDS

Power Line Communication, Measurement, Modelling, Narrowband Frequency, Noise


Token Bucket-based Throughput Constraining in Cross-layer Schedulers

Jeremy Van den Eynde and Chris Blondia, University of Antwerp - imec, IDLab - Department of Mathematics and Computer Science, Sint-Pietersvliet 7, 2000 Antwerp, Belgium

ABSTRACT

In this paper we consider upper- and lower-constraining users' service rates in a slotted, cross-layer scheduler context. Such schedulers often cannot guarantee these bounds, despite their usefulness in adhering to Quality of Service (QoS) requirements, aiding the admission control system, or providing different levels of service to users. We approach this problem with a low-complexity algorithm that is easily integrated into any utility function-based cross-layer scheduler. The algorithm modifies the weights of the associated Network Utility Maximization problem, rather than, for example, applying a token bucket to the scheduler's output or adding constraints in the physical layer. We study the efficacy of the algorithm through simulations with various schedulers from the literature and mixes of traffic. The metrics we consider show that we can bound the average service rate within about five slots for most schedulers. Schedulers whose weights are very volatile are more difficult to constrain.
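
The weight-modification idea can be sketched in a few lines: each user keeps an upper-rate and a lower-rate token bucket, and the scheduler's utility weight is zeroed when the upper bucket cannot cover another slot of service and boosted while the lower bucket indicates the user is behind its guaranteed rate. The rates, the boost rule, and the toy two-user scheduler below are illustrative assumptions, not the paper's exact algorithm.

```python
# Token-bucket-driven weight scaling in a toy slotted scheduler.

class TokenBucket:
    """Tokens arrive at `rate` per slot, capped at `burst`; service spends them."""
    def __init__(self, rate, burst):
        self.rate, self.burst, self.tokens = rate, burst, burst

    def refill(self):
        self.tokens = min(self.burst, self.tokens + self.rate)

    def spend(self, amount):
        self.tokens = max(0.0, self.tokens - amount)

SERVICE = 1.0                                  # service delivered per scheduled slot

def constrained_weight(base_weight, upper, lower):
    """Exclude a user whose upper-rate bucket cannot cover another slot;
    boost a user whose lower-rate bucket shows it is behind its guarantee."""
    if upper.tokens < SERVICE:
        return 0.0
    return base_weight * (1.0 + lower.tokens / lower.burst)

users = {
    "A": {"w": 1.0, "up": TokenBucket(2.0, 4.0), "lo": TokenBucket(1.0, 2.0)},
    "B": {"w": 3.0, "up": TokenBucket(0.4, 1.0), "lo": TokenBucket(0.2, 1.0)},
}
for slot in range(5):
    for u in users.values():
        u["up"].refill()
        u["lo"].refill()
    served = max(users, key=lambda n: constrained_weight(
        users[n]["w"], users[n]["up"], users[n]["lo"]))
    users[served]["up"].spend(SERVICE)
    users[served]["lo"].spend(SERVICE)
    print(f"slot {slot}: served {served}")     # B is throttled toward ~0.4/slot
```

Despite its three-times-larger base weight, user B is served only when its upper bucket has accumulated enough tokens, which is the throughput-capping behaviour being sketched.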

KEYWORDS

Cross-layer Scheduling, Quality of Service, Token Buckets, Resource allocation


Methodology to Evaluate WSN Simulators: Focusing on Energy Consumption Awareness

Michel Bakni1, Luis Manuel Moreno Chacon2, Yudith Cardinale2, Guillaume Terrasson1, and Octavian Curea1, 1Univ. Bordeaux, ESTIA Institute of Technology, F-64210 Bidart, France and 2Universidad Simon Bolivar, Caracas, 1080-A, Venezuela

ABSTRACT

Nowadays, there exists a large number of available network simulators that differ in their design, goals, and characteristics. Users who have to decide which simulator is the most appropriate for their particular requirements are lost when faced with a panoply of disparate and diverse simulators. Hence, there is an obvious need for guidelines that support users in selecting and customizing a simulator to suit their preferences and needs. In previous work, we proposed a generic and novel methodological approach to evaluate network simulators, considering a set of qualitative and quantitative criteria. However, it lacks criteria related to Wireless Sensor Networks (WSN). Thus, the aim of this work is threefold: (i) extend the previously proposed methodology to cover the evaluation of WSN simulators, including energy consumption modelling and scalability; (ii) survey the state of the art of WSN simulators, with the intention of identifying the most used and most cited in scientific articles; and (iii) demonstrate the suitability of our methodology by evaluating and comparing three of the most cited simulators. The application of our methodological approach leads to results that are measurable and comparable, giving a comprehensive overview of simulator features and their advantages and disadvantages. Thus, the methodology provides researchers with an evaluation tool that can be used to describe and compare WSN simulators in order to select the most appropriate one for a given scenario.
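
The kind of measurable, comparable output such a methodology targets can be sketched as a weighted scoring over criteria. The simulator names, criteria, weights, and 0-5 marks below are placeholders for illustration, not the paper's evaluation results.

```python
# Toy weighted-criteria scoring of WSN simulators (placeholder data).

criteria = {                       # criterion -> weight (weights sum to 1.0)
    "energy model":      0.35,
    "scalability":       0.25,
    "documentation":     0.20,
    "community support": 0.20,
}
marks = {                          # simulator -> criterion -> mark in [0, 5]
    "Simulator A": {"energy model": 4, "scalability": 3,
                    "documentation": 5, "community support": 4},
    "Simulator B": {"energy model": 2, "scalability": 5,
                    "documentation": 3, "community support": 5},
    "Simulator C": {"energy model": 5, "scalability": 2,
                    "documentation": 2, "community support": 3},
}

assert abs(sum(criteria.values()) - 1.0) < 1e-9
for sim, m in marks.items():
    score = sum(criteria[c] * m[c] for c in criteria)
    print(f"{sim}: weighted score {score:.2f} / 5")
```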

KEYWORDS

Methodology, Simulators, Wireless Sensor Networks, Energy Consumption


Finding Maximal Localizable Region in Wireless Sensor Networks by Merging Rigid Clusters

Saroja Kanchi, Department of Computer Science, Kettering University, Flint, MI, USA

ABSTRACT

Localization of a Wireless Sensor Network (WSN) is the problem of finding the geo-locations of the sensors in a network deployed in various applications. Given the proliferation of sensors, the localization and tracking of sensors have received considerable attention. Rigidity and flexibility properties of the underlying graph of the WSN have been studied as a means of determining the localizability of the nodes. In this paper, we present a new 3-merge technique for merging three rigid clusters of a network graph into a larger rigid cluster, and we use this algorithm to find maximal localizable regions within the WSN. We provide simulation results on random deployments of WSNs showing that this technique outperforms previously known algorithms for finding maximal localizable subregions. Moreover, the simulations show that the number of anchors needed to localize the entire WSN decreases as larger localizable regions are found.
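
Once a region is localizable, positions follow from anchored geometry; the sketch below shows the basic trilateration step of solving for a node's coordinates from distances to three known nodes by linearizing the circle equations. The coordinates are invented for illustration; the paper's contribution is the rigid-cluster merging that creates enough such anchored geometry, not this standard step.

```python
# Trilateration: locate a node from distances to three anchors.
import numpy as np

def trilaterate(anchors, dists):
    """Solve for (x, y) from 3+ anchors: subtracting the first circle
    equation (x-xi)^2 + (y-yi)^2 = di^2 from the others gives a linear system."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))   # ~ [3. 4.]
```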

KEYWORDS

Wireless Sensor Network, localization, rigidity


Motion estimation from noisy image sequences using new frequency weighting functions

El Mehdi Ismaili Alaoui, Laboratory of Computer Networks and Systems, Faculty of Sciences, Moulay Ismail University, Meknes, Morocco

ABSTRACT

Motion estimation is a signal-matching technique and a key component of target tracking, medical imaging, video compression, and many other systems. This paper presents four new estimators for frame-to-frame image motion estimation: the ROTH impulse response, the smoothed coherence transform (SCOT), the maximum likelihood (ML), and the Wiener estimators, all referred to as Generalized Cross-Correlation (GCC) estimators. These estimators are based on the cross-correlation of the received images, with various weighting functions used to prefilter the images before cross-correlation; the estimators and weighting functions are similar to those used in time delay estimation [1]. Since the performance of the GCC estimators degrades considerably with the signal-to-noise ratio (SNR), this factor is taken as a prime criterion in benchmarking the different GCC estimators. The GCC-Wiener estimator is found to be particularly well suited to robust motion estimation. The accuracy of the estimators is also discussed.
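
The GCC machinery is easy to sketch in one dimension: form the cross-spectrum of two signals, apply a frequency weighting (here SCOT, which whitens by the square root of the product of the two power spectra), and locate the peak of the inverse transform. The synthetic shifted signal below stands in for a displaced image line; in the paper the inputs are consecutive noisy frames, and the other weightings (ROTH, ML, Wiener) slot into the same pipeline.

```python
# 1-D GCC shift estimation with the SCOT weighting (synthetic test signal).
import numpy as np

rng = np.random.default_rng(0)
N, true_shift = 256, 7
x = rng.normal(size=N)                                   # "frame 1" line profile
y = np.roll(x, true_shift) + 0.5 * rng.normal(size=N)    # shifted, noisy "frame 2"

X, Y = np.fft.fft(x), np.fft.fft(y)
Gxy = X * np.conj(Y)                                     # cross-spectrum
# SCOT weighting: whiten by sqrt(Gxx * Gyy); plain GCC would use weight 1.
weight = 1.0 / np.sqrt(np.abs(X) ** 2 * np.abs(Y) ** 2 + 1e-12)
r = np.real(np.fft.ifft(Gxy * weight))                   # weighted correlation
lag = int(np.argmax(r))
if lag > N // 2:                                         # map FFT bins to signed lags
    lag -= N
print(f"estimated shift: {-lag} (true shift: {true_shift})")
```

Swapping `weight` for the ROTH, ML, or Wiener expression changes how aggressively the cross-spectrum is whitened, which is exactly where the SNR sensitivity compared in the paper enters.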

KEYWORDS

Motion estimation, Motion vector field, Whitening function, Noisy image sequences, GCC-estimators