El Mehdi Ismaili Alaoui, Laboratory of Computer Networks and Systems, Faculty of Sciences, Moulay Ismail University, Meknes, Morocco
Motion estimation is a signal-matching technique and a key component of target tracking, medical imaging, video compression, and many other systems. This paper presents four new estimators for frame-to-frame image motion estimation: the ROTH impulse response, the smoothed coherence transform (SCOT), the maximum likelihood (ML), and the Wiener estimators, collectively referred to as Generalized Cross-Correlation (GCC) estimators. They are based on the cross-correlation of the received images, with various weighting functions used to prefilter the images before cross-correlation; the estimators and weighting functions are similar to those used in time delay estimation. Since the performance of the GCC estimators degrades considerably with the signal-to-noise ratio (SNR), this factor has been taken as the prime criterion for benchmarking them. The GCC-Wiener estimator has been found to be particularly well suited to robust motion estimation. The accuracy of the estimators is also discussed.
Motion estimation, Motion vector field, Whitening function, Noisy image sequences, GCC-estimators
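The abstract notes that the GCC estimators and weighting functions mirror those used in time delay estimation. As a rough illustration of that 1-D analogue (not the paper's 2-D image method), the sketch below estimates a circular shift between two signals with the GCC prefiltered by the SCOT weighting 1/sqrt(Sxx·Syy); all names and the test signal are hypothetical.

```python
import numpy as np

def gcc_scot_delay(x, y):
    """Estimate the circular shift of y relative to x via the generalized
    cross-correlation, whitened by the SCOT weighting 1/sqrt(Sxx * Syy)."""
    n = len(x)
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    cross = Y * np.conj(X)                          # cross-spectrum
    scot = 1.0 / (np.abs(X) * np.abs(Y) + 1e-12)    # SCOT whitening weight
    cc = np.fft.irfft(cross * scot, n)              # generalized cross-correlation
    shift = int(np.argmax(cc))                      # correlation peak = delay
    return shift - n if shift > n // 2 else shift   # map to signed lag

rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
delayed = np.roll(sig, 5)            # shift by 5 samples
print(gcc_scot_delay(sig, delayed))  # -> 5
```

The whitening flattens the spectrum so the correlation peak sharpens, which is why these weightings help at moderate SNR; in the noise-free case the peak is an exact delta at the true lag.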
Hao Yuan, Guo Yu, Yifan Ma, Jieneng Chen, Xiongda Chen, Tongji University, China
Traditional methods for simulating the flow of people include the Cellular Automaton, the Artificial Potential Field, and others. This paper refines the traditional Cellular Automaton and combines it with an adapted Ant Colony model and the Artificial Potential Field to simulate the evacuation process within large buildings. The model is applied to the Louvre to estimate the total evacuation time on one floor and, after systematic analysis, to identify the bottlenecks along the evacuation routes, demonstrating the applicability and flexibility of the model.
Evacuation Simulation Model, Cellular Automaton, Artificial Potential Field, Ant Colony, Large Complex Buildings.
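The paper's full model layers Ant Colony and Artificial Potential Field components on top of the Cellular Automaton; as a minimal sketch of the CA core only, the toy below moves pedestrians on a grid down a BFS "floor field" toward a single exit. The room, crowd positions, and conflict rule are all hypothetical illustrations, not the paper's calibrated model.

```python
import numpy as np
from collections import deque

def floor_field(free, exit_pos):
    """Static floor field: BFS hop distance from the exit over free cells."""
    dist = np.full(free.shape, np.inf)
    dist[exit_pos] = 0
    q = deque([exit_pos])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < free.shape[0] and 0 <= nc < free.shape[1]
                    and free[nr, nc] and dist[nr, nc] == np.inf):
                dist[nr, nc] = dist[r, c] + 1
                q.append((nr, nc))
    return dist

def step(occupied, dist, exit_pos):
    """One synchronous CA update: pedestrians (processed closest-to-exit
    first, which also resolves conflicts) move to the free 4-neighbour
    with the lowest field value; stepping onto the exit cell evacuates."""
    nxt = set()
    for cell in sorted(occupied, key=lambda p: dist[p]):
        best = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (cell[0] + dr, cell[1] + dc)
            if (0 <= cand[0] < dist.shape[0] and 0 <= cand[1] < dist.shape[1]
                    and dist[cand] < dist[best] and cand not in nxt):
                best = cand
        if best != exit_pos:       # reaching the exit = evacuated
            nxt.add(best)
    return nxt

free = np.ones((5, 5), dtype=bool)   # one empty 5x5 room
exit_pos = (0, 0)
dist = floor_field(free, exit_pos)
crowd = {(4, 4), (2, 3), (3, 1)}
t = 0
while crowd:                         # total evacuation time, in CA steps
    crowd = step(crowd, dist, exit_pos)
    t += 1
print(t)  # -> 8 (the farthest pedestrian is 8 cells from the exit)
```

Counting steps until the grid empties is the same measurement the paper makes at building scale: the evacuation time is dominated by the longest path plus congestion delays.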
Hamid Khemissa1 and Mourad Oussala2, 1Computer Systems Laboratory, Faculty of Electronics and Informatics, Computer Science Institute, USTHB: University of Science and Technology Houari Boumediene, Algiers, Algeria and 2Laboratoire des Sciences du Numérique de Nantes (LS2N), Faculty of Sciences, Nantes University, France
The need for adaptive guidance systems is now recognized for all software development processes. The new needs generated by the mobile development context require these guidance systems to adapt, in both quality and capability, to possible variations of the development context. This paper deals with adaptive guidance quality, aiming to satisfy the developer's guidance needs. We propose a quality model for adaptive guidance that offers a detailed description of the quality factors of guidance service adaptation. This description makes it possible to assess the quality level of each guidance adaptation factor and thereby to evaluate adaptive guidance services.
Quality model, Guidance System Quality, Adaptive Guidance, Plasticity.
Ahmed Saidi1, Omar Nouali2 and Abdelouahab Amira3, 1,2,3Department of Computer Security, Research Center for Scientific and Technical Information, Algiers, Algeria and 1,3Faculty of Exact Sciences, Universite de Bejaia, 06000 Bejaia, Algeria
Nowadays, IoT (Internet of Things) devices are everywhere and are used in many domains, including e-health, smart cities, and vehicular networks. Users rely on IoT devices such as smartphones to access and share data anytime and from anywhere. However, the use of such devices also introduces many security issues, including in data sharing. For this reason, security mechanisms such as ABE (Attribute-Based Encryption) have been introduced in IoT environments to secure data sharing. Nevertheless, Ciphertext-Policy ABE (CP-ABE) is rather resource-intensive in both the encryption and decryption processes, which makes it ill-suited for IoT environments, where devices have limited computing resources and low energy. In addition, in CP-ABE the privacy of the access policy is not assured, because it is sent in clear text along with the ciphertext. To overcome these issues, we propose a new CP-ABE-based approach that uses fog devices to reduce bandwidth and partially delegates data decryption to them. It also preserves the privacy of the access policy by adding false attributes to it. We discuss the security properties and the complexity of our approach, showing that it ensures the confidentiality of the data and the privacy of the access policy while improving complexity compared with existing approaches.
Fog Computing, Access Control, Attribute based Encryption, Decryption Outsourcing
Andrei Petrescu and Mihai Carabas, University POLITEHNICA of Bucharest, Splaiul Independentei 313, Bucharest, Romania
In today’s fast-moving world, advances in technology occur at a rapid rate. Keeping up is difficult but mandatory, and we must find solutions that make the process easy. Among these technologies, cloud computing is one of the fastest evolving. We explore the tools that help us reach our goal and discuss the main subject of this paper: keeping up to date with the latest releases of the OpenStack private cloud technology. We also present our results and how we found the best solution for the context of this work.
cloud, openstack, cinder, nova, keystone, glance, heat
Shuo Yang1, Ran Wei2, Hengliang Tan1 and Jiao Du1, 1School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, China and 2Department of Computer Science, University of California, Irvine, California, USA
Document (text) classification is a common method in e-business, supporting users in tasks such as document collection, analysis, categorization, and storage. Semantic analysis can help to improve the performance of document classification. Although semantics has been considered in the design of previous methods for automatic document classification, it deserves more attention given the increasing number of content-rich electronic documents, forum posts, and blogs online, for which automatic classification can reduce human workload by a great margin. This paper proposes a novel semantic document classification approach aiming to resolve two types of semantic problems: (1) the polysemy problem, with a novel semantic similarity computing strategy (SSC), and (2) the synonym problem, with a novel strong correlation analysis method (SCM). Experiments show that these strategies can improve the performance of the baseline methods.
semantic document classification, semantic similarity, semantic embedding, correlation analysis, machine learning
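The SSC and SCM strategies are the paper's own contributions; as background only, the sketch below shows the generic embedding-based similarity such pipelines build on: documents are embedded as mean word vectors and compared by cosine similarity, which is what lets a classifier treat "car" and "auto" as close despite different surface forms. The 3-d vectors are hypothetical toy values.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def doc_vector(tokens, embeddings, dim=3):
    """Document embedding as the mean of its word vectors (unknown words skipped)."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# toy 3-d word vectors -- hypothetical values, for illustration only
emb = {
    "car":  np.array([0.9, 0.1, 0.0]),
    "auto": np.array([0.8, 0.2, 0.0]),   # near-synonym of "car"
    "bank": np.array([0.0, 0.9, 0.3]),   # unrelated word
}
print(cosine_sim(doc_vector(["car"], emb), doc_vector(["auto"], emb)))  # high, ~0.99
print(cosine_sim(doc_vector(["car"], emb), doc_vector(["bank"], emb))) # low, ~0.10
```

Resolving polysemy (the same word, different senses) and synonymy (different words, same sense) amounts to making these similarity scores reflect meaning rather than surface form, which is where the paper's SSC and SCM refinements come in.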
Sébastien Combéfis1,2 and Guillaume de Moffarts2, 1ECAM Brussels Engineering School, Brussels, Belgium and 2Computer Science and IT in Education ASBL, Louvain-la-Neuve, Belgium
Automatic assessment of code, in particular to support education, is an important feature that several Learning Management Systems (LMS) include, at least to some extent. Several kinds of assessments can be designed, such as “fill in the following code”, “write a function that”, or “correct the bug in the following program” exercises. One difficulty for an instructor is to create such programming exercises, that is, writing the statement and providing all the information the platform needs to grade the assessment. Another difficulty appears when the instructor wants to reuse the exercises on another LMS, since they have to be re-encoded for the other platform, possibly with a completely different way of describing and configuring the exercise. This paper presents a tool that can automatically generate programming exercises in several programming languages from a single description. The generated exercises can then be automatically graded by the same platform, providing intelligent feedback to support the learner. The paper focuses on unit testing-based exercises and provides insights into new kinds of exercises that the platform could generate in the future, with some additional development.
Code Grader, Programming Assessment, Code Exercise Generation, Computer Science Education
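The core of unit testing-based grading can be sketched in a few lines: run the submitted function against (arguments, expected) pairs and turn the results into a score plus per-case feedback. This is a hypothetical minimal sketch of the idea, not the paper's actual platform or feedback engine.

```python
def grade(func, cases):
    """Run a submitted function against (args, expected) pairs and return
    a score in [0, 1] plus feedback lines for the failing cases."""
    feedback, passed = [], 0
    for args, expected in cases:
        try:
            got = func(*args)
        except Exception as exc:          # a crashing test still counts as a failure
            feedback.append(f"{args}: raised {type(exc).__name__}")
            continue
        if got == expected:
            passed += 1
        else:
            feedback.append(f"{args}: expected {expected}, got {got}")
    return passed / len(cases), feedback

# hypothetical buggy submission for "double the input" (last case is wrong on purpose)
score, fb = grade(lambda x: 2 * x, [((2,), 4), ((3,), 6), ((5,), 11)])
print(round(score, 3))  # -> 0.667
```

The per-case feedback lines are what the platform can return to the learner as "intelligent feedback"; generating the exercise for several languages then reduces to emitting the same cases in each target language's test syntax.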
Amir J. Majid, Ph.D., College of Engineering, University of Science and Technology of Fujairah, UAE
A lifetime extension algorithm is implemented on an ad hoc wireless network with shadowing effects and simulated on the MATLAB platform. The main aim is to maximize the lifetimes of sensors that cover a number of targeted zones by sharing their subsets according to their minimum coverage failure probabilities, taking into account shadowing effects in the network environment, for which the Path Loss Model (PLM) is used in the analysis.
ad hoc, failure probability, PLM, shadowing, sensor lifetime, WSN
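The standard way to combine a path loss model with shadowing is the log-distance model plus a zero-mean Gaussian term, under which the coverage failure probability at a given range has a closed form via the Gaussian Q-function. The sketch below illustrates that textbook relation; the reference loss, path loss exponent, and shadowing deviation are illustrative placeholders, not the paper's simulated values.

```python
import math

def mean_path_loss_db(d, d0=1.0, pl0=40.0, n=3.0):
    """Log-distance path loss (dB) at distance d (same unit as d0);
    shadowing adds a zero-mean Gaussian X_sigma on top of this mean."""
    return pl0 + 10.0 * n * math.log10(d / d0)

def coverage_failure_prob(pt_dbm, d, threshold_dbm, sigma_db=6.0):
    """P(received power < sensitivity threshold) under log-normal shadowing:
    Q(margin / sigma), with Q(x) = 0.5 * erfc(x / sqrt(2))."""
    pr_mean = pt_dbm - mean_path_loss_db(d)
    margin = (pr_mean - threshold_dbm) / sigma_db
    return 0.5 * math.erfc(margin / math.sqrt(2.0))

# failure probability grows with range: compare a 5 m and a 20 m target zone
print(coverage_failure_prob(20.0, 5.0, -50.0))
print(coverage_failure_prob(20.0, 20.0, -50.0))
```

These per-zone failure probabilities are the quantity the algorithm minimizes when deciding which sensor subsets share coverage of each targeted zone.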
Raja Alaya and Rabah Attia, Tunisian Polytechnic School, University of Carthage, Tunisia
Understanding the interference scenario in power line networks is a key step in characterizing a power line communication (PLC) system. This paper focuses on the characterization and modelling of the stationary noise in narrowband PLC. Measurement and analysis of noise are carried out in the Tunisian outdoor Low Voltage (LV) power line network in the frequency band below 500 kHz. Based on existing models and measurement results, a parametric noise model is proposed and its parameters are studied statistically.
Power Line Communication, Measurement, Modelling, Narrowband Frequency, Noise
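Parametric PLC background-noise models commonly take the form of a power spectral density decaying with frequency, N(f) = a + b·f^c with c < 0. As a sketch of how such a model is evaluated, the function below implements that generic form; the coefficients are illustrative placeholders only, not the statistically fitted Tunisian measurements.

```python
def background_psd_dbm_hz(f_khz, a=-95.0, b=35.0, c=-0.7):
    """Stationary background-noise PSD of the common parametric form
    N(f) = a + b * f**c (dBm/Hz, f in kHz). With b > 0 and c < 0 the
    noise floor decays toward the constant level a as frequency grows."""
    return a + b * f_khz ** c

# the noise floor is higher at the low end of the band below 500 kHz
print(background_psd_dbm_hz(50.0))
print(background_psd_dbm_hz(400.0))
```

Fitting (a, b, c) per measurement site and then studying their distributions is the usual route from raw spectra to a statistical parametric model.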
Jeremy Van den Eynde and Chris Blondia, University of Antwerp - imec, IDLab - Department of Mathematics and Computer Science, Sint-Pietersvliet 7, 2000 Antwerp, Belgium
In this paper we consider bounding users' service rates from above and below in a slotted, cross-layer scheduler context. Such schedulers often cannot guarantee these bounds, despite their usefulness for adhering to Quality of Service (QoS) requirements, aiding the admission control system, or providing different levels of service to users. We approach this problem with a low-complexity algorithm that is easily integrated into any utility function-based cross-layer scheduler. The algorithm modifies the weights of the associated Network Utility Maximization problem, rather than, for example, applying a token bucket to the scheduler's output or adding constraints in the physical layer. We study the efficacy of the algorithm through simulations with various schedulers from the literature and several traffic mixes. The metrics we consider show that we can bound the average service rate within about five slots for most schedulers; schedulers whose weights are very volatile are more difficult to constrain.
Cross-layer Scheduling, Quality of Service, Token Buckets, Resource allocation
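One simple way to picture weight-based rate bounding is a per-slot rule that tracks each user's average rate with an EWMA and nudges the user's NUM weight whenever the average leaves the [r_min, r_max] band. The sketch below is an illustrative rule of this kind, not the paper's exact update; all names and the step size are assumptions.

```python
def ewma(prev, sample, alpha=0.2):
    """Exponentially weighted moving average of the per-slot service rate."""
    return (1.0 - alpha) * prev + alpha * sample

def adjust_weight(w, avg_rate, r_min, r_max, step=0.5):
    """Multiplicatively scale a user's NUM weight to push its average
    service rate back inside [r_min, r_max]."""
    if avg_rate < r_min:
        return w * (1.0 + step)   # starved: bid more aggressively next slot
    if avg_rate > r_max:
        return w / (1.0 + step)   # over-served: back off
    return w                      # inside the band: leave the weight alone

print(adjust_weight(1.0, 0.5, 1.0, 2.0))  # -> 1.5 (below the band, weight raised)
```

Because only the weights fed into the existing utility-maximization step change, such a rule slots into any utility-based cross-layer scheduler without touching the physical layer or adding a token bucket on the output.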
Michel Bakni1, Luis Manuel Moreno Chacon2, Yudith Cardinale2, Guillaume Terrasson1, and Octavian Curea1, 1Univ. Bordeaux, ESTIA Institute of Technology, F-64210 Bidart, France and 2Universidad Simon Bolivar, Caracas, 1080-A, Venezuela
Nowadays, a large number of network simulators are available; they differ in their design, goals, and characteristics. Users who have to decide which simulator is the most appropriate for their particular requirements are lost when faced with such a panoply of disparate and diverse simulators. Hence, there is an obvious need for guidelines that support users in selecting and customizing a simulator to suit their preferences and needs. In previous work, we proposed a generic and novel methodological approach to evaluate network simulators against a set of qualitative and quantitative criteria. However, it lacked criteria related to Wireless Sensor Networks (WSN). The aim of this work is thus threefold: (i) extend the previously proposed methodology to cover the evaluation of WSN simulators, including energy consumption modelling and scalability; (ii) survey the state of the art of WSN simulators to identify the most used and most cited in scientific articles; and (iii) demonstrate the suitability of our methodology by evaluating and comparing three of the most cited simulators. The application of our methodological approach leads to results that are measurable and comparable, giving a comprehensive overview of simulator features, their advantages, and their disadvantages. The methodology thus provides researchers with an evaluation tool that can be used to describe and compare WSN simulators in order to select the most appropriate one for a given scenario.
Methodology, Simulators, Wireless Sensors Networks, Energy Consumption
Saroja Kanchi, Department of Computer Science, Kettering University, Flint, MI, USA
Localization of a Wireless Sensor Network (WSN) is the problem of finding the geo-locations of sensors in a network deployed in various applications. Given the proliferation of sensors in such applications, the localization and tracking of sensors have received considerable attention. Rigidity and flexibility properties of the underlying graph of the WSN have been studied as a means of determining the localizability of its nodes. In this paper, we present a new 3-merge technique for merging three rigid clusters of a network graph into a larger rigid cluster, and we use this algorithm to find maximal localizable regions within the WSN. Simulation results on random WSN deployments show that this technique outperforms previously known algorithms for finding maximal localizable subregions, and that the number of anchors needed to localize the entire WSN decreases thanks to the larger localizable regions found.
Wireless Sensor Network, localization, rigidity
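Once a region is known to be localizable (rigid with enough anchors), the actual positions follow from range measurements by standard trilateration. As background to the cluster-merging discussion (not the 3-merge algorithm itself), the sketch below recovers a 2-D position from three anchors by subtracting the first circle equation from the others to linearize the system, then solving least squares; the anchor layout is a hypothetical example.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2-D position from >= 3 anchors with known positions
    and measured ranges. Subtracting the first circle equation from the
    others turns (x - xi)^2 + (y - yi)^2 = di^2 into a linear system."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    (x0, y0), d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - anchors[0])            # rows: 2*(xi-x0, yi-y0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true = np.array([1.0, 2.0])
dists = [np.linalg.norm(true - a) for a in anchors]
print(trilaterate(anchors, dists))  # -> approximately [1. 2.]
```

In a merged rigid cluster, each newly localized node can in turn serve as an anchor for its neighbours, which is why finding larger localizable regions reduces the number of physical anchors needed.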
Chafika Benkherourou, Computer Science Department, University of Batna 2, Batna, Algeria
Master Data Management (MDM) and Service Oriented Architecture (SOA) are gaining increased prominence in the worlds of business and technology. When adopting SOA, organizations can face many difficulties, the most common of which is poor data quality. To overcome these problems, the use of adjusted MDM is proposed. The aim of this paper is to propose a new framework for implementing Master Data Management in an SOA context. Our major contribution is the description of the process for establishing MDM functions, and of the steps and interdependencies that should be taken into account when an SOA strategy is used. The proposed solution is a framework that implements MDM with a service layer composed of two services guaranteeing the quality of the master data and metadata.
Master Data, Master Data Management, MDM, Service Oriented Architecture, SOA, Data Quality, Metadata