El Mehdi Ismaili Alaoui, Laboratory of Computer Networks and Systems, Faculty of Sciences, Moulay Ismail University, Meknes, Morocco
Motion estimation is a signal-matching technique. It is a key component of target tracking, medical imaging, video compression, and many other systems. This paper presents four new estimators for frame-to-frame image motion estimation. The estimators of interest are the Roth impulse response, the smoothed coherence transform (SCOT), the maximum likelihood (ML) and the Wiener estimators. These are all referred to as Generalized Cross-Correlation (GCC) estimators. They are based on the cross-correlation of the received images, and various weighting functions are used to prefilter the images before cross-correlation; the estimators and weighting functions are similar to those used in time delay estimation. As the performance of the GCC estimators is considerably degraded by the signal-to-noise ratio (SNR) level, this factor has been taken as a prime factor in benchmarking the different GCC estimators. For robust motion estimation, the GCC-Wiener estimator has been found to be particularly well suited. The accuracy of the estimators is also discussed.
Motion estimation, Motion vector field, Whitening function, Noisy image sequences, GCC-estimators
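To make the GCC idea above concrete, here is a minimal 1-D sketch of shift estimation by frequency-domain cross-correlation with Roth- and SCOT-style prefiltering weights. This is an illustrative assumption, not the authors' implementation; the function name `gcc_shift` and all parameter values are made up for the example.

```python
import numpy as np

def gcc_shift(x, y, weighting="scot", eps=1e-12):
    """Estimate the circular shift between two 1-D signals with a
    GCC-style estimator: cross-correlate in the frequency domain
    after applying a whitening (prefiltering) weight."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    cross = X * np.conj(Y)
    if weighting == "scot":
        # SCOT weight: 1 / sqrt(|X|^2 |Y|^2)
        w = 1.0 / np.sqrt(np.abs(X) ** 2 * np.abs(Y) ** 2 + eps)
    elif weighting == "roth":
        # Roth weight: 1 / |Y|^2
        w = 1.0 / (np.abs(Y) ** 2 + eps)
    else:
        w = 1.0  # plain cross-correlation, no prefiltering
    corr = np.fft.ifft(cross * w).real
    lag = int(np.argmax(corr))
    # Fold lags beyond N/2 back to negative shifts
    return lag if lag <= len(x) // 2 else lag - len(x)

rng = np.random.default_rng(0)
s = rng.standard_normal(256)
shifted = np.roll(s, 5)         # delay s by 5 samples
print(gcc_shift(s, shifted))    # -5 (y lags x by 5 in this sign convention)
```

The same whitening idea extends to 2-D frames, where the correlation peak gives the motion vector.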
Merouane BOUZID and Bakkar LASKAR, Electronics Faculty, University of Sciences and Technology Houari Boumediene (USTHB), Algeria
Speech steganography is a covert communication technique that conveys secret speech hidden in a cover digital speech signal in such a way that the existence of the secret speech is concealed. In this paper, we develop a steganographic speech coding system based on embedding coded secret speech into host public speech coded by the AMR-WB (ITU-T G.722.2) speech coder. For the compression of the secret speech signal, we used the 2.4 kbit/s MELP speech coder. The embedding of the secret bit stream is carried out in the split-multistage vector quantization (S-MSVQ) indices of the G.722.2 immittance spectral frequencies (ISF) by modifying the mechanism of the S-MSVQ second stage.
Multi-stage vector quantization, steganography, data hiding, ISF parameters, secure speech, wideband speech coder, MELP, AMR-WB
Rafflesia khan and Rameswar Debnath, Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh
Leaf detection and segmentation is a complex image segmentation problem, as leaves are most often found in groups against a natural background. The edges of leaves cannot be clearly defined in an image because of their color similarities. Separating every single leaf, as well as overlapping leaves, individually is even more challenging, as leaves share almost the same color, texture and shape. In this paper, we propose a new automatic approach for leaf segmentation from an image. Our leaf segmentation process uses efficient techniques for processing an image to obtain the contours of every individual object. It then selects the best appropriate connected contours that represent the region of every leaf appearing in the image. Our model achieves an overall 90.46% segmentation rate, where the segmentation rates for single and overlapping leaves are 95.34% and 86.73%, respectively.
Image Processing, leaf object segmentation, overlapping leaves, connected contour, object boundary detection
Sertap Kamçi, Dogukan Aksu and Muhammed Ali Aydin, Computer Engineering Department, Istanbul University-Cerrahpasa, Istanbul, Turkey
Unmanned vehicle technologies are an area of great interest in theory and practice today. These technologies have advanced considerably after the first applications were implemented and are causing a rapid change in human life. Autonomous vehicles are also a big part of these technologies. By using image processing and artificial intelligence techniques, an autonomous vehicle can move successfully without a driver's help. Autonomous vehicles move from the starting point to a specified target by applying pre-defined rules. The most important action a driver has to perform is to follow the lanes on the way to the destination, and there are rules for proper tracking of the lanes. Many accidents are caused by insufficient lane following and non-compliance with these rules, and the majority of these accidents result in injury and death. In this paper, we present an autonomous vehicle prototype that follows lanes via image processing techniques, which are a major part of autonomous vehicle technology. Autonomous movement capability is provided by using image processing algorithms such as Canny edge detection and the Sobel filter. These algorithms were implemented and tested on the vehicle, which detected and followed the determined lanes and thus reached its destination successfully.
Autonomous Vehicle, Lane Detection, Image Processing, HSV Color, RGB Color, Canny Edge Detection, DC Motor, Region of Interest (ROI), Vanishing Point, Sobel Filter
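The Sobel step used for lane detection can be illustrated with a small NumPy-only sketch (a real prototype would typically use OpenCV's `cv2.Sobel`/`cv2.Canny`); the synthetic road image and the lane position here are illustrative assumptions, not data from the paper.

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude of a 2-D grayscale image using 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the correlation one kernel tap at a time (valid region only)
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Synthetic road image: dark asphalt with one bright vertical lane line
img = np.zeros((40, 40))
img[:, 20:23] = 1.0
edges = sobel_magnitude(img)
cols = np.where(edges.max(axis=0) > 0)[0]  # columns with an edge response
print(cols)  # responses cluster around both borders of the lane line
```

A lane follower would then restrict such an edge map to a region of interest and fit or vote for lane lines.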
Büsra Rümeysa Mete, Dogukan Aksu and Emel Arslan, Computer Engineering Department, Istanbul University-Cerrahpasa, Istanbul, Turkey
In the mathematical sense, optimization can be defined as minimizing or maximizing a function. Meta-heuristic optimization algorithms aim to find the best solution, in other words the optimum solution, in the search space of the current problem as quickly as possible. Today, many optimization techniques developed by taking inspiration from biological systems and their behaviour in nature are used to solve diverse optimization problems. One such metaheuristic algorithm is Grey Wolf Optimization (GWO), which was developed by observing the hunting strategy and the communal behaviour of grey wolf packs. Dolphin Swarm Optimization (DSO) is another optimization algorithm; it was designed by modelling the living habits and the biological characteristics shown in dolphins' real predatory behaviour. Bacterial Foraging Optimization (BFO), the last algorithm we investigated, was developed with inspiration from the social foraging behaviour of Escherichia coli. In this study, we review these three nature-inspired optimization algorithms, apply some benchmark test functions to GWO only, and present the results.
Meta-Heuristic Optimization, Grey Wolf Optimization, Dolphin Swarm Algorithm, Bacterial Foraging Optimization
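As a rough illustration of how GWO is benchmarked, here is a minimal NumPy sketch of the canonical GWO update (alpha, beta and delta leaders, with the coefficient `a` decaying linearly from 2 to 0) minimizing the sphere benchmark function. Population size, iteration count and bounds are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def gwo(obj, dim=2, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal Grey Wolf Optimizer minimizing obj over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(obj, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / iters          # exploration coefficient: 2 -> 0
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])   # distance to the leader
                cand.append(leader - A * D)     # move relative to the leader
            new_X[i] = np.mean(cand, axis=0)    # average of the three pulls
        X = np.clip(new_X, lb, ub)
    fitness = np.apply_along_axis(obj, 1, X)
    return X[np.argmin(fitness)], float(fitness.min())

sphere = lambda x: float(np.sum(x ** 2))  # global minimum 0 at the origin
best_x, best_f = gwo(sphere)
print(best_f)  # very close to 0
```

Other benchmark functions (Rastrigin, Rosenbrock, etc.) can be dropped in for `sphere` unchanged.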
Hao Yuan, Guo Yu, Yifan Ma, Jieneng Chen, Xiongda Chen, Tongji University, China
Traditional methods for simulating the flow of people include the Cellular Automaton, the artificial potential field, and so on. This paper seeks to refine the traditional Cellular Automaton and combines it with an adapted Ant Colony model as well as the Artificial Potential Field to simulate the evacuation process within large buildings. This research includes applying the model to the Louvre to estimate the total evacuation time within one floor and, after systematic analysis, identifying the bottlenecks along the evacuation routes. This demonstrates the applicability and flexibility of the model.
Evacuation Simulation Model, Cellular Automaton, Artificial Potential Field, Ant Colony, Large Complex Buildings.
Hamid Khemissa1 and Mourad Oussala2, 1Computer Systems Laboratory, Faculty of Electronics and Informatics, Computer Science Institute, USTHB: University of Science and Technology Houari Boumediene, Algiers, Algeria and 2Laboratoire des Sciences du Numérique de Nantes (LS2N), Faculty of Sciences, Nantes University, France
The need for adaptive guidance systems is now recognized for all software development processes. The new needs generated by the mobility context of software development require these guidance systems to adapt, in both quality and capability, to possible variations of the development context. This paper deals with adaptive guidance quality as a means to satisfy the developer's guidance needs. We propose a quality model for adaptive guidance that offers a detailed description of the quality factors of guidance service adaptation. This description aims to assess the quality level of each guidance adaptation factor and, therefore, to evaluate the quality of adaptive guidance services.
Quality model, Guidance System Quality, Adaptive Guidance, Plasticity.
Ahmed Saidi1, Omar Nouali2 and Abdelouahab Amira3, 1,2,3Department of Computer Security, Research Center for Scientific and Technical Information, Algiers, Algeria and 1,3Faculty of Exact Sciences, Université de Bejaia, 06000 Bejaia, Algeria
Nowadays, IoT (Internet of Things) devices are everywhere and are used in many domains, including e-health, smart cities, vehicular networks, etc. Users rely on IoT devices like smartphones to access and share data anytime and from anywhere. However, the usage of such devices also introduces many security issues, including in data sharing. For this reason, security mechanisms such as ABE (Attribute-Based Encryption) have been introduced in IoT environments to secure data sharing. Nevertheless, Ciphertext-Policy ABE (CP-ABE) is rather resource-intensive, both in the encryption and the decryption processes. This makes it ill-suited for IoT environments, where devices have limited computing resources and low energy. In addition, in CP-ABE the privacy of the access policy is not assured, because it is sent in clear text along with the ciphertext. To overcome these issues, we propose a new approach based on CP-ABE which uses fog devices to reduce the bandwidth and partially delegates data decryption to these fog devices. It also ensures the privacy of the access policy by adding false attributes to it. We also discuss the security properties and the complexity of our approach. We show that our approach ensures the confidentiality of the data and the privacy of the access policy, and that its complexity is improved compared with existing approaches.
Fog Computing, Access Control, Attribute based Encryption, Decryption Outsourcing
Andrei Petrescu and Mihai Carabas, University POLITEHNICA of Bucharest, Splaiul Independentei 313, Bucharest, Romania
In today’s fast-moving world, advances in technology occur at an alarming rate. Keeping up is difficult, but mandatory, and we must find solutions that make the process easy. Of all these technologies, cloud computing is one of the quickest-evolving. We explore the tools that will help us reach our goal and discuss the main subject of our paper, namely keeping up to date with the latest releases of the OpenStack private cloud technology. We also present our results and how we found the best solution for the context of this paper.
cloud, openstack, cinder, nova, keystone, glance, heat
Shuo Yang1, Ran Wei2, Hengliang Tan1 and Jiao Du1, 1School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, China and 2Department of Computer Science, University of California, Irvine, California, USA
Document (text) classification is a common method in e-business, facilitating users in tasks such as document collection, analysis, categorization and storage. Semantic analysis can help to improve the performance of document classification. Though it has been considered in the design of previous methods for automatic document classification, more focus should be given to semantics, given the increasing number of content-rich electronic documents, forum posts and blogs online, as this can reduce human workload by a great margin. This paper proposes a novel semantic document classification approach aiming to resolve two types of semantic problems: (1) the polysemy problem, by using a novel semantic similarity computing strategy (SSC), and (2) the synonym problem, by proposing a novel strong correlation analysis method (SCM). Experiments show that our strategies can help to improve the performance of the baseline methods.
semantic document classification, semantic similarity, semantic embedding, correlation analysis, machine learning
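To see why surface-level similarity needs the semantic treatment described above, consider a plain bag-of-words cosine similarity, shown here as a hypothetical baseline (not the paper's SSC/SCM): synonyms such as "car" and "automobile" share no tokens and therefore score zero.

```python
import math
from collections import Counter

def cosine_sim(doc_a, doc_b):
    """Bag-of-words cosine similarity between two documents."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)           # shared-term overlap
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine_sim("the cat sat", "the cat ran"))  # high: two of three words shared
print(cosine_sim("car", "automobile"))           # 0.0: the synonym problem
```

Semantic embeddings replace these sparse count vectors with dense vectors in which synonyms land close together.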
Sébastien Combéfis1,2 and Guillaume de Moffarts2, 1ECAM Brussels Engineering School, Brussels, Belgium and 2Computer Science and IT in Education ASBL, Louvain-la-Neuve, Belgium
Automatic assessment of code, in particular to support education, is an important feature that several Learning Management Systems (LMS) include, at least to some extent. Several kinds of assessments can be designed, such as “fill in the following code”, “write a function that”, or “correct the bug in the following program” exercises, for example. One difficulty for an instructor is to create such programming exercises, that is, to write the statement and provide all the information the platform needs to grade the assessment. Another difficulty appears when the instructor wants to use his/her exercises on another LMS platform, since they have to be re-encoded into the other LMS, possibly with a completely different way of describing and configuring the exercise. This paper presents a tool that can automatically generate programming exercises, in several programming languages, from one single and unique description. The generated exercises can then be automatically graded by the same platform, providing intelligent feedback to the user in order to support his/her learning. The paper focuses on and details unit-testing-based exercises and provides insights into new kinds of exercises that could be generated by the platform in the future, with some additional development.
Code Grader, Programming Assessment, Code Exercise Generation, Computer Science Education
Vincenzo Scotti, Licia Sbattella and Roberto Tedesco, DEIB, Politecnico di Milano, Milano, Italy
We present a methodology for the automatic generation of football match “highlights”, relying on the commentator voices and leveraging two multimodal neural networks (NNs). The first model (M1) classifies sequences and provides a representation of such sequences to be processed by the second model. M2 exploits M1 to decode unbounded streams of information, generating the final set of scenes to put into the match summary. Raw audio, along with transcriptions generated by an ASR, extracted from 369 football matches provided the source for feature extraction. We employed such features to train M1 and M2; for M1, the feature streams were split into sequences at (nearly) sentence granularity, while for M2 the entire streams were employed. The final results were promising, especially if adopted in a semi-automatic, real-world video pipeline.
Neural Networks, NLP, Voice, Text, Summarisation
Harutyun Berberyan and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand
This research study focuses on a company operating in online shopping. The company entered the online market without proper testing. The company's site was migrated from a local server to Amazon Web Services, which required additional changes to its site architecture. With test automation in place, a regression test suite can be applied to the mentioned changes; it is very useful for quickly testing the functionality of the site and for validating that everything works as expected. To conduct this regression testing through test automation, Selenium WebDriver was selected as the test automation tool/framework, and the TestNG framework was added to the test automation environment to generate comprehensive reports. After test execution, the results showed, first of all, that automated testing is more than 3 times faster than manual testing and that human interaction is reduced to a minimum. Moreover, the results show that the core functionalities did not suffer from the architectural changes, although some minor bugs were revealed during the collective execution of the test cases. This research puts a regression-ready solution in the hands of sas.am testers and developers, and it will also be a good test automation framework for all web applications created on the 1C-Bitrix framework, which is gaining popularity.
Amazon Web Service, Application Programming Interface, Page Object Model
Moussa WITTI and Prof. Dimitri KONSTANTAS, Information Science Institute, University of Geneva, Route de Drize 7, 1227 Carouge, Switzerland
In the era of the Internet of Things, data are collected over heterogeneous wireless protocols such as ZigBee, WiFi, RFID, Bluetooth, sub-GHz, Z-Wave and 2G/3G/4G, from smart sensors to fog and cloud platforms. However, the collected data may contain sensitive information which the owner does not want to be disclosed. Because the IoT architecture is based on heterogeneous technologies, ensuring privacy and maintaining security are difficult. How can we protect data and preserve privacy over the network during end-to-end or hop-to-hop communication? In this paper, we propose an architectural approach for secure and privacy-aware data collection in a fog-node-based distributed IoT environment.
Internet of Things, privacy, security, data collection, fog
Yixian Qi1, Qi Lu2, Yu Su3 and Fangyan Zhang4, 1Valencia High School, Placentia, CA 92870, 2Department of Social Science, University of California, Irvine, Irvine, CA, 92697, 3Department of Computer Science, California State Polytechnic University, Pomona, CA, 91768 and 4ASML, San Jose, CA, 95131
College application is a critical and complicated task for high school students. Generally speaking, one student will submit applications to a number of universities or colleges. However, there is no proper software for them to organize their application-related information during the application process. This paper proposes an all-in-one system containing useful features that help students with their college applications, such as comparing their SAT/GPA scores and organizing their rewards and activities. This tool has been published on Google Play.
Android application, App development, Google Drive
Peng Zou and Yunfei Cai, Department of Intelligent Science and Technology, Nanjing University of Science and Technology, Nanjing, China
In this paper, a target tracking algorithm based on a Triplet network, TriT (Triplet Network Based Tracker), is proposed to solve the problem of visual target tracking in complex scenes. Compared with the Siamese-fc algorithm, which adopts a two-way feature extraction network, TriT uses three parallel convolutional neural networks to extract the features of the target in the first frame, the target in the previous frame and the search region of the current frame, and then obtains the high-level semantic information of the three areas. The features of the target in the first frame and the target in the previous frame are respectively convolved with the features of the current search region to obtain the similarity between each position in the search area and the two targets, generating two similarity score maps. The two low-resolution score maps are then interpolated and enlarged, and the APCE value of the score maps is used as the medium to fuse them, from which the position of the tracking target in the current frame can be located. Experiments confirm that, compared with other real-time target tracking algorithms such as Siamese-fc, TriT has great advantages in tracking robustness and can effectively execute tracking tasks in complex scenes with illumination change, occlusion and interference from similar targets. Experimental results also show that the proposed algorithm has good real-time performance.
Target Tracking, High Robustness, Triplet Network, Score Maps Fusion
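The APCE-weighted fusion of two score maps described above can be sketched as follows. This is an illustrative toy (the maps here are synthetic, not network outputs), using the standard APCE definition: the squared peak-to-minimum range divided by the mean squared deviation from the minimum.

```python
import numpy as np

def apce(score_map):
    """Average Peak-to-Correlation Energy: large when the response
    has a single sharp peak, small when it is noisy or flat."""
    f_max, f_min = score_map.max(), score_map.min()
    return (f_max - f_min) ** 2 / np.mean((score_map - f_min) ** 2)

def fuse_and_locate(map_a, map_b):
    """Weight each score map by its APCE, fuse, and return the
    (row, col) of the fused peak."""
    wa, wb = apce(map_a), apce(map_b)
    fused = (wa * map_a + wb * map_b) / (wa + wb)
    return np.unravel_index(np.argmax(fused), fused.shape)

# Map A: sharp, reliable peak at (3, 4).  Map B: noisy, no clear peak.
a = np.full((8, 8), 0.1)
a[3, 4] = 1.0
rng = np.random.default_rng(1)
b = rng.uniform(0.4, 0.6, (8, 8))
print(fuse_and_locate(a, b))  # the fused peak follows the reliable map
```

Because the noisy map gets a small APCE weight, the fused location stays at the sharp peak of the reliable map.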
Shukla Sharma1,2,3, Ludovic Koehl1,2, Pascal Bruniaux1,2 and Xianyi Zeng1,2, 1GEMTEX, 2ENSAIT, 3Ecole Centrale de Lille, Lille, France
The fashion industry has moved very fast in the last few years and is expected to grow further. The State of Fashion 2019 report by McKinsey predicts that the effective use of data-driven, value-added services will be adopted by the largest industry players in the market. Automation and data analytics have also enabled start-ups to build agile business models for made-to-order production. Many fashion firms have adapted their business models to give extremely personalized experiences to their customers by using advanced CAD tools like CLO 3D, Marvelous Designer, Browzwear, Lectra and many more for designing garments and building 3D avatars for customized garments, as well as integrating with web- and mobile-application-based platforms. In this paper, we present our initial work on building a fashion recommendation system for customized garments, which can be used with mobile and web applications. The proposed architecture for the recommendation system is based on different data mining techniques such as clustering, classification and association mining. We collected users' historical data from a fashion company dealing with customized made-to-measure garments, which takes orders directly from its web platform.
Recommendation System, BIRCH, Adaptive Random Forest, Incremental learning, data mining, Association mining
Xin Liu, Computer Science and Technology, Beijing University of Posts and Telecommunications, Beijing, China
Extractive and abstractive are the two main text summarization techniques; unlike previous works, we do not treat them as two separate tasks. In this paper, we present TRCC-ES, an original Transformer-based model with copy and coverage on extractive sentences. We aim to obtain a short summary with a precise text span in a long paragraph. On the one hand, we combine an extractive model with an abstractive model to generate a more readable paragraph by calculating word-level attention after obtaining sentences with high ROUGE scores. On the other hand, we apply a Transformer language model to generate the summarization. The results of experiments on two abstractive summarization datasets show that our model significantly outperforms state-of-the-art summarization models.
Abstractive Summarization, Transformer, Extractive Sentences, Copy
Amir J. Majid, Ph.D., College of Engineering, University of Science and Technology of Fujairah, UAE
A lifetime extension algorithm is implemented on ad hoc wireless networks with shadowing effects and simulated on the Matlab platform. The main aim is to maximize the lifetimes of sensors that cover a number of targeted zones, by sharing their subsets according to their minimum coverage failure probabilities, while considering shadowing effects in the vicinity of the network environment; the Path Loss Model (PLM) is used in the analysis.
ad hoc, failure probability, PLM, shadowing, sensor lifetime, WSN
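The Path Loss Model with shadowing mentioned above is commonly written as PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, where X_sigma is a zero-mean Gaussian term in dB. The sketch below illustrates that standard log-normal shadowing form; the numeric parameters (reference loss, exponent, sigma) are illustrative assumptions, not values from the paper.

```python
import math
import random

def path_loss_db(d, d0=1.0, pl0=40.0, n=3.0, sigma=4.0, rng=None):
    """Log-distance path loss with log-normal shadowing, in dB.
    pl0: loss at reference distance d0; n: path-loss exponent;
    sigma: std-dev of the Gaussian shadowing term X_sigma (dB)."""
    shadow = rng.gauss(0.0, sigma) if rng else 0.0
    return pl0 + 10.0 * n * math.log10(d / d0) + shadow

rng = random.Random(42)
mean_loss = path_loss_db(100.0)  # deterministic part: 40 + 30*log10(100) = 100 dB
samples = [path_loss_db(100.0, rng=rng) for _ in range(2000)]
avg = sum(samples) / len(samples)
print(mean_loss, round(avg, 1))  # the shadowed average hovers around 100 dB
```

A coverage failure probability then follows from how often the shadowed loss exceeds the link budget.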
Raja Alaya and Rabah Attia, Tunisian Polytechnic School, University of Carthage, Tunisia
Understanding the interference scenario in a power line network is a key step in characterizing a power line communication (PLC) system. This paper focuses on the characterization and modelling of the stationary noise in narrowband PLC. Measurements and analysis of the noise are carried out in the Tunisian outdoor Low Voltage (LV) power line network in the frequency band below 500 kHz. Based on existing models and the measurement results, a parametric noise model is proposed and its parameters are statistically studied.
Power Line Communication, Measurement, Modelling, Narrowband Frequency, Noise
Jeremy Van den Eynde and Chris Blondia, University of Antwerp - imec, IDLab - Department of Mathematics and Computer Science, Sint-Pietersvliet 7, 2000 Antwerp, Belgium
In this paper we consider upper- and lower-bounding users' service rates in a slotted, cross-layer scheduler context. Such schedulers often cannot guarantee these bounds, despite their usefulness in adhering to Quality of Service (QoS) requirements, aiding the admission control system or providing different levels of service to users. We approach this problem with a low-complexity algorithm that is easily integrated into any utility-function-based cross-layer scheduler. The algorithm modifies the weights of the associated Network Utility Maximization problem, rather than, for example, applying a token bucket to the scheduler's output or adding constraints in the physical layer. We study the efficacy of the algorithm through simulations with various schedulers from the literature and mixes of traffic. The metrics we consider show that we can bound the average service rate within about five slots for most schedulers. Schedulers whose weights are very volatile are more difficult to constrain.
Cross-layer Scheduling, Quality of Service, Token Buckets, Resource allocation
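For contrast with the weight-modification approach above, the token bucket it mentions as an alternative can be sketched in a few lines: tokens refill at a fixed rate per slot, bursts are capped by the bucket capacity, and traffic is admitted only when enough tokens remain. The rates and offered load below are illustrative, not from the paper's simulations.

```python
class TokenBucket:
    """Classic token bucket: admits traffic at up to `rate` tokens per
    slot, with bursts bounded by `capacity`."""

    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity  # start full

    def tick(self):
        # Called once per slot: refill, never beyond capacity
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def admit(self, size):
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)
sent = []
for slot in range(6):
    # A greedy sender offers 3 units every slot
    sent.append(3 if bucket.admit(3) else 0)
    bucket.tick()
print(sent, sum(sent))  # [3, 3, 3, 0, 3, 3] 15: bursts, then rate-limited
```

Note the all-or-nothing slot pattern: this shaping happens after the scheduler, which is precisely what the paper's in-scheduler weight adaptation avoids.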
Jaspreet Kaur and Sumit Kalra, Department of Computer Science & Engineering, Indian Institute of Technology Jodhpur, Jodhpur, India
Nowadays every infrastructure works smartly: smart homes, smart industry, smart academia, etc. They use various smart IoT (Internet of Things) devices that are charged by electricity. Electricity usage is recorded by a smart meter and then sent wirelessly to the billing office for payment by the users. Electricity rates vary depending on the environment in which it is used, e.g., household or industrial. But several questions need to be considered: how securely are this energy and these data transferred? How are these smart devices authorized? How can this system be made cost-efficient and electricity usage more appropriate, so that an accurate electricity bill is generated and paid by the users without corruption? To resolve all of the above issues, we develop SS-EMS, a Smart and Secure framework for an Energy Management System that uses blockchain technology for security, along with IoT devices and Machine Learning (ML) or Deep Learning (DL) techniques for predicting electricity usage in smart infrastructures, together with two new ideas: 1. a new billing-payment methodology specifically for IoT or smart devices; 2. securely selling or buying energy (solar, biomass, etc.) directly between smart infrastructures for easier exchange of energy.
Smart IoT Devices, Blockchain, ML or DL Methods, Energy Management System, Solar Energy, Biomass Energy.
Michel Bakni1, Luis Manuel Moreno Chacon2, Yudith Cardinale2, Guillaume Terrasson1, and Octavian Curea1, 1Univ. Bordeaux, ESTIA Institute of Technology, F-64210 Bidart, France and 2Universidad Simon Bolivar, Caracas, 1080-A, Venezuela
Nowadays, there exists a large number of available network simulators that differ in their design, goals, and characteristics. Users who have to decide which simulator is the most appropriate for their particular requirements are today lost, faced with a panoply of disparate and diverse simulators. Hence, there is an obvious need for establishing guidelines that support users in the tasks of selecting and customizing a simulator to suit their preferences and needs. In previous work, we proposed a generic and novel methodological approach to evaluate network simulators, considering a set of qualitative and quantitative criteria. However, it lacked criteria related to Wireless Sensor Networks (WSN). Thus, the aim of this work is threefold: (i) extend the previously proposed methodology to include the evaluation of WSN simulators, covering aspects such as energy consumption modelling and scalability; (ii) elaborate a study of the state of the art of WSN simulators, with the intention of identifying those most used and cited in scientific articles; and (iii) demonstrate the suitability of our novel methodology by evaluating and comparing three of the most cited simulators. The application of our methodological approach leads to results that are measurable and comparable, giving a comprehensive overview of simulator features, their advantages, and their disadvantages. The novel methodology thus provides researchers with an evaluation tool that can be used to describe and compare WSN simulators in order to select the most appropriate one for a given scenario.
Methodology, Simulators, Wireless Sensors Networks, Energy Consumption
Saroja Kanchi, Department of Computer Science, Kettering University, Flint, MI, USA
Localization of a Wireless Sensor Network (WSN) is the problem of finding the geo-locations of sensors in a sensor network deployed in various applications. Given the proliferation of sensors in various applications, the localization and tracking of sensors have received considerable attention. Rigidity and flexibility properties of the underlying graph of the WSN have been studied as a means of determining the localizability of the nodes in the WSN. In this paper, we present a new 3-merge technique for merging three rigid clusters of a network graph into a larger rigid cluster, and we use this algorithm for finding maximal localizable regions within the WSN. We provide simulation results on random deployments of WSNs to show that this technique outperforms previously known algorithms for finding maximal localizable subregions. Moreover, simulation results show that the number of anchors needed to localize the entire WSN decreases thanks to finding larger localizable regions.
Wireless Sensor Network, localization, rigidity
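The anchor-based localization underlying the localizable-region computation can be illustrated with a basic trilateration sketch (this is a generic building block, not the paper's 3-merge algorithm): a node with three or more anchors in range is positioned by least squares after linearizing the range equations. The anchor layout and node position below are made-up examples.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares node position from >= 3 anchors and range
    measurements, linearized by subtracting the first anchor's
    sphere equation from the others."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    a0, d0 = anchors[0], d[0]
    # 2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four corner anchors, unknown node actually at (1, 2); exact ranges here
anchors = [(0, 0), (4, 0), (0, 3), (4, 3)]
node = np.array([1.0, 2.0])
dists = [np.linalg.norm(node - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))  # recovers approximately [1. 2.]
```

With noisy ranges the same least-squares form simply returns the best-fit position, which is why larger localizable regions (more usable anchors per node) help accuracy.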
Chafika Benkherourou, Computer Science Department, University of Batna 2, Batna, Algeria
Master Data Management (MDM) and Service-Oriented Architecture (SOA) are gaining increased prominence within the worlds of business and technology. When adopting SOA, organizations can face many difficulties, the most common of which is poor data quality. To overcome these problems, the use of adjusted MDM is proposed. The aim of this paper is to propose a new framework for implementing Master Data Management in an SOA context. Our major contribution is the description of the process for establishing MDM functions and of the different steps and interdependencies that should be taken into account when an SOA strategy is used. The proposed solution presents a framework to implement MDM with a service layer composed of two services guaranteeing the quality of the master data and metadata.
Master Data, Master Data Management, MDM, Service Oriented Architecture, SOA, Data Quality, Metadata
Cristiana Carvalho1, Filipe Cabral Pinto1, Isabel Borges1, Gonçalo Machado1 and Ilídio Oliveira2, 1Altice Labs, Aveiro, Portugal and 2Departamento de Eletrónica, Telecomunicações e Informática, University of Aveiro, Aveiro, Portugal
Digital transformation has changed management models in cities. The use of tools supported by information and communication technologies has facilitated the planning and control of the urban space, allowing a rapprochement between the city and its citizens. This proximity is amplified with the advent of the Internet of Things, making it possible to permanently know the state of the city and to act on its different infrastructures in a dynamic way. This paper proposes the use of Machine Learning techniques to enhance city management by predicting behaviours and automatically adapting rule mechanisms in order to mitigate city problems, contributing to the improvement of the lives of those living in or visiting municipalities.
Architecture, Learning City, Smart Cities, Machine Learning, Big Data
Mohammad Rmayti, Alexis Olivereau and Baptiste Polvé, CEA, LIST, Communicating Systems Lab, 91191 Gif-sur-Yvette CEDEX, France
During the last decade, the world witnessed a significant transformation in manufacturing policies called Industry 4.0. In this fourth industrial revolution, computer-based systems are aided by smarter mechanisms that rely mainly on artificial intelligence, especially machine learning. The latter currently replaces and automates numerous complex industrial tasks that used to be accomplished by humans, in areas such as the Internet of Things (IoT) and autonomous driving. Therefore, information and assets belonging to such an industrial environment become new nodes in the network, which raises new technical vulnerabilities and increases the attack surface. Risk management and cybersecurity strategies are the best way to overcome such challenges. In this context, this paper presents an Intrusion Detection System (IDS) that we designed, implemented and evaluated in a concrete industrial IoT environment. The results show that our IDS can efficiently help to increase the security of the studied IoT network by detecting anomalies with high accuracy.
Industrial IoT, Network Security, Intrusion Detection, Machine Learning.
Muzaffar Shah, Darshan Adiga, Shabir Bhat and Viveka Vyeth, Datoin Bangalore, India
In almost every type of business, the retention stage is a very important part of the customer life cycle because, according to market theory, it is always more expensive to attract new customers than to retain existing ones. Thus, a churn prediction system that can accurately predict, ahead of time, whether a customer will churn in the foreseeable future, and that can also supply the enterprise with the possible reasons that may cause a customer to churn, is an extremely powerful tool for any marketing team. In this paper, we propose an approach to predict customer churn in non-subscription-based business settings. We suggest a set of generic features that can be extracted from the sales and payment data of almost any non-subscription-based business and used to predict customer churn. We used a neural-network-based multilayer perceptron for prediction. The proposed method achieves an F1-score of 80% and a recall of 85%, comparable to the accuracy of churn prediction in subscription-based business settings. We also propose a system for causality analysis of churn, which predicts a set of causes that may have led to the customer churn and helps derive customer retention strategies.
Churn Analysis, Causality Analysis, Machine Learning, Business Analytics, Deep Neural Network
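The generic features the abstract mentions can be illustrated with a minimal recency/frequency/monetary (RFM) extractor over sales records. This is a hypothetical sketch: the record layout and feature names are illustrative, not the paper's actual feature set; its output would feed a classifier such as the multilayer perceptron.

```python
from collections import defaultdict
from datetime import date

def rfm_features(transactions, today):
    """Extract generic churn features (recency, frequency, monetary value)
    per customer from (customer_id, purchase_date, amount) sales records."""
    stats = defaultdict(lambda: {"last": None, "count": 0, "total": 0.0})
    for cust, day, amount in transactions:
        s = stats[cust]
        s["count"] += 1
        s["total"] += amount
        if s["last"] is None or day > s["last"]:
            s["last"] = day
    return {
        cust: {
            "recency_days": (today - s["last"]).days,  # days since last purchase
            "frequency": s["count"],                   # number of purchases
            "monetary": s["total"],                    # total spend
            "avg_order": s["total"] / s["count"],      # mean order value
        }
        for cust, s in stats.items()
    }

sales = [
    ("c1", date(2020, 1, 5), 40.0),
    ("c1", date(2020, 3, 1), 60.0),
    ("c2", date(2019, 11, 20), 25.0),
]
features = rfm_features(sales, today=date(2020, 4, 1))
```

Because the features are derived only from sales and payment events, the same extractor applies across non-subscription businesses.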
Osama Mohammad Rababah and Nour Alokaily, Information Technology Department, The University of Jordan, Amman, Jordan
The huge volume of online reviews makes it difficult for a human to process and extract all the significant information needed to make decisions. As a result, there has been a trend towards systems that can automatically summarize opinions from a set of reviews. In this respect, the automatic classification and information extraction of users' comments, also known as sentiment analysis (SA), becomes vital to offer users the best responses to their queries, based on their own preferences. In this paper, we present a novel system that offers a personalized user experience and addresses the semantic-pragmatic gap. A system for forecasting sentiments allows us to extract opinions from the internet and predict online users' preferences, which can prove valuable for commercial or marketing research. The data used in this paper belong to the tagged corpus of positive and negative processed movie reviews introduced by Pang and Lee. The results show that, even when a small sample is used, sentiment analysis can be performed with high accuracy if appropriate natural language processing algorithms are applied.
Machine Learning, Big Data, Natural Language Processing, Sentiment Analysis
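Classifying reviews as positive or negative can be sketched with a bag-of-words Naive Bayes classifier, a common NLP baseline for this corpus; this is an assumed illustration, since the abstract does not specify which algorithm the system uses.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns, per label, the log-prior,
    per-word log-likelihoods with add-one smoothing, and an unseen-word fallback."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    model = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        model[label] = (
            math.log(label_counts[label] / len(docs)),
            {w: math.log((word_counts[label][w] + 1) / (total + len(vocab)))
             for w in vocab},
            math.log(1 / (total + len(vocab))),  # smoothed unseen-word score
        )
    return model

def classify(model, tokens):
    def score(label):
        prior, likes, unseen = model[label]
        return prior + sum(likes.get(w, unseen) for w in tokens)
    return max(model, key=score)

# Tiny hand-made training set standing in for the Pang and Lee movie reviews.
train = [
    ("great wonderful acting".split(), "pos"),
    ("loved great plot".split(), "pos"),
    ("boring terrible plot".split(), "neg"),
    ("awful boring acting".split(), "neg"),
]
model = train_nb(train)
```

Even this tiny model separates the two sentiment classes on unseen word combinations, which mirrors the abstract's point that small samples can still yield accurate classification.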
Abdessamad OUZNINI, Youssef FAKIR, Mohamed FAKIR and Bouzekri MOUSTAID, Department of Computer Science, Faculty of Science and Technology, University Sultan Moulay Slimane, Beni Mellal, Morocco
In this study, we create a job matching system that combines semantic web technologies and fuzzy logic for candidate selection and evaluation. The flexibility of the system comes from using the semantic web to model candidate competencies and requirements, as well as the components of job offers, through an ontological approach; for the matching between CVs and offers, we use techniques based on fuzzy logic.
E-recruitment, Semantic Web, Ontologies, Fuzzy Sets
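The fuzzy-logic matching can be sketched with trapezoidal membership functions that grade how well a CV attribute satisfies a job requirement; the attribute names and the trapezoid parameters below are hypothetical, chosen only to illustrate the idea.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, rising to 1 on [b, c],
    falling back to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def match_score(candidate, requirements):
    """Average the membership degree of each candidate attribute in the
    fuzzy set defined by the corresponding offer requirement."""
    degrees = [trapezoid(candidate[k], *requirements[k]) for k in requirements]
    return sum(degrees) / len(degrees)

# Hypothetical offer: ideally 4-8 years of experience, Java skill 6-10 out of 10.
offer = {"years_experience": (2, 4, 8, 12), "java_skill": (3, 6, 10, 11)}
cv = {"years_experience": 3, "java_skill": 6}
score = match_score(cv, offer)
```

A crisp filter would reject this CV outright for having under 4 years of experience; the fuzzy score instead grades it as a partial match, which is the flexibility the abstract refers to.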
Catherine Beazley1, Karan Gadiya1, Rakesh Ravi1, David Roden1, Boda Ye1, Brendan Abraham2, Donald Brown1, and Malathi Veeraraghavan3, 1Data Science Institute, University of Virginia, Charlottesville, USA, 2Department of Systems and Information Engineering, University of Virginia, Charlottesville, USA and 3Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, USA
Systems administrators need to detect and respond to cybersecurity breaches efficiently to protect their networks. By effectively detecting anomalous activity in NetFlow data, systems administrators can limit the amount of packet capture they analyze, so they can evaluate and respond to threats more quickly. Unsupervised machine learning methods are an effective way to do so, as they can detect previously unseen types of malicious activity and do not require labeled datasets. In this paper, we compare unsupervised anomaly detection methods for detecting potentially malicious connections. We evaluate an Autoencoder, Isolation Forest, Elliptic Envelope, Local Outlier Factor, and a One-Class Support Vector Machine. We show that all five methods effectively separate the data into connections with different network activity, but Isolation Forest, the Autoencoder, and Elliptic Envelope detect anomalies best. Our findings are useful for reducing the amount of packet capture that systems administrators need to evaluate to respond to threats.
Unsupervised Learning, Anomaly Detection, Cybersecurity, NetFlow, Packet Capture
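Of the five methods compared, Isolation Forest is the easiest to sketch from first principles: anomalies are points that random axis-aligned splits isolate quickly. The version below is a simplified pure-Python sketch on hypothetical NetFlow-like features (full Isolation Forest scoring also normalizes the path length by its expected value, omitted here).

```python
import random

def build_tree(points, depth, max_depth, rng):
    """One isolation tree: pick a random feature and a random split value,
    recursing until points are isolated or the depth cap is hit."""
    if depth >= max_depth or len(points) <= 1:
        return {"depth": depth}
    dim = rng.randrange(len(points[0]))
    lo, hi = min(p[dim] for p in points), max(p[dim] for p in points)
    if lo == hi:
        return {"depth": depth}
    split = rng.uniform(lo, hi)
    return {"dim": dim, "split": split,
            "left": build_tree([p for p in points if p[dim] < split],
                               depth + 1, max_depth, rng),
            "right": build_tree([p for p in points if p[dim] >= split],
                                depth + 1, max_depth, rng)}

def path_length(tree, point):
    while "dim" in tree:
        tree = tree["left"] if point[tree["dim"]] < tree["split"] else tree["right"]
    return tree["depth"]

def anomaly_score(forest, point):
    """Shorter average path length => easier to isolate => more anomalous."""
    return sum(path_length(t, point) for t in forest) / len(forest)

rng = random.Random(42)
# Hypothetical per-connection NetFlow features: (bytes, packets).
normal = [(rng.gauss(500, 50), rng.gauss(10, 2)) for _ in range(200)]
forest = [build_tree(normal, 0, 10, rng) for _ in range(50)]
```

A connection far outside the normal traffic cluster gets a lower (more anomalous) score than a typical one, which is exactly the signal used to prioritize packet capture for analysis.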
Shubham Sharma and Rene V. Mayorga, Industrial Systems Engineering, University of Regina, Canada
In today’s world, lower back pain is one of the fastest-growing and most crucial ailments to deal with. More than half of the world’s population suffers from it at least once in a lifetime. Human lower back pain symptoms are commonly categorized as Normal or Abnormal. In order to remedy human lower back pain, many medical methods have been developed over time, with the growth of technology, to diagnose and cure this pain at the earliest possible stage. This study aims to develop two Machine Learning (ML) models that can classify human lower back pain symptoms using non-conventional techniques such as feedforward/backpropagation Artificial Neural Networks and Fully Connected Deep Networks. An automatic feature engineering technique is implemented to extract the featured data used for the classification. The proposed models are compared against a Support Vector Machine model, considering different performance parameters.
Machine Learning, Artificial Neural Networks, Fully Connected Deep Networks, Support Vector Machine, Lower Back Pain, Automatic Feature Engineering technique.
Günther Schuh, Paul Scholz, Sebastian Schorr, Durmus Harman, Matthias Möller, Jörg Heib and Dirk Bähre
A significant amount of data is generated during production and could be utilized to improve the quality-, time-, and cost-related performance characteristics of the production process. Machine Learning (ML) is considered a particularly effective method for generating usable knowledge from data and is therefore becoming increasingly relevant in manufacturing. In this research paper, a technology framework is created that supports solution providers in the development and deployment of ML applications. This framework is subsequently employed successfully in the development of an ML application for quality prediction in a machining process at Bosch Rexroth AG. For this purpose, the 50 most relevant features were extracted from time series data and used to determine the best-performing ML model. The Extra Trees Regressor (XT) is found to achieve precise predictions, with a coefficient of determination (R2) consistently over 91% for the considered quality characteristics of a bore of hydraulic valves.
Technology Management, Technology Framework, Quality Prediction, Machine Learning, Advanced Manufacturing
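Extracting features from machining time series, as the abstract describes, typically means computing summary statistics per sensor signal. The sketch below shows a handful of such features; they are illustrative stand-ins, not the paper's actual 50 selected features, and the resulting feature vectors would then be fed to a regressor such as the Extra Trees Regressor.

```python
import math
import statistics

def ts_features(signal):
    """Summary features of one sensor time series from a machining cycle."""
    n = len(signal)
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    return {
        "mean": statistics.fmean(signal),
        "std": statistics.pstdev(signal),            # population std deviation
        "min": min(signal),
        "max": max(signal),
        "rms": math.sqrt(sum(x * x for x in signal) / n),
        "mean_abs_diff": statistics.fmean(abs(d) for d in diffs),
    }

# Hypothetical short sensor trace from one bore-machining cycle.
feats = ts_features([1.0, 2.0, 3.0, 4.0])
```

In practice one such dictionary is computed per signal and per cycle, and feature-importance ranking (as the paper does with its top 50) prunes the set before model training.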
You Peng1, Yao Pan2 and Qi Lu3, 1Department of Computer Science California State Polytechnic University, Pomona, CA, 91768, 2LinkedIn Corporation Sunnyvale, CA 94085 and 3Department of Social Science University of California, Irvine, CA, 92697
Today, the growing entertainment market has placed a higher demand on music. Quality music is essential for video making, video game making, and even public places. However, finding a suitable list of music can sometimes be hard and expensive. This may be solved by automatic, deep-learning-based music generation. Using Recurrent Neural Networks, computers are able to learn the patterns in existing music pieces and convert them into a probability map. Companies like Google, Sony, and Amper are creating their own applications for music generation. We plan to set up a platform where music can be generated and retrieved directly online. With different options for genre and length, users can conveniently generate music that fits their needs.
Music Generation, Machine Learning, RNN, Web Service
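The "probability map" idea can be illustrated without a neural network: a first-order Markov chain over notes is a much-simplified stand-in for what the RNN learns (an RNN conditions on the whole history, not just the previous note). The corpus below is a made-up toy example.

```python
import random
from collections import defaultdict, Counter

def build_prob_map(pieces):
    """Learn a first-order transition map: note -> counts of following notes."""
    transitions = defaultdict(Counter)
    for piece in pieces:
        for cur, nxt in zip(piece, piece[1:]):
            transitions[cur][nxt] += 1
    return transitions

def generate(prob_map, start, length, rng):
    """Sample a melody by repeatedly drawing the next note in proportion
    to how often it followed the current note in the training pieces."""
    melody = [start]
    while len(melody) < length:
        followers = prob_map.get(melody[-1])
        if not followers:
            break  # dead end: note never had a successor in the corpus
        notes, weights = zip(*followers.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

corpus = [["C", "E", "G", "E", "C"], ["C", "G", "E", "C"]]
pmap = build_prob_map(corpus)
tune = generate(pmap, "C", 8, random.Random(0))
```

Every adjacent pair in the generated melody is a transition observed in the training pieces, which is the sense in which the model "learns the patterns from existing music".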
Hadi Samer Jomaa, Josif Grabocka and Lars Schmidt-Thieme, Information Systems and Machine Learning Lab, University of Hildesheim, Germany
In classical Q-learning, the objective is to maximize the sum of discounted rewards by iteratively using the Bellman equation as an update, in an attempt to estimate the action-value function of the optimal policy. In this paper, we extend the well-established loss by introducing the hindsight factor, an additional loss term which integrates the historic temporal difference in action-value as part of the reward. The effect of this modification is examined in a deterministic continuous-state-space function estimation problem, resulting in an evident reduction in overestimation and improved stability. The underlying effect of the hindsight factor is modeled as an adaptive learning rate which is adjusted based on the action-value. The proposed method outperforms variations of Q-learning, with an overall higher average reward and lower action values, which supports the deterministic evaluation and shows that the hindsight factor contributes to lower overestimation errors.
Reinforcement Learning, Q-Learning, Hindsight
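The classical baseline the abstract extends is the tabular Bellman update, sketched below on a made-up 5-state chain environment. This shows only standard Q-learning; the paper's hindsight factor (the extra historic temporal-difference term) is not reproduced here.

```python
import random

def q_learning(step, n_states, actions, episodes=500, alpha=0.1, gamma=0.9,
               epsilon=0.1, max_steps=200, seed=0):
    """Tabular Q-learning with the classical Bellman update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            if rng.random() < epsilon:
                a = rng.choice(actions)              # explore
            else:                                    # greedy, random tie-break
                best = max(Q[(s, b)] for b in actions)
                a = rng.choice([b for b in actions if Q[(s, b)] == best])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

# Toy 5-state chain: +1 moves right, -1 moves left; reward 1 on reaching state 4.
def chain_step(s, a):
    s2 = max(0, min(4, s + a))
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learning(chain_step, n_states=5, actions=[-1, 1])
```

The hindsight factor would add a term comparing the current Q(s,a) estimate with its historic value to the loss, which the paper shows damps the overestimation this plain max-operator update is prone to.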
Tarek Messatfa, Fouad Chebbara, Abdelkarim Belhedri, and Abderrahim Annou, Department of Electronic and Telecommunications, Electrical Engineering Laboratory (LAGE), University of Kasdi Merbah Ouargla, Ouargla, Algeria
In this paper, a circular microstrip patch antenna with a Defected Ground Structure (DGS) is designed and simulated for Ultra and Super Wide Band (UWB/SWB) applications using Computer Simulation Technology (CST). The aim of this work is to increase the bandwidth of the antenna by means of the DGS. The total size of the antenna is 25×30 mm². The proposed antenna covers the UWB and SWB frequency ranges, with S11 < -10 dB from 2.77 GHz to 30 GHz, which makes it a useful structure for modern wireless communication systems, including point-to-point communication such as WVB (Wireless Video Broadcast), satellite communication and radar applications, WLAN applications (IEEE 802.11a, 5.12-5.825 GHz), WiMAX systems (3.4-3.7 GHz), and short-range communication such as biomedical applications.
Defected Ground Structure (DGS), Ultra Wideband (UWB), Super Wideband (SWB), Microstrip Patch Antenna, WLAN & WiMAX.
ChangHyun Roh and YongWoon Hwang and Im-Yeong Lee, SoonChunHyang University, Asan, Republic of Korea
E-government systems must ensure reliable data and prevent the forgery and modification of that information; so far, this role has been carried out by traditional centralized management. However, centralized data management has the disadvantages of a single point of failure and bottlenecks. Blockchain technology has emerged to solve this problem. It is characterized by decentralization, which was considered insecure in the past, to ensure the integrity of data. However, there are many problems in applying general blockchains and smart contracts to e-government electronic voting. In this paper, we propose a scheme that guarantees the integrity of voting data and automates the voting and counting process through smart contracts, applying blockchain and smart contracts to provide integrity and automation for electronic voting.
e-Voting, Blockchain, Ethereum, Smart contract, Security, Fairness.
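The invariants such a voting smart contract must enforce (only registered voters, one ballot each, a transparent tally) can be modeled in a few lines. This is a conceptual Python sketch of the contract logic only, not the authors' actual Ethereum smart contract; the hash list stands in for the auditable on-chain record.

```python
import hashlib

class VotingContract:
    """Conceptual model of on-chain voting logic: registered voters,
    one ballot each, transparent tally. Illustrative sketch only."""

    def __init__(self, candidates):
        self.candidates = set(candidates)
        self.registered = set()
        self.voted = set()
        self.tally = {c: 0 for c in candidates}
        self.ballot_hashes = []  # audit trail, akin to on-chain storage

    def register(self, voter_id):
        self.registered.add(voter_id)

    def vote(self, voter_id, candidate):
        if voter_id not in self.registered:
            raise PermissionError("unregistered voter")
        if voter_id in self.voted:
            raise ValueError("double voting rejected")
        if candidate not in self.candidates:
            raise ValueError("unknown candidate")
        self.voted.add(voter_id)
        self.tally[candidate] += 1
        self.ballot_hashes.append(
            hashlib.sha256(f"{voter_id}:{candidate}".encode()).hexdigest())

    def winner(self):
        return max(self.tally, key=self.tally.get)

poll = VotingContract(["alice", "bob"])
for v in ("v1", "v2", "v3"):
    poll.register(v)
poll.vote("v1", "alice")
poll.vote("v2", "alice")
poll.vote("v3", "bob")
```

On a real blockchain these checks run identically on every node, so no single authority can alter the tally, which is the integrity property the paper targets.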
Haji Akhundov1, Erik van der Sluis2, Said Hamdioui1 and Mottaqiallah Taouil1, 1Delft University of Technology, Delft, The Netherlands and 2Intrinsic ID B.V., Eindhoven, The Netherlands
Nowadays, the Internet of Things (IoT) is a trending topic in the computing world. Notably, IoT devices have strict design requirements and are often referred to as constrained devices. Therefore, lightweight security techniques and primitives are more suitable for such devices, e.g., Static Random-Access Memory (SRAM) Physical Unclonable Functions (PUFs) and Elliptic Curve Cryptography (ECC). The SRAM PUF is an intrinsic security primitive that is seeing widespread adoption in the IoT segment. ECC is a public-key technique that has been gaining popularity among constrained IoT devices, owing to its significantly smaller operands compared to other public-key techniques such as RSA (Rivest-Shamir-Adleman). This paper presents the design, development, and evaluation of an application-specific secure communication architecture based on SRAM PUF technology and ECC for constrained IoT devices. More specifically, it introduces an Elliptic Curve Diffie-Hellman (ECDH) public-key-based cryptographic protocol that utilizes PUF-derived keys as the root of trust for silicon authentication. It also proposes the design of a modular hardware architecture that supports the protocol. Finally, to analyze the practicality and feasibility of the proposed protocol, we demonstrate the solution by prototyping and verifying a protocol variant on the commercial Xilinx Zynq-7000 APSoC device.
Mohamed N. Hassan Ahmed1, S. Abo-Taleb2 and Mohamed Shalaby3, 1University of Calgary, Calgary, Canada, 2Ain Shams University, Cairo, Egypt and 3Egyptian Armed Forces, Arab Academy for Science, Technology and Maritime Transport, Cairo, Egypt
Many software implementations of public key cryptosystems, of which elliptic curve cryptography is the most powerful, have been concerned mainly with performance and efficiency. However, the advent of side channel attacks, with their diverse categories such as timing, fault and power analysis attacks, compels us to reconsider the strategies for implementing more secure elliptic curve algorithms, in order to thwart any information leakage that could break the security of these algorithms. In this paper, we propose a new algorithm-level optimization for computing elliptic curve point arithmetic over prime fields that counters the side channel attacks threatening elliptic curve cryptosystems. Indeed, these attacks nowadays present a realistic threat to cryptographic applications and have proved to be very effective against most cryptosystems. Targeting both performance and security through fast, side-channel-protected code, we built a library for the underlying prime field arithmetic for the common fields specified by NIST and SECG. Our work can be employed in numerous applications such as e-health, e-banking, e-commerce and e-governance.
Elliptic Curve Cryptography, Side Channel Attacks Countermeasures, Digital Signature.
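A standard example of an algorithm-level side-channel countermeasure for elliptic curve scalar multiplication is the Montgomery ladder, which performs the same sequence of operations for every key bit. The sketch below uses a tiny textbook curve over a 17-element field purely for illustration (the paper targets the NIST/SECG prime fields, and its specific countermeasures are not reproduced here).

```python
P_MOD = 17   # toy prime (real deployments use >= 256-bit NIST/SECG primes)
A = 2        # curve y^2 = x^3 + 2x + 2 over F_17, a classic textbook example

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)  # Fermat inversion

def point_add(p, q):
    """Affine point addition; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ladder_mult(k, p, bits=8):
    """Montgomery ladder: exactly one addition and one doubling per key bit,
    regardless of the bit's value -- the side-channel countermeasure."""
    r0, r1 = None, p
    for i in reversed(range(bits)):
        if (k >> i) & 1:
            r0, r1 = point_add(r0, r1), point_add(r1, r1)
        else:
            r1, r0 = point_add(r0, r1), point_add(r0, r0)
    return r0

def naive_mult(k, p):
    # Unprotected reference: repeated addition, used only to check the ladder.
    acc = None
    for _ in range(k):
        acc = point_add(acc, p)
    return acc

G = (5, 1)  # generator point on the toy curve
```

Because the ladder's add/double sequence is independent of the secret scalar's bits, a power or timing trace of the operation sequence leaks no key information, unlike the classic double-and-add algorithm whose per-bit work varies.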
Pushpinder Kaur Chouhan, Md Israfil Biswas, Naveed Khan, Chris Nugent, and Philip Morrow, Ulster University, Jordanstown, UK
The Internet of Things (IoT) is a system of interrelated things (devices, machines, objects, animals, people, etc.) that can be recognised by unique identifiers and that have the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. Thus, an IoT environment can generate a huge amount of data. To analyse this big data so that appropriate action can be taken, different modelling techniques and risk management methodologies should be deployed. In this article, we present existing data modelling techniques and risk management methodologies that can be used for a data-to-action paradigm. Data to action considers means of devising, executing and verifying appropriate courses of action based on the situation assessment of an IoT application and its associated infrastructure, together with quantified measures of uncertainty, including the likelihood that security has been compromised. To validate data to action in the IoT environment, we deploy some of the mentioned modelling techniques and risk management methods on publicly available weather forecast data. Our proposed model generates a notification message to inform school authorities about school closure by analysing the weather condition data.
Internet of Things, Data to Action, Machine Learning Algorithms, Data Modelling
Aliya Tabassum and Wadha Lebda, Department of Computer Science and Engineering, Qatar University, Doha, Qatar
The Internet of Things (IoT) is the interconnection of heterogeneous smart devices through the Internet, with diverse application areas. The huge number of smart devices and the complexity of their networks have made it extremely difficult to secure the data and the communication between devices. Various conventional security controls are insufficient to prevent the numerous attacks against these information-rich devices. Along with enhancing existing approaches, a perimeter defense, the Intrusion Detection System (IDS), has proved effective in most scenarios. However, conventional IDS approaches are unsuitable for mitigating continuously emerging zero-day attacks. Intelligent mechanisms that can detect unfamiliar intrusions seem a promising solution. This article explores popular exploitable attacks against the IoT architecture and the relevant defenses, to identify appropriate protective mechanisms for different networking practices and attack categories. In addition, a security framework for the IoT architecture is provided, with a list of security enhancement techniques.
Attacks, Architecture, Internet of Things (IoTs), Intrusion Detection System, Security.
Rajeev Kanth1, Tuomas Korpi1, Arto Toppinen1, Kimmo Myllymäki1, Jatin Chaudhary2,3, Jukka Heikkonen2, 1Savonia University of Applied Sciences, Opistotie 2, 70150 Kuopio, Finland, 2University of Turku, Vesilinnantie 5, 20520 Turku, Finland and 3Sardar Vallabhbhai National Institute of Technology, Surat, India
The term “Internet of Things (IoT)” and its ecosystem are expanding very rapidly, making it complicated to capture an exact definition. The IoT also creates numerous challenges and opportunities, even for a single human being. This paper is targeted at audiences who do not have specialist knowledge of information technology, digitalization, wireless networking, or sensor technologies. We propose a simple, easily understandable explanation, including applications and impacts on society. As applications, we also discuss in this paper a few innovative experiments and the results that we have obtained. After reading this paper, a layperson without specific knowledge of this field should understand the concepts and advancements behind the buzzword “IoT.”
Internet of Things, Things-to-Things Connectivity, IoT Applications, Smart Bin Management
Erdal ÖZDOGAN1 and O. Ayhan ERDEM2, 1Department of Information Systems, Gazi University, Ankara, Turkey and 2Department of Computer Engineering, Gazi University, Ankara, Turkey
One of the important factors affecting communication performance in the Internet of Things is the messaging protocol. MQTT, XMPP and AMQP are centralized application protocols that communicate through a server. DDS and CoAP are application protocols that can communicate directly, especially in real-time applications. As the Internet of Things becomes more widespread and usage scenarios acquire different requirements, new approaches to data communication are required. In this study, a UDP-based hybrid application-layer protocol has been designed that can communicate both directly and through a central server. In addition, the operating logic and packet structure of the developed hybrid protocol are examined.
Internet of Things, IoT Application Protocol, Hybrid IoT Protocol, Lightweight IoT Protocol, UDP Based IoT protocol
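A UDP application-layer protocol like the one described is defined largely by its packet structure. The sketch below encodes and decodes a hypothetical 8-byte header (version, mode, message id, topic id, payload length); the layout and field names are illustrative assumptions, not the authors' actual packet format. The mode field is what would let one datagram format serve both direct and server-brokered communication.

```python
import struct

# Hypothetical 8-byte big-endian header: version, mode, msg id, topic id, length.
HEADER = struct.Struct("!BBHHH")

MODE_DIRECT = 0    # device-to-device
MODE_BROKERED = 1  # via central server

def encode(version, mode, msg_id, topic_id, payload: bytes) -> bytes:
    """Build one UDP datagram: fixed header followed by the payload."""
    return HEADER.pack(version, mode, msg_id, topic_id, len(payload)) + payload

def decode(datagram: bytes) -> dict:
    """Parse the header and slice out exactly `length` payload bytes."""
    version, mode, msg_id, topic_id, length = HEADER.unpack_from(datagram)
    payload = datagram[HEADER.size:HEADER.size + length]
    return {"version": version, "mode": mode, "msg_id": msg_id,
            "topic_id": topic_id, "payload": payload}

pkt = encode(1, MODE_DIRECT, 42, 7, b"21.5C")
msg = decode(pkt)
```

Such a datagram would be handed to a UDP socket's `sendto`; keeping the header fixed-size and binary is what makes the protocol lightweight enough for constrained IoT devices.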