Anomaly Detection and Redundancy Elimination of Big Sensor Data in Internet of Things

In the era of big data, massive sensor data are gathered through the Internet of Things. The data captured by sensor networks are considered to contain highly useful and valuable information. However, for a variety of reasons, received sensor data often contain anomalies, so effective anomaly detection methods are required to guarantee the quality of the data collected by sensor nodes. Moreover, since sensor data are usually correlated in time and space, not all the gathered data are valuable for further processing and analysis; preprocessing is necessary to eliminate the redundancy in the gathered massive sensor data. This paper defines a sensor data preprocessing framework composed of two parts: sensor data anomaly detection and sensor data redundancy elimination. In the first part, methods based on principal statistical analysis and Bayesian networks are proposed for sensor data anomaly detection. In the second part, approaches based on a static Bayesian network (SBN) and dynamic Bayesian networks (DBNs) are proposed for sensor data redundancy elimination: a static sensor data redundancy detection algorithm (SSDRDA) for eliminating redundant data in static datasets, and a real-time sensor data redundancy detection algorithm (RSDRDA) for eliminating redundant sensor data in real time. The efficiency and effectiveness of the proposed methods are validated on real-world sensor datasets.
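The two preprocessing stages can be illustrated with a minimal sketch: z-score screening for anomalies and suppression of near-duplicate consecutive readings for redundancy. This is only an illustrative toy, not the paper's SSDRDA/RSDRDA algorithms; the threshold values `Z_MAX` and `EPS` are assumptions made up for the example.

```python
# Toy sketch of statistics-based anomaly screening and redundancy
# suppression for a stream of sensor readings. Thresholds are
# illustrative assumptions, not values from the paper.

from statistics import mean, stdev

Z_MAX = 2.0   # flag readings more than 2 standard deviations from the mean
EPS = 0.05    # treat consecutive readings closer than EPS as redundant

def detect_anomalies(readings):
    """Return indices of readings whose z-score exceeds Z_MAX."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(readings)
            if abs(x - mu) / sigma > Z_MAX]

def drop_redundant(readings):
    """Keep a reading only when it differs enough from the last kept one,
    exploiting the temporal correlation between consecutive samples."""
    kept = [readings[0]]
    for x in readings[1:]:
        if abs(x - kept[-1]) >= EPS:
            kept.append(x)
    return kept

# Example: a temperature trace with one spike and several repeats.
data = [20.1, 20.1, 20.2, 20.1, 35.0, 20.2, 20.2, 20.3]
print(detect_anomalies(data))  # the spike at index 4 is flagged
print(drop_redundant(data))    # repeated readings are dropped
```

A Bayesian-network approach, as used in the paper, would additionally exploit dependencies between different sensors rather than only within one time series.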


Similar Publications

Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible.
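For reference, the core of binary exponential backoff can be sketched in a few lines. This is the generic textbook schedule, not the specific variants analyzed in the paper; the cap of 10 doublings is an assumption borrowed from common Ethernet practice.

```python
import random

def beb_slot(attempt, max_exp=10):
    """Pick a retransmission slot after `attempt` consecutive collisions:
    the contention window doubles with each collision, capped at
    2**max_exp slots."""
    window = 2 ** min(attempt, max_exp)
    return random.randrange(window)  # uniform over [0, window)
```

After the first collision a sender waits 0 or 1 slots, after the second 0 to 3 slots, and so on; the bursty-traffic makespan results mentioned above concern this doubling schedule.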


Primitive partitioning strategies for streaming applications operate efficiently under two very strict assumptions: the resources are homogeneous and the messages are drawn from a uniform key distribution. These assumptions often do not hold for real-world use cases. Dealing with heterogeneity and non-uniform workloads requires inferring the resource capacities and input distribution at run time.


Triangle-free graphs play a central role in graph theory, and triangle detection (or triangle finding) as well as triangle enumeration (triangle listing) play central roles in the field of graph algorithms. In distributed computing, algorithms with sublinear round complexity for triangle finding and listing have recently been developed in the powerful CONGEST clique model, where communication is allowed between any two nodes of the network. In this paper we present the first algorithms with sublinear complexity for triangle finding and triangle listing in the standard CONGEST model, where the communication topology is the same as the topology of the network.


Most distributed machine learning systems today, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms is the high communication cost at the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available.
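The decentralized idea can be shown with a toy gossip step: each node mixes parameters with its neighbors and takes a local gradient step, with no central server. This is a hedged sketch of the generic D-PSGD update on a ring; the quadratic losses, uniform mixing weights, and step size are made up for illustration.

```python
import numpy as np

def dpsgd_step(xs, grads, gamma=0.1):
    """One D-PSGD round on a ring of n nodes: each node averages its
    parameters with its two neighbours (uniform mixing weights), then
    takes a local gradient step -- no central parameter server."""
    n = len(xs)
    mixed = [(xs[(i - 1) % n] + xs[i] + xs[(i + 1) % n]) / 3.0
             for i in range(n)]
    return [m - gamma * g for m, g in zip(mixed, grads)]

# Toy example: node i minimizes (x - t_i)^2, whose gradient is 2(x - t_i).
targets = np.array([1.0, 2.0, 3.0, 4.0])
xs = [np.zeros(1) for _ in range(4)]
for _ in range(200):
    grads = [2.0 * (x - t) for x, t in zip(xs, targets)]
    xs = dpsgd_step(xs, grads)
# The gossip averaging pulls all nodes toward the average of the
# targets (2.5) rather than their individual optima: the consensus
# behaviour that decentralized training relies on.
```

The question raised in the paper is whether such neighbor-only communication can beat the centralized pattern where every worker talks to one server.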


In this work, we provide a general framework for adding a linearizable iterator to data structures with set operations. We propose a condition on these set operations, called locality, so that any data structure implemented from local atomic operations can be augmented with a linearizable iterator as described by our framework. We then apply the iterator framework to various data structures, prove locality of their operations, and demonstrate that the iterator framework does not significantly affect the performance of concurrent operations.


Even in the absence of clocks, time bounds on the duration of actions enable the use of time for distributed coordination. This paper initiates an investigation of coordination in such a setting. A new communication structure called a zigzag pattern is introduced, and shown to guarantee bounds on the relative timing of events in this clockless model.


Consider a set of agents in a peer-to-peer communication network, where each agent has a personal dataset and a personal learning objective. The main question addressed in this paper is: how can agents collaborate to improve upon their locally learned model without leaking sensitive information about their data? Our first contribution is to reformulate this problem so that it can be solved by a block coordinate descent algorithm. We obtain an efficient and fully decentralized protocol working in an asynchronous fashion.


The Internet of Mobile Things encompasses stream data generated by sensors, network communications that pull and push these data streams, and processing and analytics that can effectively leverage actionable information for planning, management, and business advantage. Edge computing emerges as a new paradigm that decentralizes the communication, computation, control, and storage resources from the cloud to the edge of the Internet. This paper proposes an edge computing platform in which mobile fog nodes are physical devices where descriptive analytics is deployed to analyze real-time transit data streams.


We analyze the caching overhead incurred by a class of multithreaded algorithms when scheduled by an arbitrary scheduler. We obtain bounds that match or improve upon the well-known $O(Q+S \cdot (M/B))$ caching cost for the randomized work stealing (RWS) scheduler, where $S$ is the number of steals, $Q$ is the sequential caching cost, and $M$ and $B$ are the cache size and block (or cache line) size respectively.


A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say $k$, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model, whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal protocol.