Secure algorithm based data aggregation for sensor networks

Bibliographic Details
Main Author: Chai, Jeremy Wen Zhang
Other Authors: Zhang Jie
Format: Final Year Project
Language: English
Published: 2016
Subjects:
Online Access:http://hdl.handle.net/10356/66764
Institution: Nanyang Technological University
Description
Summary: This paper addresses the problem of aggregating data from a large number of sensors when an unknown number of them may be reporting false data. These malicious sensors may operate independently or collude in an attempt to force the algorithm to report an aggregated value that deviates from the true value. Three algorithms are analysed: Median, Robust Iterative Filtering, and MaxTrust. Median is shown to perform poorly because, in a collusive attack where the percentage of malicious sensors is below 50%, the median always falls on a tail value reported by a non-malicious sensor. Robust Iterative Filtering is then shown not to work because of its incorrect assumption that the sum of the biases is zero. MaxTrust is shown to be the best performer of the three. The paper then makes three improvements to the MaxTrust algorithm, yielding an improved algorithm called Time-Sensitive MaxTrust. The first improvement introduces the notion of time periods: a sensor that has previously reported a false reading is likely to report false readings again in subsequent time periods, so the algorithm aggregates the current period's data using an aggregated trust value comprising 90% historical trust and 10% current trust. This improved the RMSE by 13% to 23%. The second improvement replaces all reported precisions with an arbitrary precision, both to prevent attackers from attacking the algorithm by reporting a false precision value and to handle sensors that do not report a precision at all. Ideally, this arbitrary precision is close to the sensor's normal operating precision; if that value is unknown, as in the case of crowdsourced data, the precisions can be replaced with a sufficiently high precision. The last improvement uses the previous time period's trust values as the initial values for the MaxTrust algorithm instead of a fixed pre-determined value, reducing the number of iterations needed to reach the optimal solution. Experimental results show that this improved both the mean and the standard deviation of the number of iterations needed while maintaining the RMSE.
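The 90%/10% time-sensitive trust update described in the summary can be sketched as follows. This is a minimal illustration based only on the weights stated in the abstract; the function names, signatures, and the trust-weighted averaging step are hypothetical, not the author's actual Time-Sensitive MaxTrust implementation.

```python
def aggregated_trust(historical, current, w_hist=0.9, w_curr=0.1):
    """Combine each sensor's historical and current trust values
    using the 90%/10% weighting described in the abstract."""
    return [w_hist * h + w_curr * c for h, c in zip(historical, current)]

def trust_weighted_aggregate(readings, trust):
    """Aggregate one time period's readings, weighting each sensor's
    reading by its aggregated trust value (illustrative step only)."""
    return sum(r * t for r, t in zip(readings, trust)) / sum(trust)

# Example: a sensor with perfect historical trust that misbehaved in
# the current period keeps most of its standing, so a single bad
# period only slowly erodes its influence on the aggregate.
trust = aggregated_trust([1.0, 1.0], [0.0, 1.0])
value = trust_weighted_aggregate([100.0, 20.0], trust)
```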