Attack on prediction confidence of deep learning neural networks
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157249
Institution: Nanyang Technological University
Summary: Machine learning has become a prevalent part of everyday life, being used in tasks that most people may not be aware of, such as transport, entertainment, and education. Because of this widespread use, it has also become a prime target for cyber-attacks by malicious parties. With machine learning playing such a crucial role in society, it is therefore vital that these threats be extensively researched and studied, to prevent the disruption of essential networks and systems.
One common threat is the data poisoning attack, in which the attacker manipulates training data to cause errors in the model. Most research on this attack involves adding "noise" to images, in the form of perturbations that are imperceptible to the human eye. These methods are generally complex to execute and require a strong mathematical background. This project shifts the attention to simpler methods of attack that an inexperienced or less knowledgeable malicious party might attempt in an effort to disrupt a deep learning neural network (DLNN). Studying such methods can strengthen defences against a wider variety of attacks, increasing the versatility and robustness of cybersecurity in the machine learning industry.
The objective of this project is to investigate a simplistic approach to the data poisoning attack: applying basic image adjustments, such as altering the vibrance and saturation of an image, and comparing the results to determine which adjustment type is most effective at disrupting a DLNN. A single dataset was altered with image editing software into multiple subsets, each corresponding to a particular adjustment type. Each subset contained further sets of images at varying degrees of severity, which were then tested on three different DLNNs (VGG-16, EfficientNet, and ResNet). The results were analysed, each adjustment type was ranked according to its effectiveness, and the most effective types were then combined and subjected to further testing, along the lines of the sketch below.
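To make the setup concrete, here is a minimal sketch of this kind of probe: apply a basic image adjustment at increasing severity and record how a pretrained network's top-1 confidence responds. The model choice, preprocessing, file path, and severity values are illustrative assumptions, and PIL's Brightness enhancer stands in for an "exposure" control; this is not the project's actual pipeline.

```python
import torch
from PIL import Image, ImageEnhance
from torchvision import models, transforms

# Standard ImageNet preprocessing (an assumption; the report does not
# specify its exact preprocessing).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# VGG-16 is one of the three networks named in the report.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def top1_confidence(img: Image.Image) -> float:
    """Top-1 softmax probability the model assigns to one image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
        return torch.softmax(logits, dim=1).max().item()

def apply_offset(img: Image.Image, offset: int) -> Image.Image:
    """Add a constant to every pixel channel (a simple 'offset' adjustment)."""
    return img.point(lambda p: max(0, min(255, p + offset)))

img = Image.open("sample.jpg").convert("RGB")  # placeholder path
for factor in (1.0, 1.5, 2.0, 2.5, 3.0):       # assumed severity levels
    poisoned = ImageEnhance.Brightness(img).enhance(factor)   # exposure-like
    poisoned = apply_offset(poisoned, int(20 * (factor - 1)))  # offset
    print(f"factor={factor:.1f}  top-1 confidence={top1_confidence(poisoned):.3f}")
```

Ranking adjustment types then amounts to repeating this loop per adjustment over the whole dataset and comparing how quickly confidence falls as severity grows.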
The results showed that, of all the image adjustment types, a combination of exposure and offset was the most effective at attacking the prediction confidence of DLNNs, reducing the model's prediction score by roughly 4% for every unit increase in the adjustment factor.
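Taken at face value, that figure implies a roughly linear relationship between adjustment severity and confidence loss. The following back-of-envelope model is our extrapolation for illustration, not a formula given in the report:

```python
# Linear trend implied by the reported ~4%-per-unit-factor result
# (an illustrative extrapolation, not a formula from the report).
def expected_confidence(baseline: float, factor: float) -> float:
    """Predicted prediction score after an exposure+offset adjustment."""
    return max(0.0, baseline - 0.04 * factor)

# e.g. a model starting at a score of 0.90 would be expected to drop
# to about 0.70 at a combined adjustment factor of 5.
print(expected_confidence(0.90, 5.0))  # -> 0.7
```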