Attack on prediction confidence of deep learning neural networks

Bibliographic Details
Main Author: Ng, Garyl Xuan
Other Authors: Liu Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access:https://hdl.handle.net/10356/157249
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-157249
record_format dspace
spelling sg-ntu-dr.10356-157249 2022-05-11T05:59:16Z
Attack on prediction confidence of deep learning neural networks
Ng, Garyl Xuan; Liu Yang (School of Computer Science and Engineering, yangliu@ntu.edu.sg)
Bachelor of Engineering (Computer Science)
2022-05-11T05:59:15Z 2022 Final Year Project (FYP)
Ng, G. X. (2022). Attack on prediction confidence of deep learning neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157249
en SCSE21-0227 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
spellingShingle Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Ng, Garyl Xuan
Attack on prediction confidence of deep learning neural networks
description Machine learning has become a prevalent part of everyday life, often used in ways that most people are not aware of, in areas such as transport, entertainment, and education. Because of this widespread usage, it has also become a prime target for cyber-attacks by malicious parties. With machine learning playing such a crucial role in society, it is vital that these threats be researched extensively to prevent the disruption of essential networks and systems. One common threat is the data poisoning attack, in which the attacker manipulates training data to cause errors in the model. Most research on this attack involves injecting “noise” into images, in the form of perturbations that are imperceptible to the human eye. These methods are generally complex to execute and require a strong mathematical background.

This project shifts attention to simpler methods of attack that an inexperienced or less knowledgeable malicious party might attempt in order to disrupt a deep learning neural network (DLNN). Studying such methods may support defence against a wider variety of attacks, increasing the versatility and robustness of cybersecurity in the machine learning industry. The objective is to investigate a simple approach to the data poisoning attack by applying basic image adjustments, such as altering the vibrance and saturation of an image, and comparing the results to determine which adjustment type is most effective at disrupting a DLNN.

A single dataset was altered with image editing software to produce multiple subsets, each corresponding to a particular adjustment type. Each subset contained further sets of images adjusted to varying degrees of severity, which were then tested on three different DLNNs (VGG-16, EfficientNet and ResNet). The results were analysed and each adjustment type was ranked according to its effectiveness. The most effective types were then combined and subjected to further testing. Out of all the image adjustment types, a combination of exposure and offset was the most effective at attacking the prediction confidence of DLNNs, reducing the model's prediction score by 4% for every increase of 1 in the adjustment factor.
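The record contains no code, but the evaluation described above can be illustrated with a minimal Python sketch: apply a basic image adjustment at increasing severities and record how a pretrained classifier's top-1 prediction confidence changes. PyTorch/torchvision and Pillow are assumed; the VGG-16 weights, the brightness enhancer standing in for an "exposure" adjustment, and the file name sample.jpg are illustrative assumptions, not details taken from the project.

# A minimal sketch (not the project's actual code) of measuring how a simple
# image adjustment affects a pretrained classifier's prediction confidence.
import torch
from PIL import Image, ImageEnhance
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained models named in the abstract.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

def top1_confidence(img: Image.Image) -> float:
    """Return the softmax probability of the model's top-1 class for one image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
        return torch.softmax(logits, dim=1).max().item()

# Brightness is used here as a stand-in for an "exposure"-style adjustment;
# a factor of 1.0 leaves the image unchanged, larger factors brighten it.
img = Image.open("sample.jpg").convert("RGB")  # hypothetical input image
for factor in [1.0, 2.0, 3.0, 4.0]:
    adjusted = ImageEnhance.Brightness(img).enhance(factor)
    print(f"factor {factor}: top-1 confidence {top1_confidence(adjusted):.3f}")

As a point of reference, under the 4%-per-unit figure reported above, increasing the combined exposure/offset factor by 3 would be expected to lower the prediction score by roughly 12%.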
author2 Liu Yang
author_facet Liu Yang
Ng, Garyl Xuan
format Final Year Project
author Ng, Garyl Xuan
author_sort Ng, Garyl Xuan
title Attack on prediction confidence of deep learning neural networks
title_short Attack on prediction confidence of deep learning neural networks
title_full Attack on prediction confidence of deep learning neural networks
title_fullStr Attack on prediction confidence of deep learning neural networks
title_full_unstemmed Attack on prediction confidence of deep learning neural networks
title_sort attack on prediction confidence of deep learning neural networks
publisher Nanyang Technological University
publishDate 2022
url https://hdl.handle.net/10356/157249
_version_ 1734310124465422336