Adversarial example construction against autonomous vehicle (part 2)

Autonomous vehicles have become increasingly popular due to their many potential benefits. They utilise sensors such as LiDAR, cameras and radar, combined with multiple machine learning models, to sense, interpret and navigate the surrounding environment without human input. However, the use of machine learning models for decision-making carries associated risks. One such risk is machine learning models' vulnerability to adversarial attacks, which can have serious safety consequences. This project explores the Fast Gradient Sign Method (FGSM) adversarial attack against the Apollo Autonomous Driving System's traffic light recognition model, evaluating its effectiveness and impact. We found that the FGSM attack successfully causes misclassification at epsilon values of 0.15 to 0.25. Other key findings are that variations in time of day, rain and fog conditions generally do not affect the attack's ability to cause misclassification, and that Apollo's existing safeguards are generally ineffective. We conclude that the effectiveness of adversarial attacks poses a critical safety issue, and that additional research into relevant defences and countermeasures is necessary.
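As background on the attack studied: FGSM perturbs an input x one step in the direction of the sign of the loss gradient, x_adv = x + epsilon * sign(grad_x J(theta, x, y)), so that a small, bounded pixel change maximally increases the classifier's loss. The short Python/PyTorch sketch below illustrates this in generic form; it is not code from the thesis, and the model, images, labels and the example epsilon of 0.2 (within the 0.15-0.25 range the report found effective) are placeholder assumptions.

    # Minimal FGSM sketch (illustrative only; not the thesis's actual code).
    # Assumes a generic PyTorch classifier `model`, an input batch `images`
    # with pixel values in [0, 1], and ground-truth `labels`.
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.2):
        """Return adversarial images: x_adv = x + epsilon * sign(grad_x loss)."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()  # populates images.grad with dLoss/dx
        # Step in the direction that increases the loss, then clamp to a valid range.
        x_adv = images + epsilon * images.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Larger epsilon values make the perturbation more visible but more likely to flip the prediction, which is why the effective range reported above matters.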

Bibliographic Details
Main Author: Cheong, Benjamin Yii Leung
Other Authors: Tan, Rui (School of Computer Science and Engineering)
Format: Final Year Project (FYP)
Degree: Bachelor of Engineering (Computer Science)
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering
Project Code: SCSE22-0729
Citation: Cheong, B. Y. L. (2023). Adversarial example construction against autonomous vehicle (part 2). Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/172414
Online Access: https://hdl.handle.net/10356/172414