Adversarial example construction against autonomous vehicle (part 2)
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/172414
Institution: Nanyang Technological University
Summary: Autonomous vehicles have become increasingly popular due to their many potential benefits. They utilise many sensors, such as LiDAR, cameras and radar, combined with multiple machine learning models, to sense, interpret and navigate the surrounding environment without human input. However, the use of machine learning models for decision-making comes with associated risks. One such risk is machine learning models' vulnerability to adversarial attacks, which can have serious safety consequences.
This project explores the Fast Gradient Sign Method (FGSM) adversarial attack against the Apollo Autonomous Driving System's traffic light recognition model, examining the attack's effectiveness and impact.
We found that the FGSM attack successfully causes misclassification at epsilon values of 0.15 to 0.25. Variations in time of day, rain and fog conditions generally do not affect the attack's ability to cause misclassification, and the existing safeguards used by Apollo are found to be generally ineffective.
We conclude that the effectiveness of these adversarial attacks poses a potentially critical safety issue, and that additional research into relevant defences and countermeasures is necessary.
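For readers unfamiliar with the attack named in the summary, the sketch below illustrates the standard FGSM formulation, x_adv = x + epsilon * sign(grad_x J(theta, x, y)), in PyTorch. It is not taken from the project: the model, input image, label and epsilon value are placeholders, and the Apollo traffic light recognition pipeline is not reproduced here.

```python
# Minimal, illustrative FGSM sketch (PyTorch). `model`, `image` and `label`
# are hypothetical stand-ins, not the Apollo traffic light recognition model.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon):
    """Perturb `image` in the direction that maximises the classification loss:
    x_adv = x + epsilon * sign(grad_x J(theta, x, y)).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by epsilon in the sign of its gradient, then clamp the
    # result back to the valid image range [0, 1].
    adv_image = image + epsilon * image.grad.sign()
    return torch.clamp(adv_image, 0.0, 1.0).detach()
```

An epsilon in the 0.15 to 0.25 range reported above would be passed as the `epsilon` argument; larger values generally make misclassification more likely but also make the perturbation more visible.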