FGSM attacks on traffic light recognition of the apollo autonomous driving system

Autonomous vehicles rely on Autonomous Driving Systems (ADS) to control the car without human intervention. The ADS uses multiple sensors, including cameras, to perceive the environment around the vehicle. These perception systems rely on machine learning models which are susceptible to adversarial attacks, in which a model’s input is intercepted and perturbations are added, causing models to make wrong predictions with very high confidence. We attempted the Fast Gradient Sign Method (FGSM) adversarial attack on the traffic light recognition module of the Baidu Apollo ADS in normal, bright, rainy and foggy conditions to test the robustness of the system against white-box adversarial attacks. While the model performed well against attacks in normal conditions, multiple attacks were able to fool the model into predicting the wrong class with high confidence using almost imperceptible perturbations in bright and rainy conditions. This exposes a vulnerability of the Apollo system, in which the FGSM attack managed to exploit the linearity of the traffic light recognition model as well as pass through all the safeguards that Apollo had in place.
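The abstract describes FGSM, which perturbs an input by a small step epsilon in the direction of the sign of the loss gradient with respect to that input. As an illustration only (not the Apollo attack pipeline from this project), here is a minimal sketch on a hypothetical two-feature logistic-regression classifier, where the gradient can be written out by hand; all weights and inputs are made-up values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """FGSM on a logistic model p = sigmoid(w.x + b).

    For cross-entropy loss, the gradient w.r.t. each input
    feature x_i is (p - y) * w_i; FGSM adds epsilon times the
    sign of that gradient to increase the loss.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + epsilon * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Hypothetical toy model and input
w, b = [2.0, -1.0], 0.0
x, y = [0.5, -0.5], 1.0   # clean input, true label 1

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.6)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
```

With these toy numbers the clean input is confidently classified as class 1, while the perturbed input crosses the decision boundary, mirroring the high-confidence misclassifications the project reports; the "linearity" the abstract mentions is exactly why a single signed-gradient step suffices.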

Bibliographic Details
Main Author: Samuel, Milla
Other Authors: Tan Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/148086
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-148086
record_format dspace
spelling sg-ntu-dr.10356-1480862021-04-22T13:18:48Z FGSM attacks on traffic light recognition of the apollo autonomous driving system Samuel, Milla Tan Rui School of Computer Science and Engineering tanrui@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Autonomous vehicles rely on Autonomous Driving Systems (ADS) to control the car without human intervention. The ADS uses multiple sensors, including cameras, to perceive the environment around the vehicle. These perception systems rely on machine learning models which are susceptible to adversarial attacks, in which a model’s input is intercepted and perturbations are added, causing models to make wrong predictions with very high confidence. We attempted the Fast Gradient Sign Method (FGSM) adversarial attack on the traffic light recognition module of the Baidu Apollo ADS in normal, bright, rainy and foggy conditions to test the robustness of the system against white-box adversarial attacks. While the model performed well against attacks in normal conditions, multiple attacks were able to fool the model into predicting the wrong class with high confidence using almost imperceptible perturbations in bright and rainy conditions. This exposes a vulnerability of the Apollo system, in which the FGSM attack managed to exploit the linearity of the traffic light recognition model as well as pass through all the safeguards that Apollo had in place. Bachelor of Engineering Science (Computer Science) 2021-04-22T13:18:48Z 2021-04-22T13:18:48Z 2021 Final Year Project (FYP) Samuel, M. (2021). FGSM attacks on traffic light recognition of the apollo autonomous driving system. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148086 https://hdl.handle.net/10356/148086 en SCSE20-0069 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
spellingShingle Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Samuel, Milla
FGSM attacks on traffic light recognition of the apollo autonomous driving system
description Autonomous vehicles rely on Autonomous Driving Systems (ADS) to control the car without human intervention. The ADS uses multiple sensors, including cameras, to perceive the environment around the vehicle. These perception systems rely on machine learning models which are susceptible to adversarial attacks, in which a model’s input is intercepted and perturbations are added, causing models to make wrong predictions with very high confidence. We attempted the Fast Gradient Sign Method (FGSM) adversarial attack on the traffic light recognition module of the Baidu Apollo ADS in normal, bright, rainy and foggy conditions to test the robustness of the system against white-box adversarial attacks. While the model performed well against attacks in normal conditions, multiple attacks were able to fool the model into predicting the wrong class with high confidence using almost imperceptible perturbations in bright and rainy conditions. This exposes a vulnerability of the Apollo system, in which the FGSM attack managed to exploit the linearity of the traffic light recognition model as well as pass through all the safeguards that Apollo had in place.
author2 Tan Rui
author_facet Tan Rui
Samuel, Milla
format Final Year Project
author Samuel, Milla
author_sort Samuel, Milla
title FGSM attacks on traffic light recognition of the apollo autonomous driving system
title_short FGSM attacks on traffic light recognition of the apollo autonomous driving system
title_full FGSM attacks on traffic light recognition of the apollo autonomous driving system
title_fullStr FGSM attacks on traffic light recognition of the apollo autonomous driving system
title_full_unstemmed FGSM attacks on traffic light recognition of the apollo autonomous driving system
title_sort fgsm attacks on traffic light recognition of the apollo autonomous driving system
publisher Nanyang Technological University
publishDate 2021
url https://hdl.handle.net/10356/148086
_version_ 1698713639445331968