FGSM attacks on traffic light recognition of the Apollo autonomous driving system


Bibliographic Details
Main Author: Samuel, Milla
Other Authors: Tan, Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/148086
Institution: Nanyang Technological University
Description
Summary: Autonomous vehicles rely on Autonomous Driving Systems (ADS) to control the car without human intervention. The ADS uses multiple sensors, including cameras, to perceive the environment around the vehicle. These perception systems rely on machine learning models that are susceptible to adversarial attacks, in which a model's input is intercepted and perturbations are added, causing the model to make wrong predictions with very high confidence. We attempted the Fast Gradient Sign Method (FGSM) adversarial attack on the traffic light recognition module of the Baidu Apollo ADS in normal, bright, rainy and foggy conditions to test the robustness of the system against white-box adversarial attacks. While the model performed well against attacks in normal conditions, multiple attacks were able to fool the model into predicting the wrong class with high confidence using almost imperceptible perturbations in bright and rainy conditions. This exposes a vulnerability of the Apollo system: the FGSM attack managed to exploit the linearity of the traffic light recognition model and to pass through all the safeguards that Apollo had in place.
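
The FGSM perturbation described in the abstract is a single gradient step. A minimal sketch follows, assuming PyTorch; the model, image tensor and label are hypothetical stand-ins for illustration, not Apollo's actual traffic light recognition module:

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Track gradients on a detached copy of the input image.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Core FGSM step: x_adv = x + epsilon * sign(grad_x J(theta, x, y)).
    adv_image = image + epsilon * image.grad.sign()
    # Clamp so the perturbed pixels stay in the valid [0, 1] range.
    return adv_image.clamp(0, 1).detach()

Because the perturbation is bounded elementwise by epsilon, a small value keeps the change almost imperceptible while still moving the input against the gradient of the loss, which is what lets the attack succeed against a near-linear model.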