Universal adversarial example construction against autonomous vehicle

Bibliographic Details
Main Author: Beh, Nicholas Chee Kwang
Other Authors: Tan, Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Online Access: https://hdl.handle.net/10356/153501
Institution: Nanyang Technological University
Description
Summary: Autonomous Vehicles (AVs) have developed at a rapid pace and made significant strides in technological capability. While AVs do not suffer from human error, they are not immune to other types of error and, more worryingly, to malicious attacks. Most AVs today rely on multiple machine learning models, which may or may not be resistant to adversarial attacks. A white-box attack using Universal Adversarial Perturbations (Iterative-DeepFool) on the traffic light recognition component of the Baidu Apollo Autonomous Driving System (ADS) revealed that the model fails to hold up under conditions other than daylight. Furthermore, the perturbation is imperceptible to the human eye, posing an even greater safety risk. We also examine the safeguards currently in place in Apollo and hypothesize potential solutions to mitigate this issue.
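
For context, the following is a minimal sketch of how a universal adversarial perturbation of the kind named in the summary can be constructed, using DeepFool as the inner solver (after Moosavi-Dezfooli et al.). This is not the project's actual code: `model`, `dataset`, and the parameters `xi` (the l-infinity budget) and `num_classes` are illustrative assumptions.

    # Illustrative sketch only: `model` is any differentiable image classifier,
    # `dataset` an indexable collection of input tensors; `xi` and
    # `num_classes` are assumed values, not taken from the project.
    import torch

    def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
        # Minimal perturbation pushing x across the nearest decision boundary.
        x = x.clone().detach()
        with torch.no_grad():
            orig_label = model(x.unsqueeze(0)).argmax().item()
        pert = torch.zeros_like(x)
        for _ in range(max_iter):
            x_adv = (x + (1 + overshoot) * pert).unsqueeze(0).requires_grad_(True)
            logits = model(x_adv)[0]
            if logits.argmax().item() != orig_label:
                break  # accumulated perturbation already flips the label
            grads = [torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0][0]
                     for k in range(num_classes)]
            best_ratio, best_step = float("inf"), None
            for k in range(num_classes):
                if k == orig_label:
                    continue
                w = grads[k] - grads[orig_label]           # boundary normal
                f = (logits[k] - logits[orig_label]).item()
                ratio = abs(f) / (w.norm() + 1e-8)         # linearized distance
                if ratio < best_ratio:
                    best_ratio = ratio
                    best_step = (abs(f) / (w.norm() ** 2 + 1e-8)) * w
            pert = pert + best_step
        return (1 + overshoot) * pert

    def universal_perturbation(model, dataset, xi=0.05, max_passes=5):
        # One perturbation v, accumulated over the dataset, that fools most inputs.
        v = torch.zeros_like(dataset[0])
        for _ in range(max_passes):
            for x in dataset:
                with torch.no_grad():
                    clean = model(x.unsqueeze(0)).argmax()
                    fooled = model((x + v).unsqueeze(0)).argmax()
                if fooled == clean:                # v does not yet fool this sample
                    dv = deepfool(model, (x + v).detach())
                    v = torch.clamp(v + dv, -xi, xi)  # project onto the l-inf ball
        return v

The key property, reflected in the outer loop, is universality: a single perturbation v is refined across the whole dataset rather than per image, and the clamp keeps it small enough to stay imperceptible, matching the safety risk described above.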