Adversarial example construction against autonomous vehicle (part 2)

Bibliographic Details
Main Author: Toh, Koo Fong
Other Authors: Tan, Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Online Access:https://hdl.handle.net/10356/148110
Description
Summary: The rapid development of autonomous vehicles can be seen around the world, and they will soon have a global impact. It is therefore essential to address the technological issues that autonomous vehicles face. Autonomous vehicles use Deep Neural Networks (DNNs) to predict the movement of the car. However, DNNs are vulnerable to cybersecurity attacks such as adversarial attacks, and such flaws can severely undermine trust in the autonomous vehicle industry. In this report, we evaluate an adversarial attack against the open-source Apollo autonomous driving platform, focusing on the one-pixel attack. Our approach is to extract datasets from the LGSVL simulator and use them to generate adversarial images, which we then use to test the model in Apollo. The testing results are used to evaluate the effectiveness of the adversarial attack.
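
As background on the technique named in the summary, a one-pixel attack typically searches for a single pixel location and colour that minimise the model's confidence in the correct class, often via differential evolution. The sketch below illustrates that idea only; predict_confidence, the 32x32 image size, and the sigmoid stand-in model are illustrative assumptions, not the report's actual setup, which queries the Apollo model on LGSVL-derived images.

```python
# A minimal sketch of a one-pixel attack using differential evolution.
# Assumptions (not taken from the report): predict_confidence is a
# hypothetical stand-in for the Apollo perception model, and the 32x32
# image size is for illustration only.
import numpy as np
from scipy.optimize import differential_evolution

H, W = 32, 32

def predict_confidence(image, true_label):
    # Hypothetical placeholder model: returns a pseudo-confidence for the
    # true class. The actual project would query the DNN used by Apollo.
    score = image[..., true_label % 3].mean() / 255.0
    return 1.0 / (1.0 + np.exp(-(score - 0.5) * 10.0))

def perturb(image, candidate):
    # Apply a single-pixel perturbation encoded as (x, y, r, g, b).
    x, y, r, g, b = candidate
    out = image.copy()
    out[int(y), int(x)] = np.clip([r, g, b], 0, 255).astype(np.uint8)
    return out

def one_pixel_attack(image, true_label):
    # Search for the pixel position and colour that minimise the model's
    # confidence in the correct class.
    bounds = [(0, W - 1), (0, H - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(
        lambda c: predict_confidence(perturb(image, c), true_label),
        bounds, maxiter=30, popsize=15, seed=0)
    return perturb(image, result.x), result.fun

if __name__ == "__main__":
    clean = np.full((H, W, 3), 128, dtype=np.uint8)
    adversarial, confidence = one_pixel_attack(clean, true_label=0)
    print("confidence in true class after attack:", confidence)
```

In the project's setting, the same search would be run against images extracted from LGSVL, with the placeholder model replaced by the output of Apollo's perception DNN.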