Adversarial example construction against autonomous vehicles (part 2)


Bibliographic Details
Main Author: Malavade, Sanskar Deepak
Other Authors: Tan Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175343
Institution: Nanyang Technological University
Description
Autonomous vehicles (AVs) represent a transformative technology with the potential to revolutionize transportation by operating without human intervention. Deep Neural Networks (DNNs) play a pivotal role in AV technology, enabling tasks such as object detection and scene understanding. However, recent research has highlighted vulnerabilities in DNNs, particularly their susceptibility to adversarial examples: inputs deliberately crafted to deceive machine learning models. This paper investigates adversarial examples in three-dimensional (3D) space, specifically leveraging LIDAR data obtained from autonomous vehicles. Using occlusion attacks, we construct adversarial examples by strategically removing points from the input point cloud to mislead state-of-the-art object classification models, including PointNet and VoxNet. Our findings demonstrate that such attacks significantly degrade classification accuracy, posing a threat to the safety of autonomous driving systems. By bridging the gap between synthetic point clouds and real-world LIDAR data, this study underscores the importance of defending against adversarial attacks in the 3D deep learning domain, ultimately contributing to the enhancement of AV safety.
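
To make the occlusion-attack idea concrete, the sketch below shows one plausible way to remove points from a point cloud so as to hurt a classifier. It is a minimal illustration only: the TinyPointNet model, the gradient-saliency scoring rule, and all parameter names are assumptions introduced here, not the project's actual implementation or the real PointNet/VoxNet architectures.

    # Minimal sketch of a point-dropping occlusion attack in PyTorch.
    # Assumptions (not from the source): a toy PointNet-style classifier
    # and a gradient-based saliency heuristic for choosing points to drop.
    import torch
    import torch.nn as nn

    class TinyPointNet(nn.Module):
        """Toy stand-in for PointNet: per-point MLP followed by max pooling."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                     nn.Linear(64, 128), nn.ReLU())
            self.head = nn.Linear(128, num_classes)

        def forward(self, points):            # points: (N, 3)
            feats = self.mlp(points)          # (N, 128) per-point features
            pooled = feats.max(dim=0).values  # (128,) order-invariant pooling
            return self.head(pooled)          # (num_classes,) logits

    def occlusion_attack(model, points, label, drop_per_step=16, steps=8):
        """Iteratively delete the points that contribute most to the
        true-class logit, scored by the saliency grad(logit) . point."""
        pts = points.clone()
        for _ in range(steps):
            pts = pts.detach().requires_grad_(True)
            model(pts)[label].backward()
            saliency = (pts.grad * pts).sum(dim=1)  # per-point importance
            keep = saliency.argsort()[: pts.shape[0] - drop_per_step]
            pts = pts[keep]                   # occlude the most salient points
        return pts.detach()

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = TinyPointNet()
        cloud = torch.randn(1024, 3)  # stand-in for one LIDAR object's points
        label = 3                     # assumed true class of the object
        adv = occlusion_attack(model, cloud, label)
        print(f"{cloud.shape[0]} -> {adv.shape[0]} points, true-class logit "
              f"{model(cloud)[label].item():.3f} -> {model(adv)[label].item():.3f}")

Unlike perturbation-based attacks, an attack of this form only deletes points and never moves the ones that remain, which is why the abstract frames it as occlusion: a missing LIDAR return is physically plausible, making such attacks a realistic concern for deployed AV perception stacks.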