Adversarial example construction against autonomous vehicles (part 2)

Autonomous vehicles (AVs) represent a transformative technology with the potential to revolutionize transportation systems through their ability to operate without human intervention. Deep Neural Networks (DNNs) play a pivotal role in AV technology, enabling tasks such as object detection and scene understanding. However, recent research has highlighted vulnerabilities in DNNs, particularly their susceptibility to adversarial examples, inputs crafted to deceive machine learning models. This project investigates adversarial examples in three-dimensional (3D) space, specifically leveraging LIDAR data obtained from autonomous vehicles. Using occlusion attacks, adversarial examples are constructed by strategically removing points from the input data to misguide state-of-the-art object classification models, including PointNet and VoxNet. The findings demonstrate that such attacks significantly degrade the accuracy of classification models, posing a threat to the safety of autonomous driving systems. By bridging the gap between synthetic point clouds and real-world LIDAR data, the study underscores the importance of defending against adversarial attacks in the 3D deep learning domain, ultimately contributing to the enhancement of AV safety.
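The abstract describes occlusion attacks that mislead point-cloud classifiers by removing points from the LIDAR input. As a rough illustration of that idea only, and not the project's actual method, the sketch below implements a greedy point-dropping attack; the `predict_proba` callable and the parameter values are assumptions standing in for a trained PointNet or VoxNet scoring function.

```python
# Illustrative sketch only: a greedy point-dropping ("occlusion") attack on a
# generic point-cloud classifier. `predict_proba` is a hypothetical stand-in
# for a trained PointNet/VoxNet model; the project's actual attack pipeline
# is not reproduced here.
import numpy as np

def occlusion_attack(points, predict_proba, true_label, drop_per_step=16, max_steps=50):
    """Greedily delete the points whose removal most reduces confidence in
    `true_label`, stopping once the model no longer predicts that class.

    points        : (N, 3) array of LIDAR points for a single object
    predict_proba : callable mapping an (M, 3) array to a class-probability vector
    true_label    : integer index of the correct class
    """
    cloud = points.copy()
    for _ in range(max_steps):
        probs = predict_proba(cloud)
        if probs.argmax() != true_label:
            break  # the classifier has been misled
        # Score each point by the confidence drop caused by deleting it alone.
        scores = np.array([
            probs[true_label] - predict_proba(np.delete(cloud, i, axis=0))[true_label]
            for i in range(len(cloud))
        ])
        # Keep everything except the most influential points, mimicking an
        # occluded region of the object.
        keep = np.sort(np.argsort(scores)[:-drop_per_step])
        cloud = cloud[keep]
        if len(cloud) == 0:
            break
    return cloud
```

The report itself evaluates point-removal attacks against PointNet and VoxNet on both synthetic point clouds and real-world LIDAR data; the greedy scoring loop above is only meant to convey the general shape of such an attack.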

Bibliographic Details
Main Author: Malavade, Sanskar Deepak
Other Authors: Tan Rui (supervisor, tanrui@ntu.edu.sg), School of Computer Science and Engineering
Format: Final Year Project (FYP), Bachelor's degree
Language: English
Published: Nanyang Technological University, 2024
Project Code: SCSE23-0025
Subjects: Computer and Information Science; Adversarial examples; Autonomous vehicles
Online Access:https://hdl.handle.net/10356/175343
Citation: Malavade, S. D. (2024). Adversarial example construction against autonomous vehicles (part 2). Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175343
Institution: Nanyang Technological University