Adversarial example construction against autonomous vehicles

Bibliographic Details
Main Author: Loh, Zhi Heng
Other Authors: Tan Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/171944
Institution: Nanyang Technological University
Summary: With autonomous vehicles (AVs) approaching widespread adoption, their safety must not be neglected. Although touted as being free from the errors commonly made by human drivers, AVs are not immune to malicious attacks. AVs generally rely on a variety of machine-learning models and sensors to perceive their environment, and past research has shown that such models can be susceptible to adversarial attacks. In this paper, Daedalus, an attack algorithm that exploits a vulnerability in Non-Maximum Suppression (NMS), is used to generate adversarial examples through a surrogate model; the perturbations on the images are nearly imperceptible. The generated images are then evaluated against SMOKE (Single-Stage Monocular 3D Object Detection via Keypoint Estimation) [1], the model used for camera-based object detection in Baidu Apollo's autonomous driving system. In addition, potential mitigations against Daedalus are examined.
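
To make the attack surface described above concrete, below is a minimal PyTorch sketch of a Daedalus-style optimization loop. It assumes a differentiable surrogate detector that returns candidate boxes in (x, y, w, h) form together with confidence scores; the function names, loss form, and hyperparameters here are illustrative simplifications, not the implementation evaluated in this project. The idea is that the loss pushes every candidate's confidence up while shrinking box areas, so pairwise IoU falls below the NMS threshold and redundant detections flood through unsuppressed, while an L2 distortion term keeps the perturbation nearly imperceptible.

    import torch

    def daedalus_loss(boxes, scores):
        # boxes:  (N, 4) candidate boxes as (x, y, w, h) from the surrogate
        # scores: (N,)   objectness/confidence scores
        # Term 1: push every candidate's confidence toward 1 so no box is
        # discarded by the detector's confidence threshold.
        conf_term = torch.mean((scores - 1.0) ** 2)
        # Term 2: shrink box width/height; tiny boxes rarely overlap, so
        # pairwise IoU drops below the NMS threshold and nothing is suppressed.
        area_term = torch.mean((boxes[:, 2] * boxes[:, 3]) ** 2)
        return conf_term + area_term

    def daedalus_attack(surrogate, image, steps=200, lr=0.01, c=1.0):
        # Optimize an additive perturbation `delta` over the input image.
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            adv = torch.clamp(image + delta, 0.0, 1.0)
            boxes, scores = surrogate(adv)
            # L2 distortion term keeps the perturbation nearly imperceptible.
            distortion = torch.sum(delta ** 2)
            loss = distortion + c * daedalus_loss(boxes, scores)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.clamp(image + delta, 0.0, 1.0).detach()

Because the perturbation is optimized against a surrogate rather than the deployed detector, the attack relies on transferability, which is why the project evaluates the generated images against SMOKE rather than the surrogate itself.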