Universal adversarial network attacks on traffic light recognition of Apollo autonomous driving system

Bibliographic Details
Main Author: Chia, Yi You
Other Authors: Tan Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/156782
Description
Summary: Autonomous vehicles are becoming increasingly important and relevant in today's world. Their applications can be found everywhere, from public transport that overcomes land and workforce constraints, to personal use for convenience, to business use in freight transportation and the utility services sector. This makes the safety of autonomous vehicles critically important. Autonomous vehicles rely on an Autonomous Driving System (ADS), which passes inputs from multiple camera sensors into machine learning models whose outputs directly control the vehicle's movements. This paper focuses on the safety of these machine learning models. A black-box Universal Adversarial Network (UAN) is first trained to create a universal perturbation, which is then used to attack the machine learning model that recognises traffic light signals, causing it to output a wrong traffic signal. Multiple variations of the UAN are produced to study their effect on the accuracy of these machine learning models. The vulnerability is also studied in a realistic environment using the Baidu Apollo ADS and the LGSVL simulator. Lastly, basic defences for Apollo ADS are explored.
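
To make the attack setting concrete, below is a minimal sketch of how a universal adversarial perturbation generator of this kind might be set up in PyTorch. The generator architecture, perturbation bound, and training loop are illustrative assumptions and not the project's actual implementation; for brevity the sketch backpropagates through a locally available surrogate classifier, whereas a genuine black-box attack would instead rely on a substitute model or gradient estimation.

# Minimal sketch of a Universal Adversarial Network (UAN) in PyTorch.
# All names, sizes, and hyperparameters below are illustrative assumptions,
# not details taken from the project itself.
import torch
import torch.nn as nn

class UAN(nn.Module):
    """Maps a fixed noise vector to a single universal perturbation."""
    def __init__(self, image_size=(3, 64, 64), noise_dim=100, epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon
        self.image_size = image_size
        out_dim = image_size[0] * image_size[1] * image_size[2]
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        # Bound the perturbation to an L-infinity ball of radius epsilon.
        delta = self.net(z).view(-1, *self.image_size)
        return self.epsilon * delta


def train_uan(classifier, loader, noise_dim=100, steps=1000, lr=1e-3, device="cpu"):
    """Train the UAN so one perturbation misleads the classifier on all inputs.

    `classifier` is assumed to be a surrogate traffic light classifier; in a
    black-box attack, gradients through the real target are unavailable.
    """
    uan = UAN(noise_dim=noise_dim).to(device)
    z = torch.randn(1, noise_dim, device=device)    # fixed seed vector
    opt = torch.optim.Adam(uan.parameters(), lr=lr)
    classifier.eval()

    # Iterate over at most `steps` batches (a single pass over the loader).
    for step, (images, labels) in zip(range(steps), loader):
        images, labels = images.to(device), labels.to(device)
        delta = uan(z)                               # one universal perturbation
        adv = torch.clamp(images + delta, 0.0, 1.0)  # keep pixels in valid range
        logits = classifier(adv)
        # Untargeted attack: maximise the loss on the true traffic light class.
        loss = -nn.functional.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return uan, z

Because the perturbation is generated from a fixed noise vector rather than recomputed per image, the same output of train_uan can be added to any camera frame, which is what makes the attack "universal" in this setting.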