Adversarial example construction against autonomous vehicle (part 2)


Bibliographic Details
Main Author: Cheong, Benjamin Yii Leung
Other Authors: Tan Rui
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/172414
Institution: Nanyang Technological University
Description
Abstract: Autonomous vehicles have become increasingly popular due to their many potential benefits. They utilise many sensors, such as LiDAR, cameras and radar, combined with multiple machine learning models to sense, interpret and navigate the surrounding environment without human input. However, the use of machine learning models for decision-making comes with associated risks. One such risk is machine learning models' vulnerability to adversarial attacks, which can have serious safety consequences. This project explores the Fast Gradient Sign Method (FGSM) adversarial attack against the Apollo Autonomous Driving System's traffic light recognition model, examining its effectiveness and impact. We found that the FGSM attack successfully causes misclassification at epsilon values of 0.15 to 0.25. Other key findings are that variations in time of day, rain and fog conditions generally do not affect the attack's ability to cause misclassification, and that the existing safeguards used by Apollo are generally ineffective. We conclude that the effectiveness of adversarial attacks poses a potentially critical safety issue, and that additional research into the relevant defences and countermeasures is necessary.
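
For context, the FGSM attack named in the abstract perturbs an input image by a small step of size epsilon in the direction of the sign of the loss gradient with respect to that input. The sketch below is purely illustrative and is not the project's attack code: the placeholder classifier, input shape, class labels and use of PyTorch are all assumptions, since Apollo's actual traffic light recognition model and preprocessing are not reproduced here.

import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon):
    """FGSM: x_adv = x + epsilon * sign(grad_x loss), clamped to a valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder classifier standing in for a traffic light recognition model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))
x = torch.rand(1, 3, 32, 32)    # hypothetical traffic light crop, pixels in [0, 1]
label = torch.tensor([1])       # hypothetical ground-truth class index
for eps in (0.15, 0.20, 0.25):  # epsilon range reported as effective in the abstract
    x_adv = fgsm_attack(model, x, label, eps)
    print(f"epsilon={eps}: predicted class {model(x_adv).argmax(dim=1).item()}")

In this setup, larger epsilon values make misclassification more likely at the cost of a more visible perturbation, which is the trade-off the reported 0.15 to 0.25 range reflects.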