Adversarial examples in neural networks

Bibliographic Details
Main Author: Lim, Ruihong
Other Authors: Zhang Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175179
Institution: Nanyang Technological University
Description
Abstract: In recent years, rapid development in areas such as computer vision and natural language processing has gradually exposed deep learning technology to security risks. Adversarial examples are one such risk: inputs to machine learning models that are crafted deliberately to cause the models to make mistakes. These examples are produced through modifications to the input data that are imperceptible to the human eye, yet can significantly alter the model's output and result in abnormal predictions. Many research works focus on generating transferable adversarial examples and on designing defence methods to protect networks against them. This project explores various attack and defence techniques that are currently in place. Through the analysis of these attack and defence techniques, a defence method is proposed to aid in the defence against adversarial examples.
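
For illustration only (this is not the project's proposed method): a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard way to craft the kind of adversarial example the abstract describes, by perturbing each input pixel in the direction that increases the model's loss within a small budget epsilon. The toy model, the epsilon value, and the random input are assumptions for the sake of a runnable example.

```python
# Minimal FGSM sketch (illustrative, not the project's defence or attack method).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarial copy of x within an L-infinity budget of epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient: the direction that most increases the loss.
    perturbation = epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range [0, 1].
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Hypothetical toy classifier and random "image"; a real evaluation would use a
    # trained network and a dataset such as CIFAR-10.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)   # input image scaled to [0, 1]
    y = torch.tensor([3])          # assumed ground-truth label
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

The perturbation stays below epsilon per pixel, which is why such examples look unchanged to a human while the model's prediction can flip; defences studied in works like this project typically aim to remove this sensitivity, for example through adversarial training or input preprocessing.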