Secure text-based CAPTCHA system using adversarial examples

Bibliographic Details
Main Author: Kant Mannan
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/148112
Description
Abstract: Recent developments in the field of Deep Learning (DL) have made it much easier to solve complex artificial intelligence problems. While many fields have benefited from this development, it is not good news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), whose sole purpose is threatened by DL-based attacks: with sufficient training, such attacks can easily break through a CAPTCHA [1]. At the same time, despite the high capacity of Deep Neural Networks (DNNs), it has been observed that they can be misled by small adversarial perturbations, leading to misclassification [2][3]. We propose a user-friendly CAPTCHA generation method called Secure Adversarial CAPTCHAs (SAC) that makes CAPTCHAs stronger and more robust against the aforementioned attacks while remaining easily understandable by humans. In this project report, we explain how we exploit the vulnerability of DNN-based attacks to adversarial perturbations in order to build SAC. We start by synthesizing a random font with an adversarial background, producing an intermediate adversarial CAPTCHA. This intermediate result is then passed to a highly transferable adversarial attack, which further optimizes the CAPTCHA and makes it more secure and robust. Lastly, we performed rigorous testing of SAC, with experiments covering two popular DNN models, GoogLeNet and ResNet50. Our experiments show considerable promise regarding the usability and robustness of SAC against a variety of attacks and scenarios.
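
As a rough illustration of the second stage described in the abstract (perturbing an intermediate CAPTCHA with an adversarial attack crafted against a surrogate recognizer), the sketch below applies a single-step FGSM update against a pretrained ResNet50. It is not the report's actual SAC pipeline: the file names, the perturbation budget epsilon, and the use of FGSM in place of the report's highly transferable attack are all assumptions made for illustration.

# Illustrative sketch only: perturb an intermediate CAPTCHA image with a
# one-step FGSM attack against a pretrained ResNet50 surrogate model.
# File names and epsilon are assumptions, not values from the report.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Surrogate recognizer used to craft the perturbation (ImageNet weights).
model = models.resnet50(pretrained=True).to(device).eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

# "captcha.png" stands in for the intermediate adversarial CAPTCHA image.
image = preprocess(Image.open("captcha.png").convert("RGB")).unsqueeze(0).to(device)
image.requires_grad_(True)

# Use the model's own top prediction as the label to push away from.
logits = model(image)
label = logits.argmax(dim=1)

# FGSM: one signed-gradient step that increases the classification loss.
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

epsilon = 8.0 / 255.0  # assumed perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# The perturbed tensor would then be rendered back into the final CAPTCHA.
T.ToPILImage()(adversarial.squeeze(0).cpu()).save("captcha_adv.png")

In the report itself, the perturbation comes from a highly transferable attack and is evaluated against both GoogLeNet and ResNet50; a multi-step, momentum-based variant of the update above would be closer to that setting.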