Secure text based CAPTCHA system adversarial examples
Recent developments in Deep Learning (DL) have made it much easier to solve complex artificial intelligence problems. While many fields have benefited from this progress, it is bad news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), whose sole purpose is threatened by DL-based attacks: given significant training, such attacks can easily break a CAPTCHA [1]. At the same time, despite their high capacity, Deep Neural Networks (DNNs) can be misled into misclassification by small adversarial perturbations [2][3].

We propose a user-friendly CAPTCHA generation method, Secure Adversarial CAPTCHAs (SAC), that makes CAPTCHAs more robust against such attacks while keeping them easily readable by humans. This project report explains how we exploit the vulnerability of DNN-based attacks to adversarial perturbations in order to build SAC. We first synthesize a randomly chosen font with an adversarial background, producing an intermediate adversarial CAPTCHA. This intermediate result is then passed to a highly transferable adversarial attack, which further optimizes the CAPTCHA to make it more secure and robust. Finally, we test SAC rigorously in experiments covering two popular DNN models, GoogLeNet and ResNet50. The experiments show considerable promise for the usability and robustness of SAC across a variety of attacks and scenarios.
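
The full report details the actual SAC implementation; purely as an illustration of the two steps described above, here is a minimal, hypothetical Python sketch (PyTorch/torchvision >= 0.13 and Pillow): it renders the CAPTCHA text in a randomly chosen font over a noisy background, then applies a momentum-iterative (MI-FGSM-style) perturbation against a pretrained ResNet50 surrogate under a small L_inf budget so the characters stay legible. The font path, the noise background, the choice of attack, the function names, and all hyper-parameters are assumptions for this sketch, not the settings used in the project.

```python
# Hypothetical SAC-style sketch (not the project's actual code).
# Step 1: render the CAPTCHA text in a randomly chosen font over a noisy background.
# Step 2: apply a momentum-iterative (MI-FGSM-style) perturbation against a
#         pretrained ResNet50 surrogate, bounded in L_inf to keep the text legible.
import random

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image, ImageDraw, ImageFont

# ImageNet statistics expected by torchvision classifiers
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)


def normalize(x):
    """Normalise a batched image tensor in [0, 1] for the surrogate model."""
    return (x - MEAN) / STD


def render_captcha(text, font_paths, size=(224, 224)):
    """Draw `text` in a random font over a noisy grey background (step 1)."""
    img = Image.effect_noise(size, 48).convert("RGB")  # stand-in for the adversarial background
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(random.choice(font_paths), 72)
    draw.text((20, 70), text, fill=(0, 0, 0), font=font)
    return img


def transferable_perturb(img, model, label, eps=8 / 255, steps=10, mu=1.0):
    """Momentum-iterative perturbation pushing `img` away from `label` (step 2)."""
    x = T.ToTensor()(img).unsqueeze(0)
    x_adv, g = x.clone(), torch.zeros_like(x)
    alpha = eps / steps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(normalize(x_adv)), torch.tensor([label]))
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)  # accumulate normalised gradients
        x_adv = x_adv.detach() + alpha * g.sign()               # ascend the classification loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.squeeze(0)


if __name__ == "__main__":
    surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    clean = render_captcha("X7K4", ["DejaVuSans.ttf"])  # placeholder font file
    with torch.no_grad():
        label = surrogate(normalize(T.ToTensor()(clean).unsqueeze(0))).argmax(1).item()
    adv = transferable_perturb(clean, surrogate, label)
    T.ToPILImage()(adv).save("sac_captcha.png")
```

In practice, a perturbation crafted this way against one surrogate would also be checked against other classifiers such as GoogLeNet to gauge transferability, in the spirit of the experiments described in the abstract.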

| Main Author: | Kant Mannan |
|---|---|
| Other Authors: | Jun Zhao |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2021 |
| Subjects: | Engineering::Computer science and engineering |
| Online Access: | https://hdl.handle.net/10356/148112 |
| Institution: | Nanyang Technological University |

| id | sg-ntu-dr.10356-148112 |
|---|---|
| record_format | dspace |
| author2 | Jun Zhao, School of Computer Science and Engineering (junzhao@ntu.edu.sg) |
| degree | Bachelor of Engineering (Computer Science) |
| date deposited | 2021-04-23 |
| project code | SCSE20-0290 |
| file format | application/pdf |
| citation | Kant Mannan (2021). Secure text based CAPTCHA system adversarial examples. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148112 |
| institution | Nanyang Technological University |
| building | NTU Library |
| continent | Asia |
| country | Singapore |
| content_provider | NTU Library |
| collection | DR-NTU |