Cryptography techniques to defend neural networks from adversarial attacks

As the field of artificial intelligence continues to advance, the security and robustness of neural networks against adversarial attacks have become a critical area of concern. This academic research report examines current defence mechanisms and proposes cryptographic strategies to secure neural networks against such threats. We delve into the mechanics of adversarial attacks, highlighting how vulnerabilities in neural networks can be exploited, and offer recommendations for preserving model accuracy and quality. We survey cryptographic techniques commonly used for secure data transport and evaluate their suitability as defences against malicious attacks. The primary goal is to determine whether incorporating encryption techniques can increase a neural network model's resistance to adversarial manipulation. Through this research, we aim to provide a deeper understanding of the threats posed by adversarial attacks, underline the necessity of broader security standards, and support ongoing efforts to strengthen neural networks. The findings of this research should contribute to the creation of more secure and resilient neural networks, thereby fostering greater trust in, and stability of, artificial intelligence applications.
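The kind of vulnerability the report studies can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), the canonical adversarial attack, applied to a toy logistic model. The weights, gradient formula, and perturbation budget here are illustrative assumptions for a two-feature classifier, not details taken from the report itself:

```python
import math

# Toy logistic-regression "network": p = sigmoid(w.x + b)
w = [2.0, -1.5]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y_true, eps=0.25):
    """FGSM: nudge each feature by eps in the sign of the loss gradient.

    For sigmoid output with cross-entropy loss, d(loss)/d(x_i) = (p - y) * w_i.
    """
    p = predict(x)
    grad = [(p - y_true) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

x = [1.0, 1.0]            # clean input with true label 1
p_clean = predict(x)       # model is confident in class 1
x_adv = fgsm(x, 1.0)       # imperceptibly small, worst-case perturbation
p_adv = predict(x_adv)     # confidence collapses; the prediction flips
```

Even this tiny model is flipped by a bounded perturbation chosen along the loss gradient, which is the failure mode that input-side defenses (including the encryption-based preprocessing the report investigates) aim to mitigate.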

Full description

Saved in:
Bibliographic Details
Main Author: Tan, Hong Meng
Other Authors: Anupam Chattopadhyay
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science; Neural network; Cryptography; Machine learning; Encryption
Online Access: https://hdl.handle.net/10356/175454
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-175454
record_format dspace
spelling sg-ntu-dr.10356-175454 2024-04-26T15:45:27Z
Cryptography techniques to defend neural networks from adversarial attacks
Tan, Hong Meng
Anupam Chattopadhyay
School of Computer Science and Engineering
anupam@ntu.edu.sg
Computer and Information Science; Neural network; Cryptography; Machine learning; Encryption
As the field of artificial intelligence continues to advance, the security and robustness of neural networks against adversarial attacks have become a critical area of concern. This academic research report examines current defence mechanisms and proposes cryptographic strategies to secure neural networks against such threats. We delve into the mechanics of adversarial attacks, highlighting how vulnerabilities in neural networks can be exploited, and offer recommendations for preserving model accuracy and quality. We survey cryptographic techniques commonly used for secure data transport and evaluate their suitability as defences against malicious attacks. The primary goal is to determine whether incorporating encryption techniques can increase a neural network model's resistance to adversarial manipulation. Through this research, we aim to provide a deeper understanding of the threats posed by adversarial attacks, underline the necessity of broader security standards, and support ongoing efforts to strengthen neural networks. The findings of this research should contribute to the creation of more secure and resilient neural networks, thereby fostering greater trust in, and stability of, artificial intelligence applications.
Bachelor's degree
2024-04-24T05:30:13Z 2024-04-24T05:30:13Z 2024
Final Year Project (FYP)
Tan, H. M. (2024). Cryptography techniques to defend neural networks from adversarial attacks. Final Year Project (FYP), Nanyang Technological University, Singapore.
https://hdl.handle.net/10356/175454
en SCSE23-0248 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Neural network
Cryptography
Machine learning
Encryption
description As the field of artificial intelligence continues to advance, the security and robustness of neural networks against adversarial attacks have become a critical area of concern. This academic research report examines current defence mechanisms and proposes cryptographic strategies to secure neural networks against such threats. We delve into the mechanics of adversarial attacks, highlighting how vulnerabilities in neural networks can be exploited, and offer recommendations for preserving model accuracy and quality. We survey cryptographic techniques commonly used for secure data transport and evaluate their suitability as defences against malicious attacks. The primary goal is to determine whether incorporating encryption techniques can increase a neural network model's resistance to adversarial manipulation. Through this research, we aim to provide a deeper understanding of the threats posed by adversarial attacks, underline the necessity of broader security standards, and support ongoing efforts to strengthen neural networks. The findings of this research should contribute to the creation of more secure and resilient neural networks, thereby fostering greater trust in, and stability of, artificial intelligence applications.
author2 Anupam Chattopadhyay
author_facet Anupam Chattopadhyay
Tan, Hong Meng
format Final Year Project
author Tan, Hong Meng
author_sort Tan, Hong Meng
title Cryptography techniques to defend neural networks from adversarial attacks
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/175454