Cryptography techniques to defend neural networks from adversarial attacks

Bibliographic Details
Main Author: Tan, Hong Meng
Other Authors: Anupam Chattopadhyay
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175454
Description
Summary: As the field of artificial intelligence continues to advance, the security and robustness of neural networks against adversarial attacks have become a critical area of concern. This research report examines current defense mechanisms and proposes cryptographic strategies to secure neural networks against such threats. We delve into the complexities of adversarial attacks, emphasizing how vulnerabilities in neural networks can be exploited and the consequences for model accuracy and consistent quality. We examine various cryptographic techniques commonly used for secure data transmission, evaluating their suitability as a defense against malicious attacks. The primary goal is to determine whether incorporating encryption techniques can increase neural network models' resistance to adversarial manipulation. Through this research, we aim to provide a deeper understanding of the threats posed by adversarial attacks, underline the necessity of broader security standards, and support ongoing efforts to strengthen neural networks. The findings from this research are intended to contribute to the creation of more secure and resilient neural networks, thereby fostering increased trust in and stability of applications of artificial intelligence.