Developing AI attacks/defenses


A comprehensive introduction

Bibliographic Details
Main Author: Lim, Noel Wee Tat
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/172002
Description
Summary: Deep Neural Networks (DNNs) are a fundamental pillar of Artificial Intelligence (AI) and Machine Learning (ML) and have played a pivotal role in advancing both fields. They are computational models inspired by the human brain, designed to process information and make decisions in a way that resembles human thinking. This has led to their remarkable success in applications ranging from image and speech recognition to natural language processing and autonomous systems. Alongside these capabilities, DNNs have also revealed vulnerabilities, notably adversarial attacks, which have proven catastrophic against DNNs and have received broad attention in recent years. This raises concerns over the robustness and security of DNNs. This project conducts a comprehensive study of DNNs and adversarial attacks, and implements specific techniques within DNNs aimed at bolstering their robustness.
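The record does not specify which adversarial attacks the project implements, so as an illustration only, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the standard attacks studied in this area. A single logistic unit stands in for a trained network; the weights, bias, input, and attack budget are all made-up values.

```python
import numpy as np

# Illustrative FGSM sketch: perturb an input in the direction of the
# sign of the loss gradient to reduce the model's confidence.
# The "model" here is one logistic unit y = sigmoid(w.x + b);
# w, b, x, and eps are hypothetical, not taken from the project.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(w, b, x, label):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    return (p - label) * w  # d(BCE)/dx for a logistic unit

w = np.array([2.0, -3.0, 1.0])   # hypothetical trained weights
b = 0.5
x = np.array([0.4, 0.1, 0.7])    # a clean input with true label 1
label = 1.0

eps = 0.25                        # L-infinity attack budget
grad = loss_grad_wrt_input(w, b, x, label)
x_adv = x + eps * np.sign(grad)  # FGSM step: follow the gradient sign

print(sigmoid(w @ x + b))        # confidence on the clean input
print(sigmoid(w @ x_adv + b))    # lower confidence on the adversarial input
```

Robustness techniques such as adversarial training work against exactly this kind of perturbation: the perturbed inputs are fed back into training so the model learns to classify them correctly.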