Developing AI attacks/defenses

Bibliographic Details
Main Author: Lim, Noel Wee Tat
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access:https://hdl.handle.net/10356/172002
Institution: Nanyang Technological University
Description
Summary: Deep Neural Networks (DNNs) are a fundamental pillar of Artificial Intelligence (AI) and Machine Learning (ML) and play a pivotal role in advancing these fields. They are computational models inspired by the human brain, designed to process information and make decisions in a way that resembles human thinking. This has led to their remarkable success in applications ranging from image and speech recognition to natural language processing and autonomous systems. Alongside these capabilities, however, DNNs have also revealed vulnerabilities. Among them are adversarial attacks, which have proven catastrophic against DNNs and have received broad attention in recent years, raising concerns over the robustness and security of DNNs. This project conducts a comprehensive study of DNNs and adversarial attacks, and implements specific techniques within DNNs aimed at bolstering their robustness.
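
Illustration: the abstract does not specify which adversarial attacks the project implements, but the Fast Gradient Sign Method (FGSM) is a standard example of such an attack. The sketch below, written in PyTorch, is an illustrative assumption rather than the project's own code; the model, loss, and epsilon value are placeholders.

import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, epsilon=0.03):
    # Craft adversarial examples with the Fast Gradient Sign Method (FGSM):
    # perturb each input in the direction of the sign of the loss gradient,
    # scaled by epsilon, so that a correctly classified input is nudged
    # toward misclassification while remaining visually similar.
    images = images.clone().detach().requires_grad_(True)
    outputs = model(images)
    loss = nn.CrossEntropyLoss()(outputs, labels)

    model.zero_grad()
    loss.backward()

    # Single step of size epsilon in the sign of the input gradient.
    adv_images = images + epsilon * images.grad.sign()
    # Keep pixel values within the valid [0, 1] range.
    adv_images = torch.clamp(adv_images, 0.0, 1.0)
    return adv_images.detach()

# Example usage (assumes `model`, `images`, and `labels` are already defined):
#   adv = fgsm_attack(model, images, labels, epsilon=0.03)
#   success_rate = (model(adv).argmax(1) != labels).float().mean()

Defenses of the kind the abstract alludes to, such as adversarial training, typically reuse an attack like this during training by mixing adversarial examples into each batch.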