Developing AI attacks/defenses

Deep Neural Networks (DNNs) are a fundamental pillar of Artificial Intelligence (AI) and Machine Learning (ML) and have played a pivotal role in advancing these fields. They are computational models inspired by the human brain, designed to process information and make decisions in a way that resembles human thinking. This has led to their remarkable success in applications ranging from image and speech recognition to natural language processing and autonomous systems. Alongside these capabilities, DNNs have also revealed vulnerabilities; in particular, adversarial attacks have proven catastrophic against DNNs and have received broad attention in recent years, raising concerns over the robustness and security of DNNs. This project conducts a comprehensive study of DNNs and adversarial attacks, and implements specific techniques within DNNs aimed at bolstering their robustness.
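The abstract refers to adversarial attacks and robustness-enhancing techniques without naming the specific methods the project implements. As a hedged illustration only, the sketch below shows one canonical attack, the Fast Gradient Sign Method (FGSM), in PyTorch; the function name, the epsilon value, and the assumption that inputs are scaled to [0, 1] are illustrative choices, not details drawn from the project.

```python
# Illustrative sketch, not the project's actual implementation: FGSM is shown
# only as a canonical example of an adversarial attack on a DNN classifier.
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb a batch x (inputs in [0, 1]) so the model misclassifies labels y.

    epsilon is the maximum L-infinity perturbation size (assumed value).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # loss the attacker wants to increase
    loss.backward()                              # gradient of the loss w.r.t. the input
    x_adv = x_adv + epsilon * x_adv.grad.sign()  # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()        # keep the result a valid input
```

A defense commonly paired with such attacks is adversarial training, which mixes perturbed examples like these back into the training loss; whether that is among the robustness techniques implemented in this project is not stated in the record.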

Bibliographic Details
Main Author: Lim, Noel Wee Tat
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/172002
Institution: Nanyang Technological University
Record Details
Record ID: sg-ntu-dr.10356-172002
Record format: dspace
Record timestamp: 2023-11-24T15:37:52Z
School: School of Computer Science and Engineering
Supervisor contact: junzhao@ntu.edu.sg
Degree: Bachelor of Engineering (Computer Science)
Project code: SCSE22-0834
Deposited: 2023-11-20T06:16:59Z
Citation: Lim, N. W. T. (2023). Developing AI attacks/defenses. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/172002
File format: application/pdf
Collection: DR-NTU (NTU Library, Singapore)