Protecting neural networks from adversarial attacks
This project investigates how Searchable Symmetric Encryption (SSE) can be applied to neural networks as a form of protection against adversarial attacks, and whether such an implementation is viable.
Saved in:
Main Author: | Yeow, Zhong Han |
---|---|
Other Authors: | Anupam Chattopadhyay (School of Computer Science and Engineering) |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2024 |
Subjects: | Computer and Information Science |
Online Access: | https://hdl.handle.net/10356/175267 |
Institution: | Nanyang Technological University |
Degree: | Bachelor's degree |
Citation: | Yeow, Z. H. (2024). Protecting neural networks from adversarial attacks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175267 |
Description:
This project investigates how Searchable Symmetric Encryption (SSE) can be applied to neural networks as a form of protection against adversarial attacks, and whether such an implementation is viable. The SSE implementation is written in Python using single-keyword static SSE schemes and is applied to a neural network built with PyTorch.
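For concreteness, the sketch below shows one way a single-keyword static SSE index of this kind can be structured in Python. It is an illustrative assumption, not the project's actual code: the class name and key handling are invented, search tokens are derived with HMAC-SHA256, and the stored values (here, precomputed model outputs) are encrypted with Fernet from the third-party cryptography package.

```python
# Minimal single-keyword static SSE index (illustrative sketch, hypothetical names).
import hmac
import hashlib
from collections import defaultdict

from cryptography.fernet import Fernet  # third-party: pip install cryptography


class StaticSSEIndex:
    """Encrypted inverted index supporting lookups by a single keyword."""

    def __init__(self, token_key: bytes, enc_key: bytes):
        self._token_key = token_key      # key for deriving deterministic search tokens
        self._fernet = Fernet(enc_key)   # key for encrypting the stored values
        self._index = defaultdict(list)  # token -> list of ciphertexts

    def _token(self, keyword: str) -> bytes:
        # Deterministic token: the stored index never contains the plaintext keyword.
        return hmac.new(self._token_key, keyword.encode(), hashlib.sha256).digest()

    def add(self, keyword: str, value: bytes) -> None:
        # Build phase of a *static* scheme: populate once, then only query.
        self._index[self._token(keyword)].append(self._fernet.encrypt(value))

    def search(self, keyword: str) -> list:
        # Query phase: look up by token and decrypt the matching ciphertexts.
        return [self._fernet.decrypt(ct) for ct in self._index.get(self._token(keyword), [])]


if __name__ == "__main__":
    index = StaticSSEIndex(token_key=b"demo-token-key", enc_key=Fernet.generate_key())
    # Hypothetical use: store a precomputed model prediction under an input keyword.
    index.add("cat", b"predicted class 3 (confidence 0.97)")
    print(index.search("cat"))
```

Because the scheme is static, the index is built once over a controlled set of inputs and afterwards only queried, which matches the finding below that SSE suits settings where the inputs can be controlled.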
Metrics such as the time taken for a search and the storage space required for the database files are measured to assess viability. Results show that SSE is practical in settings where the inputs can be controlled and where the resulting database storage size is an acceptable cost; the security benefits of the SSE implementation must also outweigh this cost for it to see realistic use.
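The report does not specify how these measurements were taken; a simple way to collect both metrics, assuming the index object from the sketch above and a hypothetical database path, is:

```python
# Sketch of collecting the two viability metrics; "sse_index.db" and the
# index object are hypothetical placeholders, not the project's setup.
import os
import pickle
import time


def measure(index, keyword: str, db_path: str = "sse_index.db") -> None:
    # Storage cost: persist the encrypted index and measure the resulting file.
    with open(db_path, "wb") as f:
        pickle.dump(dict(index._index), f)
    size_kib = os.path.getsize(db_path) / 1024

    # Search cost: wall-clock time of a single keyword lookup.
    start = time.perf_counter()
    index.search(keyword)
    elapsed_ms = (time.perf_counter() - start) * 1000

    print(f"database size: {size_kib:.1f} KiB, search time: {elapsed_ms:.3f} ms")
```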
Other methods, such as homomorphic encryption, can be applied to larger datasets and more complex models, and may even allow models to be trained while fully encrypted, protecting against a wider range of adversarial attacks.
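As a contrast to SSE, the toy example below (not from the project) shows the kind of computation homomorphic encryption enables, using the additively homomorphic Paillier scheme from the third-party phe package: a linear layer is evaluated on encrypted inputs with plaintext weights. Training or running a full model under encryption would need a fully (or leveled) homomorphic scheme such as CKKS rather than Paillier.

```python
# Toy homomorphic evaluation of one linear layer on encrypted inputs,
# using the additively homomorphic Paillier scheme (third-party "phe" package).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Client side: encrypt the input vector; the server never sees the plaintext.
x = [0.5, -1.2, 3.0]
enc_x = [public_key.encrypt(v) for v in x]

# Server side: plaintext weights, encrypted inputs. Paillier supports
# ciphertext addition and multiplication by plaintext scalars, which is
# exactly what a dot product plus bias needs.
w = [0.1, 0.4, -0.2]
b = 0.3
enc_y = sum(wi * xi for wi, xi in zip(w, enc_x)) + b

# Client side: only the key holder can decrypt the result.
print(private_key.decrypt(enc_y))  # ~= -0.73 (= 0.05 - 0.48 - 0.60 + 0.30)
```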