Protecting neural networks from adversarial attacks

Bibliographic Details
Main Author: Yeow, Zhong Han
Other Authors: Anupam Chattopadhyay
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175267
Institution: Nanyang Technological University
Description
Summary: This project investigates how Searchable Symmetric Encryption (SSE) can be applied to neural networks as a form of protection against adversarial attacks, and evaluates the viability of such an implementation. The SSE implementation is written in Python, using single-keyword static SSE schemes applied to a neural network built with PyTorch. Metrics such as search time and the storage space required for the database files are measured to assess viability. The results show that SSE is practical in settings where the inputs can be controlled and the resulting database storage size is an acceptable cost; the security benefits of the SSE implementation must also outweigh that cost for it to see realistic use. Other methods, such as homomorphic encryption, can be applied to larger datasets and more complex models, and may even allow models to be trained while fully encrypted, protecting against a wider range of adversarial attacks.
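
The summary describes the SSE layer only at a high level (a single-keyword static scheme implemented in Python). The sketch below is a minimal, hypothetical illustration of that general idea, an encrypted inverted index searched with a keyword-derived token, and is not the project's actual code: the HMAC-SHA256 token derivation, the Fernet encryption of record identifiers, and the toy keyword-to-record mapping are all assumptions made for the example.

# Minimal sketch of a single-keyword static SSE index (illustrative only).
# Assumptions: HMAC-SHA256 as the PRF for search tokens, Fernet (from the
# "cryptography" package) for encrypting record identifiers, and a toy
# keyword -> record-id mapping standing in for the model-related data.
import hmac
import hashlib
from cryptography.fernet import Fernet


def build_index(prf_key: bytes, enc: Fernet, records: dict) -> dict:
    """Build a static encrypted index: PRF(keyword) -> list of encrypted record ids."""
    index = {}
    for keyword, record_ids in records.items():
        token = hmac.new(prf_key, keyword.encode(), hashlib.sha256).hexdigest()
        index[token] = [enc.encrypt(rid.encode()) for rid in record_ids]
    return index


def search(prf_key: bytes, enc: Fernet, index: dict, keyword: str) -> list:
    """Single-keyword search: recompute the token, then decrypt any matching entries."""
    token = hmac.new(prf_key, keyword.encode(), hashlib.sha256).hexdigest()
    return [enc.decrypt(ct).decode() for ct in index.get(token, [])]


if __name__ == "__main__":
    prf_key = b"demo-prf-key-not-for-real-use"      # toy key material
    enc = Fernet(Fernet.generate_key())

    # Toy mapping from controlled input keywords to stored record identifiers.
    records = {"cat": ["weights_block_3", "label_cat"], "dog": ["weights_block_7"]}
    index = build_index(prf_key, enc, records)

    print(search(prf_key, enc, index, "cat"))   # ['weights_block_3', 'label_cat']
    print(search(prf_key, enc, index, "bird"))  # []

In a scheme of this shape, the metrics discussed in the summary correspond to the token lookup and decryption time (search time) and the size of the encrypted index on disk (database storage cost).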