AcceleNetor: FPGA-accelerated neural network implementation for side-channel analysis

Bibliographic Details
Main Author: Wang, Di
Other Authors: Chang Chip Hong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Online Access:https://hdl.handle.net/10356/166976
Institution: Nanyang Technological University
Description
Summary: Data-intensive machine learning applications require significant computing power, which general-purpose microprocessors cannot handle efficiently. Field Programmable Gate Arrays (FPGAs) offer a solution by allowing the creation of application-specific circuits that can accelerate these tasks with high throughput and low latency. This project aims to explore existing open-source research on FPGA accelerators and to optimize the resulting designs using a more effective computation model. The accelerator will also be implemented so that it can defend against selected side-channel attacks, and the resulting design will be deployed on the DE10-Standard FPGA board. The report will detail the methodologies used, the challenges faced, and the evolution of the final product from ideation to maturity.
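
For readers unfamiliar with the workload being accelerated, the sketch below illustrates, in plain C, the kind of fixed-point multiply-accumulate (MAC) kernel that FPGA neural-network accelerators conventionally implement in hardware. It is a minimal illustration only: the 8-bit operand widths, the 32-bit accumulator, and the function name fixed_point_neuron are assumptions made for this example and are not taken from the project report.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One output neuron: dot product of quantized inputs and weights plus a bias.
 * On an FPGA this loop is typically unrolled into parallel DSP-block MACs. */
static int32_t fixed_point_neuron(const int8_t *inputs, const int8_t *weights,
                                  int32_t bias, size_t len)
{
    int32_t acc = bias;
    for (size_t i = 0; i < len; i++) {
        acc += (int32_t)inputs[i] * (int32_t)weights[i];
    }
    return acc;
}

int main(void)
{
    const int8_t inputs[4]  = { 10, -3, 7, 2 };
    const int8_t weights[4] = { 1, 4, -2, 5 };

    /* Expected output: 16 + 10*1 + (-3)*4 + 7*(-2) + 2*5 = 10 */
    printf("accumulated output: %d\n",
           fixed_point_neuron(inputs, weights, 16, 4));
    return 0;
}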