Optimization of neural networks through high level synthesis

With the increasing popularity of machine learning, coupled with growing computing power, the field of machine learning algorithms has become a dynamic and fast-moving one. The effectiveness of such applications has led to concerted efforts to embed them into other systems. However, a drawback of machine learning algorithms is their enormous computational and space complexity, requiring large amounts of power and/or physical space to run. In embedded systems, these issues pose a problem, as size and performance are key constraints. However, optimizing such solutions requires engineering at the Register Transfer Level (RTL), which is time-consuming and error-prone. In such implementations, it may be preferable to accept a solution that does the job well enough, instead of one that is optimized down to the last bit through RTL design. In this report, we implement a small-scale machine learning model, a Convolutional Neural Network (CNN) trained offline in Python, on a Field-Programmable Gate Array (FPGA), the Zedboard. This report explores combinations of compiler directives, or pragmas, which are interpreted by the High-Level Synthesis (HLS) compiler. Through these directives, the designer can influence how the solution is implemented and can improve its space and computational complexity.
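As an illustration of the kind of HLS compiler directives the report explores, here is a hypothetical sketch of a small convolution loop in Vivado-HLS-style C. The function name, array sizes, and pragma placements are assumptions for illustration only, not code from the project:

```c
#include <assert.h>

#define N 6 /* input feature-map size (illustrative) */
#define K 3 /* kernel size (illustrative) */

/* Minimal 2-D convolution in HLS-style C. The pragmas below are examples
 * of directives an HLS compiler interprets: PIPELINE overlaps iterations
 * of the output loops, UNROLL replicates the multiply-accumulate logic,
 * and ARRAY_PARTITION splits the weight array into registers so the
 * unrolled loop can read all weights in parallel. A standard C compiler
 * simply ignores these pragmas, so the function remains testable on a CPU. */
void conv2d(const int in[N][N], const int w[K][K],
            int out[N - K + 1][N - K + 1])
{
#pragma HLS ARRAY_PARTITION variable=w complete dim=2
    for (int r = 0; r < N - K + 1; r++) {
        for (int c = 0; c < N - K + 1; c++) {
#pragma HLS PIPELINE II=1
            int acc = 0;
            for (int i = 0; i < K; i++) {
                for (int j = 0; j < K; j++) {
#pragma HLS UNROLL
                    acc += in[r + i][c + j] * w[i][j];
                }
            }
            out[r][c] = acc;
        }
    }
}
```

Varying which loops carry PIPELINE or UNROLL directives trades circuit area against throughput, which is the kind of design-space exploration the report describes.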

Bibliographic Details
Main Author: Liem, Jonathan Zhuan Kim
Other Authors: Smitha Kavallur Pisharath Gopi
Format: Final Year Project
Language: English
Published: 2018
Subjects:
Online Access:http://hdl.handle.net/10356/76135
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-76135
record_format dspace
spelling sg-ntu-dr.10356-761352023-03-03T20:41:15Z Optimization of neural networks through high level synthesis Liem, Jonathan Zhuan Kim Smitha Kavallur Pisharath Gopi School of Computer Science and Engineering DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence With the increasing popularity of machine learning, coupled with growing computing power, the field of machine learning algorithms has become a dynamic and fast-moving one. The effectiveness of such applications has led to concerted efforts to embed them into other systems. However, a drawback of machine learning algorithms is their enormous computational and space complexity, requiring large amounts of power and/or physical space to run. In embedded systems, these issues pose a problem, as size and performance are key constraints. However, optimizing such solutions requires engineering at the Register Transfer Level (RTL), which is time-consuming and error-prone. In such implementations, it may be preferable to accept a solution that does the job well enough, instead of one that is optimized down to the last bit through RTL design. In this report, we implement a small-scale machine learning model, a Convolutional Neural Network (CNN) trained offline in Python, on a Field-Programmable Gate Array (FPGA), the Zedboard. This report explores combinations of compiler directives, or pragmas, which are interpreted by the High-Level Synthesis (HLS) compiler. Through these directives, the designer can influence how the solution is implemented and can improve its space and computational complexity. Bachelor of Engineering (Computer Engineering) 2018-11-19T08:49:40Z 2018-11-19T08:49:40Z 2018 Final Year Project (FYP) http://hdl.handle.net/10356/76135 en Nanyang Technological University 58 p. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
spellingShingle DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Liem, Jonathan Zhuan Kim
Optimization of neural networks through high level synthesis
description With the increasing popularity of machine learning, coupled with growing computing power, the field of machine learning algorithms has become a dynamic and fast-moving one. The effectiveness of such applications has led to concerted efforts to embed them into other systems. However, a drawback of machine learning algorithms is their enormous computational and space complexity, requiring large amounts of power and/or physical space to run. In embedded systems, these issues pose a problem, as size and performance are key constraints. However, optimizing such solutions requires engineering at the Register Transfer Level (RTL), which is time-consuming and error-prone. In such implementations, it may be preferable to accept a solution that does the job well enough, instead of one that is optimized down to the last bit through RTL design. In this report, we implement a small-scale machine learning model, a Convolutional Neural Network (CNN) trained offline in Python, on a Field-Programmable Gate Array (FPGA), the Zedboard. This report explores combinations of compiler directives, or pragmas, which are interpreted by the High-Level Synthesis (HLS) compiler. Through these directives, the designer can influence how the solution is implemented and can improve its space and computational complexity.
author2 Smitha Kavallur Pisharath Gopi
author_facet Smitha Kavallur Pisharath Gopi
Liem, Jonathan Zhuan Kim
format Final Year Project
author Liem, Jonathan Zhuan Kim
author_sort Liem, Jonathan Zhuan Kim
title Optimization of neural networks through high level synthesis
title_short Optimization of neural networks through high level synthesis
title_full Optimization of neural networks through high level synthesis
title_fullStr Optimization of neural networks through high level synthesis
title_full_unstemmed Optimization of neural networks through high level synthesis
title_sort optimization of neural networks through high level synthesis
publishDate 2018
url http://hdl.handle.net/10356/76135
_version_ 1759856325328633856