Algorithms and circuits for low-power machine learning IC

Artificial neural networks form a fascinating field inspired by the biological model of learning. Multi-layered feed-forward networks require significant human intervention for tuning and show slow processing speeds. An alternative model, a single-layer feedforward neural network with a randomized input layer and hidden-layer biases, has been proposed to improve efficiency and reduce training time by almost a thousandfold. We look at the extreme learning machine proposed by Prof. Guang-Bin Huang, which shows that the input weights and the hidden-layer biases can be randomly assigned if the activation functions are infinitely differentiable. We test different datasets to generate models using noisy parameters for regression, medical classification applications such as diabetes diagnosis, and speech recognition on sound data extracted from cochlear implants. We study techniques to generalize the data and optimize the hidden layer and output of the machine by tuning parameters to our needs. We also look at circuit implementations of sub-blocks of the neural network, in particular the activation thresholding functions, after optimizing them for our datasets of interest. Future research into implementing the entire neural network in hardware, and the implications of the resulting non-idealities, is discussed.

Bibliographic Details
Main Author: Thosani, Tejas Hemant
Other Authors: Arindam Basu
Format: Final Year Project
Language: English
Published: 2017
Subjects:
Online Access:http://hdl.handle.net/10356/72045
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-72045
record_format dspace
spelling sg-ntu-dr.10356-720452023-07-07T16:21:44Z Algorithms and circuits for low-power machine learning IC Thosani, Tejas Hemant Arindam Basu School of Electrical and Electronic Engineering DRNTU::Engineering::Electrical and electronic engineering Artificial neural networks form a fascinating field inspired by the biological model of learning. Multi-layered feed-forward networks require significant human intervention for tuning and show slow processing speeds. An alternative model, a single-layer feedforward neural network with a randomized input layer and hidden-layer biases, has been proposed to improve efficiency and reduce training time by almost a thousandfold. We look at the extreme learning machine proposed by Prof. Guang-Bin Huang, which shows that the input weights and the hidden-layer biases can be randomly assigned if the activation functions are infinitely differentiable. We test different datasets to generate models using noisy parameters for regression, medical classification applications such as diabetes diagnosis, and speech recognition on sound data extracted from cochlear implants. We study techniques to generalize the data and optimize the hidden layer and output of the machine by tuning parameters to our needs. We also look at circuit implementations of sub-blocks of the neural network, in particular the activation thresholding functions, after optimizing them for our datasets of interest. Future research into implementing the entire neural network in hardware, and the implications of the resulting non-idealities, is discussed. Bachelor of Engineering 2017-05-24T02:03:03Z 2017-05-24T02:03:03Z 2017 Final Year Project (FYP) http://hdl.handle.net/10356/72045 en Nanyang Technological University 57 p. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic DRNTU::Engineering::Electrical and electronic engineering
spellingShingle DRNTU::Engineering::Electrical and electronic engineering
Thosani, Tejas Hemant
Algorithms and circuits for low-power machine learning IC
description Artificial neural networks form a fascinating field inspired by the biological model of learning. Multi-layered feed-forward networks require significant human intervention for tuning and show slow processing speeds. An alternative model, a single-layer feedforward neural network with a randomized input layer and hidden-layer biases, has been proposed to improve efficiency and reduce training time by almost a thousandfold. We look at the extreme learning machine proposed by Prof. Guang-Bin Huang, which shows that the input weights and the hidden-layer biases can be randomly assigned if the activation functions are infinitely differentiable. We test different datasets to generate models using noisy parameters for regression, medical classification applications such as diabetes diagnosis, and speech recognition on sound data extracted from cochlear implants. We study techniques to generalize the data and optimize the hidden layer and output of the machine by tuning parameters to our needs. We also look at circuit implementations of sub-blocks of the neural network, in particular the activation thresholding functions, after optimizing them for our datasets of interest. Future research into implementing the entire neural network in hardware, and the implications of the resulting non-idealities, is discussed.
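The extreme learning machine the abstract refers to can be sketched in a few lines: the input weights and hidden-layer biases are fixed at random, the hidden layer is passed through a differentiable activation (a sigmoid here), and only the output weights are solved in closed form by a least-squares fit. This is a minimal illustrative sketch of the general ELM algorithm, not the code used in this project; the function names, the sigmoid choice, and the hidden-layer size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """Train an ELM: random hidden layer, least-squares output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    # Input weights and hidden biases are assigned randomly and never tuned.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer activations (sigmoid: infinitely differentiable).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights via the Moore-Penrose pseudoinverse (least squares).
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because only the output weights are learned, and in a single closed-form step, training avoids the iterative backpropagation that makes conventional feed-forward networks slow to tune.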
author2 Arindam Basu
author_facet Arindam Basu
Thosani, Tejas Hemant
format Final Year Project
author Thosani, Tejas Hemant
author_sort Thosani, Tejas Hemant
title Algorithms and circuits for low-power machine learning IC
title_short Algorithms and circuits for low-power machine learning IC
title_full Algorithms and circuits for low-power machine learning IC
title_fullStr Algorithms and circuits for low-power machine learning IC
title_full_unstemmed Algorithms and circuits for low-power machine learning IC
title_sort algorithms and circuits for low-power machine learning ic
publishDate 2017
url http://hdl.handle.net/10356/72045
_version_ 1772828385388527616