Converting recurrent neural networks to minimal-state deterministic finite automata for deployment on edge devices
Edge computing for low-latency internet-of-things (IoT) applications requires more data analysis to occur close to the data source, on edge devices such as microcontrollers that have limited memory and computational resources. This project’s objective is to outline a generalisable procedure for classifying sequential discrete data using recurrent neural networks (RNNs) that are converted into deterministic finite automata (DFAs)...
Saved in:
Main Author: | Hulagadri, Adithya Venkatadri |
---|---|
Other Authors: | Gu Mile |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | Engineering::Computer science and engineering |
Online Access: | https://hdl.handle.net/10356/158314 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-158314 |
---|---|
record_format |
dspace |
spelling |
Hulagadri, A. V. (2022). Converting recurrent neural networks to minimal-state deterministic finite automata for deployment on edge devices. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158314. Supervisor: Gu Mile (gumile@ntu.edu.sg), School of Computer Science and Engineering. Degree: Bachelor of Engineering (Computer Science). Subject: Engineering::Computer science and engineering. Record sg-ntu-dr.10356-158314, 2022-05-17T02:11:37Z. Language: English. Format: application/pdf. Publisher: Nanyang Technological University. |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Computer science and engineering |
description |
Edge computing for low-latency internet-of-things (IoT) applications requires more data analysis to occur close to the data source, on edge devices such as microcontrollers that have limited memory and computational resources. This project’s objective is to outline a generalisable procedure for classifying sequential discrete data using recurrent neural networks (RNNs) that are converted into deterministic finite automata (DFAs). Every such DFA can be minimised with Hopcroft’s algorithm to obtain an equivalent DFA with the minimum number of states, which drastically reduces the memory needed to store the model’s parameters. The states are extracted from the RNN’s memory using Quantised Binary Networks (QBNs), auto-encoders whose binary activation function forces the RNN to step through discrete memory states. Lower memory requirements also reduce power consumption, enabling longer service life and applications in bio-computing, which is highly heat-sensitive. Additionally, the minimum-state DFA can always be implemented directly as an electrical circuit, leading to simpler hardware for running advanced RNNs on edge devices. This report uses a sample classification problem with a proven quantum memory advantage. A secondary aim is to demonstrate that the optimum classical algorithm for this problem can be recovered using popular machine-learning methods; the report shows that the project succeeded, obtaining the theoretically optimum DFA from an RNN trained with the PyTorch framework in Python. (Illustrative code sketches of the QBN quantisation, the Hopcroft minimisation step, and table-driven DFA execution appear after the record fields below.) |
author2 |
Gu Mile |
format |
Final Year Project |
author |
Hulagadri, Adithya Venkatadri |
title |
Converting recurrent neural networks to minimal-state deterministic finite automata for deployment on edge devices |
publisher |
Nanyang Technological University |
publishDate |
2022 |
url |
https://hdl.handle.net/10356/158314 |
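The abstract describes extracting discrete states from the RNN’s memory with a Quantised Binary Network (QBN): an auto-encoder whose binary activation forces the hidden state onto a small set of codes. The snippet below is a minimal sketch of that idea in PyTorch, the framework named in the record; the class names, layer sizes, straight-through gradient trick, and training loop are illustrative assumptions, not the report’s actual implementation.

```python
# Hedged sketch of a quantised binary auto-encoder (QBN) for RNN hidden states.
# All names and dimensions here are assumptions for illustration only.
import torch
import torch.nn as nn


class BinaryStraightThrough(torch.autograd.Function):
    """Binarise to {-1, +1} in the forward pass; pass gradients through unchanged."""

    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator


class QBN(nn.Module):
    """Auto-encoder that maps an RNN hidden state to a short binary code and back."""

    def __init__(self, hidden_dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Linear(hidden_dim, code_dim)
        self.decoder = nn.Linear(code_dim, hidden_dim)

    def forward(self, h):
        code = BinaryStraightThrough.apply(torch.tanh(self.encoder(h)))
        return self.decoder(code), code


# Minimise reconstruction error on hidden states logged from the trained RNN,
# so every state snaps onto one of at most 2**code_dim discrete codes.
qbn = QBN(hidden_dim=64, code_dim=8)
optimiser = torch.optim.Adam(qbn.parameters(), lr=1e-3)
hidden_states = torch.randn(1024, 64)  # placeholder for states collected from the RNN
for _ in range(100):
    reconstruction, _ = qbn(hidden_states)
    loss = nn.functional.mse_loss(reconstruction, hidden_states)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Each distinct binary code observed while replaying the training sequences through the RNN with the QBN inserted then becomes one candidate DFA state.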
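The abstract also states that the extracted DFA can always be minimised with Hopcroft’s algorithm. Below is a generic partition-refinement sketch of Hopcroft’s algorithm in Python; the function name, the dict-based transition function, and the worked parity example are illustrative assumptions unrelated to the report’s actual automaton.

```python
def hopcroft_minimise(states, alphabet, delta, accepting):
    """Hopcroft's partition refinement: group behaviourally equivalent DFA states.

    states    -- iterable of hashable state identifiers
    alphabet  -- iterable of input symbols
    delta     -- dict mapping (state, symbol) -> next state (total transition function)
    accepting -- set of accepting states
    Returns a set of frozensets: the state classes of the minimal DFA.
    """
    states = set(states)
    accepting = set(accepting) & states
    partition = {frozenset(accepting), frozenset(states - accepting)} - {frozenset()}
    worklist = set(partition)

    # Inverse transitions: which states reach state t on symbol c?
    preimage = {}
    for (s, c), t in delta.items():
        preimage.setdefault((t, c), set()).add(s)

    while worklist:
        splitter = worklist.pop()
        for c in alphabet:
            x = set()
            for t in splitter:
                x |= preimage.get((t, c), set())
            for block in list(partition):
                inter, diff = block & x, block - x
                if inter and diff:  # x splits this block
                    partition.remove(block)
                    partition |= {inter, diff}
                    if block in worklist:
                        worklist.remove(block)
                        worklist |= {inter, diff}
                    else:
                        worklist.add(inter if len(inter) <= len(diff) else diff)
    return partition


# Hypothetical example: a 3-state DFA accepting strings with an even number of 1s,
# where states "a" and "c" are redundant copies of each other.
delta = {("a", 0): "a", ("a", 1): "b",
         ("b", 0): "b", ("b", 1): "c",
         ("c", 0): "c", ("c", 1): "b"}
print(hopcroft_minimise({"a", "b", "c"}, {0, 1}, delta, {"a", "c"}))
# -> two classes: {'a', 'c'} and {'b'}
```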
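Finally, the record argues that a minimum-state DFA is cheap to deploy: each step is a single table lookup and the live state fits in a few bits, which suits microcontrollers and maps directly onto combinational logic. The sketch below shows table-driven execution in Python; the two-state parity automaton and every name in it are placeholders, not the report’s classification problem.

```python
def make_dfa_runner(transition_table, start_state, accepting_states):
    """Wrap a minimised DFA (a flat dict) as a sequence classifier."""
    def classify(symbols):
        state = start_state
        for symbol in symbols:
            state = transition_table[(state, symbol)]  # one lookup per input symbol
        return state in accepting_states
    return classify


# Hypothetical two-state parity automaton: the entire "model" is a four-entry
# table plus one bit of live state, easily within a microcontroller's memory.
table = {(0, 0): 0, (0, 1): 1,
         (1, 0): 1, (1, 1): 0}
classify = make_dfa_runner(table, start_state=0, accepting_states={0})
print(classify([1, 0, 1, 1]))  # three 1s -> odd parity -> False
```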