Approximate implementations of neural networks
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181121
Institution: Nanyang Technological University
Summary: This research explores the application of approximate computing in neural networks, focusing on both classical models and the innovative Truth Table Nets (TTnet). The study aims to evaluate how approximation techniques can optimize computational efficiency without compromising accuracy, a crucial balance given the rising demands of AI and ML applications. The research involved testing multiple approximation tools, revealing significant challenges from outdated software dependencies. As an alternative approach, custom Python programs were developed to generate and evaluate truth tables by modifying Boolean expressions through term and variable reduction. However, reproducing the original accuracy of TTnet with the approximated TT-rules and associated weights proved difficult, limiting the project's ability to assess the full impact of these optimizations. Despite these setbacks, the project offers insights into the challenges and potential of approximate computing in novel neural network architectures, paving the way for future exploration in the domain.
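The term- and variable-reduction idea described in the summary can be sketched in a few lines of Python. The DNF rule, variable names, and helper functions below are illustrative assumptions, not the project's actual code: a Boolean rule is held as a list of product terms, an approximation drops a whole term (term reduction) or a literal within a term (variable reduction), and the two expressions are compared over the full truth table.

```python
from itertools import product

def eval_dnf(terms, assignment):
    """Evaluate a DNF expression. Each term maps a variable name to the
    Boolean value it requires; the expression is the OR of all terms."""
    return any(all(assignment[v] == val for v, val in term.items())
               for term in terms)

def agreement(original, approx, variables):
    """Fraction of the 2^n input assignments on which the approximated
    expression matches the original one."""
    inputs = [dict(zip(variables, bits))
              for bits in product([False, True], repeat=len(variables))]
    return sum(eval_dnf(original, a) == eval_dnf(approx, a)
               for a in inputs) / len(inputs)

# Hypothetical 3-variable rule: (a AND b) OR (a AND NOT c) OR (NOT a AND b AND c)
variables = ["a", "b", "c"]
original = [
    {"a": True, "b": True},
    {"a": True, "c": False},
    {"a": False, "b": True, "c": True},
]

# Term reduction: drop the last product term entirely.
term_reduced = original[:2]

# Variable reduction: remove the literal on 'c' from the second term.
var_reduced = [original[0], {"a": True}, original[2]]

print(agreement(original, term_reduced, variables))  # 0.875
print(agreement(original, var_reduced, variables))   # 0.875
```

Both reductions here happen to disagree with the original rule on exactly one of the eight input assignments, giving 87.5% agreement; in a real TT-rule the same exhaustive comparison would quantify how much each simplification costs before it is propagated to the network's accuracy.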