Approximate implementations of neural networks


Full description

Bibliographic Details
Main Author: Sim, Wei Feng
Other Authors: Thomas Peyrin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181121
Institution: Nanyang Technological University
Description
Summary: This research explores the application of approximate computing in neural networks, focusing on both classical models and the innovative Truth Table Nets (TTnet). The study aims to evaluate how approximation techniques can optimize computational efficiency without compromising accuracy, a crucial balance given the rising demands of AI and ML applications. The research involved testing multiple approximation tools, revealing significant challenges arising from outdated software dependencies. As an alternative approach, custom Python programs were developed to generate and evaluate truth tables by modifying Boolean expressions through term and variable reduction. However, reproducing the original accuracy of TTnet with the approximated TT-rules and associated weights proved difficult, limiting the project's ability to assess the full impact of these optimizations. Despite these setbacks, the project offers insights into the challenges and potential of approximate computing in novel neural network architectures, paving the way for future exploration in the domain.
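The summary describes custom Python programs that generate truth tables and approximate Boolean expressions via term reduction. The minimal sketch below illustrates the general idea under stated assumptions: the example expression, the choice of which term to drop, and the agreement metric are all hypothetical illustrations, not the project's actual TT-rules or code.

```python
from itertools import product

# Illustrative sketch (not the project's actual code): approximate a
# Boolean expression by dropping a product term, then measure how often
# the reduced expression agrees with the original over all inputs.

def truth_table(fn, n_vars):
    """Evaluate fn over all 2**n_vars input combinations."""
    return [fn(*bits) for bits in product([0, 1], repeat=n_vars)]

# Hypothetical original expression in sum-of-products form:
# (a AND b) OR (b AND c) OR (a AND c)  -- the 3-input majority function.
def original(a, b, c):
    return (a & b) | (b & c) | (a & c)

# Term-reduced approximation: drop the (a AND c) term.
def approximated(a, b, c):
    return (a & b) | (b & c)

def agreement(fn1, fn2, n_vars):
    """Fraction of input rows on which the two truth tables agree."""
    t1, t2 = truth_table(fn1, n_vars), truth_table(fn2, n_vars)
    return sum(x == y for x, y in zip(t1, t2)) / len(t1)

print(agreement(original, approximated, 3))  # 0.875: 7 of 8 rows agree
```

Here the approximation disagrees with the original only on the single input row (a=1, b=0, c=1), so the two truth tables match on 7 of 8 rows; a variable-reduction step would analogously replace one input with a constant before re-evaluating the table.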