Approximate implementations of neural networks

Bibliographic Details
Main Author: Sim, Wei Feng
Other Authors: Thomas Peyrin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181121
Institution: Nanyang Technological University
Description
Abstract: This research explores the application of approximate computing in neural networks, focusing on both classical models and the innovative Truth Table Nets (TTnet). The study aims to evaluate how approximation techniques can optimize computational efficiency without compromising accuracy, a crucial balance given the rising demands of AI and ML applications. The research involved testing multiple approximation tools, revealing significant challenges from outdated software dependencies. As an alternative approach, custom Python programs were developed to generate and evaluate truth tables by modifying Boolean expressions through term and variable reduction. However, reproducing the original accuracy of TTnet with the approximated TT-rules and associated weights proved difficult, limiting the project's ability to assess the full impact of these optimizations. Despite these setbacks, the project offers insights into the challenges and potential of approximate computing in novel neural network architectures, paving the way for future exploration in the domain.
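The abstract mentions generating and evaluating truth tables while approximating Boolean expressions through term and variable reduction. The following is a minimal illustrative sketch of that idea, not the project's actual code: a hypothetical 3-variable expression in disjunctive normal form is approximated by dropping a term and by removing a variable, and each approximation is scored by how often its truth table agrees with the exact one.

```python
from itertools import product

def eval_dnf(terms, assignment):
    # A DNF expression is a list of terms; each term maps a variable
    # index to its required value. The expression is true if any term
    # is satisfied by the assignment.
    return any(all(assignment[v] == val for v, val in term.items())
               for term in terms)

def truth_table(terms, n_vars):
    # Enumerate all 2^n assignments and record the expression's output.
    return [eval_dnf(terms, bits)
            for bits in product([False, True], repeat=n_vars)]

def agreement(table_a, table_b):
    # Fraction of assignments on which two truth tables agree.
    return sum(a == b for a, b in zip(table_a, table_b)) / len(table_a)

# Hypothetical expression: (x0 & x1) | (~x1 & x2) | (x0 & ~x2)
exact = [{0: True, 1: True}, {1: False, 2: True}, {0: True, 2: False}]

# Term reduction: drop the last term of the DNF.
fewer_terms = exact[:2]

# Variable reduction: remove variable x2 from every term that uses it.
fewer_vars = [{v: val for v, val in t.items() if v != 2} for t in exact]

base = truth_table(exact, 3)
print(agreement(truth_table(fewer_terms, 3), base))  # 0.875
print(agreement(truth_table(fewer_vars, 3), base))   # 0.875
```

Both reductions here flip the output on one of the eight assignments, giving 87.5% agreement; in a TTnet-style setting, such per-rule agreement scores would be one way to gauge how much a given simplification can be expected to cost in downstream accuracy.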