Retraining SNN conversions: CNN to SNN for audio classification tasks

Efficient yet powerful models are in high demand for their portability and affordability. Alongside methods such as model pruning, one approach is to limit neural network operations to sparse, event-driven spikes: Spiking Neural Networks (SNNs) aim to open a new direction in machine learning research. A significant amount of SNN literature builds on mature artificial neural network (ANN) work by migrating ANN architectures and parameters into SNNs and optimizing the migration to retain as much performance as possible. We spearhead a novel approach: the architecture is migrated and then retrained from scratch. We hypothesize that this new direction will uncover concepts that currently bottleneck improvements in the field of SNN conversion and, much like Transfer Learning, inspire future efforts to fine-tune a well-converted model through training. This paper presents our analysis of training Convolutional Neural Networks (CNNs) converted to SNNs on audio classification tasks. Results show that (1) converted SNNs consistently underperform their CNN counterparts by a small margin during training, and model complexity appears to be associated with the size of this margin; (2) converted SNNs do not necessarily approach the performance of their CNN counterparts asymptotically as the number of time-steps increases; (3) training SNNs from scratch is costly and impractical on current hardware, so dedicated SNN optimization techniques are necessary; and (4) making the SNN membrane decay rate learnable does not significantly affect performance.
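The abstract's findings on time-steps and the membrane decay rate refer to the standard leaky integrate-and-fire (LIF) dynamics underlying most CNN-to-SNN conversions: the membrane potential decays by a factor beta each time-step, integrates input current, and emits a spike on crossing a threshold. The sketch below is a minimal, illustrative Python model of those dynamics (all names are our own, not from the thesis); `beta` is the decay rate that finding (4) considers making learnable.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# The membrane decay rate `beta` is the parameter whose learnability
# is examined in finding (4); names here are illustrative only.

def lif_step(v, current, beta=0.9, threshold=1.0):
    """One time-step of a LIF neuron.

    v: membrane potential carried over from the previous step
    current: input current at this time-step
    beta: membrane decay rate (0 < beta < 1)
    threshold: spike threshold
    Returns (spike, new_v).
    """
    v = beta * v + current              # leaky integration
    spike = 1 if v >= threshold else 0
    if spike:
        v -= threshold                  # soft reset after spiking
    return spike, v


def run(currents, beta=0.9, threshold=1.0):
    """Unroll the neuron over a sequence of input currents,
    one time-step per element, returning the output spike train."""
    v, spikes = 0.0, []
    for i in currents:
        s, v = lif_step(v, i, beta, threshold)
        spikes.append(s)
    return spikes


# A constant sub-threshold input must accumulate across several
# time-steps before the neuron fires:
print(run([0.5, 0.5, 0.5, 0.0], beta=0.9))  # → [0, 0, 1, 0]
```

Unrolling over more time-steps gives the SNN more spikes with which to approximate the converted CNN's activations; finding (2) reports that this approximation does not necessarily improve asymptotically.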


Bibliographic Details
Main Author: Chang, John Rong Qi
Other Authors: Goh Wang Ling
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/167383
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-167383
record_format dspace
spelling sg-ntu-dr.10356-1673832023-07-07T16:05:29Z Retraining SNN conversions: CNN to SNN for audio classification tasks Chang, John Rong Qi Goh Wang Ling School of Electrical and Electronic Engineering A*STAR Institute of Microelectronics EWLGOH@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Engineering::Computer science and engineering::Software::Software engineering Efficient yet powerful models are in high demand for their portability and affordability. Alongside methods such as model pruning, one approach is to limit neural network operations to sparse, event-driven spikes: Spiking Neural Networks (SNNs) aim to open a new direction in machine learning research. A significant amount of SNN literature builds on mature artificial neural network (ANN) work by migrating ANN architectures and parameters into SNNs and optimizing the migration to retain as much performance as possible. We spearhead a novel approach: the architecture is migrated and then retrained from scratch. We hypothesize that this new direction will uncover concepts that currently bottleneck improvements in the field of SNN conversion and, much like Transfer Learning, inspire future efforts to fine-tune a well-converted model through training. This paper presents our analysis of training Convolutional Neural Networks (CNNs) converted to SNNs on audio classification tasks. Results show that (1) converted SNNs consistently underperform their CNN counterparts by a small margin during training, and model complexity appears to be associated with the size of this margin; (2) converted SNNs do not necessarily approach the performance of their CNN counterparts asymptotically as the number of time-steps increases; (3) training SNNs from scratch is costly and impractical on current hardware, so dedicated SNN optimization techniques are necessary; and (4) making the SNN membrane decay rate learnable does not significantly affect performance.
This paper provides valuable insights into retraining converted SNNs for audio classification and serves as a reference for future studies and hardware implementation benchmarks. Bachelor of Engineering (Information Engineering and Media) 2023-05-26T00:06:35Z 2023-05-26T00:06:35Z 2023 Final Year Project (FYP) Chang, J. R. Q. (2023). Retraining SNN conversions: CNN to SNN for audio classification tasks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167383 https://hdl.handle.net/10356/167383 en B2286-221 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Computer science and engineering::Software::Software engineering
spellingShingle Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Computer science and engineering::Software::Software engineering
Chang, John Rong Qi
Retraining SNN conversions: CNN to SNN for audio classification tasks
description Efficient yet powerful models are in high demand for their portability and affordability. Alongside methods such as model pruning, one approach is to limit neural network operations to sparse, event-driven spikes: Spiking Neural Networks (SNNs) aim to open a new direction in machine learning research. A significant amount of SNN literature builds on mature artificial neural network (ANN) work by migrating ANN architectures and parameters into SNNs and optimizing the migration to retain as much performance as possible. We spearhead a novel approach: the architecture is migrated and then retrained from scratch. We hypothesize that this new direction will uncover concepts that currently bottleneck improvements in the field of SNN conversion and, much like Transfer Learning, inspire future efforts to fine-tune a well-converted model through training. This paper presents our analysis of training Convolutional Neural Networks (CNNs) converted to SNNs on audio classification tasks. Results show that (1) converted SNNs consistently underperform their CNN counterparts by a small margin during training, and model complexity appears to be associated with the size of this margin; (2) converted SNNs do not necessarily approach the performance of their CNN counterparts asymptotically as the number of time-steps increases; (3) training SNNs from scratch is costly and impractical on current hardware, so dedicated SNN optimization techniques are necessary; and (4) making the SNN membrane decay rate learnable does not significantly affect performance. This paper provides valuable insights into retraining converted SNNs for audio classification and serves as a reference for future studies and hardware implementation benchmarks.
author2 Goh Wang Ling
author_facet Goh Wang Ling
Chang, John Rong Qi
format Final Year Project
author Chang, John Rong Qi
author_sort Chang, John Rong Qi
title Retraining SNN conversions: CNN to SNN for audio classification tasks
title_short Retraining SNN conversions: CNN to SNN for audio classification tasks
title_full Retraining SNN conversions: CNN to SNN for audio classification tasks
title_fullStr Retraining SNN conversions: CNN to SNN for audio classification tasks
title_full_unstemmed Retraining SNN conversions: CNN to SNN for audio classification tasks
title_sort retraining snn conversions: cnn to snn for audio classification tasks
publisher Nanyang Technological University
publishDate 2023
url https://hdl.handle.net/10356/167383
_version_ 1772826879562088448