Reconfigurable low-voltage low-power neuron cell for self-organizing maps

Full description

Bibliographic Details
Main Author: Li, Ren Shi.
Other Authors: Chang, Chip Hong
Format: Final Year Project
Language: English
Published: 2009
Subjects:
Online Access: http://hdl.handle.net/10356/18425
Institution: Nanyang Technological University
Description
Summary: The Self-Organizing Map (SOM), a type of Artificial Neural Network (ANN), has wide applications in pattern recognition, data clustering, and image processing. The cells in the network respond to various input patterns through competitive learning. The learning process involves complex non-linear mathematical calculations and relies on time-sharing and parallel-processing techniques in real applications. This project presents a new SOM neuron architecture: a low-voltage, low-power analog design with promising learning accuracy and convergence stability. This report reviews the SOM algorithm with emphasis on the mathematical operations it involves, and then illustrates the proposed architecture and the key components of the network. The prototype SOM network consists of four neurons, which are used to evaluate and verify the functionality and learning quality with an accurate Gaussian tapering function. A low-power analog current multiplier operating in the subthreshold region is also presented. Operating in the subthreshold region, the multiplier offers low power consumption and simplicity without sacrificing accuracy. The complexity of the Gaussian neighborhood function and weight adaptation is reduced by reusing the analog current multiplier for both multiplication and squaring operations. Simulation results show that, for the same input patterns, better learning quality and much lower power consumption can be achieved at the expense of cycle time.
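For reference, the operations the summary names (competitive winner search, Gaussian neighborhood/tapering function, and weight adaptation) can be sketched in software. This is a minimal illustrative model only: the function name `som_step`, the four-neuron 1-D topology, and the fixed learning rate `eta` and neighborhood width `sigma` are assumptions for this sketch, not details of the report's analog circuit, which realizes the multiply and square operations with a reused subthreshold current multiplier.

```python
import math
import random

def som_step(weights, x, eta=0.2, sigma=0.5):
    """One competitive-learning step for a 1-D SOM (illustrative sketch).

    weights: list of weight vectors, one per neuron
    x: input vector
    Returns the index of the winning neuron.
    """
    # Competition: squared Euclidean distance from each neuron's weights to x
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    bmu = dists.index(min(dists))  # best-matching unit (the winner)
    for j, w in enumerate(weights):
        # Gaussian neighborhood (tapering) function centered on the winner;
        # a full SOM would also shrink sigma and eta over time.
        h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
        # Weight adaptation: move each neuron toward the input, scaled by h
        weights[j] = [wi + eta * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

# Four-neuron prototype (matching the report's network size), 2-D inputs
# drawn from two clusters; the map should assign a winner to each cluster.
random.seed(0)
weights = [[random.random(), random.random()] for _ in range(4)]
for _ in range(100):
    x = random.choice([[0.1, 0.1], [0.9, 0.9]])
    som_step(weights, x)
```

After training, distinct neurons' weight vectors settle near the two cluster centers, which is the convergence behavior the report evaluates in the analog domain.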