FPGA library based design of a hardware model for convolutional neural network with automated weight compression using K-means clustering

In this paper, a design of a synthesizable hardware model for a Convolutional Neural Network (CNN) is presented. The hardware model is capable of self-training, i.e., it trains without the use of any external processor, and it is trained to recognize four numerical digit images. Another hardware model is also designed for the K-means clustering algorithm. This second hardware model is used to compress the weights of the CNN through quantization. Weight compression is carried out through weight sharing, which allows the system to reduce component usage. The two hardware models are then integrated to automate the compression of the CNN weights once the CNN completes its training. The entire design is based on fixed-point arithmetic, with VHDL as the design entry tool and the Xilinx Virtex-5 FPGA as the target library for synthesis. The completed design is evaluated in terms of hardware consumption with respect to the rate of compression. In evaluating the recognition performance of the hardware model, digit-image experiments show that weight compression can reach as high as 60% without any negative effect on the performance of the CNN. Based on the data gathered, the compression with the least hardware consumption occurs at 80%. For the various digits trained, the CNN outputs after training range from 89% to 97%. © 2019, World Academy of Research in Science and Engineering. All rights reserved.
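The core idea described in the abstract, compressing CNN weights by clustering them with K-means and then sharing one centroid value per cluster, can be illustrated in software. The paper's actual implementation is a fixed-point VHDL hardware model synthesized for a Xilinx Virtex-5 FPGA; the sketch below is only a minimal Python/NumPy illustration of the general weight-sharing idea, and the function name, cluster count, and random test data are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def kmeans_weight_sharing(weights, k=16, iters=50, seed=0):
    """Illustrative K-means weight sharing (not the paper's VHDL design).

    Clusters a 1-D array of trained weights into k centroids and returns
    per-weight cluster indices plus the centroid table, so a layer can be
    stored as k shared values and one small index per weight.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=np.float64).ravel()
    # Initialize centroids by sampling k distinct weights.
    centroids = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for every weight.
        idx = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        # Update step: move each centroid to the mean of its assigned weights.
        for j in range(k):
            members = w[idx == j]
            if members.size:
                centroids[j] = members.mean()
    idx = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    return idx, centroids

# Example: quantize 1000 random "trained" weights down to 16 shared values.
weights = np.random.randn(1000)
idx, centroids = kmeans_weight_sharing(weights, k=16)
compressed = centroids[idx]  # every weight replaced by its shared centroid
print("distinct weight values after sharing:", np.unique(compressed).size)
```

In this reading, the cluster count controls the compression rate: a layer stores only k shared values plus one small index per weight, which is the trade-off between compression rate and hardware consumption that the paper evaluates.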

Bibliographic Details
Main Authors: Yap, Roderick; Giron, Goldwin; Lanto, Leonard Miguel; Garcia, Lorenzo; Sta. Maria, David; Materum, Lawrence
Format: text
Published: Animo Repository, 2019
Subjects: Field programmable gate arrays; Neural networks (Computer science); Data compression (Computer science); VHDL (Computer hardware description language); Electrical and Electronics
Online Access: https://animorepository.dlsu.edu.ph/faculty_research/3014
Institution: De La Salle University