A study of CNN transfer learning for image processing

Bibliographic Details
Main Author: Koh, Yee Zuo
Other Authors: Kai-Kuang Ma
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2020
Online Access: https://hdl.handle.net/10356/145039
Institution: Nanyang Technological University
Description
Summary: Transfer learning, a subfield of machine learning, seeks to be more efficient than traditional machine learning techniques by adapting an existing convolutional neural network (CNN) to a new problem. A CNN can be adapted for transfer learning by changing its hyperparameters and freezing some of its layers. In this paper, transfer learning was applied to VGG-Face, a state-of-the-art facial recognition CNN, which was adapted to classify images from the Japanese Female Facial Expression (JAFFE) dataset into four human facial emotions: (1) Angry, (2) Happy, (3) Sad, (4) Surprised. Cascade transfer learning was performed, using the FER2013 dataset for the first fine-tune and a portion of the JAFFE dataset for the second fine-tune. Test accuracy was then measured on a held-out portion of the JAFFE dataset. The effects of changing hyperparameters and freezing layers within the VGG-Face CNN are also discussed in this paper. The experiments were run on an NVIDIA RTX 2060 GPU in MATLAB R2020a using its various toolboxes. The final architecture achieved a validation accuracy of 62.41% on the FER2013 dataset and a test accuracy of 86.11% on the JAFFE test dataset, an improvement over the respective baselines of 20.63% and 27.78%.
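To illustrate the layer-freezing and head-replacement step the summary describes, the following MATLAB (Deep Learning Toolbox) sketch adapts a pretrained network for the four JAFFE emotion classes. It is a minimal sketch, not the thesis's actual code: the file name vggface.mat, the folder name jaffe_train, the position of the final fully connected layer, and all hyperparameter values are illustrative assumptions.

% Minimal transfer-learning sketch; assumptions are flagged in comments.
% 'vggface.mat' holding a SeriesNetwork named 'net' is an assumed file,
% not a built-in MATLAB model.
data = load('vggface.mat');
layers = data.net.Layers;

% Freeze the convolutional layers by zeroing their learn-rate factors,
% so only the replaced classification head is updated during fine-tuning.
for i = 1:numel(layers)
    if isa(layers(i), 'nnet.cnn.layer.Convolution2DLayer')
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor = 0;
    end
end

% Replace the final classifier with a 4-way head for
% Angry / Happy / Sad / Surprised (assumes the VGG-style ordering
% fc -> softmax -> classification at the end of the layer array).
layers(end-2) = fullyConnectedLayer(4, ...
    'WeightLearnRateFactor', 10, 'BiasLearnRateFactor', 10);
layers(end) = classificationLayer;

% Training data: one folder per emotion class, resized to the VGG input.
imds = imageDatastore('jaffe_train', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
augds = augmentedImageDatastore([224 224 3], imds);

% Hyperparameters here are placeholders for the values tuned in the study.
opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'MiniBatchSize', 32, ...
    'MaxEpochs', 10, ...
    'ExecutionEnvironment', 'gpu');

netFT = trainNetwork(augds, layers, opts);

The cascade described in the summary would repeat this fine-tuning stage twice: first with a FER2013 datastore, then with the JAFFE training split, carrying the trained network forward between stages.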