Understanding variations (variant & invariant) of classification tasks/targets

Existing approaches still lack a mechanism to cater for variance in data, as well as a way to characterise the levels of impact that variance brings. We introduce a composite metric called learning, defined as the average improvement per epoch divided by the previous loss value, to provide a standard reference across models of differing architectures. We use specially designed datasets with a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) to observe the effects of variance on bottom-up and top-down neural network architectures, respectively. We find that variance has degrees: given datasets with different operations applied, the amount of loss varies notably. We find that variance has dimensions: the amount of variance introduced into an image affects the confidence of the model's prediction. We also find that even when provided a single training sample with no operation applied, the CNN and RNN architectures could yield lower validation losses (with the CNN being significantly lower). This study shows the significance of variance's impact on model performance as manifested in data, and the pressing need to understand variance so as to better design mitigations and mechanisms for handling it.
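The composite metric lends itself to a short illustration. Below is a minimal sketch in Python, assuming the definition amounts to the per-epoch loss improvement normalised by the previous epoch's loss and averaged over training; the function name and example loss values are hypothetical, not taken from the thesis.

```python
# Sketch of the "learning" metric described in the abstract, assuming it is
# the per-epoch loss improvement divided by the previous epoch's loss,
# averaged over training. The thesis's exact formulation may differ.

def learning_metric(losses):
    """Average relative per-epoch improvement over a per-epoch loss curve.

    Normalising each improvement by the previous loss makes the value
    comparable across models whose absolute loss scales differ.
    """
    if len(losses) < 2:
        raise ValueError("need at least two epochs of loss values")
    improvements = [
        (prev - curr) / prev          # relative improvement at this epoch
        for prev, curr in zip(losses, losses[1:])
        if prev != 0                  # skip epochs where division is undefined
    ]
    if not improvements:
        return 0.0
    return sum(improvements) / len(improvements)


# Hypothetical loss curves for a CNN and an RNN trained on the same dataset.
cnn_losses = [2.30, 1.40, 0.95, 0.70, 0.55]
rnn_losses = [2.30, 1.90, 1.60, 1.40, 1.25]
print(f"CNN learning: {learning_metric(cnn_losses):.3f}")  # faster relative learning
print(f"RNN learning: {learning_metric(rnn_losses):.3f}")
```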


Saved in: DR-NTU collection, NTU Library, Singapore

Bibliographic Details
Main Author: Wan, Tai Fong
Other Authors: Althea Liang (School of Computer Science and Engineering)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2020
Degree: Bachelor of Engineering (Computer Science)
Project Code: SCSE19-0087
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: https://hdl.handle.net/10356/138001