Self-paced regularization in label distribution learning
Format: Final Year Project
Language: English
Published: 2019
Online Access: http://hdl.handle.net/10356/76943
Institution: Nanyang Technological University
Summary: Label Distribution Learning is a learning paradigm whose output represents the degree to which each label describes an instance. Research into the paradigm has relied on traditional machine learning algorithms and has not considered deep learning as a possible alternative. Deep learning, a sub-discipline of machine learning, has seen a surge in popularity over the years; however, training is time-consuming because it requires repeated iteration over a large training set. To address this problem, we propose a method that combines self-paced regularization with deep learning, in which the deep learning model is presented with progressively larger, and by extension more difficult, subsets of the training data. Experimental results were logged and compared against the traditional deep learning algorithm as well as state-of-the-art machine learning algorithms.
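The summary describes the proposed method only at a high level. The sketch below is a minimal, hypothetical illustration of how self-paced training of a deep label distribution model could look in PyTorch; it is not the project's actual implementation. The network architecture, the use of per-sample KL divergence as the difficulty score, and the pacing schedule `pace_fractions` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative assumptions: a small fully connected network that outputs a
# label distribution, synthetic data, and per-sample KL divergence used both
# as the training loss and as the "difficulty" score for self-paced selection.
torch.manual_seed(0)
n_samples, n_features, n_labels = 1000, 20, 5

X = torch.randn(n_samples, n_features)
Y = F.softmax(torch.randn(n_samples, n_labels), dim=1)  # ground-truth label distributions

model = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_labels))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def per_sample_kl(logits, target):
    # KL(target || predicted); lower values mean the sample is currently "easier".
    log_pred = F.log_softmax(logits, dim=1)
    return (target * (target.clamp_min(1e-12).log() - log_pred)).sum(dim=1)

# Self-paced schedule: start with the easiest fraction of the data and grow it
# stage by stage until the full training set is used.
pace_fractions = [0.25, 0.5, 0.75, 1.0]  # assumed pacing schedule
epochs_per_stage = 5

for frac in pace_fractions:
    with torch.no_grad():
        difficulty = per_sample_kl(model(X), Y)   # score every training sample
    k = int(frac * n_samples)
    easy_idx = torch.argsort(difficulty)[:k]      # keep the k easiest samples

    for _ in range(epochs_per_stage):
        perm = easy_idx[torch.randperm(k)]
        for start in range(0, k, 64):             # mini-batches of 64
            batch = perm[start:start + 64]
            loss = per_sample_kl(model(X[batch]), Y[batch]).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    print(f"stage {frac:.2f}: mean KL on full set = "
          f"{per_sample_kl(model(X), Y).mean().item():.4f}")
```

The hard cutoff on the sorted difficulty scores corresponds to the closed-form solution of the classical hard self-paced regularizer, where each sample weight is 1 if its loss falls below a threshold and 0 otherwise; the thesis itself may use a different regularizer or pacing scheme.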