Disentangled image representation: from affine transforms to facial attributes
| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Thesis-Doctor of Philosophy |
| Language: | English |
| Published: | Nanyang Technological University, 2023 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/166053 |
| Institution: | Nanyang Technological University |
Summary:

Deep learning has shown unprecedented performance on computer vision tasks in recent years. One of its foundations is large datasets with human annotations. However, datasets with human annotations come with inherent drawbacks. First, human annotation is expensive, especially for tasks such as segmentation. Second, the annotations themselves may be incorrect, for example because of the subjective nature of the problem. Last but not least, if we wish an algorithm to evolve in real-world scenarios, it is not possible to keep annotating every surrounding object in real time.
To better utilize such algorithms in real-world scenarios, we want to deploy deep learning with minimal human annotation, for example in an unsupervised or self-supervised manner. More specifically, we tackle this problem from the perspective of generative models and disentangled representation: with generative models, the outputs of the model can be visualized, and with disentangled representation, the different attributes learned by the model can be separated. The combination of these two approaches provides a pathway to aligning the visualized attributes with human intuition.

To learn the disentangled representation in an unsupervised or self-supervised manner, we approach the problem through contrastive learning and inductive bias. With contrastive learning, we can produce more data samples by transforming the original data and comparing the differences between them. With inductive bias, we can formulate a meaningful relationship between the transformed and original data sample pairs. In this thesis, we demonstrate the effectiveness of inductive biases such as affine transforms and facial attributes.
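As an illustration of the contrastive-learning-with-inductive-bias idea described above, the sketch below treats two random affine views of the same image as a positive pair and pulls their embeddings together with an InfoNCE-style loss. This is a minimal reconstruction, not the thesis's actual method: the encoder, image size, loss form, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch: contrastive learning with affine transforms as the
# inductive bias. Illustrative only; not the thesis's actual architecture.
import torch
import torch.nn.functional as F
from torchvision import transforms

# Hypothetical encoder: any network mapping an image to an embedding vector.
encoder = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 64 * 64, 128),
)

# The affine transform supplies the inductive bias: two random affine views
# of the same image are declared a positive pair.
affine_view = transforms.RandomAffine(
    degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)
)

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive loss: matching pairs lie on the diagonal of the
    pairwise similarity matrix."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))      # positives: the diagonal entries
    return F.cross_entropy(logits, targets)

# Toy batch of images (B, C, H, W); each call draws a fresh affine view.
# Note: RandomAffine applied to a batched tensor samples one transform for
# the whole batch, which is sufficient for this sketch.
images = torch.rand(8, 3, 64, 64)
z1 = encoder(affine_view(images))
z2 = encoder(affine_view(images))
info_nce_loss(z1, z2).backward()
```

In the same spirit, a facial-attribute inductive bias would presumably replace the affine transform with an attribute change when forming the pair; the pairing-and-comparison structure stays the same.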
In summary, this thesis contributes to disentangled image representation, which provides a pathway for us to understand the output of a generative model in a more vivid manner by visualizing the results and aligning them with human intuition.