When contrastive learning meets clustering: explore inter-image contrast for image representation learning
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/148079
Institution: Nanyang Technological University
Summary: Self-supervised learning has gained immense popularity in deep learning research because it removes the need to label vast amounts of data. Among self-supervised methods, contrastive learning is a paradigm that has demonstrated high potential for representation learning. Recent methods such as SimCLR and MoCo have delivered impressive performance, close to the state-of-the-art results produced by their supervised counterparts. Popular contrastive learning methods rely on instance discrimination to generate representations that remain invariant under different transformations. This exploits intra-image invariance: a single image is constrained to have similar representations across various visual transformations and different representations from all other images. However, such a constraint is too strict, in the sense that two different images can still look visually alike and embed similar semantics. In other words, current methods neglect inter-image invariance, since a group of similar images can also share invariant structure. This project therefore explores the effect of inter-image invariance on representation learning by combining contrastive learning and clustering. Our model improved performance on downstream tasks such as classification and outperformed the baseline models by a large margin.
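
The record only summarizes the approach, so the sketch below is a hedged illustration of the general idea rather than the project's actual method. It pairs a SimCLR-style instance-discrimination loss (intra-image invariance) with a loss that treats images sharing a cluster assignment, e.g. from k-means on the current embeddings, as additional positives (inter-image invariance). The function names, the temperature value, and the pseudo-label scheme are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # Intra-image (instance discrimination) loss, SimCLR style.
    # z1, z2: (N, D) embeddings of two augmented views of the same N images.
    # Each embedding's only positive is the other view of its image;
    # the remaining 2N - 2 embeddings in the batch serve as negatives.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D)
    sim = z @ z.t() / temperature                           # cosine-similarity logits
    sim.fill_diagonal_(float("-inf"))                       # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n),            # view 1 -> view 2
                         torch.arange(0, n)]).to(z.device)  # view 2 -> view 1
    return F.cross_entropy(sim, targets)

def cluster_contrastive_loss(z, cluster_ids, temperature=0.5):
    # Inter-image loss: images assigned to the same cluster (pseudo-label)
    # count as extra positives, relaxing instance discrimination's strict
    # one-positive-per-image constraint.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))
    log_prob = F.log_softmax(sim, dim=1)
    pos_mask = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
    pos_mask.fill_diagonal_(False)
    # average log-likelihood over each anchor's same-cluster positives
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    has_pos = pos_mask.any(dim=1)                           # skip singleton clusters
    return loss[has_pos].mean()
```

In a full training loop one would typically re-cluster the embeddings periodically and weight the two losses against each other, but the summary does not specify those details.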