Self-supervised and supervised contrastive learning

There has been a recent surge of interest in contrastive learning due to its success in self-supervised learning for vision-related tasks. The main goal of contrastive learning is to guide a model to learn an embedding space in which samples from the same class are pulled closer together and samples from different classes are pushed apart. This project explores contrastive learning for computer vision in both a self-supervised and a supervised manner. First, the self-supervised contrastive learning framework introduced in SimCLR is implemented and an experiment is conducted on the CIFAR-10 dataset. Next, contrastive learning is explored in a supervised setting, as introduced in the Supervised Contrastive Learning framework. This technique is used to learn representations from the multi-domain DomainNet dataset, and the transferability of the learned representations is then evaluated on other downstream datasets. The fixed-feature linear evaluation protocol is used to evaluate transferability on seven downstream datasets chosen across different domains, and the results are compared against a baseline model trained with the widely used cross-entropy loss. Empirical results show that, on average, the supervised contrastive learning model performed 6.05% better than the baseline model across the seven downstream datasets. The findings suggest that supervised contrastive learning models can learn more robust and better representations than cross-entropy models when trained on a multi-domain dataset.
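As background for the frameworks the abstract names: SimCLR creates two augmented views of each image and trains the encoder with the NT-Xent (normalised temperature-scaled cross-entropy) loss, which treats the two views of the same image as positives and every other sample in the batch as a negative. A minimal PyTorch sketch of this loss follows; it is illustrative only, not code from the project, and the temperature value of 0.5 is an assumption.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss as used in SimCLR.

    z1, z2: (B, D) projection-head outputs for the two augmented
    views of a batch of B images.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D) unit vectors
    n = z.shape[0]
    sim = z @ z.T / temperature                         # pairwise similarity logits
    sim.fill_diagonal_(float("-inf"))                   # a view is never its own positive
    # the positive of view i is its counterpart in the other half of the batch
    pos = torch.arange(n, device=z.device).roll(n // 2)
    return F.cross_entropy(sim, pos)
```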
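The Supervised Contrastive Learning framework generalises this objective using labels: every sample that shares the anchor's class counts as a positive, not only the anchor's other augmented view. A rough sketch of the SupCon loss (the "summation outside the log" variant of Khosla et al., 2020) is given below, again as an illustration rather than the thesis's implementation; it assumes L2-normalised embeddings and a temperature of 0.1.

```python
import torch

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss.

    features: (N, D) L2-normalised embeddings of every view in the batch.
    labels:   (N,) class labels; augmented views inherit their image's label.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature                  # similarity logits
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()   # numerical stability
    not_self = ~torch.eye(n, dtype=torch.bool, device=features.device)
    # positives: other samples carrying the same label as the anchor
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # log-probability of each candidate pair against all non-anchor samples
    exp_sim = torch.exp(sim) * not_self
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    # average over each anchor's positives, then over anchors
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```

When each sample's only positive is its own second view, this reduces to the NT-Xent loss above, which is why the two objectives pair naturally in this project.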
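The fixed-feature linear evaluation protocol used to measure transferability freezes the pretrained encoder and trains only a linear classifier on its frozen features, so downstream accuracy reflects representation quality rather than fine-tuning capacity. A generic sketch under assumed PyTorch conventions follows; encoder, feat_dim, and the optimiser settings are placeholders, not the project's actual configuration.

```python
import torch
import torch.nn as nn

def linear_eval(encoder, train_loader, feat_dim, num_classes,
                epochs=30, device="cpu"):
    """Fixed-feature linear evaluation: train a linear head on frozen features."""
    encoder = encoder.to(device).eval()       # eval mode: no BN/dropout updates
    for p in encoder.parameters():
        p.requires_grad = False               # encoder weights stay fixed
    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():             # features come from the frozen encoder
                feats = encoder(x)
            loss = ce(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```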

Bibliographic Details
Main Author: Tan, Alvin De Jun
Other Authors: Yeo Chai Kiat (School of Computer Science and Engineering)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2023
Degree: Bachelor of Engineering (Computer Science)
Project Code: SCSE22-0246
Subjects: Engineering::Computer science and engineering
Citation: Tan, A. D. J. (2023). Self-supervised and supervised contrastive learning. Final Year Project (FYP), Nanyang Technological University, Singapore.
Online Access: https://hdl.handle.net/10356/166289