Self-supervised and supervised contrastive learning
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/166289
Abstract: There has been a recent surge of interest in contrastive learning due to its success in self-supervised learning for vision-related tasks. The main goal of contrastive learning is to guide a model to learn an embedding space in which samples from the same class are pulled closer together and samples from different classes are pushed apart. This project explores contrastive learning for computer vision in both a self-supervised and a supervised manner. First, the self-supervised contrastive learning framework introduced in SimCLR is implemented and an experiment is conducted on the CIFAR10 dataset. Next, contrastive learning is explored in a supervised setting, as introduced in the Supervised Contrastive Learning framework. This technique is used to learn representations from the multi-domain DomainNet dataset, and the transferability of the learned representations is then evaluated on other downstream datasets. The fixed-feature linear evaluation protocol is used to assess transferability on 7 downstream datasets chosen across different domains. The results are compared against a baseline model trained with the widely used cross-entropy loss. Empirical results from the experiments showed that, on average, the supervised contrastive learning model performed 6.05% better than the baseline model on the 7 downstream datasets. The findings suggest that supervised contrastive learning models can potentially learn more robust and better representations than cross-entropy models when trained on a multi-domain dataset.
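The record itself contains no code, but the objective described in the abstract (pulling same-class embeddings together and pushing different-class embeddings apart) can be illustrated with a minimal sketch of a supervised contrastive (SupCon-style) loss. This is an illustrative assumption, not the project's actual implementation; the function name `supcon_loss` and the temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Minimal sketch of a supervised contrastive loss.

    features: (N, D) L2-normalised embeddings from a projection head.
    labels:   (N,) integer class labels.
    For each anchor, every other sample with the same label is a positive;
    all remaining samples in the batch act as negatives.
    """
    n = features.shape[0]
    device = features.device

    # Pairwise cosine similarities (features are assumed normalised), scaled by temperature.
    sim = torch.matmul(features, features.T) / temperature

    # Exclude self-similarity on the diagonal.
    self_mask = torch.eye(n, dtype=torch.bool, device=device)
    sim = sim.masked_fill(self_mask, float("-inf"))

    # Positive-pair mask: same label, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Row-wise log-softmax gives log p(j | i) over all candidates j for anchor i.
    log_prob = F.log_softmax(sim, dim=1)

    # Average log-probability over positives, for anchors that have at least one positive.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(dim=1)[valid] / pos_counts[valid]

    # Minimising the negative mean pulls same-class pairs together and pushes other pairs apart.
    return -mean_log_prob_pos.mean()

# Example usage with random embeddings and labels.
feats = F.normalize(torch.randn(8, 4), dim=1)
labels = torch.randint(0, 3, (8,))
print(supcon_loss(feats, labels))
```

Setting every sample's positives to be only its own augmented view recovers the self-supervised (SimCLR-style NT-Xent) case described earlier in the abstract.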