IMPROVING SELF-SUPERVISED REPRESENTATION LEARNING IN MOCO V2 WITH QUEUE OPTIMIZATION
This research is motivated by the need to improve the performance of self-supervised learning models, particularly the Momentum Contrastive version 2 (MoCo v2) architecture. The study aims to develop a more robust and accurate MoCo v2 model by adding a K-Nearest Neighbors (KNN) mechanism to the queue. The method is experimental: the MoCo v2 architecture is modified by integrating KNN to filter the queue and select stronger positive representations. The results show that MoCo v2 + KNN improves accuracy by 5%, reaching 87.7% on the CIFAR-10 dataset, compared to the MoCo v2 baseline model. Using KNN to filter the queue proves effective in selecting more discriminative positive representations, thereby enhancing model performance. Furthermore, MoCo v2 + KNN is more resilient to large queue sizes, overcoming MoCo v2's sensitivity to excessive queue size. The research also uses strong data augmentation, which previous studies have shown to be effective in increasing model robustness. In conclusion, adding KNN-based queue filtering to MoCo v2 improved accuracy by 5% over the baseline, improved resilience to queue size, and improved the overall performance of the self-supervised learning model. The research highlights the potential of incorporating KNN into MoCo v2 for advancing self-supervised learning in object detection.
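The abstract describes KNN being used to filter the MoCo v2 queue so that queue entries resembling a query's positive are not treated as negatives. The thesis itself is not reproduced in this record, so the following is only a minimal PyTorch sketch of one plausible reading of that idea: the `knn_k` queue entries most similar to each query are masked out of the negative set before the standard InfoNCE loss is computed. The function name `knn_filtered_contrastive_loss`, the masking rule, and all hyperparameter values are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch of KNN-based queue filtering for a MoCo v2-style
# contrastive loss. The filtering rule and all names are assumptions,
# not taken from the thesis.
import torch
import torch.nn.functional as F


def knn_filtered_contrastive_loss(q, k, queue, knn_k=5, temperature=0.2):
    """InfoNCE loss in which the knn_k queue entries closest to each query
    are excluded from the negatives (treated as probable positives).

    q:     (N, D) query embeddings, L2-normalized
    k:     (N, D) key embeddings from the momentum encoder, L2-normalized
    queue: (K, D) memory queue of past key embeddings, L2-normalized
    """
    n = q.size(0)
    # Positive logits: one per query, from its own momentum-encoder key.
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)        # (N, 1)
    # Negative logits against every entry in the queue.
    l_neg = torch.einsum("nd,kd->nk", q, queue)                 # (N, K)

    # KNN step: mask out, for each query, the knn_k most similar queue
    # entries so they no longer act as negatives in the loss.
    _, nn_idx = l_neg.topk(knn_k, dim=1)                        # (N, knn_k)
    mask = torch.zeros_like(l_neg, dtype=torch.bool)
    mask.scatter_(1, nn_idx, True)
    l_neg = l_neg.masked_fill(mask, float("-inf"))

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature     # (N, 1 + K)
    labels = torch.zeros(n, dtype=torch.long, device=q.device)  # positive is index 0
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    q = F.normalize(torch.randn(8, 128), dim=1)
    k = F.normalize(torch.randn(8, 128), dim=1)
    queue = F.normalize(torch.randn(4096, 128), dim=1)
    print(knn_filtered_contrastive_loss(q, k, queue).item())
```

Other readings of the abstract are possible, for example treating the retrieved neighbours as additional positives rather than simply dropping them; the sketch only illustrates where a KNN step can slot into the queue-based MoCo v2 loss.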
Main Author: Jofandi, Gugun
Format: Theses
Language: Indonesia
Online Access: https://digilib.itb.ac.id/gdl/view/87791
Institution: Institut Teknologi Bandung
Subjects: Self-Supervised Learning, Momentum Contrastive (MoCo v2), K-Nearest Neighbors (KNN), Data Augmentation, Object Detection
Collection: Digital ITB
Record ID: id-itb.:87791