An empirical study on adaptation methods for large-scale vision-language models
Since the rise of powerful large-scale pre-trained Vision-Language (VL) models such as CLIP and ALIGN, pre-training followed by fine-tuning has become a promising paradigm for building transferable models for different downstream tasks. However, fine-tuning the whole pre-trained VL model is often prohibitive, both because full fine-tuning requires substantial computational resources and because fine-tuning a large model is unstable when the amount of available data is limited. Various parameter-efficient fine-tuning methods have therefore been proposed to adapt VL models effectively. These methods follow different design concepts and are applied to different parts of the pre-trained model. We chose CLIP as a representative VL model and conducted a systematic empirical study of adaptation methods applied to different parts of CLIP, namely Prompt Tuning, CLIP-Adapter, and LayerNorm Tuning. We carefully chose five benchmark datasets with different characteristics, e.g., varying inter-class and intra-class variance, to examine the performance of the different fine-tuning methods. Extensive experiments show that each current fine-tuning method has its own strengths and weaknesses depending on the dataset. Based on these findings, we propose a hybrid fine-tuning strategy that combines the individual methods so as to leverage the advantages of each technique while mitigating its drawbacks. Extensive experiments show that each hybrid fine-tuned model obtained with our strategy is effective and efficient.
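The abstract names three parameter-efficient adaptation schemes without defining them. Below is a minimal, illustrative PyTorch sketch of two of them: LayerNorm Tuning, in which only the LayerNorm affine parameters remain trainable while the rest of the backbone is frozen, and a residual bottleneck adapter in the spirit of CLIP-Adapter. It is not the implementation studied in this project; the 512-dimensional feature size, the reduction factor of 4, and the residual ratio of 0.2 are assumptions chosen purely for illustration.

```python
# Illustrative sketch only (not the project's code): two parameter-efficient
# ways to adapt a frozen backbone such as CLIP's image encoder.
import torch
import torch.nn as nn


def freeze_all_but_layernorm(model: nn.Module) -> None:
    """LayerNorm Tuning: freeze every parameter except LayerNorm affine terms."""
    for param in model.parameters():
        param.requires_grad = False
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            for param in module.parameters():
                param.requires_grad = True


class ResidualAdapter(nn.Module):
    """Bottleneck MLP blended with the frozen feature, in the spirit of CLIP-Adapter.

    The feature dimension (512), reduction factor (4), and residual ratio (0.2)
    are illustrative assumptions, not the settings used in the report.
    """

    def __init__(self, dim: int = 512, reduction: int = 4, ratio: float = 0.2):
        super().__init__()
        self.ratio = ratio
        self.bottleneck = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Blend the adapted feature with the original frozen feature.
        return self.ratio * self.bottleneck(feat) + (1.0 - self.ratio) * feat


if __name__ == "__main__":
    # Stand-in for a batch of frozen CLIP image features.
    feats = torch.randn(8, 512)
    adapter = ResidualAdapter()
    print(adapter(feats).shape)  # torch.Size([8, 512])
```

In both cases the pre-trained weights stay frozen, so only a small fraction of the model's parameters is updated, which is what makes such methods practical when compute and downstream data are limited.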
Saved in:
Main Author: | Wang, Annan |
---|---|
Other Authors: | Chen Change Loy |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision |
Online Access: | https://hdl.handle.net/10356/165970 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-165970 |
---|---|
record_format | dspace |
spelling | Record sg-ntu-dr.10356-165970 (2023-04-21T15:37:34Z). Title: An empirical study on adaptation methods for large-scale vision-language models. Author: Wang, Annan. Supervisor: Chen Change Loy (ccloy@ntu.edu.sg), School of Computer Science and Engineering. Degree: Bachelor of Engineering (Computer Science). Deposited: 2023-04-17T06:54:57Z. Published: 2023. Type: Final Year Project (FYP). Citation: Wang, A. (2023). An empirical study on adaptation methods for large-scale vision-language models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/165970. Language: en. File format: application/pdf. Publisher: Nanyang Technological University. |
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision |
description | Since the rise of powerful large-scale pre-trained Vision-Language (VL) models such as CLIP and ALIGN, pre-training followed by fine-tuning has become a promising paradigm for building transferable models for different downstream tasks. However, fine-tuning the whole pre-trained VL model is often prohibitive, both because full fine-tuning requires substantial computational resources and because fine-tuning a large model is unstable when the amount of available data is limited. Various parameter-efficient fine-tuning methods have therefore been proposed to adapt VL models effectively. These methods follow different design concepts and are applied to different parts of the pre-trained model. We chose CLIP as a representative VL model and conducted a systematic empirical study of adaptation methods applied to different parts of CLIP, namely Prompt Tuning, CLIP-Adapter, and LayerNorm Tuning. We carefully chose five benchmark datasets with different characteristics, e.g., varying inter-class and intra-class variance, to examine the performance of the different fine-tuning methods. Extensive experiments show that each current fine-tuning method has its own strengths and weaknesses depending on the dataset. Based on these findings, we propose a hybrid fine-tuning strategy that combines the individual methods so as to leverage the advantages of each technique while mitigating its drawbacks. Extensive experiments show that each hybrid fine-tuned model obtained with our strategy is effective and efficient. |
author2 | Chen Change Loy |
format | Final Year Project |
author | Wang, Annan |
title | An empirical study on adaptation methods for large-scale vision-language models |
publisher | Nanyang Technological University |
publishDate | 2023 |
url | https://hdl.handle.net/10356/165970 |