An empirical study on adaptation methods for large-scale vision-language models



Bibliographic Details
Main Author: Wang, Annan
Other Authors: Chen Change Loy
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/165970
Institution: Nanyang Technological University
Description
Summary: Since the rise of powerful large-scale pre-trained Vision-Language (VL) models, such as CLIP and ALIGN, pre-training and fine-tuning have become promising paradigms for building transferable models for different downstream tasks. However, it is often prohibitive to fine-tune the whole pre-trained VL model, due to the high computational resources required for full fine-tuning and the instability of fine-tuning a large model when the amount of available data is limited. Thus, various parameter-efficient fine-tuning methods have been proposed to adapt VL models effectively. These fine-tuning methods follow different design concepts and are applied to different parts of the pre-trained model. We chose CLIP as a representative VL model and conducted a systematic empirical study of adaptation methods applied to different parts of CLIP, namely Prompt Tuning, CLIP-Adapter, and LayerNorm Tuning. We carefully chose 5 benchmark datasets with different characteristics, e.g., varying inter-class and intra-class variance, to examine the performance of the different fine-tuning methods. Extensive experiments show that each current fine-tuning method has its own strengths and weaknesses on different datasets. Based on the analysis and experimental findings, we propose a hybrid fine-tuning strategy that effectively incorporates various fine-tuning methods, leveraging the advantages of each technique while mitigating their respective drawbacks. Extensive experiments show that each hybrid fine-tuned model obtained by our strategy is both effective and efficient.
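Of the three adaptation methods studied, LayerNorm Tuning is the simplest to illustrate: all pre-trained weights are frozen except the affine parameters of the LayerNorm modules. The sketch below is a minimal, hypothetical illustration of that idea using a small PyTorch transformer as a stand-in for CLIP's actual encoders (the toy model and sizes are assumptions, not the thesis's setup).

```python
import torch.nn as nn

# Stand-in for a pre-trained encoder (hypothetical toy model, not CLIP).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)

# Freeze every parameter, then unfreeze only LayerNorm weights and biases.
for p in encoder.parameters():
    p.requires_grad = False
for m in encoder.modules():
    if isinstance(m, nn.LayerNorm):
        for p in m.parameters():
            p.requires_grad = True

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable parameters: {trainable} / {total}")
```

Because each LayerNorm holds only a per-feature scale and bias, the trainable parameter count is a tiny fraction of the full model, which is what makes this family of methods attractive when data and compute are limited.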