Contrastive knowledge transfer from CLIP for open vocabulary object detection


Bibliographic Details
Main Author: Zhang, Chuhan
Other Authors: Hanwang Zhang
Format: Thesis-Master by Research
Language:English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/172024
Institution: Nanyang Technological University
Description
Summary: Object detection has made remarkable progress in recent years. However, in real-world scenarios a model is expected to generalize to novel objects that it was never explicitly trained on. Although pre-trained vision-language models have shown powerful results on zero-shot classification tasks, adapting them to detection is non-trivial because detection involves region-level reasoning as well as non-semantic localization. In this dissertation, a method built on a DETR-style architecture and contrastive distillation is proposed. It utilizes the CLIP model to provide semantically rich features as priors for querying novel objects. In addition, the model is trained to align with CLIP in a latent space via a contrastive loss, enabling it to distinguish unseen classes. The effectiveness of the proposed method is supported by experimental results of 65.3 novel AR and 23.4 novel mAP on the MSCOCO dataset. Its variants outperform their counterparts by 3.5 mAP and 3.1 mAP respectively. The proposed contrastive distillation loss can also be integrated into other frameworks and achieves the best performance. The significance of the different modules is revealed through ablation and visualization studies. The qualitative analysis demonstrates the potential of the proposed method as an effective on-the-fly detector. In the final part, a discussion section analyzes the critical factors that contribute to open-vocabulary object detection, providing a unified perspective on reconstruction loss and contrastive loss and offering an interpretation of feature transfer in open-vocabulary scenarios.
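The abstract describes aligning detector region features with CLIP features in a latent space via a contrastive loss. A common formulation of such a loss is an InfoNCE-style objective: matched detector/CLIP feature pairs act as positives and all other pairs in the batch as negatives. The sketch below is a minimal NumPy illustration of that idea, not the thesis's actual implementation; the function name, temperature value, and batch setup are assumptions.

```python
import numpy as np

def contrastive_distillation_loss(student, teacher, temperature=0.1):
    """InfoNCE-style loss pulling student (detector) region features
    toward teacher (CLIP) region features. Both arrays have shape (N, D),
    with row i of `student` corresponding to row i of `teacher`."""
    # L2-normalize so the dot product becomes cosine similarity
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    # Pairwise similarity logits, sharpened by the temperature
    logits = s @ t.T / temperature
    # Softmax cross-entropy with the diagonal (matched pairs) as targets
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
# Perfectly aligned features yield a near-zero loss; unrelated
# features yield a clearly higher one.
aligned = contrastive_distillation_loss(feats, feats)
mismatched = contrastive_distillation_loss(feats, rng.normal(size=(8, 16)))
print(aligned < mismatched)
```

In practice the student features would come from the detector's region queries and the teacher features from CLIP's image encoder applied to the corresponding region crops; minimizing this loss pushes the two embedding spaces together, which is what lets the detector score unseen classes against CLIP text embeddings.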