Multimodal few-shot classification without attribute embedding
Multimodal few-shot learning aims to exploit the complementary information inherent in multiple modalities for vision tasks in low-data scenarios. Most current research focuses on finding a suitable embedding space for the various modalities. While embedding-based solutions provide state-of-the-art re...
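For context, the sketch below illustrates the kind of embedding-based multimodal few-shot classifier the abstract alludes to: each modality is projected into a shared embedding space, fused, and queries are matched to class prototypes built from the few labelled support samples. All module names, dimensions, and the fusion-by-averaging choice are illustrative assumptions; this is a generic prototype-style baseline, not the method proposed in the article.

```python
# Minimal sketch of an embedding-based multimodal few-shot classifier
# (prototype-style). Names and dimensions are hypothetical; this does
# NOT reproduce the article's proposed method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalProtoClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=300, emb_dim=128):
        super().__init__()
        # Separate projections map each modality into a shared embedding space.
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.txt_proj = nn.Linear(txt_dim, emb_dim)

    def embed(self, img_feat, txt_feat):
        # Fuse modalities by averaging their normalized projected embeddings.
        z_img = F.normalize(self.img_proj(img_feat), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return F.normalize(z_img + z_txt, dim=-1)

    def forward(self, support_img, support_txt, support_lbl,
                query_img, query_txt, n_way):
        # Class prototypes: mean embedding of the few labelled support samples.
        z_support = self.embed(support_img, support_txt)   # [n_support, emb_dim]
        z_query = self.embed(query_img, query_txt)         # [n_query, emb_dim]
        prototypes = torch.stack(
            [z_support[support_lbl == c].mean(dim=0) for c in range(n_way)]
        )                                                   # [n_way, emb_dim]
        # Classify queries by negative Euclidean distance to each prototype.
        return -torch.cdist(z_query, prototypes)            # [n_query, n_way]
```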
| Main Authors: | Chang, Jun Qing; Rajan, Deepu; Vun, Nicholas |
| --- | --- |
| Other Authors: | School of Computer Science and Engineering |
| Format: | Article |
| Language: | English |
| Published: | 2024 |
| Online Access: | https://hdl.handle.net/10356/175469 |
| Institution: | Nanyang Technological University |
Similar Items
- FEW-SHOT IMAGE RECOGNITION AND OBJECT DETECTION
  by: LI YITING
  Published: (2023)
- Few-shot vision recognition and generation for the open-world
  by: Song, Nan
  Published: (2024)
- Few-shot learning in Wi-Fi-based indoor positioning
  by: Xie, Feng, et al.
  Published: (2024)
- Learning to Self-Train for Semi-Supervised Few-Shot Classification
  by: Xinzhe Li, et al.
  Published: (2020)
- Few-shot fine-grained classification with Spatial Attentive Comparison
  by: Ruan, Xiaoqian, et al.
  Published: (2022)