Region-adaptive concept aggregation for few-shot visual recognition

Few-shot learning (FSL) aims to learn novel concepts from very limited examples. However, most FSL methods lack robustness in concept learning: they typically ignore the diversity of region contents, which may contain concept-irrelevant information such as the background, introducing bias and noise that degrade conceptual representation learning. To address this issue, we propose a novel metric-based FSL method, the region-adaptive concept aggregation network (RCA-Net). Specifically, we devise a region-adaptive concept aggregator (RCA) that models the relationships among different regions and captures the conceptual information in each region; the regional information is then integrated by a weighted average to obtain the conceptual representation. Robust concept learning is thus achieved by attending more to concept-relevant information and less to concept-irrelevant information. Extensive experiments on three popular visual recognition benchmarks demonstrate the superiority of RCA-Net for robust few-shot learning. In particular, on the Caltech-UCSD Birds-200-2011 (CUB200) dataset, RCA-Net improves 1-shot accuracy from 74.76% to 78.03% and 5-shot accuracy from 86.84% to 89.83% over the most competitive counterpart.
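The abstract describes the RCA module only at a high level (region-adaptive weights followed by a weighted average over region features). The sketch below is a minimal PyTorch illustration of that idea; the class name, the softmax scoring head, and all dimensions are hypothetical assumptions for illustration, not the authors' actual RCA implementation.

    # Illustrative sketch only: softmax attention over region features is an
    # assumed weighting scheme, not the paper's actual RCA module.
    import torch
    import torch.nn as nn

    class RegionAdaptiveAggregator(nn.Module):
        """Aggregate per-region features into one concept vector via
        learned, region-adaptive weights (hypothetical design)."""

        def __init__(self, feat_dim: int):
            super().__init__()
            # One relevance score per region, computed from its feature vector.
            self.score = nn.Linear(feat_dim, 1)

        def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
            # region_feats: (batch, num_regions, feat_dim), e.g. the flattened
            # spatial cells of a CNN feature map.
            weights = torch.softmax(self.score(region_feats), dim=1)  # (B, R, 1)
            # Weighted average: concept-relevant regions contribute more,
            # background-like regions less.
            return (weights * region_feats).sum(dim=1)  # (B, feat_dim)

    # Usage: aggregate 25 region features (a 5x5 feature map) of dimension 640.
    agg = RegionAdaptiveAggregator(feat_dim=640)
    concept = agg(torch.randn(4, 25, 640))  # -> shape (4, 640)

In the paper's metric-based setting, the aggregated concept vectors of support and query images would then be compared with a distance metric; that classification step is omitted here.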


Bibliographic Details
Main Authors: Han, Mengya, Zhan, Yibing, Yu, Baosheng, Luo, Yong, Hu, Han, Du, Bo, Wen, Yonggang, Tao, Dacheng
Affiliation: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2023
Subjects: Engineering::Computer science and engineering; Concept-Aggregation; Concept Learning
Online Access:https://hdl.handle.net/10356/169205
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-169205
Journal: Machine Intelligence Research
ISSN: 2731-538X
DOI: 10.1007/s11633-022-1358-8
Scopus ID: 2-s2.0-85149140622
Citation: Han, M., Zhan, Y., Yu, B., Luo, Y., Hu, H., Du, B., Wen, Y. & Tao, D. (2023). Region-adaptive concept aggregation for few-shot visual recognition. Machine Intelligence Research. https://dx.doi.org/10.1007/s11633-022-1358-8
Funding: This work was supported by the National Natural Science Foundation of China (No. 62002090), the Major Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Key Project (No. 2021ZD0111700) and the Special Fund of Hubei Luojia Laboratory, China (No. 220100014).
Rights: © 2023 Institute of Automation, Chinese Academy of Sciences and Springer-Verlag GmbH Germany, part of Springer Nature. All rights reserved.
Collection: DR-NTU (NTU Library)