Improving zero-shot learning baselines with commonsense knowledge

Zero-shot learning — the problem of training and testing on completely disjoint sets of classes — relies greatly on a model's ability to transfer knowledge from train classes to test classes. Traditionally, semantic embeddings consisting of human-defined attributes or distributed word embeddings are used to facilitate this transfer by improving the association between visual and semantic embeddings. In this paper, we take advantage of explicit relations between nodes defined in ConceptNet, a commonsense knowledge graph, to generate commonsense embeddings of the class labels by using a graph convolution network-based autoencoder. In experiments on three standard benchmark datasets, fusing our commonsense embeddings with existing semantic embeddings, i.e., human-defined attributes and distributed word embeddings, surpasses the strong baselines. This work paves the path to more brain-inspired approaches to zero-shot learning.
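The record does not include the paper's implementation, but the idea described in the abstract (commonsense embeddings of class labels produced by a graph convolution network-based autoencoder over a ConceptNet-style graph, then fused with existing semantic embeddings) can be sketched roughly as follows. Everything below is an illustrative assumption rather than the authors' actual code: the graph construction, layer sizes, inner-product decoder, and concatenation-based fusion are placeholders.

import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    # Symmetrically normalise A + I, the standard GCN propagation matrix.
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, activate=True):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.activate = activate

    def forward(self, x, adj_norm):
        h = self.linear(adj_norm @ x)
        return torch.relu(h) if self.activate else h

class GCNAutoencoder(nn.Module):
    # Encode graph nodes into low-dimensional "commonsense" embeddings and
    # reconstruct the adjacency with an inner-product decoder.
    def __init__(self, in_dim, hidden_dim=256, emb_dim=128):
        super().__init__()
        self.enc1 = GCNLayer(in_dim, hidden_dim)
        self.enc2 = GCNLayer(hidden_dim, emb_dim, activate=False)

    def forward(self, x, adj_norm):
        z = self.enc2(self.enc1(x, adj_norm), adj_norm)  # node embeddings
        adj_recon = torch.sigmoid(z @ z.t())             # reconstructed edges
        return z, adj_recon

# Toy data: 1000 ConceptNet-style nodes with 300-dimensional initial features.
num_nodes, feat_dim = 1000, 300
adj = (torch.rand(num_nodes, num_nodes) > 0.99).float()
adj = ((adj + adj.t()) > 0).float()                      # make it symmetric
x = torch.randn(num_nodes, feat_dim)
adj_norm = normalize_adjacency(adj)

model = GCNAutoencoder(feat_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                                      # a few demo training steps
    z, adj_recon = model(x, adj_norm)
    loss = nn.functional.binary_cross_entropy(adj_recon, adj)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Fuse: take the embeddings of the graph nodes that correspond to class
# labels and concatenate them with an existing semantic embedding
# (human-defined attributes or distributed word vectors).
class_node_ids = torch.tensor([3, 57, 402])              # hypothetical label nodes
attribute_emb = torch.randn(3, 85)                       # e.g. 85-d attribute vectors
fused_class_emb = torch.cat([z[class_node_ids].detach(), attribute_emb], dim=1)

In a full zero-shot pipeline, these fused class embeddings would then typically be aligned with visual features so that images of unseen classes can be matched to their labels, which is the association the abstract refers to.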


Bibliographic Details
Main Authors: Roy, Abhinaba; Ghosal, Deepanway; Cambria, Erik; Majumder, Navonil; Mihalcea, Rada; Poria, Soujanya
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Commonsense Knowledge; Zero-shot Learning
Online Access: https://hdl.handle.net/10356/170538
Institution: Nanyang Technological University
Citation: Roy, A., Ghosal, D., Cambria, E., Majumder, N., Mihalcea, R. & Poria, S. (2022). Improving zero-shot learning baselines with commonsense knowledge. Cognitive Computation, 14(6), 2212-2222.
DOI: 10.1007/s12559-022-10044-0
ISSN: 1866-9956
Scopus ID: 2-s2.0-85134593579
Funding: This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project #A18A2b0046) and Project T2MOE2008 awarded by Singapore's MoE under its Tier-2 Grant Scheme.
Rights: © 2022 The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved.
Collection: DR-NTU (NTU Library)