(Un)likelihood training for interpretable embedding

Cross-modal representation learning has become the new normal for bridging the semantic gap between text and visual data. Learning modality-agnostic representations in a continuous latent space, however, is often treated as a black-box, data-driven training process. It is well known that the effectiveness of representation learning depends heavily on the quality and scale of training data. For video representation learning, having a complete set of labels that annotate the full spectrum of video content for training is highly difficult, if not impossible. These two issues, black-box training and dataset bias, make representation learning practically difficult to deploy for video understanding because its results are unexplainable and unpredictable. In this article, we propose two novel training objectives, likelihood and unlikelihood functions, to unroll the semantics behind embeddings while addressing the label-sparsity problem in training. Likelihood training aims to interpret the semantics of embeddings beyond the training labels, while unlikelihood training leverages prior knowledge for regularization to ensure a semantically coherent interpretation. With both training objectives, a new encoder-decoder network, which learns an interpretable cross-modal representation, is proposed for ad-hoc video search. Extensive experiments on the TRECVid and MSR-VTT datasets show that the proposed network outperforms several state-of-the-art retrieval models by a statistically significant performance margin.
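
The abstract pairs a likelihood term, which decodes an embedding into its annotated concepts, with an unlikelihood term that uses prior knowledge to suppress semantically incoherent concepts. As a rough illustration only, the Python sketch below shows what such a pair of objectives over a concept vocabulary could look like; the decoder, the mask names (pos_mask, neg_mask), and the sigmoid parameterization are assumptions for this example, not the paper's actual implementation.

import torch
import torch.nn as nn

class ConceptDecoder(nn.Module):
    """Maps a cross-modal embedding to per-concept probabilities."""
    def __init__(self, embed_dim: int, vocab_size: int):
        super().__init__()
        self.fc = nn.Linear(embed_dim, vocab_size)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Independent Bernoulli probability for each concept in the vocabulary.
        return torch.sigmoid(self.fc(z))

def interpretability_loss(probs, pos_mask, neg_mask, eps=1e-8):
    # Likelihood: raise the probability of concepts annotated for the sample.
    likelihood = -(pos_mask * torch.log(probs + eps)).sum(dim=1)
    # Unlikelihood: lower the probability of concepts that prior knowledge
    # marks as incoherent with the annotated ones.
    unlikelihood = -(neg_mask * torch.log(1.0 - probs + eps)).sum(dim=1)
    return (likelihood + unlikelihood).mean()

# Example: batch of 2 embeddings, vocabulary of 5 concepts.
decoder = ConceptDecoder(embed_dim=16, vocab_size=5)
z = torch.randn(2, 16)
pos_mask = torch.tensor([[1., 0., 0., 1., 0.], [0., 1., 0., 0., 0.]])
neg_mask = torch.tensor([[0., 1., 1., 0., 0.], [1., 0., 0., 0., 1.]])
loss = interpretability_loss(decoder(z), pos_mask, neg_mask)
loss.backward()

In a setup like this, the negative mask would come from the prior-knowledge source (for example, concepts mutually exclusive with the annotated ones), which is what lets the unlikelihood term act as a regularizer instead of requiring exhaustive labels.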

Bibliographic Details
Main Authors: WU, Jiaxin; NGO, Chong-wah; CHAN, Wing-Kwong; HOU, Zhijian
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Cross-modal representation learning; Explainable embedding; Neural networks; Video search; Artificial Intelligence and Robotics
Online Access:https://ink.library.smu.edu.sg/sis_research/9819
https://ink.library.smu.edu.sg/context/sis_research/article/10819/viewcontent/2207.00282v3.pdf
DOI: 10.1145/3632752
License: Creative Commons BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems