Interpretable embedding for ad-hoc video search

Answering queries with semantic concepts has long been the mainstream approach to video search. Only recently has its performance been surpassed by the concept-free approach, which embeds queries into the same joint space as videos. Nevertheless, neither the embedded features nor the search results are interpretable, hindering subsequent steps in video browsing and query reformulation. This paper integrates feature embedding and concept interpretation into a neural network for unified dual-task learning. In this way, an embedding is associated with a list of semantic concepts that serve as an interpretation of the video content. The paper empirically demonstrates that, using either the embedding features or the concepts, considerable search improvement is attainable on TRECVid benchmark datasets. Concepts are not only effective in pruning false-positive videos but are also highly complementary to concept-free search, leading to a large margin of improvement over state-of-the-art approaches.
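To make the dual-task idea concrete, the sketch below shows one plausible way such a model could be wired up: a shared backbone feeds both an embedding head (for concept-free matching in the joint space) and a multi-label concept head (for interpretation). This is a minimal illustration under assumed choices, not the authors' implementation; the layer sizes, the 1,000-concept vocabulary, the class name DualTaskVideoEncoder, and the PyTorch framing are all assumptions made for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTaskVideoEncoder(nn.Module):
    """Hypothetical dual-task model: one branch produces a joint-space
    embedding, the other predicts concepts that interpret that embedding."""
    def __init__(self, feat_dim=2048, embed_dim=512, num_concepts=1000):
        super().__init__()
        # Shared backbone over a precomputed video (or query) feature.
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Dropout(0.2))
        # Concept-free branch: L2-normalized embedding for similarity search.
        self.embed_head = nn.Linear(1024, embed_dim)
        # Interpretable branch: multi-label concept scores.
        self.concept_head = nn.Linear(1024, num_concepts)

    def forward(self, x):
        h = self.shared(x)
        embedding = F.normalize(self.embed_head(h), dim=-1)
        concept_logits = self.concept_head(h)
        return embedding, concept_logits

model = DualTaskVideoEncoder()
video_feat = torch.randn(8, 2048)              # a batch of assumed 2048-d features
emb, logits = model(video_feat)
concept_probs = torch.sigmoid(logits)          # top-scoring concepts interpret the embedding

In a unified training setup of this kind, one would typically combine a ranking loss on (query, video) embedding pairs with a binary cross-entropy loss on the concept logits; the loss weighting here is left unspecified, since it is not given in this record.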

Bibliographic Details
Main Authors: WU, Jiaxin; NGO, Chong-wah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: ad-hoc video search; concept-based search; concept-free search; interpretable video search; Databases and Information Systems; Graphics and Human Computer Interfaces
DOI: 10.1145/3394171.3413916
Online Access: https://ink.library.smu.edu.sg/sis_research/6500
https://ink.library.smu.edu.sg/context/sis_research/article/7503/viewcontent/3394171.3413916.pdf
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School of Computing and Information Systems (InK@SMU)
Institution: Singapore Management University