Look, read and feel: benchmarking ads understanding with multimodal multitask learning
Given the massive advertising market and the rapid growth of online multimedia content (such as videos), it has become common practice to promote advertisements (ads) together with the multimedia content. However, manually finding relevant ads to match the provided content is labor-intensive, and henc...
| Main Authors: | Zhang, Huaizheng; Luo, Yong; Ai, Qiming; Wen, Yonggang; Hu, Han |
|---|---|
| Other Authors: | School of Computer Science and Engineering |
| Format: | Conference or Workshop Item |
| Language: | English |
| Published: | 2021 |
| Online Access: | https://hdl.handle.net/10356/152993 |
| Institution: | Nanyang Technological University |
Similar Items
- From community search to community understanding: A multimodal community query engine
  by: LI, Zhao, et al.
  Published: (2021)
- Visual causal inference
  by: LI, Yicong
  Published: (2024)
- Multimodal distillation for egocentric video understanding
  by: Peng, Han
  Published: (2024)
- Fusion of multimodal embeddings for ad-hoc video search
  by: FRANCIS, Danny, et al.
  Published: (2019)
- Dialog systems go multimodal
  by: LIAO, Lizi
  Published: (2019)