Mix-DANN and dynamic-modal-distillation for video domain adaptation
Video domain adaptation is non-trivial because video inherently involves multi-dimensional and multi-modal information. Existing works mainly adopt adversarial learning and self-supervised tasks to align features. Nevertheless, the explicit interaction between source and target in the tempora...
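The abstract refers to adversarial learning as the dominant way prior work aligns source and target features. The sketch below illustrates that generic DANN-style mechanism (a gradient reversal layer feeding a domain discriminator) in PyTorch; it is not the paper's Mix-DANN or dynamic modal distillation, and the module names, feature dimension, class count, and lambda value are illustrative assumptions.

```python
# Minimal DANN-style adversarial feature alignment (illustrative sketch only;
# sizes and names are assumptions, not the authors' implementation).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales and negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANNHead(nn.Module):
    """Label classifier plus domain discriminator on shared clip features (hypothetical sizes)."""
    def __init__(self, feat_dim=512, num_classes=12):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.domain_disc = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, feats, lambd=1.0):
        cls_logits = self.classifier(feats)
        dom_logits = self.domain_disc(GradReverse.apply(feats, lambd))
        return cls_logits, dom_logits

# Usage: classification loss on labelled source clips, domain loss on both domains.
feats_src = torch.randn(8, 512)            # pooled clip features from the source domain
feats_tgt = torch.randn(8, 512)            # pooled clip features from the target domain
labels_src = torch.randint(0, 12, (8,))    # action labels (source only)
head = DANNHead()
ce = nn.CrossEntropyLoss()

cls_src, dom_src = head(feats_src)
_, dom_tgt = head(feats_tgt)
loss = (ce(cls_src, labels_src)
        + ce(dom_src, torch.zeros(8, dtype=torch.long))   # source domain label = 0
        + ce(dom_tgt, torch.ones(8, dtype=torch.long)))   # target domain label = 1
loss.backward()
```

Because of the gradient reversal, the discriminator learns to tell domains apart while the feature extractor upstream is pushed toward domain-invariant representations.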
Saved in:
Main Authors: YIN, Yuehao; ZHU, Bin; CHEN, Jingjing; CHENG, Lechao; JIANG, Yu-Gang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/9015 https://ink.library.smu.edu.sg/context/sis_research/article/10018/viewcontent/_ACM_MM_2022__MD_DMD.pdf
Institution: Singapore Management University
Similar Items
- Differentiated learning for multi-modal domain adaptation
  by: LV, Jianming, et al.
  Published: (2021)
- Cross-domain cross-modal food transfer
  by: ZHU, Bin, et al.
  Published: (2020)
- Efficient cross-modal video retrieval with meta-optimized frames
  by: HAN, Ning, et al.
  Published: (2024)
- Unsupervised modality adaptation with text-to-Image diffusion models for semantic segmentation
  by: XIA, Ruihao, et al.
  Published: (2024)
- Concept-driven multi-modality fusion for video search
  by: WEI, Xiao-Yong, et al.
  Published: (2011)