Multimodal distillation for egocentric video understanding
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/177296
Institution: Nanyang Technological University
Summary: Advancements in smart devices, especially head-mounted wearables, are creating new egocentric video applications and a wealth of multimodal egocentric scenarios. Multimodal egocentric video understanding now has wide applications in augmented reality, education, and industry.
Knowledge distillation transfers knowledge from a complex "teacher" model to a smaller "student" model. The technique is valuable for model compression and extends naturally to multimodal scenarios. Recent work applies the traditional knowledge distillation scheme and assigns weights to knowledge from different modalities, but accelerating training and introducing additional modalities remain largely unexplored, and research in multimodal egocentric video understanding is still limited.
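As a concrete illustration of the weighted multimodal distillation objective described above, here is a minimal sketch in PyTorch. The function name, modality weights, temperature, and mixing factor are illustrative assumptions, not the project's actual implementation.

```python
# A minimal sketch of a weighted multimodal knowledge-distillation loss.
# Assumes per-modality teacher logits (e.g. RGB and optical flow); all
# hyperparameters here are illustrative, not the project's settings.
import torch
import torch.nn.functional as F

def multimodal_kd_loss(student_logits, teacher_logits_per_modality,
                       labels, modality_weights, temperature=4.0, alpha=0.5):
    """Combine hard-label cross-entropy with a weighted soft-target loss
    from each modality's teacher."""
    # Hard-label supervision for the student.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-target distillation term: one weighted KL divergence per modality.
    kd = 0.0
    for name, t_logits in teacher_logits_per_modality.items():
        w = modality_weights[name]
        kd = kd + w * F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(t_logits / temperature, dim=1),
            reduction="batchmean",
        ) * (temperature ** 2)

    return alpha * ce + (1.0 - alpha) * kd
```

The temperature-squared factor is the usual correction that keeps the gradient magnitude of the softened KL term comparable to the cross-entropy term, so the modality weights control only the relative influence of each teacher.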
This project reviews strategies for classifying and distilling knowledge, as well as improved knowledge distillation methods. We use Swin-T as the teacher model and consider Swin-T and ResNet3D with depths of 18 and 50 as student models, applying the optimized distillation strategies TTM and weighted TTM to multimodal KD. Experiments are conducted on the FPHA and H2O datasets, for which RGB and optical flow frames were extracted and packaged.
We conducted several experiments to compare the performance of the different training methods across networks, using top-1 and top-5 accuracy as metrics. We conclude that Swin-T outperforms the ResNet3D models as a student model for distillation, and that the TTM distillation strategy outperforms standard KD across datasets and models. Finally, we summarize the project and suggest further work.
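For reference, a small sketch of the top-1/top-5 accuracy metric mentioned in the summary, assuming PyTorch logits of shape (batch, num_classes); the helper name is hypothetical.

```python
# Top-k accuracy: fraction of samples whose true label appears among the
# k highest-scoring predictions. Illustrative helper, not project code.
import torch

def topk_accuracy(logits, labels, ks=(1, 5)):
    max_k = max(ks)
    # Indices of the max_k highest-scoring classes per sample.
    _, pred = logits.topk(max_k, dim=1)          # (batch, max_k)
    correct = pred.eq(labels.view(-1, 1))        # (batch, max_k) boolean
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}
```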