Multi-modal video analysis with an LLM for descriptive emotion and expression annotation


Bibliographic Details
Main Author: Fan, Yupei
Other Authors: Zheng, Jianmin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/180715
Institution: Nanyang Technological University
Description
Summary: This project presents a novel approach to multi-modal emotion and action annotation by integrating facial expression recognition, action recognition, and audio-based emotion analysis into a unified framework. The system utilizes TimesFormer, OpenFace, and SpeechBrain to extract relevant features from video, audio, and facial expression data. These features are then fed into a Large Language Model (LLM) to generate descriptive annotations that provide a deeper understanding of emotions and actions in conversations, moving beyond traditional emotion labels like "happy" or "angry." This approach offers more contextually rich and human-like insights, which are especially valuable for applications in education and communication. The framework aims to highlight the potential of combining multiple state-of-the-art models to produce comprehensive descriptions, contributing to both the research community and real-world applications. Evaluation methods such as ROUGE and BERTScore are employed to assess the quality of the generated text, and visualizations like heatmaps and radar charts are used to provide insights into the effectiveness of the proposed approach.
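To illustrate the kind of evaluation the abstract mentions, the following is a minimal, self-contained sketch of the ROUGE-L metric (longest-common-subsequence F1 over whitespace tokens). It is an illustrative reimplementation, not the project's actual evaluation code, which likely uses a standard package; tokenization here is deliberately naive.

```python
from typing import List


def lcs_length(a: List[str], b: List[str]) -> int:
    """Length of the longest common subsequence of two token lists (DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l_f1(reference: str, candidate: str) -> float:
    """ROUGE-L F1 between a reference annotation and a generated one."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    recall = lcs / len(ref)        # how much of the reference is covered
    precision = lcs / len(cand)    # how much of the candidate is supported
    return 2 * precision * recall / (precision + recall)
```

For example, comparing a reference description "the speaker smiles warmly while nodding" against a generated "the speaker smiles while nodding" rewards shared in-order content rather than exact label matches, which suits free-form descriptive annotations better than categorical accuracy.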