Deep-learning based affective video analysis and synthesis
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2020
Subjects:
Online Access: https://hdl.handle.net/10356/144505
Institution: Nanyang Technological University
Summary: The major challenge in computational creativity, in the context of audio-visual analysis, is the difficulty of extracting high-quality content from large quantities of video footage. Current development focuses on submodular optimization of a frame-based, quality-aware relevance model to create summaries that are both diverse and representative of the entire video footage. Our work builds on existing work in query-adaptive video summarization: we use the Natural Language Toolkit (NLTK) and the Rapid Automatic Keyword Extraction (RAKE) algorithm to extract keywords for query generation. The query drives the Quality-Aware Relevance Estimation model for thumbnail selection. The generated thumbnails identify key scenes in the video footage, which are then summarized and merged by weighted sampling of the key scenes up to the length of a short summary. We found that our video summary contains more related scenes and achieves a higher average keyword-similarity score than the baseline, and it also improves on the average qualitative aspects of the summary.
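The summary names RAKE for the keyword-extraction step. As a rough illustration of how that step works, here is a minimal pure-Python sketch of RAKE's degree/frequency scoring. The stopword list is a small stand-in for NLTK's full English list, and the sample sentence is only illustrative; the project's actual input text is not specified in this record.

```python
import re
from collections import defaultdict

# Small illustrative stopword list; the project uses NLTK's full English list.
STOPWORDS = {"the", "in", "of", "a", "an", "and", "is", "to", "from",
             "with", "for", "on", "by", "which", "are", "we", "our"}

def rake_keywords(text):
    """Rank candidate phrases with RAKE's degree/frequency word score."""
    # Candidate phrases are maximal runs of non-stopwords between
    # punctuation and stopword boundaries.
    phrases = []
    for chunk in re.split(r"[^a-z\s]+", text.lower()):
        current = []
        for word in chunk.split():
            if word in STOPWORDS:
                if current:
                    phrases.append(current)
                current = []
            else:
                current.append(word)
        if current:
            phrases.append(current)

    # freq(w): occurrences of w; degree(w): total length of phrases containing w.
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for word in phrase:
            freq[word] += 1
            degree[word] += len(phrase)
    word_score = {w: degree[w] / freq[w] for w in freq}

    # A phrase scores the sum of its word scores; best candidates first.
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(scored.items(), key=lambda kv: -kv[1])

keywords = rake_keywords(
    "extracting high quality content from large quantities of video footage")
```

Longer runs of content words accumulate higher degree scores, which is why RAKE favours multi-word phrases such as "extracting high quality content" over shorter fragments like "video footage".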
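The merging step is described only as weighted sampling of key scenes up to the summary length. The sketch below shows one plausible reading of that idea; the `(scene_id, duration, relevance)` tuples, the seed, and the proportional-to-relevance weighting are all hypothetical, as the record does not specify the project's actual scene representation or weights.

```python
import random

def sample_summary(scenes, target_len, seed=0):
    """Pick scenes with probability proportional to relevance until the
    selected clips fill the target summary length.

    `scenes` is a list of (scene_id, duration_sec, relevance) tuples —
    hypothetical fields standing in for the project's scene data.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    remaining = list(scenes)
    chosen, total = [], 0.0
    while remaining and total < target_len:
        weights = [s[2] for s in remaining]
        pick = rng.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(pick)
        # Keep the scene only if it still fits in the summary budget.
        if total + pick[1] <= target_len:
            chosen.append(pick)
            total += pick[1]
    # Present the selected scenes in their original temporal order.
    chosen.sort(key=lambda s: s[0])
    return chosen, total

scenes = [(0, 10.0, 0.9), (1, 20.0, 0.1), (2, 5.0, 0.8)]
summary, length = sample_summary(scenes, target_len=15.0)
```

Sampling without replacement, as above, keeps the summary diverse; a greedy pick of the single highest-relevance scene could instead fill the whole budget with near-duplicate footage.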