Video summarization and scene detection by graph modeling

In this paper, we propose a unified approach for video summarization based on the analysis of video structures and video highlights. Two major components in our approach are scene modeling and highlight detection. Scene modeling is achieved by the normalized cut algorithm and temporal graph analysis, while highlight detection is accomplished by motion attention modeling. In our proposed approach, a video is represented as a complete undirected graph, and the normalized cut algorithm is carried out to globally and optimally partition the graph into video clusters. The resulting clusters form a directed temporal graph, and a shortest-path algorithm is proposed to efficiently detect video scenes. Attention values are then computed and attached to the scenes, clusters, shots, and subshots in the temporal graph. As a result, the temporal graph can inherently describe the evolution and perceptual importance of a video. In our application, video summaries that emphasize both content balance and perceptual quality can be generated directly from the temporal graph, which embeds both the structure and attention information.
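The abstract outlines an algorithmic pipeline: shots become nodes of a complete undirected similarity graph, the graph is partitioned into clusters with the normalized cut criterion, and scenes are then detected on a directed temporal graph. The snippet below is a minimal, hypothetical sketch of the clustering step only, not the authors' implementation: it assumes placeholder shot feature vectors, uses a Gaussian similarity as the edge weight, and recursively bipartitions the graph with the standard Shi-Malik normalized cut eigenproblem. The temporal-graph construction, shortest-path scene detection, and motion attention model described in the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def affinity(features, sigma=1.0):
    """Edge weights of the complete undirected graph: Gaussian similarity
    between shot feature vectors (the features are placeholders here)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ncut_cost(W, mask):
    """Normalized cut value of the bipartition {mask, ~mask}."""
    cut = W[mask][:, ~mask].sum()
    return cut / W[mask].sum() + cut / W[~mask].sum()

def ncut_split(W):
    """Threshold (at zero) the second-smallest generalized eigenvector of
    (D - W) y = lambda * D y, i.e. the Shi-Malik relaxation."""
    D = np.diag(W.sum(axis=1))
    _, vecs = eigh(D - W, D)          # eigenvalues returned in ascending order
    return vecs[:, 1] >= 0

def recursive_ncut(W, idx=None, max_cost=0.5, min_size=2):
    """Recursively bipartition shot indices until the cut cost is too high."""
    if idx is None:
        idx = np.arange(W.shape[0])
    if len(idx) <= min_size:
        return [idx]
    sub = W[np.ix_(idx, idx)]
    mask = ncut_split(sub)
    if mask.all() or not mask.any() or ncut_cost(sub, mask) > max_cost:
        return [idx]                   # stop: further splitting is not worthwhile
    return (recursive_ncut(W, idx[mask], max_cost, min_size) +
            recursive_ncut(W, idx[~mask], max_cost, min_size))

# Hypothetical usage with random stand-in features; real shot descriptors
# (e.g., color or motion statistics) are not computed in this sketch.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shot_features = rng.random((20, 8))
    clusters = recursive_ncut(affinity(shot_features, sigma=0.8))
    print([c.tolist() for c in clusters])
```

Recursive two-way splitting is used here purely for simplicity; a k-way spectral clustering over the same affinity matrix would be an equally reasonable stand-in. In the paper, the resulting clusters would then be ordered into a directed temporal graph whose shortest path yields the scene boundaries, with motion attention values attached to scenes, clusters, shots, and subshots to drive summary generation.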

Bibliographic Details
Main Authors: NGO, Chong-wah; MA, Yu-Fei; ZHANG, Hong-Jiang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2005
Collection: Research Collection School Of Computing and Information Systems
Subjects: attention model; normalized cut; scene modeling; video summarization; Computer Sciences; Graphics and Human Computer Interfaces
DOI: 10.1109/TCSVT.2004.841694
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access: https://ink.library.smu.edu.sg/sis_research/6351
https://ink.library.smu.edu.sg/context/sis_research/article/7354/viewcontent/tcsvt05.pdf