Content-adaptive temporal consistency enhancement for depth video


Bibliographic Details
Main Authors: Ma, Kai-Kuang, Zeng, Huanqiang.
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2013
Online Access:https://hdl.handle.net/10356/84779
http://hdl.handle.net/10220/12910
Institution: Nanyang Technological University
Summary: The video plus depth format, which is composed of a texture video and a depth video, has been widely used for free-viewpoint TV. However, temporal inconsistency is often encountered in the depth video due to errors incurred in estimating the depth values. This inevitably deteriorates the coding efficiency of the depth video and the visual quality of the synthesized view. To address this problem, a content-adaptive temporal consistency enhancement (CTCE) algorithm for the depth video is proposed in this paper, consisting of two sequential stages: (1) classification of stationary and non-stationary regions based on the texture video, and (2) adaptive temporal consistency filtering of the depth video. The result of the first stage steers the second, so that the filtering is conducted in an adaptive manner. Extensive experimental results show that the proposed CTCE algorithm can effectively mitigate the temporal inconsistency in the original depth video and consequently improve the coding efficiency of the depth video and the visual quality of the synthesized view.
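The two-stage idea described in the summary can be sketched in Python. Note this is only an illustrative sketch, not the paper's actual method: the frame-difference threshold, the blending weight, and the function name `ctce_filter` are all assumptions introduced here, since the abstract does not specify the classification rule or the filter design.

```python
import numpy as np

def ctce_filter(texture_prev, texture_cur, depth_prev, depth_cur,
                diff_thresh=10, alpha=0.5):
    """Two-stage sketch of a content-adaptive temporal consistency filter.

    Stage 1: classify each pixel as stationary where the texture-video
             frame difference is small (threshold is an assumption).
    Stage 2: temporally filter the depth video only in stationary
             regions, blending the current depth with the previous
             frame's depth; non-stationary depth is left untouched.
    """
    # Stage 1: stationary/non-stationary classification from the texture video.
    stationary = np.abs(texture_cur.astype(np.int16)
                        - texture_prev.astype(np.int16)) < diff_thresh

    # Stage 2: adaptive temporal filtering of the depth video,
    # steered by the Stage 1 mask.
    out = depth_cur.astype(np.float32).copy()
    out[stationary] = (alpha * depth_prev[stationary].astype(np.float32)
                       + (1.0 - alpha) * depth_cur[stationary].astype(np.float32))
    return out.astype(depth_cur.dtype), stationary
```

In this sketch, depth values in regions the texture video marks as stationary are pulled toward the previous frame, suppressing frame-to-frame flicker from depth-estimation errors, while moving regions keep their current depth so genuine scene changes are preserved.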