No-reference view synthesis quality prediction for 3-D videos based on color-depth interactions


Bibliographic Details
Main Authors: Shao, Feng, Yuan, Qizheng, Lin, Weisi, Jiang, Gangyi
Other Authors: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2020
Subjects:
Online Access:https://hdl.handle.net/10356/140031
Institution: Nanyang Technological University
Description
Summary: In a 3-D video system, automatically predicting the quality of synthesized 3-D video from the input color and depth videos is an urgent but very difficult task, whereas existing full-reference methods can only measure the perceptual quality of the already-synthesized video. In this paper, a high-efficiency view synthesis quality prediction (HEVSQP) metric is proposed. Based on the derived VSQP model, which quantifies the influences of color distortion, depth distortion, and their interactions on the perceptual quality of 3-D synthesized video, color-involved and depth-involved VSQP indices are predicted separately and then combined to yield the HEVSQP index. Experimental results on our constructed NBU-3D Synthesized Video Quality Database demonstrate that the proposed HEVSQP achieves good performance on the entire synthesized video-quality database compared with other full-reference and no-reference video-quality assessment metrics.