No-reference view synthesis quality prediction for 3-D videos based on color-depth interactions

In a 3-D video system, automatically predicting the quality of synthesized 3-D video based on the inputs of color and depth videos is an urgent but very difficult task, while the existing full-reference methods usually measure the perceptual quality of the synthesized video. In this paper, a high-efficiency view synthesis quality prediction (HEVSQP) metric for view synthesis is proposed. Based on the derived VSQP model that quantifies the influences of color and depth distortions and their interactions in determining the perceptual quality of 3-D synthesized video, color-involved VSQP and depth-involved VSQP indices are predicted, respectively, and are combined to yield an HEVSQP index. Experimental results on our constructed NBU-3D Synthesized Video Quality Database demonstrate that the proposed HEVSQP has good performance evaluated on the entire synthesized video-quality database, compared with other full-reference and no-reference video-quality assessment metrics.


Bibliographic Details
Main Authors: Shao, Feng, Yuan, Qizheng, Lin, Weisi, Jiang, Gangyi
Other Authors: School of Computer Science and Engineering; Centre for Multimedia and Network Technology
Format: Article
Language:English
Published: 2020
Subjects: Engineering::Computer science and engineering; Color-depth Interactions; 3D Synthesized Video
Online Access:https://hdl.handle.net/10356/140031
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-140031
Journal: IEEE Transactions on Multimedia, vol. 20, no. 3, pp. 659-674
ISSN: 1520-9210
DOI: 10.1109/TMM.2017.2748460
Scopus ID: 2-s2.0-85029157555
Citation: Shao, F., Yuan, Q., Lin, W., & Jiang, G. (2018). No-reference view synthesis quality prediction for 3-D videos based on color-depth interactions. IEEE Transactions on Multimedia, 20(3), 659-674. doi:10.1109/TMM.2017.2748460
Rights: © 2017 IEEE. All rights reserved.
Building: NTU Library
Country: Singapore
Collection: DR-NTU