SGDNet: an end-to-end saliency-guided deep neural network for no-reference image quality assessment
We propose an end-to-end saliency-guided deep neural network (SGDNet) for no-reference image quality assessment (NR-IQA). Our SGDNet is built on an end-to-end multi-task learning framework in which two sub-tasks, visual saliency prediction and image quality prediction, are jointly optimized...
Main Authors: Yang, Sheng; Jiang, Qiuping; Lin, Weisi; Wang, Yongtao
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Computer science and engineering; Image Quality Assessment; No-reference
Online Access: https://hdl.handle.net/10356/144191 https://doi.org/10.21979/N9/H38R0Z
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-144191
record_format: dspace
Item type: Conference Paper
Citation: Yang, S., Jiang, Q., Lin, W., & Wang, Y. (2019). SGDNet: an end-to-end saliency-guided deep neural network for no-reference image quality assessment. Proceedings of the 27th ACM International Conference on Multimedia, 1383-1391. doi:10.1145/3343031.3350990
ISBN: 9781450368896
DOI: 10.1145/3343031.3350990
Handle: https://hdl.handle.net/10356/144191
Research data: https://doi.org/10.21979/N9/H38R0Z
Funding: Ministry of Education (MOE). This research was partially supported by Singapore Ministry of Education Tier-2 Fund MOE2016-T2-2-057(S).
Version: Accepted version
Rights: © 2019 Association for Computing Machinery (ACM). All rights reserved. This paper was published in the 27th ACM International Conference on Multimedia and is made available with permission of the Association for Computing Machinery (ACM).
Deposited: 2020-10-20
File format: application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering; Image Quality Assessment; No-reference
description: We propose an end-to-end saliency-guided deep neural network (SGDNet) for no-reference image quality assessment (NR-IQA). Our SGDNet is built on an end-to-end multi-task learning framework in which two sub-tasks, visual saliency prediction and image quality prediction, are jointly optimized with a shared feature extractor. Existing multi-task CNN-based NR-IQA methods, which usually adopt distortion identification as the auxiliary sub-task, cannot accurately identify the complex mixtures of distortions that exist in authentically distorted images. By contrast, our saliency prediction sub-task is more universal, because visual attention is always present when viewing any image, regardless of its distortion type. More importantly, related works have reported that saliency information is highly correlated with image quality, and this property is fully exploited in our proposed SGDNet by training the model simultaneously with more informative labels: saliency maps and quality scores. In addition, the outputs of the saliency prediction sub-task are transparent to the primary quality regression sub-task, serving as spatial attention masks for a more perceptually consistent feature fusion. By training the whole network on the two sub-tasks together, more discriminative features can be learned and a more accurate mapping from feature representations to quality scores can be established. Experimental results on both authentically and synthetically distorted IQA datasets demonstrate the superiority of our SGDNet compared to state-of-the-art approaches.
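To make the architecture described above concrete, the following is a minimal PyTorch-style sketch of a saliency-guided multi-task network in the spirit of the abstract: a shared backbone feeds an auxiliary saliency head and a quality head, and the predicted saliency map is reused as a spatial attention mask when pooling features for quality regression. All names (e.g., SGDNetSketch), layer sizes, and the loss weighting are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch, assuming PyTorch; layer sizes, the 0.5 loss weight, and
# all identifiers are illustrative assumptions, not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGDNetSketch(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Shared feature extractor (stand-in for the paper's CNN backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Auxiliary sub-task: per-pixel saliency map in [0, 1].
        self.saliency_head = nn.Sequential(nn.Conv2d(feat_dim, 1, 1), nn.Sigmoid())
        # Primary sub-task: scalar quality score from attention-pooled features.
        self.quality_head = nn.Linear(feat_dim, 1)

    def forward(self, x):
        feats = self.backbone(x)                        # (N, C, H, W)
        saliency = self.saliency_head(feats)            # (N, 1, H, W)
        # Saliency-weighted pooling: the predicted map acts as a spatial
        # attention mask, so salient regions dominate the quality features.
        attn = saliency / (saliency.sum(dim=(2, 3), keepdim=True) + 1e-8)
        pooled = (feats * attn).sum(dim=(2, 3))         # (N, C)
        quality = self.quality_head(pooled).squeeze(1)  # (N,)
        return quality, saliency

# Joint optimization of the two sub-tasks with more informative labels
# (saliency maps and quality scores), as the abstract describes.
model = SGDNetSketch()
images = torch.randn(2, 3, 224, 224)
mos = torch.rand(2)                    # ground-truth quality scores
sal_gt = torch.rand(2, 1, 224, 224)    # ground-truth saliency maps
q_pred, s_pred = model(images)
loss = F.l1_loss(q_pred, mos) + 0.5 * F.binary_cross_entropy(s_pred, sal_gt)
loss.backward()
```

The point of the sketch is the two-head structure and the saliency-as-attention pooling; in the actual SGDNet the backbone is a deep CNN and the fusion is presumably more elaborate than this single weighted-sum pooling step.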
author2: School of Computer Science and Engineering
format: Conference or Workshop Item
author: Yang, Sheng; Jiang, Qiuping; Lin, Weisi; Wang, Yongtao
title: SGDNet: an end-to-end saliency-guided deep neural network for no-reference image quality assessment
publishDate: 2020
url: https://hdl.handle.net/10356/144191 https://doi.org/10.21979/N9/H38R0Z