A lightweight handcrafted feature-based selective attention network for blind image quality assessment

This paper introduces a novel approach to Blind Image Quality Assessment (BIQA) by employing handcrafted features combined with a selective feature attention mechanism, drawing inspiration from the human visual system (HVS). This method aligns more closely with human perception of image quality, as it integrates knowledge from the HVS. The handcrafted features utilized in this method include the Laplace operator, Scharr filter, and Discrete Cosine Transform (DCT), which were selected for their ability to capture essential low-level image properties critical for subjective image quality assessment tasks. The Laplace operator serves as an edge detection tool, the Scharr filter is a derivative-based filter for identifying edges in images, and the DCT helps analyze the frequency content of images. Compared to conventional BIQA algorithms, this method demonstrates improved accuracy while maintaining low memory complexity, making it suitable for real-time applications in image processing. This is particularly valuable in resource-constrained contexts where memory utilization is a major concern. The efficacy of this approach was validated through experiments on four synthetic distortion IQA datasets, with results indicating that the proposed method surpasses traditional BIQA algorithms in performance. In summary, the innovative BIQA method incorporates handcrafted features and a selective feature attention mechanism, drawing from the Human Visual System (HVS) to enhance image quality evaluation accuracy while preserving low memory complexity.
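
As a rough illustration only (not code from the thesis), the three handcrafted descriptors named in the abstract can be computed with standard libraries such as OpenCV and SciPy; the function name, parameters, and library choices below are assumptions made for this sketch.

    # Minimal sketch, assuming a grayscale uint8 input image; illustrative only,
    # not the thesis implementation.
    import cv2
    import numpy as np
    from scipy.fftpack import dct

    def handcrafted_feature_maps(gray):
        # Laplace operator: second-derivative response that highlights edges.
        lap = cv2.Laplacian(gray, cv2.CV_32F)
        # Scharr filter: derivative-based edge detection; combine the x/y
        # gradients into a single gradient-magnitude map.
        gx = cv2.Scharr(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Scharr(gray, cv2.CV_32F, 0, 1)
        scharr = cv2.magnitude(gx, gy)
        # Discrete Cosine Transform: 2-D DCT describing the frequency content.
        freq = dct(dct(gray.astype(np.float32), axis=0, norm='ortho'),
                   axis=1, norm='ortho')
        return lap, scharr, freq

    # Example usage (hypothetical file name):
    # lap, scharr, freq = handcrafted_feature_maps(
    #     cv2.imread('image.png', cv2.IMREAD_GRAYSCALE))

In the described network, such maps would serve as the handcrafted inputs that the selective feature attention mechanism weights; how they are actually fused and scored is specific to the thesis and not shown here.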

Bibliographic Details
Main Author: Feng, HaoLin
Other Authors: Lin Weisi
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access:https://hdl.handle.net/10356/166046
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-166046
record_format dspace
spelling sg-ntu-dr.10356-166046 2023-04-21T15:39:20Z
  A lightweight handcrafted feature-based selective attention network for blind image quality assessment
  Feng, HaoLin
  Lin Weisi
  School of Computer Science and Engineering
  WSLin@ntu.edu.sg
  Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
  Bachelor of Engineering (Computer Science)
  2023-04-19T06:08:49Z 2023-04-19T06:08:49Z 2023
  Final Year Project (FYP)
  Feng, H. (2023). A lightweight handcrafted feature-based selective attention network for blind image quality assessment. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166046
  https://hdl.handle.net/10356/166046
  en
  PSCSE21-0015
  application/pdf
  Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
description This paper introduces a novel approach to Blind Image Quality Assessment (BIQA) by employing handcrafted features combined with a selective feature attention mechanism, drawing inspiration from the human visual system (HVS). This method aligns more closely with human perception of image quality, as it integrates knowledge from the HVS. The handcrafted features utilized in this method include the Laplace operator, Scharr filter, and Discrete Cosine Transform (DCT), which were selected for their ability to capture essential low-level image properties critical for subjective image quality assessment tasks. The Laplace operator serves as an edge detection tool, the Scharr filter is a derivative-based filter for identifying edges in images, and the DCT helps analyze the frequency content of images. Compared to conventional BIQA algorithms, this method demonstrates improved accuracy while maintaining low memory complexity, making it suitable for real-time applications in image processing. This is particularly valuable in resource-constrained contexts where memory utilization is a major concern. The efficacy of this approach was validated through experiments on four synthetic distortion IQA datasets, with results indicating that the proposed method surpasses traditional BIQA algorithms in performance. In summary, the innovative BIQA method incorporates handcrafted features and a selective feature attention mechanism, drawing from the Human Visual System (HVS) to enhance image quality evaluation accuracy while preserving low memory complexity.
author2 Lin Weisi
format Final Year Project
author Feng, HaoLin
title A lightweight handcrafted feature-based selective attention network for blind image quality assessment
publisher Nanyang Technological University
publishDate 2023
url https://hdl.handle.net/10356/166046
_version_ 1764208156832956416