Neighbourhood representative sampling for efficient end-to-end video quality assessment
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/173445
Institution: Nanyang Technological University
Summary: The increasing resolution of real-world videos poses a dilemma between efficiency and accuracy for deep Video Quality Assessment (VQA). On the one hand, keeping the original resolution leads to unacceptable computational costs. On the other hand, existing practices such as resizing or cropping alter the quality of the original videos through loss of detail or content, and are hence harmful to quality assessment. Studies of spatial-temporal redundancy in the human visual system suggest that visual quality within a neighbourhood is highly likely to be similar, which motivates us to investigate an effective, quality-sensitive neighbourhood representative sampling scheme for VQA. In this work, we propose a unified scheme, spatial-temporal grid mini-cube sampling (St-GMS), whose resultant samples are named fragments. In St-GMS, full-resolution videos are first divided into mini-cubes on predefined spatial-temporal grids; temporally aligned quality representatives are then sampled from these mini-cubes to compose the fragments that serve as inputs for VQA. In addition, we design the Fragment Attention Network (FANet), a network architecture tailored specifically for fragments. With fragments and FANet, the proposed FAST-VQA and FasterVQA (with an improved sampling scheme) achieve up to 1612× better efficiency than the existing state of the art, while delivering significantly better performance on all relevant VQA benchmarks.
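The summary describes St-GMS only at a high level; the sketch below illustrates one plausible reading of the fragment-sampling idea in Python/NumPy. The grid size, patch size, and the function name sample_fragments are illustrative assumptions for this sketch, not the authors' published configuration.

# Minimal sketch of grid-based fragment sampling, assuming a video given as a
# NumPy array of shape (T, H, W, C). Grid and patch sizes are illustrative.
import numpy as np

def sample_fragments(video: np.ndarray, grid: int = 7, patch: int = 32) -> np.ndarray:
    """Cut each frame into a grid x grid layout, take one randomly placed
    patch per cell at the same location in every frame (temporal alignment),
    and stitch the patches into one small fragment."""
    t, h, w, c = video.shape
    cell_h, cell_w = h // grid, w // grid
    assert cell_h >= patch and cell_w >= patch, "frames too small for this grid/patch"
    out = np.empty((t, grid * patch, grid * patch, c), dtype=video.dtype)
    for gy in range(grid):
        for gx in range(grid):
            # One offset per cell, shared by all frames so motion is preserved.
            y0 = gy * cell_h + np.random.randint(cell_h - patch + 1)
            x0 = gx * cell_w + np.random.randint(cell_w - patch + 1)
            out[:, gy * patch:(gy + 1) * patch, gx * patch:(gx + 1) * patch] = \
                video[:, y0:y0 + patch, x0:x0 + patch]
    return out

# Usage: a 32-frame 1080p clip collapses to a 224x224-per-frame fragment.
frag = sample_fragments(np.zeros((32, 1080, 1920, 3), dtype=np.uint8))
print(frag.shape)  # (32, 224, 224, 3)

Because each patch keeps its native resolution and its position in the grid, the fragment preserves local quality-related detail at a small, fixed input size, which is what makes the downstream network inexpensive to run.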