Q-instruct: improving low-level visual abilities for multi-modality foundation models

Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks that can respond to a broad range of natural human instructions within a single model. While existing foundation models have shown exciting potential on low-level visual tasks, their related abilities are still preliminary and need to be improved. To enhance these models, we conduct a large-scale subjective experiment collecting a vast amount of real human feedback on low-level vision. Each feedback item follows a pathway that starts with a detailed description of the low-level visual appearance (*e.g. clarity, color, brightness*) of an image and ends with an overall conclusion, with an average length of 45 words. The constructed **Q-Pathway** dataset includes 58K detailed human feedback items on 18,973 images with diverse low-level appearance. Moreover, to enable foundation models to robustly respond to diverse types of questions, we design a GPT-participated conversion that processes these feedback items into 200K instruction-response pairs in diverse formats, termed **Q-Instruct**. Experimental results indicate that **Q-Instruct** consistently elevates low-level perception and understanding abilities across several foundation models. We anticipate that our datasets can pave the way for a future in which general intelligence can perceive and understand low-level visual appearance and evaluate visual quality like a human. Our dataset, model zoo, and demo are published at: https://q-future.github.io/Q-Instruct.
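The abstract describes converting pathway-format human feedback (a detailed low-level description followed by an overall conclusion) into instruction-response pairs. The Python sketch below is purely illustrative: the field names (`image`, `instruction`, `response`), the helper `to_instruction_pairs`, and the example text are assumptions for demonstration, not the released Q-Pathway or Q-Instruct schema, and the real pipeline uses a GPT-participated conversion to produce much more varied formats.

```python
# Illustrative sketch only: field names and example text are hypothetical,
# not the actual Q-Pathway / Q-Instruct data format.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PathwayFeedback:
    """One human feedback item: a detailed low-level description plus a conclusion."""
    image: str          # identifier or path of the rated image
    description: str    # remarks on clarity, color, brightness, etc.
    conclusion: str     # overall quality judgement


def to_instruction_pairs(fb: PathwayFeedback) -> List[Dict[str, str]]:
    """Turn one pathway feedback into simple instruction-response pairs.

    The paper's pipeline uses GPT to rephrase feedback into diverse formats
    (open questions, multiple-choice, conversations); this only shows the
    general shape of such pairs.
    """
    return [
        {
            "image": fb.image,
            "instruction": "Describe the low-level visual appearance of this image.",
            "response": fb.description,
        },
        {
            "image": fb.image,
            "instruction": "How would you rate the overall quality of this image?",
            "response": fb.conclusion,
        },
    ]


if __name__ == "__main__":
    fb = PathwayFeedback(
        image="example_0001.jpg",
        description="The image is slightly out of focus and the colors look washed out, "
                    "though the brightness is adequate.",
        conclusion="Overall, the quality of this image is below average.",
    )
    for pair in to_instruction_pairs(fb):
        print(pair)
```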


Bibliographic Details
Main Authors: Wu, Haoning, Zhang, Zicheng, Zhang, Erli, Chen, Chaofeng, Liao, Liang, Wang, Annan, Xu, Kaixin, Li, Chunyi, Hou, Jingwen, Zhai, Guangtao, Xue, Geng, Sun, Wenxiu, Yan, Qiong, Lin, Weisi
Other Authors: College of Computing and Data Science; S-Lab
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects: Computer and Information Science; Multi-modality large language models; Computer vision
Online Access: https://hdl.handle.net/10356/178464
http://arxiv.org/abs/2311.06783v1
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Q-Instruct_Improving_Low-level_Visual_Abilities_for_Multi-modality_Foundation_Models_CVPR_2024_paper.pdf
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-178464
Published in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Pages: 25490-25500
Version: Submitted/Accepted version
DOI: 10.21979/N9/GPLPNI
Citation: Wu, H., Zhang, Z., Zhang, E., Chen, C., Liao, L., Wang, A., Xu, K., Li, C., Hou, J., Zhai, G., Xue, G., Sun, W., Yan, Q. & Lin, W. (2024). Q-instruct: improving low-level visual abilities for multi-modality foundation models. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 25490-25500. https://hdl.handle.net/10356/178464
Rights: © 2024 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder.