Evaluating vision-language models' long-chain reasoning ability with multiple ground truths
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175186
Institution: Nanyang Technological University
Summary: With the recent advancements in vision-language models, many researchers have started to evaluate their various zero-shot capabilities for answering questions about a video input. However, there is no standardised, "best practice" method for evaluating the quality of a model's open-ended answer given a question and multiple ground truths. We reviewed current methods, which include n-gram-based metrics and using an LLM (Large Language Model) as a judge.

While n-gram-based metrics scored some models' answers on par with human answers, these scores do not correlate well with human preference when used to rank the models from best to worst: the highest-scoring models were found to have only a 0.21 Spearman correlation with human preference.
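To make this style of evaluation concrete, the sketch below scores answers by unigram precision against the best-matching of several ground truths, then correlates the metric's ranking with a human ranking via Spearman's rho. The specific metric, sample answers, and human ranks are illustrative assumptions, not the exact setup used in this project.

```python
# Minimal sketch: n-gram overlap against multiple ground truths, then
# Spearman correlation with human preference. Illustrative only.
from collections import Counter
from scipy.stats import spearmanr

def ngram_precision(answer, references, n=1):
    """Score an answer by its best n-gram overlap over all references."""
    def ngrams(text):
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    ans = ngrams(answer)
    total = sum(ans.values())
    if total == 0:
        return 0.0
    # Take the maximum overlap across references: extra ground truths
    # can only help a surface-overlap metric.
    best = max(sum((ans & ngrams(ref)).values()) for ref in references)
    return best / total

# Hypothetical model answers and human ranks (1 = best) for one question.
references = ["the man is chopping vegetables", "a chef cuts vegetables"]
model_answers = {
    "model_a": "a man chops vegetables in a kitchen",
    "model_b": "someone is cooking",
    "model_c": "the video shows a dog running",
}
human_ranks = {"model_a": 1, "model_b": 2, "model_c": 3}

names = list(model_answers)
metric_scores = [ngram_precision(model_answers[m], references) for m in names]
# Negate ranks so that "better" points the same way for both variables.
rho, _ = spearmanr(metric_scores, [-human_ranks[m] for m in names])
print(f"Spearman correlation with human preference: {rho:.2f}")
```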
We also designed prompts to have an LLM judge which model's answer is better given multiple reference answers, via (1) head-to-head comparison, which was found to have some consistency with human preference, and (2) ranking all candidate answers, which was found to correlate with human preference more strongly than n-gram-based metrics.
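The sketch below illustrates how such judging prompts might be constructed for both settings. The prompt wording and the `call_llm` helper are hypothetical stand-ins, not the prompts actually used in this project.

```python
# Minimal sketch of LLM-as-judge prompting with multiple ground truths.
# `call_llm` is a hypothetical callable that sends a prompt to some LLM
# API and returns its text response.

def build_head_to_head_prompt(question, references, answer_a, answer_b):
    """Variant (1): ask the judge to pick the better of two answers."""
    refs = "\n".join(f"- {r}" for r in references)
    return (
        "You are judging answers to a question about a video.\n"
        f"Question: {question}\n"
        f"Reference answers (multiple ground truths):\n{refs}\n\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n\n"
        "Which answer is closer to the reference answers? "
        "Reply with exactly 'A' or 'B'."
    )

def build_ranking_prompt(question, references, answers):
    """Variant (2): ask the judge to rank all candidate answers at once."""
    refs = "\n".join(f"- {r}" for r in references)
    cands = "\n".join(f"{i + 1}. {a}" for i, a in enumerate(answers))
    return (
        "You are judging answers to a question about a video.\n"
        f"Question: {question}\n"
        f"Reference answers (multiple ground truths):\n{refs}\n\n"
        f"Candidate answers:\n{cands}\n\n"
        "Rank the candidates from best to worst as a comma-separated "
        "list of their numbers, e.g. '2,1,3'."
    )

def judge_head_to_head(question, references, answer_a, answer_b, call_llm):
    """Return 'A' or 'B' according to the LLM judge's verdict."""
    prompt = build_head_to_head_prompt(question, references, answer_a, answer_b)
    verdict = call_llm(prompt)
    return "A" if verdict.strip().upper().startswith("A") else "B"
```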
We offer the perspective that while additional ground truths are useful for traditional (n-gram-based) metrics, given a sophisticated LLM, one ground truth might be sufficient to judge the quality of a model's answer, especially with the rapid advancement in the capabilities of such language models.