Language models are domain-specific chart analysts
| Main Author: | |
| --- | --- |
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2023 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/167416 |
| Institution: | Nanyang Technological University |
Summary: With the advancement of multi-modal Large Language Models (LLMs) such as GPT-4, new expectations are being placed on the cognitive capabilities of models. Meanwhile, as LLM training grows more expensive, a gap has opened between the conventional pretrain-finetune paradigm and the LLM prompting paradigm in model design. To close this gap, we propose an AI model engineering pipeline, the Cost-efficient C2T Pipeline (C2P), with the objective of giving C2T models cognitive capabilities for Chart Domain-specific Analyzing (CDA). A 41.5-million-parameter model trained under C2P achieves significantly higher cost-efficiency than other models while delivering comparable performance. For experimental validation, we propose EconCharts, a new domain-specific dataset on economics. C2P explores the domain-specific cognitive capabilities of C2T and LLM models and fills the engineering gap between expensive LLMs and lightweight C2T models.