Language models are domain-specific chart analysts
With the advancement of multi-modal Large Language Models (LLMs) such as GPT-4, the cognitive capabilities of models are facing new expectations. Meanwhile, as LLM training becomes more expensive, a gap has emerged between the conventional pretrain-finetune paradigm and the LLM prompting paradigm...
Saved in:

| Main Author: | Zhao, Yinjie |
|---|---|
| Other Authors: | Wen, Bihan |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2023 |
| Online Access: | https://hdl.handle.net/10356/167416 |
| Institution: | Nanyang Technological University |
Similar Items
- Explainable Q&A system based on domain-specific knowledge graph
  by: Zhao, Xuejiao
  Published: (2021)
- Audio intelligence & domain adaptation for deep learning models at the edge
  by: Ng, Linus JunJia
  Published: (2021)
- Fake review detection by fusing parameter efficient adapters in pre-trained language model
  by: Ho, See Cheng
  Published: (2024)
- Vision language representation learning
  by: Yang, Xiaofeng
  Published: (2023)
- Neural architectures for natural language understanding
  by: Tay, Yi
  Published: (2019)