Developing locally trainable large language models
Emerging Large Language Models (LLMs) such as GPT-3.5 and GPT-4 have been fundamentally transforming human society since their launch, demonstrating groundbreaking capabilities across a wide range of tasks. However, the colossal size of such LLMs results in a prohibitive training cost and computing r...
Main Author: Chen, Hailin
Other Authors: Joty, Shafiq Rayhan
Format: Thesis (Doctor of Philosophy)
Language: English
Published: Nanyang Technological University, 2025
Online Access: https://hdl.handle.net/10356/182242
Institution: Nanyang Technological University
Similar Items
- Developing intelligent chatbots powered by large language models
  by: Ng, Kai Teck
  Published: (2024)
- Using micros and minis as cues to deception : examining the reliability and trainability.
  by: Quek, Johny Li Qin.
  Published: (2012)
- Medical chatbot interface for large language models
  by: Chua, Yu Hao
  Published: (2024)
- Revolutionising portfolio management with large language model
  by: Kee, Kai Teng
  Published: (2024)
- Event extraction for cybersecurity using large language models
  by: Seah, Kai Heng
  Published: (2024)