Improving Llama2 in game 24 with memory of thought and tree of thought
Memory-of-Thought and Tree-of-Thought are prompting mechanisms designed to let large language models self-improve without relying on annotated datasets or the significant resources required for model fine-tuning. This dissertation integrates Chain-of-Thought (CoT) prompting to en...
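The abstract references Tree-of-Thought prompting on the Game of 24 benchmark (combine four numbers with +, -, ×, ÷ to reach 24). As a rough illustration of the search structure involved, here is a minimal sketch in which exhaustive arithmetic enumeration stands in for the LLM-driven proposal and evaluation steps; the function names `propose` and `solve24` are hypothetical and are not taken from the thesis.

```python
from itertools import combinations

def propose(nums):
    # Tree-of-Thought "thought generation" step (here brute-forced):
    # combine any two numbers with +, -, *, / to get successor states.
    succs = []
    for (i, a), (j, b) in combinations(enumerate(nums), 2):
        rest = [n for k, n in enumerate(nums) if k not in (i, j)]
        results = {a + b, a * b, a - b, b - a}
        if a != 0:
            results.add(b / a)
        if b != 0:
            results.add(a / b)
        for v in results:
            succs.append(rest + [v])
    return succs

def solve24(nums):
    # Breadth-first search over intermediate states ("thoughts"):
    # each level merges two numbers, so n numbers need n - 1 levels.
    frontier = [[float(n) for n in nums]]
    for _ in range(len(nums) - 1):
        frontier = [s for state in frontier for s in propose(state)]
    return any(abs(state[0] - 24) < 1e-6 for state in frontier)
```

In the dissertation's actual setting, Llama2 would generate and score candidate intermediate states and prune low-value branches, rather than exhaustively expanding every arithmetic combination as this sketch does.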
| Main Author: | Zhang, Yixiang |
|---|---|
| Other Authors: | Lihui Chen |
| Format: | Thesis-Master by Coursework |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Online Access: | https://hdl.handle.net/10356/181809 |
| Institution: | Nanyang Technological University |
Similar Items
- Llama2 self-improvement using memory-of-thought
  by: Dong, Yuxiu
  Published: (2024)
- Evaluating the carbon footprint of code implementation
  by: Tar, Sreeja
  Published: (2024)
- Chùa Một Cột xưa và nay [The One Pillar Pagoda, past and present]
  by: Lê, Khánh
  Published: (2017)
- SIBO : A simple booster for parameter-efficient fine-tuning
  by: WEN, Zhihao, et al.
  Published: (2024)
- Thoughts to target : enhance planning for target-driven conversation
  by: ZHENG, Zhonghua, et al.
  Published: (2024)