Improving Llama2 in game 24 with memory of thought and tree of thought
Memory of Thought and Tree of Thought are innovative prompting mechanisms designed to enable large language models to self-improve without the reliance on annotated datasets or significant resource expenditure for model fine-tuning. This dissertation integrates Chain of Thought (CoT) prompting to enhance the Memory of Thought (MoT) framework and evaluates the effectiveness of both MoT and Tree of Thought (ToT) mechanisms in improving Llama2’s logic-based problem-solving capabilities. The study compares the performance of the optimized Llama2 model against OpenAI’s ChatGPT-4 using the Game 24 dataset. Additionally, the results are benchmarked against outcomes achieved through fine-tuning approaches. The experimental results indicate that both MoT and ToT successfully enhance the comprehensive reasoning capabilities of Llama2 in Game 24 tasks. Furthermore, our analysis reveals that MoT may provide a more substantial improvement than ToT in addressing logic-based challenges, underscoring its effectiveness in enhancing Llama2’s performance.
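For context, Game 24 gives four numbers and asks whether they can be combined with +, −, ×, and ÷ (using each number exactly once, with any parenthesization) to reach 24. A minimal brute-force checker, not taken from the thesis but illustrating the task the models are evaluated on, can be sketched as:

```python
from itertools import permutations, product

def solve24(nums, target=24, eps=1e-6):
    """Brute-force Game 24: try every ordering of the four numbers,
    every choice of three operators, and all five parenthesizations."""
    ops = ['+', '-', '*', '/']
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(ops, repeat=3):
            # The five distinct parenthesizations of four operands.
            exprs = [
                f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                f"({a}{o1}({b}{o2}{c})){o3}{d}",
                f"({a}{o1}{b}){o2}({c}{o3}{d})",
                f"{a}{o1}(({b}{o2}{c}){o3}{d})",
                f"{a}{o1}({b}{o2}({c}{o3}{d}))",
            ]
            for e in exprs:
                try:
                    if abs(eval(e) - target) < eps:
                        return e  # first valid expression found
                except ZeroDivisionError:
                    continue
    return None  # no combination reaches the target

print(solve24([4, 9, 10, 13]))  # prints one valid expression, e.g. (10-4)*(13-9)
```

Prompting approaches such as ToT effectively explore this same search space step by step in natural language, which is what makes Game 24 a useful benchmark for multi-step reasoning.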
Main Author: Zhang, Yixiang
Other Authors: Lihui Chen
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; MoT; ToT; CoT; Fine-tune; Llama 2; Large language model; Natural language processing
Online Access: https://hdl.handle.net/10356/181809
Institution: Nanyang Technological University
Citation: Zhang, Y. (2024). Improving Llama2 in game 24 with memory of thought and tree of thought. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/181809