Improving Llama2 in Game 24 with Memory of Thought and Tree of Thought
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181809
Institution: Nanyang Technological University
Language: English
Summary: Memory of Thought and Tree of Thought are prompting mechanisms designed to enable large language models to self-improve without relying on annotated datasets or expending significant resources on model fine-tuning. This dissertation integrates Chain of Thought (CoT) prompting to enhance the Memory of Thought (MoT) framework and evaluates the effectiveness of both MoT and Tree of Thought (ToT) in improving Llama2's logic-based problem-solving capabilities. The study compares the performance of the optimized Llama2 model against OpenAI's ChatGPT-4 on the Game 24 dataset, and additionally benchmarks the results against outcomes achieved through fine-tuning approaches. The experimental results indicate that both MoT and ToT enhance Llama2's reasoning capabilities on Game 24 tasks. Furthermore, the analysis suggests that MoT may provide a more substantial improvement than ToT on logic-based challenges, underscoring its effectiveness in enhancing Llama2's performance.
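For context on the benchmark the abstract refers to: in Game 24, a player receives four numbers and must combine them with +, -, *, and / to reach 24. Tree of Thought prompting has the model explore a tree of intermediate states (partial combinations of the numbers), which mirrors a classical exhaustive search. Below is a minimal, hypothetical Python sketch of that search tree, not code from the dissertation; the function name and structure are illustrative assumptions.

```python
from fractions import Fraction


def solve_24(numbers, target=24):
    """Exhaustively search the Game 24 state tree.

    Each node is a multiset of remaining values; a branch picks two
    values, combines them with one operator, and recurses on the
    smaller list -- the same tree a ToT prompt asks the model to walk.
    Returns an expression string reaching `target`, or None.
    """
    # Fractions keep division exact, so equality checks are reliable.
    vals = [Fraction(n) for n in numbers]

    def search(vals, exprs):
        if len(vals) == 1:
            return exprs[0] if vals[0] == target else None
        for i in range(len(vals)):
            for j in range(len(vals)):
                if i == j:
                    continue
                rest = [vals[k] for k in range(len(vals)) if k not in (i, j)]
                rest_e = [exprs[k] for k in range(len(exprs)) if k not in (i, j)]
                a, b = vals[i], vals[j]
                ea, eb = exprs[i], exprs[j]
                # Candidate child states; division only when defined.
                children = [(a + b, f"({ea}+{eb})"),
                            (a - b, f"({ea}-{eb})"),
                            (a * b, f"({ea}*{eb})")]
                if b != 0:
                    children.append((a / b, f"({ea}/{eb})"))
                for val, expr in children:
                    found = search(rest + [val], rest_e + [expr])
                    if found:
                        return found
        return None

    return search(vals, [str(n) for n in numbers])
```

Where the brute-force search visits every branch, the ToT and MoT approaches evaluated in the thesis instead rely on the language model to propose and prune promising branches, trading completeness for far fewer expansions.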