Improving Llama2 in Game 24 with Memory of Thought and Tree of Thought

Bibliographic Details
Main Author: Zhang, Yixiang
Other Authors: Lihui Chen
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
MoT
ToT
CoT
Online Access: https://hdl.handle.net/10356/181809
Institution: Nanyang Technological University
Summary: Memory of Thought and Tree of Thought are innovative prompting mechanisms designed to let large language models self-improve without relying on annotated datasets or the significant resource costs of model fine-tuning. This dissertation integrates Chain of Thought (CoT) prompting to enhance the Memory of Thought (MoT) framework and evaluates the effectiveness of both MoT and Tree of Thought (ToT) in improving Llama2’s logic-based problem-solving capabilities. The study compares the performance of the optimized Llama2 model against OpenAI’s ChatGPT-4 on the Game 24 dataset, and additionally benchmarks the results against outcomes achieved through fine-tuning approaches. The experimental results indicate that both MoT and ToT enhance Llama2’s reasoning capabilities on Game 24 tasks. Furthermore, the analysis suggests that MoT may provide a more substantial improvement than ToT on logic-based challenges, underscoring its effectiveness in enhancing Llama2’s performance.
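
For context, the Game 24 benchmark referenced in the summary asks for an arithmetic expression that combines four given numbers with +, -, *, and /, using each number exactly once, so that the result equals 24 (for example, 4, 9, 10, 13 admits (10-4)*(13-9)). The Python sketch below is a plain brute-force solver included only to make the task concrete; it is not the MoT/ToT prompting pipeline evaluated in the dissertation, and the names solve24 and _search are illustrative.

from fractions import Fraction
from itertools import combinations

TARGET = Fraction(24)

def solve24(nums):
    """Return one expression over nums that evaluates to 24, or None."""
    # Track (value, expression) pairs; Fraction arithmetic avoids
    # floating-point error in divisions such as 8 / (3 - 8/3) = 24.
    return _search([(Fraction(n), str(n)) for n in nums])

def _search(items):
    if len(items) == 1:
        value, expr = items[0]
        return expr if value == TARGET else None
    # Combine any two entries with each operation (both operand orders
    # for the non-commutative - and /), then recurse on the shorter list.
    for i, j in combinations(range(len(items)), 2):
        (av, ae), (bv, be) = items[i], items[j]
        rest = [items[k] for k in range(len(items)) if k not in (i, j)]
        candidates = [
            (av + bv, f"({ae}+{be})"),
            (av * bv, f"({ae}*{be})"),
            (av - bv, f"({ae}-{be})"),
            (bv - av, f"({be}-{ae})"),
        ]
        if bv != 0:
            candidates.append((av / bv, f"({ae}/{be})"))
        if av != 0:
            candidates.append((bv / av, f"({be}/{ae})"))
        for value, expr in candidates:
            found = _search(rest + [(value, expr)])
            if found:
                return found
    return None

print(solve24([4, 9, 10, 13]))  # prints an expression such as ((13-9)*(10-4))

Where this solver enumerates exhaustively, the prompting mechanisms studied in the thesis instead have Llama2 generate and assess intermediate reasoning: roughly speaking, ToT explores a tree of partial results such as "13 - 9 = 4 (remaining: 4, 4, 10)" and prunes unpromising branches, while MoT retrieves the model’s own previously saved reasoning traces to use as demonstrations.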