Llama2 self-improvement using memory-of-thought

Bibliographic Details
Main Author: Dong, Yuxiu
Other Authors: Chen, Lihui
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/179097
Institution: Nanyang Technological University
Description
Summary: Memory-of-Thought (MoT) is a newly proposed mechanism that lets LLMs self-improve without annotated datasets or the expensive resource consumption of fine-tuning. This project evaluates the effectiveness of MoT on the pre-trained large language model Llama2 and compares the performance of the improved Llama2 with the ChatGPT 3.5 API on 10 benchmark datasets. The experiments demonstrate that MoT improves the comprehensive reasoning capabilities of Llama2 on these downstream applications. Based on the experimental study, we also find that MoT tends to yield a more substantial improvement in scenarios where a chain-of-thought (CoT) process is employed. Further analysis indicates that MoT produces a greater percentage improvement on Llama2 than on ChatGPT. Additionally, experiments on long conversations show that MoT can improve the performance of Llama2 in long open-domain conversations, resulting in better consistency, engagingness, and response selection.
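
The summary names the MoT mechanism but does not spell out how it operates. As a rough orientation only, the Python sketch below illustrates the general two-stage, memory-then-recall idea associated with MoT: the model first answers unlabeled questions with chain-of-thought and stores its most consistent thoughts, then at test time retrieves relevant stored thoughts as demonstrations. Everything here is an illustrative assumption rather than the thesis code: llm_generate is a hypothetical stand-in for a Llama2 (or ChatGPT) call, and the consistency threshold and word-overlap retriever are deliberately simplified.

    # Minimal illustrative sketch of a memory-of-thought style pipeline.
    # llm_generate is a hypothetical placeholder, NOT a real Llama2/ChatGPT API.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class MemoryEntry:
        question: str
        rationale: str   # the stored chain-of-thought
        answer: str

    def llm_generate(prompt: str) -> str:
        """Hypothetical LLM call; replace with an actual Llama2 inference call."""
        raise NotImplementedError

    def build_memory(unlabeled_questions, samples_per_question=5):
        """Pre-thinking stage: answer unlabeled questions with CoT and keep only
        the ones the model answers consistently (a rough confidence proxy)."""
        memory = []
        for q in unlabeled_questions:
            outputs = [llm_generate(f"Q: {q}\nLet's think step by step.")
                       for _ in range(samples_per_question)]
            # Assumes each output ends with a final line of the form "Answer: ...".
            answers = [o.rsplit("Answer:", 1)[-1].strip() for o in outputs]
            top_answer, count = Counter(answers).most_common(1)[0]
            if count / samples_per_question >= 0.6:   # keep high-consistency thoughts
                rationale = next(o for o, a in zip(outputs, answers) if a == top_answer)
                memory.append(MemoryEntry(q, rationale, top_answer))
        return memory

    def retrieve(memory, question, k=2):
        """Recall stage: pick the k stored thoughts most similar to the new question
        (simple word overlap here; a real setting would use a stronger retriever)."""
        q_words = set(question.lower().split())
        scored = sorted(memory,
                        key=lambda m: len(q_words & set(m.question.lower().split())),
                        reverse=True)
        return scored[:k]

    def answer_with_mot(memory, question):
        """Test time: prepend retrieved memories as demonstrations, then answer."""
        demos = "\n\n".join(f"Q: {m.question}\n{m.rationale}\nAnswer: {m.answer}"
                            for m in retrieve(memory, question))
        prompt = f"{demos}\n\nQ: {question}\nLet's think step by step."
        return llm_generate(prompt)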