Llama2 self-improvement using memory-of-thought

Memory-of-Thought (MoT) is a recently proposed mechanism that lets large language models (LLMs) self-improve without annotated datasets or the heavy resource consumption of fine-tuning. This project evaluates the effectiveness of MoT on the pre-trained LLM Llama2 and compares the improved Llama2 against the ChatGPT-3.5 API on 10 benchmark datasets. The experiments demonstrate that MoT improves Llama2's reasoning capabilities across these downstream tasks. The experimental study also suggests that MoT yields a more substantial improvement in scenarios where the chain-of-thought (CoT) process is employed, and further analysis indicates a greater percentage improvement from MoT on Llama2 than on ChatGPT. Additionally, experiments on long open-domain conversations show that MoT improves Llama2's performance there, yielding better consistency, engagingness, and response selection.
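The mechanism MoT relies on can be sketched in a few lines. In its pre-thinking stage the model answers unlabeled questions with CoT reasoning and stores only the rationales whose final answers win a majority vote; at inference time it recalls the most similar stored thoughts as few-shot demonstrations. The Python sketch below is a minimal illustration of that loop, not the thesis's implementation: the generate callable stands in for any Llama2 completion interface, and the sentence-transformers retriever, prompt wording, and vote threshold are illustrative assumptions.

from collections import Counter
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed retrieval model

def pre_think(generate, unlabeled_questions, n_samples=5):
    # Stage 1 (pre-thinking): sample several CoT answers per unlabeled
    # question and keep a rationale only when its final answer wins a
    # majority vote, since no gold labels are available.
    memory = []
    for question in unlabeled_questions:
        prompt = f"Q: {question}\nA: Let's think step by step."
        samples = [generate(prompt) for _ in range(n_samples)]
        finals = [s.strip().splitlines()[-1] for s in samples]  # crude answer extraction
        answer, votes = Counter(finals).most_common(1)[0]
        if votes > n_samples // 2:
            thought = samples[finals.index(answer)]
            memory.append({"question": question,
                           "thought": thought,
                           "emb": embedder.encode(question)})
    return memory

def recall_and_answer(generate, memory, question, k=3):
    # Stage 2 (recall): retrieve the k most similar stored thoughts and
    # prepend them as few-shot demonstrations for the new question.
    q_emb = embedder.encode(question)
    nearest = sorted(memory,
                     key=lambda m: -util.cos_sim(q_emb, m["emb"]).item())[:k]
    demos = "\n\n".join(f"Q: {m['question']}\nA: {m['thought']}" for m in nearest)
    return generate(f"{demos}\n\nQ: {question}\nA: Let's think step by step.")

Note that no gradient update occurs anywhere in this loop, which is why MoT sidesteps the cost of fine-tuning entirely.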

Bibliographic Details
Main Author: Dong, Yuxiu
Other Authors: Lihui Chen (School of Electrical and Electronic Engineering)
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects: Engineering; Memory-of-thought; Llama2; Large language model; Natural language processing
Online Access: https://hdl.handle.net/10356/179097
Citation: Dong, Y. (2024). Llama2 self-improvement using memory-of-thought. Master's thesis, Nanyang Technological University, Singapore.