Llama2 self-improvement using Memory-of-Thought
Memory-of-Thought (MoT) is a recently proposed mechanism that allows LLMs to self-improve without annotated datasets or the expensive resource consumption of fine-tuning. This project evaluates the effectiveness of MoT on the pre-trained large language model Llama2 and compares the performance of the improved L...
Saved in:
Main Author: | |
Other Authors: | |
Format: | Thesis-Master by Coursework |
Language: | English |
Published: | Nanyang Technological University, 2024 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/179097 |
Institution: | Nanyang Technological University |