Llama2 self-improvement using memory-of-thought
Memory-of-Thought (MoT) is a recently proposed mechanism that lets LLMs self-improve without annotated datasets or the expensive resource consumption of fine-tuning. This project evaluates the effectiveness of MoT on the pre-trained large language model Llama2 and compares the performance of the improved L...
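The abstract names MoT only briefly. As a rough illustration, the Python sketch below assumes the commonly described two-stage formulation: a pre-thinking pass that answers unlabeled questions with chain-of-thought prompting and memorizes the self-consistent ones, and a recall step that retrieves similar memorized thoughts as demonstrations at test time. The `generate` placeholder, the majority-vote filter, and the token-overlap retriever are illustrative assumptions, not the implementation evaluated in this thesis.

```python
# Minimal sketch of a Memory-of-Thought (MoT) loop under the assumptions above.
from collections import Counter
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    question: str
    thought: str   # chain-of-thought text produced during pre-thinking
    answer: str


def generate(prompt: str) -> str:
    """Placeholder for a Llama2 call (e.g. a locally loaded checkpoint)."""
    return "Let's think step by step...\nanswer: ..."


def token_overlap(a: str, b: str) -> float:
    """Crude lexical similarity, standing in for a learned retriever."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    shared = sum((ta & tb).values())
    return shared / max(1, max(sum(ta.values()), sum(tb.values())))


def pre_think(unlabeled_questions: list[str], samples: int = 5) -> list[MemoryEntry]:
    """Stage 1: answer unlabeled questions with CoT prompting and keep only
    self-consistent (majority-vote) results as memory."""
    memory = []
    for q in unlabeled_questions:
        outputs = [generate(f"Q: {q}\nLet's think step by step.") for _ in range(samples)]
        answers = Counter(o.splitlines()[-1] for o in outputs)
        answer, votes = answers.most_common(1)[0]
        if votes >= samples // 2 + 1:  # simple confidence filter
            thought = next(o for o in outputs if o.splitlines()[-1] == answer)
            memory.append(MemoryEntry(q, thought, answer))
    return memory


def recall_and_answer(question: str, memory: list[MemoryEntry], k: int = 2) -> str:
    """Stage 2: retrieve the k most similar memorized thoughts and prepend
    them as demonstrations before answering the new question."""
    demos = sorted(memory, key=lambda m: token_overlap(question, m.question), reverse=True)[:k]
    prompt = "".join(f"Q: {m.question}\n{m.thought}\n\n" for m in demos)
    prompt += f"Q: {question}\nLet's think step by step."
    return generate(prompt)
```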
| Main Author: | Dong, Yuxiu |
|---|---|
| Other Authors: | Lihui Chen |
| Format: | Thesis-Master by Coursework |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/179097 |
| Institution: | Nanyang Technological University |
Similar Items
- Improving Llama2 in game 24 with memory of thought and tree of thought
  by: Zhang, Yixiang
  Published: (2024)
- Performance analysis of Llama 2 among other LLMs
  by: HUANG, Donghao, et al.
  Published: (2024)
- Neural abstractive summarization: improvements at the sequence-level
  by: Ravaut, Mathieu
  Published: (2024)
- Punctuation restoration for speech transcripts using large language models
  by: Liu, Changsong
  Published: (2024)
- Wisdom in Chinese thought and biblical writings: A comparative study in religious education
  by: Young, William T.
  Published: (1999)