INCORPORATING PARAMETER-EFFICIENT FINE-TUNING INTO INDOLEM EVALUATION TASK

Fine-tuning is commonly employed as the training approach for evaluating natural language understanding (NLU) tasks. IndoLEM, a pioneering benchmark for Indonesian-language NLU, uses fine-tuning as its training method. Fine-tuning updates all of a model's parameters, which can be costly in terms of memory and training time. Parameter-efficient fine-tuning (PEFT) can train models with performance comparable to full fine-tuning while updating far fewer parameters. In this thesis, several PEFT methods, namely LoRA, Prefix-Tuning, Adapter, and UniPELT, are applied to the IndoLEM evaluation tasks. The aims of this thesis are to incorporate PEFT methods into IndoLEM, compare the performance of each PEFT method, and analyze parameter usage and training time. The IndoLEM codebase was refactored so that PEFT methods could be incorporated, and experiments were then conducted by training models with both full fine-tuning and PEFT. Testing covered three evaluation tasks: named entity recognition (NER), sentiment analysis, and summarization. The experimental results show that PEFT trains only approximately 0.2% to 15% of the model's parameters, with faster training times, while scores on the NER and sentiment analysis tasks fall 0.8% to 6.2% below full fine-tuning. This indicates a trade-off between the number of trainable parameters and the resulting performance. However, Prefix-Tuning and UniPELT failed to provide consistent results on the summarization task.
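As an illustration of the approach summarized above (and not the thesis code itself), the following minimal Python sketch attaches one of the named PEFT methods, LoRA, to an Indonesian BERT checkpoint for the sentiment-analysis task. It assumes the Hugging Face transformers and peft libraries and the publicly released indolem/indobert-base-uncased checkpoint; the LoRA hyperparameters are illustrative placeholders, not the settings used in the thesis.

# Minimal sketch: wrapping an IndoBERT classifier with LoRA via the peft library.
# The checkpoint and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "indolem/indobert-base-uncased",  # IndoBERT checkpoint released by the IndoLEM authors
    num_labels=2,                     # binary sentiment labels
)

# LoRA configuration: low-rank updates injected into the attention projections,
# so only a small fraction of the model's parameters is trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports the small trainable-parameter fraction

The wrapped model can then be trained with an ordinary training loop or the transformers Trainer; only the LoRA weights (and the classification head) receive gradient updates, which is the mechanism behind the reduced parameter counts and shorter training times reported in the abstract.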


Bibliographic Details
Main Author: Prasetya Wicaksana, Adiyansa
Format: Final Project
Language: Indonesian
Online Access: https://digilib.itb.ac.id/gdl/view/85035
Institution: Institut Teknologi Bandung
Record ID: id-itb.:85035
Subjects: Fine-tuning, Parameter-efficient, IndoLEM, NER, Sentiment Analysis, Summarization