AI foundation models
Foundation models have been a key factor in revolutionizing the way we approach various language tasks today. These models typically boast large numbers of parameters and are pre-trained on vast amounts of text data, enabling them to capture intricate linguistic patterns and relationships. However, fine-tuning these models to perform different downstream tasks has proven difficult, as most users do not have the required computational resources. Consequently, users often pass their data directly to the model owners for fine-tuning. Alternatively, model owners may share the trained weights with users, enabling them to utilise the model for specific tasks. Both of these solutions, however, raise concerns over privacy and model ownership. To address these concerns, offsite tuning has been introduced as an efficient, privacy-preserving framework for users to fine-tune foundation models offsite. The fundamental components of offsite tuning are a lightweight trainable adapter and a compressed emulator. In our research, we experiment with different configurations of the compressed emulator and measure the performance of offsite tuning on various downstream tasks. Our experiments reveal two main findings. First, the various layer-dropping strategies used to compress the emulator affect performance differently depending on the nature of the downstream task. Second, while unfreezing specific sub-layers of the initially frozen compressed emulator is generally not worthwhile given its cost-to-reward ratio, it yields slight improvements on specific datasets, offering valuable insights.
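As a rough illustration of the setup the abstract describes, the sketch below splits a toy transformer-style stack into a small trainable adapter (the first and last blocks) and a frozen emulator built by uniformly dropping middle layers. It is a minimal sketch, not code from this project: the module names (`Block`, `build_emulator`, `OffsiteTunable`), the keep-every-k layer-dropping strategy, and the toy training step are illustrative assumptions.

```python
# Minimal offsite-tuning-style sketch: trainable adapter blocks sandwiching a
# frozen, layer-dropped emulator. All names and choices here are illustrative.
import copy
import torch
import torch.nn as nn


class Block(nn.Module):
    """Stand-in for a transformer block (attention/MLP details omitted)."""
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        return x + self.ff(self.norm(x))


def split_model(blocks, n_adapter):
    """Split a stack of blocks into (head adapter, middle, tail adapter)."""
    return blocks[:n_adapter], blocks[n_adapter:-n_adapter], blocks[-n_adapter:]


def build_emulator(middle, keep_every=2):
    """Compress the middle by uniform layer dropping (keep every k-th block).
    Other dropping strategies would plug in here; the emulator stays frozen."""
    emulator = nn.ModuleList(copy.deepcopy(b) for b in middle[::keep_every])
    for p in emulator.parameters():
        p.requires_grad = False
    return emulator


class OffsiteTunable(nn.Module):
    """User-side model: trainable adapters around the frozen compressed emulator."""
    def __init__(self, head, emulator, tail):
        super().__init__()
        self.head = nn.ModuleList(head)
        self.emulator = emulator
        self.tail = nn.ModuleList(tail)

    def forward(self, x):
        for b in list(self.head) + list(self.emulator) + list(self.tail):
            x = b(x)
        return x


if __name__ == "__main__":
    dim, n_blocks = 64, 12
    full_stack = [Block(dim) for _ in range(n_blocks)]
    head, middle, tail = split_model(full_stack, n_adapter=2)
    model = OffsiteTunable(head, build_emulator(middle, keep_every=2), tail)

    # Only the adapter parameters are optimised; the emulator never updates.
    opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    x, target = torch.randn(8, dim), torch.randn(8, dim)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    opt.step()
    print("trainable params:", sum(p.numel() for p in model.parameters() if p.requires_grad))
```

In the framework the abstract refers to, only the lightweight adapter is trained on the user side, with the compressed emulator standing in for the rest of the frozen foundation model; the experiments summarised above vary how that emulator is compressed and which of its sub-layers, if any, are unfrozen.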
| Main Author: | Chng, Joshua Yuheng |
|---|---|
| Other Authors: | Jun Zhao (School of Computer Science and Engineering, junzhao@ntu.edu.sg) |
| Format: | Final Year Project (FYP) |
| Degree: | Bachelor's degree |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | Computer and Information Science |
| Project code: | SCSE23-0283 |
| Citation: | Chng, J. Y. (2024). AI foundation models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175068 |
| Online Access: | https://hdl.handle.net/10356/175068 |
| Institution: | Nanyang Technological University |
| id | sg-ntu-dr.10356-175068 |
|---|---|
| record_format | dspace |
| institution | Nanyang Technological University |
| building | NTU Library |
| continent | Asia |
| country | Singapore |
| content_provider | NTU Library |
| collection | DR-NTU |
| language | English |
| topic | Computer and Information Science |
| author | Chng, Joshua Yuheng |
| author2 | Jun Zhao |
| format | Final Year Project |
| title | AI foundation models |
| publisher | Nanyang Technological University |
| publishDate | 2024 |
| url | https://hdl.handle.net/10356/175068 |