Evaluating the carbon footprint of code implementation

Bibliographic Details
Main Author: Tar, Sreeja
Other Authors: Lim Wei Yang Bryan
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/181172
Institution: Nanyang Technological University
Description
Summary: This project evaluates the carbon footprint of large language models (LLMs). Three models are examined – Meta’s LLaMA-2 (7-billion-parameter configuration), Mistral (7-billion-parameter configuration), and Google’s Gemma (2-billion and 7-billion-parameter configurations). The emissions these models generate during the fine-tuning phase are measured for three tasks – question answering, text summarisation, and sentiment analysis. The report also examines how emissions differ across GPUs. Finally, the project explores the impact of model optimisation on emissions and offers recommendations for reducing the carbon footprint of the selected models.
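
The record does not name the measurement tooling used. The following is a minimal sketch, assuming the open-source CodeCarbon library alongside Hugging Face Transformers, of how emissions from a fine-tuning run could be tracked. The model checkpoint, dataset, and hyperparameters are illustrative assumptions, not the project’s actual configuration.

# Hypothetical sketch: measuring fine-tuning emissions with CodeCarbon.
# The model, dataset, and hyperparameters below are assumptions chosen
# for illustration; the record does not specify the configuration used.
from codecarbon import EmissionsTracker
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # one of the evaluated 7B models

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

# Sentiment analysis is one of the three evaluated tasks; IMDB is an
# assumed stand-in dataset with binary labels.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

train = (load_dataset("imdb")["train"]
         .shuffle(seed=42).select(range(1000))
         .map(tokenize, batched=True))

args = TrainingArguments(output_dir="emissions-run",
                         per_device_train_batch_size=4,
                         num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train)

# CodeCarbon samples CPU/GPU power draw while the tracker is running and
# converts the measured energy to kg CO2-eq via grid carbon intensity.
tracker = EmissionsTracker(project_name="llm-finetune-emissions")
tracker.start()
trainer.train()
emissions_kg = tracker.stop()  # estimated kg CO2-eq for this run
print(f"Estimated fine-tuning emissions: {emissions_kg:.4f} kg CO2-eq")

Repeating such a run on different GPUs, or on an optimised variant of a model, would yield the per-hardware and per-optimisation comparisons the summary describes.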