DEVELOPMENT AND IMPLEMENTATION OF COMMUNICATION MODIFICATION ON ADAM OPTIMIZER FOR DISTRIBUTED DEEP LEARNING
Main Author:
Format: Final Project
Language: Indonesian
Online Access: https://digilib.itb.ac.id/gdl/view/74150
Institution: Institut Teknologi Bandung
Summary: Deep learning is a technique used in many domains to solve various
problems. One popular example at the time of writing is ChatGPT, which uses
the GPT model developed by OpenAI. A deep learning model consists of several
layers, each with its own parameters. More complex models are developed by
increasing the number of learned parameters.

Increasing the number of parameters in a model causes communication overhead
when training on a distributed architecture. This final project therefore
studies existing techniques for reducing communication overhead by cutting
the number of communication rounds and by compressing what is sent. It
reviews CADA (T. Chen et al., 2021) and Efficient-Adam (C. Chen et al., 2022),
then combines ideas from both, resulting in a technique that needs as little
as 0.97 times the communication rounds of CADA while using only 0.29 times
its communication size.
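The abstract pairs two complementary ideas: skipping communication rounds whose gradients carry little new information (CADA) and compressing the messages that are actually sent (Efficient-Adam). Below is a minimal worker-side sketch of how such a combination can look; the skip rule, the top-k compressor, and every name in it (`LazyCompressedWorker`, `top_k_compress`, `threshold`) are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep only the k largest-magnitude entries of v (a standard sparsifier)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class LazyCompressedWorker:
    """One worker in a parameter-server setup that both skips communication
    rounds and compresses its uploads. Illustrative sketch only."""

    def __init__(self, dim, k, threshold):
        self.last_sent = np.zeros(dim)  # gradient at the last actual upload
        self.error = np.zeros(dim)      # error-feedback memory for compression
        self.k = k                      # coordinates kept by top-k
        self.threshold = threshold      # sensitivity of the skip rule
        self.rounds = 0                 # communication rounds actually used
        self.floats_sent = 0            # rough proxy for communication size

    def step(self, grad):
        # CADA-style lazy aggregation (illustrative rule): upload only when
        # the gradient has changed enough since the last transmission;
        # otherwise the server keeps reusing this worker's stale gradient.
        innovation = np.linalg.norm(grad - self.last_sent) ** 2
        if innovation <= self.threshold * np.linalg.norm(grad) ** 2:
            return None

        # Efficient-Adam-style compression with error feedback: compress the
        # gradient plus the residual left behind by earlier compressions, and
        # remember the new residual so nothing is lost in the long run.
        msg = top_k_compress(grad + self.error, self.k)
        self.error = (grad + self.error) - msg
        self.last_sent = grad
        self.rounds += 1
        self.floats_sent += self.k      # only k values (plus indices) travel
        return msg
```

On the server side, the received messages (together with the stale copies kept for skipping workers) would be averaged and fed into a standard Adam update; the `rounds` and `floats_sent` counters correspond to the two quantities the abstract compares against CADA.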