Multi-task gradient descent for multi-task learning
Multi-Task Learning (MTL) aims to simultaneously solve a group of related learning tasks by leveraging the salutary knowledge memes contained in the multiple tasks to improve the generalization performance. Many prevalent approaches focus on designing a sophisticated cost function, which integrates...
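The abstract describes the general MTL setup of optimizing several related tasks through a combined cost function. As a rough illustration only (the toy quadratic losses, function names, and averaging scheme below are hypothetical, not the paper's actual algorithm), a shared parameter vector can be updated with the mean of the per-task gradients:

```python
import numpy as np

# Hypothetical toy sketch of multi-task gradient descent: several related
# tasks share one parameter vector w, and each step follows the average
# of the per-task gradients. Task t has toy loss ||w - target_t||^2.

def task_grads(w, targets):
    # Gradient of ||w - t||^2 with respect to w, for each task target t.
    return [2.0 * (w - t) for t in targets]

def multi_task_gd(targets, dim, lr=0.1, steps=200):
    w = np.zeros(dim)
    for _ in range(steps):
        grads = task_grads(w, targets)
        w -= lr * np.mean(grads, axis=0)  # combined (averaged) task gradient
    return w

targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
w = multi_task_gd(targets, dim=2)
# w converges toward the average of the task optima, [0.5, 0.5]
```

With these toy losses the averaged gradient pulls w toward the mean of the task optima; the paper itself concerns more sophisticated ways of integrating the tasks' objectives, which this sketch does not attempt to reproduce.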
Main Authors: Bai, Lu; Ong, Yew-Soon; He, Tiantian; Gupta, Abhishek
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/10356/147806
Institution: Nanyang Technological University
Similar Items
- Lying in pursuit evasion task with multi-agent reinforcement learning
  by: Cheng, Damien Shiao Kiat
  Published: (2022)
- Lifelong multi-agent pathfinding with online tasks
  by: Tay, David Ang Peng
  Published: (2023)
- Deep neural network compression for pixel-level vision tasks
  by: He, Wei
  Published: (2021)
- Multi-source propagation aware network clustering
  by: He, Tiantian, et al.
  Published: (2021)
- Synthetic word embedding generation for downstream NLP task
  by: Hoang, Viet
  Published: (2021)