Multi-task gradient descent for multi-task learning
Multi-Task Learning (MTL) aims to simultaneously solve a group of related learning tasks by leveraging the salutary knowledge memes shared across the tasks to improve generalization performance. Many prevalent approaches focus on designing a sophisticated cost function, which integrates...
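The abstract describes multi-task gradient descent only at a high level. As a hypothetical sketch (not the paper's actual algorithm), the basic idea of sharing parameters across related tasks can be illustrated with plain gradient descent on an equally weighted sum of two regression task losses; all names and data here are illustrative assumptions:

```python
import numpy as np

# Illustrative only: two related linear-regression tasks sharing one weight
# vector w. Each step descends the summed gradient of both task losses.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
X1 = rng.normal(size=(50, 2))
y1 = X1 @ w_true + 0.1 * rng.normal(size=50)
X2 = rng.normal(size=(50, 2))
y2 = X2 @ (w_true + 0.05) + 0.1 * rng.normal(size=50)  # slightly shifted task

def task_loss_grad(X, y, w):
    """Mean squared error and its gradient for one task."""
    r = X @ w - y
    return (r @ r) / len(y), 2 * X.T @ r / len(y)

w = np.zeros(2)
losses = []
for _ in range(200):
    l1, g1 = task_loss_grad(X1, y1, w)
    l2, g2 = task_loss_grad(X2, y2, w)
    losses.append(l1 + l2)
    w -= 0.1 * (g1 + g2)  # follow the combined (equally weighted) gradient
```

Because both tasks are generated from nearly the same weights, the shared descent direction benefits both; real MTL methods instead weight or combine the per-task gradients more carefully, which is what a sophisticated cost function as mentioned above would address.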
Main Authors: Bai, Lu; Ong, Yew-Soon; He, Tiantian; Gupta, Abhishek
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/10356/147806
Institution: Nanyang Technological University
Similar Items
- Lying in pursuit evasion task with multi-agent reinforcement learning
  by: Cheng, Damien Shiao Kiat
  Published: (2022)
- Lifelong multi-agent pathfinding with online tasks
  by: Tay, David Ang Peng
  Published: (2023)
- Deep neural network compression for pixel-level vision tasks
  by: He, Wei
  Published: (2021)
- Multi-source propagation aware network clustering
  by: He, Tiantian, et al.
  Published: (2021)
- Synthetic word embedding generation for downstream NLP task
  by: Hoang, Viet
  Published: (2021)