Multi-task gradient descent for multi-task learning

Multi-Task Learning (MTL) aims to simultaneously solve a group of related learning tasks by leveraging the salutary knowledge memes contained in the multiple tasks to improve the generalization performance. Many prevalent approaches focus on designing a sophisticated cost function, which integrates all the learning tasks and explores the task-task relationship in a predefined manner. Different from previous approaches, in this paper, we propose a novel Multi-task Gradient Descent (MGD) framework, which improves the generalization performance of multiple tasks through knowledge transfer. The uniqueness of MGD lies in assuming individual task-specific learning objectives at the start, but with the cost functions implicitly changing during the course of parameter optimization based on task-task relationships. Specifically, MGD optimizes the individual cost function of each task using a reformative gradient descent iteration, where relations to other tasks are facilitated through effectively transferring parameter values (serving as the computational representations of memes) from other tasks. Theoretical analysis shows that the proposed framework is convergent under any appropriate transfer mechanism. Compared with existing MTL approaches, MGD provides a novel easy-to-implement framework for MTL, which can mitigate negative transfer in the learning procedure by asymmetric transfer. The proposed MGD has been compared with both classical and state-of-the-art approaches on multiple MTL datasets. The competitive experimental results validate the effectiveness of the proposed algorithm.
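
To make the kind of update the abstract describes concrete, below is a minimal Python sketch of a transfer-coupled gradient iteration. It is an illustration under stated assumptions, not the authors' exact MGD rule: the function `mgd_sketch`, the row-stochastic `transfer` matrix, and the mix-then-descend ordering are choices made here purely for exposition.

```python
import numpy as np

def mgd_sketch(grads, thetas, transfer, lr=0.01, iters=100):
    """Illustrative transfer-coupled gradient descent across K tasks.

    A minimal sketch, not the paper's exact MGD update: each task keeps its
    own parameter vector and cost function; every iteration first mixes in
    parameter values from the other tasks via a (possibly asymmetric)
    row-stochastic transfer matrix, then takes an ordinary gradient step on
    the task's own cost.

    grads    -- grads[i](theta) returns the gradient of task i's cost at theta
    thetas   -- list of K initial parameter vectors (NumPy arrays)
    transfer -- (K, K) array, rows summing to 1; transfer[i, j] is the weight
                task i places on task j's parameters
    """
    thetas = [t.astype(float).copy() for t in thetas]
    K = len(thetas)
    for _ in range(iters):
        # Knowledge transfer: blend each task's parameters with the others'.
        mixed = [sum(transfer[i, j] * thetas[j] for j in range(K)) for i in range(K)]
        # Task-specific descent: each task minimises only its own cost.
        thetas = [mixed[i] - lr * grads[i](mixed[i]) for i in range(K)]
    return thetas


# Toy usage: two quadratic tasks with nearby optima.
if __name__ == "__main__":
    grads = [lambda th: 2 * (th - 1.0), lambda th: 2 * (th - 1.2)]
    thetas = [np.zeros(3), np.zeros(3)]
    transfer = np.array([[0.9, 0.1],
                         [0.2, 0.8]])  # asymmetric: the second task leans more on the first
    print(mgd_sketch(grads, thetas, transfer))
```

Because the transfer matrix in this sketch need not be symmetric, a task that would be hurt by another can assign it a near-zero weight while still donating its own parameters, which is one simple way the asymmetric transfer described above could mitigate negative transfer.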

Bibliographic Details
Main Authors: Bai, Lu; Ong, Yew-Soon; He, Tiantian; Gupta, Abhishek
Other Authors: School of Computer Science and Engineering; Data Science and Artificial Intelligence Research Centre; Singapore Institute of Manufacturing Technology
Format: Article
Language: English
Published: 2021
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Knowledge Transfer; Cost Functions
Online Access: https://hdl.handle.net/10356/147806
Institution: Nanyang Technological University
Published in: Memetic Computing, 2020, 12(4), 355-369
Citation: Bai, L., Ong, Y., He, T. & Gupta, A. (2020). Multi-task gradient descent for multi-task learning. Memetic Computing, 12(4), 355-369. https://dx.doi.org/10.1007/s12293-020-00316-3
ISSN: 1865-9292
DOI: 10.1007/s12293-020-00316-3
Scopus: 2-s2.0-85092731235
ORCID: 0000-0003-4882-6672
Version: Accepted version
Funding agencies: AI Singapore; National Research Foundation (NRF)
Funding: This work was supported in part by the A*STAR Cyber-Physical Production System (CPPS) - Towards Contextual and Intelligent Response Research Program under the RIE2020 IAF-PP Grant A19C1a0018, the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-004), and the Data Science & Artificial Intelligence Research Centre, Nanyang Technological University.
Rights: © 2020 Springer-Verlag Berlin Heidelberg. This is a post-peer-review, pre-copyedit version of an article published in Memetic Computing. The final authenticated version is available online at: http://dx.doi.org/10.1007/s12293-020-00316-3