Online deep learning: Learning deep neural networks on the fly

Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch setting, requiring the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is more challenging as the optimization objective is non-convex, and regular DNN with standard backpropagation does not work well in practice for online settings. We present a new ODL framework that attempts to tackle the challenges by learning DNN models which dynamically adapt depth from a sequence of training data in an online learning setting. Specifically, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy on large data sets (both stationary and concept drifting scenarios).
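
As a rough illustration of the Hedge Backpropagation idea described above, the PyTorch sketch below attaches an output classifier to every hidden layer, predicts with an alpha-weighted vote over those classifiers, and updates the alpha weights multiplicatively from each classifier's loss on the incoming example. It is a minimal sketch under assumed settings: the class name HedgeBackpropNet, the layer sizes, the discount factor beta, and the smoothing floor are illustrative choices, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HedgeBackpropNet(nn.Module):
    """DNN with one classifier per hidden layer, combined by hedge weights."""

    def __init__(self, in_dim, hidden_dim, n_classes, n_layers=4,
                 beta=0.99, smoothing=0.2):
        super().__init__()
        self.hidden = nn.ModuleList()
        self.heads = nn.ModuleList()
        dim = in_dim
        for _ in range(n_layers):
            self.hidden.append(nn.Linear(dim, hidden_dim))
            self.heads.append(nn.Linear(hidden_dim, n_classes))
            dim = hidden_dim
        # One hedge weight (alpha) per per-layer classifier, kept normalized.
        self.register_buffer("alpha", torch.full((n_layers,), 1.0 / n_layers))
        self.beta = beta            # discounts layers whose classifiers incur large loss
        self.smoothing = smoothing  # floor so deeper layers keep receiving updates

    def forward(self, x):
        # Return the logits of every per-layer classifier.
        logits, h = [], x
        for layer, head in zip(self.hidden, self.heads):
            h = torch.relu(layer(h))
            logits.append(head(h))
        return logits

    def predict(self, x):
        # Final prediction: alpha-weighted ensemble of the per-layer classifiers.
        probs = [F.softmax(z, dim=-1) for z in self.forward(x)]
        return sum(a * p for a, p in zip(self.alpha, probs))

    @torch.no_grad()
    def hedge_update(self, losses):
        # Multiplicative-weights step: layers with smaller loss gain weight.
        self.alpha *= self.beta ** torch.as_tensor(losses)
        self.alpha = torch.clamp(self.alpha, min=self.smoothing / len(self.alpha))
        self.alpha /= self.alpha.sum()


def online_step(model, optimizer, x, y):
    # One pass over a single example (x, y) arriving from the stream.
    losses = [F.cross_entropy(z, y) for z in model(x)]
    # Backpropagate each classifier's loss weighted by its hedge weight.
    total = sum(a * l for a, l in zip(model.alpha, losses))
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    model.hedge_update([l.item() for l in losses])


# Example usage on a synthetic stream of 20-dimensional examples, 5 classes.
model = HedgeBackpropNet(in_dim=20, hidden_dim=64, n_classes=5)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(100):
    x, y = torch.randn(1, 20), torch.randint(0, 5, (1,))
    online_step(model, opt, x, y)

In this scheme the alpha weights effectively select the network depth online: shallow classifiers carry most of the weight early in the stream, and deeper ones gain weight once they have seen enough data to outperform them.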

Bibliographic Details
Main Authors: SAHOO, Doyen; PHAM, Hong Quang; LU, Jing; HOI, Steven C. H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2018
Subjects: Neural Networks; Online Learning; Time-series; Data Streams; Machine Learning; Deep Learning; Databases and Information Systems; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/4083
https://ink.library.smu.edu.sg/context/sis_research/article/5086/viewcontent/7._May01_2018___Online_Deep_Learning_Learning_Deep_Neural_Networks_on_the_Fly__IJCAI2018_.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-5086
record_format dspace
spelling sg-smu-ink.sis_research-5086 2020-03-26T07:39:23Z
date 2018-07-01T07:00:00Z
doi info:doi/10.24963/ijcai.2018/369
license http://creativecommons.org/licenses/by-nc-nd/4.0/
series Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Neural Networks
Online Learning
Time-series
Data Streams
Machine Learning
Deep Learning
Databases and Information Systems
Numerical Analysis and Scientific Computing
description Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch setting, requiring the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is more challenging as the optimization objective is non-convex, and regular DNN with standard backpropagation does not work well in practice for online settings. We present a new ODL framework that attempts to tackle the challenges by learning DNN models which dynamically adapt depth from a sequence of training data in an online learning setting. Specifically, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy on large data sets (both stationary and concept drifting scenarios).
format text
author SAHOO, Doyen
PHAM, Hong Quang
LU, Jing
HOI, Steven C. H.
author_sort SAHOO, Doyen
title Online deep learning: Learning deep neural networks on the fly
publisher Institutional Knowledge at Singapore Management University
publishDate 2018
url https://ink.library.smu.edu.sg/sis_research/4083
https://ink.library.smu.edu.sg/context/sis_research/article/5086/viewcontent/7._May01_2018___Online_Deep_Learning_Learning_Deep_Neural_Networks_on_the_Fly__IJCAI2018_.pdf
_version_ 1770574302680186880