Generalization bounds of ERM-based learning processes for continuous-time Markov chains
Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.
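For orientation, the learning process described in the abstract can be stated in the standard ERM form below. This is a generic sketch using assumed notation (hypothesis class \(\mathcal{F}\), loss \(\ell\), samples \(z_1, \dots, z_N\) observed from the chain, and bound \(\varepsilon\)); it does not reproduce the paper's actual theorems or constants.

```latex
% Generic ERM setup for time-dependent samples (illustrative sketch; the symbols
% \mathcal{F}, \ell, z_n, and \varepsilon are assumed notation, not taken from the paper).
% Requires amsmath and amssymb.
\[
  \widehat{f}_N \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \;
  \frac{1}{N} \sum_{n=1}^{N} \ell\bigl(f, z_n\bigr),
  \qquad z_1, \dots, z_N \text{ observed from a continuous-time Markov chain.}
\]
% A generalization bound controls the gap between expected and empirical risk,
% uniformly over the hypothesis class, with high probability:
\[
  \sup_{f \in \mathcal{F}}
  \Bigl|\, \mathbb{E}\,\ell(f, z) \;-\; \frac{1}{N} \sum_{n=1}^{N} \ell\bigl(f, z_n\bigr) \Bigr|
  \;\le\; \varepsilon(N, \mathcal{F}, \delta)
  \quad \text{with probability at least } 1 - \delta.
\]
```

The deviation and symmetrization inequalities mentioned in the abstract are the tools the authors use to establish bounds of this general form when the samples are Markov-dependent rather than i.i.d.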
Main Authors: | Zhang, Chao; Tao, Dacheng |
---|---|
Other Authors: | School of Computer Engineering |
Format: | Article |
Language: | English |
Published: | 2013 |
Subjects: | DRNTU::Engineering::Computer science and engineering |
Online Access: | https://hdl.handle.net/10356/99342 http://hdl.handle.net/10220/13531 |
Institution: | Nanyang Technological University |
Citation: | Zhang, C., & Tao, D. (2012). Generalization bounds of ERM-based learning processes for continuous-time Markov chains. IEEE Transactions on Neural Networks and Learning Systems, 23(12), 1872-1883. |
ISSN: | 2162-237X |
DOI: | 10.1109/TNNLS.2012.2217987 |
Source: | IEEE Transactions on Neural Networks and Learning Systems |
Rights: | © 2012 IEEE |
Building: | NTU Library |
Country: | Singapore |
Collection: | DR-NTU |