Robust Time-Inconsistent Stochastic Control Problems
This paper establishes a general analytical framework for continuous-time stochastic control problems for an ambiguity-averse agent (AAA) with time-inconsistent preferences, where the control problems do not satisfy Bellman's principle of optimality. The AAA is concerned about model uncertainty in the sense that she is not completely confident in the reference model of the controlled Markov state process and instead considers some similar alternative models. The problems of interest are studied within a set of dominated models, and the AAA seeks an optimal decision that is robust with respect to model risks. We adopt a game-theoretic framework and the concept of subgame perfect Nash equilibrium to derive an extended dynamic programming equation and extended Hamilton–Jacobi–Bellman–Isaacs equations that characterize the robust dynamically optimal control of the problem. We also prove a verification theorem to theoretically support our construction of the robust control. To illustrate the tractability of the proposed framework, we study an example of robust dynamic mean–variance portfolio selection under two cases: (1) constant risk aversion and (2) state-dependent risk aversion.
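As context for the mean–variance example mentioned in the abstract, the robust problem can be sketched in standard notation; the symbols below (the wealth process $X_T^{u}$, risk-aversion coefficient $\gamma$, model set $\mathcal{Q}_t$, and penalty $\mathcal{P}$) are illustrative assumptions and are not taken from the paper itself:

```latex
% Illustrative sketch (assumed notation): the agent's strategy u plays
% against nature's choice of model Q from a dominated set Q_t, and the
% robust dynamically optimal (equilibrium) strategy solves, at each (t,x),
\[
  \hat{u}(t,x) \in \arg\max_{u}\; \inf_{Q \in \mathcal{Q}_t}
  \left\{ \mathbb{E}^{Q}_{t,x}\!\left[X_T^{u}\right]
  - \frac{\gamma}{2}\,\mathrm{Var}^{Q}_{t,x}\!\left(X_T^{u}\right)
  + \mathcal{P}(t,x;Q) \right\},
\]
% where P penalizes deviation of Q from the reference model. Because the
% variance term involves (E_{t,x}[X_T^u])^2, a nonlinear function of a
% conditional expectation, Bellman's principle of optimality fails; the
% problem is therefore treated as an intra-personal game whose subgame
% perfect Nash equilibrium is characterized by extended HJB-Isaacs equations.
```

The min–max structure reflects the ambiguity aversion: the agent optimizes against the worst-case model in the dominated set.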
Saved in:
Main Author: Pun, Chi Seng
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2018
Subjects: Robustness; Stochastic Control
Online Access: https://hdl.handle.net/10356/89029 http://hdl.handle.net/10220/44781
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-89029
record_format: dspace
Description: Accepted version. Journal Article, 20 p., application/pdf.
Citation: Pun, C. S. (2018). Robust Time-Inconsistent Stochastic Control Problems. Automatica, 94, 249–257.
ISSN: 0005-1098
DOI: 10.1016/j.automatica.2018.04.038
Rights: © 2018 Elsevier. This is the author-created version of a work that has been peer reviewed and accepted for publication by Automatica, Elsevier. It incorporates the referee's comments, but changes resulting from the publishing process, such as copyediting and structural formatting, may not be reflected in this document. The published version is available at https://dx.doi.org/10.1016/j.automatica.2018.04.038.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Robustness; Stochastic Control
description: This paper establishes a general analytical framework for continuous-time stochastic control problems for an ambiguity-averse agent (AAA) with time-inconsistent preferences, where the control problems do not satisfy Bellman's principle of optimality. The AAA is concerned about model uncertainty in the sense that she is not completely confident in the reference model of the controlled Markov state process and instead considers some similar alternative models. The problems of interest are studied within a set of dominated models, and the AAA seeks an optimal decision that is robust with respect to model risks. We adopt a game-theoretic framework and the concept of subgame perfect Nash equilibrium to derive an extended dynamic programming equation and extended Hamilton–Jacobi–Bellman–Isaacs equations that characterize the robust dynamically optimal control of the problem. We also prove a verification theorem to theoretically support our construction of the robust control. To illustrate the tractability of the proposed framework, we study an example of robust dynamic mean–variance portfolio selection under two cases: (1) constant risk aversion and (2) state-dependent risk aversion.
author2: School of Physical and Mathematical Sciences
format: Article
author: Pun, Chi Seng
title: Robust Time-Inconsistent Stochastic Control Problems
publishDate: 2018
url: https://hdl.handle.net/10356/89029 http://hdl.handle.net/10220/44781