Robust forecast comparison

Forecast accuracy is typically measured in terms of a given loss function. However, as a consequence of the use of misspecified models in multiple model comparisons, relative forecast rankings are loss function dependent. In order to address this issue, a novel criterion for forecast evaluation that utilizes the entire distribution of forecast errors is introduced. In particular, we introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority; and we develop tests for GL (CL) superiority that are based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi, and Whang (2005, Review of Economic Studies 72, 735–765). Our test statistics are characterized by nonstandard limiting distributions, under the null, necessitating the use of resampling procedures to obtain critical values. Additionally, the tests are consistent and have nontrivial local power, under a sequence of local alternatives. The above theory is developed for the stationary case, as well as for the case of heterogeneity that is induced by distributional change over time. Monte Carlo simulations suggest that the tests perform reasonably well in finite samples, and an application in which we examine exchange rate data indicates that our tests can help identify superior forecasting models, regardless of loss function.
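The loss-function dependence that motivates the paper can be seen with a minimal sketch: the same pair of forecast-error series can be ranked either way depending on whether squared or absolute loss is used. The numbers below are illustrative only, not taken from the paper.

```python
# Illustrative sketch (made-up numbers, not from the paper): forecast
# rankings can flip with the choice of loss function.

def mse(errors):
    """Mean squared error: penalizes large misses disproportionately."""
    return sum(e * e for e in errors) / len(errors)

def mae(errors):
    """Mean absolute error: penalizes all misses proportionally."""
    return sum(abs(e) for e in errors) / len(errors)

e_a = [0.0, 0.0, 3.0]   # forecaster A: usually exact, one large miss
e_b = [1.2, 1.2, 1.2]   # forecaster B: consistently small misses

print(mse(e_a), mse(e_b))  # under MSE, B is ranked better
print(mae(e_a), mae(e_b))  # under MAE, A is ranked better
```

Because neither forecaster dominates under every loss function, a criterion based on the whole distribution of forecast errors, as the paper proposes, is needed to declare one robustly superior.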


Bibliographic Details
Main Authors: JIN, Sainan; Corradi, Valentina; Swanson, Norman R.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2016
Online Access: https://ink.library.smu.edu.sg/soe_research/1951
https://ink.library.smu.edu.sg/context/soe_research/article/2950/viewcontent/RobustForecastComparison_2016.pdf
Institution: Singapore Management University
Record ID: sg-smu-ink.soe_research-2950
Record Format: dspace
Collection: Research Collection School Of Economics, InK@SMU (SMU Libraries, Singapore Management University)
Date Published: 2016-10-01
DOI: 10.1017/S0266466616000426
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Keywords: Convex loss function; Empirical processes; Forecast superiority; General loss function; Econometrics