Publication: Data-Driven Model Evaluation


Title: Data-Driven Model Evaluation
Authors/Editors: J. Racine, C. Parmeter
Where published: Working paper
How published: Technical Report
Year: 2009
When comparing two competing approximate models, the one with the smallest "expected true error" is closest to the data generating process (according to the specified loss function) and is therefore to be preferred. In this paper we consider a data-driven method of testing whether two competing approximate models, for instance a parametric and a nonparametric model, are equivalent in terms of their expected true error (i.e., their expected performance on unseen data drawn from the same data generating process). The proposed test is quite flexible with regard to the types of models and data that can be compared (e.g., time series, cross sections, panels). Moreover, by applying our method in time-series settings we can overcome two of the drawbacks associated with approaches that are popular and dominant among practitioners, namely, their reliance on only one split of the data and the need for a sufficiently large hold-out sample in order for the test to have power. Some useful graphical summaries are also presented. Finite-sample performance and several illustrative applications are considered.
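To make the idea concrete, the following sketch compares the out-of-sample squared prediction error of a parametric model (a linear fit) against a nonparametric one (a simple k-nearest-neighbour regression, standing in for a kernel estimator) across many random splits of the data. The data generating process, the choice of k, the number of splits, and the naive paired statistic at the end are all illustrative assumptions, not the paper's actual procedure; the paper's test accounts for the dependence induced by overlapping splits, which this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data from a mildly nonlinear DGP (illustrative assumption only).
n = 200
x = rng.uniform(-2, 2, n)
y = np.sin(x) + 0.3 * rng.standard_normal(n)

def linear_predict(x_tr, y_tr, x_te):
    """Fit y = a + b*x by least squares; predict at x_te (parametric model)."""
    X = np.column_stack([np.ones_like(x_tr), x_tr])
    coef, *_ = np.linalg.lstsq(X, y_tr, rcond=None)
    return coef[0] + coef[1] * x_te

def knn_predict(x_tr, y_tr, x_te, k=10):
    """k-nearest-neighbour regression (stand-in for a nonparametric model)."""
    preds = np.empty_like(x_te)
    for i, x0 in enumerate(x_te):
        idx = np.argsort(np.abs(x_tr - x0))[:k]
        preds[i] = y_tr[idx].mean()
    return preds

# Repeated random splits rather than a single hold-out: record the
# difference in average squared prediction error on each split.
S, n_test = 500, 50
d = np.empty(S)  # MSPE(parametric) - MSPE(nonparametric), one per split
for s in range(S):
    perm = rng.permutation(n)
    te, tr = perm[:n_test], perm[n_test:]
    e_lin = np.mean((y[te] - linear_predict(x[tr], y[tr], x[te])) ** 2)
    e_knn = np.mean((y[te] - knn_predict(x[tr], y[tr], x[te])) ** 2)
    d[s] = e_lin - e_knn

# Naive paired statistic on the split-wise differences (the split-wise errors
# are dependent, so this is an illustration, not a valid test).
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(S))
print(f"mean MSPE difference: {d.mean():.4f}, naive t-stat: {t_stat:.2f}")
```

A positive mean difference suggests the nonparametric model predicts better on unseen data; averaging over many splits avoids conclusions that hinge on one arbitrary partition of the sample.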