“Consensus estimates of forecast uncertainty: working out how little we know” James Mitchell, NIESR, June 2005
ESRC Social Science Week
• Thanks to the ESRC for financial support
• Thanks to Stephen Hall (co-author)
Our ESRC “output”… three papers:
1. “Density forecast combination” http://www.niesr.ac.uk/pdf/ssw200605a.pdf
2. “Optimal density forecast combination” http://www.niesr.ac.uk/pdf/ssw200605b.pdf
3. “Evaluating, comparing and combining density forecasts using the KLIC with an application to the Bank of England and NIESR ‘fan’ charts of inflation” http://www.niesr.ac.uk/pdf/ssw200605c.pdf
Forecast Uncertainty
• How do we judge whether forecasts are any good?
• How should a forecaster best acknowledge how little they really know? Surely they are not always surprised when their forecast proves “wrong”?
• How should we judge whether one forecast is better than another?
• Why is it misleading to say one forecast is “better” than another simply because it turns out closer to the subsequent outturn?
• Can we do better if we take some kind of average across competing forecasts of the same event?
Forecasting: state of the art…dismal science
• The importance of forecasts: forward-looking policy
• Point forecasts are better seen as the central points of ranges of uncertainty
• It is not a question of one point forecast proving right and another proving wrong, despite what politicians may say
• Users may not be surprised when inflation is higher than forecast. Indeed, they may not be very surprised if it is much higher
Density Forecasts
• Increased attention is now given to providing measures of the uncertainty associated with forecasts
• Measures of uncertainty surrounding a point forecast can enhance its usefulness, affect the policy response, and are essential with non-quadratic loss functions
• So-called “density” forecasts are being used increasingly, since they give commentators a full impression of forecast uncertainty
• They provide an estimate of the probability distribution of a variable’s possible future values
Production of density forecasts
• Subjective or model-based
• Ex post, we will see that we can evaluate the quality of the assumed density forecast, or of a combined density forecast
• The “fan” chart: the Bank of England uses an asymmetric density forecast based on a two-piece normal distribution, allowing upside and downside risks to differ (sketched below)
• NIESR uses a normal density with the variance estimated from historical forecast errors: how far back should they look?
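A minimal sketch, in Python, of the two-piece normal density that underlies a fan chart of this kind. The mode and the two standard deviations used here are purely illustrative assumptions, not the Bank of England’s published parameters.

```python
import numpy as np

def two_piece_normal_pdf(x, mode, sigma_down, sigma_up):
    """Two-piece normal density: different standard deviations below and above the mode."""
    c = 2.0 / (np.sqrt(2.0 * np.pi) * (sigma_down + sigma_up))
    sigma = np.where(x <= mode, sigma_down, sigma_up)
    return c * np.exp(-0.5 * ((x - mode) / sigma) ** 2)

# Illustrative parameters only: mode of 2% inflation, more upside than downside risk.
grid = np.linspace(-1.0, 6.0, 701)
pdf = two_piece_normal_pdf(grid, mode=2.0, sigma_down=0.6, sigma_up=1.0)
print("integrates to ~1:", np.trapz(pdf, grid))
```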
Bank of England “fan” chart for CPI Inflation: Inflation Report May 2005
Evaluation of Density Forecasts
• Evaluate density forecasts statistically using the “probability integral transform” (pit); analogous to the evaluation of point forecasts using RMSE
• The pits $z_{it}$ for the density forecast $g_{it}$ of $y_t$ (say, inflation) are $z_{it} = \int_{-\infty}^{y_t} g_{it}(u)\,du$
• The $z_{it}$ are i.i.d. uniform (or, via an inverse normal CDF transform, normal) when the density forecast is correct (see the sketch below)
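The sketch below illustrates the pit idea on simulated data: the forecast densities, sample size and the particular uniformity/normality tests are illustrative assumptions, not those used in the papers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: outturns y_t and a normal density forecast for each period,
# summarised by its mean and standard deviation (all names and values are illustrative).
T = 200
fc_mean = rng.normal(2.0, 0.5, T)          # point forecasts
fc_sd = np.full(T, 1.0)                    # assumed forecast standard deviation
y = fc_mean + rng.normal(0.0, 1.0, T)      # outturns consistent with the forecasts

# Probability integral transforms: z_t = G_t(y_t)
z = stats.norm.cdf(y, loc=fc_mean, scale=fc_sd)

# If the densities are correct, z ~ i.i.d. U(0,1); test uniformity directly...
print(stats.kstest(z, "uniform"))

# ...or transform to normality via the inverse normal CDF and test that instead.
z_star = stats.norm.ppf(z)
print(stats.jarque_bera(z_star))
```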
Consensus forecasts
• It is widely appreciated that combination forecasts normally outperform any single forecast
• There are debates about why this happens
• All forecasts are wrong, but in different ways
Consensus estimates of forecast uncertainty
A natural question to ask is: would a combined density forecast also work better? This raises a number of issues:
1. How should we combine density forecasts?
2. How should we evaluate the combined density?
3. How should we test individual densities against each other?
Combining density forecasts
The early OR approaches
Consider N forecasts made by N experts (i = 1, …, N) of a variable $y_t$. If their (continuous) density forecasts are $g_{it}$, then the linear opinion pool is
$$p(y_t) = \sum_{i=1}^{N} w_i\, g_{it}(y_t), \qquad w_i \ge 0, \quad \sum_{i=1}^{N} w_i = 1$$
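A minimal sketch of the linear opinion pool for two hypothetical normal density forecasts; the component means, variances and the equal weights below are illustrative only.

```python
import numpy as np
from scipy import stats

# Two hypothetical normal density forecasts of the same outcome (illustrative parameters).
densities = [stats.norm(loc=2.0, scale=0.8), stats.norm(loc=2.5, scale=1.2)]
weights = np.array([0.5, 0.5])  # equal weights; must be non-negative and sum to one

def opinion_pool_pdf(y, densities, weights):
    """Linear opinion pool: weighted average of the individual forecast densities."""
    return sum(w * d.pdf(y) for w, d in zip(weights, densities))

grid = np.linspace(-2.0, 7.0, 901)
pooled = opinion_pool_pdf(grid, densities, weights)
print("integrates to ~1:", np.trapz(pooled, grid))
```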
Combined density forecasts
• How do we determine the weights $w_i$?
• Equal weights
• Optimal combination: mimic the optimal combination of point forecasts (see the sketch after this list)
• The combined density can have distinct characteristics from those of the individual forecasters; e.g. if all the densities are normal, but with different means and variances, then the combined density is a mixture of normals. But what if the true density is normal?
• Indirect combination: moment by moment
• Bayesian and copula-based combination
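A sketch of one way to make “optimal combination” concrete: choose the pooling weight that maximises the average log score of the combined density over past outturns (equivalently, minimises its estimated KLIC distance from the truth). The component densities and outturns below are hypothetical, and the grid search is only one possible implementation, not necessarily the one used in the papers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y_past = rng.normal(2.2, 1.0, 150)              # hypothetical realised outturns
g1 = stats.norm(2.0, 0.8).pdf(y_past)            # density forecast 1 evaluated at the outturns
g2 = stats.norm(2.5, 1.2).pdf(y_past)            # density forecast 2 evaluated at the outturns

# Grid search over the weight on forecast 1, maximising the average log score of the pool.
w_grid = np.linspace(0.0, 1.0, 101)
log_scores = [np.mean(np.log(w * g1 + (1 - w) * g2)) for w in w_grid]
w_opt = w_grid[int(np.argmax(log_scores))]
print("weight on forecast 1:", w_opt)
```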
The Bank of England and NIESR density forecasts of inflation
Does density forecast combination work?
In-sample and out-of-sample experiments
• Combined density forecasts can, but need not, help
• Combining the Bank and NIESR density forecasts, we find a weight of zero on NIESR
• Combining Bank and time-series forecasts, we find a weight of 0.73 on the time-series forecast and an improvement in accuracy
The tool-kit available to those willing to admit they may get it wrong
• The Kullback-Leibler Information Criterion (KLIC) offers a unified statistical tool to evaluate, compare and combine density forecasts
• The KLIC distance between the true density $f(y)$ and the forecast density $g(y)$ is $\mathrm{KLIC} = \int f(y) \ln\left[\frac{f(y)}{g(y)}\right] dy$
• Existing density forecast evaluation tests based on the pits implicitly test KLIC = 0, but without having to know $f(\cdot)$
• The KLIC can be used to test which density forecast is best: an extension of the Diebold-Mariano test (see the sketch below)
• Basis for Bayesian Model Averaging
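A simplified sketch of a KLIC-based comparison of two density forecasts: because the unknown true density cancels out of the difference in KLICs, the comparison reduces to a Diebold-Mariano-type test on the difference in log scores. The densities and outturns are simulated, and the simple variance below would in practice be replaced by a serial-correlation-robust (HAC) estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.normal(2.2, 1.0, 150)                    # hypothetical outturns
log_g1 = stats.norm(2.0, 1.0).logpdf(y)          # log score of density forecast 1
log_g2 = stats.norm(3.0, 1.5).logpdf(y)          # log score of density forecast 2

d = log_g1 - log_g2                               # positive values favour forecast 1
dm_stat = np.mean(d) / (np.std(d, ddof=1) / np.sqrt(len(d)))
p_value = 2 * (1 - stats.norm.cdf(abs(dm_stat)))
print(f"DM-type statistic {dm_stat:.2f}, p-value {p_value:.3f}")
```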
Conclusions
• Producers of forecasts should be encouraged to indicate how uncertain they are. This is an admission of strength, not weakness, and it facilitates better policy-making
• Combining density forecasts appears promising
• Users require a tool-kit to evaluate, compare and combine density forecasts. This will enable us to work out how little we know and to improve the reliability of these forecasts