How Does NCEP/CPC Make Operational Monthly and Seasonal Forecasts? Huug van den Dool (CPC) ESSIC, February 23, 2011
Assorted Underlying Issues
• Jin Huang - R2O (research to operations) - CTB (Climate Test Bed)
• Which tools are used?
• How are tools combined?
• CFSv1 vs CFSv2
• Dynamical vs empirical tools
• Skill of tools and the OFFICIAL forecast
• How easily can a new tool be included?
• US focus, but with an occasional global perspective
Menu of CPC predictions:
• 6-10 day (daily)
• Week 2 (daily)
• Monthly (monthly + update)
• Seasonal (monthly)
• Other: hazards, drought monitor, drought outlook, MJO, UV-index, degree days, POE, SST (some are 'briefings')
• Informal forecast tools (too many to list)
http://www.cpc.ncep.noaa.gov/products/predictions/90day/tools/briefing/index.pri.html
[Figure: example of a publicly issued forecast]
[Figure: skill of individual tools (SMLR, CCA, OCN, LAN, OLD-OTLK, CFSv1, LFQ, ECP, IRI, ECA, CON), each labeled empirical (EMP), dynamical (DYN), consolidation (CON), or N/A; 15 cases: 1950, 54, 55, 56, 64, 68, 71, 74, 75, 76, 85, 89, 99, 00, 08]
Element coverage by method (X = forecast produced):

Method | US-T | US-P | SST | US-soil moisture
CCA | X | X | X |
OCN | X | X | |
CFS | X | X | X | X
SMLR | X | X | |
ECCA | X | X | |
Consolidation | X | X | X |
Constr Analog | X | X | X | X
Markov | | | X |
ENSO Composite | X | X | |
Other (GCM) models (IRI, ECHAM, NCAR, CDC, etc.) | X | X | |

CCA = Canonical Correlation Analysis
OCN = Optimal Climate Normals
CFS = Climate Forecast System (coupled ocean-atmosphere model)
SMLR = Stepwise Multiple Linear Regression
CON = Consolidation
About OCN. Two contrasting views:
• Climate = average weather in the past
• Climate is the 'expectation' of the future
30-year WMO normals: 1961-1990, 1971-2000, etc.
OCN = Optimal Climate Normals: last-K-year average. All seasons/locations pooled: K=10 is optimal (for US T).
Forecast for Jan 2012 = (Jan02 + Jan03 + ... + Jan11)/10 minus the WMO normal, plus a skill evaluation for some 50+ years.
Why does OCN work?
1) Climate is not constant (K would be infinite for a constant climate)
2) Recent averages are better
3) Somewhat shorter averages are better (for T)
See Huang et al., 1996: J. Climate, 9, 809-817.
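To make the OCN recipe concrete, here is a minimal sketch (function and variable names are my own, not CPC code), assuming a year-indexed series of January means:

```python
import numpy as np

def ocn_anomaly(jan_by_year, target_year, k=10,
                normal_start=1971, normal_end=2000):
    """OCN-style anomaly forecast: the mean of the last k Januaries
    minus the 30-year WMO normal (k=10 is optimal for US T)."""
    last_k = [jan_by_year[y] for y in range(target_year - k, target_year)]
    normal = np.mean([jan_by_year[y]
                      for y in range(normal_start, normal_end + 1)])
    return float(np.mean(last_k) - normal)
```

For the Jan 2012 example above, ocn_anomaly(jan_by_year, 2012) averages Jan 2002 through Jan 2011 and subtracts the WMO normal.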
OCN has become the bearer of most of the skill; see also the EOCN method (Peng et al.).
[Figure: GHCN-CAMS (Fan 2008)]
Major Verification Issues
• 'A-priori' verification (used to be rare)
• After-the-fact verification (fairly normal)
[Figure: after-the-fact verification. Source: Peitao Peng]
(Seasonal) forecasts are useless unless accompanied by a reliable a-priori skill estimate. Solution: develop a 50+ year track record for each tool, 1950-present. (Admittedly, we need 5000 years.)
Official Forecast(element, lead, location, initial month) = a*A + b*B + c*C + ...
Honest hindcasts are required, 1950-present. The covariances (A,B), (A,C), (B,C) and (A,obs), (B,obs), (C,obs) allow solution for a, b, c (per element, lead, location, initial month).
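A minimal sketch of how such weights could be solved from hindcasts (a plain least-squares formulation with my own naming; the operational consolidation adds refinements such as the ridging discussed below):

```python
import numpy as np

def consolidation_weights(tools, obs):
    """Solve the normal equations for weights a, b, c, ... that minimize
    the squared error of a*A + b*B + c*C + ... against observations.

    tools: (n_years, n_tools) hindcast anomalies A, B, C, ...
    obs:   (n_years,) observed anomalies (all assumed centered, so
           cross products act as the covariances listed above).
    """
    tools = np.asarray(tools, float)
    obs = np.asarray(obs, float)
    cov_tt = tools.T @ tools   # (A,B), (A,C), (B,C), ...
    cov_to = tools.T @ obs     # (A,obs), (B,obs), (C,obs)
    return np.linalg.solve(cov_tt, cov_to)
```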
Fig. 7.6: The skill (AC x 100) of forecasting NINO3.4 SST by the CA method for the period 1956-2005. The plot has the target season on the horizontal axis and the lead on the vertical axis. Example: NINO3.4 in rolling seasons 2 and 3 (JFM and FMA) is predicted slightly better than 0.7 at a lead of 8 months. An 8-month-lead JFM forecast is made at the end of April of the previous year. A 1-2-1 smoothing was applied in the vertical to reduce noise. [Figure: CA skill, 1956-2005]
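For reference, a minimal sketch of the anomaly correlation (AC) used in such skill maps, assuming paired 1-D arrays of forecast and observed anomalies (names hypothetical):

```python
import numpy as np

def anomaly_correlation(fcst, obs):
    """Centered anomaly correlation; multiply by 100 for the plotted AC x 100."""
    f = np.asarray(fcst, float) - np.mean(fcst)
    o = np.asarray(obs, float) - np.mean(obs)
    return float(f @ o / np.sqrt((f @ f) * (o @ o)))
```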
Peña Mendez, M., and H. van den Dool, 2008: Consolidation of Multi-Method Forecasts at CPC. J. Climate, 21, 6521–6538.
Unger, D., H. van den Dool, E. O'Lenic, and D. Collins, 2009: Ensemble Regression. Mon. Wea. Rev., 137, 2365–2379.
(1) CTB, (2) why do we need 'consolidation'?
[Figure: correlation (COR) of consolidation variants: RIW, RI, RIM, Climo, UR, MMA]
[Figure: systematic error correction (SEC) and cross-validation (CV-3RE)]
Bayesian Multimodel Strategies
• Linear regression leads to unstable weights for small sample sizes.
• Methods for producing more stable estimates have been proposed by van den Dool and Rukhovets (1994), Kharin and Zwiers (2002), Yun et al. (2003), and Robertson et al. (2004).
• These methods are special cases of a Bayesian method, each distinguished by a different set of prior assumptions (DelSole 2007).
• Some reasonable prior assumptions (see the sketch below):
R:0: weights centered about 0 and bounded in magnitude (ridge regression)
R:MM: weights centered about 1/K (K = number of models) and bounded in magnitude
R:MM+R: weights centered about an optimal value and bounded in magnitude
R:S2N: models with a small signal-to-noise (S2N) ratio tend to get small weights
LS: weights unconstrained (ordinary least squares)
From Jim Kinter (Feb 2009)
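To illustrate the R:0, R:MM, and LS cases above, a minimal ridge-regression sketch (my own formulation and naming, not the operational code):

```python
import numpy as np

def ridge_weights(X, y, lam, prior_mean=0.0):
    """Weights minimizing ||y - X w||^2 + lam * ||w - p||^2, where p is a
    constant prior-mean vector.

    lam = 0 recovers ordinary least squares (LS);
    prior_mean = 0 gives R:0; prior_mean = 1/K (K models) gives R:MM.
    """
    n_samples, k = X.shape
    p = np.full(k, prior_mean)
    A = X.T @ X + lam * np.eye(k)   # shrinkage stabilizes small samples
    b = X.T @ y + lam * p
    return np.linalg.solve(A, b)
```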
• If the multimodel strategy is carefully cross-validated, then the simple mean beats all other investigated multimodel strategies.
• Since Bayesian methods involve additional empirical parameters, proper assessment requires a two-deep cross-validation procedure (sketched below). This can change the conclusion about the efficacy of various Bayesian priors.
• Traditional cross-validation procedures are biased and incorrectly indicate that Bayesian schemes beat a simple mean.
From Jim Kinter (Feb 2009)
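A minimal sketch of the two-deep (nested) cross-validation idea, under the same hypothetical naming: the hyper-parameter lam is chosen in an inner leave-one-out loop that never sees the outer test year.

```python
import numpy as np

def ridge_fit(X, y, lam):
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

def two_deep_cv_mse(X, y, lams):
    """Outer loop: leave one year out and score it. Inner loop: pick lam
    by leave-one-out on the remaining years only."""
    n = len(y)
    sq_err = []
    for test in range(n):
        train = np.array([i for i in range(n) if i != test])
        best_lam, best = lams[0], np.inf
        for lam in lams:
            err = sum((X[val] @ ridge_fit(X[train[train != val]],
                                          y[train[train != val]], lam)
                       - y[val]) ** 2 for val in train)
            if err < best:
                best_lam, best = lam, err
        w = ridge_fit(X[train], y[train], best_lam)
        sq_err.append((X[test] @ w - y[test]) ** 2)
    return float(np.mean(sq_err))
```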
Good to know:
• If a model/method has no systematic error (by design), leave it alone; please do not apply a systematic error correction where none is needed.
• There is more (need for) statistics in dynamical models and their post-processing than in any statistical method.
See also: O'Lenic, E.A., D.A. Unger, M.S. Halpert, and K.S. Pelman, 2008: Developments in Operational Long-Range Prediction at CPC. Wea. Forecasting, 23, 496–515.
Empirical tools can be comprehensive (thanks to reanalysis, among other things), and very economical.
[Figure: CA predictions of Z500, SST, T2m, and precip]
[Figure: CFS predictions of SST, Z500, T2m, and precip. Source: Wanqiu Wang]
Is CPC in good shape to admit more tools? Even if these tools come from the outside? Does the outside have the right expectations?
Dynamics & Empiricism: Symbiosis
• Successive generations of reanalyses are produced because NWP exists (the desire for initial conditions, etc.), but empirical prediction methods are among the main beneficiaries of having all this data.
• Early empirical methods have served as an example (for anybody to follow) of how to produce (honest) hindcasts, a-priori skill assessments, cross validation, etc.
• Consolidation (of tools) is "color-blind" with respect to dynamical or empirical origin.
Progress in Empirical Methods
• More data to work with (1 year per year)
• More (and global) data to work with (reanalyses): oceans, land, atmosphere (note the symbiotic relationship with modeling here)
• New empirical methods
• New applications
• Inch-by-inch methods
• Revolutionary changes
• Always be ready for surprises
Why are empirical methods competitive with dynamical methods (in seasonal prediction)? Linearity (to be defined): a system with skill in <= 3 EDOFs (effective degrees of freedom) is functionally linear. See Chapter 10 of the book.
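One sketch of an effective-degrees-of-freedom (EDOF) estimate, assuming the eigenvalue-based definition (e.g. Bretherton et al. 1999) is the one intended here:

```python
import numpy as np

def effective_dof(anomalies):
    """N_eff = (sum lam)^2 / sum(lam^2), with lam the eigenvalues of the
    spatial covariance of an (n_time, n_space) anomaly array."""
    c = np.cov(np.asarray(anomalies, float), rowvar=False)
    lam = np.linalg.eigvalsh(c)
    return float(np.sum(lam) ** 2 / np.sum(lam ** 2))
```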
Mix of Dynamical & Empirical
• Day 1-14: NWP 'reigns'
• Week 2-6: dynamics & empirical
• SI (seasonal-to-interannual): dynamics & empirical
• Decadal: ?
• Climate change: ?