
How Does NCEP/CPC Make Operational Monthly and Seasonal Forecasts?





Presentation Transcript


  1. How Does NCEP/CPC Make Operational Monthly and Seasonal Forecasts? Huug van den Dool (CPC), ESSIC, February 23, 2011

  2. Assorted Underlying Issues
  • Jin Huang - R2O - CTB
  • Which tools are used…
  • How are tools combined?
  • CFSv1 → CFSv2
  • Dynamical vs. empirical tools
  • Skill of tools and of the OFFICIAL forecast
  • How easily can a new tool be included?
  • US focus, but with an occasional global perspective

  3. Menu of CPC predictions:
  • 6-10 day (daily)
  • Week 2 (daily)
  • Monthly (monthly + update)
  • Seasonal (monthly)
  • Other: hazards, drought monitor, drought outlook, MJO, UV index, degree days, POE, SST (some are 'briefings')
  • Informal forecast tools (too many to list)
  http://www.cpc.ncep.noaa.gov/products/predictions/90day/tools/briefing/index.pri.html

  4. EXAMPLE: PUBLICLY ISSUED

  5. From an internal CPC Briefing package

  6. [Figure: briefing tools labeled by type: EMP (empirical), DYN (dynamical), CON (consolidation), N/A]

  7. [Figure: skill comparison of tools SMLR, CCA, OCN, LAN, OLD-OTLK, CFSv1, LFQ, ECP, IRI, ECA, CON; 15 cases: 1950, 54, 55, 56, 64, 68, 71, 74, 75, 76, 85, 89, 99, 2000, 2008]

  8. Tools by element:

  Method                                            US-T  US-P  SST  US soil moisture
  CCA                                                X     X     X
  OCN                                                X     X
  CFS                                                X     X     X    X
  SMLR                                               X     X
  ECCA                                               X     X
  Consolidation                                      X     X     X
  Constructed Analog                                 X     X     X    X
  Markov                                                         X
  ENSO Composite                                     X     X
  Other (GCM) models (IRI, ECHAM, NCAR, CDC, etc.)   X     X

  CCA = Canonical Correlation Analysis; OCN = Optimal Climate Normals; CFS = Climate Forecast System (coupled ocean-atmosphere model); SMLR = Stepwise Multiple Linear Regression; CON = Consolidation

  9. About OCN. Two contrasting views:
  • Climate = average weather in the past
  • Climate = the 'expectation' of the future
  30-year WMO normals: 1961-1990, 1971-2000, etc.
  OCN = Optimal Climate Normals: the last-K-year average. With all seasons/locations pooled, K = 10 is optimal (for US T).
  Forecast for Jan 2012 = (Jan 2002 + Jan 2003 + … + Jan 2011)/10 minus the WMO normal, plus a skill evaluation over some 50+ years.
  Why does OCN work?
  1) Climate is not constant (K would be infinite for a constant climate)
  2) Recent averages are better
  3) Somewhat shorter averages are better (for T)
  See Huang et al., 1996: J. Climate, 9, 809-817.
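A minimal sketch of the OCN arithmetic in Python/NumPy, assuming the K = 10 last-year average and a 1971-2000 WMO normal as on the slide; the temperature series and all variable names are synthetic, for illustration only (this is not CPC code):

```python
import numpy as np

# Sketch of the OCN idea: the anomaly forecast for a calendar month is the
# mean of the last K years minus the 30-year WMO normal. Synthetic data.
rng = np.random.default_rng(0)
years = np.arange(1971, 2012)                       # Januaries 1971..2011
jan_t = 10.0 + 0.03 * (years - 1971) + rng.normal(0, 1.0, years.size)  # degC

K = 10                                              # optimal for US T (Huang et al. 1996)
wmo_normal = jan_t[(years >= 1971) & (years <= 2000)].mean()  # 1971-2000 normal
ocn = jan_t[-K:].mean()                             # mean of Jan 2002..Jan 2011

anomaly_forecast = ocn - wmo_normal                 # OCN forecast for Jan 2012 vs WMO normal
print(f"OCN mean (K={K}): {ocn:.2f} C; forecast anomaly: {anomaly_forecast:+.2f} C")
```

In a nonstationary (e.g. warming) climate the last-K-year mean sits above the older 30-year normal, which is exactly why OCN carries skill.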

  10. OCN has become the bearer of most of the skill; see also the EOCN method (Peng et al.)

  11. GHCN-CAMS (Fan 2008)

  12. Major Verification Issues
  • 'A priori' verification (used to be rare)
  • After the fact (fairly normal)

  13. After the fact… Source: Peitao Peng

  14. (Seasonal) forecasts are useless unless accompanied by a reliable a priori skill estimate. Solution: develop a 50+ year track record for each tool, 1950-present. (Admittedly we need 5000 years.)

  15. Consolidation

  16. OUT TO 1.5 YEARS

  17. OFFicial Forecast(element, lead, location, initial month) = a·A + b·B + c·C + …
  Honest hindcasts are required, 1950-present. The covariances (A,B), (A,C), (B,C) and (A,obs), (B,obs), (C,obs) allow solution for a, b, c (per element, lead, location, initial month).
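A hedged sketch of the weight solution the slide describes: given honest hindcasts of tools A, B, and C over the record, the tool-tool and tool-obs covariances form the normal equations for the weights a, b, c. The data and names below are synthetic, not CPC code:

```python
import numpy as np

# For one (element, lead, location, initial month): stack hindcast anomalies
# of tools A, B, C over 1950-present and solve the normal equations G w = g.
rng = np.random.default_rng(1)
n = 61                                              # hindcast years, e.g. 1950-2010
obs = rng.normal(size=n)                            # observed anomalies
tools = np.stack([obs + rng.normal(0.0, s, n) for s in (0.8, 1.0, 1.5)])  # A, B, C

G = tools @ tools.T / n                             # Cov(A,A), Cov(A,B), Cov(B,C), ...
g = tools @ obs / n                                 # Cov(A,obs), Cov(B,obs), Cov(C,obs)
a, b, c = np.linalg.solve(G, g)                     # unconstrained regression weights

print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")
# Consolidated forecast for a new case: a*A_new + b*B_new + c*C_new
```

With only ~60 years these unconstrained weights are noisy, which is what motivates the constrained (ridge/Bayesian) consolidation methods on the later slides.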

  18. CFS skill 1982-2003

  19. Fig. 7.6: The skill (AC × 100) of forecasting NINO3.4 SST by the CA method for the period 1956-2005. The plot has the target season on the horizontal and the lead on the vertical. Example: NINO3.4 in rolling seasons 2 and 3 (JFM and FMA) is predicted slightly better than 0.7 at a lead of 8 months. An 8-month-lead JFM forecast is made at the end of April of the previous year. A 1-2-1 smoothing was applied in the vertical to reduce noise. CA skill 1956-2005.
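For reference, the AC in the caption is the anomaly correlation between forecast and observed anomalies, reported multiplied by 100. A minimal sketch of one common (uncentered) form, with synthetic data:

```python
import numpy as np

# Anomaly correlation (AC): correlation of forecast and observed anomalies
# (climatology already removed), here in its uncentered form.
def anomaly_correlation(f, o):
    return np.sum(f * o) / np.sqrt(np.sum(f**2) * np.sum(o**2))

rng = np.random.default_rng(2)
o = rng.normal(size=50)                             # e.g. 50 JFM NINO3.4 anomalies (synthetic)
f = 0.7 * o + rng.normal(0.0, 0.7, 50)              # a lead-8 forecast with some skill
print(f"AC x 100 = {100 * anomaly_correlation(f, o):.0f}")
```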

  20. Peña Mendez, M., and H. van den Dool, 2008: Consolidation of Multi-Method Forecasts at CPC. J. Climate, 21, 6521–6538. Unger, D., H. van den Dool, E. O'Lenic, and D. Collins, 2009: Ensemble Regression. Mon. Wea. Rev., 137, 2365–2379. (1) CTB, (2) why do we need 'consolidation'?

  21. (DelSole 2007)

  22. [Figure: skill (COR) of consolidation methods RIW, RI, RIM (ridge variants), Climo, UR (unconstrained regression), and MMA (multi-model average)]

  23. [Figure: SEC (systematic error correction) and cross-validated SEC (CV 3CVRE)]

  24. Bayesian Multimodel Strategies
  • Linear regression leads to unstable weights for small sample sizes.
  • Methods for producing more stable estimates have been proposed by van den Dool and Rukhovets (1994), Kharin and Zwiers (2002), Yun et al. (2003), and Robertson et al. (2004).
  • These methods are special cases of a Bayesian method, each distinguished by a different set of prior assumptions (DelSole 2007).
  • Some reasonable prior assumptions:
    - R:0: weights centered about 0 and bounded in magnitude (ridge regression)
    - R:MM: weights centered about 1/K (K = number of models) and bounded in magnitude
    - R:MM+R: weights centered about an optimal value and bounded in magnitude
    - R:S2N: models with a small S2N (signal-to-noise) ratio tend to get small weights
    - LS: weights unconstrained (ordinary least squares)
  From Jim Kinter (Feb 2009)
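A minimal sketch, under stated assumptions, of the R:MM idea above: ridge regression that shrinks the weights toward the equal-weight vector 1/K rather than toward 0. The penalty value `lam` and all data are hypothetical placeholders:

```python
import numpy as np

# R:MM-style ridge: minimize ||y - Xw||^2 + lam*||w - w0||^2 with prior mean
# w0 = 1/K (the multimodel average). Closed form:
#   w = w0 + (X'X + lam*I)^{-1} X'(y - X w0)
rng = np.random.default_rng(3)
n, K = 25, 4                                        # short sample, K models
truth = rng.normal(size=n)
X = truth[:, None] + rng.normal(0.0, 1.0, (n, K))   # hindcasts of K models
y = truth                                           # observations

w0 = np.full(K, 1.0 / K)                            # prior mean: equal weights
lam = 10.0                                          # hypothetical; must itself be tuned
w = w0 + np.linalg.solve(X.T @ X + lam * np.eye(K), X.T @ (y - X @ w0))

print("LS weights:  ", np.linalg.lstsq(X, y, rcond=None)[0].round(2))  # unstable for small n
print("R:MM weights:", w.round(2))                  # pulled toward 0.25 each
```

As lam grows, the weights collapse to the simple multimodel mean; as lam goes to 0, they revert to unconstrained least squares.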

  25. If the multimodel strategy is carefully cross-validated, then the simple mean beats all other investigated multimodel strategies.
  • Since Bayesian methods involve additional empirical parameters, proper assessment requires a two-deep cross-validation procedure. This can change the conclusion about the efficacy of various Bayesian priors.
  • Traditional cross-validation procedures are biased and incorrectly indicate that Bayesian schemes beat a simple mean.
  From Jim Kinter (Feb 2009)
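A sketch of what "two-deep" (nested) cross-validation means in practice: the ridge parameter is chosen only within the inner training folds, so the outer held-out year never influences its own forecast. Everything below (data, the `lams` grid) is illustrative, not the CPC/COLA procedure:

```python
import numpy as np

# Two-deep CV: outer leave-one-out scores a forecast whose ridge parameter
# was tuned by an inner leave-one-out over the remaining years only.
rng = np.random.default_rng(4)
n, K = 30, 4
truth = rng.normal(size=n)
X = truth[:, None] + rng.normal(0.0, 1.0, (n, K))   # K model hindcasts
y = truth

def ridge(Xtr, ytr, lam):
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)

lams = [0.1, 1.0, 10.0, 100.0]                      # hypothetical grid
preds = np.empty(n)
for i in range(n):                                  # outer loop: hold out year i
    keep = np.arange(n) != i
    Xo, yo = X[keep], y[keep]

    def inner_err(lam):                             # inner loop: training years only
        errs = [(yo[j] - Xo[j] @ ridge(np.delete(Xo, j, 0), np.delete(yo, j), lam)) ** 2
                for j in range(len(yo))]
        return np.mean(errs)

    best = min(lams, key=inner_err)
    preds[i] = X[i] @ ridge(Xo, yo, best)           # honest forecast for year i

print(f"two-deep CV correlation: {np.corrcoef(preds, y)[0, 1]:.2f}")
```

Tuning lam on the same sample that scores it (one-deep CV) flatters the Bayesian schemes; the nested version removes that optimistic bias, which is Kinter's point above.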

  26. No SEC (no CV required), 'raw'

  27. Good to know:
  • If a model/method has no systematic error (by design), leave it alone; please do not apply systematic error correction where none is needed.
  • There is more (need for) statistics in dynamical models and their post-processing than in any statistical method.

  28. See also: O'Lenic, E.A., D.A. Unger, M.S. Halpert, and K.S. Pelman, 2008: Developments in Operational Long-Range Prediction at CPC. Wea. Forecasting, 23, 496–515.

  29. Empirical tools can be comprehensive! (Thanks to reanalysis, among other things.) And very economical.

  30. [Figure: CA forecasts of Z500, SST, T2m, and precip]

  31. [Figure: CFS forecasts of SST, Z500, T2m, and precip. Source: Wanqiu Wang]

  32. Is CPC in good shape to admit more tools? Even if these tools come from the outside? Does the outside have the right expectations?

  33. extra

  34. Dynamics ↔ Empiricism: Symbiosis
  • Successive generations of reanalyses are produced because NWP exists (the desire for initial conditions, etc.), but empirical prediction methods are among the main beneficiaries of having all these data.
  • Early empirical methods have served as an example (for anybody to follow) of how to produce (honest) hindcasts, a priori skill assessment, cross-validation, etc.
  • Consolidation (of tools) is "color-blind" relative to questions of dynamical or empirical origin.

  35. Progress in Empirical Methods
  • More data to work with (1 year per year)
  • More (and global) data to work with (reanalyses): oceans, land, atmosphere (note the symbiotic relationship with modeling here)
  • New empirical methods
  • New applications
  • Inch-by-inch methods
  • Revolutionary changes
  • Always be ready for surprises

  36. Why are empirical methods competitive with dynamical methods (in seasonal prediction)?
  • Linearity (define)
  • A system with skill in ≤ 3 EDOFs (effective degrees of freedom) is functionally linear
  • See Chapter 10 of the book

  37. Mix of Dynamical & Empirical
  • Day 1-14: NWP 'reigns'
  • Week 2-6: dynamical & empirical
  • SI (seasonal-to-interannual): dynamical & empirical
  • Decadal?
  • Climate change?
