The Expanded UW SREF System and Statistical Inference STAT 592 Presentation Eric Grimit
  1. The Expanded UW SREF System and Statistical Inference
STAT 592 Presentation
Eric Grimit

OUTLINE
1. Description of the Expanded UW SREF System (How is this thing created?)
2. Spread-error Correlation Theory, Results, and Future Work
3. Forecast Verification Issues

  2. Core Members of the Expanded UW SREF System
Multiple analyses/forecasts supply the ICs and LBCs for MM5, giving M = 7 core members plus CENT-MM5 (the centroid run). Is this enough?

  3. Generating Additional Initial Conditions
POSSIBILITIES:
• Random Perturbations
• Breeding Growing Modes (BGM)
• Singular Vectors (SV)
   — insufficient for short-range, inferior to PO, and computationally expensive (BGM & SV)
• Perturbed Obs (PO) / EnKF / EnSRF — may be the optimal approach (unproven)
• Ensembles of Initializations — uses Bayesian melding (under development)
• Linear Combinations* — simplistic approach (no one has tried it yet)

Why Linear Combinations?
• Founded on the idea of “mirroring” (Tony Eckel): IC* = CENT + PF * (CENT - IC) ; PF = 1.0
• Computationally inexpensive (restricts dimensionality to M = 7)
• May be extremely cost effective
• Can test the method now
• Size of the perturbations is controlled by the spread of the core members

*Selected Important Linear Combinations (SILC)?

  4. Illustration of “mirroring”
[Figure: sea-level pressure (mb) analyses over the NE Pacific (135°W–170°W) from avn, cmcg, eta, gasp, ngps, tcwb, and ukmo, with the centroid (cent) and the mirrored point cmcg* lying opposite cmcg about the centroid; domain width ~1000 km.]
STEP 1: Calculate the best guess for truth (the centroid) by averaging all analyses.
STEP 2: Find the error vector in model phase space between one analysis and the centroid by differencing all state variables over all grid points.
STEP 3: Make a new IC by mirroring that error about the centroid: IC* = CENT + (CENT - IC).
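The three steps above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the actual UW SREF code; the function name and the (M analyses × N state variables) array layout are my assumptions.

```python
import numpy as np

def mirror_ics(analyses, pf=1.0):
    """Build mirrored initial conditions IC* = CENT + PF * (CENT - IC).

    analyses: array of shape (M, N) -- M analyses, each a flattened
    state vector over all variables and grid points (assumed layout)."""
    analyses = np.asarray(analyses, dtype=float)
    cent = analyses.mean(axis=0)     # STEP 1: centroid = best guess for truth
    errors = cent - analyses         # STEP 2: error vectors in phase space
    mirrors = cent + pf * errors     # STEP 3: mirror each error about the centroid
    return cent, mirrors
```

With PF = 1.0 the mirrors and the original analyses share the same mean, so the doubled ensemble stays centered on the centroid.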

  5. Two groups of “important” LCs:

(x) mirrors:
   X_m* = (2/M) * sum_{i=1..M} X_i - X_m ;  m = 1, 2, …, M

(+) inflated sub-centroids:
   X_mn* = ((1+PF)/M) * sum_{i=1..M} X_i - (PF/2) * (X_m + X_n) ;  m, n = 1, 2, …, M ;  m ≠ n

   with PF^2 = 2(M-1)/(M-2)

• Must restrict selection of LCs to physically/dynamically “important” ones
• At the same time, try for equally likely ICs
• Sample the “cloud” as completely as possible with a finite number (i.e., fill in the holes)
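For M = 7 the formulas above yield 7 mirrors plus 21 inflated sub-centroid mirrors. A hypothetical NumPy sketch of generating both groups (function name and array layout are my assumptions, not the talk's code):

```python
import itertools
import numpy as np

def silc_candidates(analyses):
    """Generate the two groups of 'important' linear combinations:
    mirrors X_m* and inflated sub-centroid mirrors X_mn*."""
    X = np.asarray(analyses, dtype=float)        # shape (M, N)
    M = X.shape[0]
    cent = X.mean(axis=0)
    mirrors = 2.0 * cent - X                     # X_m* = (2/M) sum_i X_i - X_m
    pf = np.sqrt(2.0 * (M - 1) / (M - 2))        # PF^2 = 2(M-1)/(M-2)
    sub_mirrors = np.array([(1.0 + pf) * cent - 0.5 * pf * (X[m] + X[n])
                            for m, n in itertools.combinations(range(M), 2)])
    return mirrors, sub_mirrors
```

The PF inflation makes the expected perturbation size of a sub-centroid mirror match that of a plain mirror, which is what makes the candidate ICs roughly equally likely.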

  6. Root Mean Square Error (RMSE) by Grid Point Verification
[Figure: RMSE of MSLP (mb) at 12 h, 24 h, 36 h, and 48 h, for both the 12-km inner domain and the 36-km outer domain, comparing each core member (cmcg, avn, eta, ngps, ukmo, tcwb), its mirror (*), and the centroid (cent).]

  7. Summary of Initial Findings
• The set of 15 ICs for UW SREF is not optimal, but may be good enough to represent important features of analysis error
• The centroid may be the best-bet deterministic model run, in the big picture
• Need further evaluation:
   – How often does the ensemble fail to capture the truth?
   – How reliable are the probabilities?
   – Does the ensemble dispersion represent forecast uncertainty?

Future Work
• Evaluate the expanded UW MM5 SREF system and investigate multimodel applications
• Develop a mesoscale forecast-skill prediction system

Additional Work
• Mesoscale verification
• Probability forecasts
• Deterministic-style solutions
• Additional forecast products/tools (visualization)

  8. Spread-error Correlation Theory
Houtekamer 1993 (H93) model: log S ~ N(0, beta^2) and E | S ~ N(0, S^2), which gives

   Corr(S, |E|) = sqrt[ (2/pi) * (1 - exp(-beta^2)) / (1 - (2/pi) * exp(-beta^2)) ]

“This study neglects the effects of model errors. This causes an underestimation of the forecast error. This assumption probably causes a decrease in the correlation between the observed skill and the predicted spread.”

agrees with...

Raftery BMA variance formula:
   Var[Q | D] = E_k[Var(Q | D, M_k)] + Var_k(E[Q | D, M_k])
                (avg within-model variance)   (between-model variance)
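The H93 correlation can be checked by simulation. The sketch below is my own illustration, not from the talk: it draws spread and error from the stated model and compares the sample correlation with the analytic value.

```python
import numpy as np

def h93_corr(beta2):
    """Analytic Corr(S, |E|) for log S ~ N(0, beta2), E | S ~ N(0, S^2)."""
    e = np.exp(-beta2)
    return np.sqrt((2.0 / np.pi) * (1.0 - e) / (1.0 - (2.0 / np.pi) * e))

def h93_corr_mc(beta2, n=200_000, seed=0):
    """Monte Carlo estimate of the same correlation."""
    rng = np.random.default_rng(seed)
    spread = np.exp(rng.normal(0.0, np.sqrt(beta2), size=n))  # lognormal S
    error = spread * rng.normal(0.0, 1.0, size=n)             # E | S ~ N(0, S^2)
    return np.corrcoef(spread, np.abs(error))[0, 1]
```

As beta^2 grows, the analytic correlation saturates at sqrt(2/pi) ≈ 0.80, the familiar ceiling on spread-error correlation under this perfect-model assumption.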

  9. RESULTS: 10-m WDIR, Jan–Jun 2000 (Phase I)
Observed correlations are greater than those predicted by the H93 model.
Possible explanations:
• An artifact of the way spread and error are calculated!
• Accounting for some of the model error?
• Luck?

  10. RESULTS: 2-m TEMP, Jan–Jun 2000 (Phase I)
What’s happening here? Error saturation? Differences in ICs are not as important for surface temperature.

  11. Another Possible Predictor of Skill
Spread of a temporal ensemble ~ forecast consistency
Temporal ensemble = lagged forecasts all verifying at the same time
[Figure: temporal short-range ensemble built from CENT-MM5 runs initialized every 12 h (00 and 12 UTC), so that the F48, F36, F24, and F12 forecasts (M = 4) all verify at time T against the “adjusted” CENT-MM5 analysis (F00*); the unadjusted analysis does not have mesoscale features.]
BENEFITS:
• Yields mesoscale temporal spread
• Less sensitive to one synoptic-scale model’s time variability
• Best forecast estimate of “truth”

  12. Future Investigation: Developing a Prediction System for Forecast Skill
• Are spread and skill well correlated for other parameters (e.g., wind speed and precipitation)?
   – use sqrt or log to transform the data to be normally distributed
• Do spread-error correlations improve after bias removal?
• What are “high” and “low” spread?
   – need a spread climatology, i.e., a large data set
• What are the synoptic patterns associated with “high” and “low” spread cases?
   – use NCEP/NCAR reanalysis data and compositing software
• How do the answers change for the expanded UW MM5 ensemble?
• Can a better single predictor of skill be formed from the two individual predictors (IC spread and temporal spread)?
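The transform idea in the first bullet can be sketched directly. This is a hypothetical illustration (my function name and defaults, not the talk's code): transform spread and absolute error toward normality, then correlate.

```python
import numpy as np

def spread_error_corr(spread, abs_error, transform=np.log):
    """Correlate ensemble spread with absolute forecast error after
    transforming both toward normality (log here; np.sqrt is the
    other candidate mentioned on the slide)."""
    s = transform(np.asarray(spread, dtype=float))
    e = transform(np.asarray(abs_error, dtype=float))
    return np.corrcoef(s, e)[0, 1]
```

On synthetic H93-style data (lognormal spread, error drawn with standard deviation equal to the spread), the log-transformed correlation comes out clearly positive.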

  13. Mesoscale Verification Issues
[Diagram (after Fuentes and Raftery 2001): relates “adjusted” CENT-MM5 output and observations to the true values via bias parameters, measurement error, noise, and large- and small-scale structure.]
Will verify 2 ways:
• At the observation locations (as before)
• Using a gridded mesoscale analysis

SIMPLE possibilities for the gridded dataset:
• “Adjusted” centroid analysis (run MM5 for < 1 h)
   – Verification has the same scales as the forecasts
   – Useful for creating verification rank histograms
• Bayesian combination of the “adjusted” centroid with observations (e.g., Fuentes and Raftery 2001)
   – Accounts for scale differences (the change-of-support problem)
   – Can correct for MM5 biases
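A verification rank histogram, as mentioned above, is simple to compute; a minimal sketch (my illustration, ignoring observation error and ties):

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """Rank each observation within the sorted ensemble values at that
    point; a flat histogram indicates reliable ensemble dispersion,
    while a U-shape indicates under-dispersion."""
    ens = np.asarray(ensemble, dtype=float)    # shape (n_cases, M)
    obs = np.asarray(obs, dtype=float)         # shape (n_cases,)
    ranks = (ens < obs[:, None]).sum(axis=1)   # rank in 0 .. M
    return np.bincount(ranks, minlength=ens.shape[1] + 1)
```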

  14. Limitations of Traditional Bulk Error Scores
• Biased toward the mean
• Can yield spurious zero errors by coincidence, not skill
• Can also be blind to position, phase, and/or rotation errors
• This affects measurements of both spread and error!

Need to try new methods of verification:
• Consider the gradient of a field, not just its magnitude
   – addresses false zero errors / blindness to errors in the first derivative of a field
   – still biased toward the mean
• Pattern-recognition software
   – would penalize the mean for absence/smoothness of features
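The gradient idea can be made concrete. A minimal sketch (my illustration, using centered finite differences via np.gradient on a 2-D field):

```python
import numpy as np

def gradient_rmse(forecast, analysis):
    """RMS difference of the two horizontal gradient components of a
    2-D field -- sensitive to structure errors that a plain magnitude
    RMSE can miss, but blind to a constant bias, so it should be used
    alongside magnitude scores rather than instead of them."""
    fy, fx = np.gradient(np.asarray(forecast, dtype=float))
    ay, ax = np.gradient(np.asarray(analysis, dtype=float))
    return float(np.sqrt(np.mean((fy - ay) ** 2 + (fx - ax) ** 2)))
```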