
Verification of COSMO-LEPS Model Against Observations for Weather Forecasting

Explore the validation of the COSMO-LEPS model against observational data, using techniques such as bi-linear interpolation and the Brier Skill Score. Analyze the ROC area, contingency tables, and forecast usefulness metrics for different probability classes.




  1. COSMO-LEPS Verification Chiara Marsigli ARPA-SMR

  2. Available Italian stations

  3. Verification on station points: bi-linear interpolation (4 nearest grid points)
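The bi-linear interpolation to station points described on slide 3 can be sketched as follows (a minimal illustration assuming a regular lon/lat grid; the function name, the toy grid, and the station coordinates are hypothetical, not from the presentation):

```python
import numpy as np

def bilinear_interp(lons, lats, field, stn_lon, stn_lat):
    """Interpolate a gridded field to a station location using the
    4 nearest grid points (bi-linear interpolation on a regular grid)."""
    # Indices of the grid cell containing the station
    i = np.searchsorted(lons, stn_lon) - 1
    j = np.searchsorted(lats, stn_lat) - 1
    # Fractional position of the station inside the cell
    tx = (stn_lon - lons[i]) / (lons[i + 1] - lons[i])
    ty = (stn_lat - lats[j]) / (lats[j + 1] - lats[j])
    # Weighted combination of the 4 surrounding grid points
    return ((1 - tx) * (1 - ty) * field[j, i]
            + tx * (1 - ty) * field[j, i + 1]
            + (1 - tx) * ty * field[j + 1, i]
            + tx * ty * field[j + 1, i + 1])

# Toy 2x2 grid: field value increases with both lon and lat
lons = np.array([10.0, 11.0])
lats = np.array([44.0, 45.0])
field = np.array([[0.0, 1.0],
                  [2.0, 3.0]])  # field[j, i]: lat index j, lon index i
value = bilinear_interp(lons, lats, field, 10.5, 44.5)  # centre of the cell -> 1.5
```

The interpolated value is a distance-weighted average of the four surrounding grid points, so a station at the cell centre gets the plain mean of the four values.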

  4. Verification on super-boxes: average value, maximum value, frequency (PRED. vs OBS.)
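The super-box statistics of slide 4 can be sketched as below (a minimal sketch; the function name, the exceedance threshold, and the sample values are assumptions for illustration):

```python
import numpy as np

def superbox_stats(box_values, threshold):
    """Summarise a field over a super-box of grid points (or stations):
    average value, maximum value, and frequency of threshold exceedance."""
    box = np.asarray(box_values, dtype=float)
    return {
        "average": box.mean(),
        "maximum": box.max(),
        # Fraction of points in the box exceeding the threshold
        "frequency": (box > threshold).mean(),
    }

# Hypothetical precipitation values (mm) at 4 points inside one super-box
stats = superbox_stats([0.0, 2.0, 5.0, 13.0], threshold=1.0)
# average 5.0, maximum 13.0, frequency 0.75
```

The same three statistics can be computed from the predicted field and from the observations, so forecast and observed super-box values are compared like-for-like.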

  5. Overlapping boxes

  6. COSMO-LEPS vs observations, station points, weighted: ROC area, Nov 02 – Dec 02 – Jan 03. Need for LM verification at these forecast ranges.

  7. COSMO-LEPS vs observations, station points, not weighted: ROC area

  8. Weighted (w) vs not weighted (nw), +48 h

  9. Weighted (w) vs not weighted (nw), +120 h

  10. COSMO-LEPS vs observations, station points, weighted: Brier Skill Score

  11. COSMO-LEPS vs observations, station points, not weighted: Brier Skill Score

  12. Brier Skill Score

  13. Weighting procedure: is it possible to decide (in real time) whether it is better to weight or not? Dependence on the ensemble spread? Flow dependence?

  14. Brier Skill Score: station points, average values on super-boxes

  15. Brier Skill Score: station points, average values on super-boxes

  16. Brier Skill Score, Nov 02 only

  17. Brier Skill Score

  18. ROC area
      A contingency table can be built for each probability class (a probability class can be defined as the percentage of ensemble members which actually forecast a given event):

                        Observed
                        Yes    No
      Forecast   Yes     a      b
                 No      c      d

      For the k-th probability class, the hit rate is HR_k = a/(a+c) and the false alarm rate is FAR_k = b/(b+d). The area under the ROC curve is used as a statistical measure of forecast usefulness.
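The construction of slide 18 can be sketched in code: one contingency table (a, b, c, d) per probability class, then the area under the resulting ROC curve. This is a minimal sketch, not the presentation's own implementation; the function name and the toy data are assumptions.

```python
import numpy as np

def roc_area(probs, obs, n_members):
    """Build a contingency table for each probability class and integrate
    the ROC curve (hit rate vs false alarm rate) with the trapezoidal rule.
    probs: forecast probability per case (fraction of ensemble members
    forecasting the event); obs: 1 if the event occurred, 0 otherwise."""
    probs = np.asarray(probs, dtype=float)
    obs = np.asarray(obs, dtype=int)
    hit_rates, far_rates = [1.0], [1.0]  # zero threshold: always forecast "yes"
    for k in range(1, n_members + 1):
        fc_yes = probs >= k / n_members  # event forecast by >= k members
        a = np.sum(fc_yes & (obs == 1))   # hits
        b = np.sum(fc_yes & (obs == 0))   # false alarms
        c = np.sum(~fc_yes & (obs == 1))  # misses
        d = np.sum(~fc_yes & (obs == 0))  # correct rejections
        hit_rates.append(a / (a + c) if (a + c) else 0.0)
        far_rates.append(b / (b + d) if (b + d) else 0.0)
    hit_rates.append(0.0)  # infinite threshold: never forecast "yes"
    far_rates.append(0.0)
    # Points run from (1,1) down to (0,0); integrate HR over FAR
    hr = np.array(hit_rates[::-1])
    far = np.array(far_rates[::-1])
    return float(np.sum(0.5 * (hr[1:] + hr[:-1]) * np.diff(far)))

# Perfectly discriminating toy ensemble of 2 members -> ROC area 1.0
area = roc_area([1.0, 1.0, 0.0, 0.0], [1, 1, 0, 0], n_members=2)
```

A no-skill (climatological) forecast falls on the diagonal and gives an area of 0.5, which is why the ROC area works as a usefulness measure.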

  19. Brier Skill Score
      Brier Score: BS = (1/N) Σ_i (f_i − o_i)²
      • o_i = 1 if the event occurs, o_i = 0 if the event does not occur
      • f_i is the probability of occurrence according to the forecast system (e.g. the fraction of ensemble members forecasting the event)
      • BS can take values in the range [0,1], a perfect forecast having BS = 0
      Brier Skill Score: BSS = 1 − BS/BS_cl, where BS_cl is the Brier Score of the sample climatology ō (the total frequency of the event)
      • The forecast system has predictive skill if BSS is positive, a perfect system having BSS = 1
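The definitions on slide 19 translate directly into code (a minimal sketch; the function name and the toy probabilities are hypothetical):

```python
import numpy as np

def brier_skill_score(fcst_probs, obs):
    """Brier Score of the probability forecasts and Brier Skill Score
    relative to the sample climatology (mean observed event frequency)."""
    f = np.asarray(fcst_probs, dtype=float)  # f_i: forecast probabilities
    o = np.asarray(obs, dtype=float)         # o_i: 1 if event occurred, else 0
    bs = np.mean((f - o) ** 2)               # BS = (1/N) sum (f_i - o_i)^2
    clim = o.mean()                          # sample climatology (event frequency)
    bs_clim = np.mean((clim - o) ** 2)       # reference BS of the climatology
    bss = 1.0 - bs / bs_clim                 # BSS = 1 - BS/BS_cl
    return bs, bss

# Toy example: confident, mostly correct probability forecasts
bs, bss = brier_skill_score([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
# bs = 0.025, bss = 0.9
```

A perfect forecast gives BS = 0 and BSS = 1; forecasting the climatology itself gives BSS = 0, so positive BSS means skill over climatology.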
