Validation of the COSMO-LEPS model against observational data, using techniques such as bi-linear interpolation to station points and the Brier Skill Score. The ROC area, contingency tables and forecast-usefulness metrics are analysed for different probability classes.
COSMO-LEPS Verification
Chiara Marsigli, ARPA-SMR
[Figure: model grid points surrounding a station location]
Verification on station points: bi-linear interpolation (4 nearest grid points)
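A minimal Python sketch of this interpolation step is given below; the function name `bilinear_to_station` is hypothetical, and a regular lat/lon grid in ascending order with the station inside the grid is assumed.

```python
import numpy as np

def bilinear_to_station(lons, lats, field, st_lon, st_lat):
    """Interpolate a regular lat/lon gridded field to a station location
    using the 4 surrounding grid points (bi-linear interpolation).
    lons, lats: 1-D ascending grid coordinates
    field: 2-D array with shape (len(lats), len(lons))
    """
    # indices of the lower-left corner of the grid cell containing the station
    i = np.searchsorted(lons, st_lon) - 1
    j = np.searchsorted(lats, st_lat) - 1

    # normalised position of the station inside the cell (0..1)
    x = (st_lon - lons[i]) / (lons[i + 1] - lons[i])
    y = (st_lat - lats[j]) / (lats[j + 1] - lats[j])

    # weighted average of the 4 nearest grid points
    return ((1 - x) * (1 - y) * field[j, i]
            + x * (1 - y) * field[j, i + 1]
            + (1 - x) * y * field[j + 1, i]
            + x * y * field[j + 1, i + 1])
```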
[Figure: predicted (PRED.) and observed (OBS.) points grouped into a super-box]
Verification on super-boxes: average value, maximum value, frequency
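The super-box statistics (average, maximum and exceedance frequency over all points falling in one box) could be computed along the lines of the sketch below; the name `superbox_stats` and the threshold argument are illustrative assumptions, not part of the original verification code.

```python
import numpy as np

def superbox_stats(values, threshold):
    """Summary statistics of all points (forecast or observed) falling in
    one super-box: mean, maximum and frequency of exceedance of a threshold."""
    values = np.asarray(values, dtype=float)
    return {
        "average": values.mean(),
        "maximum": values.max(),
        "frequency": (values > threshold).mean(),  # fraction of points above threshold
    }
```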
[Figure: overlapping super-boxes over the grid]
COSMO-LEPS vs observations, station points, weighted
ROC area, Nov 02 – Dec 02 – Jan 03
Need for LM verification at these forecast ranges
COSMO-LEPS vs observations, station points, not weighted
ROC area
[Figure: weighted (w) vs not weighted (nw) results at +48 h]
[Figure: weighted (w) vs not weighted (nw) results at +120 h]
COSMO-LEPS vs observations, station points, weighted
Brier Skill Score
COSMO-LEPS vs observations, station points, not weighted
Brier Skill Score
Weighting procedure
Is it possible to decide (in real time) whether it is better to weight or not to weight?
Dependence on ensemble spread? Flow dependence?
Brier Skill Score: station points and average values on super-boxes
Brier Skill Score Nov 02 only
ROC area

A contingency table can be built for each probability class (a probability class can be defined as the percentage of ensemble members which forecast a given event):

                     Observed
                     Yes    No
Forecast   Yes        a      b
           No         c      d

For the k-th probability class, the hit rate and false alarm rate are

HR_k  = a_k / (a_k + c_k)
FAR_k = b_k / (b_k + d_k)

The area under the ROC curve (hit rate vs false alarm rate) is used as a statistical measure of forecast usefulness.
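A possible way to compute the ROC area from such per-class contingency tables is sketched below in Python; the function name `roc_area`, the number of probability classes and the trapezoidal integration are assumptions made for illustration, not the original verification code.

```python
import numpy as np

def roc_area(prob, obs, n_classes=10):
    """ROC area of a probabilistic forecast of a binary event.
    prob: forecast probability per case (fraction of ensemble members
          forecasting the event); obs: 1 if the event occurred, else 0.
    For each probability class a contingency table (a, b, c, d) is built;
    each (false alarm rate, hit rate) pair gives one point of the ROC curve.
    """
    prob = np.asarray(prob, dtype=float)
    obs = np.asarray(obs, dtype=int)

    hr, far = [1.0], [1.0]                      # threshold 0: always forecast "yes"
    for t in np.linspace(0.0, 1.0, n_classes + 1)[1:]:
        yes = prob >= t
        a = np.sum(yes & (obs == 1))            # hits
        b = np.sum(yes & (obs == 0))            # false alarms
        c = np.sum(~yes & (obs == 1))           # misses
        d = np.sum(~yes & (obs == 0))           # correct rejections
        hr.append(a / (a + c) if (a + c) else 0.0)
        far.append(b / (b + d) if (b + d) else 0.0)
    hr.append(0.0)                              # never forecast "yes"
    far.append(0.0)

    # area under the hit-rate vs false-alarm-rate curve (trapezoidal rule)
    hr, far = np.array(hr), np.array(far)
    order = np.argsort(far)
    hr, far = hr[order], far[order]
    return float(np.sum(0.5 * (hr[1:] + hr[:-1]) * (far[1:] - far[:-1])))
```

With this convention an area of 0.5 corresponds to a no-skill forecast and 1.0 to perfect discrimination.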
Brier Score and Brier Skill Score

Brier Score:
BS = (1/N) Σ_{i=1..N} (f_i - o_i)²

• o_i = 1 if the event occurs, 0 if the event does not occur
• f_i is the probability of occurrence according to the forecast system (e.g. the fraction of ensemble members forecasting the event)
• BS can take values in the range [0,1], a perfect forecast having BS = 0

Brier Skill Score:
BSS = 1 - BS / BS_cl

where BS_cl is the Brier Score of the reference forecast given by the sample climatology, i.e. the total frequency of the event in the sample. The forecast system has predictive skill if BSS is positive, a perfect system having BSS = 1.
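As an illustration, a small Python sketch of BS and BSS with the sample climatology as reference forecast follows; the function name `brier_skill_score` is hypothetical.

```python
import numpy as np

def brier_skill_score(f, o):
    """Brier Score and Brier Skill Score of a probability forecast.
    f: forecast probabilities (fraction of ensemble members forecasting
       the event), one per case
    o: observed occurrences (1 if the event occurred, else 0), one per case
    Reference forecast: sample climatology (observed event frequency).
    """
    f = np.asarray(f, dtype=float)
    o = np.asarray(o, dtype=float)

    bs = np.mean((f - o) ** 2)              # Brier Score: 0 for a perfect forecast
    o_bar = o.mean()                        # sample climatology of the event
    bs_clim = np.mean((o_bar - o) ** 2)     # Brier Score of the climatological forecast

    bss = 1.0 - bs / bs_clim                # > 0: skill over climatology, 1: perfect
    return bs, bss
```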