Assessing added value of high resolution forecasts
Emiel van der Plas, Maurice Schmeits, Kees Kok
KNMI, The Netherlands
Introduction
• Question: do high resolution (convection resolving) models perform better than the previous generation of models? T2M, wind, precipitation!
• KNMI: Harmonie (2.5 km) > Hirlam (11, 22 km)? Harmonie > ECMWF (deterministic run: T1279)?
• Verification of high resolution NWP forecasts is challenging: precipitation is highly localised, and radar/station data incur a double penalty!
• If there is extra skill, how can it be demonstrated objectively?
• In this talk: fuzzy methods and Model Output Statistics
Set-up
• Harmonie (‘ECJAN’): 2.5 km, 300x300 domain, AROME physics, 3DVAR, ECMWF boundaries
• A run with 800x800 points and Hirlam boundaries exists, but the archive is insufficient…
• Hirlam (D11): 22 km, 136 x 226, 3DVAR
• ECMWF operational (T1279): ±16 km, global, 3DVAR
• Radar: Dutch precipitation radar composite, 1 km
• Period: 1st February 2012 - 31st May 2012
• All output resampled to the Harmonie grid (nearest neighbour)
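As a concrete illustration of the resampling step, here is a minimal nearest-neighbour sketch in R; the coordinate axes, grid sizes and the synthetic coarse field are hypothetical stand-ins (the real grids are 2-D projected domains):

# Index of the nearest source coordinate for each target coordinate
nn <- function(target, source) sapply(target, function(v) which.min(abs(source - v)))

xc <- seq(0, 1, length.out = 50)    # coarse-grid axis (e.g. D11)
xf <- seq(0, 1, length.out = 300)   # fine-grid axis (Harmonie, 2.5 km)
coarse <- matrix(rnorm(50 * 50), 50, 50)

# Replicate coarse values onto the fine grid by nearest-neighbour indexing
fine <- coarse[nn(xf, xc), nn(xf, xc)]   # 300 x 300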
Example of Direct Model Output
E.g. frontal precipitation, 7th March 2012
[Figure: precipitation panels for Radar, Harmonie (ECJAN), Hirlam D11 and ECMWF]
Neo-classical verification: fuzzy methods
• MET: suite of verification tools by NCAR (WRF)
• Grid based scores: with respect to gridded radar observations
• Fractions Skill Score (Roberts & Lean, 2008)
• Hanssen-Kuiper discriminant, Gilbert Skill Score (ETS), …
• Object based scores (not in this paper)
[Figure: score maps: GSS, 25x25, > 2 mm/3h; FSS, 3x3, > 1 mm/3h]
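To make the neighbourhood idea concrete: the FSS compares forecast and observed exceedance fractions Pf and Po in an n x n window, FSS = 1 - mean((Pf - Po)^2) / (mean(Pf^2) + mean(Po^2)). A minimal R sketch on synthetic fields follows (MET computes this operationally; the fields and settings here are hypothetical):

# Fraction of points exceeding a threshold in an n x n neighbourhood
fractions <- function(field, thresh, n) {
  bin <- field > thresh
  half <- n %/% 2
  out <- matrix(NA_real_, nrow(bin), ncol(bin))
  for (i in seq_len(nrow(bin))) for (j in seq_len(ncol(bin))) {
    ri <- max(1, i - half):min(nrow(bin), i + half)
    rj <- max(1, j - half):min(ncol(bin), j + half)
    out[i, j] <- mean(bin[ri, rj])
  }
  out
}

fss <- function(fc, obs, thresh, n) {
  pf <- fractions(fc, thresh, n)
  po <- fractions(obs, thresh, n)
  1 - mean((pf - po)^2) / (mean(pf^2) + mean(po^2))
}

set.seed(1)
fc  <- matrix(rgamma(2500, shape = 0.3), 50, 50)  # synthetic forecast (mm/3h)
obs <- matrix(rgamma(2500, shape = 0.3), 50, 50)  # synthetic "radar" field
fss(fc, obs, thresh = 1, n = 3)                   # e.g. 3x3 window, > 1 mm/3h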
MOS: what is relevant in DMO?
How would a trained meteorologist look at direct model output?
Predictors
[Three figure slides: example predictor fields derived from direct model output]
Model Output Statistics: predictive potential
• Construct a set of predictors (per model, station, starting and lead time); for now, use precipitation only
• Use various ‘areas of influence’ (25, 50, 75, 100 km): DMO, coverage, max(DMO) within the area, distance to forecasted precipitation, …, threshold! (see the sketch after this slide)
• Apply (extended) logistic regression (Wilks, 2009)
• Use the threshold (sqrt(q)) as a predictor: yields the complete distribution function (Wilks, 2009)
• Forward stepwise selection, backward deletion using R: stepPLR (Mee Young Park and Trevor Hastie, 2008)
• Verify the probabilities based on the coefficients of the selected predictors in terms of reliability diagrams and the Brier Skill Score
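The sketch below illustrates how such area-of-influence predictors can be derived around one station; the grid, field and thresholds are hypothetical, and the real implementation works per model, station, start and lead time:

set.seed(2)
fld <- matrix(rgamma(1e4, shape = 0.3), 100, 100)  # forecast precip (mm/3h), 2.5 km grid
si <- 50; sj <- 50                                 # station grid indices

# Euclidean grid distance (km) of every point to the station
d <- 2.5 * sqrt(outer((seq_len(100) - si)^2, (seq_len(100) - sj)^2, "+"))

inA <- d <= 25                                     # 25 km area of influence
dmo      <- fld[si, sj]                            # DMO at the station
coverage <- mean(fld[inA] > 0.3)                   # fraction > 0.3 mm within the area
maxdmo   <- max(fld[inA])                          # max(DMO) within the area
distfc   <- if (any(fld > 0.3)) min(d[fld > 0.3]) else NA_real_  # km to nearest forecast precip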
Results: example of poor skill, 00 UTC +003
[Figure: verification results for Harmonie, ECMWF and D11]
Results: example of good skill, 00 UTC +012
[Figure: verification results for Harmonie, ECMWF and D11]
Outlook
• No conclusive results yet
• Grid-based, “fuzzy” methods suggest reasonable skill for the high resolution NWP model (Harmonie)
• MOS: mixed results
• Frontal systems (FMAM) are well captured by hydrostatic models
• To do:
  • Larger dataset
  • Separate training data and independent verification data
  • Convective season: more cases, higher thresholds
  • Include the Harmonie run on the large domain
  • …
Extended Logistic Regression (ELR)
• Binary predictand y_i (here: precip > q)
• Probability (logistic): p_i = exp(f_i) / (1 + exp(f_i)), with f_i = b_0 + sum_j b_j x_ij
• Joint likelihood: L(b) = prod_i p_i^y_i (1 - p_i)^(1 - y_i)
• L2 penalisation (using R: stepPLR by Mee Young Park and Trevor Hastie, 2008): minimise -log L(b) + (lambda/2) ||b||^2
• Use the threshold (sqrt(q)) as a predictor: yields the complete distribution function (Wilks, 2009)
• Few cases, many potential predictors: pool stations, max 5 terms
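A self-contained sketch of the penalised ELR fit on synthetic data (the talk uses stepPLR for the forward selection and backward deletion, which is not reproduced here; all data and predictor names are hypothetical):

set.seed(1)
n <- 500
x <- cbind(dmo = rnorm(n), coverage = runif(n))     # hypothetical predictors
q <- rep(c(0.3, 1, 3), length.out = n)              # precipitation thresholds (mm)
y <- rbinom(n, 1, plogis(0.5 * x[, 1] - sqrt(q)))   # binary predictand: precip > q

# Design matrix with intercept and sqrt(q) as an extra predictor,
# so one fit yields the complete distribution function (Wilks, 2009)
X <- cbind(1, x, sqrtq = sqrt(q))

# Penalised negative log-likelihood: -log L(b) + (lambda/2) ||b||^2
nll <- function(beta, X, y, lambda = 1e-4) {
  eta <- drop(X %*% beta)
  -sum(y * eta - log1p(exp(eta))) + 0.5 * lambda * sum(beta^2)
}

fit  <- optim(rep(0, ncol(X)), nll, X = X, y = y, method = "BFGS")
beta <- fit$par

# Probability P(precip > q) for one new case, at the three thresholds
p <- plogis(drop(cbind(1, 0.8, 0.4, sqrt(c(0.3, 1, 3))) %*% beta))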
Period: 1st February 2012 - 31st May 2012
• The archive available for Harmonie was the limiting factor
• Mostly frontal precipitation
[Figure: precipitation panels for ECJAN, D11, ECMWF and Radar]
Verification: classical, Fraction Skill Score
• Classical or categorical verification, e.g. the Hanssen-Kuiper discriminant (aka True Skill Statistic, Peirce Skill Score): HK = (a d - b c) / ((a + c)(b + d))
• Fraction Skill Score (Roberts & Lean, 2008)
• Straightforward interpretation, but: double penalty

Contingency table (CTS):
             Observed
              yes   no
Forecast yes    a    b
         no     c    d
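A one-line sketch of the Hanssen-Kuiper discriminant from the table above (the counts are hypothetical):

hk <- function(a, b, c, d) (a * d - b * c) / ((a + c) * (b + d))
hk(a = 30, b = 10, c = 5, d = 155)   # 1 = perfect, 0 = no skill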
Verification: MODE (object based), wavelets
• MET provides access to MODE analysis: “Method for Object-based Diagnostic Evaluation”
• Forecast and observation fields are convolved, thresholded, …
[Figure: FC and OBS fields after convolution and thresholding]
Verification: MODE (object based), wavelets (continued)
• … then merged, matched and compared via object attributes: centre of mass, area, angle, convex hull, …
[Figure: matched OBS and FC objects]
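A minimal sketch of MODE's first step (convolve, then threshold) on a synthetic field; object merging, matching and attribute comparison are omitted, and the smoothing width and threshold are hypothetical:

# n x n box-filter smoothing (a simple stand-in for MODE's convolution)
box_mean <- function(field, n) {
  half <- n %/% 2
  out <- matrix(NA_real_, nrow(field), ncol(field))
  for (i in seq_len(nrow(field))) for (j in seq_len(ncol(field))) {
    ri <- max(1, i - half):min(nrow(field), i + half)
    rj <- max(1, j - half):min(ncol(field), j + half)
    out[i, j] <- mean(field[ri, rj])
  }
  out
}

set.seed(3)
fc <- matrix(rgamma(2500, shape = 0.3), 50, 50)
objects <- box_mean(fc, 5) > 1   # TRUE where the smoothed field exceeds 1 mm
sum(objects)                     # grid points belonging to objects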