Evaluation and Diagnostic Verification Methods for High Resolution Ensemble Forecasts
Barbara Brown1, Tara Jensen1, Michelle Harrold1, Tressa Fowler1, Randy Bullock1, Eric Gilleland1, and Brian Etherton2
1NCAR/RAL - Joint Numerical Testbed Program
2NOAA/ESRL - Global Systems Division
and the Developmental Testbed Center
Warn-On-Forecast Workshop: 8-9 February 2012
Challenges for objective evaluation of convective-scale short-term prediction...
• Large variability in space and time
• Extreme, high-impact weather events
• Small regions of importance
• Difficult to identify meaningful impacts of forecast "improvements" across the whole domain
• Verification scores
• Desire for a single score... but CSI alone does not give a lot of information about performance or improvements
• Relationships and dependencies among scores
• Double penalty for displaced high-resolution forecasts (illustrated below)
• Observations
• Uncertainties in quality
• Need to be on temporal and spatial scales that support evaluation
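The double penalty is easy to see with a toy example. Below is a minimal sketch (synthetic fields, not data from this study) in which a forecast feature displaced by a few grid points is counted both as misses and as false alarms, driving CSI to zero:

```python
import numpy as np

# Toy fields: an observed feature and the same feature displaced 3 points east.
obs = np.zeros((10, 10), dtype=bool)
fcst = np.zeros((10, 10), dtype=bool)
obs[4:6, 2:4] = True    # observed 2x2 feature
fcst[4:6, 5:7] = True   # forecast feature, displaced but otherwise perfect

hits = np.sum(fcst & obs)            # 0: no overlap
misses = np.sum(~fcst & obs)         # 4: every observed point is "missed"
false_alarms = np.sum(fcst & ~obs)   # 4: every forecast point is a "false alarm"
csi = hits / (hits + misses + false_alarms)
print(hits, misses, false_alarms, csi)   # 0 4 4 0.0
```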
Ex: Relationships among scores
• CSI is a nonlinear function of POD and FAR
• CSI depends on base rate (event frequency) and bias
[Figure: CSI plotted against FAR and POD; very different combinations of FAR and POD lead to the same CSI value]
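The nonlinearity behind those curves follows directly from the 2x2 contingency table:

```latex
% With a = hits, b = false alarms, c = misses:
%   POD = a/(a+c),  SR = 1 - FAR = a/(a+b),  CSI = a/(a+b+c)
\[
\frac{1}{\mathrm{CSI}} = \frac{a+b+c}{a}
  = \frac{1}{\mathrm{POD}} + \frac{1}{\mathrm{SR}} - 1
\quad\Longrightarrow\quad
\mathrm{CSI} = \left(\frac{1}{\mathrm{POD}} + \frac{1}{1-\mathrm{FAR}} - 1\right)^{-1}
\]
```

Holding CSI fixed therefore traces a curve in the (SR, POD) plane, which is why many different POD/FAR pairs yield the same CSI.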
Performance Diagram: 9 km Ensemble Mean, 6-h Precip
All on same plot:
• POD
• 1-FAR (aka Success Ratio)
• CSI
• Freq Bias
[Figure: performance diagram with POD on the y-axis and Success Ratio (1-FAR) on the x-axis; CSI isolines curve through the plot, frequency-bias isolines radiate from the origin, and the top-right corner marks the best performance; dots represent different lead times for thresholds 6HR > 0.1, 0.5, 1.0, and 2.0 in.]
Here we see: decreasing skill with higher thresholds, even with multiple metrics
Roebber (WAF, 2009); Wilson (presentation, 2008)
Results from HMT-West 2010-2011 season, courtesy of Ed Tollerud, DTC/HMT collaboration
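All four quantities on a performance diagram come from the same 2x2 contingency table. A minimal sketch with hypothetical counts (not the HMT values plotted above):

```python
import numpy as np

def performance_diagram_stats(hits, misses, false_alarms):
    """The four quantities a performance diagram shows, from one 2x2 table."""
    pod = hits / (hits + misses)                         # y-axis
    sr = hits / (hits + false_alarms)                    # x-axis: success ratio = 1 - FAR
    csi = hits / (hits + misses + false_alarms)          # curved isolines
    freq_bias = (hits + false_alarms) / (hits + misses)  # straight isolines from origin
    return pod, sr, csi, freq_bias

# Hypothetical counts for one lead time at one precipitation threshold
print(performance_diagram_stats(hits=120, misses=80, false_alarms=60))
```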
MODE Objects: 14 May 2009, Init: 00 UTC, Thresh: 30 dBZ
[Figure: forecast and observed reflectivity fields and their MODE objects, with and without radar assimilation; solid outlines = forecast objects, lined outlines = observed objects]
Comparing objects can give a more descriptive assessment of the forecast
QPE Field vs. Probability Field: QPE_06 > 1 in. vs. 50% Prob (APCP_06 > 1 in.)
Okay forecast with displacement error? Or bad forecast because too sharp or underdispersive?
Traditional metrics:
• Brier Score: 0.07
• Area Under ROC: 0.62
Spatial metrics:
• Centroid Distance: Obj1) 200 km; Obj2) 88 km
• Area Ratio: Obj1) 0.69; Obj2) 0.65
• Obj PODY: 0.72; Obj FAR: 0.32
• Median of Max Interest: 0.77
[Figure: QPE and probability fields with two matched objects, plus a reliability diagram annotated under-forecast / perfect reliability / over-forecast]
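For reference, the Brier score quoted above is just the mean squared error of the probability forecast against the binary outcome. A minimal sketch with hypothetical values (not the case shown):

```python
import numpy as np

# Hypothetical probability forecasts of "APCP_06 > 1 in." and binary outcomes
p = np.array([0.5, 0.3, 0.1, 0.7, 0.0])
o = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
brier = np.mean((p - o) ** 2)   # mean squared probability error; 0 is perfect
print(brier)                    # 0.168
```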
MODE Objects: Reflectivity > 30 dBZ (01-06 hr)
What would a spaghetti plot of MODE objects look like? Here it looks like there is definitely a timing error.
[Figure: spaghetti plot of MODE objects; hatched = observed objects, solid = forecast objects]
MODE Time Domain (MODE-TD)
[Figure: 3-D space-time objects, with time increasing along one axis; color gives west-to-east movement]
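Conceptually, MODE-TD treats the forecast sequence as a single (time, y, x) volume and finds connected space-time objects in it. A simplified sketch on a synthetic field; real MODE-TD convolves the field first and uses fuzzy matching, so `ndimage.label` alone is only the core idea:

```python
import numpy as np
from scipy import ndimage

# Synthetic (time, y, x) precipitation stack: 6 hourly fields on a 40x40 grid
rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.5, scale=2.0, size=(6, 40, 40))

mask = precip > 3.0                      # event definition, e.g. APCP_01 > 3 mm
labels, n_objects = ndimage.label(mask)  # connected in space and/or time -> 3-D objects
print(n_objects, "space-time objects")
```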
Example Attributes of MODE-TD
• Preparing for implementation in Model Evaluation Tools (MET) in the next release (Spring 2012)
• Applied to model forecasts from the 2010 HWT Spring Experiment
MODE-TD Attributes
[Figure: distributions of duration (h), E-W speed (m/s), and 90th percentile intensity]
Configuration: Conv. Rad. = 10 grid squares; Conv. Thresh. = APCP_01 > 3 mm
Data: HWT Spring Experiment 2010, May 17 - June 16, 2010 (18 forecast days)
Number of objects: Obs 143, ARPS 161, ARW 172, NMM 238
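Attributes like those above can be read off a labeled space-time object. A sketch continuing the previous example; `dx_km` and `dt_h` are assumed grid spacing and time step, and the formulas are illustrative, not MET's exact definitions:

```python
import numpy as np

def object_attributes(precip, labels, obj_id, dx_km=4.0, dt_h=1.0):
    """Duration, E-W speed, and 90th-percentile intensity of one 3-D object."""
    t, y, x = np.nonzero(labels == obj_id)
    duration = (t.max() - t.min() + 1) * dt_h            # hours the object exists
    # E-W speed from the x-centroid at the first and last times it is present
    x0 = x[t == t.min()].mean()
    x1 = x[t == t.max()].mean()
    elapsed_s = (t.max() - t.min()) * dt_h * 3600.0
    ew_speed = (x1 - x0) * dx_km * 1000.0 / elapsed_s if elapsed_s > 0 else 0.0
    p90 = np.percentile(precip[labels == obj_id], 90)    # 90th percentile intensity
    return duration, ew_speed, p90

# e.g. object_attributes(precip, labels, 1) on the arrays from the previous sketch
```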
Take-aways: To evaluate high-resolution ensembles, you could...
• Continue with traditional scores (and methods of display) and not gain a true sense of why the ensembles are performing well or poorly
- Or -
• Adopt performance diagrams, reliability diagrams, rank histograms (sketched below), and spatial methods for diagnostics and understanding
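Of the diagnostics listed, the rank histogram is the simplest to compute: count where each observation falls within its sorted ensemble. A minimal sketch with synthetic data (ties ignored for simplicity):

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """Count where each observation ranks within its own sorted ensemble."""
    n_members = ensemble.shape[1]
    ranks = np.sum(ensemble < obs[:, None], axis=1)    # rank 0..n_members
    return np.bincount(ranks, minlength=n_members + 1)

rng = np.random.default_rng(1)
ens = rng.normal(size=(500, 10))      # hypothetical 10-member ensemble
obs = 1.5 * rng.normal(size=500)      # obs more variable than the ensemble
print(rank_histogram(ens, obs))       # U shape: counts pile up in the end bins
```

A flat histogram indicates a well-calibrated ensemble; the U shape here diagnoses under-dispersion, one of the questions posed on the QPE slide above.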
Where to get these methods?
• Model Evaluation Tools (MET), developed by the Developmental Testbed Center (DTC)
• R statistics package, including R-spatial (developed by Eric Gilleland at the NCAR/RAL Joint Numerical Testbed (JNT)) and R-verification
The DTC and JNT Program can also serve as a resource for evaluation techniques
Recommended websites:
http://www.dtcenter.org/met/users
http://www.r-project.org/
http://www.cawcr.gov.au/projects/verification/
http://www.ral.ucar.edu/projects/icp/
http://www.dtcenter.org/eval/hwt/2011
Extra Material: Tools, Intercomparisons, Outreach, Collaboration
www.dtcenter.org/met/users
METv3.1: available now
METv4.0: with GRIB2 support and MODE-TD, to be released in late spring 2012
Object-Oriented Method: MODE - How it works
ENS FCST and OBS, both with Radius = 5, Thresh > 0.25"
• Matched Object 1
• Matched Object 2
• Unmatched Object
[Figure: merging within each field, then matching between fields; in this example there are no false alarms, only misses]
Example from HMT-West 2010-2011 season, courtesy of DTC/HMT collaboration
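The "Radius = 5, Thresh > 0.25"" settings describe MODE's object-definition step: smooth the raw field with a circular filter of that radius, then threshold. A simplified sketch on a synthetic field (the merging and matching via fuzzy-logic interest values are not shown):

```python
import numpy as np
from scipy import ndimage

def mode_objects(field, radius, thresh):
    """Define objects MODE-style: circular smoothing, threshold, label regions."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy**2 + xx**2 <= radius**2).astype(float)
    disk /= disk.sum()                                    # circular averaging filter
    smoothed = ndimage.convolve(field, disk, mode="constant")
    return ndimage.label(smoothed > thresh)

rng = np.random.default_rng(2)
qpf = rng.gamma(0.4, 0.5, size=(100, 100))                # synthetic precip field (in.)
labels, n = mode_objects(qpf, radius=5, thresh=0.25)      # Radius = 5, Thresh > 0.25"
print(n, "objects")
```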
Metrics to be available in MODE-TD
• Volume / intersection / union / symmetric difference
• Axis angles
• Spatial orientation
• Average speed
• Centroids - space and time
• No way to combine the calculation of the space and time centroids directly because of their differing units
• Fuzzy logic handles them by weighting each independently (sketched below)
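A minimal sketch of that fuzzy-logic idea: map each centroid separation onto a unitless interest in [0, 1], then combine with weights. The weights, scales, and linear interest maps here are illustrative placeholders, not MET's defaults:

```python
def total_interest(space_dist_km, time_dist_h,
                   w_space=0.6, w_time=0.4,
                   space_scale=200.0, time_scale=6.0):
    """Combine incommensurable space and time separations via fuzzy interest."""
    # Map each separation onto a unitless interest in [0, 1] (linear ramp here)
    i_space = max(0.0, 1.0 - space_dist_km / space_scale)
    i_time = max(0.0, 1.0 - time_dist_h / time_scale)
    # Weighted average: km and hours are never added directly
    return (w_space * i_space + w_time * i_time) / (w_space + w_time)

print(total_interest(space_dist_km=88.0, time_dist_h=1.0))   # ~0.67
```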
Jolliffe and Stephenson, 2nd Ed.
“Completely updated chapter on the Verification of Spatial Forecasts taking account of the wealth of new research in the area”
Authors: Brown, Ebert, Gilleland
R Spatial Verification Package - Example: Neighborhood Methods
• Spatial verification methods in R
• Implemented by Eric Gilleland
• Includes all major spatial methods
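As an example of a neighborhood method, here is a sketch of the Fractions Skill Score (FSS), which compares event fractions within a window instead of point-by-point matches, so a slightly displaced forecast is no longer doubly penalized. This is a generic Python implementation on synthetic data, not the R package's code:

```python
import numpy as np
from scipy import ndimage

def fss(fcst, obs, thresh, window):
    """Fractions Skill Score: compare event fractions within a neighborhood."""
    pf = ndimage.uniform_filter((fcst > thresh).astype(float), size=window)
    po = ndimage.uniform_filter((obs > thresh).astype(float), size=window)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf**2) + np.mean(po**2)   # worst case: no overlap at all
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(3)
obs = rng.gamma(0.4, 0.5, size=(100, 100))
fcst = np.roll(obs, 3, axis=1)                  # "perfect but displaced" forecast
print(fss(fcst, obs, thresh=0.25, window=1))    # low at grid scale
print(fss(fcst, obs, thresh=0.25, window=11))   # rises as the neighborhood grows
```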
Intercomparison Project (ICP) and Other Collaborations
ICP:
• International effort focused on comparing the capabilities of the different spatial verification methods
• Central U.S. precipitation forecasts and observations from HWT 2005
• Many publications in a WAF special collection
• Preparing now for ICP2
Other collaborations:
• Collaborations with WMO/WWRP Working Groups on Mesoscale and Nowcasting Research
• Documents on verification of mesoscale forecasts and cloud forecasts
• Workshops and tutorials
Web site: http://www.ral.ucar.edu/projects/icp/
ICP2
• International collaboration
• Goals:
• Further testing of spatial and other methods from ICP1
• Application to complex terrain
• Variables: precipitation, wind
• Forecasts (model output), obs analyses, and observations from 2007 COPS and MAP D-PHASE
• VERA analyses (including ensemble observation analyses)
• 2 or more sets of model output from MAP D-PHASE