Learn about the future of EVS, new verification techniques, and planned improvements in probabilistic forecast verification. Explore methods to strengthen ensemble forecasting and discuss feedback for better diagnostics and real-time verification.
RFC Verification Workshop
Plans for expanding probabilistic forecast verification
James Brown (James.D.Brown@noaa.gov)
Contents • 1. Future of EVS • Release schedule • Planned improvements • 2. New verification techniques • Real-time forecasting • Screening verification datasets • 3. Discussion and feedback survey
Release schedule • XEFS is outside of AWIPS • First limited release of EVS 1.0 beta • MARFC and CNRFC to conduct initial tests • Beginning 09/07 • Other RFCs? • After initial tests complete (1–2 months) • Depends on expressions of interest
Planned improvements • Managing workload • Batch processing (for forecast groups etc.) • Tailor interface for different users • Functionality for screening results • Improved documentation/help • Use cases for different metrics • Improved user’s manual and online help • Confidence intervals for metrics
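As a rough illustration of the planned confidence intervals for metrics, the sketch below computes a percentile-bootstrap interval for a simple verification metric (mean error). The function names, sample data, and metric choice are illustrative assumptions for this sketch, not EVS code.

```python
import random
import statistics

def bootstrap_ci(pairs, metric, n_boot=1000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for a verification metric.

    pairs:  list of (observation, forecast) tuples (hypothetical data).
    metric: function taking a list of pairs and returning a float.
    """
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        # Resample forecast-observation pairs with replacement
        sample = [rng.choice(pairs) for _ in pairs]
        stats.append(metric(sample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean_error(pairs):
    """Mean of (forecast - observation): a simple bias metric."""
    return statistics.mean(f - o for o, f in pairs)

# Hypothetical paired sample of observations and single-valued forecasts
pairs = [(1.0, 1.2), (2.0, 1.9), (3.0, 3.3), (4.0, 3.8), (5.0, 5.4)]
lo, hi = bootstrap_ci(pairs, mean_error)
```

The same resampling loop works for any metric that maps a sample of pairs to a number, which is why bootstrap intervals pair naturally with a metric-rich tool like EVS.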
Planned improvements • Longer-term goals • Common platform for ensemble forecasting (XEFS): common appearance and functions. • Led by HSEB, Steve Shumate, and others. • Common platform for verification (with IVP).
Prognostic verification • The principle • Live forecasts issued X days into the future • How well are they likely to perform? • How did similar forecasts perform in the past? • Three sources of information: • a) The forecaster: what determines ‘similar’? • b) The past forecasts that are ‘similar’ • c) Past observations corresponding to (b)
[Example figures: lead day 1 from the previous example; North Fork, CA, 13 June 2003, lead day 1]
Current status • Very early stage • Preparing a manuscript on methods • Generated several examples (with code) • Critical questions • How to select ‘similar’ forecasts? • Conditioning may be simple or complex • What types of products (graphics etc.)? • What functionality to include in tools?
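The selection of ‘similar’ past forecasts can be sketched as below. The similarity rule here (past ensemble means within a tolerance of the live forecast’s ensemble mean) is only one simple conditioning choice, and all names and data are hypothetical; the manuscript in preparation covers the actual methods.

```python
# Minimal sketch of the analog idea behind prognostic verification:
# pool the errors of archived forecasts judged 'similar' to a live forecast.

def ensemble_mean(members):
    return sum(members) / len(members)

def similar_past_cases(live_members, archive, tol=0.5):
    """Return (past_forecast, observation) pairs 'similar' to the live forecast.

    archive: list of (past_ensemble_members, observed_value) tuples.
    'Similar' here means ensemble means within tol of the live mean
    (an illustrative conditioning rule, which may be simple or complex).
    """
    target = ensemble_mean(live_members)
    return [(fc, ob) for fc, ob in archive
            if abs(ensemble_mean(fc) - target) <= tol]

# Hypothetical archive of past day-1 ensemble forecasts and verifying observations
archive = [
    ([2.1, 2.4, 2.0], 2.6),
    ([5.0, 5.5, 4.8], 4.1),
    ([2.3, 2.2, 2.5], 2.0),
    ([7.9, 8.1, 8.0], 8.3),
]
live = [2.2, 2.5, 2.1]  # the live forecast issued X days into the future

cases = similar_past_cases(live, archive)
errors = [ensemble_mean(fc) - ob for fc, ob in cases]
```

The pooled `errors` stand in for "how similar forecasts performed in the past", which is the raw material for any prognostic verification product.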
‘Meta-verification’ • Screening large verification datasets • Large volumes of data produced by EVS • End-users need condensed data • But a single summary metric gives a biased view • Better to build rules using several metrics • Make rules ‘aware’ of forecasting situations • Status • Will investigate possibilities soon (e.g. AI)
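The rule-based screening idea can be sketched as follows. The metric names, thresholds, and the high-flow/low-flow distinction are illustrative assumptions, not EVS functionality: the point is that several metrics are combined, and the limits adapt to the forecasting situation.

```python
# Illustrative meta-verification sketch: flag records whose verification
# statistics breach simple multi-metric rules, with thresholds that are
# 'aware' of the forecasting situation (here, high-flow vs. low-flow).

def screen(records):
    """records: list of dicts with keys 'site', 'situation',
    'crps_skill', and 'bias' (all hypothetical metric names).
    Returns the flagged subset with the reasons for each flag."""
    flagged = []
    for r in records:
        reasons = []
        # Rule 1: negative skill relative to climatology is always suspect
        if r["crps_skill"] < 0.0:
            reasons.append("no skill vs climatology")
        # Rule 2: tolerate more bias in high-flow than in low-flow situations
        bias_limit = 0.3 if r["situation"] == "high_flow" else 0.1
        if abs(r["bias"]) > bias_limit:
            reasons.append("bias exceeds situational limit")
        if reasons:
            flagged.append((r["site"], reasons))
    return flagged

# Hypothetical per-site verification summaries
records = [
    {"site": "A", "situation": "low_flow", "crps_skill": 0.4, "bias": 0.05},
    {"site": "B", "situation": "low_flow", "crps_skill": 0.2, "bias": 0.25},
    {"site": "C", "situation": "high_flow", "crps_skill": -0.1, "bias": 0.2},
]
flagged = screen(records)
```

Only the flagged sites reach the end-user, which is the condensation step; a single metric alone would have missed site B or excused site C.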
Questions • What plans for ensemble verification? • What priorities for diagnostic verification? • What priorities for real-time verification? • Ideas on specific products for each?