Verification of SREF Aviation Forecasts at Binghamton, NY
Justin Arnott, NOAA / NWS Binghamton, NY
Motivation
• Ensemble information is making an impact throughout the forecast process
• Examples: SREFs, MREFs, NAEFS, ECMWF Ensemble
• SREF resolution is reaching the mesoscale (32-45 km), a scale at which some aviation impacts may be resolvable
• Can SREFs provide useful information in the aviation forecast process?
SREF
• 21-member multi-model ensemble (membership summarized below)
• 10 Eta (32 km, 60 vertical levels)
• 3 NCEP WRF-NMM (40 km, 52 vertical levels)
• 3 NCAR WRF-ARW (45 km, 35 vertical levels)
• 5 NCEP RSM (45 km, 28 vertical levels)
• Various ICs/BCs and physical parameterizations
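For reference, the composition above can be captured in a small configuration table. This is a minimal illustrative sketch in Python; the structure and field names are hypothetical and not part of the SREF system, and only the member counts, grid spacings, and vertical level counts come from the slide.

# Illustrative summary of the 21-member SREF composition listed above.
# The list/dict structure and field names are hypothetical; only the counts,
# grid spacings, and vertical level counts come from the slide.
SREF_MEMBERS = [
    {"model": "Eta",     "count": 10, "dx_km": 32, "levels": 60},
    {"model": "WRF-NMM", "count": 3,  "dx_km": 40, "levels": 52},
    {"model": "WRF-ARW", "count": 3,  "dx_km": 45, "levels": 35},
    {"model": "RSM",     "count": 5,  "dx_km": 45, "levels": 28},
]

# Sanity check: the membership totals 21.
assert sum(m["count"] for m in SREF_MEMBERS) == 21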
SREF Aviation Forecasts
• Numerous aviation parameters are created from the 09Z and 21Z runs
• For creating TAFs, the CIG/VSBY fields may offer the most potential use (see the sketch below)
• Some fields are output directly, others are derived
• Fields include: CIG, VSBY, icing, turbulence, jet stream, shear, convection, precipitation, freezing level, fog, etc.
http://wwwt.emc.ncep.noaa.gov/mmb/SREF/SREF.html
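To illustrate how categorical CIG/VSBY guidance can be derived from the ensemble, the Python sketch below applies the standard NWS flight-category thresholds (MVFR: ceiling 1,000-3,000 ft and/or visibility 3-5 mi; IFR: ceiling below 1,000 ft and/or visibility below 3 mi) to each member and counts the fraction of the 21 members meeting the condition. The thresholds and function names are assumptions made here for illustration; the slides do not describe how NCEP actually derives its CIG/VSBY probability fields.

def is_ifr_or_worse(cig_ft, vsby_mi):
    """IFR or worse: ceiling below 1000 ft and/or visibility below 3 mi.
    Standard NWS flight-category thresholds, assumed here for illustration."""
    return cig_ft < 1000 or vsby_mi < 3.0

def is_mvfr_or_worse(cig_ft, vsby_mi):
    """MVFR or worse: ceiling 3000 ft or lower and/or visibility 5 mi or less."""
    return cig_ft <= 3000 or vsby_mi <= 5.0

def ensemble_probability(members, condition):
    """Fraction of ensemble members (e.g., the 21 SREF members) meeting a condition.
    `members` is a list of (ceiling_ft, visibility_mi) pairs for one valid time."""
    hits = sum(1 for cig, vsby in members if condition(cig, vsby))
    return hits / len(members)

# Example: 7 of 21 members forecasting IFR-or-worse gives a ~33% probability.
example_members = [(800, 2.0)] * 7 + [(4000, 7.0)] * 14
print(ensemble_probability(example_members, is_ifr_or_worse))  # ~0.33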
SREF Aviation Forecasts
• Verification of SREF CIG/VSBY forecasts has been minimal
• Alaska Region has completed a study using SREF MEAN values
• No verification study has been conducted over the lower 48
~40 km Expectations
• CIGS/VSBYS can vary greatly on scales far smaller than the 32-45 km grid spacing of the SREF
• Some MVFR/IFR events are more localized than others
• Summer MVFR/IFR tends to be more localized
• Winter MVFR/IFR is typically more widespread
• Bottom line: expect relatively poor SREF performance during the warm season, with improvements during the cool season
The Study So Far…
• Gather SREF CIG/VSBY data daily starting July 1, 2008
• Data provided specifically for the project by Binbin Zhou at NCEP
• Compute POD/FAR/CSI/BIAS statistics for July-September at KBGM (see the sketch below)
• MVFR and IFR categories (due to small sample size)
• Investigate basing the forecast on different probability thresholds: 50%, 30%, 20%, 10%
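The categorical scores above come from a standard 2x2 contingency table of forecast versus observed events, and the probability thresholds turn the SREF probability fields into yes/no forecasts before scoring. A minimal Python sketch follows, with illustrative variable names; the study's actual tooling is not described in the slides.

def categorical_scores(pairs):
    """Compute POD, FAR, CSI, and BIAS from (forecast, observed) boolean pairs.

    a = hits (event forecast and observed)
    b = false alarms (event forecast but not observed)
    c = misses (event observed but not forecast)
    """
    a = sum(1 for f, o in pairs if f and o)
    b = sum(1 for f, o in pairs if f and not o)
    c = sum(1 for f, o in pairs if not f and o)

    pod  = a / (a + c) if (a + c) else float("nan")          # probability of detection
    far  = b / (a + b) if (a + b) else float("nan")          # false alarm ratio
    csi  = a / (a + b + c) if (a + b + c) else float("nan")  # critical success index
    bias = (a + b) / (a + c) if (a + c) else float("nan")    # >1 means overforecasting

    return {"POD": pod, "FAR": far, "CSI": csi, "BIAS": bias}

def threshold_forecasts(probabilities, threshold):
    """Convert probability forecasts to yes/no forecasts at a threshold
    (e.g., 0.5, 0.3, 0.2, 0.1 as examined in the study)."""
    return [p >= threshold for p in probabilities]

# Example: score hypothetical 30%-threshold forecasts against MVFR/IFR observations.
probs = [0.05, 0.35, 0.60, 0.10, 0.25]
obs   = [False, True, True, False, True]
print(categorical_scores(list(zip(threshold_forecasts(probs, 0.30), obs))))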
The Study So Far…continued
• Compare SREF results to WFO Binghamton, NY and GFS MOS forecasts
• Use the stats-on-demand web interface to obtain these data
Results
• Very little MVFR/IFR at KBGM in July-September
• IFR or MVFR only ~10% of the time
• So, we're aiming at a very small target!
Results – MVFR/IFR CIGS
• WFO BGM/GFS MOS more skillful than the SREF mean or any SREF probability threshold
• 30% probability threshold shows the best skill
• Large false alarm ratios with nearly all SREF forecasts
• Large positive biases for the SREF mean and nearly all probability thresholds, i.e., overforecasting of MVFR/IFR CIGS
Comparing Apples with Oranges?
• These results compare 9-21 hr SREF forecasts with 0-6 hr WFO BGM forecasts and 6-12 hr GFS forecasts
• Due to the later availability of SREF data (09Z SREFs are not available for use until the 18Z TAFs)
• How well does a 9-24 hr GFS MOS (or BGM) forecast perform?
• A 21 hr window is not available via stats-on-demand
Results – MVFR/IFR CIGS
• WFO BGM/GFS MOS performance does not decrease substantially when the comparison time window is changed
Results – MVFR/IFR VSBYS
• SREF mean, as well as the 30% and 20% thresholds, fails to identify enough cases to be useful
• 10% threshold shows the greatest skill and is comparable to GFS MOS forecasts!
• There is a significant positive bias at this threshold
Results – IFR CIGS
• SREF mean is poor at identifying IFR CIGS
• CSI scores for SREF probability fields are an improvement on WFO BGM/GFS MOS
• Bias scores indicate underforecasting at the 30% threshold but large overforecasting at the 20% and 10% thresholds
• WFO BGM/GFS MOS tend to underforecast IFR CIGS
Results – IFR VSBYS
• SREF cannot readily identify IFR VSBY situations except at the 10% threshold
• Tremendous biases indicate, however, that even these forecasts are not useful
Summary
• SREF performance is occasionally comparable to GFS MOS: potentially useful guidance
• Promising for "~direct" model output
• Hampered by later arrival time at the WFO
• MEAN fields show little/no skill
• Different probability thresholds show the best skill for different variables/categories
• CIGS:
  • SREFs frequently overforecast MVFR/IFR CIGS
  • SREFs perform surprisingly well with IFR CIGS
  • Best-performing probability thresholds are 20-30%, balancing BIAS with CSI
Summary, continued
• VSBYS:
  • SREFs have trouble identifying VSBY restrictions
  • A 10% probability threshold is necessary to get any signal, but this may be useful for MVFR/IFR (not IFR alone)
Future Plans
• Continue computing statistics through the upcoming cool season
• Expect improved results given more widespread (i.e., resolvable) restrictions
• Expand to other WFO BGM TAF sites
• Work with NOAA/NWS/NCEP on improving calculations of CIG/VSBY
Acknowledgements
• Binbin Zhou – NOAA/NWS/NCEP, for providing access to SREF data in near real-time