
Enhancing NDFD Forecast Verification Using ADAS for Western U.S.

Evaluation and improvement of techniques to verify NDFD forecasts using ADAS over the western United States during the 2003-2004 winter season. Gridded verification strategies are enhanced to assess forecast capabilities and deficiencies for temperature, dew point temperature, and wind speed. Real-time weather data platforms, MesoWest and ROMAN, supply the observations used for validation, and known ADAS limitations are addressed to produce more accurate verifying analyses.


Presentation Transcript


  1. VERIFICATION OF NDFD GRIDDED FORECASTS USING ADAS
John Horel (1), David Myrick (1), Bradley Colman (2), Mark Jackson (3)
(1) NOAA Cooperative Institute for Regional Prediction; (2) National Weather Service, Seattle; (3) National Weather Service, Salt Lake City
Objective: Verify winter season 2003-2004 NDFD gridded forecasts of temperature, dew point temperature, and wind speed over the western United States

  2. Validation of NDFD Forecast Grids
Developing an effective gridded verification scheme is critical to identifying the capabilities and deficiencies of the IFPS forecast process (SOO White Paper 2003). National efforts to verify NDFD forecasts, led by MDL, are underway.
• Objective: Evaluate and improve techniques required to verify NDFD grids
• Method: Compare NDFD forecasts to analyses created at the Cooperative Institute for Regional Prediction (CIRP) at the University of Utah using the Advanced Regional Prediction System Data Assimilation System (ADAS)
• Period examined: 00Z NDFD forecasts from 12 November 2003 to 29 February 2004; verifying analyses from 17 November 2003 to 7 March 2004
• Many complementary validation strategies (the summary statistics are defined below):
  • Forecasts available from NDFD for a particular grid box are intended to be representative of the conditions throughout that area (a 5 x 5 km region)
  • Interpolate gridded forecasts to observing sites
  • Compare gridded forecasts to a gridded analysis based upon observations
  • Verify gridded forecasts only where confidence in the analysis is high
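Throughout the presentation, forecast-minus-truth differences are summarized with bias and RMS statistics. The slides do not spell out the definitions, so the standard forms are restated here for reference:

$$\mathrm{bias} = \frac{1}{N}\sum_{i=1}^{N}\left(f_i - a_i\right), \qquad \mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(f_i - a_i\right)^2}$$

where $f_i$ is the NDFD forecast and $a_i$ the verifying observation or analysis value at matched point $i$.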

  3. MesoWest and ROMAN
• MesoWest: Cooperative sharing of current weather information around the nation, with real-time and retrospective access through a state-of-the-art database. http://www.met.utah.edu/mesowest
• ROMAN (Real-Time Observation Monitor and Analysis Network): Provides real-time weather data around the nation to meteorologists and land managers for fire weather applications.

  4. 2003 Fire Locations (Red); ROMAN Stations (Grey)
Fire locations provided by the Remote Sensing Applications Center from MODIS imagery

  5. Documentation
• MesoWest: Horel et al. (2002), Bull. Amer. Meteor. Soc., February 2002
• ROMAN:
  • Horel et al. (2004), submitted to International Journal of Wildland Fire, January 2004
    Text: http://www.met.utah.edu/jhorel/homepages/jhorel/ROMAN_text.pdf
    Figures: http://www.met.utah.edu/jhorel/homepages/jhorel/ROMAN_fig.pdf
  • Horel et al. (2004), IIPS Conference
• ADAS:
  • Myrick and Horel (2004), submitted to Wea. Forecasting. http://www.met.utah.edu/jhorel/cirp/WAF_Myrick.pdf
  • Lazarus et al. (2002), Wea. Forecasting, 17, 971-1000.
• On-line help: http://www.met.utah.edu/droman/help

  6. Are All Observations Equally Bad?
• All measurements have errors (random and systematic)
• Errors arise from many factors:
  • Siting (obstacles, surface characteristics)
  • Exposure to environmental conditions (e.g., temperature sensor heating/cooling by radiation, conduction, or reflection)
  • Sampling strategies
  • Maintenance standards
  • Metadata errors (incorrect location, elevation)
[Slide photo: station SNZ]

  7. Are All Observations Equally Good?
• Why was the sensor installed? Observing needs and sampling strategies vary (air quality, fire weather, road weather)
• Station siting results from pragmatic tradeoffs: power, communication, obstacles, access
• Use common sense:
  • Wind at a sensor sited in the base of a mountain pass will likely blow from only two directions
  • Errors depend upon conditions (e.g., temperature spikes are common with calm winds)
• Use available metadata: topography; land use, soil, and vegetation type; photos
• Monitor quality control information: basic consistency checks; comparison to other stations (see the buddy-check sketch below)
[Slide photo: station UT9]
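As a concrete illustration of "comparison to other stations", here is a minimal buddy-check sketch. The threshold, neighbor count, and function name are illustrative assumptions, not MesoWest's actual quality control rules:

```python
import numpy as np

def buddy_check(value, neighbor_values, max_diff=10.0, min_neighbors=3):
    """Flag an observation that differs too much from nearby stations.

    value           -- observation being tested (e.g., temperature in degC)
    neighbor_values -- values from surrounding stations at the same time
    max_diff        -- allowed departure from the neighbor median (illustrative)
    min_neighbors   -- require enough buddies before making a judgment
    """
    neighbors = np.asarray(neighbor_values, dtype=float)
    neighbors = neighbors[~np.isnan(neighbors)]
    if neighbors.size < min_neighbors:
        return "unchecked"          # too few buddies to decide
    departure = abs(value - np.median(neighbors))
    return "suspect" if departure > max_diff else "ok"

# Example: a +25 degC report while nearby stations hover near 2 degC
print(buddy_check(25.0, [1.8, 2.3, 2.1, 1.5]))   # -> suspect
```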

  8. ADAS: ARPS Data Assimilation System
• ADAS is run in near-real time to create analyses of temperature, relative humidity, and wind over the western U.S. (Lazarus et al. 2002, WAF)
• Analyses are produced on the NWS GFE grid at 2.5, 5, and 10 km spacing in the West; test runs have been made for the lower-48 NDFD grid at 5 km spacing
• Typically > 2000 surface temperature and wind observations are available via MesoWest for each analysis (~5500 for the lower 48)
• The 20 km Rapid Update Cycle (RUC; Benjamin et al. 2002) provides the background field
• Background and terrain fields help to build spatial and temporal consistency into the surface fields
• Efficiency of the ADAS code has been improved significantly, and anisotropic weighting for terrain and coasts has been added (Myrick et al. 2004)
• Current ADAS analyses are a compromise solution and suffer from fundamental problems inherent to the optimum interpolation approach (a schematic sketch of the general idea follows)
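The published ADAS scheme is a successive-correction method that converges toward optimum interpolation (Lazarus et al. 2002). The toy single-variable sketch below only illustrates the general idea of correcting a background field toward observations with distance-dependent weights; it is not the ADAS code, and the Gaussian weight form, length scale, and pass count are assumptions:

```python
import numpy as np

def successive_correction(bg, grid_xy, obs, obs_xy, length_scale=50.0, npass=2):
    """Toy successive-correction analysis at a set of gridpoints.

    bg           -- background values at gridpoints (e.g., RUC temperatures)
    grid_xy      -- (n, 2) gridpoint coordinates in km
    obs, obs_xy  -- observed values and their (m, 2) coordinates in km
    length_scale -- decorrelation length scale in km (the analysis is
                    sensitive to this choice, as the next slide notes)
    """
    analysis = np.asarray(bg, dtype=float).copy()
    # squared distances from each observation to each gridpoint: (m, n)
    d2 = ((obs_xy[:, None, :] - grid_xy[None, :, :]) ** 2).sum(-1)
    for _ in range(npass):
        # innovations: observation minus current analysis at the nearest
        # gridpoint (a crude stand-in for interpolation to the obs site)
        innov = obs - analysis[d2.argmin(axis=1)]
        # Gaussian distance weights spread each innovation onto the grid
        w = np.exp(-d2 / (2.0 * length_scale ** 2))
        wsum = w.sum(axis=0)
        corr = (w * innov[:, None]).sum(axis=0) / np.maximum(wsum, 1e-6)
        analysis += np.where(wsum > 1e-3, corr, 0.0)  # leave data voids alone
    return analysis
```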

  9. ADAS Limitations
• Analysis depends strongly upon the background field; hour-to-hour consistency comes only through the background field
• Analysis is sensitive to the choice of background error decorrelation length scale
• Wind field is not adjusted to the local terrain
• Anisotropic weighting is only partially implemented
• Manual effort is required to maintain the station blacklist
• It is difficult to assess the quality of the analysis independently: the analysis can be constrained to match the observations, which typically leads to a spurious analysis in data-sparse regions

  10. How "Good" Are the Analysis Grids? Relative to MesoWest Observations in the West
Temperature (°C): 17 Nov. 2003 - 7 Mar. 2004

  11. How "Good" Are the Analysis Grids? Relative to MesoWest Observations in the West
Wind Speed (m/s): 17 Nov. 2003 - 7 Mar. 2004

  12. Arctic Outbreak: 21-25 November 2003
[Maps: NDFD 48 h forecast vs. ADAS analysis]

  13. Upper-Level Ridging and Surface Cold Pools: 13 January 2004
[Maps: NDFD 48 h forecast vs. ADAS analysis]

  14. Validation of NDFD Forecasts at "Points"
• NDFD forecasts are intended to be representative of a 5 x 5 km grid box
• Compare NDFD forecasts at the gridpoint adjacent to (lower/left of) each observation: slightly inconsistent, but avoids the errors that additional bilinear interpolation to the observation location would introduce in complex terrain (see the sketch after this list)
• Compare NDFD forecasts to the ADAS and RUC verification grids at the same sample of gridpoints: no interpolation
• All observation points have equal weight; since the points are distributed unevenly, not all regions receive equal weight
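A minimal sketch of the point-matching step, assuming a regular lat/lon grid for simplicity (the actual NDFD grid is a 5 km Lambert conformal grid, so the real lookup differs; the function names are illustrative):

```python
import numpy as np

def lower_left_index(lon, lat, grid_lon0, grid_lat0, d_deg):
    """Indices of the gridpoint adjacent (lower/left) to an observation,
    avoiding any interpolation of the forecast value."""
    i = int((lon - grid_lon0) // d_deg)   # column: first gridpoint to the left
    j = int((lat - grid_lat0) // d_deg)   # row: first gridpoint below
    return j, i

def bias_and_rms(forecast, truth):
    """Bias and RMS difference over matched forecast/truth pairs."""
    diff = np.asarray(forecast, dtype=float) - np.asarray(truth, dtype=float)
    return diff.mean(), np.sqrt((diff ** 2).mean())
```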

  15. Verification at ~2500 Observation Locations in the West
[Plots: temperature bias and RMS] Verification of NDFD relative to observations or ADAS is similar. RUC is too warm at 12Z, which leads to a large bias and RMS.

  16. Verification at ~2000 Observation Locations
[Plots: wind speed bias and RMS] RMS is smaller relative to ADAS since NDFD is evaluated at the same gridpoints. NDFD winds are too strong, and RUC winds are too strong as well.

  17. Where Do We Have Greater Confidence in the ADAS Analysis?
• White regions: no observations close enough to adjust the RUC background
• ADAS confidence regions are defined where the total observation weight > 0.25 (a sketch of such a mask follows)
• Coverage varies diurnally, from day to day, and between variables
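A minimal sketch of how such a confidence mask could be built. The 0.25 threshold comes from the slide; the Gaussian weight form and length scale are assumptions carried over from the earlier analysis sketch:

```python
import numpy as np

def confidence_mask(grid_xy, obs_xy, length_scale=50.0, threshold=0.25):
    """Boolean mask of gridpoints where the summed observation weight
    exceeds the threshold; elsewhere the analysis is essentially the RUC
    background and is excluded from verification (the white regions)."""
    d2 = ((obs_xy[:, None, :] - grid_xy[None, :, :]) ** 2).sum(-1)
    total_weight = np.exp(-d2 / (2.0 * length_scale ** 2)).sum(axis=0)
    return total_weight > threshold
```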

  18. Gridded Validation of NDFD Forecasts
• RUC is downscaled to the NDFD grid using the NDFD terrain (one common downscaling approach is sketched below)
• The ADAS analysis is performed on the NDFD grid
• Statistics based upon areas with sufficient observations to have "confidence" in the analysis are denoted "ADAS_C"
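The slides do not say how the terrain-based downscaling is done. A common approach, shown here purely as an assumption, is to adjust the coarse-grid temperature for the elevation difference between the RUC and NDFD terrain with a fixed lapse rate:

```python
import numpy as np

STD_LAPSE = 0.0065  # K per m, standard-atmosphere lapse rate (assumed)

def downscale_temperature(t_ruc, z_ruc, z_ndfd, lapse=STD_LAPSE):
    """Adjust RUC temperature to the NDFD terrain height.

    t_ruc  -- RUC temperature interpolated to the NDFD gridpoints (K)
    z_ruc  -- RUC model terrain height at those points (m)
    z_ndfd -- NDFD 5 km terrain height (m)
    """
    # cooler where the NDFD terrain is higher than the RUC terrain
    return t_ruc - lapse * (np.asarray(z_ndfd) - np.asarray(z_ruc))
```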

  19. Average 00Z Temperature: DJF 2003-2004
[Map: NDFD 48 h forecast]

  20. 48 h Forecast Temperature Bias (NDFD - Analysis): DJF 2003-2004
[Maps: NDFD-RUC, NDFD-ADAS]

  21. 48 h Forecast Temperature RMS Difference (NDFD - Analysis): 00Z 18 Nov. - 23 Dec. 2003
[Maps: RUC, ADAS]

  22. Average 00Z Dewpoint and Wind Speed: DJF 2003-2004
[Maps: dewpoint, wind speed]

  23. Dewpoint and Wind Speed 48 h Forecast RMS Difference (NDFD - Analysis): DJF 2003-2004
[Maps: dewpoint, wind speed]

  24. Bias and RMS for Temperature as a Function of Forecast Length: DJF 2003-2004
No difference when verification is limited to areas where confidence in the ADAS analysis is higher

  25. Bias and RMS for Dewpoint Temperature as a Function of Forecast Length: DJF 2003-2004
Confidence in the analysis of dewpoint temperature is lower

  26. Bias and RMS for Wind Speed as a Function of Forecast Length: DJF 2003-2004
NDFD has a higher speed bias in regions with observations

  27. Arctic Outbreak: 21-25 November 2003
NDFD and ADAS with DJF 2003-2004 seasonal means removed
[Maps: NDFD 48 h forecast vs. ADAS analysis]

  28. Surface Cold Pool Event: 13 January 2004
NDFD and ADAS with DJF 2003-2004 seasonal means removed
[Maps: NDFD 48 h forecast vs. ADAS analysis]

  29. [Time series: solid = ADAS, dashed = ADAS_C]

  30. [Time series: solid = ADAS, dashed = ADAS_C]

  31. DJF 2003-2004 Anomaly Pattern Correlations (the statistic is sketched below)
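The anomaly pattern correlation compares the spatial patterns of forecast and analysis after removing each field's DJF 2003-2004 seasonal mean at every gridpoint. A minimal sketch of the centered form (the function name and two-climatology signature are assumptions):

```python
import numpy as np

def anomaly_pattern_correlation(fcst, anal, fcst_clim, anal_clim):
    """Centered pattern correlation of forecast vs. analysis anomalies.

    fcst, anal           -- 2-D grids (e.g., NDFD forecast, ADAS analysis)
    fcst_clim, anal_clim -- DJF seasonal-mean grids removed from each field
    """
    fa = np.asarray(fcst - fcst_clim, dtype=float).ravel()
    aa = np.asarray(anal - anal_clim, dtype=float).ravel()
    fa -= fa.mean()                      # center the anomalies spatially
    aa -= aa.mean()
    return float(fa @ aa) / np.sqrt(float(fa @ fa) * float(aa @ aa))
```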

  32. Summary
• At the present time, verification of NDFD forecasts is relatively insensitive to methodology: the errors of the NDFD forecasts are much larger than the uncertainty in the verification data sets.
  • Differences between analyses (e.g., RUC vs. ADAS), and between analyses and observations, are much smaller than differences between NDFD forecast grids and either analyses or observations
  • Difference between the ADAS temperature analysis on the 5 km grid and station observations is of order 1.5-2 °C
  • Difference between the NDFD temperature forecast and the ADAS temperature analysis is of order 3-6 °C
• Systematic NDFD forecast errors are evident that may be correctable at WFOs and through improved coordination between WFOs
• Skill of the NDFD forecast grids, when the seasonal average is removed to focus upon synoptic and mesoscale variations, depends strongly on the parameter and the synoptic situation:
  • Anomaly pattern correlations between NDFD and ADAS temperature grids over the western United States suggest forecasts are most skillful out to 72 h
  • Dew point temperature skill is evident out to 48 h and wind speed out to 36 h
  • Little difference in NDFD skill when evaluated over areas where analysis confidence is higher
  • Some strongly forced synoptic situations are well forecast over the West as a whole
  • Persistence forecasts were hard to beat during cold pool events
• Specific issues for NDFD validation in complex terrain: scales of physical processes, analysis methodology, validation techniques

  33. Issues for NDFD Validation in Complex Terrain
• Physical processes:
  • Horizontal spatial scales of severe weather phenomena in complex terrain are often local and not sampled by the NDFD 5 km grid
  • Vertical decoupling of the surface wind from the ambient flow at night is difficult to forecast. Which is better guidance: matching locally light surface winds, or focusing upon the synoptic-scale forcing?

  34. Issues for NDFD Validation in Complex Terrain
• Analysis methodology:
  • An analysis of record will require continuous assimilation of surface observations, as well as other data resources (radar, satellite, etc.)
  • Considerable effort is required to quality control observations (surface station siting issues, radar terrain clutter problems, etc.); quality control of precipitation data is particularly difficult
  • The NWP model used to drive the assimilation must resolve the terrain without smoothing at the highest possible resolution (2.5 km)
  • NCEP is proposing to provide an analysis of record for such applications

  35. Issues for NDFD Validation in Complex Terrain
• Validation techniques:
  • Upscaling of WFO grids to the NDFD grid introduces sampling errors in complex terrain
  • Which fields are verified? Max/min T vs. hourly temperature? Max/min spikes? Fitting of a sinusoidal curve to max/min T to generate hourly T grids (a sketch follows this list)? Instantaneous or time-averaged temperature observations vs. max/min?
  • Objectively identify regions where forecaster skill is limited by sparse data
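The sinusoidal-fit option can be sketched as follows. A single symmetric sinusoid is assumed, with the daily maximum pinned at mid-afternoon; real diurnal cycles are asymmetric, so this is illustrative only, and the peak hour is an assumed parameter:

```python
import numpy as np

def hourly_from_maxmin(tmax, tmin, hour, t_max_hr=15.0):
    """Estimate hourly temperature from daily max/min with a sinusoid.

    Assumes the maximum occurs at t_max_hr local time (and, by symmetry,
    the minimum 12 hours earlier), which is an illustrative choice.
    """
    mean = 0.5 * (tmax + tmin)
    amp = 0.5 * (tmax - tmin)
    # cosine phased so the curve peaks at t_max_hr
    return mean + amp * np.cos(2.0 * np.pi * (hour - t_max_hr) / 24.0)

# Example: max 10 degC, min -4 degC; estimate the 21 LT temperature
print(round(hourly_from_maxmin(10.0, -4.0, 21.0), 1))   # -> 3.0
```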

  36. Ongoing and Future Work
• Submit a paper on the ADAS evaluation of NDFD grids
• Make available simplified ADAS code suitable for use at WFOs in GFE
• Develop a variational constraint that adjusts winds to the local terrain
• Improve anisotropic weighting
• Collaborate with MDL and NCEP on applications of MesoWest observations and ADAS
• Meeting on an action plan for the analysis of record: June 29-30
