
Validation of Surface Reference Data Sets using Satellite and Model Information

This presentation discusses the validation of surface reference data sets for precipitation retrieval schemes, highlighting the need to identify regions of good and poor data quality. It explores the performance evaluation and cross-comparison of different surface data sets, including radar, satellite, and model data. The analysis reveals the impact of local factors on data quality and provides insights into the limitations and biases of different data sources.





Presentation Transcript


  1. 6th IPWG meeting, 15-19 October 2012, São José dos Campos, Brazil. Goddard Space Flight Center.
     Validation of Surface Reference Data Sets using Satellite and Model Information
     Chris Kidd, Earth System Science Interdisciplinary Center, University of Maryland & NASA/Goddard Space Flight Center. Chris.kidd@nasa.gov

  2. Background
     Surface reference data sets (SRDs: radar, gauges, etc.) are an integral part of any precipitation retrieval scheme. Despite extensive work to reduce or mitigate inherent errors within SRDs, errors still exist, particularly at local scales. There is a need to identify regions of good (and, conversely, poor) data within SRDs to ensure that: i) where used, good-quality data are used for calibration and verification/validation; and ii) the quality of the SRDs can be improved.

  3. SE England analysis (vs radar)
     Reference data: surface radar. Performance evaluation of individual 0.25° × 0.25° boxes, with performance improving over the timeline shown. [Figure: timeline of skill for individual grid boxes]

  4. SRDs vs other measures
     • Satellite and model vs surface errors should be random, particularly over small regions with similar features;
     • The relative performance of products at particular locations remains generally constant;
     • Local differences in surface reference data lead to differences in statistical performance.
     Local factors include:
     • Radar range (beam height above ground)
     • Blockage (terrain/buildings)
     • Anaprop errors (terrain/buildings, shipping/aircraft)

  5. Cross-comparison of satellite/surface
     Kidd 1997: comparison of satellite retrieval and surface radar.
     • Analysis of AIP-3 (TOGA-COARE) radar data vs satellite estimates: significant range effects (>120 km poor);
     • Spatial mapping of radar errors, identifying range effects and surface clutter.
     Through the generation of a contingency table of rain/no-rain and subsequent spatial mapping, errors can be identified (see the sketch below).
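A minimal sketch of how such a rain/no-rain contingency table might be built, assuming boolean rain flags already matched in time and space; the function and variable names are illustrative, not from the presentation:

```python
import numpy as np

def contingency_counts(sat_rain, srd_rain):
    """2x2 rain/no-rain contingency counts between a satellite
    estimate and a surface reference data set (SRD). Both inputs
    are boolean arrays of identical shape, True where rain is
    detected at a matched pixel."""
    hits         = np.sum( sat_rain &  srd_rain)  # both detect rain
    false_alarms = np.sum( sat_rain & ~srd_rain)  # satellite rain, SRD no-rain
    misses       = np.sum(~sat_rain &  srd_rain)  # satellite no-rain, SRD rain
    corr_negs    = np.sum(~sat_rain & ~srd_rain)  # both no-rain
    return hits, false_alarms, misses, corr_negs

# Example with synthetic rain rates, flagging 'rain' above the
# 0.5 mm/h threshold used later in the presentation.
sat = np.random.gamma(0.5, 1.0, (100, 100)) > 0.5
srd = np.random.gamma(0.5, 1.0, (100, 100)) > 0.5
print(contingency_counts(sat, srd))
```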

  6. Data sets:
     • US NMQ radar data (0.01 degree, 5 minute)
     • European Nimrod data (5 km, 15 minute)
     • TRMM Precipitation Radar data (4.3 km, occasional)
     • Global IR data (4 km, 30 minute)
     • ECMWF operational forecast output (15 km, 3 hour)

  7. TRMM PR vs NMQ surface radar
     Coincident (time/space) matchups at 5 km resolution (2009-2011); the matchup step is sketched below. [Figure: maps of 'PR rain vs NMQ no-rain' and 'PR no-rain vs NMQ rain']
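A hedged sketch of the matchup step. The function names, the 5x5 block size (coarsening ~0.01-degree NMQ to ~5 km), and the time tolerance are all assumptions for illustration, not details given in the presentation:

```python
import numpy as np

def nearest_time_index(pr_time_s, nmq_times_s, tolerance_s=150):
    """Index of the NMQ scan closest in time to a PR overpass
    (times as epoch seconds). NMQ is nominally 5-minute data,
    so +/-150 s brackets the nearest scan. Returns None if no
    scan falls within the tolerance."""
    dt = np.abs(np.asarray(nmq_times_s) - pr_time_s)
    i = int(np.argmin(dt))
    return i if dt[i] <= tolerance_s else None

def block_average(field, n=5):
    """Coarsen a 2-D rain-rate field by averaging n x n blocks,
    e.g. ~1 km (0.01-degree) NMQ down to ~5 km to match the PR.
    Trailing rows/columns that do not fill a block are dropped."""
    ny, nx = field.shape[0] // n, field.shape[1] // n
    f = field[:ny * n, :nx * n]
    return f.reshape(ny, n, nx, n).mean(axis=(1, 3))
```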

  8. TRMM PR vs NMQ surface radar
     Heidke Skill Score (0.5 mm h⁻¹ threshold), from coincident (time/space) matchups at 5 km resolution (2009-2011). [Figure: HSS map; colour scale 0.0-0.8]
     • Radar range is a significant artefact;
     • Identification of regions of 'good' surface data.
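The HSS follows directly from the contingency counts above; computed box by box, it yields maps like the one on this slide. A minimal sketch using the standard 2x2 Heidke definition:

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Heidke Skill Score from 2x2 rain/no-rain contingency
    counts (here rain defined at a 0.5 mm/h threshold):
    1 = perfect, 0 = no skill beyond random chance."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    numerator   = 2.0 * (a * d - b * c)
    denominator = (a + c) * (c + d) + (a + b) * (b + d)
    return numerator / denominator if denominator else 0.0
```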

  9. Extension to extra-TRMM regions
     How do you verify surface data sets outside the TRMM PR region?
     • Surface radar data are inconsistently correct;
     • Infrared retrievals are consistently incorrect.
     Hence the use of global IR data as a proxy for rainfall (simple Tb thresholding; see the sketch below), and the use of modelled precipitation.
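A sketch of the simple Tb thresholding. The presentation does not state the threshold used, so the classic 235 K GOES Precipitation Index value is assumed here purely for illustration:

```python
import numpy as np

def ir_rain_proxy(tb_kelvin, threshold_k=235.0):
    """Flag pixels as 'raining' where the IR brightness
    temperature is colder than a threshold (cold cloud tops).
    235 K is an assumption (the GPI value), not a value
    quoted on the slide."""
    return np.asarray(tb_kelvin) < threshold_k
```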

  10. IR vs NMQ
      Global IR data as proxy for rain: radar over/under-estimation. [Figure: 0.04-degree map; darker = radar overestimates]

  11. IR vs NMQ
      Global IR data as proxy for rain: IR errors/characteristics. [Figure: 0.04-degree map; darker = IR 'overestimates']
      NOTE: NMQ is NOT on an equal-area projection!
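Because NMQ is not on an equal-area projection, frequency statistics aggregated over the grid should be area-weighted; for a regular lat/lon grid the relative weight is simply cos(latitude). A minimal sketch of this (assumed) correction:

```python
import numpy as np

def latitude_weights(lat_deg):
    """Relative grid-cell area weights for a regular lat/lon
    grid: cell area scales with cos(latitude), so statistics
    averaged over the grid should use these as weights."""
    return np.cos(np.deg2rad(np.asarray(lat_deg)))
```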

  12. ECMWF vs NMQ
      [Figure: 0.2-degree map; darker = radar overestimates]

  13. ECMWF vs NMQ
      [Figure: 0.2-degree map; darker = ECMWF 'overestimates']

  14. IR & ECMWF vs NMQ
      [Figure: IR comparison (darker = radar overestimates) alongside ECMWF comparison (darker = product overestimates)]
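The slides do not say exactly how the darker/lighter maps are computed; one plausible construction (assumed here, not confirmed by the presentation) is a per-pixel difference in rain-occurrence frequency over a stack of matched scenes:

```python
import numpy as np

def rain_frequency_difference(product_rain, radar_rain):
    """Per-pixel difference in rain-occurrence frequency over a
    stack of matched boolean scenes shaped (time, y, x).
    Positive: the product flags rain more often than the radar
    ('product overestimates'); negative: the reverse."""
    return (np.asarray(product_rain, float).mean(axis=0)
            - np.asarray(radar_rain, float).mean(axis=0))
```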

  15. HSS scores: ECMWF & PR vs NMQ
      [Figure: ECMWF vs NMQ and TRMM PR vs NMQ HSS maps; colour scale 0.0-0.8]
      Magnitudes differ, but patterns are similar.

  16. Europe: UKMO-Nimrod radar vs IR
      [Figure annotations: clutter; large-scale differences due to IR; terrain blockage; radar underestimation w.r.t. infrared; radar overestimation w.r.t. infrared]

  17. Europe: UKMO-Nimrod radar vs ECMWF
      Heidke Skill Score (0.5 mm h⁻¹ threshold). [Figure: HSS map; colour scale 0.0-0.8]
      • Radar range is a significant artefact;
      • Eastern region: different surface radar thresholding?
      Extension of technique to European OPERA & Australian Rainfields.

  18. Conclusions
      Emphasises that the cross-validation of data sets is useful:
      • IR data, using a simple threshold, can help identify small-scale artefacts within the surface radar data sets;
      • Models, although of coarser resolution, reinforce the findings of the simple IR thresholding;
      • PR data remain the best reference; similar observing systems (GPM-DPR…) will follow.
      Through use of IR/model/PR, maps of 'confidence' can be produced to help identify regions of high-quality surface reference data (one possible combination is sketched below):
      - helps to improve validation/verification of satellite products;
      - identifies regions requiring improved surface data.
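A hedged sketch of how such confidence maps might be combined; the 0.4 threshold and the all-references-must-agree rule are illustrative choices, not from the presentation:

```python
import numpy as np

def confidence_map(hss_maps, good_threshold=0.4):
    """Combine per-grid-box HSS maps from independent references
    (IR proxy, model, PR) into a single boolean 'confidence'
    mask for the surface data: a box is trusted where every
    reference exceeds an (assumed) skill threshold."""
    stack = np.stack([np.asarray(m, float) for m in hss_maps])
    return np.nanmin(stack, axis=0) >= good_threshold
```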
