Verification Studies of Model-Derived Cloud Fields in Support of Aviation Applications
Paul A. Kucera, Tara Jensen, Courtney Weeks, Cory Wolff, David Johnson, and Barbara Brown
National Center for Atmospheric Research (NCAR) / Research Applications Laboratory (RAL) / Joint Numerical Testbed (JNT)
Boulder, Colorado, USA
02 December 2010
NCAR's Joint Numerical Testbed: Who We Are and What We Do
Developmental Testbed Center (DTC) National Office (Director: Bill Kuo)
The DTC and JNT teams share staff, community systems, and evaluation methods.
Mission: To support the sharing, testing, and evaluation of research and operational NWP systems, and to facilitate the transfer of research capabilities to operational prediction centers.
The DTC is an integral part of the JNT (and vice versa). However, the DTC is a multi-institutional effort among the JNT, AFWA, NOAA, and the NWP community to help bridge research and operational NWP efforts. The DTC National Office is located in the JNT.
Aviation-Related Evaluation within the JNT
We have two main aviation-related projects within the JNT:
• HWT-DTC Collaboration: evaluation of simulated radar-based products
• NASA ROSES Project: developing methods for the evaluation of satellite-derived products
DTC Objective Evaluation for the 2010 HWT Spring Experiment
• Severe evaluation: REFC or CREF at 20, 25, 30, 35, 40, 50, 60 dBZ
• QPF evaluation: APCP and probability at 0.5, 1.0, 2.0 inches in 3 h and 6 h
• Aviation evaluation: RETOP at 25, 30, 35, 40, 45 kft
Quick Glance at HWT-DTC Evaluation Results
• Frequency bias indicates that the CAPS ensemble-mean field exhibits a large over-forecast of the areal coverage of cloud complexes
• The ratio of MODE forecast objects to observed objects suggests that the large frequency bias may be due to a factor of 3-5 over-prediction of forecast convective cells
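To make the statistic concrete, here is a minimal sketch (not MET itself) of how frequency bias is computed from a 2x2 contingency table at a reflectivity threshold; the fields and threshold below are synthetic stand-ins.

```python
import numpy as np

def frequency_bias(forecast, observed, threshold):
    """Bias = (hits + false alarms) / (hits + misses) at a given threshold."""
    f_event = forecast >= threshold
    o_event = observed >= threshold
    hits = np.sum(f_event & o_event)
    false_alarms = np.sum(f_event & ~o_event)
    misses = np.sum(~f_event & o_event)
    return (hits + false_alarms) / (hits + misses)

# Synthetic example: a forecast that paints too much area above 35 dBZ
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 10.0, size=(100, 100))        # stand-in reflectivity field (dBZ)
fcst = obs + rng.normal(5.0, 5.0, size=obs.shape)  # deliberately over-forecast field
print(f"Frequency bias at 35 dBZ: {frequency_bias(fcst, obs, 35.0):.2f}")
```

A bias greater than 1 means the forecast covers more area above the threshold than the observations, which is the signal described above.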
NASA ROSES Project: Developing methods for the evaluation of satellite-derived products
Satellite Cloud Evaluation Studies
• NCAR/JNT has started to evaluate the use of A-Train observations for NWP and, eventually, aviation product evaluation
• Currently focused on CloudSat-CALIPSO products
• A goal of the project is to create a common toolkit in the Model Evaluation Tools (MET) for integrating satellite observations that will provide meaningful comparisons with NWP model output
• Extend MET to include evaluations in the vertical plane
MET Overview
• MET is designed to evaluate a variety of datasets (e.g., rain gauge, radar, satellite, NWP products). The following statistical tools are available:
• Grid-to-point verification (e.g., compare NWP or satellite gridded products to rain gauge observations); see the sketch after this list
• Grid-to-grid verification (e.g., compare NWP or satellite products to radar products)
• Advanced spatial verification techniques
• Compare precipitation "features" in gridded fields
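Below is a minimal grid-to-point sketch, loosely mirroring the kind of matching MET performs: each point observation is paired with the nearest model grid cell, then continuous statistics are computed. The grid, gauge locations, and values are all synthetic assumptions for illustration.

```python
import numpy as np

def grid_to_point_stats(grid, grid_lats, grid_lons, obs):
    """obs: list of (lat, lon, value). Returns mean error and RMSE."""
    errors = []
    for lat, lon, value in obs:
        i = np.abs(grid_lats - lat).argmin()  # nearest grid row
        j = np.abs(grid_lons - lon).argmin()  # nearest grid column
        errors.append(grid[i, j] - value)
    errors = np.asarray(errors)
    return errors.mean(), np.sqrt((errors ** 2).mean())

# Synthetic model precipitation grid and three hypothetical gauges
lats = np.linspace(35.0, 45.0, 80)
lons = np.linspace(-110.0, -90.0, 160)
model_precip = np.random.default_rng(1).gamma(2.0, 1.5, size=(80, 160))
gauges = [(39.1, -105.2, 2.1), (41.5, -95.7, 0.4), (37.8, -102.3, 1.0)]
me, rmse = grid_to_point_stats(model_precip, lats, lons, gauges)
print(f"Mean error: {me:.2f}  RMSE: {rmse:.2f}")
```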
Advanced Spatial Methods
[Figure: 24-h precipitation forecast vs. precipitation analysis]
• Traditional statistics often are not able to account for spatial or temporal errors:
• Displacement in time and/or space
• Location
• Intensity
• Orientation
• Spatial techniques such as the Method for Object-based Diagnostic Evaluation (MODE) add value to the product evaluation; a toy sketch of the object-identification step follows
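The sketch below imitates only the first step of an object-based method (smooth, threshold, label connected regions) so that "features" become concrete. MODE itself applies a circular convolution filter and fuzzy-logic attribute matching; this version uses a simple SciPy uniform filter and is purely illustrative.

```python
import numpy as np
from scipy import ndimage

def find_objects(field, radius=5, threshold=25.0):
    smoothed = ndimage.uniform_filter(field, size=radius)  # convolution step
    mask = smoothed >= threshold                           # threshold step
    labels, n = ndimage.label(mask)                        # connected regions
    return labels, n

# Synthetic reflectivity-like field
field = np.random.default_rng(2).gamma(2.0, 8.0, size=(200, 200))
labels, n_objects = find_objects(field)
print(f"Identified {n_objects} objects")
```

Once objects are labeled in both forecast and observed fields, their attributes (location, size, intensity, orientation) can be compared directly, which is what gives the method its diagnostic value.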
Example A-Train and NWP Comparison
• We performed our comparison using the RUC (http://ruc.noaa.gov/) cloud-top height and derived reflectivity products
• Performed the comparison at different spatial resolutions (13 and 20 km) over the continental US
• Compared observed cloud top and vertical structure (reflectivity) with model-derived fields
Cloud Top Height Comparisons
• Identified all CloudSat profiles and model grids that contain cloud
• Identified all model grid boxes containing at least 10 CloudSat points (roughly half the number of points that could fall in a grid box)
• Performed traditional statistics using multiple matching methods (nearest neighbor, mean, distance-weighted mean); a sketch of the three methods follows
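This sketch shows one plausible reading of the three matching methods named above: reducing the CloudSat points that fall in a single model grid box to one matched value. The function, inputs, and weighting are our own illustration, not the project's actual code.

```python
import numpy as np

def match_profiles(cloud_tops, distances, method="nearest"):
    """cloud_tops: CloudSat cloud-top heights in one grid box (km);
    distances: distance of each point from the grid-box center (km)."""
    cloud_tops = np.asarray(cloud_tops, dtype=float)
    distances = np.asarray(distances, dtype=float)
    if method == "nearest":
        return cloud_tops[distances.argmin()]       # closest point wins
    if method == "mean":
        return cloud_tops.mean()                    # simple average
    if method == "weighted":
        w = 1.0 / np.maximum(distances, 1e-6)       # inverse-distance weights
        return np.sum(w * cloud_tops) / np.sum(w)
    raise ValueError(method)

tops = [9.2, 9.6, 10.1, 9.8]   # km, synthetic
dists = [4.0, 2.5, 0.8, 3.1]   # km from grid-box center, synthetic
for m in ("nearest", "mean", "weighted"):
    print(m, round(match_profiles(tops, dists, m), 2))
```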
Reflectivity Profile Comparisons
• Used RUC native-grid mixing ratios (cloud water, rain, ice, snow, and graupel) and converted the mixing ratios to an estimated reflectivity using the CSU QuickBeam tool (a crude stand-in conversion is sketched below)
• Retrieved a vertical plane in the model fields along the CloudSat path
• Performed a spatial comparison between the observed and model fields
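QuickBeam performs a full radar simulation (scattering calculations with assumed particle size distributions, attenuation, etc.). As a much cruder stand-in, the sketch below converts a rain mixing ratio to reflectivity through an assumed power-law Z-M relation; the coefficients are placeholders chosen for illustration, not QuickBeam's.

```python
import numpy as np

def mixing_ratio_to_dbz(q_rain, air_density, a=2.4e4, b=1.82):
    """q_rain: rain mixing ratio (kg/kg); air_density: kg/m^3.
    Placeholder Z = a * M**b relation, with M in g/m^3 and Z in mm^6 m^-3."""
    M = q_rain * air_density * 1e3             # rain water content, g/m^3
    Z = a * np.power(np.maximum(M, 1e-9), b)   # guard against log of zero
    return 10.0 * np.log10(Z)                  # convert to dBZ

# 1 g/kg of rain at unit air density -> a moderate-to-heavy-rain value
print(f"{mixing_ratio_to_dbz(1e-3, 1.0):.1f} dBZ")
```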
Cloud Top Height Comparisons
• Cloud height distributions along the track
• The forecast distribution of cloud top lies within the observed distribution
Cloud Top Height Comparisons
• Forecast mean and standard deviation with 95% confidence intervals (example shown: 0800 UTC)
• The evaluation is not sensitive to the weighting scheme; a sketch of the interval computation follows
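For reference, a small sketch of one standard way to form the 95% confidence interval on a mean, using the normal approximation; the sample of cloud-top differences below is synthetic.

```python
import numpy as np

def mean_ci(sample, z=1.96):
    """Normal-approximation 95% CI on the sample mean."""
    sample = np.asarray(sample, dtype=float)
    mean = sample.mean()
    half_width = z * sample.std(ddof=1) / np.sqrt(sample.size)
    return mean, (mean - half_width, mean + half_width)

diffs = np.random.default_rng(3).normal(0.4, 1.2, size=200)  # synthetic km
mean, (lo, hi) = mean_ci(diffs)
print(f"mean = {mean:.2f} km, 95% CI = [{lo:.2f}, {hi:.2f}]")
```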
Reflectivity Profile Comparisons
• We are developing the tools to spatially and temporally evaluate model fields in the vertical plane
• Match cloud objects along track, off track, and at earlier or later hours
• The challenge is to create representative comparisons:
• Resolution differences
• Not a direct field-to-field comparison
Spatial Comparison: Searching "Off Track" for the Best Match
• The most intense features are identified
• Searching "off track" found better matches, indicating spatial or temporal errors in the forecast
• However, the coarse model resolution makes matching objects difficult (a sketch of the search follows)
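One simple way to picture the off-track search: extract model cross sections at several offsets from the CloudSat ground track and keep the one that best matches the observed curtain. The offsets, scoring metric (RMSE), and data below are our own illustration of the idea.

```python
import numpy as np

def best_offset(obs_curtain, model_curtains):
    """model_curtains: dict mapping off-track offset (km) -> 2-D field."""
    scores = {off: np.sqrt(np.nanmean((fld - obs_curtain) ** 2))
              for off, fld in model_curtains.items()}
    best = min(scores, key=scores.get)  # offset with smallest RMSE
    return best, scores[best]

# Synthetic observed curtain (height x along-track) and shifted model curtains
rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 5.0, size=(50, 300))
curtains = {off: obs + rng.normal(off, 3.0, size=obs.shape)
            for off in (-26, -13, 0, 13, 26)}  # km, roughly grid spacings
off, rmse = best_offset(obs, curtains)
print(f"Best match at {off} km off track (RMSE {rmse:.2f})")
```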
Future Work
• Future updates to MET:
• Complete code to read A-Train products into MET
• Apply and test object-based methods in the vertical plane in MET
• Improve methods for verifying cloud and precipitation properties (e.g., how to compare different resolutions and model parameters)
• Implement a display tool for visualization of satellite and model products within MET
Future Work
• Future A-Train comparisons:
• Cloud base, cloud type, cloud water content, ice water content, etc.
• Evaluation of other weather features:
• Tropical storms, multilayer clouds, clouds over complex terrain, etc.
• We would like to extend the tool to other satellite datasets (e.g., TRMM) and to other model products