
Data Assimilation System – OSSE Experiments

Ensemble Data Assimilation and Model Validation Efforts Using Cloud and Water Vapor Sensitive Infrared Brightness Temperatures Jason A. Otkin University of Wisconsin-Madison/CIMSS 10 March 2015


Presentation Transcript


  1. Ensemble Data Assimilation and Model Validation Efforts Using Cloud and Water Vapor Sensitive Infrared Brightness Temperatures
  Jason A. Otkin, University of Wisconsin-Madison/CIMSS, 10 March 2015
  Funding Sources: NOAA GOES-R Risk Reduction, NOAA GOES-R Algorithm Working Group, NOAA Office of Atmospheric Research, Joint Center for Satellite Data Assimilation (JCSDA), and NOAA Hurricane Forecast Improvement Project (HFIP)

  2. Data Assimilation System – OSSE Experiments
  • Infrared brightness temperature assimilation was examined using a regional-scale Observing System Simulation Experiment (OSSE) approach, focusing on:
  • The relative impact of clear- and cloudy-sky observations
  • The horizontal covariance localization radius employed during the assimilation step
  • The impact of water-vapor-sensitive infrared bands on precipitation forecasts during a high-impact weather event
  • The impact of observations in convective-scale data assimilation
  • Assimilation experiments were performed using the WRF model and the EnKF algorithm in the DART data assimilation system
  • The Successive Order of Interaction (SOI) forward radiative transfer model was implemented within the DART framework
  • Simulated fields used by the forward model include T, qv, Tskin, 10-m wind speed, and the mixing ratios and effective diameters of five hydrometeor species (cloud water, rain water, ice, snow, and graupel)
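
The heart of an EnKF system like the one described above is the ensemble update for each observation. This is not the DART implementation; it is a minimal serial (one-observation-at-a-time) ensemble square-root sketch in NumPy, assuming a scalar brightness-temperature observation, the Whitaker-Hamill EnSRF perturbation scaling, and no covariance localization.

```python
import numpy as np

def enkf_serial_update(ens, hx, y_obs, obs_err_var):
    """Serial ensemble square-root update for one scalar observation.

    ens:         (n_members, n_state) ensemble of model state vectors
    hx:          (n_members,) simulated observations, e.g. brightness
                 temperatures from a forward radiative transfer model
    y_obs:       observed value (e.g. an ABI brightness temperature, K)
    obs_err_var: observation-error variance (K^2 for Tb)

    Sketch only: real systems add covariance localization and inflation.
    """
    n = ens.shape[0]
    x_mean = ens.mean(axis=0)
    hx_mean = hx.mean()
    x_pert = ens - x_mean                      # state perturbations
    hx_pert = hx - hx_mean                     # obs-space perturbations
    # Ensemble estimates of HPH^T (scalar) and PH^T (n_state,)
    hph = float(hx_pert @ hx_pert) / (n - 1)
    pht = x_pert.T @ hx_pert / (n - 1)
    gain = pht / (hph + obs_err_var)           # Kalman gain
    # Whitaker-Hamill EnSRF factor: shrinks perturbations deterministically
    alpha = 1.0 / (1.0 + np.sqrt(obs_err_var / (hph + obs_err_var)))
    # Shift the mean toward the observation; damp the perturbations
    return x_mean + gain * (y_obs - hx_mean) + x_pert - alpha * np.outer(hx_pert, gain)
```

The mean moves toward the observation by the Kalman gain times the innovation, while the perturbation damping keeps the analysis spread consistent with the theoretical posterior variance.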

  3. Clear vs Cloudy Observation Impact – OSSE Configuration
  • Observations assimilated during each experiment:
  • B11-ALL – both clear- and cloudy-sky ABI 8.5 μm (band 11) Tb
  • B11-CLEAR – clear-sky-only ABI 8.5 μm Tb
  • CONV – conventional observations only
  • CONV-B11 – both conventional observations and ABI 8.5 μm Tb
  • Control – no observations assimilated
  • Assimilation experiments were performed using a 40-member ensemble with 12-km horizontal resolution and 37 vertical levels
  • Observations were assimilated once per hour during a 12-hr period
  • Otkin, J. A., 2010: Clear and cloudy-sky infrared brightness temperature assimilation using an ensemble Kalman filter. J. Geophys. Res., 115, D19207, doi:10.1029/2009JD013759.

  4. Ensemble-Mean ABI 11.2 μm Brightness Temperatures
  [Figure: images valid after the first data assimilation cycle at 12 UTC; panels – Truth, B11-ALL, B11-CLEAR, Control, CONV, CONV-B11]
  • Compared to the conventional-only case, the assimilation of 8.5 μm brightness temperatures had a larger and more immediate impact on the erroneous cloud cover across the southern portion of the domain and also improved the structure of the cloud shield further north

  5. Ensemble-Mean ABI 11.2 μm Brightness Temperatures
  [Figure: images valid after the final data assimilation cycle at 00 UTC; panels – Truth, B11-ALL, B11-CLEAR, Control, CONV, CONV-B11]
  • By the end of the assimilation period, the most accurate analysis was achieved when both conventional observations and 8.5 μm Tb were assimilated
  • Comparison of the CONV and B11-ALL images shows that the 8.5 μm Tb had a larger impact than the conventional observations

  6. Horizontal Localization Radius Tests – OSSE Configuration
  • Four assimilation experiments were performed:
  • Control – conventional observations only
  • HLOC-100KM – conventional + ABI 8.5 μm Tb (100 km loc. radius)
  • HLOC-200KM – conventional + ABI 8.5 μm Tb (200 km loc. radius)
  • HLOC-300KM – conventional + ABI 8.5 μm Tb (300 km loc. radius)
  • Assimilation experiments were performed using an 80-member ensemble with 18-km horizontal resolution and 37 vertical levels
  • Observations were assimilated once per hour during a 12-hr period
  • Both clear- and cloudy-sky ABI 8.5 μm brightness temperatures were assimilated
  • Otkin, J. A., 2012: Assessing the impact of the covariance localization radius when assimilating infrared brightness temperature observations using an ensemble Kalman filter. Mon. Wea. Rev., 140, 543-561.
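
The localization radius in these experiments controls how ensemble covariances are tapered to zero with distance from each observation. A taper function commonly used in EnKF systems, including DART, is the Gaspari-Cohn fifth-order piecewise polynomial; the sketch below assumes the convention where `loc_radius` is the half-width, so weights vanish at twice that distance (conventions vary between systems).

```python
import numpy as np

def gaspari_cohn(dist, loc_radius):
    """Gaspari-Cohn (1999) 5th-order compactly supported correlation function.

    dist:       distances between an observation and model grid points (km)
    loc_radius: localization half-width (km); weights reach exactly zero
                at 2 * loc_radius under this convention
    """
    r = np.abs(np.asarray(dist, dtype=float)) / loc_radius
    w = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r < 2.0)
    rn = r[near]
    # -1/4 r^5 + 1/2 r^4 + 5/8 r^3 - 5/3 r^2 + 1   for 0 <= r <= 1
    w[near] = (((-0.25 * rn + 0.5) * rn + 0.625) * rn - 5.0 / 3.0) * rn**2 + 1.0
    rf = r[far]
    # 1/12 r^5 - 1/2 r^4 + 5/8 r^3 + 5/3 r^2 - 5r + 4 - 2/(3r)  for 1 < r < 2
    w[far] = (((((rf / 12.0 - 0.5) * rf + 0.625) * rf + 5.0 / 3.0) * rf - 5.0) * rf
              + 4.0 - 2.0 / (3.0 * rf))
    return w
```

Multiplying the ensemble covariance between an observation and a grid point by this weight suppresses spurious long-range correlations, which is why the choice of radius matters for clear versus cloudy observations.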

  7. Cloud Water Path Error Time Series
  [Figure: RMSE and bias time series for clear and cloudy grid points]
  • Performance differed for the clear and cloudy grid points
  • A larger localization radius was generally better for clear grid points but worsened the analysis in cloudy regions
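
The clear/cloudy stratification behind these time series can be sketched as follows; `masked_error_stats` is a hypothetical helper written for illustration, not code from the study.

```python
import numpy as np

def masked_error_stats(analysis, truth, cloud_mask):
    """Bias and RMSE of an analysis field, split into clear and cloudy points.

    analysis, truth: 2-D fields (e.g. cloud water path)
    cloud_mask:      boolean array, True where the truth field is cloudy
    """
    err = analysis - truth
    stats = {}
    for name, mask in (("clear", ~cloud_mask), ("cloudy", cloud_mask)):
        e = err[mask]
        stats[name] = {
            "bias": float(e.mean()),                 # mean error
            "rmse": float(np.sqrt((e**2).mean())),   # root mean square error
        }
    return stats
```

Evaluating each assimilation cycle this way produces exactly the kind of paired bias/RMSE time series shown on the slide.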

  8. Cloud Errors After Last Assimilation Cycle
  • Total cloud condensate (QALL) errors over the entire model domain after the last assimilation cycle
  • Similar errors occurred for the clear-sky grid points
  • Errors consistently decreased with decreasing localization radius for the cloudy grid points
  • This suggests that different localization radii should be used for clear and cloudy observations

  9. Thermodynamic Errors After Last Assimilation Cycle
  • Thermodynamic and moisture errors after the last assimilation cycle
  • Greater degradation tended to occur when a larger localization radius was used
  • These results show that a smaller radius is necessary to maintain accuracy relative to the Control case

  10. Short-Range Forecast Impact
  • Overall, the initially large positive impact of the infrared observations decreased rapidly with time
  • The results show that without improvements in the thermodynamic and moisture fields, it is difficult to preserve the initial improvements in the cloud field

  11. Impact of ABI Water Vapor Bands – OSSE Configuration
  • A regional-scale OSSE was used to evaluate the impact of the water-vapor-sensitive ABI bands on analysis and forecast accuracy during a high-impact weather event
  • Five assimilation experiments were performed:
  • Control – conventional observations only
  • Band-08 – conventional + ABI 6.19 μm Tb (upper-level WV)
  • Band-09 – conventional + ABI 6.95 μm Tb (mid-level WV)
  • Band-10 – conventional + ABI 7.34 μm Tb (lower-level WV)
  • Band-11 – conventional + ABI 8.5 μm Tb (window)
  • Assimilation experiments were performed using a 60-member ensemble with 15-km horizontal resolution and 37 vertical levels
  • Observations were assimilated every 30 minutes during a 6-hr period
  • Otkin, J. A., 2012: Assimilation of water vapor sensitive infrared brightness temperature observations during a high impact weather event. J. Geophys. Res., 117, D19203, doi:10.1029/2012JD017568.

  12. Impact of ABI Water Vapor Bands – Assimilation Period
  [Figure: PWAT analysis errors and cloud analysis errors]
  • Large improvements were made to the water vapor and cloud analyses after each assimilation cycle regardless of which band was assimilated
  • The smallest errors occurred when brightness temperatures from lower-peaking channels were assimilated
  • Each of the water vapor band assimilation cases had smaller cloud errors than the Control and window band 11 cases

  13. 6-hr Accumulated Precipitation Forecasts • Precipitation forecasts were more accurate during the brightness temperature assimilation cases.

  14. Brightness Temperature Assimilation at Convective Scales
  • A regional-scale OSSE was used to evaluate the potential impact of assimilating clear- and cloudy-sky ABI brightness temperatures and Doppler radar observations at convection-resolving resolutions
  • Synthetic satellite and radar observations were created using output from a 2-km resolution truth simulation of a severe thunderstorm event across Kansas and Missouri on 04 June 2005
  • Assimilation experiments were performed using a 50-member ensemble with 4-km horizontal resolution and 52 vertical levels
  • Observations were assimilated every 5 minutes during a 2-hr period
  • The impact of various assimilation settings on the results is being tested (observation coarsening, covariance localization, observation errors, inflation, and additive noise)
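
One simple form of the observation coarsening mentioned above is block-averaging dense satellite pixels onto a coarser assimilation grid. The slide does not specify which thinning method was tested, so this is only an illustrative sketch.

```python
import numpy as np

def coarsen_obs(field, factor):
    """Coarsen a dense 2-D observation field by block averaging.

    field:  2-D array of observations, e.g. 2-km ABI brightness temperatures
    factor: integer coarsening factor; factor x factor pixel blocks are
            averaged into one super-observation (edges that do not fill a
            complete block are trimmed)
    """
    ny, nx = field.shape
    ny2, nx2 = ny // factor, nx // factor
    trimmed = field[: ny2 * factor, : nx2 * factor]
    # Reshape into (coarse_y, block_y, coarse_x, block_x) and average blocks
    return trimmed.reshape(ny2, factor, nx2, factor).mean(axis=(1, 3))
```

Super-obbing like this reduces both the observation count and the correlated representativeness error of neighboring pixels.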

  15. Simulated ABI Band 8 (6.19 μm) at End of Assimilation Period
  [Figure: panels – Truth, No Assim, Band 8 Only, Radar Only]
  • The cloud field associated with the thunderstorms is much more accurate when ABI band 8 observations are assimilated
  • The satellite observations introduce a cold bias (too much water vapor) across the western part of the domain
  • The radar observations did not have as large a positive impact on the location and structure of the thunderstorms

  16. Simulated Composite Reflectivity at End of Assimilation Period
  [Figure: panels – Truth, No Assim, Band 8 Only, Radar Only]
  • The radar depiction is most accurate in the ABI band 8 assimilation case
  • The infrared observations have a larger positive impact because they provide information about the water vapor and cloud fields in regions outside of the thunderstorms
  • These results demonstrate the potential utility of GOES-R ABI observations in high-resolution modeling systems

  17. Real SEVIRI Infrared Brightness Temperature Assimilation
  • Regional-scale assimilation experiments were performed using the COSMO model and the KENDA ensemble data assimilation system under development at the German weather service, Deutscher Wetterdienst (DWD)
  • Clear- and cloudy-sky infrared brightness temperatures from the MSG SEVIRI sensor were assimilated on the COSMO-DE domain, which covers central Europe with 2.8-km resolution
  • A cloud-dependent bias correction scheme was developed that removes the bias based on the cloud top height
  • Bias statistics were computed separately for clear-sky grid points and for low (CTH < 2 km), mid-level (2 km < CTH < 6 km), and upper-level (CTH > 6 km) clouds
  • Statistics were only computed when the observed and simulated cloud top heights were within 600 m of each other
  • O-B (observation-minus-background) errors were evaluated during a 3.5-day period
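
The cloud-dependent bias statistics described above might be accumulated along these lines. This is a hypothetical sketch: the function name and the NaN-for-clear convention are assumptions, but the cloud classes and the 600 m matching criterion follow the slide.

```python
import numpy as np

def bias_by_cloud_class(omb, cth, cth_obs):
    """Mean O-B departure per cloud regime: clear, low (CTH < 2 km),
    mid (2-6 km), and high (> 6 km) clouds.

    omb:     observation-minus-background Tb departures (K)
    cth:     simulated cloud-top height (km), NaN where clear
    cth_obs: observed cloud-top height (km), NaN where clear
    Cloudy statistics are only accumulated where observed and simulated
    cloud-top heights agree to within 0.6 km, as on the slide.
    """
    both_clear = np.isnan(cth) & np.isnan(cth_obs)
    matched = (~np.isnan(cth) & ~np.isnan(cth_obs)
               & (np.abs(cth - cth_obs) < 0.6))
    out = {}
    for name, sel in (
        ("clear", both_clear),
        ("low", matched & (cth < 2.0)),
        ("mid", matched & (cth >= 2.0) & (cth < 6.0)),
        ("high", matched & (cth >= 6.0)),
    ):
        out[name] = float(np.mean(omb[sel])) if sel.any() else np.nan
    return out
```

The per-class means produced this way are exactly the regime-dependent bias values that the correction scheme later subtracts.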

  18. Bias Monitoring of SEVIRI Observations
  [Figure: mean Tb bias and RMSE time series]
  • Statistics are shown for clear (black), low (red), mid-level (blue), and high clouds (green)
  • Errors vary with cloud regime
  • High clouds have the largest errors
  • The largest errors occur when thin cirrus is present
  • A cloud-dependent bias correction is needed

  19. Bias Monitoring of SEVIRI Observations – Last Analysis Time
  • Simulated and observed satellite imagery at the end of the passive monitoring period (12 UTC on 04 June 2011)
  • A dry moisture bias is evident in the simulated water vapor bands (top two rows)
  • Simulated upper-level cloud cover is under-predicted (bottom row)

  20. Real SEVIRI Infrared Brightness Temperature Assimilation
  • Two assimilation experiments were performed:
  • Control – conventional observations only
  • SEVIRI – conventional + SEVIRI 7.3 μm Tb (mid-level WV)
  • Clear- and cloudy-sky brightness temperatures were assimilated once per hour during a 12-hour assimilation period with a 40-member ensemble
  • Bias correction values were set to -2.9 K for clear grid points and low-level clouds, -3.1 K for mid-level clouds, and -4.4 K for high clouds
  • The corrections were applied to all observations, not just those with matching cloud top heights
  • Bias corrections were applied to the individual ensemble members rather than to the observations to account for differences in the cloud field in each ensemble member
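
Applying the correction per ensemble member, so that each member is corrected according to its own cloud field, can be sketched as below. This is an illustrative assumption-laden sketch: the sign convention (adding the listed negative values to each member's simulated Tb) and the NaN-for-clear encoding are assumptions, not taken from the study.

```python
import numpy as np

def apply_cloud_bias_correction(hx_members, cth_members):
    """Apply a cloud-dependent bias correction to each ensemble member's
    simulated brightness temperatures (rather than to the observations).

    hx_members:  (n_members, n_obs) simulated SEVIRI Tb (K)
    cth_members: (n_members, n_obs) each member's cloud-top height (km),
                 NaN where that member is clear
    Correction values follow the slide: -2.9 K for clear points and
    low clouds, -3.1 K for mid-level clouds, -4.4 K for high clouds.
    """
    corr = np.full_like(hx_members, -2.9)       # clear / low-cloud default
    corr[(cth_members >= 2.0) & (cth_members < 6.0)] = -3.1   # mid-level
    corr[cth_members >= 6.0] = -4.4                            # high cloud
    # Assumed convention: shift each member's simulated Tb by its own
    # regime-dependent correction before computing departures
    return hx_members + corr
```

Because the correction is evaluated against each member's own cloud-top height, two members with different cloud fields at the same observation location receive different corrections, which is the point of the per-member approach.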

  21. Assimilation Results with Cloud-Dependent Bias Correction
  • Error time series for the Control (black line) and SEVIRI (red line) assimilation cases
  • Statistics are shown for cloud-matched grid points
  • Errors are reduced when SEVIRI 7.3 μm observations are assimilated using the cloud-dependent bias correction scheme

  22. Analysis Accuracy After Last Assimilation Time
  • Large improvements were made to the relative humidity field, especially in the middle and upper troposphere where the observations are most sensitive
  • Smaller ensemble spread and root mean square error occur when infrared observations are assimilated
  • Neutral impact on the temperature and wind fields
  • These results are similar to those found during the OSSE studies

  23. Brightness Temperature Error Analysis in GFS/CRTM
  • JCSDA-funded project to evaluate the bias error statistics for cloud- and precipitation-affected brightness temperatures in the GFS
  • Microwave (AMSU-A, etc.) and infrared (CrIS, IASI, etc.) sensors
  • Will promote efforts to assimilate cloudy infrared data and precipitation-affected microwave observations
  • Error characteristics are assessed as a function of cloud top height for infrared observations and cloud water path for microwave observations
  • The analysis is based on a 2-month period of passively monitored cloud-affected observations from 01 April – 31 May 2014
  • GFS T670 and CRTM version 2.1 were run on the S4 supercomputer

  24. Brightness Temperature Error Analysis in GFS/CRTM
  [Figure: AMSU-A N19, April-May 2014; panels – before bias correction, and after bias correction using cloud liquid water as a predictor]
  • RMSE brightness temperature departures (Observation - Background) as a function of AMSU-A estimated cloud liquid water
  • Cloud-dependent errors are present in each channel
  • Bias correction greatly reduces the errors for some channels, though errors still increase with increasing cloud liquid water content
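
Binning O-B departures by retrieved cloud liquid water, as in the AMSU-A analysis above, can be sketched as follows; the helper is hypothetical and only illustrates the binning idea.

```python
import numpy as np

def rmse_by_clw_bin(omb, clw, bin_edges):
    """RMSE of O-B brightness-temperature departures per cloud-liquid-water bin.

    omb:       observation-minus-background departures (K)
    clw:       estimated cloud liquid water for each observation (kg m^-2)
    bin_edges: monotonically increasing bin boundaries; values below the
               first edge fall in bin 0, above the last in the final bin
    """
    idx = np.digitize(clw, bin_edges)
    rmse = np.full(len(bin_edges) + 1, np.nan)
    for b in range(len(bin_edges) + 1):
        sel = idx == b
        if sel.any():
            rmse[b] = np.sqrt(np.mean(omb[sel] ** 2))
    return rmse
```

Plotting the resulting array against the bin centers reproduces the kind of error-versus-cloud-amount curves shown on the slide.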

  25. Brightness Temperature Error Analysis in GFS/CRTM
  [Figure: O-B bias histogram and RMSE for the CrIS 13.96 μm channel]
  • The histogram in the left panel shows that bias errors greatly increase as the cloud top pressure moves from the lower to the upper troposphere for the CrIS 13.96 μm channel
  • The right panel shows that RMSE values are strongly dependent on the cloud top pressure and are much higher for upper-level clouds

  26. GOES-Based Verification System for the HRRR Model
  • NOAA GOES-R3 project to develop a GOES-based verification system for the HRRR model
  • The goal is to provide forecasters with objective measures of the accuracy of current and prior HRRR model forecasts
  • With many overlapping forecast cycles, the most accurate forecast may not be the most recent one
  • The accuracy of model forecasts is ranked through comparison of observed and simulated brightness temperatures
  • Statistics are computed for different regions across the U.S.
  • The system will initially be demonstrated at the 2015 HWT and AWT
  • http://cimss.ssec.wisc.edu/hrrrval/
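
The ranking idea described above can be sketched in a few lines: score each overlapping forecast cycle by the RMSE between observed and simulated brightness temperatures over a region, then sort. The interface is hypothetical; the operational system computes additional statistics.

```python
import numpy as np

def rank_forecast_cycles(obs_tb, simulated_tb_by_cycle):
    """Rank overlapping forecast cycles, best first, by Tb RMSE.

    obs_tb:                2-D observed GOES brightness temperature field
                           valid at one verification time (K)
    simulated_tb_by_cycle: dict mapping a cycle label (e.g. "00z") to the
                           simulated Tb field from that cycle, valid at the
                           same time and on the same grid
    """
    def rmse(sim):
        return float(np.sqrt(np.mean((sim - obs_tb) ** 2)))

    # Lowest RMSE first: the "best" cycle need not be the most recent one
    return sorted(simulated_tb_by_cycle,
                  key=lambda c: rmse(simulated_tb_by_cycle[c]))
```

Repeating this per region and per band gives forecasters an objective basis for choosing among overlapping HRRR cycles.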

  27. GOES-Based Verification System for the HRRR Model • Sample screen shot of project homepage

  28. GOES-Based Verification System for the HRRR Model • Screen shot of model comparison page

  29. HWRF Model Validation Using Satellite Observations
  • NOAA HFIP-funded project using simulated satellite brightness temperatures to evaluate the accuracy of the operational HWRF model
  • Uses the 2014 operational HWRF and Unified Post Processor (UPP) codes maintained by the Developmental Testbed Center (DTC)
  • Simulated microwave and infrared brightness temperatures are used to assess storm structure and intensity estimates via the satellite-derived Advanced Dvorak Technique (ADT) and ARCHER methods
  • The accuracy of the simulated cloud and moisture fields is also examined
  • The portion of the UPP code that computes the effective particle diameters for each cloud species used by the CRTM was updated
  • An error was found in the rain diameter calculation for the Ferrier scheme – it was set to a constant value of 200 microns when it should have been variable

  30. Simulated TMI 37 GHz Brightness Temperature Comparison
  [Figure: panels – Ferrier constant effr_rain, Ferrier variable effr_rain, and their variable-minus-constant difference]
  • There was an error in the original rain diameter (constant instead of variable)
  • Large differences appear when the correct diameter is computed:
  • 5-15 K colder in deep convection
  • 5-15 K warmer in the outflow region

  31. Simulated TMI 85 GHz Brightness Temperature Comparison
  [Figure: panels – Ferrier constant effr_rain, Ferrier variable effr_rain, and their variable-minus-constant difference]
  • Variable diameters lead to colder brightness temperatures due to increased scattering
  • Errors in the simulated microwave brightness temperatures could then lead to errors in intensity estimates

  32. Acknowledgments
  • Convective Scale Data Assimilation – Becky Cintineo, Thomas Jones, Steve Koch, and Dave Stensrud
  • SEVIRI Assimilation – Africa Perianez, Annika Schomburg, Robin Faulwetter, Hendrik Reich, Christoph Schraff, and Roland Potthast
  • CRTM Validation – Sharon Nebuda, Tom Auligne, and Ralf Bennartz
  • HRRR Model Verification – Justin Sieglaff, Sarah Griffin, Lee Cronce, and Curtis Alexander
  • HWRF Model Validation – Will Lewis, Allen Lenzen, Brian McNoldy, Sharan Majumdar, Sam Trahan, Ligia Bernardet, Christina Holt, and Kate Fossel
