Introduction to the OSSE program at the National Centers for Environmental Prediction (NCEP), discussing motivation, basic concepts, system components, testing, and preliminary results. Learn how OSSEs provide valuable insights into future observing systems, data assimilation, and impacts on forecasting.
Global OSSEs at NCEP
Michiko Masutani
NOAA/NWS/NCEP/EMC
http://www.emc.noaa.gov/research/osse
Co-Authors and Contributors
Michiko Masutani*1#, Stephen J. Lord1, John S. Woollen1+, Weiyu Yang1+, Haibing Sun2%, Thomas J. Kleespies2, G. David Emmitt3, Sidney A. Wood3, Bert Katz1+, Russ Treadon1, John C. Derber1, Steven Greco3, Joseph Terry4
1 NOAA/NWS/NCEP/EMC, Camp Springs, MD; 2 NOAA/NESDIS, Camp Springs, MD; 3 Simpson Weather Associates, Charlottesville, VA; 4 NASA/GSFC, Greenbelt, MD
# RS Information Systems; + Science Applications International Corporation; % QSS Group, Inc.
* Corresponding author address: Michiko Masutani, 5200 Auth Road, Rm 207, Camp Springs, MD 20746; e-mail: michiko.masutani@noaa.gov
Overview • Introduction to OSSEs • Motivation • Basic Concepts • The NCEP OSSE System • Components • Testing • Preliminary NCEP results for Doppler Wind Lidar (DWL) • Planned upgrades • Conclusions
Introduction to OSSEs: Motivation • Costs of developing, maintaining & using observing systems often exceed $100M per instrument • To date, there have been no quantitatively based decisions on the design & implementation of future observing systems • Significant time lags exist between instrument deployment and eventual operational NWP use
Introduction to OSSEs: Motivation (cont.) • Doing OSSEs will • Enable data formatting and handling in advance of the “live” instrument • Exercise preliminary quality control algorithms, provided simulated obs have realistic properties • OSSEs can provide quantitative information on observing system impacts and data assimilation system components • Requirements definition and instrument design for new instruments • Evaluation of alternative mixes of current instruments • Data assimilation system diagnosis and improvement • Understanding and formulation of observational errors • There is no perfect solution, but more information can lead to better planning and decisions
Introduction to OSSEs: Basic Concepts [Schematic of the real/OSSE data assimilation cycle: Observations → Data Assimilation → Analysis → Forecast Model, with Verification. Real system: existing real observations, without truth. OSSE: new & existing observations, with the Nature Run as truth.]
Introduction to OSSEs: Basic Concepts (cont.) • Simulated observations should • Exhibit a system impact similar to that of real observations (any difference should be explainable and consistent) • Contain the same kinds of errors as real observations (e.g., representativeness: the Nature Run is truncated spectrally in space & time, whereas the real atmosphere is not) • Be produced by instrument models different from those used in the data assimilation system (e.g., radiances)
Current NCEP OSSE System Components • Nature Run (NR) • ECMWF Reanalysis Model (ERA-15) • Resolution T213/L31 (~60 km) • 06 UTC 5 Feb 1993 – 00 UTC 7 March 1993 • Climatologically representative period • Good agreement in synoptic behavior • Clouds need adjustment • Observations • Conventional (raob, surface, pireps, MDCRS (1993)) • HIRS, MSU NOAA-11, NOAA-12 • Cloud track winds • NCEP Global Data Assimilation System • 1999 version • T62/L28 (200 km horizontal resolution) • Radiance assimilation
Two OSSEs
NCEP: • Wintertime Nature Run (1 month) • NR by ECMWF model, T213 (~0.5 deg) • NCEP data assimilation at T62 (~2.5 deg) • 1993 data distribution • Simulate and assimilate satellite 1B radiances • Use line-of-sight winds for DWL • Evaluation and adjustment of clouds • Calibration performed • Effects of observational error tested • Scale decomposition of anomaly correlation skill
NASA/GLA: • Summertime Nature Run (4 months), including hurricanes • NR by NASA FVCCM model at 0.5 deg • NASA DAO GEOS-3 data assimilation at 1-1.5 deg • 1999 data distribution • Use interpolated temperature for radiance data • Use U and V winds for DWL data • Impact on hurricane forecasts • Evaluate cyclone tracks and jet-streak locations
Evaluation and adjustment of NR clouds NR clouds are evaluated and adjusted. Frequency distribution (in %) of ocean areas by low-level cloud cover, in twenty 5%-wide bands. Solid line: NR cloud cover without adjustment. Dashed line: with adjustment.
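The 5%-band frequency distribution used to compare NR cloud cover before and after adjustment can be sketched as follows; this is a minimal illustration, and the function name and inputs are hypothetical rather than taken from the NCEP code:

```python
import numpy as np

def cloud_cover_frequency(cloud_fraction, n_bands=20):
    """Frequency (in %) of cloud-cover values in 5%-wide bands.

    cloud_fraction: 1-D array of fractional cloud cover in [0, 1]
    (e.g., low-level cloud cover at ocean grid points).
    """
    counts, _ = np.histogram(cloud_fraction, bins=n_bands, range=(0.0, 1.0))
    return 100.0 * counts / cloud_fraction.size
```

Plotting this distribution for the unadjusted and adjusted NR cloud fields (solid vs. dashed lines in the figure) shows how the adjustment shifts the population of cloud-cover bands.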
Testing: OSSE Calibration • Compare real data sensitivity to sensitivity with simulated data • Relative order of impacts should be same for the same instruments • Magnitudes need not be the same but should be proportional • Quality control (rejection statistics) • Error characteristics (fits of background to Obs)
Calibration of Simulated vs. Real Data Impacts [Figure: 500 hPa height anomaly correlation for 72-hour forecasts; NH and SH panels.]
Difference between analyses with real and constant SST [Figure: time-longitude sections of 500 hPa height, averaged between 20S and 80S, Feb. 13 - Mar. 7. Panels: real with TOVS, simulated with TOVS, real w/o TOVS, simulated w/o TOVS.]
SST and Impact of TOVS An anomalously warm, localized SST appears in the SH Pacific in the real SST; in the simulation experiments a constant SST is used. Four analyses are performed: with real SST and with constant SST, each with and without TOVS. With TOVS data the difference is small in the mid troposphere, but without TOVS data large differences appear and propagate. The four experiments are repeated with simulated data, and this atmospheric response to SST is reproduced in the simulated experiments. The impact of TOVS is much stronger in the real atmosphere. With variable SST, TOVS radiances become much more important.
Observational Error Formulation: Surface & Upper Air [Schematic: observations simulated at the Nature Run surface (Nature Run, NCEP model) vs. observations simulated at the real surface.]
Impact of different surfaces and observational errors [Figure: (obs-anl) fits. Curves: (obs-anl) error with real sfc; random error with real sfc; perfect obs with real sfc; random error with NR sfc; real data.]
Observational Error Formulation: Surface & Upper Air (cont.) [Figure: height vs. time, data simulated with systematic representativeness error.] • With random error only: • Data rejection rate is too small (top) • Fit of obs to background is too small (bottom)
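The contrast between purely random errors and systematic representativeness errors can be sketched as below. This is a toy illustration, not the NCEP formulation: the function names are hypothetical, and the vertically correlated AR(1) component merely stands in for whatever correlated structure the real representativeness error has.

```python
import numpy as np

rng = np.random.default_rng(42)

def obs_with_random_error(truth, sigma):
    """Uncorrelated Gaussian error. As noted above, this alone makes the
    simulated obs too well behaved: rejection rates and obs-minus-background
    fits come out smaller than for real data."""
    return truth + rng.normal(0.0, sigma, size=truth.shape)

def obs_with_repr_error(truth, sigma, rho=0.8):
    """Add a vertically correlated AR(1) error component (correlation rho
    between adjacent levels) as a stand-in for systematic
    representativeness error."""
    e = np.empty_like(truth, dtype=float)
    e[0] = rng.normal(0.0, sigma)
    for k in range(1, truth.size):
        e[k] = rho * e[k - 1] + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma)
    return truth + e
```

The correlated errors project onto larger vertical scales, so they are harder for quality control to reject and inflate the obs-minus-background statistics toward realistic values.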
Preliminary NCEP results for Doppler Wind Lidar (DWL) Four notional instruments are tested: • All levels (Best-DWL): the ultimate DWL, providing full tropospheric LOS soundings, clouds permitting. • DWL-Upper: an instrument providing mid- and upper-tropospheric winds only, down to the levels of significant cloud coverage. • DWL-PBL: an instrument providing wind observations only from clouds and the PBL. • Non-Scan DWL: a non-scanning instrument providing full tropospheric LOS soundings, clouds permitting, along a single line parallel to the ground track. [Figure: number of DWL LOS winds, 2/12/93.]
Sampling strategies: non-scan, clustered sample, distributed sample. Note: to measure both U and V from a non-scanning DWL, two satellites, or DWL samples from two directions, are required.
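The note above is just linear algebra: one LOS measurement gives only the projection of the horizontal wind onto the viewing azimuth, so recovering (U, V) needs two independent azimuths. A minimal sketch (function name and the azimuth convention, LOS = u·sin(az) + v·cos(az) with azimuth measured from north, are illustrative assumptions):

```python
import numpy as np

def uv_from_two_los(los1, az1, los2, az2):
    """Recover (u, v) from two line-of-sight wind components observed
    along azimuths az1, az2 (radians, measured from north).

    Each LOS value is the projection u*sin(az) + v*cos(az); two distinct
    azimuths give a 2x2 linear system. The system is singular (winds
    unrecoverable) when the two azimuths are parallel or anti-parallel.
    """
    A = np.array([[np.sin(az1), np.cos(az1)],
                  [np.sin(az2), np.cos(az2)]])
    return np.linalg.solve(A, np.array([los1, los2]))
```

This is why the NCEP OSSE assimilates LOS winds directly, while the NASA/GLA OSSE converts to U and V: direct LOS assimilation avoids requiring the paired-azimuth geometry at every location.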
Anomaly correlation with the Nature Run (%). The skill is computed from 12-hourly forecasts from Feb 14 to Feb 28, 1993.
Doppler Wind Lidar (DWL) Impact Time-averaged anomaly correlations between forecast and NR for meridional wind (V) fields at 200 hPa and 850 hPa. Anomaly correlations are computed for zonal wavenumber 10-20 components. Differences from the anomaly correlation of the control run (conventional data only) are plotted against forecast hour.
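The two ingredients of this diagnostic, a zonal-wavenumber band-pass filter and a centered anomaly correlation against the NR, can be sketched as follows. This is an illustrative reconstruction, not the NCEP code; the function names and the use of a supplied climatology field are assumptions.

```python
import numpy as np

def bandpass_zonal(field, kmin=10, kmax=20):
    """Retain only zonal wavenumbers kmin..kmax via an FFT along the
    last (longitude) axis, zeroing all other coefficients."""
    F = np.fft.rfft(field, axis=-1)
    k = np.arange(F.shape[-1])
    F[..., (k < kmin) | (k > kmax)] = 0.0
    return np.fft.irfft(F, n=field.shape[-1], axis=-1)

def anomaly_correlation(fcst, verif, clim):
    """Centered anomaly correlation between a forecast and the verifying
    field (here, the Nature Run), with anomalies taken about clim."""
    a, b = fcst - clim, verif - clim
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```

Applying `bandpass_zonal` before `anomaly_correlation` isolates the synoptic-scale (wavenumber 10-20) skill plotted in the figure.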
Impact of DWL on a synoptic event • Data impact on the analysis at 00Z February 26, 1993 and the corresponding 48-hour forecasts at 00Z February 28, for 200 hPa meridional wind fields. • The two figures at top show the total NR fields; analyses and forecasts are presented as differences from the NR. • Green indicates smaller differences from the NR. • Analyses with: (a) conventional data only, (b) conventional data + TOVS 1B, (c) conventional data + Best DWL, (d) conventional data + TOVS 1B + Best DWL, (e) conventional data + non-scan DWL, (f) conventional data + TOVS 1B + non-scan DWL
Impact of DWL on a synoptic event [Figure: panels (a)-(f) as listed above.]
Effect of Observational Error on DWL Impact [Figure: percent improvement over the control forecast (without DWL) vs. forecast length; panels for the total field and for zonal waves 10-20.] • Open circles: RAOBs simulated with systematic representativeness error • Closed circles: RAOBs simulated with random error • Orange: Best DWL • Purple: Non-Scan DWL
Planned Upgrades • Advanced instruments (e.g. AIRS) and impact experiments at higher resolution require larger computing resources, a more realistic observation distribution and Nature Run, and advanced data assimilation • Computing • Current: 32-processor SGI O2000 • Future: share the former NCEP operational computer (IBM SP) until April 2004 (NPOESS funding) • Data Assimilation • Current: 1999 version of SSI and global model • Future: 2003 version of SSI and model • Advanced radiative transfer for hyperspectral sounders • Improved computational efficiency • Prognostic cloud water • Improved mountain drag • Observations and Nature Run • Current: 1993 obs distribution; 4-week T213 NR • Future: 2003-4 obs distribution; longer, higher-resolution NR
Conclusions • OSSE methodology has matured into a scientifically valid, though not perfect, approach for evaluating • Instrument design • Cost-capability tradeoffs • Observing system design • Impact of components • Data assimilation system behavior • The NCEP system has been developed • With attention to details of obs simulation, Nature Run, and calibration • DWL design is a test case • Can be applied to other NPOESS instruments (e.g. CrIS) • Can be used for observing system design (e.g. THORPEX) • Preliminary results for DWL show encouraging impact • Future: testing at higher resolution with an improved data assimilation system • More realistic observations (1993 -> 2004) and Nature Run