
Presentation Transcript


  1. RT5: Independent comprehensive evaluation of the ENSEMBLES simulation-prediction system against observations/analyses

  2. 1.7 Meuro (11%) Primary Objectives:
  • O5.a: Production of daily gridded datasets for surface climate variables (max/min temperature, precipitation and surface air pressure) covering the greater part of Europe, with a resolution high enough to capture extreme weather events and with attached information on data uncertainty;
  • O5.b: Identification and documentation of systematic errors in model simulations and in the representation of processes, and assessment of key climate variability phenomena and uncertainties in ESMs and RCMs;
  • O5.c: Assessment of the actual and potential seasonal-to-decadal quality of the different elements of the multi-model ensemble prediction system, using advanced methods to evaluate the different attributes of forecast quality (skill, resolution, reliability, etc.);
  • O5.d: Assessment of the amount of change in the occurrence of extremes in (gridded) observational and RCM data;
  • O5.e: Evaluation of impact models driven by downscaled reanalysis, gridded observations and probabilistic hindcasts over seasonal-to-decadal scales, through the use of application-specific verification datasets.

  3. RT5: Deliverables

  4. RT5: Milestones
  • M5.4: Selection of "best-performing" interpolation scheme for producing the daily gridded datasets (month 18).
  • M5.3: Early assessment of systematic errors in the ENSEMBLES models (month 18).
  • M5.2: Prototype of an automatic system for forecast quality assessment of seasonal-to-decadal hindcasts (month 18).
  • M5.1: Evaluation of ERA-40 precipitation extremes in the Alpine region completed (month 18).
  • D5.10: Workshop report on "Lessons learned from seasonal forecasting: health protection" (month 18).

  5. RT5: Issues
  • Coordination of analysis work with RT1-2
  • Data sets
  • Systematic error: mean and variability (dynamics)
  • Seasonal scale -- easy
  • Quick climate assessment
  • Are ensemble members really different? Are they outside the envelope?

  6. WP5.1: Production of daily gridded observational datasets
  • (KNMI, MeteoSwiss, Climatic Research Unit, Oxford University)
  • First 18 months: collection and evaluation of basic daily station data from various sources (see example on next slide); selection of the best-performing interpolation scheme.
  • Beyond: producing grids for surface climate variables covering Europe, with attached information on data uncertainty (available by month 36).
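The interpolation scheme itself is still to be selected (M5.4), so the following is only a minimal sketch of one candidate approach: simple inverse-distance weighting of daily station values onto a regular grid. The station coordinates, values, grid spacing and power exponent are hypothetical, and distances are crudely approximated in degrees.

```python
import numpy as np

def idw_grid(st_lon, st_lat, st_val, grid_lon, grid_lat, power=2.0):
    """Inverse-distance-weighted interpolation of daily station values onto
    a regular lon/lat grid (illustrative only; Euclidean distance in degrees)."""
    out = np.empty((grid_lat.size, grid_lon.size))
    for j, glat in enumerate(grid_lat):
        for i, glon in enumerate(grid_lon):
            d = np.hypot(st_lon - glon, st_lat - glat)
            d = np.maximum(d, 1e-6)            # avoid division by zero at a station
            w = 1.0 / d ** power
            out[j, i] = np.sum(w * st_val) / np.sum(w)
    return out

# Hypothetical example: three stations and a coarse 0.5-degree grid
st_lon = np.array([8.55, 9.19, 11.35])
st_lat = np.array([47.37, 45.47, 44.49])
st_tmax = np.array([24.1, 27.3, 28.0])         # daily Tmax in deg C
grid = idw_grid(st_lon, st_lat, st_tmax,
                np.arange(5.0, 15.0, 0.5), np.arange(43.0, 49.0, 0.5))
```

The power exponent is the usual tuning knob of such a scheme; the production grids would of course use whichever method M5.4 selects, with the uncertainty information attached as required by O5.a.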

  7. Example of input data evaluation: T-mean series, 1946-2003 (ECA dataset, http://eca.knmi.nl)

  8. RT5, WP5.2: Evaluation of processes and phenomena (INGV, CNRS-IPSL, MPI-MET, DMI, UREADMM)
  • Objectives: analyse the capability of the models to reproduce and predict the major modes of variation in the climate system; investigate the nature of the uncertainties due to cloud and radiation processes.
  • Model-data comparison, first 18 months: prepare tools and a preliminary report for systematic comparisons.

  9. 5.2.a) Tropics: ENSO, monsoon; intraseasonal variability.
  5.2.b) Extratropics: seasonal-to-decadal variability; Atlantic-Europe, THC, storm tracks.
  5.2.c) Global teleconnections: ENSO-global; monsoon-Mediterranean; effect of numerical aspects (resolution, ...); intraseasonal variation in tropical heating.
  5.2.d) Feedbacks and clouds: decadal variation of water vapour, clouds and radiation; moist/convective and dry/subsiding tropical regions; link with surface fluxes.
  5.2.f) Clouds and aerosols: parametrization schemes; analysis of tendency errors; nudged simulations using ERA-40.
  5.2.e) Synthesis: report of model systematic biases; overall assessment of the ENSEMBLES models.

  10. Example: ENSO-Indian Ocean
  • The 1976-1977 climate regime shift was accompanied by a remarkable change in the lead-lag relationships between Indian Ocean sea surface temperature (SST) and El Niño evolution.
  • This has implications for El Niño predictions (the SE Indian Ocean is now a precursor).
  • Do the models reproduce this? Why? What are the major processes involved? (From Terray)
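A lead-lag relationship of this kind is typically quantified by correlating an Indian Ocean SST index against a Niño index over a range of monthly lags, separately for the periods before and after 1976-1977. The sketch below illustrates only that basic calculation; the two series are synthetic stand-ins, not the observed or modelled indices.

```python
import numpy as np

def lead_lag_corr(x, y, max_lag=12):
    """Correlation of x with y for lags -max_lag..+max_lag (months).
    A positive lag means x leads y by that many months."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for lag in lags:
        if lag >= 0:
            corr.append(np.corrcoef(x[:n - lag], y[lag:])[0, 1])
        else:
            corr.append(np.corrcoef(x[-lag:], y[:lag])[0, 1])
    return lags, np.array(corr)

# Synthetic monthly indices: the 'Indian Ocean' series leads the 'Nino' series by ~6 months
rng = np.random.default_rng(0)
io_sst = rng.standard_normal(360)
nino = np.roll(io_sst, 6) + 0.5 * rng.standard_normal(360)
lags, r = lead_lag_corr(io_sst, nino)
print("maximum correlation at lag", lags[np.argmax(r)], "months")
```

Running the same calculation on pre- and post-shift segments of the real indices, and on each ENSEMBLES model, is the kind of comparison the slide calls for.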

  11. Example: Sensitivity of teleconnections [figure panels: Low Pass / Total / High Pass for T30, T106 and Obs]

  12. WP5.3 Description
  • WP5.3: Assessment of forecast quality.
  • Participants: ECMWF, MeteoSwiss, Met Office, CNRM, KNMI, IfM, Univ. Reading, IPSL, BMRC.
  • Objective: assessment of the actual and potential skill of the different ensemble forecast systems.
  • First 18 months: develop a prototype of an automatic verification system for seasonal-to-decadal probabilistic predictions (M5.2); formulation and verification of probabilistic rare-event predictions (D5.3, D5.4) and skill assessment of extra-tropical variability modes (D5.7) based on DEMETER data.
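For orientation, the forecast-quality attributes named in O5.c (skill, reliability, resolution) are commonly summarised with scores such as the Brier score and its skill relative to climatology, which is the kind of quantity a verification system computes for each hindcast event. A minimal sketch with made-up probabilities (not DEMETER output) follows.

```python
import numpy as np

def brier_score(prob, obs):
    """Brier score for probabilistic forecasts of a binary event (0 is perfect)."""
    return np.mean((prob - obs) ** 2)

def brier_skill_score(prob, obs):
    """Skill relative to a climatological forecast that always issues the observed base rate."""
    clim = np.full_like(prob, obs.mean())
    return 1.0 - brier_score(prob, obs) / brier_score(clim, obs)

# Made-up example: forecast probabilities of a binary seasonal event and outcomes
prob = np.array([0.8, 0.1, 0.6, 0.3, 0.9, 0.2, 0.7, 0.4])
obs = np.array([1, 0, 1, 0, 1, 0, 0, 1])
print("BS  =", round(brier_score(prob, obs), 3))
print("BSS =", round(brier_skill_score(prob, obs), 3))
```

A reliability-resolution decomposition, ROC areas and ranked probability scores over the full hindcast set would follow the same pattern.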

  13. WP5.3 Description • The prototype verification system will be based upon the KNMI Climate Explorer and the DEMETER verification system

  14. WP5.3 Description
  • Plan beyond month 18:
  • Implement the verification system to assess the forecast quality of the simulations carried out within RT1/RT2A.
  • Use the web-based automatic verification system to document the forecast quality of the predictions.
  • Liaise with RT1 to use forecast quality information for recommending the best method to estimate forecast uncertainty.
  • Extrapolate the skill/reliability information from the seasonal-to-decadal ensemble systems to the centennial ensemble systems.
  • Design methods to create probability predictions out of multi-model hindcasts, including verification and economic value assessment, especially from a risk-management decision-making perspective.
  • Liaise with RT6 to tailor prediction skill and value assessment to the end users.
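Economic value assessment in the risk-management sense is usually framed with the static cost-loss model: a user with cost/loss ratio C/L protects whenever the forecast probability exceeds C/L, and the resulting expense is compared with always/never protecting and with a perfect forecast. The sketch below uses hypothetical hit and false-alarm rates, not verified ENSEMBLES or DEMETER statistics.

```python
import numpy as np

def relative_value(hit_rate, false_alarm_rate, base_rate, cost_loss_ratio):
    """Relative economic value of a forecast system in the simple cost-loss
    model (1 = perfect forecast, 0 = climatological decision)."""
    a, s = cost_loss_ratio, base_rate
    e_forecast = a * (hit_rate * s + false_alarm_rate * (1 - s)) + (1 - hit_rate) * s
    e_climate = min(a, s)          # cheaper of 'always protect' / 'never protect'
    e_perfect = a * s              # protect only when the event occurs
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Hypothetical contingency-table rates for a seasonal warning
H, F, s = 0.7, 0.2, 0.3
for cl in (0.1, 0.3, 0.5):
    print(f"C/L = {cl:.1f}  value = {relative_value(H, F, s, cl):.2f}")
```

Sweeping C/L over its range gives the familiar value curve, which is how a user-oriented assessment would be presented to RT6.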

  15. WP5.4: Evaluation of extreme events • (KNMI, Univ. Reading, Climatic Research Unit, FTS-Stuttgart, IWS-Stuttgart, ETH Zürich, Nat. Observatory Athens) • Study of both observed and RCM data (all groups) • Spatial pooling to improve the probability of detecting trends (2 groups) • Reproduction of observed trends in heavy precipitation over the Alpine region by ERA-40 driven RCMs (1 group) • Use of an objective classification of circulation types causing extreme events (2 groups)
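As a rough illustration of the trend-detection step (before any spatial pooling), the sketch below counts annual exceedances of a high wet-day percentile in a daily precipitation series and fits a least-squares trend to the counts. The record is synthetic, and the 1 mm wet-day threshold and 95th percentile are assumptions, not the indices actually adopted in WP5.4.

```python
import numpy as np

def extreme_count_trend(daily_precip, years, pct=95.0):
    """Annual counts of wet days above the pct-th percentile of the whole
    record, plus a least-squares trend in counts per year."""
    wet = daily_precip[daily_precip >= 1.0]          # 1 mm wet-day threshold (assumed)
    threshold = np.percentile(wet, pct)
    uniq = np.unique(years)
    counts = np.array([(daily_precip[years == y] > threshold).sum() for y in uniq])
    slope, intercept = np.polyfit(uniq, counts, 1)
    return uniq, counts, slope

# Synthetic 40-year daily record standing in for one station or grid point
rng = np.random.default_rng(1)
years = np.repeat(np.arange(1961, 2001), 365)
precip = rng.gamma(shape=0.4, scale=6.0, size=years.size)
_, counts, slope = extreme_count_trend(precip, years)
print(f"trend: {slope:+.3f} exceedances per year")
```

Spatial pooling would then combine such station- or grid-point counts before testing significance, which is what raises the probability of detecting a trend.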

  16. [Figure: critical wet circulation patterns (CP 04, CP 11) classified based on discharge of the Moselle catchment; frequency of occurrence of the critical CPs and their contributions to mean and extreme (> 90%) precipitation in winter]
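The statistics in this figure can be reproduced from a daily circulation-pattern (CP) classification and a precipitation series: the frequency of the critical CPs and their share of mean and of extreme (> 90th percentile) precipitation. The sketch below uses synthetic labels and amounts, with CP 04 and CP 11 assumed to be the critical patterns as in the caption.

```python
import numpy as np

def cp_contribution(precip, cp, critical=(4, 11), pct=90.0):
    """Frequency of 'critical' CPs and their share of total and of
    extreme (> pct-th percentile) daily precipitation."""
    is_crit = np.isin(cp, critical)
    is_extreme = precip > np.percentile(precip, pct)
    freq = is_crit.mean()
    share_mean = precip[is_crit].sum() / precip.sum()
    share_extreme = precip[is_crit & is_extreme].sum() / precip[is_extreme].sum()
    return freq, share_mean, share_extreme

# Synthetic winter days: random CP labels 1..12 and gamma-distributed precipitation
rng = np.random.default_rng(2)
cp = rng.integers(1, 13, size=900)
precip = rng.gamma(0.5, 5.0, size=900)
print(cp_contribution(precip, cp))
```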

  17. WP5.5: Evaluation of seasonal-to-decadal scale impact models forced with downscaled ERA-40, hindcasts and gridded observational datasets
  • UNILIV (Morse), WHO (Menne), UREADMM (Slingo), ARPA-SIM (Marletto), JRC-IPSC (Genovese), MeteoSwiss (Appenzeller), LSE (Smith), FAO (Gommes), IRI (Thomson), WINFORMATICS (Norton), EDF (Dubus), DWD (Becker).
  • First 18 months: seasonal application models will be tested with ERA-40 data and (for selected models) with DEMETER forecasts to commence development of validation systems (this requires downscaled ERA-40 and DEMETER data, and bias-corrected DEMETER data), working towards Tier-2 (ERA-40 reference forecast) and Tier-3 (full validation) validation systems. A workshop on the use of seasonal probabilistic forecasting for health applications, either (1) evaluating the state of the art or (2) setting the agenda for future research.
  • Beyond: for fields of interest at the temporal and spatial scales relevant to impacts modellers, validation of ERA-40 data against other gridded data as available; Tier-1 validation of DEMETER (downscaled) variables against ERA-40 and other gridded datasets; impact models driven with ENSEMBLES seasonal-to-decadal forecasts, with Tier-2 (reference forecast) validation and Tier-3 (real observations, e.g. crop yield) validation.
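One plausible reading of the Tier-2 step is a skill comparison of the hindcast-driven impact model against the ERA-40-driven run treated as the reference forecast; the sketch below expresses that as a mean-squared-error skill score. All yield numbers are invented placeholders, not ARPA-SIM or DEMETER results.

```python
import numpy as np

def mse_skill(model_run, reference_run, observed):
    """MSE skill of a hindcast-driven impact-model run relative to a
    reference run (a Tier-2-style comparison); 1 is perfect, 0 matches the reference."""
    mse_model = np.mean((model_run - observed) ** 2)
    mse_ref = np.mean((reference_run - observed) ** 2)
    return 1.0 - mse_model / mse_ref

# Hypothetical wheat-yield series (t/ha) for the same seasons
hindcast_run = np.array([5.1, 4.7, 5.6, 4.9, 5.3])
era40_run = np.array([5.0, 4.9, 5.4, 5.1, 5.2])
observed = np.array([5.2, 4.6, 5.7, 4.8, 5.4])
print("Tier-2 skill =", round(mse_skill(hindcast_run, era40_run, observed), 2))
```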

  18. ARPA crop modelling results
  • Wheat yields 1977-1987, Modena, Italy.
  • 72 ensemble runs (4 models x 9 members x 2 downscaling replicates).
  • WOFOST-based crop model driven with observed data to 31st March, and onwards with DEMETER hindcasts to harvest date (end of June).
  • Boxes show the IQR, whiskers the 10th and 90th percentiles; observed-weather simulation (control): solid triangle; climatology-based run: hollow circle.
  • Marletto et al. 2005, Tellus (submitted).

  19. WP5.2: Evaluation of processes and phenomena (INGV, CNRS-IPSL, MPI-MET, DMI, UREADMM). Evaluate the capability of the models to reproduce and predict the major modes of variation of the climate system, with special emphasis on tropical-extratropical teleconnection patterns.
