
Presentation Transcript


  1. GLAMEPS: Grand Limited Area Model Ensemble Prediction System. Towards establishing a European-wide TIGGE-LAM? Trond Iversen, met.no & Univ. of Oslo. Based on discussions involving (inter alia) Dale Barker, Jan Barkmeijer, Jose Antonio Garcia-Moya, Nils Gustafsson, Bent Hansen Sass, Andras Horanyi, Trond Iversen, Martin Leutbecher, Jeanette Onvlee, Bartolome Orfila, Xiaohua Yang.

  2. GLAMEPS: a first planning document exists and will be amended as new partners join. Contributions from partners cover both R&D and operations.

  3. GLAMEPS Working Group, established ad hoc at the ALADIN-HIRLAM predictability planning meeting in Madrid, March 2006: Jan Barkmeijer (KNMI), Jose Antonio Garcia-Moya (INM), Nils Gustafsson (SMHI), Andras Horanyi (HMS), Trond Iversen (met.no), Martin Leutbecher (ECMWF).

  4. The GLAMEPS objective is to provide, in real time, to all HIRLAM and ALADIN partner countries*: an operational, quantitative basis for forecasting probabilities of weather events in Europe up to 60 hours in advance, for the benefit of highly specialised as well as general applications, including risks of high-impact weather. (* The list of partners should be extended!)

  5. Why ensemble prediction? Why not use all resources to produce the "best possible model" and the "best possible forecast", instead of a multitude of inferior models and forecasts? Answer: yes, we need "the best possible model" (e.g. high resolution), BUT the best possible forecast is not "deterministic", because: • weather prediction is not a deterministic problem; • if we pretend it is deterministic, we lose information crucial for protecting human lives and property; • the predictability of free flows decreases with decreasing scales, i.e. higher resolution increases the need for information about spread and the timing of spread saturation.

  6. Why ensemble prediction? Forecast products that require information about spread: • How certain is today's weather forecast? • Ensemble, or rather cluster, averages have longer predictability than the single "best" (control) forecast. • What are the risks of high-impact events? • In a well-calibrated EPS, probable sources of forecast errors can be diagnosed. • Forecasts beyond the predictability limit of pure atmospheric forecasts (monthly, seasonal, and longer) are impossible with a deterministic strategy.

  7. A. Murphy (1993): What is a good forecast? Consistency: correspondence between the forecaster's best judgement and their forecasts. Quality: correspondence between forecasts and the matching observations. Value: benefits realised by decision makers through the use of the forecasts.

  8. Example: Norwegian 20-member LAMEPS (HIRLAM) and targeted EPS (ECMWF IFS). A "100-year precipitation event" in the middle part of Norway, 30-31 January 2006 (>90 mm/24 h; 130 mm/24 h).

  9. [Probability maps: LAMEPS P24 > 60 mm and TEPS P24 > 60 mm.] Forecasted risk = probability × potential damage.
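The risk formula on this slide can be made concrete with a small calculation: the event probability is estimated as the fraction of ensemble members exceeding the threshold and is multiplied by an assumed damage figure. A minimal sketch in Python; all numbers below (member values, damage) are illustrative placeholders, not from the slides.

```python
# Minimal sketch (illustrative only): exceedance probability from an ensemble
# and the resulting "risk = probability x potential damage" from the slide.
import numpy as np

# Hypothetical 24 h precipitation (mm) from 20 ensemble members at one point.
p24_members = np.array([12.0, 35.0, 48.0, 61.0, 55.0, 70.0, 66.0, 40.0,
                        58.0, 63.0, 29.0, 75.0, 52.0, 68.0, 44.0, 59.0,
                        71.0, 33.0, 62.0, 50.0])
threshold_mm = 60.0

# Probability of the event = fraction of members exceeding the threshold.
probability = np.mean(p24_members > threshold_mm)

# Hypothetical potential damage for this event (arbitrary monetary units).
potential_damage = 1.0e6

# Forecasted risk as defined on the slide.
risk = probability * potential_damage
print(f"P(P24 > {threshold_mm} mm) = {probability:.2f}, risk = {risk:.0f}")
```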

  10. Schematic: sources of prediction spread in a LAM: • initial analysis and lateral boundaries; • lower and upper boundaries. Non-linear filtering of unpredictable components → the ensemble mean is better than the control "best forecast" of model x. [Schematic: model-x ensemble members, ensemble mean, and "best forecast" of temperature, precipitation, etc. around the true development, t=0 to t=60 h; truth outside the spread → model error?]
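The schematic's two points, that averaging filters out unpredictable components and that truth falling outside the spread may signal model error, can be illustrated with a toy calculation. A minimal sketch, assuming hypothetical member values for a single variable and lead time; nothing here comes from actual LAMEPS data.

```python
# Minimal sketch (illustrative only): ensemble mean and a crude
# "truth outside the spread" check for one forecast variable.
import numpy as np

# Hypothetical 2 m temperature forecasts (deg C) from 20 members at one point.
members = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=20)
control = members[0]          # treat the first member as the unperturbed control
truth = 4.2                   # verifying observation (placeholder value)

ens_mean = members.mean()     # averaging filters the unpredictable components
spread = members.std(ddof=1)  # ensemble standard deviation

# If the verifying truth falls outside the ensemble envelope,
# the slide suggests this may point to a model error.
outside = not (members.min() <= truth <= members.max())
print(f"control err {abs(control - truth):.2f}, "
      f"ens-mean err {abs(ens_mean - truth):.2f}, spread {spread:.2f}, "
      f"truth outside spread: {outside}")
```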

  11. Schematic: sources of prediction spread in a LAM: • initial analysis and lateral boundaries; • lower and upper boundaries. Model y is slightly better than model x. [Schematic: model-y ensemble mean and "best forecast" of temperature, precipitation, etc. around the true development, t=0 to t=60 h.]

  12. Schematic: sources of prediction spread in a LAM: • initial analysis and lateral boundaries; • lower and upper boundaries; • numerical approximations and parameterized physics. [Schematic: multi-model ensemble mean together with the model-x and model-y ensemble means and "best forecasts" around the true development, t=0 to t=60 h.] Warning: it's not always this simple…

  13. In the multi-modal spread regime: cluster or modal means instead of ensemble means (fig. by R. Hagedorn). [Figure: temperature vs forecast time, from the initial condition to the forecast.] Complete description of weather prediction in terms of a Probability Density Function (PDF).
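When the member distribution is multi-modal, a single ensemble mean can fall between the modes and represent neither scenario, whereas cluster means keep the distinct scenarios. A minimal sketch, assuming scikit-learn's KMeans is available and using made-up two-scenario temperature values.

```python
# Minimal sketch (assumes scikit-learn): cluster means instead of a single
# ensemble mean when the member distribution is multi-modal.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical temperature forecasts at one point: two distinct weather scenarios.
members = np.concatenate([rng.normal(-2.0, 0.5, 10),   # e.g. cold scenario
                          rng.normal(4.0, 0.5, 10)])   # e.g. mild scenario

overall_mean = members.mean()   # falls between the modes: a poor single forecast

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(members.reshape(-1, 1))
cluster_means = km.cluster_centers_.ravel()
weights = np.bincount(km.labels_) / len(members)   # fraction of members per cluster

print(f"ensemble mean {overall_mean:.1f}, "
      f"cluster means {np.round(cluster_means, 1)} with weights {weights}")
```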

  14. Combining probabilistic forecasts from several models may give better scores than even the best of the individual models. Example (Norwegian system): ROC area as a function of the precipitation threshold. The black curve (NORLAMEPS) is a combination of the blue (LAMEPS) and the green (TEPS). [The red is the standard ECMWF EPS for reference.]
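The ROC area quoted here measures how well the forecast probabilities discriminate events from non-events at a given precipitation threshold. A minimal sketch of the computation, assuming scikit-learn is available; the forecast and observation arrays are random placeholders, so the resulting score only demonstrates the mechanics, not any real skill.

```python
# Minimal sketch (placeholder data): ROC area for a probabilistic
# precipitation forecast at one threshold.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_cases, n_members = 200, 20
threshold_mm = 10.0

# Hypothetical member forecasts and verifying observations of 24 h precipitation.
forecasts = rng.gamma(shape=2.0, scale=4.0, size=(n_cases, n_members))
observed = rng.gamma(shape=2.0, scale=4.0, size=n_cases)

# Forecast probability of the event = fraction of members above the threshold.
prob = (forecasts > threshold_mm).mean(axis=1)
event = (observed > threshold_mm).astype(int)

# ROC area: 1.0 is perfect discrimination, 0.5 is no skill.
print(f"ROC area at {threshold_mm} mm/24h: {roc_auc_score(event, prob):.2f}")
```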

  15. Influence of North Atlantic SST on Europe? (I.-L. Frogner, M. H. Jensen, met.no.) Targeted forcing singular vectors, ECMWF IFS, winter, high NAO (the 20% most sensitive days). [Figure: meridional cross-section along 0 deg, from the North Pole to 40 deg N.]

  16. Approach and aims during HIRLAM-A: • An array of LAM-EPS models or model versions: each partner runs a unique model version; each partner runs between 5 and 20 ensemble members based on initial and lateral boundary perturbations; some partners also perturb the lower boundary data. • Grid resolution: 10 km or finer, 40 levels, identical in all model versions. • Forecast range: 60 h, starting daily from 12 UTC. • A common integration domain (!), including: the North Atlantic Ocean north of ca. 15 deg N, Greenland, the European part of the Arctic, and the European continent to the Urals.

  17. SRNWP-PEPS (only a tiny common area). [Figure: SRNWP-PEPS model domains.]

  18. Quality objective To operationally produce ensemble forecasts with • a spread reflecting known uncertainties in data and model; • a satisfactory spread-skill relationship (calibration); and • a better probabilistic skill than the operational ECMWF EPS; for • the chosen forecast range of 60 hours; • our common target domain; and • weather events of our particular interest (probabilistic skill parameters).
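One common way to quantify "better probabilistic skill than the operational ECMWF EPS" is the Brier score and the associated skill score against the reference system; the slides do not specify the metric, so this is only one possible choice. A minimal sketch with placeholder probabilities and outcomes.

```python
# Minimal sketch (illustrative only): Brier score and Brier skill score
# relative to a reference EPS. All arrays below are placeholders.
import numpy as np

def brier_score(prob, event):
    """Mean squared difference between forecast probability and binary outcome."""
    return np.mean((prob - event) ** 2)

# Hypothetical probabilities for an event (e.g. P24 > 20 mm) and its occurrence.
event = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
prob_glameps = np.array([0.8, 0.2, 0.1, 0.7, 0.6, 0.3, 0.2, 0.1, 0.9, 0.2])
prob_reference = np.array([0.6, 0.4, 0.3, 0.5, 0.5, 0.4, 0.3, 0.3, 0.6, 0.4])

bs = brier_score(prob_glameps, event)
bs_ref = brier_score(prob_reference, event)
bss = 1.0 - bs / bs_ref   # > 0 means more skilful than the reference
print(f"BS = {bs:.3f}, BS_ref = {bs_ref:.3f}, BSS = {bss:.3f}")
```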

  19. HIRLAM + ALADIN + … is, by construction, particularly well suited for GLAMEPS: it commonly exploits the distributed resources for short-range NWP in Europe, for the benefit of our users.

  20. Experience: • Global multi-model, grand/super ensembles: mainly in the medium to long range and in climate predictions (DEMETER, ENSEMBLES, climatePrediction.net, IPCC, etc.). • Short range in Europe: much less, and mainly with single-model LAMEPS, downscaling of global EPS, targeted global EPS, multi-model LAM without initial/lateral boundary perturbations, and some experience with breeding and LAM SVs.

  21. Initial step: build on existing operational experience. • Select a small set of HIRLAM/ALADIN model versions which are well established and significantly different, but still approximately equally valid representations of the atmosphere: different models (ALADIN, HIRLAM, …? Build on i.a. INM experience); different physics packages (STRACO and RK-KF deep convection). • For some models, vary selected lower boundary parameters: Atlantic SST; soil moisture. • Construct initial/lateral boundary perturbations representative for triggering the "instability of the day" given the uncertainty constraints: ECMWF TEPS/EPS (build on met.no LAMEPS). • Ensemble calibration (spread-skill ratio).
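For the initial/lateral boundary perturbations, one generic construction (not necessarily the actual met.no LAMEPS/TEPS procedure) is to add scaled combinations of perturbation vectors, e.g. derived from singular vectors, to the control analysis in plus-minus pairs. A minimal sketch with entirely hypothetical array sizes and amplitudes.

```python
# Minimal sketch (illustrative only, not the operational procedure):
# perturbed initial states built from a control analysis plus scaled
# combinations of perturbation vectors, in plus-minus pairs.
import numpy as np

rng = np.random.default_rng(3)
n_grid = 1000                       # flattened model state (placeholder size)
control_analysis = rng.normal(size=n_grid)

# Hypothetical perturbation vectors; in practice these might come from
# e.g. ECMWF singular vectors interpolated to the LAM grid.
n_vectors, n_members = 5, 10
perturbation_vectors = rng.normal(size=(n_vectors, n_grid))

amplitude = 0.1                     # tuned so spread matches analysis uncertainty
members = []
for _ in range(n_members // 2):
    coeffs = rng.normal(size=n_vectors)
    perturbation = amplitude * coeffs @ perturbation_vectors
    members.append(control_analysis + perturbation)   # plus-minus pairs keep the
    members.append(control_analysis - perturbation)   # ensemble centred on the control
members = np.array(members)
print("ensemble of initial states:", members.shape)
```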

  22. Operational: run a first-phase suite at ECMWF (Special Project). • Some scripts are ready. • Some verification tools are ready. • Some tools for probabilistic products are ready. A possibility is to establish a "PAF" (Prediction Application Facility) with ECMWF.

  23. Further on: 1. Through research, gradually increase the ensemble size and the error sources covered. Include other types of model and lower boundary perturbations: vary model coefficients (e.g. learn from climate modellers; a challenge for the "physical processes" community in NWP); targeted forcing SVs or forcing sensitivities (KNMI, met.no); weak 4D-Var perturbed tendencies?; … Include alternative initial/lateral boundary perturbations: ETKF generalized breeding (SMHI); HIRLAM and ALADIN LAM SVs (KNMI, SMHI, HMS); … • Run a de-centralized system with real-time dissemination of data, or through a common centre (e.g. ECMWF). • Ensemble calibration in all phases.

  24. Calibration: spread-skill ratio. Example from the medium range (T. Palmer). Too little spread: model error missing? Too much spread: wrong correction of too small spread at days 2-3?
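The spread-skill ratio referred to here is typically the mean ensemble standard deviation divided by the RMSE of the ensemble mean: a ratio near 1 indicates good calibration, well below 1 too little spread, well above 1 too much. A minimal sketch on synthetic data constructed so that members and truth are drawn from the same distribution, mimicking a reliable ensemble.

```python
# Minimal sketch (synthetic data): spread-skill ratio over a set of cases.
import numpy as np

rng = np.random.default_rng(4)
n_cases, n_members = 500, 20

# For each case, truth and members are draws from the same distribution.
centre = rng.normal(size=n_cases)
truth = centre + rng.normal(scale=1.0, size=n_cases)
forecasts = centre[:, None] + rng.normal(scale=1.0, size=(n_cases, n_members))

ens_mean = forecasts.mean(axis=1)
spread = forecasts.std(axis=1, ddof=1).mean()        # mean ensemble std. dev.
skill = np.sqrt(np.mean((ens_mean - truth) ** 2))    # RMSE of the ensemble mean
print(f"spread {spread:.2f}, RMSE {skill:.2f}, spread/skill {spread / skill:.2f}")
```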

  25. From climatePrediction.net (Univ. of Oxford & the Hadley Centre); only cloud-related parameters are perturbed. Do climate modellers and SRNWP modellers have common challenges? [Figure: PDF of climate sensitivity, dT at 2xCO2, based on ~50,000 ensemble members.]

  26. Practical condition for success: broad participation with long-term allocation of human and technical resources from at least 20 partners (~200 ensemble members or more). Please sign in! Workshop in Vienna, Nov. 13-14, 2006.

  27. Thank you for your attention. [Photo: mother-of-pearl clouds over Oslo, January 2002.]
