
Assessing and predicting regional climate change

Hans von Storch, Jonas Bhend and Armineh Barkhordarian, Institute of Coastal Research, GKSS, Germany


Presentation Transcript


  1. Assessing and predicting regional climate change Hans von Storch, Jonas Bhend and Armineh Barkhordarian, Institute of Coastal Research, GKSS, Germany

  2. „Assessing and predicting“ • Assessing – detection of non-natural ongoing climate change, and attribution of the most likely cause of such a change (Hasselmann, 1993). • Assessing – if detection is not possible: determination of the consistency of ongoing change with deflated projections (Bhend and von Storch, 2008). • Predicting – not really possible at this time (except for first examples); almost all cases are descriptions of possible, plausible, internally consistent futures (scenarios).

  3. Needed tools • Data describing homogeneously the past and ongoing change (reconstructions); when insufficient observational evidence is available, use RCM-based reconstructions. • Data describing plausible, internally consistent and possible future states (scenarios), prepared by global GCMs or (better:) regional RCMs. • Use scenarios to guide the statistical analysis of whether ongoing changes are beyond the range of natural variations (detection) and how they are best “explained” (attribution).

  4. Research questions • Is the observed change different from internal variability? – Then there is something predictable. • Is anthropogenic forcing a plausible explanation? – Then predictability based on plausibility is possible. • Is anthropogenic forcing a necessary explanation? – Then predictions for strong forcing are possible.

  5. Methodical options • Detection: “Is the observed change different from what we expect due to internal/natural variability alone?” – not always doable. • Trends – are there significant trends? – no useful results. • Consistency: “Are the observed changes similar to what we expect from anthropogenic forcing?” – Doable: plausibility argument using an a priori known forcing.

  6. Regional detection and attribution (observations → detection → attribution) • Needs: • A homogeneous observational record • Estimates of the natural (or internal) level of variability (based on observations, proxies or control-run simulations) • The assumption of a linear overlay of the effects of different drivers • Realistic simulations of regional climate • Estimates of the effects of different drivers, generated by model simulations
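
To illustrate the detection step listed above, the sketch below compares an observed regional trend with a null distribution of same-length trends sampled from a long control run (internal variability). It is a minimal sketch; the variable names and the synthetic data are placeholders for illustration, not material from the talk.

```python
# Minimal detection sketch: is the observed regional trend outside the range
# of trends generated by internal variability alone? All inputs are
# placeholders, not data from the presentation.
import numpy as np


def linear_trend(series):
    """Least-squares trend (units per time step) of a 1-d series."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]


def detection_p_value(obs_series, control_run):
    """Fraction of control-run trends (same length as the observations)
    that are at least as large in magnitude as the observed trend."""
    n = len(obs_series)
    obs_trend = linear_trend(obs_series)
    # Chop the (long) control run into non-overlapping chunks to build
    # a null distribution of trends due to internal variability.
    null_trends = np.array([
        linear_trend(control_run[i:i + n])
        for i in range(0, len(control_run) - n, n)
    ])
    return np.mean(np.abs(null_trends) >= np.abs(obs_trend))


# Example with synthetic data (illustration only):
rng = np.random.default_rng(0)
obs = 0.02 * np.arange(50) + rng.normal(0, 0.5, 50)   # warming signal + noise
ctrl = rng.normal(0, 0.5, 3000)                        # stationary "control run"
print(f"p-value against internal variability: {detection_p_value(obs, ctrl):.3f}")
```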

  7. „Significant“ trends • Often, an anthropogenic influence is assumed to be in operation when trends are found to be „significant“. • In many cases, the tests used to assess the significance of a trend are flawed, as they fail to take serial correlation into account. • If the null hypothesis is correctly rejected, the conclusion to be drawn is: if the data-collection exercise were repeated, we may expect to see a similar trend again. • Example: the N European warming trend from April to July as part of the seasonal cycle. • A significant trend does not imply that the trend will continue into the future (beyond the time scale of the serial correlation). • Example: usually September is cooler than July.
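
A minimal sketch of a trend significance test that accounts for serial correlation, using the common lag-1 autocorrelation / effective-sample-size adjustment. This is one standard recipe, not necessarily the test used by the authors; the example series is synthetic.

```python
# Trend test with an effective sample size that corrects for AR(1)
# serial correlation in the residuals (a common rule of thumb).
import numpy as np
from scipy import stats


def trend_test(series):
    """Return the trend per time step and a p-value adjusted for lag-1
    serial correlation in the residuals."""
    n = len(series)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, series, 1)
    resid = series - (slope * t + intercept)

    # Lag-1 autocorrelation of the residuals
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    r1 = max(r1, 0.0)  # guard against small negative estimates

    # Effective sample size: n_eff = n * (1 - r1) / (1 + r1)
    n_eff = n * (1 - r1) / (1 + r1)

    # Standard error of the slope using n_eff degrees of freedom
    se = np.sqrt(np.sum(resid**2) / (n_eff - 2)) / np.sqrt(np.sum((t - t.mean())**2))
    t_stat = slope / se
    p = 2 * stats.t.sf(np.abs(t_stat), df=n_eff - 2)
    return slope, p


# Example: a red-noise series with no real trend can look "significant"
# if the serial correlation is ignored.
rng = np.random.default_rng(1)
x = np.zeros(60)
for i in range(1, 60):
    x[i] = 0.7 * x[i - 1] + rng.normal()
print(trend_test(x))
```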

  8. Consistency analysis: attribution without detection The check of consistency of recent and ongoing trends with projections from dynamical (or other) models represents a kind of „attribution without detection“. This is particularly useful when only time series of insufficient length are available or the signal-to-noise ratio is too low. The idea is to estimate the driver-related change E from a (series of) model scenarios (or projections), and to compare this “expected change” E with the recent trend R. If R is inconsistent with E, then we may conclude that the recent change is not due to the suspected driver, at least not completely.
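
The comparison of R and E can be condensed into a few lines. The sketch below standardizes the difference between the recent trend R and the scenario-mean expected change E by an estimate of the natural variability of trends; all numbers are hypothetical and only illustrate the bookkeeping, not results from the talk.

```python
# Consistency check in the spirit of slide 8: compare the recent observed
# trend R with the expected change E from scenario runs, relative to the
# spread of trends expected from natural variability alone.
import numpy as np


def consistency(recent_trend, expected_trends, natural_trend_std):
    """Return E (scenario-mean expected change) and the standardized
    difference (R - E) / sigma_natural."""
    E = np.mean(expected_trends)
    z = (recent_trend - E) / natural_trend_std
    return E, z


R = 0.35                                # hypothetical recent 30-year trend
scenario_trends = [0.25, 0.30, 0.28]    # hypothetical scenario-derived trends
sigma_nat = 0.10                        # hypothetical std. dev. of natural trends

E, z = consistency(R, scenario_trends, sigma_nat)
print(f"E = {E:.2f}, standardized difference (R - E)/sigma = {z:.1f}")
# A |z| well above ~2 would argue against consistency of R with the
# scenario-based expectation E.
```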

  9. Example: recent 30-year trend R – DJF mean precipitation in the Baltic Sea catchment. [Figure: trend of DJF precipitation according to different data sources.]

  10. Consistency analysis • Expected signals: • six simulations with the regional coupled atmosphere–Baltic Sea climate model RCAO (Rossby Centre, Sweden) • three simulations driven with HadCM3 global scenarios, three with ECHAM4 global scenarios; 2071-2100 • two simulations exposed to the A2 emission scenario, two simulations exposed to the B2 scenario; 2071-2100 • two simulations with present-day GHG levels; 1961-90 • Regional climate change in the four scenarios is relatively similar.

  11. [Figure: regional DJF precipitation; Δ = 0.05%.]

  12. Consistency analysis: Baltic Sea catchment. [Figure: all seasons, RCAO-ECHAM B2 scenario.]

  13. Consistency analysis: Baltic Sea catchment • Consistency of the patterns of model “projections” and recent trends is found in most seasons. • A major exception is precipitation in JJA and SON. • The observed trends in precipitation are stronger than the anthropogenic signal suggested by the models. • Possible causes: – scenarios inappropriate (false) – drivers other than CO2 at work (industrial aerosols?) – natural variability much larger than the signal (signal-to-noise ratio ≈ 0.2-0.5).

  14. Perspectives • Consistency analysis requires the presence of homogeneous data extending across several decades. In case of insufficient observational evidence: use global re-analyses and regional downscalings thereof. If large-scale mean values are of interest: use GCMs; if regional details matter: use RCMs. • Detection requires the presence of homogeneous data extending across many decades without significant external influences. If available, use multi-century GCM control runs and downscalings thereof (not yet existent). • Attribution requires the presence of simulations of the changes caused by the several relevant drivers. Use global modelling efforts like CMIP3; for regional studies, the available simulations are insufficient.

  15. Regional modelling • Dynamical downscaling to obtain a high-resolution (10-50 km grid; 1-hourly) description of the weather stream. – Use of NCEP or ERA re-analyses allows reconstruction of regional weather in past decades (1960-2003). – When global scenarios are used, regional scenarios with a better description of space/time detail can be downscaled. • Problem: the model may be incomplete, the “global driver” inadequate. • Advantages: homogeneity (if the driver is homogeneous), and very many (unobservable) variables available. [Figure: integration area used in the GKSS reconstruction and regional scenarios.]

  16. [Figure: regional model variance as a function of spatial scale – insufficiently resolved vs. well resolved scales; added value.]
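
To make the scale-separation idea behind the "added value" argument concrete, the sketch below splits a periodic 1-d field into a large-scale part (the scales a coarse global driver resolves) and a small-scale remainder (the scales where an RCM can add value), using a simple spectral filter. The 1-d setup and the cutoff wavenumber are illustrative assumptions.

```python
# Scale separation with a spectral low-pass filter: variance in the
# small-scale remainder is the part a coarse global model cannot
# represent and a regional model can add.
import numpy as np


def scale_split(field, n_large=6):
    """Return (large-scale, small-scale) parts of a periodic 1-d field,
    keeping the first n_large wavenumbers as 'large scale'."""
    spec = np.fft.rfft(field)
    spec_large = spec.copy()
    spec_large[n_large + 1:] = 0.0            # keep only low wavenumbers
    large_scale = np.fft.irfft(spec_large, n=len(field))
    return large_scale, field - large_scale


# Example (synthetic field, illustration only):
rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
field = np.sin(3 * x) + 0.3 * rng.normal(size=x.size)
large, small = scale_split(field)
print(f"large-scale variance: {large.var():.3f}, small-scale variance: {small.var():.3f}")
```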

  17. [Schematic: RCM with physiographic detail; 3-d vector of state; known large-scale state; projection of the full state onto the large scales; large-scale (spectral) nudging.]
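
A minimal sketch of the large-scale (spectral) nudging idea from the schematic above: at each step, the large-scale component of the RCM state is relaxed toward the known large-scale driving state, while the small scales evolve freely. The 1-d setup, the nudging coefficient alpha and the cutoff n_large are illustrative assumptions, not the GKSS model configuration.

```python
# Large-scale (spectral) nudging sketch: relax only the low-wavenumber
# part of the RCM state toward the driving large-scale state.
import numpy as np


def project_large_scale(state, n_large=4):
    """Projection of the full state onto the large scales (low wavenumbers)."""
    spec = np.fft.rfft(state)
    spec[n_large + 1:] = 0.0
    return np.fft.irfft(spec, n=len(state))


def nudge(rcm_state, driving_large_scale, alpha=0.1, n_large=4):
    """One nudging step: pull the RCM's large-scale component toward the
    driving field's large-scale state with strength alpha; the small
    scales are left untouched."""
    rcm_large = project_large_scale(rcm_state, n_large)
    correction = alpha * (driving_large_scale - rcm_large)
    return rcm_state + correction


# Usage: in practice rcm_state comes from the model integration and the
# driving large-scale state from the global (re)analysis or GCM scenario.
rng = np.random.default_rng(3)
rcm_state = rng.normal(size=128)
driving = project_large_scale(rng.normal(size=128))
nudged = nudge(rcm_state, driving)
```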

  18. [Figure: Mediterranean Sea Basin (Barkhordarian, 2010).]

  19. Downscaling cascade

  20. Take home … • We need to assess current change, and to do so • by comparing against natural/internal variability (detection), and • by comparing with (deflated) scenarios of possible futures (attribution, consistency). • Data to this end may be generated by modelling. • Sometimes global GCM simulations suffice. • For regional details, RCM simulations are better. • Reconstructions (and scenarios) should (and can) be done with large-scale constraining.

  21. Additional material

  22. [Figure 1: Hansen’s scenario, published in 1988, as a prediction up to 2010 (Hargreaves, 2010).] [Figure 2: Allen et al.’s (2000) forecast of global temperature made in 1999. The solid line shows the original model projection; the dashed line shows the prediction after reconciling climate model simulations with the HadCRUT temperature record, using data to August 1996. The grey band shows the 5-95% uncertainty interval. The red diamond shows the observed decadal mean surface temperature for the period 1 January 2000 to 31 December 2009, referenced to the same baseline.]
