
Claudia Tebaldi Climate Central Currently visiting the Department of Global Ecology

This seminar discusses the use of multi-model ensembles in climate projections and explores the uncertainties associated with future climate change projections. It also highlights the value of multi-model ensembles in characterizing and quantifying climate uncertainties.



Presentation Transcript


  1. The use of multi-model ensembles in climate projections: on uncertainty characterization and quantification. Claudia Tebaldi, Climate Central; currently visiting the Department of Global Ecology, Carnegie Institution, Stanford. claudia.tebaldi@gmail.com. With Reto Knutti (ETH, Zurich) and Richard L. Smith (UNC, Chapel Hill). 02/19/08, Seminar at JPL

  2. Outline: Multi-Model Ensembles and future climate change projections; what are the main uncertainties in future projections, even given a specific scenario; what MMEs can characterize and what they cannot; main issues and challenges in analyzing MMEs; the value of MMEs; an example of statistical analysis of MME projections: a joint model for temperature and precipitation change; summary and promising future directions.

  3. http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php This unprecedented collection of recent model output is officially known as the "WCRP CMIP3 multi-model dataset." It is meant to serve IPCC's Working Group 1, which focuses on the physical climate system -- atmosphere, land surface, ocean and sea ice -- and the choice of variables archived at the PCMDI reflects this focus. A more comprehensive set of output for a given model may be available from the modeling center that produced it. With the consent of participating climate modelling groups, the WGCM has declared the CMIP3 multi-model dataset open and free for non-commercial purposes. After registering and agreeing to the "terms of use," anyone can now obtain model output via the ESG data portal, ftp, or the OPeNDAP server. As of January 2007, over 35 terabytes of data were in the archive and over 337 terabytes of data had been downloaded among the more than 1200 registered users. Over 250 journal articles, based at least in part on the dataset, have been published or have been accepted for peer-reviewed publication.

  4. The way each model discretizes its domain, the processes that it chooses to represent explicitly, and those that it needs to parameterize (and the values for those parameters*) make the difference. Multi-Model Ensembles get at this structural uncertainty. *Perturbed Physics Ensembles deal with the parameter uncertainty within a single model.

  5. About 20 state-of-the-art GCMs. When it comes to future projections, each says something different: sometimes only in absolute value, sometimes in sign too! Agreement becomes harder to reach the finer the spatial and temporal scale.

  6. At global scales it is mostly a matter of absolute values. Shading is one inter-model standard deviation.

  7. At regional scales it may be a matter of sign! Stippling means 90% or more of the models agree in the sign of the change.
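The two ensemble summaries used on these slides, the inter-model standard deviation and the sign-agreement "stippling", are simple to compute. A minimal sketch with a synthetic ensemble (all data here are made up for illustration; real CMIP3 output would come from the PCMDI archive):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: 20 hypothetical GCMs, each providing a map of projected
# precipitation change on a 10x10 regional grid (synthetic values).
n_models, ny, nx = 20, 10, 10
changes = rng.normal(loc=0.1, scale=1.0, size=(n_models, ny, nx))

# Multi-model mean and inter-model standard deviation (the shading
# used for the global-scale plots).
mme_mean = changes.mean(axis=0)
mme_std = changes.std(axis=0, ddof=1)

# Sign agreement: fraction of models whose change has the same sign as
# the multi-model mean; "stippling" marks grid cells where >= 90% agree.
same_sign = np.sign(changes) == np.sign(mme_mean)
agreement = same_sign.mean(axis=0)
stippled = agreement >= 0.9
```

The 90% threshold is the one quoted on the slide; other agreement criteria are used in the literature.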

  8. IPCC-AR4 WG1 Chapters 10 and 11, Global and Regional Future Projections: most of the projected changes are shown as multi-model means, and the inter-model standard deviation/range is often used as a measure of the uncertainty. What are we looking at? What uncertainties are characterized? Which are left out? Why means? Why standard deviations?

  9. What kind of sample is the CMIP3 ensemble? • A collection of best guesses • Arguably mutually dependent (models share components) • Not random, but not systematic either

  10. MME mean is often better than a single model
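The skill advantage of the multi-model mean can be demonstrated with a toy experiment: if each model is truth plus its own bias plus noise, averaging cancels much of both. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth" field and 20 fake models, each = truth + bias + noise.
truth = rng.normal(size=500)
n_models = 20
biases = rng.normal(scale=0.5, size=n_models)
models = truth + biases[:, None] + rng.normal(scale=0.5, size=(n_models, 500))

def rmse(x):
    return float(np.sqrt(np.mean((x - truth) ** 2)))

member_rmse = np.array([rmse(m) for m in models])
mean_rmse = rmse(models.mean(axis=0))

# Error cancellation: the ensemble mean typically beats most individual
# members, because independent biases and noise partially average out.
```

This cancellation argument only works for the independent part of the errors, which is exactly why the following slides on model dependence matter.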

  11. Models are not independent. Histogram of the correlation coefficients of all possible pairs of model biases (simulated minus observed mean temperature, 1980-2000).
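The diagnostic on this slide, pairwise correlations of model-bias maps, can be sketched as follows. The synthetic biases below share a common component (standing in for shared model components), so the histogram of pair correlations centers well above zero:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Synthetic bias fields: a component shared by all models (shared
# parameterizations/components) plus a model-specific component,
# evaluated at 200 grid points. Magnitudes are illustrative only.
n_models, n_pts = 20, 200
shared = rng.normal(size=n_pts)
bias = 0.7 * shared + 0.7 * rng.normal(size=(n_models, n_pts))

# Correlation coefficient for every pair of model-bias maps; a histogram
# of these values clustered above zero indicates non-independent models.
pair_corrs = [np.corrcoef(bias[i], bias[j])[0, 1]
              for i, j in combinations(range(n_models), 2)]
mean_corr = float(np.mean(pair_corrs))
```

With 20 models there are 190 pairs; in this construction each pair correlation is roughly 0.5 by design.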

  12. Models are not distributed around the truth. Figure: error of the average of N models (and of the best N models) versus N, compared with a 1/sqrt(N) decay. Less than half of the temperature error vanishes for an average of an infinite number of models of the same quality.
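The reason the error does not decay like 1/sqrt(N) is that the common (systematic) part of the error never averages out. A minimal simulation of this effect, with made-up error magnitudes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each model's error = common systematic error + independent error.
# Averaging N models shrinks the independent part like 1/sqrt(N),
# but the common part survives, so the error of the N-model mean
# plateaus instead of vanishing.
common = 0.5   # shared systematic error (illustrative)
n_pts = 1000

def error_of_mean(n, trials=200):
    errs = []
    for _ in range(trials):
        models = common + rng.normal(scale=1.0, size=(n, n_pts))
        errs.append(np.sqrt(np.mean(models.mean(axis=0) ** 2)))
    return float(np.mean(errs))

e1, e100 = error_of_mean(1), error_of_mean(100)
# Roughly: e1 ~ sqrt(0.25 + 1.0), e100 ~ sqrt(0.25 + 0.01), which is
# far above the 1/sqrt(100) prediction of e1 / 10.
```

The plateau level is set by the common error, matching the slide's point that models of the same quality cannot average away their shared biases.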

  13. Models do not sample a wide range of uncertainty. The climate sensitivity distribution of the CMIP3 models is the green curve (IPCC-AR4 WG1 Ch. 10).

  14. Ensemble means and ensemble ranges are easy to interpret, but they are not justified by the nature of the sample. Likely, the uncertainty is larger than what is represented by the ensemble of best guesses: the models in the ensemble have systematic and common errors.

  15. Not all models are created equal. Can we use model performance in replicating observed climate to weight a model more or less? Seems like a good idea, except: How do we account for model tuning? Is there correlation between past and future performance? What metrics of performance shall we use?
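To make the weighting question concrete, here is one common (and, per the slide's caveats, debatable) choice: weights inversely proportional to squared error against an observed climatology. Everything below, including the skill metric and the weight formula, is an illustrative assumption, not the method of the talk:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical skill metric: RMSE of each model against an observed
# climatology (synthetic data; 10 fake models with varying noise levels).
obs = rng.normal(size=300)
noise_levels = rng.uniform(0.2, 1.5, size=(10, 1))
models = obs + rng.normal(scale=noise_levels, size=(10, 300))

skill_rmse = np.sqrt(np.mean((models - obs) ** 2, axis=1))

# One possible choice: weights proportional to inverse squared error,
# normalized to sum to 1. Better-performing models count more.
weights = 1.0 / skill_rmse**2
weights /= weights.sum()

# Apply the weights to the models' (fake) projected warming.
projection = rng.normal(loc=3.0, scale=0.5, size=10)  # hypothetical dT, deg C
weighted_mean = float(weights @ projection)
```

The slide's three questions apply directly: a tuned model scores well on this metric by construction, and nothing here guarantees the weights are informative about future performance.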

  16. Model tuning

  17. Is performance correlated to future projections?

  18. Is model performance on the mean state indicative of the ability to simulate (future) trends? Figure: ability to simulate the observed pattern of the warming trend vs. ability to simulate the observed pattern of mean climate (correlations R = -0.21 and R = 0.27).

  19. Models continue to improve on present-day climatology, but uncertainty in projections is not decreasing. We may be looking at the wrong thing, i.e. climatology provides no strong constraint on projections. We cannot verify our projections, but only test models indirectly.

  20. Even after all these caveats, there is value in multi-model ensembles. They provide the only means of exploring/quantifying structural uncertainty, besides initial-condition, boundary-condition and parametric uncertainties. The ensemble average has better skill than single models' output, especially when considering more than one diagnostic. There is no way around GCMs when we are concerned with regional projections.

  21. So, caveats notwithstanding, a plug for statistical modeling of climate model output. Given these rich datasets, and the concurrent development of single-model sensitivity analyses (Perturbed Physics Experiments); given all the work being done in validating model output; given the emerging awareness in the modeling communities of the value of experimental design: it is hard to keep your hands off this problem as a statistician (especially a Bayesian statistician). Maybe the trick is not to fool ourselves by blurring the line between the real system and the modeled one.

  22. Probabilistic projections of joint temperature and precipitation change. Main assumptions: the true climate signal time series (both temperature and precipitation) is a piecewise linear trend with an elbow at 2000, to account for the possibility that future trends will differ from current trends; superimposed on the piecewise linear trends is bivariate Gaussian noise with a full covariance matrix, which introduces correlation between temperature and precipitation; observed decadal averages provide a good estimate of the current series, their correlation, and their uncertainty; GCMs may have a systematic additive bias, assumed constant along the length of the simulation; after the bias in each GCM simulation is identified, there remains variability around the true climate signal that is model specific.
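The first two assumptions can be sketched as a generative simulation. All parameter values below (slopes, baseline levels, the covariance matrix) are hypothetical placeholders, not estimates from the talk:

```python
import numpy as np

rng = np.random.default_rng(5)

# Decadal time axis; the "elbow" at 2000 lets the post-2000 trend differ
# from the historical one.
years = np.arange(1960, 2100, 10)
elbow = 2000

def piecewise_trend(t, level_at_elbow, slope_past, slope_future):
    # Continuous piecewise-linear signal with a change of slope at the elbow.
    return (level_at_elbow
            + slope_past * np.minimum(t - elbow, 0)
            + slope_future * np.maximum(t - elbow, 0))

# Hypothetical signals: temperature (deg C) and precipitation (mm/yr).
temp_signal = piecewise_trend(years, 14.0, 0.01, 0.03)
prec_signal = piecewise_trend(years, 1000.0, -0.5, -2.0)

# Bivariate Gaussian noise with a full covariance matrix couples the
# temperature and precipitation residuals (here, correlation -0.3).
cov = np.array([[0.04, -0.3],
                [-0.3, 25.0]])
noise = rng.multivariate_normal([0.0, 0.0], cov, size=len(years))
temp_series = temp_signal + noise[:, 0]
prec_series = prec_signal + noise[:, 1]
```

The off-diagonal term of `cov` is what makes the model "joint": temperature and precipitation deviations from their trends are drawn together, not independently.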

  23. We use uninformative priors on all the unknown parameters in the statistical model. We derive a joint posterior distribution for all of the parameters, and a posterior predictive distribution for a new model's projection.
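To illustrate what "uninformative prior, joint posterior, posterior predictive" means in a stripped-down setting: treat an ensemble of projections as i.i.d. normal with unknown mean and variance under the standard flat reference prior. This is a deliberately simplified stand-in for the full hierarchical model of the talk, and the projection values are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical end-of-century dT projections (deg C) from 8 fake models.
dT = np.array([2.1, 2.8, 3.4, 2.5, 3.0, 2.2, 3.6, 2.9])
n = len(dT)
ybar, s2 = dT.mean(), dT.var(ddof=1)

# Under the reference prior p(mu, sigma^2) proportional to 1/sigma^2,
# the posterior is available in closed form; sample it directly:
#   sigma^2 | data ~ (n-1) s^2 / chi^2_{n-1}
#   mu | sigma^2, data ~ N(ybar, sigma^2 / n)
n_draws = 50_000
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=n_draws)
mu = rng.normal(ybar, np.sqrt(sigma2 / n))

# Posterior predictive for a "new model's" projection: for each posterior
# draw of (mu, sigma^2), draw a fresh dT. This is wider than the spread
# of the data, reflecting parameter uncertainty on top of model scatter.
dT_new = rng.normal(mu, np.sqrt(sigma2))
```

The real analysis replaces this exchangeable toy with the bias and model-specific-variability structure of slide 22, but the logic, posterior over parameters first, then predictive for an unseen model, is the same.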

  24. This analysis can quantify systematic biases and models' reliability; can provide a probabilistic "best guess" for the climate signal as a weighted mean between the observations and the models' central tendency; and can provide a predictive distribution for a "new model". It is an informed synthesis of the data: you can accept it as such if you agree with its assumptions (the likelihood of the data, the priors on the unknown parameters). What does it tell us about the real world?

  25. Conclusions. Combining multi-model ensembles can help quantify uncertainties in future climate projections, exploring and comparing models' structural characteristics. The non-systematic nature of the sampling, the in-breeding among the population of models, the complexities of choosing and drawing inference from diagnostics, and the impossibility of verification present unique challenges in the formulation of a rigorous statistical approach to quantifying these uncertainties.

  26. Conclusions. Things are set to become even more interesting, with the introduction of ever more complex models and the consequent increasing heterogeneity of these ensembles, and with the increasing demands on computing resources and the consequent trade-offs of resolution vs. scenarios vs. ensemble size. The way forward: coordinated experiments, MMEs plus PPEs, process-based evaluation translated into metrics, representation of model dependencies, and quantification of common biases. All along, transparency and two-way communication between scientists and users, stakeholders, and policy makers.
