
Inconsistent strategies to spin up models in CMIP5 and effects on model performance assessment

Explores the impact of the different strategies used to spin up climate models in CMIP5 on model performance evaluation, and highlights the importance of consistent experimental setups for accurate model-data comparisons.



Presentation Transcript


  1. Inconsistent strategies to spin up models in CMIP5 and effects on model performance assessment Roland Séférian, Laurent Bopp, Marion Gehlen, Laure Resplandy, James Orr, Olivier Marti, Scott C. Doney, John P. Dunne, Paul Halloran, Christoph Heinze, Tatiana Ilyina, Jerry Tjiputra, Jörg Schwinger MISSTERRE – December 2015

  2. CMIP5: the age of the skill-score metrics… • 2000-2007 (Stow et al., 2009) • ~63% of the reviewed papers provided only a very simple evaluation • 2009-onward (e.g., Frölicher et al., 2009, Steinacher et al., 2011) • Ensemble model evaluation (cross-evaluation) + model-weighted solutions

  3. CMIP5: the age of the skill-score metrics… • 2000-2007 (Stow et al., 2009) • ~63% of the reviewed papers provided only a very simple evaluation • 2009-onward (e.g., Frölicher et al., 2009, Steinacher et al., 2011) • Ensemble model evaluation (cross-evaluation) + model-weighted solutions

  4. CMIP5: the age of the skill-score metrics… • 2012-onward (e.g., Anav et al., 2013) • Statistical metrics on the seasonal cycle are used to rank models against each other • 2013-onward (e.g., Cox et al., 2013, Wenzel et al., 2014, Massonnet et al.) • Observational constraints as a reasonable guess to weight model predictions
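
A minimal sketch of that kind of seasonal-cycle ranking, assuming monthly climatologies already regridded to a common grid; the helper names and the choice to rank by RMSE are illustrative, not the exact metric of Anav et al. (2013):

    import numpy as np

    def seasonal_cycle_rmse(model_clim, obs_clim):
        # model_clim, obs_clim: arrays of shape (12, nlat, nlon) holding
        # monthly climatologies on a common grid (hypothetical inputs).
        diff = model_clim - obs_clim
        return float(np.sqrt(np.nanmean(diff ** 2)))

    def rank_models(model_clims, obs_clim):
        # Rank models by seasonal-cycle RMSE (lower = better).
        scores = {name: seasonal_cycle_rmse(clim, obs_clim)
                  for name, clim in model_clims.items()}
        return sorted(scores, key=scores.get)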

  5. CMIP5: the age of the skill-score metrics… • 2013-onward (e.g., Knutti et al., 2013) • Combination of variables to rank models against each other

  6. CMIP6: the skill-score metrics climax… • 2014-onward (e.g., Eyring et al., 2014) • Development of a metrics package as a unified framework to benchmark models (for CMIP6)

  7. CMIP6: the skill-score metrics climax… • 2014-onward (e.g., Eyring et al., 2014) • Development of a metrics package as a unified framework to benchmark models (for CMIP6)

  8. What do we call intercomparison? [Schematic: Model #1 (Exp. Setup A) → Skill Scores (A) vs. Observations; Model #2 → Skill Scores (B) vs. Observations]

  9. What do we call intercomparison? [Schematic: Model #1 (Exp. Setup A) → Skill Scores (A) vs. Observations; Model #2 (Exp. Setup B) → Skill Scores (B) vs. Observations]

  10. Can we really speak of intercomparison if the experimental setup differs between models?

  11. Can we really speak of intercomparison if the experimental setup differs between models? Difference in duration

  12. Can we really speak of intercomparison if the experimental setup differs between models? Difference in initial condition Difference in duration

  13. Can we really speak of intercomparison if the experimental setup differs between models? Difference in initial condition Difference in strategy Difference in duration

  14. Can we really speak of intercomparison if the experimental setup differs between models? Difference in initial condition Difference in strategy Difference in duration How do these differences/inconsistencies impact model-data comparison?

  15. Evaluating the impact of spin-up duration with IPSL-CM5A-LR • No information available from Metafor • Parent_id: N/A • No spin-up simulation distributed to the CMIP5 archive • Need to re-do the simulation in a very naïve experimental setup: • Initialize the model (IPSL-CM5A-LR) at rest with observations (WOA, GLODAP) • Determine model skill scores (correlation, bias, RMSE) along the spin-up time [500 yrs] against the same datasets • Focus on O2 as a proxy of physical air-sea fluxes and circulation
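
A minimal sketch of how such spin-up-dependent skill scores could be computed, assuming one O2 snapshot per archived spin-up year and an observational climatology on the same grid (function names and inputs are hypothetical):

    import numpy as np

    def skill_scores(model_field, obs_field):
        # Correlation, bias and RMSE between a model field and observations
        # (e.g. annual-mean O2 vs. the World Ocean Atlas), ocean points only.
        m, o = np.ravel(model_field), np.ravel(obs_field)
        ok = np.isfinite(m) & np.isfinite(o)
        m, o = m[ok], o[ok]
        corr = float(np.corrcoef(m, o)[0, 1])
        bias = float(np.mean(m - o))
        rmse = float(np.sqrt(np.mean((m - o) ** 2)))
        return corr, bias, rmse

    def scores_along_spinup(o2_snapshots, o2_obs):
        # One entry per archived spin-up year (e.g. a 500-member list).
        return [skill_scores(snap, o2_obs) for snap in o2_snapshots]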

  16. Evaluating the impact of spin-up duration with IPSL-CM5A-LR [Figure legend: IPSL-CM5 (AR4 style), our simulation, IPSL-CM5 (Mignot et al., 2013)] • Drift in OHC is weak and comparable to other CMIP5 models after 250 years of spin-up

  17. Evaluating the impact of spin-up duration with IPSL-CM5A-LR [Figure panels: O2 at the surface and O2 at 2000 m depth]

  18. Tracking the drift in the CMIP5 archive [Figure panels: surface and 2000 m depth; CMIP5 models vs. IPSL-CM5A snapshots]
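
One simple way such drift could be quantified from the archive, assuming a globally averaged piControl time series is at hand (the linear-trend estimator below is an illustrative choice, not necessarily the method used here):

    import numpy as np

    def linear_drift(years, global_mean_series):
        # Drift estimated as the linear trend (units of the field per year)
        # of a globally averaged quantity, e.g. O2 at the surface or at
        # 2000 m, over the piControl segment available in the archive.
        slope, _ = np.polyfit(np.asarray(years, float),
                              np.asarray(global_mean_series, float), 1)
        return float(slope)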

  19. Tracking the drift in the CMIP5 archive Not so surprising… (1) Simple computation: ocean volume ~1.3 x 10^18 m^3, deep-water mass formation rate ~20 Sv → mixing time of the ocean ~2000 years (2) Model simulation with data assimilation (Wunsch et al., 2008)
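
The back-of-the-envelope estimate written out; the ocean volume of ~1.3 x 10^18 m^3 is the standard approximate value assumed here:

    # Mixing time = ocean volume / deep-water mass formation rate.
    ocean_volume = 1.3e18        # m^3 (approximate global ocean volume)
    formation_rate = 20 * 1e6    # 20 Sv expressed in m^3 s^-1
    seconds_per_year = 3.15e7

    mixing_time_years = ocean_volume / formation_rate / seconds_per_year
    print(f"~{mixing_time_years:.0f} years")   # roughly 2000 years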

  20. Revisit model ranking accounting for model drift Standard framework: [ranking figures for surface O2 and deep O2]

  21. Revisit model ranking accounting for model drift Penalized framework: [ranking figures for surface O2 and deep O2]
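
The slides do not spell out how the penalized framework is built; the sketch below shows one possible drift-aware score, in which the RMSE is inflated by the displacement the drift would cause over a chosen horizon (the function, its signature and the 100-year default are all assumptions):

    def drift_penalized_rmse(rmse, drift_per_year, horizon_years=100.0):
        # One *possible* way to penalize a skill score for drift: add the
        # change the drift would produce over a chosen horizon, so models
        # with identical snapshot RMSE but larger drift rank lower.
        return rmse + abs(drift_per_year) * horizon_years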

  22. Perspectives • Need to define a common framework to run ocean/BGC simulations • … as in OCMIP2 (which required 2000 years of spin-up simulation) • Need to expand model metadata (no information on the spin-up is available in Metafor) • … Now: branch_time of piControl = N/A (not transparent at all!) • Provide some recommendations for model weighting and model ranking • … Skill-score metrics are a 'snapshot' of the model and do not show whether the model's fields are drifting or not… • Further work will be done in CRESCENDO • … use drifts to define confidence levels on model results
