Akm Saiful Islam
WFM 6311: Climate Change Risk Management
Lecture-6: Approaches to Select GCM data
Institute of Water and Flood Management (IWFM), Bangladesh University of Engineering and Technology (BUET)
December 2009
Approaches for selecting a Global Climate Model for an Impact Study
The IPCC has a guidance document of interest: IPCC-TGICA, 2007, "General Guidelines on the Use of Scenario Data for Climate Impact and Adaptation Assessment", Version 2, June 2007. Prepared by T.R. Carter with contributions from other authors, for the Task Group on Data and Scenario Support for Impact and Climate Assessment (TGICA) of the IPCC. This PDF is provided on the CCCSN Training DVD.
From the Range of Projections…
• The IPCC recommends* using more than simply ONE model or scenario projection (one should use an 'ensemble' approach) – we saw why earlier
• Using only a limited number of models or scenarios provides no information about the uncertainty involved in climate modelling
• Alternatives to an 'ensemble approach' might involve selecting model/scenario combinations which 'bound' the max/min of reasonable model projections (as used in the IJC Lake Ontario-St. Lawrence Regulatory Study)
* (IPCC-TGICA, 2007)
Two Tests for the Selection of a Model:
TEST 1: How well does the model reproduce the historical climate? (commonly called 'Model Validation')
TEST 2: How does the model compare with all other models for future projections?
First test: Baseline (historical) climate
• We can test how well a model has reproduced the historical baseline climate (Model VALIDATION)
• A model should be able to accurately reproduce past (baseline) climate as a criterion for further consideration
• This requires reliable, long-term observed climate data from the location of interest, OR we could use GRIDDED global datasets at the same scale as the models
• IMPORTANT: Remember we are comparing a site-specific record to a grid-cell average, so an exact match is not to be expected
Second test: Future Projection
We can check how a model performs in comparison with many others in a future projection.
Five criteria outlined by the IPCC:
1. Consistency with other model projections
2. Physical plausibility (realistic?)
3. Applicability for use (correct variables? timescale?)
4. Representativeness
5. Accessibility of data
A model should not be an outlier in the community of model results.
Check maps – CGCM3 – Temperature? [Maps: OBS stations, NCEP gridded, and CGCM3T47 1961-1990 mean ANNUAL TEMPERATURE] Reasonable pattern, with the models slightly cold.
Example: CGCM3 – Timeseries in the Historical Period. The model is too cold, but the TREND is good.
Check maps – CGCM3 – Precipitation? [Maps: OBS stations, NCEP gridded, and CGCM3T47 1961-1990 mean ANNUAL PRECIPITATION] Pattern not quite right – units here are mm/day.
Example: CGCM3 – Timeseries in the Historical Period. The model is too wet, but the TREND is reasonable.
Test 1: Baseline Methodology:
• Comparison of annual, seasonal, and monthly means over the same historical period
• Use the variables of interest – most commonly precipitation and temperature from the Archive
• Keep in mind that we are comparing a single site location (meteorological station) against a gridded value
• An improved method would be to also include other nearby stations with long records in the analysis
• We then obtain from CCCSN the model baseline values for the same location using the SCATTERPLOT
Test 1 (continued):
• Compare the annual values and the distribution of temperature over the year
• Models which best match the annual mean and the monthly distribution pattern can be identified
• NOTE: it doesn't matter which emission scenario we select, since for the historical period the models use the same baseline
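The baseline comparison above can be sketched in a few lines of code. This is a minimal illustration, not the CCCSN tool itself: the station values, model names, and model series below are all invented placeholders standing in for real 1961-1990 observed and gridded data.

```python
# Sketch of the Test 1 comparison: observed station monthly means vs.
# model baseline monthly means over the same historical period.
# All names and numbers are hypothetical placeholders.
obs_monthly_temp = [-4.5, -3.8, 0.9, 7.6, 13.2, 18.5,
                    22.1, 21.3, 16.9, 10.7, 4.6, -1.7]   # deg C, Jan-Dec

model_monthly_temp = {
    "MODEL_A": [-6.1, -5.0, -0.2, 6.8, 12.1, 17.9,
                21.0, 20.4, 15.8, 9.9, 3.5, -3.0],
    "MODEL_B": [-3.9, -3.1, 1.4, 8.1, 13.9, 19.2,
                22.8, 21.9, 17.5, 11.2, 5.1, -1.0],
}

def annual_mean(monthly):
    """Mean of the 12 monthly values."""
    return sum(monthly) / len(monthly)

def mean_bias(model, obs):
    """Average model-minus-observed difference over the 12 months:
    negative = model too cold, positive = model too warm."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

for name, series in model_monthly_temp.items():
    print(name, round(annual_mean(series), 2),
          round(mean_bias(series, obs_monthly_temp), 2))
```

With these invented numbers, MODEL_A comes out too cold and MODEL_B too warm; the same bias calculation would be repeated for precipitation and, ideally, per month and per season.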
Test 1: Baseline Methodology… [Scatterplot of Annual Temperature vs. Annual Precipitation centred on the observed means; quadrants indicate too warm/too cold and too wet/too dry]
Test 1: Baseline Methodology… Looking at Temp and Precip together
• Again, use the SCATTERPLOT on CCCSN – simply select BOTH variables at the same time and all models, or combine the 2 initial results in a single spreadsheet
[Scatterplot with the 'perfect' model at the observed means]
• Almost all models are too wet
• Most models are too cold
• Outliers can be identified
Test 1: Baseline Methodology… Rank the models for the baseline period – ANNUAL
For each model: Temperature rank + Precipitation rank = Total Score (sum of ranks)
Model A rank + Model A rank = Sum of Model A ranks
Model B rank + Model B rank = Sum of Model B ranks
… (and so on for Models C, D, E, F, …)
The Lowest-Score Model is Closest to the Baseline
Test 1: Baseline Methodology
• The same analysis can be done on a monthly and seasonal basis – this can be very important
• This method is best used to reject models (those with the largest scores)
• We effectively remove from consideration those models with the lowest agreement (largest scores)
• The moderating effect of lakes, local elevation effects, and lake-induced precipitation are all complicating factors
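The rank-and-sum scoring described above can be sketched as follows. This is a minimal sketch of the idea only; the model names and baseline error values are hypothetical, and a real study would use the absolute errors computed from the CCCSN scatterplot data.

```python
# Minimal sketch of the Test 1 ranking table: rank the models by the
# size of their baseline error in each variable, then sum the ranks.
# The lowest total score is closest to the baseline.
baseline_error = {
    # model: (|temperature error| in deg C, |precipitation error| in mm)
    "MODEL_A": (1.1, 120.0),   # hypothetical values
    "MODEL_B": (0.6, 45.0),
    "MODEL_C": (2.3, 30.0),
}

def rank_by(errors):
    """Map each model to its rank (1 = smallest error)."""
    ordered = sorted(errors, key=errors.get)
    return {model: i + 1 for i, model in enumerate(ordered)}

temp_rank = rank_by({m: e[0] for m, e in baseline_error.items()})
precip_rank = rank_by({m: e[1] for m, e in baseline_error.items()})
total_score = {m: temp_rank[m] + precip_rank[m] for m in baseline_error}

# Lowest score = closest to baseline; highest scores are candidates
# for rejection, as on the slide.
best = min(total_score, key=total_score.get)
```

Summing ranks rather than raw errors avoids mixing units (deg C vs. mm); the same table can be rebuilt per month or per season.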
Test 2: Future Projections
• No complications from observed data here!
• We look at the range of model projections for the same location and see how they vary
• Models with outlier projections (excessive anomalies – too large or too small) are best rejected
• Finding the anomalies is a simple process using the SCATTERPLOT on CCCSN
Test 2: Future Projections
• Is the 1961-1990 or the 1971-2000 period used as the baseline?
• Which projection period are we interested in? (the 2050s is a common period for planning purposes)
• Is an annual, seasonal, or monthly projection needed? – depends on the study
Annual Temperature/Precipitation Change Scatterplot for the Toronto Grid Cell: 2050s (SRES only) [Scatterplot showing the median T and P for all models/scenarios, with a 1 std. dev. box]
What do all the models and emission scenarios tell us for this grid cell? (Toronto Pearson A)
Median Annual Temperature Change in the 2050s (observed 1961-1990 normal: 7.2°C):
LOWER +1.8°C | MEDIAN +2.6°C | UPPER +3.3°C
Median Annual Precipitation Change in the 2050s (observed 1961-1990 normal: 780.8 mm):
LOWER +0.4% | MEDIAN +5.0% | UPPER +9.7%
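One plausible way to derive a LOWER/MEDIAN/UPPER summary like the one above is the median of all model/scenario anomalies plus or minus one standard deviation (the scatterplot on the previous slide shows a 1 std. dev. box). This is a sketch under that assumption; the anomaly values are invented, not the real CCCSN ensemble, and the slide does not state exactly how its bounds were computed.

```python
# Sketch: summarise the spread of all model/scenario temperature
# anomalies for one grid cell as median +/- 1 standard deviation.
# The anomaly list is a hypothetical stand-in for the full ensemble.
import statistics

temp_anomalies_2050s = [1.8, 2.0, 2.3, 2.6, 2.6, 2.9, 3.1, 3.3]  # deg C

median = statistics.median(temp_anomalies_2050s)
spread = statistics.stdev(temp_anomalies_2050s)   # sample std. dev.
lower, upper = median - spread, median + spread
```

The same calculation would be repeated for the precipitation-change anomalies (in %), giving the second row of the summary.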
TEST 2: Which Models are Closest to the Median Projection? Rank the models for the 2050s Projections – ANNUAL
For each model: Temperature rank + Precipitation rank = Total Score (sum of ranks)
Model A rank + Model A rank = Sum of Model A ranks
Model B rank + Model B rank = Sum of Model B ranks
… (and so on for Models C, D, E, F, …)
The Lowest-Score Model is Closest to the ALL-MODEL MEDIAN
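The Test 2 ranking is the same rank-and-sum idea, but scored by distance to the all-model median rather than to observations. A minimal sketch, with entirely hypothetical model names and anomaly values:

```python
# Sketch of the Test 2 ranking: rank each model by how close its
# projected anomaly sits to the all-model median, per variable,
# then sum the ranks. Lowest total = closest to the median.
import statistics

anomalies = {
    # model: (temperature change in deg C, precipitation change in %)
    "MODEL_A": (2.4, 4.8),   # hypothetical values
    "MODEL_B": (3.3, 9.0),
    "MODEL_C": (2.9, 6.0),
    "MODEL_D": (1.9, 0.9),
    "MODEL_E": (2.6, 5.0),
}

t_med = statistics.median(v[0] for v in anomalies.values())
p_med = statistics.median(v[1] for v in anomalies.values())

def rank_by_distance(values, center):
    """Map each model to its rank (1 = closest to the centre value)."""
    ordered = sorted(values, key=lambda m: abs(values[m] - center))
    return {m: i + 1 for i, m in enumerate(ordered)}

t_rank = rank_by_distance({m: v[0] for m, v in anomalies.items()}, t_med)
p_rank = rank_by_distance({m: v[1] for m, v in anomalies.items()}, p_med)
score = {m: t_rank[m] + p_rank[m] for m in anomalies}

closest = min(score, key=score.get)   # closest to the all-model median
```

Models at the bottom of this ranking (largest scores) are the outlier projections that the slide suggests rejecting.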
Is there a 'best' model for both tests?
TEST 1 (baseline) → resulting models; TEST 2 (projections) → resulting models
Best Models from both TESTS: HADCM3, GISSAOM, CGCM3T63
The Caveats:
• We have only considered ANNUAL values, not SEASONAL or MONTHLY, for the baseline (TEST 1) or the projections (TEST 2)
• The seasonal and monthly options are available on the SCATTERPLOT selector
• 'Extreme variables' have greater uncertainty than normals
• Models can show good ANNUAL agreement with the baseline and good agreement with the all-model projections, but still have incorrect seasonal or monthly distributions
Will Regional Climate Models (RCMs) help?
• They offer higher spatial resolution (~50 x 50 km) versus GCMs at 200-300 km
• The models are driven by an overlying model or gridded data source – so biases in those gridded datasets will also be carried into the RCM
• The time requirements and the processing power available mean there are fewer emission scenarios available = fewer future pathways for consideration
• Some investigations will always require further statistical downscaling
Will RCMs Help in TEST 1?
Annual Temperature (all runs too cold): CRCM3.7.1: 6.1°C; CRCM4.1.1: 4.9°C; CRCM4.2.2: 6.1°C
Annual Precipitation: CRCM3.7.1: 758.5 mm (too dry); CRCM4.1.1: 542.8 mm (too dry); CRCM4.2.2: 860.7 mm (too wet)
Will RCMs Help in TEST 2? [Scatterplot showing crcm3.7.1, crcm4.1.1, and crcm4.2.0 relative to the all-model median T and P, with a 1 std. dev. box]
CCCSN.CA website: select Scenarios – Visualization
Get data
• Input latitude and longitude
• Select AR4
• Select the variable Tmean
• Select the model(s) validated for Tmean
• Click Get Data
Website Output – plus an output table under the chart
Get data for all variables, including climate extremes. You can select an ensemble of models by using Ctrl-Enter.
Future Consecutive Dry Days at Windsor
• Using output from 3 GCM models
• All model results can be averaged to form an ensemble
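Averaging the model results into an ensemble, as the slide suggests, can be sketched as below. The model names and consecutive-dry-day values are invented placeholders, not the actual CCCSN output for Windsor.

```python
# Sketch: year-by-year ensemble mean of a climate index (here,
# annual maximum consecutive dry days) across a small GCM ensemble.
# All names and values are hypothetical.
cdd_by_model = {
    # model: projected annual max. consecutive dry days for three years
    "GCM_1": [22, 25, 24],
    "GCM_2": [18, 21, 20],
    "GCM_3": [26, 29, 28],
}

n_models = len(cdd_by_model)
n_years = len(next(iter(cdd_by_model.values())))

# Average the three models for each year to get the ensemble series
ensemble_mean = [
    sum(series[i] for series in cdd_by_model.values()) / n_models
    for i in range(n_years)
]
```

A simple unweighted mean treats every model equally; the two tests above could instead be used to drop outlier models before averaging.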