
Homogenization of monthly Benchmark temperature series of network no. 3 – using ProClimDB software



Presentation Transcript


  1. Homogenization of monthly Benchmark temperature series of network no. 3 – using ProClimDB software COST Benchmark meeting in Zürich 13-14 September 2010 – Lars Andresen

  2. Software package (by Petr Štěpánek) • AnClim – homogeneity analysis (using txt files) • ProClimDB – automating the homogenization procedure (using mainly dbf files)

  3. Normal homogenization procedure: Original data → Quality control → Reconstruction of series → Homogeneity testing → Adjusting data → Iteration process
  • Quality control: rank of monthly values; comparing with neighbours (distance / standardization to altitude / outliers); replacing suspicious values
  • Reconstruction of series: stations within 10 km; demands on data coverage; merging of different series
  • Homogeneity testing: reference series (40 years, 10 years overlap) from correlations / weights; standardization to base station (AVG/STD); SNHT (Alexandersson test); assessment of homogeneity results
  • Adjusting data: reference series (10 years around the inhomogeneity) from distances; standardization to base station (AVG/STD); smoothing of monthly adjustments / demands on correlation after adjustment
  • Iteration process (repeating the analysis on the adjusted data)
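The testing step above builds a reference series from weighted neighbours and standardizes it to the base (candidate) station's average and standard deviation (AVG/STD). A minimal sketch of that standardization, assuming correlation-derived weights; the function names and toy data are illustrative, not ProClimDB's API:

```python
import numpy as np

def reference_series(neighbours, weights):
    """Weighted average of neighbour series (one row per station)."""
    w = np.asarray(weights, dtype=float)
    nb = np.asarray(neighbours, dtype=float)
    return (w[:, None] * nb).sum(axis=0) / w.sum()

def standardize_to_base(ref, base):
    """Rescale the reference to the base station's mean and std (AVG/STD)."""
    return (ref - ref.mean()) / ref.std() * base.std() + base.mean()

# toy example: a 40-year base series and two correlated neighbours
rng = np.random.default_rng(0)
base = 10 + rng.normal(0, 2, 40)
neighbours = [base + rng.normal(0, 0.5, 40), base + rng.normal(0, 1.0, 40)]
ref = standardize_to_base(reference_series(neighbours, [0.8, 0.5]), base)
```

After standardization the reference has exactly the base station's mean and standard deviation, so the difference (candidate minus reference) series fed to the homogeneity test is centred.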

  4. Detecting breaks of network 3 (15 series) • Outliers removed from manipulated series • 10 outliers from 8 stations • Testing settings of ProClimDB • 40 year periods, 10 years overlap versus 20 years • Excluding breaks closer than 4 years to edge of series or to nearest break • Finding the more distinct breaks before the less distinct ones
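The window settings tested above (40-year periods with a 10-year overlap, and excluding breaks within 4 years of an edge or of another break) can be sketched as two small helpers. This is a toy illustration of the settings, not ProClimDB's actual implementation:

```python
def analysis_windows(start, end, length=40, overlap=10):
    """Return (first, last) year windows of the given length, each
    overlapping the previous window by `overlap` years."""
    step = length - overlap
    windows = []
    y = start
    while y + length - 1 <= end:
        windows.append((y, y + length - 1))
        y += step
    return windows

def keep_break(year, window, accepted, min_dist=4):
    """Reject a detected break closer than `min_dist` years to a window
    edge or to an already accepted (more distinct) break."""
    first, last = window
    if year - first < min_dist or last - year < min_dist:
        return False
    return all(abs(year - b) >= min_dist for b in accepted)

wins = analysis_windows(1941, 2010)  # two 40-year windows, 10-year overlap
```

Accepting the more distinct breaks first means `accepted` grows as detection proceeds, and later, weaker candidates near an accepted break are discarded.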

  5. Removing outliers – Station 01400: the value for May 1978 was changed from 14.8 °C (outlier) to 10.8 °C (true value). May values (true / recorded): 1976: 14.3/14.3, 1977: 11.5/11.5, 1978: 10.8/14.8, 1979: 13.2/13.2, 1980: 8.8/8.8
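Quality control flags a monthly value as suspicious when it deviates too far from the value expected from neighbours, and replaces it. A minimal sketch of such a check using the station 01400 example; the expectation series and the threshold are illustrative, and ProClimDB's own outlier tests are more elaborate:

```python
import numpy as np

def flag_outliers(candidate, expected, k=1.5):
    """Flag values whose difference from the neighbour-based expectation
    deviates from the mean difference by more than k standard deviations."""
    diff = np.asarray(candidate, float) - np.asarray(expected, float)
    return np.abs(diff - diff.mean()) > k * diff.std()

may = np.array([14.3, 11.5, 14.8, 13.2, 8.8])       # recorded, 1976-1980
expected = np.array([14.3, 11.5, 10.8, 13.2, 8.8])  # neighbour-based estimate
mask = flag_outliers(may, expected)
```

Only the 1978 value is flagged; it would then be replaced by the neighbour-based estimate, as on the slide.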

  6. Consequences of changing overlap years – a case study using the SNHT method. [Figure panels for shifts of 0.3°, 0.5° and 0.7°.] Single shift of ±0.3, 0.5 or 0.7°, each pair placed 9 and 19 years from the edge of the series; single shift of ±0.5° placed 2, 4, 9 and 19 years from the edge of a homogeneous 40-year temperature series.

  7. Criteria for detection • Approved • Correct year (two years involved, both correct) • Adjustment within ±0.1 degrees, e.g. 0.5 ±0.1 • T0 ≥ 8.1 (40 years, significance level 95%) • Nearly approved • Correct year, T0 ≥ 8.1, adjustment = 0.5 ±0.3 degrees • Correct year ±1, T0 ≥ 8.1, adjustment = 0.5 ±0.2 • Correct year, T0 ≥ 7.0 (significance level 90%), adjustment = 0.5 ±0.1 • Fault • A significant break that is neither approved nor nearly approved
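The T0 values in these criteria come from the standard normal homogeneity test (Alexandersson). A minimal implementation of the single-shift statistic, assuming the input is the standardizable candidate-minus-reference series; the toy series below is illustrative:

```python
import numpy as np

def snht_t0(q):
    """Single-shift SNHT: standardize q, then for every split point a
    compute T(a) = a*mean(z[:a])**2 + (n-a)*mean(z[a:])**2 and return
    the maximum T0 together with the most probable break position."""
    q = np.asarray(q, float)
    z = (q - q.mean()) / q.std()
    n = len(z)
    t = np.array([a * z[:a].mean() ** 2 + (n - a) * z[a:].mean() ** 2
                  for a in range(1, n)])
    a_best = int(np.argmax(t)) + 1
    return float(t[a_best - 1]), a_best

# toy 40-year series with a +0.5 degree shift after year 20
rng = np.random.default_rng(1)
q = rng.normal(0, 0.2, 40)
q[20:] += 0.5
t0, a = snht_t0(q)
```

For a 40-year series, T0 ≥ 8.1 would count as significant at the 95% level under the criteria above.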

  8. Network 3 – comparing 46 breaks. B: breaks detected, M: missing detections, F: fault detections. [Table: B/M/F counts after 0, 1 and 2 iterations, for overlaps of 10 versus 20 years and for Y_Poss ≥ 30, ≥ 25 and ≥ 20.]
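The B/M/F counts can be reproduced by matching each detected break to the nearest still-unmatched true break within the ±1-year tolerance of slide 7. A toy scoring helper; the function name and the example break years are illustrative:

```python
def score_breaks(true_breaks, detected, tol=1):
    """Count B (true breaks detected within +/- tol years), M (true
    breaks missed) and F (detections matching no true break)."""
    matched = set()
    fault = 0
    for d in detected:
        hits = [t for t in true_breaks if abs(d - t) <= tol and t not in matched]
        if hits:
            matched.add(min(hits, key=lambda t: abs(d - t)))
        else:
            fault += 1
    b = len(matched)
    m = len(true_breaks) - b
    return b, m, fault

b, m, f = score_breaks([1965, 1980, 1994], [1965, 1981, 2001])
```

Here 1981 matches the true break in 1980 within tolerance, 1994 is missed, and 2001 is a fault detection.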

  9. Left: "Official result" (46 breaks). [Figures: case study with Y_Poss ≥ 30, 25 and 20 after 2 iterations, and with Y_Poss ≥ 15 without iteration.]

  10. Discussion – 1: Homogeneity analysis. Reference series for finding breaks • Using correlations • Using distances • Weighting of neighbour values (0.5 or 1.0?) • Period (40 years) / overlap (10 or 20 years?) Processing of results • Method (SNHT alone or in combination with others?) • How to find the most probable breaks (Y_POSSIBLE)? • Weighting of month, season and year (1, 2, 5) • Metadata (improving?) • Proximity to the beginning/end of the series or to other breaks (2 or 4 years?)
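The first discussion point contrasts correlation-based and distance-based weights for the reference series. A sketch of both options; squared correlation and inverse distance are common choices but are assumptions here, not necessarily ProClimDB's exact formulas:

```python
import numpy as np

def correlation_weights(base, neighbours):
    """Weight each neighbour by its squared correlation with the base series."""
    return np.array([np.corrcoef(base, nb)[0, 1] ** 2 for nb in neighbours])

def distance_weights(distances_km):
    """Weight each neighbour by inverse distance to the base station."""
    return 1.0 / np.asarray(distances_km, float)

# toy data: one neighbour tracks the base closely, one only loosely
rng = np.random.default_rng(2)
base = rng.normal(10, 2, 40)
neighbours = [base + rng.normal(0, 0.3, 40), base + rng.normal(0, 2.0, 40)]
wc = correlation_weights(base, neighbours)
wd = distance_weights([12.0, 48.0])
```

Correlation weighting favours the neighbour that actually co-varies with the candidate; distance weighting favours the geographically nearest one, which need not be the same station.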

  11. Discussion – 2: Adjustments of the series. Reference series for making adjustments • Using distance alone (limitation on distance) • Using distance and correlation (limitations on both distance and correlation) Smoothing of monthly adjustments • Gauss filter (0 ~ no smoothing; 2 ~ period of 5 values, which is recommended; other?) Checking correlation after adjustments • Keep the smoothed adjustment if the correlation improvement between candidate and neighbours (Corr+value) is ≥ 0.005, or ≥ 0.000?
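Monthly adjustments form a cyclic series of 12 values (December wraps around to January), so a Gauss filter is naturally applied with wrap-around. A minimal sketch of such smoothing; the kernel width and the sample adjustments are illustrative, not ProClimDB's exact filter:

```python
import numpy as np

def smooth_monthly(adjustments, sigma=1.0):
    """Cyclically smooth 12 monthly adjustments with a Gaussian kernel."""
    adj = np.asarray(adjustments, float)
    n = len(adj)
    out = np.empty(n)
    idx = np.arange(n)
    for i in range(n):
        # cyclic distance from month i to every month
        d = np.minimum(np.abs(idx - i), n - np.abs(idx - i))
        w = np.exp(-0.5 * (d / sigma) ** 2)
        out[i] = (w * adj).sum() / w.sum()
    return out

raw = np.array([0.5, 0.4, 0.6, 0.3, 0.2, 0.1, 0.0, 0.1, 0.2, 0.4, 0.5, 0.6])
smooth = smooth_monthly(raw)
```

Smoothing damps month-to-month noise in the adjustments while preserving their annual mean, which is why the slide then checks whether the correlation with neighbours still improves after applying the smoothed values.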

  12. Discussion – 3: Iterations • Using the adjusted file for a new analysis • How to find the most probable breaks • Should criteria be more stringent when the procedure is automated (depending on metadata and Y_POSSIBLE)?

  13. Conclusion • There is reason for concern about the high number of fault detections • Use of metadata is necessary in homogenization! Using metadata allows lower values of Y_Possible • It is important to find the optimal settings of a procedure before comparing methods • Homogenization has no correct answer!
