Part 1 Baseline Comparisons


Presentation Transcript


  1. Original Graph Set This is the first set of studies we made after re-rating the stations using the updated Leroy (2010) system. We compare compliant (Class 1&2) stations with non-compliant (Class 3,4,5) stations, using different classes as baselines. We also ran the data without baselines for purposes of comparison and to ensure that the results were not a methodological artifact.

  2. Part 1: Baseline Comparisons How do stations compare with nearby stations of a different class? This uses a different approach from our (more recent) “nine regions” method, but we wanted to be certain that, however we addressed the problem, the basic results would be the same.

  3. Methodology • We cut the US up into 26 grid boxes. • We then compare well sited and poorly sited stations within each grid box, using each Class as a separate touchstone. • This gives us four baselines of comparison: Class 1&2, Class 3, Class 4, and Class 5. Note: We combine Class 1 and Class 2 and treat them as a single class to ensure a robust sample, and also because neither Class 1 nor Class 2 stations are temperature-biased according to Leroy (2010); they are therefore equivalent for our purposes.

  4. Grid Boxes 1.) Comparisons are made within each box to establish the baseline. 2.) All boxes are averaged for gridded results. (A sketch of this two-step procedure follows below.)
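The two-step procedure above (compare within each box, then average the boxes) can be expressed as a short script. The sketch below is a reconstruction under stated assumptions, not the authors' actual code: the station table, its column names (lat, lon, leroy_class, trend), and the grid geometry (a 13 × 2 partition yielding 26 boxes) are all illustrative, and the data is synthetic.

```python
# Minimal sketch of the gridding/baselining procedure, on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500
stations = pd.DataFrame({
    "lat": rng.uniform(25, 49, n),            # rough CONUS latitude range
    "lon": rng.uniform(-125, -67, n),         # rough CONUS longitude range
    "leroy_class": rng.choice([1, 2, 3, 4, 5], n),
    "trend": rng.normal(0.25, 0.10, n),       # °C/decade, synthetic
})

# Classes 1 and 2 are combined and treated as a single compliant class.
stations["compliant"] = stations["leroy_class"] <= 2

# Cut CONUS into 26 boxes (here a hypothetical 13 x 2 lon/lat partition;
# the study's actual box geometry is not specified in this transcript).
stations["box"] = (
    pd.cut(stations["lon"], 13, labels=False) * 2
    + pd.cut(stations["lat"], 2, labels=False)
)

def box_difference(g, baseline_class):
    """Within one box: mean compliant trend minus the mean trend of the
    chosen baseline class (e.g. Class 4)."""
    base = g.loc[g["leroy_class"] == baseline_class, "trend"]
    comp = g.loc[g["compliant"], "trend"]
    if base.empty or comp.empty:
        return np.nan                          # box lacks one of the groups
    return comp.mean() - base.mean()

# Step 1: comparison within each box; step 2: average across boxes.
per_box = stations.groupby("box").apply(box_difference, baseline_class=4)
print(f"Gridded compliant-vs-Class-4 difference: {per_box.mean():+.3f} °C/decade")
```

Repeating the last two lines with baseline_class set to 3, to 5, or to the combined Class 1&2 group would give the other three baselines described in slide 3.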

  5. Class 4 stations are the most numerous (36%), and this comparison is therefore the most robust. Compliant (Class 1&2) stations show a trend 0.095 °C/decade lower than non-compliant (Class 3,4,5) stations.

  6. Class 3 stations are the second most numerous (33%). Compliant (Class 1&2) stations show a trend 0.102 °C/decade lower than non-compliant (Class 3,4,5) stations.

  7. Class 1&2 stations comprise only 20% of the total number. Compliant (Class 1&2) stations show a trend 0.082 °C/decade lower than non-compliant (Class 3,4,5) stations.

  8. Class 5 stations comprise only 12% of the total number, so the results cannot be considered robust. Yet the same pattern emerges: compliant (Class 1&2) stations show a trend 0.076 °C/decade lower than non-compliant (Class 3,4,5) stations.

  9. Part 2: Equipment This is a look at how the different equipment affects the data. • CRS: Cotton Region Shelters (a/k/a “Stevenson Screens”) • MMTS: Maximum-Minimum Temperature Sensors • ASOS: Automated Surface Observing Systems

  10. This compares different equipment. Note that the modern MMTS shows a significantly lower trend than the obsolete CRS and the notoriously unreliable ASOS. Yet rather than adjusting CRS and ASOS trends downward to match MMTS, MMTS trends are adjusted upwards to conform with the older, less reliable equipment.
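A minimal sketch of the comparison in this slide, assuming a simple table of per-station trends keyed by shelter type (the values and column names are invented, not the study's data): it computes the mean trend per equipment type and the offset implied by adjusting MMTS upward to match CRS.

```python
# Synthetic equipment comparison: mean raw trend per shelter type, plus
# the offset implied by adjusting MMTS up to CRS (the direction the
# slide describes). Values are illustrative only.
import pandas as pd

equip = pd.DataFrame({
    "equipment": ["CRS"] * 3 + ["MMTS"] * 3 + ["ASOS"] * 2,
    "trend": [0.31, 0.29, 0.33, 0.21, 0.19, 0.23, 0.35, 0.38],  # °C/decade
})

means = equip.groupby("equipment")["trend"].mean()
print(means)
print(f"Adjusting MMTS up to CRS adds {means['CRS'] - means['MMTS']:+.3f} °C/decade")
```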

  11. CRS equipment shows a higher overall trend than MMTS and somewhat less difference between compliant and non-compliant stations (0.064 °C/decade). Part of this is due to poor distribution of stations and is addressed by gridding (see next slide).

  12. After gridding and baselining to Class 4, CRS equipment shows a difference between compliant and non-compliant stations of 0.073 °C/decade.

  13. Modern MMTS equipment shows a much larger difference between compliant and non-compliant stations (0.173 °C/decade). (ASOS comparisons cannot be made, as there are too few ASOS stations for a robust internal comparison. They tend to be better sited, almost exclusively at airports, yet their trends are higher owing to an equipment (HO-83) failure issue and other factors pertaining to the unique conditions at airports.)

  14. After gridding and baselining to Class 4, MMTS equipment shows a slightly smaller, yet still very large, difference between compliant and non-compliant stations (0.164 °C/decade).

  15. Part 3: Urban vs. Rural This section confirms that urbanization increases not only the readings but also the trends. In addition, urbanization is found to dampen, though not eliminate, the differences between compliant (Class 1&2) and non-compliant (Class 3,4,5) stations; rural stations show the greatest disparity. This is significant because 10% of the rated sites are urban and 25% semi-urban, a far greater proportion than these categories occupy of the actual CONUS surface area. Therefore, to that extent, the trends are exaggerated. (A worked illustration of this weighting effect follows below.)
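The exaggeration claim is simple weighting arithmetic, sketched below. The station shares (10% urban, 25% semi-urban) come from the slide; the CONUS area shares and the per-category trends are assumed values chosen only to make the calculation concrete.

```python
# Station-count weighting vs. (assumed) true area weighting. Only the
# station shares come from the text; area shares and trends are assumptions.
station_share = {"urban": 0.10, "semi_urban": 0.25, "rural": 0.65}
area_share    = {"urban": 0.01, "semi_urban": 0.06, "rural": 0.93}   # assumed
trend         = {"urban": 0.35, "semi_urban": 0.30, "rural": 0.25}   # assumed, °C/decade

by_station = sum(station_share[k] * trend[k] for k in trend)
by_area    = sum(area_share[k] * trend[k] for k in trend)
print(f"Station-weighted trend: {by_station:.3f} °C/decade")
print(f"Area-weighted trend:    {by_area:.3f} °C/decade")
# The gap between the two is the exaggeration introduced by the siting mix.
```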

  16. We now turn our attention to urban vs. rural trends. Urban trends are much higher than rural trends (by 0.099 °C/decade), with semi-urban trends squarely in between, 0.056 °C/decade higher than rural.

  17. The difference between compliant and non-compliant rural stations is much greater (0.095 °C/decade) than for urban stations.

  18. The difference between compliant and non-compliant semi-urban stations is also much greater (0.114 °C/decade) than for urban stations.

  19. The difference between compliant and non-compliant urban stations is much smaller (0.037 °C/decade), as urban waste heat overwhelms nominally compliant and non-compliant stations alike. Class 4 stations in urban areas show the same tendencies as Class 5 stations in rural areas.

  20. This chart demonstrates the large effect of urban areas on (otherwise) compliant stations. It also tells us how NOAA deals with this by way of adjustment: namely, non-urban trends appear to be adjusted upward to match urban trends, rather than urban trends being adjusted downward to match rural trends.

  21. Non-compliant (Class 3,4,5) stations show somewhat less urban-rural difference than the compliant (Class 1&2) stations.

  22. Part 4: Gridded, but with no Baseline We now examine the data without any baseline.

  23. Without a baseline, the data shows a trend 0.077 °C/decade cooler for compliant (Class 1&2) stations than for non-compliant (Class 3,4,5) stations. This is consistent with our overall findings.

  24. And this shows how NOAA adjusts for the differences: not by adjusting the non-compliant stations downward to match the compliant stations, but by adjusting the compliant stations upward to match those stations that are out of compliance.
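One hedged way to diagnose the direction of an adjustment is to compare each group's mean trend before and after adjustment and see which group moved. The sketch below assumes a hypothetical table with raw and adjusted trends per station; the layout and values are illustrative, not NOAA's actual data.

```python
# Which group did the adjustment move? Compare raw vs. adjusted means.
import pandas as pd

df = pd.DataFrame({
    "compliant":      [True, True, False, False],
    "raw_trend":      [0.155, 0.165, 0.240, 0.250],   # °C/decade, synthetic
    "adjusted_trend": [0.235, 0.245, 0.245, 0.255],
})

delta = (df["adjusted_trend"] - df["raw_trend"]).groupby(df["compliant"]).mean()
print(delta)
# A large positive delta for the compliant group means compliant stations
# were moved up toward the non-compliant mean, rather than vice versa.
```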

  25. Finally, we showcase the best equipment with urban and semi-urban stations excluded (a handful of rural airports remain included, however). This data is not gridded or baselined, but is a simple national average. The warming effects of poor siting are obvious, as are the effects of the NOAA adjustment procedure: compliant trends are fully 0.190 °C/decade higher after NOAA adjustment.

  26. It is, of course, important to provide a simple, ungridded nationwide average of all well sited stations and poorly sited stations, and indeed we provide those figures. But a nationwide average can be skewed by poor station distribution, if well (or poorly) sited stations are concentrated in some areas but not in others. Furthermore, it is not very revealing to compare a well sited station in Northern Virginia with a poorly sited station in Arizona; one would want to compare well sited (Class 1&2) stations with nearby poorly sited (Class 3,4,5) stations, and vice versa. Therefore, gridding and baselining are desirable. (The sketch below illustrates the distribution problem.)
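To see why an ungridded average can mislead, the sketch below builds a synthetic network in which well sited stations cluster in a slow-warming region. The plain national difference then conflates geography with siting, while the per-box (gridded) comparison recovers the true siting effect. All numbers are invented for illustration.

```python
# Clustered siting distorts an ungridded comparison; gridding removes it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rows = []
for box, base in [("north", 0.15), ("south", 0.35)]:   # regional trends
    n_good = 40 if box == "north" else 5                # well sited cluster north
    n_poor = 5 if box == "north" else 40
    for compliant, n in [(True, n_good), (False, n_poor)]:
        bias = 0.0 if compliant else 0.10               # true siting effect
        for t in rng.normal(base + bias, 0.02, n):
            rows.append({"box": box, "compliant": compliant, "trend": t})
df = pd.DataFrame(rows)

# Ungridded: the difference is inflated by where each group happens to sit.
national = df.groupby("compliant")["trend"].mean()
print(f"Ungridded difference: {national.loc[True] - national.loc[False]:+.3f} °C/decade")

# Gridded: difference computed inside each box, then averaged across boxes.
per_box = df.groupby("box").apply(
    lambda g: g.loc[g["compliant"], "trend"].mean()
              - g.loc[~g["compliant"], "trend"].mean())
print(f"Gridded difference:   {per_box.mean():+.3f} °C/decade")
```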
