Air Quality Monitoring Network Assessment: Illustration of the Assessment and Planning Methodology (Speciated PM2.5)
Prepared for EPA OAQPS (Richard Scheffe) by Rudolf B. Husar and Stefan R. Falke*
Center for Air Pollution Impact and Trend Analysis (CAPITA), Washington University, St. Louis
Draft, January 31, 2000
*AAAR Fellow, EPA Office of Environmental Information
Background on AQ Network Assessment • Monitoring of ambient concentrations provides the necessary sensory input to the various aspects of AQ management. • Efforts are under way to implement new monitoring systems for ozone precursors (PAMS) as well as for PM2.5. At the same time, the performance of the existing ozone and PM monitoring sites is being re-assessed for possible re-location or elimination. • The establishment and operation of AQ monitoring networks is the main consumer of financial and personnel resources in most state and local AQ management agencies. • To streamline AQ monitoring, EPA OAQPS has initiated a program to make the existing air pollutant monitoring networks more responsive to the needs of AQ management. • This work is a progress report by the CAPITA group on a methodology to assess the performance of the O3 and PM2.5 monitoring networks.
Monitoring Network Evaluation: Multiple Criteria • Monitoring networks need to support multiple aspects of air quality management, including risk assessment, compliance monitoring, source identification, and tracking the effectiveness of control measures. • Multiple purposes may require very different network designs. For example, health risk characterization requires sampling of the most harmful species over populated areas during high-concentration episodes; tracking of emission-concentration changes requires broad regional sampling to establish a pollutant mass balance. • In general, an AQ monitoring network is characterized by the spatial distribution of its sampling stations, its temporal sampling pattern, and the species measured. • This methodology focuses on evaluating the geographic features of the network. Consideration of the temporal and species aspects of network evaluation is left for future work.
Network Layout: Uniform or Clustered? • The existing monitoring networks for O3, PM2.5, and weather parameters show different strategies: • The ozone monitoring network (top) is highly clustered around populated areas. Evidently, the O3 regulatory network is focused on areas 'where the people are'. • The recently established PM2.5 FRM network (center) appears to be somewhat less clustered. • On the other hand, NOAA's Automated Surface Observing System (ASOS) (bottom) is uniformly distributed in space for broad spatial coverage. • Clearly, the layout of these networks is tailored to different purposes.
Network Evaluation Combining Subjective and Objective Steps • Select multiple evaluation criteria, e.g. risk assessment, compliance monitoring, trend tracking, etc. This is a subjective procedure driven by the network objectives. • Decide on specific measures that can represent each criterion, e.g. the number of persons in the sampling zone of each station, concentration, etc. The selection of suitable measures is also somewhat subjective. • Calculate the numeric value of each measure for each station in the network. This can be performed objectively using well-defined, transparent algorithmic procedures. • Rank the stations according to each measure. This yields a separate rank value for each measure. For example, a station may be ranked 5 by daily-max O3 and ranked 255 by persons in the sampling zone. This step can also be performed objectively. • Weight the rankings, i.e. set the relative importance of the various measures. This involves comparing 'apples and oranges' and is clearly subjective. • Add the weighted rankings to derive the overall importance of each station and rank the stations by this aggregate measure (see the sketch after this list). Use the aggregate ranking to guide decisions on network modifications.
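The objective steps (3, 4, and 6) reduce to a small ranking computation. Below is a minimal sketch in Python, assuming the per-station measure values have already been computed; the station names, measure values, and weights are all hypothetical, not taken from the assessment.

```python
import pandas as pd

# Hypothetical per-station measure values (output of step 3).
measures = pd.DataFrame(
    {
        "day_max_o3": [92.0, 88.0, 71.0],         # ppb
        "persons_in_zone": [1.2e6, 3.0e5, 4.0e4],
    },
    index=["station_A", "station_B", "station_C"],
)

# Step 4: rank the stations per measure; rank 1 = most important station.
ranks = measures.rank(ascending=False)

# Steps 5 and 6: apply subjective weights and sum the weighted rankings.
weights = {"day_max_o3": 0.5, "persons_in_zone": 0.5}  # subjective choice
aggregate = sum(w * ranks[m] for m, w in weights.items())
print(aggregate.sort_values())  # smallest sum = highest overall station value
```

The per-measure ranking and the weighted sum are fully reproducible; only the choice of weights carries the subjective judgment.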
Network Evaluation Using Five Independent Measures
The approach is illustrated with the ozone network using five independent measures. The five measures represent the information needs of (1) risk assessment, (2) compliance monitoring, and (3) tracking. The methodology allows easy incorporation of additional measures.

AQ Management Activity                       Geographic Info. Need      Station Measure
1. Risk assessment                           Pollutant concentration    4th highest O3
2. Risk assessment                           Persons in sampling zone   Persons/Station
3. Compliance evaluation                     Conc. vicinity to NAAQS    Deviation from NAAQS
4. Reg./local source attribution & tracking  Spatial coverage           Area of Sampling Zone
5. All above                                 Estimation uncertainty     Meas. & estimate difference

These are all measures of the network benefits. Other benefit measures (temporal, species) should also be incorporated. For a cost-benefit analysis, the cost of network operation should also be incorporated.
Note: Measures 2 and 4 (Persons in Sampling Zone and Area of Sampling Zone) are not yet implemented.
The Independent Measures of Network Performance • In this assessment, five independent measures are used to evaluate AQ monitoring network performance. As the Network Assessment progresses, additional measures will be incorporated. • Further details about the five measures can be found in separate presentations pertaining to each measure. (See links below. Note: use the Back arrow on the browser to return to this presentation.) • Pollutant Concentration is a measure of the health risk. According to the NAAQS, the relevant statistic is the 4th highest daily max concentration over 3 years. The station with the highest 4th-highest daily max value is ranked 1. • Deviation from NAAQS measures the station's value for compliance evaluation. The stations are ranked according to the absolute difference between the station value and the NAAQS (85 ppb). The highest ranking goes to the station whose concentration is closest to the standard (smallest deviation); stations well above or below the standard concentration are ranked low (see the sketch after this list). • Estimation Uncertainty measures the ability to estimate the concentration at a station location using data from all other stations. The station with the highest deviation between the actual and the estimated values (i.e. the estimation uncertainty) is ranked 1. In other words, stations whose values can be estimated accurately from other data are ranked (valued) low (also sketched below). • Spatial Coverage measures the geographic surface area each station covers. The highest ranking goes to the station with the largest area in its sampling zone. This measure assigns high relative value to remote regional sites and low value to clustered urban sites with small sampling zones. • Persons/Station measures the number of people in the 'sampling zone' of each station. Under this measure, the station with the largest population in its zone is ranked 1. Note: estimating the health risk requires both the population and the concentration in the sampling zone.
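As an illustration, here is a minimal sketch of how two of these measures could be computed. The presentation does not specify the spatial estimator behind the uncertainty measure, so the leave-one-out inverse-distance-weighted (IDW) interpolation below, along with the function names and the power parameter, is an assumption; a real analysis might use kriging or another estimator.

```python
import numpy as np

NAAQS_PPB = 85.0

def deviation_from_naaqs(conc):
    """Absolute deviation from the 85 ppb standard; smallest deviation -> rank 1."""
    return np.abs(np.asarray(conc, float) - NAAQS_PPB)

def estimation_uncertainty(xy, conc, power=2.0):
    """Leave-one-out IDW estimate at each station (an assumed estimator);
    returns |measured - estimated|. Largest difference -> rank 1."""
    xy = np.asarray(xy, float)
    conc = np.asarray(conc, float)
    n = len(conc)
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                      # leave station i out
        d = np.linalg.norm(xy[mask] - xy[i], axis=1)  # distances to the others
        w = 1.0 / d**power                            # inverse-distance weights
        errors[i] = abs(conc[i] - np.dot(w, conc[mask]) / w.sum())
    return errors
```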
Ranking of Stations • In the following pages, the ozone network is evaluated using the five independent measures. • The results are shown in maps with a consistent representation scale for each of the five measures. • For each measure, the stations are ranked in importance. The upper quartile (above the 75th percentile) and the lower quartile (below the 25th percentile) of the stations are highlighted (a simple percentile cut, sketched below). • The focus is on the stations with a low ranking of their 'value', i.e. on candidate stations for elimination. • Subsequently, the rankings are aggregated subjectively to yield an overall evaluation.
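The quartile highlighting is a one-line percentile cut; a sketch, reusing the hypothetical `aggregate` ranking from the earlier code block:

```python
import numpy as np

# With rank 1 = most important, low-'value' stations have the LARGEST
# aggregate rank sums, so they sit at or above the 75th percentile.
q25, q75 = np.percentile(aggregate, [25, 75])
high_value_stations = aggregate[aggregate <= q25].index
low_value_stations = aggregate[aggregate >= q75].index  # elimination candidates
```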
Ranking by Daily Max Concentration • The daily max concentration is a factor in health risk. • The stations with the highest O3 levels (red) are located over the NE Megalopolis, the Ohio River Valley, and the urban centers of the Southeast: Dallas, Houston, Atlanta. • The stations with the lowest O3 levels (blue) are located throughout the remainder of the Eastern US (EUS). • Contiguous regions of low O3 are found in Florida, the Upper Midwest, and the inland part of the Northeast. • From the O3 exposure perspective, the blue stations are ranked lowest.
Ranking by Deviation from NAAQS • The deviation from the NAAQS (85 ppb) measures a station's importance for compliance. • The stations closest to the NAAQS (red dots) occupy much of the central Eastern US, south of the Great Lakes and north of Tennessee-South Carolina. • The stations with large +/- deviations from the NAAQS (blue) are clustered over the megalopolis, Dallas, and Houston (O3 > 85 ppb), or over Florida and the Upper Midwest (O3 < 85 ppb). • From the compliance monitoring perspective, the blue stations have the lowest rank.
Ranking by Estimation Uncertainty • The uncertainty measures the ability to estimate a station's concentration from other data. • The highest uncertainty (red) is found at urban stations, where the concentrations are highly variable in space and time. • The lowest uncertainty (blue) is at remote sites, where the concentrations are more homogeneous in space and time. • From the perspective of estimation uncertainty, the blue stations have the lowest rank.
Ranking by Population in the Sampling Zone • The number of persons in a station's sampling zone is a scaling factor for the overall health risk. • Areas of large population per station (red) are found over the NE megalopolis, but also over more remote areas. • A small population per station (blue) is generally found at remote sites, but also in some urban clusters, e.g. Chicago, New Orleans, St. Louis. • From the perspective of population coverage, the blue stations have the lowest rank.
Ranking by Area of Sampling Zone • The area of the sampling zone is a measure of spatial coverage and uniformity. • The stations with large sampling areas (red dots) are unclustered remote sites outside of urban areas. • Conversely, the stations with small sampling areas (blue dots) are in clusters, mostly in urban regions. • The clusters with small station areas are located over the NE megalopolis, Chicago, Pittsburgh, St. Louis, etc. They rank lowest in area coverage (see the sketch after this list).
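The presentation does not say how the sampling zones are delineated; a natural choice is Thiessen (Voronoi) polygons around the stations, which is assumed in the sketch below along with made-up coordinates. Border stations get unbounded cells, which a real analysis would clip to the study-region boundary.

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

# Hypothetical station coordinates (e.g., km on a projected grid).
stations = np.random.default_rng(0).uniform(0, 100, size=(20, 2))

vor = Voronoi(stations)
areas = np.full(len(stations), np.inf)  # unbounded border cells stay infinite
for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if region and -1 not in region:     # bounded (interior) cell only
        # Voronoi cells are convex, so the hull area equals the cell area.
        areas[i] = ConvexHull(vor.vertices[region]).volume  # 'volume' = area in 2D
# Largest sampling-zone area -> rank 1 for spatial coverage.
```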
Aggregate Ranking of Stations • Aggregation of the rankings is not simply a mechanical weighting, since it involves subjective judgments. • However, once the relative weights of the different rankings are available (from the negotiation process), the current methodology allows their incorporation into the assessment. • The following pages illustrate several aggregate station rankings.
Aggregate Ranking – Equal Weight • All five measures are weighted equally at 20% each. • High 'aggregate value' stations (red) are located over both urban and rural segments of the central EUS. • Low 'value' sites (blue) are interspersed with high-value sites. • Clusters of low-value sites are found over Florida, the Upper Midwest, and the inland portion of New England.
Aggregate Ranking – Focus on 'Risk Assessment' • This weighting of the rankings adds extra weight for population. • The high 'aggregate value' stations (red) are distributed throughout the urban and non-urban areas of the EUS. • The low 'value' stations (blue) are clustered over Chicago, inland New England, Florida, New Orleans, and St. Louis.
Aggregate Ranking – Focus on 'Compliance' Monitoring • This weighting of the rankings adds extra weight for concentration proximity to the standard and for estimation uncertainty. • The high 'value' stations (red) are mostly in urban areas, as well as the Ohio River Valley. • The low 'value' stations (blue) are clustered around Chicago, inland New England, and Florida. Most stations in the rural SE are also of low 'value' from the compliance perspective. • The three weighting scenarios can be expressed as weight vectors, as sketched below.
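Only the equal-weight numbers (20% each) come from the deck; the 'risk' and 'compliance' values below are made-up illustrations of where the "extra weight" goes, usable with the aggregation sketch shown earlier (the measure names are likewise hypothetical).

```python
# Weight scenarios for the five measures. Only the equal-weight values are
# from the presentation; the other two rows are illustrative assumptions.
scenarios = {
    "equal":      {"conc": 0.20, "persons": 0.20, "naaqs_dev": 0.20, "area": 0.20, "uncert": 0.20},
    "risk":       {"conc": 0.25, "persons": 0.35, "naaqs_dev": 0.10, "area": 0.10, "uncert": 0.20},
    "compliance": {"conc": 0.10, "persons": 0.10, "naaqs_dev": 0.35, "area": 0.10, "uncert": 0.35},
}
```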
Aggregate Ranking – Overall Low-Value Stations • About 10% of the stations are of low 'value' (lowest quartile) from both the risk assessment and the compliance monitoring points of view. • Such overall 'poor value' stations appear to be candidates for elimination. • Disclaimer: these are only illustrations of the method, not specific recommendations. • Feedback from the Network Assessment community would be most appreciated.
Summary of the Network Evaluation Methodology: • Establish the network evaluation criteria using subjective judgment. • Calculate objective measures for each criterion for each site using existing data. • Using uniform standards across the network, rank the stations according to each criterion using these objective measures. • Subjectively weight the rankings to establish the aggregate ranking.