Evaluating SF6 tracer experiment data in Barrio Logan to assess model performance, including hourly concentrations, meteorological data analysis, and model testing. Preliminary results compare observations with model predictions.
Modeling Overview for the Barrio Logan Community Health Neighborhood Assessment Program
Andrew Ranzieri, Vlad Isakov, Tony Servin, Shuming Du
October 10, 2001
Air Resources Board, California Environmental Protection Agency
Working Draft - Do Not Cite or Quote
MODEL PERFORMANCE EVALUATION: a scientific process to ensure that models are working properly and predict reliable concentrations
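Evaluations of this kind are commonly summarized with paired statistics such as fractional bias, normalized mean square error, the fraction of predictions within a factor of two of the observations, and the correlation coefficient. A minimal sketch of these standard metrics, assuming paired hourly observed and predicted SF6 concentrations are already in hand (the arrays below are made-up placeholders, not experiment data):

```python
import numpy as np

# Hypothetical paired hourly SF6 concentrations (observed, predicted), e.g. in ppt.
obs = np.array([12.0, 8.5, 3.1, 0.9, 15.2, 6.7])
pred = np.array([10.4, 11.0, 2.2, 1.8, 9.6, 7.3])

def performance_stats(obs, pred):
    """Common model-evaluation statistics for paired observed/predicted data."""
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())   # fractional bias
    nmse = ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())       # normalized mean square error
    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))            # fraction within a factor of 2
    r = np.corrcoef(obs, pred)[0, 1]                                     # correlation coefficient
    return {"FB": fb, "NMSE": nmse, "FAC2": fac2, "r": r}

print(performance_stats(obs, pred))
```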
Microscale Tracer Experiment at Barrio Logan
• Tracer experiment conducted August 21-30, 2001
• Hourly SF6 concentrations sampled at 50 sites
• Tracer released at NASSCO during the daytime, from 10 a.m. to 10 p.m.
• Mobile van sampled continuously to measure crosswind SF6 concentrations
• Mini-sodar to measure vertical winds up to 200 m at 5 m resolution
• Six sonic anemometers to measure surface-level winds and turbulence
Status of Performance Evaluation – Microscale Tracer Data
• Contractor is conducting QA analysis on the data sets to ensure data quality
• ARB is evaluating the not-yet-QA'd SF6 and meteorological data
• Not all meteorological data are currently available (sonics)
• Conducting "preliminary" data analysis
Status of Performance Evaluation – Microscale
• Plotting hourly SF6 data (spatial maps) to understand the data set and identify outliers
• Checking consistency between winds and concentrations
• Identifying plume centerline and plume width (see the sketch after this list)
• Evaluating downwind dilution ratios
• Identifying data sets for initial model testing and performance evaluation
• Working with the contractor to resolve problems
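The plume-centerline, plume-width, and dilution-ratio checks can be illustrated with a simple crosswind analysis along one sampling arc. A hedged sketch, assuming each sampler's azimuth from the release point and its arc distance are known (all numbers below are illustrative, not measured values):

```python
import numpy as np

# Illustrative hourly SF6 concentrations (ppt) along one monitoring arc,
# indexed by sampler azimuth (degrees from the release point).
azimuth_deg = np.array([250., 255., 260., 265., 270., 275., 280.])
conc_ppt    = np.array([  1.,   6.,  18.,  30.,  17.,   5.,   1.])

def arc_centerline_and_width(azimuth_deg, conc, arc_radius_m):
    """Concentration-weighted plume centerline and crosswind spread (sigma_y) on one arc.
    Ignores azimuth wrap-around at 0/360 degrees for simplicity."""
    w = conc / conc.sum()
    centerline = np.sum(w * azimuth_deg)                     # weighted mean azimuth (deg)
    sigma_deg = np.sqrt(np.sum(w * (azimuth_deg - centerline) ** 2))
    sigma_y_m = np.deg2rad(sigma_deg) * arc_radius_m         # angular spread -> metres
    return centerline, sigma_y_m

c1000, sy1000 = arc_centerline_and_width(azimuth_deg, conc_ppt, arc_radius_m=1000.0)
print(f"centerline {c1000:.1f} deg, sigma_y {sy1000:.0f} m")

# Downwind dilution ratio between two arcs: ratio of the arc peak concentrations.
peak_1000, peak_2000 = 30.0, 9.0    # illustrative arc maxima (ppt)
print("dilution ratio 1000 m -> 2000 m:", peak_1000 / peak_2000)
```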
Status of Performance Evaluation – Microscale
• ARB/UCR are conducting preliminary modeling to assist in the QA work and provide "fast track" modeling results (a plume-model sketch follows this list):
• ISCST3
• AERMOD
• CALPUFF
• UCR
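ISCST3 and AERMOD are steady-state Gaussian plume models, while CALPUFF is a Lagrangian puff model. As background for the comparisons that follow, here is a minimal sketch of the classic Gaussian point-source relation that ISCST3-type models are built around; the release rate, wind speed, dispersion coefficients, and receptor geometry are illustrative assumptions, not the experiment's actual inputs:

```python
import numpy as np

def gaussian_plume(q_g_s, u_m_s, y_m, z_m, h_m, sigma_y, sigma_z):
    """
    Steady-state Gaussian plume concentration (g/m^3) for a continuous point source
    with ground reflection. q: emission rate (g/s), u: wind speed (m/s),
    y/z: crosswind distance and receptor height (m), h: effective release height (m).
    sigma_y/sigma_z would normally come from stability-dependent curves evaluated at
    the downwind distance; fixed illustrative values are used here.
    """
    lateral = np.exp(-0.5 * (y_m / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z_m - h_m) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z_m + h_m) / sigma_z) ** 2))
    return q_g_s / (2.0 * np.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# Illustrative only: 1 g/s release, 3 m/s wind, near-ground receptor 50 m off
# the plume centerline, with assumed dispersion coefficients for roughly 1 km downwind.
c = gaussian_plume(q_g_s=1.0, u_m_s=3.0, y_m=50.0, z_m=1.5,
                   h_m=10.0, sigma_y=70.0, sigma_z=35.0)
print(f"{c * 1e6:.1f} microgram/m^3")
```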
Status of Performance Evaluation – Microscale: Meteorology from NASSCO (sonic) • Preliminary results from data analysis and model performance - ISCST3 results
Status of Performance Evaluation – Microscale: Meteorology from Logan HS (sodar) • Preliminary results from data analysis and model performance - ISCST3 results
Status of Performance Evaluation – Microscale: Meteorology from Lindbergh (NWS data) • Preliminary results from data analysis and model performance - ISCST3 results
Status of Performance Evaluation – Microscale • Preliminary results - comparison of ISCST3 results with observations (selected days/hours)
Status of Performance Evaluation – Microscale • Preliminary results - comparison of ISCST3 results with observations (all days)
Status of Performance Evaluation – Microscale • Preliminary results - comparison of ISCST3 results with observations (all data)
Status of Performance Evaluation – Microscale • Preliminary modeling results: CALPUFF, 08/21/01, 11 a.m.
Status of Performance Evaluation – Microscale • Comparison between observations and predictions of CALPUFF - all data
Status of Performance Evaluation – Microscale • Comparison between observations and predictions of CALPUFF - all data
Status of Performance Evaluation – Microscale • Comparison between observations and predictions of CALPUFF, run-length average
Status of Performance Evaluation – Microscale • Comparison between observations and predictions of CALPUFF (without turbulence profile data) - selected data set, correlation coefficient = 0.747
Status of Performance Evaluation – Microscale • Comparison between observations and predictions of CALPUFF (without turbulence profile data) - selected data set
Status of Performance Evaluation – Microscale
• Comparison between observations and predictions of CALPUFF - two examples of hourly comparison are examined closely:
• one good case: hour 11, 8/21/2001
• one bad case: hour 21, 8/29/2001
• These two examples suggest that wind direction has a controlling effect on the estimated concentrations.
Status of Performance Evaluation – Microscale • CALPUFF modeling results at hour 11, 8/21/2001
Status of Performance Evaluation – Microscale • CALPUFF modeling results at hour 21, 8/29/2001
Status of Performance Evaluation – Microscale • CALPUFF modeling results at hour 21, 8/29/2001, with the wind direction shifted so that the predicted plume lines up with the observed plume (a sensitivity sketch follows)
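The good/bad pair above suggests why wind direction dominates the comparison: a modest wind-direction error can move a narrow predicted plume off the samplers entirely. A hedged sketch of that sensitivity using the same illustrative Gaussian crosswind term as before (all numbers are hypothetical, not CALPUFF inputs):

```python
import numpy as np

def centerline_offset_ratio(wd_error_deg, downwind_m, sigma_y_m):
    """
    Ratio of the concentration at a fixed receptor when the modeled wind direction is
    off by wd_error_deg, relative to a perfectly aligned plume (Gaussian crosswind term).
    """
    # A wind-direction error rotates the plume, displacing it crosswind at the receptor.
    crosswind_offset = downwind_m * np.tan(np.deg2rad(wd_error_deg))
    return np.exp(-0.5 * (crosswind_offset / sigma_y_m) ** 2)

# Illustrative: receptor 1000 m downwind, plume spread sigma_y = 70 m.
for err in (0, 5, 10, 20):
    print(f"{err:2d} deg wind-direction error -> "
          f"{centerline_offset_ratio(err, 1000.0, 70.0):.2f} of the aligned concentration")
```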
Status of Performance Evaluation – Microscale
• Comparison between observations and predictions of CALPUFF (arc-peak extraction is sketched below)
• Comparison between peak concentrations along monitoring arcs (1000 m and 2000 m)
• Comparison between peak concentrations along monitoring arcs (500 m, 1000 m, and 2000 m)
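One common way to make the arc-peak comparison concrete: take the maximum observed and maximum predicted concentration over the samplers on each arc and compare them, which is less sensitive to small plume-position errors than sampler-by-sampler pairing. A minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical one-hour SF6 concentrations (ppt) at the samplers on each monitoring arc.
arcs = {
    500:  {"obs": np.array([12., 40., 61., 38., 10.]), "pred": np.array([5., 55., 70., 30., 4.])},
    1000: {"obs": np.array([ 4., 18., 25., 15.,  3.]), "pred": np.array([2., 22., 30., 12., 1.])},
    2000: {"obs": np.array([ 1.,  6.,  9.,  5.,  1.]), "pred": np.array([1.,  8., 12.,  4., 1.])},
}

for radius_m, data in arcs.items():
    obs_peak, pred_peak = data["obs"].max(), data["pred"].max()
    print(f"{radius_m:4d} m arc: observed peak {obs_peak:.0f} ppt, "
          f"predicted peak {pred_peak:.0f} ppt, ratio {pred_peak / obs_peak:.2f}")
```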
Status of Performance Evaluation – Microscale
• Planning for the winter tracer experiment
• During winter it is more difficult to choose suitable monitoring locations because of the high variability of wind direction
• Several examples are presented to show that the variability of daytime wind direction is higher during winter than during summer (a sketch of the wind-direction statistics follows this list)
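Because wind direction is a circular quantity, its hour-to-hour variability is better summarized with vector statistics (for example the Yamartino estimator) than with an ordinary standard deviation. A minimal sketch, assuming hourly daytime wind directions are available for a summer and a winter period; the sample values below are made up to mimic the contrast described above:

```python
import numpy as np

def wind_direction_stats(wd_deg):
    """Vector-mean wind direction and Yamartino (1984) estimate of its standard deviation (degrees)."""
    rad = np.deg2rad(wd_deg)
    sa, ca = np.sin(rad).mean(), np.cos(rad).mean()
    mean_dir = np.rad2deg(np.arctan2(sa, ca)) % 360.0
    eps = np.sqrt(max(0.0, 1.0 - (sa ** 2 + ca ** 2)))
    sigma = np.rad2deg(np.arcsin(eps) * (1.0 + 0.1547 * eps ** 3))
    return mean_dir, sigma

# Illustrative hourly daytime wind directions (degrees): steady summer sea breeze
# versus highly variable winter flow.
summer = np.array([265., 270., 268., 275., 272., 269., 271., 266.])
winter = np.array([ 90., 140., 310.,  20., 200., 250., 100., 350.])

for label, wd in (("summer", summer), ("winter", winter)):
    mean_dir, sigma = wind_direction_stats(wd)
    print(f"{label}: mean direction {mean_dir:.0f} deg, sigma_theta {sigma:.0f} deg")
```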
Status of Performance Evaluation – Microscale • CALPUFF results for the monthly averages in January (left) and August (right) 2000
Status of Performance Evaluation – Microscale • CALPUFF results: hour 10, 1/18/00 (left), hour 11, 1/18/00 (middle), and hour 14, 1/18/00 (right)
Status of Model Performance Evaluation - Regional
• Defined modeling domain
• Generated 3-dimensional winds and temperatures for 1998 using MM5 for input to CMAQ
• Generated 3-dimensional winds using CALMET for input to UAM
• Development of gridded emissions inventory
• Initial testing of CMAQ to estimate secondary pollutants
• Comparison of CMAQ results with other models and observations
Initial Model Testing: CMAQ Formaldehyde Concentrations [µg/m3]
• CMAQ (1 day, 08/05/97): 2 - 10 (San Diego), 2 - 18 (Los Angeles)
• EPA OZIPR (summer): 8 - 19 (Los Angeles)
• EPA OZIPR (annual avg.): 14.5 (Los Angeles), of which 1.1 primary and 13.4 secondary
• CALINE (annual avg.): 0.1 - 0.2 (Barrio Logan), primary only
• ISCST3 (annual avg.): < 1 (Barrio Logan), primary only
• Observed (annual avg., 1997): 2.9 (San Diego, Chula Vista), 4.5 (Los Angeles, N. Long Beach), 1.4 - 5.5 (Barrio Logan, 1999-2000 monthly averages)
Future Work – MICROSCALE MODELING
• Conduct another SF6 tracer experiment and VOC sampling at Barrio Logan (November 15, 2001 – January 15, 2002)
• Evaluate microscale modeling for summer and wintertime conditions at Barrio Logan
• Recommend models for neighborhood assessment
Future Work – EMISSIONS INVENTORY
• Assess accuracy of emission inventory estimates at Barrio Logan
• Generate a gridded hourly emissions inventory for 1998 for input to CMAQ and UAM
Future Work – REGIONAL MODELING
• Evaluate regional model performance for CMAQ and UAM at hourly, 24-hour, and annual averaging times (an averaging sketch follows this list)
• Predict spatially resolved annual ambient toxic concentrations for southern California
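Evaluating CMAQ and UAM at hourly, 24-hour, and annual averaging times amounts to aggregating the hourly gridded output before computing the comparison statistics. A hedged sketch of that aggregation step, assuming hourly concentrations are held as a (hour, row, column) array; the random data and grid dimensions here are placeholders, not CMAQ or UAM output:

```python
import numpy as np

# Placeholder hourly gridded concentrations: one non-leap year of hours on a small grid.
rng = np.random.default_rng(0)
hourly = rng.gamma(shape=2.0, scale=1.0, size=(8760, 20, 25))   # (hour, row, col)

# 24-hour averages: reshape the time axis into (days, 24) and average over each day.
daily = hourly.reshape(365, 24, 20, 25).mean(axis=1)

# Annual average at every grid cell.
annual = hourly.mean(axis=0)

print(hourly.shape, daily.shape, annual.shape)
```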