Preliminary Analysis of the Chinchilla Blast Overpressure Data
William J. Murphy, Amir Khan, Peter B. Shaw
Hearing Loss Prevention Section, Division of Applied Research and Technology
National Institute for Occupational Safety and Health
September 29, 2008
The results reported in this paper represent the opinions of the authors and are not representative of the policies of the National Institute for Occupational Safety and Health.
Outline
• Summary of Exposures
  • Spectrum
  • Interstimulus Interval
  • Level
• Exposure Metrics
  • MIL-STD-1474D
  • LAeq8hr Unprotected
  • AHAAH Warned & Unwarned
  • Pfander
  • Smoorenburg
Statistical Analysis of BOP Data
• Statistical model for effects threshold
• Linear Mixed Effects Models
• Statistical Fits
• Questions to consider for the analysis
  • Trading Ratios?
  • Log(AHAAH)
  • Frequency Dependency
Statistical Analysis of Chinchilla Data
Applied linear mixed-effects regression models to compare the different metrics.
• Fixed effects: metric, frequency, and log transformations of these variables
• Random effects: subject and exposure code (accounts for the correlated nature of the data within a given subject and exposure code)
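The regression setup above can be sketched in Python. A full analysis would use a mixed-model library (e.g. statsmodels `MixedLM` or R's lme4); the minimal sketch below instead approximates a random subject intercept by within-subject centering, then fits the fixed slope of threshold shift against an exposure metric by least squares. All subject labels, metric values, and threshold shifts here are invented for illustration and are not the chinchilla data.

```python
from collections import defaultdict

# Hypothetical (subject, metric value in dB, threshold shift in dB) records.
# In the real analysis the metric would be LAeq8hr, AHAAH, Pfander, etc.
records = [
    ("s1", 110.0, 12.0), ("s1", 115.0, 18.0), ("s1", 120.0, 25.0),
    ("s2", 110.0, 20.0), ("s2", 115.0, 26.0), ("s2", 120.0, 33.0),
    ("s3", 112.0, 15.0), ("s3", 118.0, 24.0),
]

def demean_by_subject(rows):
    """Remove each subject's mean x and y: a crude stand-in for a
    random subject intercept in a linear mixed-effects model."""
    by_subj = defaultdict(list)
    for subj, x, y in rows:
        by_subj[subj].append((x, y))
    out = []
    for pts in by_subj.values():
        mx = sum(p[0] for p in pts) / len(pts)
        my = sum(p[1] for p in pts) / len(pts)
        out.extend((x - mx, y - my) for x, y in pts)
    return out

def slope_and_sse(pts):
    """Least-squares slope through the origin (data are centered)
    and the resulting sum of squared residuals."""
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    b = sxy / sxx
    sse = sum((y - b * x) ** 2 for x, y in pts)
    return b, sse

pts = demean_by_subject(records)
b, sse = slope_and_sse(pts)
print(f"within-subject slope = {b:.3f} dB shift per dB metric, SSE = {sse:.2f}")
```

Comparing candidate metrics then amounts to refitting with each metric as the predictor and ranking the fits by residual error (or by likelihood, in a proper mixed-model fit).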
Preliminary Findings
• LAeq8hr provides the best fit to the TTS data among the competing metrics.
• Unwarned AHAAH tends to provide the best fit to the PTS data after a log(AHAAH) transformation.
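For context on the LAeq8hr metric: the 8-hour A-weighted equivalent level for N identical impulses follows from the per-impulse sound exposure level (SEL) as LAeq,8h = SEL + 10·log10(N) − 10·log10(28800 s / 1 s). The sketch below computes this relation; the SEL and impulse count are hypothetical values for illustration, not values from the chinchilla exposures.

```python
import math

def laeq_8hr(sel_db, n_impulses):
    """8-hour A-weighted equivalent continuous level from n_impulses
    identical impulses, each with sound exposure level sel_db (dB re 1 s)."""
    t8 = 8 * 3600  # 8 hours in seconds
    return sel_db + 10 * math.log10(n_impulses) - 10 * math.log10(t8)

# Hypothetical example: 100 impulses at SEL = 130 dB
level = laeq_8hr(130.0, 100)
print(f"LAeq8hr = {level:.1f} dB")  # doubling N raises the level by ~3 dB
```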
Discussion Topics
• Utility of providing a transformation of AHAAH estimates.
• Sampling rate questions for all models.
• Frequency dependency for the models.
  • The AHAAH model may have predictive capability for frequency that the other models do not.
• Should we pursue finding the best trading ratios?
  • 10 log(N), 5 log(N), x log(N)
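The trading-ratio question can be made concrete: under an x·log(N) rule, the allowable per-impulse level drops by x·log10(N) dB as the number of impulses N grows, so the 10 log(N) (equal-energy) rule is twice as restrictive per decade of N as the 5 log(N) rule. A minimal sketch, assuming a hypothetical 140 dB single-impulse limit (the limit value is an illustration, not a recommendation):

```python
import math

def allowed_level(single_shot_limit_db, n, x):
    """Per-impulse limit under an x*log10(N) trading rule."""
    return single_shot_limit_db - x * math.log10(n)

limit = 140.0  # hypothetical single-impulse limit, dB
for n in (1, 10, 100):
    ten_log = allowed_level(limit, n, 10)   # equal-energy rule
    five_log = allowed_level(limit, n, 5)   # 5 log(N) rule
    print(f"N={n:4d}: 10logN -> {ten_log:.1f} dB, 5logN -> {five_log:.1f} dB")
```

Fitting x itself from the TTS/PTS data, rather than fixing it at 10 or 5, is one way to frame the "best trading ratio" question statistically.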
Discussion Topics (continued)
• What outcome variable is the most useful here?
• Are we interested in developing a better model to fit the data, or only in comparing the existing models?