
Measurement of Community Preparedness: Metrics and Standards

This research explores the measurement and metrics of community preparedness, including standards, rating scales, and existing indexing systems. It also discusses the importance of measurement, challenges in creating metrics, and examples of practitioner-based rating scales. The role of standards and a breakdown of index methodology are presented, along with examples of indices measuring vulnerability, hazard, risk, emergency response, exposure, and resilience. The presentation concludes with a discussion of measuring cross-community disaster preparedness and resiliency.


Presentation Transcript


  1. Hazard and Preparedness Metrics: research on measurement of community preparedness David M. Simpson, PhD, AICP Matin Katirai, MPH for the Subcommittee on Disaster Reduction (SDR) May 2006

  2. Overview • Measurement and Metrics • Standards and Rating Scales • Examples of Existing Indexing Systems • The CHR project • Conclusions

  3. Why is Measurement Important? Potentially: • We could understand the dynamics of community preparedness in a more complete way • We could allocate resources more efficiently • We could lobby for improvement of preparedness for lagging communities • We could price risk more effectively and accurately

  4. What do we know about indicators? • Some lessons from the history of social indicators (after Innes 1990, and Cobb and Rixford 1998): • Having a number does not mean you have a good indicator • Effective indicators require a clear conceptual basis • There is no such thing as a value-free indicator • Comprehensiveness may be the enemy of effectiveness • The symbolic nature of an indicator may outweigh its value as a literal measure • Don’t confuse indicators with reality • To move toward action, look for indicators that reveal causes, not symptoms

  5. Issues and Challenges in Creating Metrics • Scope and dimensions • Scale/level of detail • Standardization/normalization • Variation by time/location • Weighting • Units of measurement • Validation • Data collection techniques (availability and type, frequency, accuracy)
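As a minimal sketch of the standardization/normalization challenge listed above (the variable names and data are hypothetical, not from the presentation), min-max rescaling is one common way to put indicators measured in different units on a comparable 0-1 scale before weighting:

```python
# Illustrative sketch: min-max normalization rescales raw indicator values,
# measured in different units, onto a common 0-1 range before weighting.
def min_max_normalize(values):
    """Rescale a list of raw indicator values to the 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # a constant indicator carries no information
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical example: population density (people/sq mi) and hospital beds
# per 1,000 people for four communities.
density = [120.0, 850.0, 3400.0, 95.0]
beds_per_1000 = [2.1, 3.4, 1.2, 4.0]

print(min_max_normalize(density))
print(min_max_normalize(beds_per_1000))
```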

  6. Examples of practitioner-based rating scales: • Department of Homeland Security – UASI • Insurance Services Office – Fire Suppression Rating Schedule (FSRS) • National Flood Insurance Program – Community Rating System (CRS) • ISO Building Code Effectiveness Grading Schedule Each has real monetary impact, yet very little information exists on the causal connections between the standards of measurement and effectiveness.

  7. DHS – UASI Risk-based allocation of grant funding:

  8. ISO Public Protection Classification (PPC) system • A community's PPC depends on: • fire alarm and communication systems, including telephone systems, telephone lines, staffing, and dispatching systems • the fire department, including equipment, staffing, training, and geographic distribution of fire companies • the water supply system, including the condition and maintenance of hydrants, and a careful evaluation of the amount of available water compared with the amount needed to suppress fires

  9. CRS points and associated class/reduction in premium
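The table referenced on this slide is not reproduced in the transcript. As a hedged sketch only, assuming the commonly published CRS schedule (roughly 500 credit points per class step and about a 5% premium reduction per class for policies in the Special Flood Hazard Area, with Class 10 earning no discount), the points-to-class mapping works like this:

```python
# Hedged sketch of the CRS points/class/premium relationship; the exact schedule
# is published by FEMA and may differ or change.
def crs_class(points):
    """Map CRS credit points to a class (10 = no credit, 1 = best)."""
    if points < 500:
        return 10
    return max(1, 10 - points // 500)

def sfha_discount(cls):
    """Approximate Special Flood Hazard Area premium reduction for a CRS class."""
    return 0.0 if cls == 10 else (10 - cls) * 0.05

for pts in (450, 1200, 3100, 4700):
    c = crs_class(pts)
    print(f"{pts} points -> Class {c} ({sfha_discount(c):.0%} discount)")
```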

  10. The Role of Standards • The State Capability Assessment for Readiness • NFPA 1600 • Emergency Management Accreditation Program (EMAP)

  11. Breakdown of Indices Methodology • Composite indices are additive or multiplicative • Each component that goes into an index usually comprises several subcomponents that are weighted differently • Weights are set subjectively by the index author, or derived from surveys completed by experts in the field • In many instances variables are selected based on data availability; a more comprehensive approach is to determine what factors encapsulate the item being measured and then select variables for the index • Other techniques for variable selection include principal component analysis or multiple regression analysis to identify the most significant variables to enter into the index
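To illustrate the principal-component route to variable selection mentioned above (a sketch only; the data and variable set are hypothetical and not from the CHR project), the loadings on the leading component can rank candidate variables by how much variance they carry:

```python
# Illustrative PCA-based variable screening for an index (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
# Rows = communities, columns = candidate variables
# (e.g. population density, mobile homes, hospital beds, shelters).
X = rng.normal(size=(50, 4))

Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
cov = np.cov(Xz, rowvar=False)              # covariance matrix of standardized data
eigvals, eigvecs = np.linalg.eigh(cov)      # eigendecomposition (ascending order)
first_pc = eigvecs[:, -1]                   # loadings on the leading component

print("Share of variance on first PC:", eigvals[-1] / eigvals.sum())
# Variables with the largest absolute loadings are candidates to retain.
print("Variable ranking by first-PC loading:", np.argsort(-np.abs(first_pc)))
```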

  12. Breakdown of Indices Formula • Index_N = A + B + C or Index_N = A × B × C, where a component such as A might be weighted as A = 0.5a1 + 0.25a2 + 0.25a3 (additive) or A = a1^3 · a2^2 · a3^2 (multiplicative) • Items measured: varies depending on what the index measures; the most common include vulnerability, hazard, risk, emergency response, exposure, and resilience. Each item that is measured is operationalized through variables • Variables: examples include population, population density, # of housing units, average sea temperature, # of deaths, # of emergency shelters, # of hospital beds per 1,000 people, % of deforested land, dependency ratio, # of water treatment plants, # of trained personnel, # of mobile homes, etc.
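A minimal sketch of the additive and multiplicative composites from this slide, using the slide's example weights for component A; the input scores are hypothetical normalized (0-1) values:

```python
# Component A built from three weighted variables, as on the slide.
def component_additive(a1, a2, a3):
    return 0.5 * a1 + 0.25 * a2 + 0.25 * a3

def component_multiplicative(a1, a2, a3):
    return (a1 ** 3) * (a2 ** 2) * (a3 ** 2)

# The overall index combines components A, B, C additively or multiplicatively.
def index_additive(A, B, C):
    return A + B + C

def index_multiplicative(A, B, C):
    return A * B * C

A = component_additive(0.6, 0.4, 0.8)        # hypothetical variable scores
print("A =", A)
print("Additive index:", index_additive(A, 0.5, 0.7))
print("Multiplicative index:", index_multiplicative(A, 0.5, 0.7))
```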

  13. Breakdown of Indices Scope • Measure: earthquakes, floods, hurricanes, drought, environmental degradation • Scale of measurement: ranges from very large to very small; indices can measure from the continental and national level down to the city and county scale, and smaller units of analysis, such as census tracts, are also possible using Geographic Information Systems • Examples of indices: Urban Earthquake Disaster Risk Index (Davidson); Environmental Vulnerability Index (South Pacific Applied Geoscience Commission); Disaster Risk Index (United Nations Development Programme); Hurricane Disaster Risk Index (Lambert & Davidson); Risk Management Index (Inter-American Development Bank, Cardona); Social Vulnerability Index (Cutter, Mitchell, Scott)

  14. Application – Urban Earthquake Risk

  15. Application – Environmental Vulnerability

  16. Application – Disaster Risk (U.N.)

  17. Application – Hurricane Disaster Risk

  18. Application – Risk Management Index

  19. The Measurement Project at CHR • Measuring Cross-Community Disaster Preparedness and Resiliency: Theoretical and Practical Application Development • Funding: National Science Foundation, CMS 0408856 • Time frame: three years, ending January 2008 • Amount: $300K • Completion: ~50%

  20. Project Synopsis • Research Objectives: • Explore measurement of “preparedness” and “resilience” at the community scale • Develop set of characteristics to be standardized and measured • Use expert panels to weight model • Apply to pilot set of communities • Reevaluate and refine for wider application

  21. Research Design 1) comprehensive examination of the formulation of concepts of community preparedness, vulnerability and resiliency through literature review, practitioner reports, and similar document analysis; 2) expansion/adjustment of the pilot model developed and applied in a two-city test; 3) utilization of an expert panel to adjust, refine, and assign attribute rating scales and weighting; 4) application to 15 selected cities through data collection (combination of secondary data, site visits and in-person interviews); and 5) data compilation and analysis, using GIS as the basic data frame

  22. Conceptualizing Community Preparedness Measurement:

  23. Progress and Products (working papers available from the Center): • Literature review complete: Annotated Disaster and Planning Bibliography (Franke and Simpson, 2005); Measurement Indicators in Hazards: A Topical Bibliography (Simpson and Katirai, 2006) • Analysis of issues in rating systems complete: Comparison of Rating Systems in Hazards (Franke and Simpson, 2005) • Analysis of existing index systems complete: Comparison of Hazard Indices (Katirai and Simpson, draft in process)

  24. On the hopeful side: • More powerful computing, faster processing, improving software • Use of Geographic Information Systems (GIS) • Return of large-scale urban modeling • Currently a receptive policy environment On the cautionary side: • Potential for indices to be created with particular agendas, without an open and debatable (transparent) process • Data collection could be based on cost and convenience, rather than usefulness and validity • Interest in developing more comprehensive metrics will fade as the last big disaster fades

  25. Some criteria for success: • Use data sources that are not “self-report” (secondary data) • Build on existing activity and interest groups; use interactive processes to create socially constructed and consensus-based metrics • Seek metrics with associated data that are reasonable to collect and can become standardized • Find channels to institutionalize and routinize the metrics

  26. Working papers are available for download at our Center website: http://hazardcenter.louisville.edu Thank you for the opportunity to discuss this research. Questions?
