Assessing Patient Safety through Administrative Data: Adapting and Improving Existing Systems

Presentation Transcript


  1. Assessing Patient Safety through Administrative Data: Adapting and Improving Existing Systems. Patrick S. Romano, MD, MPH, Professor of Medicine and Pediatrics, UC Davis School of Medicine, Sacramento, CA, USA. June 29, 2006

  2. Acknowledgments Support for Quality Indicators II (Contract No. 290-04-0020) • Mamatha Pancholi, AHRQ Project Officer • Marybeth Farquhar, AHRQ QI Senior Advisor • Mark Gritz and Jeffrey Geppert, Project Directors, Battelle Health and Life Sciences • Kathryn McDonald (PI) and Sheryl Davies (project manager), Stanford University • Other clinical team members: Douglas Payne (medicine), Garth Utter (surgery), Shagufta Yasmeen (obstetrics & gynecology), Corinna Haberland (pediatrics), Banafsheh Sadeghi (research assistant)

  3. Overview • General approaches to assessing inpatient safety • Rationale for using administrative data: strengths and limitations • Background about the AHRQ Quality Indicators program • Development and maintenance of the AHRQ Patient Safety Indicators (PSIs) • OECD international expert panel review • International interest in the AHRQ PSIs • Practical issues associated with international application of the AHRQ PSIs

  4. Taxonomy of patient safety measures Zhan et al., Med Care 2005;43:I42-I47

  5. General approaches to assessing inpatient safety • Analyze administrative data (adverse events, selected types of medical errors) • Review medical records (adverse events, selected types of medical errors) • Collect confidential provider reports of “incidents” or “safety events” (passive surveillance of medical errors or near misses) • Conduct active surveillance or real-time observation (same) • Survey patients • Survey employees or managers on organizational capabilities or climate (“culture of safety”)

  6. Ethnographic observation to identify adverse events and errors • “Ethnographers trained in qualitative observational research attended regularly scheduled attending rounds, resident work rounds, nursing shift changes, case conferences, and other scheduled meetings” (e.g., M&M conferences, QA meetings) on 3 units at one teaching hospital • 480 of 1047 patients (46%) experienced a mean of 4.5 events • Andrews LB, et al. Lancet 1997;349:309-13

  7. Video recording to identify errors in pediatric trauma resuscitation • Mean of 5.9 errors per resuscitation, with 93% agreement between 2 reviewers • Mean of 2.2 errors per seriously injured child, with 20% captured in medical records • Oakley E, et al. Pediatrics 2006;117:658-664

  8. Rationale for using administrative data • Opportunities: data availability improving; coding systems and practices improving; large data sets optimize precision; comprehensiveness (all hospitals, all payers) avoids sampling/selection bias; data are used for other purposes, so they are subject to auditing and monitoring • Limitations: limited/no information on processes of care and physiologic measures of severity; limited/no information on timing (comorbidities vs. adverse events); heterogeneous severity within some ICD codes; accuracy depends on documentation and coding; data are used for other purposes, so they are subject to gaming; time lag limits usefulness

  9. AHRQ Quality Indicators (QIs) • Developed through contracts with UC-Stanford Evidence-based Practice Center • Use existing hospital discharge data, based on readily available data elements • Incorporate severity adjustment methods (APR-DRGs, comorbidity groupings) when possible • Offer free, downloadable software (SAS, Windows) with documentation, biennial updates, and user support through listserve, newsletters, national meetings, web seminars, e-mail system • User feedback drives continuous improvement

  10. AHRQ Quality Indicators • Inpatient QIs: mortality, utilization, volume • Prevention QIs (area level): avoidable hospitalizations, other avoidable conditions • Patient Safety Indicators: complications, failure-to-rescue, unexpected death • Pediatric QIs

  11. Structure of indicators • Definitions based on ICD-9-CM diagnosis and procedure codes, with inclusion/exclusion criteria based upon DRGs, sex, age, procedure dates, and admission type • Numerator = number of cases “flagged” with the complication or situation of interest (e.g., postoperative sepsis, avoidable hospitalization for asthma, death) • Denominator = number of patients considered to be at risk for that complication or situation (e.g., elective surgical patients, county population from census data) • Indicator “rate” = numerator/denominator
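
A minimal sketch of this numerator/denominator logic, assuming hypothetical record fields (drg, elective, secondary_diagnoses) and illustrative code lists rather than the actual AHRQ specifications:

    # Illustrative only: flag codes and excluded DRGs are placeholders,
    # not the published PSI definitions.
    FLAG_CODES = {"9985", "99859"}      # e.g., postoperative sepsis codes (hypothetical)
    EXCLUDED_DRGS = {"020", "021"}      # e.g., DRGs excluded from the denominator (hypothetical)

    def in_denominator(rec):
        """Patient at risk: elective surgical admission, not in an excluded DRG."""
        return rec["elective"] and rec["drg"] not in EXCLUDED_DRGS

    def in_numerator(rec):
        """Case 'flagged': any secondary diagnosis matches a flag code."""
        return any(dx in FLAG_CODES for dx in rec["secondary_diagnoses"])

    def indicator_rate(discharges):
        """Indicator rate = flagged cases / patients at risk."""
        at_risk = [r for r in discharges if in_denominator(r)]
        flagged = [r for r in at_risk if in_numerator(r)]
        return len(flagged) / len(at_risk) if at_risk else float("nan")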

  12. AHRQ QI development: General process • Literature review (all) • To identify quality concepts and potential indicators • To find previous work on indicator validity • ICD-9-CM coding review (all) • To ensure correspondence between clinical concept and coding practice • Clinical panel reviews (PSIs, pediatric QIs) • To refine indicator definition and risk groupings • To establish face validity when minimal literature • Empirical analyses (all) • To explore alternative definitions • To assess nationwide rates, hospital variation, relationships among indicators • To develop methods to account for differences in risk

  13. Literature review to find candidate PSI indicators • MEDLINE/EMBASE search guided by medical librarians at Stanford and NCPCRD (UK) • Few examples described in peer reviewed journals • Iezzoni et al.’s Complications Screening Program (CSP) • Miller et al.’s Patient Safety Indicators • Review of ICD-9-CM code book • Codes from above sources were grouped into clinically coherent indicators with appropriate denominators

  14. Coding (criterion) validity based on literature review (MEDLINE/EMBASE) • Validation studies of Iezzoni et al.’s CSP • At least one of three validation studies (coders, nurses, or physicians) confirmed PPV at least 70% among flagged cases • Nurse-identified process-of-care failures were more prevalent among flagged cases than among unflagged controls • Other studies of coding validity • Very few in peer-reviewed journals, some in “gray literature”
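
As a worked illustration of the PPV criterion above (the share of flagged cases confirmed on chart review), using invented counts:

    flagged_cases = 120        # discharges flagged by an indicator (illustrative)
    confirmed = 90             # flagged cases confirmed as true events on chart review
    ppv = confirmed / flagged_cases
    print(f"PPV = {ppv:.0%}")  # 75%, above the 70% threshold cited above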

  15. Construct validity based on literature review (MEDLINE/EMBASE) • Approaches to assessing construct validity • Is the outcome indicator associated with explicit processes of care (e.g., appropriate use of medications)? • Is the outcome indicator associated with implicit process of care (e.g., global ratings of quality)? • Is the outcome indicator associated with nurse staffing or skill mix, physician skill mix, or other aspects of hospital structure?

  16. ICD-9-CM coding consultant review • All definitions were reviewed by an expert coding consultant from the American Health Information Management Association, with special attention to prior coding guidelines • The central staff responsible for maintaining ICD-9-CM were queried as necessary • Definitions were refined as appropriate

  17. Face validity: Clinical panel review • Intended to establish consensual validity • Modified RAND/UCLA Appropriateness Method • Physicians of various specialties and subspecialties, nurses, other specialized professionals (e.g., midwife, pharmacist) • Potential indicators were rated by 8 multispecialty panels; surgical indicators were also rated by 3 surgical panels

  18. Face validity: Clinical panel review (cont’d) • All panelists rated all assigned indicators on: • Overall usefulness • Likelihood of identifying the occurrence of an adverse event or complication (i.e., not present at admission) • Likelihood of being preventable (i.e., not an expected result of underlying conditions) • Likelihood of being due to medical error or negligence (i.e., not just lack of ideal or perfect care) • Likelihood of being clearly charted • Extent to which indicator is subject to case mix bias

  19. Evaluation framework (indicators evaluated along a continuum from medical error to nonpreventable complications) • Pre-conference ratings and comments/suggestions • Individual ratings returned to panelists with the distribution of ratings and other panelists’ comments/suggestions • Telephone conference call moderated by the PI and attended by a note-taker, focusing on high-variability items and panelists’ suggestions (90-120 mins) • Suggestions adopted only by consensus • Post-conference ratings and comments/suggestions

  20. Example reviews by multispecialty panels: ratings of Postoperative Pneumonia and Decubitus Ulcer on six criteria (overall rating, not present on admission, preventability, due to medical error, charting by physicians, and not biased by case mix)

  21. Final selection of indicators • Retained indicators for which the “overall usefulness” rating was “Acceptable” or “Acceptable-”: median score 7-9, with definite or indeterminate agreement • Excluded indicators rated “Unclear,” “Unclear-,” or “Unacceptable”: median score <7, or at least 2 panelists rating the indicator in each of the extreme 3-point ranges
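
A minimal sketch of this retention rule, assuming ratings on the usual 1-9 RAND/UCLA scale; the agreement definitions are simplified here, not the exact published criteria:

    from statistics import median

    def panel_disposition(ratings):
        """Return 'retain' or 'exclude' for one indicator's overall-usefulness ratings."""
        med = median(ratings)
        low_extreme = sum(1 for r in ratings if 1 <= r <= 3)
        high_extreme = sum(1 for r in ratings if 7 <= r <= 9)
        disagreement = low_extreme >= 2 and high_extreme >= 2  # 2+ panelists in each extreme range
        if med >= 7 and not disagreement:
            return "retain"    # roughly "Acceptable" or "Acceptable-"
        return "exclude"       # median < 7, or extreme disagreement

    print(panel_disposition([8, 7, 9, 7, 8, 6, 7, 8]))  # retain
    print(panel_disposition([8, 2, 9, 3, 8, 2, 7, 8]))  # exclude (disagreement)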

  22. Candidate PSIs reviewed • 48 indicators reviewed in total • 37 reviewed by multispecialty panel • 15 of those reviewed by surgical panel • 20 “accepted” based on face validity • 2 dropped due to operational concerns • 17 “experimental” or promising indicators • 11 rejected

  23. “Accepted” PSIs • Selected postoperative complications: postoperative thromboembolism; postoperative respiratory failure; postoperative sepsis; postoperative physiologic and metabolic derangements; postoperative abdominopelvic wound dehiscence; postoperative hip fracture; postoperative hemorrhage or hematoma • Selected technical adverse events: decubitus ulcer; selected infections due to medical care • Technical difficulty with procedures: iatrogenic pneumothorax; accidental puncture or laceration; foreign body left in during procedure • Other: complications of anesthesia; death in low-mortality DRGs; failure to rescue; transfusion reaction (ABO/Rh) • Obstetric trauma and birth trauma: birth trauma – injury to neonate; obstetric trauma – vaginal delivery with instrument; obstetric trauma – vaginal delivery without instrument; obstetric trauma – cesarean section delivery

  24. Pediatric Quality Indicators • Inpatient Indicators • Accidental puncture and laceration • Decubitus ulcer • Foreign body left in after procedure • Iatrogenic pneumothorax in neonates at risk • Iatrogenic pneumothorax in non-neonates • Pediatric heart surgery mortality • Pediatric heart surgery volume • Postoperative hemorrhage or hematoma • Postoperative respiratory failure • Postoperative sepsis • Postoperative wound dehiscence due to medical care • Transfusion reaction

  25. PSI risk adjustment methods • Must rely only on administrative data • APR-DRGs and other off-the-shelf packages may inadvertently adjust for the complications themselves • Final model: DRGs (complication DRGs aggregated); modified Comorbidity Index based on the list developed by Elixhauser et al. (completely redesigned for the Pediatric QIs); age, sex, and age-sex interactions
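
A hedged sketch of how a risk-adjusted rate could be computed by indirect standardization from such a case-level model; the predicted_risk() weights below are invented for illustration and are not the AHRQ coefficients:

    def predicted_risk(rec, reference_rate=0.01):
        """Toy case-level risk: scale the reference rate by age and comorbidity count."""
        risk = reference_rate
        risk *= 1.5 if rec["age"] >= 75 else 1.0
        risk *= 1.0 + 0.2 * len(rec["comorbidities"])
        return min(risk, 1.0)

    def risk_adjusted_rate(discharges, reference_rate=0.01):
        """Indirect standardization: (observed / expected) x reference rate."""
        observed = sum(1 for rec in discharges if rec["flagged"])
        expected = sum(predicted_risk(rec, reference_rate) for rec in discharges)
        return (observed / expected) * reference_rate if expected else float("nan")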

  26. Pediatric QI Risk Adjustment • Reason for admission/type of procedure • DRGs (with/without CC collapsed) • Other (e.g., diagnostic/therapeutic procedure categories for accidental injury) • Comorbidity • Special pediatric-oriented comorbidity list • Gender, age groups • <29 d, 29-60 d, 61-90 d, 91-365 d, 1-2 yrs, 3-5 yrs, 6-12 yrs, 13-17 yrs • Low birth weight categories (neonates) • 500 gram categories (500-2500 g)
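
A small sketch of the age and birth-weight groupings listed above; the cut points follow the slide, but the function names, day-based boundaries, and boundary handling are illustrative assumptions:

    def age_group(age_days):
        """Map age in days to the pediatric risk-adjustment age groups on the slide."""
        cuts = [(29, "<29 d"), (61, "29-60 d"), (91, "61-90 d"), (366, "91-365 d"),
                (3 * 365, "1-2 yrs"), (6 * 365, "3-5 yrs"),
                (13 * 365, "6-12 yrs"), (18 * 365, "13-17 yrs")]
        for upper, label in cuts:
            if age_days < upper:
                return label
        return "18 yrs or older"

    def birth_weight_category(grams):
        """500-gram birth weight categories for neonates (500-2500 g)."""
        if grams < 500 or grams > 2500:
            return "not categorized"
        lower = min((grams // 500) * 500, 2000)  # keep 2500 g in the 2000-2500 band
        return f"{lower}-{lower + 500} g"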

  27. OECD Health Care Quality Indicators Project • Includes 21 countries, WHO, European Commission, World Bank, ISQua, etc. • Five priority areas • Cardiac care • Diabetes mellitus • Mental health • Patient safety • Prevention/health promotion and primary care

  28. OECD Indicator Selection Criteria • Importance • Impact on health • Policy importance (concern for policymakers and consumers) • Susceptibility to being influenced by the health care system • Scientific soundness • Face validity (clinical rationale and past usage) • Content validity • Feasibility • Data availability on the international level • Reporting burden

  29. OECD Review Process • Patient safety panel constituted with 5 members (Dr. John Millar, Chair) • 59 indicators from 7 sources submitted for review (US, Canada, Australia) • Modified RAND/UCLA Appropriateness Method • Panelists rated each indicator on importance and scientific soundness (2 rounds with intervening discussion) • Retained 21 indicators with median score >7 (scale 1-9) on both domains; rejected indicators with median score ≤5 on either domain

  30. OECD expert panel ratings of PSIs

  31. AHRQ panel ratings of PSI “preventability” very similar to OECD ratings a Panel ratings were based on definitions different from the final definitions. For “Iatrogenic pneumothorax,” the rated denominator was restricted to patients receiving thoracentesis or central lines; the final definition expands the denominator to all patients (with the same exclusions). For “In-hospital fracture,” panelists rated the broader Experimental indicator, which was replaced in the Accepted set by “Postoperative hip fracture” due to operational concerns. b Vascular complications were rated as Unclear (-) by the surgical panel; the multispecialty panel rating is shown here.

  32. US rates of OECD-endorsed PSIs

  33. Primary uses of the AHRQ PSIs • Internal hospital quality improvement • Individual hospitals and health care systems, hospital associations • Trigger case finding, root cause analyses, identification of clusters • Evaluate impact of local interventions • Monitor performance over time • External hospital accountability to the community • National, State and regional analyses • National Healthcare Quality/Disparities Reports • Surveillance of trends over time • Disparities across areas, SES strata, ethnicities

  34. Relative change from 1999-2000 to 2002-2003 in observed and risk-adjusted AHRQ PSI rates

  35. Newer uses of the AHRQ PSIs • Testing research hypotheses related to patient safety • Housestaff work hours reform • Nurse staffing regulation • Public reporting by hospital • Texas, New York, Colorado, Oregon, Massachusetts, Wisconsin, Florida, Utah • Pay-for-performance by hospital • CMS/Premier Demonstration (278 hospitals, focus on 2 postop events after THA/TKA) • Anthem of Virginia (focus on monitoring any two) • Hospital profiling • Blue Cross/Blue Shield of Illinois

  36. International inquiries regarding the AHRQ QIs

  37. International inquiries regarding the AHRQ QIs

  38. Practical issues in international implementation of AHRQ PSIs • ICD-9-CM to ICD-10 conversion • Entirely different coding structure • Three new chapters • 12,420 codes versus 6,969 • Nation-specific versions (CA, AU, GM) • No internationally accepted coding system for procedures
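
A minimal sketch of mapping a PSI code list through an ICD-9-CM to ICD-10 crosswalk; the crosswalk entries below are invented placeholders, and real conversions (such as the ICD-10-GM mapping discussed later) require clinical and coding review because mappings are often one-to-many or lossy:

    # Hypothetical crosswalk entries for illustration only.
    CROSSWALK = {
        "9984": ["T81.3"],            # e.g., postoperative wound dehiscence (assumed)
        "99859": ["T81.4", "T88.8"],  # example of a one-to-many mapping (assumed)
    }

    def convert_code_list(icd9_codes):
        """Translate ICD-9-CM codes, reporting any codes without a mapping."""
        mapped, unmapped = set(), []
        for code in icd9_codes:
            targets = CROSSWALK.get(code)
            if targets:
                mapped.update(targets)
            else:
                unmapped.append(code)
        return sorted(mapped), unmapped

    icd10_codes, missing = convert_code_list(["9984", "99859", "9985"])
    print(icd10_codes, missing)   # ['T81.3', 'T81.4', 'T88.8'] ['9985']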

  39. Practical issues in international implementation of AHRQ PSIs • Variation in documentation and coding practices • Variation in other data definitions • Principal versus primary diagnosis • Number of diagnosis codes • Procedure dates • External cause of injury codes • Type of admission (elective, urgent, emergency) • Variation in how administrative data are collected and used • DRG-based payment versus global budgeting versus service-based payment

  40. Coding of secondary diagnoses in the USA • For reporting purposes the definition for "other diagnoses" is interpreted as additional conditions that affect patient care in terms of requiring: clinical evaluation; or therapeutic treatment; or diagnostic procedures; or extended length of hospital stay; or increased nursing care and/or monitoring. • “All conditions that occur following surgery…are not complications… there must be more than a routinely expected condition or occurrence… there must be a cause-and-effect relationship between the care provided and the condition…”

  41. ICD-9-CM Coding: Procedures • Coding of procedures “The UHDDS requires all significant procedures to be reported… A significant procedure is defined as one that meets any of the following conditions: Is surgical in nature Carries an anesthetic risk Carries a procedural risk Requires specialized training.” What about central venous catheters?

  42. International initiatives • Conversion efforts are underway, but need to be coordinated internationally • Undertake detailed meta-analysis of national data systems • Review international variation in coding rules and procedures • Improve data systems (e.g., “present at admission” coding in USA) and develop data on accuracy • Prioritize indicators based on likelihood of international comparability

  43. International collaborative meeting of health services researchers using administrative data Calgary, Alberta, June 2005; supported by CIHR; forthcoming in BMC HSR

  44. Conversion of Elixhauser comorbidity list from ICD-9-CM to ICD-10, ICD-10-CA Quan H, et al., reported at AcademyHealth 2006

  45. German mapping of PSIs from ICD-9-CM to ICD-10-GM Saskia E. Droesler and Juergen Stausberg

  46. PSI incidence comparison: Germany vs. USA (German population rates, 2004, versus US population rates, 2002, plotted on log scales)

  47. Developing data on accuracy and relevance: AHRQ PSIs in Children’s Hospitals. Sedman A, et al. Pediatrics 2005;115(1):135-145
