A Consensus on Measuring Quality in Emergency Medicine? Peter Cameron, President International Federation of Emergency Medicine Director, Centre for Research Excellence in Patient Safety Monash University
Why do we need to measure? • Without measurement we don't know whether there is a problem • whether an intervention is making a difference • whether we are better or worse than our peers • or where to focus effort for improvement • HAVE ALL THE EFFORTS TO IMPROVE QUALITY MADE ANY DIFFERENCE?
Do we know what a high-performing Emergency Department is? • There is virtually no agreement among the various measures • None appears better than opinion • Have we progressed since the 19th century?
Despite the "gloom" regarding credible measurement • There are many quality initiatives around the world • Big variations in emphasis • Revenue raising vs. meeting community demand • Time targets • Audits… • BUT • Many common themes
IFEM • The only global umbrella organisation for emergency medicine • Given the importance of quality and safety, IFEM's involvement is essential • A global consensus would be very powerful…
Framework for measuring quality and safety • Structure • i.e. what physical and human resources are available, and how the health service is organised • Usually measured by accreditation • Process • How the health service functions • Easiest to measure – but not always related to outcome • Can distract from the real issues – but… • Outcome • This is fundamental • Yet we have little risk-adjusted data on this
Dimensions of quality • Access • Interminable measures • Safety? • Effectiveness? • Efficiency • Hard to measure when outcomes are unknown • Appropriateness? • Acceptability • Patient satisfaction surveys?
Data can drive change • Witness • "Breakthrough" collaboratives • PDSA cycles • Both rely on data • Other industries • e.g. the road toll…
What is worse than no data? • Bad data… • Commonly stated – "just collect SOME data – at least it will get them moving…" (but in which direction?) • Data must be fit for purpose • Data can be used to screen/monitor populations • If data are to be used for benchmarking and engaging clinicians, they must be credible
Bad data • Disengages clinicians • Damages reputations • Misinforms policy makers and the community • Distorts activity • Provides perverse incentives • Distorts funding • GOOD DATA CAN ALSO BE USED BADLY…
How do we get good data? • Good data are fit for purpose • You must know why you are collecting the data • i.e. what is the question?
What sources are there for measurement? • Case sampling • Unit audits/chart reviews, etc. • Routine databases • Incident monitoring • Mortality reviews – preventable deaths • Emergency Department/admissions data • Registries • Prospective cohorts/trials • Other? • e.g. video audit (Fitzgerald, Trauma reception project, Arch Surg 2011)
Random chart audits • Can be used for specific projects • Checking compliance • Less value for benchmarking • Bias • Hindsight bias is very hard to avoid • Not rigorous data collection • High staff cost • Good for identifying issues • But you don't know whether you have improved treatment • Or how you compare • The Harvard (Brennan) and Australian (Wilson) studies on medical error • Limited usefulness for comparisons
Routinely collected data • Convenient • Cheap • But • What if it is misleading?
How has routinely collected data been used? • Usually based on admissions data • HSMR (hospital standardised mortality ratio) • Largely discredited at the hospital level • Clinical indicators • Little validation and little credibility • Presently being implemented in Australia… • Screening • Limited event screening • e.g. return to theatre • e.g. readmission • Low-mortality DRGs • May cause more work than benefit • Population monitoring • Can be useful for overall trends • But beware
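To make the HSMR concrete, here is a minimal sketch of how such a ratio is computed: observed deaths divided by the deaths "expected" under a case-mix risk model, scaled to 100. The risk figures below are invented for illustration; real HSMRs use logistic regression models fitted on national admissions data, which is precisely where the credibility problems arise.

```python
def hsmr(observed_deaths, predicted_risks):
    """Hospital standardised mortality ratio: observed deaths over the
    sum of per-admission predicted death probabilities, scaled to 100."""
    expected_deaths = sum(predicted_risks)
    return 100 * observed_deaths / expected_deaths

# Hypothetical example: 100 admissions, each with a 25% predicted
# mortality, so the case-mix model expects 25 deaths; 30 were observed.
risks = [0.25] * 100
print(hsmr(30, risks))  # 120.0 -> 20% more deaths than expected
```

A value above 100 flags "excess" mortality, but only relative to the risk model; if coding practices or case mix differ between hospitals, the same calculation produces the misleading comparisons criticised above.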
Routinely collected data • Can be made more useful with linkage • e.g. laboratory data • e.g. deaths registry • Triangulation • But beware issues of data interpretation and privacy
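As a sketch of what linkage to a deaths registry means in practice: each ED record is matched to the registry on a shared identifier, attaching an outcome (date of death, if any) that the ED data alone cannot provide. The field names and identifier here are hypothetical; real linkage is usually probabilistic and tightly privacy-controlled.

```python
# Hypothetical ED attendance records
ed_visits = [
    {"patient_id": "A1", "triage_category": 2},
    {"patient_id": "B2", "triage_category": 4},
]

# Hypothetical deaths registry: patient_id -> date of death
deaths_registry = {"A1": "2011-03-04"}

# Deterministic linkage: attach the death date (or None) to each ED record
linked = [
    {**visit, "date_of_death": deaths_registry.get(visit["patient_id"])}
    for visit in ed_visits
]
print(linked[0]["date_of_death"])  # 2011-03-04
print(linked[1]["date_of_death"])  # None
```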
Incident monitoring • Good for identifying issues • Should not be used for quantitative data • Numbers depend on the degree to which events are reported • NOT on their absolute frequency… • Sentinel events? • Not a way to measure health systems • Rare and, by definition, extreme • Role of RCAs (root cause analyses)?
Mortality reviews • Preventable deaths • Good for identifying issues • Should not be used quantitatively • As in trauma… • Death audits – shown to engage clinicians • But may divert attention from common and important issues • Remember – most "bad" medicine does not result in death
Registries • Australia is intending to commence a national portal for clinical quality registries • Registries are important for measuring whether processes and outcomes are improving
Background and rationale – how a registry works • Hospitals 1, 2, 3, 4, etc. submit data for central data collation • Identical collection methods, identical definitions • Overseen by a governance process with quality control • Enabling systematic outcome assessment
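The collation step above can be sketched in a few lines: each hospital submits records against one shared schema, and the registry's quality control rejects anything that does not conform, so "identical definitions" is enforced rather than assumed. The schema and field names are hypothetical.

```python
# Shared schema: every participating hospital must use exactly these fields
SCHEMA = {"hospital", "patient_id", "procedure", "died_in_hospital"}

def collate(*hospital_batches):
    """Pool records from all hospitals into one central dataset,
    rejecting any record that does not match the shared definitions."""
    central = []
    for batch in hospital_batches:
        for record in batch:
            if set(record) != SCHEMA:  # quality control gate
                raise ValueError(f"non-conforming record: {record}")
            central.append(record)
    return central

h1 = [{"hospital": 1, "patient_id": "A1", "procedure": "PCI", "died_in_hospital": False}]
h2 = [{"hospital": 2, "patient_id": "B2", "procedure": "PCI", "died_in_hospital": True}]
print(len(collate(h1, h2)))  # 2
```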
Application of clinical registries • Expensive to develop & maintain • Principal rationale is outcome improvement • Best limited to • high-cost, high-significance procedures • known variation in outcomes or practice • an economic case for improved outcomes • In practice • clinical procedures • rare and/or acute illness • drugs & devices
Value of clinical registries • Ancillary • credentialing • compliance with guidelines • facilitating clinical research and clinical trials • If population based • access to care • monitoring disease incidence or trends in practice • Fundamental • information to improve outcomes, especially: • identification & exploration of clinical variation • benchmarking & quality improvement • long-term safety monitoring (drugs and devices)
Benefits from a Registry • Monitoring • Access to care • Appropriateness of pre & post-hospital care • Quality of care • Benchmarking • Improve outcomes by stimulating competition • Identify variation in outcome (& explore ways to improve) • Safety • Determine medium and long-term safety of new procedures and devices • Cost Benefits • Reduction in costs associated with morbidity and mortality • Platform for Research
The economic case • Cardiac angioplasty and stenting • Known variation in outcome • Poor results may lead to death and chronic disability (cardiac failure, angina) • A registry provides the capacity to benchmark and improve outcomes • Renal transplantation • Poor transplantation outcomes lead to increased dialysis • Cost difference: transplantation ~$10K p.a. vs dialysis ~$50K p.a. • The ANZDATA registry has contributed substantially to improved outcomes of renal transplantation
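The renal arithmetic above is simple but worth making explicit: every transplant failure avoided saves the recurring difference between the two treatment costs, which is what funds the registry many times over. The figures are the approximate per-annum costs quoted on the slide.

```python
transplant_cost = 10_000  # functioning transplant, per patient per year
dialysis_cost = 50_000    # dialysis, per patient per year

# Each transplant failure avoided saves this difference every year
annual_saving_per_avoided_failure = dialysis_cost - transplant_cost
print(annual_saving_per_avoided_failure)  # 40000
```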
Victorian Trauma System • Appropriate patients to appropriate hospitals • 30% reduction in in-hospital mortality over 5 years
Key features of registry data • Minimal • Epidemiologically sound • Prospective • "All or none", i.e. no cherry-picking • Linkable* • Identifiable* • *When needed for determination of delayed outcomes • Registries are a data spine: additional data may be sought from limited samples over limited time, for specific additional studies to answer specific questions
Engaging the clinical community • Need to demonstrate the registry’s ability to improve patient care • Publish methodology/findings • Where possible, integrate into clinical practice • Good governance structures • Data should be reversibly anonymised • Incentives • Funded forums to discuss outcomes/research stemming from registries
Other credible data • Registries cover only a small part of our practice • Trauma/cardiac/stroke… • Systematically collected audits? • Very important for generic issues such as pain relief, hand washing, etc.
Conclusion • Qualitative data sources are useful for driving change • Mortality reviews • Incident reports • Sentinel events • Certain processes can be measured using routine data sources • Accurate risk-adjusted outcomes, and measurement of improvement over time, • require data collected for the purpose and interpreted for that purpose
Conclusion • The approach depends on data sources and resourcing • A flexible approach is needed • Basic structures must be in place • Standard process measures must be agreed • Improvement in risk-adjusted outcomes should be the goal! • This must be done within a framework and according to basic principles, to enable comparisons