
Evaluating Health Information Technology: Putting Theory Into Practice



Presentation Transcript


  1. Evaluating Health Information Technology: Putting Theory Into Practice Eric Poon, MD MPH Clinical Informatics Research and Development, Partners Information Systems David F. Lobach, MD, PhD, MS Division of Clinical Informatics Department of Community and Family Medicine Duke University Medical Center, Durham, North Carolina AHRQ’s National Resource Center for Health Information Technology Annual Meeting June 2005

  2. Outline • Overview of Evaluating HIT • Why evaluate? • General Approach to Evaluation • Choosing Evaluation Measures • Study Design Types • Analytical issues in HIT evaluations • Evaluation in the ‘real world’ • Duke University Medical Center

  3. Why Measure Impact of HIT? • Impact of HIT often hard to predict • Many “slam dunks” go awry • You can’t manage/improve what isn’t measured • Understand how to clear barriers to effective implementation • Understand what works and what doesn’t • Invent the wheel only once • Justify enormous investments • Return on investment • Allow other institutions to make tradeoffs intelligently • Use results to win over late adopters

  4. General Approach to Evaluating HIT • Understand your intervention • Formulate questions to answer • Select and define measures • Pick the study design • Data analysis

  5. Getting Started: Get to know your intervention • What problem(s) is it trying to solve? • Think about intermediate processes • Identify potential barriers to successful implementation: • Managerial barriers • End-user behavioral barriers • Understand how your peers around the country are addressing (or not) the same issues.

  6. Formulating Questions • Likely questions: • Does the HIT work? • What would have made it work better? • What would the next set of designers/implementors like to know? • Has this question been fully answered before? • Don’t reinvent the wheel! (not a big concern) • What impact would the answer have? • Peers • Policy makers

  7. Array of Measures • Quality and safety: clinical outcomes, clinical processes, knowledge (patient and provider) • Satisfaction & attitudes: patient, provider • Resource utilization: costs and charges, LOS, employee time/workflow • Lessons learned

  8. Choosing Study Measures • Clinical vs. process measures • Clinical outcomes (e.g. mortality) are desirable • Justifiable to measure process outcomes (e.g. door-to-antibiotic time) if the relationship between process and outcome has already been demonstrated • Will outcomes be impacted by the intervention? • Will the impact on outcomes be detectable during the study period? • Detectability is questionable for rare events (e.g. adverse outcomes) or slowly changing measures (e.g. colon cancer screening) • What resources do you have? • Don't bite off more than you can chew.
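
Whether an impact will be detectable during the study period is largely a sample-size question. Below is a minimal sketch using the standard two-proportion normal approximation; the event rates (a rare adverse-event rate vs. a common process measure) are hypothetical placeholders, not figures from the talk.

```python
# Rough power check: can we detect a change in a process/outcome rate
# during the study period? Two-proportion normal approximation.
from statistics import NormalDist

def required_n_per_arm(p_control, p_intervention, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_intervention) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_intervention * (1 - p_intervention)) ** 0.5) ** 2
    return numerator / (p_control - p_intervention) ** 2

# Hypothetical example: a rare adverse event (2% -> 1.5%) vs. a common
# process measure (60% -> 70%). Rare outcomes need far more patients.
print(round(required_n_per_arm(0.020, 0.015)))   # roughly 10,800 per arm
print(round(required_n_per_arm(0.60, 0.70)))     # roughly 360 per arm
```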

  9. Selecting Study Types • Commonly used study types: • Optimal design: randomized controlled trials • Factorial design • Before-and-after and time-series trials • Main study design issues: • Secular trend: can a simultaneous control group be established? • Confounding: can you randomly assign individuals to study groups? • Study design is often influenced by the implementation plan • Need to respect operational needs, but there is often room for creative designs

  10. Randomization Nuts and Bolts • Justifiable to have a control arm (usual care) as long as benefit has not already been demonstrated • Want to choose a truly random variable • Not day of the week • Consideration: stratified randomization • Ensures that the intervention and control groups are similar on important characteristics (e.g. baseline computer literacy) • Strongest possible intervention
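
As a concrete illustration of stratified randomization, here is a minimal sketch in Python. The stratification on baseline computer literacy follows the slide; the block size, clinic names, and seed are hypothetical choices for the example.

```python
# Stratified block randomization: within each stratum (e.g. baseline
# computer literacy), shuffle fixed-size blocks so the arms stay balanced.
import random
from collections import defaultdict

def stratified_assignment(units, stratum_of, block_size=4, seed=42):
    """units: list of IDs; stratum_of: dict mapping ID -> stratum label."""
    rng = random.Random(seed)          # fixed seed keeps the allocation auditable
    by_stratum = defaultdict(list)
    for u in units:
        by_stratum[stratum_of[u]].append(u)
    assignment = {}
    for stratum, members in by_stratum.items():
        rng.shuffle(members)
        for start in range(0, len(members), block_size):
            block = ["intervention", "control"] * (block_size // 2)
            rng.shuffle(block)
            for unit, arm in zip(members[start:start + block_size], block):
                assignment[unit] = arm
    return assignment

# Hypothetical clinics stratified by high/low baseline computer literacy.
clinics = [f"clinic_{i}" for i in range(12)]
literacy = {c: ("high" if i % 2 else "low") for i, c in enumerate(clinics)}
print(stratified_assignment(clinics, literacy))
```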

  11. Randomization Unit: How to Decide? • Small units (patients) vs. large units (practices, wards) • Contamination across randomization units • If the risk of contamination is significant, consider larger units • Effect of contamination: can underestimate impact • However, if you see a difference, the impact is real • Randomization by patient is generally undesirable • Contamination • Ethical concerns

  12. Randomization Schemes: Simple RCT • Timeline: baseline period → intervention period → post-intervention period • Intervention arm: intervention deployed in XX Clinics, with a 3-month burn-in period • Control arm: no intervention during the trial; the control arm gets the intervention in the post-intervention period • Baseline data collection before randomization; data collection for the RCT during the intervention period • Burn-in period • Gives the target population time to get used to the new intervention • Data not used in the final analysis

  13. Randomization Schemes: Factorial Design • May be used to concurrently evaluate more than one intervention • Assess interventions independently and in combination • Loss of statistical power • Usually not practical for more than 2 interventions • Four cells in a 2×2 design: control (no interventions), A alone, B alone, A+B
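
A minimal sketch of how a 2×2 factorial allocation can be set up, assuming two hypothetical interventions A and B: each unit lands in exactly one of the four cells, and each intervention's effect is assessed by pooling its cells against the cells without it.

```python
# 2x2 factorial allocation: every unit gets one of four cells, and each
# intervention is evaluated by pooling across the other factor.
import random

CELLS = [(False, False), (True, False), (False, True), (True, True)]  # control, A, B, A+B

def factorial_assignment(units, seed=1):
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    # Deal units round-robin into the four cells to keep cell sizes equal.
    return {u: CELLS[i % 4] for i, u in enumerate(shuffled)}

assign = factorial_assignment([f"ward_{i}" for i in range(8)])   # hypothetical wards
arm_a = [u for u, (a, b) in assign.items() if a]   # units receiving A (with or without B)
arm_b = [u for u, (a, b) in assign.items() if b]   # units receiving B (with or without A)
print(assign, arm_a, arm_b, sep="\n")
```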

  14. Randomization Schemes: Staggered Deployment • [Diagram: intervention groups and control groups deployed in staggered waves over time] • Advantages of staggering • Easier for user education and training • Can fix IT problems up front • Need to account for secular trend and baseline differences • Time variable in regression analysis • Control for practice characteristics

  15. Inherent Limitations of RCTs in Informatics • Blinding is seldom possible • Effect on documentation vs. clinical action • People always question generalizability • Success is highly implementation-dependent • Efficacy-effectiveness gap: the 'invented here' effect

  16. Mitigating the Limitations of Before-and-After Study Designs • Before-and-after trials are common in informatics • Concurrent randomization is hard • Don't lose the opportunity to collect baseline data! • Keep the time gap between the before and after periods relatively short • Look for a secular trend in the statistical analysis and adjust for it if present
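
One common way to "look for secular trend and adjust for it" is segmented (interrupted time series) regression. The sketch below is a minimal illustration using statsmodels on hypothetical monthly data; the column names and rates are invented for the example.

```python
# Segmented regression for a before-and-after design: a pre-existing
# secular trend shows up as the coefficient on `month`, separate from
# the level change (`post`) and slope change (`months_post`) at go-live.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly data: `rate` is the process measure, `post` flags
# months after implementation, `months_post` counts months since go-live.
df = pd.DataFrame({
    "month": list(range(1, 25)),
    "post":  [0] * 12 + [1] * 12,
    "rate":  [0.50 + 0.005 * m for m in range(12)]      # slow secular drift
             + [0.62 + 0.004 * m for m in range(12)],   # jump after go-live
})
df["months_post"] = (df["month"] - 12).clip(lower=0) * df["post"]

model = smf.ols("rate ~ month + post + months_post", data=df).fit()
print(model.params)   # `month` = secular trend; `post` = level change at implementation
```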

  17. Common Pitfalls with Data Collection • Measures you define and collect on your own • Pilot data collection and refine definitions early • Ask yourself early whether the data you collect measure what you intended to measure • Measures others defined but you collect on your own • Do you need to adapt other people's instruments? • Measures others define and collect for you • Understand the nuances and limitations, particularly with administrative data

  18. Electronic Data Abstraction: There’s no free lunch! • Convenient and time-saving, but… • Some chart review (selected) to get information not available electronically • Get ready for surprises • Documentation effect of EMRs

  19. Data Collection Issue: Baseline Differences • Randomization schemes often leave imbalances between the intervention and control arms • Need to collect baseline data and adjust for baseline differences • The interaction term (Time × Allocation Arm) gives the intervention effect in the regression analysis
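
A minimal sketch of the interaction-term adjustment using statsmodels; the patient-level data, variable names, and effect size are hypothetical, and the point is only that the Time × Arm coefficient isolates the intervention effect from baseline differences.

```python
# Difference-in-differences style adjustment for baseline imbalance:
# the `period:arm` interaction is the intervention effect, net of any
# baseline difference between arms and any overall time trend.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data: period (0 = baseline, 1 = trial period),
# arm (0 = control, 1 = intervention), y = measured outcome.
df = pd.DataFrame({
    "period": [0, 0, 0, 0, 1, 1, 1, 1] * 25,
    "arm":    [0, 1, 0, 1, 0, 1, 0, 1] * 25,
    "y":      [4.0, 4.6, 4.2, 4.4, 4.3, 5.4, 4.1, 5.2] * 25,
})

model = smf.ols("y ~ period * arm", data=df).fit()
print(model.params["period:arm"])   # intervention effect adjusted for baseline differences
```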

  20. Data Collection Issue: Completeness of Follow-up • The higher the better: • Over 90% • 80-90% • Less than 80% • Intention-to-treat analysis • In an RCT, outcomes should be analyzed according to the original randomization assignment

  21. A Common Analytical Issue: The Clustering Effect • Occurs when your observations are not independent • Example: each physician treats multiple patients • May need to increase the sample size to account for the loss of power • [Diagram: physicians in the intervention and control groups, with outcomes assessed on each physician's patients]
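
A minimal sketch of the usual design-effect correction for clustered observations; the intraclass correlation and patients-per-physician figures are hypothetical, chosen only to show how quickly the required sample size grows.

```python
# Design effect for clustered data: DEFF = 1 + (m - 1) * ICC, where m is
# the average number of patients per physician and ICC is the intraclass
# correlation. The unclustered sample size is inflated by DEFF.
def design_effect(avg_cluster_size, icc):
    return 1 + (avg_cluster_size - 1) * icc

def clustered_sample_size(n_independent, avg_cluster_size, icc):
    return n_independent * design_effect(avg_cluster_size, icc)

# Hypothetical example: 400 patients would suffice if observations were
# independent, but with ~20 patients per physician and ICC = 0.05 the
# requirement nearly doubles.
print(design_effect(20, 0.05))                 # 1.95
print(clustered_sample_size(400, 20, 0.05))    # 780.0
```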

  22. Looking at Usage Data • Great way to tell how well the intervention is going • Target your trouble-shooting efforts • In terms of evaluating HIT: • Correlate usage to implementation/training strategy • Correlate usage to stakeholder characteristics • Correlate usage to improved outcome
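
As a small illustration of correlating usage with outcomes, the sketch below computes a provider-level correlation between log-derived usage counts and a quality measure; all numbers are hypothetical.

```python
# Correlating usage with outcomes: aggregate log events per provider and
# check whether heavier users perform better on the measure of interest.
import numpy as np

# Hypothetical per-provider data: alert acknowledgements per week and
# each provider's screening-completion rate.
usage = np.array([2, 5, 9, 14, 20, 26, 33, 41])
screening_rate = np.array([0.42, 0.45, 0.44, 0.51, 0.55, 0.58, 0.57, 0.63])

r = np.corrcoef(usage, screening_rate)[0, 1]
print(f"Pearson r between usage and outcome: {r:.2f}")
```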

  23. Studies on Workflow and Usability • How to make observations? • Direct observations • Stimulated observations • Random paging method • Subjects must be motivated and cooperative • Usability Lab • What to look for? • Time to accomplish specific tasks: • Need to pre-classify activities • Handheld/Tablet PC tools may be very helpful • Workflow analysis • Asking users to ‘think aloud’ • Unintended consequences of HIT

  24. Cost-Benefit Analysis • Do the benefits of the technology justify the costs? • Monetary benefits minus monetary costs • Important in the policy realm • Need to specify the perspective • Organizational • Societal • Cost analysis is more straightforward • Prospective data collection preferred • Discounting: a dollar today is worth more than a dollar 10 years from now • Benefits analysis is more controversial • Cost of illness averted: medical costs, patient productivity • What is the cost of suffering due to preventable adverse events? • What is the cost of a life?
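
A minimal sketch of the discounting point: future costs and benefits are converted to present value before being compared. The 5% discount rate and the cash flows are hypothetical.

```python
# Net present value of an HIT investment: discount each year's net
# benefit back to today before summing, so later savings count for less.
def present_value(amount, discount_rate, years_out):
    return amount / (1 + discount_rate) ** years_out

def net_present_value(net_benefits_by_year, discount_rate=0.05):
    return sum(present_value(b, discount_rate, t)
               for t, b in enumerate(net_benefits_by_year))

# Hypothetical cash flows: large up-front cost, then growing annual savings.
flows = [-500_000, 90_000, 120_000, 150_000, 150_000, 150_000]
print(round(net_present_value(flows)))   # positive only if discounted savings exceed the cost
```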

  25. Using Surveys – Stay Tuned! • Survey of user beliefs, attitudes and behaviors • Response rate and responder bias: aim for a response rate > 50-60% • Keep the survey concise • Pilot the survey for readability and clarity • Formal validation is needed if you plan to develop a scale/summary score
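
If the plan is to build a summary scale from survey items, internal-consistency checks such as Cronbach's alpha are one part of formal validation. Below is a minimal sketch on hypothetical Likert responses.

```python
# Cronbach's alpha for a proposed summary scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses (rows = respondents, cols = items).
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```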

  26. Qualitative Methodologies – Don't touch that dial! • Major techniques • Direct observations • Semi-structured interviews • Focus groups • Adds richness to the evaluation • Explains successes and failures; generates lessons learned • Captures the unexpected • Great for forming hypotheses • People love to hear stories • Data analysis • Goal is to make sense of your observations • Iterative & interactive

  27. Concluding Remarks • Don't bite off more than you can chew • Pick a few study outcomes and study them well • It's a practical world • Balancing operational and research needs is always a challenge • Life (data collection) is like a box of chocolates… • You don't know what you're going to get until you look, so look early!

  28. Thank you • Eric Poon, MD MPH • Email: epoon@partners.org • Acknowledgements • Davis Bu, MD MA • CITL, Partners Healthcare • David Bates, MD MSc • Chief, Div of General Medicine, Brigham and Women’s Hospital
