Evaluating Health Information Technology: A Primer • Eric Poon, MD MPH, Clinical Informatics Research and Development, Partners Information Systems • Davis Bu, MD MA, Center for Information Technology Leadership, Partners Information Systems • AHRQ National Resource Center for Health Information Technology
Pre-Conference Logistics • To Access Slides: • Go to http://extranet.ahrq.gov/rc • Login with username and password • Follow the links to download slides • Problems? Email ResourceCenter@norc.org • Q&A Session at the End • Dial *1 to ask a question • Please pick up handset (not speakerphone) • Note that this teleconference is being recorded
Outline • Why evaluate? • General Approach to Evaluation • Deciding what to Measure • Study Design Types • Analytical issues in HIT evaluations • Some practical advice on specific evaluation techniques
Why Measure Impact of HIT? • Impact of HIT often hard to predict • Many “slam dunks” go awry • Understand how to clear barriers to effective implementation • Understand what works and what doesn’t • Justify enormous investments • Return on investment • Allow other institutions to make tradeoffs intelligently • Use results to win over late adopters • You can’t manage/improve what isn’t measured • Good publicity for organization
General Approach to Evaluating HIT • Understand your intervention • Select meaningful measures • Pick the study design • Validate data collection methods • Data analysis
Getting Started: Get to know your intervention • Clarify the question: What problem does it address? • Think about intermediate processes • Identify potential barriers to successful implementation • Identify potential managerial and behavioral processes to overcome implementation barriers
Array of Measures • Quality and safety: clinical outcomes, clinical processes • Knowledge: patient knowledge, provider knowledge • Satisfaction: patient satisfaction, provider satisfaction • Resource utilization: costs and charges, LOS, employee time/workflow
Introducing the Evaluation Toolkit • Rough guides on general approach, costs and potential pitfalls • Major domains: Clinical Outcomes, Clinical Process, Provider Adoption & Attitudes, Patient Knowledge & Attitudes, Workflow Impact, Financial Impact • Measure characteristics: IOM Domain, Data Source, Relative Cost, Potential Pitfalls, General Notes • Would love to hear your feedback
Computerized Provider Order Entry (CPOE) Example • Clarify the primary question: • Does CPOE improve quality of care? • Competing questions: • Does CPOE save money? • What are the barriers to physician acceptance? • Does CPOE introduce new errors?
CPOE: How can it affect quality? • Think about intermediate processes • Patient data is presented to ordering physician • ADE alerts may be triggered and presented at the point of care (which alerts?) • Guideline reminders may be triggered and presented at the point of care (which guidelines?) • Medication order is entered • Medication order is executed by pharmacy • Medication order is executed by nursing staff
Does CPOE Improve Quality of Care? • Identify measures
Evaluating CPOE’s Impact on Quality • Select Appropriate Methodology • Can existing data (e.g. ongoing QA activities) be leveraged? • Does a concurrent control exist? • How will the data be analyzed?
Electronic Medical Records (EMR) Example • Clarify the primary question: • What are the barriers and facilitators to effective EMR implementation? • Competing questions: • Do EMRs save money? • Do EMRs improve quality of care? • Do EMRs introduce new errors?
EMR: Dissecting the EMR Implementation Process • Identify stakeholders • Providers, et al. • Catalogue stakeholder interests and values • Workflow efficiency • Clarify stakeholder role in implementation • Users of system, clinical leaders, administrative leaders • Clarify impact of Implementation on clinical processes • User interface optimization, workflow re-engineering • Define implementation success criteria • Provider buy-in, provider use and acceptance
EMR: Understanding the Barriers and Facilitators to Implementation • Identify measures
EMR: Understanding the Barriers and Facilitators to Implementation • Select Appropriate Methodology • Combination of quantitative and qualitative studies • Example: efficiency measures: • Time motion studies: How did the system affect provider efficiency? • Attitude surveys: How did the system affect provider perception of efficiency? • Semi-structured interviews: How did the implementation affect stakeholder workflow? Did that effect change over time and why?
Local Health Information Infrastructure (Laboratory) • Clarify the primary question • Can LHIIs for labs generate a positive ROI? • Competing questions: • Can LHIIs for labs improve quality of care? • Which architecture is best suited for LHIIs for labs? • How do LHIIs for labs affect provider and patient perception of the health care system?
LHII (Laboratory): Defining the ROI • Specify intermediate processes • Data is pulled from local laboratories • Previous labs pulled • Lab order entered • Lab order transmitted • Administrative handling • Lab results reported • Lab results recorded • Data is pulled from primary provider • Authorization and payment is coordinated with payer • Implementation of LHII
LHII (Laboratory): Defining the ROI • Identify associated measures
LHII (Laboratory): Evaluating the ROI • Select Appropriate Methodology • Does concurrent control exist? • Are there ongoing trends over time? • How will the data be analyzed?
Selecting Outcome Measures: General Comments • Generally want to pick 1-3 outcomes of primary interest • If you choose more, you need to correct for multiple comparisons (e.g. Bonferroni; see the sketch below) • Outcome must be sufficiently frequent to be detectable • Rare events such as adverse events due to errors are particularly challenging • Important enough to provoke interest • Whether the study is positive or negative • How would the results change policy (local or national)? • Process vs. outcome • Legitimate to measure process • Outcome often takes too long • In many situations the link between process and outcome is clear
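A minimal sketch of the Bonferroni point above; the outcome names are made up for illustration, and the 0.05 alpha is the conventional default rather than anything prescribed in the talk:

```python
# Bonferroni correction: with m primary outcomes, test each at alpha/m
# so the overall (family-wise) false-positive rate stays at alpha.
outcomes = ["ADE rate", "order turnaround time", "guideline compliance"]  # illustrative
alpha = 0.05
per_test_alpha = alpha / len(outcomes)
print(f"Test each of the {len(outcomes)} outcomes at alpha = {per_test_alpha:.4f}")
# -> 0.0167; a p-value must fall below this threshold to count as significant
```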
Study Types • Commonly used study types: • Before-and-after (time series) trials • Randomized controlled trials • Factorial designs • Study design is often influenced by the implementation plan
Time Series vs. Randomized Controlled Trials • Before-and-after trials are common in informatics • Concurrent randomization is hard • Don’t lose the opportunity to collect baseline data! • Off-On-Off trial design possible • But may not be politically/ethically acceptable to turn off a highly used feature • RCT preferable if feasible • Eliminates the issue of secular trend • Balances baseline confounders
Randomization Considerations • Justifiable to have a control arm (usual care) as long as benefit has not already been demonstrated • Want to choose a truly random variable • Not day of the week • Legitimate to stratify on baseline variables (e.g. education for patients, computer experience for providers); see the sketch below • Minimal number of arms • More arms, less power • Strongest possible intervention
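A minimal sketch of how stratified randomization could be scripted; the strata, arm names, and provider data are illustrative, not from the talk:

```python
import random

def stratified_randomize(subjects, stratum_of, arms=("intervention", "control"), seed=2005):
    """Shuffle subjects within each stratum, then alternate arm assignments.

    The random shuffle makes each assignment random while keeping the arms
    balanced within every stratum (e.g., provider computer experience).
    """
    rng = random.Random(seed)  # fixed seed so the allocation is auditable
    strata = {}
    for subject in subjects:
        strata.setdefault(stratum_of(subject), []).append(subject)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, subject in enumerate(members):
            assignment[subject] = arms[i % len(arms)]
    return assignment

# Example: stratify providers by prior computer experience (hypothetical data)
providers = {"MD1": "high", "MD2": "low", "MD3": "high", "MD4": "low"}
print(stratified_randomize(providers, stratum_of=providers.get))
```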
Unit of Randomization • Patients • Physicians • Practices/wards
Randomization Unit: How to Decide? • Small units (patients) vs. large units (practices/wards) • Contamination across randomization units • If risk of contamination is significant, consider larger units (power trade-off sketched below) • Effect of contamination: can underestimate impact • However, if you see a difference, the impact is real • Randomization by patient generally undesirable • Contamination • Ethical concerns
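One standard way to quantify the power cost of randomizing larger units is the design-effect formula, DEFF = 1 + (m - 1) * ICC; a minimal sketch, with a made-up intraclass correlation purely for illustration:

```python
def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Variance inflation from randomizing clusters instead of individuals."""
    return 1 + (avg_cluster_size - 1) * icc

# Illustrative numbers: 20 patients per physician, ICC of 0.05
deff = design_effect(20, 0.05)
print(f"Design effect: {deff:.2f}")                          # 1.95
print(f"Effective n per 1000 patients: {1000 / deff:.0f}")   # ~513
```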
Randomization Schemes: Simple RCT • [Timeline figure: baseline data collection during the baseline period; at randomization, the intervention is deployed to the intervention arm (XX clinics) with a 3-month burn-in period, while the control arm gets no intervention; data collection for the RCT follows; the control arm receives the intervention in the post-intervention period] • Burn-in period • Give target population time to get used to new intervention • Data not used in final analysis
Randomization Schemes: Factorial Design • May be used to concurrently evaluate more than one intervention: assess interventions independently and in combination • [2x2 grid: Control (no interventions), A only, B only, A+B] • Loss of statistical power • Usually not practical for more than 2 interventions
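A minimal sketch of allocating units across the four cells of a 2x2 factorial design, assuming clinics are the randomization unit (the clinic list is hypothetical):

```python
import random

rng = random.Random(2005)
clinics = ["Clinic-" + str(i) for i in range(1, 13)]          # hypothetical units
cells = [(False, False), (True, False), (False, True), (True, True)]  # (gets A, gets B)

rng.shuffle(clinics)
allocation = {clinic: cells[i % 4] for i, clinic in enumerate(clinics)}
for clinic, (a, b) in sorted(allocation.items()):
    print(clinic, "A" if a else "-", "B" if b else "-")
# Each intervention's main effect compares all A clinics (with or without B)
# against all non-A clinics, which is how the factorial design reuses the
# same sample for two questions.
```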
Randomization Schemes: Staggered Deployment • Advantages • Easier for user education and training • Can fix IT problems up front • Disadvantage: need to account for secular trend • Include a time variable in the regression analysis (see the sketch below)
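A minimal sketch of adjusting for secular trend with a time covariate, using Python's statsmodels; the file and column names are hypothetical, and the package choice is ours rather than the presenters':

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per patient visit, with the calendar
# month since study start and whether the site was live on the system.
df = pd.read_csv("visits.csv")  # columns: outcome, month, live

# 'month' absorbs the secular trend; the coefficient on 'live'
# estimates the intervention effect net of that trend.
fit = smf.ols("outcome ~ month + live", data=df).fit()
print(fit.summary())
```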
Randomization Schemes: Multiple Interventions • 4 interventions involving patients’ use of shared online medical records: Medication Tracking, Diabetes Care, Prev. Care Reminders, Family History • [Figure: 12 clinics randomized for 18 months; Arm 1 (6 clinics) gets Medication Tracking and Diabetes Care and serves as control for Arm 2; Arm 2 (6 clinics) gets Prev. Care Reminders and Family History and serves as control for Arm 1] • Time efficient design • Every clinic gets something (keeps clinics and IRB happy) • Watch out for cross-arm intervention contamination
Inherent Limitations of RCTs in Informatics • Blinding is seldom possible • Effect on documentation vs. clinical action • People always question generalizability • Success is highly implementation-dependent • Efficacy-effectiveness gap: ‘Invented here’ effect
Data Collection • Electronic data abstraction • Convenient and time-saving, but… • Some chart review (selected) to get information not available electronically • Get ready for nasty surprises • Pilot your data collection protocol early • And then pilot some more…
Data Collection Issue: Baseline Differences • Randomization schemes often lead to imbalance between intervention and control arms • Need to collect baseline data and adjust for baseline differences • In the regression analysis, the interaction term (Time * Allocation Arm) gives the intervention effect (see the sketch below)
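A minimal sketch of that interaction-term (difference-in-differences) analysis in Python's statsmodels; the variable names are illustrative and the package choice is an assumption on our part:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per observation.
# period: 0 = baseline, 1 = intervention period
# arm:    0 = control,  1 = intervention
df = pd.read_csv("study_data.csv")

fit = smf.ols("outcome ~ period * arm", data=df).fit()
# 'period' captures the secular change shared by both arms, 'arm' captures
# baseline differences between arms, and the 'period:arm' interaction is
# the baseline-adjusted intervention effect.
print(fit.params["period:arm"], fit.pvalues["period:arm"])
```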
Data Collection Issue: Completeness of Follow-up • The higher the better: • Over 90% • 80-90% • Less than 80% • Intention-to-treat analysis • In an RCT, outcomes should be analyzed according to the original randomization assignment
A Common Analytical Issue: The Clustering Effect • Occurs when your observations are not independent • Example: each physician treats multiple patients • [Figure: physicians in the intervention and control groups, each linked to several patients, with outcomes assessed at the patient level]
Options for Dealing with the Clustering Effect • Analyze at the level of the clinician • Example: analyze % of each MD’s patients in compliance with the guideline, and make the MD the unit of analysis • Huge drop in statistical power; not recommended • Generalized Estimating Equations (see the sketch below) • PROC GENMOD in SAS, or PROC RLOGIST in SUDAAN • Allows you to randomize at one level (e.g. physician) and analyze at another (e.g. patient) • Accounts for correlation of behaviors within a single physician (i.e. adjusts for the fact that observations across patients are NOT independent)
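The slide names SAS and SUDAAN procedures; as one alternative, the same kind of GEE analysis can be sketched in Python's statsmodels (column names and file are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file: one row per patient, clustered under physician_id.
df = pd.read_csv("patients.csv")  # columns: compliant (0/1), arm, physician_id

model = smf.gee(
    "compliant ~ arm",
    groups="physician_id",                     # cluster: patients share a physician
    data=df,
    family=sm.families.Binomial(),             # binary compliance outcome
    cov_struct=sm.cov_struct.Exchangeable(),   # within-physician correlation
)
result = model.fit()
print(result.summary())
```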
A Word About Surveys • Survey of user beliefs, attitudes and behaviors • Response rate and respondent bias: aim for a response rate > 50-60% • Keep the survey concise • Pilot the survey for readability and clarity • Formal validation is needed if you plan to develop a scale
Looking at Usage Data • Great way to tell how well the intervention is going • Target your trouble-shooting efforts • In terms of evaluating HIT: • Correlate usage to implementation/training strategy • Correlate usage to stakeholder characteristics • Correlate usage to improved outcome
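A minimal sketch of turning raw usage logs into the correlations suggested above; the log format, file names, and columns are entirely hypothetical:

```python
import pandas as pd

# Hypothetical audit-log export: one row per order entered through the system.
usage = pd.read_csv("cpoe_audit_log.csv")   # columns: user_id, week, n_orders
users = pd.read_csv("user_roster.csv")      # columns: user_id, role, training_wave

weekly = usage.groupby(["user_id", "week"], as_index=False)["n_orders"].sum()
merged = weekly.merge(users, on="user_id")

# Usage by stakeholder characteristic (role) and by training strategy (wave)
print(merged.groupby("role")["n_orders"].describe())
print(merged.groupby("training_wave")["n_orders"].mean())
```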
Studies on Workflow and Usability • How to make observations? • Direct observations • Stimulated observations • Random paging method • Subjects must be motivated and cooperative • Usability Lab • What to look for? • Time to accomplish specific tasks: • Need to pre-classify activities • Handheld/Tablet PC tools may be very helpful • Workflow analysis • Asking users to ‘think aloud’ • Unintended consequences of HIT
Qualitative Methodologies • Major techniques • Direct observations • Semi-structured interviews • Focus groups • Adds richness to the evaluation • Explains successes and failures; generates lessons learned • Captures the unexpected • Great for forming hypotheses • People love to hear stories • Data analysis • Goal is to make sense of your observations • Iterative & interactive
Cost Benefit Analysis • Cost Data • Generally available • Caveat: allocation of indirect costs • Financial Benefit Data • Revenue enhancement • Cost avoidance • Benefit Allocation • Benefits may accrue to multiple parties • Are benefits realizable (e.g. labor savings)? • Calculating benefits to external parties may be of interest, even if they do not affect the organization’s own ROI
Cost Benefit Analysis • Activity-Based Costing example • Simply put, a method for assigning costs to particular activities • An alternate method of assigning indirect costs to the project • May also serve as a framework for capturing cost savings (see the toy example below) * http://www.pitt.edu/~roztocki/abc/abctutor/
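A toy worked example of activity-based costing, with entirely made-up cost pools and driver volumes, just to show the mechanics of tracing indirect costs to a project:

```python
# Indirect cost pools and the activity drivers that spread them (all hypothetical)
cost_pools = {"help_desk": 120_000, "server_ops": 80_000}   # annual $
driver_volume = {"help_desk": 4_000, "server_ops": 2_000}   # tickets, CPU-hours
project_usage = {"help_desk": 600, "server_ops": 500}       # this project's share

# Rate per driver unit times the project's consumption of that driver
allocated = {
    pool: cost_pools[pool] / driver_volume[pool] * project_usage[pool]
    for pool in cost_pools
}
print(allocated)                 # {'help_desk': 18000.0, 'server_ops': 20000.0}
print(sum(allocated.values()))   # 38000.0 of indirect cost assigned to the project
```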
Concluding Remarks • Don’t bite off more than you can chew • Pick a few study outcomes and study them well • It’s a practical world • Balancing operational and research needs is always a challenge • Life (data collection) is like a box of chocolates… • You don’t know what you’re going to get until you look, so look early!
Thank you • Eric Poon, MD MPH • Email: epoon@partners.org • Davis Bu, MD MA • Email: dbu@partners.org
Give Us Feedback! • We are eager to hear your feedback! • Go to http://extranet.ahrq.gov/rc • Login with username and password • Follow the links to provide feedback. Thanks! • Want to hear this teleconference again? • Dial 1-800-486-4195 to replay until 5/4/05