
A methodology to develop quality indicators for Health IT


Presentation Transcript


  1. A methodology to develop quality indicators for Health IT Nicolette de Keizer, Elske Ammenwerth, Hannele Hyppönen on behalf of the EFMI and IMIA WG on Evaluation

  2. Program

  3. Background • EU and WHO: ICT solutions in health care are prerequisites for modern, patient-centered and efficient health care services • The potential benefit of health ICT is widely endorsed, but software used in health care is not necessarily inherently safe • ICT interventions should therefore be thoroughly evaluated before wide-scale implementation

  4. Thorough evaluation • Dissemination of successful ICT interventions is only possible if quality and success of the ICT intervention can be measured • -> Quality Indicator (QI) • a measurable element of a system for which there is evidence or consensus that it can be used to assess a defined aspect of the system in question

  5. Problem • There are so many health IT systems • National infrastructures for EMR • Local EMRs • Decision Support Systems • CPOE systems • Telemedicine applications • PACS • ……. • Some QIs can be shared, others are specific

  6. Aims of the workshop • to inform on and discuss a methodology for developing QIs for health IT systems • to gauge interest within the community in developing QIs for health IT systems • to recruit groups that will develop QIs for specific types of health IT systems

  7. Program

  8. eHealth indicator development

  9. Definitions • eHealth: a review in 2005 identified 36 different definitions of the term e-health - using, processing, sharing and controlling health-related information in electronic format: by whom, for what purpose, how? • Indicator: a measurable element of a practice or system for which there is evidence or consensus that it can be used to assess a defined aspect of the practice or system in question

  10. Why are eHealth indicators needed? • to learn from initiatives, to see if they are worth the money, and to make informed decisions in the management of eHealth at the local, national and international level • State of the art: • eHealth indicator work is lagging behind; indicators are ambiguous or missing • Methodologies are not transparent; there are gaps in the existing classifications used to group eHealth indicators and in the indicators proposed by experts - hence the need for a more formal, generic methodology to define indicators

  11. Development of eHealth indicators Generic - specific • Literature on indicator methodology: • Articles discussing the suitability of specific indicators as measures in a specific field (e.g. drug treatment data as an epidemiological indicator) • Articles describing a methodology for defining indicators for a specific assessment topic (e.g. care process quality indicators) • Articles describing a framework for defining indicators for specific policy goals (e.g. eHealth or sustainable development) • Category 3 was most suited as a starting point for the creation of a generic methodology (common across different topics)

  12. Two main approaches for indicator definition • Expert-led, top-down methodology: predominant in fields focused on monitoring the implementation of policies and their impact at the societal level (e.g. economic growth; also the main aim in European-level eHealth indicator work) • Bottom-up methodology: predominant in fields aiming to monitor or assess policy or strategy implementation and impacts at the micro level, e.g. the local environment • Indicators tailored to the needs and resources of the indicator users, yet still rooted in the policy in question (e.g. sustainable development in environmental policy)

  13. Common phases • Defining the context (human and environmental) for measurement: key stakeholders and relevant systems. For whom, about what, what for? Defining systems OR key functionalities needed for core tasks? • Defining the goals. Top-down approaches rarely include this; the goals are pre-determined by government offices or policies. Whose goals, and which goals for which stakeholders? • Defining methods for indicator selection and categorization: reviewing existing indicator work, expert knowledge and peer-reviewed literature; selecting key indicators per purpose • Defining the data. This step tests the indicators by applying them: indicator data are collected, analyzed and reported, and feedback is acquired from different user groups

  14. Questions • System-based vs functionality-based indicator work (OECD eHealth indicators are functionality-based, while much of the existing indicator data is collected system-based) • Need for transparency of goals, stakeholders and use purposes

  15. Program

  16. A methodology to develop quality indicators for Health IT

  17. Background • RAND Corporation: methodology for clinical QI development • Weak reliability of the rating and consensus procedures • Modified by Van Engen et al [MIE2011] • We adjusted this ‘modified RAND method’ to the context of quality of health IT

  18. Methodology 1. Expert and stakeholders panel 2. Literature research 3. Review of guidelines 4. Inductive content analysis: Draft set of QI 5. Individual rating 6. Group discussion and anonymous voting 7. Define target standards (repeat step 1-6)

  19. Expert and stakeholders panel • Search for representatives of the system’s stakeholders: developers, researchers, users • Clear invitation • goal of the questionnaire and a response date • their involvement in the rest of the project, etc. • Web-based survey (LimeSurvey) on QIs • subtasks or dimensions of the system • UMIT supports development of the survey

  20. Literature research • Search terms concerning the field of interest (e.g. CPOE) combined with MeSH terms and keywords referring to ‘effectiveness’, ‘assessment’, ‘outcome’, ‘quality assurance’, ‘quality indicators’, ‘evaluation’ or ‘monitoring’ • Use the evaluation inventory available from http://evaldb.umit.at to identify relevant evaluation studies
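The search strategy on this slide can be sketched as a boolean query string. A minimal illustration in Python follows; the term lists are examples drawn from the slide, not a validated or exhaustive search strategy.

```python
# Illustrative sketch: assembling a PubMed-style boolean query from the
# two term groups named on the slide. Term lists are example placeholders.
field_terms = ['"CPOE"', '"computerized physician order entry"']
quality_terms = ['"quality indicators"', '"quality assurance"',
                 '"evaluation"', '"outcome"', '"monitoring"']

# Combine: (field term 1 OR field term 2 ...) AND (quality term 1 OR ...)
query = "({}) AND ({})".format(" OR ".join(field_terms),
                               " OR ".join(quality_terms))
print(query)
```

The same pattern extends to MeSH field tags if the target database supports them.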

  21. Review of guidelines • Clinical guidelines important for clinical QI • desirable level of outcomes of care • minimum procedures, standards and facilities that services should include • (ISO/CEN) standards in the field of interest

  22. Inductive content analysis • Combine results of the three sources • Identify themes/concepts • “I would like to know how much time it costs to enter a patient consult” 1. Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: Trends in evaluation research 1982 - 2002. Methods of Information in Medicine. 2005;44:44-56

  23. Inductive content analysis • Combine results of the three sources • Identify themes/concepts • Classify QIs: • http://evaldb.umit.at 1 • structural quality • information logistics quality • effects on quality of processes • effects on outcome quality of care • DeLone and McLean • Information quality • System quality • Use • User satisfaction • Further fine-grained classifications • UMIT supports the inductive content analysis 1. Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: Trends in evaluation research 1982 - 2002. Methods of Information in Medicine. 2005;44:44-56

  24. Individual rating • Likert scale from 1 (total disagreement) to 5 (total agreement): • Importance • Actionability • Easy to record or obtain • Rank on mean score
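The rating step above (score each draft QI on a 1-5 Likert scale, then rank on mean score) can be sketched in a few lines of Python. The indicator names and panelist scores below are made-up illustrations, not data from the workshop.

```python
# Minimal sketch of the individual-rating step: average the 1-5 Likert
# scores the panelists gave each draft indicator (here, for one criterion
# such as "importance") and rank indicators by mean score, highest first.
from statistics import mean

# indicator -> list of panelist scores (1 = total disagreement,
# 5 = total agreement); all values are invented for illustration
ratings = {
    "time to enter one order": [5, 4, 5, 4],
    "alert override rate": [4, 4, 3, 5],
    "system uptime": [3, 3, 4, 3],
}

ranked = sorted(ratings, key=lambda qi: mean(ratings[qi]), reverse=True)
for qi in ranked:
    print(f"{mean(ratings[qi]):.2f}  {qi}")
```

In practice one would compute a mean per criterion (importance, actionability, ease of recording) and combine or report them separately.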

  25. Group discussion and anonymous voting • Consensus in expert panel through rounds of debate, discussion and an anonymous voting process • Face to face meetings • web-based chat meetings

  26. Define target standards • Repeat step 1-6 to obtain target values for each of the review criteria • QI: time to enter one patient • Value: < 1 minute
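Applying a target standard from step 7 amounts to comparing a measured value against the agreed threshold. A tiny sketch follows; the indicator and the one-minute threshold come from the slide, while the measured value is invented for illustration.

```python
# Sketch: checking a measured indicator value against its target standard.
# Target taken from the slide ("time to enter one patient" < 1 minute);
# the measurement of 45 seconds is a hypothetical example.
target = {"indicator": "time to enter one patient", "max_seconds": 60}

measured_seconds = 45  # hypothetical measurement at one site
meets_target = measured_seconds < target["max_seconds"]
print("target met" if meets_target else "target not met")
```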

  27. Program

  28. CPOE • Computerized Physician Order Entry systems • Typical functionality: • Review recently given drugs • Modify drugs/order new drugs • Decision support: drug overdosage, drug-drug interactions, contraindications, allergies, etc.

  29. CPOE: Benefit • CPOE systems have been found to be effective in reducing medication errors and, partly, ADEs Ammenwerth et al, JAMIA, 2008

  30. CPOE: Challenges • Increasing reports of problems when introducing CPOE systems • Unexpected increased mortality after CPOE implementation, YY. Han, 2005 • Unintended consequences of CPOE, Joan Ash, 2007

  31. CPOE: Needs for evaluation • Systematic monitoring and evaluation of CPOE seems needed • What are good quality indicators to monitor CPOE quality?

  32. Workshop: CPOE QI indicators 1. Expert and stakeholders panel 2. Literature research 3. Review of guidelines 4. Inductive content analysis: Draft set of QI 5. Individual rating 6. Group discussion and anonymous voting 7. Define target standards (repeat step 1-6)

  33. Group 1: The Expert Panel • Task: • Collect characteristics of excellent CPOE performance • What do you need to know about a CPOE implementation in order to assess its quality? • Expected results: • List of proposed quality indicators

  34. Group 2: The Literature Team • Task: • Read abstracts of CPOE studies from http://evaldb.umit.at or from PubMed • Identify CPOE quality indicators used in these studies • Expected results: • List of identified quality indicators

  35. Classifying the Quality Indicators • QI concerning structural CPOE quality • E.g. system quality, information quality, usage • QI concerning impact on process quality • E.g. impact on clinical workflow, communication, medication errors • QI concerning impact on outcome quality • E.g. mortality, length of stay, ADE

  36. Classifying the Quality Indicators • You can modify these three main categories by adding further sub-categories when needed

  37. Setting up the groups • Define time frame for groups • Working in groups • Result reporting

  38. Next steps • Are you interested in leading a group on QIs for a specific type of system (e.g. CPOE, LIS, RIS, EHR)? Please contact us! • The Evaluation WG provides support for organizing the group, conducting the survey and analysing intermediate results

  39. Contact • Elske Ammenwerth, UMIT, elske.ammenwerth@umit.at • Nicolette de Keizer, Amsterdam Medical Center, n.f.keizer@amc.uva.nl • Further information: http://iig.umit.at/efmi
