The quality of reporting of Health Informatics evaluation studies


  1. The quality of reporting of Health Informatics evaluation studies Jan Talmon, Elske Ammenwerth, Thom Geven University Maastricht, UMIT

  2. Content • Background • STARE-HI • Study design • Results • Discussion • Future prospects

  3. Background • HIS-EVAL workshop 2003 • “Visions and strategies to improve evaluation of health information systems: Reflections and lessons based on the HIS-EVAL workshop in Innsbruck”, IJMI, 2004, 479 • Poor quality of manuscripts submitted to MI journals • Editor and reviewer perspective • Development of STARE-HI • STAtement on the Reporting of Evaluation studies in Health Informatics

  4. Background • Study questions: • Can STARE-HI be used for the assessment of the quality of evaluation studies in HI? • What is the current quality of reporting? • What areas are open for improvement?

  5. STARE-HI • Iterative development • Core writing team (JT, EA), active discussants, open comments through publication on the Internet: http://iig.umit.at/efmi/ • 12 item categories are described, some expanded into sub-items (marked * below) • Title, abstract, keywords, introduction*, study context*, methods*, results*, discussion*, conclusion, conflict of interest, references, appendices

  6. Study Design • Hand search of all 2005 issues of three major MI journals; selection by consensus (JT & EA) • IJMI, JAMIA, MIM • Develop a scoring list from STARE-HI (TG) • Test usability of the scoring form • 3 papers assessed by 5 reviewers • Apply the revised form to all selected papers

  7. Results – paper selection • 282 papers reviewed on the basis of title and abstract • 55 selected by JT • 37 selected by EA • Initial agreement on 32 • Final selection 48 papers: • 21 IJMI, 23 JAMIA, 4 MIM
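
The slide gives only raw counts of reviewer agreement. A chance-corrected statistic such as Cohen's kappa is not reported in the slides, but it can be derived from these counts; the following is an illustrative Python sketch, not part of the original study.

    def cohens_kappa(n, sel_a, sel_b, both):
        """Cohen's kappa for two raters making binary include/exclude decisions.

        n     -- total number of papers screened
        sel_a -- papers included by reviewer A
        sel_b -- papers included by reviewer B
        both  -- papers included by both reviewers
        """
        neither = n - sel_a - sel_b + both           # excluded by both
        p_obs = (both + neither) / n                 # observed agreement
        # Expected chance agreement from the marginal inclusion rates
        p_exp = (sel_a * sel_b + (n - sel_a) * (n - sel_b)) / n ** 2
        return (p_obs - p_exp) / (1 - p_exp)

    # Counts from the slide: 282 screened, 55 (JT), 37 (EA), 32 in common
    print(cohens_kappa(282, 55, 37, 32))  # ~0.64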

  8. Results – Scoring form • Extract issues from the descriptions in STARE-HI • “The abstract should preferably be structured and must clearly describe the objective, setting, participants, measures, study design, major results, limitations and conclusions” • Each issue scored • Maximum score per item • For the abstract the maximum is 9
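
To make the scoring concrete: a minimal sketch of how a checklist-style form could be tallied. The issue names below follow the abstract requirements quoted on the slide; the form structure itself is hypothetical, not the actual STARE-HI scoring list.

    # Hypothetical fragment of a STARE-HI-style scoring form: each item lists
    # the issues to check, one point per issue that is adequately reported.
    SCORING_FORM = {
        "abstract": ["structured", "objective", "setting", "participants",
                     "measures", "study design", "major results",
                     "limitations", "conclusions"],   # maximum score 9
    }

    def item_score(item, reported):
        """Points scored for one item: number of its issues the paper reports."""
        return sum(issue in reported for issue in SCORING_FORM[item])

    # Example: an abstract that covers 6 of the 9 issues scores 6 out of 9
    reported = {"objective", "setting", "measures", "study design",
                "major results", "conclusions"}
    print(item_score("abstract", reported), "/ 9")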

  9. Results – usability of form • 5 reviewers • EA, TG, JB, PN, NdK • All familiar with STARE-HI • 3 papers, chosen to fit STARE-HI to various degrees (TG) • Intraclass correlation coefficient for each paper • 0.82–0.85
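
The slides do not state which ICC variant was computed. As one plausible reading, a one-way random-effects ICC(1,1) over a matrix of scored issues (rows) by raters (columns) can be computed as below; this is a sketch under that assumption, with invented toy data.

    import numpy as np

    def icc_1_1(scores):
        """One-way random-effects intraclass correlation, ICC(1,1).

        scores -- 2-D array; rows are scored issues, columns are raters.
        """
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        row_means = scores.mean(axis=1)
        # One-way ANOVA mean squares: between targets and within targets
        ms_between = k * ((row_means - scores.mean()) ** 2).sum() / (n - 1)
        ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Toy data: 4 scored issues rated by 5 reviewers (values invented)
    scores = [[1, 1, 1, 0, 1],
              [0, 0, 1, 0, 0],
              [1, 1, 1, 1, 1],
              [0, 1, 0, 0, 0]]
    print(icc_1_1(scores))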

  10. Results – usability of form • Feedback from scorers revealed some problems • Three items of STARE-HI were not clearly described • Twice it was unclear how extensive a description should be • Six issues could appear anywhere in the article, which affects the reliability of scoring

  11. Results – quality of reporting • Not all 48 papers could be properly assessed: • Some papers were descriptions rather than evaluation studies • Secondary analyses • Evaluation of an algorithm or a general application (like email) • Final analysis on 39 papers • 19 IJMI, 20 JAMIA

  12. Results – quality of reporting (chart not reproduced in the transcript)

  13. Results – quality of reporting (chart not reproduced in the transcript)

  14. Results – some observations (slide content not reproduced in the transcript)

  15. Results – some observations • Keywords • “Study design” and “evaluation” often missing • Methods • Arguments for study design/case selection are often lacking • Description of the intervention, study flow, and outcome measures each reported in fewer than 70% of papers

  16. Results – some observations • Results • Major findings are reported; unexpected findings or unexpected events influencing the study are seldom reported • Conclusions • Sometimes lacking: even the summary statement, impact of findings, and recommendations for future research are absent • Conflict of interest/acknowledgement • Often missing; the relation between the authors and the system under study is often not disclosed

  17. Discussion/conclusion • Applicability of STARE-HI • YES • Scope of STARE-HI could be improved • Need for more supporting material on STARE-HI • Quality of reporting • Completeness: room for improvement • Is completeness a measure of quality? • This was a pilot study

  18. Future developments • STARE-HI will be published in the MI literature • Seeking broader support for STARE-HI • EFMI, IMIA, journal editors • Development of a paper elaborating the rationale for the items in STARE-HI • Broader study on the quality of reporting of evaluation studies in Health Informatics

  19. Thank you! Questions?
