Critically Evaluating the Evidence: Tools for Appraisal

Presentation Transcript


  1. Critically Evaluating the Evidence: Tools for Appraisal. Elizabeth A. Crabtree, MPH, PhD(c), Director of Evidence-Based Practice, Quality Management; Assistant Professor, Library & Informatics; Medical University of South Carolina

  2. Steps of EBP: • Ask a focused clinical question • Acquire the best available evidence • Appraise the evidence critically • Apply the evidence, integrating it with clinical expertise and patient values • Assess the outcome

  3. Step 3: Evaluate the Evidence: Systematic, Critical Appraisal. It’s peer-reviewed, therefore it must be OK? Adapted from: Heneghan, Carl. Introduction, 16th Oxford Workshop on Evidence-Based Practice, September 2010.

  4. What is in “the stack”? Gold mine or bonfire?

  5. Hierarchy of Evidence

  6. CONSORT • Consolidated Standards of Reporting Trials • Focus: randomized controlled trials (RCTs), two-group parallel design • Checklist of 25 items • Title/Abstract • Introduction • Methods • Results • Discussion • Other information The CONSORT Group

  7. STROBE • Strengthening the Reporting of Observational Studies in Epidemiology • Focus: cross-sectional, case-control, and cohort studies (observational designs) • Checklist of 22 items • Title/Abstract • Introduction • Methods • Results • Discussion • Other Information STROBE Statement

  8. CASP • Critical Appraisal Skills Programme • Focus: systematic reviews, RCTs, qualitative studies, diagnostic test studies, cohort studies, case-control studies & economic evaluation studies • 10–12 questions per appraisal tool • Validity • Results • Relevance CASP
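
CONSORT, STROBE, and CASP are all item-by-item checklists, so it can help to see an appraisal as structured data. The following is a minimal Python sketch under that framing; the class names, fields, and the two CASP-style sample questions are hypothetical illustrations, not the official instruments.

```python
# A sketch of checklist-based appraisal: one record per checklist item,
# grouped by section, with a completeness score. Class names, fields,
# and the sample questions are illustrative, not official CASP wording.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChecklistItem:
    section: str                 # e.g. "Validity", "Results", "Relevance"
    question: str                # the checklist prompt
    met: Optional[bool] = None   # None = not yet assessed

@dataclass
class Appraisal:
    tool: str                    # e.g. "CASP - Cohort Studies"
    items: list = field(default_factory=list)

    def fraction_met(self) -> float:
        """Fraction of assessed items that the report satisfies."""
        assessed = [i for i in self.items if i.met is not None]
        return sum(i.met for i in assessed) / len(assessed) if assessed else 0.0

# Usage with two illustrative CASP-style questions:
casp = Appraisal("CASP - Cohort Studies", [
    ChecklistItem("Validity", "Did the study address a clearly focused issue?", True),
    ChecklistItem("Validity", "Was the cohort recruited in an acceptable way?", False),
])
print(f"{casp.fraction_met():.0%} of assessed items met")   # 50%
```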

  9. Body of Evidence • All studies relevant to a given PICO question • Recommend grouping studies by PICO question • Assess the quality of relevant studies as a group How is this done?

  10. GRADE Quality Assessment Criteria

  11. What is the GRADE System? Grading of Recommendations Assessment, Development and Evaluation • Built on previous systems • International group of guideline developers

  12. Advantages of GRADE • Transparent process of moving from evidence to recommendations • Explicit, comprehensive criteria for downgrading and upgrading quality of evidence ratings • Explicit evaluation of the importance of outcomes of alternative management strategies GRADE vs. The Competition

  13. Quality & Recommendations • Quality of evidence: the extent to which one can be confident that an estimate of effect is adequate to support recommendations • Strength of recommendation: the extent to which one can be confident that adherence to the recommendation will do more good than harm

  14. Utilization

  15. Getting Started… • Must have a clearly defined question • Patient(s), intervention, comparison, and outcome of interest (PICO) Example: In adult patients (population), is the use of glucocorticosteroids (intervention) associated with venous thromboembolism (VTE) (outcome)?
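
As a small illustration of how a PICO question decomposes into its four parts, here is a sketch using the slide's example. The field names are mine, and the comparator ("no glucocorticosteroids") is an assumption, since the slide's wording leaves the comparison implicit.

```python
# A sketch of a PICO question as a data structure, using the slide's
# example. Field names are illustrative; the comparison is assumed.
from dataclasses import dataclass

@dataclass
class PICO:
    population: str
    intervention: str
    comparison: str
    outcome: str

    def as_question(self) -> str:
        return (f"In {self.population}, is {self.intervention}, "
                f"compared with {self.comparison}, "
                f"associated with {self.outcome}?")

q = PICO(
    population="adult patients",
    intervention="the use of glucocorticosteroids",
    comparison="no glucocorticosteroids",   # assumed comparator
    outcome="venous thromboembolism (VTE)",
)
print(q.as_question())
```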

  16. Chutes & Ladders: Evaluation of the evidence can lower its quality rating or raise it.

  17. Key Elements: Chutes • Study design limitations • Inconsistency • Indirectness • Imprecision • Reporting bias

  18. Study Design Limitations • Basic study design (randomized trials or observational studies) • Study limitations: • Insufficient sample size • Lack of blinding • Lack of allocation concealment • Large losses to follow-up • Non-adherence to intention-to-treat analysis • Stopped early for benefit • Selective reporting of measured outcomes

  19. Inconsistency of Results • Detailed study methods and execution • Wide variation of treatment effect across studies • Populations varied (e.g., sicker, older) • Interventions varied (e.g., doses) • Outcomes varied (e.g., diminishing effect over time) • Increased heterogeneity = ↓ quality (I²: < 0.25 low; 0.25–0.5 moderate; > 0.5 high)
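
The I² thresholds in the last bullet can be made concrete with a short calculation: Cochran's Q is the weighted sum of squared deviations from the pooled effect, and I² = max(0, (Q - df) / Q). A minimal sketch follows, with made-up study estimates.

```python
# Computing Cochran's Q and I-squared from per-study effect estimates
# (log relative risks) and their standard errors. Inputs are invented.
import math

def i_squared(effects: list, ses: list) -> float:
    """I^2 = max(0, (Q - df) / Q): the proportion of variability across
    studies due to heterogeneity rather than chance."""
    weights = [1 / se**2 for se in ses]                     # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

# Hypothetical log-RR estimates from four studies:
log_rrs = [math.log(0.55), math.log(0.70), math.log(0.40), math.log(1.10)]
ses = [0.15, 0.20, 0.25, 0.18]
i2 = i_squared(log_rrs, ses)
label = "low" if i2 < 0.25 else "moderate" if i2 <= 0.5 else "high"
print(f"I^2 = {i2:.0%} ({label} heterogeneity)")            # high for these inputs
```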

  20. Indirectness of Evidence • The extent to which the people, interventions, and outcome measures are similar to those of interest • Indirect comparisons • Different populations • Different interventions • Different outcomes measured • Comparisons not applicable to question/outcome

  21. Imprecision • Precision of the estimate (random error) • Results include just a few events or observations • Sample size lower than the calculated optimal information size (needed for decision-making) • Confidence intervals sufficiently wide that the estimate is consistent with either important harms or important benefits
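
The last bullet is the operational test: a confidence interval wide enough to include both appreciable benefit and appreciable harm signals imprecision. Here is a minimal sketch with illustrative thresholds; GRADE itself leaves the decision thresholds to the reviewers.

```python
# An imprecision check: does the 95% CI around a relative risk span both
# appreciable benefit and appreciable harm? Thresholds are illustrative.
import math

def rr_ci(rr: float, se_log_rr: float, z: float = 1.96):
    """95% CI for a relative risk, computed on the log scale."""
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return lo, hi

def imprecise(rr: float, se_log_rr: float,
              benefit: float = 0.75, harm: float = 1.25) -> bool:
    """Flag imprecision when the CI includes both a clinically important
    benefit and a clinically important harm."""
    lo, hi = rr_ci(rr, se_log_rr)
    return lo < benefit and hi > harm

print(rr_ci(0.9, 0.3))       # roughly (0.50, 1.62): a wide interval
print(imprecise(0.9, 0.3))   # True: consistent with benefit or harm
```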

  22. Bias

  23. Key Elements: Ladders • Effect • Dose response • Plausible confounders
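
Putting the chutes (slide 17) and ladders together: a body of evidence starts at a level set by study design, moves down one step per serious concern, and up one step per upgrading factor. The scoring below is a common simplification for illustration, not the official GRADE algorithm, which requires judgment at each step.

```python
# A sketch of GRADE's "chutes and ladders" logic: start from the study
# design, subtract downgrades, add upgrades, and map the result onto the
# four GRADE levels. A simplified scheme, not GRADE's official process.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(randomized: bool,
                  downgrades: int,  # design limitations, inconsistency,
                                    # indirectness, imprecision, reporting bias
                  upgrades: int     # large effect, dose-response,
                                    # plausible confounders
                  ) -> str:
    start = 3 if randomized else 1  # RCTs start high, observational low
    score = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[score]

# An observational body of evidence with a very large effect:
print(grade_quality(randomized=False, downgrades=0, upgrades=2))  # high
# RCTs with serious inconsistency and imprecision:
print(grade_quality(randomized=True, downgrades=2, upgrades=0))   # low
```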

  24. Effect: Magnitude of treatment effect • Strong effect • e.g., a meta-analysis of observational studies found that bicycle helmets reduce the risk of head injuries: RR 0.31 (95% CI, 0.13 to 0.37) • Very strong effect • e.g., a meta-analysis of warfarin prophylaxis in cardiac valve replacement: relative risk for thromboembolism with warfarin was 0.17 (95% CI, 0.13 to 0.24)
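
For reference, relative risks like those quoted above come from 2x2 event tables. A minimal sketch of the computation, with invented counts; the log-scale CI is the standard large-sample approximation.

```python
# Relative risk and 95% CI from a 2x2 table. Counts are invented.
import math

def relative_risk(a: int, b: int, c: int, d: int):
    """RR and 95% CI for exposed (a events out of a+b) vs. unexposed
    (c events out of c+d); the CI is computed on the log scale."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 20/200 events with the intervention, 60/180 without.
rr, lo, hi = relative_risk(20, 180, 60, 120)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")   # RR 0.30 (0.19 to 0.48)
```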

  25. Dose Response: Evidence of a dose-response gradient • The more exposure to an intervention, the greater the harm • Higher warfarin dose → higher INR → increased bleeding
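
A dose-response gradient can be checked informally as monotonically increasing event rates across ordered exposure tiers. Below is a toy sketch with invented bleeding counts across three warfarin dose tiers; a real analysis would use a formal trend test.

```python
# A crude dose-response check: do event rates rise monotonically with
# exposure level? Dose tiers and counts are invented for illustration.
def monotonic_gradient(events: list, totals: list) -> bool:
    """True if event rates strictly increase across ordered dose tiers."""
    rates = [e / n for e, n in zip(events, totals)]
    return all(r1 < r2 for r1, r2 in zip(rates, rates[1:]))

# Bleeding events across three increasing warfarin dose tiers:
events = [3, 9, 21]
totals = [100, 100, 100]
print(monotonic_gradient(events, totals))   # True: supports upgrading
```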

  26. Plausible Confounders • All plausible confounders would have reduced the demonstrated effect • or would have produced a spurious effect when the results show no effect

  27. Evidence of Association • Strong evidence of association • Significant relative risk of > 2 (or < 0.5) based on consistent evidence from two or more observational studies, with no plausible confounders • Very strong evidence of association • Significant relative risk of > 5 (or < 0.2) based on direct evidence with no major threats to validity
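
The magnitude thresholds on this slide translate directly into code. The sketch below encodes only the RR rule; consistency across studies and the absence of confounders still have to be judged separately, as the slide notes.

```python
# The slide's association-strength thresholds:
# RR > 2 (or < 0.5) = strong; RR > 5 (or < 0.2) = very strong.
def association_strength(rr: float) -> str:
    if rr > 5 or rr < 0.2:
        return "very strong"
    if rr > 2 or rr < 0.5:
        return "strong"
    return "not strong"

print(association_strength(0.17))   # very strong (the warfarin example)
print(association_strength(0.31))   # strong (the helmet example)
```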

  28. Quality of Supporting Evidence

  29. Outcomes: Critical or Important Guyatt, G. H., Oxman, A. D., Kunz, R., Vist, G. E., Falck-Ytter, Y., & Schünemann, H. J. (2008). What is “quality of evidence” and why is it important to clinicians? BMJ, 336, 995-998.

  30. Strength of Recommendations: Strong vs. Weak


  32. Strong Recommendation • Desirable effects clearly outweigh undesirable effects or vice versa • Certain that benefits do, or do not, outweigh risks & burdens

  33. Weak Recommendation • Desirable effects closely balanced with undesirable effects • Benefits, risks & burdens are finely balanced OR appreciable uncertainty exists about the magnitude of benefits & risks

  34. Moving from Strong to Weak: To treat or not to treat… • Absence of high-quality evidence • Imprecise estimates • Uncertainty or variation in how individuals value the outcomes • Small net benefits • Uncertainty whether the net benefits are worth the costs
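
The factors on this slide act as switches that move a recommendation from strong to weak. Here is a minimal sketch of that decision logic; the boolean inputs are illustrative simplifications of panel judgments, not a GRADE-specified formula.

```python
# A sketch of the strong-vs-weak decision: any factor pulling toward
# uncertainty moves a recommendation from strong to weak. Inputs are
# simplified stand-ins for a guideline panel's judgments.
def recommendation_strength(high_quality_evidence: bool,
                            precise_estimates: bool,
                            values_consistent: bool,
                            large_net_benefit: bool,
                            worth_the_cost: bool) -> str:
    if all([high_quality_evidence, precise_estimates,
            values_consistent, large_net_benefit, worth_the_cost]):
        return "strong"
    return "weak"

# Uncertainty about how patients value the outcomes weakens the call:
print(recommendation_strength(True, True, False, True, True))   # weak
```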

  35. Strong Recommendations

  36. Weak Recommendations

  37. Guideline Evaluation-AGREE II • Appraisal of Guidelines for Research and Evaluation • Focus – evaluation of practice guidelines • Checklist of 23 questions • Six domains • Scope and Purpose • Stakeholder Involvement • Rigor of Development • Clarity and Presentation • Applicability • Editorial Independence
