
Assessing Applicability

Learn to assess applicability in systematic reviews so that synthesized evidence is relevant to real-world settings. Understand key concepts and tools for framing and judging applicability, including PICOS domains and study parameters, and why applicability should be judged separately from strength of evidence.


Presentation Transcript


  1. Assessing Applicability Prepared for: The Agency for Healthcare Research and Quality (AHRQ) Training Modules for Systematic Reviews Methods Guide www.ahrq.gov

  2. Systematic Review Process Overview [Slide content not transcribed.]

  3. Learning Objectives • To describe applicability and substantiate its importance • To delineate a systematic approach to assessing applicability by using PICOS domains • To recognize common features that may impact the applicability of studies PICOS domains = population, intervention, comparators, outcomes, and setting

  4. Defining Applicability • Applicability is defined as the extent to which the results observed in published studies are likely to reflect the expected outcomes when an intervention is applied to broader populations under real-world conditions. • Similar terms include external validity, generalizability, directness, and relevance.

  5. The Importance of Applicability in Systematic Reviews • Unless the evidence synthesized within a systematic review is applicable to relevant populations under real-world practice conditions, the conclusions may be inappropriate for guiding clinical or policy decisions.

  6. Framing Applicability Issues • Frame issues of applicability with reference to specific clinical or policy questions the review is intended to inform. • Applicability needs to be considered at the outset: • when the scope of the review is determined, and • when the key questions are identified. • This module focuses on handling applicability issues at the analyze-and-synthesize-data stage.

  7. Applicability Resources • Clinical experts and stakeholders can provide general information important to framing applicability issues. • What does the population of interest look like? • What types of care or procedures are routine or represent standard of care? • Are certain subpopulations characteristically different from others?

  8. Other Applicability Resources • Registry or epidemiological information, practice guidelines, consensus papers, book chapters, and general reviews can provide useful applicability information. • Applicability issues do not have to be reviewed for each study. • They are used to place the available literature in context.

  9. How Individual Studies Consider Applicability • Studies often do not report on factors needed to judge applicability. • Increased use of CONSORT diagrams helps, but deficiencies remain. • Only a small number of studies are designed with applicability in mind. • Most trials are efficacy trials that maximize internal validity at the expense of applicability. CONSORT = Consolidated Standards of Reporting Trials

  10. Study Parameters Affect Applicability • Study parameters may affect applicability through these different mechanisms: • Effect modifiers • Modifiers of baseline rates • Factors that reduce ability to generalize results to everyday practice

  11. Applicability Judged for Each Question • Applicability should be judged separately for each important question. • Factors influencing benefits and harms often depend on distinct physiologic processes. • The most applicable evidence may differ when considering a benefit or a harm.

  12. Applicability of a Body of Evidence • Applicability of a body of evidence is not the same as applicability of individual studies. • A collection of studies may provide broad applicable evidence even if individual studies do not. • Consistency of study results across different populations and settings increases confidence in the applicability of the results. • Applicability depends on context and cannot be assessed with a simple rating scale. • There are no empiric data validating any scoring system for applicability.

  13. Judge Applicability and Strength of Evidence Separately • Applicability is best reported separately from quality or strength of evidence. • Internal validity concerns are not altered by changes in perspective, but applicability is. • Some evidence hierarchies (e.g., GRADE) consider applicability (or “directness”) when assessing the quality of evidence. • For systematic reviews in which multiple perspectives (clinician, patient, policymaker) are considered, applicability should be assessed separately.

  14. Three Steps To Assess Applicability • Stepwise approach to applicability: • Identify beforehand a limited number of factors that are most likely to influence applicability of evidence for the specific issue being reviewed and report the relevant elements from individual studies. • Judge limitations to the applicability of individual studies based on those factors. • Summarize the applicability of the body of evidence from multiple studies with respect to specific populations, interventions, comparisons, and outcomes.

  15. Step 1. Determine and Report the Most Important Factors in PICOS Format • Identify most important applicability issues • Understand the real-world context • Understand how studies differ from the real-world context • Use PICOS format to organize factors • Report features from individual studies that impact applicability in evidence tables PICOS = population, intervention, comparators, outcomes, and setting
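To make Step 1 concrete, here is a minimal Python sketch of how the abstracted PICOS applicability factors for one study might be recorded for an evidence table. The field names and example values are hypothetical illustrations, not part of the AHRQ guidance.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicabilityRecord:
    """PICOS applicability factors abstracted from one study (hypothetical schema)."""
    study_id: str
    population: str        # how enrollees compare with real-world patients
    intervention: str      # dose, duration, co-interventions, delivery expertise
    comparator: str        # relevance of the comparison to current practice
    outcomes: str          # surrogate vs. clinically meaningful endpoints
    setting: str           # country, level of care, practice context
    limitations: list[str] = field(default_factory=list)  # "comments"/"limitations" column

# Hypothetical entry for one evidence table row.
record = ApplicabilityRecord(
    study_id="Trial A",
    population="Enrollees younger and healthier than typical patients",
    intervention="Weekly clinic visits, beyond routine community care",
    comparator="Low-dose active comparator",
    outcomes="Radiographic progression (surrogate outcome)",
    setting="Tertiary academic centers",
    limitations=["Highly selected population", "Surrogate outcome"],
)
```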

  16. Population and Applicability: Examples • In a trial of women with osteoporosis, only 4,000 of the 54,000 women screened were enrolled; the enrollees were younger, healthier, and more adherent to therapy than is typical of women with osteoporosis. • A trial of etanercept for juvenile rheumatoid arthritis excluded patients who had side effects during an active run-in period; the trial found a low incidence of adverse events. • Clinical trials used to inform Medicare decisions enrolled patients who were younger (60 vs. 75 years of age) and more often male (75% vs. 42%) than is typical of Medicare patients with cardiovascular disease.

  17. Population and Applicability [Slide content not transcribed.]

  18. Intervention and Applicability: Examples • Studies of behavioral modification to promote a healthy diet employ more visits, of longer duration, than are available to most patients in the community. • The use of pill counts in antiretroviral trials does not always translate into effectiveness in real-world practice. • Combining iron and zinc attenuates the ability of iron to raise hemoglobin levels. • Trials of carotid endarterectomy select surgeons with extensive experience and low complication rates, who are not representative of average vascular surgeons.

  19. Intervention and Applicability [Slide content not transcribed.]

  20. Comparator and Applicability: Examples • A fixed-dose study that compared high-dose duloxetine to low-dose paroxetine • Many trials evaluating magnesium as a treatment for acute myocardial infarction that were conducted before thrombolytic drugs, antiplatelet drugs, β-blockers, and primary percutaneous coronary intervention (PCI) were used • Only 1 of 23 trials that compared bypass surgery to PCI used drug-eluting stents

  21. Outcomes and Applicability: Examples • Trials of biologics for rheumatoid arthritis that used radiographic progression rather than symptom evaluations as an outcome measure • Trials comparing cyclooxygenase-2 inhibitors and nonsteroidal anti-inflammatory drugs that used endoscopy-evaluated ulceration rather than symptomatic ulcers as an outcome measure

  22. Comparator, Outcomes, and Applicability [Slide content not transcribed.]

  23. Setting and Applicability: Examples • Studies evaluating the benefits of breast self-examinations conducted in China and Russia, countries that do not employ routine mammography screening as in the United States • Studies of open surgical abdominal aortic aneurysm repair showing an inverse relationship between hospital volume and short-term mortality

  24. Setting and Applicability [Slide content not transcribed.]

  25. Step 2. Make and Report Judgments About Major Limitations to the Applicability of Individual Studies • Make judgments about applicability of individual studies • Look for effectiveness trials • Specifically designed to produce applicable results • Report on applicability • Highlight effectiveness trials • Compare and contrast effectiveness and efficacy trials • Clearly report and highlight major applicability issues of efficacy trials and what impact they may have on the interpretation of the study

  26. Effectiveness Trials • Seven criteria: • Enroll a primary-care population • Have less-stringent eligibility criteria • Assess health-related outcomes • Have a long study duration and clinically relevant treatment modalities • Assess for adverse events • Have an adequate sample size to assess minimally important differences from a patient perspective • Use intention-to-treat analysis Gartlehner G, et al. J Clin Epidemiol 2006;59:1040-8; Gartlehner G, et al. Int J Technol Assess Health Care 2009;25:323-30.
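As a rough illustration of how the seven criteria could be applied, the sketch below scores a study against a checklist. The criterion keys paraphrase the slide, and the six-of-seven threshold follows the cut-off discussed by Gartlehner et al. (2006) but should be treated as an assumption here, not a fixed rule.

```python
# Criterion keys paraphrasing the seven items on the slide (hypothetical names).
CRITERIA = [
    "primary_care_population",
    "less_stringent_eligibility",
    "health_related_outcomes",
    "long_duration_relevant_modalities",
    "adverse_events_assessed",
    "adequate_sample_for_patient_important_differences",
    "intention_to_treat_analysis",
]

def classify_trial(flags: dict[str, bool], threshold: int = 6) -> str:
    """Label a trial 'effectiveness' if it meets at least `threshold` criteria.

    The cut-off of 6 is an assumption based on Gartlehner et al. (2006).
    """
    score = sum(flags.get(criterion, False) for criterion in CRITERIA)
    return "effectiveness" if score >= threshold else "efficacy"

print(classify_trial({c: True for c in CRITERIA}))             # effectiveness
print(classify_trial({"intention_to_treat_analysis": True}))   # efficacy
```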

  27. Make Judgments About the Applicability of Individual Studies • For efficacy trials, clearly report characteristics that may limit applicability. • Describe those characteristics in the text or in evidence tables under the heading “comments” or “limitations.” • Describe how important factors would affect applicability and the expected direction and magnitude of bias.

  28. Evidence Table Template for Applicability [Sample evidence table not transcribed.] After reviewing this sample evidence table, assume that four other trials are available with similar PICOS domains. ACE = angiotensin-converting enzyme; NYHA = New York Heart Association; PICOS = population, intervention, comparators, outcomes, and setting Slutsky J, et al. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/118/324/2008_1118CompareInterventions.pdf.

  29. Step 3. Consider and Summarize the Applicability of a Body of Evidence From Individual Studies • Describe the limitations of aggregate evidence in the conclusions. • Generate a summary applicability table with a PICOS structure. • Aggregate applicability is not the sum of applicability of individual studies. • Each individual study can have relatively low applicability in at least one PICOS domain, but the body of evidence can have high applicability. PICOS = population, intervention, comparators, outcomes, and setting
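One hedged way to see why aggregate applicability is not a simple sum: in the pandas sketch below (with invented studies and flags), every study has a limitation in some PICOS domain, yet no single domain is limited across the whole body of evidence.

```python
import pandas as pd

# Hypothetical per-study flags: True = notable applicability limitation in that domain.
studies = pd.DataFrame(
    {
        "population":   [True,  False, False, False],
        "intervention": [False, True,  False, False],
        "comparator":   [False, False, True,  False],
        "outcomes":     [False, False, False, True],
        "setting":      [False, False, False, False],
    },
    index=["Study 1", "Study 2", "Study 3", "Study 4"],
)

# Simple (assumed) aggregation rule: a domain limits the body of evidence
# only if every study shares the limitation.
body_limitations = studies.all(axis=0)
print(body_limitations)  # all False: each study has a gap, but together the studies cover every domain
```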

  30. Which Series of Studies Has an Applicability Issue? [Figure: Studies 1–4 arranged along an axis from lower to higher age under two scenarios; in one scenario the studies cluster within a narrow age band, while in the other they are spread across the age range.]
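The age-scenario comparison can be made concrete with a small sketch; the age ranges below are invented for illustration and do not reproduce the slide's figure.

```python
# Hypothetical enrolled-age ranges (min, max) for each study under two scenarios.
scenario_1 = {"Study 1": (40, 55), "Study 2": (45, 60), "Study 3": (50, 62), "Study 4": (42, 58)}
scenario_2 = {"Study 1": (40, 55), "Study 2": (55, 70), "Study 3": (68, 80), "Study 4": (78, 90)}

def covers(age_ranges: dict[str, tuple[int, int]], target: tuple[int, int]) -> bool:
    """True if the studies' combined age span covers the target range (assumes no gaps)."""
    lowest = min(lo for lo, _ in age_ranges.values())
    highest = max(hi for _, hi in age_ranges.values())
    return lowest <= target[0] and highest >= target[1]

target_population = (40, 85)  # hypothetical real-world age range of interest
print(covers(scenario_1, target_population))  # False: studies cluster at younger ages
print(covers(scenario_2, target_population))  # True: studies span the age spectrum
```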

  31. Step 3. Applicability Summary Table Template [Slide content not transcribed.]

  32. Step 3. Completed Applicability Summary Table [Slide content not transcribed.]

  33. Summary Table for Body of Evidence Applicability [Slide content not transcribed.] ACE = angiotensin-converting enzyme; NYHA = New York Heart Association

  34. Key Messages (I) • The PICOS framework is useful for organizing contextual factors of interest. • Input from clinical experts/stakeholders and a review of current practices can help identify specific applicability issues. • Population-based surveys, pharmacoepidemiologic studies, and large case series or registries can help determine how trial populations and circumstances differ from normal practice. PICOS = population, intervention, comparators, outcomes, and setting

  35. Key Messages (II) • Reviewers should routinely select the most important characteristics that may limit applicability and assess whether variation within the factors may modify outcome effects. • Reports should clearly highlight issues about applicability of individual studies in a “comments” or “limitations” section in evidence tables or in the text. • Meta-regression and/or separate applicability summary tables may help reviewers, and those using the reports, to see how the body of evidence applies to the question at hand. • Judgments about applicability of the evidence should consider the entire body of studies. • Important limitations of the applicability of the evidence should be described within each summary conclusion.
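Since the slide mentions meta-regression as one tool for exploring how the body of evidence applies to the question at hand, here is a minimal fixed-effect meta-regression sketch using statsmodels weighted least squares; the effect sizes, variances, and covariate values are invented for illustration only.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: log odds ratios, their variances, and mean enrollee age.
effects = np.array([-0.40, -0.30, -0.10, 0.05])
variances = np.array([0.04, 0.03, 0.05, 0.06])
mean_age = np.array([52.0, 58.0, 66.0, 74.0])

# Fixed-effect meta-regression: regress effect size on the study-level covariate,
# weighting each study by inverse variance (a random-effects version would add
# an estimated tau^2 to each variance).
X = sm.add_constant(mean_age)
fit = sm.WLS(effects, X, weights=1.0 / variances).fit()
print(fit.params)  # slope indicates how the treatment effect varies with mean study age
```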

  36. References • Gartlehner G, Hansen RA, Nissman D, et al. A simple and valid tool distinguished efficacy from effectiveness studies. J Clin Epidemiol 2006;59:1040-8. • Gartlehner G, Thieda P, Hansen RA, et al. Inadequate reporting of trials compromises the applicability of systematic reviews. Int J Technol Assess Health Care 2009;25:323-30. • Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002. • Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol 2009;62:464-75.

  37. Author • This presentation was prepared by C. Michael White, Pharm.D., FCP, FCCP, a member of the University of Connecticut/Hartford Hospital Evidence-based Practice Center. • This module is based on chapter 6 in version 1.0 of the Methods Reference Guide for Effectiveness and Comparative Effectiveness Reviews (available at: http://effectivehealthcare.ahrq.gov/repFiles/2007_10DraftMethodsGuide.pdf).
