
When To Select Observational Studies as Evidence for Comparative Effectiveness Reviews


Presentation Transcript


  1. When To Select Observational Studies as Evidence for Comparative Effectiveness Reviews Prepared for: The Agency for Healthcare Research and Quality (AHRQ) Training Modules for Systematic Reviews Methods Guide www.ahrq.gov

  2. Systematic Review Process Overview

  3. Learning Objectives • To understand why reviewers should consider including observational studies in comparative effectiveness reviews (CERs) • To understand when to include observational studies in CERs • To review important considerations for deciding whether to include observational studies to assess benefits and harms

  4. Current Perspective • Comparative effectiveness reviews should always consider including observational studies. • Reviewers should explicitly state the rationale for including or excluding observational studies.

  5. Comparative Effectiveness Reviews • Systematic reviews that compare the relative benefits and harms among a range of available treatments or interventions for a given condition.

  6. Danger of Overreliance on Randomized Controlled Trials • May be unnecessary, inappropriate, inadequate, or impractical • May be too short in duration • May report intermediate outcomes rather than main health outcomes of interest • Often not available for vulnerable populations • Generally report efficacy rather than effectiveness

  7. Observational Studies • In these studies, investigators do not assign the exposure or intervention. These studies include: • All nonexperimental studies • Cohort, case-control, cross-sectional studies • We present considerations for including observational studies to assess benefits and harms separately.

  8. Using Observational Studies To Assess Benefits (I) • Reviewers should answer two questions: • Are there gaps in trial evidence for the review questions under consideration? • Will observational studies provide valid and useful information to fill these gaps and, thereby, answer the review questions?

  9. Using Observational Studies To Assess Benefits (II) • The decision flow: begin with the systematic review question (including PICOTS) and always consider controlled trials. • Are there gaps in trial evidence? If no, confine the review to controlled trials. • If yes, refocus the review question on the gaps and assess whether OSs address it. • Will OSs provide valid and useful information? Assess the suitability of OSs, considering the natural history of the disease or exposure and potential biases. • If they will, consider OSs. OS = observational study; PICOTS = population, intervention, comparator, outcome, timing, and setting.
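To make the decision flow above concrete, here is a minimal Python sketch of the same branching. The function and argument names are hypothetical and purely illustrative; the AHRQ guidance defines the logic, not an API.

```python
# Illustrative sketch of the slide's decision flow for deciding whether to
# include observational studies (OSs) when assessing benefits.
# All names are hypothetical; they are not part of the AHRQ methods.

def decide_on_observational_studies(gaps_in_trial_evidence: bool,
                                    oss_address_refocused_question: bool,
                                    oss_provide_valid_useful_info: bool) -> str:
    """Return the review strategy implied by the flowchart."""
    if not gaps_in_trial_evidence:
        # No gaps: trial evidence alone can answer the review question.
        return "Confine review to controlled trials"
    # Gaps exist: refocus the review question on the gaps, then judge whether
    # suitable OSs (natural history, potential biases) address that question.
    if oss_address_refocused_question and oss_provide_valid_useful_info:
        return "Consider OSs alongside controlled trials"
    return "Report the gap in evidence; OSs cannot validly fill it"


# Example: gaps exist and suitable OSs are available.
print(decide_on_observational_studies(True, True, True))
```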

  10. Gaps in Trial Evidence: PICOTS • Trial data may be insufficient for a number of reasons: • Population: may not be available for subpopulations or vulnerable populations • Intervention: may not be able to assign high-risk interventions randomly • Comparator: may be insufficient evidence for comparators of interest • Outcome: may report intermediate outcomes rather than main health outcomes of interest • Timing: duration of follow-up for outcomes assessment may be too short • Setting: may not represent typical practice
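As an illustration of how a review team might track these PICOTS gaps during scoping, here is a hypothetical bookkeeping sketch in Python; the class and field names are invented for this example and are not part of the AHRQ methods.

```python
# Hypothetical bookkeeping sketch: one record per PICOTS domain noting whether
# trial evidence is sufficient and, if not, why. Names and example reasons are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class PicotsGap:
    domain: str       # Population, Intervention, Comparator, Outcome, Timing, or Setting
    sufficient: bool  # is trial evidence sufficient for this domain?
    reason: str = ""  # brief note on the gap, if any

assessment = [
    PicotsGap("Population", False, "vulnerable subpopulation not enrolled in trials"),
    PicotsGap("Outcome", False, "trials report intermediate outcomes only"),
    PicotsGap("Timing", True),
]

for item in assessment:
    if not item.sufficient:
        print(f"Gap in {item.domain}: {item.reason}")
```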

  11. Are Trial Data Sufficient? (I) • Risk of bias (internal validity) • The degree to which the findings may be attributed to factors other than the intervention under review • Consistency • Extent to which effect size and direction vary within and across studies • Inconsistency may be due to heterogeneity across PICOTS • Directness • Degree to which outcomes that are important to users of the comparative effectiveness review (patients, clinicians, or policymakers) are encompassed by trial data • Health outcomes are generally most important

  12. Are Trial Data Sufficient? (II) • Precision • Includes sample size, number of studies, and heterogeneity of effect sizes • Reporting bias • Extent to which trial authors appear to have reported all outcomes examined • Applicability • Extent to which the trial data are likely to be applicable to populations, interventions, and settings of interest to the user • The review questions should reflect the study characteristics (PICOTS) of interest

  13. When To Identify Gaps in Trial Evidence • Gaps in trial evidence can be identified at a number of points in the comparative effectiveness review: • In scoping of the review • In consulting with the Technical Expert Panel • In reviewing titles and abstracts • In reviewing trial data in detail

  14. Iterative Process for Identifying Gaps in Evidence • Prepare topic: refine the topic; develop the analytic framework • Search for and select studies for inclusion: identify study eligibility criteria; search for relevant studies; select evidence for inclusion • Extract data from studies • Analyze and synthesize studies: assess the quality of individual studies; assess applicability; synthesize quantitative data; grade the strength of evidence; present findings • Report the systematic review

  15. Gaps in Trial Evidence • Reviewers may perform initial searches broadly to identify both observational studies and trials. • Or, they may perform searches sequentially and search for observational studies after reviewing trials in detail to identify gaps in evidence.

  16. Using Observational Studies To Assess Benefits • The decision flow, revisited: begin with the systematic review question (including PICOTS) and always consider controlled trials. • If there are no gaps in trial evidence, confine the review to controlled trials. • If there are gaps, refocus the review question on the gaps and assess whether OSs address it. • Assess the suitability of OSs (natural history of the disease or exposure; potential biases) to determine whether they will provide valid and useful information; if so, consider OSs. OS = observational study; PICOTS = population, intervention, comparator, outcome, timing, and setting.

  17. Will Observational Studies Provide Valid and Useful Information? • Refocus the study question on gaps in trial evidence. • Respecify the PICOTS for gaps in trial evidence. • Assess whether available observational studies (OSs) may address the review questions. • Assess the suitability of OSs to answer the review questions.

  18. Assessing the Suitability of Observational Studies To Answer the Review Questions • After identifying the gaps in evidence that observational studies (OSs) could potentially fill, reviewers should: • Consider the clinical context and natural history of the condition under investigation • Assess how potential biases may influence the results of OSs

  19. Clinical Context • Fluctuating or intermittent conditions are more difficult to assess with observational studies (OSs), especially if there is no well-formed comparison group. • For most chronic conditions, the natural history is for symptoms to wax and wane over time; regression to the mean is an important consideration. • OSs may be more useful for conditions with steady progression or decline.

  20. Potential Biases That May Limit the Suitability of Including Observational Studies • Selection bias • Performance bias • Detection bias • Attrition bias

  21. Confounding by Indication • Is a type of selection bias • Occurs when different diagnoses, severity of illness, or comorbid conditions are important reasons for physicians to assign different treatments • Is a common problem in pharmacoepidemiological studies comparing benefits • Is often difficult to adjust for, making studies with a high degree of this potential bias usually unsuitable for inclusion in a comparative effectiveness review

  22. Using Observational Studies To Assess Benefits • Observational studies (OSs) without a comparison group are rarely helpful in assessing benefits because of a high risk of bias. • In general, OSs must have a well-formed comparison group to be useful. • Establishing treatment benefits from OSs is uncommon; generally, it is necessary that efficacy be established first in randomized controlled trials.

  23. Harms Assessments • Assessing harms can be difficult. • Trials often focus on benefits, with little effort to balance the assessment of benefits against the assessment of harms. • Observational studies are almost always necessary to assess harms adequately. • There is a trade-off between the comprehensiveness gained by reviewing all available observational studies that report harms and the loss of quality from their increased risk of bias.

  24. Using Randomized Controlled Trials To Assess Harms (I) • Randomized controlled trials (RCTs) are the gold standard for evaluating efficacy. • Relying solely on RCTs to evaluate harms in comparative effectiveness reviews is problematic. • Most RCTs lack prespecified hypotheses for harms because they are designed to evaluate benefits. • Assessment of harms is often a secondary consideration. • The quality and quantity of harms reporting are frequently inadequate. • Few studies have sufficient sample sizes or duration to adequately assess uncommon or long-term harms.

  25. Using Randomized Controlled Trials To Assess Harms (II) • Most randomized controlled trials (RCTs) are efficacy trials. • They assess benefits and harms in ideal, homogeneous populations and settings. • Patients who are more susceptible to harms are often underrepresented. • Few RCTs directly compare alternative treatment strategies. • The potential for publication bias and selective outcome reporting bias should be considered. • RCTs may not be available.

  26. Using Randomized Controlled Trials To Assess Harms (III) • Nevertheless, head-to-head randomized controlled trials (RCTs) provide the most direct evidence on comparative harms. • Placebo-controlled RCTs can provide important information. • Comparative effectiveness reviews (CERs) should include both head-to-head and placebo-controlled RCTs for assessment of harms. • In lieu of RCTs, CERs may incorporate findings of well-conducted systematic reviews if they evaluated the specific harms of interest.

  27. Using Data From Unpublished Trials To Assess Harms • Consider including the results of unpublished completed or terminated randomized controlled trials and unpublished results from published trials. • The United States Food and Drug Administration Web site and www.ClinicalTrials.gov are important sources. • Reviewers must consider whether the risk of bias can be fully assessed. • When a significant number of published trials fail to report important harms, reviewers should report this gap in the evidence and consider efforts to obtain unpublished data.

  28. Using Observational Studies To Assess Harms • Observational studies (OSs) are almost always necessary to assess harms adequately. • The exception is when there are sufficient data from randomized controlled trials to estimate harms reliably. • OSs may provide the best or only data for assessing harms in minority or vulnerable populations who are underrepresented in trials. • The types of OSs included in a comparative effectiveness review will vary. • Different types of OSs might be included or rendered irrelevant by data available from stronger study designs.

  29. Hypothesis Testing Versus Hypothesis Generating • Determining whether a hypothesis is being tested or generated is an important consideration in deciding which observational studies to include in harms assessments. • Case reports and case series are hypothesis generating. • Cohort and case-control studies are well suited for testing hypotheses that one intervention is associated with a greater risk for an adverse event than another and for quantifying that risk. Chou R, et al. J Clin Epidemiol 2010;63:502-12.

  30. Types of Observational Studies That Can Be Used To Assess Harms • Cohort and case-control studies • Routinely search for and include cohort and case-control studies, except when randomized controlled trial data are sufficient and valid • OSs based on patient registries • OSs based on analyses of large databases • Case reports, case series, and postmarketing surveillance studies • Include studies of new medications for which sufficient harms data are not available • Other OSs

  31. Screening Observational Studies for Inclusion in Harms Assessments • Often there are many more observational studies (OSs) than trials; evaluating a large number of OSs can be impractical when conducting a comparative effectiveness review (CER). • Criteria commonly used to screen OSs for inclusion in CERs: • Minimum duration of follow-up • Minimum sample size • Defined threshold for risk of bias • Study design restrictions (cohort and case-control) • Specific population of interest
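As an illustration of how the screening criteria listed above might be applied mechanically, the sketch below encodes them as a simple filter. The thresholds and field names are hypothetical, not AHRQ-recommended values.

```python
# Hypothetical sketch of the screening criteria named on this slide, applied
# as a simple inclusion filter. Thresholds and field names are illustrative
# only, not AHRQ-recommended values.
from dataclasses import dataclass

@dataclass
class ObservationalStudy:
    design: str                  # e.g., "cohort", "case-control", "case series"
    followup_months: float
    sample_size: int
    risk_of_bias: str            # "low", "moderate", or "high"
    population_of_interest: bool

def passes_screen(study: ObservationalStudy,
                  min_followup_months: float = 12,
                  min_sample_size: int = 100,
                  allowed_designs=("cohort", "case-control"),
                  max_risk_of_bias: str = "moderate") -> bool:
    """Apply the inclusion criteria; return True if the study is kept."""
    rob_rank = {"low": 0, "moderate": 1, "high": 2}
    return (study.design in allowed_designs
            and study.followup_months >= min_followup_months
            and study.sample_size >= min_sample_size
            and rob_rank[study.risk_of_bias] <= rob_rank[max_risk_of_bias]
            and study.population_of_interest)

candidate = ObservationalStudy("cohort", 24, 500, "moderate", True)
print(passes_screen(candidate))  # True under these illustrative thresholds
```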

  32. Key Messages • Evidence from trials is often insufficient to answer all the key questions to be addressed in comparative effectiveness reviews (CERs). • The default strategy for CERs should be to consider including observational studies (OSs). • CERs should explicitly state the rationale for including or excluding OSs. • To assess benefits, reviewers should consider two questions: • Are there gaps in trial evidence for the review questions under consideration? • Will observational studies provide valid and useful information to address key questions? • To assess harms, reviewers should routinely search for and include comparative cohort studies and case-control studies.

  33. References • Norris S, Atkins D, Bruening W, et al. Comparative effectiveness reviews and observational studies. In: Agency for Healthcare Research and Quality. Methods Guide for Comparative Effectiveness Reviews. Rockville, MD. In press. • Chou R, Aronson N, Atkins D, et al. AHRQ series paper 4: assessing harms when comparing medical interventions: AHRQ and the Effective Health Care Program. J Clin Epidemiol 2010;63:502-12.

  34. Authors • This presentation was prepared by Dan Jonas, M.D., M.P.H., and Karen Crotty, Ph.D., M.P.H., members of the Research Triangle Institute–University of North Carolina Evidence-based Practice Center. • The module is based on Norris S, Atkins D, Bruening W, et al. Comparative effectiveness reviews and observational studies. In: Agency for Healthcare Research and Quality. Methods Guide for Comparative Effectiveness Reviews. Rockville, MD. In press.
