
Randomized Trials: A Brief Overview








  1. Randomized Trials: A Brief Overview Michael Porter July 20, 2004 Epi 590

  2. Randomized Trial • Comparative study between two or more interventions • Exposure to the intervention is determined by random allocation • AKA: RCT, experimental design

  3. “Hierarchy” of Evidence • Randomized Trial (experimental) • Observational Analytic (e.g. case-control, cohort) • Observational Descriptive (case series, descriptive analyses)

  4. The Basic Model • Population → apply eligibility criteria → Ineligible / Eligible • Eligible → Recruitment → Decline to participate / Agree to participate (informed consent) • Consenting participants → Randomization → Treatment A / Treatment B • Each arm → Good outcome / Poor outcome

  5. Strengths • Highest level of evidence • Everything can be specified up front • Ideal scenario for inferential statistics • Eliminates confounding

  6. Weaknesses • $$$ • Time consuming • Patients and providers must relinquish treatment decision to random chance • Generalizability • Subject to dropouts, crossovers, non-compliance • Still vulnerable to bias

  7. Two Broad Categories • Pragmatic • attempt to simulate clinical realities more accurately during patient recruitment and during formulation of the randomly allocated treatment groups (effectiveness) • Explanatory • attempt to answer a more specific and narrow question; to maximize their ability to do this, eligibility criteria may select a more homogeneous set of patients (efficacy)

  8. Bias • Systematic error within the study that results in a mistaken estimate of the effect of therapy on disease • Bias can be introduced into any step of the process, including enrollment, randomization, and assessment of outcomes

  9. Internal validity • The ability of a trial to come to the correct conclusion regarding the question being studied • Determined by the protocol and execution of the trial

  10. External validity • The ability of a trial to produce results that are generalizable to the larger population of patients with the disease • Determined by the eligibility criteria, the protocol, and the primary outcomes

  11. Randomization • The key step • The biggest strength, and often the biggest challenge • Randomization breaks the link between any unmeasured confounding variables and treatment status

  12. Randomization • 2 important elements • concealment from the investigator • unpredictability • “Randomized trials appear to annoy human nature- if properly conducted, indeed they should”* • Still vulnerable to selection bias *Schulz, K.F., Subverting randomization in controlled trials. JAMA, 1995. 274(18): p. 1456-8.
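A minimal sketch of what concealed, unpredictable allocation can look like in practice, assuming Python; the function name, block size, and arm labels are illustrative, and the slides do not specify any particular allocation method:

```python
import random

def permuted_block_schedule(n_subjects, block_size=4, arms=("A", "B"), seed=None):
    """Generate a permuted-block allocation list before enrollment begins.

    Illustrative only: in a real trial the list is produced and held by
    someone other than the enrolling investigator (e.g. a statistician or
    pharmacy), so the next assignment stays concealed and unpredictable.
    """
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    schedule = []
    while len(schedule) < n_subjects:
        block = list(arms) * per_arm   # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)             # random order within each block
        schedule.extend(block)
    return schedule[:n_subjects]

# Example: 12 concealed assignments, balanced within every block of 4
print(permuted_block_schedule(12, block_size=4, seed=2004))
```

Keeping the schedule out of the investigator's hands is what guards against the selection bias mentioned above.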

  13. Subverting Randomization • When researchers failed to adequately conceal randomization from the investigators, estimated treatment effects were on average 41% larger than in trials where the randomization process was concealed appropriately Schulz, K.F., et al., Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA, 1995. 273(5): p. 408-12

  14. Blinding • Single blinded • Subject unaware of treatment assignment • Double blinded • Subject + evaluator unaware of treatment assignment • Blinding provides protection against • Placebo effect • Observation bias

  15. Sample Size (a.k.a. “power calculation”) • Significance level • α • Type 1 error rate • p-value • Power • 1-β • 1-Type 2 error rate • Variability of the outcome • Degree of difference in outcomes
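Since the slide lists the ingredients of a power calculation (significance level, power, variability, and the size of difference to detect), here is a minimal sketch of how they combine for a two-arm comparison of proportions, assuming Python and the normal approximation; the outcome rates of 20% and 30% are hypothetical:

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    """Approximate sample size per arm for a two-sided comparison of two
    proportions (normal approximation); illustrative, not a trial-ready tool."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # significance level -> 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # power -> 1.28 for power = 0.90
    p_bar = (p1 + p2) / 2                # average outcome rate (variability)
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2          # degree of difference in outcomes

# Example: detect an improvement in good outcomes from 20% to 30%
print(round(n_per_arm(0.20, 0.30)))      # about 392 subjects per arm
```

Smaller differences or noisier outcomes drive the required sample size up quickly.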

  16. The “truth” vs. the study conclusion (using α = 0.05 and power = 0.90 from the previous slide)

                                     The “truth”
  Study conclusion             Treatments do not differ     Treatments differ
  Treatments do not differ     0.95 (1-α)                   0.10 (β, Type 2 error)
  Treatments differ            0.05 (α, Type 1 error)       0.90 (power, 1-β)

  17. Outcome • Should only have one or two primary outcomes to which the study is powered • Be careful of secondary analyses • Multiple looks require adjustment of significance level
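The slide's warning about multiple looks can be made concrete with a minimal sketch of the simplest adjustment, the Bonferroni correction, assuming Python; real trials typically use more refined group-sequential boundaries for interim analyses:

```python
def bonferroni(p_values, alpha=0.05):
    """Compare each p-value to alpha divided by the number of tests.

    A deliberately simple illustration of why extra outcomes or interim
    looks require a stricter threshold than the nominal alpha."""
    threshold = alpha / len(p_values)
    return [(p, p < threshold) for p in p_values]

# Three secondary analyses: only p-values below 0.05 / 3 ~= 0.0167 count
print(bonferroni([0.03, 0.01, 0.20]))
# -> [(0.03, False), (0.01, True), (0.2, False)]
```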

  18. [Flow diagram over time, no crossovers] Randomize (n=1000) → Treatment A (n=500) → Outcome (n=500); Treatment B (n=500) → Outcome (n=500)

  19. [Flow diagram over time, with crossovers] Randomize (n=1000) → Treatment A (n=500), Treatment B (n=500); n=200 cross from B to A and n=100 cross from A to B, so outcomes are assessed in n=600 on Treatment A and n=400 on Treatment B: which group should they be analyzed in?

  20. Crossovers • Analyze based on treatment received? • Exclude crossovers from the analysis? • Analyze based on initial random treatment assignment?

  21. Intention to treat analysis • Analyze the outcomes based on the original randomized assignments, regardless of treatment actually received
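A minimal sketch of the difference between intention-to-treat and as-treated analysis, assuming Python; the field names and the four hypothetical subjects (two of them crossovers) are illustrative only:

```python
def outcome_rates(subjects, group_key):
    """Proportion of good outcomes per arm, grouping subjects by group_key."""
    groups = {}
    for s in subjects:
        groups.setdefault(s[group_key], []).append(s["good_outcome"])
    return {arm: sum(v) / len(v) for arm, v in groups.items()}

subjects = [
    {"assigned": "A", "received": "A", "good_outcome": 1},
    {"assigned": "A", "received": "B", "good_outcome": 0},  # crossover A -> B
    {"assigned": "B", "received": "B", "good_outcome": 0},
    {"assigned": "B", "received": "A", "good_outcome": 1},  # crossover B -> A
]

print("intention to treat:", outcome_rates(subjects, "assigned"))  # analyze as randomized
print("as treated:       ", outcome_rates(subjects, "received"))   # analyze as actually treated
```

In this toy example the intention-to-treat comparison is balanced, while the as-treated grouping makes arm A look perfect; only the former preserves the protection that randomization provides.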

  22. Randomized study ≠ RCT • Survey • Community interventions • Clinic interventions • Be novel!

  23. Ethical Issues • Equipoise • Data monitoring and safety committees • Eligibility criteria

  24. Summary • RCT- the highest level of evidence • Avoids confounding, but not necessarily bias • Expensive, time consuming • Randomization- preserve and protect it • Blinded when possible • Single important outcome • Intention to treat • Equipoise
