
Introduction to Clinical Trials - Bias and the Need for Randomized Studies

Presentation Transcript


  1. Introduction to Clinical Trials - Bias and the Need for Randomized Studies Rick Chappell, Ph.D. Professor, Department of Biostatistics and Medical Informatics University of Wisconsin School of Medicine & Public Health chappell@stat.wisc.edu BMI 542 – Week 1, Lecture 1

  2. Outline A. Types of clinical research studies B. Biases in clinical research studies C. Resources for writing protocols and reports of RCTs D. References

  3. Good Ethics is Good Science: “If a research study is so methodologically flawed that little or no reliable information will result, it is unethical to put subjects at risk or even to inconvenience them through participation in such a study. … Clearly, if it is not good science, it is not ethical.” - U.S. Dept. of Health and Human Services, Policy for Protection of Human Subjects (45 CFR 46, 1/1/92 ed.)

  4. What is Good Science? The NIH (2016): Rigor ensures “… robust and unbiased experimental design, methodology, analysis, interpretation, and reporting of results.” Reproducibility “… validates the original results”

  5. What is Good Science? The NIH (2016): Rigor ensures “… robust and unbiased experimental design, methodology, analysis, interpretation, and reporting of results.” Reproducibility “… validates the original results” Bias works against both attributes.

  6. What is Good Science? The NIH (2016): Rigor ensures “… robust and unbiased experimental design, methodology, analysis, interpretation, and reporting of results.” Reproducibility “… validates the original results” Bias works against both attributes. My definition of bias: “an error in estimation which doesn’t go away with large sample size.”
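The definition above can be illustrated with a short simulation. The sampler below is hypothetical: when biased, subjects with low values never enter the study, so the estimate of a true mean of 0 stays off-target no matter how large n grows, while the honest sampler’s error shrinks.

```python
import random

random.seed(0)

def estimate_mean(n, biased):
    """Estimate a mean whose true value is 0 from n draws.
    When biased=True, subjects with values below -1 never enter
    the study, mimicking a selection effect."""
    total, count = 0.0, 0
    while count < n:
        x = random.gauss(0, 1)
        if biased and x < -1:   # low values are silently excluded
            continue
        total += x
        count += 1
    return total / count

for n in (100, 10_000):
    print(n, round(estimate_mean(n, biased=False), 3),
             round(estimate_mean(n, biased=True), 3))
```

Random sampling error (the unbiased column) shrinks roughly like 1/√n; the selection bias (about +0.29 here, the mean of the truncated normal) does not go away with large sample size.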

  7. A. Types of Studies Classified by Temporal Point of View I. Instantaneous Studies - Surveys II. Longitudinal Studies A. Retrospective Studies Historical Observational Cohort Case - Control B. Prospective Studies Prospective Observational Cohort Clinical Trial C. Hybrid Designs

  8. A Schematic for Temporal Classification [Diagram: a timeline centered on “Now”. Looking backwards (Retrospective): Historical Observational Cohort and Case-Control. Looking forwards (Prospective): Prospective Observational Cohort and, after Randomization, the Clinical Trial. Instantaneous: Survey.]

  9. I. Instantaneous: Population-Based Studies • Synonyms • Survey • Population-Correlation Study • Ecological Study • Two or more populations are instantaneously compared through the prevalences of both exposure and disease.

  10. Population-Based Studies Advantages Instantaneous. Easy access to a large and varied population. Good for hypothesis generation. Disadvantages Intervention is usually not feasible. Very little information on causality: IARC standards require individual-based evidence.

  11. II. Longitudinal: Individual-Based Studies • A longitudinal study observes exposures and events for individuals over a period of time. • There are two types, depending on whether one is looking forwards (prospective) or backwards (retrospective) from the present.

  12. Longitudinal Studies: A. Retrospective • Historical Observational Cohort • Synonyms - survey, retrospective cohort study. • Examines outcomes among patients with past exposures. • E.g., track down 1950s asbestos miners & determine current status. • Case - Control (Breslow and Day, 1980) • Synonyms - case referent, retrospective study. • Examines past exposures among a group of patients with current outcomes. • E.g., interview mesothelioma patients & determine past exposures.

  13. Historical Observational Cohort Studies Advantages Quick results - no wait. Easy to get large samples by ‘mining’ databases. Yields wide range of sequelae. Useful for investigating rare treatments or exposures. Disadvantages No opportunity to customize data collection. No possibility for blinding. Many possible biases: Confounding Selection Information

  14. Case - Control Studies Advantages Cheap, quick - record searching can be automated. Useful for pilot studies. Useful for investigating rare disorders. Disadvantages Gives narrow picture of risks due to treatment or exposure. Biases: Confounding Selection Recall Yields only estimates of relative, not absolute risk.
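The last disadvantage above can be made concrete. Because the numbers of cases and controls are fixed by design, a case-control table identifies only the exposure odds ratio, never absolute risks. A minimal sketch, using hypothetical counts chosen purely for illustration:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 case-control table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical counts, for illustration only:
or_hat = odds_ratio(30, 70, 10, 90)   # (30*90)/(70*10), about 3.86
```

When the disease is rare, the odds ratio approximates the relative risk; recovering absolute risks would additionally require the disease prevalence, which the case-control design does not supply.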

  15. Longitudinal Studies: B. Prospective • General Advantages • Can collect detailed exposure, treatment, disease, and demographic information. • Blinding is possible. • Recall and information bias may be eliminated. • Useful for investigating rare treatments or exposures. • Classification depends on the presence of intervention.

  16. Prospective Studies • Prospective Observational Cohort • Synonyms - prospective trial, ‘clinical trial’. • No intervention. • Randomized Controlled (“Phase III”) Clinical Trial • Synonyms - prospective interventional cohort study, experiment, prospective trial, clinical trial. • Experimenters directly intervene in patient treatment, usually on a randomized basis with controls.

  17. Prospective Observational Cohort Study Additional Advantage Passive observation; no need to dictate treatment. Disadvantages May take a long time to accrue cases and wait for results. Potential confounding bias due to lack of randomization and suitable controls.

  18. Phases of a “Clinical Trial” • Biochemical and pharmacological research. • Animal Studies (Gart, 1986 & Schneiderman, 1967). • Phase I (Storer, 1989) - estimates toxicity rates using a few (~ 10 - 40) healthy or sick subjects. • Phase II (Thall & Simon, 1995) - determines whether a therapy has potential using a few very sick patients.

  19. Phases of a Clinical Trial (cont.) • Phase III - large randomized controlled, possibly blinded, experiments; Randomized Clinical Trial (RCT). • Phase IV - a controlled trial of an approved treatment with long-term followup of safety and efficacy.

  20. Clinical Trials Additional Advantages “The most definitive tool for evaluation of the applicability of clinical research” - 1979 NIH release. Biases may be eliminated. Good design may make analysis simple. Disadvantages As above, may take a long time. Must be ethically and laboriously conducted. Requires treatment on basis (in part) of scientific rather than medical factors. Patients may make some sacrifice (Meier, 1982).

  21. Digression - NIH Organizational Structure, a Brief Overview (1) 1. Project Office/Funding Agency • Responsible for providing organizational, scientific & statistical direction through the Project Officer • The Contract Officer is responsible for all administrative matters related to the award and conduct of contracts • Responsible for most of the pre-award development: RFP, sample size, etc. 2. Policy Advisory Board (PAB) / Data Monitoring Board (DMB) • Acts as a senior independent advisory board to NIH on policy matters • Reviews study design and changes to the initial design • Reviews interim study results by treatment group and recommends early termination for toxicity or beneficial effects • Reviews performance of individual clinical centers

  22. NIH Organizational Structure, a Brief Overview (2) 3. Steering Committee • Provides scientific direction for the study at the operational level • Members usually are recommended or elected representatives of the clinical center principal investigators • Monitors performance of individual centers • Reports major problems to the PAB and P.O. • May have several subcommittees which are responsible for various aspects such as recruitment, endpoints, publications, quality control, etc. 4. Assembly of Investigators (may be the same as the Steering Committee) • Each operational unit (clinic, laboratory, data center) has a representative • Elects from its membership a representative on the Steering Committee • Reviews operational progress of the study • Represents individual clinical centers

  23. NIH Organizational Structure, a Brief Overview (3) 5. Coordinating Center • Responsible for collecting, editing, analyzing & storing all data collected • Develops and tests forms • Develops the randomization procedure • Monitors quality control of clinics and labs • Performs periodic analyses for potential risks and benefits • Performs the final analysis at the end of the trial 6. Central Labs • Provide standardized results across centers to ensure comparability • Examples are EKG, Biochemistry, Pathology 7. Clinical Centers • Recruit patients, administer treatment, coordinate patient care & collect the data required • The grass roots of any clinical trial
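One concrete task on the Coordinating Center’s list above is developing the randomization procedure. A minimal sketch of permuted-block randomization, a standard way to keep the arms balanced throughout accrual (the block size, arm labels, and seed here are arbitrary choices for illustration, not from the lecture):

```python
import random

def permuted_blocks(n_subjects, block_size=4, seed=2024):
    """Assign subjects to arms A/B in shuffled blocks so that the
    two arms are exactly balanced at the end of every block."""
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_subjects:
        # Each block holds equal numbers of A and B, in random order.
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]

schedule = permuted_blocks(12)   # twelve assignments, six per arm
```

In practice the schedule is generated centrally and concealed from the clinics; predictable block boundaries are one way “randomization” can become known in advance (slide 24).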

  24. Clinical Trials may be biased if: • Blinding is compromised (see Diagnostic Suspicion Bias, below) • Randomization is compromised, as in response-adaptive randomization or “randomization” is known in advance • Data are selectively missing (including censoring of times to events)

  25. Sackett’s Levels of Evidence for “Evidence-based Medicine” (Cook, et al., 2002) [table of evidence levels, continued over slides 26-27, not captured in the transcript]

  28. A critique (by example) of clinical trials and evidence-based medicine “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials” (Smith & Pell, 2003). [Fig. 1 not captured in the transcript]

  29. Abstract Objectives: To determine whether parachutes are effective in preventing major trauma related to gravitational challenge. Design: Systematic review of randomised controlled trials ... Main outcome measure: Death or major trauma, defined as an injury severity score > 15. Results: We were unable to identify any randomised controlled trials of parachute intervention. Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.

  30. B. Bias in Clinical Studies • Classification is based on whether bias occurs at the time of patient Selection; or at the time of Information collection; or at the time of Publication. • They are all variants of Confounding, in which a third variable is related to both treatment (or exposure) and outcome. [Diagram: Confounder (“3rd variable”) → Treatment or Exposure; Confounder → Outcome; Treatment or Exposure → Outcome (“Treatment Effect?”)]

  31. How do we break the arrows of causality with the Confounder? [Diagram: Confounder (“3rd variable”) → Treatment or Exposure; Confounder → Outcome; Treatment or Exposure → Outcome (“Treatment Effect?”)]

  32. How do we break the arrows of causality with the Confounder? [Diagram: the Confounder → Outcome arrow is annotated “Generally impossible to influence; dictated by nature”.]

  33. How do we break the arrows of causality with the Confounder? [Diagram: the Confounder → Outcome arrow remains “Generally impossible to influence; dictated by nature”; the Confounder → Treatment or Exposure arrow is annotated “Can be broken via randomization”.]
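The arrow-breaking argument of slides 31-33 can be checked numerically. In the sketch below (all numbers are assumptions for illustration), the treatment truly does nothing, but a confounder raises both the chance of receiving treatment and the outcome; the naive observational comparison shows a large spurious “effect”, while assigning treatment by coin flip removes it.

```python
import random

random.seed(7)

def estimated_effect(randomized, n=50_000):
    """True treatment effect is exactly 0: the outcome depends
    only on the confounder u plus noise."""
    sum_t = n_t = sum_c = n_c = 0
    for _ in range(n):
        u = random.gauss(0, 1)                    # confounder
        if randomized:
            treated = random.random() < 0.5       # coin flip breaks the u -> treatment arrow
        else:
            treated = random.random() < (0.8 if u > 0 else 0.2)
        y = u + random.gauss(0, 1)                # outcome: confounder + noise, no treatment term
        if treated:
            sum_t += y; n_t += 1
        else:
            sum_c += y; n_c += 1
    return sum_t / n_t - sum_c / n_c

naive = estimated_effect(randomized=False)   # far from 0: confounding bias
rct   = estimated_effect(randomized=True)    # close to 0
```

Note that only the confounder → treatment arrow is touched; the confounder still drives the outcome, exactly as the diagram says, yet the treatment comparison is no longer distorted.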

  34. I. Selection Bias • Prevalence - Incidence Bias • Prevalence (observed occurrence) of a trait ≠ Incidence (rate of onset). • Cause: a gap between exposure and the selection of subjects. • Not a problem with irreversible events such as mortality, if detectable. • E.g., hypertension may disappear with onset of CV disease and can be overlooked as a risk factor. • See Neyman, 1955. • (Any retrospective study, especially case-control.)

  35. Selection Bias • Admission Rate Bias • Patients may differ from noninstitutionalized subjects in size or direction of effects. • E.g., systemic weakness vs. arthritis: • Negative relation among inpatients; • Positive relation among outpatients. • See Berkson, 1946. • (Any nonrandomized study with a mix of patient sources, especially case-control.)

  36. Selection Bias • Nonrespondent (Volunteer) Bias • Nonparticipation may be related to the subject of investigation. • E.g., smokers ignore surveys more often than do non-smokers (Seltzer, 1974). • For general methods to analyze data with ‘nonignorable nonresponse’, see Little and Rubin (1987) and Rubin (1987). • (Case-control, though drop-outs can affect any study not analyzed ‘intent to treat’.)

  37. Example: Where to add armor to fighter planes? In World War II, the U.S. Air Force conducted an investigation into where armor could most effectively be added to fighter planes. Researchers examined returning aircraft, mapped the locations of bullet holes, and recommended that the most commonly pierced areas be reinforced.

  38. Example: Where to add armor to fighter planes? Researchers examined returning aircraft, mapped the locations of bullet holes, and recommended that the most commonly pierced areas be reinforced. In one area they found no piercings: near the pilot’s head!
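This survivorship (selection) bias is easy to reproduce numerically. The survival rates below are invented for illustration: hits land uniformly over the aircraft, but planes hit in the engine or cockpit rarely return, so inspectors of returning aircraft see few holes exactly where hits are deadliest.

```python
import random

random.seed(3)

AREAS = ["wings", "fuselage", "engine", "cockpit"]
# Assumed probabilities that a plane hit in each area makes it home:
SURVIVAL = {"wings": 0.95, "fuselage": 0.90, "engine": 0.30, "cockpit": 0.20}

observed = {a: 0 for a in AREAS}
for _ in range(10_000):
    hit = random.choice(AREAS)              # hits are uniform over areas
    if random.random() < SURVIVAL[hit]:     # only returning planes are inspected
        observed[hit] += 1                  # so deadly hits are undercounted
```

The inspected sample suggests reinforcing the wings and fuselage; the correct inference is the opposite: armor the areas with few observed holes.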

  39. II. Information Bias • Detection Signal (Diagnostic Suspicion) Bias • In unblinded studies, an exposure may be considered a risk factor for an endpoint, and such patients preferentially observed. • In blinded studies, an exposure may make an endpoint more detectable. • E.g., estrogen causes bleeding from uterine cancer to be more easily detectable. • (Any unblinded study except case-control; also even blinded clinical trials with sensitive endpoints.)

  41. Reports of Original Studies JAVMA 191, 12/1/87 “High-rise syndrome in cats” Wayne O. Whitney, DVM & Cheryl J. Mehlhaff, DVM Selection and/or detection bias

  42. Information Bias • Exposure Suspicion Bias • An outcome may cause the investigator to look for a particular exposure. • The temporal reverse of detection signal bias. • E.g., arthritis and knuckle-cracking. • (Case-control studies.)

  43. Information Bias • Recall (family information) Bias • Similar to exposure suspicion bias, but errors originate with the subject or his/her family. • E.g., in a study of prescription use among women with fetal malformation, 28% reported unverifiable exposure vs. 20% of the controls (Klemetti & Saxen, 1967). • (Case-control studies.)

  44. III. Publication (Reporting) Bias • Even a perfect study leads to bias if dissemination depends on the direction of its result. • Causes: • Commercial reasons; • Researchers’ personal motivations; • Editorial policy! • Vickers, et al. (1998) show that the problem is widespread: in some countries, 100% of publications show treatment effects.

  45. Publication (Reporting) Bias • A version of the multiple comparisons problem (Miller, 1985), or ‘testing to a foregone conclusion’. • E.g., ORG-2766 protected nerves from cytotoxic injury in 55 women with ovarian cancer - NEJM lead article (van der Hoop, et al., 1990); a subsequent negative study of 133 women - ASCO Proceedings abstract (Neijt, et al., 1994). • (All Studies.)
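The “testing to a foregone conclusion” mechanism is easy to reproduce: simulate many small two-arm studies of a treatment with zero true effect, and “publish” only those reaching |z| > 1.96. Every published effect then looks substantial even though none is real. All study parameters below are assumptions for illustration.

```python
import random

random.seed(5)

def study(n_per_arm=25, true_effect=0.0):
    """One two-arm study; returns the effect estimate and its z-score."""
    treat   = [random.gauss(true_effect, 1) for _ in range(n_per_arm)]
    control = [random.gauss(0.0, 1) for _ in range(n_per_arm)]
    est = sum(treat) / n_per_arm - sum(control) / n_per_arm
    se = (2 / n_per_arm) ** 0.5          # known unit variances, for simplicity
    return est, est / se

results = [study() for _ in range(2_000)]
published = [est for est, z in results if abs(z) > 1.96]   # the journals' filter
# Roughly 5% of these null studies "succeed", and each reports an
# effect of magnitude at least 1.96 * se, about 0.55 here.
```

The selection filter, not the data, creates the impressive published effects; registration of trials before they start (next slide) makes the unpublished remainder visible.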

  46. Publication (Reporting) Bias Thus, the NIH’s demand: “Consideration of refutations Have a policy stating that if the journal publishes a paper, it assumes responsibility to consider publication of refutations of that paper, according to its usual standards of quality.”

  47. Solutions to Publication Bias • Register trials (before they start) with the Cochrane Collaboration – www.cochrane.org "We gather and summarize the best evidence from research to help you make informed choices about treatment" via systematic reviews (meta-analyses) • Register trials (ditto) with www.clinicaltrials.gov • Take “The trialist’s oath” (Meinert) - https://jhuccs1.us/clm/PDFs/OATH.pdf
