Clinical Research Design
Types of Clinical Research • Sources of Error • Study Design • Clinical Trials
Daniel I. Sessler, M.D., Professor and Chair
Department of OUTCOMES RESEARCH, The Cleveland Clinic
Case-Control Studies
• Identify cases & matched controls
• Look back in time and compare on exposure (see the sketch below)
[Diagram: case group and control group traced back in time to compare exposure]
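The slide gives only the design; as a hedged illustration of the "compare on exposure" step, here is a minimal Python sketch that computes an odds ratio from a hypothetical 2×2 table. All counts are invented for illustration and are not from the lecture.

```python
import math

# Hypothetical counts (illustration only): exposure tallied
# separately in cases and in their matched controls.
cases_exposed, cases_unexposed = 40, 60
controls_exposed, controls_unexposed = 20, 80

# A case-control study cannot estimate incidence, so exposure is compared
# with an odds ratio: odds of exposure in cases / odds of exposure in controls.
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Approximate 95% confidence interval on the log scale (Woolf method).
se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed +
                      1 / controls_exposed + 1 / controls_unexposed)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}  (95% CI {ci_low:.2f}-{ci_high:.2f})")
```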
Cohort Studies
• Identify exposed & matched unexposed patients
• Look forward in time and compare on disease (see the sketch below)
[Diagram: exposed and unexposed groups followed forward in time to compare disease]
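Because a cohort study follows exposed and unexposed groups forward and observes disease directly, the groups are typically compared on incidence. A minimal sketch with invented counts (not from the lecture):

```python
# Hypothetical cohort counts (illustration only).
exposed_with_disease, exposed_total = 30, 300
unexposed_with_disease, unexposed_total = 10, 300

risk_exposed = exposed_with_disease / exposed_total        # incidence in exposed
risk_unexposed = unexposed_with_disease / unexposed_total  # incidence in unexposed

relative_risk = risk_exposed / risk_unexposed
risk_difference = risk_exposed - risk_unexposed

print(f"RR = {relative_risk:.1f}, risk difference = {risk_difference:.3f}")
```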
Timing of Cohort Studies
[Diagram: timelines for retrospective, ambidirectional, and prospective cohort studies, running from initial exposures to disease onset or diagnosis]
Sources of Error
• There is no perfect study
  • All are limited by practical and ethical considerations
  • It is impossible to control all potential confounders
  • Multiple studies required to prove a hypothesis
• Good design limits risk of false results
  • Statistics at best partially compensate for systematic error
• Major types of error
  • Selection bias
  • Measurement bias
  • Confounding
  • Reverse causation
  • Chance
Selection Bias
• Non-random selection for inclusion / treatment, or selective loss
  • Subtle forms of disease may be missed
• When treatment is non-random:
  • Newer treatments assigned to patients most likely to benefit
  • “Better” patients seek out latest treatments
  • “Nice” patients may be given the preferred treatment
  • Compliance may vary as a function of treatment
  • Patients drop out for lack of efficacy or because of side effects
• Largely prevented by randomization (illustrated in the sketch below)
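A hedged simulation of the point above, using an invented outcome model: when healthier patients are preferentially given the new treatment, the naive comparison exaggerates the apparent benefit, whereas random assignment recovers the true effect.

```python
import random

random.seed(1)
TRUE_BENEFIT = 0.10   # true absolute reduction in complication risk
N = 20_000

def outcome(severity, treated):
    """1 = complication. Risk rises with illness severity; treatment lowers it."""
    risk = 0.10 + 0.30 * severity - (TRUE_BENEFIT if treated else 0.0)
    return 1 if random.random() < max(risk, 0.0) else 0

def trial(selective):
    results = {True: [], False: []}
    for _ in range(N):
        severity = random.random()              # 0 = healthiest, 1 = sickest
        if selective:
            treated = severity < 0.5            # healthier patients get the new treatment
        else:
            treated = random.random() < 0.5     # randomization
        results[treated].append(outcome(severity, treated))
    rate = lambda xs: sum(xs) / len(xs)
    return rate(results[False]) - rate(results[True])   # apparent benefit

print(f"apparent benefit, selective assignment: {trial(True):.3f}")
print(f"apparent benefit, randomized:           {trial(False):.3f}")
print(f"true benefit:                           {TRUE_BENEFIT:.3f}")
```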
Measurement Bias
• Quality of measurement varies non-randomly
  • Quality of records is generally poor, and not necessarily randomly so
  • Patients given new treatments are watched more closely
  • Subjects with disease may better remember exposures
• When treatment is unblinded
  • Benefit may be over-estimated
  • Complications may be under-estimated
• Largely prevented by blinding
Example of Measurement Bias
[Figure: P = 0.003; from Schull & Cobb, J Chronic Dis, 1969]
Measurement & Selection Bias
• Selection bias: groups poorly matched
• Measurement bias: outcomes inaccurately characterized
Confounding
• Association between two factors caused by a third factor
• For example, mortality is greater in Florida than Alaska
  • But average age is much higher in Florida
  • Increased mortality results from age, rather than the geography of FL
• For example, transfusions are associated with mortality
  • But patients having larger, longer operations require more blood
  • Increased mortality may be a consequence of the larger operations
• Largely prevented by randomization (see the simulation below)
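The Florida/Alaska example can be made concrete with a small simulation (all rates and age mixes below are invented): in this toy model mortality depends only on age, yet the crude state-level rates differ markedly because the age distributions differ, while the age-stratified rates do not.

```python
import random

random.seed(2)

def annual_mortality(age):
    """Toy model: risk depends on age only, not on geography."""
    return 0.001 if age < 65 else 0.04

def simulate(state, n=100_000):
    # Invented age mixes: Florida older, Alaska younger (illustration only).
    p_older = 0.35 if state == "Florida" else 0.10
    strata = {"young": [0, 0], "old": [0, 0]}   # [deaths, n] per age stratum
    for _ in range(n):
        old = random.random() < p_older
        age = 70 if old else 40
        died = random.random() < annual_mortality(age)
        stratum = strata["old" if old else "young"]
        stratum[0] += died
        stratum[1] += 1
    return strata

for state in ("Florida", "Alaska"):
    d = simulate(state)
    crude = (d["young"][0] + d["old"][0]) / (d["young"][1] + d["old"][1])
    print(f"{state}: crude mortality {crude:.4f}, "
          f"under-65 {d['young'][0] / d['young'][1]:.4f}, "
          f"65+ {d['old'][0] / d['old'][1]:.4f}")
```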
External Threats to Validity
[Diagram relating the population of interest, eligible subjects, subjects enrolled, and the conclusion; external validity versus internal validity, with selection bias, measurement bias, confounding, and chance shown as threats]
Study Design
• Life is short; do things that matter!
  • Is the question important? Is it worth years of your life?
• Concise hypothesis testing of important outcomes
  • Usually only one or two hypotheses per study
  • Beware of studies without a specified hypothesis
• A priori design
  • Planned comparisons with identified primary outcome
  • Intention-to-treat design
• Randomization
  • Best prevention for selection bias and confounding
• Concealed allocation
  • No a priori knowledge of randomization (see the sketch below)
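As a minimal sketch of randomization with concealed allocation (the block size, arm labels, and seed are arbitrary choices for illustration), a permuted-block list can be generated and held by someone independent of enrollment, with each assignment revealed only after a subject is irrevocably enrolled:

```python
import random

def blocked_randomization(n_subjects, block_size=4, arms=("A", "B"), seed=2024):
    """Generate a permuted-block randomization list.

    The seed and the finished list would be held by a party independent of
    enrollment (e.g., pharmacy or a web service) so allocation stays concealed.
    """
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)           # permute each block independently
        sequence.extend(block)
    return sequence[:n_subjects]

# The enrolling clinician never sees this list; the next assignment is
# released only after the subject has been enrolled.
allocation_list = blocked_randomization(12)
print(allocation_list[0])            # revealed one subject at a time
```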
Subject Selection
• Appropriate for question
  • Maximize incidence of desired outcome
• Technically, results apply only to the population studied
  • Broaden enrollment to improve generalizability
• As restrictive as practical; trade-off between:
  • Variability
  • Ease of recruitment
• Sample-size estimates (see the sketch below)
  • Usually from preliminary data
  • Defined stopping points
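The slide notes that sample-size estimates usually come from preliminary data. As a minimal sketch, the standard normal-approximation formula for comparing two proportions can be coded directly; the 20% and 10% event rates in the example are invented, not from the lecture.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Subjects per group to detect p1 vs p2 with a two-sided test
    (simple normal-approximation formula for two proportions)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Invented preliminary data: 20% baseline event rate, hoping to detect 10%.
print(n_per_group(0.20, 0.10))   # roughly 200 subjects per group
```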
Blinding / Masking
• Only reliable way to prevent measurement bias
  • Essential for subjective responses
  • Use for objective responses whenever possible
• Careful design required to maintain blinding
  • Use double blinding whenever possible
  • Avoid triple blinding!
• Maintain blinding throughout data analysis (see the sketch below)
  • Even data-entry errors can be non-random
  • Statisticians are not immune to bias!
• Placebo effect can be enormous
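One simple way to maintain blinding through data analysis is to give the analysis team only neutral arm codes, holding the unblinding key elsewhere until the analysis is locked. A hypothetical sketch (the field names and codes are invented):

```python
# Hypothetical sketch: the statistician receives only neutral arm codes ("A"/"B");
# the key linking codes to treatments is held elsewhere until analysis is locked.

ARM_CODE = {"new_treatment": "A", "control": "B"}      # held by an unblinded pharmacist
UNBLINDING_KEY = {v: k for k, v in ARM_CODE.items()}   # revealed only at the end

def blind_record(record):
    """Strip the real treatment label before the record reaches the analysis team."""
    blinded = dict(record)
    blinded["arm"] = ARM_CODE[blinded.pop("treatment")]
    return blinded

# Example record (invented fields):
print(blind_record({"subject_id": 17, "treatment": "control", "pain_score": 4}))
# -> {'subject_id': 17, 'pain_score': 4, 'arm': 'B'}
```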
Importance of Blinding
[Figure: chronic pain analgesic trial; Mackey, personal communication]
More About Placebo Effect
[Figure: Kaptchuk, PLoS ONE, 2010]
Conclusion: Good Clinical Studies…
• Test a specific a priori hypothesis
• Evaluate clinically important outcomes
• Use appropriate statistical analysis
• Make conclusions that follow from the data
  • And acknowledge substantive limitations
• Randomize treatment & conceal allocation if possible
  • Best protection against selection bias and confounding
• Blind patients and investigators if possible
  • Best protection against measurement bias