STATISTICS 542 Introduction to Clinical Trials: SAMPLE SIZE ISSUES
Ref: Lachin, Controlled Clinical Trials 2:93-113, 1981.
Sample Size Issues
• Fundamental point: the trial must have sufficient statistical power to detect differences of clinical interest
• A high proportion of published negative trials do not have adequate power: Freiman et al., NEJM (1978) found that 50 of 71 could have missed a 50% benefit
Example: How many subjects?
• Compare a new treatment (T) with a control (C)
• Previous data suggest a control failure rate (PC) of about 40%
• The investigator believes treatment can reduce PC by 25%, i.e. PT = .30, PC = .40
• N = number of subjects per group?
Estimates are only approximate
• Uncertain assumptions
• Over-optimism about treatment
• Healthy screening effect
• Need a series of estimates
• Try various assumptions
• Must pick the most reasonable
• Be conservative, yet be reasonable
Statistical Considerations
• Null hypothesis (H0): no difference in the response exists between the treatment and control groups
• Alternative hypothesis (HA): a difference of a specified amount (Δ) exists between treatment and control
• Significance level (α), the Type I error: the probability of rejecting H0 given that H0 is true
• Power = 1 − β (β = Type II error): the probability of rejecting H0 given that H0 is not true
Standard Normal Distribution Ref: Brown & Hollander. Statistics: A Biomedical Introduction. John Wiley & Sons, 1977.
Standard Normal Table Ref: Brown & Hollander. Statistics: A Biomedical Introduction. John Wiley & Sons, 1977.
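The normal quantiles quoted throughout these slides can be checked with a few lines of Python; a minimal sketch using only the standard library:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal, mean 0, sd 1

# Critical value for a two-sided test at level alpha: P(|Z| > Z_alpha) = alpha
alpha = 0.05
z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96

# Value associated with power 1 - beta: P(Z < Z_beta) = 1 - beta
power = 0.90
z_beta = z.inv_cdf(power)            # 1.282

print(round(z_alpha, 3), round(z_beta, 3))   # 1.96 1.282
```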
Distribution of Sample Means (1) Ref: Brown & Hollander. Statistics: A Biomedical Introduction. John Wiley & Sons, 1977.
Distribution of Sample Means (2) Ref: Brown & Hollander. Statistics: A Biomedical Introduction. John Wiley & Sons, 1977.
Distribution of Sample Means (3) Ref: Brown & Hollander. Statistics: A Biomedical Introduction. John Wiley & Sons, 1977.
Distribution of Sample Means (4) Ref: Brown & Hollander. Statistics: A Biomedical Introduction. John Wiley & Sons, 1977.
Distribution of Test Statistics
• Many test statistics have a common form
• θ = population parameter (e.g. a difference in means)
• θ̂ = sample estimate
• Then Z = (θ̂ − E(θ̂)) / SE(θ̂)
• and Z has a Normal(0,1) distribution
• If the statistic z is large enough (e.g. it falls into the red rejection area of the scale), we believe this result is too large to have come from a distribution with mean 0 (i.e. PC − PT = 0)
• Thus we reject H0: PC − PT = 0, accepting a 5% chance that this result could have come from a distribution with no difference
Normal Distribution Ref: Brown & Hollander. Statistics: A Biomedical Introduction. John Wiley & Sons, 1977.
Two Groups
Z = (p̂C − p̂T) / √[ p̄(1 − p̄)(1/NC + 1/NT) ]   or   Z = (p̂C − p̂T) / √[ p̂C(1 − p̂C)/NC + p̂T(1 − p̂T)/NT ]
where p̄ = (NC p̂C + NT p̂T) / (NC + NT) is the pooled proportion
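As an illustration of the two-group statistic, a minimal sketch using the pooled form; the counts are hypothetical, not trial data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x_c, n_c, x_t, n_t):
    """Pooled z statistic for H0: PC = PT (normal approximation)."""
    p_c, p_t = x_c / n_c, x_t / n_t
    p_bar = (x_c + x_t) / (n_c + n_t)                       # pooled proportion
    se = sqrt(p_bar * (1 - p_bar) * (1 / n_c + 1 / n_t))    # standard error under H0
    return (p_c - p_t) / se

# Hypothetical data: 40% failures in control, 30% in treatment, 200 per group
z = two_proportion_z(80, 200, 60, 200)
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(z, 2), round(p_two_sided, 3))   # about 2.1 and 0.036
```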
Test of Hypothesis
• Two-sided vs. one-sided, e.g. H0: PT = PC (two-sided) vs. H0: PT ≥ PC (one-sided)
• Classic test, with z = test statistic and Zα = critical value:
  - Two-sided: reject H0 if |z| > Zα; α = .05 gives Zα = 1.96
  - One-sided: reject H0 if z > Zα; α = .05 gives Zα = 1.645
• Recommend Zα be the same value in both cases (e.g. 1.96): two-sided α = .05 or one-sided α = .025, Zα = 1.96
Typical Design Assumptions (1)
1. α = .05, .025, .01
2. Power = .80, .90 (should be at least .80 for design)
3. Δ = smallest difference we hope to detect, e.g. Δ = PC − PT = .40 − .30 = .10 (a 25% reduction!)
Typical Design Assumptions (2): table of Zα and Zβ values by two-sided significance level and power
Sample Size Exercise
• How many do I need?
• Next question: what is the question?
• The reason: the required sample size depends on the outcome being measured and on the method of analysis to be used
Simple Case: Binomial
1. H0: PC = PT
2. Test statistic (normal approximation): Z = (p̂C − p̂T) / √[ 2 p̄(1 − p̄)/N ]
3. Sample size: assume NT = NC = N and HA: Δ = PC − PT
Sample Size Formula (1): Two Proportions, Simpler Case
• Zα = constant associated with α: P{|Z| > Zα} = α, two-sided! (e.g. α = .05, Zα = 1.96)
• Zβ = constant associated with 1 − β: P{Z < Zβ} = 1 − β (e.g. 1 − β = .90, Zβ = 1.282)
• N per group = 2 p̄(1 − p̄)(Zα + Zβ)² / (PC − PT)², where p̄ = (PC + PT)/2
• The formula can also be solved for Zβ (i.e. the power 1 − β) or for Δ
Sample Size Formula (2): Two Proportions
• Zα = constant associated with α: P{|Z| > Zα} = α, two-sided! (e.g. α = .05, Zα = 1.96)
• Zβ = constant associated with 1 − β: P{Z < Zβ} = 1 − β (e.g. 1 − β = .90, Zβ = 1.282)
• N per group = [ Zα √(2 p̄(1 − p̄)) + Zβ √(PC(1 − PC) + PT(1 − PT)) ]² / (PC − PT)²
Sample Size Formula: Power and Difference Detected
• Power: solve for Zβ (and hence 1 − β): Zβ = Δ √[ N / (2 p̄(1 − p̄)) ] − Zα
• Difference detected: solve for Δ: Δ = (Zα + Zβ) √[ 2 p̄(1 − p̄) / N ]
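A small sketch of the power calculation obtained by rearranging the simpler formula (the rearrangement shown is my reading of the slide, offered as an assumption):

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(n_per_group, p_c, p_t, alpha=0.05):
    """Approximate power of the two-sided test, from the simpler pooled formula."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    p_bar = (p_c + p_t) / 2
    z_b = abs(p_c - p_t) * sqrt(n_per_group / (2 * p_bar * (1 - p_bar))) - z_a
    return z.cdf(z_b)

print(round(power_two_proportions(478, 0.40, 0.30), 2))   # 0.9, i.e. 90% power
```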
Simple Example (1)
• H0: PC = PT
• HA: PC = .40, PT = .30, Δ = .40 − .30 = .10
• Assume α = .05, Zα = 1.96 (two-sided); 1 − β = .90, Zβ = 1.282
• p̄ = (.40 + .30)/2 = .35
Simple Example (2)
Thus:
a. N = 476, 2N = 952
b. N = 478, 2N = 956
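A minimal Python sketch (standard library only; the function names are mine) of the two versions of the two-proportion formula given above; it reproduces N = 476 and N = 478 for this example:

```python
from math import sqrt
from statistics import NormalDist

def n_two_proportions_general(p_c, p_t, alpha=0.05, power=0.90):
    """Per-group N: Z_alpha term uses the pooled variance, Z_beta term the unpooled variances."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    p_bar = (p_c + p_t) / 2
    num = z_a * sqrt(2 * p_bar * (1 - p_bar)) + z_b * sqrt(p_c * (1 - p_c) + p_t * (1 - p_t))
    return num ** 2 / (p_c - p_t) ** 2

def n_two_proportions_simple(p_c, p_t, alpha=0.05, power=0.90):
    """Per-group N: simpler case, pooled variance throughout."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    p_bar = (p_c + p_t) / 2
    return 2 * p_bar * (1 - p_bar) * (z_a + z_b) ** 2 / (p_c - p_t) ** 2

# Worked example: PC = .40, PT = .30, two-sided alpha = .05, power = .90
print(round(n_two_proportions_general(0.40, 0.30)))  # 476 per group (2N = 952)
print(round(n_two_proportions_simple(0.40, 0.30)))   # 478 per group (2N = 956)
```

In practice the result is rounded up to the next whole subject per group.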
Approximate* Total Sample Size for Comparing Various Proportions in Two Groups with Significance Level (α) of 0.05 and Power (1 − β) of 0.80 and 0.90
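A short sketch that generates figures of this kind, using the simpler pooled formula from above; the rate pairs chosen below are illustrative, not the slide's exact grid:

```python
from statistics import NormalDist

def total_n(p_c, p_t, alpha=0.05, power=0.90):
    """Total sample size 2N from the simpler pooled-variance formula."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    p_bar = (p_c + p_t) / 2
    return 2 * (2 * p_bar * (1 - p_bar) * (z_a + z_b) ** 2 / (p_c - p_t) ** 2)

# Columns: PC, PT, total 2N at power 0.80, total 2N at power 0.90
for p_c, p_t in [(0.40, 0.30), (0.40, 0.20), (0.20, 0.15), (0.60, 0.50)]:
    print(p_c, p_t,
          round(total_n(p_c, p_t, power=0.80)),
          round(total_n(p_c, p_t, power=0.90)))
```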
Comparison of Means • Some outcome variables are continuous • Blood Pressure • Serum Chemistry • Pulmonary Function • Hypothesis tested by comparison of mean values between groups, or comparison of mean changes
Comparison of Two Means
• H0: μC = μT, i.e. μC − μT = 0
• HA: μC − μT = Δ
• Test statistic for sample means: x̄C − x̄T ~ N(Δ, σ²(1/NC + 1/NT))
• Let N = NC = NT for design
• Under H0, Z = (x̄C − x̄T) / (σ √(2/N)) ~ N(0, 1)
• Power: N per group = 2 σ²(Zα + Zβ)² / Δ² = 2(Zα + Zβ)² / (Δ/σ)²
Example
• e.g. IQ: σ = 15, Δ = 0.3 × 15 = 4.5
• Set 2α = .05, β = 0.10, 1 − β = 0.90
• HA: Δ/σ = 0.3
• Sample size: N = 234 per group, 2N = 468
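A minimal sketch of the two-means calculation (the function name is mine); it reproduces N = 234 per group for Δ/σ = 0.3:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_means(effect_size, alpha=0.05, power=0.90):
    """Per-group N for comparing two means; effect_size = delta / sigma."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return 2 * (z_a + z_b) ** 2 / effect_size ** 2

n = ceil(n_per_group_means(0.3))   # IQ example: sigma = 15, delta = 0.3 * 15 = 4.5
print(n, 2 * n)                    # 234 468
```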
Comparing Time to Event Distributions • Primary efficacy endpoint is the time to an event • Compare the survival distributions for the two groups • Measure of treatment effect is the ratio of the hazard rates in the two groups = ratio of the medians • Must also consider the length of follow-up
Assuming Exponential Survival Distributions
• Then define the effect size by the hazard ratio, or equivalently by the standardized difference on the log scale, Δ = ln(λC / λT)
Time to Failure (1)
• Use a parametric model for sample size
• Common model: exponential, S(t) = e^(−λt), λ = hazard rate
• H0: λI = λC
• Estimate N: George & Desu (1974)
  - Assumes all patients are followed to an event (no censoring)
  - Assumes all patients are entered immediately
Assuming Exponential Survival Distributions
• Simple case: the statistical test is powered by the total number of events observed at the time of the analysis, d
Converting Number of Events (d) to Required Sample Size (2N)
• d = 2N × P(event), so 2N = d / P(event)
• P(event) is a function of the length of total follow-up at the time of analysis and the average hazard rate
• Let AR = accrual rate (patients per year)
  A = period of uniform accrual (2N = AR × A)
  F = period of follow-up after accrual is complete
  A/2 + F = average total follow-up at the planned analysis
  λ = average hazard rate
• Then P(event) = 1 − P(no event) ≈ 1 − e^(−λ(A/2 + F))
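A sketch of the conversion from events to patients, assuming the common approximation d ≈ 4(Zα + Zβ)² / [ln(λC/λT)]² for the required total events (an assumption here, not taken from the lecture) and illustrative accrual and follow-up values:

```python
from math import log, exp, ceil
from statistics import NormalDist

def required_events(lam_c, lam_t, alpha=0.05, power=0.90):
    """Approximate total events d for a log hazard-ratio test (common approximation)."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return 4 * (z_a + z_b) ** 2 / log(lam_c / lam_t) ** 2

def p_event(lam_bar, A, F):
    """P(event) using average total follow-up A/2 + F and average hazard lam_bar."""
    return 1 - exp(-lam_bar * (A / 2 + F))

# Illustrative design: lambda_C = 0.30, lambda_T = 0.20 events per patient-year,
# 2 years of uniform accrual, 3 further years of follow-up after accrual ends.
lam_c, lam_t = 0.30, 0.20
d = required_events(lam_c, lam_t)
pe = p_event((lam_c + lam_t) / 2, A=2, F=3)
print(ceil(d), round(pe, 3), ceil(d / pe))   # about 256 events, P(event) 0.632, 2N about 405
```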
Time to Failure (2)
• In many clinical trials:
  1. Not all patients are followed to an event (i.e. censoring)
  2. Patients are recruited over some period of time (i.e. staggered entry)
• More general model (Lachin, 1981):
  N per group = (Zα + Zβ)² [ g(λC) + g(λI) ] / (λC − λI)²
  where g(λ) is defined as follows
1. Instant recruitment, study censored at time T: g(λ) = λ² / (1 − e^(−λT))
2. Continuous recruitment over (0, T), censored at T: g(λ) = λ³T / (λT − 1 + e^(−λT))
3. Recruitment over (0, T0), study censored at T (T > T0): g(λ) = λ² / { 1 − [e^(−λ(T − T0)) − e^(−λT)] / (λT0) }
Example
Assume α = .05 (two-sided) and 1 − β = .90, λC = .3 and λI = .2, T = 5 years of follow-up, T0 = 3
0. No censoring, instant recruiting: N = 128
1. Censoring at T, instant recruiting: N = 188
2. Censoring at T, continual recruitment: N = 310
3. Censoring at T, recruitment to T0: N = 233
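A sketch of the Lachin (1981) calculation using the g(λ) functions listed above; it reproduces the per-group sizes for cases 1 to 3:

```python
from math import exp
from statistics import NormalDist

def g_instant(lam, T):
    """Instant recruitment, censored at T."""
    return lam ** 2 / (1 - exp(-lam * T))

def g_uniform(lam, T):
    """Uniform recruitment over (0, T), censored at T."""
    return lam ** 3 * T / (lam * T - 1 + exp(-lam * T))

def g_truncated(lam, T, T0):
    """Recruitment over (0, T0), censored at T (T > T0)."""
    return lam ** 2 / (1 - (exp(-lam * (T - T0)) - exp(-lam * T)) / (lam * T0))

def n_per_group(g_c, g_i, lam_c, lam_i, alpha=0.05, power=0.90):
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return (z_a + z_b) ** 2 * (g_c + g_i) / (lam_c - lam_i) ** 2

lam_c, lam_i, T, T0 = 0.3, 0.2, 5, 3
# Rounded to the nearest integer, as in the slide
print(round(n_per_group(g_instant(lam_c, T), g_instant(lam_i, T), lam_c, lam_i)))              # 188
print(round(n_per_group(g_uniform(lam_c, T), g_uniform(lam_i, T), lam_c, lam_i)))              # 310
print(round(n_per_group(g_truncated(lam_c, T, T0), g_truncated(lam_i, T, T0), lam_c, lam_i)))  # 233
```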
Sample Size Adjustment for Non-Compliance (1)
• References:
  1. Schork & Remington (1967) Journal of Chronic Diseases
  2. Halperin et al. (1968) Journal of Chronic Diseases
  3. Wu, Fisher & DeMets (1980) Controlled Clinical Trials
• Problem: some patients may not adhere to the treatment protocol
• Impact: dilutes whatever true treatment effect exists
Sample Size Adjustment for Non-Compliance (2)
• Fundamental principle: analyze all subjects randomized, called the intent-to-treat (ITT) principle
• Noncompliance will dilute the treatment effect
• A solution: adjust the sample size to compensate for the dilution effect (reduced power)
• Definitions of noncompliance:
  - Dropout: a patient in the treatment group stops taking the therapy
  - Dropin: a patient in the control group starts taking the experimental therapy
Comparing Two Proportions
• Assumes event rates will be altered by non-compliance
• Define PT* = adjusted treatment group rate, PC* = adjusted control group rate
• If PT < PC, the adjusted rates PT* and PC* lie between PT and PC (shown on a 0 to 1.0 scale)
Adjusted Sample Size: Simple Model
• Compute the unadjusted N
• Assume no dropins and a dropout proportion R
• Thus PC* = PC and PT* = (1 − R) PT + R PC
• Then adjust N: N* = N / (1 − R)²
• Example:
  R     1/(1 − R)²   % increase
  .10   1.23         23%
  .25   1.78         78%
Sample Size Adjustment for Non-Compliance: Dropouts and Dropins (R0, RI)
• Adjust N: N* = N / (1 − R0 − RI)²
• Example:
  R0    RI    1/(1 − R0 − RI)²   % increase
  .10   .10   1.56               56%
  .25   .25   4.00               4 times
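A tiny sketch of the inflation factor for both the dropout-only and dropout-plus-dropin adjustments:

```python
def inflation_factor(dropout, dropin=0.0):
    """Multiply the unadjusted N by 1 / (1 - R0 - RI)^2."""
    return 1.0 / (1.0 - dropout - dropin) ** 2

# Columns: dropout R0, dropin RI, inflation factor, percent increase in N
for r0, ri in [(0.10, 0.0), (0.25, 0.0), (0.10, 0.10), (0.25, 0.25)]:
    f = inflation_factor(r0, ri)
    print(r0, ri, round(f, 2), f"{(f - 1) * 100:.0f}% increase")
```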
Sample Size Adjustments: More Complex Model
• Ref: Wu, Fisher & DeMets (1980)
• Further assumptions:
  - Length of follow-up divided into intervals
  - Hazard rate may vary
  - Dropout rate may vary
  - Dropin rate may vary
  - Lag in time for the treatment to become fully effective
Example: Beta-Blocker Heart Attack Trial (BHAT) (1)
• Used the complex model
• Assumptions:
  1. α = .05 (two-sided), 1 − β = .90
  2. 3-year follow-up
  3. PC = .18 (control rate)
  4. PT = .13 (treatment assumed to give a 28% reduction)
  5. Dropout 26% (12%, 8%, 6%)
  6. Dropin 21% (7%, 7%, 7%)
Example: Beta-Blocker Heart Attack Trial (BHAT) (2)
              Unadjusted     Adjusted
Control       PC = .18       PC* = .175
Treatment     PT = .13       PT* = .14
Reduction     28%            20%
Per group     N = 1100       N* = 2000
Total         2N = 2200      2N* = 4000
"Equivalency" or Non-Inferiority Trials
• Compare a new therapy with a standard
• Wish to show the new therapy is "as good as" the standard
• Rationale may be cost, toxicity, profit
• Examples:
  - Intermittent Positive Pressure Breathing (IPPB) Trial: expensive IPPB vs. a cheaper treatment
  - Nocturnal Oxygen Therapy Trial (NOTT): 12 hours of oxygen vs. 24 hours
• Problem: can't show H0: Δ = 0 (failing to reject is not proof of no difference)
• A solution: specify a minimum difference Δ = Δmin
Sample Size Formula: Two Proportions, Simpler Case
• Zα = constant associated with α: P{|Z| > Zα} = α, two-sided! (e.g. α = .05, Zα = 1.96)
• Zβ = constant associated with 1 − β: P{Z < Zβ} = 1 − β (e.g. 1 − β = .90, Zβ = 1.282)
• N per group = 2 p̄(1 − p̄)(Zα + Zβ)² / (PC − PT)²
• The formula can also be solved for Zβ (1 − β) or for Δ
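For a non-inferiority design, one way to use this formula is to set the detectable difference to the prespecified margin Δmin; a sketch under that assumption (the one-sided α, event rate, and margin below are illustrative, not from the lecture):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_noninferiority(p_std, delta_min, alpha_one_sided=0.025, power=0.90):
    """Per-group N to rule out a decrement larger than delta_min when the two
    therapies are truly equivalent (simpler pooled-variance formula)."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha_one_sided), z.inv_cdf(power)
    return ceil(2 * p_std * (1 - p_std) * (z_a + z_b) ** 2 / delta_min ** 2)

# Illustration: 40% event rate on standard therapy, margin of 10 percentage points
print(n_per_group_noninferiority(0.40, 0.10))   # 505 per group under these assumptions
```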