Information-Based Sample Size Re-estimation for Binomial Trials
Keaven Anderson, Ph.D. • Amy Ko, MPH • Nancy Liu, Ph.D. • Yevgen Tymofyeyev, Ph.D.
Merck Research Laboratories
June 9, 2010
Objective: Fit-for-purpose sample-size adaptation
• Examples here restricted to binary outcomes
• Wish to find a sample size to definitively test for treatment effect δ ≥ δmin
• Minimum difference of clinical interest, δmin, is KNOWN
• May be a risk difference, relative risk, or odds ratio
• Do not care about SMALLER treatment differences
• Desire to limit sample size to that needed if δ = δmin
• Control group event rate UNKNOWN
• Follow-up allows interim analysis to terminate the trial without 'substantial' enrollment over-running
Case Study 1
• CAPTURE trial (Lancet, 1997(349):1429-35)
• Unstable angina patients undergoing angioplasty
• 30-day cardiovascular event endpoint
• Control event rate may range from 10% to 20%
• Wish 80% power to detect δmin = 1/3 reduction (relative risk)
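As a rough check on the range of sample sizes these assumptions imply, a fixed-design calculation on the log relative-risk scale can be sketched in Python (the talk's own computations used the gsDesign R package; the normal-approximation formula and the function name below are illustrative):

```python
from math import log
from statistics import NormalDist

def n_per_arm_log_rr(p_c, rr, alpha=0.05, power=0.80):
    """Approximate per-arm sample size (1:1 randomization) to detect
    relative risk `rr` on the log-RR scale with a two-sided level-alpha
    test, using the usual normal approximation."""
    p_e = p_c * rr                      # event rate in the experimental arm
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    # Variance of the estimated log RR with one patient per arm:
    v1 = (1 - p_e) / p_e + (1 - p_c) / p_c
    return (z_a + z_b) ** 2 * v1 / log(rr) ** 2

# Required n per arm shrinks sharply as the (unknown) control rate rises:
for p_c in (0.10, 0.15, 0.20):
    print(p_c, round(n_per_arm_log_rr(p_c, 2 / 3)))
```

The point of the sketch: with the control rate anywhere between 10% and 20%, the fixed-design sample size varies by roughly a factor of two, which is exactly the uncertainty the information-based design is meant to absorb.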
Case Study 2
• Response rate study
• Control rate may range from 10% to 25%
• δmin = 10% absolute difference
Can we adapt sample size?
• Gao, Ware and Mehta [2010] take a conditional power approach to sample size re-estimation
• Presented by Cyrus Mehta at a recent KOL lecture
• Would presumably plan for an assumed treatment effect δ0 > δmin and adapt the sample size up if the interim treatment effect is "somewhat promising"
• Information-based group sequential design:
1. Estimate statistical information at the analysis (blinded)
2. Perform the (interim or final) analysis based on the proportion of final desired information (spending-function approach)
3. If maximum information AND maximum sample size are not yet reached:
 • If the desired information is likely to be reached by the next analysis, make that the final analysis
 • Otherwise, go to the next interim
4. Go back to 1.
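The decision logic of the information-based loop above can be sketched as a small Python function (a hypothetical sketch; the actual spending-function analysis at each look was done with gsDesign in R and is omitted here):

```python
def interim_action(info_now, info_next, info_max, n_now, n_max):
    """Decide what to do at an interim analysis, per the loop above.

    info_now  -- estimated statistical information accrued so far
    info_next -- projected information at the next planned analysis
    info_max  -- information needed for the desired power
    n_now, n_max -- current sample size and the sample-size cap
    """
    if info_now >= info_max or n_now >= n_max:
        return "final"            # treat this analysis as the final one
    if info_next >= info_max:
        return "stop_enrollment"  # next analysis will be the final one
    return "continue"             # go on to the next interim

# Example: target not reachable by the next look, so keep enrolling.
print(interim_action(info_now=40, info_next=60, info_max=100,
                     n_now=700, n_max=2800))
```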
Fair comparison?
• The scenarios here are set up for the information-based design to be preferred
• Other scenarios may point to a conditional power approach
• Important to distinguish your situation to choose the appropriate method!
• Scenarios where the information-based approach works well are reasonably common
• Blinded approaches such as information-based design are considered "well understood" in the FDA draft guidance
Information-based approach
• Enroll patients continuously
• At each interim analysis:
 • Analyze data and estimate current information; stop if done
 • Estimate information at the next analysis
 • If the target will be reached by then, stop enrollment and go to the final analysis (may adapt)
 • Otherwise, go to the next IA
[Figure: example adaptation — statistical information is re-scaled at each analysis; the trial is adapted up to finish at the target information level]
Variance of δ̂ (note: ξ = proportion randomized to Arm E; n = total sample size)
• General formula: Var(δ̂) = σE²/(ξn) + σC²/((1−ξ)n)
• Absolute difference (δ = pC − pE): Var(δ̂) = pE(1−pE)/(ξn) + pC(1−pC)/((1−ξ)n)
• Relative risk (δ = log(pE/pC)): Var(δ̂) = (1−pE)/(ξn·pE) + (1−pC)/((1−ξ)n·pC)
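These large-sample variance formulas, with information as the reciprocal of variance, translate directly to code; a minimal Python sketch (function names are illustrative):

```python
def var_risk_diff(p_e, p_c, n, xi=0.5):
    """Variance of the estimated risk difference pC - pE with total
    sample size n and proportion xi assigned to the experimental arm."""
    return p_e * (1 - p_e) / (xi * n) + p_c * (1 - p_c) / ((1 - xi) * n)

def var_log_rr(p_e, p_c, n, xi=0.5):
    """Variance of the estimated log relative risk log(pE / pC)."""
    return (1 - p_e) / (xi * n * p_e) + (1 - p_c) / ((1 - xi) * n * p_c)

def information(variance):
    """Statistical information is the reciprocal of the variance."""
    return 1.0 / variance
```

Information scales linearly with n, which is what lets the design translate an information target into a sample-size target once the event rates are estimated.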
Estimating variance and information
• Event rates estimated:
 • Assume the overall blinded event rate
 • Assume the alternate hypothesis δ = δmin
 • Use MLE estimates for treatment-group event rates (like the M&N [Miettinen & Nurminen] method)
• Use these event rates to estimate Var(δ̂)
• Statistical information: I = 1/Var(δ̂)
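For the risk-difference scale, splitting the blinded pooled rate has a simple closed form. The sketch below is a simplified moment-style version of the idea; the deck's actual computation uses a constrained MLE (M&N-style), which differs slightly:

```python
def blinded_rates_risk_diff(p_bar, delta, xi=0.5):
    """Split a blinded pooled event rate p_bar into per-arm rates under
    an assumed risk difference delta = pC - pE, with proportion xi of
    patients in the experimental arm.  Solves the two equations
        xi*pE + (1 - xi)*pC = p_bar    (blinded pooled rate)
        pC - pE = delta                (assumed alternative)."""
    p_c = p_bar + xi * delta
    p_e = p_bar - (1 - xi) * delta
    return p_e, p_c

# Example: pooled blinded rate of 15%, assumed 10% absolute difference.
p_e, p_c = blinded_rates_risk_diff(0.15, 0.10)
```

The per-arm rates recovered this way are then plugged into the variance formulas above to estimate the current statistical information without unblinding the trial.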
CAPTURE information-based approach
• Plan for a maximum sample size of 2800
• Analyze every 350 patients
• At each analysis:
 • Compute the proportion of planned information
 • Analyze
 • Adapt appropriately
Case Study 2 (recap)
• Response rate study
• Control rate may range from 10% to 25%
• δmin = 10% absolute difference
Execution of the IA strategy: conditional power approach of Gao et al
• At the interim analysis, calculate the rate difference (Diff):
 • Diff < 3.86%†: stop for futility
 • Diff ≥ 16.7%‡: stop for efficacy
 • 3.86% ≤ Diff < 16.7%: compute conditional power (CP)
  • CP < 0.35 or CP > 0.85: continue
  • 0.35 ≤ CP ≤ 0.85: re-estimate sample size
†Corresponding to a CP of 15%; ‡corresponding to P < 0.0001.
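The zone boundaries above are driven by conditional power under the current trend. The generic Brownian-motion formula (not necessarily the exact variant used by Gao et al) can be sketched as:

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z1, t1, crit=1.96):
    """Conditional power given interim z-statistic z1 at information
    fraction t1, assuming the currently estimated drift persists.
    `crit` is the critical value at the final analysis."""
    theta_hat = z1 / sqrt(t1)               # drift estimated from the trend
    b1 = z1 * sqrt(t1)                      # interim B-value
    mean_final = b1 + theta_hat * (1 - t1)  # projected final B-value
    sd_final = sqrt(1 - t1)                 # sd of the remaining increment
    return NormalDist().cdf((mean_final - crit) / sd_final)
```

Sample size is re-estimated only in the "promising" band 0.35 ≤ CP ≤ 0.85; below or above it, the trial simply continues to its planned size.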
Overall power of the study: IA without SSR vs. IA with SSR

Scenario | Without SSR: E(N) / Power | With SSR: E(N) / Power
1 | 278 / 90.0% | 303 / 92.4%
2 | 273 / 82.6% | 305 / 86.8%
3 | 269 / 78.2% | 306 / 81.9%
4 | 266 / 73.3% | 304 / 78.0%

• Initial sample size is 289 in each case
• Maximum possible sample size is 578 (2 × 289, the cap of the SSR)
Information-based approach
• Maximum sample size of 1100
• Plan analyses at 200, 400, 600, 800, 1100
• Adapt assuming target δmin = 0.10
• Absolute response-rate improvement
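The information target behind such a design can be sketched in Python. One-sided α = 0.025 and 90% power are assumptions here (the deck does not state them), and a fixed-design target is used; a group sequential design would inflate it slightly:

```python
from math import ceil
from statistics import NormalDist

def max_information(delta_min, alpha=0.025, power=0.90):
    """Fixed-design information target for detecting delta_min."""
    z = NormalDist().inv_cdf
    return ((z(1 - alpha) + z(power)) / delta_min) ** 2

def n_for_information(i_target, p_e, p_c, xi=0.5):
    """Total sample size reaching i_target for the risk difference,
    inverting Var = pE(1-pE)/(xi*n) + pC(1-pC)/((1-xi)*n)."""
    v1 = p_e * (1 - p_e) / xi + p_c * (1 - p_c) / (1 - xi)  # Var at n = 1
    return ceil(i_target * v1)

i_max = max_information(0.10)
# Implied total n across the plausible range of control response rates:
for p_c in (0.10, 0.175, 0.25):
    print(p_c, n_for_information(i_max, p_c + 0.10, p_c))
```

Because the per-patient information depends on the unknown control rate, the implied total sample size varies substantially across the 10%–25% range, which motivates capping the design at 1100 and adapting toward the information target.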
Some comments
• Computations performed using the gsDesign R package (available at CRAN)
• For the CAPTURE example, 10,000 simulations were performed for a large # of scenarios
 • Parallel computing was easily implemented using Rmpi (free) or Parallel R (REvolution Computing)
 • For the smaller # of scenarios in the 2nd case study, sequential processing on a PC was fine
• My objective is to produce a vignette making this method available
• Technical issues: over-running and "reversing information time" need to be considered
Summary
• Information-based group sequential design for binary outcomes is:
 • Effective at adapting the maximum sample size to power for treatment effect δ ≥ δmin
 • Group sequential aspects allow early termination for futility or a large efficacy difference
• Results demonstrated for absolute difference and relative risk examples
• If you can posit a minimum effect size of interest, this may be an effective adaptation method