Learn how to overcome design and statistical challenges in clinical trials to enhance R&D efficiency in the pharmaceutical industry. Gain insights on flexible trial design, statistical considerations, and dealing with real-world constraints.
Webinar Host: Ronan Fitzpatrick
Webinar Overview
• Design Challenges Overview
• Accounting for Design Constraints
• Statistical Design Challenges
• Value of Flexible Design
Worked Examples Overview
• Two Means Group Sequential Trial
• Unblinded SSR Example
• Two Means Conditional Power
• Two Means Blinded SSR
• Mixed/Hierarchical Models
• Posterior Error (Lee & Zelen)
• SSR for Survival
About nQuery: In 2018, 91% of organizations with clinical trials approved by the FDA used nQuery for sample size and power calculation
Part 1 Design Challenges Overview
The pharmaceutical industry has a problem: R&D efficiency has never been under more pressure
• Low success: 90% failure rate in drug development
• High cost: $2.6 billion average cost of drug development
• Long timelines: 10 years average timeline for drug development
Source: The Tufts Center for the Study of Drug Development
How to deal with this challenge? All aspects of trial design should be reviewed – real-world choices & data, study objective, statistical methods, study flexibility
• Need stakeholder buy-in to mould the trial towards an “optimal” design
• Sponsor, regulator, clinicians/on-site expertise, statisticians, patients
The legislative push for innovative approaches (US: 21st Century Cures Act; EU: Adaptive Pathways) creates an opening for wider dialogue
However, remember that costs also reflect high public expectations, so there can be no compromise on making correct inferences
Types of Challenge
• #1 Design Constraints – e.g. randomization, stratification, hierarchical/multi-centre effects, recruitment, blinding, ITT
• #2 Statistical/Analysis Challenges – e.g. endpoint/estimand, statistical model, “success” criterion, covariate adjustment, hypothesis type
• #3 Flexible Design – e.g. interim analysis, adaptive changes (SSR, arm selection etc.), ad-hoc protocol changes
Part 2 Design Constraints
Dealing with Design Constraints: Real-world constraints greatly affect the design and statistical choices appropriate for a study
• Designing from “first principles” is important, but consider practical issues early
• Especially true for pre-clinical/academic studies but still relevant for Phase III
In clinical trials, time and cost considerations are encouraging global trials and reliance on a greater number of centres and CROs
This creates challenges but also opportunities such as greater generalizability, sub-group analyses, design flexibility etc.
Multi-centre/Hierarchical Effects: A common issue in trials is accounting for the effect of recruiting from multiple “centres” (e.g. high N needed, real-world constraint)
• Vital to account for the knock-on effect on the design choices available
• Randomization level, fixed vs random effect, covariate approach
Subject-level analysis follows the level of randomization (e.g. d.f. for the effect), and random effects (mixed modelling) are used for centres
While a hierarchical effect may be expected to increase the required sample size, in practice it can lead to the opposite
Example 1 “A cluster randomized design is employed to reduce contamination. Using the cluster randomized design with church as a cluster unit, we will recruit a sample size of 32 churches (16 clusters per arm with 12 individuals per cluster) for an overall sample of 384 individual participants. This number of churches and participants achieves 91% power to detect a difference of 2.5 kg (Standard Deviation = 7) (approximately 4% body weight loss) between the 2 groups’ mean body weight loss (effect size = 0.35) from pre- to postintervention assessment when the intra-cluster correlation is 0.01 using a linear model with a significance level of 0.05.” • Source: Medicine (2018)
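Example 1's numbers can be reproduced with the standard design-effect inflation for cluster randomization, DEFF = 1 + (m − 1)·ICC. A minimal Python sketch (normal approximation only, so results may differ by a subject or two from software using t-based methods):

```python
from math import ceil
from statistics import NormalDist

def cluster_rct_n(delta, sd, icc, m, alpha=0.05, power=0.91):
    """Clusters and subjects per arm for a two-means comparison under
    cluster randomization, via the design effect 1 + (m - 1) * icc."""
    z = NormalDist().inv_cdf
    # Per-arm n ignoring clustering (two-sample z-test)
    n_flat = 2 * sd**2 * (z(1 - alpha / 2) + z(power))**2 / delta**2
    deff = 1 + (m - 1) * icc          # variance inflation from clustering
    clusters_per_arm = ceil(n_flat * deff / m)
    return clusters_per_arm, clusters_per_arm * m

# Example 1's inputs: 2.5 kg difference, SD = 7, ICC = 0.01, 12 per cluster
clusters, n_arm = cluster_rct_n(delta=2.5, sd=7, icc=0.01, m=12)
```

This recovers the paper's 16 clusters per arm (192 subjects per arm, 384 overall). Note how small an ICC of 0.01 looks (DEFF = 1.11) yet it still adds roughly 20 subjects per arm.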
Part 3 Statistical Considerations
Statistical Considerations: Statistical choices have an important effect on which trial questions/hypotheses are of interest & how to answer them
• Statistical models should follow the design and its implied assumptions
• For example, see the previous part. Sample size is emergent from the model
In clinical trials, big issues exist around what “success” is (2.5% α) and what to measure (estimands, “responder” analysis etc.)
However, advanced statistical methods are not an excuse for ill-judged design, so try not to get lost in the tyranny of small differences
Bayes in Clinical Trial Design • Bayesian methods continue to be of great interest in clinical trials • However, 2.5% Type I error seen as barrier in confirmatory trials • Several methods proposed in trial design and SSD to fulfil Bayesian “success” as well as Type I error • Methods tend to deal with issues of prior uncertainty and the inverse-conditional problem • Lee & Zelen derive Bayesian Type I/II errors for posterior chance of H0/H1 being true given frequentist “success” (i.e. significance) • Argue this better reflects practical decision-making though note difference between requirement of clinician vs. regulator
Example 2 • “Assuming a mean (±SD) number of ventilator-free days of 12.7±10.6, we estimated that a sample of 524 patients would need to be enrolled in order for the study to have 80% power, at a two-tailed significance level of 0.05, to detect a mean between-group difference of 2.6 ventilator-free days. On the basis of data from the PAC-Man trial, we estimated that the study-withdrawal rate would be 3% and we therefore calculated that the study required a total of 540 patients.” Source: NEJM (2014)
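Example 2's calculation is a standard two-sample comparison of means with a dropout inflation; a sketch using the z-approximation (which lands within a couple of patients of the paper's 524 and 540, the small gap presumably reflecting rounding or t-based methods):

```python
from math import ceil
from statistics import NormalDist

def two_means_n(delta, sd, alpha=0.05, power=0.80, dropout=0.0):
    """Total N for a two-sample z-test on means (equal allocation),
    then inflated for an anticipated dropout fraction."""
    z = NormalDist().inv_cdf
    n_per_arm = 2 * sd**2 * (z(1 - alpha / 2) + z(power))**2 / delta**2
    total = 2 * ceil(n_per_arm)
    return total, ceil(total / (1 - dropout))  # enrol extra to offset dropout

# Example 2's inputs: difference of 2.6 ventilator-free days, SD = 10.6, 3% withdrawal
total, with_dropout = two_means_n(delta=2.6, sd=10.6, dropout=0.03)
```

Note the dropout adjustment divides by (1 − rate) rather than multiplying by (1 + rate), so the analyzable N after withdrawals still meets the target.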
(cont.) • Example 2 • “Assume the Z-test example but we want a posterior Type I error of 0.05 (probability the null hypothesis is true given significance) and a posterior Type II error of 0.2 (probability the null hypothesis is false given non-significance). What would the equivalent sample size and frequentist planning errors (α, 1-β) be, assuming a range (0.25 - 0.75) of prior beliefs against the null hypothesis?”
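The Lee & Zelen posterior errors follow from Bayes' theorem: posterior Type I = P(H0 | significant) = (1−θ)α / [(1−θ)α + θ(1−β)] and posterior Type II = P(H1 | not significant) = θβ / [θβ + (1−θ)(1−α)], where θ is the prior probability against the null. Given target posterior errors, the planning (α, β) can be found by fixed-point iteration. A sketch under these definitions (the function name is my own):

```python
def frequentist_from_posterior(theta, post_alpha=0.05, post_beta=0.20,
                               tol=1e-10, max_iter=1000):
    """Invert Lee & Zelen's posterior errors to frequentist planning errors.
    theta = prior probability that the alternative H1 is true."""
    a, b = post_alpha, post_beta          # starting guesses
    for _ in range(max_iter):
        # Rearranged posterior Type I equation, solved for alpha
        a_new = post_alpha * theta * (1 - b) / ((1 - theta) * (1 - post_alpha))
        # Rearranged posterior Type II equation, solved for beta
        b_new = post_beta * (1 - theta) * (1 - a_new) / (theta * (1 - post_beta))
        if abs(a_new - a) < tol and abs(b_new - b) < tol:
            a, b = a_new, b_new
            break
        a, b = a_new, b_new
    return a, b

# Indifferent prior (theta = 0.5): planning errors needed for
# posterior alpha = 0.05, posterior beta = 0.2
alpha, beta = frequentist_from_posterior(theta=0.5)
```

At θ = 0.5 this converges to α = 0.04 and β = 0.24 exactly; lower priors on H1 (e.g. θ = 0.25) force a much smaller planning α, which is the sample-size driver in the worked example.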
Part 4 Flexible Design
Flexible Design: Increasing interest in approaches for clinical trials which allow greater flexibility to change the trial while it is on-going
• Up-front costs and limited scope may make some trials unfeasible
• Significant difference between the ideal trial with hindsight and the actual trial
Ad-hoc changes to the protocol of an on-going trial need significant justification but will continue to be a reality of trial conduct
Greater interest from sponsors and regulators in designs which allow interim decisions and changes on a per-protocol basis
Adaptive Design: Adaptive designs are any design where a change or decision is made to a trial while it is still on-going
Encompasses a wide variety of potential adaptations
• e.g. early stopping, SSR, enrichment, seamless designs, dose-finding
Adaptive trials seek to give control to the trialist to improve the trial based on all available information
Adaptive trials can decrease costs & yield better inferences, though they require careful consideration to avoid bias or wasted resources
Adaptive Design Review
Advantages:
• Earlier Decisions
• Reduced Potential Cost
• Higher Potential Success
• Greater Control
• Better Seamless Designs
Disadvantages:
• More Complex
• Logistical Issues
• Modified Test Statistics
• Greater Expertise
• Regulatory Approval
Regulatory Context: New FDA draft guidance published in October 2018 (PDUFA VI)
• EU (Adaptive Pathways), ICH E20 starts 2019
Far less categorical than the 2010 draft
• Emphasizes early collaboration with the FDA
• Focus on design issues and Type I error
• e.g. pre-specification, blinding, simulation
In-depth on certain adaptive designs
• SSR, enrichment, switching, multiple treatments
• Also views on Bayesian and complex designs
“Adaptive designs have the potential to improve ... study power and reduce the sample size and total cost” for investigational drugs, including “targeted medicines that are being put into development today” – Scott Gottlieb (FDA Commissioner)
Sample Size Re-estimation (SSR): Will focus here on the specific adaptive design of SSR – an adaptive trial focused on increasing the sample size if needed
• Obvious adaptation target due to intrinsic SSD uncertainty
• Note: more suited to knowable/short follow-up endpoints
• Could also adaptively lower N, but this is not encouraged
Two primary types: 1) Unblinded SSR; 2) Blinded SSR
• Differ on whether the decision is made on blinded data or not
• Both target different aspects of initial SSD uncertainty
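Blinded SSR can be sketched as an internal pilot in the spirit of Friede & Kieser: re-estimate the variance from the pooled (still-blinded) data, optionally subtracting the δ²/4 component that a true treatment difference δ would add to the lumped variance, then recompute the two-means sample size. A minimal illustration (the simulated interim data and function name are hypothetical):

```python
import random
from math import ceil
from statistics import NormalDist, variance

def blinded_ssr(pooled_obs, delta, alpha=0.05, power=0.80):
    """Re-estimated per-arm sample size from blinded interim data.
    Uses the 'adjusted' blinded variance estimator: the one-sample
    variance of the mixed groups minus delta**2 / 4."""
    z = NormalDist().inv_cdf
    s2 = variance(pooled_obs)              # blinded (one-sample) variance
    s2_adj = max(s2 - delta**2 / 4, 1e-8)  # remove between-group component
    return ceil(2 * s2_adj * (z(1 - alpha / 2) + z(power))**2 / delta**2)

# Hypothetical interim: 100 blinded observations, planning difference of 5
random.seed(42)
interim = [random.gauss(0, 10) for _ in range(100)]
n_per_arm = blinded_ssr(interim, delta=5)
```

Because no treatment labels are used, this kind of re-estimation is generally viewed as having little or no Type I error impact, which is why regulators treat it more leniently than unblinded SSR.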
Unblinded Sample Size Re-estimation: SSR is suggested when the interim effect size is “promising” (Chen et al.)
• “Promising” is user-defined but based on the unblinded effect size
• Extends the GSD with a 3rd option: continue, stop early, increase N
Power for an optimistic effect but increase N for smaller, still-relevant effects
• Updated FDA Guidance: a design which “can provide efficiency”
A common criterion proposed for unblinded SSR is conditional power (CP)
• Probability of significance given the interim data (more detail on next slide)
Two methods here: Chen, DeMets & Lan; Cui, Hung & Wang
• The 1st uses GSD statistics but only at the penultimate look & with high CP
• The 2nd uses a weighted statistic but is allowed at any look and CP
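Under the current-trend assumption (the observed interim effect continues), conditional power has a closed form: CP = Φ((z_k/√t − z_{1−α}) / √(1 − t)), where z_k is the interim statistic and t the information fraction. A one-sided sketch:

```python
from statistics import NormalDist

def conditional_power(z_k, t, alpha=0.025):
    """Conditional power under the current-trend estimate: probability
    of final one-sided significance given interim statistic z_k at
    information fraction t, if the observed effect persists."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)
    return nd.cdf((z_k / t**0.5 - z_crit) / (1 - t)**0.5)

# e.g. a halfway look (t = 0.5) with a modest interim statistic
cp_halfway = conditional_power(z_k=1.2, t=0.5)
```

In promising-zone approaches (Mehta & Pocock), the sample size is increased only when CP at the interim falls in an intermediate "promising" band, with the increase chosen to lift CP back to the target power.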
Example 3 “Using an unstratified log-rank test at the one-sided 2.5% significance level, a total of 282 events would allow 92.6% power to demonstrate a 33% risk reduction (hazard ratio for RAD/placebo of about 0.67, as calculated from an anticipated 50% increase in median PFS, from 6 months in placebo arm to 9 months in the RAD001 arm). With a uniform accrual of approximately 23 patients per month over 74 weeks and a minimum follow up of 39 weeks, a total of 352 patients would be required to obtain 282 PFS events, assuming an exponential progression-free survival distribution with a median of 6 months in the Placebo arm and of 9 months in RAD001 arm. With an estimated 10% lost to follow up patients, a total sample size of 392 patients should be randomized.” Source: nejm.org
(cont.) • Example 3: Assume the fixed design but with O’Brien-Fleming efficacy and HSD (γ=-2) futility bounds and two equally spaced interim analyses
Assume an interim HR of 0.8 (vs. the planned 0.666), total E of 303 (interim E of 101 and 202) and a final-look alpha of 0.023. What will the required E for SSR be under Chen-DeMets-Lan/Cui-Hung-Wang, assuming a maximum events multiplier of 3?
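The fixed-design event count quoted in Example 3 can be checked with Schoenfeld's approximation for the log-rank test, E = (z_{1−α} + z_β)² / (p(1−p)·(log HR)²) with allocation fraction p. A sketch (rounding up, so it lands within an event or two of the protocol's 282):

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.025, power=0.926, allocation=0.5):
    """Required number of events for a one-sided log-rank test using
    Schoenfeld's approximation; allocation is the fraction randomized
    to one arm (0.5 = 1:1)."""
    z = NormalDist().inv_cdf
    num = (z(1 - alpha) + z(power))**2
    return ceil(num / (allocation * (1 - allocation) * log(hr)**2))

# Medians 6 vs 9 months under exponential PFS imply HR = 6/9 ~ 0.667
events = schoenfeld_events(hr=6/9)
```

The subsequent inflation from 282 events to 352 (and then 392) patients depends on the accrual, follow-up and dropout assumptions stated in the protocol, which sit on top of this event requirement.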
Part 5 New nQuery Release | ver 8.4
nQuery Summer 2019 Update – Statsols.com/whats-new
Adds new tables across nQuery to help you with any trial design – Classical, Bayesian & Adaptive
• 29 New Classical Design Tables – e.g. Hierarchical Modelling, Interval Estimation
• 7 New Bayesian Design Tables – e.g. Assurance, Posterior Error
• 5 New Adaptive Design Tables – e.g. Blinded SSR, Unblinded SSR, Conditional Power
Q&A • Questions? For further details, email info@statsols.com Thanks for listening!
References International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (1997). General considerations for clinical trials E8. Retrieved from www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E8/Step4/E8_Guideline.pdf Evans, S. R. (2010). Fundamentals of clinical trial design. Journal of experimental stroke & translational medicine, 3(1), 19. Lee, Y., & Nelder, J. A. (1996). Hierarchical generalized linear models. Journal of the Royal Statistical Society: Series B (Methodological), 58(4), 619-656. Ahn, C., Heo, M., & Zhang, S. (2014). Sample size calculations for clustered and longitudinal outcomes in clinical research. Chapman and Hall/CRC. McElfish, P. A., Long, C. R., Kaholokula, J. K. A., Aitaoto, N., Bursac, Z., Capelle, L., ... & Ayers, B. L. (2018). Design of a comparative effectiveness randomized controlled trial testing a faith-based Diabetes Prevention Program (WORD DPP) vs. a Pacific culturally adapted Diabetes Prevention Program (PILI DPP) for Marshallese in the United States. Medicine, 97(19).
References Lee, S. J., & Zelen, M. (2000). Clinical trials and sample size considerations: another perspective. Statistical science, 15(2), 95-110. McAuley, D. F., Laffey, J. G., O'Kane, C. M., Perkins, G. D., Mullan, B., Trinder, T. J., ... & McNally, C. (2014). Simvastatin in the acute respiratory distress syndrome. New England Journal of Medicine, 371(18), 1695-1703. Jennison, C., & Turnbull, B. W. (1999). Group sequential methods with applications to clinical trials. CRC Press. Friede, T., & Kieser, M. (2006). Sample size recalculation in internal pilot study designs: a review. Biometrical Journal: Journal of Mathematical Methods in Biosciences, 48(4), 537-555. US Food and Drug Administration. (2018) Adaptive design clinical trials for drugs and biologics (Draft guidance). Retrieved from https://www.fda.gov/media/78495/download Chen, Y. J., DeMets, D. L., & Gordon Lan, K. K. (2004). Increasing the sample size when the unblinded interim result is promising. Statistics in medicine, 23(7), 1023-1038.
References Cui, L., Hung, H. J., & Wang, S. J. (1999). Modification of sample size in group sequential clinical trials. Biometrics, 55(3), 853-857. Mehta, C.R. and Pocock, S.J., (2011). Adaptive increase in sample size when interim results are promising: a practical guide with examples. Statistics in medicine, 30(28), 3267-3284. Liu, Y., & Lim, P. (2017). Sample size increase during a survival trial when interim results are promising. Communications in Statistics-Theory and Methods, 46(14), 6846-6863. Chen, Y. J., Li, C., & Lan, K. G. (2015). Sample size adjustment based on promising interim results and its application in confirmatory clinical trials. Clinical Trials, 12(6), 584-595 Yao, J. C., Shah, M. H., Ito, T., Bohas, C. L., Wolin, E. M., Van Cutsem, E., ... & Tomassetti, P. (2011). Everolimus for advanced pancreatic neuroendocrine tumors. New England Journal of Medicine, 364(6), 514-523.