
Presentation Transcript


  1. Adaptive Trial Designs and Considerations, October 19, 2010. John A. Whitaker, PhD, Vice President, Biostatistics and Statistical Programming, Kendle International Inc.

  2. Outline • Background on adaptive designs • Kinds of adaptations • Acceptance of adaptive designs • Operational Considerations

  3. Adaptive Trial Design FDA Guidance for Industry “Adaptive Design Clinical Trials for Drugs and Biologics”, Draft Guidance, February 2010: “A study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of data (usually interim data) from subjects in the study.” The guidance discusses study modifications to eligibility criteria, the randomization procedure, total sample size, primary and secondary endpoints, and the analysis methods for endpoints.

  4. Regulatory Acceptance of Adaptive Designs Bob Temple (US FDA) from Adaptive Design Workshop 2006 • Adaptive designs not new • Not a substitute for an effective drug, in the right population, at the right dose (but perhaps a way of finding those things) • Still need to resolve the usual worries about clinical trials: • Inflation of alpha error • Introduction of bias from examination of interim results • Developing data-conditioned hypotheses* • Not a way to look at data and do anything you want to • Sometimes pauses for analysis lead to insights • These may not be possible in “seamless” approaches and could undermine advantages *Changing a hypothesis to fit the data, as opposed to determining whether the data confirm the hypothesis

  5. Regulatory Acceptance of Adaptive Designs Rob Hemmings, Statistics Unit Manager, MHRA • Regulators not averse to adaptive designs as a matter of principle, but consider there to be risks • Experience to date is disappointing… • Methodological issues often inadequately addressed • Totality of development program considered inadequate • Only a minority of designs endorsed… however, this is not due to a negative position per se EMEA-EFPIA workshop on adaptive designs in confirmatory clinical trials, December 2007

  6. Types of Adaptive Trial Designs An adaptation may fall into one of the following categories: • Allocation rule • Sampling rule • Stopping, starting, and continuing rule • Decision rule • Multiple adaptations • Seamless designs • Adaptive interventions • Adaptive treatment-switching Adapted from: Dragalin V. Adaptive Designs: Terminology and Classification. Drug Information Journal 40(4):425-435 (2006)

  7. Allocation Rule Response-adaptive randomization • Start with some fixed allocation ratio • As the trial progresses, more subjects are allocated to the treatment with more responses (e.g., the play-the-winner model; see the sketch below) • An alternate design alters the allocation when a fixed number of events have been observed in an arm (e.g., number of deaths) • Breaking of the blind introduces risk of bias Covariate-adaptive allocation • Also known as ‘dynamic randomization’ • e.g., assigns new enrollees in such a way as to minimize imbalance across the least balanced variable • Agency statisticians have commented that this is not “randomization”
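As an aside on the response-adaptive allocation described in slide 7, below is a minimal simulation sketch of a randomized play-the-winner urn scheme in Python. The response rates, urn parameters, and sample size are hypothetical choices for illustration only, not values from the presentation.

```python
import numpy as np

rng = np.random.default_rng(2010)

def randomized_play_the_winner(p_a, p_b, n_patients, initial_balls=1, reward=1):
    """Randomized play-the-winner urn: a success on an arm (or a failure on the
    other arm) adds `reward` balls of that arm's type, so allocation drifts
    toward the treatment that is producing more responses."""
    urn = {"A": initial_balls, "B": initial_balls}
    assigned = {"A": 0, "B": 0}
    responders = {"A": 0, "B": 0}
    p_true = {"A": p_a, "B": p_b}

    for _ in range(n_patients):
        # draw an arm with probability proportional to its balls in the urn
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        assigned[arm] += 1

        # simulate the patient's response and update the urn
        if rng.random() < p_true[arm]:
            responders[arm] += 1
            urn[arm] += reward                  # reward the successful arm
        else:
            other = "B" if arm == "A" else "A"
            urn[other] += reward                # a failure rewards the other arm

    return assigned, responders

assigned, responders = randomized_play_the_winner(p_a=0.6, p_b=0.3, n_patients=200)
print("Patients per arm:  ", assigned)   # allocation drifts toward the better arm
print("Responders per arm:", responders)
```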

  8. Sampling Rule Blinded sample size re-estimation • When there is uncertainty about the variability of the primary endpoint • Adjustment of sample size based on an updated variance estimate at an interim evaluation (see the sketch below) • Sample size can get bigger or smaller* • Does not require α adjustment Unblinded sample size re-estimation (with interim analysis) • In case of uncertainty about the placebo response rate or effect size • Conditional testing procedures based on the data collected so far • α-inflation must be controlled • Breaking of the blind introduces risk of bias *Going smaller can be risky because of the chance of making a poor choice caused by the high variability of estimates early in the study
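To make the blinded re-estimation bullet concrete, here is a minimal sketch using the standard two-sample normal approximation: the clinically relevant difference is held fixed and only the nuisance parameter (the pooled SD, which can be estimated without unblinding) is updated at the interim look. All numbers are hypothetical.

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
    """Two-sample z-approximation: patients per arm for a two-sided test of a
    mean difference `delta` with common standard deviation `sigma`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Planning stage: SD assumed to be 10 for a clinically relevant difference of 4
planned = n_per_arm(delta=4.0, sigma=10.0)

# Interim look: the blinded (pooled, one-sample) SD estimate turns out larger
reestimated = n_per_arm(delta=4.0, sigma=12.5)

print(f"Planned n/arm: {planned}, re-estimated n/arm: {reestimated}")
```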

  9. Stopping, Starting, and Continuing Rule Group sequential designs with early stopping • For safety, efficacy, or futility (also consider risk/benefit) Stochastic curtailment • Stopping the trial early when current data predict the outcome with high probability (see the conditional power sketch below) Adding treatment arms or adjusting dose levels • Rising-dose designs in which a choice is made on the dose to test next • Also used in dose-response trials, e.g., the up-and-down method • When the original arms are not distributed within the dynamic range of the model • Temporal variance may be problematic Continuing a trial for a longer period of time • When results are event-based and more time is needed to accumulate events
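Stochastic curtailment, mentioned in slide 9, is often operationalized through conditional power: the probability of rejecting at the final analysis given the interim z-statistic. The sketch below computes it under the common current-trend assumption (the observed drift continues); the futility threshold suggested in the comment is a hypothetical choice.

```python
import math
from scipy.stats import norm

def conditional_power(z_interim, info_frac, alpha=0.025):
    """Conditional power under the current-trend assumption.

    z_interim : observed z-statistic at the interim analysis
    info_frac : fraction of total statistical information accrued (0 < t < 1)
    alpha     : one-sided significance level of the final test
    """
    t = info_frac
    z_final = norm.ppf(1 - alpha)
    # Brownian-motion form: B(t) = z*sqrt(t); drift estimated as B(t)/t, so
    # P(reject at t=1 | data) = Phi((z/sqrt(t) - z_final) / sqrt(1 - t))
    return norm.cdf((z_interim / math.sqrt(t) - z_final) / math.sqrt(1 - t))

cp = conditional_power(z_interim=0.8, info_frac=0.5)
print(f"Conditional power: {cp:.2f}")  # e.g., consider stopping for futility if very low
```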

  10. Decision Rule • Change of hypothesis, e.g., from non-inferiority to superiority (the reverse direction is more difficult) • Change of primary endpoint or change of the hierarchical order of hypotheses • Changing the test statistic • e.g., using a t-test instead of a chi-square test • Changing the patient population • Alteration of inclusion or exclusion criteria (shift in population) • Change in the definition of an evaluable patient • During the study, implementing an enrichment strategy that attempts to capture a more responsive subgroup within the overall population • Other protocol amendments and changes to the analysis plan

  11. Adaptive Seamless Trial Designs • A design that addresses, within a single trial, objectives that are normally achieved through separate trials • Combines 2 trials into 1 trial (e.g., Phase I/II or Phase II/III) • Consists of 2 phases: learning phase & confirmatory phase • Opportunity to adapt based on data from the learning phase • Advantages: reduces sample size, avoids delays arising from having to complete one trial and start another, reduces overall costs • Final analysis based on: • Data collected only after the adaptation (no alpha adjustment) • Data from before & after the adaptation (alpha penalty; see the combination-test sketch below) • Note: the new FDA guidance does not recognize the term “seamless” (line 228)
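One common way in the adaptive-design literature (not named on the slide) to combine data from before and after the adaptation while paying the alpha penalty is the inverse-normal combination of stagewise p-values with prespecified weights. A minimal sketch, with arbitrary illustrative p-values:

```python
import math
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=math.sqrt(0.5), alpha=0.025):
    """Combine independent one-sided stagewise p-values p1 (learning stage) and
    p2 (confirmatory stage) using prespecified weights with w1^2 + w2^2 = 1.
    Alpha control relies on the weights being fixed before the interim look."""
    w2 = math.sqrt(1.0 - w1 ** 2)
    z_combined = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return z_combined, bool(z_combined >= norm.ppf(1 - alpha))

z, reject = inverse_normal_combination(p1=0.08, p2=0.02)
print(f"Combined z = {z:.2f}, reject H0: {reject}")
```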

  12. Seamless Phase 2/3 [Diagram: a traditional program runs a Phase 2 study (Groups A, B, C vs. placebo), followed by data analysis and Phase 3 planning once Phase 2 results are available, and then a separate Phase 3 study (Group B vs. placebo). The adaptive combined Phase 2/3 design runs all arms in one trial, dropping Group A at Interim Analysis 1 and Group C at Interim Analysis 2, and continues to the end of Phase 3.]

  13. Adaptive Interventions (Enrichments) Placebo run-in • Patient treated with placebo prior to randomization • Drop patients who are not compliant with medication • Drop patients who demonstrate a response on placebo Active run-in • Patients treated with some standard of care prior to randomization • Drop patients who are not compliant with medication • Drop patients who fail to respond • May enrich population for patients expected to respond to the test intervention Dose adjustment (titration) designs • Individual titration to desired response • Identification of range of optimal dose levels

  14. Adaptive Treatment-Switching Re-randomization • e.g., if one treatment arm is dropped and it is desired to retain patients who were on the dropped arm Randomized withdrawal • Everyone given active treatment and after stabilization, randomized to continue active treatment or switch to placebo • Can demonstrate treatment effect since patients switched to placebo are expected to deteriorate • Minimizes time on placebo Patient choice designs • Crossover design in which patients can change treatments • Allows determination of a preferred treatment

  15. Summary of Acceptance of Adaptive Designs • Commonly accepted: • Phase 2: Response-adaptive randomization • Phase 3: Early stopping for efficacy or futility • Phase 3: Blinded sample size re-estimation • Gradually being accepted: • Seamless phase 2/3 pivotal trials • Unblinded sample size re-estimation • Still very controversial: • Enrichment of subpopulations • Change in choice of test statistic • Change of primary hypothesis • Change of primary endpoint

  16. Summary of Issues with Adaptive Designs Several critical issues influence level of acceptance by regulatory authorities or by scientific community: • Type (and number) of adaptations • Phase of study: • Adaptations accepted in exploratory phases 1, 2a • But may be criticized in confirmatory phase 3 • Preservation of type 1 error alpha • Preservation of type 2 error (FDA guidance line 1272) • Preservation of blindness - prevent bias due to information leak • Reduced ability for “period of reflection and data exploration” (FDA guidance lines 280 and 417)

  17. What is useful for Early Clinical Development? • Seamless Phase I/II • Estimating the dose-response curve • Response-adaptive designs • Continual Reassessment Method (adaptive dose escalations)

  18. Adaptive Dose Escalations • Traditional rising-dose 3+3 design • TER – no de-escalation / STER – allows de-escalation • Continual Reassessment Method (CRM) • Originally used in Phase I oncology studies • In brief, a dose-toxicity curve (DTC) is assumed a priori and a target toxicity rate is chosen • The DTC is updated (Bayesian statistics) as toxicity data become available* (see the sketch below) • Each new patient’s dose is based on information about how previous patients tolerated the treatment • CRM assigns more patients near the MTD *For a thorough example, see: Garrett-Mayer E. Understanding the Continual Reassessment Method for Dose Finding Studies: An Overview for Non-Statisticians (March 2005). Johns Hopkins University, Dept. of Biostatistics Working Papers, Working Paper 74. http://www.bepress.com/jhubiostat/paper74
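The Bayesian update behind the CRM can be sketched with the common one-parameter power ("empiric") model, p_i = skeleton_i^exp(a) with a normal prior on a. The skeleton, prior SD, target toxicity rate, and patient data below are hypothetical, and the posterior is evaluated on a grid rather than by MCMC to keep the example self-contained.

```python
import numpy as np

def crm_recommend(skeleton, target, doses_given, toxicities, prior_sd=1.34):
    """One-parameter power-model CRM: p_i(a) = skeleton[i] ** exp(a), a ~ N(0, prior_sd^2).

    skeleton    : prior guesses of the toxicity probability at each dose level
    doses_given : dose-level indices already administered
    toxicities  : 0/1 DLT outcomes, aligned with doses_given
    Returns the dose whose posterior mean toxicity is closest to the target rate.
    """
    skeleton = np.asarray(skeleton, dtype=float)
    a_grid = np.linspace(-4, 4, 801)                   # grid over the model parameter
    prior = np.exp(-0.5 * (a_grid / prior_sd) ** 2)    # unnormalized normal prior

    # log-likelihood of the observed (dose, DLT) pairs at every grid value of a
    log_lik = np.zeros_like(a_grid)
    for d, y in zip(doses_given, toxicities):
        p = skeleton[d] ** np.exp(a_grid)
        log_lik += np.log(p) if y else np.log1p(-p)

    post = prior * np.exp(log_lik - log_lik.max())     # unnormalized posterior on the grid
    post /= post.sum()                                 # discrete normalization

    # posterior mean toxicity probability at each dose level
    post_tox = [float(np.sum(skeleton[i] ** np.exp(a_grid) * post))
                for i in range(len(skeleton))]
    next_dose = int(np.argmin(np.abs(np.array(post_tox) - target)))
    return next_dose, post_tox

# Hypothetical 5-level skeleton, target DLT rate 25%; three patients treated so
# far at levels 0, 1, 1 with one DLT observed at level 1.
next_dose, post_tox = crm_recommend(
    skeleton=[0.05, 0.12, 0.25, 0.40, 0.55], target=0.25,
    doses_given=[0, 1, 1], toxicities=[0, 0, 1],
)
print("Posterior toxicity estimates:", [round(p, 3) for p in post_tox])
print("Recommended next dose level: ", next_dose)
```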

  19. Comparison of CRM vs. Standard 3+3

  20. Adaptive Dose Escalations (continued) • How big is a CRM trial? There are various stopping rules: • A fixed number of total patients (regardless of doses explored) • A fixed number of patients treated at a given dose • Stopping when the target dose changes by less than 10% • Criticisms – possible exposure to toxic levels of treatment • Large dose escalations may occur • The initial starting dose may be high • An alternative to pure CRM is to combine it with a 3+3 design (see the sketch below) • Use a 3+3 design with a fixed number of doses until the first DLT is observed • Then switch to CRM, using the data observed in the 3+3 stage as the prior
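For contrast with the CRM sketch above, here is a minimal sketch of the cohort decision rule of a standard 3+3 design as commonly formulated (the TER variant without de-escalation mentioned in slide 18); the hybrid approach on this slide would run such a rule until the first DLT and then hand the accumulated data to the CRM.

```python
def three_plus_three_decision(n_treated, n_dlt):
    """Cohort decision for the current dose level in a standard 3+3 design
    (no de-escalation). Returns "escalate", "expand" (treat 3 more patients
    at this dose), or "stop" (the previous dose is then declared the MTD)."""
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate"
        if n_dlt == 1:
            return "expand"
        return "stop"                     # 2 or 3 DLTs: the MTD has been exceeded
    if n_treated == 6:
        return "escalate" if n_dlt <= 1 else "stop"
    raise ValueError("a 3+3 cohort is evaluated after 3 or 6 patients")

print(three_plus_three_decision(3, 1))    # -> expand
print(three_plus_three_decision(6, 1))    # -> escalate
```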

  21. Operational Considerations for Adaptive Trials • A chief concern with adaptive design studies is the possibility of introducing bias and an increased risk of false-positives • Possibility for positive study results that are difficult to interpret • To address these concerns, FDA recommends • Sponsors submit a written standard operating procedure (“SOP”), which defines who will conduct the interim analysis and implement the adaptation plan.  • The Agency recommends using an independent entity for this purpose, such as a Data Monitoring Committee ("DMC"), to control access to unblinded data.

  22. Potential Sources of Bias • Multiplicity of options • Operational bias • Impact on treatment, management, or evaluation of patients • Access to unblinded information • Statistical Bias • Modifications based on interim analyses of a biomarker or an intermediate clinical endpoint thought to be related to the study final endpoint • Small samples have the potential to overestimate true effect size (“random highs”) • Increase in the sample size does not eliminate the statistical bias in the estimate of treatment effect

  23. Study-wide Type I Error Rate Chance of demonstrating a treatment effect when none exists • Multiplicity – need to control the family-wise Type I error rate • At each adaptation / analysis, there is an increase in Type I error (see the simulation sketch below) • At each interim analysis stage there are: • opportunities for early rejection of one or more of several null hypotheses • the possibility of increasing sample size, or • selection of the final hypothesis from among several initial options • Controlling Type I error is best accomplished by prospectively specifying all possible adaptation plans and applying appropriate statistical methodology at the protocol stage
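A minimal Monte Carlo sketch of the multiplicity point above: repeatedly testing accumulating null data at an unadjusted two-sided 0.05 level inflates the family-wise Type I error well beyond 5%. The number of looks, sample size, and number of simulations are arbitrary illustration choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(19)

def familywise_alpha(n_per_arm=100, looks=5, n_sims=5_000, alpha=0.05):
    """Simulate a two-arm trial with no true treatment effect, testing the
    accumulating data at `looks` equally spaced analyses, each at an
    unadjusted two-sided alpha, and return the observed rejection rate."""
    checkpoints = np.linspace(n_per_arm // looks, n_per_arm, looks, dtype=int)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(size=n_per_arm)    # both arms drawn from the same distribution
        b = rng.normal(size=n_per_arm)
        for n in checkpoints:
            if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
                rejections += 1           # any unadjusted "significant" look counts
                break
    return rejections / n_sims

print(f"Family-wise Type I error with 5 unadjusted looks: {familywise_alpha():.3f}")
# Roughly 0.14 rather than the nominal 0.05, which is why adaptive analyses
# need alpha-spending, combination tests, or similar prespecified methods.
```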

  24. Type II Error Rate Chance of failing to demonstrate a treatment effect when one exists • Adaptive design methods have the potential to inflate the Type II error rate • Decisions based on interim results that are highly variable because of the limited amount of early study data • Suboptimal adaptive selection of design modifications, such as: • Selection of the wrong dose • Exclusion of the wrong population • Insufficient power to detect a real treatment effect on an endpoint • An overly rigorous futility stopping criterion

  25. Statistical Analysis Plans • A prospective SAP is more important for trials based on adaptive procedures • The SAP should be available by the time the protocol is finalized • The SAP should include: • planned changes • statistical methods to implement the adaptation • the data analysis procedure for each stage of adaptation • justification for the method of controlling the Type I error rate • SAPs for adaptive trials are generally more detailed and complex than those for standard clinical trials • Modifications to the SAP should be discouraged and, if any, should occur before any unblinded analyses are performed • A blinded steering committee can make such protocol and SAP changes • There needs to be a firewall between personnel with access to the unblinded analyses and those making SAP changes

  26. Concept of Clinical Trial Simulations • An important planning tool for assessing the statistical properties of a trial design and its inferential statistics • Trial simulations may be helpful in comparing performance characteristics across several competing designs under different scenarios (see the sketch below) • Modeling and simulation allow the sponsor to demonstrate control of the Type I error rate • Adaptation procedures are generally complex, and an analytical solution is often not available • Some adaptations may allow a Bayesian approach • A useful planning tool at the study design stage • Selection of prior distributions • May aid in deciding which adaptation should be selected
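A minimal sketch of the kind of design comparison simulation can provide at the planning stage: estimated power and expected sample size for a fixed design versus the same design with a single, simple futility look. The stopping rule, effect size, and sample sizes are hypothetical; a real design would use formally derived boundaries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def simulate_design(effect=0.3, n_per_arm=200, futility_look=True,
                    futility_p=0.5, n_sims=5_000, alpha=0.05):
    """Return (power, expected n per arm) for a two-arm trial with a normal
    endpoint. If futility_look is True, the trial stops at the halfway point
    whenever the one-sided interim p-value exceeds `futility_p`."""
    half = n_per_arm // 2
    successes, total_n = 0, 0
    for _ in range(n_sims):
        treat = rng.normal(effect, 1.0, size=n_per_arm)
        control = rng.normal(0.0, 1.0, size=n_per_arm)
        if futility_look:
            p_half = stats.ttest_ind(treat[:half], control[:half],
                                     alternative="greater").pvalue
            if p_half > futility_p:       # interim data trend the wrong way: stop
                total_n += half
                continue
        total_n += n_per_arm
        p_final = stats.ttest_ind(treat, control, alternative="greater").pvalue
        successes += p_final < alpha
    return successes / n_sims, total_n / n_sims

for futility in (False, True):
    power, avg_n = simulate_design(futility_look=futility)
    print(f"futility look = {futility}: power = {power:.3f}, expected n/arm = {avg_n:.0f}")
```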

  27. Role of Data Monitoring Committee • Data Monitoring Committee (DMC) needs to be independent, non-sponsor controlled to protect study integrity • Responsible for review of interim analysis of unblinded data and adaptive decision-making in accordance with the well-specified adaptation plans • Charter defines what information, under what circumstances, is permitted to be passed from the DMC to the sponsor or investigators • Decisions or recommendations by the DMC related to any aspect of study design, conduct or analysis can be influenced by the knowledge of interim results • DMC is not the appropriate group to consider and recommend study design changes in response to the new information.

  28. Documentation required • Written documentation should include • Identification of the personnel who will perform the interim analyses (sponsor-involved or CRO statistician) • How the interim analyses will be performed, including how any potential irregularities in the data (e.g., withdrawals, missing values) will be managed • How adaptation decisions will be made • Who will make recommendations for the adaptation (sponsor-involved or CRO staff) • Who will have access to the interim results • How that access will be controlled and verified

  29. Additional Items from New FDA Guidance • FDA does not consider the term “adaptive design” to apply to studies that are “revised based on information obtained entirely from sources outside of the study” (lines 118 & 571) • Revisions based on non-prospectively planned analyses and decision paths are not regarded as adaptive design (line 549) • Special Protocol Assessment may not be best suited for adaptive design studies (line 1656) • Need for comprehensive and prospectively written SOPs that define who will implement interim analysis and adaptation plan (line 1685) • “Study sponsors should have assurance that the personnel performing these roles have appropriate expertise…” (line 1725)

  30. Acknowledgements • Many thanks to Ülker Aydemir who provided the inspiration for this lecture and much of the lecture material • I also wish to thank Elke Sennewald, William Sietsema, Kevin Skare, and J. Michael Sprafka, who provided much advice and encouragement

  31. Bibliography
Bretz F, Koenig F, Brannath W, Glimm E, Posch M. Adaptive designs for confirmatory clinical trials. Statistics in Medicine 28:1181-1217 (2009)
Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthén B, Gibbons RD. Adaptive designs for randomized trials in public health. Annu. Rev. Public Health 30:1-25 (2009)
Chow SC, Chang M. Adaptive design methods in clinical trials – a review. Orphanet Journal of Rare Diseases 3:11 (2008)
Coffey CS, Kairalla JA. Adaptive clinical trials: progress and challenges. Drugs R D 9(4):229-242 (2008)
Dragalin V. Adaptive designs: terminology and classification. Drug Information Journal 40(4):425-435 (2006)
Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J. Adaptive design in clinical drug development – an executive summary of the PhRMA Working Group. Journal of Biopharmaceutical Statistics 16(3):275-283 (2006)
Gao P, Ware JH, Mehta C. Sample size re-estimation for adaptive sequential design in clinical trials. Journal of Biopharmaceutical Statistics 18:1184-1196 (2007)
Garrett-Mayer E. Understanding the Continual Reassessment Method for Dose Finding Studies: An Overview for Non-Statisticians (March 2005). Johns Hopkins University, Dept. of Biostatistics Working Papers, Working Paper 74. http://www.bepress.com/jhubiostat/paper74
Hung HMJ. Considerations in adapting clinical trial design. J Formos Med Assoc 107(12 Suppl):S14-S18 (2008)
Wang M, Wu YC, Tsai GF. A regulatory view of adaptive trial design. J Formos Med Assoc 107(12 Suppl):S3-S8 (2008)
Wang SJ, Hung HMJ, O’Neill R. Adaptive patient enrichment designs in therapeutic trials. J Formos Med Assoc 107(12 Suppl):3-8 (2008)
Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. February 2010. United States Department of Health and Human Services, Food and Drug Administration.
