Confidentiality and trial integrity issues for monitoring adaptive design trials Paul Gallo FDA-Industry Workshop September 28, 2006
Outline • Interim analysis conventions, motivation • Issues for adaptive designs • Interim analysis / review / decision process • Sponsor involvement? • Information inferred by observers • Types of information / risks • Steps to limit information • Summary
Monitoring / confidentiality Adaptive designs present a number of challenges (e.g., statistical, logistical, procedural) which will need to be addressed before they can become widely utilized. Issues relating to • monitoring of accruing data • restriction of knowledge of interim results • and the processes of data review, decision-making and implementation may affect the integrity of trial results, and are thus likely to be critical in determining the extent, and shaping the nature, of adaptive design utilization in clinical trials.
Current interim analysis conventions Monitoring of accruing data is of course performed in many clinical trials, most frequently for: • safety monitoring • a formal group sequential plan allowing stopping for efficacy • lack of effect / futility judgments. Current procedures and conventions governing monitoring are a sensible starting point for addressing similar issues in trials with adaptive designs.
Current interim analysis conventions As described, e.g., in the recent FDA DMC guidance document, comparative interim results and access to unblinded data should be strictly controlled: • Access to interim results diminishes the ability of trial personnel and the sponsor to manage the trial in a manner which is (and which will be seen by interested parties to be) completely objective. • Knowledge of interim results could introduce subtle, unknown biases into the trial, perhaps causing changes in characteristics of patients recruited, administration of the intervention, endpoint assessments, etc.
Current interim analysis conventions Thus a standard operational model involves having interim analysis results reviewed confidentially, and recommendations made, by a Data Monitoring Committee (DMC), whose members do not have any other responsibilities in the trial. In confirmatory trials, DMCs are usually totally external to the sponsor organization, for maximum independence.
Issues for adaptive designs I. Adaptive designs will certainly require review of accruing data. • Who will be involved in the analysis, review, and decision-making processes? • What might be the differences in operational models relative to more familiar monitoring situations? • Will sponsor perspective and input be desired / necessary for some types of adaptations?
Issues for adaptive designs II. An important distinction versus other monitoring situations: the results are intended to be used to implement some adaptation(s) which will govern some aspect of the conduct of the remainder of the trial. • Can observers, by viewing the actions taken, infer information about the results which might be perceived to rise to an unacceptable level?
Analysis / review / decision process Concerns about confidentiality to ensure objective trial management, and potential bias introduced by knowledge of interim results, would seem to be no less relevant for adaptive designs than in other settings. The key principles to adhere to would seem to be: • separation / independence of the DMC from other trial activities • strict limitation of knowledge of interim results.
Analysis / review / decision process Monitoring board composition • Adaptive design trials may utilize a single monitoring board for adaptations and other responsibilities (e.g., safety); or else a separate board may be considered for the adaptation decisions. • DMCs in adaptive design trials may require additional expertise not traditionally represented on DMCs; perhaps to monitor the adaptation algorithm, or to make the type of decision called for in the adaptation plan (e.g., dose selection).
Sponsor involvement? FDA (2006): “Sponsor exposure to unblinded interim data . . . can present substantial risk to the integrity of the trial”. Might sponsor perspective be relevant for most effectively making certain types of adaptation decisions? (e.g., dose selection). Will sponsors accept and trust decisions made confidentially by external DMCs in long-term trials / projects with important commercial implications? (e.g., seamless phase II/III).
Sponsor involvement Potential sponsor participation in the process in confirmatory adaptive trials should require: • a clear rationale based on the nature of the decision and its implications • a small number of individuals not involved in trial operations • clear understanding of the issues involved and risks to the trial, and restrictive firewalls / procedures in place • “minimal” sponsor exposure to make the needed decision, i.e., only at the adaptation point, only the relevant data (e.g., unlike a DMC with whom they may interact, which may have a broader ongoing role).
Information apparent to observers • Adaptive designs may lead to changes which will be apparent to some extent (sample size, treatment allocation, population, dosage, treatment arm selection, etc.) and can thus be considered to provide some information to observers. • Considering the concerns which are the basis for the confidentiality conventions: can we distinguish between types and amounts of information, and how problematic they might be in this regard?
Information apparent to observers Note: other types of monitoring are not immune from this issue. It has never been the case that no information can be inferred from monitoring; i.e., all monitoring has some action thresholds, and lack of action usually implies that such thresholds have not been reached.
Example – Triangular test Design: • normal data, 2 group comparison • study designed to detect Δ = 0.15 • 90% power • 4 equally-spaced analyses • requires about 2276 patients
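For context, a minimal sketch (assuming a two-sided α of 0.05, which the slide does not state): the standard fixed-sample formula for this design gives roughly 1,868 patients, so the triangular test's maximum of about 2,276 reflects the usual trade-off of a larger maximum sample size in exchange for a smaller expected one.

```python
from math import ceil
from scipy.stats import norm

# Fixed-sample benchmark for the design above (alpha = 0.05 two-sided is an
# assumption; the slide states only the standardized effect size and power).
alpha, power, delta = 0.05, 0.90, 0.15      # delta = standardized effect size

z_a = norm.ppf(1 - alpha / 2)               # ~1.96
z_b = norm.ppf(power)                       # ~1.28

n_per_group = 2 * (z_a + z_b) ** 2 / delta ** 2
print(ceil(n_per_group), 2 * ceil(n_per_group))   # ~934 per group, ~1868 total
```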
Example – Triangular test • ‘Christmas tree’ stopping boundary (boundary plot not reproduced here)
Example – Triangular test • Δ̂ = Z / V (score statistic divided by its information) • Continuation beyond the 3rd look would imply (barring over-ruling of the boundary) that the point estimate lies between 0.076 and 0.106. • Doesn't that convey quite a bit of information about the interim results? Note: even for an O'Brien-Fleming boundary, despite its perceived conservativeness, continuation beyond 2/3 of the information would imply an estimate below the hypothesized delta.
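A small sketch of that back-calculation (the look-3 boundary values below are hypothetical, chosen only to reproduce the range quoted above; the actual values would come from the triangular-test design itself):

```python
# Continuation past the third look tells an observer that the score statistic Z
# lies between that look's boundaries, so the implied interim estimate is
# delta_hat = Z / V.  Boundary values here are hypothetical illustrations only.
n_total = 2276
V3 = (3 / 4) * n_total / 4        # Fisher information at look 3 for a two-group
                                  # normal comparison with unit variance (~427)

lower_Z, upper_Z = 32.4, 45.2     # hypothetical continuation region at look 3

print(round(lower_Z / V3, 3), round(upper_Z / V3, 3))   # ~0.076 to ~0.106
```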
Information apparent to observers In conventional GS design practice, when monitoring is justified, this issue does not seem to be perceived as compromising the trial or discouraging the monitoring. Presumably, the view is that a reasonable balance is struck between the objectives and benefits of the monitoring and any slight potential risk to the trial, with appropriate and feasible safeguards in place to minimize that risk. The same general standard should apply when considering this type of issue in adaptive designs.
Information apparent to observers • For the general issue of information conveyed by adaptations, there may be opportunities in some cases to lessen this concern by withholding some details from the protocol, and placing them in another document of more limited circulation. • For example, if a selection decision is to be made based upon predictive probabilities, do full details and thresholds need to be described in the protocol, or could they be in a document of limited circulation (DMC, health authorities)?
Information apparent to observers Selection decisions, of the type made in seamless designs (e.g., choice of dose, subgroup, etc. for continuation), might not seem to convey an amount of information that should influence or compromise a trial, as long as the specific numerical results on which the decisions were based remain confidential. The information conveyed might often be similar to that in other conventionally acceptable monitoring situations.
Information apparent to observers More problematic - changes based in an algorithmic manner on interim treatment effect estimates, which in effect provide knowledge of those estimates to anyone who knows the algorithm and the change. Most typical example - certain approaches to sample size re-estimation: SSnew = f(interim treatment effect estimate) ⇒ estimate = f⁻¹(SSnew)
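To make the inversion concrete, here is a sketch using one common re-estimation rule as an assumption (re-powering the trial at the interim estimate; the presentation does not specify a particular rule). An observer who knows the rule and sees the new sample size recovers the interim estimate exactly.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.90
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)     # ~3.24

def new_total_n(delta_hat):
    """Sponsor side: total sample size re-powered at the interim estimate."""
    return 4 * z ** 2 / delta_hat ** 2

def inferred_estimate(n_new):
    """Observer side: invert the rule to recover the interim estimate."""
    return 2 * z / n_new ** 0.5

n_new = new_total_n(0.10)                          # interim estimate of 0.10
print(round(n_new), round(inferred_estimate(n_new), 3))   # ~4203, 0.1
```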
Group sequential vs adaptive debate Recent literature, e.g., Jennison & Turnbull, Mehta & Tsiatis. Group sequential designs can of course be viewed as a mechanism for sample size determination. What's the difference between: • a study designed for 500 patients which might be extended to 1000 if results are weak; • a study designed to have 1000 patients, but which might stop at 500 if the effect is large enough? Maybe not so much? . . .
Summary • We should not aim to broadly undo established monitoring conventions, but rather to fine-tune them to achieve their sound underlying principles. • To justify sponsor participation in monitoring, provide convincing rationale and “minimize” this involvement, and enforce strict control of information. • Some types of adaptations convey limited information for which it seems difficult to envision how the trial might be compromised. • Others convey more information, but perhaps extra steps can be implemented to mask this.