Grant Applications, Data Safety Monitoring Boards, Pilot Studies
Janet Raboud, PhD
Introduction to Statistical Methods for Clinical Trials
Dalla Lana School of Public Health
Funding for Clinical Trials Key criteria - Important problem - Feasible • Industry (pharmaceutical companies) • Peer reviewed grant agencies • Canadian Institutes of Health Research (CIHR) • Heart and Stroke Foundation • Canadian Cancer Society • Grant review panels (CIHR) • Randomized trials • Population Health
Writing Grant Applications/Protocols for Clinical Trials
These need to match!
• Hypotheses
• Primary/secondary objectives
• Primary/secondary outcome(s)
• Primary/secondary analysis
• Randomization
• Sample size and power considerations
• Interim analysis
• Timelines
CIHR Outline for RCT Grant Application
1. The Need for a Trial
1.1 What is the problem to be addressed?
1.2 What is/are the principal research question(s) to be addressed?
1.3 Why is a trial needed now? Evidence from the literature (see 1.4 below), professional and consumer consensus, and pilot studies should be cited if available.
1.4 Give references to any relevant systematic review(s) and discuss the need for your trial in the light of the(se) review(s). If you believe that no relevant previous trials have been done, give details of your search strategy for existing trials.
1.5 How will the results of this trial be used? E.g. inform decision making/improve understanding.
1.6 Describe any risks to the safety of participants involved in the trial.
2. The Proposed Trial • 2.1 What is the proposed trial design? E.g. Open-label, double or single blinded, etc. • Parallel group design • Crossover trial • Factorial design • Cluster randomized trial • Superiority/equivalence/non-inferiority • 2.2 What are the planned trial interventions? Both experimental and control. • Treatment – dosage and duration • Intervention – how will it be delivered? • Often there are many possibilities – describe how you chose the treatments/interventions to test
2. The Proposed Trial • 2.3 What are the proposed practical arrangements for allocating participants to trial groups? E.g. Randomization method. If stratification or minimization are to be used, give reasons and factors to be included. • Variable block size • Assignments in envelopes/on website/assigned by phone • Identify person/position to generate random treatment assignment • Plan for transmitting code to pharmacy, if treatment to be blinded
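To make the allocation piece concrete, here is a minimal sketch (not part of the CIHR template) of how a permuted-block randomization list with variable block sizes might be generated; the function name, arm labels, block sizes, and seed are all illustrative assumptions.

```python
import random

def permuted_block_list(n_patients, arms=("A", "B"), block_sizes=(4, 6), seed=2024):
    """Illustrative permuted-block randomization list with variable block sizes."""
    rng = random.Random(seed)                      # fixed seed -> reproducible list
    assignments = []
    while len(assignments) < n_patients:
        size = rng.choice(block_sizes)             # pick a block size at random
        block = list(arms) * (size // len(arms))   # equal allocation within the block
        rng.shuffle(block)                         # permute treatments within the block
        assignments.extend(block)
    return assignments[:n_patients]

print(permuted_block_list(12))   # e.g. ['B', 'A', 'A', 'B', ...]
```

In practice the list would be generated by the person identified in the protocol and transmitted securely (e.g. to the pharmacy), rather than printed openly.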
2. The Proposed Trial • 2.4 What are the proposed methods for protecting against sources of bias? E.g. Blinding or masking. If blinding is not possible please explain why and give details of alternative methods proposed, or implications for interpretation of the trial's results. • Placebo pills. In a trial of Drug A vs Drug B, patients may take • Drug A + placebo version of Drug B • Drug B + placebo version of Drug A • Blinding not possible for some interventions – surgery, counseling, etc • If blinding not possible, may have differential drop-out
2. The Proposed Trial • 2.5 What are the planned inclusion/exclusion criteria? • Often want patients to be homogeneous with respect to stage of disease. • Define a population expected to benefit from intervention • Expected to be compliant • May restrict number of other comorbidities • Balance of homogeneity of study population with generalizability of findings
2. The Proposed Trial • 2.6 What is the proposed duration of treatment period? • Long enough • To see an effect • To see if the effect is maintained • Not so long • To risk losing patients or • Not be able to complete the trial in a reasonable period of time • 2.7 What is the proposed frequency and duration of follow up? • Frequently enough to get information on patients who don’t complete the entire study and to be able to describe patterns of change • Costs of visits and burden to patients will limit number of visits
2. The Proposed Trial • 2.8 What are the proposed primary and secondary outcome measures? • Ideally, have one primary endpoint • If more than one primary endpoint, do sample size calculations for both • Sometimes, there is a long list of secondary outcome measures • Should choose these thoughtfully • Risk of type I error increases with more outcomes (see the sketch below) • Do sample size/power calculations for secondary outcomes of importance • If clinical events (e.g. MI) are rare, may choose a surrogate marker. Assumes that treatment will affect the surrogate marker in the same way as the clinical outcome • Binary vs continuous measures (statistical power and clinical importance will both influence the choice)
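A quick calculation illustrates why the type I error risk grows with the number of outcomes; it assumes independent outcomes, which is a simplification.

```python
# Familywise type I error when k independent outcomes are each tested at alpha = 0.05
alpha = 0.05
for k in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k} outcomes: P(at least one false positive) = {fwer:.2f}")
# 1 -> 0.05, 3 -> 0.14, 5 -> 0.23, 10 -> 0.40
```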
2. The Proposed Trial • 2.9 How will the outcome measures be measured at follow up? • If clinical endpoints, will they be adjudicated? • Will reviewers be blind to treatment assignment? • More detail required for subjective outcomes
2.11 What is the proposed sample size and what is the justification for the assumptions underlying the power calculations? Include both control and treatment groups, a brief description of the power calculations detailing the outcome measures on which these have been based, and give event rates, means and medians etc. as appropriate. • N.B. It is important to give the justification for the size of the difference that the trial is powered to detect. Does the sample size calculation take into account the anticipated rates of non-compliance and loss to follow-up given below? • Assumptions: • Standard deviation of outcome • Event rate in control arm • Dropout rate • Within patient correlation • Can include supplemental material in an appendix
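As an illustration of how the assumptions listed in 2.11 feed into the calculation, here is a minimal sketch of an approximate per-arm sample size for a two-sample comparison of means, inflated for anticipated loss to follow-up; the difference, standard deviation, and dropout rate in the example call are hypothetical.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80, dropout=0.15):
    """Approximate n per arm for a two-sample comparison of means
    (normal approximation), inflated for anticipated loss to follow-up."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = 2 * (z_a + z_b) ** 2 * (sd / delta) ** 2
    return math.ceil(n / (1 - dropout))      # inflate for dropout, round up

# e.g. detect a 5-unit difference with SD = 12 and 15% dropout
print(n_per_group(delta=5, sd=12))           # -> 107 per arm
```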
2.12 What is the planned recruitment rate? How will the recruitment be organized? Over what time period will recruitment take place? What evidence is there that the planned recruitment rate is achievable? • Recruitment always slower than you expect • Need to document numbers of patients with disease/meet inclusion criteria/willing to participate • Take note of other ongoing trials, potential changes in treatment options • 2.13 Are there likely to be any problems with compliance? On what evidence are the compliance figures based?
2.14 What is the likely rate of loss to follow up? On what evidence is the loss to follow-up rate based? • Need to adjust sample size for the expected loss to follow-up • Include any data on rates of loss to follow-up • Specify how these patients will be handled in the analysis • Include any information on expected randomness of loss to follow-up • 2.15 How many centers will be involved? • Each center should write a letter indicating how many patients they can recruit • More centers will help achieve a larger sample size but are logistically more complicated • 2.16 What is the proposed type of analyses? • Needs to match the sample size calculations!
2.17 What is the proposed frequency of analyses? • Number and timing of interim analyses • Stopping rules for those analyses • 2.18 Are there any planned subgroup analyses? • Subgroup analyses may be planned by disease severity or other characteristics • Power analyses should accompany these plans • 2.19 Has any pilot study been carried out using this design?
Eg: Cohort study of HPV Vaccine in HIV • Grant for extending follow-up of cohort of n=400 HIV infected girls and women who have received the HPV vaccine • Potential outcomes: • Antibody response • Incident HPV infections • Cervical dysplasia • Potential comparisons: • HIV infected to HIV negative population controls • Predictors within HIV infected population
Further considerations
• HPV vaccine protects against 4 HPV types (6, 11, 16, 18) – can only measure response in participants without evidence of exposure to each type at the time of vaccination
• HPV vaccine response varies widely by age – want to determine power for specific age groups, particularly girls
• Need to account for loss to follow-up
Hypothesis: We hypothesize that HPV antibody levels in HIV-positive girls and women will decline more rapidly and more significantly than in HIV-negative girls and women, and that this decline will be determined by HIV parameters.
OBJECTIVES: Overall, we set out to determine the immunogenicity and efficacy of the HPV vaccine out to 5 years in an established cohort of HIV-positive girls and women in Canada.
Primary Objective: To measure the antibody response to each genotype contained in the qHPV vaccine out to 60 months post vaccination regimen.
Secondary Objectives:
1) To determine the incidence rate and nature of ‘breakthrough’ HPV incident and persistent infections of vaccine-contained and non-vaccine-contained high-risk types;
2) To determine the incidence rate of cervical dysplasia (LSIL or greater) and/or vulvar and vaginal dysplasia with and without vaccine-contained types;
3) To determine the incidence rate of external genital warts.
For each of these objectives we will compare outcomes for our cohort to those in HIV-negative populations. Where available, these comparisons will include patient-level data comparators from ongoing parallel studies conducted by members of this investigator team. Within the HIV-positive cohort we will assess predictors of antibody response, persistent HPV infection, cervical dysplasia, and external genital warts.
Primary endpoints: The primary endpoint of this study will be the HPV antibody GMT for each of the 4 types contained in the GARDASIL™ vaccine at month 60 after receiving at least one dose of the vaccine.
Secondary endpoints: Incidence rates of 1) breakthrough incident and persistent HPV infections, 2) cervical dysplasia, and 3) external genital warts.
Primary analysis: • For each of the four vaccine-contained HPV types, antibody response will be summarized using GMTs calculated for age-specific groups within the HIV-positive cohort. • a) Antibody response for subjects aged 9-13 and those aged 16-26 will be compared to the individual patient-level data from an HIV-negative cohort using Wilcoxon rank-sum tests. The age-adjusted fold-difference in GMT between HIV-positive and HIV-negative populations will be calculated using generalized linear models. For women over age 26, the type-specific HPV GMT will be compared to GMTs in the published literature using a sign test. • b) Within the HIV-positive cohort, predictors of decline from peak antibody response to Month 60 among HIV-positive girls and women will be determined using generalized linear models. Potential covariates to be considered in these models are age, CD4 count, HIV viral load and antiretroviral status at the time of first vaccination, and CD4 nadir.
c) Longitudinal analyses will be conducted to describe the effect of HIV on antibody response at the 7 visits at 6, 12, 18, 24, 36, 48 and 60 months post first vaccine dose. Generalized estimating equation (GEE) models with an identity link will be used to adjust for correlation among repeat observations within study participants. Individual-level data on HIV-negative girls and young women will be available at 7, 24, 36, 48, and 60 months post first vaccine dose. After adjusting for age, an interaction between HIV and time since first dose will be included in the model to test for a difference in the degree of waning of response in addition to the strength of the initial response. GEE models will also be fit for the HIV-positive girls and women alone to examine the associations of various covariates with level of response and waning of response over time. Covariates outlined in section 1b will be considered for inclusion in the model.
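A sketch of how a GEE model along these lines could be fit in Python with statsmodels; the long-format dataset, file name, and column names are hypothetical, and the exchangeable working correlation structure is an illustrative choice rather than the study's stated one.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant-visit, with columns
# id, log_gmt, hiv (0/1), months (time since first dose), and age.
df = pd.read_csv("hpv_antibody_long.csv")          # illustrative file name

model = smf.gee(
    "log_gmt ~ age + hiv * months",                # HIV x time interaction tests waning
    groups="id",                                   # repeated measures within participant
    data=df,
    family=sm.families.Gaussian(),                 # identity link for continuous log(GMT)
    cov_struct=sm.cov_struct.Exchangeable(),       # working correlation, illustrative choice
)
result = model.fit()
print(result.summary())
```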
Detectable difference in antibody response between HIV+ve and HIV-ve women
The population of interest for the long-term antibody response is participants who are both HPV-antibody and HPV-genotype naïve at the time of vaccination; the number of evaluable subjects varies depending on the age group and HPV type. For the entire study sample, the detectable ratio of GMT to the relevant HIV-negative population at a single time point will be between 1.13 and 1.30, with 80% power and a significance level of .05, using a two-sample t test. For the smallest age groups (n=30), the detectable ratio of GMT to the relevant HIV-negative population will be between 1.8 and 2.1. For larger age groups (n=100), the detectable ratio will be 1.35 to 1.5.
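The back-calculation behind detectable ratios like these can be sketched as follows: solve for the detectable standardized difference on the log(GMT) scale for the given group sizes, then exponentiate. The SD of log(GMT) below is an assumed illustrative value, not the study's.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

def detectable_gmt_ratio(n1, n2, sd_log_gmt, alpha=0.05, power=0.80):
    """Detectable GMT ratio for a two-sample t test on log(GMT)."""
    d = TTestIndPower().solve_power(nobs1=n1, ratio=n2 / n1,
                                    alpha=alpha, power=power)   # detectable effect size
    return np.exp(d * sd_log_gmt)                               # back to a ratio of GMTs

# e.g. 30 HIV-positive vs 100 HIV-negative participants, SD of log(GMT) = 1.0
print(detectable_gmt_ratio(30, 100, sd_log_gmt=1.0))            # ~1.8 with these inputs
```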
Detectable difference in antibody response by covariate among HIV+ve women • Within the HIV-positive sample, the detectable ratio in average post-vaccination GMT over the entire follow-up between 2 groups defined by a covariate that is present in 15% of the sample (e.g. detectable viral load) is 1.33 to 1.96, assuming • 80% power, • a significance level of .05, • standard deviation of log(GMT) between 0.8 and 1.7, • within-individual correlation between .6 and .8, • and 5 visits per participant. • Sample size formulae were based on detecting an average change in response over repeated measurements.
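The repeated-measures adjustment implicit in this calculation can be written, under a standard compound-symmetry assumption (stated here as an assumption, since the slide does not give the formula), as

$$\operatorname{Var}(\bar{y}_i) \;=\; \frac{\sigma^2\,\bigl[1 + (m-1)\rho\bigr]}{m},$$

where $m$ is the number of visits per participant and $\rho$ the within-individual correlation; the usual two-sample formula for a single measurement is then applied with $\sigma^2$ replaced by $\sigma^2[1+(m-1)\rho]/m$.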
Detectable difference in rates of HPV infections between HIV+ve participants and HIV-ve controls
The reported rates of new persistent infection in HIV-negative populations are 0.4 per 100 person-years in a sample of 16-23 year olds and 0.12 in a sample of 25-45 year olds. Our current rate of persistent infection among women who have received at least 1 vaccination is 0.77/100PY (95% CI: 0.09-2.76), with 261 person-years of follow-up available. At the end of the present study, with 340 person-years of follow-up, we will be able to detect an increase in the rate of persistent infections from 0.1 to 1.26 per 100 person-years with 80% power and a significance level of 0.05, using a one-sample comparison. If these women are followed up to 3 years longer under the proposed study design, such that 750 person-years of follow-up are accumulated, we will be able to detect increases in rates of persistent infections from 0.12 to 0.74 per 100 person-years.
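One way to reproduce a power figure of this kind for a one-sample comparison of a Poisson rate against a fixed reference rate is by simulation; this is a sketch under assumed conventions (exact Poisson test, upper α/2 tail of a two-sided α = 0.05), not necessarily the method the investigators used.

```python
import numpy as np
from scipy.stats import poisson

def power_one_sample_poisson(rate_null, rate_alt, person_years,
                             alpha=0.05, n_sim=20000, seed=1):
    """Simulated power of an exact one-sample Poisson test of H0: rate = rate_null
    against a higher alternative, rejecting in the upper alpha/2 tail."""
    rng = np.random.default_rng(seed)
    mu0 = rate_null * person_years
    c = 0
    while poisson.sf(c - 1, mu0) > alpha / 2:   # sf(c-1) = P(X >= c | H0)
        c += 1                                  # smallest count that rejects H0
    counts = rng.poisson(rate_alt * person_years, size=n_sim)
    return (counts >= c).mean()

# e.g. null rate 0.1 vs alternative 1.26 per 100 person-years, 340 PY of follow-up
print(power_one_sample_poisson(0.1 / 100, 1.26 / 100, person_years=340))  # ~0.80
```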
Data Safety Monitoring Board • Composition • Clinician(s), biostatistician, trialist, ethicist • Chair of the DSMB communicates with the study investigators on behalf of the DSMB • Independent of the study investigators • Can recommend stopping the trial
Role of DSMB • Review timeliness of enrolment (projected vs actual) • Review protocol violations • Review adverse events • Relationship to study drug (definitely, probably, possibly, unlikely related) • Severity • Serious adverse events – usually reported within 24 hours; a narrative paragraph describing each event is sent to the DSMB • Review interim analyses
Procedures • Open meeting: study investigators and DSMB • General update on study progress and other pertinent issues (competing trials, change in practice, drug availability) • Closed meeting: DSMB • Review the report prepared by the study biostatistician, which is not seen by the study investigators • Make a recommendation to stop, proceed, or modify the trial
Eg 1. Trial stopped early for benefit
• RCT of male circumcision to reduce risk of HIV infection in men during vaginal intercourse
• 2784 men were randomized to immediate vs delayed (by x months) circumcision
• Two interim analyses and a final analysis were scheduled
• The 1st interim analysis was done with data through April 17, 2005, with 37% of potential follow-up accrued, and was assessed at α1 = .000518 with the O’Brien-Fleming bound
• The 2nd interim analysis was done with data through May 13, 2006, with 74% of potential follow-up accrued, using α2 = .0183
• The DSMB requested a 3rd unscheduled interim analysis using data up to Oct 31, 2006, with 87% of the follow-up data accrued; the stopping boundary for the 3rd analysis was α3 = .0269
• 22/1391 men in the immediate group and 47/1393 in the delayed group acquired HIV
• 2-year incidence of HIV: 2.1% (95% CI 1.2-3.0) vs 4.2% (3.0-5.4), p=.0065
• The DSMB recommended stopping the trial
Bailey et al. Lancet 2007; 369(9562): 643-56.
http://www.avac.org/data-safety-monitoring-boards (general discussion of these 3 examples)
Eg 2. Trial stopped early for harm
• RCT of vaginal microbicide (VanDamme et al. NEJM 2008)
• 1398 women randomly assigned to cellulose sulfate gel or placebo
• Primary outcome = HIV infection
• Planned sample size of 2574 was calculated to detect a reduction of 50% in the risk of HIV acquisition, assuming the cumulative probability of HIV infection in the placebo group was 4% at 12 months, and that 80% of the women completed the trial
• A one-sided test was planned, with α=.025
• Interim analysis was scheduled to occur after approximately half of the 66 anticipated HIV infections occurred
• A Lan-DeMets spending function with O’Brien-Fleming boundaries was specified to preserve the Type I error associated with a one-sided test with α = .025
• No attempt was made to control Type I error for a test of harm
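For reference, the O’Brien-Fleming-type Lan-DeMets spending function is commonly written as α(t) = 2 − 2Φ(z_{1−α/2}/√t), where t is the information fraction; the sketch below evaluates it for the one-sided α = .025 used here, at illustrative information fractions (the trial's actual boundaries depend on the observed information).

```python
from scipy.stats import norm

def of_spent_alpha(t, alpha=0.025):
    """O'Brien-Fleming-type Lan-DeMets spending function (one-sided alpha):
    cumulative type I error spent by information fraction t."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / t ** 0.5))

for t in (0.5, 0.75, 1.0):
    print(f"information fraction {t:.2f}: alpha spent = {of_spent_alpha(t):.4f}")
# ~0.0015 at a halfway look, ~0.0097 at 75%, and the full 0.0250 at the final analysis
```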
Microbicide trial, cont’d • At the interim analysis after 35 HIV infections, there were 24 infections in the cellulose sulfate group and 11 in the placebo group • Estimated hazard ratio = 2.23 (95% CI 1.05, 5.03), p=.02 • The DSMB recommended stopping the trial. • The final analysis included 6 additional HIV infections during the study not recorded at the time of the interim analysis • 25 HIV infections in the cellulose sulfate group • 16 HIV infections in the placebo group • HR = 1.61 (95% CI 0.86, 3.01), p=.13
Eg 3. Trial stopped early for futility RCT of microbicide called SAVVY in Ghana Primary outcome = HIV infection A planned sample size of 2142, with an expected 66 events, was calculated to detect a reduction of 50% in the risk of HIV acquisition, assuming the rate of HIV infection in the placebo group was 5 per 100 person years, and 20% loss to follow-up. A two-sided test was planned, with α=.05. DSMB was to review adverse events and primary safety and HIV seroconversion data twice, after 16 and 33 events. Testing for early evidence of effectiveness was only to occur at the 2nd look.
• 2153 women randomized
• Rate of new HIV infections was much lower than assumed in the sample size calculations
• Interim analysis at 29 events
• At the interim analysis, it was determined that an additional 1980 participants would have to be enrolled in order to observe 66 events
• Trial stopped for futility
Dose Adjustment Mid-trial • NEJM 1996; 335: 377-384. • 2 regimens for treatment of MAC bacteremia • rifampin, ethambutol, clofazimine, ciprofloxacin • rifabutin, ethambutol, clarithromycin • After 125 patients were randomized, 24 of 63 patients on the rifabutin arm developed uveitis • Dose of rifabutin amended from 600 mg to 300 mg mid-trial
Definition of a Pilot Study
• Smaller-scale study conducted in advance of a larger randomized trial for the purposes of planning the larger trial
• Goals of a pilot study are different from those of the ultimate trial
• Don’t do a formal sample size calculation
• Analysis of a pilot study should be descriptive
• Not powered to draw conclusions
Reasons for Conducting a Pilot Study • Process • Feasibility of recruitment • Suitability of inclusion/exclusion criteria • Assess acceptability of the intervention • Retention rates • Resources • Capacity to handle the patients • Equipment • Nurses • Principal investigators Thabane BMC Med Res Method 2010
Reasons for Conducting a Pilot Study (2) • Management • Pilot case report forms • Test randomization procedures • Storage and testing of equipment • Training of staff • Data entry • Scientific • Collect data on primary outcome for determination of sample size • Estimated treatment effect • Variance of treatment effect
Poor reasons for a pilot trial
• Small amount of funding available
• Student project
• Someone else did a small trial
• Often the investigator is hoping to see an effect and at the end wants to analyze it like a real trial
Criteria for success of pilot studies • Need to have some! • Should be based on objectives • Decisions from pilot studies could be • Do not proceed to larger trial • Proceed to larger trial after modification of design • Proceed as designed • Example criteria • Rate of recruitment > x patients/month • Able to process x patients/day • Estimated sample size is feasible
Sample size of pilot studies • Formal sample size calculations may not be required • Rule of thumb – 30 patients per parameter to be estimated (Lancaster 2004) • Confidence interval approach (see the sketch below) • E.g. if the criterion for success of the pilot study is to achieve a rate of x% in acceptability, adherence, or another outcome, one could calculate the sample size required for the lower bound of the 95% confidence interval to exceed x%. Thabane BMC Med Res Method 2010
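A sketch of the confidence-interval approach, using a Wilson interval; the success criterion (70%) and anticipated rate (85%) are hypothetical values for illustration.

```python
from statsmodels.stats.proportion import proportion_confint

def min_n_for_lower_bound(target, expected_rate, alpha=0.05, max_n=1000):
    """Smallest pilot n such that, if the observed rate equals expected_rate,
    the lower 95% confidence bound exceeds the success criterion `target`."""
    for n in range(5, max_n + 1):
        successes = round(expected_rate * n)
        lower, _ = proportion_confint(successes, n, alpha=alpha, method="wilson")
        if lower > target:
            return n
    return None

# e.g. success criterion: adherence > 70%, anticipating ~85% adherence in the pilot
print(min_n_for_lower_bound(target=0.70, expected_rate=0.85))   # smallest n meeting the criterion
```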
Internal vs External Pilot Studies
• External pilot studies – data not included in the analysis of the ultimate trial, especially if the study design changed in any way
• Internal pilot studies – data are included in the analysis of the ultimate trial
Proof of Concept • Goal is to determine if there is any activity of a drug • Usually use surrogate markers as endpoints • Often Phase I/II trials • Sample size calculations • “If nothing goes wrong, is everything all right?” • Hanley, Lippman-Hand. JAMA 1983. • If none of n patients has the event of interest, we can be 95% sure that the chance of the event is at most 3/n; i.e., the upper limit of the 95% CI is 3/n.
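The rule of three follows from setting the exact probability of observing zero events in $n$ patients equal to 0.05:

$$(1-p)^n = 0.05 \;\Longrightarrow\; p_{\text{upper}} = 1 - 0.05^{1/n} \approx \frac{-\ln 0.05}{n} \approx \frac{3}{n}.$$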
Adaptive design • Modify the protocol during the trial • Stop the trial early due to safety, futility, or efficacy at interim analysis • Change eligibility criteria to increase enrolment • Change study endpoints • Change statistical analysis plan
Results of a Pilot Study
• Should be published
• Should be identified as a pilot study
• Should focus on the objectives of the pilot study, not statistical significance of the outcome for the ultimate trial
Grant Application for Pilot Study
• CIHR often requires results of a pilot study before funding the ultimate trial
• Grant application should state the objectives of the pilot study as well as the objectives of the ultimate trial
• Don’t need a formal sample size calculation but do need to justify your choice of number of participants
References
Lancaster GA, Dodd S, Williamson P. Design and analysis of pilot studies: recommendations for good practice. J Evaluation in Clinical Practice 2004;10(2):307-312.
Thabane L, … Goldsmith C. A tutorial on pilot studies: the what, why and how. BMC Medical Research Methodology 2010;10:1.
Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Medical Research Methodology 2010;10:67.
Eldridge S, Bond C, Campbell M, Lancaster G, Thabane L, Hopewell S. Definition and reporting of pilot and feasibility studies. Trials 2013;14(Suppl 1):O18.