Predictive Classifiers Based on High Dimensional Data: Development & Use in Clinical Trial Design
Richard Simon, D.Sc.
Chief, Biometric Research Branch
National Cancer Institute
http://linus.nci.nih.gov/brb
“Biomarkers”
• Surrogate endpoints
  • A measurement made before and after treatment to determine whether the treatment is working
  • Surrogate for clinical benefit
• Predictive classifiers
  • A measurement made before treatment to select good patient candidates for the treatment
Predictive Biomarker Classifiers
• Many cancer treatments benefit only a small proportion of the patients to whom they are administered
• Targeting treatment to the right patients can greatly improve the therapeutic ratio of benefit to adverse effects
  • Treated patients benefit
  • Treatment is more cost-effective for society
Developmental Strategy (I)
• Develop a diagnostic classifier that identifies the patients likely to benefit from the new drug
• Develop a reproducible assay for the classifier
• Use the diagnostic to restrict eligibility to a prospectively planned evaluation of the new drug
• Demonstrate that the new drug is effective in the prospectively defined set of patients determined by the diagnostic
Develop Predictor of Response to New Drug
• Using phase II data, develop a predictor of response to the new drug
[Design schematic: patients predicted responsive are randomized between the new drug and control; patients predicted non-responsive go off study.]
Applicability of Design I
• Primarily for settings where the classifier is based on a single gene whose protein product is the target of the drug
• With a substantial biological basis for the classifier, it may be ethically unacceptable to expose classifier-negative patients to the new drug
Evaluating the Efficiency of Strategy (I)
• Simon R and Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clinical Cancer Research 10:6759-63, 2004.
• Maitournam A and Simon R. On the efficiency of targeted clinical trials. Statistics in Medicine 24:329-339, 2005.
• Reprints and interactive sample size calculations at http://linus.nci.nih.gov/brb
Two Clinical Trial Designs
• Untargeted design
  • Randomized comparison of T to C without screening for expression of the molecular target
• Targeted design
  • Assay patients for expression of the target
  • Randomize only patients expressing the target
• Relative efficiency depends on the proportion of patients who test positive and on the effectiveness of the drug (compared to control) in test-negative patients
• When fewer than half of patients test positive and the drug has little or no benefit for test-negative patients, the targeted design requires dramatically fewer randomized patients (see the sketch below)
• The targeted design may require fewer or more patients to be screened than are randomized in the untargeted design
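A minimal sketch of this efficiency comparison, using the standard normal approximation for comparing two means; the prevalence and effect sizes below are illustrative assumptions, not values from the published papers:

```python
# Sketch: randomized sample sizes for targeted vs. untargeted designs,
# under the usual normal approximation for a two-arm comparison of means.
# All parameter values are illustrative assumptions.
from scipy.stats import norm

def n_per_arm(delta, sigma=1.0, alpha=0.05, power=0.9):
    """Patients per arm to detect mean difference delta (two-sided test)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

p_pos = 0.25      # assumed proportion of patients who test positive
delta_pos = 0.5   # assumed treatment effect in test-positive patients
delta_neg = 0.0   # assumed treatment effect in test-negative patients

# Targeted design: randomize only test-positive patients.
n_targeted = n_per_arm(delta_pos)

# Untargeted design: the effect is diluted across the whole population.
delta_overall = p_pos * delta_pos + (1 - p_pos) * delta_neg
n_untargeted = n_per_arm(delta_overall)

print(f"targeted:   {2 * n_targeted:.0f} randomized "
      f"(~{2 * n_targeted / p_pos:.0f} screened)")
print(f"untargeted: {2 * n_untargeted:.0f} randomized")
```

With these inputs the targeted design randomizes far fewer patients, although it must screen several times the number it randomizes.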
Web-Based Software for Comparing Sample Size Requirements
• http://linus.nci.nih.gov/brb/
Developmental Strategy (II)
[Design schematic: develop a predictor of response to the new Rx; patients predicted responsive and patients predicted non-responsive are each randomized between new Rx and control.]
Developmental Strategy (II)
• Do not use the diagnostic to restrict eligibility, but to structure a prospective analysis plan
• Compare the new drug to the control overall for all patients, ignoring the classifier
• If p_overall ≤ 0.04, claim effectiveness for the eligible population as a whole
• Otherwise, perform a single subset analysis evaluating the new drug in the classifier-positive patients
• If p_subset ≤ 0.01, claim effectiveness for the classifier-positive patients (see the sketch below)
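A minimal sketch of this pre-specified decision rule; the p-values would come from the trial's primary analyses and are placeholders here:

```python
# Minimal sketch of the Strategy (II) split-alpha analysis plan.
# p_overall and p_subset would come from the trial's pre-specified
# primary analyses; the values below are placeholders.

def strategy_ii_claim(p_overall, p_subset):
    """Return the effectiveness claim under the pre-specified plan."""
    if p_overall <= 0.04:
        return "effective in the eligible population as a whole"
    if p_subset <= 0.01:
        return "effective in classifier-positive patients"
    return "no claim of effectiveness"

print(strategy_ii_claim(p_overall=0.11, p_subset=0.004))
# -> effective in classifier-positive patients
```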
• The purpose of the RCT is to evaluate the new treatment overall and for the pre-defined subset
• The purpose is not to re-evaluate the components of the classifier, or to modify or refine the classifier
• The purpose is not to demonstrate that repeating the classifier development process on independent data results in the same classifier
Developmental Strategy (III)
• Do not use the diagnostic to restrict eligibility, but to structure a prospective analysis plan
• Compare the new drug to the control for classifier-positive patients
  • If p+ > 0.05, make no claim of effectiveness
  • If p+ ≤ 0.05, claim effectiveness for the classifier-positive patients, and
• Continue accrual of classifier-negative patients and eventually test the treatment effect in that group at the 0.05 level
The Roadmap
• Develop a completely specified genomic classifier of the patients likely to benefit from a new drug
• Establish reproducibility of measurement of the classifier
• Use the completely specified classifier to design and analyze a new clinical trial to evaluate effectiveness of the new treatment with a pre-defined analysis plan
Guiding Principle
• The data used to develop the classifier must be distinct from the data used to test hypotheses about treatment effect in subsets determined by the classifier
  • Developmental studies are exploratory, and not closely regulated by FDA
  • FDA should not regulate classifier development
• Studies on which treatment effectiveness claims are to be based should be definitive studies that test a treatment hypothesis in a patient population completely pre-specified by the classifier
Adaptive Signature Design
• An adaptive design for generating and prospectively testing a gene expression signature for sensitive patients
• Freidlin B and Simon R. Clinical Cancer Research 11:7872-8, 2005
Adaptive Signature Design: End-of-Trial Analysis
• Compare E to C for all patients at significance level 0.04
• If the overall H0 is rejected, then claim effectiveness of E for eligible patients
• Otherwise:
Otherwise:
• Using only the first half of patients accrued during the trial, develop a binary classifier that predicts the subset of patients most likely to benefit from the new treatment E compared to control C
• Compare E to C for patients accrued in the second stage who are predicted responsive to E based on the classifier
  • Perform the test at significance level 0.01
  • If H0 is rejected, claim effectiveness of E for the subset defined by the classifier
Classifier Development
• Using data from stage 1 patients, fit all single-gene logistic models (j = 1, …, M) with a treatment main effect and a treatment-by-expression interaction:
  log(p_i / (1 − p_i)) = μ + λ·t_i + γ_j·x_ij + δ_j·t_i·x_ij
  where t_i is the treatment indicator and x_ij is the expression of gene j for patient i
• Select genes whose interaction coefficient δ_j is significant at a pre-specified level α
Classification of Stage 2 Patients
• For the i'th stage 2 patient, selected gene j votes to classify the patient as preferentially sensitive to T if the estimate of λ + δ_j·x_ij from gene j's model exceeds a pre-specified threshold R
Classification of Stage 2 Patients
• Classify the i'th stage 2 patient as differentially sensitive to T relative to C if at least G of the selected genes vote for differential sensitivity of that patient (a sketch follows)
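An illustrative sketch of the two-stage procedure under the single-gene logistic model reconstructed above. The selection level ALPHA, vote threshold R, and required vote count G are assumed tuning parameters that the analysis plan would fix in advance; the vote rule follows the reconstruction above rather than being a verbatim implementation of the published design:

```python
# Sketch of adaptive signature classifier development and stage-2
# classification. Assumes binary response y, treatment indicator t
# (1 = E, 0 = C), and an (n_patients x M) expression matrix X for
# stage-1 patients. ALPHA, R, and G are assumed tuning parameters.
import numpy as np
import statsmodels.api as sm

ALPHA, R, G = 0.001, 0.0, 3

def select_genes(y, t, X):
    """Fit one logistic model per gene (treatment, gene, interaction);
    keep genes whose interaction term is significant at ALPHA."""
    selected = []
    for j in range(X.shape[1]):
        design = sm.add_constant(
            np.column_stack([t, X[:, j], t * X[:, j]]))
        try:
            fit = sm.Logit(y, design).fit(disp=0)
        except Exception:            # skip genes where the fit fails
            continue
        if fit.pvalues[3] < ALPHA:   # p-value of the interaction delta_j
            selected.append((j, fit.params[1], fit.params[3]))
    return selected                  # (gene index, lambda-hat, delta-hat)

def classify(x_new, selected):
    """A stage-2 patient is called sensitive if at least G selected
    genes estimate a treatment effect above R for that patient."""
    votes = sum(lam + delta * x_new[j] > R
                for j, lam, delta in selected)
    return votes >= G
```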
[Figure: simulation results when the treatment effect is restricted to the sensitive subset; 10% of patients sensitive, 10 sensitivity genes, 10,000 genes, 400 patients.]
[Figure: simulation results with an overall treatment effect and no subset effect; 10% of patients sensitive, 10 sensitivity genes, 10,000 genes, 400 patients.]
Good Microarray Studies Have Clear Objectives
• Class comparison
  • For predetermined classes, identify differentially expressed genes
• Class prediction
  • Prediction of a predetermined class (e.g. response) using information from the gene expression profile
• Class discovery
  • Discover clusters among specimens or among genes
Components of Class Prediction
• Feature (gene) selection
  • Which genes will be included in the model
• Selecting the model type
  • E.g. diagonal linear discriminant analysis, nearest-neighbor, …
• Fitting parameters (regression coefficients) for the model
• Selecting values of tuning parameters
Simple Feature Selection
• Select the genes that are differentially expressed among the classes at a significance level α (e.g. 0.01)
• The α level is selected only to control the number of genes in the model (see the sketch below)
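A minimal sketch of this selection step, assuming a two-class comparison with a per-gene t-test:

```python
# Simple feature selection: keep the genes differentially expressed
# between two classes at level ALPHA. X is an (n_samples x n_genes)
# matrix of log-ratios; y holds class labels 0/1.
import numpy as np
from scipy.stats import ttest_ind

ALPHA = 0.01  # chosen only to control how many genes enter the model

def select_features(X, y, alpha=ALPHA):
    _, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.where(pvals < alpha)[0]   # indices of selected genes
```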
Complex Feature Selection
• Small subset of genes which together give the most accurate predictions
  • Combinatorial optimization algorithms
  • Decision trees, random forests
  • Top scoring pairs, greedy pairs
• Little evidence that complex feature selection is useful in microarray problems
• Many published complex methods for selecting combinations of features do not appear to have been properly evaluated
  • Wessels et al. (Bioinformatics 21:3755, 2005)
  • Lai et al. (BMC Bioinformatics 7:235, 2006)
  • Lecocke & Hess (Cancer Informatics 2:313, 2006)
Linear Classifiers for Two Classes
• Fisher linear discriminant analysis
• Diagonal linear discriminant analysis (DLDA): assumes features are uncorrelated (a minimal implementation follows)
• Compound covariate predictor
• Weighted voting classifier
• Support vector machines with inner product kernel
• Perceptrons
• Naïve Bayes classifier
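To illustrate how simple these methods are, a minimal DLDA implementation; a sketch, not a validated tool:

```python
# Minimal diagonal linear discriminant analysis: each gene contributes
# an independent term because off-diagonal covariances are ignored.
import numpy as np

class DLDA:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0)
                                for c in self.classes_])
        # pooled per-gene variance (classes weighted equally here)
        self.var_ = np.mean([X[y == c].var(axis=0, ddof=1)
                             for c in self.classes_], axis=0)
        return self

    def predict(self, X):
        # variance-weighted squared distance to each class mean
        d2 = ((X[:, None, :] - self.means_[None, :, :]) ** 2
              / self.var_).sum(axis=2)
        return self.classes_[np.argmin(d2, axis=1)]
```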
Other Simple Methods
• Nearest neighbor classification
• Nearest centroid classification
• Shrunken centroid classification (example below)
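The nearest-centroid family is available off the shelf; for example, in scikit-learn a positive shrink_threshold gives a PAM-style shrunken-centroid classifier:

```python
# Nearest-centroid and shrunken-centroid classifiers in scikit-learn.
from sklearn.neighbors import NearestCentroid

clf = NearestCentroid()                      # nearest centroid
pam = NearestCentroid(shrink_threshold=0.5)  # shrunken centroid
# usage: clf.fit(X_train, y_train); clf.predict(X_test)
```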
When p >> n
• It is always possible to find a set of features and a weight vector for which the classification error on the training set is zero (see the demonstration below)
• There is generally not sufficient information in p >> n training sets to effectively use complex methods
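A quick demonstration of the first point: an essentially unregularized linear classifier fits randomly assigned labels on pure-noise data perfectly, which is overfitting with no real signal:

```python
# With p >> n, a linear classifier can always fit random labels exactly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5000))   # n = 20 samples, p = 5000 genes
y = rng.integers(0, 2, size=20)       # labels assigned at random

clf = LogisticRegression(C=1e6, max_iter=10000).fit(X, y)
print("training error:", (clf.predict(X) != y).mean())   # 0.0
```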
Myth: complex classification algorithms perform better than simpler methods for class prediction.
• Comparative studies indicate that simpler methods usually work as well or better for microarray problems because they avoid overfitting the data.
Internal Validation of a Classifier
• Split-sample validation
  • Split data into training and test sets
  • Test the single fully specified model on the test set
  • Often applied invalidly, with tuning parameters optimized on the test set
• Cross-validation or bootstrap resampling
  • Repeated training-test partitions
  • Average errors over repetitions
Cross-Validated Prediction (Leave-One-Out Method)
1. The full data set is divided into training and test sets (the test set contains 1 specimen).
2. The prediction rule is built from scratch using the training set.
3. The rule is applied to the specimen in the test set for class prediction.
4. The process is repeated until each specimen has appeared once in the test set.
[Diagram: specimens × log-expression-ratio matrix split into a training set and a test set.]
• Cross-validation is only valid if the test set is not used in any way in the development of the model. Using the complete set of samples to select genes violates this assumption and invalidates cross-validation.
• With proper cross-validation, the model must be developed from scratch for each leave-one-out training set. This means that feature selection must be repeated for each leave-one-out training set (see the sketch below).
• The cross-validated estimate of misclassification error is an estimate of the prediction error for the model obtained by applying the specified algorithm to the full dataset.
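A sketch of leave-one-out cross-validation done correctly: feature selection sits inside the pipeline, so it is redone from scratch on every training fold rather than once on the full dataset. The classifier and the choice of k = 10 genes are illustrative assumptions:

```python
# Proper LOOCV: gene selection is repeated inside every fold.
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loocv_error(X, y, k=10):
    model = make_pipeline(SelectKBest(f_classif, k=k), GaussianNB())
    acc = cross_val_score(model, X, y, cv=LeaveOneOut())
    return 1 - acc.mean()
```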
Prediction on Simulated Null Data
• Generation of gene expression profiles
  • 14 specimens (P_i is the expression profile for specimen i)
  • Log-ratio measurements on 6,000 genes
  • P_i ~ MVN(0, I_6000)
  • Can we distinguish between the first 7 specimens (Class 1) and the last 7 (Class 2)?
• Prediction method
  • Compound covariate prediction
  • Compound covariate built from the log-ratios of the 10 most differentially expressed genes
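A reconstruction of this simulation, substituting a simple Gaussian naïve Bayes classifier for the compound covariate predictor. It contrasts the improper analysis (genes selected once on the full data) with the proper one (selection repeated in every fold):

```python
# Null simulation: 14 specimens, 6000 genes of pure noise, labels 7 vs 7.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((14, 6000))   # P_i ~ MVN(0, I_6000)
y = np.repeat([0, 1], 7)

loo, clf = LeaveOneOut(), GaussianNB()

# Wrong: genes chosen once, using all 14 specimens (test set leaks in).
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
err_wrong = 1 - cross_val_score(clf, X_sel, y, cv=loo).mean()

# Right: gene selection repeated inside every leave-one-out fold.
pipe = make_pipeline(SelectKBest(f_classif, k=10), GaussianNB())
err_right = 1 - cross_val_score(pipe, X, y, cv=loo).mean()

print(f"improper CV error: {err_wrong:.2f}   proper CV error: {err_right:.2f}")
```

On null data the improper estimate is typically far below the 50% error expected by chance, while the proper estimate hovers near it; exact numbers vary with the random seed.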
• For small studies, cross-validation, if performed correctly, can be preferable to split-sample validation
• Cross-validation can only be used when there is a well-specified algorithm for classifier development
Permutation Distribution of the Cross-Validated Misclassification Rate of a Multivariate Classifier
• Randomly permute the class labels and repeat the entire cross-validation
• Re-do for all (or 1000) random permutations of the class labels
• The permutation p-value is the fraction of random permutations that gave as few misclassifications as the error e obtained with the real labels (sketch below)
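A sketch of this permutation test; cv_error_fn is any function (such as the loocv_error sketch above) that reruns the entire cross-validation, including feature selection, from scratch:

```python
# Permutation test for a cross-validated misclassification rate.
import numpy as np

def permutation_pvalue(X, y, cv_error_fn, n_perm=1000, seed=0):
    """cv_error_fn(X, y) must repeat the whole pipeline (including
    feature selection) under cross-validation and return the error."""
    rng = np.random.default_rng(seed)
    e = cv_error_fn(X, y)                       # error with real labels
    hits = sum(cv_error_fn(X, rng.permutation(y)) <= e
               for _ in range(n_perm))
    return hits / n_perm   # fraction of permutations doing as well as e
```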
Validation of a Predictive Classifier Does Not Involve
• Measuring overlap of the gene sets used in classifiers developed from independent data
• Statistical significance of gene expression levels or summary signatures in multivariate analysis
• Confirmation of gene expression measurements on other platforms
• Demonstrating that the classifier or any of its components are “validated biomarkers of disease status”