Learning fMRI-Based Classifiers for Cognitive States
Stefan Niculescu, Carnegie Mellon University, April 2003
Our Group: Tom Mitchell, Luis Barrios, Rebecca Hutchinson, Marcel Just, Francisco Pereira, Xuerui Wang
fMRI and Cognitive Modeling
Have:
• First generative models: Task → Cognitive state sequence → average fMRI response per ROI
• Predict subject-independent, gross anatomical regions
• Miss subject-to-subject variation and trial-to-trial variation
Want:
• Much greater precision, and to reverse the prediction
• <fMRI, behavioral data, stimulus> of a single subject, single trial → Cognitive state sequence
Does fMRI contain enough information?
• Can we devise learning algorithms to construct such “virtual sensors”?
Cognitive task → Cognitive state sequence → “virtual sensors” of cognitive state
Preliminary Experiments: Learning Virtual Sensors
• Machine learning approach: train classifiers fMRI(t, …, t+d) → CognitiveState
• Fixed set of possible states
• Trained per subject, per experiment
• Time interval specified
Approach
• Learn fMRI(t, …, t+k) → CognitiveState
• Classifiers: Gaussian Naïve Bayes, SVM, kNN
• Feature selection/abstraction:
  • Select subset of voxels (by signal, by anatomy)
  • Select subinterval of time
  • Average activities over space, time
  • Normalize voxel activities
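As a concrete illustration of this approach, here is a minimal sketch of the feature-construction and classifier-comparison step, assuming the trial data has already been loaded into arrays. The names `trials`, `labels`, and `voxel_subset`, and the scikit-learn classifiers, are illustrative assumptions, not the exact code used in the studies.

```python
# Sketch: build features by selecting voxels, averaging over a time subinterval,
# and normalizing, then compare GNB / SVM / kNN with leave-one-out accuracy.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def make_features(trials, voxel_subset, t_start, t_end):
    """Keep a subset of voxels, average activity over a time subinterval,
    then normalize each voxel's values across trials."""
    window = trials[:, t_start:t_end, voxel_subset].mean(axis=1)  # average over time
    return (window - window.mean(axis=0)) / (window.std(axis=0) + 1e-8)

# trials: (n_trials, n_snapshots, n_voxels), labels: one cognitive state per trial.
# e.g. the two snapshots at 5.0 and 5.5 sec correspond to indices 10, 11
# when images are taken every 500 msec.
X = make_features(trials, voxel_subset, t_start=10, t_end=12)
for name, clf in [("GNB", GaussianNB()),
                  ("SVM", SVC(kernel="linear")),
                  ("5NN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
    print(f"{name}: leave-one-out accuracy = {acc:.2f}")
```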
Study 1: Pictures and Sentences [Xuerui Wang and Stefan Niculescu]
• Trial: read sentence, view picture, answer whether sentence describes picture
• Picture presented first in half of trials, sentence first in other half
• Image acquired every 500 msec
• 12 normal subjects
• Three possible objects: star, dollar, plus
• Data collected by Just et al.
Is Subject Viewing Picture or Sentence?
• Learn fMRI(t, …, t+15) → {Picture, Sentence}
• 40 training trials (40 pictures and 40 sentences)
• 7 ROIs
• Training methods: k Nearest Neighbor, Support Vector Machine, Naïve Bayes
Is Subject Viewing Picture or Sentence?
• SVMs and GNB worked better than kNN
• Results (leave-one-out) on picture-then-sentence data, sentence-then-picture data, and the combined set
• Random guess = 50% accuracy
• SVM using the pair of time slices at 5.0 and 5.5 sec after stimulus: 91% accuracy
Error for Single-Subject Classifiers

Dataset   GNB    SVM    1NN    3NN    5NN
SP        0.10   0.11   0.13   0.12   0.10
PS        0.20   0.17   0.38   0.31   0.26
SP + PS   0.29   0.32   0.43   0.41   0.37

(SP = sentence presented first, PS = picture presented first)
• 95% confidence intervals are roughly 10%-15% wide
• Accuracy of the default classifier is 50%
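The quoted interval widths can be reproduced with a standard binomial approximation. The sketch below is illustrative only; the trial count is an assumed example, not a figure from the study.

```python
# Sketch: approximate 95% confidence interval for a leave-one-out error estimate,
# using the normal approximation to the binomial. Numbers are illustrative.
import math

def error_confidence_interval(error_rate, n_trials, z=1.96):
    """Return the lower and upper ends of a ~95% CI for an error rate
    estimated from n_trials held-out examples."""
    half_width = z * math.sqrt(error_rate * (1.0 - error_rate) / n_trials)
    return error_rate - half_width, error_rate + half_width

# e.g. a 10% error rate estimated from 80 leave-one-out trials
print(error_confidence_interval(0.10, 80))  # roughly (0.03, 0.17), i.e. ~13 points wide
```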
Training Cross-Subject Classifiers
• Approach: define supervoxels based on anatomically defined regions of interest
• Normalize per-voxel activity for each subject, so each value is scaled into [0, 1]
• Abstract to seven brain-region supervoxels
• 16 snapshots for each supervoxel
• Train on n-1 subjects, test on the nth (leave-one-subject-out cross-validation)
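A minimal sketch of this supervoxel abstraction and the leave-one-subject-out evaluation, assuming per-subject snapshot arrays and per-subject voxel-to-ROI maps are already available. All variable names (`subject_data`, `roi_maps`, `subject_labels`, `n_subjects`) are hypothetical placeholders.

```python
# Sketch: rescale each voxel to [0,1] within a subject, average voxels within each
# of 7 anatomically defined ROIs ("supervoxels"), keep 16 snapshots per trial, and
# evaluate with leave-one-subject-out cross-validation.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def to_supervoxels(trials, roi_of_voxel, n_rois=7):
    """trials: (n_trials, 16, n_voxels) for one subject; roi_of_voxel: ROI index per voxel.
    Returns (n_trials, 16 * n_rois) features."""
    lo = trials.min(axis=(0, 1), keepdims=True)          # per-subject [0,1] rescaling
    hi = trials.max(axis=(0, 1), keepdims=True)
    scaled = (trials - lo) / (hi - lo + 1e-8)
    rois = np.stack([scaled[:, :, roi_of_voxel == r].mean(axis=2)
                     for r in range(n_rois)], axis=2)    # (n_trials, 16, 7)
    return rois.reshape(len(trials), -1)

X = np.concatenate([to_supervoxels(subject_data[s], roi_maps[s])
                    for s in range(n_subjects)])
y = np.concatenate([subject_labels[s] for s in range(n_subjects)])
groups = np.concatenate([[s] * len(subject_labels[s]) for s in range(n_subjects)])

acc = cross_val_score(GaussianNB(), X, y, groups=groups,
                      cv=LeaveOneGroupOut()).mean()
print(f"leave-one-subject-out accuracy = {acc:.2f}")
```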
Error for Cross-Subject Classifiers

Dataset   GNB    SVM    1NN    3NN    5NN
SP        0.14   0.13   0.15   0.13   0.11
PS        0.20   0.22   0.26   0.24   0.21
SP + PS   0.30   0.25   0.36   0.33   0.32

• 95% confidence intervals are approximately 5% wide
• Accuracy of the default classifier is 50%
Study 2: Word Categories [Francisco Pereira]
Twelve word categories:
• Family members • Occupations • Tools • Kitchen items • Dwellings • Building parts
• 4-legged animals • Fish • Trees • Flowers • Fruits • Vegetables
Word Categories Study
• Ten neurologically normal subjects
• Stimulus: 12 blocks of words
  • Category name (2 sec)
  • Word (400 msec), blank screen (1200 msec); answer
  • Word (400 msec), blank screen (1200 msec); answer
  • …
• Subject answers whether each word is in the category
• 32 words per block, nearly all in the category
• Category blocks interspersed with 5 fixation blocks
Training Classifier for Word Categories
• Learn fMRI(t) → word-category(t)
• fMRI(t) = 8,470 to 11,136 voxels, depending on subject
• Training methods: train ten single-subject classifiers
  • kNN (k = 1, 3, 5)
  • Gaussian Naïve Bayes P(fMRI(t) | word-category)
Study 2: Results
• Classifier outputs a ranked list of classes
• Evaluated by the fraction of classes ranked ahead of the true class (0 = perfect, 0.5 = random, 1.0 = unbelievably poor)

Dataset   GNB    1NN    3NN    5NN
Words     0.10   0.40   0.40   0.40
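A minimal sketch of this ranked-list metric, using scikit-learn's Gaussian Naive Bayes model of P(fMRI(t) | word-category) to produce the class ranking. The arrays `X` (snapshots x voxels) and `y` (category labels) are assumed placeholders, not the study's actual preprocessing.

```python
# Sketch: leave-one-out evaluation of the "fraction of classes ranked ahead of the
# true class" metric (0 = perfect, 0.5 = chance, 1.0 = worst), with GNB posteriors
# providing the ranking over word categories.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut

def fraction_ranked_ahead(posteriors, true_index):
    """Fraction of the other classes given higher probability than the true class."""
    better = np.sum(posteriors > posteriors[true_index])
    return better / (len(posteriors) - 1)

scores = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = GaussianNB().fit(X[train_idx], y[train_idx])
    probs = model.predict_proba(X[test_idx])[0]
    true_index = list(model.classes_).index(y[test_idx][0])
    scores.append(fraction_ranked_ahead(probs, true_index))

print(f"mean fraction of classes ranked ahead of true class = {np.mean(scores):.2f}")
```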
Study 3: Syntactic Ambiguity [Rebecca Hutchinson]
Is subject reading an ambiguous or unambiguous sentence?
• “The experienced soldiers warned about the dangers conducted the midnight raid.”
• “The experienced soldiers spoke about the dangers before the midnight raid.”
Study 3: Results
• 10 examples, 4 subjects
• Near-random results if no feature selection is used
• With feature selection:
  • SVM: 77% accuracy
  • GNB: 75% accuracy
  • 5NN: 72% accuracy
Feature Selection • Five feature selection methods: • All (all voxels available) • Active (n most active available voxels according to a t-test) • RoiActive (n most active voxels in each ROI) • RoiActiveAvg (average of the n most active voxels in each ROI) • Disc (n most discriminating voxels according to a trained classifier) • Active works best
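A minimal sketch of the "Active" selection step, under the assumption that "most active" means a per-voxel t-test of task snapshots against fixation-period snapshots. The arrays `task` and `fixation`, and the cutoff n, are illustrative.

```python
# Sketch of "Active" feature selection: score each voxel with a t-test of its
# activity during task blocks vs. fixation (baseline) snapshots, keep the n
# highest-scoring voxels. task, fixation: (snapshots x voxels) arrays.
import numpy as np
from scipy import stats

def select_active_voxels(task, fixation, n):
    t_values, _ = stats.ttest_ind(task, fixation, axis=0)
    ranked = np.argsort(-np.abs(t_values))   # most active voxels first
    return ranked[:n]                        # indices of the selected voxels

active_idx = select_active_voxels(task, fixation, n=400)
X_selected = task[:, active_idx]             # restrict classifiers to these voxels
```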
Feature Selection

Dataset       Feature Selection   GNB    SVM    1NN    3NN    5NN
PictureSent   All                 0.29   0.32   0.43   0.41   0.37
PictureSent   Active              0.16   0.09   0.20   0.18   0.19
Words         All                 0.10   N/A    0.40   0.40   0.40
Words         Active              0.08   N/A    0.30   0.20   0.16
Synt Amb      All                 0.43   0.38   0.50   0.46   0.47
Synt Amb      Active              0.25   0.23   0.29   0.29   0.28
Summary • Successful training of classifiers for instantaneous cognitive state in three studies • Cross subject classifiers trained by abstracting to anatomically defined ROIs • Feature selection and abstraction are essential
Research Opportunities
• Learning temporal models: HMMs, temporal Bayes nets, …
• Discovering useful data abstractions: ICA, PCA, hidden layers, …
• Linking cognitive states to cognitive models: ACT-R, CAPS
• Merging data from multiple sources: fMRI, ERP, reaction times, …