
Distributed Representative Reading Group


Presentation Transcript


  1. Distributed Representative Reading Group

  2. Research Highlights
  - Support vector machines can robustly decode semantic information from EEG and MEG
  - Multivariate decoding techniques allow for detection of subtle, but distributed, effects
  - Semantic categories and individual words have distributed spatiotemporal representations
  - Representations are consistent between subjects and stimulus modalities
  - A scalable hierarchical tree decoder further improves decoding performance

  3. Why do reported results vary from study to study?
  Partly due to the statistical analysis: traditional univariate techniques for high-dimensional neuroimaging data
  - require correction for multiple comparisons to control for false positives
  - are insensitive to subtle, but widespread, effects within the brain
  - yield differing results depending on the specific responses elicited by the particular experiment performed

  4. Why choose SVM?
  - Robust to high-dimensional data
  - Attempts to find a separating boundary that maximizes the margin between the classes
  - Maximizing the margin reduces over-fitting and allows for good generalization when classifying novel data
  - Allows for a multivariate examination of the spatiotemporal dynamics
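The margin-maximizing idea above can be sketched with a linear SVM on synthetic "trial x feature" data standing in for the EEG/MEG feature vectors; the trial counts, feature dimensionality, class labels, and effect size here are illustrative assumptions, not the study's data.

```python
# Minimal sketch: linear SVM decoding a weak, distributed class effect
# from high-dimensional synthetic data (shapes/labels are assumptions).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 600          # e.g. 100 channels x 6 time windows
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)    # 0 = nonliving, 1 = living (assumed)
X[y == 1, :50] += 0.4                    # subtle effect spread over 50 features

# Large-margin linear classifier; C trades off margin width vs. training error.
clf = LinearSVC(C=1.0, dual=False, max_iter=10000)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

Even though each individual feature carries only a weak signal, the multivariate margin-based classifier recovers the distributed effect well above chance.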

  5. Why hierarchical tree decoding?
  Although a single multiclass decoder can distinguish individual word representations well, it does not directly incorporate prior knowledge about semantic classes and the features that best discriminate these categories. To combine information from the classifier models used to decode semantic category and individual words, a hierarchical tree framework that decodes word properties sequentially was implemented. Given an unknown word, the tree decoder:
  - First classifies it as either a large (target) or small (nontarget) object
  - Second classifies it as a living or nonliving object
  - Finally classifies it as an individual word within the predicted semantic category
  Advantages: appropriate features can be used to decode each word property, narrowing the search space before individual words are decoded; such a tree construct is easily scalable and could allow for the eventual decoding of larger libraries of words.
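The sequential scheme above can be sketched as a small class that chains per-level classifiers: size first, then category within the predicted size, then word within the predicted category. The vocabulary, labels, and synthetic data are illustrative assumptions; the study's actual features and classifier settings differ.

```python
# Sketch of a three-level hierarchical tree decoder (assumed setup).
import numpy as np
from sklearn.svm import LinearSVC

class TreeDecoder:
    """Decode size, then category, then individual word, sequentially."""
    def fit(self, X, sizes, cats, words):
        self.size_clf = LinearSVC(dual=False).fit(X, sizes)
        self.cat_clf, self.word_clf = {}, {}
        for s in np.unique(sizes):
            m = sizes == s
            self.cat_clf[s] = LinearSVC(dual=False).fit(X[m], cats[m])
            for c in np.unique(cats[m]):
                mm = m & (cats == c)
                self.word_clf[(s, c)] = LinearSVC(dual=False).fit(X[mm], words[mm])
        return self

    def predict(self, X):
        out = []
        for x in X:
            x = x.reshape(1, -1)
            s = self.size_clf.predict(x)[0]          # level 1: large vs small
            c = self.cat_clf[s].predict(x)[0]        # level 2: living vs nonliving
            out.append(self.word_clf[(s, c)].predict(x)[0])  # level 3: word
        return np.array(out)

# Hypothetical 8-word vocabulary: word -> (size, category) attributes
rng = np.random.default_rng(1)
vocab = {i: (i // 4, (i // 2) % 2) for i in range(8)}
words = rng.integers(0, 8, size=400)
sizes = np.array([vocab[w][0] for w in words])
cats = np.array([vocab[w][1] for w in words])
X = rng.normal(size=(400, 60)) + np.eye(8)[words] @ rng.normal(size=(8, 60)) * 2

tree = TreeDecoder().fit(X[:300], sizes[:300], cats[:300], words[:300])
acc = (tree.predict(X[300:]) == words[300:]).mean()
print(acc)
```

Note the design point from the slide: each level can in principle use its own feature set (e.g. spectral features for size, different time windows for category vs. word); this sketch reuses one feature vector for simplicity.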

  6. Experiment
  Visual (SV) and auditory (SA) versions of a language task
  Task: subjects were instructed to press a button if the presented word represented an object larger than 1 foot in any dimension
  Stimuli: objects larger than 1 foot : smaller than 1 foot = 1:1; living objects (animals and animal parts) : nonliving objects (man-made items) = 1:1
  Presentation: half of the trials presented a novel word shown only once during the experiment, while the other half presented 1 of 10 repeated words (each shown multiple times during the experiment).

  7. Decoding framework
  Features: the average amplitude in six 50-ms time windows was sampled from every channel and concatenated into one large feature vector per trial
  Decoding living versus nonliving: 200, 300, 400, 500, 600, and 700 ms poststimulus
  Decoding individual words: 250, 300, 350, 400, 450, and 500 ms poststimulus
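The feature extraction described above can be sketched in a few lines: average each channel's amplitude within each 50-ms window and concatenate across windows. The sampling rate, channel count, and treatment of the listed times as window onsets are assumptions for illustration.

```python
# Sketch of windowed-amplitude feature extraction (assumed shapes/rates).
import numpy as np

def window_features(trials, sfreq, onsets_ms, win_ms=50.0):
    """trials: (n_trials, n_channels, n_samples) poststimulus data."""
    feats = []
    for onset in onsets_ms:
        start = int(onset / 1000 * sfreq)
        stop = int((onset + win_ms) / 1000 * sfreq)
        # mean amplitude per channel within this window
        feats.append(trials[:, :, start:stop].mean(axis=2))
    return np.concatenate(feats, axis=1)  # (n_trials, n_channels * n_windows)

rng = np.random.default_rng(0)
trials = rng.normal(size=(10, 64, 800))   # 10 trials, 64 channels, 800 ms @ 1 kHz
X = window_features(trials, sfreq=1000.0,
                    onsets_ms=[200, 300, 400, 500, 600, 700])
print(X.shape)
```

With 64 channels and six windows this yields a 384-dimensional feature vector per trial, the kind of high-dimensional input the SVM is chosen to handle.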

  8. Decoding accuracy
  Compared to a Naive Bayes classifier, the SVM is better able to handle high-dimensional data
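One mechanism behind the comparison above can be shown in a deliberately small toy: Gaussian Naive Bayes assumes feature independence, so strongly correlated features (ubiquitous in neighboring EEG/MEG sensors) hurt it, while a linear SVM can weight features jointly. This 2-D toy isolates that mechanism and is an assumption-laden illustration, not the study's benchmark.

```python
# Toy: correlated "background" signal z swamps each feature marginally,
# but the SVM learns the discriminative difference x1 - x2; NB cannot.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, size=n)
z = rng.normal(scale=3.0, size=n)          # shared background (e.g. sensor drift)
X = np.column_stack([z + y, z]) + rng.normal(scale=0.1, size=(n, 2))

svm_acc = cross_val_score(LinearSVC(dual=False, max_iter=10000), X, y, cv=5).mean()
nb_acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
print(svm_acc, nb_acc)
```

Marginally, each feature is dominated by the background variance, so NB stays near chance, while the margin-based classifier exploits the cross-feature structure.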

  9. SVM weights show important times and locations for decoding
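Reading off "important times and locations" from a fitted linear SVM can be sketched by reshaping its weight vector back into a (time window, channel) map; this assumes the feature vector was concatenated window-major, and all shapes and data here are illustrative.

```python
# Sketch: inspect fitted linear SVM weights as a window x channel map.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_windows, n_channels = 120, 6, 64
X = rng.normal(size=(n_trials, n_windows * n_channels))
y = rng.integers(0, 2, size=n_trials)

clf = LinearSVC(dual=False, max_iter=10000).fit(X, y)
W = clf.coef_.reshape(n_windows, n_channels)   # assumed window-major layout
# largest-|weight| entry suggests the most decoding-relevant time/sensor
peak_window, peak_channel = np.unravel_index(np.abs(W).argmax(), W.shape)
print(W.shape, peak_window, peak_channel)
```

In practice such weight maps are visualized as topographies per time window to see where and when discriminative information appears.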

  10. Decoding is not based on low-level stimulus properties
  It is possible that the generated classifiers exploit neural activity related to low-level visual or auditory stimulus properties when decoding individual words. To test this, a shuffling based on stimulus properties was performed to evaluate this potential confound.
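The logic of such a control can be sketched as a permutation test: shuffle semantic labels only among trials that share a low-level property (e.g. binned word length); if decoding of the true labels beats decoding of the shuffled labels, the classifier is not merely reading out that property. The grouping variable, data, and effect sizes here are assumptions, and this is a generic version of the control, not necessarily the study's exact procedure.

```python
# Generic sketch: label shuffling within low-level-property groups.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
low_level = rng.integers(0, 4, size=n)          # e.g. binned word length (assumed)
y = rng.integers(0, 2, size=n)                  # semantic label
X = rng.normal(size=(n, 100))
X += 0.5 * y[:, None] * (np.arange(100) < 30)   # semantic, not low-level, effect

def shuffled_within(y, groups, rng):
    """Permute labels only among trials sharing a low-level property."""
    out = y.copy()
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        out[idx] = y[rng.permutation(idx)]
    return out

clf = LinearSVC(dual=False, max_iter=10000)
true_acc = cross_val_score(clf, X, y, cv=5).mean()
perm_acc = cross_val_score(clf, X, shuffled_within(y, low_level, rng), cv=5).mean()
print(true_acc, perm_acc)
```

Decoding collapses to chance under the within-group shuffle, consistent with the signal being semantic rather than low-level.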

  11. Inter-modality and inter-subject decoding show shared neural representations
  Inter-modality: train the classifier on one modality and test it on the other
  Inter-subject: the classifier was trained on data from all but one subject within a single modality, and the remaining subject was used as test data.
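The leave-one-subject-out scheme described above can be sketched as follows; the subject count, trial shapes, and the simulated "shared representation" are assumptions for illustration (real data would also need subjects aligned to a common sensor or source space).

```python
# Sketch of leave-one-subject-out (inter-subject) decoding.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subj, n_trials, n_feat = 6, 80, 120
shared = rng.normal(size=n_feat)              # representation shared across subjects
Xs, ys = [], []
for s in range(n_subj):
    y = rng.integers(0, 2, size=n_trials)
    X = rng.normal(size=(n_trials, n_feat)) + 0.4 * y[:, None] * shared
    Xs.append(X)
    ys.append(y)

accs = []
for held_out in range(n_subj):                # train on all but one subject
    train = [s for s in range(n_subj) if s != held_out]
    X_tr = np.vstack([Xs[s] for s in train])
    y_tr = np.concatenate([ys[s] for s in train])
    clf = LinearSVC(dual=False, max_iter=10000).fit(X_tr, y_tr)
    accs.append(clf.score(Xs[held_out], ys[held_out]))
print(np.mean(accs))
```

Above-chance accuracy on the held-out subject is only possible if the discriminative pattern is shared across subjects, which is the slide's point.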

  12. Hierarchical tree decoding improves decoding performance
  A three-level hierarchical tree decoder was used to decode, in order: the large/small distinction (utilizing amplitude and spectral features), the living/nonliving object category (utilizing 200–700 ms amplitude features), and finally the individual word (utilizing 250–500 ms amplitude features).

  13. Thanks for your attention!
