
Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces



  1. Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces

  2. The Team • Adriano Claro Monteiro • Alain de Cheveigné • Anahita Mehta • Byron Galbraith • Dimitra Emmanouilidou • Edmund Lalor • Deniz Erdogmus • Jim O’Sullivan • Mehmet Ozdas • Lakshmi Krishnan • Malcolm Slaney • Mike Crosse • Nima Mesgarani • Jose L “Pepe” Contreras-Vidal • Shihab Shamma • Thusitha Chandrapala

  3. The Goal • To determine a reliable measure of imagined audition using electroencephalography (EEG). • To use this measure to communicate.

  4. What types of imagined audition? • Speech: short (~3-4 s) sentences • “The whole maritime population of Europe and America.” • “Twinkle, twinkle, little star.” • “London Bridge is falling down, falling down, falling down.” • Music: short (~3-4 s) phrases • Imperial March from Star Wars. • Simple sequence of tones. • Steady-State Auditory Stimulation: 20 s trials • Broadband signal, amplitude-modulated at 4 or 6 Hz

  5. The Experiment • 64-channel EEG system (Brain Vision LLC – thanks!) • 500 samples/s • Each “trial” consisted of the presentation of the actual auditory stimulus (the “perceived” condition), followed 2 s later by the subject imagining hearing that stimulus again (the “imagined” condition).

  6. The Experiment • Careful control of experimental timing. • Each stimulus: countdown cue (4, 3, 2, 1, +), perceived presentation, 2 s gap, imagined condition repeated 5 times with 2 s gaps, then a break before the next stimulus.

  7. Data Analysis - Preprocessing • Filtering • Independent Component Analysis (ICA) • Time-Shift Denoising Source Separation (DSS) • Looks for reproducibility over stimulus repetitions
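Of the three preprocessing steps, DSS is the least standard, so a minimal numpy sketch of the idea may help: bias a generalized eigendecomposition toward activity that repeats across stimulus presentations. The function name and the absence of regularization are simplifications of mine, not the team's code.

```python
import numpy as np
from scipy.linalg import eigh

def dss_filters(trials):
    """Denoising source separation, biased toward components that are
    reproducible over stimulus repetitions.
    trials: (n_trials, n_channels, n_samples) for one stimulus."""
    X = trials - trials.mean(axis=-1, keepdims=True)  # de-mean channels
    # C0: total covariance, pooled over individual trials
    C0 = sum(x @ x.T for x in X) / len(X)
    # C1: covariance of the trial average -- the repeatable part
    avg = X.mean(axis=0)
    C1 = avg @ avg.T
    # Generalized eigenvectors maximize repeatable-to-total power ratio
    evals, evecs = eigh(C1, C0)
    return evecs[:, np.argsort(evals)[::-1]]  # best spatial filter first
```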

  8. Data Analysis: Hypothesis-driven. • The hypothesis: • EEG recorded while people listen to (actual) speech varies in a way that relates to the amplitude envelope of the presented (actual) speech. • EEG recorded while people IMAGINE speech will vary in a way that relates to the amplitude envelope of the IMAGINED speech.

  9. Data Analysis: Hypothesis-driven. • Phase consistency over trials... • EEG from same sentence imagined over several trials should show consistent phase variations. • EEG from different imagined sentences should not show consistent phase variations.
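A standard way to quantify this is inter-trial phase coherence (ITPC). The slide doesn't name the exact measure, so the following is a sketch under that assumption, for a single channel band-passed in the theta range:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(trials, fs, band=(4.0, 8.0)):
    """Inter-trial phase coherence for one channel.
    trials: (n_trials, n_samples); returns ITPC over time, in [0, 1]."""
    b, a = butter(4, np.array(band) / (fs / 2.0), btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1))
    # Mean resultant length of the unit phasors across trials:
    # near 1 = consistent phase over repetitions, near 0 = random phase
    return np.abs(np.exp(1j * phase).mean(axis=0))
```

Repetitions of the same imagined sentence should push ITPC up; different imagined sentences should leave it near chance.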

  10. Data Analysis: Hypothesis-driven. [Figure: inter-trial phase consistency for actual vs. imagined speech, in the theta (4-8 Hz) and alpha (8-14 Hz) bands.]

  11. Data Analysis: Hypothesis-driven.

  12. Data Analysis: Hypothesis-driven. [Figure: red line = perceived music; green line = imagined music.]

  13. Data Analysis - Decoding

  14. Data Analysis - Decoding [Figure: original vs. reconstructed stimulus envelopes. “London Bridge”: r = 0.30, p = 3e-5; “Twinkle Twinkle”: r = 0.19, p = 0.01.]
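The slides don't show the decoder itself, but the usual approach for envelope reconstruction (and a plausible reading of slides 13-14) is a backward model: ridge regression from time-lagged EEG to the stimulus amplitude envelope. A sketch, with the lag range and regularization strength as assumptions:

```python
import numpy as np

def lagged(eeg, lags):
    """Stack time-lagged copies of the EEG: (n_samples, n_ch * n_lags)."""
    return np.hstack([np.roll(eeg, -lag, axis=0) for lag in lags])

def train_decoder(eeg, envelope, lags, lam=1e3):
    """Ridge-regress the amplitude envelope onto lagged EEG.
    eeg: (n_samples, n_channels); envelope: (n_samples,)."""
    X = lagged(eeg, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ envelope)

def reconstruct(eeg, weights, lags):
    return lagged(eeg, lags) @ weights

# Scored as on the slide: Pearson r between reconstruction and envelope
# r = np.corrcoef(reconstruct(test_eeg, w, lags), test_envelope)[0, 1]
```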

  15. Data Analysis - SSAEP

  16. Data Analysis - SSAEP [Figure: perceived vs. imagined responses at the 4 Hz and 6 Hz modulation rates.]
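For the SSAEP condition the natural statistic is spectral power at the two candidate modulation rates; whichever rate the subject perceived (or imagined) should show the larger peak. A minimal sketch (the windowing and nearest-bin lookup are my choices):

```python
import numpy as np

def modulation_power(epoch, fs, rates=(4.0, 6.0)):
    """Power at each candidate modulation rate for one channel/epoch.
    A 20 s epoch at 500 samples/s gives 0.05 Hz resolution."""
    n = len(epoch)
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {r: spectrum[np.argmin(np.abs(freqs - r))] for r in rates}
```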

  17. Data Analysis • Data Mining/Machine Learning Approaches:

  18. Data Analysis • Data Mining/Machine Learning Approaches:

  19. SVM Classifier [Figure: classification pipeline. EEG data (channels × time) → concatenate channels → input covariance matrix → group N trials → SVM → predicted labels (0/1), compared against the true class labels.]
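Reconstructing that pipeline in code, one plausible reading (the feature choice and scikit-learn usage are assumptions of mine, not the team's implementation) is a linear SVM over per-trial channel covariance features:

```python
import numpy as np
from sklearn.svm import SVC

def covariance_features(trials):
    """trials: (n_trials, n_channels, n_samples) -> one feature vector
    per trial from the upper triangle of its channel covariance."""
    iu = np.triu_indices(trials.shape[1])
    return np.array([np.cov(x)[iu] for x in trials])

clf = SVC(kernel="linear")
# clf.fit(covariance_features(train_trials), train_labels)
# predicted = clf.predict(covariance_features(test_trials))
```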

  20. SVM Classifier Results [Figure: decoding imagined speech and music; mean decoding accuracy (DA) = 87%, 90%, and 90%.]

  21. DCT Processing Chain • Raw EEG signal (500 Hz data) → DSS output (look for repeatability) → DCT output (reduce dimensionality)
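The DCT step is a cheap way to shrink each DSS component's time course to a few coefficients; a sketch, with the coefficient count as an arbitrary choice:

```python
import numpy as np
from scipy.fft import dct

def dct_features(dss_component, n_coeffs=32):
    """Keep the leading DCT coefficients of a DSS component time course;
    low-order coefficients capture its slow, repeatable shape."""
    return dct(np.asarray(dss_component), norm="ortho")[:n_coeffs]
```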

  22. DCT Classification Performance [Figure: percentage accuracy.]

  23. Data Analysis • Data Mining/Machine Learning Approaches: • Linear Discriminant Analysis on different frequency bands • Music vs Speech • Speech 1 vs Speech 2 • Music 1 vs Music 2 • Speech vs Rest • Music vs Rest • Results ~50-66% (see the sketch below)
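A sketch of that last approach, assuming log band power per channel as the LDA input (the band edges and filter order are illustrative, not taken from the slides):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower_features(trials, fs, bands=((4, 8), (8, 14), (14, 30))):
    """Log band power per channel and band.
    trials: (n_trials, n_channels, n_samples)."""
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, trials, axis=-1)
        feats.append(np.log(np.mean(filtered ** 2, axis=-1)))
    return np.concatenate(feats, axis=-1)  # (n_trials, n_ch * n_bands)

lda = LinearDiscriminantAnalysis()
# lda.fit(bandpower_features(train_trials, fs=500), train_labels)
```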

  24. Summary • Both hypothesis-driven and machine-learning approaches indicate that it is possible to decode/classify imagined audition • Many very encouraging results that align with our original hypothesis • More data needed!! • In a controlled environment!! • To be continued...
