Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces
The Team
• Adriano Claro Monteiro
• Alain de Cheveigné
• Anahita Mehta
• Byron Galbraith
• Dimitra Emmanouilidou
• Edmund Lalor
• Deniz Erdogmus
• Jim O’Sullivan
• Mehmet Ozdas
• Lakshmi Krishnan
• Malcolm Slaney
• Mike Crosse
• Nima Mesgarani
• Jose L. “Pepe” Contreras-Vidal
• Shihab Shamma
• Thusitha Chandrapala
The Goal
• To determine a reliable measure of imagined audition using electroencephalography (EEG).
• To use this measure to communicate.
What types of imagined audition?
• Speech: short (~3-4 s) sentences
  • “The whole maritime population of Europe and America.”
  • “Twinkle, twinkle, little star.”
  • “London Bridge is falling down, falling down, falling down.”
• Music: short (~3-4 s) phrases
  • Imperial March from Star Wars.
  • Simple sequence of tones.
• Steady-State Auditory Stimulation
  • 20 s trials
  • Broadband signal, amplitude modulated at 4 or 6 Hz (a synthesis sketch follows this list)
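A minimal sketch of how such a steady-state stimulus could be synthesized: broadband noise with its amplitude modulated sinusoidally at the target rate. The sample rate, modulation depth, and function name are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def am_broadband(mod_hz=4.0, dur_s=20.0, fs=44100, depth=1.0, seed=0):
    """Broadband (white-noise) carrier with sinusoidal amplitude modulation."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    carrier = rng.standard_normal(t.size)                    # broadband carrier
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)  # 4 or 6 Hz AM
    x = carrier * envelope
    return x / np.max(np.abs(x))                             # normalize to +/-1

stim_4hz = am_broadband(mod_hz=4.0)   # 20 s trial, 4 Hz modulation
stim_6hz = am_broadband(mod_hz=6.0)   # 20 s trial, 6 Hz modulation
```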
The Experiment
• 64-channel EEG system (Brain Vision LLC – thanks!)
• 500 samples/s
• Each “trial” consisted of the presentation of the actual auditory stimulus (the “perceived” condition), followed 2 s later by the subject imagining hearing that stimulus again (the “imagined” condition).
The Experiment
• Careful control of experimental timing.
• Timing per stimulus: perceived once, a 2 s gap, then imagined five times with 2 s gaps, then a break; a 4-3-2-1 countdown and a fixation cross (“+”) preceded the next stimulus.
Data Analysis - Preprocessing
• Filtering
• Independent Component Analysis (ICA)
• Time-Shift Denoising Source Separation (DSS)
  • Looks for reproducibility over stimulus repetitions (a sketch follows this list)
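DSS is the least standard of these steps, so here is a minimal sketch of the core idea under common assumptions: average the trials to isolate the repeatable part, then solve a generalized eigenproblem between the covariance of that average and the covariance of the raw trials, yielding spatial filters ordered by reproducibility. Shapes and names are illustrative; a full time-shift DSS implementation has more steps (whitening, lagged covariances).

```python
import numpy as np
from scipy.linalg import eigh

def dss(trials):
    """trials: (n_trials, n_channels, n_samples) -> spatial filters (columns)."""
    X = trials - trials.mean(axis=-1, keepdims=True)  # de-mean each trial
    avg = X.mean(axis=0)                              # repeatable (evoked) part
    c_bias = avg @ avg.T                              # covariance of the average
    c_raw = sum(x @ x.T for x in X)                   # covariance of all trials
    evals, evecs = eigh(c_bias, c_raw)                # generalized eigenproblem
    return evecs[:, np.argsort(evals)[::-1]]          # most repeatable first

# filters = dss(epochs)
# components = np.einsum('cf,ncs->nfs', filters, epochs)  # component time courses
```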
Data Analysis: Hypothesis-driven.
• The hypothesis:
  • EEG recorded while people listen to (actual) speech varies in a way that relates to the amplitude envelope of the presented (actual) speech.
  • EEG recorded while people IMAGINE speech will vary in a way that relates to the amplitude envelope of the IMAGINED speech.
Data Analysis: Hypothesis-driven.
• Phase consistency over trials (a sketch of the measure follows this list)...
  • EEG from the same sentence imagined over several trials should show consistent phase variations.
  • EEG from different imagined sentences should not show consistent phase variations.
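One standard way to quantify this, sketched below: band-pass each trial, take the phase of the analytic (Hilbert) signal, and compute inter-trial phase coherence (ITC), the magnitude of the mean unit phasor across trials (1 = perfectly consistent phase, 0 = random phase). Filter order and band edges are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itc(trials, fs=500.0, band=(4.0, 8.0)):
    """trials: (n_trials, n_samples), one channel -> ITC at each time point."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1))
    return np.abs(np.exp(1j * phase).mean(axis=0))  # mean unit phasor magnitude

# theta_itc = itc(imagined_trials, band=(4, 8))    # theta band
# alpha_itc = itc(imagined_trials, band=(8, 14))   # alpha band
```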
Data Analysis: Hypothesis-driven.
[Figure: phase consistency for actual vs. imagined speech, in the theta (4-8 Hz) and alpha (8-14 Hz) bands.]
Data Analysis: Hypothesis-driven.
[Figure: red line – perceived music; green line – imagined music.]
Data Analysis - Decoding
[Figure: original vs. reconstructed stimulus. “London Bridge”: r = 0.30, p = 3e-5; “Twinkle Twinkle”: r = 0.19, p = 0.01.]
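The decoding figure is consistent with a standard stimulus-reconstruction (backward-model) analysis; here is a minimal sketch, assuming ridge-regularized linear regression from time-lagged EEG to the stimulus envelope, scored with Pearson's r. The lag count and ridge parameter are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def lagged(eeg, n_lags):
    """eeg: (n_channels, n_samples) -> design matrix (n_samples, n_channels * n_lags)."""
    n_ch, n_t = eeg.shape
    X = np.zeros((n_t, n_ch * n_lags))
    for k in range(n_lags):
        X[k:, k * n_ch:(k + 1) * n_ch] = eeg[:, :n_t - k].T
    return X

def reconstruct(eeg_train, env_train, eeg_test, n_lags=50, lam=1e3):
    """Ridge regression from lagged EEG to envelope; returns test reconstruction."""
    Xtr, Xte = lagged(eeg_train, n_lags), lagged(eeg_test, n_lags)
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ env_train)
    return Xte @ w

# env_hat = reconstruct(train_eeg, train_envelope, test_eeg)
# r, p = pearsonr(env_hat, test_envelope)   # cf. r = 0.30 and r = 0.19 above
```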
Data Analysis - SSAEP
[Figure: SSAEP responses, perceived vs. imagined, at the 4 Hz and 6 Hz modulation rates.]
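A plausible way to test for the steady-state response, sketched under the assumption of a simple Welch power estimate: measure power at the 4 Hz or 6 Hz modulation rate in each 20 s trial and compare perceived vs. imagined conditions.

```python
import numpy as np
from scipy.signal import welch

def power_at(trial, fs=500.0, target_hz=4.0):
    """trial: 1-D single-channel EEG -> PSD value at the modulation frequency."""
    f, pxx = welch(trial, fs=fs, nperseg=int(4 * fs))  # 0.25 Hz resolution
    return pxx[np.argmin(np.abs(f - target_hz))]

# power_at(perceived_trial, target_hz=4.0) vs. power_at(imagined_trial, target_hz=4.0)
```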
Data Analysis
• Data Mining/Machine Learning Approaches:
SVM Classifier
• Input: EEG data (channels × time) per trial, with binary class labels.
• Features: concatenate channels into one vector, or use the input covariance matrix.
• Group N trials, train the SVM, and compare predicted labels against the true labels (a sketch follows this list).
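A minimal sketch of that pipeline with scikit-learn, assuming epoched data; both feature options from the slide (concatenated channels, channel covariance) are shown. Variable names and cross-validation settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(trials, use_covariance=False):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_features)."""
    if use_covariance:
        return np.stack([np.cov(x).ravel() for x in trials])  # channel covariance
    return trials.reshape(len(trials), -1)                    # concatenated channels

# X, y = features(epochs), labels                  # y: 0/1 class per trial
# scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
# scores.mean()                                    # cf. mean DA of 87-90% below
```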
SVM Classifier Results
• Decoding imagined speech and music: mean decoding accuracy (DA) = 87%, 90%, and 90% across the comparisons shown.
DCT Processing Chain
• Raw EEG signal (500 Hz data) → DSS output (look for repeatability) → DCT output (reduce dimensionality, sketched below)
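A sketch of the DCT stage, under the assumption that it keeps the first few coefficients of each DSS component's time course as a compact low-frequency summary of the 500 Hz signal; the number of coefficients kept is illustrative.

```python
import numpy as np
from scipy.fft import dct

def dct_features(components, n_keep=32):
    """components: (n_trials, n_comp, n_samples) -> (n_trials, n_comp * n_keep)."""
    coeffs = dct(components, axis=-1, norm="ortho")[..., :n_keep]  # low-order DCT
    return coeffs.reshape(len(components), -1)

# feats = dct_features(dss_components)   # feed to a classifier (e.g. the SVM above)
```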
DCT Classification Performance
[Figure: classification accuracy (%) for the DCT-based pipeline.]
Data Analysis
• Data Mining/Machine Learning Approaches:
  • Linear Discriminant Analysis on different frequency bands (a band-power sketch follows this list)
    • Music vs. Speech
    • Speech 1 vs. Speech 2
    • Music 1 vs. Music 2
    • Speech vs. Rest
    • Music vs. Rest
  • Results ~50-66%
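A minimal sketch of one way to run such an analysis, assuming band-power features (per channel, in conventional EEG bands) fed to scikit-learn's LDA; the band edges are standard conventions, not quoted from the slides.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14), "beta": (14, 30)}

def band_power(trials, fs=500.0):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    f, pxx = welch(trials, fs=fs, nperseg=int(fs), axis=-1)
    feats = [pxx[..., (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=-1)   # mean power per channel per band

# scores = cross_val_score(LinearDiscriminantAnalysis(), band_power(epochs), y, cv=5)
```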
Summary
• Both hypothesis-driven and machine-learning approaches indicate that it is possible to decode/classify imagined audition.
• Many very encouraging results that align with our original hypothesis.
• More data needed!!
• In a controlled environment!!
• To be continued...