
Audiovisual speech perception in L1 and L2: an fMRI study



Presentation Transcript


  1. Audiovisual speech perception in L1 and L2: an fMRI study Marco Calabresi Universitat Pompeu Fabra - DTIC Centre for Brain and Cognition Multisensory Research Group

  2. The Team • Salvador Soto-Faraco (UPF) • Cesar Avila (UJI) • Alfonso Barros-Loscertales • Noelia Ventura-Campos • Juan-Carlos Bustamante • Agnes Alsius • Marco Calabresi

  3. Aim To explore the functional anatomy of crossmodal speech processing in a second language (L2)

  4. Crossmodal speech matters • Crossmodal (audiovisual) speech helps listeners comprehend speech under realistic noise conditions (Sumby and Pollack, 1954)

  5. Aspects of crossmodal perception • 2 experimentally dissociable aspects • multisensory integration • = response to the crossmodal stimulus larger than the maximum response to either unimodal stimulus alone • crossmodal congruency • = response to the temporally aligned crossmodal stimulus larger than to the unaligned stimulus
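To make the two criteria concrete, here is a minimal Python sketch (not from the study; all response values are invented for illustration) applying the max criterion and the congruency test to hypothetical mean responses of a single region:

```python
import numpy as np

# Hypothetical mean BOLD responses for one region of interest (made-up numbers).
resp_A   = 1.2   # auditory-only response
resp_V   = 0.9   # visual-only response
resp_AVc = 1.6   # audiovisual, temporally aligned (congruent)
resp_AVi = 1.3   # audiovisual, temporally misaligned (incongruent)

# Multisensory integration ("max criterion"):
# the crossmodal response must exceed the larger of the two unimodal responses.
integrates = resp_AVc > max(resp_A, resp_V)

# Crossmodal congruency: the aligned AV response must exceed the misaligned one.
congruency_effect = resp_AVc > resp_AVi

print(f"max criterion met: {integrates}, congruency effect: {congruency_effect}")
```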

  6. AV speech in L2 • Makes non-native speech sounds perceivable (Navarra and Soto-Faraco, 2007) • Improves shadowing (Reisberg et al., 1987) • Increases the proportion of McGurk fusions (Sekiyama and Tohkura, 1988) • Greater bias towards V speech (Massaro and Chen, 2004; Navarra and Soto-Faraco, 2007; Wang et al., 2008) • Inconclusive evidence for a benefit in naturalistic speech perception (Wang et al., 2008) • Functional anatomical bases unknown

  7. Putative functional anatomy of bilingualism • In general, L2 > L1 activity, especially with late L2 onset, low L2 proficiency, or low L2 exposure • No L2-specific regions • Anterior cingulate and L-IFG (Broca's area) implicated when listening to or reading L2 (Indefrey, 2006)

  8.–13. Putative functional anatomy of AV speech (in L1) [schematic figure built up across slides 8–13, successively highlighting regions labelled 1–5] Adapted from: Nishitani and Hari, 2002; Campbell, 2008

  14. Sample

  15. Stimuli and design Conditions: audiovisual congruent, audiovisual incongruent, auditory, visual, and baseline. 5 s phrase fragments presented in 40 s blocks [block timing schematic with 2 s and 6 s intervals omitted]. Task: attention. Post-scanning identification.
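A rough sketch of how the block structure described on this slide could be encoded as boxcar regressors; the TR, block count, and back-to-back block ordering below are assumptions for illustration, not details taken from the study:

```python
import numpy as np

# Assumed scanning parameters (illustrative only).
TR = 2.0                     # assumed repetition time (s)
block_dur = 40.0             # block length from the slide (s)
conditions = ["AV_congruent", "AV_incongruent", "Auditory", "Visual", "Baseline"]

n_scans = int(len(conditions) * block_dur / TR)
design = np.zeros((n_scans, len(conditions)))

# One boxcar regressor per condition; blocks assumed to follow one another.
for i, _ in enumerate(conditions):
    start = int(i * block_dur / TR)
    stop = int((i + 1) * block_dur / TR)
    design[start:stop, i] = 1.0   # 1 while that condition's block is on screen

print(design.shape)  # (100, 5): 100 scans x 5 condition regressors
```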

  16. Results • Multisensory integration: AVc > Max(A, V) • Congruency (main effects): AVc > AVi • All statistical maps shown at p < 0.001 uncorrected, cluster extent kE > 4
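As an illustration of the thresholding reported here (p < 0.001 uncorrected, cluster extent kE > 4), the sketch below shows one way a voxel-level threshold plus cluster-extent filter could be applied to a t-map. The actual maps were presumably produced with standard fMRI software; the degrees of freedom and the random data are assumptions for demonstration.

```python
import numpy as np
from scipy import stats, ndimage

def threshold_tmap(t_map, df, p=0.001, k_extent=4):
    """Keep voxels above the uncorrected threshold that lie in clusters > k_extent voxels."""
    t_crit = stats.t.ppf(1.0 - p, df)      # one-tailed critical t value for p < 0.001
    supra = t_map > t_crit                  # supra-threshold voxels
    labels, n_clusters = ndimage.label(supra)  # connected clusters
    keep = np.zeros_like(supra)
    for lab in range(1, n_clusters + 1):
        cluster = labels == lab
        if cluster.sum() > k_extent:        # cluster extent kE > 4
            keep |= cluster
    return np.where(keep, t_map, 0.0)

# Example on random data (df assumed):
thresholded = threshold_tmap(np.random.randn(16, 16, 16) * 2, df=15)
```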

  17. Multisensory integration: L1 AVc > Max (A, V) p-STS p-STS

  18. Multisensory integration: L2 AVc > Max (A, V) p-STS LG

  19. Multisensory integration: L2 AVc > Max (A, V) p-STS p-STS

  20. Multisensory integration: overlay AVc > Max(A, V) p-STS p-STS

  21. Multisensory integration: summary • The MSI region identified is in accord with much of the literature • As with reading and listening, there is almost complete overlap between L1 and L2 activity

  22. Congruency: L1 AVc > AVi L a-MFG

  23. Congruency: L1

  24. Congruency: L2 AVc > AVi MOG MOG

  25. Congruency: overlay AVc > AVi MOG MOG

  26. Congruency: summary so far • Unlike MSI, regional brain activity for AV congruency in L1 and L2 is anatomically distinct • L1: Anterior temporal regions • L2: Secondary visual areas (attention?) • Which areas (sensitive to congruency) differ between L1 and L2? L1(AVc>AVi)>L2(AVc>AVi) & L2(AVc>AVi)>L1(AVc>AVi)

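The between-language contrasts named on the summary slide can be written as contrast weights over the condition regressors. A minimal sketch, assuming one regressor per language × congruency cell in the order listed below (the actual regressor ordering in the study's model is not given in the slides):

```python
import numpy as np

# Assumed regressor ordering (illustrative).
regressors = ["L1_AVc", "L1_AVi", "L2_AVc", "L2_AVi"]

# L1(AVc > AVi) > L2(AVc > AVi): (L1_AVc - L1_AVi) - (L2_AVc - L2_AVi)
contrast_L1_gt_L2 = np.array([1, -1, -1, 1])

# The opposite direction, L2(AVc > AVi) > L1(AVc > AVi), simply flips the signs.
contrast_L2_gt_L1 = -contrast_L1_gt_L2
```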

  28. L1(AVc>AVi)>L2(AVc>AVi)

  29. L1(AVc>AVi)>L2(AVc>AVi) PcG

  30.–33. Functional anatomy of AV speech in L1 and L2: a first look [schematic figure built up across slides 30–33, overlaying: L1 & L2 – MSI; L2 – AVc>AVi; L1 – AVc>AVi; L1>L2 – AVc>AVi] Adapted from: Nishitani and Hari, 2002; Campbell, 2008

  34. Conclusions • L1 and L2 dissociate with respect to congruency sensitivity but not with respect to multisensory integration • We suggest that individual areas within the crossmodal speech processing network may play specific roles in L1 and L2 • Further research needed to confirm these initial findings

  35. Pesky questions? Extra slides

  36. Table 1

  37. Table 2
