Speech Perception 2 DAY 17 – Oct 4, 2013 Brain & Language LING 4110-4890-5110-7960 NSCI 4110-4891-6110 Harry Howard Tulane University
Course organization • The syllabus, these slides and my recordings are available at http://www.tulane.edu/~howard/LING4110/. • If you want to learn more about EEG and neurolinguistics, you are welcome to participate in my lab. This is also a good way to get started on an honors thesis. • The grades are posted to Blackboard.
Review The quiz was the review.
Linguistic model, Fig. 2.1 p. 37 (levels from discourse down to the speech signal): • Discourse model (semantics) • Sentence level (syntax, sentence prosody) • Word level (morphology, word prosody) • Segmental phonology (perception and production) • Acoustic phonetics (feature extraction) / Articulatory phonetics (speech motor control) • INPUT
Categorical perception • The Clinton-Kennedy continuum • Chinchillas do this too!
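For a concrete picture of what "categorical" means here, the sketch below (not from the lecture or the Ingram text) simulates an identification function along a hypothetical /ba/-/pa/ VOT continuum: responses jump abruptly from one label to the other around an assumed 25 ms boundary. All numbers are illustrative.

```python
# Illustrative identification function for categorical perception.
# All parameter values are assumptions, not experimental data.
import numpy as np

# A hypothetical /ba/-/pa/ continuum: VOT from 0 to 60 ms in 5 ms steps.
vot_ms = np.arange(0, 65, 5)

# Under categorical perception, percent-/pa/ responses follow a steep sigmoid:
# near 0% below the boundary, near 100% above it.
boundary_ms = 25.0   # assumed category boundary
slope = 1.5          # assumed steepness; larger = sharper (more categorical)

p_pa = 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary_ms)))

for vot, p in zip(vot_ms, p_pa):
    print(f"VOT {vot:2d} ms -> {100 * p:5.1f}% identified as /pa/")
```

A shallower slope would give the gradual, continuous-looking curve expected for non-speech stimuli.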
Speech perception Ingram §6
Category boundary shifts The shift in VOT is from ‘bin’ to ‘pin’: • Thus the phonetic feature detectors must compensate for the context, because they know how speech is produced? • But Japanese quail do this too.
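A minimal sketch of how such a boundary shift could be quantified, assuming made-up identification curves for two contexts (e.g. fast vs. slow speech); the boundary values and the few-millisecond shift are illustrative, not data from the studies cited in the lecture.

```python
# Illustrative boundary-shift calculation; the two identification curves and
# their boundaries are invented, not measured data.
import numpy as np

vot = np.linspace(0.0, 60.0, 601)

def ident(vot, boundary, slope=1.2):
    """Hypothetical proportion of 'pin' responses along the VOT continuum."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

p_fast = ident(vot, boundary=22.0)   # assumed boundary in one context (e.g. fast speech)
p_slow = ident(vot, boundary=28.0)   # assumed boundary in another context (e.g. slow speech)

# The category boundary is where each curve crosses 50% 'pin' responses.
b_fast = vot[np.argmin(np.abs(p_fast - 0.5))]
b_slow = vot[np.argmin(np.abs(p_slow - 0.5))]
print(f"'bin'/'pin' boundary: {b_fast:.1f} ms vs {b_slow:.1f} ms "
      f"(shift of {b_slow - b_fast:.1f} ms with context)")
```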
Duplex speech (or perception) • A and B refer to either ear; B is also called the base.
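To make the stimulus design concrete, here is a rough sketch of how a duplex pair could be assembled: a crude "base" (steady lower formants) in one channel and an isolated F3-transition "chirp" in the other. The formant frequencies, durations, and levels are invented for illustration and are not the actual experimental stimuli.

```python
# Rough sketch of a duplex-perception stimulus: base to one ear, isolated
# F3-transition chirp to the other. Formants, durations, and levels are
# invented for illustration only.
import numpy as np
from scipy.io import wavfile   # assumes SciPy is available

fs = 22050                      # sample rate (Hz)
dur = 0.25                      # syllable duration (s)
t = np.arange(int(fs * dur)) / fs
n = len(t)

def tone(freq_track):
    """Sinusoid whose instantaneous frequency follows freq_track (Hz per sample)."""
    phase = 2.0 * np.pi * np.cumsum(freq_track) / fs
    return np.sin(phase)

# Base (B): crude steady stand-ins for F1 and F2 of a synthetic syllable.
base = 0.4 * tone(np.full(n, 700.0)) + 0.3 * tone(np.full(n, 1200.0))

# Isolated F3 transition (A): a 50 ms glide (2700 -> 2500 Hz), then silence.
k = int(0.05 * fs)
glide = 0.3 * tone(np.linspace(2700.0, 2500.0, k))
chirp = np.concatenate([glide, np.zeros(n - k)])

# One ear gets the base, the other ear gets the chirp.
stereo = np.stack([base, chirp], axis=1)
wavfile.write("duplex_demo.wav", fs, (stereo * 32767).astype(np.int16))
```

Played over headphones, the two channels correspond to the A and B ears described above.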
Results • Listeners hear a syllable in the ear that gets the base (B), but it is not ambiguous. Its identification is determined by which of the nine F3 transitions is presented to the other ear (A). • Listeners also hear a non-speech "chirp" in the ear that gets the isolated transition (A).
Implications • The fact that the same stimulus is simultaneously part of two quite distinct types of percepts argues that the percepts are produced by separate mechanisms that are both sensitive to the same range of stimuli. • The isolated "chirp" and the speech percept are discriminated quite differently, despite the fact that the acoustic event responsible for both is the same. • The speech percept exhibits categorical perception; the chirp percept exhibits continuous perception. • If the intensity of the isolated transition is lowered below the threshold of hearing, so that listeners cannot tell reliably whether or not it is there on a given trial, it is still capable of disambiguating the speech percept. [HH: hold that thought]
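The contrast between categorical and continuous discrimination can be made explicit with a simple (Haskins-style) prediction: if listeners discriminate two speech stimuli only when they label them differently, discrimination peaks at the category boundary, whereas a "chirp" listener tracking the raw acoustics performs roughly the same everywhere. The sketch below uses invented numbers and omits the chance-level correction for brevity.

```python
# Illustrative comparison of categorical vs. continuous discrimination.
# Numbers are invented; the chance-level correction for an ABX task is omitted.
import numpy as np

vot_ms = np.arange(0, 65, 5)
boundary, slope = 25.0, 1.5                       # assumed values
p_pa = 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary)))

# Haskins-style prediction for the speech percept: two stimuli two steps apart
# are discriminated only when they receive different labels, so discrimination
# peaks at the category boundary.
p1, p2 = p_pa[:-2], p_pa[2:]
speech_disc = p1 * (1 - p2) + p2 * (1 - p1)

# A continuous (chirp-like) percept: the same physical step is discriminated
# about equally well everywhere along the continuum.
chirp_disc = np.full_like(speech_disc, 0.75)      # assumed flat level

for vot, s, c in zip(vot_ms[1:-1], speech_disc, chirp_disc):
    print(f"pair centred at {vot:2d} ms: speech-like {s:.2f} vs chirp-like {c:.2f}")
```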
Later research • Tried to control for the potential temporal delay of dichotic listening by manipulating the intensity (loudness) of the chirp with respect to the base. • Only if the chirp and the base have the same intensity are they perceived as a single speech sound.
Gokcen & Fox (2001)
Discussion • Even if the latency differences arise simply because linguistic and nonlinguistic components must be routed to two different brain areas for processing, and coordinating these two processing sources in order to identify a stimulus takes longer, the data would still be consistent with the contention of separate modules for phonetic and auditory stimuli. • We would argue that these data do not support the claim that a single unified cognitive module processes all auditory information, because the speech-only and duplex stimuli contained identical components and were equal in complexity.
Back to sine-wave speech What is this? It is this.
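As a reminder of what sine-wave speech is made of: three time-varying sinusoids that follow the first three formant tracks, with all other spectral detail stripped away. The sketch below synthesizes a toy [ba]-like token from made-up formant tracks; it illustrates the technique, not the stimuli used in the studies discussed here.

```python
# Toy sine-wave speech: three sinusoids following made-up formant tracks
# for a [ba]-like syllable. Illustration only, not the experimental stimuli.
import numpy as np
from scipy.io import wavfile   # assumes SciPy is available

fs = 22050
dur = 0.3
t = np.arange(int(fs * dur)) / fs
n = len(t)

def track(f_start, f_end, trans_s=0.05):
    """Formant track: short linear transition, then a steady state."""
    k = int(trans_s * fs)
    return np.concatenate([np.linspace(f_start, f_end, k), np.full(n - k, f_end)])

# Hypothetical F1-F3 tracks and relative amplitudes for something like [ba].
tracks = [track(200.0, 700.0), track(900.0, 1200.0), track(2200.0, 2500.0)]
amps = [0.5, 0.3, 0.2]

signal = np.zeros(n)
for f, a in zip(tracks, amps):
    phase = 2.0 * np.pi * np.cumsum(f) / fs      # integrate frequency to get phase
    signal += a * np.sin(phase)

signal /= np.max(np.abs(signal))                 # normalise before writing
wavfile.write("sinewave_ba.wav", fs, (signal * 32767).astype(np.int16))
```

Naive listeners typically hear such a token as whistles or chirps until they are told, or trained, to hear it as speech.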
Dehaene-Lambertz et al. (2005) • … used ERP and fMRI to investigate sine-wave [ba]-[da] sounds. • For the EEG, the subjects had to be trained to hear the sounds as speech. • In the fMRI, most subjects heard the sounds as speech immediately. • Switching to the speech mode significantly enhanced activation in the posterior parts of the left superior temporal sulcus.
NEXT TIME P5 Finish Ingram §6; start §7. ☞ Go over questions at end of chapter.