Speech • The two different parts of speech • Speech Production • Speech Perception
Speech Production • Three basic components • Respiration • Phonation • Articulation
Respiration / Phonation • Air is pushed out of the lungs by the diaphragm, through the trachea, up to the larynx • At the larynx, air passes through the vocal cords • Phonation – the process of the vocal folds vibrating as air is pushed out of the lungs
Vocal Cords: Within the Larynx, Just Above the Trachea
Guitar Strings & Vocal Cords • Vary the frequency of speech by changing the length, thickness, and tightness of the vocal folds, much as a guitar string's pitch depends on its length, thickness, and tension
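The guitar-string analogy can be made concrete with the ideal-string formula f = (1/2L)·√(T/μ): shorter, tighter, or thinner strings (or vocal folds) vibrate faster, producing a higher pitch. A minimal sketch with made-up illustrative numbers (not real vocal-fold measurements):

```python
import math

def fundamental_frequency(length_m, tension_n, mass_per_length_kg_m):
    """Fundamental frequency of an ideal vibrating string:
    f = (1 / (2 * L)) * sqrt(T / mu)."""
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / mass_per_length_kg_m)

# Illustrative values only: the point is the direction of change.
base = fundamental_frequency(0.016, 0.5, 0.001)
tighter = fundamental_frequency(0.016, 1.0, 0.001)   # double the tension
shorter = fundamental_frequency(0.008, 0.5, 0.001)   # half the length
thicker = fundamental_frequency(0.016, 0.5, 0.002)   # double the mass per length

assert tighter > base   # tighter folds -> higher frequency
assert shorter > base   # shorter folds -> higher frequency
assert thicker < base   # thicker (heavier) folds -> lower frequency
```

This is why tightening the laryngeal muscles raises vocal pitch, and why longer, thicker vocal folds (as in typical adult male larynges) produce lower-pitched voices.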
Vocal Tract • The airway above the larynx used to produce speech, including: • Oral Tract • Nasal Tract
Articulation • Speech sounds are most often described in terms of articulation • Vowel sounds are made with a relatively open vocal tract • The shape of the mouth and lips influences the sound of vowels • Vowels vary with how high or low and how far forward or back the tongue is placed in the oral tract • Consonant sounds can be classified according to three articulatory dimensions influencing airflow: 1. Place of Articulation – where airflow is obstructed (e.g., at the lips or behind the teeth) 2. Manner of Articulation – total or partial obstruction 3. Voicing – vocal cords vibrating or NOT vibrating
Place of Articulation • Airflow can be obstructed: - At the lips (bilabial speech sounds ‘ba’, ‘pa’) - At the alveolar ridge just behind the teeth (alveolar speech sounds ‘dee’, ‘tee’) - At the soft palate (velar speech sounds ‘ga’, ‘ka’)
Manner of Articulation • “Manner” of airflow can be: - Totally obstructed (stops ‘ba’) - Partially obstructed (fricatives ‘es’) - Only slightly obstructed (liquids ‘ar’, ‘eL’, and glides ‘wa’, ‘ya’) - First blocked, then allowed to sneak through (affricates ‘cha’) - Blocked from going through the mouth but allowed to go through the nasal passage (nasals ‘na’, ‘ma’)
Voicing • Whether the vocal cords are: - Vibrating (Voiced consonants ‘ba’) - Not vibrating (voiceless consonants ‘pa’)
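The three dimensions above amount to a small feature table: each consonant is a bundle of (place, manner, voicing) values, and a pair like ‘ba’/‘pa’ differs in voicing alone. A toy sketch of this idea (the mini-inventory and helper function are illustrative, not a real phonological database):

```python
# Each consonant is described by the three articulatory dimensions
# from the slides: place, manner, and voicing.
CONSONANTS = {
    "b": {"place": "bilabial", "manner": "stop", "voiced": True},
    "p": {"place": "bilabial", "manner": "stop", "voiced": False},
    "d": {"place": "alveolar", "manner": "stop", "voiced": True},
    "t": {"place": "alveolar", "manner": "stop", "voiced": False},
    "g": {"place": "velar",    "manner": "stop", "voiced": True},
    "k": {"place": "velar",    "manner": "stop", "voiced": False},
}

def pairs_differing_only_in_voicing(inventory):
    """Return consonant pairs with the same place and manner but opposite voicing."""
    pairs = []
    sounds = sorted(inventory)
    for i, a in enumerate(sounds):
        for b in sounds[i + 1:]:
            fa, fb = inventory[a], inventory[b]
            if (fa["place"] == fb["place"]
                    and fa["manner"] == fb["manner"]
                    and fa["voiced"] != fb["voiced"]):
                pairs.append((a, b))
    return pairs

print(pairs_differing_only_in_voicing(CONSONANTS))  # [('b', 'p'), ('d', 't'), ('g', 'k')]
```

Note how the slides' examples fall out directly: ‘ba’/‘pa’, ‘dee’/‘tee’, and ‘ga’/‘ka’ each share place and manner and differ only in whether the vocal cords vibrate.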
Coarticulation & Speech Perception • The overlap of articulation in space and time • The production of one speech sound overlaps the production of the next • Does not cause much trouble for listeners understanding speech in their native language, but makes speech perception harder when learning a new language (and harder to study for researchers)
Crucial need for language stimulation • Talking to the baby is essential: correlation between the mother’s and the baby’s language • “Infant Directed Speech” (“motherese”) • How we teach the baby about our “mother tongue” and how our society communicates
Crucial need for language stimulation • Infant Directed Speech & hearing the sounds of language • prosody: rhythm, tempo, cadence, melody, intonation • categorical perception: differentiating the phonemes (sounds) of your “mother tongue” • Turn-taking & social reciprocity: the communication dance
“Categorical” Speech Perception • Learning the phonemes (speech sounds) of your “mother tongue” • 200 “speech sounds” (sound “categories”) universally heard by all newborns • 45-ish sound categories (/bah/, /pah/, /rah/, /lah/, etc.) are heard in a typical language by adults • Why are they called “categories” of sound? • Because “bah” is always “bah” no matter who says it • You hear “bah” whether it is said by a 2-year-old or a 22-year-old, by a female or a male speaker, etc.
“Categorical” Speech Perception • Learning the phonemes (speech sounds) of your “mother tongue” • What happened to the other 150-ish sound categories that young babies can hear, but are lost by adulthood? • Example: Adult Japanese speakers do not distinguish the sounds /rah/ vs. /lah/ -- “flied lice” for “fried rice” • The Japanese language does not differentiate these sound “categories” (they are the same sound in Japanese) • English does differentiate these sounds • What sounds do babies continue to “hear”? Those that they are routinely exposed to -- “Use it, or lose it”
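Categorical perception is often demonstrated with voice onset time (VOT): English listeners tend to hear bilabial stops with a short VOT as /ba/ and a long VOT as /pa/, with a fairly sharp category boundary, while variation within a category is largely ignored. A toy sketch (the 25 ms boundary is an assumed illustrative value, not a measured one):

```python
def perceive_bilabial_stop(vot_ms, boundary_ms=25.0):
    """Classify a bilabial stop as /ba/ or /pa/ by voice onset time (VOT).

    Real perception is more graded near the boundary; this hard cutoff
    is a simplification to illustrate the idea of a category boundary.
    """
    return "ba" if vot_ms < boundary_ms else "pa"

# Within-category differences are not heard as different sounds:
# 5 ms and 20 ms of VOT both land in the /ba/ category.
assert perceive_bilabial_stop(5) == perceive_bilabial_stop(20) == "ba"

# But a similar-sized step across the boundary flips the percept entirely.
assert perceive_bilabial_stop(40) == "pa"
```

This is the sense in which a category can survive any speaker: as long as the acoustic cue lands on the right side of the boundary, “bah” is always “bah”. A listener whose language lacks a given boundary (like /r/ vs. /l/ for Japanese speakers) simply maps both sounds into one category.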
Speech Perception through your eyes? The McGurk Effect • What you see influences what you hear • http://www.youtube.com/watch?v=G-lN8vWm3m0
Crucial need for language stimulation • Critical Periods • Victor, the “wild child” of Aveyron, France (1800): learned only a few words • Genie, imprisoned in her home until 13 years of age • Developed only “toddler-like” language • “Father take piece wood. Hit. Cry.”
Language Deprivation • Oxana Malaya -- http://www.youtube.com/watch?v=2PyUfG9u-P4 • Genie -- http://www.youtube.com/watch?v=ICUZN462qMw
Primate Communication and Symbolic Skills (tool use) Imitation and discourse (Oprah piece) -- http://www.youtube.com/watch?v=jKauXrp9dl4&feature=related Vocabulary – http://www.youtube.com/watch?v=wRM7vTrIIis Receptive vocabulary -- http://www.youtube.com/watch?v=h7IdghtkKmA&feature=related
The human brain & language • Most linguists ignore the Basal Ganglia (BG) • Swearing is associated with BG involvement (see Steven Pinker, 2008) • Parkinson’s patients’ “loss” of language implicates the importance of the BG
Speech In The Brain – “hemispheric lateralization” • The development of brain-imaging techniques has made it possible to learn more about how speech is processed in the brain Ex: positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) • For most people, the left hemisphere of the brain is dominant for language processing • 95% of right-handed people are left temporal lobe dominant • About 19% of left-handed people are right temporal lobe dominant • About 20% of left-handed people have “bilateral” language dominance across hemispheres
Speech In The Brain – “hemispheric lateralization” • Processing of complex sounds relies on additional areas of the cortex adjacent to A1 – called belt and parabelt regions – usually referred to as “secondary” or “association” areas • These areas are activated when listeners hear speech and music (and all other sounds) • Also, activity in response to music, speech, and other complex sounds is relatively balanced across the two hemispheres • But there are “preferences” for some sounds to be processed in one hemisphere (language in the left) vs. the other (non-speech sounds in the right)