Phonological Encoding II: Producing connected speech
Producing words: Lectures 2 and 3 • Lexical Concepts: TIGER(X) • Lemmas: Tijger (Dutch for 'tiger') • Word Forms: <Tijger>
Producing words: Lecture 4 • Lexical Concepts: TIGER(X) • Lemmas: Tijger • Word Forms: <Tijger> • Segments: /t/ /EI/ /x/ /@/ /r/ • Structure: 's1(onset nucleus coda) s2(onset nucleus coda)
So what have we got at the end of the day? • Lexical Concepts: TIGER(X) • Lemmas: Tijger • Word Forms: <Tijger> • Segments: /t/ /EI/ /x/ /@/ /r/ • Structure: 's1(onset nucleus coda) s2(onset nucleus coda)
Lecture 5 • Lexical Concepts: TIGER(X) • Lemmas: Tijger • Word Forms: <Tijger> • Segments: /t/ /EI/ /x/ /@/ /r/ • Structure: 's1(onset nucleus coda) s2(onset nucleus coda) • Er… Word Forms: 's1(onset /t/ nucleus /EI/) s2(onset /x/ nucleus /@/ coda /r/)
Levelt's paradox • All models of phonological encoding distinguish between the retrieval of content (segments) and the retrieval of structure (a word or syllable template) • Evidence: the properties of speech errors • But what is the point of re-ordering segments if their order is already stored in the lexicon (in the word form)? • Answer: the domain of syllabification (and thus of structure) is the phonological word.
Phonological word • A content morpheme, preceded and/or followed by zero or more closed-class morphemes (e.g., inflections, pronouns). • Examples: • <understand> + <ing>: un der stan ding • <understand> + <er>: un der stan der • <understand> + <her>: un der stan der (the cliticized her is reduced, so both yield the same result)
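To make the domain concrete, here is a minimal sketch (mine, not Levelt's) of phonological-word formation: the content morpheme and its reduced clitics are concatenated into one segment string before syllabification applies. The SAMPA-like transcriptions and the clitic inventory are simplifying assumptions.

```python
# Toy phonological-word builder: a content morpheme plus zero or more
# clitics forms a single syllabification domain. The SAMPA-like strings
# and the reduced clitic forms are illustrative assumptions.

REDUCED_CLITICS = {"ing": "IN", "er": "@r", "her": "@r"}  # clitic "her" loses its /h/

def phonological_word(content: str, *clitics: str) -> str:
    """Concatenate a content morpheme with the reduced forms of its clitics."""
    return content + "".join(REDUCED_CLITICS[c] for c in clitics)

print(phonological_word("Vnd@rst{nd", "er"))   # Vnd@rst{nd@r
print(phonological_word("Vnd@rst{nd", "her"))  # Vnd@rst{nd@r -- the very same domain
```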
Syllabification Rules • Principle of Maximal Onset (Dutch, English) • Principle of Minimal Coda (Dutch) • Sonority hierarchy (Universal?): the ideal syllable has a maximal rise in sonority in the onset, and a minimal decline in sonority in the coda • Vowels > liquids, nasals, glides > the rest
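The following sketch shows how Maximal Onset and the sonority hierarchy can work together: each nucleus claims the longest preceding consonant cluster whose sonority rises, and anything it cannot claim falls to the previous coda. The single-character segments and the three sonority classes are toy assumptions, not the model's actual inventory.

```python
# Sketch of syllabification by Maximal Onset plus the sonority hierarchy.
# Single-character segments and three sonority classes are toy assumptions.

SONORITY = {"vowel": 3, "sonorant": 2, "obstruent": 1}
CLASS = {**{v: "vowel" for v in "aeiou@"},
         **{c: "sonorant" for c in "lrmnjw"}}   # everything else: obstruent

def son(seg: str) -> int:
    return SONORITY[CLASS.get(seg, "obstruent")]

def good_onset(cluster) -> bool:
    """An onset is acceptable if sonority rises strictly toward the nucleus."""
    return all(son(a) < son(b) for a, b in zip(cluster, cluster[1:]))

def syllabify(segs):
    """Each nucleus takes the longest sonority-rising onset (Maximal Onset);
    leftover consonants join the preceding coda (Minimal Coda)."""
    nuclei = [i for i, s in enumerate(segs) if son(s) == 3]
    cuts = [0]
    for a, b in zip(nuclei, nuclei[1:]):
        cluster = segs[a + 1:b]
        k = next(k for k in range(len(cluster) + 1) if good_onset(cluster[k:]))
        cuts.append(a + 1 + k)
    cuts.append(len(segs))
    return ["".join(segs[a:b]) for a, b in zip(cuts, cuts[1:])]

print(syllabify(list("apla")))  # ['a', 'pla'] -- /pl/ rises in sonority, so both join the onset
print(syllabify(list("alpa")))  # ['al', 'pa'] -- /lp/ falls, so /l/ stays behind as a coda
```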
How does it work in Levelt et al. (1999)? • Word form(s) are retrieved • Word forms are spelled out: • Spell-out of segments • Spell-out of structure (number of syllables and stress) • Frames are merged • Segments are placed in the frames, respecting language-specific rules of syllabification • Syllable nodes are retrieved (from a syllabary)
Thus: <demand> + <her> [Diagram, built up over several slides: the spelled-out segments of <demand> (/d/ /i/ ... /d/, numbered 1-6 for attachment order) and of <her> (/h/ /@/ /r/, the /h/ dropping in the reduced clitic); the metrical frames W(S S') and W(S) are merged into W(S1 S2' S3); the segments attach one by one (Onset S1, Nucleus S1, Onset S2, ...); finally the syllable programs [di] [man] [d@r] are retrieved from the SYLLABARY.]
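Putting these steps together, here is a sketch of the pipeline for <demand> + <her>. This is my reconstruction of the slide's steps, not WEAVER++'s actual code; the vowel set, legal-onset list, and syllabary entries are hard-coded for this one example.

```python
# Pipeline sketch for <demand> + <her>: my reconstruction of the slide's
# steps, not WEAVER++'s code.

VOWELS = {"i", "a", "@"}
LEGAL_ONSETS = {(), ("d",), ("m",), ("r",)}

# 1. Spell-out of segments (attachment order = list order)
demand = ["d", "i", "m", "a", "n", "d"]
her = ["@", "r"]                      # /h/ is lost in the reduced clitic

# 2. Frames are merged, W(S S') + W(S) -> W(S1 S2' S3), and the segments
#    form one phonological-word domain
domain = demand + her

# 3. Attachment: each nucleus takes the longest legal onset before it;
#    consonants that cannot join an onset close the previous syllable as coda
def attach(segs):
    nuclei = [i for i, s in enumerate(segs) if s in VOWELS]
    cuts = [0]
    for a, b in zip(nuclei, nuclei[1:]):
        cluster = segs[a + 1:b]
        k = next(k for k in range(len(cluster) + 1)
                 if tuple(cluster[k:]) in LEGAL_ONSETS)
        cuts.append(a + 1 + k)
    cuts.append(len(segs))
    return [tuple(segs[a:b]) for a, b in zip(cuts, cuts[1:])]

# 4. Syllable programs are retrieved from the syllabary
SYLLABARY = {("d", "i"): "[di]", ("m", "a", "n"): "[man]", ("d", "@", "r"): "[d@r]"}
print([SYLLABARY[syl] for syl in attach(domain)])   # ['[di]', '[man]', '[d@r]']
```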
Properties of the model • The segments connected to the word form are numbered; the numbers specify the order of attachment. • Segments know where to go, and can look at their neighbours: • If I am a vowel: put me in the nucleus of the next available syllable. • If I am a consonant: put me in the onset of the next syllable. • If there is no next syllable: put me in the coda of the current syllable.
Properties of the model (2) • There is a verification mechanism, preventing errors. Thus, if phoneme /d/ is selected, only syllable programs [d*] can be selected. • There is a suspension/resumption mechanism, allowing for incrementality. Thus, even if /m/, /ae/, etc. are not selected yet, the model can already build the part of the phonological word corresponding to the first syllable.
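A minimal sketch of the verification idea; the shape of the check is my assumption, and only the principle (a syllable program must match the selected segments) comes from the model.

```python
# Verification sketch: a syllable program passes only if every one of its
# phonemes is among the currently selected segments. The dictionary shape
# is an assumption for illustration.

SYLLABARY = {"di": ("d", "i"), "man": ("m", "a", "n"),
             "d@r": ("d", "@", "r"), "sed": ("s", "e", "d")}

def verified(program: str, selected: set) -> bool:
    """True only if each phoneme of the program was actually selected."""
    return all(seg in selected for seg in SYLLABARY[program])

print(verified("di", {"d", "i"}))   # True: a [d*] program matching the selected /d/
print(verified("sed", {"d", "i"}))  # False: /s/ and /e/ were never selected
```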
Meyer's paradox • Meyer & Schriefers (1991): picture-word interference with phonological relatedness. • TAFEL (Dutch for 'table') with the distractors tapir (begin-related) vs. jofel (end-related). • Early SOA: effect of begin-relatedness. • Late SOA: effect of end-relatedness. • Meyer (1990, 1991): implicit priming with begin- and end-homogeneous sets: • Begin-homogeneous: lotus, loner, local; end-homogeneous: murder, ponder, boulder. • Effect of begin-relatedness only.
Explanation • Explicit priming (picture-word interference) speeds up the retrieval of segments. This depends on the time course of the spoken distractor, so both begin- and end-relatedness can help. • Implicit priming does not speed up the retrieval of segments. But when working through a homogeneous set, the participant can prepare part of the phonological word in advance (the suspension/resumption mechanism), and preparation can only start from the beginning.
The Syllabary • Stored programs for entire syllables, specified as sets of articulatory gestures, that is, abstract instructions to the articulators. • For example, one such instruction could be "close the lips" (but not: move the upper lip -8 mm AND the lower lip +5 mm, following velocity trajectories v1 and v2). • Thus, these instructions are not sensitive to the external context.
Why a syllabary? • Phonetic accommodation in speech errors: if phonemes end up in the wrong place, they are pronounced correctly for their new environment. • E.g., tab stops -> tap [stabz] (Fromkin, 1971): the plural -s surfaces as voiced [z] after the misplaced /b/.
Why a syllabary (2)? • If you do something really often, it is better to store and reuse it than to compute it from scratch every time. • The top 500 syllables (out of roughly 12,000) cover 80% of the words of English, and 85% of Dutch.
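The storage argument is easy to quantify. Given token frequencies for syllable types (they would have to come from a corpus; none is bundled here), the coverage of the most frequent types falls out of a few lines:

```python
# Sketch: what fraction of syllable tokens do the top-N syllable types cover?
# `counts` maps syllable -> token frequency and would have to come from a
# corpus; the inventory below is fake and only shows the call.

def coverage(counts: dict, top_n: int = 500) -> float:
    freqs = sorted(counts.values(), reverse=True)
    return sum(freqs[:top_n]) / sum(freqs)

print(coverage({"d@": 900, "di": 50, "man": 30, "d@r": 15, "strEIts": 5},
               top_n=2))   # 0.95: two types already cover 95% of these tokens
```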
Why a syllabary (3)? • Levelt & Wheeldon (1994): frequency effects in word production. • Practice phase: symbol-to-word association (%%%%% = Tiger, ***** = Lotus). • Test phase: the symbol alone cues production (%%%%% -> 'TIGER').
Why a syllabary (3)? • Additive effects of word frequency and syllable frequency. • Especially the frequency of the SECOND syllable was important. • Not reducible to syllabic complexity. • HOWEVER: there were confounding factors in the experiment, so these conclusions should not be taken at face value (Levelt et al., 1999).
What about errors? • WEAVER++ does not make ANY errors. It always ensures that the selected unit at level n+1 is connected to the selected unit(s) at level n. • Errors were simulated by assuming that this checking mechanism sometimes produces false positives at the level of the syllabary. • Thus, with red sock as the target: if the syllable program [sed] is "happy" -> anticipation. If [rok] is "happy" -> perseveration. If both are "happy" -> exchange.
Exchange rate: sed rock • In WEAVER, the probability of a false positive for [sed] is independent of that for [rok]. Both p's are extremely small, so the probability of both occurring is vanishingly small => 0% exchanges. • In Dell's model, selected phonemes are turned off. Thus, if /r/ is not selected in word 1, it has an advantage over /s/ for word 2 (because /s/ has been set to 0), and exchanges do occur. See also Dell, Burger, & Svec (Psychological Review, 1997).
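The independence argument can be made vivid with a quick simulation (my toy, with an arbitrary false-positive rate; neither model's actual code):

```python
import random

# Monte Carlo sketch of the independence argument. Independent false
# positives give exchanges at rate p**2, next to anticipations and
# perseverations at rate ~p. p = 0.001 is an arbitrary illustrative value.

p, trials = 0.001, 1_000_000
ant = per = exch = 0
for _ in range(trials):
    sed = random.random() < p   # [sed] wrongly passes verification
    rok = random.random() < p   # [rok] wrongly passes verification
    if sed and rok:
        exch += 1               # both slip through: an exchange ("sed rock")
    elif sed:
        ant += 1                # anticipation ("sed sock")
    elif rok:
        per += 1                # perseveration ("red rock")

print(ant, per, exch)  # roughly 1000, 1000, and ~1: exchanges are vanishingly rare
```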
Exchange rate • Fromkin (1971) (and Matt, last week): anticipations could be half-way corrected exchanges! Yew… New York • Nooteboom (in press): if we assume that the detection probability is the same for anticipations and perseverations, we can estimate the proportion of half-way corrected exchanges.
Nooteboom (in press), observed counts:

                P     A     E    Tot
Corrected      103  442?   42?   587
Not corrected  153   238   175   566
Total          256  680?  217?  1153
               22%  59%?  19%?  100%

The cells marked "?" are contaminated, because a half-way corrected exchange surfaces as a corrected anticipation. Assuming equal detection rates: 103 : 153 = Acor : 238 => Acor = 160, so the corrected exchanges number 442 - 160 + 42 = 324. The estimated table:

                P     A     E    Tot
Corrected      103   160   324   587
Not corrected  153   238   175   566
Total          256   398   499  1153
               22%   35%   43%  100%
WEAVER         19%   80%    1%
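The arithmetic behind the corrected table can be checked in a few lines; the counts come from the table above, and the equal-detection assumption is Nooteboom's:

```python
# Recomputing Nooteboom's estimate (counts from the table above; the
# equal-detection assumption is Nooteboom's, the script is mine).

P_cor, P_not = 103, 153
A_not, E_not = 238, 175
obs_A_cor, obs_E_cor = 442, 42  # surface counts: half-corrected exchanges
                                # masquerade as corrected anticipations

A_cor = round(A_not * P_cor / P_not)   # 238 * 103 / 153 = 160
E_cor = obs_A_cor - A_cor + obs_E_cor  # 442 - 160 + 42 = 324

total = P_cor + P_not + A_cor + A_not + E_cor + E_not  # 1153
for label, n in [("P", P_cor + P_not), ("A", A_cor + A_not), ("E", E_cor + E_not)]:
    print(label, n, f"{n / total:.0%}")
# P 256 22% / A 398 35% / E 499 43%
```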
Conclusions • WEAVER++ (as opposed to Dell's model) accounts for resyllabification in running speech • Like Dell's model, it captures seriality effects • It accounts for the paradoxical RT data found in implicit and explicit priming • Its syllable theory is supported by theoretical arguments, but not by conclusive data • Unlike Dell's model, it does not predict the occurrence of exchange errors.