Phonological constraints as filters in SLA
Raung-fu Chung (rfchung@mail.nsysu.edu.tw)
1. Introduction The main components of this article are: • The framework of Optimality Theory • Acquisition and learnability in OT • Our model • Concluding remarks
2. The framework of Optimality Theory (1) The model of OT
For instance, the English plural morpheme can be realized as either [s] or [z], depending on the final sound of the stem:
(2) cat [kæt] cats [kæts] dog [dɔg] dogs [dɔgz] hen [hɛn] hens [hɛnz]
The input form is taken to be /z/. We then propose the following constraint, Voiced Obstruent Prohibition:
(3) Voiced Obstruent Prohibition (VOP): No obstruents can be voiced.
Another constraint called for is:
(4) Obstruent Voicing Harmony (OVH): Adjacent obstruents should share the same value for [voice].
The third constraint is a universal faithfulness constraint, here referred to as Ident-IO(voice) (Ident = identical, IO = Input and Output):
(5) Ident-IO(voice): The value of the [voice] feature in the Output should be identical with that in the Input.
As for the ranking, it is obvious, as shown below:
(6) OVH >> VOP (">>" = outranks, i.e. takes priority over)
Adding Ident-IO(voice), we have the following ranking for all three constraints just proposed:
(7) OVH >> Ident-IO(voice) >> VOP
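The ranking above can be read as a decision procedure: each candidate is scored against the constraints in ranked order, and the candidate with the lexicographically smallest violation profile wins. A minimal sketch (not the article's code; the ASCII segment encoding is a simplifying assumption):

```python
# Toy OT tableau evaluation for the English plural data.
# Constraints follow the definitions of VOP, OVH, and Ident-IO(voice);
# segments are single ASCII letters, a deliberate simplification.

OBSTRUENTS = set("ptkbdgszfv")
VOICED = set("bdgzv")

def vop(inp, out):
    """Voiced Obstruent Prohibition: one mark per voiced obstruent."""
    return sum(1 for c in out if c in OBSTRUENTS and c in VOICED)

def ovh(inp, out):
    """Obstruent Voicing Harmony: adjacent obstruents agree in [voice]."""
    return sum(1 for a, b in zip(out, out[1:])
               if a in OBSTRUENTS and b in OBSTRUENTS
               and (a in VOICED) != (b in VOICED))

def ident_io_voice(inp, out):
    """Ident-IO(voice): one mark per segment whose [voice] value changed.
    Assumes input and output are segmentally aligned."""
    return sum(1 for i, o in zip(inp, out)
               if i in OBSTRUENTS and o in OBSTRUENTS
               and (i in VOICED) != (o in VOICED))

RANKING = [ovh, ident_io_voice, vop]   # OVH >> Ident-IO(voice) >> VOP

def evaluate(inp, candidates):
    """Return the candidate with the best (smallest) violation profile."""
    return min(candidates, key=lambda c: [con(inp, c) for con in RANKING])

print(evaluate("katz", ["katz", "kats"]))   # 'cats': /kæt+z/ -> [kæts]
print(evaluate("dogz", ["dogz", "doks"]))   # 'dogs': /dɔg+z/ -> [dɔgz]
```

Devoicing in "cats" follows because OVH outranks Ident-IO(voice), while "dogs" surfaces faithfully because its voiced cluster already satisfies OVH and only low-ranked VOP is violated.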
3.1 The notion of learnability
a. The formal learnability approach in the sense of Tesar & Smolensky (1993) assumed that all constraints start out unranked. Later empirical studies (e.g. Gnanadesikan, 1996; Levelt, 1995) pointed out that outputs are initially governed by markedness constraints rather than by faithfulness constraints. This leads to the proposal that in the initial state of the grammar all markedness constraints outrank all faithfulness constraints, or "M >> F" for short (Kager, Pater & Zonneveld, 2004; Hayes, 2004; Prince & Tesar, 2004).
b. There are two algorithms accounting for the learnability of constraint rankings: the Constraint Demotion Algorithm (CDA) and the Gradual Learning Algorithm (GLA). The CDA, proposed by Tesar & Smolensky (1993, 1998, 2000), ranks a set of constraints on the basis of positive evidence. For example, L1 acquisition can be interpreted as constraint demotion (Tesar & Smolensky, 1996).
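One constraint-demotion step can be sketched as follows. This is a toy illustration, not the article's implementation: the grammar is a stratified hierarchy (a list of strata, each a set of constraint names), and a winner/loser pair demotes every loser-preferring constraint to just below the highest stratum containing a winner-preferring one. The constraint names and violation counts are illustrative assumptions.

```python
# One step of Tesar & Smolensky-style Constraint Demotion (toy sketch).

def demote(strata, winner, loser):
    """Apply one demotion step for a winner/loser mark-data pair.

    strata: list of sets of constraint names, highest-ranked first.
    winner, loser: dicts mapping constraint name -> violation count.
    Assumes at least one constraint prefers the winner.
    """
    names = set(winner) | set(loser)
    prefer_loser = {c for c in names if winner.get(c, 0) > loser.get(c, 0)}
    prefer_winner = {c for c in names if loser.get(c, 0) > winner.get(c, 0)}
    # Highest-ranked stratum containing a winner-preferring constraint:
    top = next(i for i, s in enumerate(strata) if s & prefer_winner)
    # Demote loser-preferring constraints ranked at or above that stratum:
    moved = set()
    for s in strata[:top + 1]:
        moved |= s & prefer_loser
        s -= prefer_loser
    if len(strata) == top + 1:
        strata.append(set())
    strata[top + 1] |= moved
    return [s for s in strata if s]       # drop any emptied strata

# Start unranked; learn from winner [dogz] vs. loser *[doks]:
strata = [{"OVH", "Ident-IO(voice)", "VOP"}]
result = demote(strata, {"VOP": 3}, {"Ident-IO(voice)": 2, "VOP": 1})
print(result)   # VOP ends up in a stratum below the other two
```

A single informative pair thus suffices to pull VOP below Ident-IO(voice), moving the learner from the unranked initial state toward the target ranking in (7).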
c. The Gradual Learning Algorithm (GLA), developed in Boersma (1997, 1998) and Boersma & Hayes (2001), handles variation in the input and explains gradual well-formedness. The GLA is helpful in accounting for the categorization errors a learner makes in both production and perception. L2 learners with restricted constraint sets have to learn gradually to rerank the constraints by raising or lowering the existing ones.
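The raising and lowering of constraints can be sketched schematically. In the GLA each constraint carries a real-valued ranking, evaluation adds Gaussian noise, and every mismatch with the adult form nudges the values by a small plasticity step. The candidates and violation profiles below are illustrative assumptions, not the article's data:

```python
# Schematic Gradual Learning Algorithm run (Boersma-style stochastic OT).
import random

random.seed(0)
ranking = {"OVH": 100.0, "Ident-IO(voice)": 100.0, "VOP": 100.0}
PLASTICITY, NOISE = 0.1, 2.0

VIOLS = {                      # two candidates for a /..gz/ input
    "dogz": {"VOP": 3},
    "doks": {"Ident-IO(voice)": 2, "VOP": 1},
}

def produce():
    """Pick the optimal candidate under a noisy total order."""
    noisy = {c: v + random.gauss(0, NOISE) for c, v in ranking.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)
    return min(VIOLS, key=lambda cand: [VIOLS[cand].get(c, 0) for c in order])

for _ in range(2000):          # adult target: [dogz]
    out = produce()
    if out != "dogz":
        for c in ranking:
            diff = VIOLS[out].get(c, 0) - VIOLS["dogz"].get(c, 0)
            if diff > 0:       # violated more by the error: promote
                ranking[c] += PLASTICITY
            elif diff < 0:     # violated more by the target: demote
                ranking[c] -= PLASTICITY

# After learning, VOP has drifted below Ident-IO(voice), so the
# faithful [dogz] now wins reliably.
print(ranking["VOP"] < ranking["Ident-IO(voice)"])   # True
```

Unlike the all-or-nothing demotion of the CDA, each error moves the values only slightly, which is what lets the GLA model gradual improvement and residual variation in L2 production.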
4. Our model • L1 Filter Hypothesis & OT: [Model diagram: the L2 Input passes through an L1 filter into an Interlanguage ranking, which yields the Interlanguage Output; constraint re-ranking, drawing on UG (universal constraints, UC), moves the learner toward a native-like ranking and a native-like L2 Output.]
Empirical arguments: • An OT-based analysis of VOT production by Taiwanese EFL learners • An analysis of the diphthong structures of Mandarin and English for Taiwanese learners • Errors in the production of [yi] and [wu] by Mandarin EFL learners
An OT-based Analysis of VOT Production by Taiwanese EFL Learners • Acoustic values of VOT: (Liou, 2005) • Note: NSE: native speakers of English; HEFL: high-proficiency EFL learners; LEFL: low-proficiency EFL learners; MAN: Mandarin; SM: Southern Min
Constraints for VOT: • 1. The *CATEG(ORIZE) family, which punishes productive categories with certain acoustic values. For example, *CATEG(VOT: /91.5ms/) militates against producing /91.5ms/ as a particular category. • 2. The *WARP family, which demands that every segment be produced as a member of the most similar available category. For instance, *WARP(VOT: 9.3ms) requires that an acoustic segment with a VOT of 91.5ms not be produced as any VOT 'category' that is 9.3ms off (or more), i.e. as /82.2ms/ or /100.8ms/ or anything even farther away.
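The joint effect of the two families can be read as a decision procedure: an incoming VOT token is mapped onto the nearest category that the learner's *CATEG constraints still allow, and *WARP penalizes any mapping farther away than necessary. A hedged sketch; the inventories and VOT values below are illustrative, not the measurements from Liou (2005):

```python
# Toy categorization under *CATEG / *WARP (Boersma-style perception grammar).

def categorize(vot_ms, categories, banned):
    """Map an acoustic VOT (ms) to a category.

    *CATEG(VOT: /x/) is modeled as removing /x/ from the learner's
    inventory (the `banned` set); the *WARP family then favors the
    closest surviving category.
    """
    allowed = [c for c in categories if c not in banned]
    return min(allowed, key=lambda c: abs(c - vot_ms))

# An English-like inventory with a long-lag /91.5ms/ category:
print(categorize(85.0, [10.0, 91.5], banned=set()))         # -> 91.5
# A learner whose *CATEG(VOT: /91.5ms/) is still active warps the
# same token onto a shorter-lag category instead:
print(categorize(85.0, [10.0, 75.0, 91.5], banned={91.5}))  # -> 75.0
```

This is the sense in which an active *CATEG constraint in the interlanguage grammar produces the systematically shorter VOT values of the EFL learners relative to the NSE group.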
Constraint-ranking for English [ph] by NSE • Tableau 1 English [ph] of NSE
Constraint-ranking for [ㄆ] by Taiwanese EFL learners • Tableau 2 Mandarin [ㄆ] by Taiwanese EFL learners
Constraint-ranking for Interlanguage [ph] by Taiwanese EFL learners • Tableau 3 Interlanguage [ph] by HEFL
An OT-based Analysis of Mandarin and English diphthongs for Taiwanese EFL (MSL) learners
(2) [-back] vowels for: • (3) [+back] vowels for: • (4)
(4) *N (N = nucleus, = identical) [back] [-back]
(5) Different back features for: • (6) • (7) • (8)
(9) *N (N = nucleus, = same, * = ungrammatical) [back] [back]
5. Concluding remarks • theoretical implications • empirical support