This presentation discusses the concept of unambiguous triggers in language acquisition theory, specifically in the context of the CUNY-CoLAG language domain. It explores the advantages of unambiguous triggers, presents findings from a case study, and raises important questions for further research.
Introduction to Language Acquisition Theory
Janet Dean Fodor, St. Petersburg, July 2013
Class 6. Unambiguous triggers for setting syntactic parameters
Today's class
• We've seen that SP (the Subset Principle, or some similar conservative learning strategy) is essential, but would cause extreme undershoot failures for strictly incremental learning.
• Possible solutions: assume learners have memory for prior input, or for previously eliminated grammars.
• But another possibility to consider: the learning mechanism (LM) should be confident that all marked parameter values adopted so far are correct, so that they never require retrenchment. "Deterministic learning" – accurate, permanent.
• How can LM be confident? It must distinguish between ambiguous and unambiguous input, and change the grammar only on the basis of unambiguous triggers.
• Question for today: Do unambiguous triggers exist? Question for the future: If so, could a learner recognize them?
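To make deterministic learning concrete, here is a minimal sketch in Python of the update policy. It is purely illustrative: the objects `domain`, `grammar`, `languages_containing` and `parameter_values` are hypothetical, not from any actual implementation.

```python
def deterministic_learn(input_stream, grammar, domain):
    """Adopt a marked parameter value only when the current sentence
    is an unambiguous trigger for it; ambiguous input is skipped."""
    for sentence in input_stream:
        langs = domain.languages_containing(sentence)
        for value in domain.marked_values:
            # 'sentence' is an unambiguous trigger for 'value' iff
            # every language that contains the sentence has 'value'.
            if all(value in lang.parameter_values for lang in langs):
                grammar.add(value)  # permanent: never retrenched
    return grammar
```

Because every adopted value is entailed by the input, no later retrenchment is ever needed: the learning is deterministic in the sense above.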
Advantages of unambiguous triggers
If all syntactic parameters have unambiguous triggers, learning of natural language syntax could be error-free and deterministic (no need to revise grammar hypotheses). That would be fast and efficient, and would explain the uniformity of outcomes across learners. Great, but…
Learnability theorists (unlike linguists) gave up on this goal years ago, because of ambiguity and unruly interactions among triggers for natural language parameters (Class 2). Hence a shift to statistical or probabilistic models. However, these are labor-intensive; they predict errors of commission that children don't make, as well as retrenchment problems.
So let's take a close look at the domain of languages. Even if there's ambiguity (overlap between languages), is there sufficient unambiguity to set the parameters correctly?
A case study: preview of findings
• We searched the CUNY CoLAG domain and found that most of its marked parameter values do have unambiguous triggers, in all languages that need them. (Sakas & Fodor 2012)
• But 3 of its 13 parameters do not. Yet with closer study of those cases, we found ways to rescue them all.
• Where unambiguous triggers could not be found, they could be created by disambiguation.
• This is not generally considered in P&P models.
• Yet trigger disambiguation is not a new concept. It's just what SP does: if input fits 2 values, adopt the subset one. This disambiguates the input as a trigger for the subset value.
• We had to extend this in two ways: between-parameter defaults and conditioned triggers. (Explained below.)
Caution: Results are for a limited domain
There's no proof that this positive finding of unambiguous triggers will extend from the simplified CoLAG domain to the full domain of natural languages. More work needed! But we are optimistic: the 'toolkit' for trigger disambiguation seems robust, and similar to Dresher & Kaye (1990) for phonology.
Important questions for further research (by you?) include:
♦ Are specific trigger disambiguations predictable by general principles?
♦ Are they compatible with theoretical assumptions about the (narrow) faculty of language?
♦ Are they implementable in a psychologically feasible learning model? With plausible resources? Fitting child data?
♦ Once implemented, does reliance on unambiguous triggers out-perform other (non-deterministic) learning models?
Case study: the empirical investigation
Do syntactic parameters have unambiguous triggers? All syntactic parameters, in all languages??
Impossible in practice to check all natural languages. There is no way to know what the full range of languages is. Linguists haven't analyzed most of the ones we do know of; the parametric contrasts are only just now being studied.
But the issue can be approached using artificial languages, whose syntactic properties and parametric contrasts have been precisely and systematically assigned. Ideally, the language domain should be:
• large enough to be challenging;
• but not so large that we can't understand the outcomes;
• with authentic natural-language-like parameters;
• with syntactically structured sentences, not just word strings.
The CUNY-CoLAG language domain
3,072 languages, simplified but natural-language-like. Universal lexicon: S, O1, P, Aux, -wa, etc. Sample sentences: S Aux V Adv, O1+wh V S O2, O3-wa V P ka.
All sentences are degree-0 (no embedded clauses). Languages have 827 sentences on average. The domain contains 48,086 distinct sentences (as strings), 87,088 distinct syntactic trees, and 3,072 distinct sets of P-values (grammars).
Average ambiguity: each string is in 53 distinct languages.
For simulation studies of learning models, the input is a random sequence of all word-strings in each target language.
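For concreteness, here is how such a domain statistic can be computed if the domain is represented, hypothetically, as a mapping from each grammar (a tuple of parameter values) to its set of sentence strings. This is only a sketch of the bookkeeping, not the actual CoLAG software:

```python
from collections import defaultdict

def average_ambiguity(domain):
    """domain: dict mapping a grammar (tuple of 13 P-values) to its
    set of sentence strings. Returns the mean number of languages
    each distinct string belongs to (about 53 in CoLAG)."""
    language_count = defaultdict(int)
    for sentences in domain.values():
        for s in sentences:
            language_count[s] += 1
    return sum(language_count.values()) / len(language_count)
```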
Parameters in the CoLAG domain
13 standard Ps, simplified. GB-style (Government and Binding), for stability, but designed to illustrate a variety of interesting learning problems.
We are seeking I-triggers (abstract, structural 'treelets') that are recognizable from E-triggers (surface word strings).
E-triggers and I-triggers
• Chomsky (1986) distinguished External language (observable language behavior, utterances) from Internal language (mentally represented language; the grammar).
• A very important distinction for learnability theory: the input to learning is E-language; the end result of learning is I-language.
• Our computer searched for unambiguous E-triggers. Was there some sentence (string of 'words') that was uniquely associated with just one parameter value?
• But in Class 2, I argued that children, like adults, represent sentences as tree structures – and that the innately given parameter values take the form of 'treelets' which contribute to sentential tree structures. These are I-triggers.
• So when we find unambiguous E-triggers, we will try to relate them to structural I-triggers.
Case study: Method
For each parameter, the database search program checks (a sketch of the check follows):
• Does it have at least one unambiguous E-trigger for at least one of its values, in all languages with that value?
• If only for one value, the other one could be a default. That's ok (as long as it's linguistically plausible).
• For a P-value that does have unambiguous E-triggers, what (if anything) do those triggers have in common? Do they converge to a single E-schema? (Or a few?)
E.g., Prep-stranding: O3+wa S P V. O3-wh Aux V O1 P S. Etc. All E-triggers have P and O3 both present, not adjacent. I-trigger: an O3-chain in GB, or [SLASH O3], GPSG-style.
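Here is a minimal sketch of that search, under the same hypothetical domain representation as above (a dict from grammar tuples to string sets); `p` is a parameter's index in the grammar tuple:

```python
def unambiguous_triggers(domain, p, value):
    """Sentences occurring ONLY in languages whose grammar has the
    given value for parameter p."""
    in_value = set().union(*(sents for g, sents in domain.items()
                             if g[p] == value))
    elsewhere = set().union(*(sents for g, sents in domain.items()
                              if g[p] != value))
    return in_value - elsewhere

def every_language_has_one(domain, p, value):
    """Does EVERY language with this value contain at least one
    unambiguous E-trigger for it?"""
    triggers = unambiguous_triggers(domain, p, value)
    return all(sents & triggers
               for g, sents in domain.items() if g[p] == value)
```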
Initial search of the domain: 3 P-values do lack unambiguous E-triggers.
Resolving the ambiguity of triggers
• Worst possible case of trigger ambiguity: suppose every potential trigger for the marked value v of parameter Pi is also compatible with the marked value of a different parameter Pj.
• Even so, LM could establish the value Pi(v) if it follows a strategy that disambiguates the input in favor of Pi(v).
• Two such strategies (specific examples in the slides below, and in the sketch after this list):
• Prioritize Pi(v): designate the ambiguous input pattern as a trigger for Pi(v). That's fine, as long as the competing parameter Pj has other (unambiguous) triggers to set it.
• Conditionalize the trigger: once some other parameter is (correctly) set, the input pattern is unique to Pi(v).
• NB: Such strategies must be assumed innate. (Unless somehow derivable? How?? A research project.)
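A minimal sketch of how the two strategies might be wired into a deterministic learner. The two tables are hypothetical placeholders, to be filled in as the case study proceeds, and `grammar` is the set of marked values adopted so far:

```python
PRIORITY = {}    # frozenset of competing values -> value to adopt
CONDITIONS = {}  # value -> other value that must already be set

def resolve(candidates, grammar):
    """candidates: the set of marked values an input is ambiguous
    between. Returns one value to adopt, or None (no change)."""
    # Strategy 1: prioritize -- an innately designated winner.
    winner = PRIORITY.get(frozenset(candidates))
    if winner is not None:
        return winner
    # Strategy 2: conditionalize -- adopt a value only if its
    # conditioning value has already been securely set, which
    # disambiguates the input in its favor.
    met = {v for v in candidates
           if v in CONDITIONS and CONDITIONS[v] in grammar}
    return met.pop() if len(met) == 1 else None
```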
Example: the OptTop parameter
The Optional Topic parameter interacts with the Topic Marking, Null Topic and Null Subject parameters, and with the basic word order parameters.
The OptTop parameter was deliberately included in the CoLAG domain as an instance of a subset parameter. It has 2 values: obligatory / optional. (No CoLAG language lacks topics entirely.)
By SP, the default = obligatory; the marked value = optional. So only +OptTop needs triggers. But they are elusive – let's find out why.
Most +OptTop E-triggers are syntactically ambiguous
-OptTop: obligatory topic in all declarative sentences. +OptTop: a declarative sentence may or may not have a topic. (No CoLAG language lacks topics entirely.)
In all CoLAG languages, topicalized XPs are in Spec of CP, which is always sentence-initial.
Ambiguity problem: how to tell if a topic is present? E.g.:
S O2 O1 V – Is S in topic position? (Vacuous movement.) If so, this is compatible with –OptTop. If not, then it is compatible only with +OptTop.
Aux V O1 S – Is there a null topic in this sentence? If so, this is compatible with –OptTop. If not, then it is compatible only with +OptTop.
+OptTop gets a free ride from +NullSubj
+OptTop languages that are +NullSubj get a free ride, due to a between-parameter constraint that we built into the CoLAG domain. The domain has 'topic-oriented' versus 'subject-oriented' languages. As part of this distinction, +NullTop and +NullSubj are mutually exclusive: +NullTopic only if –OptTopic; +NullSubject only if +OptTopic.
So any E-trigger for +NullSubj thereby sets +OptTop. And +NullSubj does have E-triggers in CoLAG (but see Hyams). Unambiguous trigger: any non-imperative sentence which has an overt topic and no overt subject.
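Such a free ride is easy to implement as entailment propagation; the table below is a hypothetical encoding of just the constraint named on this slide:

```python
# In CoLAG, +NullSubj occurs only in +OptTop languages, so setting
# +NullSubj immediately sets +OptTop as well ('free ride').
ENTAILS = {"+NullSubj": ["+OptTop"]}

def set_value(grammar, value):
    """grammar: the set of marked values adopted so far."""
    grammar.add(value)
    for consequence in ENTAILS.get(value, []):
        if consequence not in grammar:
            set_value(grammar, consequence)  # propagate the free ride
```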
So 'real' triggers are needed for only 768 of the 1,536 +OptTop languages
[Diagram: the +OptTop languages divide into '+NullSubj: free ride' and '+OptTop, -NullSubj: still needy', alongside the -OptTop default languages.]
Another free ride: the value of OptTop is sometimes irrelevant
Recall the example: S O2 O1 V – is S in topic position? We can't tell whether a topic is present in S-initial languages which have no overt topic marking. So both values of OptTop yield the same (surface) language. Either value will do. Weak equivalence – good enough!
LM can just keep the default (starting) value –OptTop. So no triggers are needed. OptTop is irrelevant in 120 CoLAG languages.
Only 648 languages need +OptTop triggers
[Diagram: the +OptTop languages now divide into 'still needy', 'OptTop irrelevant', and '+NullSubj: free ride', alongside the -OptTop default languages.]
Some strange triggers for +OptTop!
The domain search found unambiguous +OptTop triggers in 408 of the languages needing 'real' triggers (no free ride).
All have sentence-initial Aux, Verb, not, never or ka. Why? All have an overt subject and a 'full house' of complements in VP: direct object, indirect object, PP, adverb. Why?
Explanation: to establish +OptTop, a learner needs to encounter a declarative sentence that clearly (a) has no overt topic, e.g. initial Aux, Verb, etc., and (b) has no null topic, i.e. nothing is missing.
With the 'full house' triggers
[Diagram: part of the 'still needy' region is now covered by 'no overt topic, full house' triggers, alongside 'OptTop irrelevant', '+NullSubj: free ride', and the -OptTop default.]
Reject the full house triggers!
Though they do the job, we rejected them on linguistic and psycholinguistic grounds.
Linguistically: natural languages can have an essentially unbounded number of optional adjuncts in a clause. All would have to be overtly present, to prove that none had been topicalized and deleted.
Psycholinguistically: even if there's a practical bound, it's not plausible that children have to rely on such huge sentences.
We searched again, but found no other globally valid triggers (valid wherever encountered). So then we looked for locally valid triggers (valid only in some languages). These do less work. And they can be dangerous – they rely on correct settings of other Ps.
Locally valid triggers can be dangerous
• A locally valid trigger is reliable only in a language that is known to have certain other P-values.
• E.g. sentence-initial O3 P is a safe trigger for +Head-final only in a language known to have no prep stranding.
• We say the trigger is conditioned by –PrepStranding.
• LM must be fully confident that the target language is –PrepStranding in order to safely use this trigger. (A sketch of this safety check follows.)
• If a learner wrongly believed the target language has no prep-stranding, the Head-final parameter would be mis-set.
• And that in turn could lead to mis-setting another parameter, and another… (Cf. Clark's examples.)
• Locally valid triggers are safe only if parameter setting is guaranteed to be error-free (deterministic, no guessing).
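A minimal sketch of that safety check, with a hypothetical table encoding only the example on this slide (sentence-initial O3 P sets +Head-final, conditioned by –PrepStranding):

```python
CONDITIONED_TRIGGERS = {
    # sentence-initial pattern: (value to set, required prior value)
    ("O3", "P"): ("+Head-final", "-PrepStranding"),
}

def try_conditioned_trigger(sentence_prefix, grammar):
    """grammar: the set of P-values adopted so far."""
    entry = CONDITIONED_TRIGGERS.get(sentence_prefix)
    if entry is None:
        return
    value, prerequisite = entry
    # Safe only under deterministic learning: the prerequisite must
    # itself have been set from unambiguous evidence, never guessed.
    if prerequisite in grammar:
        grammar.add(value)
```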
Conditioned triggers for +OptTop
The search program found triggers for +OptTop conditioned by +TopMark. (In a +TopMark language, topics carry an obligatory topic marker wa.)
Clearly these aren't global triggers for +OptTop (e.g. not available in English). So other languages would need other triggers for +OptTop. That could be ok – we can check.
Can LM be sure when the conditioning P-value is correctly set? Yes. +TopMark has clear triggers: sentences with wa. In a +TopMark language, a sentence without wa clearly reveals that it has no overt topic. So far, so good.
Now we need to check whether the sentence has a null topic. Unfortunately, the 'missing-wa' triggers all turned out to be 'full house' sentences, in order to rule out +NullTop. So, as before, we reject them.
So – back to where we were
[Diagram: as before – '+OptTop still needy', 'OptTop irrelevant', '+NullSubj: free ride', '-OptTop default'.]
–NullTop also conditions +OptTop triggers
The domain search found 528 languages with unambiguous triggers for +OptTop, conditioned by –NullTop. In a –NullTop language, a sentence with no overt topic (e.g., initial Aux, Verb, not, …) has no topic at all. Conditioning by –NullTop does the same work as the undesirable full house triggers. Excellent!
But: –NullTop is a default value, with no unambiguous triggers of its own. So a learner may temporarily have –NullTop even if the target language is +NullTop. So the default value –NullTop should not be allowed to condition another parameter value. Must we discard this too? Yes.
Triggers conditioned by two P-values
Further domain search found that all of the still-needy +OptTop languages have unambiguous triggers conditioned by both –NullTop and +TopMark. If it weren't for the illegitimacy of a default value as a conditioner, we'd be finished:
(a) +TopMark conditioning reveals that there is no overt topic.
(b) –NullTop conditioning would ensure that there is no null topic.
Hence the sentence has no topic at all. So the language must be +OptTop.
We're done (except… no default conditioners)!
[Diagram: the needy region is covered by triggers conditioned by –NullTop and +TopMark, alongside 'OptTop irrelevant', '+NullSubj: free ride', '-OptTop default'.]
Rescued by a between-parameter default
We can do without the unsafe conditioning of +OptTop by the default value –NullTop. Instead, the competition between +OptTop and +NullTop can be dealt with by a between-parameter default. A between-parameter default gives priority to the marked value of one parameter over the marked value of another, when a trigger is ambiguous between them.
+OptTop and +NullTop compete when there's no overt topic. Solution: give priority to +OptTop (which is needy for triggers) over +NullTop (which has sufficient triggers). +NullTop makes a gift of the trigger to +OptTop. If the language is really +NullTop, the learner will at some point encounter its triggers. (If not, not.) In the terms of the earlier sketch, this adds one entry to the priority table, as shown below.
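A concrete (and still hypothetical) instance of the PRIORITY table from the sketch above, encoding just this slide's default:

```python
# When a topicless sentence is ambiguous between the marked values
# +OptTop and +NullTop, the trigger-needy value wins.
PRIORITY = {frozenset({"+OptTop", "+NullTop"}): "+OptTop"}

def adopt(candidates, grammar):
    winner = PRIORITY.get(frozenset(candidates))
    if winner is not None:
        grammar.add(winner)

# +NullTop loses nothing by the donation: its own unambiguous
# triggers (sentences missing an obligatory item; next slide)
# will still set it whenever the target language is +NullTop.
```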
This is another way of disambiguating triggers
Previous problem: –NullTop is not itself supported by unambiguous triggers, so it's not trustworthy as a conditioner of +OptTop. But +NullTop is well-supported with triggers, so it can afford to give some of them away to +OptTop, in a between-parameter default.
There are abundant triggers for +NullTop. All +NullTop languages have sentences lacking an obligatory item:
• the object of a preposition: Verb not O1 P Adv
• the direct object when an indirect object is present: Aux O2 Verb S
Happy ending: +NullTop can donate the ambiguous triggers to +OptTop, which needs them.
We're (really) done!
[Diagram: the needy region is covered by triggers conditioned by +TopMark, with priority to +OptTop over +NullTop; alongside 'OptTop irrelevant', '+NullSubj: free ride', '-OptTop default'.]
Conclusions
However much ambiguity there is in a domain, there may be enough unambiguity to set the parameters. Suppose 97% of potential triggers for parameter P overlap with triggers for other parameters. No problem, if the other 3% are readily available to learners.
We've seen two mechanisms of trigger disambiguation, conditioned triggers and between-parameter defaults, that can yield unambiguous triggers for accurate learning. (At least, for one parameter in a simplified language domain!)
Unambiguity is what it takes for deterministic parameter setting (with its various benefits: reducing errors en route, resistance to retrenchment by SP, etc. – see Class 2). Note the converse: deterministic learning is what it takes to create safe conditioned triggers, when standard triggers are unavailable because of parameter overlap.
But 'squeaky clean' P-setting presupposes…
This 'squeaky clean' model of parameter setting from unambiguous triggers presupposes that a learner can know which sentences are parametrically ambiguous. We'll see next time (Class 7) that recognizing ambiguity is not easy – it might require more resources than can reasonably be attributed to a pre-school child.
WRITING ASSIGNMENT
Prepare a question, to ask in class on Friday, about some aspect of language learnability that we haven't touched on, or haven't dealt with satisfactorily. Indicate why it is important or of interest. Hand in a written copy to me on Wednesday (to be included in your grade). Keep a copy so that you can ask it in class on Friday.