Types of Meanings: Two is Better than Too Many
Paul M. Pietroski, Rutgers University
• Do the meanings of names like (1) and verb phrases like (2) differ in kind?
      (1) Socrates
      (2) talk in Athens
  Or do they have meanings of the same type, like 'dog' and 'catastrophe'?
• Does the meaning of (3) differ in kind from the meaning of (4)?
      (3) every
      (4) every philosopher
  If so, is (1) an instance of the same (or a same) "semantic type" as (4)?
• Do the meanings of (5) and (6) differ in kind?
      (5) in Athens
      (6) see Athens
  What about the verbs in (6) and (7)?
      (7) gave Socrates a drink
• Is (8) a phrase of the same semantic type as (9)?
      (8) Socrates talk
      (9) hear Socrates talk
• What about (8) and (10), or (10) and (1)? Are tensed sentences sui generis?
      (10) Socrates talked.
• How many semantic types are there, and are some more basic than others?
A Pair of Familiar Ideas

SEMANTIC TYPES, the familiar way:
    Basic: <e> and <t>
    Non-Basic: <α, β>, if α and β are types
So there are endlessly many types.

(1) There's room in the middle: two types, three types, four types, …
(2) If we're offering hypotheses about "natural human" languages:
    (a) the Fregean typology overgenerates badly;
    (b) Tarskian alternatives are too unconstrained in other ways; but
    (c) a small number of types (perhaps two) may be just right.
The Lowest Fregean Types (described iteratively)

Write <m, n> for the types <α, β> with α of level m and β of level n.

0. <e>, <t>   (2 basic types)
1. <e, e>, <e, t>, <t, e>, <t, t>   (4 types of <0, 0>)
2. eight of <0, 1>, eight of <1, 0>, sixteen of <1, 1>   (32), including <e, et> and <et, t>
3. 64 of <0, 2>, 64 of <2, 0>, 128 of <1, 2>, 128 of <2, 1>, 1024 of <2, 2>   (1408), including <e, <e, et>>, <et, <et, t>>, and <<e, et>, t>
4. 2816 of <0, 3>, 2816 of <3, 0>, 5632 of <1, 3>, 5632 of <3, 1>, 45,056 of <2, 3>, 45,056 of <3, 2>, 1,982,464 of <3, 3>   (> 2M), including <e, <e, <e, et>>> and <<e, et>, <<e, et>, t>>
5. …   (> 5 x 10^12)
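The counts follow a simple recurrence: a type <α, β> has level max(level(α), level(β)) + 1, so the types of level n are exactly the pairs whose higher-level member has level n-1. Here is a minimal sketch of that recurrence in Python (my illustration, not part of the talk):

    def fregean_type_counts(max_level):
        """Entry n is the number of Fregean types of level exactly n."""
        counts = [2]                      # level 0: <e> and <t>
        for n in range(1, max_level + 1):
            below = sum(counts[:-1])      # types of level < n-1
            up_to = below + counts[-1]    # types of level <= n-1
            # <a,b> has level n iff max(level(a), level(b)) = n-1:
            counts.append(up_to * up_to - below * below)
        return counts

    print(fregean_type_counts(4))         # [2, 4, 32, 1408, 2089472]
    # and level 5 is already in the trillions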
If Slangs really did generate expressions of these higher types, we might expect words like these:

    If one precedes two, and two precedes three, then one precedes three. Precedes transits. Predecessor doesn't transit. And loves doesn't symmet. But precedes ancests predecessor.
    (cp. The relation of precedence is transitive. This relation is the ancestral of another relation that Frege talked about.)

The invented verbs would have higher Fregean types:

    Transitive[Precedes(_, _)], with 'Transitive' of type <<e, et>, t> and 'Precedes' of type <e, et>
    Symmetric[Loves(_, _)], with 'Symmetric' of type <<e, et>, t>
    Transitive[Predecessor(_, _)]
    TransitiveClosure[Precedes(_, _), Predecessor(_, _)], with 'TransitiveClosure' of type <<e, et>, <<e, et>, t>>
• Human children regularly acquire languages, spoken or signed, that connect meanings of some kind with pronunciations of some kind.
• These languages (let's call them Slangs) are distinctively human.
• Like most Slang words, 'language' ('being', 'good', 'book', …) can be said/used in many ways:
    Invented Languages: Arithmetic Notation, Leibniz's Calculus, First-Order Logic, Morse Code, …
    Other natural systems: Bee Dance Systems, Vervet Monkey Calls, …
    Natural Languages: English, Japanese, French, Warlpiri, ASL, Latin, … (the Slangs)
    Various Kinds of "Mentalese"
• Slangs:
    ⦁ distinctive signals (pronunciations) and interpretations (meanings)
    ⦁ naturally acquirable by ordinary human children (but not other animals)
Overgeneration for Slangs?

The Fregean hierarchy is one way to overgenerate. There are other ways, e.g., unconstrained (Tarskian) combination of open sentences:

    Mx; Dyx; Tzyx; …
    Mx & Nx; Mx & Ny; Mx & Ny & Nz; …
    Mx & Dyx; Mx & Dyz; Dxy & Dyz; …
    Mx & Tzyx; Tzyx & Duv; …
Rest of the Talk

• Way Back Machine to 1957:
    • Competence/Performance
    • Overgeneration and Theoretical Vocabulary
• Come back to the Fregean Typology:
    • it really does overgenerate massively
    • appeal to performance limitations doesn't help nearly enough
• Sketch the alternative proposed in Conjoining Meanings:
    • two lexical types: monadic <M>, dyadic <D>
    • all non-lexical meanings of type <M>
• The "no types" option also faces overgeneration problems.
• Positing semantic types should be part of finding the right vocabulary for formulating grammars that don't overgenerate. (So start spare, and add if, but only if, needed.)
Competence/Performance

A toy grammar:
    S → NP VP        V → ate, chased
    VP → V NP        N → rat, cheese, cat, dog
    NP → the N       and, for relative clauses: N → N RC; RC → the N chased

            S
           / \
         NP   VP
        /  \  / \
      the   N V  NP
            | |  / \
          rat ate the N
                      |
                   cheese

• The rat ate the cheese
• The cat chased the rat
• The dog chased the cat
• The rat the cat chased ate the cheese
• The rat the cat the dog chased chased ate the cheese
      *Yuck*, but not ungrammatical (or ungenerable). "Center Embedding" is hard to parse.
      (Compare the long but easy: the dog chased the cat that chased the rat that ate the cheese that Jack bought from the store the night before the dog chased the cat.)

• Grammars generate expressions (in a mathematical sense of 'generate').
• People produce expressions (in an episodic sense of 'produce').
• Grammars generate expressions that people can't produce (or parse).
• But if some *Yuck*-judgments reflect the difficulty of parsing "center embeddings," then grammars generate expressions in ways that yield such embeddings.
• And not all recursive string-generators do this. A "Finite State Procedure" (FSP) that loops from word to word, without building phrases, also generates
      the rat ate the cheese
      the rat the cat chased ate the cheese
      the rat the cat the dog chased chased ate the cheese
  in the same boring way.
• Given the FSP, the yucky string is not hard to parse; it's just a little longer.
• But the FSP imposes no constituency structure on strings. So it overgenerates wildly:
      the ate the cheese, the the ate the cheese, …, chased cat rat the the dog dog dog chased ate the cheese, …
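One way to make the point concrete (my sketch; the talk's FSP diagram is not reproduced here): a machine with one looping state that accepts any of the words, followed by a fixed ending, accepts the center-embedded sentences and the word salad alike.

    # A sketch of a Finite State Procedure: loop on any word, then
    # end with 'ate the cheese'. No constituency is imposed.
    LOOP_WORDS = {"the", "rat", "cat", "dog", "chased"}

    def fsp_accepts(sentence):
        words = sentence.split()
        if words[-3:] != ["ate", "the", "cheese"]:
            return False
        return all(w in LOOP_WORDS for w in words[:-3])

    print(fsp_accepts("the rat the cat chased ate the cheese"))                  # True
    print(fsp_accepts("the rat the cat the dog chased chased ate the cheese"))   # True
    print(fsp_accepts("the the ate the cheese"))                                 # True: wild overgeneration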
Overgeneration or Performance Limitations?

Given the Fregean hierarchy above: is it plausible that certain cases of abstraction, including <<e, et>, t> and <<e, et>, <<e, et>, t>>, are grammatically licit but psychologically harder than other cases like <et, <et, t>>?
An important pair of ideas from Chomsky (1957):
    • syntactic structures as equivalence classes of derivations
    • a string can be structurally homophonous

Grammar: S → NP VP; VP → V NP; V → ate; NP → the N; N → rat, cheese

Two derivations of one string:

    Derivation A:                 Derivation B:
    1. S                          1. S
    2. NP VP                      2. NP VP
    3. NP V NP                    3. the N VP
    4. NP ate NP                  4. the rat VP
    5. the N ate NP               5. the rat V NP
    6. the N ate the N            6. the rat V the N
    7. the rat ate the N          7. the rat V the cheese
    8. the rat ate the cheese     8. the rat ate the cheese

These derivations differ in order but impose the same constituency structure, so they belong to the same equivalence class.
Now extend the grammar: S → NP VP; VP → V NP; VP → VP PP; NP → a N; NP → a N PP; V → called; N → doctor, lawyer; PP → from Texas

Two derivations of 'a doctor called a lawyer from Texas' that do NOT impose the same structure:

    Derivation A (PP inside the object NP):
    1. S
    2. NP VP
    3. a N VP
    4. a doctor VP
    5. a doctor V NP
    6. a doctor called NP
    7. a doctor called a N PP
    8. a doctor called a lawyer PP
    9. a doctor called a lawyer from Texas

    Derivation B (PP attached to VP):
    …
    4. a doctor VP
    5. a doctor VP PP
    6. a doctor V NP PP
    7. a doctor called NP PP
    8. a doctor called NP from Texas
    9. a doctor called a N from Texas
    10. a doctor called a lawyer from Texas

On the A-structure, the PP is a constituent of the object NP ('a lawyer from Texas'); on the B-structure, it is a constituent of the VP. The string is structurally homophonous.
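Ambiguity of this kind can be exhibited by counting derivations. A minimal sketch (mine, not Chomsky's): a CYK-style chart counts, for each span of the string, how many ways each category can be derived; the toy grammar above, put into binary-rule form, assigns the string exactly two analyses.

    from collections import defaultdict

    # Toy grammar in binary/unary form (my binarization of the slide's rules).
    UNARY = {"a": {"DET"}, "doctor": {"N"}, "lawyer": {"N"},
             "called": {"V"}, "from": {"P"}, "Texas": {"NP"}}
    BINARY = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}, ("VP", "PP"): {"VP"},
              ("DET", "N"): {"NP"}, ("NP", "PP"): {"NP"}, ("P", "NP"): {"PP"}}

    def count_parses(words, root="S"):
        n = len(words)
        chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n)]
        for i, w in enumerate(words):                  # words as spans of width 1
            for cat in UNARY.get(w, ()):
                chart[i][i + 1][cat] += 1
        for width in range(2, n + 1):                  # combine adjacent spans
            for i in range(n - width + 1):
                k = i + width
                for j in range(i + 1, k):
                    for lcat, lcount in chart[i][j].items():
                        for rcat, rcount in chart[j][k].items():
                            for parent in BINARY.get((lcat, rcat), ()):
                                chart[i][k][parent] += lcount * rcount
        return chart[0][n][root]

    print(count_parses("a doctor called a lawyer from Texas".split()))   # 2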
• A Finite-State model with loops for 'a' and 'b' generates every string consisting of zero or more occurrences of 'a', zero or more occurrences of 'b', and at least one occurrence of 'a' or 'b'. (Now imagine a "1 to 2" link for each English word.)
• It's easy to provide Finite-State models of Slangs that don't undergenerate.
• But such models overgenerate (wildly), because they don't generate in a way that imposes constituency structure on strings of lexical items.
• And strings of lexical items are not remotely adequate as proxies for expressions, which have meanings that reflect constituency structure.
• Chomsky used the (Post-style) vocabulary of "phrase structure rules" to describe languages that respect structural constraints, because the relevant expressions are generated in ways that impose constituency structure on strings.
Compare some languages over 'a' and 'b':

    a^m b^n (any number of a's, then any number of b's): b, ab, abb, aabb, …
    a^n b^n (the "mirror" language): ab, aabb, aaabbb, …
    equal numbers of a's and b's, in any order: aabb, aababb, baababba, abaabababbab, bbaabbaabbaa, …

A grammar for a^m b^n:   S → L R;  L → a L;  L → a;  R → b R;  R → b

A grammar for a^n b^n:   S → a S b;  S → a b

    1. S
    2. a S b
    3. a a S b b
    4. a a a b b b

The two grammars impose different constituency: the L/R grammar groups all the a's together and all the b's together, while the mirror grammar nests each 'a' with its matching 'b'.
a^n b^n c^n: abc, aabbcc, aaabbbccc, …

Context-Insensitive (a.k.a. Context-Free) Rules: the symbol on the left side of the arrow can be rewritten, as indicated on the right, wherever that symbol appears (i.e., regardless of the context in which it appears):

    S → a S b;  S → a b;  S → L R;  L → a L;  R → b

Context-Sensitive Rules: what appears on the left can be rewritten as on the right, and only one symbol gets rewritten; but this is legit only if the rewritable symbol is in a certain context:

    a B → a b    (you can replace a 'B' with a 'b' when the 'B' follows an 'a')
    c C → c c    (you can replace a 'C' with a 'c' when the 'C' follows another 'c')
    C B → C X    (you can replace a 'B' with an 'X' when the 'B' follows a 'C')

A grammar for a^n b^n c^n:

    (1) S → a S B C
    (2) S → a B C
    (3) a B → a b
    (4) b B → b b
    (5) b C → b c
    (6) c C → c c
    (7) C B → B C

Rule (7) swaps adjacent symbols; with strictly context-sensitive rules, the swap can be simulated in steps:

    (7a) C B → C X
    (7b) C X → Y X
    (7c) Y X → Y C
    (7d) Y C → B C
To generate all and only the a^n b^n c^n strings, we need context-sensitive rules. But allowing context-sensitive rules allows for constituency-obliterators like (7).

    1.  S
    2.  a S B C               via (1)
    3.  a a S B C B C         via (1)
    4.  a a a B C B C B C     via (2)
    5.  a a a B B C C B C     via (7)
    6.  a a a B B C B C C     via (7)
    7.  a a a B B B C C C     via (7)
    8.  a a a b B B C C C     via (3)
    9.  a a a b b B C C C     via (4)
    10. a a a b b b C C C     via (4)
    11. a a a b b b c C C     via (5)
    12. a a a b b b c c C     via (6)
    13. a a a b b b c c c     via (6)

The S that was rewritten as aSBC (and then as aaSBCBC) doesn't correspond to any chunk of the final a^n b^n c^n string. From line 4 on, we just shuffle and rewrite CAPITALS. But we need to shuffle before rewriting: 'aaabcbcbc' is not an a^n b^n c^n string. And shuffling obliterates constituency: aaBCBC was a constituent (back at line 4).

Note that replacing 'aB' with 'ab' is OK, as is replacing 'bB' with 'bb'. But allowing indiscriminate replacement of 'B' with 'b' would overgenerate. Likewise, we need (5) and (6), not "C → c".
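The derivation can be checked mechanically. A minimal sketch (my illustration): treat rules (1)-(7) as string rewriting, explore all derivations up to a length bound, and collect the fully lowercase results; only a^n b^n c^n strings come out.

    # Rules (1)-(7) as string rewriting (sketch; bound chosen to reach n = 3).
    RULES = [("S", "aSBC"), ("S", "aBC"), ("aB", "ab"), ("bB", "bb"),
             ("bC", "bc"), ("cC", "cc"), ("CB", "BC")]

    def generated_strings(max_len=9):
        seen, frontier, results = {"S"}, ["S"], set()
        while frontier:
            current = frontier.pop()
            if current.islower():          # no CAPITALS left: a terminal string
                results.add(current)
                continue
            for lhs, rhs in RULES:
                start = current.find(lhs)
                while start != -1:         # try each rule at every position
                    new = current[:start] + rhs + current[start + len(lhs):]
                    if len(new) <= max_len and new not in seen:
                        seen.add(new)
                        frontier.append(new)
                    start = current.find(lhs, start + 1)
        return sorted(results, key=len)

    print(generated_strings())   # ['abc', 'aabbcc', 'aaabbbccc']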
The Golden Age

• By 1959, it was pretty well understood that acquiring a Slang is a matter of acquiring a procedure that (recursively) generates expressions in an interesting but limited way:
    • not in the very, very boring Finite-State way (no constituency structure)
    • not in a way that exploits the power of Context-Sensitive Rules (constituency-obliterating)
    • more like the limited power of Context-Insensitive Rules (supplemented by constrained "movement" of constituents)
• The issues concern the generative powers humans employ.
• In formulating theories/models of these powers, overgeneration is the real concern.
Frege's Begriffsschrift (1879)

• Designed to depict proofs
    • including proofs by (logical) induction
    • no "intuitive" shortcuts: all steps explicit
• So the invented syntax…
    • makes a minimal contribution to "content" (Apply Function: an expression of type <α, β> combines with one of type <α> to yield one of type <β>)
    • imposes no real constraint on atomic expressions
• Frege wasn't offering hypotheses about
    • which combinatorial operations Slang syntax can express
    • which kinds of lexical items Slangs support
    • whether the relevant operations constrain candidate lexical meanings
(Compare a Slang noun phrase like 'cat which Fido saw', where the noun and the relative clause are each of type <et>.)
So you see, it's good to suppose that <e> and <t> are types, and hence that <et> is a type, and <e, et> is a type, and <et, t> is a type, and <et, <et, t>> is a type. Remember trying to learn this stuff.

    Fido saw Felix:
    [<t> [<e> Fido] [<et> [<e, et> saw] [<e> Felix]]]

What if you replace 'Fido' with 'a dog'?

    a dog saw Felix:
    [<t> [<et, t> [<et, <et, t>> a] [<et> dog]] [<et> [<e, et> saw] [<e> Felix]]]

But if we can treat 'a' as an expression of the (Level Three) type <et, <et, t>>, is it really harder to introduce expressions of the (Level Three) type <<e, et>, t> or the (Level Four) type <<e, et>, <<e, et>, t>>?

    Transitive[Precedes(_, _)]
    Symmetric[Loves(_, _)]
    Transitive[Predecessor(_, _)]
    TransitiveClosure[Precedes(_, _), Predecessor(_, _)]
What if you replace 'Felix' with 'a cat'? Then the object quantifier (type <et, t>) cannot be the argument of 'saw' (type <e, et>). One option is to type-raise the verb:

    saw ↦ λy.λx.Saw(x, y)                   (type <e, et>)
    becomes
    saw ↦ λΨ.λΦ.Φ{λx.Ψ[λy.Saw(x, y)]}       (type <<et, t>, <<et, t>, t>>)

Another option is to let the quantifiers move, leaving indexed traces of type <e>:

    [<t> [<et, t> a dog] [<et> 1 [<t> [<et, t> a cat] [<et> 2 [<t> [<e> t1] [<et> [<e, et> saw] [<e> t2]]]]]]]

Is this cheating? Is 'a' really that complicated? Maybe… probably not. But we're stuck with 'every'. (Level 4, even without event variables.)
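Here is the type-raised 'saw' from the slide rendered over a toy domain (my illustration; the sets and names are invented), with quantifiers as functions of type <et, t>:

    # Toy model: quantifiers as <et,t> functions; 'saw' plain and type-raised.
    DOGS, CATS = {"Fido", "Rex"}, {"Felix"}
    DOMAIN = DOGS | CATS
    SAW = {("Fido", "Felix")}                     # pairs (seer, seen)

    saw = lambda y: lambda x: (x, y) in SAW       # type <e, et>
    dog = lambda x: x in DOGS                     # type <et>
    cat = lambda x: x in CATS
    a = lambda noun: lambda pred: any(noun(x) and pred(x) for x in DOMAIN)  # <et, <et, t>>

    # Type-raised 'saw', type <<et,t>, <<et,t>, t>>, as on the slide:
    saw_raised = lambda psi: lambda phi: phi(lambda x: psi(lambda y: saw(y)(x)))

    print(a(dog)(saw("Felix")))        # 'a dog saw Felix'  -> True
    print(saw_raised(a(cat))(a(dog)))  # 'a dog saw a cat'  -> True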
The same two options arise for 'every dog saw every cat': either the type-raised 'saw' combines with 'every dog' and 'every cat' (each of type <et, t>), or the quantifiers raise, leaving indexed traces of type <e>.

Is 'Fido' like 'every dog'? Probably… more or less. But you said that 'every dog' is not of type <e>. Is 'the dog' like 'every dog'? Maybe… probably. And besides, we're stuck with 'every'.
Names pattern with nouns:

    Tyler Burge saw David Kaplan.
    Professor Tyler Burge saw that guy Kaplan.
    Every Tyler at the party saw a David.
    The Capulets saw the Montagues.
    Every Hatfield saw every McCoy.

And 'every Hatfield saw every McCoy' (or even 'every Fido saw every Felix') can be analyzed exactly like 'every dog saw every cat', with either the type-raised verb or quantifier raising.

So 'Fido' is not of type <e> after all? The name is a complex expression, and the lexical noun is of type <et>? Probably… details TBA.
"Last week, you said that names were probably not atomic expressions of type <e>."
"This week, I'm simplifying."
"Why?"
"To tell you about more problems for the typology we assumed."
"We?"

    [<t> [<e> Romeo] [<et> [<e, et> saw] [<e> Juliet]]]
    [<t> [<e> Fido] [<et> [<e, et> chased] [<e> Felix]]]

We need to deal with tense. And with bare plurals:

    [<t> [<?> dogs] [<et> [<e, et> chased] [<?> cats]]]
    [<?> [<?> dogs] [<?> [<?> chase] [<?> cats]]]
With event variables, 'chase' and 'see' become type <e, <e, et>>, 'past' closes the event position, and 'on Monday' is a monadic <et> modifier of events:

    [<t> past [<et> [<e> Fido] [<e, et> [<e, <e, et>> chase] [<e> Felix]]]]
    [<et> [<e, et> on] [<e> Monday]]      ('on Monday': another event predicate)
    [<t> past [<et> [<e> Romeo] [<e, et> [<e, <e, et>> see] [<e> Juliet]]]]

But bare plurals reintroduce the question marks:

    [<t> past [<et> [<?> dogs] [<?> [<e, <e, et>> chase] [<?> cats]]]]

OOPS.

And perception reports like 'Romeo saw dogs chase cats' multiply them:

    [<t> past [<et> [<?> Romeo] [<?, et> [<??> see] [<?> [<?!> dogs] [<??> [<?!> chase] [<?!> cats]]]]]]

So we're no longer sure about 'see'. So we can't be sure about 'chase'. And we weren't sure about proper nouns.
Adding existential closure of the event position (∃, type <et, t>) and treating 'past' as an <et> modifier:

    [<t> [<et, t> ∃] [<et> [<et> past] [<et> [<?> Romeo] [<?, et> [<??> see] … ]]]]
    (past-perfect …)

And 'dogs chased cats', with raised plural subjects and objects ('sm' a covert determiner, '+pl' of type <et, et>):

    [<t> ∃ [<t> [<et, t> sm [<et> [<et> dog] [<et, et> +pl]]] [<et> 1 [<t> [<et, t> sm cat+pl] [<et> 2 [<et> past [ … [<??> chase] … ]]]]]]]

In here, there's lots of stuff we don't understand. But we're sure that we have the right semantic typology.
It's progress to learn how complex an apparently simple expression can be. So forget what I said (on day one) by way of motivating the typology that we have now learned is right.

Isn't this cheating? You're saying that 'dogs' and 'cats' are more complicated than they seem, and then saying that they move (because otherwise there would be a "mismatch problem" with the verb, whose type you're not sure about), and then positing the "index magic" to solve a "mismatch problem" that you created by trying to make something of type <et, t> combine with something of type <t>.
Overgeneration or Performance Limitations?
Think about the "Plausible Eight":

    0. <e>, <t>
    1. <e, t>, <t, t>
    2. <e, et>, <et, t>
    3. <e, <e, et>>, <et, <et, t>>

• Where's the independent evidence that Slangs generate expressions of type <e>, <t>, <t, t>, or <e, <e, et>>?
• Where's the independent evidence that Slang quantifiers (like 'every cat') are of type <et, t>, and that Slang determiners (like 'every') are of type <et, <et, t>>?
• Once we're down to <et> and <e, et>, why not try taking these types as basic and exhaustive: <M> and <D>?
Two Metaphors for Concept Composition: Jigsaw Puzzles and 7th Grade Chemistry

• A Monadic Concept "filled by" a Saturater yields a complete Thought.
    Sang(_) is unsaturated (valence -1); Kermit is a saturater (+1); the result is the Thought Sang(Kermit).
    (cp. Na+1 and Cl-1 yielding NaCl)
• A Dyadic Concept "filled by" two Saturaters yields a complete Thought.
    Kick(_, _) is doubly unsaturated (-2); the first saturater, Caesar, yields Kick(_, Caesar), still -1; the second saturater, Brutus, yields the Thought Kick(Brutus, Caesar).
    (cp. H+1 and OH-1 yielding H(OH))
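On the saturation metaphor, concepts are function-like: each saturater fills one slot. A toy rendering (mine; the extensions are invented):

    # Saturation as function application: one slot per saturater.
    SANG = {"Kermit"}
    KICKED = {("Brutus", "Caesar")}                 # pairs (kicker, kickee)

    Sang = lambda x: x in SANG                      # monadic: valence -1
    Kick = lambda y: lambda x: (x, y) in KICKED     # dyadic: valence -2, one slot at a time

    print(Sang("Kermit"))             # Sang(Kermit): a complete Thought -> True
    print(Kick("Caesar")("Brutus"))   # Kick(Brutus, Caesar) -> True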
But there is another mode of composition: conjoining two monadic (-1) concepts can yield a complex monadic (-1) concept:

    Brown(_), Cow(_)  ⇒ (via the M-joiner)  Brown(_) & Cow(_)

which can then be saturated: Brown(Aggie) & Cow(Aggie).
A D-joiner combines a Dyadic concept with a Monadic one, existentially closing the extra position, to yield a Monadic concept; M-joiners keep conjoining Monadic concepts:

    M-join: Monadic + Monadic ⇒ Monadic
        THREE(_) ^ GREEN(_) ^ FISH(_)
    D-join (with ∃): Dyadic + Monadic ⇒ Monadic
        ∃[UNDER(_, _) ^ BRIDGE(_)]
    and together:
        THREE(_) ^ GREEN(_) ^ FISH(_) ^ ∃[UNDER(_, _) ^ BRIDGE(_)]
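A sketch of the two-type combinatorics (my reconstruction; 'THREE' is omitted, since counting would need extra machinery): monadic concepts are one-place predicates, M-join is conjunction, and D-join existentially closes a dyadic concept's second position with a monadic restrictor.

    # Two types only: monadic <M> = one-place predicate, dyadic <D> = two-place.
    def m_join(p, q):                    # <M> x <M> -> <M>
        return lambda x: p(x) and q(x)

    def d_join(d, q, domain):            # <D> x <M> -> <M>, with existential closure
        return lambda x: any(d(x, y) and q(y) for y in domain)

    DOMAIN = {"nemo", "bridge1"}         # a toy model
    green  = lambda x: x == "nemo"
    fish   = lambda x: x == "nemo"
    bridge = lambda x: x == "bridge1"
    under  = lambda x, y: (x, y) == ("nemo", "bridge1")

    # GREEN(_) ^ FISH(_) ^ ∃[UNDER(_, _) ^ BRIDGE(_)]
    green_fish_under_a_bridge = m_join(m_join(green, fish),
                                       d_join(under, bridge, DOMAIN))
    print(green_fish_under_a_bridge("nemo"))   # True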
Instructions can be as composable as Concepts:

    M-join[ M-join(fetch@three, M-join(fetch@green, fetch@fish)),
            D-join(fetch@under, fetch@bridge) ]

Here the Fetch-M instructions (fetch@three, fetch@green, fetch@fish, fetch@bridge) fetch monadic concepts, the Fetch-D instruction (fetch@under) fetches a dyadic concept, and D-join includes the existential closure (∃).
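The instruction tree can itself be represented and executed, separating the recipe from the concepts it fetches. A toy continuation of the sketch above (my illustration, again omitting 'three'):

    # Instructions as nested tuples; executing one builds a monadic concept.
    DOMAIN = {"nemo", "bridge1"}
    LEXICON = {                        # Fetch targets: monadic and dyadic concepts
        "green":  lambda x: x == "nemo",
        "fish":   lambda x: x == "nemo",
        "bridge": lambda x: x == "bridge1",
        "under":  lambda x, y: (x, y) == ("nemo", "bridge1"),
    }

    def execute(instr):
        op = instr[0]
        if op == "fetch":              # Fetch-M / Fetch-D
            return LEXICON[instr[1]]
        left, right = execute(instr[1]), execute(instr[2])
        if op == "m-join":             # conjoin two monadic concepts
            return lambda x: left(x) and right(x)
        if op == "d-join":             # close a dyadic concept existentially
            return lambda x: any(left(x, y) and right(y) for y in DOMAIN)

    plan = ("m-join",
            ("m-join", ("fetch", "green"), ("fetch", "fish")),
            ("d-join", ("fetch", "under"), ("fetch", "bridge")))
    print(execute(plan)("nemo"))       # a green fish under a bridge -> True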
Human Infant (as seen by a linguist)

    Initial State: the Human Faculty of Language, plus Other Systems in their initial states
    + Experience-English ⇒ (Growth) ⇒ an Englished Faculty of Language, plus Other Systems in their matured states
    + Experience-Japanese ⇒ (Growth) ⇒ a Japanesed Faculty of Language, likewise

Each acquired Slang generates certain pronunciation-meaning pairs in a certain way; the naturally acquirable Slangs are the possible π-μ generators.

What are these meanings that we connect with pronunciations? What are word meanings? What are sentence meanings? Do these meanings exhibit various types? If so, which ones? What are the possible semantic types?
An acquired state of the faculty (Englished, Japanesed, …) is a LEXICON plus a COMBINATORICS, and it generates certain pronunciation-meaning pairs in a certain way.

We're making a huge empirical hypothesis if we assume that the combinatorics is minimal in Frege's sense (Apply Function) and that humans have a linguistic competence that supports lexical items of any Fregean type.