Linguistica: Unsupervised Learning of Natural Language Morphology Using MDL John Goldsmith Department of Linguistics The University of Chicago
The Goal: • To develop a program that learns the structure of words in any human language on the basis of a raw text. • No human supervision, except for the naïve creation of the text.
Value • To linguistic theory: reconstruct linguistic theory in a quantitative fashion • Practical value: • Information retrieval on databases of unrestricted languages • Develop stochastic morphologies rapidly: necessary for automatic speech recognition • A step towards syntax
The product • Currently a C++ program that functions as a Windows-based tool for corpus-based linguistics. • Available in beta version on web site.
What do we want? If you give the program a computer file containing Tom Sawyer, it should tell you that the language has a category of words that take the suffixes ing, s, ed, and NULL; another category that takes the suffixes 's, s, and NULL. If you give it Jules Verne, it tells you there's a category with the suffixes a, aient, ait, ant (chanta, chantaient, chantait, chantant).
Immediate queries: • Do you tell it what language to expect? No. • Does it have access to meaning? No. • Does that matter? No. • How much data does it need? ...
How much data do you need? • You get reasonable results fast, with 5,000 words, but results are much better with 50,000 and better still with 500,000 words (length of corpus). • 100,000 word tokens ~ 12,000 distinct words.
Game plan • Overview of MDL = Minimum Description Length, where • Description Length = Length of Analysis + Length of Compressed Data • Length of data as optimal compressed length of the corpus, given probabilities derived from morphology • Length of morphology in information theoretic terms • MDL is dead without heuristics…(then again, heuristics without MDL lack all finesse.)
Game plan (continued) • Heuristic 1: discover basic candidate suffixes of the language using weighted mutual information • Heuristic 2: use these to find regular signatures; • Now use MDL to correct errors generated by heuristics
Game plan (end) Why using MDL is closely related to measuring the (log of the) size of the space of possible vocabularies.
Turning to the problem of learning morphology...
For the purposes of version 1 of Linguistica, I will restrict myself to Indo-European languages, and in general to languages in which the average number of suffixes per word is not greater than 2. (We drop this requirement in Linguistica 2.)
Minimum Description Length (Rissanen 1989) Basic idea: A good analysis of a set of data is one that (1) extracts the structure found in the data, and (2) which does it without overfitting the data.
If you have a set of pointers to a bunch of objects, and a probability distribution over those pointers, then you may act as if the information-length of each pointer = -log prob(that pointer).
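A minimal sketch of this idea, with a toy distribution (the function name and counts are mine, for illustration only):

```python
import math

def pointer_lengths(counts):
    """Given token counts for a set of objects, return the length in bits of
    an optimal pointer to each object: -log2 of that object's probability."""
    total = sum(counts.values())
    return {obj: -math.log2(c / total) for obj, c in counts.items()}

# Toy example: four suffixes with token counts summing to 8.
lengths = pointer_lengths({"NULL": 4, "s": 2, "ed": 1, "ing": 1})
# p(ed) = 1/8, so the pointer to 'ed' is 3 bits long.
```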
So, for our entire corpus: the compressed length of each piece w is -log prob(w), so the total compressed length of the corpus is the sum of these lengths.
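The formula on the original slide is not reproduced in this text; reconstructed from the definitions just given (with [w] the token count of word w and prob(w) the probability the morphology assigns it), it is presumably:

$$\text{CompressedLength(corpus)} \;=\; \sum_{w \,\in\, \text{corpus}} -\log \text{prob}(w) \;=\; -\sum_{w \,\in\, \text{vocabulary}} [w]\,\log \text{prob}(w)$$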
Overfitting the data: • The Gettysburg Address can be compressed to 2 bits if you choose an eccentric encoding scheme. • But that encoding scheme (1) will be long, and (2) will do more poorly than an encoding scheme that does not waste its probability mass on the Gettysburg Address.
Even scientific theories bow to the exigencies of MDL... • in a sense. • A theory is penalized if it does not capture generalizations within the observational data (e.g., predicting future observations on the basis of the initial conditions); • It is penalized if it is more complex than it needs to be (Ockham’s Razor).
Minimum Description Length: For a given set of data D, choose the analysis Ai to minimize the function: Length(Compression of D using Ai) + Length (Ai)
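Stated as a formula (my restatement of the prose above):

$$A^{*} \;=\; \arg\min_{A_i}\ \Big[\ \text{Length}\big(\text{Compression of } D \text{ using } A_i\big) \;+\; \text{Length}(A_i)\ \Big]$$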
Compressed length of the data using Ai? The data is the corpus. The compressed length of the corpus is just the sum, over all the words of the corpus, of -log prob(word), as above.
Our morphology has two necessary properties: • It must assign a probability to every word of the language (so that we can speak of its ability to compress the corpus) -- we’ll return to this immediately; • And it must have a well-defined length.
Morphology assigns a frequency: • If the morphology assigns no internal structure to a word (John, the, …), it assigns the observed frequency to the word. • If the morphology analyzes a word (dog+s), it assigns a frequency to that word on the basis of 3 things:
1. The frequency of the suffixal pattern in which the word is found (dog-s, dog-’s, dog-NULL); 2. The frequency of the stem (dog); 3. The frequency of the suffix (-s) within that pattern (-s, -’s, -NULL)
Terminology: The pattern of suffixes that a stem takes is its signature: • NULL.ed.ing.s • NULL.er.est.ness
Frequency of an analyzed word: W is analyzed as belonging to signature σ, stem T, and suffix F. [x] means the token count of x in the corpus, and [W] is the total number of words. What we actually care about is the log of this frequency.
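The slide's formula is not reproduced in this text; reconstructing it from the three factors listed on the previous slide (σ the word's signature, T its stem, F its suffix within that signature), it is presumably:

$$\text{freq}(W) \;=\; \frac{[\sigma]}{[W]}\cdot\frac{[T]}{[\sigma]}\cdot\frac{[F \in \sigma]}{[\sigma]},
\qquad
-\log \text{freq}(W) \;=\; \log\frac{[W]}{[\sigma]} + \log\frac{[\sigma]}{[T]} + \log\frac{[\sigma]}{[F \in \sigma]}$$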
So far: • The behavior we demand of our morphology is that it assign a frequency to any given word; we need this so that we can evaluate the particular morphology’s goodness as an analysis, i.e., as a compressor.
Next, let’s see how to measurethe length of a morphology A morphology is a set of 3 things: • A list of stems; • A list of suffixes; • A list of signatures with the associated stems.
Let’s measure the list of suffixes. A list of suffixes consists of: • a piece of punctuation telling us how long the list is (of length log(size)); • a list of pointers to the suffixes (each pointer of size -log freq(suffix)); • a concatenation of the letters of the suffixes (we could compress this, too, or just count the number of letters).
[Diagram: a suffix list with four entries, ed, s, NULL, ing. The punctuation has length log(4); the pointer to ed has length 3, because p(ed) = 1/8; the string ed has length 2, because it is 2 letters long.]
Same for the stem list: an indication of the size of the list (of length log(size)); a list of pointers to the stems, each pointer of length -log freq(stem); a concatenation of the stems (the sum of the lengths of the stems, in letters).
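A minimal sketch of this bookkeeping, assuming counts measured over the corpus and a cost of one unit per letter (the function and variable names are mine):

```python
import math

def list_description_length(counts, total_tokens):
    """Description length, in bits, of a morpheme list (suffixes or stems):
    a 'punctuation' term for the list size, one pointer per entry of length
    -log2 freq(entry), plus the raw letter count of the entries."""
    size_term = math.log2(len(counts))
    pointer_term = sum(-math.log2(c / total_tokens) for c in counts.values())
    letter_term = sum(len(morpheme) for morpheme in counts)
    return size_term + pointer_term + letter_term

# Toy suffix list; "" stands for the NULL suffix.
suffixes = {"": 400, "s": 250, "ed": 100, "ing": 50}
bits = list_description_length(suffixes, total_tokens=800)
```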
Size of the signature list What is the size of an individual signature? It consists of two subparts: • a list of pointers to stems, and a list of pointers to suffixes. • And we already know how to measure the size of a list of pointers.
An individual signature for the words dog, dogs, cat, cats, glove, gloves: the stems dog, cat, and glove, each pointing to the suffixes NULL and s (the signature NULL.s).
Length of a signature: the sum of the lengths of the pointers to its stems, plus the sum of the lengths of the pointers to its suffixes.
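In symbols (my notation: λ(x) is the length of a pointer to x, i.e. -log of the frequency of the item pointed to):

$$\text{Length}(\sigma) \;=\; \sum_{t \,\in\, \text{stems}(\sigma)} \lambda(t) \;+\; \sum_{f \,\in\, \text{suffixes}(\sigma)} \lambda(f)$$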
I’m glossing over an important natural-language complexity: recursive structure, in which a word is built from another word, as in find + ing + s. (This has significant effects on the distribution of probability mass over all the words.)
Signature component: the list of pointers to the signatures. <X> indicates the number of distinct elements in X.
MDL needs heuristics • MDL does only one thing: it tells you which of two analyses is better. • It doesn’t tell you how to find those analyses.
Overall strategy • Use initial heuristic to establish sets of signatures and sets of stems. • Use heuristics to propose various corrections. • Use MDL to decide on whether proposed corrections are to be accepted or refused.
Initial Heuristic 1. Take the top 100 n-grams, ranked by weighted mutual information, as candidate morphemes of the language.
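The exact weighting is not given on this slide; the sketch below shows one common form of the heuristic, scoring each word-final n-gram by its frequency times the log-ratio of its observed frequency to the frequency expected from its letters alone (the names and details are my assumptions, not necessarily what Linguistica does):

```python
import math
from collections import Counter

def candidate_suffixes(words, max_len=6, top_n=100):
    """Rank word-final n-grams by a weighted mutual information score:
    freq(ngram) * log(freq(ngram) / product of its letter frequencies)."""
    letter_counts = Counter(c for w in words for c in w)
    n_letters = sum(letter_counts.values())
    final_counts = Counter(w[-k:] for w in words
                           for k in range(1, max_len + 1) if len(w) > k)
    n_finals = sum(final_counts.values())

    def score(ngram, count):
        observed = count / n_finals
        expected = math.prod(letter_counts[c] / n_letters for c in ngram)
        return observed * math.log(observed / expected)

    ranked = sorted(final_counts.items(), key=lambda kv: -score(*kv))
    return [ngram for ngram, _ in ranked[:top_n]]
```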
If a word ends in a candidate morpheme, split it there to form a candidate stem. For example, sanity: • sanit + y • sanity (no split) • san + ity
How to choose in ambiguous cases? This turns out to be a lot harder than you’d think, given what I’ve said so far. The short answer is a heuristic: maximize an objective function over the candidate splits. There’s no good short explanation for this, except this: the frequency of a single letter is a very bad first approximation of its likelihood of being a morpheme.
For each stem, find the suffixes it appears with • This forms its signature: • NULL.ed.ing.s, for example. Now eliminate all signatures that appear only once. This gives us an excellent first guess for the morphology.
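A minimal sketch of this step, assuming each stem already comes with the set of suffixes it was observed with, and reading "appear only once" as "exhibited by only one stem" (the function name and input format are mine):

```python
from collections import defaultdict

def build_signatures(stem_suffixes):
    """stem_suffixes maps each stem to the set of suffixes it appears with
    (the empty string standing for NULL).  Group stems by their suffix set,
    i.e. their signature, and drop signatures exhibited by only one stem."""
    by_signature = defaultdict(list)
    for stem, suffixes in stem_suffixes.items():
        signature = ".".join(sorted(s if s else "NULL" for s in suffixes))
        by_signature[signature].append(stem)
    return {sig: stems for sig, stems in by_signature.items() if len(stems) > 1}

# e.g. {"jump": {"", "ed", "ing", "s"}, "walk": {"", "ed", "ing", "s"}}
# both stems land in the signature "NULL.ed.ing.s".
```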
Stems with their signatures:
abrupt: NULL.ly.ness
abs: ence.ent
absent: -minded.NULL.ia.ly
absent-minded: NULL.ly
absentee: NULL.ism
(French:)
absolu: NULL.e.ment
absorb: ait.ant.e.er.é.ée
abus: ait.er
abîm: e.es.ée
Now build up the signature collection... Top 10, 100K words:
1. .NULL.ed.ing. 65 1214
2. .NULL.ed.ing.s. 27 1464
3. .NULL.s. 290 8184
4. .'s.NULL.s. 27 2645
5. .NULL.ed.s. 26 541
6. .NULL.ly. 128 2124
7. .NULL.ed. 87 767
8. .'s.NULL. 75 3655
9. .NULL.d.s. 14 510
10. .NULL.ing. 62 983
Verbose signature... .NULL.ed.ing. 58 heap check revolt plunder look obtain escort proclaim arrest gain destroy stay suspect kill consent knock track succeed answer frighten glitter …
Find strictly regular signatures: • A signature is strictly regular if it contains more than one suffix and is found with more than one stem. • A suffix found in a strictly regular signature is a regular suffix. • Keep only signatures composed of regular suffixes (= regular signatures).
Examples of non-regular signatures Only one stem is found for each of these signatures: • ch.e.erial.erials.rimony.rons.uring • el.ezed.nce.reupon.ther
Prefixes Just the same, in mirror-image style. Perform either on stems or on words.
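One way to read "mirror-image style", sketched here under that assumption: reverse every word, run the suffix heuristic, and reverse the resulting n-grams back into prefixes (the helper reuses the candidate_suffixes sketch from earlier):

```python
def candidate_prefixes(words, top_n=100):
    """Find candidate prefixes by mirror-imaging: candidate 'suffixes' of the
    reversed words, read backwards, are candidate prefixes of the originals."""
    reversed_words = [w[::-1] for w in words]
    return [ngram[::-1]
            for ngram in candidate_suffixes(reversed_words, top_n=top_n)]
```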
English prefixes:
.NULL.re. 8
.NULL.dis. 7
.NULL.de. 4
.NULL.un. 4
.NULL.con. 3
.NULL.en. 3
.NULL.al. 3
.NULL.t. 3
.NULL.con.ex. 2