This presentation asks whether Russian has full paradigms or only partially overlapping groups of forms, and investigates how speakers of languages with complex inflectional morphology recognize and produce forms they have never encountered. Theoretical background, hypotheses, and evidence from computational experiments and linguistic analysis are presented.
Does Russian have full paradigms?
Laura A. Janda, UiT The Arctic University of Norway
Francis M. Tyers, Higher School of Economics, Moscow
The point: Is partial input enough to learn a whole system? The Paradigm Cell Filling Problem: native speakers of languages with complex inflectional morphology routinely recognize and produce forms that they have never heard or seen. How is this possible?
Theoretical Background
Word and Paradigm Morphology (Blevins 2016)
The Paradigm Cell Filling Problem (Ackerman et al. 2009)
Generating paradigms with a recurrent neural network (SIGMORPHON 2016 & 2017 Shared Tasks; Malouf 2016, 2017)
Hypotheses and Evidence All of our evidence is based on SynTagRus, with > 1M hand-annotated tokens Hypotheses • Russian does not contain paradigms, either in the aggregate or in the minds of speakers • Instead, there are relationships among forms that constitute partially overlapping groups, making it possible to guess any potential form Evidence • Russian nouns: the relationship between paradigm size and the number of full paradigms • Russian nouns: correspondence analysis showing partially overlapping subsets of forms • Russian nouns, verbs, and adjectives: a computational experiment comparing training on full paradigms vs. single forms
Definition of Terms and Theoretical Premises Word form: inflected forms such as the forms of Russian ‘word’: slóvo [slóvǝ], slóva [slóvǝ], slóvu [slóvu], slóvom [slóvǝm], slóve [slóvji], slová [slʌvá], slóv [slóf], slovám [slʌvám], slovámi [slʌvámji], slováx [slʌváx] Lexeme: an abstraction that unifies a set of inflectionally-related word forms with the same meaning, like slovo ‘word’ Paradigm: the set of word forms associated with a lexeme Cognitive Linguistics meets Word and Paradigm: word forms are constructions, sets of word forms show radial category structure
Paradigm size and number of full paradigms • Full paradigms of word forms are rarely encountered in corpora • As the size of the paradigm increases, the percentage of lexemes for which all possible word forms are attested decreases • Russian is somewhere in the middle of this scale • For languages claimed to have truly large paradigms, there may be no lexeme that is ever attested in all of its possible word forms, and even some word forms that have no attestations at all
Relationship between paradigm size and number of full paradigms for nouns
Relationship between paradigm size and number of complete paradigms for nouns Because Zipf’s Law holds no matter how large the corpus, these numbers will never change substantially as more data is added
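The attestation gap can be illustrated with a toy simulation (all parameters invented for illustration, not the SynTagRus figures): lexeme frequencies follow a Zipfian distribution, each token realizes one cell of a 12-cell paradigm, and we count how many lexemes are ever attested in every cell.

```python
import random
from collections import defaultdict

random.seed(0)

PARADIGM_SIZE = 12      # e.g. a Russian noun: 6 cases x 2 numbers
NUM_LEXEMES = 1000
NUM_TOKENS = 100_000

# Zipfian weights: the frequency of the lexeme of rank r is proportional to 1/r
lexeme_weights = [1 / r for r in range(1, NUM_LEXEMES + 1)]
# Cell usage is also skewed: some case/number values are far more common
cell_weights = [1 / c for c in range(1, PARADIGM_SIZE + 1)]

# Each simulated token is one word form: a (lexeme, paradigm cell) pair
attested = defaultdict(set)
lexemes = random.choices(range(NUM_LEXEMES), weights=lexeme_weights, k=NUM_TOKENS)
cells = random.choices(range(PARADIGM_SIZE), weights=cell_weights, k=NUM_TOKENS)
for lex, cell in zip(lexemes, cells):
    attested[lex].add(cell)

full = sum(1 for seen in attested.values() if len(seen) == PARADIGM_SIZE)
print(f"lexemes with a full paradigm: {full} / {NUM_LEXEMES}")
```

Increasing NUM_TOKENS raises the count of full paradigms only slowly, because the Zipfian tail keeps supplying lexemes (and cells) that are rarely or never attested.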
Partially overlapping subsets of word forms Grammatical profile: Frequency distribution of word forms • Grammatical profiles of five types of lexemes: • Masculine inanimate ending in consonant • Masculine animate ending in consonant • Neuter inanimate • Feminine inanimate (II) ending in –a/-я • Feminine inanimate (III) ending in –ь Frequency threshold ≥ 50
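A grammatical profile as defined above can be computed directly from annotated tokens. Here is a minimal sketch; the lemmas and tags are invented (and transliterated) for illustration, standing in for SynTagRus-style annotation.

```python
from collections import Counter

# Toy annotated tokens: (lemma, case/number tag) pairs
tokens = [
    ("vesna", "Ins.Sg"), ("vesna", "Ins.Sg"), ("vesna", "Nom.Sg"),
    ("vesna", "Ins.Sg"), ("vesna", "Gen.Sg"),
    ("chempion", "Ins.Sg"), ("chempion", "Nom.Pl"), ("chempion", "Nom.Sg"),
]

def grammatical_profile(lemma, tokens):
    """Relative frequency distribution of word forms for one lexeme."""
    counts = Counter(tag for lem, tag in tokens if lem == lemma)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

# The Instrumental Singular dominates the toy profile of vesna 'spring'
# (cf. the adverbial use vesnoj 'in the spring')
print(grammatical_profile("vesna", tokens))
```

In the study proper, such profiles are computed for all lexemes above the frequency threshold and then compared via correspondence analysis.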
Examples of grammatical profiles (raw frequencies are shown; calculations were done on relative frequencies)
Visualizing Russian Noun Paradigms Key: bold >20%, plain >10%, grey 1–9%, (blank) unattested
More High-Frequency Russian Nouns Key: bold >20%, plain >10%, grey 1–9%, (blank) unattested
We will look at the correspondence analysis for each group, starting with the masculine animates
Correspondence Analysis of Grammatical Profiles Input: 95 vectors (one per lexeme) of word-form frequencies. Each vector tells how many attestations were found for each case/number value (Nominative Singular, Genitive Singular, etc.): rows are lexemes, columns are case/number values. Process: Distance matrices are calculated for rows and columns and represented in a multidimensional space defined by Factors, which are mathematical constructs. Factor 1 is the dimension that accounts for the largest amount of variance in the data, followed by Factor 2, etc. Output: A plot of the first two (most significant) Factors, with Factor 1 as the x-axis and Factor 2 as the y-axis. You can think of Factor 1 as the strongest parameter that splits the data into two groups (negative vs. positive values on the x-axis).
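The Input/Process steps above amount to standard correspondence analysis, which can be sketched via an SVD of the standardized residuals. The counts below are invented toy data (not the actual 95-lexeme matrix): two pairs of "lexemes" with contrasting profiles.

```python
import numpy as np

# Toy contingency table: rows = lexemes, columns = case/number values
N = np.array([
    [30.,  5., 10.,  2.],
    [ 4., 25.,  6., 20.],
    [28.,  6.,  9.,  3.],
    [ 5., 22.,  7., 18.],
])

P = N / N.sum()            # correspondence matrix
r = P.sum(axis=1)          # row masses (lexemes)
c = P.sum(axis=0)          # column masses (case/number values)

# Standardized residuals; the singular axes are the "Factors"
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Principal row coordinates: Factor 1 is column 0, Factor 2 is column 1
row_coords = (U * sigma) / np.sqrt(r)[:, None]
print(np.round(row_coords[:, :2], 3))
```

On these toy counts, Factor 1 separates rows 1 and 3 (similar profiles) from rows 0 and 2, mirroring how the real analysis groups lexemes with overlapping subsets of word forms.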
Grammatical profiles relate to the meanings of the lexemes and their typical grammatical constructions: аналитики отмечают, что ‘analysts note that’; захват/спасение/расстрел заложников ‘seizure/rescue/execution of hostages’; стать/быть чемпионом ‘to become/be a champion’; корреспондент + Gen; сказать/сообщить корреспонденту ‘to tell/inform a correspondent’
Feminine II (minus рамка ‘frame’)
весной ‘in the spring’ зимой ‘in the winter’ с просьбой ‘with a request’ за чей-то спиной ‘behind someone’s back’ борьба с коррупцией ‘fighting corruption’
Computational experiment: nouns, verbs, adjectives • Based on an ordered list of the most frequent forms in SynTagRus • Machine learning: given the n most frequent forms (the training data), predict the next 100 most frequent forms (the testing data) • n = 100, 200, 300, 400, 500, … up to 5400, when SynTagRus runs out of data • Comparison of learning with full paradigms vs. learning with single forms • No overlap between training and testing data • This means that testing is always on previously unseen lemmas • Ours is the only experiment that: • takes frequency into account • compares full-paradigm vs. single-form training • (nearly all other experiments use only full-paradigm training)
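The sliding train/test scheme above can be sketched as follows. The form list and the step sizes are placeholders, and the additional constraint that test lemmas never occur in training is noted but not implemented in this sketch.

```python
# Frequency-ranked word forms, most frequent first (placeholder data;
# in the real experiment these come from SynTagRus)
ranked_forms = [f"form_{i}" for i in range(5400)]

def splits(ranked, step=100, limit=5400):
    """Yield (training, testing) pairs: the top-k forms as training data,
    the next `step` forms as testing data. In the real experiment, lemmas
    in the test set never occur in the training set."""
    for k in range(step, limit, step):
        test = ranked[k:k + step]
        if not test:
            break
        yield ranked[:k], test

for train, test in splits(ranked_forms):
    pass  # here one would train a model on `train` and evaluate on `test`

# First split: 100 training forms, forms ranked 100-199 for testing
first_train, first_test = next(splits(ranked_forms))
print(len(first_train), len(first_test))  # 100 100
```

Each successive split grows the training data by 100 forms while the test set stays fixed at 100, so accuracy can be compared across input sizes.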
So the model that gets the most input should be the most successful, right?
100-200: Both models fail completely
300-1100: Better performance with full paradigms, but accuracy is low for both
1200-1700: Both models perform equally
1800-5400: The single-forms model outperforms the full-paradigms model
Conclusions • A given lexeme typically appears in only a handful of word forms • Word forms are likely learned as partially overlapping sets of related items • Learning is potentially enhanced by focusing only on the most typical word forms attested for given lexemes • It is possible to extract patterns that relate to the meaning of a lexeme and the constructions it appears in, and to use these to strategically target learning • For nouns, number is the most strongly distinguished dimension; the locative and instrumental cases are the most distinct
A vision of the future? • Dictionaries that cite the most frequent word forms of lexemes, along with the constructions in which they typically appear • Learning materials that focus on the typical word forms, avoiding word forms no one is ever likely to use