Training Paradigms for Correcting Errors in Grammar and Usage
Alla Rozovskaya and Dan Roth, University of Illinois at Urbana-Champaign
NAACL-HLT 2010, Los Angeles, CA
Error correction tasks (notation: writer's word*/correction)
• Context-sensitive spelling mistakes: I would like a peace*/piece of cake.
• English as a Second Language (ESL) mistakes:
• Mistakes involving prepositions: To*/in my mind, this is a serious problem.
• Mistakes involving articles: Nearly 30,000 species of plants are under the*/a serious threat of disappearing. Laziness is the engine of the*/<NONE> progress.
The standard training paradigm for error correction
• Example: correcting article mistakes [Izumi et al., '03; Han et al., '06; De Felice and Pulman, '08; Gamon et al., '08]
• Cast the problem as a classification task
• Provide a set of candidates: {a, the, NONE}
• Task: select the appropriate candidate in context
• Define features based on the surrounding context and train a classifier on correct (native) data
• Example: Laziness is the engine of [the] progress → Features: w1B=of, w1A=progress, w2Bw1B=engine-of, …
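As a rough illustration of this setup, here is a minimal sketch (not the authors' code) of how training examples can be harvested from correct text under this paradigm. The window size and helper names are my own, and NONE examples are omitted because they require detecting noun-phrase boundaries.

```python
# Harvest (features, label) training examples from correct native text:
# every article occurrence becomes one example whose label is the article
# the writer used. NONE positions are skipped (they need an NP chunker).
CANDIDATES = {"a", "the"}

def selection_examples(tokens):
    """Yield (features, label) pairs from a tokenized native sentence."""
    for i, tok in enumerate(tokens):
        if tok.lower() in CANDIDATES:
            before = tokens[max(0, i - 2):i]
            after = tokens[i + 1:i + 3]
            yield {
                "w1B": before[-1] if before else "<S>",  # word 1 before
                "w1A": after[0] if after else "</S>",    # word 1 after
                "w2Bw1B": "-".join(before),              # composite feature
            }, tok.lower()

for feats, label in selection_examples(
        "Laziness is the engine of the progress".split()):
    print(label, feats)
```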
The standard training paradigm for error correction • Correcting article mistakes [Izumi et al., ’03; Han et al., ’06; De Felice and Pulman, ’08; Gamon et al., ’08] • Correcting preposition mistakes [Eeg-Olofsson and Knutsson, ’03; Gamon et al., ’08; Tetreault and Chodorow, ’08, others] • Context-sensitive spelling correction [Golding and Roth, ’96,’99; Carlson et al., ’01, others]
But this is a paradigm for a selection task!
• Selection task (e.g., WSD):
• We have a set of candidates
• Task: select the correct candidate from the set
• The selection paradigm is appropriate for WSD, because there is no proposed candidate in the context
The typical error correction training paradigm is the paradigm of a selection task! Why?
• Training data is easy to obtain – correct (native) text can be used
• No annotation is needed
Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for the correction task? • The ESL corpus • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions
Selection tasks versus error correction tasks
• Article selection task: Nearly 30,000 species of plants are under ___ serious threat of disappearing. Set of candidates: {a, the, NONE}
• Article correction task: Nearly 30,000 species of plants are under the serious threat of disappearing. Set of candidates: {a, the, NONE}, plus the writer's article ("the") as the source
Correction versus selection
• Article selection classifier: accuracy on native English data is 87-90%
• Baseline for the article selection task: 60-70% (always choose the most common article)
• On non-native data, the writer's own selection is already correct over 90% of the time (error rate = 10%), so keeping it yields very good results
• Conclusion: we need to use the proposed candidate, or we will make more mistakes than there are in the data
• With a selection model, the proposed candidate can still be used via a threshold at prediction time
• Can we do better if we use the proposed candidate in training?
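The thresholding idea mentioned above can be made concrete with a small sketch (my own formulation; the scores and threshold value are purely illustrative): keep the writer's article unless the classifier prefers another candidate by a large enough margin.

```python
# Keep the writer's article unless the classifier is much more confident
# in some other candidate. Illustrative sketch, not the paper's system.
def correct_with_threshold(scores, source, threshold=0.9):
    """scores: dict mapping each candidate article to a classifier score."""
    best = max(scores, key=scores.get)
    if best != source and scores[best] - scores[source] > threshold:
        return best        # confident enough to flag a mistake
    return source          # otherwise trust the writer (~90% correct)

print(correct_with_threshold({"a": 0.2, "the": 1.5, "NONE": 0.1}, source="a"))
```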
The proposed article is a useful resource
• 90% of articles are used correctly, and article mistakes are not random
• We want to use the proposed article in training
• Selection paradigm: can we use the proposed candidate in training? No: in native data, the proposed article always corresponds to the label
How can we use the proposed article in training?
• Using annotated data for training: Laziness is the engine of <the, NONE> progress. (source = the, label = NONE)
• But annotating data for training is expensive
• We need a method to generate training data for the error correction task without expensive annotation
Contributions of this work
• We propose a method to generate training data for the error correction task, avoiding expensive data annotation
• We use the generated data to train classifiers in the paradigm of correction, with the proposed candidate in training
• We show that the error correction training paradigms are superior to the selection training paradigm
Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for correction? • The ESL corpus • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions
The annotated ESL corpus
• Annotated a corpus of ESL sentences (60K words)
• Extracted from two corpora of ESL essays: ICLE [Granger et al., '02] and CLEC [Gui and Yang, '03]
• Sentences written by ESL students with 9 different first languages
• Each sentence is fully corrected and error-tagged by native English speakers
• Experiments: data from Chinese, Czech, and Russian speakers
The annotated ESL corpus
• [Screenshot: a sentence being annotated with the annotation tool]
The annotated ESL corpus
• Each sentence is fully corrected and error-tagged
• Before annotation: "This time asks for looking at things with our eyes opened."
• With annotation comments: "This time @period, age, time@ asks $us$ for <to> looking *look* at things with our eyes opened."
• After annotation: "This period asks us to look at things with our eyes opened."
• For details about the annotation, see [Rozovskaya and Roth, '10, NAACL-BEA5]
Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for correction? • The ESL data used in the evaluation • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions
Training paradigms for the error correction task
• Key idea: we want to be able to use the proposed candidate in training
• Generate artificial article errors in native training data; the source article can then be used in training as a feature
• Constraint: we want the training data to be similar to non-native text
• Other works that use artificial errors do not take into account error patterns in non-native data [Sjöbergh and Knutsson, '05; Brockett et al., '06; Foster and Andersen, '09]
Training paradigms for the error correction task
• We examine article errors in the annotated data and add errors selectively, mimicking:
• the article distribution
• the error rate
• the error patterns of the non-native text
Error rates in article usage
• Article mistakes are very common among non-native speakers of English
• TOEFL essays by Russian, Chinese, and Japanese speakers: 13% of noun phrases have article mistakes [Han et al., '06]
• Essays by advanced Chinese, Czech, and Russian learners of ESL: 10% of noun phrases have article mistakes
Distribution of articles in the annotated ESL data
• [Chart: distribution of a/the/NONE in the ESL data]
• This error rate sets the baseline for the task at around 90%
Distribution of article errors in the annotated ESL text
• [Chart: article confusion patterns by first language]
• Errors are dependent on the first language of the writer
• Not all confusions are equally likely
Characteristics of the non-native data: Summary
• Article distribution
• Error rates
• Error patterns of the non-native text
• We use this knowledge to generate errors for the error correction training paradigms
Error correction training paradigm 1: General
• Add errors uniformly at random with error rate conf, where conf ∈ {5%, 10%, 12%, 14%, 16%, 18%}
• Example, with error rate = 10%: each article is replaced by each of the other two candidates with probability 0.05:
replace(the, a, 0.05), replace(the, NONE, 0.05), replace(a, the, 0.05), replace(a, NONE, 0.05), replace(NONE, a, 0.05), replace(NONE, the, 0.05)
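A minimal sketch of this uniform corruption scheme, assuming the articles have already been extracted as a sequence of labels (function and variable names are mine):

```python
import random

ARTICLES = ["a", "the", "NONE"]

def inject_uniform_errors(labels, conf=0.10, rng=random):
    """Paradigm 1 (General), sketched: with total probability `conf`,
    replace each correct article with one of the other two candidates
    uniformly at random (each specific confusion gets conf / 2)."""
    pairs = []
    for gold in labels:
        source = gold
        if rng.random() < conf:
            source = rng.choice([a for a in ARTICLES if a != gold])
        pairs.append((source, gold))  # (simulated writer's article, label)
    return pairs

print(inject_uniform_errors(["the", "a", "NONE", "the"], conf=0.10))
```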
Error correction training paradigm 2: ArticleDistr
• Mimic the distribution of the ESL source articles in training
• Example, for the: replace(the, a, p1), replace(the, NONE, p2)
• A linear program is set up to find p1 and p2, subject to the constraints: (1) ProbTrain(the) = ProbCzech(the); (2) p1, p2 ≥ minConf, where minConf ∈ {0.02, 0.03, 0.04, 0.05}
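One way to realize such a linear program is sketched below. This is my own small formulation of the stated constraints over all six confusion probabilities, with made-up native and target distributions rather than the paper's data or exact program.

```python
# Paradigm 2 sketched as a linear program. Variables: replacement
# probabilities p[s -> t] for ordered pairs of distinct articles, each
# bounded below by minConf, chosen so the corrupted training distribution
# matches a target article distribution. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

articles = ["a", "the", "NONE"]
pairs = [(s, t) for s in articles for t in articles if s != t]

q = np.array([0.20, 0.35, 0.45])       # article distribution in native text
target = np.array([0.23, 0.30, 0.47])  # ESL distribution to mimic
min_conf = 0.02

# ProbTrain(t) = q[t] * (1 - sum_t' p[t->t']) + sum_s q[s] * p[s->t]
A_eq = np.zeros((2, len(pairs)))       # 2 constraints suffice (probs sum to 1)
for row, t in enumerate(articles[:2]):
    for col, (src, dst) in enumerate(pairs):
        if src == t:
            A_eq[row, col] -= q[articles.index(src)]  # mass leaving t
        if dst == t:
            A_eq[row, col] += q[articles.index(src)]  # mass arriving at t
b_eq = target[:2] - q[:2]

c = np.repeat(q, 2)  # minimize total corrupted mass (an arbitrary objective)
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(min_conf, 1.0)] * len(pairs), method="highs")
if res.success:
    print(dict(zip(pairs, np.round(res.x, 4))))
```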
Error correction training paradigm 3: ErrorDistr
• Add article mistakes to mimic the error rate and confusion patterns observed in the ESL data
• Example, Chinese: error rate 9.2%
• [Table: article confusions by error type]
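A sketch of this paradigm, with invented confusion probabilities, since the real tables come from the annotated ESL corpus:

```python
import random

# Paradigm 3 (ErrorDistr), sketched: corrupt each article according to a
# language-specific confusion table estimated from the annotated ESL data.
# These probabilities are invented for illustration only.
CONFUSIONS = {
    # correct article -> {article the simulated writer produces: probability}
    "the":  {"a": 0.01, "NONE": 0.08},
    "a":    {"the": 0.04, "NONE": 0.06},
    "NONE": {"the": 0.07, "a": 0.02},
}

def inject_confusion_errors(labels, rng=random):
    pairs = []
    for gold in labels:
        r, source = rng.random(), gold
        for wrong, p in CONFUSIONS[gold].items():
            if r < p:
                source = wrong      # sample this specific confusion
                break
            r -= p
        pairs.append((source, gold))
    return pairs

print(inject_confusion_errors(["the", "a", "NONE", "the"]))
```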
Error correction training paradigms: Summary
• Key idea: generate artificial errors in native training data
• We can then use the source article in training as a feature
• Important constraint: the errors mimic the error patterns of the ESL text, both the error rate and the distribution of the different article confusions
Error correction training paradigms: Costs
• The 3 error generation methods use different knowledge, and so have different costs
• Paradigm 1 (error rate in the data)
• Paradigm 2 (distribution of articles in the ESL data) – no annotation required
• Paradigm 3 (error rate and article confusions) – requires annotated data (the most costly method)
Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for correction? • The ESL data used in the evaluation • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions
Experimental setup
• Train a TrainClean classifier using the selection paradigm
• Train 3 classifiers With artificial Errors (TWE classifiers), one per error generation method
• All classifiers are trained online with the Averaged Perceptron algorithm
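The averaged perceptron itself is standard; below is a compact multiclass sketch of the update and averaging (a textbook formulation, not the authors' implementation).

```python
from collections import defaultdict

class AveragedPerceptron:
    """Multiclass averaged perceptron over sparse dict features."""

    def __init__(self, classes):
        self.classes = list(classes)
        self.w = defaultdict(float)      # (class, feature) -> current weight
        self.total = defaultdict(float)  # running sums for averaging
        self.t = 0                       # number of examples seen

    def score(self, feats, cls):
        return sum(self.w[(cls, f)] * v for f, v in feats.items())

    def predict(self, feats):
        return max(self.classes, key=lambda c: self.score(feats, c))

    def update(self, feats, gold):
        self.t += 1
        guess = self.predict(feats)
        if guess != gold:                # mistake-driven additive update
            for f, v in feats.items():
                self.w[(gold, f)] += v
                self.w[(guess, f)] -= v
        for key, val in self.w.items():  # accumulate for the final average
            self.total[key] += val

    def average(self):
        """Replace weights by their average over all updates."""
        self.w = defaultdict(float,
                             {k: v / self.t for k, v in self.total.items()})
```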
Features
• Features are based on the 3-word window around the target: If we take [a] brief look back → if-IN we-PRP take-VBP [a] brief-JJ look-NN back-RB
• Word features: headWord=look, w3B=if, w2B=we, w1B=take, w1A=brief, etc.
• Tag features: p3B=IN, p2B=PRP, etc.
• Composite features: w2Bw1B=we-take, w1Bw1A=take-brief, etc.
• Source feature (the writer's article) – TWE systems only
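A sketch of this feature template (helper names are mine; the headWord feature is omitted since it requires finding the head of the following noun phrase):

```python
# Binary indicator features over a 3-word window of (word, POS) pairs.
def extract_features(tagged, i, source=None):
    """tagged: list of (word, POS) pairs; i: index of the target article;
    source: the writer's article, used by the TWE systems only."""
    def w(k):
        j = i + k
        return tagged[j][0] if 0 <= j < len(tagged) else "<PAD>"
    def p(k):
        j = i + k
        return tagged[j][1] if 0 <= j < len(tagged) else "<PAD>"
    names = [
        "w3B=" + w(-3), "w2B=" + w(-2), "w1B=" + w(-1), "w1A=" + w(1),
        "p3B=" + p(-3), "p2B=" + p(-2),
        "w2Bw1B=" + w(-2) + "-" + w(-1),   # composite features
        "w1Bw1A=" + w(-1) + "-" + w(1),
    ]
    if source is not None:
        names.append("source=" + source)   # TWE systems only
    return {name: 1.0 for name in names}   # binary indicator features

tagged = [("if", "IN"), ("we", "PRP"), ("take", "VBP"), ("a", "DT"),
          ("brief", "JJ"), ("look", "NN"), ("back", "RB")]
print(sorted(extract_features(tagged, 3, source="a")))
```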
Performance on the data by Russian speakers
• [Chart: accuracy of TrainClean vs. the three TWE classifiers]
• All TWE classifiers outperform the selection paradigm (TrainClean) for all languages
• On average, TWE (ErrorDistr) provides the best improvement
Conclusions
• We argued that the error correction task should be studied in the error correction paradigm rather than the current selection paradigm: the baseline for the error correction task is high, and mistakes are not random
• We have proposed a method to generate training data for error correction tasks using artificial errors; the artificial errors mimic the error rates and error patterns of the non-native text
• The method allows us to train with the proposed candidate, in the paradigm of error correction
• The error correction training paradigms are superior to the typical selection training paradigm
Thank you! Questions?