Text Correction using Domain Dependent Bigram Models from Web Crawls
Christoph Ringlstetter, Max Hadersbeck, Klaus U. Schulz, and Stoyan Mihov
Two recent goals of text correction
• Use of powerful language models: word frequencies, n-gram models, HMMs, probabilistic grammars, etc. (Keenan et al. 91, Srihari 93, Hong & Hull 95, Golding & Schabes 96, ...)
• Document centric and adaptive text correction: prefer words of the text as correction suggestions for unknown tokens (Taghva & Stofsky 2001, Nartker et al. 2003, Rong Jin 2003, ...)
Here: use of document centric language models (bigrams)
Use of document centric bigram models
Idea
Text T: ... Wk-1 Wk Wk+1 ...   with Wk ill-formed
Correction candidates for Wk: V1, V2, ..., Vn
Substituting a candidate Vi yields the bigrams Wk-1 Vi and Vi Wk+1.
Prefer those correction candidates V where the bigrams Wk-1 V and V Wk+1 "are natural, given the text T".
Problem
How to measure the "naturalness of a bigram, given a text"?
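Read on its own, the idea amounts to ranking candidates by the scores of the two bigrams they would form with their neighbours. A minimal sketch in Python, assuming a bigram score function is already given (how to obtain it is the topic of the next slides); all names and the toy scores are illustrative, not taken from the paper.

```python
def rank_candidates(left, candidates, right, score):
    """Rank correction candidates V for an ill-formed token Wk by how
    natural the bigrams (Wk-1, V) and (V, Wk+1) are according to a
    bigram score function."""
    return sorted(candidates,
                  key=lambda v: score(left, v) + score(v, right),
                  reverse=True)

# Toy example: "the tirne of arrival", candidates for the ill-formed "tirne".
# The scores below are invented purely for illustration.
toy_scores = {("the", "time"): 12, ("time", "of"): 30, ("the", "tire"): 2}
score = lambda u, v: toy_scores.get((u, v), 0)
print(rank_candidates("the", ["tire", "time", "tine"], "of", score))
# -> ['time', 'tire', 'tine']
```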
How to derive "natural" bigram models for a text?
• Counting bigram frequencies in the text T itself? Sparseness of bigrams: low chance to find bigrams repeated in T.
• Using a fixed background corpus (British National Corpus, Brown Corpus)? Sparseness problem partially solved, but the models are not document centric!
Our suggestion
Using domain dependent terms from T, crawl a corpus C in the web that reflects the domain and vocabulary of T. Count bigram frequencies in C.
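A rough sketch of the bigram-counting step of this suggestion, assuming the crawled corpus C is available as plain text files; the directory layout and the deliberately simplistic tokenizer are assumptions made only for illustration.

```python
from collections import Counter
from pathlib import Path
import re

def bigram_counts(corpus_dir):
    """Count word-bigram frequencies over all .txt files of the crawled
    corpus C.  The tokenizer here is intentionally simplistic."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        tokens = re.findall(r"[a-z]+", path.read_text(encoding="utf-8").lower())
        counts.update(zip(tokens, tokens[1:]))
    return counts

# The bigram score s(U, V) is then simply the frequency of "U V" in C:
# s = bigram_counts("crawled_corpus"); s[("language", "models")]
```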
Correction Experiments
1. Extract domain specific terms (compounds) from the text T.
2. Crawl a corpus C that reflects the domain and vocabulary of T.
3. For each pair of words U V from the dictionary D, store the frequency of U V in C as a score s(U,V).
First experiment ("in isolation")
What correction accuracy is reached when s(U,V) is used as the only information for ranking correction suggestions?
Second experiment ("in combination")
What gain is obtained when s(U,V) is added as a new parameter to a sophisticated correction system that already uses other scores?
Experiment 1: bigram scores "in isolation"
• Set of ill-formed output tokens of a commercial OCR system; texts from 6 domains.
• Candidate sets for ill-formed tokens: dictionary entries within edit distance < 3.
• s(U,V) used as the only information for ranking correction suggestions.
• Measured the percentage of correctly top-ranked correction suggestions.
• Compared bigram scores from web crawls, from the BNC, and from the Brown Corpus.
Result: crawled bigram frequencies are clearly better than those from the static corpora.
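The evaluation loop of this experiment could look roughly as follows; the data layout (left context, ill-formed token, right context, gold correction), the brute-force candidate search, and all helper names are illustrative assumptions, not the paper's implementation.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance (the paper uses efficient dictionary
    lookup; this brute-force version is only for illustration)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def top1_accuracy(errors, dictionary, score):
    """errors: iterable of (left, ill_formed, right, gold) tuples.
    Candidates are dictionary entries within edit distance < 3, ranked by
    the bigram score alone; count how often the gold correction is on top."""
    errors = list(errors)
    correct = 0
    for left, token, right, gold in errors:
        candidates = [w for w in dictionary if edit_distance(token, w) < 3]
        best = max(candidates,
                   key=lambda v: score(left, v) + score(v, right),
                   default=None)
        correct += (best == gold)
    return correct / len(errors)
```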
Experiment 2: adding bigram scores to a fully-fledged correction system
• Baseline: correction with a length-sensitive Levenshtein distance and crawled word frequencies as two scores.
• Then bigram frequencies are added as a third score.
• Measured the correction accuracy (percentage of correct tokens) reached with fully automated correction (optimized parameters).
• Corrected output of the commercial OCR 1 and the open source OCR 2.
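How the three scores might be combined is sketched below; the paper only states that the parameters were optimized, so the log-damped linear combination and the weight names are assumptions made purely for illustration.

```python
import math

def combined_score(lev_dist, token_len, candidate, left, right,
                   word_freq, bigram_freq, w_lev=1.0, w_freq=1.0, w_bi=1.0):
    """Combine three evidence sources for a correction candidate:
    (i) a length-sensitive Levenshtein distance, (ii) the crawled word
    frequency, (iii) the crawled bigram frequencies with the left/right
    neighbours.  The weights w_* would be tuned on training data."""
    lev = lev_dist / max(token_len, 1)                      # length-sensitive distance
    freq = math.log1p(word_freq.get(candidate, 0))          # word frequency score
    bi = math.log1p(bigram_freq.get((left, candidate), 0)
                    + bigram_freq.get((candidate, right), 0))  # bigram score
    return -w_lev * lev + w_freq * freq + w_bi * bi
```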
Results for OCR 1 (commercial) [figure]: the OCR output is already highly accurate; baseline correction adds a significant improvement; only a small additional gain by adding the bigram score.
Results for OCR 2 (open source) [figure]: reduced output accuracy; baseline correction adds a drastic improvement; considerable additional gain by adding the bigram score.
Additional experiments: comparing language models
Experiment
Compare word frequencies in the input text with
1. word frequencies retrieved from "general" standard corpora
2. word frequencies retrieved from crawled domain dependent corpora
Result
Using the same large word list (dictionary) D, the top-k segment of D ordered by frequencies of type 2 covers many more tokens of the input text than the top-k segment ordered by frequencies of type 1.
[Figure: token and type coverage of the input text, crawled frequencies vs. standard frequencies]
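The coverage comparison behind this figure can be sketched as follows; the function and variable names are illustrative, not taken from the paper.

```python
def topk_coverage(text_tokens, dictionary, freq, k):
    """Fraction of the input text's tokens that fall into the k
    highest-frequency words of dictionary D under a given frequency list."""
    top_k = set(sorted(dictionary, key=lambda w: freq.get(w, 0), reverse=True)[:k])
    return sum(1 for t in text_tokens if t in top_k) / len(text_tokens)

# Same dictionary D, two frequency sources:
# topk_coverage(tokens, D, crawled_freq, 10_000)   # document centric
# topk_coverage(tokens, D, standard_freq, 10_000)  # BNC / Brown style
```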
Summing up
• Bigram scores represent a useful additional score for correction systems.
• Bigram scores obtained from text-centered, domain dependent crawled corpora are more valuable than uniform bigram scores from general corpora.
• Sophisticated crawling strategies were developed, along with special techniques for keeping arbitrary bigram scores in main memory (see paper).
• The additional gain in accuracy reached with bigram scores depends on the baseline.
• Language models obtained from text-centered, domain dependent corpora retrieved from the web reflect the language of the input document much more closely than those obtained from general corpora.
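The paper's memory technique is only referenced above and not spelled out here, so the following is merely a generic illustration of how arbitrary bigram scores can be kept compactly in main memory (integer-encoded word pairs in sorted flat arrays); it should not be read as the authors' method.

```python
from array import array
from bisect import bisect_left

class BigramScoreTable:
    """Compact in-memory table of bigram scores: word pairs are encoded as a
    single integer key and kept, together with their counts, in flat sorted
    arrays; lookup is a binary search.  This is a generic space-saving
    layout, not the technique described in the paper."""

    def __init__(self, word_ids, pair_counts):
        self.word_ids = word_ids              # word -> integer id
        self.n = len(word_ids)
        items = sorted((word_ids[u] * self.n + word_ids[v], c)
                       for (u, v), c in pair_counts.items()
                       if u in word_ids and v in word_ids)
        self.keys = array("q", (k for k, _ in items))
        self.counts = array("q", (c for _, c in items))

    def score(self, u, v):
        iu, iv = self.word_ids.get(u), self.word_ids.get(v)
        if iu is None or iv is None:
            return 0
        key = iu * self.n + iv
        pos = bisect_left(self.keys, key)
        if pos < len(self.keys) and self.keys[pos] == key:
            return self.counts[pos]
        return 0
```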