Wrapper Learning: Cohen et al. 2002; Kushmerick 2000; Kushmerick & Freitag 2000
William Cohen 1/26/03
Goal: learn from a human teacher how to extract certain database records from a particular web site.
Why learning from few examples is important
At training time, only four examples are available, but one would like to generalize to future pages as well. Must generalize across time as well as across a single site.
Kushmerick's WIEN system
• Earliest wrapper-learning system (published IJCAI '97)
• Special things about WIEN:
  • Treats the document as a string of characters
  • Learns to extract a relation directly, rather than extracting fields and then associating them together in some way
  • Example is a completely labeled page
WIEN system: a sample wrapper
Left delimiters L1 = "<B>", L2 = "<I>"; right delimiters R1 = "</B>", R2 = "</I>"
WIEN system: a sample wrapper
• Learning means finding L1, …, Lk and R1, …, Rk
• Li must precede every instance of field i
• Ri must follow every instance of field i
• Li, Ri can't contain data items
• Limited number of possible candidates for Li, Ri
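As a concrete illustration, here is a minimal sketch of how a learned LR wrapper could extract a relation from a page once the delimiters are known. The page string, function name, and delimiter values are made up for this example; this is not WIEN's actual implementation.

```python
def extract_lr(page, lefts, rights):
    """Sketch of LR-wrapper extraction.

    lefts[i] / rights[i] are the learned left/right delimiters for field i.
    The wrapper repeatedly scans for L1..Lk in order, cutting out the text
    between each Li and the following Ri to build one tuple per pass.
    """
    tuples, pos = [], 0
    while True:
        row = []
        for L, R in zip(lefts, rights):
            start = page.find(L, pos)
            if start == -1:
                return tuples            # no more complete tuples
            start += len(L)
            end = page.find(R, start)
            if end == -1:
                return tuples
            row.append(page[start:end])
            pos = end + len(R)
        tuples.append(tuple(row))

# Illustrative page matching the slide's delimiters L1=<B>, R1=</B>, L2=<I>, R2=</I>
page = "<B>Congo</B> <I>242</I><BR> <B>Egypt</B> <I>20</I><BR>"
print(extract_lr(page, ["<B>", "<I>"], ["</B>", "</I>"]))
# [('Congo', '242'), ('Egypt', '20')]
```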
WIEN system: a more complex class of wrappers (HLRT)
Extension: use the Li, Ri delimiters only after a "head" (after the first occurrence of H) and before a "tail" (the occurrence of T). H = "<P>", T = "<HR>"
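A small sketch of the HLRT idea, again with invented page content: trim the page to the region between the head and tail delimiters before applying the LR wrapper from the previous sketch, so that delimiters in navigation bars or footers are ignored.

```python
def restrict_to_body(page, head, tail):
    """HLRT sketch: keep only the region after the first occurrence of the
    head delimiter H and before the tail delimiter T."""
    start = page.find(head)
    end = page.find(tail, start + len(head))
    return page[start + len(head):end]

page = ("<HTML><B>Navigation</B><P>"
        "<B>Congo</B> <I>242</I><BR> <B>Egypt</B> <I>20</I><BR>"
        "<HR><B>Footer</B></HTML>")
body = restrict_to_body(page, "<P>", "<HR>")
# Applying the LR wrapper from the previous sketch to `body` now extracts only
# the data tuples, not the navigation or footer text:
# extract_lr(body, ["<B>", "<I>"], ["</B>", "</I>"]) -> [('Congo', '242'), ('Egypt', '20')]
print(body)
```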
Review of boosting
Generalized version of AdaBoost (Schapire & Singer, 1999): allows "real-valued" predictions for each "base hypothesis", including a value of zero (i.e., abstaining on an example).
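A minimal sketch of a confidence-rated boosting loop in this style; the function and parameter names are assumptions for illustration, not code from the slides. Weak hypotheses return real numbers (zero meaning "abstain"), and examples are reweighted by exp(-y * h(x)).

```python
import math

def boost(examples, labels, find_weak_hypothesis, rounds=10):
    """Generalized AdaBoost with real-valued (confidence-rated) weak hypotheses.

    find_weak_hypothesis(examples, labels, weights) is assumed to return a
    function h(x) -> real number; h may abstain by returning 0.
    Labels are in {-1, +1}.
    """
    n = len(examples)
    weights = [1.0 / n] * n
    hypotheses = []
    for _ in range(rounds):
        h = find_weak_hypothesis(examples, labels, weights)
        hypotheses.append(h)
        # Reweight: examples with y*h(x) < 0 gain weight, correctly classified
        # examples lose weight, abstentions (h(x) = 0) are unchanged.
        weights = [w * math.exp(-y * h(x))
                   for w, x, y in zip(weights, examples, labels)]
        z = sum(weights)
        weights = [w / z for w in weights]
    # Final prediction is the sign of the summed confidences.
    return lambda x: sum(h(x) for h in hypotheses)
```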
Learning methods: boosting rules
• Weak learner: to find weak hypothesis ht:
  • Split Data into Growing and Pruning sets
  • Let Rt be an empty conjunction
  • Greedily add conditions to Rt, guided by the Growing set
  • Greedily remove conditions from Rt, guided by the Pruning set
  • Convert to a weak hypothesis: when Rt covers x, predict the confidence ½ ln(Ŵ+ / Ŵ-); otherwise predict 0 (the caret denotes smoothed weights of the covered positive/negative examples)
• Constraint: W+ > W-, so that the rule's confidence is positive
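The following sketch shows one plausible grow-then-prune weak learner along these lines. The growing objective sqrt(W+) - sqrt(W-) and the smoothed confidence are stand-ins consistent with the slide, not a verbatim reimplementation of SLIPPER; data layout and names are assumptions.

```python
import math

def slipper_style_weak_learner(grow, prune, candidate_conditions, eps=1e-4):
    """Sketch of a SLIPPER-style rule-growing weak learner.

    grow / prune: lists of (x, y, w) triples (example, label in {-1,+1}, weight).
    candidate_conditions: predicates over x that may be conjoined into a rule.
    Returns a weak hypothesis that predicts a positive confidence when the
    learned rule covers x and 0 otherwise.
    """
    def covered_weights(rule, data):
        w_pos = sum(w for x, y, w in data if y > 0 and all(c(x) for c in rule))
        w_neg = sum(w for x, y, w in data if y < 0 and all(c(x) for c in rule))
        return w_pos, w_neg

    def score(rule, data):
        w_pos, w_neg = covered_weights(rule, data)
        return math.sqrt(w_pos) - math.sqrt(w_neg)   # growing objective (stand-in)

    # Greedily ADD conditions, guided by the Growing set.
    rule = []
    while True:
        best = max(candidate_conditions, key=lambda c: score(rule + [c], grow))
        if score(rule + [best], grow) <= score(rule, grow):
            break
        rule.append(best)

    # Greedily REMOVE conditions, guided by the Pruning set.
    improved = True
    while improved and rule:
        improved = False
        for c in list(rule):
            shorter = [d for d in rule if d is not c]
            if score(shorter, prune) >= score(rule, prune):
                rule, improved = shorter, True
                break

    # Convert to a confidence-rated weak hypothesis (eps plays the role of smoothing).
    w_pos, w_neg = covered_weights(rule, grow + prune)
    confidence = 0.5 * math.log((w_pos + eps) / (w_neg + eps))
    return lambda x: confidence if all(c(x) for c in rule) else 0.0
```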
Learning methods: boosting rules
SLIPPER also produces fairly compact rule sets.
Learning methods: BWI
• Boosted wrapper induction (BWI) learns to extract substrings from a document.
• Learns three concepts: firstToken(x), lastToken(x), substringLength(k)
• Conditions are tests on tokens before/after x, e.g., tok_{i-2} = 'from', isNumber(tok_{i+1})
• SLIPPER weak learner, no pruning.
• Greedy search extends the "window size" by at most L in each iteration, uses lookahead L, no fixed limit on window size.
• Good results in (Kushmerick and Freitag, 2000)
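A rough sketch of how BWI-style boundary detectors and a length estimate might be combined at extraction time. The scoring rule, threshold, and all names are illustrative assumptions rather than the exact formulation in the paper.

```python
def bwi_extract(tokens, fore_score, aft_score, length_prob, tau=0.5, max_len=10):
    """Sketch of BWI-style extraction over a token sequence.

    fore_score(tokens, i): confidence that token i starts a field
        (a boosted firstToken detector).
    aft_score(tokens, j):  confidence that token j ends a field
        (a boosted lastToken detector).
    length_prob(k):        estimated probability that a field has length k.
    A span (i, j) is extracted when the combined score exceeds a threshold tau.
    """
    spans = []
    for i in range(len(tokens)):
        f = fore_score(tokens, i)
        if f <= 0:
            continue
        for j in range(i, min(i + max_len, len(tokens))):
            a = aft_score(tokens, j)
            if f * a * length_prob(j - i + 1) > tau:
                spans.append((i, j, tokens[i:j + 1]))
    return spans
```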
BWI algorithm (figure: pseudocode of the BWI algorithm; the annotation marks where the lookahead search occurs)
Improving A Page Classifier with Anchor Extraction and Link Analysis
William W. Cohen
NIPS 2002
• Previous work in page classification using links: exploit hyperlinks (Slattery & Mitchell, 2000; Cohn & Hofmann, 2001; Joachims, 2001): documents pointed to by the same "hub" should have the same class.
• What's new in this paper:
  • Use the structure of hub pages (as well as the structure of the site graph) to find better "hubs"
  • Adapt an existing "wrapper learning" system to find structure, on the task of classifying "executive bio pages".
(Figure: an example hub page. Intuition: links from this "hub page" are informative, especially the highlighted links.)
Task: train a page classifier, then use it to classify pages on a new, previously-unseen web site as executiveBio or other.
Question: can index pages for executive biographies be used to improve classification?
Idea: use the wrapper-learner to learn to extract links to execBio pages, smoothing the "noisy" data produced by the initial page classifier.
Background: "co-training" (Blum & Mitchell, '98)
• Suppose examples are of the form (x1, x2, y) where x1 and x2 are independent (given y), each xi is sufficient for classification, and unlabeled examples are cheap.
• (E.g., x1 = bag of words, x2 = bag of links.)
• Co-training algorithm:
1. Use the x1's (on labeled data D) to train f1(x1) = y
2. Use f1 to label additional unlabeled examples U.
3. Use the x2's (on the labeled part of U + D) to train f2(x2) = y
4. Repeat . . .
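A minimal sketch of one round of this loop, with invented learner interfaces: train1 and train2 are assumed to fit a classifier on a single view and return a prediction function.

```python
def one_round_cotraining(labeled, unlabeled, train1, train2):
    """Sketch of one co-training round as described on the slide.

    labeled:   list of ((x1, x2), y) pairs
    unlabeled: list of (x1, x2) pairs
    train1 / train2: learners that fit on one view and return a function
                     mapping that view to a label.
    """
    # 1. Train f1 on view x1 of the labeled data.
    f1 = train1([(x1, y) for (x1, x2), y in labeled])
    # 2. Use f1 to label the unlabeled examples.
    newly_labeled = [((x1, x2), f1(x1)) for x1, x2 in unlabeled]
    # 3. Train f2 on view x2 of the labeled plus newly labeled data.
    f2 = train2([(x2, y) for (x1, x2), y in labeled + newly_labeled])
    return f1, f2
```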
Simple 1-step co-training for web pages
f1 is a bag-of-words page classifier, and S is a web site containing unlabeled pages.
• Feature construction. Represent a page x in S as the bag of pages that link to x (the "bag of hubs").
• Learning. Learn f2 from the bag-of-hubs examples, labeled with f1.
• Labeling. Use f2(x) to label pages from S.
Idea: use one round of co-training to bootstrap the bag-of-words classifier to one that uses site-specific features x2/f2.
Improved 1-step co-training for web pages
Feature construction.
- Label an anchor a in S as positive iff it points to a positive page x (according to f1). Let D = {(x', a): a is a positive anchor on x'}.
- Generate many small training sets Di from D, by sliding small windows over D.
- Let P be the set of all "structures" found by any builder from any subset Di.
- Say that p links to x if p extracts an anchor that points to x. Represent a page x as the bag of structures in P that link to x.
Learning and Labeling. As before.
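A small sketch of this feature-construction step, using invented structure names and an assumed linking relation, to show what the resulting bag-of-structures representation looks like.

```python
def bag_of_structures(pages, structures):
    """Sketch of the feature-construction step: represent each page x as the
    bag of structures that link to x.  `structures` maps a structure name to
    the set of pages its extracted anchors point to; the names and the linking
    relation here are illustrative assumptions."""
    return {page: {name for name, targets in structures.items() if page in targets}
            for page in pages}

# Illustrative data mirroring the BOH examples on the later slide
structures = {"List1": {"bio1.html", "bio2.html"},
              "List2": {"bio2.html", "news.html"},
              "List3": {"bio1.html", "bio2.html", "news.html"}}
print(bag_of_structures(["bio1.html", "bio2.html", "news.html"], structures))
# e.g. {'bio1.html': {'List1', 'List3'},
#       'bio2.html': {'List1', 'List2', 'List3'},
#       'news.html': {'List2', 'List3'}}
```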
(Figures: three structures discovered by the builders on the hub page, each paired with its extractor: List1, List2, List3.)
BOH representation (input to the learner):
{ List1, List3, … }, PR
{ List1, List2, List3, … }, PR
{ List2, List3, … }, Other
{ List2, List3, … }, PR
…
Experimental results (figure; annotations mark cases of "no improvement" and cases where "co-training hurts")
Summary
- "Builders" (from a wrapper learning system) let one discover and use the structure of web sites and index pages to smooth page classification results.
- Discovering good "hub structures" makes it possible to use 1-step co-training on small (50-200 example) unlabeled datasets.
  - Average error rate was reduced from 8.4% to 3.6%.
  - The difference is statistically significant with a 2-tailed paired sign test or t-test.
  - EM with probabilistic learners also works; see (Blei et al., UAI 2002).