Wrapper Induction for Information Extraction Nicholas Kushmerick, Daniel S. Weld, Robert Doorenbos
Outline • Describe wrappers. • Formalize the wrapper construction problem as that of inductive generalization. • Identify the HLRT wrapper class. • Apply the PAC framework. • Present a modular approach to building oracles. • Provide empirical evaluation of our approach.
ExtractCCs(page P)
  Skip past first occurrence of <B> in P
  While next <B> is before next <HR> in P
    For each <lk, rk> in {<<B>, </B>>, <<I>, </I>>}
      Skip past next occurrence of lk in P
      Extract attribute from P to next occurrence of rk
  Return extracted tuples

ExecuteHLRT(<h, t, l1, r1, ..., lK, rK>, page P)
  Skip past first occurrence of h in P
  While next l1 is before next t in P
    For each <lk, rk> in {<l1, r1>, ..., <lK, rK>}
      Skip past next occurrence of lk in P
      Extract attribute from P to next occurrence of rk
  Return extracted tuples
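The HLRT execution procedure above can be sketched as runnable code. The following Python version is a minimal illustration; the function name, helper logic, and example page are our own assumptions, and error handling for malformed pages is omitted.

```python
def execute_hlrt(h, t, delimiters, page):
    """Extract tuples from page using an HLRT wrapper <h, t, l1, r1, ..., lK, rK>.

    h: head delimiter, t: tail delimiter,
    delimiters: list of (lk, rk) pairs, one per attribute.
    """
    tuples = []
    pos = page.index(h) + len(h)              # skip past first occurrence of h
    l1 = delimiters[0][0]
    while True:
        next_l1 = page.find(l1, pos)
        next_t = page.find(t, pos)
        # Stop when the next l1 is missing or not before the next t.
        if next_l1 == -1 or (next_t != -1 and next_t < next_l1):
            break
        row = []
        for lk, rk in delimiters:
            start = page.index(lk, pos) + len(lk)   # skip past next lk
            end = page.index(rk, start)             # extract up to next rk
            row.append(page[start:end])
            pos = end + len(rk)
        tuples.append(tuple(row))
    return tuples
```

On a toy country-code page, the wrapper <"HEAD", "<HR>", "<B>", "</B>", "<I>", "</I>"> pulls out the <country, code> tuples between the head and tail markers.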
Constructing wrappers by induction • Induction is the task of generalizing from labeled examples to a hypothesis, a function for labeling instances. • The wrapper construction problem is the following: given a supply of example query responses, learn a wrapper for the information resource that generated them.
Wrapper Induction Details: Instances, Labels, Hypotheses. Oracles correspond to sources of example query responses and their labels. We split the traditional oracle into two parts: PageOracle generates example pages, and LabelOracle produces correct labels for these instances. PAC analysis answers the question: how many examples must a learner see to be confident that its hypothesis is good enough, i.e., probably approximately correct?
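As a concrete illustration of the PAC question, the standard sample-complexity bound for a finite hypothesis class can be computed directly. Note this is the generic textbook bound m >= (1/epsilon)(ln|H| + ln(1/delta)), shown only to make the question precise; it is not the paper's HLRT-specific analysis.

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Generic PAC bound for a finite hypothesis class: with at least this many
    examples, a consistent learner's hypothesis has error below epsilon with
    probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)
```

For instance, with |H| = 1000 hypotheses, epsilon = 0.1 and delta = 0.05, the bound asks for 100 examples; demanding a smaller epsilon drives the number up.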
Composing oracles • LabelOracle is provided as input. • Recognizers find instances of a particular attribute on a page. These recognized instances are then corroborated to label the entire page. For example, given a recognizer for countries and another for country codes, corroboration produces an oracle that labels pages containing pairs of these attributes.
Recognizer types: • Perfect: accept all positive instances and reject all negative instances of their target attribute. • Incomplete: reject all negative instances but also reject some positive instances. • Unsound: accept all positive instances but also accept some negative instances. • Unreliable: reject some positive instances and accept some negative instances.
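To make corroboration concrete, here is a heavily simplified Python sketch: two deliberately unsound recognizers propose candidate substrings, and corroboration keeps only candidates that pair up into <country, code> tuples. The regex patterns and the first-following-code pairing heuristic are illustrative assumptions, not the paper's algorithm.

```python
import re

def country_recognizer(page):
    # Unsound: accepts every capitalized word, including non-countries.
    return [(m.start(), m.end()) for m in re.finditer(r"[A-Z][a-z]+", page)]

def code_recognizer(page):
    # Unsound: accepts every digit run, including non-codes.
    return [(m.start(), m.end()) for m in re.finditer(r"\d+", page)]

def corroborate(page):
    """Label a page by pairing each recognized country with the first
    recognized code that follows it; unpaired candidates are dropped."""
    countries = country_recognizer(page)
    codes = code_recognizer(page)
    labels = []
    for c_start, c_end in countries:
        following = [span for span in codes if span[0] >= c_end]
        if following:
            k_start, k_end = following[0]
            labels.append((page[c_start:c_end], page[k_start:k_end]))
    return labels
```

On "Congo 242 Egypt 20 End", the spurious candidate "End" is accepted by the unsound country recognizer but rejected by corroboration, since no code follows it.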
Empirical evaluation I • 100 Internet resources, selected randomly from search.com. • 48% can be wrapped by HLRT.
Empirical evaluation II. A second experiment measures the robustness of the system to the recognizers' error rates. The system was tested on: (i) the OKRA email service and (ii) the BIGBOOK telephone directory. OKRA has four attributes; BIGBOOK has six. The system was run with perfect recognizers under two termination conditions: (a) run until the PAC criterion was satisfied; (b) require that the learned wrapper be 100% correct on a suite of test pages.
Under the test-suite condition (b), 4.9 examples are sufficient for OKRA and 29 for BIGBOOK. The number of examples required is small enough to be practical.
Under condition (a), 105 examples are required to satisfy the PAC criterion. The PAC model is too weak to tightly constrain the induction process.
Conclusions • Wrapper induction works reasonably well. • Three contributions: • Formalization of the wrapper construction problem as induction. • Definition of the HLRT bias, which is efficiently learnable in this framework. • Use of heuristic knowledge to compose the algorithm's oracle.