
Presentation Transcript


  1. The Deep Web: Surfacing Hidden Value (Michael K. Bergman) • Web-Scale Extraction of Structured Data (Michael J. Cafarella, Jayant Madhavan & Alon Halevy) • Presented by Mat Kelly, CS895 – Web-based Information Retrieval, Old Dominion University, September 27, 2011

  2. Papers’ Contributions • Bergman attempts various methods of estimating the size of the Deep Web • Cafarella et al. propose concrete methods of extracting deep web data and more reliably estimating its size, and offer a surprising caveat in the estimation

  3. What is The Deep Web? • Pages that are not indexed by search engines • Created dynamically as the result of a search • Much larger than the surface web (400-550x) • 7,500 TB (deep) vs. 19 TB (surface) [in 2001] • Information resides in databases • 95% of the information is publicly accessible

  4. Estimating the Size • Analysis procedure applied to > 100 known deep web sites • Webmasters queried for record count and storage size; 13% responded • Some sites explicitly stated their database size without the need for webmaster assistance • Site sizes compiled from lists provided at conferences • Utilizing a site’s own search capability with a term known not to exist, e.g. “NOT ddfhrwxxct” • If still unknown, do not analyze

  5. Further Attempts at Size Estimation: Overlap Analysis • Compare (pair-wise) random listings from two independent sources • Repeat pair-wise with all sources previously collected that are known to contain deep web sites • From the commonality of the listings, we can then estimate the total size: the fraction of source 1’s listings that also appear in source 2 approximates source 2’s coverage of the whole, so total ≈ (src 1 listings × src 2 listings) / shared listings (see the sketch below) • Provides a lower bound on the size of the deep web, since our source list is incomplete
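
A minimal sketch of the overlap (capture-recapture) estimate described above; the listing counts are hypothetical, not figures from Bergman's study:

    # Overlap analysis: if the two listings sample the deep web independently,
    # the shared fraction reveals each source's coverage of the total.
    def estimate_total(src1_listings, src2_listings, shared_listings):
        if shared_listings == 0:
            raise ValueError("no overlap; estimate undefined")
        # fraction of src1 also found in src2 ~= src2's coverage of the total,
        # so total ~= src1 * src2 / shared
        return src1_listings * src2_listings / shared_listings

    print(estimate_total(1000, 1200, 60))  # hypothetical counts -> 20000.0 sites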

  6. Further Attempts at Size Estimation: Multiplier on Average Site’s Size • From a listing of 17,000 site candidates, 700 were randomly selected; 100 of these could be fully characterized • Randomized queries were issued to these 100 sites, the results pages were produced and analyzed as HTML, and the mean size was calculated and scaled up to the 17k candidate deep websites (see the sketch below)
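
A back-of-the-envelope version of the multiplier estimate; the sample sizes below are placeholders, not the study's measured values:

    # Multiplier estimate: mean database size of the characterized sample,
    # scaled by the number of candidate deep web sites. Sizes are placeholders.
    sample_sizes_mb = [12.0, 480.5, 3.2, 74.4, 150.0]  # HTML-included sizes of sampled sites
    candidate_sites = 17_000                            # sites in the candidate listing

    mean_size_mb = sum(sample_sizes_mb) / len(sample_sizes_mb)
    total_estimate_tb = mean_size_mb * candidate_sites / 1_000_000  # MB -> TB (decimal)
    print(f"mean {mean_size_mb:.1f} MB/site, estimated total {total_estimate_tb:.2f} TB")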

  7. Other Methods Used For Estimation • Pageviews (“What’s Related” on Alexa) and link references • Growth analysis obtained from Whois • From 100 surface and 100 deep web sites, acquired the date each site was established • Combined and plotted to add time as a factor in the estimation

  8. Overall Findings From Various Analyses • Mean deep website has a web-expressed database (HTML included) of 74.4 MB • Actual record counts can be derived from one in seven deep websites • On average, deep websites receive about half again as much monthly traffic as surface websites • Median deep website receives more than two times the traffic of a random surface website

  9. The Followup Paper: Web-Scale Extraction of Structured Data • Three systems used for extracting deep web data: • TextRunner • WebTables • Deep Web Surfacing (relevant to Bergman) • By using these methods, the data can be aggregated for use in other services, e.g. • Synonym finding • Schema auto-complete • Type prediction

  10. TextRunner • Parses natural-language text from crawls into n-ary tuples • e.g. “Albert Einstein was born in 1879” becomes the tuple <Einstein, 1879> with the was_born_in relation (a toy extractor is sketched below) • This has been done before, but TextRunner: • Works in batch mode: consumes an entire crawl, produces a large amount of data • Pre-computes good extractions before queries arrive and aggressively indexes them • Discovers relations on-the-fly, where others are pre-programmed • Other methods are query-driven and perform all of the work on-demand • [Slide figure: search interface with Argument 1 = Einstein, Predicate = born, Argument 2 = 1879; search result: “Albert Einstein was born in 1879.”] • Demo: http://bit.ly/textrunner
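
The pattern match below is only a toy stand-in for TextRunner's learned extractor; it illustrates what turning a sentence into an <arg1, predicate, arg2> tuple looks like for a single hard-coded was_born_in relation:

    import re

    # Toy single-relation extractor; TextRunner learns its extraction patterns
    # rather than relying on a hand-written regex like this one.
    PATTERN = re.compile(r"(?P<arg1>[A-Z][\w .]+?) (?P<pred>was born in) (?P<arg2>\d{4})")

    def extract(sentence):
        m = PATTERN.search(sentence)
        if m is None:
            return None
        return (m.group("arg1"), m.group("pred").replace(" ", "_"), m.group("arg2"))

    print(extract("Albert Einstein was born in 1879."))
    # -> ('Albert Einstein', 'was_born_in', '1879')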

  11. TextRunner’s Accuracy

      Trial            Corpus Size (pages)   Tuples Extracted   Accuracy
      Early Trial      9 Million             1 Million          88%
      Followup Trial   500 Million           900 Million        93% (“Results not yet available” per the paper)

      http://turing.cs.washington.edu/papers/banko-thesis.pdf

  12. Downsides of TextRunner • Text-centric extractors rely on binary relations of language (two nouns and a linking relation) • Unable to extract data that conveys relations in table form (but WebTables [next] can) • Because relations are analyzed on-the-fly, the output model is not relational • e.g. we cannot know that Einstein refers to a person and 1879 to a birth year

  13. WebTables • Designed to extract data from content within HTML’s table tag • Ignores calendars, single-cell tables, and tables used as the basis for site layout • A general crawl of 14.1B tables contains 154M true relational databases (1.1%)

  14. How Does WebTables Work? • Throw out tables with a single cell, calendars, and those used for layout • Accomplished with hand-written detectors • Label the rest as relational or non-relational using statistically trained classifiers • Base classification on the number of rows, columns, empty cells, number of columns with numeric-only data, etc. (a rough sketch follows) • [Slide figure: <table>/<td> markup filtered down to “Relational Data”]
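
A rough sketch of that filtering step; the threshold rules below merely stand in for WebTables' hand-written detectors and trained classifier, and the feature names follow the slide:

    # Features mentioned on the slide: rows, columns, empty cells, numeric-only
    # columns. The real system feeds such features to a trained classifier; the
    # simple threshold rule here is only illustrative.
    def table_features(table):                      # table: list of rows of cell strings
        rows, cols = len(table), max((len(r) for r in table), default=0)
        cells = [c for r in table for c in r]
        empty = sum(1 for c in cells if not c.strip())
        numeric_cols = sum(
            1 for i in range(cols)
            if any(i < len(r) and r[i].strip() for r in table)
            and all(r[i].strip().replace(".", "", 1).isdigit()
                    for r in table if i < len(r) and r[i].strip())
        )
        return {"rows": rows, "cols": cols, "empty": empty, "numeric_cols": numeric_cols}

    def looks_relational(table):
        f = table_features(table)
        if f["rows"] < 2 or f["cols"] < 2:                # single cells / tiny layout tables
            return False
        if f["empty"] > 0.5 * f["rows"] * f["cols"]:      # mostly empty: likely page layout
            return False
        return True                                       # a trained classifier would decide here

    print(looks_relational([["City", "Population"], ["Norfolk", "242803"], ["Richmond", "204214"]]))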

  15. WebTables Accuracy • Procedure retains 81% of the truly relational databases in the input corpus, though only 41% of its output is relational (superfluous data) • Output contains 271M relations, including 125M of the raw input’s 154M true relations (and 146M false ones)

  16. Downsides of WebTables • Does not recover multi-table databases • Traditional database constraints (e.g. key constraints) cannot be expressed with the table tag • Metadata is difficult to distinguish from table contents • A second trained classifier can be run to determine if metadata exists • Human-marked filtering of true relations indicates 71% have metadata • The secondary classifier performs well, with: • Precision of 89% • Recall of 85%

  17. Obtaining Access to Deep-Web Databases • Two approaches: • Create vertical search engines for specific domains (e.g. cars, books), with a semantic mapping and a mediator per domain • Not scalable • Difficult to identify the domain-query mapping • Surfacing: pre-compute relevant form submissions, then index the resulting HTML • Leverages current search infrastructure

  18. Surfacing Deep-Web Databases • Select values for each input in the form • Trivial for select menus, challenging for text boxes • Perform enumeration of the inputs • Simple enumeration is wasteful and un-scalable • Text inputs fall into one of two categories: • Generic inputs that accept most keywords • Typed text inputs that only accept values in a particular domain

  19. Enumerating Generic Inputs • Examine the page for good candidate keywords to bootstrap an iterative probing process • When keywords produce valid results, obtain more keywords from the results page (sketched below)
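
A sketch of that iterative probing loop; submit_form and extract_terms are hypothetical helpers (one posts a keyword to the form and returns the result HTML, the other pulls candidate words out of that HTML), and the round/keyword limits are arbitrary choices:

    # Iterative probing: seed keywords come from the form's page; each valid
    # results page is mined for further keywords until the process converges
    # or the budget is exhausted.
    def iterative_probe(seed_keywords, submit_form, extract_terms,
                        max_rounds=3, max_keywords=500):
        tried = set()
        frontier = list(seed_keywords)
        for _ in range(max_rounds):
            discovered = set()
            for kw in frontier:
                if kw in tried or len(tried) >= max_keywords:
                    continue
                tried.add(kw)
                html = submit_form(kw)            # submit the form with this keyword
                if html:                          # valid results page: harvest new terms
                    discovered.update(extract_terms(html))
            frontier = list(discovered - tried)
            if not frontier:
                break
        return tried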

  20. Selecting Input Combinations • Crawling forms with multiple inputs is expensive and not scalable • Introduced notion: the input template • Given a set of binding inputs: template = the set of all form submissions formed from the Cartesian product of the binding inputs’ values (see the sketch below) • Only the informative templates in the form are kept, resulting in only a few hundred form submissions per form • The number of form submissions is proportional to the size of the database underlying the form, NOT to the number of inputs and possible combinations
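
A sketch of enumerating one input template: the submissions are the Cartesian product of candidate values for the chosen binding inputs, with every other input left at its default (field names and values here are invented for illustration):

    from itertools import product

    # One input template = all combinations of values for the binding inputs,
    # holding every other form input at its default value.
    def template_submissions(binding_inputs, defaults):
        names = list(binding_inputs)
        for combo in product(*(binding_inputs[n] for n in names)):
            yield {**defaults, **dict(zip(names, combo))}

    submissions = list(template_submissions(
        {"make": ["Ford", "Honda"], "year": ["2009", "2010"]},   # binding inputs
        {"zip": "", "price": "any"},                             # non-binding defaults
    ))
    print(len(submissions))   # 4 submissions for this two-input template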

  21. Extraction Caveats • Semantics are lost when only the results pages are used • Annotations could recover them; a future challenge is to find the right kind of annotation that can be used most effectively by an IR-style index

  22. In Summary • The Deep Web is large – much larger than the surface web • Bergman gave various means of estimating the size of the deep web and some methods for doing so • Cafarella et al. provided a much more structured approach to surfacing the content, not just to estimate its magnitude but also to integrate its contents • Cafarella et al. suggest a better way to estimate deep web size, independent of the number of fields and possible combinations

  23. References • Bergman, M. K. (2001). The Deep Web: Surfacing Hidden Value. Journal of Electronic Publishing, 7(1), 1-17. Available at: http://www.press.umich.edu/jep/07-01/bergman.html • Cafarella, M. J., Madhavan, J., and Halevy, A. (2009). Web-Scale Extraction of Structured Data. ACM SIGMOD Record, 37, 55. Available at: http://portal.acm.org/citation.cfm?doid=1519103.1519112
