The Architecture of a Large-Scale Web Search and Query Engine Andreas Harth Joint work with Aidan Hogan, Juergen Umbrich, Stefan Decker
Current Search Climate
• Major search engines (Google, Yahoo, Microsoft) offer keyword search over hypertext documents
• Search engines handle general keyword searches well, but are poor at expressing complex queries:
  • e.g. podcasts about gardening
  • e.g. pictures of your home town
  • e.g. people that Rudi Studer knows
  • e.g. pictures of friends of Norman Walsh
  • e.g. weather-related WSDL services
• Smaller sites, such as online social networks, scientific databases, digital libraries, and collaborative data repositories, provide semantically rich data and offer specialized search interfaces – mostly backed by relational databases
Semantic Web Search Engine
• Data integration at Web scale, to leverage structured data available under open licenses
• Allow people to pose queries over the integrated corpus
• Allow programmatic access to the corpus via SPARQL (see the example below)
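To illustrate the kind of programmatic access mentioned above, here is a minimal sketch that poses one of the earlier example queries ("people that Rudi Studer knows") against a SPARQL endpoint. The endpoint URL is a placeholder and the use of the FOAF vocabulary is an assumption for the example; neither is taken from the slides.

```python
# Minimal sketch of programmatic SPARQL access; the endpoint URL is a
# placeholder, not the actual SWSE endpoint.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://example.org/sparql"  # hypothetical endpoint

QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?person ?name WHERE {
  ?studer foaf:name "Rudi Studer" .
  ?studer foaf:knows ?person .
  OPTIONAL { ?person foaf:name ?name }
}
"""

def run_query(endpoint: str, query: str) -> dict:
    """POST a SPARQL query and return the JSON result bindings."""
    data = urllib.parse.urlencode({"query": query}).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=data,
        headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    results = run_query(ENDPOINT, QUERY)
    for binding in results["results"]["bindings"]:
        print(binding["person"]["value"],
              binding.get("name", {}).get("value", ""))
```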
Topical Subgraphs
• First, match nodes in a large graph satisfying a query
• Then, select the surrounding nodes and arcs
• The topical subgraph contains all information required to further process results (see the sketch below)
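A minimal sketch of the idea, assuming the graph is held in memory as a set of (subject, predicate, object) triples and that the query has already been resolved to a set of matching seed nodes; the function name and the breadth-first expansion to n hops are illustrative assumptions, not the exact SWSE data structures.

```python
# Sketch: extract the topical subgraph around nodes that matched a query,
# by selecting all triples within n hops of the seed nodes.
def topical_subgraph(triples, seed_nodes, n=1):
    """Return all triples within n hops of the seed (matching) nodes."""
    selected = set()
    frontier = set(seed_nodes)
    seen = set(seed_nodes)
    for _ in range(n):
        next_frontier = set()
        for s, p, o in triples:
            if s in frontier or o in frontier:
                selected.add((s, p, o))
                for node in (s, o):
                    if node not in seen:
                        seen.add(node)
                        next_frontier.add(node)
        frontier = next_frontier
    return selected

# Usage: triples matched by the keyword "ReConRank" act as seeds, one hop out.
triples = [
    ("ex:paper1", "dc:title", '"ReConRank"'),
    ("ex:paper1", "dc:creator", "ex:aidan"),
    ("ex:aidan", "foaf:name", '"Aidan Hogan"'),
]
print(topical_subgraph(triples, {"ex:paper1"}, n=1))
```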
Semantic Web Search Engine Architecture
• Components (architecture diagram): Crawler, Extraction, Indexing, Consolidation, Ranking, Index, Query Processing, UI
Obtaining Information
• Data from the HTML Web
  • DMOZ sites
• Data from the XML Web
  • CiteSeer
  • DBLP
  • RSS, Podcasts
• Data from the RDF Web
  • DMOZ categories
  • SwissProt
  • Wikipedia
  • FOAF, SIOC, DC, …
Optimized Index on Quadruples
• Data model: subject/predicate/object/context
• 16 different lookup patterns for quads (each node either given or substituted by a variable) – e.g. (s, ?, ?, ?), (?, p, o, ?), …
• Naive solution: put a separate index on s, p, o, and c, and compute joins from combinations
• But: joins are costly
• Solution: 16 indexes to cover all quadruple patterns
• But: very costly to maintain 16 indexes
• An index with concatenated keys allows access patterns to be re-used via prefix lookups – saves 10 indexes (see the sketch below)
• Huffman coding to save space on disk and in memory
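A minimal sketch of the concatenated-key idea, assuming quads are kept sorted under one key order (here subject–predicate–object–context) and looked up by prefix; the in-memory sorted list stands in for the on-disk index structure, and the class and method names are illustrative assumptions.

```python
# Sketch: one sorted index on the concatenated key (s, p, o, c) answers every
# lookup pattern whose bound positions form a prefix of that order, e.g.
# (s,?,?,?), (s,p,?,?), (s,p,o,?), (s,p,o,c) -- so a handful of key orders
# (6 rather than 16) suffices to cover all quad patterns.
import bisect

class PrefixIndex:
    def __init__(self, quads, order=(0, 1, 2, 3)):
        # Re-order each quad into the chosen key order (default SPOC) and sort.
        self.order = order
        self.keys = sorted(tuple(q[i] for i in order) for q in quads)

    def lookup(self, *prefix):
        """Yield all keys whose leading positions match the bound prefix."""
        lo = bisect.bisect_left(self.keys, prefix)
        for key in self.keys[lo:]:
            if key[:len(prefix)] != prefix:
                break
            yield key

quads = [
    ("ex:s1", "ex:p1", "ex:o1", "ex:ctx1"),
    ("ex:s1", "ex:p1", "ex:o2", "ex:ctx2"),
    ("ex:s2", "ex:p2", "ex:o1", "ex:ctx1"),
]
spoc = PrefixIndex(quads)                   # covers (s ? ? ?), (s p ? ?), ...
print(list(spoc.lookup("ex:s1", "ex:p1")))  # pattern (s, p, ?, ?)
```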
Providing Information to the Casual User
• Ranking required in case of large result sets
• Link-based ranking algorithms (such as PageRank, HITS) are not directly applicable to directed labeled graphs
• ReConRank:
  • link-based ranking on structured data
  • can exploit labeled links
  • takes into account the provenance of data
  • operates on the topical subgraph – local ranking yields higher quality (see the baseline sketch below)
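For orientation, the sketch below is a plain PageRank-style power iteration over an RDF graph in which each (subject, predicate, object) statement is treated as a directed link from subject to object. It is a baseline to contrast with, not the ReConRank algorithm itself; the damping factor and iteration count are arbitrary choices for the example.

```python
# Sketch: PageRank-style power iteration over an RDF graph, treating every
# (s, p, o) statement as a directed link s -> o and ignoring the predicate
# label and context. ReConRank goes beyond this kind of baseline by using
# labeled links and provenance; this code is only for illustration.
def rank(triples, damping=0.85, iterations=20):
    nodes = {x for s, _, o in triples for x in (s, o)}
    out_links = {n: [] for n in nodes}
    for s, _, o in triples:
        out_links[s].append(o)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out_links[n]
            if not targets:
                continue  # dangling node: its mass is simply dropped here
            share = damping * score[n] / len(targets)
            for t in targets:
                nxt[t] += share
        score = nxt
    return score

triples = [
    ("ex:paper1", "dc:creator", "ex:aidan"),
    ("ex:aidan", "foaf:knows", "ex:andreas"),
    ("ex:andreas", "foaf:knows", "ex:aidan"),
]
print(sorted(rank(triples).items(), key=lambda kv: -kv[1]))
```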
Example Input Dataset
• Example graph returned by keyword search for "ReConRank", n = 1
• 4 keyword hits (red outline)
• 4 rankable resources (yellow outline)
Example Input Dataset (with Context)
• Example graph returned by keyword search for "ReConRank", n = 1
• 4 keyword hits (red outline)
• 4 rankable resources (yellow outline)
Solution: Combined Resource Context Graph
• Shown is the result of combining the resource graph with the context graph, including the implied links (depicted with hollow green arrowheads)
• The graph is well connected! (A sketch of the combination step follows below.)
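A minimal sketch of how such a combined graph might be assembled from quads, assuming that each statement contributes a resource-to-resource link and that "implied" links connect the resources to the context they appear in; the exact link directions and any weighting used by ReConRank are not given in the slides, so the choices below are illustrative assumptions only.

```python
# Sketch: build a combined resource/context graph from quads (s, p, o, c).
# Each statement contributes a resource link s -> o, and implied links tie
# the subject and object to the context c they appear in, so that rank can
# flow between the data-level graph and the provenance-level graph.
# The resource -> context direction is an assumption for illustration.
def combined_graph(quads):
    edges = set()
    for s, p, o, c in quads:
        edges.add((s, o))   # resource graph link
        edges.add((s, c))   # implied link: subject appears in context c
        edges.add((o, c))   # implied link: object appears in context c
    return edges

quads = [
    ("ex:paper1", "dc:creator", "ex:aidan", "http://example.org/doc1"),
    ("ex:aidan", "foaf:knows", "ex:andreas", "http://example.org/doc2"),
]
for edge in sorted(combined_graph(quads)):
    print(edge)
```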
Conclusion
• SWSE is a distributed system for processing large amounts of Web content
• The crawler performs syntax integration
• The storage component features a keyword index and a complete index on quads for fast lookups
• Ranking is scalable and fast, and applicable to arbitrary RDF, but needs more quality evaluation
• Design philosophy: keep the system simple, so that it is easy to optimize and distribute
• Algorithms are designed for a distributed setting – partition the data and the task at hand and distribute them over many machines (see the sketch below)
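One common way to realise such partitioning is to hash an element of each quad (for example the subject) onto a machine, so that all quads for the same subject land on the same node. The sketch below assumes subject-based hash partitioning, which is an illustrative choice rather than the documented SWSE placement scheme.

```python
# Sketch: hash-partition quads over a cluster by subject. Subject-based
# placement is an assumption for illustration, not the documented SWSE scheme.
import hashlib

def machine_for(subject: str, num_machines: int) -> int:
    """Map a subject URI deterministically to a machine id."""
    digest = hashlib.md5(subject.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_machines

def partition(quads, num_machines):
    parts = {i: [] for i in range(num_machines)}
    for quad in quads:
        parts[machine_for(quad[0], num_machines)].append(quad)
    return parts

quads = [
    ("ex:s1", "ex:p1", "ex:o1", "ex:c1"),
    ("ex:s1", "ex:p2", "ex:o2", "ex:c2"),
    ("ex:s2", "ex:p1", "ex:o3", "ex:c1"),
]
print(partition(quads, num_machines=4))
```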
http://swse.deri.org/
• Prototype online with a dataset crawled starting from the ISWC 2006 web site
  • plus DBLP in RDF
  • plus Wikipedia in WikiOnt
• Acknowledgements: DERI Lion (SFI/02/CE1/l131)