Intelligent Crawling
Junghoo Cho, Hector Garcia-Molina
Stanford InfoLab
What is a crawler? • Program that automatically retrieves pages from the Web. • Widely used for search engines.
Challenges • There are many pages out on the Web. (Major search engines have indexed more than 100M pages.) • The size of the Web is growing enormously. • Most pages are not very interesting. • In most cases, it is too costly or not worthwhile to visit the entire Web space.
Good crawling strategy • Make the crawler visit “important pages” first. • Save network bandwidth • Save storage space and management cost • Serve quality pages to the client application
Outline • Importance metrics : what are important pages? • Crawling models : How is crawler evaluated? • Experiments • Conclusion & Future work
Importance metric The metric for determining if a page is HOT • Similarity to driving query • Location Metric • Backlink count • Page Rank
Similarity to a driving query • Example queries: “Sports”, “Bill Clinton” (pages related to a specific topic) • Importance is measured by the closeness of the page to the topic (e.g., the number of occurrences of the topic word in the page) • Basis for a personalized crawler
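As a rough sketch (not from the slides), the occurrence-count notion of similarity above could be implemented like this; the function name and tokenization are illustrative assumptions:

```python
import re

def similarity(page_text: str, query: str) -> int:
    """Closeness of a page to the topic, measured here as the
    number of times the query word occurs in the page text."""
    words = re.findall(r"[a-z]+", page_text.lower())
    return sum(1 for w in words if w == query.lower())

print(similarity("Sports news: sports scores and more sports", "sports"))  # 3
```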
Backlink-based metric • Backlink count • number of pages pointing to the page • Citation metric • Page Rank • weighted backlink count • weight is iteratively defined
Example graph with pages A–F: BackLinkCount(F) = 2 PageRank(F) = PageRank(E)/2 + PageRank(C)
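A minimal sketch of the iterative PageRank computation described above. The link structure below is a hypothetical one chosen to be consistent with the slide’s example (C links to F; E links to D and F, so BackLinkCount(F) = 2); the slides do not give the full graph, and the damping factor 0.85 is the conventional choice, not stated in the slides:

```python
def pagerank(links, damping=0.85, iters=50):
    """Weighted backlink count, defined iteratively: each page's
    rank is split evenly among the pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Hypothetical links consistent with the slide: C -> F, E -> {D, F}.
links = {"A": ["B", "C"], "B": ["C"], "C": ["F"],
         "D": ["A"], "E": ["D", "F"], "F": ["A"]}
ranks = pagerank(links)
```

Note how F, with two backlinks, ends up ranked above E, which has none.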
Ordering metric • The metric for a crawler to “estimate” the importance of a page • The ordering metric can be different from the importance metric
Crawling models • Crawl and Stop • Keep crawling until the local disk space is full. • Limited buffer crawl • Keep crawling until the whole Web space is visited, throwing out seemingly unimportant pages when the buffer fills.
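The crawl-and-stop model, combined with an ordering metric, can be sketched as a priority-queue crawl; this is an illustrative outline, and `fetch(url)` is an assumed helper that returns a page and its outlinks:

```python
import heapq

def crawl_and_stop(seed_urls, ordering_metric, fetch, disk_capacity):
    """Crawl-and-stop: repeatedly visit the unvisited URL with the
    highest ordering-metric estimate until local storage is full."""
    frontier = [(-ordering_metric(u), u) for u in seed_urls]
    heapq.heapify(frontier)  # max-priority via negated scores
    visited, stored = set(), []
    while frontier and len(stored) < disk_capacity:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        page, outlinks = fetch(url)
        stored.append(page)
        for link in outlinks:
            if link not in visited:
                heapq.heappush(frontier, (-ordering_metric(link), link))
    return stored
```

A limited-buffer crawl would instead keep going after the buffer fills, evicting the page with the lowest estimated importance to make room.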
Architecture (diagram) • WebBase Crawler: fetches pages from the Stanford WWW into the Repository • Virtual Crawler: HTML parser processes each crawled page and extracts URLs • URL pool: holds extracted URLs and page info • URL selector: picks the selected URL to visit next
Experiments • Backlink-based importance metrics • backlink count • PageRank • Similarity-based importance metric • similarity to a query word
Ordering metrics in experiments • Breadth first order • Backlink count • PageRank
Similarity-based crawling • The content of the page is not available before it is visited • Essentially, the crawler should “guess” the content of the page • More difficult than backlink-based crawling
Promising page (example): the anchor text reads “Sports!! Sports!!”, the URL is …/sports.html, and the parent page is HOT; the target page itself is still unknown (?).
Virtual crawler for similarity-based crawling • A page is “promising” if: • the query word appears in its anchor text • the query word appears in its URL • the page pointing to it is an “important” page • Visit “promising” pages first • Visit “non-promising” pages in ordering-metric order
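The promising-page test above can be sketched directly; the function name and argument shapes are illustrative assumptions, not from the slides:

```python
def is_promising(url: str, anchor_text: str, parent_is_hot: bool, query: str) -> bool:
    """A URL is 'promising' if the query word appears in its anchor
    text or in the URL itself, or if its parent page is HOT."""
    q = query.lower()
    return q in anchor_text.lower() or q in url.lower() or parent_is_hot

# Promising URLs are visited first; the rest follow in ordering-metric order.
```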
Conclusion • PageRank is generally good as an ordering metric. • By applying a good ordering metric, it is possible to gather important pages quickly.