
Web Basics



  1. Web Basics Slides adapted from Information Retrieval and Web Search, Stanford University, Christopher Manning and Prabhakar Raghavan CS345A, Winter 2009: Data Mining. Stanford University, Anand Rajaraman, Jeffrey D. Ullman

  2. Web search
• Because of the Web's sheer size, finding the needle in the haystack is not easy.
• Solutions
• Classification
• Early search engines
• Modern search engines
• …

  3. Early solutions to web search
• Classification of web pages (Yahoo)
• Mostly done by humans; difficult to scale
• Early keyword-based engines, ca. 1995–1997
• Altavista, Excite, Infoseek, Inktomi, Lycos
• Decide how queries match pages
• Most queries match a large number of pages: which page is more authoritative?
• Paid search ranking: Goto.com (a.k.a. Overture.com, acquired by Yahoo)
• Your search ranking depended on how much you paid
• Auction for keywords: 'casino' was expensive!

  4. Ranking of web pages
• 1998+: link-based ranking pioneered by Google
• Blew away all early engines save Inktomi
• A great user experience in search of a business model
• Meanwhile, Goto/Overture's annual revenues were nearing $1 billion

  5. Sec. 19.4.1 Web search: overall picture
[Diagram: the user sends queries to the search interface, which answers from the indexes and ad indexes; the indexer builds these from pages gathered by a web spider following links on the Web.]

  6. Key components in web search (Graph, User, Crawl, Rank, Spam)
• Links and graph: the web is a hyperlinked document collection, i.e. a graph.
• Queries: web queries are different, more varied, and there are a lot of them. How many? About 10^8 per day, approaching 10^9.
• Users: users are different, more varied, and there are a lot of them. How many? About 10^9.
• Documents: documents are different, more varied, and there are a lot of them. How many? About 10^11, of which roughly 10^10 are indexed.
• Context: context is more important on the web than in many other IR applications.
• Ads and spam

  7. Web as graph
• Web graph
• Node: web page
• Edge: hyperlink

  8. Why study the web graph
• An example of a large, dynamic and distributed graph
• Possibly similar to other complex graphs in social, biological and other systems
• Reflects how humans organize information (relevance, ranking) and their societies
• Enables efficient navigation algorithms
• Lets us study the behavior of users as they traverse the web graph (e-commerce)

  9. In-degree and out-degree
• In-degree: the number of incoming edges of a node
• Out-degree: the number of outgoing edges of a node
• E.g., node 8 has in-degree 3 and out-degree 0; node 2 has in-degree 2 and out-degree 4
• Degree distribution

  10. Degree distribution
• The degree distribution gives, for each i, the fraction of nodes that have degree i
• The degree distribution of the web graph obeys a power law: the fraction of nodes with degree i is proportional to i^(-α)
• A study at the University of Notre Dame reported
• α = 2.45 for the out-degree distribution
• α = 2.1 for the in-degree distribution
• Random graphs, by contrast, have a Poisson degree distribution
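The empirical degree distribution is easy to compute from an edge list. A minimal Python sketch (the language choice and the small edge list are illustrative assumptions, not from the slides):

```python
from collections import Counter

def degree_distribution(degrees):
    """Fraction of nodes having each degree value."""
    n = len(degrees)
    return {k: c / n for k, c in sorted(Counter(degrees).items())}

# Hypothetical directed graph as an edge list (u -> v).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
nodes = {u for e in edges for u in e}
in_deg = dict.fromkeys(nodes, 0)
out_deg = dict.fromkeys(nodes, 0)
for u, v in edges:
    out_deg[u] += 1   # the edge leaves u
    in_deg[v] += 1    # the edge enters v

print(degree_distribution(list(in_deg.values())))   # → {0: 0.2, 1: 0.4, 2: 0.4}
```

For a power-law graph, plotting these fractions against the degree on log-log axes should give a roughly straight line of slope −α.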

  11. Graph example, Matlab (or Octave)

% adjacency matrix: G(i,j) = 1 if there is an edge from node i to node j
G = [ 0 1 1 0 0 0 0 0 0 0;
      0 0 1 1 0 0 0 1 1 0;
      0 0 0 0 1 1 0 0 0 0;
      0 0 0 0 0 0 0 1 0 0;
      0 0 0 0 0 0 0 1 0 0;
      0 0 0 0 0 0 0 0 0 1;
      0 0 0 0 0 0 0 0 0 0;
      0 0 0 0 0 0 0 0 0 0;
      0 0 0 0 0 0 0 0 0 0;
      0 1 0 0 0 0 1 0 1 0 ];
indegree = sum(G)     % column sums give the in-degree of each node
outdegree = sum(G')   % row sums give the out-degree of each node
bin = 0:4;
h = hist(indegree, bin);
subplot(1,2,1); bar(bin, h); title('indegree');
h = hist(outdegree, bin);
subplot(1,2,2); bar(bin, h); title('outdegree');

  12. Power law plotted
• 500 random numbers are generated following a power law with xmin = 1, alpha = 2
• Subplots (C) and (D) are produced using equal bin sizes (bin size = 5)
• To remove the noise in the tail of subplot (D), we need logarithmically growing bin sizes
• Subplot (F) shows a straight line, as desired
• Try the Matlab program to experiment with the power law

  13. Generate random numbers
• Generate uniform random numbers: rand(n,1)
• Generate power-law random numbers using the transformation (inverse-CDF) method:

n = 500; alpha = 2; xmin = 1;
% generate n random numbers following a power law
rawData = xmin * (1 - rand(n,1)).^(-1/(alpha-1));

  14. Plot the power law data

subplot(3,2,1);
scatter(1:n, rawData);
title('(A) Scatter plot of 500 random data');

subplot(3,2,2);
scatter(1:n, rawData, rawData.^(0.5), rawData);
title('(B) Crowded dots are plotted in smaller size');

b = 5;
bins = 1:b:n;
h = hist(rawData, bins);
subplot(3,2,3);
plot(h, 'o');
xlabel('value'); ylabel('frequency');
title('(C) Histogram of equal bin size');

  15. Log-log plot

subplot(3,2,4);
loglog(bins, h, 'o');
xlabel('value'); ylabel('frequency');
title('(D) log-log plot of (C)');

% logarithmic bins: each bin is twice as wide as the previous one
binslog(1) = 1;
for j = 1:7
    b2(j) = 2^j;
    binslog(j+1) = binslog(j) + b2(j);
end;

subplot(3,2,5);
h = hist(rawData, binslog);
plot(binslog, h, 'o');
xlabel('value'); ylabel('frequency');
title('(E) Histogram of log bin size');

subplot(3,2,6);
h = hist(rawData, binslog);
plot(log10(binslog), log10(h), 'o');
xlabel('value'); ylabel('frequency');
title('(F) log-log plot of (E)');

  16. Power law of the web graph in 1999
• Note that the in- and out-degree distributions are slightly different
• The out-degree may be better fitted by a Mandelbrot law
• What about the current web?
• The ClueWeb data set consists of 4 billion web pages.

  17. Scale-free networks
• A network is scale-free if its degree distribution follows a power law
• The mathematical model behind this: preferential attachment
• Many networks obey a power law
• The Internet at the router and inter-domain level
• Citation networks / co-author networks
• The collaboration network of actors
• Networks formed by interacting genes and proteins
• …
• The web graph
• Online social networks
• The Semantic Web
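Preferential attachment ("the rich get richer") is simple to simulate. A hedged Python sketch, not from the slides: keep a list in which each node appears once per unit of degree, so a uniform draw from that list is exactly degree-proportional sampling.

```python
import random
from collections import Counter

def preferential_attachment(n, seed=0):
    """Grow a graph one node at a time; each new node links to an
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    stubs = [0, 1]                 # node i appears deg(i) times
    for new_node in range(2, n):
        target = rng.choice(stubs)
        edges.append((new_node, target))
        stubs += [new_node, target]
    return edges

edges = preferential_attachment(2000)
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
# A few early nodes accumulate very high degree; most nodes keep degree 1,
# producing the heavy-tailed (power-law-like) degree distribution.
```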

  18. Other graph properties
• Distance from A to B: the length of the shortest path connecting A to B
• Distance from node 0 to node 9: 1
• Average path length: the average of the distances between all pairs of nodes
• Diameter: the maximum of the distances
• Strongly connected: for any pair of nodes, there is a directed path connecting them
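These definitions map directly onto breadth-first search. A small Python sketch (the adjacency list is a made-up example, not the graph from the slides):

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (hop) distances from src along directed edges."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Hypothetical graph in which node 0 links directly to node 9.
adj = {0: [1, 9], 1: [2], 2: [9], 9: []}
dist = bfs_distances(adj, 0)           # dist[9] == 1
# diameter = the largest distance over all reachable pairs
diameter = max(max(bfs_distances(adj, s).values()) for s in adj)
```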

  19. Small world
• It is a 'small world'
• Millions of people, yet separated by 'six degrees' of acquaintance relationships
• Popularized by Milgram's famous experiment (1967)
• Mathematically
• The diameter of the graph is small (it grows like log N) compared to the overall size N, for a fixed average degree
• The diameter of a complete graph never grows (it is always 1)
• This property also holds in random graphs

  20. Bow-tie structure of the Web
• A study of 200 million nodes and 1.5 billion links
• SCC: a strongly connected component in the center
• Upstream (IN): lots of pages that link to other pages, but don't get linked to
• Downstream (OUT): lots of pages that get linked to, but don't link out
• Plus tendrils, tubes and islands
• The small-world property does not apply to the entire web
• Some parts are unreachable
• Others have long paths
• Power-law connectivity holds, though
• Page in-degree (α = 2.1), out-degree (α = 2.72)

  21. Empirical numbers for the bow-tie
• Maximal diameter
• 28 for the SCC, 500 for the entire graph
• Probability of a path between any 2 nodes
• ~ one quarter (0.24)
• Average length
• 16 (when a directed path exists), 7 (undirected)
• Shortest directed path between 2 nodes in the SCC: 16–20 links on average

  22. Component properties
• Each component is roughly the same size
• ~50 million nodes
• Tendrils: not connected to the SCC
• But reachable from IN, and can reach OUT
• Tubes: directed paths IN → Tendrils → OUT
• Disconnected components
• The maximal and average diameter is infinite

  23. Statistics of the web graph
• The distribution of incoming and outgoing connections
• The diameter of the graph: average and maximal length of the shortest path between any two vertices
• Web sites and the distribution of pages per site
• The size of the graph

  24. Web site size
• Simple estimates suggest billions of nodes
• The distribution of site sizes, measured by the number of pages, follows a power law
• Note that the degree distribution also follows a power law
• Observed over several orders of magnitude, with an exponent α in the 1.6–1.9 range

  25. Web size
• The web keeps growing
• But is the growth no longer exponential?
• Who cares?
• The media, and consequently the users
• Engine design
• Engine crawl policy, with its impact on recall
• What is 'size'?
• The number of web servers / web sites?
• The number of pages?
• Terabytes of data available?
• The size of a search engine's index?

  26. Sec. 19.5 Difficulties in defining the web size
• Some servers are seldom connected
• Example: your laptop running a web server. Is it part of the web?
• The 'dynamic' web is infinite
• Soft 404s: www.yahoo.com/<anything> is a valid page
• Dynamic content, e.g., weather forecasts, calendars
• Any sum of two numbers is its own dynamic page on Google. Example: '2+4'
• Deep web content
• E.g., all the articles in the NYTimes archive
• Duplicates
• The static web contains syntactic duplication, mostly due to mirroring (~30%)

  27. Sec. 19.5 What can we attempt to measure?
• The relative sizes of search engines
• The notion of a page being indexed is still reasonably well defined
• Already there are problems
• Document extension: e.g., engines index pages not yet crawled by indexing the anchor text that points to them
• Document restriction: all engines restrict what is indexed (the first n words, only relevant words, etc.)

  28. 'Search engine index contains N pages': issues
• Can I claim a page is in the index if I only index the first 4,000 bytes?
• Usually long documents are not fully indexed; the bottom parts are ignored.
• Can I claim a page is in the index if I only index anchor text pointing to the page?
• E.g., Apple's web site may not contain the keyword 'computer', but much of the anchor text pointing to Apple does
• Hence when people search for 'computer', the Apple page may be returned
• There used to be (and still are?) billions of pages that are indexed only via anchor text.

  29. Sec. 19.5 New definition?
• The statically indexable web is whatever search engines index
• Different engines have different preferences
• Max URL depth, max page count per host, anti-spam rules, priority rules, etc.
• Different engines index different things under the same URL:
• Frames (e.g., some frames are navigational and should be indexed differently)
• Meta keywords (e.g., put more weight on the title)
• Document restrictions, document extensions, …

  30. Sec. 19.5 Relative size from the overlap of engines A and B
• Sample URLs randomly from A; check if each is contained in B, and vice versa
• A ∩ B = (1/2) × Size(A)
• A ∩ B = (1/6) × Size(B)
• (1/2) × Size(A) = (1/6) × Size(B)
• ∴ Size(A) / Size(B) = (1/6) / (1/2) = 1/3
• Each test involves: (i) sampling, (ii) checking
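The capture–recapture arithmetic on this slide can be checked directly; a tiny Python sketch (the function name is mine):

```python
def relative_size(frac_of_a_in_b, frac_of_b_in_a):
    """Both sampled fractions estimate the same overlap |A ∩ B|:
         frac_of_a_in_b * |A|  ≈  |A ∩ B|  ≈  frac_of_b_in_a * |B|
       hence |A| / |B| ≈ frac_of_b_in_a / frac_of_a_in_b."""
    return frac_of_b_in_a / frac_of_a_in_b

# The slide's numbers: half of A's sample is in B, a sixth of B's sample is in A.
print(relative_size(1/2, 1/6))   # → 0.333..., i.e. A is one third the size of B
```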

  31. Sec. 19.5 Sampling URLs
• Ideal strategy: generate a random URL and check it for containment in each index
• Problem: random URLs are hard to find!
• It is enough to generate a random URL contained in a given engine
• Approach 1: generate a random URL contained in a given engine
• Suffices for estimating relative sizes
• Approach 2: random walks / random IP addresses
• In theory: might give a true estimate of the size of the web (as opposed to just the relative sizes of indexes)

  32. Sec. 19.5 Random URLs from random queries
• Generate a random query: how?
• Lexicon: 400,000+ words from a web crawl (not an English dictionary)
• Conjunctive queries: w1 AND w2, e.g., vocalists AND rsi
• Get 100 result URLs from engine A
• Choose a random URL as the candidate to check for presence in engine B
• Download the chosen document D and get its list of words
• Use 8 low-frequency words from D as an AND query to B
• Check whether D is present in B's result set

  33. Sec. 19.5 Biases induced by random queries
• Query bias: large documents have a higher probability of being captured by queries
• Solution: reject some large documents, e.g., by rejection sampling
• Ranking bias: the search engine ranks the matched documents and returns only the top k
• Solution: use conjunctive queries and fetch all results
• Another solution: modify the estimator
• Checking bias: duplicates and impoverished pages are omitted
• Document or query restriction bias: an engine might not deal properly with an 8-word conjunctive query
• Malicious bias: sabotage by the engine
• Operational problems: time-outs, failures, engine inconsistencies, index modification
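The rejection-sampling fix for query bias can be sketched as follows (Python; the two-document "corpus" and the exact acceptance rule are illustrative assumptions, not the slide's method in detail). If documents are drawn roughly in proportion to their size, accepting each drawn document with probability inversely proportional to its size restores a uniform sample.

```python
import random
from collections import Counter

def debias(biased_sample, size_of, rng):
    """Accept each document with probability min_size / size(d);
    this cancels a size-proportional sampling bias."""
    min_size = min(size_of(d) for d in biased_sample)
    return [d for d in biased_sample if rng.random() < min_size / size_of(d)]

rng = random.Random(42)
sizes = {'small_doc': 1, 'big_doc': 10}    # hypothetical corpus
# Simulated query bias: the big document is drawn 10 times as often.
biased = rng.choices(list(sizes), weights=list(sizes.values()), k=20000)
uniform = debias(biased, sizes.get, rng)
counts = Counter(uniform)                  # now roughly equal counts
```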

  34. Sec. 19.5 Random IP addresses
• Generate random IP addresses
• Find a web server at the given address
• If there is one:
• Collect all pages from that server
• From these, choose a page at random

  35. Sec. 19.5 Random IP addresses
• Ignored: empty servers, servers requiring authorization, and excluded servers
• [Lawr99] estimated, from observing 2,500 servers:
• 2.8 million IP addresses running crawlable web servers, 16 million total servers, 800 million pages
• Also estimated the use of metadata descriptors:
• Meta tags (keywords, description) in 34% of home pages; Dublin Core metadata in 0.3%
• OCLC, using IP sampling, found 8.7 million hosts in 2001
• Netcraft [Netc02] accessed 37.2 million hosts in July 2002

  36. Sec. 19.5 Advantages & disadvantages
• Advantages
• Clean statistics
• Independent of crawling strategies
• Disadvantages
• Doesn't deal with duplication
• Many hosts might share one IP address, or not accept requests
• No guarantee that all pages are linked to the root page, e.g., employee pages
• The power law for pages per host creates a bias toward sites with few pages
• But the bias can be accurately quantified IF the underlying distribution is understood
• Potentially influenced by spamming (multiple IPs for the same server to avoid IP blocks)

  37. Sec. 19.5 Random walks
• View the web as a directed graph
• Build a random walk on this graph
• Include various 'jump' rules back to visited sites
• Doesn't get stuck in spider traps!
• Can follow all links!
• Converges to a stationary distribution
• Must assume the graph is finite and independent of the walk
• These conditions are not satisfied (cookie crumbs, flooding)
• The time to convergence is not really known (it may be too long)
• Sample from the stationary distribution of the walk
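The walk described here can be sketched in a few lines of Python (the jump probability, the three-page graph, and treating link-less pages as forced jumps are my assumptions):

```python
import random

def walk_frequencies(adj, steps, jump_prob=0.15, seed=0):
    """Random walk with random jumps: with probability jump_prob (or
    always, when the current page has no out-links) teleport to a
    uniformly random node, so the walk cannot get stuck in a spider
    trap.  Visit frequencies approximate the stationary distribution."""
    rng = random.Random(seed)
    nodes = list(adj)
    cur = nodes[0]
    visits = dict.fromkeys(nodes, 0)
    for _ in range(steps):
        out = adj[cur]
        if not out or rng.random() < jump_prob:
            cur = rng.choice(nodes)      # jump rule
        else:
            cur = rng.choice(out)        # follow a random out-link
        visits[cur] += 1
    return {v: c / steps for v, c in visits.items()}

# Hypothetical three-page web; page 'c' has no out-links (a trap without jumps).
freq = walk_frequencies({'a': ['b'], 'b': ['a', 'c'], 'c': []}, steps=50000)
```

Sampling pages in proportion to these visit frequencies is what sampling from the stationary distribution means in practice; the hard part is knowing when the frequencies have converged.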

  38. Sec. 19.5 Advantages & disadvantages
• Advantages
• A 'statistically clean' method, at least in theory!
• Could work even for an infinite web (assuming convergence), under certain metrics
• Disadvantages
• The list of seeds is a problem
• The practical approximation might not be valid
• Non-uniform distribution
• Subject to link spamming

  39. Sec. 19.5 Conclusions
• No sampling solution is perfect
• Lots of new ideas…
• …but the problem is getting harder
• Quantitative studies are fascinating and a good research problem

  40. Another estimation method
• OR-query of frequent words in a number of languages
• According to such a query:
• Size of the web > 21,450,000,000 on 2007-07-07
• > 25,350,000,000 on 2008-07-03
• But the page counts in Google search results are only rough estimates.

  41. The Web document collection
• No design or coordination
• Distributed content creation and linking; democratization of publishing
• Content includes truth, lies, obsolete information, contradictions, …
• Unstructured (text, HTML, …), semi-structured (XML, annotated photos), structured (databases), …
• Scale much larger than previous text collections … but corporate records are catching up
• Growth: slowed down from the initial 'volume doubling every few months', but still expanding
• Content can be dynamically generated

  42. Documents
• Dynamically generated content (the deep web)
• Dynamic pages are generated from scratch when the user requests them, usually from underlying data in a database
• Example: the current status of flight LH 454
• Most (truly) dynamic content is ignored by web spiders
• It's too much to index it all
• Actually, a lot of 'static' content is also assembled on the fly (ASP, PHP, etc.: headers, dates, ads, etc.)

  43. Sec. 19.4.1 Web search: overall picture
[Diagram: the user sends queries to the search interface, which answers from the indexes and ad indexes; the indexer builds these from pages gathered by a web spider following links on the Web.]

  44. Users
• Use short queries (fewer than 3 terms on average)
• Rarely use operators
• Don't want to spend a lot of time composing a query
• Only look at the first couple of results
• Want a simple UI, not a search engine start page overloaded with graphics
• Extreme variability in user needs, expectations, experience, knowledge, …
• Industrialized/developing world, English/Estonian, old/young, rich/poor, differences in culture and class
• One interface for hugely divergent needs

  45. Users' evaluation of search engines
• Classic IR relevance (as measured by F, or by precision and recall) can also be used for web IR
• Equally important: trust, duplicate elimination, readability, fast loading, no pop-ups
• On the web, precision is more important than recall
• Precision at 1, precision at 10, precision on the first 2–3 pages
• But there is a subset of queries where recall matters

  46. Queries
• Queries have a power-law distribution
• Power law again!
• Same pattern as before: a few very frequent queries and a large number of very rare ones
• Examples of rare queries: searches for names, towns, books, etc.

  47. Types of queries
• Informational user needs: I need information on something (~40% / 65%)
• 'web service', 'information retrieval'
• Navigational user needs: I want to go to this web site (~25% / 15%)
• 'hotmail', 'myspace', 'United Airlines'
• Transactional user needs: I want to make a transaction (~35% / 20%)
• Buy something: 'MacBook Air'
• Download something: 'Acrobat Reader'
• Chat with someone: 'live soccer chat'
• Gray areas
• Find a good hub
• Exploratory search: 'see what's there'
• Difficult problem: how can the search engine tell what the user's need or intent is for a particular query?

  48. How far do people look for results? (Source: iprospect.com, WhitePaper_2006_SearchEngineUserBehavior.pdf)

  49. Users' empirical evaluation of results
• The quality of pages varies widely; relevance is not enough
• Other desirable qualities (non-IR!)
• Content: trustworthy, diverse, non-duplicated, well maintained
• Web readability: displays correctly and loads fast
• No annoyances: pop-ups, etc.
• Precision vs. recall
• On the web, recall seldom matters
• What matters: precision at 1? Precision above the fold?
• Comprehensiveness: must be able to deal with obscure queries
• Recall matters when the number of matches is very small
• User perceptions may be unscientific, but they are significant over a large aggregate

  50. Users' empirical evaluation of engines
• Relevance and validity of results
• UI: simple, no clutter, error tolerant
• Trust: results are objective
• Coverage of topics for polysemous queries
• Pre-/post-processing tools provided
• Mitigate user errors (auto spell-check, search assist, …)
• Explicit: search within results, more like this, refine, …
• Anticipative: related searches
• Deal with idiosyncrasies
• Web-specific vocabulary
• Impact on stemming, spell-check, etc.
• Web addresses typed in the search box
• 'The first, the last, the best and the worst …'
