Stochastic Modeling of Web Evolution
M. Vafopoulos, joint work with S. Amarantidis, I. Antoniou
Aristotle University, Department of Mathematics, Master in Web Science (supported by the Municipality of Veria)
SMTDA 2010, 2010/06/09, Chania, Crete, Greece, http://www.smtda.net/
Contents • What is the Web? • What are the issues and problems? • The Web as a Complex System • Query-Web Models • Stochastic Models and the Web
What is the Web? Internet ≠ Web Web: a system of interlinked hypertext documents (HTML) with unique addresses (URIs), accessed via the Internet (HTTP)
Web milestones 1989: Tim Berners-Lee proposes the idea at CERN 1994: Dertouzos (MIT) and Metakides (EU) create W3C, appointing TBL as director
Why is the Web so successful? It is based on an architecture (HTTP, URI, HTML) which is: • simple, free or cheap, open source, extensible • tolerant • networked • fun & powerful • universal
Why is it so successful? • New experience of exploring & editing huge amounts of information, people, and abilities anytime, from anywhere • The biggest human system with no central authority and control, but with log data (Yotta* Bytes/sec) • It has not yet revealed its full potential… *Yotta = 10^24
We knew the Web was big... • 1 trillion unique URIs (Google blog, 7/25/2008) • 2 billion users • Google: 300 million searches/day • US: 15 billion searches/month • 72% of the Web population are active on at least 1 social network … Source: blog.usaseopros.com/2009/04/15/google-searches-per-day-reaches-293-million-in-march-2009/
Web: the new continent • Facebook: 400 million active users • 50% of its active users log on to Facebook on any given day • 35 million users update their status each day • 60 million status updates posted each day • 3 billion photos uploaded to the site each month • Twitter: 75 million active users • 141 employees • Youtube: 350 million daily visitors • Flickr: 35 million daily visitors
Web: the new continent • Online advertising spending in the UK has overtaken television expenditure for the first time [4 billion Euros/year] (30/9/2009, BBC) • In the US, spending on digital marketing will overtake that on print for the first time in 2010 • Amazon.com: 50 million daily visitors • 60 billion dollars market capitalization • 24,000 employees
Web: What are the issues and related problems? • Safe surfing (navigating) • Find relevant and credible information (example: research) • Create successful e-business • Reduce tax evasion • Enable local economic development • Communicate with friends, colleagues, customers, citizens, voters,…
Need to study the Web The Web is the largest human information construct in history. The Web is transforming society… It is time to study it systematically as a stand-alone socio-technical artifact
How to study the Web? • Analyze the interplay among the: • Structure • Function • Evolution of the Web as a highly inter-connected, large complex system
Web Modeling • understand • measure and • model its evolution in order to optimize its social benefit through effective policies
What is the Structure of the Web? The Web as a Graph: • Nodes: the websites (URIs), more than 1 trillion • Links: the hyperlinks, 5 links per page (average) • Weights: link assessment The WWW graph is a Directed Evolving Graph [Figure: a small example of a weighted directed graph]
Statistical Analysis of Graphs: The degree distribution P(k) = P(d ≤ k) is the distribution function of the random variable d that counts the degree of a randomly chosen node.
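As a minimal illustration (not part of the original slides), the empirical degree distribution can be computed directly from an edge list; the edges and node labels below are hypothetical.

```python
from collections import Counter

# Hypothetical directed edge list (source, target); each pair is a hyperlink.
edges = [(1, 2), (1, 3), (2, 3), (4, 3), (4, 1)]

# In-degree of each node = number of incoming hyperlinks.
in_degree = Counter(target for _, target in edges)

nodes = {node for edge in edges for node in edge}
degrees = sorted(in_degree.get(node, 0) for node in nodes)

def P(k):
    """Empirical distribution function P(k) = P(d <= k)."""
    return sum(1 for d in degrees if d <= k) / len(degrees)

print([(k, round(P(k), 2)) for k in range(4)])
```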
Statistical Analysis of the Web Graph: four major findings • power law degree distribution (self-similarity): internet traffic (Faloutsos), Web links (Barabási) • small world property (the diameter is much smaller than the order of the graph): easy communication • many dense bipartite subgraphs • on-line property (the number of nodes and edges changes with time)
Distribution of links on the World-Wide Web P(k) ∼ k^(−γ) (power law) a, Outgoing links (URLs found on an HTML document); b, Incoming links; c, Average shortest path between two documents as a function of system size [Barabási et al., 1999]
Small World Property Social Communication Networks: Watts-Strogatz (1998), short average path lengths and high clustering. WWW Average Distance (Shortest Path) between 2 Documents: <ℓ> = 0.35 + 2.06 log(n) <ℓ> = 18.6, n = 8 × 10^8 (1999) <ℓ> = 18.9, n = 10^9 (2009) Two randomly chosen documents on the Web are on average 19 clicks away from each other. (Small World)
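A quick numerical check of the quoted estimates, assuming the base-10 logarithm used in [Barabási et al., 1999] (a reader's sketch, not from the slides):

```python
import math

def avg_distance(n):
    """Estimated average shortest path between two Web documents,
    <l> = 0.35 + 2.06 * log10(n), for a Web of n documents."""
    return 0.35 + 2.06 * math.log10(n)

print(round(avg_distance(8e8), 1))   # ~18.7, close to the quoted 18.6 for n = 8 x 10^8
print(round(avg_distance(1e9), 1))   # ~18.9 for n = 10^9
```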
Web dynamics • Search (PageRank, HITS, Markov matrices) • Traffic • Evolution • graph generators • Games • mechanism design (auctions) • Queries-search engine-Web
Search The Hyperlink Matrix: the PageRank vector π is an eigenvector of the hyperlink Markov matrix M for the eigenvalue 1, Mπ = π. π is a stationary distribution. π = (π(κ)), where π(κ) is the PageRank of web page κ. dim M = the number of web pages that can be crawled by search engines.
Basis of Google’s Algorithm • If the Markov matrix M is ergodic, the stationary distribution vector π is unique. • If the Markov matrix M is mixing, then π is obtained as the limit of M^t ρ (t → ∞) for every initial probability distribution ρ. • The 2nd eigenvalue of M estimates the speed of convergence.
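A minimal power-iteration sketch of this computation, assuming a toy column-stochastic hyperlink matrix (the 3-page matrix below is invented for illustration):

```python
import numpy as np

# Toy column-stochastic hyperlink matrix M for 3 pages:
# column j holds the transition probabilities out of page j.
M = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

def pagerank(M, tol=1e-10, max_iter=1000):
    """Power iteration: pi = lim M^t rho for any initial distribution rho,
    provided M is mixing (as stated above)."""
    n = M.shape[0]
    pi = np.full(n, 1.0 / n)          # arbitrary initial distribution rho
    for _ in range(max_iter):
        new_pi = M @ pi
        if np.abs(new_pi - pi).sum() < tol:
            break
        pi = new_pi
    return pi

print(pagerank(M))                    # stationary distribution satisfying M pi = pi
```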
Internet Traffic Prigogine and Herman 1971: stochastic model of vehicular traffic dynamics based on statistical physics, between the macroscopic "fluid dynamics" model and the individual vehicle model (a Boltzmann-like kinetic equation). f0 is the "desired" velocity distribution function; x and v are the position and velocity of the "vehicles"; ⟨v⟩ is the average velocity; c is the concentration of the "vehicles"; P is the probability of "passing", in the sense of an increase of flow; T is the relaxation time.
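The kinetic equation itself did not survive the slide export; for reference, the Prigogine-Herman equation is usually written roughly as follows (reader's reconstruction, using the symbols defined above):

```latex
\frac{\partial f}{\partial t} + v\,\frac{\partial f}{\partial x}
  \;=\; \frac{f_{0} - f}{T} \;+\; c\,(1 - P)\,\bigl(\langle v \rangle - v\bigr)\, f ,
\qquad f = f(x, v, t)
```

The first term on the right relaxes the velocity distribution toward the desired distribution f0 within the relaxation time T; the second models interactions between vehicles when passing is not possible.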
Adaptation of the Prigogine-Herman Model for Internet Traffic [Antoniou, Ivanov 2002, 2003] • Vehicles = the Information Packages • Statistics of Information Packages: Log-Normal Distribution
The Origin of Power Law in Network Structure and Network Traffic Kolmogorov 1941, The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers, Dokl. Akad. Nauk SSSR 30, 301. The origin of Self-Similar Stochastic Processes: a model of homogeneous fragmentation. Applying a variant of the central limit theorem, Kolmogorov found that the logarithms of the grain sizes are normally distributed (before Fractals and modern scale-free models). Wavelet Analysis of data [Antoniou, Ivanov 2002]
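A minimal simulation of this mechanism (a sketch, not from the slides): each grain keeps a random fraction of its size at every split, so the logarithm of the final size is a sum of i.i.d. terms and, by the central limit theorem, approximately normal, i.e. the sizes are log-normally distributed.

```python
import math
import random

random.seed(0)

def final_grain_size(n_splits=50):
    """Homogeneous fragmentation: keep a random fraction at each split."""
    size = 1.0
    for _ in range(n_splits):
        size *= random.uniform(0.1, 0.9)
    return size

# The log-sizes of many grains cluster around a normal law.
log_sizes = [math.log(final_grain_size()) for _ in range(10_000)]
mean = sum(log_sizes) / len(log_sizes)
var = sum((x - mean) ** 2 for x in log_sizes) / len(log_sizes)
print(round(mean, 2), round(var, 2))
```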
Evolution: Graph Generators • Erdős-Rényi (ER) model [Erdős, Rényi ‘60] • Small-world model [Watts, Strogatz ‘98] • Preferential Attachment [Barabási, Albert ‘99] • Edge-Copying models [Kumar et al. ‘99], [Kleinberg et al. ‘99] • Forest Fire model [Leskovec, Faloutsos ‘05] • Kronecker graphs [Leskovec, Chakrabarti, Kleinberg, Faloutsos ‘07] • Optimization-based models [Carlson, Doyle ‘00], [Fabrikant et al. ’02]
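As an illustration of one generator from this list, a minimal preferential-attachment sketch (the parameters and data structures are simplified, not taken from any of the cited papers):

```python
import random

def preferential_attachment(n_nodes, m_links=2, seed=0):
    """Barabasi-Albert-style generator: each new node attaches m_links edges
    to existing nodes with probability proportional to their current degree."""
    random.seed(seed)
    edges = [(0, 1)]                  # small seed graph
    targets = [0, 1]                  # node list repeated proportionally to degree
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < min(m_links, new):
            chosen.add(random.choice(targets))   # degree-proportional choice
        for old in chosen:
            edges.append((new, old))
            targets.extend([new, old])           # update degree weights
    return edges

print(len(preferential_attachment(1000)))        # roughly 2 edges per node
```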
Evolution: Game theoretic models • Stageman (2004), Information Goods and Advertising: An Economic Model of the Internet • Zsolt Katona and Miklos Sarvary (2007), Network Formation and the Structure of the Commercial World Wide Web • Kumar (2009), Why do Consumers Contribute to Connected Goods
Evolution: Queries - Search Engine - Web Kouroupas, Koutsoupias, Papadimitriou, Sideri (KKPS) 2005 Economic-inspired model (utility) Explains scale-free behavior In the Web three types of entities exist: • Documents, i.e. web pages, created by authors [n] • Users [m] • Topics [k] • k ≤ m ≤ n
the KKPS model • The Search Engine recommends Documents to the Users • A User obtains satisfaction (Utility) after being presented with some Documents by a Search Engine • Users choose and endorse those Documents that have the highest Utility for them, and then • Search Engines make better recommendations based on these endorsements
Documents • For each topic t ≤ k there is a Document vector Dt of length n (relevance of Document d for Topic t) • For Dt the value 0 is very probable, so that about k - 1 of every k entries are 0
User-Query • There are Users that can be thought of as simple Queries asked by individuals. • For each topic t there is a User vector Rt of length m (relevance of User-Query i for Topic t) • with about m/k non-zero entries
User-Query • the number of Documents proposed by the Search Engine is fixed and denoted by α • the number of endorsements per User-Query is also fixed and denoted by b • b ≤ α ≤ n
the algorithm Step 1: A User-Query, for a specific Topic, is entered in the Search Engine. Step 2: The Search Engine recommends α relevant Documents. The listing order is defined by a rule; in the very first operation of the Search Engine the rule is random listing according to some probability distribution. Step 3: Among the α recommended Documents, b are endorsed on the basis of highest Utility. In this way, the bipartite graph S = ([m], [n], L) of Document endorsements is formed. Compute the in-degree of the Documents from the endorsements.
the algorithm Step 4: Repeat Step 1 for another Topic. Step 5: Repeat Step 2. The rule for Document listing is the decreasing in-degree for the specific User-Query computed in Step 3. Step 6: Repeat Step 3. Step 7: Repeat Steps 4, 5, 6 for a number of iterations necessary for statistical convergence ("that is, until very few changes are observed in the www state").
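A compact simulation sketch of Steps 1-7 under simplifying assumptions (reader's illustration, not the authors' code): utilities are taken as dot products of the topic vectors, recommendations are by global in-degree with random tie-breaking (which also covers the random first pass), and endorsements accumulate across iterations; the dimensions n, m, k, a, b below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 500, 100, 10          # documents, user-queries, topics (k <= m <= n)
a, b = 10, 3                    # recommended / endorsed documents (b <= a <= n)

# Sparse topic relevances: roughly (k-1)/k of the entries are zero,
# mimicking the sparsity described on the Documents and User-Query slides.
D = rng.random((k, n)) * (rng.random((k, n)) < 1.0 / k)
R = rng.random((k, m)) * (rng.random((k, m)) < 1.0 / k)

U = R.T @ D                     # utility of each Document for each User-Query
in_degree = np.zeros(n, dtype=int)

for _ in range(20):             # iterate until the endorsement counts stabilise
    for u in range(m):
        # Step 2 / Step 5: recommend the a documents of highest in-degree,
        # ties broken at random (on the first pass everything is a tie).
        order = np.lexsort((rng.random(n), -in_degree))
        recommended = order[:a]
        # Step 3 / Step 6: endorse the b recommended documents of highest utility.
        endorsed = recommended[np.argsort(-U[u, recommended])[:b]]
        in_degree[endorsed] += 1

print(np.sort(in_degree)[-10:])  # a few documents accumulate most endorsements
```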
results of statistical experiments (KKPS) • For a wide range of values of the parameters m, n, k, a, b, the in-degree of the documents is power-law distributed • The price of anarchy (efficiency of the algorithm) improves radically during the first 2-3 iterations and at a slower rate afterwards
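A rough way to check the first finding on simulated data is a least-squares fit of the log-log in-degree histogram (a quick diagnostic sketch, not the estimator used by KKPS; the Pareto sample below is synthetic):

```python
import numpy as np

def power_law_exponent(degrees):
    """Rough power-law exponent gamma in P(k) ~ k^(-gamma), via a
    least-squares fit of the log-log frequency histogram."""
    degrees = np.asarray([d for d in degrees if d > 0])
    values, counts = np.unique(degrees, return_counts=True)
    freq = counts / counts.sum()
    slope, _ = np.polyfit(np.log(values), np.log(freq), 1)
    return -slope

# Example with synthetic, Pareto-distributed in-degrees.
rng = np.random.default_rng(1)
sample = np.floor(rng.pareto(1.5, 5000) + 1).astype(int)
print(round(power_law_exponent(sample), 2))
```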
results of statistical experiments (KKPS) • When the number of topics k increases, the efficiency of the algorithm increases • When a (the number of documents recommended by the search engine) increases, the efficiency of the algorithm also increases • Increasing b (the number of endorsed documents per user) causes the efficiency of the algorithm to decrease
results of statistical experiments (Amarantidis-Antoniou-Vafopoulos) We extend the investigation in two directions: • for Uniform, Poisson and Normal initial random distributions of Document in-degree (Step 2), and • for different values of α, b and k
results of statistical experiments (Amarantidis-Antoniou-Vafopoulos) In the case α = b, the validity of the power law becomes less significant as b increases. b: number of endorsed documents per user; α: number of documents recommended by the search engine
results of statistical experiments (Amarantidis-Antoniou-Vafopoulos) An increase in the number of Topics k results in a faster decay of the power law exponent
efficiency of the search algorithm (α = b) The efficiency of the search algorithm increases when the number of topics k increases [confirmation of KKPS results]
efficiency of the search algorithm (b ≤ α) The efficiency of the search algorithm increases when the number α of Documents recommended by the Search Engine increases [confirmation of KKPS results]
efficiency of the search algorithm (b ≤ α) The efficiency of the search algorithm increases when the number b of endorsed Documents per User-Query increases [KKPS results not confirmed]
Discussion of statistical experiments (Amarantidis-Antoniou-Vafopoulos) • α = b: all recommended Documents are endorsed according to the highest in-degree criterion • Utility is useful only in terms of establishing compatibility between the Utility Matrix and the Users-Queries and Documents bipartite graph
Discussion of statistical experiments (Amarantidis-Antoniou-Vafopoulos) • On the origin of the power law distribution of the in-degree of Documents, two mechanisms are identified in the KKPS model: • Users-Queries endorse a small fraction of the Documents presented (b) • Assuming a small fraction of poly-topic Documents, the algorithm creates a high number of endorsements for them • The above mechanisms are not exhaustive for the real Web graph. Indexing algorithms, crawler design, Document structure and evolution should be examined as possible additional mechanisms contributing to the manifestation of the power law distribution
Discussion on the Endorsement Mechanism “The endorsement mechanism does not need to be specified, as soon as it is observable by the Search Engine. For example, endorsing a Document may entail clicking it, or pointing a hyperlink to it.” This KKPS hypothesis does not take into account the fundamental difference between clicking a link (browsing) and creating a hyperlink.
discussion Web traffic is observable by the website owner or administrator through the corresponding log file, and by authorized third parties (like search engine cookies, which can trace clicking behavior, or malicious software).
discussion On the contrary, creating a hyperlink results in a more “permanent” link between two Documents, which is observable by all Users-Queries and Search Engines. Therefore, the KKPS algorithm actually examines Web traffic and not the hyperlink structure of Documents, which is the basis of the in-degree-based Search Engine algorithm.