
Google’s Billion Dollar Eigenvector






Presentation Transcript


  1. Google’s Billion Dollar Eigenvector Gerald Kruse, PhD. Associate Professor of Mathematics and Computer Science Juniata College Huntingdon, PA kruse@juniata.edu http://faculty.juniata.edu/kruse

  2. Math, math, everywhere…

  3. Who is getting close to the $1 M?

  4. Here’s an interesting billboard…

  5. What happened for those who found the answer? • The answer is 7427466391 • Those who typed in the URL, http://7427466391.com, ended up getting another puzzle. Solving that led them to a page with a job application for… • Google!

  6. First Question: Just what does it take to solve that problem? Calculations (most probably on a computer), knowledge of number theory, and a general aptitude and interest in problem solving.

  7. Second Question: Why does Google want to hire people who know how to find that? What does it have to do with a search engine? Hmmm… Google gives great search results. Maybe their ranking algorithm is mathematically based?

  8. “Google-ing” Google

  9. Results in an early paper from Page, Brin, et al. while in graduate school

  10. Search Engines: We’ve all used them, but what is “under the hood?” • Crawl the web and locate all public pages • Index the “crawled” data so it can be searched • Rank the pages for more effective searching (the focus of this talk) • Each word which is searched on is linked with a list of pages (just URLs) which contain it. The pages with the highest rank are returned first.

  11. Note:Google ONLY uses the link structure of the World Wide Web to determine a page’s rank, NOT its content.

  12. PageRank is NOT a simple citation index. Which is the more popular page below, A or B? What if the links to A were from unpopular pages, and the one link to B was from www.yahoo.com? (diagram: pages A and B) • NOTE: • Rankings based on a citation index would be very easy to manipulate • While PageRank is an important part of Google’s search results, it is not the sole means used to rank pages.

  13. Intuitively, PageRank is analogous to popularity • The web as a graph: each page is a vertex, each hyperlink a directed edge. • A page is popular if a few very popular pages point (via hyperlinks) to it. • A page could be popular if many not-necessarily-popular pages point (via hyperlinks) to it. (diagram: Pages A, B, and C) Which of these three would have the highest PageRank?

  14. So what is the mathematical definition of PageRank? In particular, a page’s rank is equal to the sum of the ranks of all the pages pointing to it, with each incoming rank scaled down by the number of outgoing links on that page: rank(P) = Σ over pages Q linking to P of rank(Q) / outdegree(Q).

  15. Writing out the equation for each web page in our example (A links to B and C, B links to C, and C links to A) gives: x_A = x_C, x_B = x_A / 2, x_C = x_A / 2 + x_B. (diagram: Pages A, B, and C)

  16. Even though this is a circular definition, we can calculate the ranks. Re-write the system of equations as a matrix-vector product, A x = x. The PageRank vector x is simply an eigenvector of the coefficient matrix A, with eigenvalue λ = 1.
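This is easy to check on a small example. The sketch below builds the coefficient matrix for a hypothetical 3-page web (A links to B and C, B links to C, C links to A; the graph is an assumption, since the slide's diagram is not reproduced here, but it is consistent with the ranks quoted later) and asks NumPy for the eigenvector with eigenvalue 1:

```python
import numpy as np

# Hypothetical 3-page web: A -> B, A -> C, B -> C, C -> A.
# Column j lists page j's outgoing links, each weighted by 1/outdegree(j).
A = np.array([
    [0.0, 0.0, 1.0],   # links INTO page A (from C)
    [0.5, 0.0, 0.0],   # links INTO page B (from A)
    [0.5, 1.0, 0.0],   # links INTO page C (from A and B)
])

vals, vecs = np.linalg.eig(A)
# Pick the eigenvector whose eigenvalue is (numerically) 1.
v = vecs[:, np.argmin(np.abs(vals - 1.0))].real
ranks = v / v.sum()      # normalize so the entries sum to 1
print(ranks)             # approximately [0.4, 0.2, 0.4]
```

Note that `np.linalg.eig` returns eigenvectors as columns, scaled to unit length, so the final division both fixes the sign and rescales the entries to sum to 1.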

  17. Wait… what’s an eigenvector?

  18. A Graphical Interpretation of a 2-Dimensional Eigenvector (http://cnx.org/content/m10736/latest/) If we have some 2-D vector x, and some 2 x 2 matrix A, generally their product, A*x = b, will result in a new vector, b, which points in a different direction and has a different length than x. But if the vector (v in the image at the left) is an eigenvector of A, then A*v will give a vector in the same direction as v, just scaled to a different length by λ. Note that λ is called an eigenvalue of A.
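The same picture in code: multiplying a generic vector by A changes its direction, while multiplying an eigenvector only rescales it. A minimal sketch with a hand-picked matrix (not the one from the slide):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 0.0])   # a generic vector: A @ x points in a new direction
v = np.array([1.0, 1.0])   # an eigenvector of A

print(A @ x)   # [2. 1.] -- not a scalar multiple of x
print(A @ v)   # [3. 3.] -- same direction as v, scaled by the eigenvalue 3
```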

  19. (diagram) Page A: PageRank = 0.4, Page B: PageRank = 0.2, Page C: PageRank = 0.4. Note: we choose the eigenvector with non-negative entries summing to 1.

  20. Implementation Details • Billions of web-pages would make a huge matrix • The matrix (in theory) is column-stochastic, which allows for iterative calculation • Previous PageRank is used as an initial guess • Random-Surfer term handles computational difficulties associated with a “disconnected graph”

  21. Attempts to Manipulate Search Results Via a “Google Bomb”

  22. French Military Victories

  23. Juniata’s own “Google Bomb”

  24. At Juniata, CS 315 is my “Analysis and Algorithms” course

  25. Liberals vs. Conservatives! As of November 2007, Google no longer returns this!

  26. “Ego Surfing” Be very careful…

  27. More than one Gerald Kruse…

  28. Miscellaneous points: Try a search in Google on “PigeonRank.” What types of sites would Google NOT give good results on? PageRank is not the only means Google uses to order search results.

  29. Bibliography [1] S. Brin, L. Page, et al., The PageRank Citation Ranking: Bringing Order to the Web, http://dbpubs.stanford.edu/pub/1999-66, Stanford Digital Libraries Project (January 29, 1998). [2] K. Bryan and T. Leise, The $25,000,000,000 Eigenvector: The Linear Algebra behind Google, SIAM Review, 48 (2006), pp. 569-581. [3] G. Strang, Linear Algebra and Its Applications, Brooks-Cole, Boston, MA, 2005. [4] D. Poole, Linear Algebra: A Modern Introduction, Brooks-Cole, Boston, MA, 2005.

  30. Any Questions? Slides available at http://faculty.juniata.edu/kruse

  31. The following slides give some of the more in-depth mathematics behind Google

  32. Note that the coefficient matrix is column-stochastic*, and every column-stochastic matrix has 1 as an eigenvalue. (*As long as there are no “dangling nodes” and the graph is connected.)
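This fact is easy to verify numerically: take any non-negative matrix, normalize its columns to sum to 1, and 1 shows up among its eigenvalues (the all-ones row vector satisfies ones @ A = ones, and A and its transpose share eigenvalues). A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
A = A / A.sum(axis=0)               # normalize each column to sum to 1

vals = np.linalg.eigvals(A)
print(np.isclose(vals, 1.0).any())  # True: 1 is always an eigenvalue
```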

  33. Dangling Nodes have no outgoing links. In this example, Page C is a dangling node: its associated column in the coefficient matrix is all 0. Matrices like these are called column-substochastic. (diagram: Pages A, B, and C) In Page, Brin, et al. [1], they suggest dangling nodes most likely would occur from pages which haven’t been crawled yet, and so they “simply remove them from the system until all the PageRanks are calculated.” It is interesting to note that a column-substochastic matrix does have a positive eigenvalue and corresponding eigenvector with non-negative entries, which is called the Perron eigenvector, as detailed in Bryan and Leise [2].
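A small numerical illustration (the link structure here is made up, since the slide's diagram is not reproduced: A links to B and C, B links to A and C, and C links nowhere). The largest eigenvalue is positive but drops below 1, and its eigenvector, the Perron eigenvector, still has non-negative entries:

```python
import numpy as np

# Page C (column 2) is a dangling node, so its column is all 0.
A = np.array([
    [0.0, 0.5, 0.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
print(vals[k].real)          # 0.5 -- the Perron eigenvalue, positive but < 1
perron = vecs[:, k].real
perron = perron / perron.sum()
print(perron)                # non-negative entries: [0.25, 0.25, 0.5]
```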

  34. A disconnected graph could lead to non-unique rankings. Notice the block-diagonal structure of the coefficient matrix. (Note: re-ordering via permutation doesn’t change the ranking, as in [2].) (diagram: Pages A through E) In this example, the eigenspace associated with eigenvalue λ = 1 is two-dimensional. Which eigenvector should be used for ranking?
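The multiplicity is easy to see numerically. The sketch below uses a smaller hypothetical disconnected web than the slide's five-page example: two separate two-page loops (A↔B and C↔D). Each connected block contributes its own eigenvalue 1, so the eigenspace for λ = 1 is two-dimensional:

```python
import numpy as np

# Two disconnected 2-page loops: A <-> B and C <-> D.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

vals = np.linalg.eigvals(A)
print(np.isclose(vals, 1.0).sum())  # 2 -- eigenvalue 1 appears twice
```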

  35. Add a “random-surfer” term to the simple PageRank formula. Let S be an n x n matrix with all entries 1/n. S is column-stochastic, and we consider the matrix M = (1 - m)A + mS, a weighted average of A and S. This models the behavior of a real web-surfer, who might jump to another page by directly typing in a URL or by choosing a bookmark, rather than clicking on a hyperlink. Originally, m = 0.15 in Google, according to [2]. The product Mx can also be written as Mx = (1 - m)Ax + ms, where s is a column vector with all entries 1/n (note that Sx = s if the entries of x sum to 1). Important note: we will use this formulation with A when computing x.
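A sketch of this construction, using a randomly generated column-stochastic A for illustration, confirming that the two ways of writing the product agree whenever the entries of x sum to 1:

```python
import numpy as np

m = 0.15
n = 4
rng = np.random.default_rng(1)
A = rng.random((n, n))
A = A / A.sum(axis=0)            # make A column-stochastic
S = np.full((n, n), 1.0 / n)     # the "random surfer" matrix
M = (1 - m) * A + m * S          # weighted average of A and S

x = rng.random(n)
x = x / x.sum()                  # entries of x sum to 1
s = np.full(n, 1.0 / n)
# Since S @ x == s whenever x sums to 1, the two expressions agree:
print(np.allclose(M @ x, (1 - m) * (A @ x) + m * s))  # True
```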

  36. M for our previous disconnected graph, with m = 0.15. (diagram: Pages A through E) The eigenspace associated with eigenvalue λ = 1 is now one-dimensional, with a unique normalized eigenvector. So the addition of the random-surfer term permits comparison between pages in different subwebs.

  37. Iterative Calculation. By many estimates, the web currently contains at least 8 billion pages. How does Google compute an eigenvector for something this large? One possibility is the power method. In [2], it is shown that every positive (all entries > 0) column-stochastic matrix M has a unique vector q with positive components such that Mq = q, with Σ q_i = 1, and it can be computed as q = lim (k→∞) M^k x_0, for any initial guess x_0 with positive components and Σ (x_0)_i = 1.
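The power method itself is only a few lines. A minimal sketch, demonstrated on a small made-up positive column-stochastic matrix rather than a real web graph:

```python
import numpy as np

def power_method(M, x0, iters=100):
    """Repeatedly apply M; for a positive column-stochastic M this
    converges to the unique q with M @ q == q and sum(q) == 1."""
    x = x0 / x0.sum()        # scale the guess so its entries sum to 1
    for _ in range(iters):
        x = M @ x            # M is column-stochastic, so sum(x) stays 1
    return x

# Demo on a small positive column-stochastic matrix (made up for illustration):
rng = np.random.default_rng(2)
M = rng.random((5, 5)) + 0.1
M = M / M.sum(axis=0)

q = power_method(M, rng.random(5))
print(np.allclose(M @ q, q))   # True: q is the eigenvector for eigenvalue 1
```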

  38. Iterative Calculation continued. Rather than calculating the powers of M directly, we could use the iteration x_(k+1) = M x_k. Since M is positive (every entry is nonzero), each such product would be an O(n^2) calculation. As we mentioned previously, Google uses the equivalent expression in the computation: x_(k+1) = (1 - m)A x_k + ms. These products can be calculated without explicitly creating the huge coefficient matrix, since A contains mostly 0’s. The iteration is guaranteed to converge, and it will converge more quickly with a better first guess, so the previous PageRank vector is used as the initial vector.
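A sketch of that sparse-friendly iteration, working directly from per-page link lists so the dense matrix is never formed (the three-page graph here is the hypothetical example used earlier: A links to B and C, B to C, and C to A; a uniform start stands in for the previous PageRank vector):

```python
import numpy as np

def pagerank_sparse(out_links, n, m=0.15, iters=100):
    """Iterate x <- (1 - m) A x + m s using only the link lists.
    out_links[j] is the list of pages that page j links to."""
    x = np.full(n, 1.0 / n)      # the previous PageRank vector would go here
    s = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = m * s
        for j, targets in out_links.items():
            share = (1 - m) * x[j] / len(targets)
            for t in targets:    # distribute page j's rank over its links
                new[t] += share
        x = new
    return x

# Hypothetical 3-page example: A -> B, A -> C, B -> C, C -> A
ranks = pagerank_sparse({0: [1, 2], 1: [2], 2: [0]}, n=3)
print(ranks)   # entries sum to 1; page B ranks lowest
```

In production one would use a sparse-matrix library for the inner loops, but the structure of the computation is the same.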

  39. This gives a regular matrix • In matrix notation we have x_(k+1) = M x_k = ((1 - m)A + mS) x_k • Since S x_k = s (the entries of x_k sum to 1), we can rewrite this as x_(k+1) = (1 - m)A x_k + ms • The new coefficient matrix M is regular (all entries positive), so we can calculate the eigenvector iteratively. • This iterative process is a series of matrix-vector products, beginning with an initial vector (typically the previous PageRank vector). These products can be calculated without explicitly creating the huge coefficient matrix.
