The Google Markov Chain: Convergence Speed and Eigenvalues

U.U.D.M. Project Report 2012:14
The Google Markov chain: convergence speed and eigenvalues
Fredrik Backåker
Degree project in mathematics, 15 credits
Supervisor and examiner: Jakob Björnberg
June 2012
Department of Mathematics, Uppsala University

Acknowledgments

I would like to thank my supervisor Jakob Björnberg for helping me write this thesis.

Contents

1 Introduction
2 Definitions and background
  2.1 Markov chains
  2.2 The Google PageRank
3 Convergence speed
  3.1 General theory of convergence speed
  3.2 Convergence speed and eigenvalues of Google's Markov chain
4 Simulations
  4.1 Multiplicity of the second eigenvalue
  4.2 Quality of the limit distribution
5 Conclusion
6 References
Appendices: Matlab code

1 Introduction

There are many different search engines on the internet that help us find the information we want. These search engines use different methods to rank pages and display them to us in such a way that the most relevant and important information is shown first. In this thesis, we study a mathematical method that is part of how PageRank, the ranking method of the search engine Google, determines the order in which pages are displayed in a search. The method treats pages as the states of a Markov chain, where the outgoing links from a page are the transitions and the corresponding transition probabilities are divided equally among the outgoing links of that page. The resulting transition probability matrix is then used to compute a stationary distribution, in which the page with the largest stationary value is ranked first, the page with the second largest is ranked second, and so on.

The method comes in two variants, with or without a dampening factor. The variant without a dampening factor is the one just described. In the other variant, which we study in this thesis, the dampening factor (often set to 0.85) is introduced mainly to ensure that the stationary distribution is unique. This variant is considered the most useful one, and in this thesis we take a brief look at how the dampening factor affects the computation of PageRank.

We begin by going through some basic definitions for Markov chains and explaining the Google PageRank in more detail. In the section after, we go through some general theory about the rate of convergence of Markov chains, since the eigenvalues of a transition probability matrix turn out to be connected to the speed of convergence to its steady state. Further, we look at the second largest eigenvalue of the Google Markov chain and its algebraic multiplicity, which are the main factors affecting the convergence rate of the chain. Next, we go through some results on how the second eigenvalue of the Google Markov chain is bounded by the dampening factor, which makes the choice of the dampening factor very important. We end with some simulations that check how different properties of PageRank are affected by the choice of the dampening factor and, in particular, which value of the dampening factor is best suited for fast convergence of the Google Markov chain.

2 Definitions and background

2.1 Markov chains

A discrete-time Markov chain is a stochastic process {X_n} with finite state space S that satisfies the Markov property:

P(X_n = x_n | X_0 = x_0, ..., X_{n-1} = x_{n-1}) = P(X_n = x_n | X_{n-1} = x_{n-1})

for all x_0, ..., x_n in S and n ≥ 1. In other words, the next step of a Markov chain is independent of the past and relies only upon the most recent state.
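To illustrate the Markov property, a minimal Matlab/Octave sketch (not part of the original report) simulates a small hypothetical two-state chain; the transition matrix Q and the number of steps are assumptions chosen only for this example. Each new state is drawn using only the current state, exactly as in the definition above.

% Minimal sketch (hypothetical example): simulate a two-state Markov chain.
Q = [0.9 0.1;            % transition probabilities out of state 1
     0.4 0.6];           % transition probabilities out of state 2
nSteps = 10000;
X = zeros(1, nSteps);
X(1) = 1;                % start in state 1
for m = 2:nSteps
    % The next state depends only on the current state X(m-1) (Markov property).
    X(m) = find(cumsum(Q(X(m-1), :)) >= rand(), 1);
end
occupation = [mean(X == 1), mean(X == 2)]   % empirical occupation frequencies

For a long run, the empirical occupation frequencies approach the stationary distribution of the chain, which is the kind of limit behaviour made precise in Proposition 1 below.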
The chain is called time-homogeneous if the transition probabilities do not change over time, i.e. if, for each i, j in S,

p_ij = P(X_n = j | X_{n-1} = i)

does not depend on n. In this case the probabilities p_ij are the Markov chain's transition probabilities when moving from state i to state j. Also let p_ij^(m) = P(X_{n+m} = j | X_n = i) denote the transition probabilities in m steps, m = 0, 1, 2, ... The probabilities can be collected in a transition probability matrix, here denoted by P:

P = ( p_00  p_01  ...
      p_10  p_11  ...
      ...   ...   ... )

This matrix is called a stochastic matrix if all of its row vectors sum to one: Σ_j p_ij = 1 for every i. The Markov chain is said to be irreducible if it is possible to reach each state j from any other state i in some number of steps; more formally, if

P(X_n = j | X_0 = i) > 0 for some n ≥ 0, for all i, j.

A state i has period k if any return to state i can occur only in multiples of k steps:

k = greatest common divisor of the set { n : P(X_n = i | X_0 = i) > 0 }.

If all the states in a Markov chain have period one, the chain is said to be aperiodic, i.e. the greatest common divisor of the return times from any state to itself is one. The following result is standard and we do not prove it.

Proposition 1
A Markov chain that is irreducible and aperiodic with finite state space has a unique stationary distribution π, which is a probability vector such that π = πP. Additionally, the transition probabilities converge to a steady state as the number of steps goes to infinity, in the sense that

lim_{m→∞} p_ij^(m) = π_j for all i, j in S.

2.2 The Google PageRank

The Google PageRank is one of many methods that the search engine Google uses to determine the importance or relevance of a page. It uses a special Markov chain to compute the rank of web pages, and this rank determines the order in which the pages are listed in a Google search.

Let all the web pages Google communicates with be denoted by the state space W. The size of W is n, several billion pages. Let C = (c_ij) denote the connectivity matrix of W, meaning that C is an n×n matrix with c_ij = 1 if there is a hyperlink from page i to page j and c_ij = 0 otherwise. The number of outgoing links from page i is the row sum

s_i = Σ_{j=1}^{n} c_ij.

If s_i = 0, page i has no outgoing links and is called a dangling node. Let T = (t_ij) be given by t_ij = c_ij / s_i if s_i ≥ 1, and t_ij = 1/n if i is a dangling node. In this way, T can be seen as the transition probability matrix of a Markov chain with state space W. Furthermore, to define the Google Markov chain we include an additional parameter d, a dampening factor that can be set between 0 and 1. The transition probability matrix of the Google Markov chain is defined by

P = dT + (1 − d)(1/n)E,

where E is the n×n matrix of all ones. This Markov chain can be described as a "random surfer" who, with probability d, clicks on an outgoing link of the current web page, chosen with equal probabilities, or, if the page has no outgoing links, moves to a page chosen at random in W. With probability 1 − d, the surfer instead jumps to a page chosen at random among all n pages. The Google Markov chain is finite, and whether it is irreducible and aperiodic depends on the value of d.
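As a concrete illustration of this construction, the following Matlab/Octave sketch (not part of the original report) builds P for a small hypothetical web of four pages; the connectivity matrix C and the value d = 0.85 are assumptions chosen only for the example.

% Minimal sketch (hypothetical example): build the Google matrix P = d*T + (1-d)*E/n.
C = [0 1 1 0;
     1 0 0 0;
     0 0 0 0;            % page 3 is a dangling node (no outgoing links)
     1 1 1 0];
n = size(C, 1);
d = 0.85;                % dampening factor
s = sum(C, 2);           % number of outgoing links from each page
T = zeros(n);
for i = 1:n
    if s(i) >= 1
        T(i, :) = C(i, :) / s(i);   % follow one of the outgoing links, chosen uniformly
    else
        T(i, :) = 1 / n;            % dangling node: move to any page at random
    end
end
P = d * T + (1 - d) * ones(n) / n;  % every row of P sums to one

Every entry of P is positive as long as d < 1, which is what drives the irreducibility and aperiodicity discussed next.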
If d < 1, the chain is aperiodic, since every state has a positive probability of returning to itself in a single step and therefore has period one; it is also irreducible, since every entry of P is at least (1 − d)/n > 0. If d = 1, we get P = T, and the periodicity and irreducibility are completely determined by the outgoing links of the pages. It is then possible, for example, that two pages link only to each other and form a subset with period two. If so, the chain is neither aperiodic nor irreducible. Then there is no unique stationary distribution, and because the chain stays in a subset, the limit distribution π_j depends on the starting state. This would be the most realistic case considering how the internet is structured, and it is the main reason why the dampening factor d is introduced. Furthermore, d affects, as we will see, the convergence speed of the Google Markov chain.

In the computation of PageRank, d is usually set to 0.85 [1], and then the Google Markov chain is finite, irreducible and aperiodic. Hence, by Proposition 1, there exists a unique stationary distribution π. This stationary distribution is used to rank all the pages in W by letting the page with the largest π_i be ranked first, the page with the second largest be ranked second, and so on, until we have a Google PageRank for every page. One way of computing the Google PageRank is to simulate the transitions until an (approximate) steady state is reached; according to Brin and Page [1], the creators of Google, "a PageRank for 26 million web pages can be computed in a few hours on a medium size workstation".

3 Convergence speed

3.1 General theory of convergence speed

Since the Google PageRank involves many billions of pages, one would like to know how fast it can be computed. This can be determined by finding how fast the transition probability matrix of the Google Markov chain converges to its steady state as in Proposition 1. To find this rate of convergence, we need to go through some definitions and theorems.
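As a small numerical preview of what convergence speed means (not part of the original report), the sketch below reuses the matrix P and size n from the previous example and iterates π_{k+1} = π_k P until the change between successive iterates is small; the starting vector, tolerance and iteration cap are assumptions made only for this illustration.

% Minimal sketch: power method for the stationary distribution pi = pi*P,
% reusing P and n from the previous example.
pi_k = ones(1, n) / n;              % start from the uniform distribution
tol = 1e-10;
for k = 1:1000
    pi_next = pi_k * P;             % one step of the power method
    if norm(pi_next - pi_k, 1) < tol
        break                       % iterates have (approximately) converged
    end
    pi_k = pi_next;
end
[~, ranking] = sort(pi_k, 'descend')   % pages ordered by their PageRank

The number of iterations needed before the loop stops is governed by the second largest eigenvalue of P, which is the quantity studied in the remainder of this section.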