THE GOOGLE SPECTRUM

VARUN IYER

Abstract. Google's PageRank algorithm is vastly superior to its rival search algorithms because of the mathematics behind it. As one of the many results of Perron's Theorem, the algorithm is a discrete-time finite-state Markov chain that relies upon the spectrum of its transition probability matrix for its convergence. This paper introduces the spectrum, Perron's Theorem and its proof, some of its applications, and basic Markov chain theory, concluding with a description of the PageRank algorithm.

Contents
1. Introduction
2. The Spectrum of a Graph
3. Perron's Theorem
4. Some Applications of Perron's Theorem
   Ecological Population Model
   Walrasian Stability of Competitive Markets
5. The Power Method and Markov Chains
6. The Mathematics of Google's PageRank
Acknowledgments
References

1. Introduction

Larry Page and Sergey Brin developed Google's PageRank algorithm in 1998, during the early stages of the Internet. By 2002, two years before Google's IPO, PageRank was operating on a matrix of order 2.7 billion, cited by Cleve Moler as "The World's Largest Matrix Computation" [4]. Today Google searches over 30 trillion webpages, 100 billion times a month, and those numbers are still growing as new sites appear and new users turn to Google daily. The algorithm, which was far more efficient than its rival '90s search engines at ranking web pages for a given search, remains exceedingly relevant today, with only minor modifications over the years. To understand Google's PageRank is to understand the development and structure of the Internet. The goal of this paper is to highlight what makes the Google PageRank algorithm so superior to its rival search engines by explaining its core mathematics. To get to this discussion, however, we need to cover several different topics.
First, we begin with some spectral graph theory to provide a graphical understanding of the spectrum, a tool that will be used numerous times. Describing some properties of graphs that can be found from the spectrum will provide intuition for when we later discuss an example of a large directed graph, the Internet. We move on to stating and proving Perron's Theorem, an elegant foundation for much of practical mathematics. A related theorem shows the convergence properties of iterative processes, which we will use to discuss some of Perron's other applications in ecology and economics. We then define the power method, a central tool in the PageRank algorithm and other Perron applications, and introduce some Markov chain theory, which is crucial to understanding the intuition behind PageRank. Finally, after explaining in detail the rationale behind the Google matrix and the PageRank algorithm, we end with a proof of the Google matrix's spectrum, which is fundamentally responsible for the algorithm's efficiency. Hopefully this paper explains the first revolutionary search algorithm in depth, and will perhaps inspire future algorithms to be modeled on a similar mathematical foundation.

2. The Spectrum of a Graph

Spectral graph theory is the study of the properties of graphs through the eigenvalues and eigenvectors of the graph's associated matrices, including the adjacency matrix. We begin by defining a graph and introducing the graph's spectrum. In later sections, we will prove a useful theorem about the spectrum of a positive matrix, and then use the spectrum to study a particular graph, the Internet. For now, this introduction to spectral graph theory should suggest how manipulating the matrix of a graph can provide useful information about the graph's structure.

Definition 2.1.
A graph is an ordered pair G = (V, E), where V is the set of vertices and E is the set of edges, such that

E ⊂ {(x, y) | x, y ∈ V, x ≠ y}.

The graph we defined above is an undirected graph. A directed graph contains edges with directions, where an edge that joins vᵢ to vⱼ may not necessarily join vⱼ to vᵢ. In this section, we limit our discussion to undirected graphs, but the concept of directed graphs will become important in a later section. If vᵢ and vⱼ are vertices of G and e = {vᵢ, vⱼ} is an edge that joins the two vertices, we call vᵢ and vⱼ adjacent to one another. In other words, two vertices are adjacent in a graph if there is a single connecting edge. The degree d(v) of a vertex v is the number of vertices that are adjacent to v. We will now define the adjacency matrix, which will be crucial to understanding the graph's properties.

Definition 2.2. Suppose G is a graph with vertex set V = {v₁, v₂, …, vₙ}. The adjacency matrix of graph G is the n × n matrix A whose entries aᵢⱼ are given by

aᵢⱼ = 1 if vᵢ and vⱼ are adjacent, and aᵢⱼ = 0 otherwise.

The adjacency matrix is both real and symmetric, by definition. Since the labeling of a graph's vertices is arbitrary, properties of the matrix that are invariant under permutations of rows and columns are of particular interest. Suppose that λ is an eigenvalue of A. Since A is real and symmetric, λ is also real. This λ is a root of the characteristic polynomial det(λI − A). We now have the tools to define the spectrum of a graph.

Definition 2.3. The spectrum of graph G is the set of eigenvalues of A together with their respective algebraic multiplicities. If the distinct eigenvalues of A are λ₀ > λ₁ > ⋯ > λₖ with multiplicities m₀, m₁, …, mₖ, then we write

Spec(G) = ( λ₀ λ₁ ⋯ λₖ ; m₀ m₁ ⋯ mₖ ).

Applications of the spectrum will come into play in later sections. For now, we will show some properties of graphs that can be derived from A.
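As a concrete illustration (not from the paper), the definitions above can be checked numerically. The sketch below builds the adjacency matrix of the path graph on three vertices, a graph of my own choosing, and confirms that A is symmetric with a real spectrum.

```python
import numpy as np

# Path graph P3: vertices 0-1-2 with edges {0,1} and {1,2} (my example).
n = 3
edges = [(0, 1), (1, 2)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1  # undirected, so A is symmetric

assert np.allclose(A, A.T)

# Real symmetric matrix => real eigenvalues; eigvalsh returns them ascending.
spectrum = np.linalg.eigvalsh(A)
# For P3 the spectrum is -sqrt(2), 0, sqrt(2), each with multiplicity 1.
assert np.allclose(spectrum, [-np.sqrt(2), 0.0, np.sqrt(2)])
```

Note that relabeling the vertices permutes the rows and columns of A identically, which leaves the spectrum unchanged, exactly the invariance the text points out.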
Suppose the characteristic polynomial det(λI − A) of G is

χ(G, λ) = λᵏ + c₁λᵏ⁻¹ + c₂λᵏ⁻² + c₃λᵏ⁻³ + ⋯ + cₖ.

The coefficients of χ can be expressed in terms of the principal minors of A, where a principal minor is the determinant of a submatrix obtained by taking a subset of the rows and the same subset of the columns. For the following proposition, we define a triangle T as a subgraph consisting of 3 mutually adjacent vertices.

Proposition 2.4. The coefficients of the characteristic polynomial χ of graph G have the following properties:
(1) c₁ = 0;
(2) −c₂ is the number of edges in G;
(3) −c₃ is twice the number of triangles in G.

Proof. For i ∈ {1, 2, …, k}, the value of (−1)ⁱcᵢ is the sum of the principal minors of A with i rows and i columns.
(1) The diagonal elements of A are all 0, so c₁ = 0.
(2) A nonzero principal minor of A with 2 rows and 2 columns must be

    | 0 1 |
    | 1 0 |  =  −1.

For each pair of adjacent vertices in G, there is one such minor. Hence (−1)²c₂ = −1 × |E|.
(3) The nontrivial possibilities for principal minors with three rows and columns are

    | 0 1 0 |         | 0 1 1 |         | 0 1 1 |
    | 1 0 0 | = 0,    | 1 0 0 | = 0,    | 1 0 1 | = 2.
    | 0 0 0 |         | 1 0 0 |         | 1 1 0 |

The only nonzero minor corresponds to 3 mutually adjacent vertices in G, that is, a triangle T. Thus (−1)³c₃ = 2 × |T|. □

We end our introduction to spectral graph theory by defining the walk and what it means for a graph to be connected. A walk of length l in G, from vᵢ to vⱼ, is a finite sequence of vertices of G,

vᵢ = u₀, u₁, …, uₗ = vⱼ,

such that consecutive elements of the sequence are adjacent. The powers of the adjacency matrix of a graph can be used to find the number of walks of length l.

Theorem 2.5. The number of walks of length l in graph G, from vᵢ to vⱼ, is the (i, j) entry of the matrix Aˡ.

Proof. We prove this theorem by induction. For walks of lengths 0 and 1, the result is true, as A⁰ = I and A¹ = A. Suppose the result is true for walks of length L.
For each walk of length L + 1 from vᵢ to vⱼ, there is a walk of length L from vᵢ to a vertex vₖ adjacent to vⱼ. This bijection implies that the number of such walks of length L + 1 is

∑_{{vₖ,vⱼ}∈E} (Aᴸ)ᵢₖ = ∑_{k=1}^{n} (Aᴸ)ᵢₖ aₖⱼ = (Aᴸ⁺¹)ᵢⱼ.

Hence, the result is true by induction. □

Definition 2.6. A graph is connected if each pair of vertices is joined by a walk. A directed graph is said to be strongly connected if there is a walk of directed edges between any pair of vertices.

For more on spectral and algebraic graph theory, see Biggs [1]. In the next section, we move away from graphs to discuss a property of positive square matrices quintessential to many practical applications, stated in Perron's Theorem.

3. Perron's Theorem

In 1907, Oskar Perron proved a theorem that would play a role in numerous fields of practical mathematics. Many measured interactions that take place in the real world can be described by nonnegative or positive matrices, in which every element is nonnegative or positive, and a large number of important models take the form of simple linear iterative processes. Such processes begin with an initial state x₀ and evolve recursively as xₖ = Aᵏx₀. Perron's Theorem tells us when such an iterative process converges using, interestingly enough, the spectrum of the matrix A.
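The convergence claimed here can be previewed numerically. The sketch below, my own illustration rather than the paper's, iterates xₖ = Axₖ₋₁ (normalized at each step) for a small positive matrix; I have chosen its columns to sum to 1, anticipating the transition matrices of the later Markov-chain sections. The iterates settle onto a positive eigenvector whose eigenvalue is the spectral radius of A, the behavior Perron's Theorem guarantees.

```python
import numpy as np

# A small entrywise-positive matrix (my choice); its columns sum to 1,
# so it is also column-stochastic, like a Markov transition matrix.
A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.4],
              [0.3, 0.1, 0.4]])

# Iterate x_k = A x_{k-1}, renormalizing so the vector neither blows
# up nor vanishes; only the direction of x matters.
x = np.ones(3)
for _ in range(200):
    x = A @ x
    x = x / np.linalg.norm(x, 1)

# The limit is a positive eigenvector whose eigenvalue equals the
# spectral radius of A (here 1, since A is column-stochastic).
rho = np.max(np.abs(np.linalg.eigvals(A)))
assert np.all(x > 0)
assert np.allclose(A @ x, rho * x, atol=1e-8)
```

The number of iterations needed depends on the gap between the largest and second-largest eigenvalue magnitudes, which is exactly why the spectrum governs the convergence of such processes.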