Markov Chains, Diffusion and Green Functions. Applications to Traffic Routing in Ad Hoc Networks and to Image Restoration

Markov chains, diffusion and Green functions - Applications to traffic routing in ad hoc networks and to image restoration

Chaînes de Markov, diffusion et fonctions de Green - Applications au routage dans les réseaux ad hoc et à la restauration d'images

Marc Sigelle*, Ian Jermyn† and Sylvie Perreau‡
Département Traitement du Signal et des Images, Groupe MM : Multimédia
May 2005; report 2009D003, January 12, 2009

Abstract

In this document, we recall the main definitions and basic properties of Markov chains on finite state spaces, Green functions, and partial differential equations (PDEs), together with their (approximate) resolution using diffusion walks in a discrete graph. We then apply these tools to the study of traffic propagation and repartition in ad hoc networks. Finally, we apply the same framework to image restoration (with and without a boundary process).

Résumé

We first recall the fundamental definitions and properties of discrete-time Markov chains on a finite state space. This framework allows the (approximate) resolution of certain classes of partial differential equations by means of discrete Green functions; it follows naturally that certain classes of random walks in finite graphs can be used to solve such problems. Finally, we apply the whole of this theory to the study of traffic propagation in regular ad hoc networks and to the restoration of noisy images, with or without a boundary process.

* Institut TELECOM, TELECOM ParisTech, CNRS UMR 5141, 46 rue Barrault, 75634 Paris Cedex 13, France
† INRIA Ariana, 2004 route des Lucioles, 06904 Sophia Antipolis Cedex, France
‡ Institute for Telecommunications Research, University of South Australia, Mawson Lakes, Adelaide, Australia

1 Introduction

In this document, we present and recall several basic definitions and properties of:

• Markov chains on finite state spaces;
• Green functions;
• partial differential equations (PDEs) and their (approximate) resolution using diffusion walks in a discrete graph, with particular emphasis on the modeling of boundary and source conditions;
• the application of these tools to traffic propagation and repartition in ad hoc networks, and to image restoration with an optional boundary process.

We try to present these topics in a pedagogical way.

2 Markov chains on a finite state space - Recalls

To start with, we consider a discrete finite state space $E$ of cardinal $N$. We then consider $\mathcal{E} = \mathbb{R}^{|E|}$, the vector space of real-valued bounded functions on $E$:

$$f : E \to \mathbb{R}, \qquad x \mapsto f(x).$$
Two functions (or classes of functions) play a particular role in $\mathcal{E}$:

• the family of indicator functions $e_{x_0} = \mathbb{1}_{x = x_0}$, $\forall x_0 \in E$, which are identified with the canonical basis vectors of $\mathcal{E}$;
• the constant function $\mathbf{1}$ defined by $x \mapsto 1$, $\forall x \in E$.

Now a probability distribution $P$ is a positive measure on $E$, i.e., a (positive) linear form with total mass 1 on $\mathcal{E}$. Defining

$$\Pr(X = x_0) = P(e_{x_0}) = P(\mathbb{1}_{x = x_0}) \qquad \forall x_0 \in E,$$

one has

$$P(f) = \sum_{x \in E} P(e_x)\, f(x) = \sum_{x \in E} \Pr(X = x)\, f(x) = \mathbb{E}[f],$$

with $\sum_{x \in E} \Pr(X = x) = 1$ of course. This last condition can be written as $P(\mathbf{1}) = 1$.

Also useful in the sequel will be the uniform measure on $E$, defined by

$$\mu_0(e_x) = \frac{1}{|E|} = \frac{1}{N} \qquad \forall x \in E.$$

2.1 stochastic matrices

definition: A stochastic matrix $Q$ is a matrix with nonnegative entries such that $Q\,\mathbf{1} = \mathbf{1}$.

right multiplication of a probability measure by a stochastic matrix: Let $P$ be a probability measure and $Q$ a (stochastic) matrix in $L(\mathcal{E})$. Then by definition

$$(PQ)(f) = P(Qf) \qquad \forall f \in \mathcal{E}.$$

In particular one has $(PQ)(\mathbf{1}) = P(\mathbf{1}) = 1$, and $PQ$ is thus indeed a probability measure on $E$. Now let $y \in E$. The probability measure $Q_r = PQ$ applied to $f(x) = \mathbb{1}_{x=y}$ writes:

$$Q_r(\mathbb{1}_{x=y}) = (PQ)(e_y) = P(Q(e_y)) = \sum_{x \in E} \Pr(X = x)\, Q_{x y}.$$

2.2 Markov chains on E

Let us consider a Markov chain with initial probability measure $P_0$ and transition matrix $Q$. We have

$$\Pr(X_0 = x_0, X_1 = x_1, \ldots, X_M = y) = \Pr(X_0 = x_0)\, Q_{x_0 x_1} \cdots Q_{x_{M-1} y}, \qquad (1)$$

so that, writing $P_M = \Pr(X_M = \cdot\,)$, one has very formally and concisely:

$$P_M = P_0\, Q^M.$$

In particular

$$P_M(y) = \Pr(X_M = y) = \sum_{x_0 \in E} \Pr(X_0 = x_0)\, (Q^M)_{x_0 y},$$

and thus $\Pr(X_M = y \mid X_0 = x_0) = (Q^M)_{x_0 y}$.

A formula useful in the sequel is the following: let us compute the expectation of some function $\psi \in \mathcal{E}$ at step $M$ (w.r.t. the previous Markov chain). It results directly from the previous formulas that

$$\mathbb{E}[\psi(X_M)] = P_M(\psi) = (P_0 Q^M)(\psi) = P_0(Q^M \psi) = \sum_{x_0 \in E} \Pr(X_0 = x_0)\, (Q^M \psi)(x_0). \qquad (2)$$

In particular:

$$\forall \psi \in \mathcal{E}, \qquad \mathbb{E}[\psi(X_M) \mid X_0 = x_0] = (Q^M \psi)(x_0). \qquad (3)$$

2.3 invariant measure - convergence

Recall that $\mathcal{E}$ is endowed with the $L^\infty$ norm $\|f\|_\infty = \sup_{x \in E} |f(x)|$, so that the variation distance between two measures is defined equivalently as

$$\|\mu - \nu\|_V = \sup_{\|f\|_\infty = 1} |\mu(f) - \nu(f)| \qquad (4a)$$
$$\phantom{\|\mu - \nu\|_V} = \sum_{x \in E} |\mu(e_x) - \nu(e_x)| = 2 - 2 \sum_{x \in E} \min(\mu(e_x), \nu(e_x)), \qquad (4b)$$

and satisfies, from the first equality (4a),

$$|\mu(f) - \nu(f)| \le \|\mu - \nu\|_V\, \|f\|_\infty \qquad \forall \mu, \nu, f.$$

Now, the Dobrushin contraction coefficient of a stochastic matrix $A$ is defined as

$$c(A) = \frac{1}{2} \sup_{i,j} \|A_i(\cdot) - A_j(\cdot)\|_V, \qquad 0 \le c(A) \le 1.$$

It ensures that (Winkler, 1995):

$$\|(\nu A) - (\mu A)\|_V \le c(A)\, \|\nu - \mu\|_V \qquad \forall \mu, \nu, A,$$
$$c(AB) \le c(A)\, c(B) \qquad \forall A, B.$$

In the case when $A$ is strictly positive, one has from the second equality (4b): $0 \le c(A) < 1$ (hence the contraction property). In this case, the Perron-Frobenius theorem ensures a unique invariant measure $\mu$ of $A$, i.e., $(\mu A) = \mu$. Thus one has, for any initial measure $P_0$:

$$\forall M \ge 0, \qquad \|(P_0 A^M) - \mu\|_V = \|(P_0 A^M) - (\mu A^M)\|_V \le c(A)^M\, \|P_0 - \mu\|_V, \qquad (5)$$

so that the measure $P_0 A^M$ converges to $\mu$ as $M \to +\infty$, whatever the initial measure $P_0$.

This can be generalized when some power $A^r$ of $A$ is strictly positive ($r > 1$), although $A$ itself may not be. Indeed, let us decompose in a Euclidean manner $M = nr + p$, $0 \le p < r$:

$$\|P_0 A^{nr+p} - \mu_r A^p\|_V = \|P_0 (A^r)^n A^p - \mu_r (A^r)^n A^p\|_V \le c(A^r)^n\, \|P_0 A^p - \mu_r A^p\|_V \le c(A^r)^n\, c(A)^p\, \|P_0 - \mu_r\|_V, \qquad 0 \le p < r. \qquad (6)$$
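To make these recalls concrete, here is a minimal numerical sketch (Python/NumPy; the 3-state transition matrix, test function and initial measure are made-up examples, not taken from the report). It propagates a distribution as $P_M = P_0 Q^M$, evaluates the conditional expectation $(Q^M \psi)(x_0)$ of equation (3), and checks the geometric convergence bound (5) through the Dobrushin coefficient.

```python
# Minimal sketch of Section 2 with an assumed 3-state chain (not from the report).
import numpy as np

Q = np.array([[0.50, 0.50, 0.00],   # transition matrix: rows sum to 1
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
P0 = np.array([1.0, 0.0, 0.0])      # initial measure concentrated on state 0
psi = np.array([0.0, 1.0, 2.0])     # an arbitrary test function on E

def dobrushin(A):
    """c(A) = (1/2) max_{i,j} sum_y |A[i,y] - A[j,y]| (variation distance between rows)."""
    n = A.shape[0]
    return 0.5 * max(np.abs(A[i] - A[j]).sum() for i in range(n) for j in range(n))

# invariant measure: left eigenvector of Q for eigenvalue 1, normalized to total mass 1
w, V = np.linalg.eig(Q.T)
mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
mu /= mu.sum()

for M in (1, 2, 5, 10, 20):
    QM = np.linalg.matrix_power(Q, M)
    PM = P0 @ QM                          # P_M = P_0 Q^M
    cond_exp = QM @ psi                   # E[psi(X_M) | X_0 = x0] = (Q^M psi)(x0), eq. (3)
    dist = np.abs(PM - mu).sum()          # ||P_M - mu||_V
    bound = dobrushin(Q) ** M * np.abs(P0 - mu).sum()   # right-hand side of (5)
    print(f"M={M:2d}  P_M={PM.round(4)}  (Q^M psi)(x0)={cond_exp[0]:.4f}  "
          f"||P_M - mu||_V={dist:.2e} <= {bound:.2e}")
```

For this example chain $c(Q) = 1/2$, so the variation distance to the invariant measure is at least halved at every step, which the printed bound makes visible.

3 Discretization of a class of linear (elliptic) PDEs

We follow Ycart (1997) and Rubinstein (1981) with slight changes.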
Find $f \in \mathcal{E}$ such that

$$\gamma\, (A - I)\, f = \phi, \qquad A \in L(\mathcal{E}), \qquad (7)$$

(possibly) subject to

$$f(x_s) = b(x_s) \ \text{known} \qquad \forall x_s \in B \subset E.$$

Here $\gamma$ is a constant, and $B$ represents the boundary of the domain $E$, viewed as a multi-dimensional (in general 2D) discrete lattice.

3.1 a linear algebra point of view

We split $E$ into boundary/non-boundary sites, so that the following decomposition holds:

$$E = \tilde{E} \cup B \quad \text{(state space)}, \qquad \mathcal{E} = \tilde{\mathcal{E}} \oplus \mathcal{B} \quad \text{(functional vector space)}, \qquad f = \tilde{f} + b \quad \text{(functions)}.$$

Problem (7) thus writes:

$$\gamma\, (A - I)\, \tilde{f} = \phi + \gamma\, (I - A)\, b, \qquad A \in L(\mathcal{E}).$$

Now let us denote by $\tilde{P}$ the linear projector onto the subspace $\tilde{\mathcal{E}}$ with kernel $\mathcal{B}$. One has $\tilde{f} = \tilde{P} f = \tilde{P} \tilde{f}$ and $\tilde{P} b = 0$. Left-application of $\tilde{P}$ to the previous equation yields

$$\gamma\, (\tilde{A} - I)\, \tilde{f} = \psi = \tilde{P} \phi - \gamma\, \tilde{P} A\, b, \qquad \text{with } \tilde{A} = \tilde{P} A \in L(\tilde{\mathcal{E}}). \qquad (8)$$

Note that $\tilde{P} A \tilde{P} = \tilde{A} \tilde{P}$ and $\tilde{P} \phi$ are the restrictions of $A$ (resp. $\phi$) to $\tilde{\mathcal{E}}$.

In the sequel $A$ will often be a stochastic matrix (see the Poisson-Laplace case below), so that $\mathbf{1}$ is an eigenfunction of $A$ with eigenvalue $\lambda_1 = 1$. In many cases the multiplicity of $\lambda_1$ is 1 (see below) and all other eigenvectors have eigenvalues such that $|\lambda_i| < 1$. Thus, solving (8) under proper invertibility conditions yields

$$\tilde{f} = -\frac{1}{\gamma}\, (I - \tilde{A})^{-1}\, \psi. \qquad (9)$$

3.1.1 an example: the Laplace and Poisson equations

$E$ is now a discrete sublattice of $\mathbb{Z}^2$ with lattice step $h$ (and thus unit cell size $h \times h$). This sublattice is endowed with a non-oriented, connected graph structure, the neighborhood relationship being denoted $x_t \sim x_s$ (e.g. 4-connectivity). Now

$$\Delta f(x_s) \approx \frac{\left[\sum_{x_t \sim x_s} f(x_t)\right] - 4 f(x_s)}{h^2} \quad \Rightarrow \quad \Delta f \approx \gamma\, (A - I)\, f,$$

with

$$\gamma = \frac{4}{h^2} \qquad \text{and} \qquad (Af)(x_s) = \frac{1}{4} \sum_{x_t \sim x_s} f(x_t) \quad \forall x_s \in E \quad \text{(averaging operator)}.$$

In this case:

$$\tilde{B}(x_s) = \gamma\, \tilde{P} A\, b\,(x_s) = \frac{1}{h^2} \sum_{x_t \sim x_s,\ x_t \in B} b(x_t) \qquad \forall x_s \in \tilde{E}.$$
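The following is a small self-contained sketch of this example (Python/NumPy; the lattice size, boundary data and probed site are assumptions made up for illustration, not taken from the report). It builds the averaging operator $A$ on a 4-connected square lattice, solves the Dirichlet problem for the Laplace equation ($\phi = 0$) through equation (9), and cross-checks one interior value against the random-walk (diffusion) interpretation: run the averaging chain from an interior site until it hits $B$ and average the boundary values.

```python
# Minimal sketch of Section 3.1.1 on an assumed 6x6 lattice (not from the report).
import numpy as np

n, h = 6, 1.0                                   # n x n lattice with step h
gamma = 4.0 / h**2
sites = [(i, j) for i in range(n) for j in range(n)]
idx = {s: k for k, s in enumerate(sites)}
boundary = {s: float(s[0] + s[1]) for s in sites
            if s[0] in (0, n - 1) or s[1] in (0, n - 1)}   # assumed boundary data b
interior = [s for s in sites if s not in boundary]

# averaging operator A (4-connectivity): (A f)(x_s) = (1/4) sum_{x_t ~ x_s} f(x_t)
N = len(sites)
A = np.zeros((N, N))
for (i, j) in sites:
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        t = (i + di, j + dj)
        if t in idx:
            A[idx[(i, j)], idx[t]] = 0.25

phi = np.zeros(N)                               # Laplace equation: zero source term
b_vec = np.zeros(N)
for s, val in boundary.items():
    b_vec[idx[s]] = val

I_int = [idx[s] for s in interior]
A_tilde = A[np.ix_(I_int, I_int)]               # A~ = P~ A restricted to interior sites
psi = phi[I_int] - gamma * (A @ b_vec)[I_int]   # psi = P~ phi - gamma P~ A b, eq. (8)
f_int = -np.linalg.solve(np.eye(len(I_int)) - A_tilde, psi) / gamma   # eq. (9)

# Monte Carlo cross-check at one interior site: walk with the averaging kernel until
# hitting the boundary, then average the boundary values b(X_T).
rng = np.random.default_rng(0)
x0 = (2, 3)
hits = []
for _ in range(20000):
    x = x0
    while x not in boundary:
        i, j = x
        di, dj = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
        x = (i + di, j + dj)
    hits.append(boundary[x])
print("linear algebra (9):", round(f_int[I_int.index(idx[x0])], 4),
      "  random walk estimate:", round(float(np.mean(hits)), 4))
```

With the boundary data $b(i,j) = i + j$ the discrete harmonic solution is exactly $f(i,j) = i + j$, so both printed values should be close to 5 at the probed site $(2,3)$; their agreement illustrates the Green-function / diffusion correspondence used throughout the report.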