When Does a Digraph Admit a Doubly Stochastic Adjacency Matrix?


Bahman Gharesifard and Jorge Cortés

(Bahman Gharesifard and Jorge Cortés are with the Department of Mechanical and Aerospace Engineering, University of California San Diego, {bgharesifard,cortes}@ucsd.edu.)

Abstract— Digraphs with doubly stochastic adjacency matrices play an essential role in a variety of cooperative control problems including distributed averaging, optimization, and gossiping. In this paper, we fully characterize the class of digraphs that admit an edge weight assignment that makes the digraph adjacency matrix doubly stochastic. As a by-product of our approach, we also unveil the connection between weight-balanced and doubly stochastic adjacency matrices. Several examples illustrate our results.

I. INTRODUCTION

A digraph is doubly stochastic if, at each vertex, the sum of the weights of the incoming edges as well as the sum of the weights of the outgoing edges are equal to one. Doubly stochastic digraphs play a key role in networked control problems. Examples include distributed averaging [1], [2], [3], [4], distributed convex optimization [5], [6], [7], and gossip algorithms [8], [9]. Because of the numerous algorithms available in the literature that use doubly stochastic interaction topologies, it is an important research question to characterize when a digraph can be given a nonzero edge weight assignment that makes it doubly stochastic. This is the question we investigate in this paper.

We refer to a digraph as doubly stochasticable if it admits a doubly stochastic adjacency matrix. In studying this class of digraphs, we unveil their close relationship with a special class of weight-balanced digraphs. A digraph is weight-balanced if, at each node, the sum of the weights of the incoming edges equals the sum of the weights of the outgoing edges. The notion of weight-balanced digraph is key in establishing convergence results of distributed algorithms for average-consensus [10], [2] and consensus on general functions [11] via Lyapunov stability analysis. Weight-balanced digraphs also appear in the design of leader-follower strategies under time delays [12], virtual leader strategies under asymmetric interactions [13], and stable flocking algorithms for agents with significant inertial effects [14]. We call a digraph weight-balanceable if it admits an edge weight assignment that makes it weight-balanced. A characterization of weight-balanceable digraphs was presented in [15].

In this paper, we provide a constructive necessary and sufficient condition for a digraph to be doubly stochasticable. This condition involves a very particular structure that the cycles of the digraph must enjoy. As a by-product of our characterization, we unveil the connection between the structure of doubly stochasticable digraphs and a special class of weight-balanced digraphs. To the authors' knowledge, the establishment of this relationship is also a novel contribution.

The paper is organized as follows. Section II presents some mathematical preliminaries from graph theory. Section III introduces the problem statement. Section IV examines the connection between weight-balanced and doubly stochastic adjacency matrices. Section V gives necessary and sufficient conditions for the existence of a doubly stochastic adjacency matrix assignment for a given digraph. Section VI studies the properties of the topological character associated with strongly connected doubly stochasticable digraphs. We gather our conclusions and ideas for future work in Section VII.

II. MATHEMATICAL PRELIMINARIES

We adopt some basic notions from [1], [16], [17]. A directed graph, or simply digraph, is a pair G = (V, E), where V is a finite set called the vertex set and E ⊆ V × V is the edge set. If |V| = n, i.e., the cardinality of V is n ∈ Z_{>0}, we say that G is of order n. We say that an edge (u, v) ∈ E is incident away from u and incident toward v, and we call u an in-neighbor of v and v an out-neighbor of u. The in-degree and out-degree of v, denoted d_in(v) and d_out(v), are the number of in-neighbors and out-neighbors of v, respectively. We call a vertex v isolated if it has zero in- and out-degrees.

An undirected graph, or simply graph, is a pair G = (V, E), where V is a finite set called the vertex set and the edge set E consists of unordered pairs of vertices. In a graph, neighboring relationships are always bidirectional, and hence we simply use neighbor, degree, etc., for the notions introduced above. A graph is regular if each vertex has the same number of neighbors.

The union G1 ∪ G2 of digraphs G1 = (V1, E1) and G2 = (V2, E2) is defined by G1 ∪ G2 = (V1 ∪ V2, E1 ∪ E2). The intersection of two digraphs can be defined similarly. A digraph G is generated by a set of digraphs G1, …, Gm if G = G1 ∪ ··· ∪ Gm. We let E⁻ ⊆ V × V denote the set obtained by changing the order of the elements of E, i.e., (v, u) ∈ E⁻ if (u, v) ∈ E. The digraph Ḡ = (V, E ∪ E⁻) is the mirror of G.

A weighted digraph is a triplet G = (V, E, A), where (V, E) is a digraph and A ∈ R_{≥0}^{n×n} is the adjacency matrix. We denote the entries of A by a_ij, where i, j ∈ {1, …, n}. The adjacency matrix has the property that the entry a_ij > 0 if (v_i, v_j) ∈ E and a_ij = 0 otherwise. If a matrix A satisfies this property, we say that A can be assigned to the digraph G = (V, E). Note that any digraph can be trivially seen as a weighted digraph by assigning weight 1 to each one of its edges. We will find it useful to extend the definition of union of digraphs to weighted digraphs.
The union G1 ∪ G2 of weighted digraphs G1 = (V1, E1, A1) and G2 = (V2, E2, A2) is defined by G1 ∪ G2 = (V1 ∪ V2, E1 ∪ E2, A), where

  A|V1∩V2 = A1|V1∩V2 + A2|V1∩V2,  A|V1\V2 = A1,  A|V2\V1 = A2.

For a weighted digraph, the weighted out-degree and in-degree are, respectively,

  d^w_out(v_i) = Σ_{j=1}^n a_ij,  d^w_in(v_i) = Σ_{j=1}^n a_ji.

A. Graph connectivity notions

A directed path in a digraph, or in short path, is an ordered sequence of vertices so that any two consecutive vertices in the sequence are an edge of the digraph. A cycle in a digraph is a directed path that starts and ends at the same vertex and has no other repeated vertex. Two cycles are disjoint if they do not have any vertex in common.

A digraph is strongly connected if there is a path between each pair of distinct vertices, and is strongly semiconnected if the existence of a path from v to w implies the existence of a path from w to v, for all v, w ∈ V. Clearly, strong connectedness implies strong semiconnectedness, but the converse is not true. The strongly connected components of a directed graph G are its maximal strongly connected subdigraphs.

B. Basic notions from linear algebra

A matrix A ∈ R_{≥0}^{n×n} is weight-balanced if Σ_{j=1}^n a_ij = Σ_{j=1}^n a_ji for all i ∈ {1, …, n}. A matrix A ∈ R_{≥0}^{n×n} is row-stochastic if each of its rows sums to 1. One can similarly define a column-stochastic matrix. We denote the set of all row-stochastic matrices on R_{≥0}^{n×n} by RStoc(R_{≥0}^{n×n}). A nonzero matrix A ∈ R_{≥0}^{n×n} is doubly stochastic if it is both row-stochastic and column-stochastic. A matrix A ∈ {0, 1}^{n×n} is a permutation matrix, where n ∈ Z_{≥1}, if A has exactly one entry 1 in each row and each column. A matrix A ∈ R_{≥0}^{n×n} is irreducible if, for any nontrivial partition J ∪ K of the index set {1, …, n}, there exist j ∈ J and k ∈ K such that a_jk ≠ 0.

Theorem 2.1: A digraph G = (V, E) is weight-balanceable if and only if the edge set E can be decomposed into k subsets E1, …, Ek such that
(i) E = E1 ∪ E2 ∪ ··· ∪ Ek, and
(ii) every subgraph Gi = (V, Ei), for i ∈ {1, …, k}, is a weight-balanceable digraph.

Theorem 2.2: Let G = (V, E) be a directed digraph. The following statements are equivalent.
(i) Every element of E lies in a cycle.
(ii) G is weight-balanceable.
(iii) G is strongly semiconnected.

III. PROBLEM STATEMENT

Our main goal in this paper is to obtain necessary and sufficient conditions that characterize when a digraph is doubly stochasticable. Note that strong semiconnectedness is a necessary and sufficient condition for a digraph to be weight-balanceable. All doubly stochastic digraphs are weight-balanced; thus a necessary condition for a digraph to be doubly stochasticable is strong semiconnectedness. Moreover, weight-balanceable digraphs that are doubly stochasticable do not have any isolated vertex. However, none of these conditions is sufficient. A simple example illustrates this. Consider the digraph G shown in Figure 1. Note that this digraph is strongly connected; thus there exists a set of positive weights which makes the digraph weight-balanced. However, there exists no set of nonzero weights that makes this digraph doubly stochastic. Suppose

  A = [ 0   α1  0   0
        0   0   α2  0
        α3  0   0   α4
        α5  0   0   0 ],

where αi ∈ R_{>0}, for all i ∈ {1, …, 5}, is a doubly stochastic adjacency matrix assigned to G.

Fig. 1. A weight-balanceable digraph (on vertices v1, v2, v3, v4) for which there exists no doubly stochastic adjacency assignment. [Figure omitted.]
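The obstruction in the Figure 1 example can be checked numerically. The sketch below is our own illustration, not code from the paper: row-stochasticity of the pattern of A forces a12 = a23 = a41 = 1 and a31 + a34 = 1, so column 1 sums to a31 + a41 = a31 + 1 > 1 for every a31 > 0, and no positive weight assignment is doubly stochastic, even though a weight-balanced assignment exists.

```python
def is_doubly_stochastic(A, tol=1e-9):
    """Nonnegative, every row sums to 1, and every column sums to 1."""
    n = len(A)
    if any(A[i][j] < -tol for i in range(n) for j in range(n)):
        return False
    rows_ok = all(abs(sum(A[i]) - 1.0) <= tol for i in range(n))
    cols_ok = all(abs(sum(A[i][j] for i in range(n)) - 1.0) <= tol
                  for j in range(n))
    return rows_ok and cols_ok

def is_weight_balanced(A, tol=1e-9):
    """Weighted out-degree (row sum) equals weighted in-degree (column sum)
    at every vertex."""
    n = len(A)
    return all(abs(sum(A[i]) - sum(A[j][i] for j in range(n))) <= tol
               for i in range(n))

def figure1_matrix(a31):
    """Row-stochastic weighting of the Figure 1 pattern; after imposing
    row sums equal to 1, the only free weight is a31 (a34 = 1 - a31)."""
    return [[0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [a31, 0.0, 0.0, 1.0 - a31],
            [1.0, 0.0, 0.0, 0.0]]

# Every positive row-stochastic assignment fails column-wise:
# column 1 sums to a31 + 1 > 1.
for a31 in (0.1, 0.5, 0.9):
    assert not is_doubly_stochastic(figure1_matrix(a31))

# The same digraph is weight-balanceable, e.g. with weights
# a12 = a23 = 2, a31 = a34 = a41 = 1:
B = [[0, 2, 0, 0],
     [0, 0, 2, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 0]]
assert is_weight_balanced(B) and not is_doubly_stochastic(B)

# Sanity check: a permutation (cycle) matrix is doubly stochastic.
assert is_doubly_stochastic([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
```

This makes concrete that weight-balanceability is strictly weaker than double stochasticability, which is the gap the paper's characterization closes.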