Introduction to Flocking {Stochastic Matrices}


Supelec EECI Graduate School in Control
A. S. Morse, Yale University
Gif-sur-Yvette, May 21, 2012

Craig Reynolds, 1987: BOIDS

Reynolds' boids model (used, for example, in The Lion King, and developed further by Demetri Terzopoulos) generates flocking behavior from three local rules applied over each boid's neighborhood: separation, alignment, and cohesion.

This lecture is motivated by simulation results reported in Vicsek et al., who simulated a flock of n agents {particles}, all moving in the plane at the same speed s but with different headings θ_1, θ_2, ..., θ_n. Each agent's heading is updated at the same time as the rest using a local rule based on the average of its own current heading plus the headings of its "neighbors." Vicsek's simulations demonstrated that these nearest-neighbor rules can cause all agents to eventually move in the same direction despite

1. the absence of a leader and/or centralized coordination, and
2. the fact that each agent's set of neighbors changes with time.

Vicsek Model

The neighbors of agent i are the agents within agent i's sensing radius r_i. Each agent is a neighbor of itself, and each agent has its own sensing radius r_i, so neighbor relations are not symmetric.

Heading Update Equations

Let N_i(t) = the set of indices of agent i's neighbors at time t, and n_i(t) = the number of indices in N_i(t). Agent i updates its heading to the average at time t of the headings of its neighbors:

    θ_i(t+1) = (1 / n_i(t)) Σ_{j ∈ N_i(t)} θ_j(t).

{Another rule is possible; either way, each new heading is a convex combination of current headings, which requires collaboration!}

Vicsek Flocking Problem: Under what conditions do all n headings converge to a common value?

Neighbor Graph N of the Index Sets N_1, N_2, ..., N_n

Let G = the set of all directed graphs with vertex set V = {1, 2, ..., n}, and let N = the graph in G with an arc from j to i whenever j ∈ N_i, i ∈ {1, 2, ..., n}; that is, (j, i) is an arc of N whenever j is a neighbor of i. A self-arced graph is any graph in G with self-arcs at all vertices. Neighbor graphs = self-arced graphs.

State Space Model

The adjacency matrix A_G of a graph G ∈ G is the n×n matrix of 0's and 1's with a_ij = 1 whenever there is an arc in G from i to j. The in-degree of vertex i is the number of arcs entering vertex i; the out-degree of vertex i is the number of arcs leaving vertex i.

The flocking matrix F_N of a neighbor graph N ∈ G is

    F_N = D_N^{-1} A_N',

where D_N = diagonal{d_1, d_2, ..., d_n} and d_i = n_i = in-degree of vertex i = Σ_{j=1}^{n} a_ji. The correspondence N ↦ F_N is a bijection. In matrix form the update equations become the state model

    θ(t+1) = F_{N(t)} θ(t),    θ = [θ_1 θ_2 ... θ_n]'.

Vicsek flocking problem: Under what conditions do all n headings converge to a common value? This is a switched linear system, and no common quadratic Lyapunov function exists. But the non-negative function

    V(θ) = max_i {θ_i} − min_i {θ_i}

is at least non-increasing along trajectories {verify this!}. But it takes much more to conclude that V → 0.

Since θ(t+1) = F_{N(t)} θ(t), the problem reduces to determining conditions on the sequence N(0), N(1), ... under which

    F_{N(t)} ··· F_{N(1)} F_{N(0)} → 1c as t → ∞,

where 1 is the n×1 column vector of ones and c is some 1×n row vector. For if this is so, then θ(t) → 1(c θ(0)), and so every heading converges to the common value c θ(0).

{Right} Stochastic Matrices

An n×n matrix S is stochastic if

1. it has only non-negative entries, and
2. its row sums all equal 1.
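
Since the flocking matrix and the heading update are fully specified above, they are easy to prototype. The following Python sketch is only an illustration (the neighbor sets are hypothetical and agents are 0-indexed): it builds F_N = D_N^{-1} A_N' from index sets, checks that the result is row-stochastic, and applies one update θ(t+1) = F_N θ(t).

```python
import numpy as np

def flocking_matrix(neighbor_sets):
    """Build F_N = D_N^{-1} A_N' from neighbor index sets N_1, ..., N_n
    (0-indexed here).  Each agent must be a neighbor of itself, so d_i >= 1."""
    n = len(neighbor_sets)
    A = np.zeros((n, n))
    for i, Ni in enumerate(neighbor_sets):
        for j in Ni:
            A[j, i] = 1.0           # arc from j to i whenever j is a neighbor of i
    d = A.sum(axis=0)               # d_i = in-degree of vertex i = sum_j a_ji
    return np.diag(1.0 / d) @ A.T   # F_N = D_N^{-1} A_N'

# Hypothetical neighbor sets for n = 4 agents (each agent is a neighbor of itself).
N = [{0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3}]
F = flocking_matrix(N)
assert np.allclose(F.sum(axis=1), 1.0)        # flocking matrices are stochastic

theta = np.array([0.10, 0.50, -0.30, 1.20])   # headings at time t
theta_next = F @ theta                        # theta(t+1) = F_N theta(t)
print(theta_next)                             # entry i is the average of agent i's neighbors' headings
```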

Stochastic matrices are closed under multiplication; flocking matrices are not. But flocking matrices are stochastic. Therefore it is sufficient to determine conditions on an infinite sequence of n×n stochastic matrices S_1, S_2, ... under which

    S_i ··· S_2 S_1 → 1c as i → ∞.

This is a well-studied problem in the theory of non-homogeneous Markov chains.

Theorem: If S is a compact set of n×n stochastic matrices whose members each have at least one positive column, then for each sequence of matrices S_1, S_2, ... from S,

    lim_{i→∞} S_i ··· S_2 S_1 = 1c,

and this limit is approached exponentially fast.

Why is this true?

Induced Norms and Semi-Norms

For M ∈ R^{n×n} and p ≥ 1, let ||M||_p denote the induced matrix p-norm on R^{n×n}. We will be interested primarily in the cases p = 1, 2, ∞. For any such p, define

    |M|_p = min_c ||M − 1c||_p,

where the minimum is taken over 1×n row vectors c. Then

1. Nonnegative: |M|_p ≥ 0
2. Homogeneous: |rM|_p = |r| |M|_p
3. Triangle inequality: |M_1 + M_2|_p ≤ |M_1|_p + |M_2|_p    {verify!}

These three properties mean that |·|_p is a semi-norm. {If |M|_p = 0 were to imply M = 0, then |·|_p would be a norm. It does not: |M|_p = 0 ⇏ M = 0, since for example |1c|_p = 0 for every row vector c.}

Additional Properties of |·|_p

1. |M|_p ≤ 1 if ||M||_p ≤ 1, because |M|_p ≤ ||M||_p (take c = 0 in the minimization). M is semi-contractive in the p semi-norm if |M|_p < 1.

2. Sub-multiplicative: Suppose M is a subset of R^{n×n} such that M1 = 1 for all M ∈ M. Then

    |M_2 M_1|_p ≤ |M_2|_p |M_1|_p for all M_1, M_2 ∈ M.

Proof: Let c_0, c_1 and c_2 denote values of c which minimize ||M_2 M_1 − 1c||_p, ||M_1 − 1c||_p and ||M_2 − 1c||_p respectively. Using 1 = M_2 1, one checks that M_2 M_1 − 1c = (M_2 − 1c_2)(M_1 − 1c_1) for the row vector c = c_1 + c_2 M_1 − (c_2 1) c_1. Therefore

    |M_2 M_1|_p = ||M_2 M_1 − 1c_0||_p ≤ ||(M_2 − 1c_2)(M_1 − 1c_1)||_p ≤ ||M_2 − 1c_2||_p ||M_1 − 1c_1||_p = |M_2|_p |M_1|_p.

Lemma: Suppose M is a subset of R^{n×n} such that M1 = 1 for all M ∈ M. Let p be fixed, let C be a compact set of semi-contractive matrices in M, and let

    λ = max_{M ∈ C} |M|_p   (so λ < 1).

Then for each infinite sequence of matrices M_1, M_2, ... in C, the matrix product M_i ··· M_2 M_1 converges as i → ∞, as fast as λ^i converges to zero, to a rank-one matrix of the form 1c. Proof: See board.

We want to use this fact to prove the theorem above. To do this, it is enough to show that:

A stochastic matrix S is semi-contractive in the semi-norm |·|_∞ if S has a positive column.

Any stochastic matrix S can be written as S = 1c + T, where c is the largest row vector for which S − 1c is nonnegative (that is, c_j = min_i s_ij) and T = S − 1c. Then

    T1 = S1 − 1c1 = (1 − c1)1,

so all row sums of T equal (1 − c1), and (1 − c1) ≥ 0 because T ≥ 0. Moreover c ≠ 0 if and only if S has a positive column {verify!}. Therefore (1 − c1) < 1 if and only if S has a positive column, and

    |S|_∞ = min_d ||S − 1d||_∞ ≤ ||S − 1c||_∞ = ||T||_∞ = 1 − c1.

Hence a stochastic matrix S is semi-contractive in the semi-norm |·|_∞ if S has a positive column.

Transitioning from Matrices to Graphs

For a nonnegative n×n matrix M, γ(M) is the graph whose adjacency matrix is the transpose of the matrix which results when each non-zero entry of M is replaced by a 1. In other words, for a nonnegative matrix M, γ(M) is the graph which has an arc (i, j) from i to j whenever m_ji ≠ 0. In particular,

    γ(F_N) = γ(A_N') = N.

A graph is strongly rooted if at least one vertex is adjacent to every vertex in the graph.

Motivation for strongly rooted: for any nonnegative matrix M, γ(M) has an arc (i, j) whenever m_ji ≠ 0, so

    γ(M) is strongly rooted ⟺ M has a positive column.
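
The decomposition S = 1c + T and the resulting bound are easy to check numerically. The sketch below is only an illustration (the 4×4 matrix is a hypothetical example, and c is computed as the columnwise minimum of S): it verifies that a stochastic matrix with a positive column satisfies |S|_∞ ≤ 1 − c1 < 1, and that repeated left-multiplication by such a matrix drives the product toward a rank-one matrix 1c.

```python
import numpy as np

def inf_semi_norm_bound(S):
    """Return 1 - c*1, where c is the columnwise minimum of S -- the largest
    row vector for which T = S - 1c is nonnegative.  Then |S|_inf <= 1 - c*1."""
    c = S.min(axis=0)
    return 1.0 - c.sum()

# A hypothetical 4-by-4 stochastic matrix whose first column is positive.
S = np.array([[0.4, 0.6, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0],
              [0.1, 0.0, 0.4, 0.5],
              [0.3, 0.0, 0.0, 0.7]])
assert np.allclose(S.sum(axis=1), 1.0)   # row sums all equal 1
print(inf_semi_norm_bound(S))            # 0.9 < 1, so S is semi-contractive in |.|_inf

P = np.eye(4)
for _ in range(50):
    P = S @ P                            # the next matrix in the sequence multiplies on the left
print(P)                                 # all rows are (nearly) identical, so P is close to 1c
print(np.ptp(P, axis=0).max())           # column-wise spread ~ 0; |S^50|_inf <= 0.9**50 by sub-multiplicativity
```

Here a constant sequence S, S, S, ... is used for simplicity; the theorem asserts the same exponential convergence for any sequence drawn from a compact set of such matrices.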

Transitioning from Matrices to Graphs

The theorem can therefore be restated in graph terms: If S is a compact set of n×n stochastic matrices whose members each have a strongly rooted graph, then for each sequence of matrices S_1, S_2, ... from S,

    lim_{i→∞} S_i ··· S_2 S_1 = 1c,

and this limit is approached exponentially fast.

When does the graph γ(T_q ··· T_2 T_1) of a product of stochastic matrices become strongly rooted? Since the matrices involved are nonnegative,

    γ(T_2 T_1) = γ(T_2) ∘ γ(T_1),

and consequently γ(T_q ··· T_2 T_1) = γ(T_q) ∘ ··· ∘ γ(T_2) ∘ γ(T_1). Thus establishing convergence to 1c of an infinite product of stochastic matrices boils down to determining when the graph of a product of stochastic matrices is strongly rooted.

As before, G = the set of all directed graphs with vertex set {1, 2, ..., n}. The composition of a graph G_2 ∈ G with a graph G_1 ∈ G, written G_2 ∘ G_1, is the directed graph in G which has an arc (i, j) from i to j whenever there is an integer k such that (i, k) is an arc in G_1 and (k, j) is an arc in G_2.
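
Graph composition and the strongly-rooted test are equally simple to prototype. The sketch below is only an illustration (the self-arced cycle graph is a hypothetical example): a graph on vertices 0, ..., n-1 is represented as a set of arcs, copies of a neighbor graph are composed, and the first composition that is strongly rooted is reported.

```python
from itertools import product

def compose(G2, G1, n):
    """Composition G2 o G1: arc (i, j) whenever there is a k such that
    (i, k) is an arc of G1 and (k, j) is an arc of G2."""
    return {(i, j) for i, j in product(range(n), repeat=2)
            if any((i, k) in G1 and (k, j) in G2 for k in range(n))}

def strongly_rooted(G, n):
    """True if at least one vertex has an arc to every vertex of the graph."""
    return any(all((v, j) in G for j in range(n)) for v in range(n))

n = 4
# Hypothetical self-arced neighbor graph: self-arcs plus the cycle 0 -> 1 -> 2 -> 3 -> 0.
N = {(i, i) for i in range(n)} | {(i, (i + 1) % n) for i in range(n)}

G = N
for q in range(1, 4):
    print(q, strongly_rooted(G, n))   # composition of q copies of N: False, False, True
    G = compose(N, G, n)
```

Because γ(M_2 M_1) = γ(M_2) ∘ γ(M_1) for nonnegative matrices and γ(F_N) = N, the first q at which the composition of q copies of N is strongly rooted is also the smallest number of flocking matrices with this neighbor graph whose product has a positive column.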