Input-State Incidence Matrix of Boolean Control Networks and Its Applications

Total Pages: 16

File Type: PDF, Size: 1020 KB

Input-State Incidence Matrix of Boolean Control Networks and Its Applications

Yin Zhao, Hongsheng Qi, Daizhan Cheng
The Key Laboratory of Systems & Control, Institute of Systems Science, Academy of Mathematics & Systems Science, Chinese Academy of Sciences, Beijing 100190, P. R. China
Email addresses: [email protected], [email protected], [email protected] (Yin Zhao, Hongsheng Qi, Daizhan Cheng)
Supported partly by NNSF 60736022 and 60221301 of China.
Preprint submitted to Systems & Control Letters, August 20, 2010.

Abstract

The input-state incidence matrix of a Boolean control network is proposed. It is shown that this matrix contains complete information about the input-state mapping. Using it, an easily verifiable necessary and sufficient condition for the controllability of a Boolean control network is obtained. The corresponding control that drives a point to a given reachable point is designed. Moreover, certain topological properties of a Boolean control network, such as its fixed points and cycles, are investigated. Then, as another application, a sufficient condition for observability is presented. Finally, the results are extended to mix-valued logical control systems.

Keywords: Boolean control network, input-state incidence matrix, reachability, observability, mix-valued logic

1. Introduction

The Boolean network was first proposed by Kauffman for modeling complex and nonlinear biological systems [1,2,3]. Since then, it has been studied widely and applied to other kinds of systems. The first interesting topic is the topological structure of a Boolean network, on which much progress has been made [4,5,6]. Another challenging and important topic is the control of Boolean networks; the main efforts have been devoted to the controllability of Boolean networks [7,8,9].

Recently, a new matrix product, called the semi-tensor product of matrices, has been proposed; using it, a logical mapping can be expressed in matrix form [10]. With this tool, a Boolean (control) network can be converted into a standard discrete-time dynamic (control) system [11,12]. In this paper the same framework is utilized, so we give a brief review of this technique. For ease of statement, we first introduce some notation.

(i) Denote by $\mathrm{Col}(A)$ ($\mathrm{Row}(A)$) the set of columns (rows) of a matrix $A$, and by $\mathrm{Col}_i(A)$ ($\mathrm{Row}_i(A)$) the $i$-th column (row) of $A$.

(ii) $\mathcal{D} := \{0, 1\}$.

(iii) Let $\delta_n^i$ be the $i$-th column of the identity matrix $I_n$, and $\Delta_n := \{\delta_n^1, \delta_n^2, \cdots, \delta_n^n\}$. When $n = 2$ we simply write $\Delta := \Delta_2$.

(iv) $\mathbf{1}_k := [\underbrace{1\ 1\ \cdots\ 1}_{k}]^T$.

(v) Assume a matrix $M = [\delta_n^{i_1}\ \delta_n^{i_2}\ \cdots\ \delta_n^{i_s}] \in \mathcal{M}_{n \times s}$, i.e., its columns satisfy $\mathrm{Col}(M) \subset \Delta_n$. Then $M$ is called a logical matrix and is simply denoted by $M = \delta_n[i_1\ i_2\ \cdots\ i_s]$. The set of $n \times s$ logical matrices is denoted by $\mathcal{L}_{n \times s}$.

(vi) A matrix $B \in \mathcal{M}_{n \times s}$ is called a Boolean matrix if its entries satisfy $b_{ij} \in \mathcal{D}$, $\forall\, i, j$. The set of $n \times s$ Boolean matrices is denoted by $\mathcal{B}_{n \times s}$.

(vii) Let $A \in \mathcal{M}_{n \times mn}$. Denote by $\mathrm{Blk}_i(A)$ the $i$-th $n \times n$ square block of $A$, $i = 1, 2, \cdots, m$.

A Boolean network with $n$ nodes is described as

$$\begin{cases} x_1(t+1) = f_1(x_1(t), \cdots, x_n(t)) \\ \qquad \vdots \\ x_n(t+1) = f_n(x_1(t), \cdots, x_n(t)), \end{cases} \qquad x_i \in \mathcal{D}, \tag{1}$$

where $f_i : \mathcal{D}^n \to \mathcal{D}$, $i = 1, \cdots, n$, are logical functions.
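The definition of the semi-tensor product itself is deferred to [10]. As a non-authoritative aid for experimenting with the notation in items (iii) and (v), here is a minimal Python sketch, assuming the standard definition of the left semi-tensor product (for $A$ of size $m \times n$ and $B$ of size $p \times q$, set $t = \mathrm{lcm}(n, p)$ and $A \ltimes B = (A \otimes I_{t/n})(B \otimes I_{t/p})$); the helper names `stp` and `logical_matrix` are ours, not from the paper.

```python
import numpy as np
from math import lcm

def stp(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Left semi-tensor product A |x B.

    For A of size m x n and B of size p x q, with t = lcm(n, p),
        A |x B = (A kron I_{t/n}) @ (B kron I_{t/p}),
    which reduces to the ordinary matrix product when n == p.
    """
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def logical_matrix(n: int, cols) -> np.ndarray:
    """delta_n[i1 i2 ... is] of item (v): the matrix whose k-th column is
    the i_k-th column of the identity matrix I_n (indices are 1-based)."""
    return np.eye(n)[:, [i - 1 for i in cols]]

# Small check: delta_2[1 2 2 2] is the structure matrix of conjunction (Table 2 below);
# with the identification 1 ~ delta_2^1 and 0 ~ delta_2^2 used later in the text,
# M_c |x p |x q evaluates p AND q in vector form.
Mc = logical_matrix(2, [1, 2, 2, 2])
p_true = np.array([[1.0], [0.0]])   # 1 ~ delta_2^1
q_false = np.array([[0.0], [1.0]])  # 0 ~ delta_2^2
print(stp(stp(Mc, p_true), q_false))  # -> delta_2^2, i.e. "false"
```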
Similarly, a Boolean control network with $n$ network nodes, $m$ input nodes, and $p$ outputs is described as

$$\begin{cases} x_1(t+1) = f_1(x_1(t), \cdots, x_n(t), u_1(t), \cdots, u_m(t)) \\ \qquad \vdots \\ x_n(t+1) = f_n(x_1(t), \cdots, x_n(t), u_1(t), \cdots, u_m(t)), \\ y_j(t) = h_j(x_1(t), \cdots, x_n(t)), \quad j = 1, \cdots, p, \end{cases} \qquad x_i, u_j \in \mathcal{D}, \tag{2}$$

where $f_i : \mathcal{D}^{n+m} \to \mathcal{D}$, $i = 1, \cdots, n$, and $h_j : \mathcal{D}^n \to \mathcal{D}$, $j = 1, \cdots, p$, are logical functions.

Throughout this paper the matrix product is the semi-tensor product $A \ltimes B$, and the symbol "$\ltimes$" is omitted in most places. We refer to [10] for details.

Identifying $1 \sim \delta_2^1$ and $0 \sim \delta_2^2$, and using the semi-tensor product of matrices, logical functions can be expressed via logical matrices, vectors, and their products. For example, the truth table of some common logical functions, namely "negation (¬)", "conjunction (∧)", "disjunction (∨)", "conditional (→)", "biconditional (↔)", and "exclusive or (∨̄)", is given in Table 1, and their structure matrices are given in Table 2.

Table 1: Truth Table

  p  q | ¬p  p∧q  p∨q  p→q  p↔q  p∨̄q
  1  1 |  0   1    1    1    1    0
  1  0 |  0   0    1    0    0    1
  0  1 |  1   0    1    1    0    1
  0  0 |  1   0    0    1    1    0

Table 2: Matrix Expressions of Logical Functions

  ¬ : M_n = δ_2[2 1]        → : M_i = δ_2[1 2 1 1]
  ∨ : M_d = δ_2[1 1 1 2]    ↔ : M_e = δ_2[1 2 2 1]
  ∧ : M_c = δ_2[1 2 2 2]    ∨̄ : M_p = δ_2[2 1 1 2]

Then, in vector form, we have $\neg p = M_n \ltimes p$, $p \wedge q = M_c \ltimes p \ltimes q$, etc.

Set $x = \ltimes_{i=1}^{n} x_i$, $u = \ltimes_{i=1}^{m} u_i$, and $y = \ltimes_{i=1}^{p} y_i$. In vector form, (1) and (2) can be expressed as (3) and (4) below, which are called the algebraic forms of (1) and (2), respectively:

$$x(t+1) = L x(t), \tag{3}$$

where $L \in \mathcal{L}_{2^n \times 2^n}$ is called the structure matrix of the Boolean network;

$$\begin{cases} x(t+1) = L u(t) x(t), \\ y(t) = H x(t), \end{cases} \tag{4}$$

where $L \in \mathcal{L}_{2^n \times 2^{n+m}}$ and $H \in \mathcal{L}_{2^p \times 2^n}$.

A matrix $A = (a_{ij}) \in \mathcal{M}_{n \times n}$ is called the adjacency matrix (or incidence matrix) of the Boolean network (1) if

$$a_{ij} = \begin{cases} 1, & x_j(t+1) \text{ depends on } x_i(t), \\ 0, & \text{otherwise.} \end{cases}$$

In terms of the network graph, $a_{ij} \neq 0$ iff there is an edge from $x_i$ to $x_j$. The difference between $L$ in (3) and $A$ is that $L$ contains the complete information of the network dynamics (precisely, (1) can easily be recovered from $L$), while $A$ does not.

The controllability and observability of Boolean control networks were investigated under this framework in [13]. As a related topic, the controllable normal form and the realization of Boolean networks were discussed in [14]. The topological structure of Boolean networks (without control), such as their fixed points and cycles, was investigated in [11].
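To make the algebraic form (4) concrete, here is a hedged Python sketch that builds the structure matrix $L$ of a small made-up network with $n = 2$ state nodes and $m = 1$ input; the example dynamics ($x_1(t+1) = x_2(t) \wedge u(t)$, $x_2(t+1) = \neg x_1(t)$) and all names are ours, chosen only for illustration. Since $u(t)$ and $x(t)$ are columns of identity matrices in vector form, the semi-tensor product $u(t) \ltimes x(t)$ coincides with the Kronecker product, which is what the code uses.

```python
import numpy as np
from itertools import product

# Toy Boolean control network (ours, for illustration only):
#   x1(t+1) = x2(t) AND u(t)
#   x2(t+1) = NOT x1(t)
def f(x1: int, x2: int, u: int) -> tuple:
    return (x2 & u, 1 - x1)

def vec(b: int) -> np.ndarray:
    """Vector form of a logical value: 1 ~ delta_2^1, 0 ~ delta_2^2."""
    return np.array([[1.0], [0.0]]) if b else np.array([[0.0], [1.0]])

# Build L in x(t+1) = L u(t) x(t): the column indexed by u |x x1 |x x2
# (= u kron x1 kron x2, enumerated in Delta_8 order) is the next state in Delta_4.
cols = []
for u, x1, x2 in product((1, 0), repeat=3):
    nx1, nx2 = f(x1, x2, u)
    cols.append(np.kron(vec(nx1), vec(nx2)))
L = np.hstack(cols)            # a 4 x 8 logical matrix, L in L_{2^n x 2^(n+m)}

# One-step check of the algebraic form against the logical dynamics.
u0, x10, x20 = 1, 0, 1
x = np.kron(vec(x10), vec(x20))                    # x(t) in Delta_4
x_next = L @ np.kron(vec(u0), x)                   # L u(t) x(t)
assert np.array_equal(x_next, np.kron(*map(vec, f(x10, x20, u0))))
print(L)
```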
We first give rigorous definitions of controllability, observability, fixed points, and cycles.

Definition 1.1. Consider system (2). Denote its state space by $\mathcal{X} = \mathcal{D}^n$, and let $X_0 \in \mathcal{X}$.

1. $X \in \mathcal{X}$ is said to be reachable from $X_0$ at time $s > 0$ if we can find a sequence of controls $U(0) = \{u_1(0), \cdots, u_m(0)\}$, $U(1) = \{u_1(1), \cdots, u_m(1)\}$, $\cdots$, such that the trajectory of (2) with initial value $X_0$ and controls $\{U(t)\}$, $t = 0, 1, \cdots$, reaches $X$ at time $t = s$. The reachable set at time $s$ is denoted by $R_s(X_0)$. The overall reachable set is
$$R(X_0) = \bigcup_{s=1}^{\infty} R_s(X_0).$$

2. System (2) is said to be controllable at $X_0$ if $R(X_0) = \mathcal{X}$. The system is said to be controllable if it is controllable at every $X \in \mathcal{X}$.

Definition 1.2. Consider system (2). Denote $Y(t) = (y_1(t), \cdots, y_p(t)) \in \mathcal{D}^p$.

1. Two initial states $X_1^0$ and $X_2^0$ are said to be distinguishable if there exists a control sequence $\{U(0), U(1), \cdots, U(s)\}$, $s \geq 0$, such that
$$Y^1(s+1) = y^{s+1}(U(s), \cdots, U(0), X_1^0) \neq Y^2(s+1) = y^{s+1}(U(s), \cdots, U(0), X_2^0). \tag{5}$$

2. The system is said to be observable if any two distinct initial points $X_1^0, X_2^0 \in \mathcal{X}$ are distinguishable.

Definition 1.3. Consider system (2). Denote the input-state (product) space by
$$\mathcal{S} = \{(U, X) \mid U = (u_1, \cdots, u_m) \in \mathcal{D}^m,\ X = (x_1, \cdots, x_n) \in \mathcal{D}^n\}.$$
Note that $|\mathcal{S}| = 2^{m+n}$.

1. Let $S_i = (U^i, X^i) \in \mathcal{S}$ and $S_j = (U^j, X^j) \in \mathcal{S}$, where $U^i = (u_1^i, \cdots, u_m^i)$, $X^i = (x_1^i, \cdots, x_n^i)$, etc. $(S_i, S_j)$ is said to be a directed edge if $X^i$, $U^i$, and $X^j$ satisfy (2); precisely,
$$x_k^j = f_k(x_1^i, \cdots, x_n^i, u_1^i, \cdots, u_m^i), \qquad k = 1, \cdots, n.$$
The set of edges is denoted by $\mathcal{E} \subset \mathcal{S} \times \mathcal{S}$.

2. The pair $(\mathcal{S}, \mathcal{E})$ forms a directed graph, called the input-state transfer graph.

3. $(S_1, S_2, \cdots, S_\ell)$ is called a path if $(S_i, S_{i+1}) \in \mathcal{E}$, $i = 1, 2, \cdots, \ell - 1$.

4. A path $(S_1, S_2, \cdots)$ is called a cycle if $S_{i+\ell} = S_i$ for all $i$; the smallest such $\ell$ is called the length of the cycle. In particular, a cycle of length 1 is called a fixed point.

Note that when the vector form of logical variables is used, the state space becomes $\mathcal{X} = \Delta_{2^n}$ and, similarly, the input-state space becomes $\mathcal{S} = \Delta_{2^{m+n}}$. It is easy to tell from the context which form is being used.

In this paper we propose a matrix called the input-state incidence matrix. Using it, a neat result on the controllability of Boolean control networks is presented, obtained by checking the controllability matrix, and the corresponding controller is designed. Based on the structure of the reachable set, an easily verifiable sufficient condition for observability is also obtained. The input-state incidence matrix is then used to reveal some topological structures of Boolean control networks, including fixed points and cycles.
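As a companion to Definitions 1.1 and 1.3, the sketch below builds the input-state transfer graph of the same toy two-node, one-input network used above and computes a reachable set and the fixed points by brute force. This naive enumeration is only something to compare against; it is not the input-state incidence matrix test developed in the paper.

```python
from itertools import product

# Same toy network as above (ours, for illustration only):
#   x1(t+1) = x2(t) AND u(t),   x2(t+1) = NOT x1(t)
def step(x: tuple, u: tuple) -> tuple:
    x1, x2 = x
    return (x2 & u[0], 1 - x1)

states = list(product((0, 1), repeat=2))   # D^n with n = 2
inputs = list(product((0, 1), repeat=1))   # D^m with m = 1

# Input-state transfer graph (Definition 1.3): nodes (U, X); there is an edge from
# (U^i, X^i) to (U^j, X^j) whenever X^j = f(X^i, U^i), for every choice of U^j.
edges = [((u, x), (v, step(x, u))) for u in inputs for x in states for v in inputs]
print("|S| =", len(inputs) * len(states), " |E| =", len(edges))

# Reachable set R(X0) of Definition 1.1: breadth-first search over the states,
# trying every input at every step.
def reachable_set(x0: tuple) -> set:
    frontier = {step(x0, u) for u in inputs}
    seen = set()
    while frontier:
        seen |= frontier
        frontier = {step(x, u) for x in frontier for u in inputs} - seen
    return seen

x0 = (0, 1)
print("R(x0) =", reachable_set(x0))
print("controllable at x0:", reachable_set(x0) == set(states))

# Fixed points of the input-state transfer graph: cycles of length 1, i.e. nodes
# (U, X) with an edge back to themselves, which here means step(X, U) == X.
print("fixed points:", [(u, x) for u in inputs for x in states if step(x, u) == x])
```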