Perturbation and Intervals of Totally Nonnegative Matrices and Related Properties of Sign Regular Matrices


Printed with the support of the Deutscher Akademischer Austauschdienst (German Academic Exchange Service).

Dissertation for the academic degree of Doctor of Natural Sciences (Dr. rer. nat.) at the Faculty of Sciences, Department of Mathematics and Statistics, University of Konstanz, submitted by Mohammad Adm. Date of the oral examination: 22 January 2016. Referees: Prof. Dr. Jürgen Garloff, University of Konstanz, and Prof. Dr. Shaun Fallat, University of Regina. Konstanzer Online-Publikations-System (KOPS) URL: http://nbn-resolving.de/urn:nbn:de:bsz:352-0-334132

Abstract

A real matrix is called sign regular if all of its minors of the same order have the same sign or are allowed to vanish, and it is said to be totally nonnegative or totally positive if all of its minors are nonnegative or positive, respectively. Such matrices arise in a remarkable variety of ways in mathematics and in many areas of its applications, such as differential and integral equations, function theory, approximation theory, matrix analysis, combinatorics, reductive Lie groups, numerical mathematics, statistics, computer-aided geometric design, mathematical finance, and mechanics. A great number of papers is devoted to the problem of determining whether a given n-by-n matrix is totally nonnegative or totally positive. One could calculate each of its minors, but that would involve evaluating about $4^n/\sqrt{\pi n}$ subdeterminants. The question arises whether there is a smaller collection of minors whose nonnegativity or positivity implies the nonnegativity or positivity, respectively, of all minors.
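The minor count quoted above can be checked directly: an n-by-n matrix has $\sum_{k=1}^{n}\binom{n}{k}^2 = \binom{2n}{n}-1$ minors, and $\binom{2n}{n} \approx 4^n/\sqrt{\pi n}$ by Stirling's formula. A small sketch (the choice n = 10 is arbitrary):

```python
from math import comb, pi, sqrt

def number_of_minors(n):
    # all minors of an n-by-n matrix: choose k rows and k columns, k = 1..n
    return sum(comb(n, k) ** 2 for k in range(1, n + 1))

n = 10
exact = number_of_minors(n)        # equals comb(2n, n) - 1
approx = 4**n / sqrt(pi * n)       # Stirling approximation of comb(2n, n)
```

For n = 10 the exact count is 184755 minors, against only n² = 100 minors needed by the optimal test derived in the thesis.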
In this thesis we derive a condensed form of the Cauchon Algorithm which provides an efficient criterion for the total nonnegativity of a given matrix, and we give an optimal determinantal test for total nonnegativity. By this test we reduce the number of minors that need to be checked to only n². It has long been conjectured, by the supervisor of this thesis, that each matrix in a matrix interval with respect to the checkerboard partial ordering is nonsingular totally nonnegative if the two corner matrices are so, i.e., that the so-called interval property holds. Invariance of total nonnegativity under element-wise perturbation lies at the core of this conjecture. The conjecture had been affirmatively answered for the totally positive matrices and some subclasses of the totally nonnegative matrices. In this thesis we settle the conjecture and solve the perturbation problem for totally nonnegative matrices under perturbation of their single entries. The key to settling the conjecture is that the entries of the matrix obtained by applying the Cauchon Algorithm to a given nonsingular totally nonnegative matrix can be represented as ratios of two minors formed from consecutive rows and columns of the given matrix. Several analogous results hold for nonsingular totally nonpositive matrices, i.e., matrices having all of their minors nonpositive. By using the Cauchon Algorithm we derive a characterization of these matrices and an optimal determinantal test for their recognition. We also prove that these matrices and the nonsingular almost strictly sign regular matrices, a subclass of the sign regular matrices, possess the interval property. These and related results evoke the (open) question whether the interval property holds for general nonsingular sign regular matrices. A partial totally nonnegative (positive) matrix has specified and unspecified entries, and all minors consisting of specified entries are nonnegative (positive).
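As a rough illustration of the objects involved, the sketch below checks total nonnegativity by brute force and the checkerboard partial ordering entrywise. It is a naive check for tiny matrices only (the corner matrices A and B are arbitrary examples, not taken from the thesis), not the thesis's efficient Cauchon-based test:

```python
import numpy as np
from itertools import combinations

def is_tn(A, tol=1e-12):
    # brute-force total nonnegativity: every minor must be >= 0
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(np.linalg.det(A[np.ix_(r, c)]) >= -tol
               for k in range(1, n + 1)
               for r in combinations(range(n), k)
               for c in combinations(range(n), k))

def checkerboard_leq(A, B):
    # A <=* B in the checkerboard partial order: (-1)^(i+j) (b_ij - a_ij) >= 0
    A, B = np.asarray(A, float), np.asarray(B, float)
    i, j = np.indices(A.shape)
    sign = (-1.0) ** (i + j)
    return bool(np.all(sign * (B - A) >= 0))

A = np.array([[1.0, 0.5], [0.5, 1.0]])    # nonsingular TN corner matrices
B = np.array([[2.0, 0.25], [0.25, 1.5]])
Z = (A + B) / 2                           # a matrix inside the interval [A, B]
```

The interval property proved in the thesis guarantees that every such Z between nonsingular TN corner matrices is itself nonsingular TN; here this can be confirmed by the brute-force check.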
A totally nonnegative (positive) completion problem asks which partial totally nonnegative (positive) matrices allow a choice of the unspecified entries such that the resulting matrix is totally nonnegative (positive). Much is known about the totally nonnegative completion problem, which is easier to solve than the totally positive completion problem, while very little is known about the latter. In this thesis we define new patterns of partial totally positive matrices which are totally positive completable. These patterns partially settle two conjectures recently posed by Johnson and Wei. Finally, a real polynomial is called Hurwitz or stable if all of its roots have negative real parts. The importance of Hurwitz polynomials arises in many scientific fields, for example in control theory and dynamical systems theory. A rational function is called an R-function of negative type if it maps the open upper complex half-plane into the open lower half-plane. With these kinds of polynomials and rational functions some special structured matrices are associated which enjoy important properties like total nonnegativity. Conversely, from the total nonnegativity of these matrices several properties of the polynomials and rational functions associated with them can be inferred. We use these connections to give new and simple proofs of some known facts, prove some properties of these structured matrices, and derive new results for interval polynomials and "interval" R-functions of negative type.

Zusammenfassung

Eine reelle Matrix wird zeichenfest genannt, wenn alle ihre Minoren gleicher Ordnung dasselbe Vorzeichen besitzen oder verschwinden; sie heißt total nichtnegativ bzw. total positiv, wenn ihre sämtlichen Minoren nichtnegativ bzw. positiv sind. Derartige Matrizen treten in einer bemerkenswerten Vielzahl von Gebieten der Mathematik und ihren Anwendungen auf, so z. B.
in den Differential- und Integralgleichungen, Funktionen-, Approximations- und Matrizentheorie, Kombinatorik, reduktiven Lie-Gruppen, Numerischer Mathematik, Statistik, Computer Aided Geometric Design, Mathematischen Finanzökonomie und Mechanik. Eine große Anzahl von Arbeiten beschäftigt sich mit dem Problem zu entscheiden, ob eine gegebene quadratische Matrix n-ter Ordnung total nichtnegativ (positiv) ist. Man könnte jeden Minor der Matrix berechnen, aber das würde die Auswertung von ungefähr $4^n/\sqrt{\pi n}$ Unterdeterminanten der gegebenen Matrix erfordern. Die Frage erhebt sich, ob es eine geringere Anzahl von Minoren gibt, von deren Nichtnegativität bzw. Positivität auf die Nichtnegativität bzw. Positivität sämtlicher Minoren der Matrix geschlossen werden kann. In der vorliegenden Arbeit geben wir eine komprimierte Form des Cauchon-Algorithmus an, die ein effektives Kriterium für die totale Nichtnegativität einer Matrix ermöglicht, und leiten einen optimalen Determinantentest für den Nachweis der totalen Nichtnegativität her. Mit Hilfe dieses Testes lässt sich die Anzahl der zu untersuchenden Minoren auf nur n² reduzieren. Lange Zeit wurde von dem Betreuer dieser Arbeit vermutet, dass jede Matrix aus einem Matrixintervall bezüglich der Schachbrett-Halbordnung regulär und total nichtnegativ ist, wenn die beiden Eckmatrizen des Intervalls diese Eigenschaft besitzen, d. h. die sogenannte Intervall-Eigenschaft gilt. Die Frage der Erhaltung der totalen Nichtnegativität einer Matrix bei Störung ihrer einzelnen Koeffizienten ist hiermit eng verknüpft. Die Vermutung wurde bestätigt für die total positiven Matrizen und einige Unterklassen der total nichtnegativen Matrizen. In der vorliegenden Arbeit beweisen wir die Vermutung und lösen das Problem der Erhaltung der totalen Nichtnegativität bei Störung einzelner Matrixkoeffizienten.
Der Schlüssel zum Beweis der Vermutung ist der Sachverhalt, dass die Koeffizienten der Matrix, die mittels Anwendung des Cauchon-Algorithmus auf eine reguläre und total nichtnegative Matrix gewonnen wird, dargestellt werden können als Quotient zweier Minoren, die aus aufeinanderfolgenden Zeilen und Spalten der Ausgangsmatrix gebildet werden. Verschiedene entsprechende Ergebnisse sind für die regulären total nichtpositiven Matrizen gültig, d. h. für Matrizen, deren sämtliche Minoren nichtpositiv sind. Mit Hilfe des Cauchon-Algorithmus leiten wir eine Charakterisierung dieser Matrizen und einen optimalen Determinantentest zu ihrer Erkennung her. Wir zeigen ferner, dass diese Matrizen und die regulären fast streng zeichenfesten (almost strictly sign regular) Matrizen¹⁾, eine Unterklasse der zeichenfesten Matrizen, die Intervall-Eigenschaft besitzen. Diese und verwandte Resultate legen die (offene) Frage nahe, ob die Intervall-Eigenschaft auch allgemein für die regulären zeichenfesten Matrizen gilt. Eine total nichtnegative bzw. positive Teilmatrix (partial matrix) ist eine Matrix, die sowohl spezifizierte als auch unspezifizierte Koeffizienten besitzt, wobei die Minoren, die nur aus spezifizierten Koeffizienten gebildet werden, sämtlich nichtnegativ bzw. positiv sind. Ein total nichtnegatives bzw. positives Vervollständigungsproblem (completion problem) fragt, welche total nichtnegativen bzw. positiven Teilmatrizen eine Wahl sämtlicher ihrer unspezifizierten Koeffizienten erlauben derart, dass die jeweilige vervollständigte Matrix total nichtnegativ bzw. positiv ist. Über das total nichtnegative Vervollständigungsproblem sind viele Ergebnisse bekannt; dieses ist einfacher zu lösen als das total positive Problem, über welches erst sehr wenige Resultate vorliegen.

¹⁾ Wenn uns ein entsprechender deutscher Begriff aus der Literatur nicht bekannt ist, verwenden wir die Übersetzung des englischen Begriffs und geben diesen in Klammern an.
In dieser Arbeit geben wir neue Muster von total positiven Teilmatrizen an, welche eine total positive Vervollständigung erlauben. Diese Muster bestätigen teilweise zwei Vermutungen
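The connection between Hurwitz polynomials and structured matrices mentioned in the abstract can be illustrated numerically. The sketch below builds the Hurwitz matrix of a stable polynomial and confirms that its leading principal minors are positive while all roots lie in the open left half-plane (the classical Routh-Hurwitz criterion); the example polynomial is an arbitrary choice, not one from the thesis:

```python
import numpy as np

def hurwitz_matrix(coeffs):
    """Hurwitz matrix of p(s) = a0 s^n + ... + an (coeffs = [a0, ..., an]).
    In 1-based indexing, entry (i, j) is a_{2j-i}, taken as 0 out of range."""
    a = list(coeffs)
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)
            if 0 <= k <= n:
                H[i, j] = a[k]
    return H

p = [1.0, 6.0, 11.0, 6.0]         # (s+1)(s+2)(s+3): all roots in Re(s) < 0
H = hurwitz_matrix(p)
roots = np.roots(p)
stable = bool(np.all(roots.real < 0))
minors = [np.linalg.det(H[:k, :k]) for k in range(1, len(H) + 1)]
```

Here H = [[6, 6, 0], [1, 11, 0], [0, 6, 6]], and its leading principal minors 6, 60, 360 are all positive, matching the stability of p.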
Recommended publications
  • Eigenvalues and Eigenvectors of Tridiagonal Matrices∗
    Electronic Journal of Linear Algebra, ISSN 1081-3810. A publication of the International Linear Algebra Society. Volume 15, pp. 115-133, April 2006. ELA, http://math.technion.ac.il/iic/ela

    EIGENVALUES AND EIGENVECTORS OF TRIDIAGONAL MATRICES. SAID KOUACHI. Abstract. This paper is a continuation of previous work by the present author, where explicit formulas for the eigenvalues associated with several tridiagonal matrices were given. In this paper the associated eigenvectors are calculated explicitly. As a consequence, a result obtained by Wen-Chyuan Yueh and independently by S. Kouachi, concerning the eigenvalues and in particular the corresponding eigenvectors of tridiagonal matrices, is generalized. Expressions for the eigenvectors are obtained that differ completely from those obtained by Yueh. The techniques used herein are based on the theory of recurrent sequences. The entries situated on each of the secondary diagonals are not necessarily equal, as was the case considered by Yueh. Key words. Eigenvectors, Tridiagonal matrices. AMS subject classifications. 15A18.

    1. Introduction. The subject of this paper is the diagonalization of tridiagonal matrices. We generalize a result obtained in [5] concerning the eigenvalues and the corresponding eigenvectors of several tridiagonal matrices. We consider tridiagonal matrices of the form

    $$A_n = \begin{pmatrix} -\alpha + b & c_1 & 0 & \cdots & 0 \\ a_1 & b & c_2 & \ddots & \vdots \\ 0 & a_2 & b & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & c_{n-1} \\ 0 & \cdots & 0 & a_{n-1} & -\beta + b \end{pmatrix}, \tag{1}$$

    where $\{a_j\}_{j=1}^{n-1}$ and $\{c_j\}_{j=1}^{n-1}$ are two finite subsequences of the sequences $\{a_j\}_{j=1}^{\infty}$ and $\{c_j\}_{j=1}^{\infty}$ of the field of complex numbers $\mathbb{C}$, respectively, and $\alpha$, $\beta$ and $b$ are complex numbers. We suppose that

    $$a_j c_j = \begin{cases} d_1^2, & \text{if } j \text{ is odd} \\ d_2^2, & \text{if } j \text{ is even} \end{cases} \qquad j = 1, 2, \ldots, \tag{2}$$

    where $d_1$ and $d_2$ are complex numbers.
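A quick numerical sanity check of the matrix family (1), with arbitrarily chosen real values for a_j, c_j, b, α, β satisfying condition (2) with d_1 = d_2 = 1 (the values are assumptions of this sketch, not from the paper):

```python
import numpy as np

def tridiag(n, a, c, b, alpha, beta):
    # Build the n-by-n matrix A_n of (1): constant diagonal b, except the
    # corner entries -alpha + b and -beta + b; subdiagonal a_j, superdiagonal c_j.
    A = np.diag([b] * n).astype(float)
    A[0, 0] = -alpha + b
    A[-1, -1] = -beta + b
    for j in range(n - 1):
        A[j + 1, j] = a[j]
        A[j, j + 1] = c[j]
    return A

n = 6
a = [2.0] * (n - 1)
c = [0.5] * (n - 1)            # a_j c_j = 1 for every j, i.e. d1 = d2 = 1
A = tridiag(n, a, c, b=3.0, alpha=0.0, beta=0.0)
w, V = np.linalg.eig(A)        # each pair (w[k], V[:, k]) satisfies A v = w v
```

Since a_j c_j > 0, the matrix is diagonally similar to a symmetric one, so all computed eigenvalues are real.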
  • Parametrizations of k-Nonnegative Matrices
    Parametrizations of k-Nonnegative Matrices Anna Brosowsky, Neeraja Kulkarni, Alex Mason, Joe Suk, Ewin Tang∗ October 2, 2017 Abstract Totally nonnegative (positive) matrices are matrices whose minors are all nonnegative (positive). We generalize the notion of total nonnegativity, as follows. A k-nonnegative (resp. k-positive) matrix has all minors of size k or less nonnegative (resp. positive). We give a generating set for the semigroup of k-nonnegative matrices, as well as relations for certain special cases, i.e. the k = n − 1 and k = n − 2 unitriangular cases. In the above two cases, we find that the set of k-nonnegative matrices can be partitioned into cells, analogous to the Bruhat cells of totally nonnegative matrices, based on their factorizations into generators. We will show that these cells, like the Bruhat cells, are homeomorphic to open balls, and we prove some results about the topological structure of the closure of these cells, and in fact, in the latter case, the cells form a Bruhat-like CW complex. We also give a family of minimal k-positivity tests which form sub-cluster algebras of the total positivity test cluster algebra. We describe ways to jump between these tests, and give an alternate description of some tests as double wiring diagrams. 1 Introduction A totally nonnegative (respectively totally positive) matrix is a matrix whose minors are all nonnegative (respectively positive). Total positivity and nonnegativity are well-studied phenomena and arise in areas such as planar networks, combinatorics, dynamics, statistics and probability. The study of total positivity and total nonnegativity admit many varied applications, some of which are explored in “Totally Nonnegative Matrices” by Fallat and Johnson [5].
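The definition of k-nonnegativity can be checked by brute force for small matrices; the sketch below enumerates all minors of size at most k (exponential cost, purely illustrative). The 3-by-3 Pascal matrix used as the example is an assumption, chosen because it is known to be totally nonnegative:

```python
import numpy as np
from itertools import combinations

def is_k_nonnegative(A, k, tol=1e-12):
    """Check that every minor of size <= k is nonnegative (brute force)."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    for size in range(1, k + 1):
        for rows in combinations(range(n), size):
            for cols in combinations(range(m), size):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

# the 3x3 Pascal matrix is totally nonnegative, hence k-nonnegative for all k
P = [[1, 1, 1],
     [1, 2, 3],
     [1, 3, 6]]
```

The permutation matrix [[0, 1], [1, 0]] separates the classes: it is 1-nonnegative (all entries are nonnegative) but not 2-nonnegative, since its determinant is -1.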
  • Polynomial Flows in the Plane
    Advances in Mathematics 55, 173-208 (1985). Polynomial Flows in the Plane. HYMAN BASS, Department of Mathematics, Columbia University, New York, New York 10027, and GARY MEISTERS, Department of Mathematics, University of Nebraska, Lincoln, Nebraska 68588. Contents. 1. Introduction. I. Polynomial flows are global, of bounded degree. 2. Vector fields and local flows. 3. Change of coordinates; the group GA_2(K). 4. Polynomial flows; statement of the main results. 5. Continuous families of polynomials. 6. Locally polynomial flows are global, of bounded degree. II. One-parameter subgroups of GA_2(K). 7. Introduction. 8. Amalgamated free products. 9. GA_2(K) as amalgamated free product. 10. One-parameter subgroups of GA_2(K). 11. One-parameter subgroups of B̂A_2(K). 12. One-parameter subgroups of BA_2(K).

    1. Introduction. Let $f : \mathbb{R}^n \to \mathbb{R}^n$ be a $C^1$ vector field, and consider the (autonomous) system of differential equations $\dot{x} = f(x)$ (1a) with initial condition $x(0) = x_0$. (1b) The solution, $x = \varphi(t, x_0)$, depends on $t$ and $x_0$. For which $f$ as above does the flow $\varphi$ depend polynomially on the initial condition $x_0$? This question was discussed in [M2], and in [M1], Section 6. We present here a definitive solution of this problem for n = 2, over both R and C. (See Theorems (4.1) and (4.3) below.) The main tool is the theorem of Jung [J] and van der Kulk [vdK]. This material is based upon work partially supported by the National Science Foundation under Grant NSF MCS 82-02633.
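The notion of a flow depending polynomially on the initial condition can be seen in a minimal example. The vector field f(x, y) = (y, 0) is an arbitrary illustrative choice, not one from the paper: its flow is polynomial (in fact linear) in the initial condition and satisfies the one-parameter group law φ(t + s) = φ(t) ∘ φ(s):

```python
def f(p):
    # vector field f(x, y) = (y, 0); phi below is the flow of x' = f(x)
    x, y = p
    return (y, 0.0)

def phi(t, p):
    # explicit flow of x' = y, y' = 0: phi(t, (x, y)) = (x + t*y, y),
    # a polynomial of degree 1 in the initial condition for every fixed t
    x, y = p
    return (x + t * y, y)

# one-parameter group law: phi(t + s) = phi(t) o phi(s)
p0 = (1.0, 2.0)
lhs = phi(0.5 + 0.25, p0)
rhs = phi(0.25, phi(0.5, p0))
```

The dyadic values 0.5 and 0.25 are used so that the floating-point comparison is exact.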
  • On Multivariate Interpolation
    On Multivariate Interpolation. Peter J. Olver, School of Mathematics, University of Minnesota, Minneapolis, MN 55455, U.S.A. [email protected] http://www.math.umn.edu/∼olver. Abstract. A new approach to interpolation theory for functions of several variables is proposed. We develop a multivariate divided difference calculus based on the theory of non-commutative quasi-determinants. In addition, intriguing explicit formulae that connect the classical finite difference interpolation coefficients for univariate curves with multivariate interpolation coefficients for higher dimensional submanifolds are established. Supported in part by NSF Grant DMS 11–08894. April 6, 2016.

    1. Introduction. Interpolation theory for functions of a single variable has a long and distinguished history, dating back to Newton's fundamental interpolation formula and the classical calculus of finite differences, [7, 47, 58, 64]. Standard numerical approximations to derivatives and many numerical integration methods for differential equations are based on the finite difference calculus. However, historically, no comparable calculus was developed for functions of more than one variable. If one looks up multivariate interpolation in the classical books, one is essentially restricted to rectangular, or, slightly more generally, separable grids, over which the formulae are a simple adaptation of the univariate divided difference calculus. See [19] for historical details. Starting with G. Birkhoff, [2] (who was, coincidentally, my thesis advisor), recent years have seen a renewed level of interest in multivariate interpolation among both pure and applied researchers; see [18] for a fairly recent survey containing an extensive bibliography. De Boor and Ron, [8, 12, 13], and Sauer and Xu, [61, 10, 65], have systematically studied the polynomial case.
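For contrast with the multivariate theory discussed above, the classical univariate divided-difference calculus is easy to sketch (the nodes and the test function x³ are arbitrary choices for this illustration):

```python
def divided_differences(xs, ys):
    """Newton's divided-difference coefficients f[x0], f[x0,x1], ..., computed in place."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    # Horner-style evaluation of the Newton interpolating polynomial
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x**3 for x in xs]            # interpolate f(x) = x^3 at four nodes
coef = divided_differences(xs, ys)
```

Since x³ is a cubic and four nodes determine a unique cubic interpolant, the Newton form reproduces f exactly at every point, not just at the nodes.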
  • Solving Systems of Linear Equations by Gaussian Elimination
    Chapter 3. Solving Systems of Linear Equations by Gaussian Elimination. N. Nassif and D. Fayyad.

    3.1 Mathematical Preliminaries. In this chapter we consider the problem of computing the solution of a system of n linear equations in n unknowns. The scalar form of that system is as follows:

    $$(S)\quad \begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \quad\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n \end{cases}$$

    Written in matrix form, (S) is equivalent to: (3.1) $Ax = b$, where the coefficient square matrix $A \in \mathbb{R}^{n,n}$, and the column vectors $x, b \in \mathbb{R}^{n,1} \cong \mathbb{R}^n$; specifically, $A = (a_{ij})$, $x = (x_1, \ldots, x_n)^T$ and $b = (b_1, \ldots, b_n)^T$. We assume that the basic linear algebra properties for systems of linear equations like (3.1) are satisfied. Specifically:

    Proposition 3.1. The following statements are equivalent: 1. System (3.1) has a unique solution. 2. det(A) ≠ 0. 3. A is invertible.

    In this chapter, our objective is to present the basic ideas of a linear system solver. It consists of two main procedures allowing to solve (3.1) efficiently. 1. The first, referred to as Gauss elimination (or reduction), reduces (3.1) into an equivalent system of linear equations whose matrix is upper triangular. Specifically, one shows in section 4 that $Ax = b \iff Ux = c$, where $c \in \mathbb{R}^n$ and $U \in \mathbb{R}^{n,n}$ is upper triangular with entries $u_{11}, u_{12}, \ldots$
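The two procedures described (reduction to an upper triangular system Ux = c, then back substitution) can be sketched as follows; partial pivoting is added for numerical safety, and the 2-by-2 system is an arbitrary example:

```python
def gauss_solve(A, b):
    """Reduce Ax = b to an upper triangular system Ux = c, then back-substitute."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # partial pivoting
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]            # eliminate below the pivot
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution on Ux = c
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [3.0, 5.0]
x = gauss_solve(A, b)    # solution of 2x + y = 3, x + 3y = 5
```

By hand, eliminating x from the second equation gives 2.5y = 3.5, so y = 1.4 and x = 0.8, which the routine reproduces.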
  • Stabilization, Estimation and Control of Linear Dynamical Systems with Positivity and Symmetry Constraints
    Stabilization, Estimation and Control of Linear Dynamical Systems with Positivity and Symmetry Constraints. A Dissertation Presented by Amirreza Oghbaee to the Department of Electrical and Computer Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical Engineering, Northeastern University, Boston, Massachusetts, April 2018. To my parents for their endless love and support.

    Contents: List of Figures; Acknowledgments; Abstract of the Dissertation. 1 Introduction. 2 Matrices with Special Structures: 2.1 Nonnegative (Positive) and Metzler Matrices (2.1.1 Nonnegative Matrices and Eigenvalue Characterization; 2.1.2 Metzler Matrices; 2.1.3 Z-Matrices; 2.1.4 M-Matrices; 2.1.5 Totally Nonnegative (Positive) Matrices and Strictly Metzler Matrices); 2.2 Symmetric Matrices (2.2.1 Properties of Symmetric Matrices; 2.2.2 Symmetrizer and Symmetrization; 2.2.3 Quadratic Form and Eigenvalues Characterization of Symmetric Matrices); 2.3 Nonnegative and Metzler Symmetric Matrices. 3 Positive and Symmetric Systems: 3.1 Positive Systems (3.1.1 Externally Positive Systems; 3.1.2 Internally Positive Systems; 3.1.3 Asymptotic Stability; 3.1.4 Bounded-Input Bounded-Output (BIBO) Stability; 3.1.5 Asymptotic Stability using Lyapunov Equation; 3.1.6 Robust Stability of Perturbed Systems; 3.1.7 Stability Radius); 3.2 Symmetric Systems; 3.3 Positive Symmetric Systems. 4 Positive Stabilization of Dynamic Systems: 4.1 Metzlerian Stabilization; 4.2 Maximizing the stability radius by state feedback.
  • Section 2.4–2.5 Partitioned Matrices and LU Factorization
    Section 2.4–2.5 Partitioned Matrices and LU Factorization. Gexin Yu, [email protected], College of William and Mary.

    Partition matrices into blocks. In real-world problems, systems can have huge numbers of equations and unknowns. Standard computation techniques are inefficient in such cases, so we need to develop techniques which exploit the internal structure of the matrices. In most cases, the matrices of interest have lots of zeros. One approach to simplify the computation is to partition a matrix into blocks.

    Ex: $A = \begin{pmatrix} 3 & 0 & -1 & 5 & 9 & -2 \\ -5 & 2 & 4 & 0 & -3 & 1 \\ -8 & -6 & 3 & 1 & 7 & -4 \end{pmatrix}$. This partition can also be written as the following 2 × 3 block matrix: $A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{pmatrix}$. In the block form, we have blocks $A_{11} = \begin{pmatrix} 3 & 0 & -1 \\ -5 & 2 & 4 \end{pmatrix}$ and so on.
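The block partition can be reproduced with numpy slicing; the column split 3+2+1 below is one consistent choice matching the 2 × 3 block shape with A11 of size 2 × 3 (the split is an assumption of this sketch):

```python
import numpy as np

A = np.array([[ 3,  0, -1,  5,  9, -2],
              [-5,  2,  4,  0, -3,  1],
              [-8, -6,  3,  1,  7, -4]])

# 2 x 3 block partition: rows split 2 + 1, columns split 3 + 2 + 1
A11, A12, A13 = A[:2, :3], A[:2, 3:5], A[:2, 5:]
A21, A22, A23 = A[2:, :3], A[2:, 3:5], A[2:, 5:]

# reassembling the blocks recovers the original matrix
B = np.block([[A11, A12, A13],
              [A21, A22, A23]])
```

Block multiplication then works exactly like ordinary matrix multiplication, with the blocks playing the role of scalar entries (provided the block sizes are conformable).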
  • Linear Algebra and Matrix Theory
    Linear Algebra and Matrix Theory. Chapter 1 - Linear Systems, Matrices and Determinants. This is a very brief outline of some basic definitions and theorems of linear algebra. We will assume that you know elementary facts such as how to add two matrices, how to multiply a matrix by a number, how to multiply two matrices, what an identity matrix is, and what a solution of a linear system of equations is. Hardly any of the theorems will be proved. More complete treatments may be found in the following references. 1. References. (1) S. Friedberg, A. Insel and L. Spence, Linear Algebra, Prentice-Hall. (2) M. Golubitsky and M. Dellnitz, Linear Algebra and Differential Equations Using Matlab, Brooks-Cole. (3) K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall. (4) P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press.

    2. Linear Systems of Equations and Gaussian Elimination. The solutions, if any, of a linear system of equations

    $$(2.1)\quad \begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\,\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m \end{aligned}$$

    may be found by Gaussian elimination. The permitted steps are as follows. (1) Both sides of any equation may be multiplied by the same nonzero constant. (2) Any two equations may be interchanged. (3) Any multiple of one equation may be added to another equation. Instead of working with the symbols for the variables (the $x_i$), it is easier to place the coefficients (the $a_{ij}$) and the forcing terms (the $b_i$) in a rectangular array called the augmented matrix of the system, with entries $a_{11}, a_{12}, \ldots$
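Each permitted step corresponds to left-multiplication by an invertible elementary matrix, which is why the solution set is unchanged; a small sketch (the 3-by-3 system is an arbitrary example):

```python
import numpy as np

# The three permitted steps, each realized by an invertible elementary matrix:
I = np.eye(3)

E_scale = I.copy(); E_scale[1, 1] = 5.0   # (1) multiply equation 2 by 5
E_swap  = I[[1, 0, 2]]                    # (2) interchange equations 1 and 2
E_add   = I.copy(); E_add[2, 0] = -2.0    # (3) add (-2) * equation 1 to equation 3

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
b = np.array([6., 15., 25.])
x = np.linalg.solve(A, b)                 # solution of the original system

# each elementary operation on the augmented system leaves the solution unchanged
same = all(np.allclose(np.linalg.solve(E @ A, E @ b), x)
           for E in (E_scale, E_swap, E_add))
```

The right-hand side is chosen so that the solution is x = (1, 1, 1), making the invariance easy to see.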
  • arXiv:2009.05100v2
    THE COMPLETE POSITIVITY OF SYMMETRIC TRIDIAGONAL AND PENTADIAGONAL MATRICES. LEI CAO, DARIAN MCLAREN, AND SARAH PLOSKER. Abstract. We provide a decomposition that is sufficient in showing when a symmetric tridiagonal matrix A is completely positive. Our decomposition can be applied to a wide range of matrices. We give alternate proofs for a number of related results found in the literature in a simple, straightforward manner. We show that the cp-rank of any irreducible tridiagonal doubly stochastic matrix is equal to its rank. We then consider symmetric pentadiagonal matrices, proving some analogous results, and providing two different decompositions sufficient for complete positivity. We illustrate our constructions with a number of examples.

    1. Preliminaries. All matrices herein will be real-valued. Let A be an n × n symmetric tridiagonal matrix:

    $$A = \begin{pmatrix} a_1 & b_1 & & & \\ b_1 & a_2 & b_2 & & \\ & \ddots & \ddots & \ddots & \\ & b_{n-2} & a_{n-1} & b_{n-1} \\ & & b_{n-1} & a_n \end{pmatrix}$$

    We are often interested in the case where A is also doubly stochastic, in which case we have $a_i = 1 - b_{i-1} - b_i$ for $i = 1, 2, \ldots, n$, with the convention that $b_0 = b_n = 0$. It is easy to see that if a tridiagonal matrix is doubly stochastic, it must be symmetric, so the additional hypothesis of symmetry can be dropped in that case. We are interested in positivity conditions for symmetric tridiagonal and pentadiagonal matrices. A stronger condition than positive semidefiniteness, known as complete positivity, has applications in a variety of areas of study, including block designs, maximin efficiency-robust tests, modelling DNA evolution, and more [5, Chapter 2], as well as recent use in mathematical optimization and quantum information theory (see [14] and the references therein).
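The doubly stochastic tridiagonal construction a_i = 1 − b_{i−1} − b_i (with b_0 = b_n = 0) can be checked directly; the off-diagonal values below are arbitrary choices, not from the paper:

```python
import numpy as np

def tridiag_doubly_stochastic(b):
    """Symmetric tridiagonal matrix with off-diagonal b_1, ..., b_{n-1} and
    diagonal a_i = 1 - b_{i-1} - b_i (convention b_0 = b_n = 0), so every
    row and column sums to 1."""
    b = np.asarray(b, dtype=float)
    n = len(b) + 1
    bpad = np.concatenate(([0.0], b, [0.0]))      # b_0, b_1, ..., b_{n-1}, b_n
    a = 1.0 - bpad[:-1] - bpad[1:]                # a_i = 1 - b_{i-1} - b_i
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

A = tridiag_doubly_stochastic([0.2, 0.3, 0.1])    # a 4x4 example
```

Here the diagonal comes out as (0.8, 0.5, 0.6, 0.9), and both row and column sums equal 1, confirming double stochasticity and (automatically) symmetry.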
  • Tropical Totally Positive Matrices 3
    TROPICAL TOTALLY POSITIVE MATRICES. STÉPHANE GAUBERT AND ADI NIV. Abstract. We investigate the tropical analogues of totally positive and totally nonnegative matrices. These arise when considering the images by the nonarchimedean valuation of the corresponding classes of matrices over a real nonarchimedean valued field, like the field of real Puiseux series. We show that the nonarchimedean valuation sends the totally positive matrices precisely to the Monge matrices. This leads to explicit polyhedral representations of the tropical analogues of totally positive and totally nonnegative matrices. We also show that tropical totally nonnegative matrices with a finite permanent can be factorized in terms of elementary matrices. We finally determine the eigenvalues of tropical totally nonnegative matrices, and relate them with the eigenvalues of totally nonnegative matrices over nonarchimedean fields. Keywords: Total positivity; total nonnegativity; tropical geometry; compound matrix; permanent; Monge matrices; Grassmannian; Plücker coordinates. AMSC: 15A15 (Primary), 15A09, 15A18, 15A24, 15A29, 15A75, 15A80, 15B99.

    1. Introduction. 1.1. Motivation and background. A real matrix is said to be totally positive (resp. totally nonnegative) if all its minors are positive (resp. nonnegative). These matrices arise in several classical fields, such as oscillatory matrices (see e.g. [And87, §4]), or approximation theory (see e.g. [GM96]); they have appeared more recently in the theory of canonical bases for quantum groups [BFZ96]. We refer the reader to the monograph of Fallat and Johnson [FJ11] or to the survey of Fomin and Zelevinsky [FZ00] for more information. Totally positive/nonnegative matrices can be defined over any real closed field, and in particular, over nonarchimedean fields, like the field of Puiseux series with real coefficients.
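The Monge condition, which the abstract identifies as the image of total positivity under the valuation, can be tested entrywise. The sketch below uses the submodularity form a_{ij} + a_{kl} ≤ a_{il} + a_{kj} for i < k, j < l (the sign convention is an assumption of this sketch), with a_{ij} = (i − j)² as a standard Monge example:

```python
def is_monge(A):
    """Check the Monge condition A[i][j] + A[k][l] <= A[i][l] + A[k][j]
    for all row pairs i < k and column pairs j < l (brute force)."""
    n, m = len(A), len(A[0])
    return all(A[i][j] + A[k][l] <= A[i][l] + A[k][j]
               for i in range(n) for k in range(i + 1, n)
               for j in range(m) for l in range(j + 1, m))

# a_{ij} = (i - j)^2 is Monge because f(t) = t^2 is convex
M = [[(i - j) ** 2 for j in range(4)] for i in range(4)]
```

The matrix [[1, 0], [0, 0]] violates the condition (1 + 0 > 0 + 0), so it is not Monge.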
  • Inverse Eigenvalue Problems Involving Multiple Spectra
    Inverse eigenvalue problems involving multiple spectra. G.M.L. Gladwell, Department of Civil Engineering, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1. [email protected] URL: http://www.civil.uwaterloo.ca/ggladwell. Abstract. If $A \in M_n$, its spectrum is denoted by $\sigma(A)$. If $A$ is oscillatory (O) then $\sigma(A)$ is positive and discrete, the submatrix $A[r+1, \ldots, n]$ is O and its spectrum is denoted by $\sigma_r(A)$. It is known that there is a unique symmetric tridiagonal O matrix with given, positive, strictly interlacing spectra $\sigma_0, \sigma_1$. It is shown that there is not necessarily a pentadiagonal O matrix with given, positive, strictly interlacing spectra $\sigma_0, \sigma_1, \sigma_2$, but that there is a family of such matrices with positive strictly interlacing spectra $\sigma_0, \sigma_1$. The concept of inner total positivity (ITP) is introduced, and it is shown that an ITP matrix may be reduced to ITP band form, or filled in to become TP. These reductions and filling-in procedures are used to construct ITP matrices with given multiple common spectra.

    1. Introduction. My interest in inverse eigenvalue problems (IEP) stems from the fact that they appear in inverse vibration problems, see [7]. In these problems the matrices that appear are usually symmetric; in this paper we shall often assume that the matrices are symmetric: $A \in S_n$. If $A \in S_n$, its eigenvalues are real; we denote its spectrum by $\sigma(A) = \{\lambda_1, \lambda_2, \ldots, \lambda_n\}$, where $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. The direct problem of finding $\sigma(A)$ from $A$ is well understood. At first sight it appears that inverse eigenvalue problems are trivial: every $A \in S_n$ with spectrum $\sigma(A)$ has the form $Q^T \Lambda Q$ where $Q$ is orthogonal and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$.
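The strict interlacing of σ₀ and σ₁ mentioned in the abstract can be observed numerically for an unreduced symmetric tridiagonal matrix (strictness holds because all off-diagonal entries are nonzero; the diagonal and off-diagonal values below are arbitrary):

```python
import numpy as np

# spectrum of the trailing principal submatrix A[1:, 1:] strictly interlaces
# the spectrum of A when A is symmetric tridiagonal with nonzero off-diagonals
a = np.array([2.0, 3.0, 2.5, 4.0])     # diagonal entries
b = np.array([1.0, 0.5, 1.5])          # off-diagonal entries, all nonzero
A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

sigma0 = np.sort(np.linalg.eigvalsh(A))
sigma1 = np.sort(np.linalg.eigvalsh(A[1:, 1:]))

interlaces = all(sigma0[i] < sigma1[i] < sigma0[i + 1]
                 for i in range(len(sigma1)))
```

This is the Cauchy interlacing property; strictness can fail for general symmetric matrices, but not for unreduced tridiagonal ones.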
  • Using Row Reduction to Calculate the Inverse and the Determinant of a Square Matrix
    Using row reduction to calculate the inverse and the determinant of a square matrix. Notes for MATH 0290 Honors by Prof. Anna Vainchtein.

    1. Inverse of a square matrix. An n × n square matrix A is called invertible if there exists a matrix X such that AX = XA = I, where I is the n × n identity matrix. If such a matrix X exists, one can show that it is unique. We call it the inverse of A and denote it by A⁻¹ = X, so that AA⁻¹ = A⁻¹A = I holds if A⁻¹ exists, i.e. if A is invertible. Not all matrices are invertible. If A⁻¹ does not exist, the matrix A is called singular or noninvertible. Note that if A is invertible, then the linear algebraic system Ax = b has a unique solution x = A⁻¹b. Indeed, multiplying both sides of Ax = b on the left by A⁻¹, we obtain A⁻¹Ax = A⁻¹b. But A⁻¹A = I and Ix = x, so x = A⁻¹b. The converse is also true, so for a square matrix A, Ax = b has a unique solution if and only if A is invertible.

    2. Calculating the inverse. To compute A⁻¹ if it exists, we need to find a matrix X such that (1) AX = I. Linear algebra tells us that if such X exists, then XA = I holds as well, and so X = A⁻¹. Now observe that solving (1) is equivalent to solving the following linear systems: Ax₁ = e₁, Ax₂ = e₂, ..., Axₙ = eₙ, where x_j, j = 1, .
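The column-by-column approach described above amounts to solving Ax_j = e_j for each standard basis vector e_j; a sketch using numpy's solver in place of manual row reduction (the 2-by-2 example is arbitrary):

```python
import numpy as np

def inverse_by_columns(A):
    """Solve A x_j = e_j for each column e_j of the identity; the solutions
    are the columns of A^{-1} (mirrors the row-reduction approach above)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    I = np.eye(n)
    return np.column_stack([np.linalg.solve(A, I[:, j]) for j in range(n)])

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
X = inverse_by_columns(A)
```

Since det(A) = 1, the exact inverse is [[1, -1], [-1, 2]], and both AX and XA reproduce the identity.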