The General Totally Positive Matrix Completion Problem

Total Pages: 16

File Type: PDF, Size: 1020 KB

The General Totally Positive Matrix Completion Problem

The Electronic Journal of Linear Algebra. A publication of the International Linear Algebra Society. Volume 7, pp. 1-20, February 2000. ISSN 1081-3810. http://math.technion.ac.il/iic/ela

THE GENERAL TOTALLY POSITIVE MATRIX COMPLETION PROBLEM WITH FEW UNSPECIFIED ENTRIES

SHAUN M. FALLAT†, CHARLES R. JOHNSON‡, AND RONALD L. SMITH§

Received by the editors on 13 July 1999. Accepted for publication on 10 January 2000. Handling Editor: Ludwig Elsner.

† Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2, Canada ([email protected]). The work of this author was begun while a Ph.D. candidate at the College of William and Mary.

‡ Department of Mathematics, College of William and Mary, Williamsburg, VA 23187-8795, USA ([email protected]). Research supported in part by Office of Naval Research contract N00014-90-J-1739 and NSF grant 92-00899.

§ Department of Mathematics, University of Tennessee at Chattanooga, Chattanooga, TN 37403-2598, USA ([email protected]). The work of this author was begun while on sabbatical at the College of William and Mary, supported by the University of Chattanooga Foundation. The research was completed under a Summer Research grant from the University of Tennessee at Chattanooga's Center of Excellence in Computer Applications.

Abstract. For m-by-n partial totally positive matrices with exactly one unspecified entry, the set of positions for that entry that guarantee completability to a totally positive matrix is characterized. They are the positions (i, j) with i + j ≤ 4 and the positions (i, j) with i + j ≥ m + n − 2. In each case, the set of completing entries is an open (and, in case i = j = 1 or i = m, j = n, infinite) interval. In the process some new structural results about totally positive matrices are developed. In addition, the pairs of positions that guarantee completability in partial totally positive matrices with two unspecified entries are characterized in low dimensions.

Key words. totally positive matrices, matrix completion problems, partial matrix.

AMS subject classifications. 15A48

1. Introduction. An m-by-n matrix A is called totally positive, TP (totally nonnegative, TN), if every minor of A is positive (nonnegative). A partial matrix is a rectangular array in which some entries are specified, while the remaining, unspecified, entries are free to be chosen from an indicated set (e.g., the real field). A completion of a partial matrix is a choice of allowed values for the unspecified entries, resulting in a conventional matrix, and a matrix completion problem asks which partial matrices have a completion with a particular desired property. The TP (TN) matrices arise in a variety of ways in geometry, combinatorics, algebra, differential equations, function theory, etc., and have attracted considerable attention recently [1, 7, 9, 17]. There is also now considerable literature on a variety of matrix completion problems [3, 4, 5, 6, 8, 12, 13, 14, 15]. Since total positivity and total nonnegativity are inherited by arbitrary submatrices, in order for a partial matrix to have a TP (TN) completion, it must be partially TP (TN), i.e., every fully specified submatrix must be TP (TN). Study of the TN completion problem was begun in [14], where the square, combinatorially symmetric patterns for which every partially TN matrix has a TN completion were characterized under a regularity condition that has since been removed in [6]. Here, we begin the study of the TP completion problem, in the rectangular case
without any combinatorial symmetry assumption. It should be noted that there are important differences between the TP and TN completion problems. In the former, there are stronger requirements on the data, but also much stronger requirements on the completion. In practice, this difference is considerable; for example, results analogous to those in [14] are not yet clear in the TP case. Here in the TP case we again focus on patterns, and ask for which patterns does every partial TP matrix have a TP completion. In general, this appears to be a substantial and subtle problem. We give a complete answer for m-by-n patterns with just one unspecified entry, and even here the answer is rather surprising and nontrivial. If min{m, n} ≥ 4, then only 12 individual positions ensure completability: the upper left and lower right triangles of 6 entries each. If min{m, n} = 1, 2 or 3, then any single position ensures completability. In order to prove these results, several classical and some new (e.g., about TP linear systems) facts about TP matrices are used. Patterns with two unspecified entries are also considered. An inventory of completable pairs when m, n ≤ 4, and some general observations, are given. However, the general question of two unspecified entries is shown to be subtle: examples are given to show that the answer does not depend just on the "relative" positions of the two entries.

The submatrix of an m-by-n matrix A lying in rows α and columns β, α ⊆ M = {1, 2, ..., m}, β ⊆ N = {1, 2, ..., n}, is denoted by A[α; β]. Similarly, A(α; β) denotes the submatrix obtained from A by deleting the rows indexed by α and the columns indexed by β. When m = n, the principal submatrix A[α; α] is abbreviated to A[α], and the complementary principal submatrix is denoted by A(α). As usual, for α ⊆ N we let $\alpha^c$ denote the complement of α relative to N. An m-by-n matrix is TN$_k$ (resp. TP$_k$), for 1 ≤ k ≤ min(m, n), if all minors of size at most k are nonnegative (resp. positive). It is well known that if $A = [a_{ij}]$ is an m-by-n TP$_k$ (TN$_k$) matrix, then the m-by-n matrix $\tilde{A} = [a_{m-i+1,\,n-j+1}]$ is also a TP$_k$ (TN$_k$) matrix. We may refer to the matrix $\tilde{A} = [a_{m-i+1,\,n-j+1}]$ as being obtained from A by reversing the indices of A.

We now present some identities, discovered by Sylvester (see [10]), for completeness and clarity of composition. Let A be an n-by-n matrix, α ⊆ N, and suppose |α| = k. Define the (n − k)-by-(n − k) matrix $B = [b_{ij}]$, with $i, j \in \alpha^c$, by setting $b_{ij} = \det A[\alpha \cup \{i\};\, \alpha \cup \{j\}]$, for every $i, j \in \alpha^c$. Then Sylvester's identity states that for each $\delta, \gamma \subseteq \alpha^c$, with $|\delta| = |\gamma| = l$,

(1)  $\det B[\delta; \gamma] = (\det A[\alpha])^{l-1} \det A[\alpha \cup \delta;\, \alpha \cup \gamma]$.

Observe that a special case of (1) is $\det B = (\det A[\alpha])^{n-k-1} \det A$. Another very useful special case, which is employed throughout, is the following. Let A be an n-by-n matrix partitioned as follows:

$$A = \begin{bmatrix} a_{11} & a_{12}^T & a_{13} \\ a_{21} & A_{22} & a_{23} \\ a_{31} & a_{32}^T & a_{33} \end{bmatrix},$$

where $A_{22}$ is (n − 2)-by-(n − 2) and $a_{11}, a_{33}$ are scalars.
Define the matrices

$$B = \begin{bmatrix} a_{11} & a_{12}^T \\ a_{21} & A_{22} \end{bmatrix}, \quad C = \begin{bmatrix} a_{12}^T & a_{13} \\ A_{22} & a_{23} \end{bmatrix}, \quad D = \begin{bmatrix} a_{21} & A_{22} \\ a_{31} & a_{32}^T \end{bmatrix}, \quad E = \begin{bmatrix} A_{22} & a_{23} \\ a_{32}^T & a_{33} \end{bmatrix}.$$

If we let $\tilde{b} = \det B$, $\tilde{c} = \det C$, $\tilde{d} = \det D$, and $\tilde{e} = \det E$, then by (1) it follows that

$$\det \begin{bmatrix} \tilde{b} & \tilde{c} \\ \tilde{d} & \tilde{e} \end{bmatrix} = \det A_{22} \cdot \det A.$$

Hence, provided $\det A_{22} \neq 0$, we have

(2)  $\det A = \dfrac{\det B \det E - \det C \det D}{\det A_{22}}.$

We now state an important result due to Fekete (see, e.g., [1]), which gives a very useful and appealing criterion, or characterization, of totally positive matrices. First we need to define the notion of the dispersion of an index set. For $\alpha = \{i_1, i_2, \ldots, i_k\} \subseteq N$, with $i_1 < i_2 < \cdots < i_k$, the dispersion of α, denoted $d(\alpha)$, is defined to be

$$d(\alpha) = \sum_{j=1}^{k-1} (i_{j+1} - i_j - 1) = i_k - i_1 - (k - 1),$$

with the convention that $d(\alpha) = 0$ when α is a singleton. The dispersion of a set α represents a measure of the "gaps" in the set α. In particular, observe that $d(\alpha) = 0$ whenever α is a contiguous (i.e., an index set based on consecutive indices) subset of N.

Theorem 1.1 (Fekete's Criterion). An m-by-n matrix A is totally positive if and only if $\det A[\alpha; \beta] > 0$ for all α ⊆ {1, 2, ..., m} and β ⊆ {1, 2, ..., n} with $|\alpha| = |\beta|$ and $d(\alpha) = d(\beta) = 0$.

In other words, a matrix is totally positive if and only if the determinant of every square submatrix based on contiguous row and column index sets is positive.

The next lemma (which may be of independent interest) proves to be very useful for certain completion problems for TP matrices, and is no doubt useful for other issues with TP matrices.

Lemma 1.2. Let $A = [a_1, a_2, \ldots, a_n]$ be an (n − 1)-by-n TP matrix. Then, for $k = 1, 2, \ldots, n$, the system

(3)  $a_k = \sum_{\substack{i=1 \\ i \neq k}}^{n} y_i a_i$

has a unique solution y for which

$$\operatorname{sgn}(y_i) = \begin{cases} \operatorname{sgn}\big((-1)^{i}\big), & \text{if } k \text{ is odd}, \\ \operatorname{sgn}\big((-1)^{i-1}\big), & \text{if } k \text{ is even}. \end{cases}$$
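Both Fekete's criterion and identity (2) are easy to check numerically. The following is a minimal Python sketch (ours, not from the paper; the function name is our own): it tests total positivity using only contiguous minors, as Theorem 1.1 allows, and verifies identity (2) on a 3-by-3 Vandermonde matrix with increasing positive nodes, a classical TP example.

```python
import numpy as np

def is_totally_positive(A):
    """Fekete's criterion (Theorem 1.1): A is TP iff every minor taken on
    contiguous row and column index sets is positive."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):        # size of the square submatrix
        for i in range(m - k + 1):           # contiguous row window
            for j in range(n - k + 1):       # contiguous column window
                if np.linalg.det(A[i:i+k, j:j+k]) <= 0:
                    return False
    return True

# Vandermonde matrix with nodes 1 < 2 < 3: a classical TP example.
A = np.array([[1., 1., 1.],
              [1., 2., 4.],
              [1., 3., 9.]])
print(is_totally_positive(A))                 # True

# Identity (2) for this 3-by-3 A; here A22 is the single central entry a22.
B, C, D, E = A[:2, :2], A[:2, 1:], A[1:, :2], A[1:, 1:]
lhs = np.linalg.det(A)
rhs = (np.linalg.det(B) * np.linalg.det(E)
       - np.linalg.det(C) * np.linalg.det(D)) / A[1, 1]
print(np.isclose(lhs, rhs))                   # True: det A = (det B det E - det C det D) / det A22
```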
Recommended publications
  • Parametrizations of k-Nonnegative Matrices
    Parametrizations of k-Nonnegative Matrices. Anna Brosowsky, Neeraja Kulkarni, Alex Mason, Joe Suk, Ewin Tang∗. October 2, 2017.

    Abstract. Totally nonnegative (positive) matrices are matrices whose minors are all nonnegative (positive). We generalize the notion of total nonnegativity, as follows. A k-nonnegative (resp. k-positive) matrix has all minors of size k or less nonnegative (resp. positive). We give a generating set for the semigroup of k-nonnegative matrices, as well as relations for certain special cases, i.e. the k = n − 1 and k = n − 2 unitriangular cases. In the above two cases, we find that the set of k-nonnegative matrices can be partitioned into cells, analogous to the Bruhat cells of totally nonnegative matrices, based on their factorizations into generators. We will show that these cells, like the Bruhat cells, are homeomorphic to open balls, and we prove some results about the topological structure of the closure of these cells, and in fact, in the latter case, the cells form a Bruhat-like CW complex. We also give a family of minimal k-positivity tests which form sub-cluster algebras of the total positivity test cluster algebra. We describe ways to jump between these tests, and give an alternate description of some tests as double wiring diagrams.

    1 Introduction. A totally nonnegative (respectively totally positive) matrix is a matrix whose minors are all nonnegative (respectively positive). Total positivity and nonnegativity are well-studied phenomena and arise in areas such as planar networks, combinatorics, dynamics, statistics and probability. The study of total positivity and total nonnegativity admit many varied applications, some of which are explored in "Totally Nonnegative Matrices" by Fallat and Johnson [5].
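As a quick illustration of the definition (our sketch, not from the paper), a brute-force k-nonnegativity test simply examines every minor of size at most k:

```python
import numpy as np
from itertools import combinations

def is_k_nonnegative(A, k):
    """Check that every minor of A of size at most k is nonnegative."""
    m, n = A.shape
    for size in range(1, min(k, m, n) + 1):
        for rows in combinations(range(m), size):
            for cols in combinations(range(n), size):
                if np.linalg.det(A[np.ix_(rows, cols)]) < 0:
                    return False
    return True

A = np.array([[1., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])
print(is_k_nonnegative(A, 2))   # True: all 1x1 and 2x2 minors are >= 0
print(is_k_nonnegative(A, 3))   # True: det(A) = 1 >= 0 as well
```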
  • Diagonalizing a Matrix
    Diagonalizing a Matrix

    Definition. 1. We say that two square matrices A and B are similar provided there exists an invertible matrix P so that $B = P^{-1}AP$. 2. We say a matrix A is diagonalizable if it is similar to a diagonal matrix.

    Example. 1. The matrices and are similar matrices since . We conclude that is diagonalizable. 2. The matrices and are similar matrices since . After we have developed some additional theory, we will be able to conclude that the matrices and are not diagonalizable.

    Theorem. Suppose A, B and C are square matrices. (1) A is similar to A. (2) If A is similar to B, then B is similar to A. (3) If A is similar to B and if B is similar to C, then A is similar to C.

    Proof of (3). Since A is similar to B, there exists an invertible matrix P so that $B = P^{-1}AP$. Also, since B is similar to C, there exists an invertible matrix R so that $C = R^{-1}BR$. Now, $C = R^{-1}(P^{-1}AP)R = (PR)^{-1}A(PR)$, and so A is similar to C. Thus, "A is similar to B" is an equivalence relation.

    Theorem. If A is similar to B, then A and B have the same eigenvalues.

    Proof. Since A is similar to B, there exists an invertible matrix P so that $B = P^{-1}AP$. Now, $\det(B - \lambda I) = \det(P^{-1}AP - \lambda I) = \det(P^{-1}(A - \lambda I)P) = \det(A - \lambda I)$. Since A and B have the same characteristic equation, they have the same eigenvalues.

    Example. Find the eigenvalues for . Solution. Since is similar to the diagonal matrix , they have the same eigenvalues. Because the eigenvalues of an upper (or lower) triangular matrix are the entries on the main diagonal, we see that the eigenvalues for , and, hence, are .
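The eigenvalue theorem above is easy to confirm numerically. A short sketch (the matrices A and P below are our own illustrative choices) builds $B = P^{-1}AP$ and compares the two spectra:

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])
P = np.array([[1., 1.],
              [1., 2.]])          # any invertible P works
B = np.linalg.inv(P) @ A @ P      # B is similar to A by construction

eig_A = np.sort(np.linalg.eigvals(A))
eig_B = np.sort(np.linalg.eigvals(B))
print(np.allclose(eig_A, eig_B))  # True: similarity preserves eigenvalues
```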
  • Crystals and Total Positivity on Orientable Surfaces
    CRYSTALS AND TOTAL POSITIVITY ON ORIENTABLE SURFACES. THOMAS LAM AND PAVLO PYLYAVSKYY. (arXiv:1008.1949v1 [math.CO] 11 Aug 2010)

    Abstract. We develop a combinatorial model of networks on orientable surfaces, and study weight and homology generating functions of paths and cycles in these networks. Network transformations preserving these generating functions are investigated. We describe in terms of our model the crystal structure and R-matrix of the affine geometric crystal of products of symmetric and dual symmetric powers of type A. Local realizations of the R-matrix and crystal actions are used to construct a double affine geometric crystal on a torus, generalizing the commutation result of Kajiwara-Noumi-Yamada [KNY] and an observation of Berenstein-Kazhdan [BK07b]. We show that our model on a cylinder gives a decomposition and parametrization of the totally nonnegative part of the rational unipotent loop group of GLn.

    Contents: 1. Introduction; 1.1. Networks on orientable surfaces; 1.2. Factorizations and parametrizations of totally positive matrices; 1.3. Crystals and networks; 1.4. Measurements and moves; 1.5. Symmetric functions and loop symmetric functions; 1.6. Comparison of examples. Part 1. Boundary measurements on oriented surfaces. 2. Networks and measurements; 2.1. Oriented networks on surfaces; 2.2. Polygon representation of oriented surfaces; 2.3. Highway paths and cycles; 2.4. Boundary and cycle measurements; 2.5. Torus with one vertex; 2.6. Flows and intersection products in homology; 2.7. Polynomiality; 2.8. Rationality; 2.9.
  • 3.3 Diagonalization
    3.3 Diagonalization

    Let $A = \begin{pmatrix} -4 & 1 \\ 4 & -4 \end{pmatrix}$. Then $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ -2 \end{pmatrix}$ are eigenvectors of A, with corresponding eigenvalues −2 and −6 respectively (check). This means

    $$\begin{pmatrix} -4 & 1 \\ 4 & -4 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = -2\begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad \begin{pmatrix} -4 & 1 \\ 4 & -4 \end{pmatrix}\begin{pmatrix} 1 \\ -2 \end{pmatrix} = -6\begin{pmatrix} 1 \\ -2 \end{pmatrix}.$$

    Thus

    $$\begin{pmatrix} -4 & 1 \\ 4 & -4 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix} = \begin{pmatrix} -2\begin{pmatrix} 1 \\ 2 \end{pmatrix} & -6\begin{pmatrix} 1 \\ -2 \end{pmatrix} \end{pmatrix} = \begin{pmatrix} -2 & -6 \\ -4 & 12 \end{pmatrix}.$$

    We have

    $$\begin{pmatrix} -4 & 1 \\ 4 & -4 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} -2 & 0 \\ 0 & -6 \end{pmatrix}$$

    (think about this). Thus $AE = ED$ where $E = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}$ has the eigenvectors of A as columns and $D = \begin{pmatrix} -2 & 0 \\ 0 & -6 \end{pmatrix}$ is the diagonal matrix having the eigenvalues of A on the main diagonal, in the order in which their corresponding eigenvectors appear as columns of E.

    Definition 3.3.1. An n × n matrix A is diagonal if all of its non-zero entries are located on its main diagonal, i.e. if $A_{ij} = 0$ whenever $i \neq j$. Diagonal matrices are particularly easy to handle computationally. If A and B are diagonal n × n matrices then the product AB is obtained from A and B by simply multiplying entries in corresponding positions along the diagonal, and AB = BA.
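The worked example can be verified in a few lines (a sketch we add; it is not part of the original notes):

```python
import numpy as np

# Numerical check of the worked example: AE = ED with eigenvectors as columns.
A = np.array([[-4., 1.],
              [4., -4.]])
E = np.array([[1., 1.],
              [2., -2.]])          # eigenvectors of A as columns
D = np.diag([-2., -6.])            # matching eigenvalues on the diagonal

print(np.allclose(A @ E, E @ D))   # True
# Since E is invertible, A = E D E^{-1}, i.e. A is diagonalizable.
print(np.allclose(A, E @ D @ np.linalg.inv(E)))  # True
```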
  • A Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization: Convergence Analysis and Optimality
    Electrical and Computer Engineering Publications, Iowa State University, 6-15-2017. A Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization: Convergence Analysis and Optimality. Songtao Lu (Iowa State University, [email protected]), Mingyi Hong (Iowa State University), Zhengdao Wang (Iowa State University, [email protected]).

    Abstract. Symmetric nonnegative matrix factorization (SymNMF) has important applications in data analytics problems such as document clustering, community detection, and image segmentation. In this paper, we propose a novel nonconvex variable splitting method for solving SymNMF. The proposed algorithm is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points of the nonconvex SymNMF problem. Furthermore, it achieves a global sublinear convergence rate. We also show that the algorithm can be efficiently implemented in parallel. Further, sufficient conditions are provided that guarantee the global and local optimality of the obtained solutions.
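The paper's splitting method itself is not reproduced here. As a hedged illustration of the SymNMF objective only, the sketch below fits $A \approx XX^T$ with $X \ge 0$ by plain projected gradient descent; this is our baseline, not the authors' algorithm, and the step size and iteration count are arbitrary choices.

```python
import numpy as np

def symnmf_pg(A, r, steps=1000, lr=0.01, seed=0):
    """Approximate a symmetric nonnegative A by X X^T with X >= 0,
    via projected gradient descent on ||A - X X^T||_F^2."""
    rng = np.random.default_rng(seed)
    X = rng.random((A.shape[0], r))
    for _ in range(steps):
        grad = 4 * (X @ X.T - A) @ X          # gradient for symmetric A
        X = np.maximum(0.0, X - lr * grad)    # project onto the nonnegative orthant
    return X

# Tiny demo: a symmetric nonnegative matrix with near rank-2 block structure.
B = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
X = symnmf_pg(B, r=2)
print(np.linalg.norm(B - X @ X.T))   # Frobenius residual of the fit
```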
  • Minimal Rank Decoupling of Full-Lattice CMV Operators with Scalar- and Matrix-Valued Verblunsky Coefficients
    MINIMAL RANK DECOUPLING OF FULL-LATTICE CMV OPERATORS WITH SCALAR- AND MATRIX-VALUED VERBLUNSKY COEFFICIENTS. STEPHEN CLARK, FRITZ GESZTESY, AND MAXIM ZINCHENKO.

    Abstract. Relations between half- and full-lattice CMV operators with scalar- and matrix-valued Verblunsky coefficients are investigated. In particular, the decoupling of full-lattice CMV operators into a direct sum of two half-lattice CMV operators by a perturbation of minimal rank is studied. Contrary to the Jacobi case, decoupling a full-lattice CMV matrix by changing one of the Verblunsky coefficients results in a perturbation of twice the minimal rank. The explicit form for the minimal rank perturbation and the resulting two half-lattice CMV matrices are obtained. In addition, formulas relating the Weyl–Titchmarsh m-functions (resp., matrices) associated with the involved CMV operators and their Green's functions (resp., matrices) are derived.

    1. Introduction. CMV operators are a special class of unitary semi-infinite or doubly-infinite five-diagonal matrices which received enormous attention in recent years. We refer to (2.8) and (3.18) for the explicit form of doubly infinite CMV operators on Z in the case of scalar, respectively, matrix-valued Verblunsky coefficients. For the corresponding half-lattice CMV operators we refer to (2.16) and (3.26). The actual history of CMV operators (with scalar Verblunsky coefficients) is somewhat intriguing: The corresponding unitary semi-infinite five-diagonal matrices were first introduced in 1991 by Bunse–Gerstner and Elsner [15], and subsequently discussed in detail by Watkins [82] in 1993 (cf. the discussion in Simon [73]). They were subsequently rediscovered by Cantero, Moral, and Velázquez (CMV) in [17].
  • Random Matrices, Magic Squares and Matching Polynomials
    Random Matrices, Magic Squares and Matching Polynomials. Persi Diaconis (Departments of Mathematics and Statistics, Stanford University, Stanford, CA 94305, [email protected]) and Alex Gamburd∗ (Department of Mathematics, Stanford University, Stanford, CA 94305, [email protected]). Submitted: Jul 22, 2003; Accepted: Dec 23, 2003; Published: Jun 3, 2004. MR Subject Classifications: 05A15, 05E05, 05E10, 05E35, 11M06, 15A52, 60B11, 60B15. Dedicated to Richard Stanley on the occasion of his 60th birthday.

    Abstract. Characteristic polynomials of random unitary matrices have been intensively studied in recent years: by number theorists in connection with the Riemann zeta-function, and by theoretical physicists in connection with Quantum Chaos. In particular, Haake and collaborators have computed the variance of the coefficients of these polynomials and raised the question of computing the higher moments. The answer turns out to be intimately related to counting integer stochastic matrices (magic squares). Similar results are obtained for the moments of secular coefficients of random matrices from orthogonal and symplectic groups. Combinatorial meaning of the moments of the secular coefficients of GUE matrices is also investigated and the connection with matching polynomials is discussed.

    1 Introduction. Two noteworthy developments took place recently in Random Matrix Theory. One is the discovery and exploitation of the connections between eigenvalue statistics and the longest-increasing subsequence problem in enumerative combinatorics [1, 4, 5, 47, 59]; another is the outburst of interest in characteristic polynomials of Random Matrices and associated global statistics, particularly in connection with the moments of the Riemann zeta function and other L-functions [41, 14, 35, 36, 15, 16]. The purpose of this paper is to point out some connections between the distribution of the coefficients of characteristic polynomials of random matrices and some classical problems in enumerative combinatorics.
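For readers who want to experiment, here is a small Monte Carlo sketch (ours, not the paper's computation): it samples Haar-distributed unitary matrices via the standard Ginibre-plus-QR construction and estimates the variances of the secular coefficients, i.e. the coefficients of $\det(xI - U)$.

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample from U(n) with Haar measure: QR of a complex Ginibre matrix,
    with the diagonal phases of R fixed to make the distribution exact."""
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))     # rescale each column by a unit phase

rng = np.random.default_rng(1)
n, trials = 6, 2000
# Secular coefficients: det(xI - U) = x^n + c_1 x^{n-1} + ... + c_n.
coeffs = np.array([np.poly(np.linalg.eigvals(haar_unitary(n, rng)))[1:]
                   for _ in range(trials)])
print(np.var(coeffs, axis=0))      # empirical variances of c_1, ..., c_n
```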
  • Effective p-Adic Bounds for Solutions of Homogeneous Linear Differential Equations
    TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 259, Number 2, June 1980. EFFECTIVE p-ADIC BOUNDS FOR SOLUTIONS OF HOMOGENEOUS LINEAR DIFFERENTIAL EQUATIONS. BY B. DWORK AND P. ROBBA. Dedicated to K. Iwasawa.

    Abstract. We consider a finite set of power series in one variable with coefficients in a field of characteristic zero having a chosen nonarchimedean valuation. We study the growth of these series near the boundary of their common "open" disk of convergence. Our results are definitive when the wronskian is bounded. The main application involves local solutions of ordinary linear differential equations with analytic coefficients. The effective determination of the common radius of convergence remains open (and is not treated here).

    Let K be an algebraically closed field of characteristic zero complete under a nonarchimedean valuation with residue class field of characteristic p. Let $D = d/dx$ and let

    (1)  $L = D^n + C_{n-1}D^{n-1} + \cdots + C_0$

    be a linear differential operator with coefficients meromorphic in some neighborhood of the origin. Let

    (2)  $u = a_0 + a_1 x + \cdots$

    be a power series solution of L which converges in an open (p-adic) disk of radius r. Our object is to describe the asymptotic behavior of $|a_s| r^s$ as $s \to \infty$. In a series of articles we have shown that, subject to certain restrictions, we may conclude that

    (3)  $|a_s| r^s = O(s^{n-1})$.

    The object of the present article is to make this estimate effective. At the same time we greatly simplify, and generalize, our best previous results [12] for the noneffective form. Our previous work was based on the notion of a generic disk together with a condition for reducibility of differential operators with unbounded solutions [4, Theorem 4].
  • Diagonalizable Matrix - Wikipedia, the Free Encyclopedia
    Diagonalizable matrix - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Matrix_diagonalization (Redirected from Matrix diagonalization)

    In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that $P^{-1}AP$ is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.[1] A square matrix which is not diagonalizable is called defective. Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known, and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power. Geometrically, a diagonalizable matrix is an inhomogeneous dilation (or anisotropic scaling): it scales the space, as does a homogeneous dilation, but by a different factor in each direction, determined by the scale factors on each axis (diagonal entries).

    Contents: 1 Characterisation; 2 Diagonalization; 3 Simultaneous diagonalization; 4 Examples (4.1 Diagonalizable matrices; 4.2 Matrices that are not diagonalizable; 4.3 How to diagonalize a matrix; 4.3.1 Alternative Method); 5 An application (5.1 Particular application); 6 Quantum mechanical application; 7 See also; 8 Notes; 9 References; 10 External links.

    Characterisation. The fundamental fact about diagonalizable maps and matrices is expressed by the following: An n-by-n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to n, which is the case if and only if there exists a basis of $F^n$ consisting of eigenvectors of A.
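A minimal numpy sketch of this recipe (the matrix A below is our own example), including the cheap-powers payoff mentioned above:

```python
import numpy as np

# Diagonalize A, then use D to compute a matrix power cheaply.
A = np.array([[4., 1.],
              [2., 3.]])
evals, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(evals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))       # A = P D P^{-1}

# A^5 = P D^5 P^{-1}: only the diagonal entries get raised to the power.
A5 = P @ np.diag(evals**5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```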
  • Eigenvalues and Eigenvectors
    EIGENVALUES AND EIGENVECTORS. 1. Diagonalizable linear transformations and matrices. Recall, a matrix, D, is diagonal if it is square and the only non-zero entries are on the diagonal. This is equivalent to $D\vec{e}_i = \lambda_i \vec{e}_i$, where the $\vec{e}_i$ are the standard basis vectors and the $\lambda_i$ are the diagonal entries. A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ is diagonalizable if there is a basis B of $\mathbb{R}^n$ so that $[T]_B$ is diagonal. This means $[T]$ is similar to the diagonal matrix $[T]_B$. Similarly, a matrix $A \in \mathbb{R}^{n \times n}$ is diagonalizable if it is similar to some diagonal matrix D. To diagonalize a linear transformation is to find a basis B so that $[T]_B$ is diagonal. To diagonalize a square matrix is to find an invertible S so that $S^{-1}AS = D$ is diagonal. Fix a matrix $A \in \mathbb{R}^{n \times n}$. We say a vector $\vec{v} \in \mathbb{R}^n$ is an eigenvector if (1) $\vec{v} \neq 0$, and (2) $A\vec{v} = \lambda\vec{v}$ for some scalar $\lambda \in \mathbb{R}$. The scalar $\lambda$ is the eigenvalue associated to $\vec{v}$, or just an eigenvalue of A. Geometrically, $A\vec{v}$ is parallel to $\vec{v}$, and the eigenvalue $\lambda$ counts the stretching factor. Another way to think about this is that the line $L := \operatorname{span}(\vec{v})$ is left invariant by multiplication by A. An eigenbasis of A is a basis $B = (\vec{v}_1, \ldots, \vec{v}_n)$ of $\mathbb{R}^n$ so that each $\vec{v}_i$ is an eigenvector of A. Theorem 1.1. The matrix A is diagonalizable if and only if there is an eigenbasis of A.
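Theorem 1.1 suggests a simple numerical test (our sketch; the rank tolerance is a floating-point heuristic, not part of the theorem): A is diagonalizable exactly when its eigenvectors span $\mathbb{R}^n$, i.e. the eigenvector matrix has full rank.

```python
import numpy as np

def has_eigenbasis(A, tol=1e-10):
    """A is diagonalizable iff its eigenvectors form a basis (full-rank V)."""
    _, V = np.linalg.eig(A)                 # eigenvectors as columns of V
    return np.linalg.matrix_rank(V, tol) == A.shape[0]

print(has_eigenbasis(np.array([[2., 0.], [0., 3.]])))  # True: already diagonal
print(has_eigenbasis(np.array([[1., 1.], [0., 1.]])))  # False: defective Jordan block
```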
  • Linear Algebra and Matrix Theory
    Linear Algebra and Matrix Theory. Chapter 1 - Linear Systems, Matrices and Determinants. This is a very brief outline of some basic definitions and theorems of linear algebra. We will assume that you know elementary facts such as how to add two matrices, how to multiply a matrix by a number, how to multiply two matrices, what an identity matrix is, and what a solution of a linear system of equations is. Hardly any of the theorems will be proved. More complete treatments may be found in the following references.

    1. References. (1) S. Friedberg, A. Insel and L. Spence, Linear Algebra, Prentice-Hall. (2) M. Golubitsky and M. Dellnitz, Linear Algebra and Differential Equations Using Matlab, Brooks-Cole. (3) K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall. (4) P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press.

    2. Linear Systems of Equations and Gaussian Elimination. The solutions, if any, of a linear system of equations

    (2.1)  $a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$; $a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$; ...; $a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$

    may be found by Gaussian elimination. The permitted steps are as follows. (1) Both sides of any equation may be multiplied by the same nonzero constant. (2) Any two equations may be interchanged. (3) Any multiple of one equation may be added to another equation. Instead of working with the symbols for the variables (the $x_i$), it is easier to place the coefficients (the $a_{ij}$) and the forcing terms (the $b_i$) in a rectangular array called the augmented matrix of the system. a11 a12 .
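A compact implementation of these three row operations on the augmented matrix (our sketch; it adds partial pivoting, which the outline does not discuss, and assumes A is invertible):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by row-reducing the augmented matrix [A | b]."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]      # step (2): interchange rows
        M[col] /= M[col, col]                  # step (1): scale by a nonzero constant
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col] # step (3): add a multiple of a row
    return M[:, -1]

A = np.array([[2., 1.], [1., 3.]])
b = np.array([3., 5.])
print(gaussian_elimination(A, b))   # [0.8 1.4]
print(np.linalg.solve(A, b))        # same answer
```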
  • Tropical Totally Positive Matrices
    TROPICAL TOTALLY POSITIVE MATRICES. STÉPHANE GAUBERT AND ADI NIV.

    Abstract. We investigate the tropical analogues of totally positive and totally nonnegative matrices. These arise when considering the images by the nonarchimedean valuation of the corresponding classes of matrices over a real nonarchimedean valued field, like the field of real Puiseux series. We show that the nonarchimedean valuation sends the totally positive matrices precisely to the Monge matrices. This leads to explicit polyhedral representations of the tropical analogues of totally positive and totally nonnegative matrices. We also show that tropical totally nonnegative matrices with a finite permanent can be factorized in terms of elementary matrices. We finally determine the eigenvalues of tropical totally nonnegative matrices, and relate them with the eigenvalues of totally nonnegative matrices over nonarchimedean fields. Keywords: Total positivity; total nonnegativity; tropical geometry; compound matrix; permanent; Monge matrices; Grassmannian; Plücker coordinates. AMSC: 15A15 (Primary), 15A09, 15A18, 15A24, 15A29, 15A75, 15A80, 15B99.

    1. Introduction. 1.1. Motivation and background. A real matrix is said to be totally positive (resp. totally nonnegative) if all its minors are positive (resp. nonnegative). These matrices arise in several classical fields, such as oscillatory matrices (see e.g. [And87, §4]), or approximation theory (see e.g. [GM96]); they have appeared more recently in the theory of canonical bases for quantum groups [BFZ96]. We refer the reader to the monograph of Fallat and Johnson [FJ11] or to the survey of Fomin and Zelevinsky [FZ00] for more information. Totally positive/nonnegative matrices can be defined over any real closed field, and in particular, over nonarchimedean fields, like the field of Puiseux series with real coefficients.
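As a pointer to what "Monge" means here (our sketch; the paper's min-plus and sign conventions may differ from the one used below), a check of the submodular Monge condition, which it suffices to verify on adjacent 2-by-2 blocks:

```python
import numpy as np

def is_monge(A):
    """Submodular Monge condition on all adjacent 2x2 blocks:
    a[i,j] + a[i+1,j+1] <= a[i,j+1] + a[i+1,j]."""
    return bool(np.all(A[:-1, :-1] + A[1:, 1:] <= A[:-1, 1:] + A[1:, :-1]))

# Example: entrywise valuations of a totally positive Puiseux matrix would
# land in this class (up to the paper's ordering conventions).
A = np.array([[0., 1., 3.],
              [1., 1., 2.],
              [3., 2., 2.]])
print(is_monge(A))   # True
```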