Pattern-Avoiding (0,1)-Matrices


Richard A. Brualdi
Department of Mathematics, University of Wisconsin, Madison, WI 53706, USA
[email protected]

Lei Cao
Department of Mathematics, Nova Southeastern University, Ft. Lauderdale, FL 33314, USA
[email protected]

May 6, 2020

NSUWorks Mathematics Faculty Articles, Department of Mathematics, Nova Southeastern University. Available at: https://nsuworks.nova.edu/math_facarticles (Mathematics Commons).
arXiv:2005.00379v2 [math.CO] 5 May 2020

Abstract

We investigate pattern-avoiding (0, 1)-matrices as generalizations of pattern-avoiding permutations. Our emphasis is on 123-avoiding and 321-avoiding patterns, for which we obtain exact results on the maximum number of 1's such matrices can have. We also give algorithms which, when carried out in all possible ways, construct all of the pattern-avoiding matrices of these two types.

Key words and phrases: permutation, pattern-avoiding, 123-avoiding, 312-avoiding, (0, 1)-matrices.
Mathematics Subject Classifications: 05A05, 05A15, 15B34.

1 Introduction

Let n ≥ 2 be a positive integer, and let S_n be the set of permutations of {1, 2, ..., n}. Let k be a positive integer with k ≤ n, and let σ = (p_1, p_2, ..., p_k) be a permutation in S_k. A permutation π = (i_1, i_2, ..., i_n) ∈ S_n contains the pattern σ provided there exist 1 ≤ t_1 < t_2 < ··· < t_k ≤ n such that $i_{t_r} < i_{t_s}$ if and only if $p_r < p_s$ (1 ≤ r < s ≤ k); otherwise, π is called σ-avoiding; we also say that π avoids σ. Patterns in permutations and, in particular, pattern avoidance, have been extensively studied. If k = 2 and σ = (1, 2), then σ-avoiding means no ascents, and the anti-identity permutation (n, ..., 2, 1) is the only σ-avoiding permutation in S_n. If σ = (2, 1), then σ-avoiding means no descents, and only the identity permutation (1, 2, ..., n) is σ-avoiding. There is a chapter on pattern avoidance in [1].
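The containment definition above can be checked directly by brute force over all length-k subsequences. The following Python sketch is our own illustration, not code from the paper; the names `contains_pattern` and `avoids` are ours:

```python
from itertools import combinations

def contains_pattern(pi, sigma):
    """True if some subsequence of pi is order-isomorphic to sigma."""
    k = len(sigma)
    for idx in combinations(range(len(pi)), k):
        # check i_{t_r} < i_{t_s}  iff  p_r < p_s, for all r < s
        if all((pi[idx[r]] < pi[idx[s]]) == (sigma[r] < sigma[s])
               for r in range(k) for s in range(r + 1, k)):
            return True
    return False

def avoids(pi, sigma):
    return not contains_pattern(pi, sigma)
```

For example, `avoids((5, 4, 3, 2, 1), (1, 2))` is True, matching the remark that the anti-identity permutation is the only 12-avoiding permutation in S_n.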
Pattern avoidance in permutations is already interesting when k = 3. Using reversal (i_j goes to position n + 1 − j) and complementation (i_j is replaced with n + 1 − i_j), there are only two permutations that are inequivalent with respect to pattern avoidance: (1, 2, 3) (which is equivalent to (3, 2, 1)), and (3, 1, 2) (which is equivalent to (2, 1, 3), (1, 3, 2), and (2, 3, 1)). For each of the six permutations σ in S_3, the number of σ-avoiding permutations in S_n is the Catalan number

$$C_n = \frac{1}{n+1}\binom{2n}{n}.$$

It is sometimes useful to think of 1 as S(mall), 2 as M(edium), and 3 as L(arge). Then SML and LSM are the only two patterns of size 3 that need be considered.

Again let σ = (p_1, p_2, ..., p_k) be a permutation in S_k. Since permutations π = (i_1, i_2, ..., i_n) ∈ S_n and n × n permutation matrices are in a bijective correspondence,

$$(i_1, i_2, \ldots, i_n) \leftrightarrow P = [p_{ij}], \text{ where } p_{k, i_k} = 1 \ (k = 1, 2, \ldots, n) \text{ and } p_{ij} = 0 \text{ otherwise},$$

σ-avoiding permutations can be studied in terms of permutation matrices. A σ-avoiding permutation matrix is a permutation matrix corresponding to a σ-avoiding permutation. Thus an n × n permutation matrix P is (1, 2, 3)-avoiding provided it does not contain as a submatrix the 3 × 3 identity matrix

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}; \quad (1)$$

P is (3, 1, 2)-avoiding provided it does not contain as a submatrix the 3 × 3 matrix

$$\Pi_3 = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}. \quad (2)$$

From now on we usually use the briefer notation j_1 j_2 ··· j_k for a permutation (j_1, j_2, ..., j_k) ∈ S_k.

Pattern avoidance, when expressed in terms of permutation matrices, leads to some interesting questions. The n × n matrix J_n of all 1's can be decomposed into both 123-avoiding permutation matrices and 312-avoiding permutation matrices. In fact, J_n can be decomposed into permutation matrices that are both 123-avoiding and 312-avoiding.
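The Catalan-number claim can be sanity-checked by brute force for small n. This Python sketch is our own illustration (not the paper's code); it counts σ-avoiding permutations for every pattern σ in S_3 and compares against C_n:

```python
from itertools import combinations, permutations
from math import comb

def avoids(pi, sigma):
    """True if no subsequence of pi is order-isomorphic to sigma."""
    k = len(sigma)
    return not any(
        all((pi[idx[r]] < pi[idx[s]]) == (sigma[r] < sigma[s])
            for r in range(k) for s in range(r + 1, k))
        for idx in combinations(range(len(pi)), k))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# every one of the six patterns in S_3 yields the same count C_n
for sigma in permutations((1, 2, 3)):
    for n in range(1, 7):
        count = sum(avoids(pi, sigma) for pi in permutations(range(1, n + 1)))
        assert count == catalan(n)
```

Running this confirms, for instance, that each pattern in S_3 is avoided by exactly C_4 = 14 permutations in S_4.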
This can be done by considering the n × n permutation matrices corresponding to the n permutations

$$(n, n-1, n-2, \ldots, 2, 1),\ (n-1, n-2, \ldots, 2, 1, n),\ (n-2, \ldots, 2, 1, n, n-1),\ \ldots,\ (1, n, n-1, \ldots, 2),$$

each of which consists of a decreasing sequence followed by another decreasing sequence (empty in one case). For example,

$$J_4 = \begin{bmatrix} & & & 1 \\ & & 1 & \\ & 1 & & \\ 1 & & & \end{bmatrix} + \begin{bmatrix} & & 1 & \\ & 1 & & \\ 1 & & & \\ & & & 1 \end{bmatrix} + \begin{bmatrix} & 1 & & \\ 1 & & & \\ & & & 1 \\ & & 1 & \end{bmatrix} + \begin{bmatrix} 1 & & & \\ & & & 1 \\ & & 1 & \\ & 1 & & \end{bmatrix}. \quad (3)$$

Recall that a permutation i_1 i_2 ··· i_n is a Grassmannian provided that it contains at most one descent (so at most one consecutive 21-pattern), and is a reverse-Grassmannian provided that it contains at most one ascent (so at most one consecutive 12-pattern). The above decomposition of J_n, illustrated in (3), is a decomposition of J_n into reverse-Grassmannians. A 123-avoiding permutation need not be a Grassmannian nor a reverse-Grassmannian; for instance, 563412 is a 123-avoiding permutation, but it is not a Grassmannian, as it has two descents, nor a reverse-Grassmannian, as it has two ascents.

The notion of pattern-avoiding permutations (so pattern-avoiding permutation matrices) has been extended to general (0, 1)-matrices. Given a k × k permutation matrix Q, one seeks the largest number of 1's in an n × n (0, 1)-matrix A such that A does not contain a k × k submatrix Q′ with Q ≤ Q′ (entrywise); that is, Q′ has 1's wherever Q has 1's but is allowed to have 1's where Q has 0's. Put more simply, the matrix A does not contain the pattern prescribed by Q. The emphasis here seems to have been on (asymptotic) inequalities, in particular, the Füredi–Hajnal conjecture [3]: Let P be any permutation matrix. Then there is a constant c_P such that the number of 1's in an n × n (0, 1)-matrix A that avoids P is bounded by c_P n. Marcus and Tardos [5] solved this conjecture by proving: if P is k × k, then the number of 1's in A is bounded by $2k^4 \binom{k^2}{k} n$. (See also pages 159–164 of [1].)
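The decomposition just described is easy to verify computationally. The Python sketch below is our own (not the authors'); it uses the row-form convention p_{k, i_k} = 1 from the introduction and checks that the n rotated decreasing sequences sum to J_n:

```python
def perm_matrix(pi):
    # row r has its 1 in column pi[r] (columns 0-indexed here)
    n = len(pi)
    return [[1 if pi[r] == c + 1 else 0 for c in range(n)] for r in range(n)]

def decompose_Jn(n):
    dec = list(range(n, 0, -1))                    # (n, n-1, ..., 1)
    # its n rotations: (n,...,1), (n-1,...,1,n), ..., (1, n, ..., 2)
    return [perm_matrix(dec[j:] + dec[:j]) for j in range(n)]

n = 4
mats = decompose_Jn(n)
total = [[sum(P[i][j] for P in mats) for j in range(n)] for i in range(n)]
assert total == [[1] * n for _ in range(n)]        # the sum is J_4, as in (3)
```

Each rotation is a decreasing sequence followed by a decreasing sequence, so this reproduces the reverse-Grassmannian decomposition for any n, not just n = 4.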
There have been considerable investigations on avoiding patterns of various types in (0, 1)-matrices, again mostly in terms of asymptotic inequalities. We mention, for instance, the papers [3, 5, 6, 7, 8] and the thesis [4]. Our motivation comes from pattern avoidance in permutation matrices and the possibility of obtaining exact results for specific permutation matrices Q, with equality characterizations. We focus on the two inequivalent patterns of length 3, namely 123 and 312. In the case of 123 we can more generally handle the pattern 12···k for arbitrary k. Also, there is no reason to assume that the matrices A are square.

In the next section we consider 12···k-avoiding m × n (0, 1)-matrices. We determine the maximum number c_{m,n}(12···k) of 1's such a matrix can contain, and characterize the matrices achieving this maximum. Surprisingly, it turns out that one can use a 'greedy' algorithm that always results in a 12···k-avoiding m × n (0, 1)-matrix with this maximum number c_{m,n}(12···k) of 1's, and when carried out in all possible ways it produces all such matrices. In the following section we obtain similar results for 312-avoiding matrices and the corresponding maximum number c_{m,n}(312) of 1's. We had hoped to settle the case of k1···(k − 1) for arbitrary k, but its solution has eluded us thus far, although we are confident of the correct answer. In the final section we consider some questions for future investigations.

2 12···k-avoiding (0, 1)-Matrices

Let A be an m × n (0, 1)-matrix. The matrix A is 12···k-avoiding provided it does not contain a k × k submatrix of the form

$$\begin{bmatrix} 1 & * & \cdots & * & * \\ * & 1 & \cdots & * & * \\ \vdots & & \ddots & & \vdots \\ * & * & \cdots & 1 & * \\ * & * & \cdots & * & 1 \end{bmatrix}, \quad (4)$$

where * denotes either a 0 or a 1. In terms of the description in the previous paragraph, Q = I_k, the k × k identity matrix. We start with the very simple case of k = 2, which contains many of the ideas used for general k. Let A = [a_{ij}] be an m × n (0, 1)-matrix with m, n ≥ 2.
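Because the * entries in (4) are unconstrained, A contains the pattern 12···k exactly when it has k 1's whose row indices and column indices are both strictly increasing. So 12···k-avoidance can be tested by computing the longest such down-right chain of 1's. The dynamic-programming sketch below is our own illustration, not an algorithm from the paper:

```python
def avoids_12k(A, k):
    """True if the (0,1)-matrix A avoids the pattern 12...k, i.e. it has
    no k ones strictly increasing in both row and column index."""
    m, n = len(A), len(A[0])
    # best[j] = longest chain of 1's ending in column j, using rows seen so far
    best = [0] * n
    for i in range(m):
        row_best = [0] * n
        for j in range(n):
            if A[i][j] == 1:
                # extend the best chain ending strictly up-left of (i, j)
                prev = max([best[jj] for jj in range(j)], default=0)
                row_best[j] = prev + 1
        for j in range(n):
            best[j] = max(best[j], row_best[j])
    return max(best, default=0) < k
```

For instance, `avoids_12k(I3, 3)` is False for the 3 × 3 identity matrix, in agreement with (1), while any matrix whose 1's all lie in one row and one column is 12-avoiding.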
Then A is 12-avoiding provided that A does not have a 2 × 2 submatrix of the form

$$\begin{bmatrix} 1 & * \\ * & 1 \end{bmatrix},$$

where again * denotes a 0 or a 1. A 12-avoiding matrix can be regarded as a generalization of a non-increasing sequence (permutation) to an arbitrary (0, 1)-matrix. The matrix A has m + n − 1 (left-to-right) diagonals (sometimes called Toeplitz diagonals, as the entries on them are constant in Toeplitz matrices), starting from any entry in its first row or first column. If A is 12-avoiding, each of these diagonals can contain at most one 1, and hence A can contain at most m + n − 1 1's. We can attain m + n − 1 1's with the matrix A whose first row and first column contain all 1's and whose other entries are all 0. We call such an m × n (0, 1)-matrix a trivial 12-avoiding matrix. Such a matrix contains an (m − 1) × (n − 1) zero submatrix O_{m−1,n−1} in its lower right corner. An m × n (0, 1)-matrix A = [a_{ij}] has a complete right-to-left zigzag path, abbreviated to complete R-L zigzag path, provided (i) it has m + n − 1 1's, (ii) a_{1n} = a_{m1} = 1, and (iii) every 1 of A except a_{m1} has a 1 immediately to its left or immediately below it, but not both.
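The counting argument above is easy to check directly: the trivial matrix has exactly m + n − 1 1's and contains no forbidden 2 × 2 submatrix. A small Python sketch (our own code, with hypothetical names):

```python
def trivial_12_avoiding(m, n):
    """First row and first column all 1's, zeros elsewhere."""
    return [[1 if i == 0 or j == 0 else 0 for j in range(n)] for i in range(m)]

def is_12_avoiding(A):
    # forbidden: two 1's with one strictly below and to the right of the other
    ones = [(i, j) for i, row in enumerate(A) for j, v in enumerate(row) if v]
    return not any(i1 < i2 and j1 < j2
                   for (i1, j1) in ones for (i2, j2) in ones)

m, n = 4, 5
A = trivial_12_avoiding(m, n)
assert is_12_avoiding(A)
assert sum(map(sum, A)) == m + n - 1   # attains the diagonal upper bound
```

Note that the trivial matrix is only one maximizer; the zigzag-path condition that follows is what characterizes the full family of extremal matrices.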