Generalized Alternating Sign Matrices and Signed Permutation Matrices


J. Korean Math. Soc. 58 (2021), No. 4, pp. 921–948
https://doi.org/10.4134/JKMS.j200318
pISSN: 0304-9914 / eISSN: 2234-3008

GENERALIZED ALTERNATING SIGN MATRICES AND SIGNED PERMUTATION MATRICES

Richard A. Brualdi and Hwa Kyung Kim

Abstract. We continue the investigations in [6], extending the Bruhat order on $n \times n$ alternating sign matrices to our more general setting. We show that the resulting partially ordered set is a graded lattice with a well-defined rank function. Many illustrative examples are given.

1. Introduction

An $n \times n$ alternating sign matrix (abbreviated to ASM) is a $(0, \pm 1)$-matrix such that, ignoring 0's, the $+1$'s and $-1$'s in each row and each column alternate, beginning and ending with $+1$. The origins and many properties of ASMs can be found in [1, 3, 9–11].

In [6] a generalization of ASMs was defined as follows. Let $u = (u_1, u_2, \ldots, u_n)$, $u' = (u'_1, u'_2, \ldots, u'_n)$, $v = (v_1, v_2, \ldots, v_m)$, and $v' = (v'_1, v'_2, \ldots, v'_m)$ be vectors of $\pm 1$'s. If $A$ is an $m \times n$ matrix, we define $A(u, u' \mid v, v')$ to be the $(m+2) \times (n+2)$ matrix (1), with rows indexed by $0, 1, \ldots, m+1$ and columns indexed by $0, 1, \ldots, n+1$:

\[
(1)\qquad A(u, u' \mid v, v') =
\begin{bmatrix}
0 & u_1 & u_2 & \cdots & u_{n-1} & u_n & 0 \\
v_1 & & & & & & v'_1 \\
v_2 & & & & & & v'_2 \\
\vdots & & & A & & & \vdots \\
v_{m-1} & & & & & & v'_{m-1} \\
v_m & & & & & & v'_m \\
0 & u'_1 & u'_2 & \cdots & u'_{n-1} & u'_n & 0
\end{bmatrix}.
\]

We then write $A = A(u, u' \mid v, v')[1, 2, \ldots, m \mid 1, 2, \ldots, n]$ to denote that $A$ is the middle $m \times n$ submatrix of $A(u, u' \mid v, v')$, and we also write $A(u, u' \mid v, v') \to A$.

Received June 4, 2020; Revised August 21, 2020; Accepted August 31, 2020.
2010 Mathematics Subject Classification. 05B20, 15B48.
Key words and phrases. Permutation matrix, alternating sign matrix, Bruhat order.
The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1D1A1B03930279).
© 2021 Korean Mathematical Society
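The bordering construction in (1) is mechanical, and a short sketch may help fix the indexing. The following plain-Python fragment is an editorial illustration, not from the paper; the function name `border` and the list-of-lists matrix representation are assumptions of this sketch. It builds the $(m+2) \times (n+2)$ matrix from $A$, $u$, $u'$, $v$, $v'$, using the data of Example 1.1 below.

```python
def border(A, u, u_p, v, v_p):
    """Build the bordered matrix A(u, u' | v, v') of display (1):
    u across the top, u' across the bottom, v down the left side,
    v' down the right side, and zeros in the four corners."""
    m, n = len(A), len(A[0])
    assert len(u) == len(u_p) == n and len(v) == len(v_p) == m
    top = [0, *u, 0]
    bottom = [0, *u_p, 0]
    middle = [[v[i], *A[i], v_p[i]] for i in range(m)]
    return [top, *middle, bottom]

# Data of Example 1.1: a 2 x 3 matrix A and its border vectors.
A = [[-1, 1, -1], [1, -1, 1]]
B = border(A, (1, -1, 1), (-1, 1, -1), (1, -1), (1, -1))
```

Reading off `B` row by row reproduces the $4 \times 5$ bordered matrix displayed in Example 1.1.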
A $(u, u' \mid v, v')$-ASM is an $m \times n$ $(0, \pm 1)$-matrix $A$ as in (1) such that, ignoring 0's, the $+1$'s and $-1$'s in rows $0, 1, 2, \ldots, m+1$ and columns $0, 1, 2, \ldots, n+1$ of the $(0, \pm 1)$-matrix $A(u, u' \mid v, v')$ alternate. Thus the condition that the first and last nonzero entry in each row and column of an ASM is $+1$ is relaxed: the first and last nonzero of each row are now determined by $v$ and $v'$, respectively, and the first and last nonzero of each column are determined by $u$ and $u'$, respectively. Note also that, unlike for ASMs, the first and last rows and columns of a $(u, u' \mid v, v')$-ASM depend on $u, u', v, v'$ and so may contain more than one nonzero entry. We denote the set of $(u, u' \mid v, v')$-ASMs by $\mathcal{A}_{m,n}(u, u' \mid v, v')$. If $u = u'$, $v = v'$, and $m = n$, we often abbreviate these notations to $(u, v)$-ASM and $\mathcal{A}_n(u, v)$, respectively. If $v = u$, we also use the abbreviations $(u)$-ASM and $\mathcal{A}_n(u)$. Observe that if $m = n$ and $u = v = (-1, -1, \ldots, -1)$, then a $(u)$-ASM is an ordinary ASM, and $\mathcal{A}_n(u)$ is the usual set $\mathcal{A}_n$ of $n \times n$ ASMs.

Example 1.1. Let $m = 2$ and $n = 3$, and let $u = (1, -1, 1)$, $u' = (-1, 1, -1)$, $v = (1, -1)$ and $v' = (1, -1)$. Then a $(u, u' \mid v, v')$-ASM is given below:
\[
\begin{bmatrix}
0 & 1 & -1 & 1 & 0 \\
1 & -1 & 1 & -1 & 1 \\
-1 & 1 & -1 & 1 & -1 \\
0 & -1 & 1 & -1 & 0
\end{bmatrix}
\to
\begin{bmatrix}
-1 & 1 & -1 \\
1 & -1 & 1
\end{bmatrix}.
\]

In [6], necessary and sufficient conditions are given for the nonemptiness of $\mathcal{A}_{m,n}(u, u' \mid v, v')$. In this paper we primarily consider the case where $m = n$, $u = u'$, and $v = v'$, that is, the set $\mathcal{A}_n(u, v)$. In this case, a 180 degree rotation gives a bijection from $\mathcal{A}_n(u, v)$ to $\mathcal{A}_n(\bar{u}, \bar{v})$, where $\bar{u} = (u_n, \ldots, u_1)$ and $\bar{v} = (v_n, \ldots, v_1)$. We also sometimes restrict our attention to the case where $u = v$, as in ordinary ASMs; in this case, a 90 degree clockwise rotation gives a bijection from $\mathcal{A}_n(u)$ to $\mathcal{A}_n(\bar{u})$. Also, $\mathcal{A}_n(u)$ is invariant under reflection over the main diagonal. The nonemptiness of $\mathcal{A}_n(u, v)$ is easily decided.

Lemma 1.2. Let $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$ be vectors of $\pm 1$'s.
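The defining alternation condition is easy to test mechanically. The sketch below is an editorial illustration (the helper names `alternates` and `is_generalized_asm` are not from the paper): given the bordered matrix $B = A(u, u' \mid v, v')$, it checks that, ignoring 0's, the signs alternate along every row and every column, which is exactly the $(u, u' \mid v, v')$-ASM condition since the borders already fix the first and last signs.

```python
def alternates(seq):
    """True if the nonzero entries of seq alternate in sign."""
    signs = [x for x in seq if x != 0]
    return all(a == -b for a, b in zip(signs, signs[1:]))

def is_generalized_asm(B):
    """Check the (u,u'|v,v')-ASM condition on the bordered matrix B:
    nonzeros must alternate in sign along every row and column of B."""
    return all(alternates(row) for row in B) and \
           all(alternates(col) for col in zip(*B))

# The bordered matrix of Example 1.1 passes the check.
B = [[0, 1, -1, 1, 0],
     [1, -1, 1, -1, 1],
     [-1, 1, -1, 1, -1],
     [0, -1, 1, -1, 0]]
```

For instance, `is_generalized_asm(B)` holds for Example 1.1, while a row such as `[1, 0, 1]` (two nonzeros of the same sign) fails.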
Then $\mathcal{A}_n(u, v) \neq \emptyset$ if and only if $u$ and $v$ contain the same number of $+1$'s.

Proof. Suppose that $u$ contains $p$ $(+1)$'s and $(n-p)$ $(-1)$'s, and that $v$ contains $q$ $(+1)$'s and $(n-q)$ $(-1)$'s. Let $A \in \mathcal{A}_n(u, v)$. Then $p$ columns of $A$ sum to $-1$ and $(n-p)$ columns sum to $+1$. Similarly, $q$ rows of $A$ sum to $-1$ and $(n-q)$ rows sum to $+1$. Hence $-p + (n-p) = -q + (n-q)$, implying that $p = q$.

Conversely, if $p = q$, let $I_{u,v}$ be the $(0, \pm 1)$-matrix whose $p \times p$ submatrix in those columns for which $u_i = +1$ and those rows for which $v_i = +1$ equals $-I_p$, and whose $(n-p) \times (n-p)$ submatrix in those columns for which $u_i = -1$ and those rows for which $v_i = -1$ equals $I_{n-p}$, with all other entries equal to 0. Thus $I_{u,v}$ is in $\mathcal{A}_n(u, v)$. $\square$

Note that in the above proof, we can replace $I_p$ by any $p \times p$ permutation matrix and $I_{n-p}$ by any $(n-p) \times (n-p)$ permutation matrix, and these $p!\,(n-p)!$ signed permutation matrices (definition in the next example) are the only signed permutation matrices in $\mathcal{A}_n(u, v)$.

Example 1.3. Here and elsewhere we usually use `$+$' in place of $+1$ and `$-$' in place of $-1$. Also, when we block the cells of a matrix, we usually do not put in 0's, with an empty cell signifying a 0. Let $u = (-, -, +, +, -)$ and $v = (-, +, -, +, -)$. Then the following matrix $A$ is in $\mathcal{A}_n(u, v)$:
\[
\begin{array}{c|ccccc|c}
0 & - & - & + & + & - & 0 \\ \hline
- & + & & & & & - \\
+ & & & - & & & + \\
- & & + & & & & - \\
+ & & & & - & & + \\
- & & & & & + & - \\ \hline
0 & - & - & + & + & - & 0
\end{array}
\;\to\; A =
\begin{bmatrix}
+ & & & & \\
 & & - & & \\
 & + & & & \\
 & & & - & \\
 & & & & +
\end{bmatrix}.
\]
The matrix $A$ has the special property that there is exactly one nonzero in each row and in each column, as in the proof of Lemma 1.2. Thus $A$ is a signed permutation matrix, that is, a permutation matrix in which some of the 1's have been replaced with $-1$'s. In order that $\mathcal{A}_n(u, v)$ contain a signed permutation matrix it is necessary and sufficient, as in this example, that $u$ and $v$ have the same number $p$ of $+1$'s and so the same number $(n-p)$ of $-1$'s.

Let $u$ and $v$ be vectors of $\pm 1$'s, each containing $p$ $(+1)$'s and $(n-p)$ $(-1)$'s. Let $A \in \mathcal{A}_n(u, v)$.
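The constructive half of Lemma 1.2 can be sketched directly. The fragment below is an editorial illustration (the function name `I_uv`, 0-based indexing, and the list-of-lists representation are assumptions, not the paper's): it places $-I_p$ on the rows with $v_i = +1$ and columns with $u_j = +1$, and $I_{n-p}$ on the remaining rows and columns.

```python
def I_uv(u, v):
    """The signed permutation matrix I_{u,v} from the proof of
    Lemma 1.2: -I_p on the (+1)-rows x (+1)-columns, and I_{n-p}
    on the (-1)-rows x (-1)-columns (0-based indices)."""
    n = len(u)
    pos_rows = [i for i in range(n) if v[i] == 1]
    pos_cols = [j for j in range(n) if u[j] == 1]
    assert len(pos_rows) == len(pos_cols), "Lemma 1.2 requires p = q"
    neg_rows = [i for i in range(n) if v[i] == -1]
    neg_cols = [j for j in range(n) if u[j] == -1]
    A = [[0] * n for _ in range(n)]
    for i, j in zip(pos_rows, pos_cols):   # the -I_p block
        A[i][j] = -1
    for i, j in zip(neg_rows, neg_cols):   # the I_{n-p} block
        A[i][j] = 1
    return A
```

With $u = (-,-,+,+,-)$ and $v = (-,+,-,+,-)$ from Example 1.3, each row with $v_i = +1$ sums to $-1$ and each row with $v_i = -1$ sums to $+1$, matching the counting argument in the proof.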
Then the $+1$'s determine a $p \times p$ submatrix $A_+$ of $A$, and the $-1$'s determine the complementary $(n-p) \times (n-p)$ submatrix $A_-$ of $A$. Let $A_+$ be $-P$, where $P$ is a $p \times p$ permutation matrix, let $A_-$ be the $(n-p) \times (n-p)$ permutation matrix $Q$, and let all other entries of $A$ equal 0. Then $A$ is a signed permutation matrix in $\mathcal{A}_n(u, v)$, and every signed permutation matrix in $\mathcal{A}_n(u, v)$ arises in this way. Let $L_p$ be the $p \times p$ anti-identity $(0,1)$-matrix (1's on the antidiagonal). If $P = L_p$ (so $A_+ = -L_p$) and $Q = I_{n-p}$ (so $A_- = I_{n-p}$), then the resulting signed permutation matrix is denoted by $I_n(u, v)$ and is called the $(u, v)$-identity matrix. If $P = I_p$ (so $A_+ = -I_p$) and $Q = L_{n-p}$ (so $A_- = L_{n-p}$), then the resulting signed permutation matrix is denoted by $L_n(u, v)$ and is called the $(u, v)$-anti-identity matrix.

Example 1.4. Continuing with Example 1.3, we get the $(u, v)$-identity matrix
\[
\begin{array}{c|ccccc|c}
0 & - & - & + & + & - & 0 \\ \hline
- & + & & & & & - \\
+ & & & & - & & + \\
- & & + & & & & - \\
+ & & & - & & & + \\
- & & & & & + & - \\ \hline
0 & - & - & + & + & - & 0
\end{array}
\;\to\;
\begin{bmatrix}
+ & & & & \\
 & & & - & \\
 & + & & & \\
 & & - & & \\
 & & & & +
\end{bmatrix}
= I_5(u, v),
\]
and the $(u, v)$-anti-identity matrix
\[
\begin{array}{c|ccccc|c}
0 & - & - & + & + & - & 0 \\ \hline
- & & & & & + & - \\
+ & & & - & & & + \\
- & & + & & & & - \\
+ & & & & - & & + \\
- & + & & & & & - \\ \hline
0 & - & - & + & + & - & 0
\end{array}
\;\to\;
\begin{bmatrix}
 & & & & + \\
 & & - & & \\
 & + & & & \\
 & & & - & \\
+ & & & &
\end{bmatrix}
= L_5(u, v).
\]

For any $m \times n$ matrix $A = [a_{ij}]$, the sum-matrix (also called the corner matrix in [11]) $\Sigma(A) = [\sigma_{ij}]$ of $A$ is the $m \times n$ matrix where
\[
\sigma_{ij} = \sigma_{ij}(A) = \sum_{k \le i} \sum_{l \le j} a_{kl} \qquad (1 \le i \le m,\ 1 \le j \le n),
\]
the sum of the entries in the leading $i \times j$ submatrix of $A$. Define $\sigma_{ij} = 0$ if $i = 0$ or $j = 0$. Then the matrix $A$ is uniquely determined by its sum-matrix $\Sigma(A)$, namely,
\[
a_{ij} = \sigma_{ij} - \sigma_{i,j-1} - \sigma_{i-1,j} + \sigma_{i-1,j-1} \qquad (1 \le i \le m,\ 1 \le j \le n).
\]
The sum-matrix $\Sigma(A) = [\sigma_{ij}]$ of a matrix $A \in \mathcal{A}_n(u, v)$ has the following properties, which are easily verified:
• The entries in row $i$ and column $i$ of $\Sigma(A)$ are taken from the set $\{0, \pm 1, \pm 2, \ldots, \pm i\}$.
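The sum-matrix and its inversion formula can be sketched in a few lines. This is an editorial illustration, not from the paper; the names `sum_matrix` and `from_sum_matrix` and the list-of-lists representation are assumptions. The extra zero row and column of `S` encode the convention $\sigma_{i0} = \sigma_{0j} = 0$, so `S[i][j]` is $\sigma_{ij}$ with 1-based $i, j$.

```python
def sum_matrix(A):
    """Sigma(A): S[i][j] = sum of the leading i x j submatrix of A,
    with the convention sigma_i0 = sigma_0j = 0 (1-based i, j)."""
    m, n = len(A), len(A[0])
    S = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # inclusion-exclusion on the leading submatrix sums
            S[i][j] = A[i-1][j-1] + S[i-1][j] + S[i][j-1] - S[i-1][j-1]
    return S

def from_sum_matrix(S):
    """Invert Sigma: a_ij = sigma_ij - sigma_{i,j-1}
    - sigma_{i-1,j} + sigma_{i-1,j-1}."""
    m, n = len(S) - 1, len(S[0]) - 1
    return [[S[i][j] - S[i][j-1] - S[i-1][j] + S[i-1][j-1]
             for j in range(1, n + 1)] for i in range(1, m + 1)]
```

Running the two functions in sequence on any matrix returns the original matrix, which is the uniqueness statement above in computational form.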