A Matrix Handbook for Statisticians

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Matrix Handbook for Statisticians

George A. F. Seber
Department of Statistics, University of Auckland, Auckland, New Zealand

Wiley-Interscience, A John Wiley & Sons, Inc., Publication

Copyright © 2008 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
Library of Congress Cataloging-in-Publication Data:
Seber, G. A. F. (George Arthur Frederick), 1938-
A matrix handbook for statisticians / George A. F. Seber.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-471-74869-4 (cloth)
1. Matrices. 2. Statistics. I. Title.
QA188.S43 2007
512.9'434 dc22
2007024691

Printed in the United States of America.

CONTENTS

Preface xvi

1 Notation
1.1 General Definitions
1.2 Some Continuous Univariate Distributions
1.3 Glossary of Notation

2 Vectors, Vector Spaces, and Convexity 7
2.1 Vector Spaces 7
2.1.1 Definitions 7
2.1.2 Quadratic Subspaces 9
2.1.3 Sums and Intersections of Subspaces 10
2.1.4 Span and Basis 11
2.1.5 Isomorphism 12
2.2 Inner Products 13
2.2.1 Definition and Properties 13
2.2.2 Functionals 15
2.2.3 Orthogonality 16
2.2.4 Column and Null Spaces 18
2.3 Projections 20
2.3.1 General Projections 20
2.3.2 Orthogonal Projections 21
2.4 Metric Spaces 25
2.5 Convex Sets and Functions 27
2.6 Coordinate Geometry 31
2.6.1 Hyperplanes and Lines 31
2.6.2 Quadratics 31
2.6.3 Areas and Volumes 32

3 Rank 35
3.1 Some General Properties 35
3.2 Matrix Products 37
3.3 Matrix Cancellation Rules 39
3.4 Matrix Sums 40
3.5 Matrix Differences 44
3.6 Partitioned and Patterned Matrices 46
3.7 Maximal and Minimal Ranks 49
3.8 Matrix Index 51

4 Matrix Functions: Inverse, Transpose, Trace, Determinant, and Norm 53
4.1 Inverse 53
4.2 Transpose 54
4.3 Trace 54
4.4 Determinants 57
4.4.1 Introduction 57
4.4.2 Adjoint Matrix 59
4.4.3 Compound Matrix 61
4.4.4 Expansion of a Determinant 61
4.5 Permanents 63
4.6 Norms 65
4.6.1 Vector Norms 65
4.6.2 Matrix Norms 67
4.6.3 Unitarily Invariant Norms 73
4.6.4 M,N-Invariant Norms 77
4.6.5 Computational Accuracy 77

5 Complex, Hermitian, and Related Matrices 79
5.1 Complex Matrices 79
5.1.1 Some General Results 80
5.1.2 Determinants 81
5.2 Hermitian Matrices 82
5.3 Skew-Hermitian Matrices 83
5.4 Complex Symmetric Matrices 84
5.5 Real Skew-Symmetric Matrices 85
5.6 Normal Matrices 86
5.7 Quaternions 87

6 Eigenvalues, Eigenvectors, and Singular Values 91
6.1 Introduction and Definitions 91
6.1.1 Characteristic Polynomial 92
6.1.2 Eigenvalues 95
6.1.3 Singular Values 101
6.1.4 Functions of a Matrix 103
6.1.5 Eigenvectors 103
6.1.6 Hermitian Matrices 104
6.1.7 Computational Methods 105
6.1.8 Generalized Eigenvalues 106
6.1.9 Matrix Products 107
6.2 Variational Characteristics for Hermitian Matrices 108
6.3 Separation Theorems 111
6.4 Inequalities for Matrix Sums 116
6.5 Inequalities for Matrix Differences 119
6.6 Inequalities for Matrix Products 119
6.7 Antieigenvalues and Antieigenvectors 122

7 Generalized Inverses 125
7.1 Definitions 125
7.2 Weak Inverses 126
7.2.1 General Properties 126
7.2.2 Products of Matrices 130
7.2.3 Sums and Differences of Matrices 132
7.2.4 Real Symmetric Matrices 132
7.2.5 Decomposition Methods 133
7.3 Other Inverses 134
7.3.1 Reflexive (g12) Inverse 134
7.3.2 Minimum Norm (g14) Inverse 134
7.3.3 Minimum Norm Reflexive (g124) Inverse 135
7.3.4 Least Squares (g13) Inverse 136
7.3.5 Least Squares Reflexive (g123) Inverse 137
7.4 Moore-Penrose (g1234) Inverse 137
7.4.1 General Properties 137
7.4.2 Sums of Matrices 143
7.4.3 Products of Matrices 143
7.5 Group Inverse 145
7.6 Some General Properties of Inverses 145

8 Some Special Matrices 147
8.1 Orthogonal and Unitary Matrices 147
8.2 Permutation Matrices 151
8.3 Circulant, Toeplitz, and Related Matrices 152
8.3.1 Regular Circulant 152
8.3.2 Symmetric Regular Circulant 155
8.3.3 Symmetric Circulant 156
8.3.4 Toeplitz Matrix 158
8.3.5 Persymmetric Matrix 159
8.3.6 Cross-Symmetric (Centrosymmetric) Matrix 160
8.3.7 Block Circulant 160
8.3.8 Hankel Matrix 161
8.4 Diagonally Dominant Matrices 162
8.5 Hadamard Matrices 164
8.6 Idempotent Matrices 166
8.6.1 General Properties 166
8.6.2 Sums of Idempotent Matrices and Extensions 170
8.6.3 Products of Idempotent Matrices 175
8.7 Tripotent Matrices 175
8.8 Irreducible Matrices 177
8.9 Triangular Matrices 178
8.10 Hessenberg Matrices 179
8.11 Tridiagonal Matrices 180
8.12 Vandermonde and Fourier Matrices 183
8.12.1 Vandermonde Matrix 183
8.12.2 Fourier Matrix 184
8.13 Zero-One (0,1) Matrices 186
8.14 Some Miscellaneous Matrices and Arrays 187
8.14.1 Krylov Matrix 187
8.14.2 Nilpotent and Unipotent Matrices 188
8.14.3 Payoff Matrix 188
8.14.4 Stable and Positive Stable Matrices 189
8.14.5 P-Matrix 191
8.14.6 Z- and M-Matrices 191
8.14.7 Three-Dimensional Arrays 194

9 Non-Negative Vectors and Matrices 195
9.1 Introduction 195
9.1.1 Scaling 196
9.1.2 Modulus of a Matrix 197
9.2 Spectral Radius 197
9.2.1 General Properties 197
9.2.2 Dominant Eigenvalue 199
9.3 Canonical Form of a Non-negative Matrix 201
9.4 Irreducible Matrices 202
9.4.1 Irreducible Non-negative Matrix 202
9.4.2 Periodicity 207
9.4.3 Non-negative and Nonpositive Off-Diagonal Elements 208
9.4.4 Perron Matrix 209
9.4.5 Decomposable Matrix 210
9.5 Leslie Matrix 210
9.6 Stochastic Matrices 212
9.6.1 Basic Properties 212
9.6.2 Finite Homogeneous Markov Chain 213
9.6.3 Countably Infinite Stochastic Matrix 215
9.6.4 Infinite Irreducible Stochastic Matrix 215
9.7 Doubly Stochastic Matrices 216

10 Positive Definite and Non-negative Definite Matrices 219
10.1 Introduction 219
10.2 Non-negative Definite Matrices 220
10.2.1 Some General Properties 220
10.2.2 Gram Matrix 223
10.2.3 Doubly Non-negative Matrix 223
Recommended publications
  • Group Matrix Ring Codes and Constructions of Self-Dual Codes
    Group Matrix Ring Codes and Constructions of Self-Dual Codes. S. T. Dougherty, University of Scranton, Scranton, PA, 18518, USA; Adrian Korban, Department of Mathematical and Physical Sciences, University of Chester, Thornton Science Park, Pool Ln, Chester CH2 4NU, England; Serap Şahinkaya, Tarsus University, Faculty of Engineering, Department of Natural and Mathematical Sciences, Mersin, Turkey; Deniz Ustun, Tarsus University, Faculty of Engineering, Department of Computer Engineering, Mersin, Turkey. February 2, 2021. arXiv:2102.00475v1 [cs.IT] 31 Jan 2021.
    Abstract. In this work, we study codes generated by elements that come from group matrix rings. We present a matrix construction which we use to generate codes in two different ambient spaces: the matrix ring Mk(R) and the ring R, where R is a commutative Frobenius ring. We show that codes over the ring Mk(R) are one-sided ideals in the group matrix ring Mk(R)G and the corresponding codes over the ring R are G^k-codes of length kn. Additionally, we give a generator matrix for self-dual codes, which consists of the above-mentioned matrix construction. We employ this generator matrix to search for binary self-dual codes with parameters [72, 36, 12] and find new singly-even and doubly-even codes of this type. In particular, we construct 16 new Type I and 4 new Type II binary [72, 36, 12] self-dual codes.
    1 Introduction. Self-dual codes are one of the most widely studied and interesting classes of codes. They have been shown to have strong connections to unimodular lattices, invariant theory, and designs.
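As a quick illustration of the self-duality condition mentioned in this abstract (not the authors' group-ring construction), the sketch below checks that a binary generator matrix in standard form G = [I | A] generates a self-dual code by verifying that G Gᵀ vanishes over GF(2); the example matrix A, which gives the extended [8, 4, 4] Hamming code, is chosen purely for the demo.

```python
import numpy as np

def is_self_dual(A):
    """Check whether G = [I | A] generates a binary self-dual code.

    G has full rank k, so the code has length 2k and dimension k; it is
    self-dual exactly when all rows of G are pairwise orthogonal and
    self-orthogonal over GF(2), i.e. G @ G.T == 0 (mod 2).
    """
    k = A.shape[0]
    G = np.hstack([np.eye(k, dtype=int), A % 2])
    return not np.any((G @ G.T) % 2)

# Toy example: the extended [8, 4, 4] Hamming code is self-dual.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
print(is_self_dual(A))  # True
```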
  • Matrix Algebra and Control
    Appendix A: Matrix Algebra and Control. Boldface lower-case letters, e.g., a or b, denote vectors; boldface capital letters, e.g., A, M, denote matrices. A vector is a column matrix. Containing m elements (entries), it is referred to as an m-vector. The number of rows and columns of a matrix A is n and m, respectively. Then A is an (n, m)-matrix or n x m-matrix (dimension n x m). The matrix A is called positive or non-negative if A > 0 or A >= 0, respectively, i.e., if the elements are real and positive or non-negative, respectively.
    A.1 Matrix Multiplication. Two matrices A and B may only be multiplied, C = AB, if they are conformable: A has size n x m, B has size m x r, and C has size n x r. Two matrices are conformable for multiplication if the number m of columns of the first matrix A equals the number m of rows of the second matrix B. (Kronecker matrix products do not require conformable multiplicands.) The entries of the matrices are related as follows: c_ij = sum_{v=1}^{m} a_iv b_vj for all i = 1, ..., n and j = 1, ..., r. The jth column vector C_.j of the matrix C, as denoted in Eq. (A.13), can be calculated from the columns A_.v and the entries B_vj, and the jth row C_j. from the rows B_v. and the entries A_jv:
    column C_.j = sum_{v=1}^{m} A_.v B_vj,    row C_j. = (C^T)_.j = sum_{v=1}^{m} A_jv B_v.    (A.1)
    A matrix product AB may be zero although neither the multiplicand A nor the multiplicator B is zero.
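The conformability rule and the entrywise formula c_ij = sum_v a_iv b_vj lend themselves to a short numerical check; the sketch below (made-up matrices, not taken from the appendix) also reproduces the appendix's point that AB can be zero although neither A nor B is.

```python
import numpy as np

# Conformable product: A is n x m, B is m x r, so C = AB is n x r,
# with c_ij = sum_v a_iv * b_vj.
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # 2 x 3
B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])            # 3 x 2
C = A @ B                           # 2 x 2
print(C)

# A product can vanish even though neither factor is the zero matrix.
A0 = np.array([[1., 1.],
               [1., 1.]])
B0 = np.array([[ 1., -1.],
               [-1.,  1.]])
print(A0 @ B0)   # [[0., 0.], [0., 0.]]
```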
  • Reduction of the Pseudoinverse of a Hermitian Persymmetric Matrix
    MATHEMATICS OF COMPUTATION, VOLUME 28, NUMBER 127, JULY 1974, PAGES 715-717. Reduction of the Pseudoinverse of a Hermitian Persymmetric Matrix. By Marvin J. Goldstein.
    Abstract. When the pseudoinverse of a Hermitian persymmetric matrix is computed, both computer time and storage can be reduced by taking advantage of the special structure of the matrix.
    For any matrix M, let M' and M* denote its transpose and conjugate transpose, respectively. Let J be a permutation matrix whose elements along the southwest-northeast diagonal are ones and whose remaining elements are zeros. Note that J = J* = J^{-1}.
    Definition 1. M is persymmetric if JM*J equals the complex conjugate of M. Note that all Toeplitz matrices (t_ij = t_{i+1,j+1}) are persymmetric.
    Definition 2. M is centrosymmetric if JMJ = M; skew-centrosymmetric if JMJ = -M. Note that if a persymmetric matrix is symmetric, it is centrosymmetric; if a persymmetric matrix is skew (M' = -M), it is skew-centrosymmetric. It is clear, therefore, that the real and imaginary parts of a Hermitian persymmetric matrix are centrosymmetric and skew-centrosymmetric, respectively.
    In [2], matrix forms for the pseudoinverse of a centrosymmetric matrix are given in terms of the pseudoinverses of smaller matrices. Similar matrix forms for the pseudoinverse of a skew-centrosymmetric matrix are given in [3]. In this paper, we show that the pseudoinversion of a Hermitian persymmetric matrix reduces to the pseudoinversion of a real symmetric matrix of the same order.
    Definition 3. The pseudoinverse A+ of any matrix A is uniquely defined by the matrix equations: (1) AA+A = A, A+AA+ = A+, (A+A)* = A+A, (AA+)* = AA+.
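A small numerical sanity check of the definitions above can be made with NumPy: the snippet verifies that a Hermitian Toeplitz matrix (hence Hermitian persymmetric) satisfies J M* J = conj(M), and that numpy.linalg.pinv returns a matrix satisfying the four Penrose conditions. The example entries are invented; this is a sketch of the definitions only, not the paper's reduction algorithm.

```python
import numpy as np

# Exchange matrix J: ones on the anti-diagonal.
n = 4
J = np.fliplr(np.eye(n))

# A Hermitian Toeplitz matrix built from an invented first row c (c[0] real).
c = np.array([2.0, 1.0 + 1.0j, 0.0, 1.0 - 1.0j])
M = np.array([[c[abs(i - j)].conj() if i > j else c[abs(i - j)]
               for j in range(n)] for i in range(n)])

# Persymmetry in the paper's sense: J M* J equals the complex conjugate of M.
print(np.allclose(J @ M.conj().T @ J, M.conj()))   # True

# The four Penrose conditions that uniquely define the pseudoinverse A+.
Mp = np.linalg.pinv(M)
print(np.allclose(M @ Mp @ M, M),
      np.allclose(Mp @ M @ Mp, Mp),
      np.allclose((Mp @ M).conj().T, Mp @ M),
      np.allclose((M @ Mp).conj().T, M @ Mp))       # True True True True
```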
  • Persymmetric Jacobi Matrices, Isospectral Deformations And
    PERSYMMETRIC JACOBI MATRICES, ISOSPECTRAL DEFORMATIONS AND ORTHOGONAL POLYNOMIALS. VINCENT GENEST, SATOSHI TSUJIMOTO, LUC VINET, AND ALEXEI ZHEDANOV.
    Abstract. Persymmetric Jacobi matrices are invariant under reflection with respect to the anti-diagonal. The associated orthogonal polynomials have distinctive properties that are discussed. They are found in particular to be also orthogonal on the restrictions either to the odd or to the even points of the complete orthogonality lattice. This is exploited to design very efficient inverse problem algorithms for the reconstruction of persymmetric Jacobi matrices from spectral points. Isospectral deformations of such matrices are also considered. Expressions for the associated polynomials and their weights are obtained in terms of the undeformed entities.
    1. Introduction. Jacobi matrices that are invariant under reflection with respect to the anti-diagonal are said to be persymmetric or mirror-symmetric. These matrices are known to possess tractable inverse spectral problems [2, 3, 4] and arise in a number of situations, especially in engineering (see for instance [10]). Interestingly, they have recently appeared in the analysis of perfect state transfer in quantum spin chains [1, 11, 17]. In the latter context the main question is: what should the design of the chain be if it is to act as a quantum wire and transport quantum states from one location to another with probability one. The answer exploits the connection between orthogonal polynomials and Jacobi matrices as well as the features that mirror-symmetry brings. The present paper provides a systematic examination of the properties that the reflection invariance of persymmetric matrices induces on the associated orthogonal polynomials.
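To make the mirror-symmetry property concrete, here is a minimal sketch (with hypothetical recurrence entries, not from the paper) of a persymmetric Jacobi matrix: it is invariant under reflection in the anti-diagonal, and since a Jacobi matrix with nonzero off-diagonal entries has simple spectrum, each eigenvector is either symmetric or antisymmetric under the exchange matrix J. The isospectral deformations and reconstruction algorithms of the paper are not reproduced here.

```python
import numpy as np

def jacobi_matrix(diag, off):
    """Symmetric tridiagonal (Jacobi) matrix; it is persymmetric when the
    diagonal and off-diagonal sequences are mirror-symmetric."""
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

b = np.array([1.0, 2.0, 3.0, 2.0, 1.0])      # mirror-symmetric diagonal
a = np.array([0.5, 0.7, 0.7, 0.5])           # mirror-symmetric off-diagonal
A = jacobi_matrix(b, a)
J = np.fliplr(np.eye(len(b)))                 # exchange (anti-diagonal) matrix

# Invariance under reflection in the anti-diagonal: J A^T J = A.
print(np.allclose(J @ A.T @ J, A))            # True

# With simple spectrum, each eigenvector satisfies Jv = +v or Jv = -v,
# so the quadratic form v^T J v is +1 or -1 for normalized eigenvectors.
w, V = np.linalg.eigh(A)
print([int(round(float(v @ J @ v))) for v in V.T])   # entries are +1 or -1
```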
  • Inverses of 2 × 2 Block Matrices
    An International Journal: Computers & Mathematics with Applications, Pergamon. Computers and Mathematics with Applications 43 (2002) 119-129. www.elsevier.com/locate/camwa. Inverses of 2 x 2 Block Matrices. Tzon-Tzer Lu and Sheng-Hua Shiou, Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung, Taiwan 80424, R.O.C. (Received March 2000; revised and accepted October 2000.)
    Abstract. In this paper, the authors give explicit inverse formulae for 2 x 2 block matrices with three different partitions. Then these results are applied to obtain inverses of block triangular matrices and various structured matrices such as Hamiltonian, per-Hermitian, and centro-Hermitian matrices. © 2001 Elsevier Science Ltd. All rights reserved.
    Keywords: 2 x 2 block matrix, inverse matrix, structured matrix.
    1. INTRODUCTION. This paper is devoted to the inverses of 2 x 2 block matrices. First, we give explicit inverse formulae for a 2 x 2 block matrix [A B; C D] (1.1) with three different partitions. Then these results are applied to obtain inverses of block triangular matrices and various structured matrices such as bisymmetric, Hamiltonian, per-Hermitian, and centro-Hermitian matrices. In the end, we briefly discuss the completion problems of a 2 x 2 block matrix and its inverse, which generalizes this problem. The inverse formula (1.1) of a 2 x 2 block matrix appears frequently in many subjects and has long been studied. Its inverse in terms of A^{-1} or D^{-1} can be found in standard textbooks on linear algebra, e.g., [1-3]. However, we give a complete treatment here. Some papers, e.g., [4,5], deal with its inverse in terms of the generalized inverse of A.
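One of the standard explicit formulae alluded to above, the inverse of M = [A B; C D] in terms of A^{-1} and the Schur complement S = D - C A^{-1} B, is easy to check numerically. The sketch below assumes both A and S are invertible and compares against numpy.linalg.inv; it does not reproduce the paper's three partitions or the structured-matrix cases.

```python
import numpy as np

def block_inverse(A, B, C, D):
    """Inverse of M = [[A, B], [C, D]] assuming A and the Schur complement
    S = D - C A^{-1} B are both invertible."""
    Ai = np.linalg.inv(A)
    S = D - C @ Ai @ B                        # Schur complement of A
    Si = np.linalg.inv(S)
    top = np.hstack([Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si])
    bot = np.hstack([-Si @ C @ Ai, Si])
    return np.vstack([top, bot])

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 2))
C, D = rng.normal(size=(2, 3)), rng.normal(size=(2, 2))
M = np.block([[A, B], [C, D]])
print(np.allclose(block_inverse(A, B, C, D), np.linalg.inv(M)))  # True
```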
  • Universal Algorithms for Solving the Matrix Bellman Equations Over Semirings
    Universal algorithms for solving the matrix Bellman equations over semirings. G. L. Litvinov, A. Ya. Rodionov, S. N. Sergeev & A. N. Sobolevski. Soft Computing: A Fusion of Foundations, Methodologies and Applications, ISSN 1432-7643, DOI 10.1007/s00500-013-1027-5. © Springer-Verlag Berlin Heidelberg 2013. FOUNDATIONS.
    Abstract. This paper is a survey on universal algorithms for solving the matrix Bellman equations over semirings, and especially tropical and idempotent semirings. However, original algorithms are also presented. Some applications and software implementations are discussed.
    ... computer representations of real numbers are provided by various modifications of floating-point arithmetics, approximate arithmetics of rational numbers (Litvinov et al. 2008), interval arithmetics, etc. The difference between mathematical objects ("ideal" numbers) and their finite models (computer representations) results in computational (for instance, rounding) errors.
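For a concrete instance of a matrix Bellman equation over an idempotent semiring, take the (min, +) "tropical" semiring: the least solution of x = (A ⊗ x) ⊕ b is x = A* ⊗ b, where the Kleene star A* = I ⊕ A ⊕ A² ⊕ ... can be computed by a Floyd-Warshall-style elimination. The sketch below uses invented arc weights and illustrates only this one semiring, not the paper's universal-algorithm framework.

```python
import numpy as np

INF = np.inf   # the semiring "zero" of (min, +): no arc

def tropical_star(A):
    """Kleene star A* = I ⊕ A ⊕ A⊗A ⊕ ... in the (min, +) semiring,
    computed by Floyd-Warshall (all-pairs shortest paths)."""
    n = A.shape[0]
    S = A.copy()
    np.fill_diagonal(S, np.minimum(np.diag(S), 0.0))  # ⊕ identity: 0 on diagonal
    for k in range(n):
        S = np.minimum(S, S[:, [k]] + S[[k], :])
    return S

def bellman_solve(A, b):
    """Least solution of x = (A ⊗ x) ⊕ b, namely x = A* ⊗ b."""
    S = tropical_star(A)
    return np.min(S + b[None, :], axis=1)

A = np.array([[INF, 1.0, 4.0],
              [INF, INF, 2.0],
              [1.0, INF, INF]])   # arc weights of a small digraph
b = np.array([0.0, INF, INF])     # "source" vector singling out node 0
print(bellman_solve(A, b))        # [0., 3., 1.]: shortest distance to node 0
```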
  • Recursive Algorithms for the Matrix Padй Problem
    MATHEMATICS OF COMPUTATION, VOLUME 35, NUMBER 151, JULY 1980, PAGES 875-892. Recursive Algorithms for the Matrix Padé Problem. By Adhemar Bultheel.
    Abstract. A matrix triangularization interpretation is given for the recursive algorithms computing the Padé approximants along a certain path in the Padé table, which makes it possible to unify all known algorithms in this field [5], [6]. For the normal Padé table, all these results carry over to the matrix Padé problem in a straightforward way. Additional features resulting from the noncommutativity are investigated. A generalization of the Trench-Zohar algorithm and related recursions are studied in greater detail.
    1. Introduction. Let K[z] be the set of formal power series over a ring K with indeterminate z. For ease of exposition, we restrict K to be the ring of n x n matrices, though a more general setting is possible [3]. Let d°P(z) denote the degree of a matrix polynomial P(z) and ord Z(z) the order of a formal power series Z(z), i.e.
    d°P(z) = N  <=>  P(z) = sum_{k=0}^{N} p_k z^k, p_N != 0, p_k in K, k = 0, 1, ..., N,
    and
    ord Z(z) = M  <=>  Z(z) = sum_{k=M}^{infinity} r_k z^k, r_M != 0, r_k in K, k = M, M+1, ...
    For given nonnegative integers M and N, and some F(z) in K[z], the right Padé approximation problem consists in finding two matrix polynomials P^[M/N](z) and Q^[M/N](z) such that
    (a) d°P^[M/N](z) <= M, d°Q^[M/N](z) <= N,
    (b) P^[M/N](z) and Q^[M/N](z) have no common right divisor of degree > 0,
    (c) the right (R-) residual Z^[M/N](z), defined by Z^[M/N](z) = F(z) Q^[M/N](z) - P^[M/N](z), satisfies ord Z^[M/N](z) >= M + N + 1, and
    (d) the constant term q_0^[M/N] of Q^[M/N](z) equals I.
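To make conditions (a)-(d) concrete in the simplest scalar (commutative) case, the sketch below solves the linear system for the denominator coefficients of an [M/N] approximant and recovers the numerator from the series; the matrix-valued, noncommutative recursions of the paper are not implemented, and the exp(z) example is a standard test case rather than one taken from the paper.

```python
import numpy as np
from math import factorial

def pade(c, M, N):
    """[M/N] Padé approximant P/Q of a scalar series with coefficients
    c[0..M+N], normalised so that q_0 = 1 (scalar analogue of the matrix
    problem described above)."""
    c = np.asarray(c, dtype=float)
    # Orders M+1 .. M+N of F*Q - P must vanish: a linear system for q_1..q_N.
    A = np.array([[c[M + i - j] if M + i - j >= 0 else 0.0
                   for j in range(1, N + 1)] for i in range(1, N + 1)])
    q = np.concatenate([[1.0], np.linalg.solve(A, -c[M + 1:M + N + 1])])
    # Orders 0 .. M of F*Q give the numerator coefficients.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, N) + 1))
                  for k in range(M + 1)])
    return p, q

# Example: the [2/2] approximant to exp(z).
c = [1.0 / factorial(k) for k in range(5)]
p, q = pade(c, 2, 2)
print(p)   # approx [1, 1/2, 1/12]
print(q)   # approx [1, -1/2, 1/12]
```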
  • Fast Construction of a Symmetric Nonnegative Matrix with a Prescribed Spectrum
    An International Journal: Computers & Mathematics with Applications, Pergamon. Computers and Mathematics with Applications 42 (2001) 1379-1391. www.elsevier.nl/locate/camwa. Fast Construction of a Symmetric Nonnegative Matrix with a Prescribed Spectrum. O. Rojo and R. Soto, Department of Mathematics, Universidad Católica del Norte, Antofagasta, Chile; H. Rojo, Department of Mathematics, Universidad de Antofagasta, Antofagasta, Chile. (Received and accepted December 2000.)
    Abstract. In this paper, for a prescribed real spectrum, using properties of the circulant matrices and of the symmetric persymmetric matrices, we derive a fast and stable algorithm to construct a symmetric nonnegative matrix which realizes the spectrum. The algorithm is based on the fast Fourier transform. © 2001 Elsevier Science Ltd. All rights reserved.
    Keywords: Inverse eigenvalue problem, Nonnegative matrix, Circulant matrix, Fast Fourier transform, Persymmetric matrix.
    1. INTRODUCTION. There are many applications involving nonnegative matrices. We mention areas like game theory, Markov chains (stochastic matrices), theory of probability, probabilistic algorithms, numerical analysis, discrete distribution, group theory, matrix scaling, theory of small oscillations of elastic systems (oscillation matrices), and economics. Many references concerning properties of nonnegative matrices are available. Some fundamental results are collected into the following main theorem of Perron and Frobenius [1].
    THEOREM 1. Let A be a nonnegative irreducible n x n matrix. Then,
    1. A has a positive real eigenvalue r(A) equal to its spectral radius p(A),
    2. to r(A) there corresponds a positive eigenvector,
    3. r(A) increases when any entry of A increases,
    4. r(A) is a simple eigenvalue of A.
    The eigenvalue r(A) is called the Perron root of A and the corresponding positive eigenvector is called the Perron vector of A.
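The circulant ingredient of such constructions can be illustrated directly: the eigenvalues of a circulant matrix are the DFT of its first row, so an even arrangement of a prescribed real spectrum pulls back through an inverse FFT to a symmetric circulant that realizes it. The sketch below uses an invented spectrum for which the resulting entries happen to be nonnegative; it shows only this one ingredient, not the Rojo-Soto algorithm itself.

```python
import numpy as np

# Prescribed real spectrum, arranged as an even sequence (d[k] == d[n-k]) so
# that its inverse DFT is real; these target values are made up for the demo.
d = np.array([6.0, 1.0, -2.0, -2.0, 1.0])
n = len(d)

# Eigenvalues of a circulant are the DFT of its first row, so the first row
# realizing the spectrum d is the inverse FFT of d (real because d is even).
c = np.fft.ifft(d).real
C = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

print(np.all(c >= 0))               # True: entries happen to be nonnegative
print(np.allclose(C, C.T))          # True: symmetric circulant
print(np.linalg.eigvalsh(C))        # approx [-2, -2, 1, 1, 6]
```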
  • Universal Algorithms for Generalized Discrete Matrix Bellman Equations
    Universal algorithms for generalized discrete matrix Bellman equations with symmetric Toeplitz matrix. Sergeĭ Sergeev, Department of Physics, Sub-Department of Quantum Statistics and Field Theory, M.V. Lomonosov Moscow State University, 119992 Leninskie Gory, GSP-2, Moscow, Russia. E-mail: [email protected]. July 28, 2018. arXiv:math/0612309v1 [math.RA] 12 Dec 2006. Research supported by RFBR grant 05-01-00824 and joint RFBR/CNRS grant 05-01-02807.
    Abstract. This paper presents two universal algorithms for generalized discrete matrix Bellman equations with symmetric Toeplitz matrix. The algorithms are semiring extensions of two well-known methods solving Toeplitz systems in the ordinary linear algebra.
    1 Introduction. As observed by B. A. Carré [1, 2], the Gaussian elimination without pivoting can be viewed as a prototype for some algorithms on graphs. M. Gondran [3] and G. Rote [4] made this observation precise by proving that the Gaussian elimination, under certain conditions, can be applied to the linear systems of equations over semirings. The notion of universal algorithm over a semiring was introduced by G. L. Litvinov, V. P. Maslov and E. V. Maslova in [5, 6]. These papers are to be considered in the framework of publications [7, 8, 9, 10, 11, 12] of the Russian idempotent school, and more generally, in the framework of idempotent and tropical mathematics; see [13, 14, 15] and references therein. Essentially, an algorithm is called universal if it does not depend on the computer representation of data and on a specific realization of algebraic operations involved in the algorithm [6].
  • Streaming Down the Stern-Brocot Tree: Finding and Expressing Solutions to Pell's Equation in SL(2,Z)
    Jacksonville State University, JSU Digital Commons, Theses, Dissertations & Graduate Projects. Spring 5-7-2021. Streaming Down the Stern-Brocot Tree: Finding and Expressing Solutions to Pell's Equation in SL(2,Z). Marcus L. Shell, Jacksonville State University, [email protected]. Part of the Algebra Commons and the Number Theory Commons.
    Recommended Citation: Shell, Marcus L., "Streaming Down the Stern-Brocot Tree: Finding and Expressing Solutions to Pell's Equation in SL(2,Z)" (2021). Theses. 13. https://digitalcommons.jsu.edu/etds_theses/13.
    A Thesis Submitted to the Graduate Faculty of Jacksonville State University in Partial Fulfillment of the Requirements for the Degree of Master of Science with a Major in Mathematics, by Marcus Lee Shell. Jacksonville, Alabama, May 7, 2021.
    ABSTRACT. This paper explores and elaborates on a method of solving Pell's equation as introduced by Norman Wildberger. In the first chapters of the paper, foundational topics are introduced in expository style, including an explanation of Pell's equation. An explanation of continued fractions and their ability to express quadratic irrationals is provided, as well as a connection to the Stern-Brocot tree and a convenient means of representation for each in terms of 2×2 matrices with integer elements.
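A compact way to see the 2×2-matrix bookkeeping behind continued-fraction convergents (related in spirit to, but not the same as, the Stern-Brocot approach of the thesis) is the classical solution of Pell's equation sketched below: the convergents p_k/q_k of sqrt(d) are accumulated as products of integer matrices [[a_k, 1], [1, 0]], and the first convergent with p² - d·q² = 1 gives the fundamental solution.

```python
from math import isqrt

def pell_fundamental(d):
    """Fundamental solution (x, y) of x^2 - d*y^2 = 1 via the periodic
    continued fraction of sqrt(d), accumulating convergents with 2x2
    integer matrix products [[a, 1], [1, 0]]."""
    a0 = isqrt(d)
    if a0 * a0 == d:
        raise ValueError("d must not be a perfect square")
    m, q, a = 0, 1, a0
    P = [[a0, 1], [1, 0]]            # [[p_k, p_{k-1}], [q_k, q_{k-1}]]
    while True:
        x, y = P[0][0], P[1][0]
        if x * x - d * y * y == 1:
            return x, y
        # Standard recurrence for the continued fraction of sqrt(d).
        m = a * q - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        # Multiply on the right by [[a, 1], [1, 0]] to get the next convergent.
        P = [[P[0][0] * a + P[0][1], P[0][0]],
             [P[1][0] * a + P[1][1], P[1][0]]]

print(pell_fundamental(61))   # (1766319049, 226153980)
```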
  • Highly Symmetric Generalized Circulant Permutation Matricesୋ
    Linear Algebra and its Applications 429 (2008) 367-375. Available online at www.sciencedirect.com, www.elsevier.com/locate/laa. Highly symmetric generalized circulant permutation matrices. M. Abreu (Dipartimento di Matematica, Università della Basilicata, C. da Macchia Romana, 85100 Potenza, Italy), D. Labbate (Dipartimento di Matematica, Politecnico di Bari, I-70125 Bari, Italy), R. Salvi and N. Zagaglia Salvi (Dipartimento di Matematica, Politecnico di Milano, P.zza Leonardo da Vinci, 32, I-20133 Milano, Italy). Received 31 July 2006; accepted 26 February 2008; available online 11 April 2008. Submitted by R.A. Brualdi.
    Abstract. In this paper we investigate generalized circulant permutation matrices of composite order. We give a complete characterization of the order and the structure of symmetric generalized k-circulant permutation matrices in terms of circulant and retrocirculant block (0,1)-matrices in which each block contains exactly one or two entries 1. In particular, we prove that a generalized k-circulant matrix A of composite order n = km is symmetric if and only if either k = m - 1, or k ≡ 0, or k ≡ 1 (mod m), and we obtain three basic symmetric generalized k-circulant permutation matrices, from which all others are obtained via permutations of the blocks or by direct sums. Furthermore, we extend the characterization of these matrices to centrosymmetric matrices. © 2008 Elsevier Inc. All rights reserved.
    Keywords: Generalized circulant matrix; Block circulant matrix; Centrosymmetric matrix; Persymmetric matrix.
    1. Introduction. A matrix of order n is said to be h-circulant, h < n, when every row, distinct from the first one, is obtained from the preceding one by shifting every element h positions to the right.
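The h-circulant definition quoted above is easy to experiment with: shift a 0/1 first row by k positions per row and test the resulting permutation matrix for symmetry and centrosymmetry. The sketch below uses small invented cases (n = 6 with k = 1 and k = 5) and does not reproduce the paper's classification for composite order n = km.

```python
import numpy as np

def k_circulant(row, k):
    """k-circulant matrix: each row is the preceding row shifted k positions
    to the right (cyclically); with a 0/1 unit first row and gcd(k, n) = 1
    this yields a permutation matrix."""
    n = len(row)
    return np.array([np.roll(row, i * k) for i in range(n)])

n = 6
e0 = np.eye(n, dtype=int)[0]                 # unit first row
J = np.fliplr(np.eye(n, dtype=int))          # exchange matrix

for k in (1, 5):
    P = k_circulant(e0, k)
    perm = np.array_equal(P.sum(axis=0), np.ones(n))   # every column sums to 1
    sym = np.array_equal(P, P.T)                       # symmetric?
    centro = np.array_equal(J @ P @ J, P)              # centrosymmetric?
    print(f"k={k}: permutation={perm}, symmetric={sym}, centrosymmetric={centro}")
```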
  • Dihedral Groups of Order 2P of Automorphisms of Compact Riemann Surfaces of Genus P − 1
    Canad. Math. Bull. Vol. 58 (1), 2015, pp. 196-206. http://dx.doi.org/10.4153/CMB-2014-041-3. © Canadian Mathematical Society 2014. Dihedral Groups of Order 2p of Automorphisms of Compact Riemann Surfaces of Genus p - 1. Qingjie Yang and Weiting Zhong.
    Abstract. In this paper we prove that there is only one conjugacy class of dihedral group of order 2p in the 2(p - 1) × 2(p - 1) integral symplectic group that can be realized by an analytic automorphism group of compact connected Riemann surfaces of genus p - 1. A pair of representative generators of the realizable class is also given.
    1 Introduction. The problem we consider in this paper is the realizability of dihedral groups D_2p of order 2p, where p is an odd prime, in Sp_{2(p-1)}(Z), the 2(p - 1) × 2(p - 1) symplectic group over the ring of integers Z, by analytic automorphisms of compact connected Riemann surfaces of genus p - 1. This is a special case of a more general problem. Let S be a connected compact Riemann surface of genus g (g ≥ 2) without boundary and let G be a subgroup of Aut(S), the group of analytic automorphisms of S. Then G induces a faithful group action on H_1(S) = H_1(S; Z), the first homology group of S,
    G_* : H_1(S) → H_1(S).
    Let a_1, a_2, ..., a_g, b_1, b_2, ..., b_g be a canonical basis of H_1(S), that is, a basis for which the intersection matrix is
    J = [ 0    I_g ]
        [ -I_g  0  ],
    where I_g is the identity matrix of degree g.
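The symplectic condition underlying Sp_{2g}(Z) can be checked mechanically: an integral 2g × 2g matrix M lies in the group exactly when it preserves the intersection form, i.e. M^T J M = J. The sketch below verifies this for two simple matrices (J itself and a unipotent block matrix); these are illustrative only, not the representative dihedral-group generators found in the paper.

```python
import numpy as np

def symplectic_form(g):
    """Standard symplectic form J = [[0, I_g], [-I_g, 0]] of size 2g x 2g."""
    I = np.eye(g, dtype=int)
    Z = np.zeros((g, g), dtype=int)
    return np.block([[Z, I], [-I, Z]])

def is_symplectic(M, J):
    """M lies in Sp_{2g}(Z) when it is integral and satisfies M^T J M = J."""
    return np.array_equal(M.T @ J @ M, J)

g = 2
J = symplectic_form(g)

# J itself is symplectic, and so is the unipotent block matrix [[I, B], [0, I]]
# for any symmetric integer B (a standard type of generator, chosen here
# only as an example).
B = np.array([[1, 0], [0, 0]])
T = np.block([[np.eye(g, dtype=int), B],
              [np.zeros((g, g), dtype=int), np.eye(g, dtype=int)]])
print(is_symplectic(J, J), is_symplectic(T, J))   # True True
```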