Linear Algebra

Linear Algebra
Carl de Boor
draft 25jan13

Contents

Preface vi
Overview viii
1 Sets, assignments, lists, and maps: Sets 1; Assignments, lists 2; Matrices 5; Lists of lists 8; Maps 8; 1-1 and onto 10; Cardinality and the pigeonhole principle 11; Some examples 13; Maps and their graphs 13; Invertibility 15; Map composition; left and right inverse 17; The inversion of maps 22
2 Vector spaces and linear maps: Vector spaces, especially spaces of functions 24; Linear maps 28; Linear maps from IF^n (aka column maps) 31; Linear maps from IF^T 39; The linear equation A? = y, and ran A and null A 39; Inverses 42
3 The dimension of a vector space: Bases 47; Construction of a basis 50; Dimension 53; The dimension of IF^T 54; Some uses of the dimension concept 55; Direct sums 60; The only matrices whose invertibility can be ascertained at a glance 63; Polynomial interpolation 64
4 Elimination, or: The determination of null A and ran A: Elimination and Backsubstitution 66; The really reduced row echelon form and other reduced forms 73; The basis for null A obtained from a b-form 76; The factorization A = A(:,b) rrref_b(A) 79; The basis for ran A obtained from a b-form 79; The rrref(A) and the solving of A? = y 81; Constructing the inverse of a matrix by elimination 83; Elimination in vector spaces 84
5 The inverse of a basis, and interpolation: Linear maps into IF^n (aka row maps) 88; A formula for the coordinate map 89; Change of basis 92; Interpolation and linear projectors 93
6 Inner product spaces: Definition and examples 99; The conjugate transpose 100; Orthogonal projectors and closest points 103; Least-squares 106; Orthonormal column maps 108; ran A and null A^c form an orthogonal direct sum for tar A 112; The inner product space IF^{m×n} and the trace of a matrix 113
7 Norms, map norms, and the condition of a basis: How to judge the error by the residual 115; The map norm 118; Vector norms and their associated map norms 120; Any linear map close enough to an invertible linear map is invertible 125; Bounding the interpolation error: Lebesgue's inequality 125
8 Factorization and rank: The need for factoring linear maps 127; The trace of a linear map 130; The rank of a matrix and of its (conjugate) transpose 131; Elimination as factorization 132; SVD 134; The Pseudo-inverse 136; 2-norm and 2-condition of a matrix 137; The effective rank of a noisy matrix 137; The polar decomposition 139; Equivalence and similarity 140
9 Duality: Complementary mathematical concepts 142; The dual of a vector space 144; The dual of an inner product space 147; The dual of a linear map 148
10 The powers of a linear map and its spectrum: Examples 150; Eigenvalues and eigenvectors 152; Diagonalizability 155; Are all square matrices diagonalizable? 158; Does every square matrix have an eigenvalue? 158; Polynomials in a linear map, Krylov subspaces, and the minimal polynomial 160; It is enough to understand the eigenstructure of matrices 166; Every complex (square) matrix is similar to an upper triangular matrix 167
11 Convergence of the power sequence: Convergence of sequences in a normed vector space 171; Three interesting properties of the power sequence of a linear map 172; Splitting off the nondefective eigenvalues 175; Three interesting properties of the power sequence of a linear map: The sequel 179; The power method 181
12 Canonical forms: The Schur form 183; The primary decomposition 186; The Jordan form 190; The Weyr form 195
13 Localization of eigenvalues: Gershgorin's circles 197; The trace of a linear map 199; Determinants 200; Annihilating polynomials 203; The multiplicities of an eigenvalue 205; Perron-Frobenius 206
14 Optimization and quadratic forms: Minimization 212; Quadratic forms 213; Reduction of a quadratic form to a sum of squares 216; Rayleigh quotient 217
15 More on determinants: Definition and basic properties 223; Sylvester 230; Binet-Cauchy 232
16 Some applications: The cross product in 3-space 234; Rotation in 3-space 235; Flats: points, vectors, barycentric coordinates, differentiation 237; An example from CAGD 245; Tridiagonal Toeplitz matrix 248; Markov Chains 249; Polynomial interpolation and divided differences 250; Linear Programming 253; Total positivity 262; Least-squares approximation by broken lines 263; The B-spline basis for a spline space 265; Frames 265; A multivariate polynomial interpolant of minimal degree 266; The reduced monic Gröbner basis for a zero-dimensional ideal 270
17 Background: A nonempty finite subset of R contains a maximal element 271; A nonempty bounded subset of R has a least upper bound 271; Complex numbers 272; Convergence of a scalar sequence 274; A real continuous function on a compact set in R^n has a maximum 275; Groups, rings, and fields 276; The ring of univariate polynomials 279; Horner, or: How to divide a polynomial by a linear factor 280; The Euclidean Algorithm 281
18 List of Notation; Rough index for this book 284

Preface

This book is motivated by the following realizations:

(1) The linear maps between a vector space X over the scalar field IF and the associated coordinate spaces IF^n are efficient tools for work on theoretical and practical problems involving X. Those from IF^n to X share with matrices the feature of columns, hence are called column maps, while those from X to IF^n share with matrices the feature of rows, hence are called row maps. Work with a linear map A usually requires its factorization into a column map and a row map. Such factorization is most efficient for the task if the particular column map is invertible as a map to the range of A, i.e., if it is a basis for that range.

(2) Gauss elimination is applied to matrices for the purpose of obtaining bases for their nullspace and for their range. It results in a sequence of matrices all with the same nullspace, with the last matrix making the nullspace quite evident.

(3) A change of basis amounts to interpolation and vice versa.

(4) Since the eigenstructure of a linear map A on a vector space X over the scalar field IF is of interest in the study of the sequence A^0 = id, A^1 = A, A^2, ... of the powers of A, its derivation and discussion is best handled in terms of polynomials p(A) in that linear map with coefficients in IF. While determinants are indispensable and powerful tools in certain situations, they do not provide the best path to understanding eigenstructure.

(5) In applications, vector spaces are, by and large, spaces of maps with the vector operations defined pointwise, or derived from such spaces in a straightforward manner. The coordinate spaces IF^n are merely the simplest examples of such vector spaces.

These realizations led me to volunteer to teach the follow-up linear algebra course offered in the Mathematics Department of the University of Wisconsin-Madison and taken by undergraduate Math majors and graduate students from science and engineering departments.
Each time, I produced lecture notes, and this book derives from them. The numbered problems, posed throughout the book and typeset in the smaller font of this paragraph, are meant to deepen understanding of the material. While there are answers available to all the problems, the book contains only the answers to those problems that are referred to in the text for some missing proof detail or as an illustration of a point being made. The latter problems are starred.

I adhere to certain notational conventions:

(1) I distinguish between equalities being asserted or derived and equalities that hold by definition. For the latter, I use := or =: depending on which side is being defined.

(2) I distinguish between terms or phrases being defined and those being emphasized. The former are set in boldface (to make them easy to find), the latter in italic.

(3) I use only one numerical sequence for labeling all the equations, figures and formal statements in each chapter, as that seems to me more helpful for finding any particular labeled item than the more standard separate enumeration of various classes of items. I do, however, number separately the problems given throughout.

(4) I use standard symbols, like ∀ ('for all') and ∃ ('there exist(s)') with the subsequent 'such that' not written out, and standard abbreviations, like 'iff' for 'if and only if', and use braces, {...}, only to delimit the description of a set.

(5) A question mark in an equation indicates the unknown item, the item for which to solve the equation.

(6) n-vectors, i.e., elements of R^n or, more generally, of IF^n, are written in boldface, like x ∈ R^n, with their entries written in subscripted italics, e.g., x = (x_1, ..., x_n). In particular, n-vectors are not written as 1-column matrices.

The study of Linear Algebra is incomplete without some numerical experimentation. I carry out such experimentation with the help of MATLAB, a program that has grown well beyond its initial purpose of being a "Matrix laboratory" into a very handy tool for experimentation in general scientific computing. Throughout this book, there are paragraphs, typeset like this one, that provide information about MATLAB essential for experimenting with the material under discussion. Some of the problems also require MATLAB, but most of these are easily adapted to other programming languages. With that proviso, any reader not interested in numerical experimentation or well familiar with MATLAB can safely skip all such paragraphs.

Overview

Here is a quick run-down on this book, with various terms to be learned by studying this book printed in boldface.
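As a taste of the numerical experimentation the preface describes, here is a minimal MATLAB sketch of realization (2) — elimination exposing bases for ran A and null A; the test matrix is an arbitrary example of mine, not one from the book:

    A = [1 2 3; 2 4 7; 1 2 4];     % arbitrary rank-2 example
    [R, piv] = rref(A)             % reduced row echelon form; piv lists the pivot columns
    basis_ran  = A(:, piv)         % pivot columns of A: a basis for ran A
    basis_null = null(A, 'r')      % 'rational' nullspace basis read off from R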
Recommended publications
  • Seminar VII for the Course GROUP THEORY in PHYSICS Michael Flohr
    Seminar VII for the course GROUP THEORY IN PHYSICS, Michael Flohr. The classical Lie groups, 25 January 2005. MATRIX LIE GROUPS. Most Lie groups one ever encounters in physics are realized as matrix Lie groups and thus as subgroups of GL(n, R) or GL(n, C). This is the group of invertible n × n matrices with coefficients in R or C, respectively. This is a Lie group, since it forms, as an open subset of the vector space of n × n matrices, a manifold. Matrix multiplication is certainly a differentiable map, as is taking the inverse via Cramer's rule. The only condition defining the open subset is that the determinant must not be zero, which implies that dim_K GL(n, K) = n² is the same as that of the vector space M_n(K). However, GL(n, R) is not connected, because we cannot move continuously from a matrix with determinant less than zero to one with determinant larger than zero. It is worth mentioning that gl(n, K) is the vector space of all n × n matrices over the field K, equipped with the standard commutator as Lie bracket. We can describe most other Lie groups as subgroups of GL(n, K) for either K = R or K = C. There are two ways to do so. Firstly, one can give restricting equations to the coefficients of the matrices. Secondly, one can find subgroups of the automorphisms of V ≅ K^n which conserve a given structure on K^n. In the following, we give some examples for this: SL(n, K).
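    As a concrete instance of the two constructions just named (added for illustration; not part of the excerpt), the special linear group arises from a restricting equation on the entries, and the orthogonal group from conserving a structure on K^n:

    \[
    \mathrm{SL}(n,\mathbb{K}) = \{ A \in \mathrm{GL}(n,\mathbb{K}) : \det A = 1 \},
    \qquad
    \mathrm{O}(n) = \{ A \in \mathrm{GL}(n,\mathbb{R}) : A^{\mathsf{T}} A = I \},
    \]

    the latter being the automorphisms conserving the standard bilinear form \(\langle x, y\rangle = x^{\mathsf{T}} y\) on \(\mathbb{R}^n\).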
  • Inertia of the Matrix [(p_i + p_j)^r]
    isid/ms/2013/12, October 20, 2013, http://www.isid.ac.in/~statmath/eprints

    Inertia of the matrix [(p_i + p_j)^r]. Rajendra Bhatia and Tanvi Jain, Indian Statistical Institute, Delhi Centre, 7, SJSS Marg, New Delhi 110 016, India.

    Abstract. Let p_1, ..., p_n be positive real numbers. It is well known that for every r < 0 the matrix [(p_i + p_j)^r] is positive definite. Our main theorem gives a count of the number of positive and negative eigenvalues of this matrix when r > 0. Connections with some other matrices that arise in Loewner's theory of operator monotone functions and in the theory of spline interpolation are discussed.

    1. Introduction. Let p_1, p_2, ..., p_n be distinct positive real numbers. The n × n matrix C = [1/(p_i + p_j)] is known as the Cauchy matrix. The special case p_i = i gives the Hilbert matrix H = [1/(i + j)]. Both matrices have been studied by several authors in diverse contexts and are much used as test matrices in numerical analysis. The Cauchy matrix is known to be positive definite. It possesses a stronger property: for each r > 0 the entrywise power C^{∘r} = [1/(p_i + p_j)^r] is positive definite. (See [4] for a proof.) The object of this paper is to study positivity properties of the related family of matrices

    P_r = [(p_i + p_j)^r],  r ≥ 0.  (1)

    The inertia of a Hermitian matrix A is the triple In(A) = (π(A), ζ(A), ν(A)), in which π(A), ζ(A) and ν(A) stand for the number of positive, zero, and negative eigenvalues of A, respectively.
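    The inertia phenomenon is easy to inspect numerically. A minimal MATLAB sketch (my own illustration, not from the paper; the values of p and r are arbitrary, and p' + p relies on MATLAB's implicit expansion):

    p = [0.5 1 2 3.5];          % distinct positive numbers
    r = 3;                      % an exponent r > 0
    Pr = (p' + p).^r;           % the matrix P_r = [(p_i + p_j)^r]
    eig(Pr)                     % signs of the eigenvalues give (pi, zeta, nu)
    eig((p' + p).^(-1))         % r < 0 (Cauchy case): all eigenvalues positive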
  • Fast Approximation Algorithms for Cauchy Matrices, Polynomials and Rational Functions
    City University of New York (CUNY), CUNY Academic Works, Computer Science Technical Reports, 2013. TR-2013011. More information about this work at: https://academicworks.cuny.edu/gc_cs_tr/386. Contact: [email protected]

    Fast Approximation Algorithms for Cauchy Matrices, Polynomials and Rational Functions. Victor Y. Pan, Department of Mathematics and Computer Science, Lehman College and the Graduate Center of the City University of New York, Bronx, NY 10468 USA. [email protected], home page: http://comet.lehman.cuny.edu/vpan/

    Abstract. The papers [MRT05], [CGS07], [XXG12], and [XXCBa] have combined the advanced FMM techniques with transformations of matrix structures (traced back to [P90]) in order to devise numerically stable algorithms that approximate the solutions of Toeplitz, Hankel, Toeplitz-like, and Hankel-like linear systems of equations in nearly linear arithmetic time, versus classical cubic time and quadratic time of the previous advanced algorithms. We show that the power of these approximation algorithms can be extended to yield similar results for computations with other matrices that have displacement structure, which includes Vandermonde and Cauchy matrices, as well as to polynomial and rational evaluation and interpolation. The resulting decrease of the running time of the known approximation algorithms is again by order of magnitude, from quadratic to nearly linear.
  • An Accurate Computational Solution of Totally Positive Cauchy Linear Systems
    AN ACCURATE COMPUTATIONAL SOLUTION OF TOTALLY POSITIVE CAUCHY LINEAR SYSTEMS. Abdoulaye Cissé and Andrew Anda, Computer Science Department, St. Cloud State University, St. Cloud, MN 56301. [email protected], [email protected]

    Abstract. The Cauchy linear systems C(x, y)V = b, where C(x, y) = [1/(x_i − y_j)], x = (x_i), y = (y_i), V = (v_i), b = (b_i), can be solved very accurately regardless of condition number using Björck-Pereyra-type methods. We explain this phenomenon by the following facts. First, the Björck-Pereyra methods, written in matrix form, are essentially an accurate and efficient bidiagonal decomposition of the inverse of the matrix, as obtained by Neville elimination with no pivoting. Second, each nontrivial entry of this bidiagonal decomposition is a product of two quotients of (initial) minors. We use this method to derive and implement a new method of accurately and efficiently solving totally positive Cauchy linear systems in time complexity O(n²).

    Keywords: Accuracy, Björck-Pereyra-type methods, Cauchy linear systems, floating point architectures, generalized Vandermonde linear systems, Neville elimination, totally positive, LDU decomposition.

    Note: In this paper we assume that the nodes x_i and y_j are pairwise distinct and C is totally positive.

    1 Background. Definition 1.1 A Cauchy matrix is defined as C(x, y) = [1/(x_i − y_j)], for i, j = 1, 2, ..., n, (1.1) where the nodes x_i and y_j are assumed pairwise distinct. C(x, y) is totally positive (TP) if 0 < y_1 < y_2 < ··· < y_{n−1} < y_n < x_1 < x_2 < ··· < x_{n−1} < x_n.
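    A quick MATLAB illustration of this setup (mine, not from the paper; it uses ordinary backslash rather than the Björck-Pereyra-type method the authors develop):

    y = [1 2 3 4];  x = [5 6 7 8];   % 0 < y_1 < ... < y_n < x_1 < ... < x_n, so C is TP
    C = 1 ./ (x' - y);               % C(i,j) = 1/(x_i - y_j), all entries positive
    b = ones(4, 1);
    v = C \ b                        % solve C v = b by Gaussian elimination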
  • Displacement Rank of the Drazin Inverse, Huaian Diao, Yimin Wei, Sanzheng Qiao
    Available online at www.sciencedirect.com. Journal of Computational and Applied Mathematics 167 (2004) 147–161. www.elsevier.com/locate/cam. doi:10.1016/j.cam.2003.09.050

    Displacement rank of the Drazin inverse. Huaian Diao (Institute of Mathematics, Fudan University, Shanghai 200433, PR China), Yimin Wei (Department of Mathematics, Fudan University, Shanghai 200433, PR China), Sanzheng Qiao (Department of Computing and Software, McMaster University, Hamilton, Ont., Canada L8S 4L7). Received 20 February 2003; received in revised form 4 August 2003.

    Abstract. In this paper, we study the displacement rank of the Drazin inverse. Both Sylvester displacement and the generalized displacement are discussed. We present upper bounds for the ranks of the displacements of the Drazin inverse. The general results are applied to the group inverse of a structured matrix such as close-to-Toeplitz, generalized Cauchy, Toeplitz-plus-Hankel, and Bezoutians. © 2003 Elsevier B.V. All rights reserved.

    MSC: 15A09; 65F20. Keywords: Drazin inverse; Displacement; Structured matrix; Jordan canonical form

    1. Introduction. Displacement gives a quantitative way of identifying the structure of a matrix. Consider an n × n Toeplitz matrix

    \[
    T = \begin{bmatrix}
    t_0     & t_{-1} & \cdots & t_{-n+1} \\
    t_1     & t_0    & \ddots & \vdots   \\
    \vdots  & \ddots & \ddots & t_{-1}   \\
    t_{n-1} & \cdots & t_1    & t_0
    \end{bmatrix}.
    \]
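    The displacement idea is easy to see in MATLAB. A minimal sketch of the classical Toeplitz case (an illustration of low displacement rank in general, not of the paper's Sylvester or generalized displacement of the Drazin inverse):

    c = [1 2 3 4];  r = [1 5 6 7];   % first column and first row, c(1) == r(1)
    T = toeplitz(c, r);              % the 4-by-4 Toeplitz matrix [t_{i-j}]
    Z = diag(ones(3, 1), -1);        % down-shift matrix
    rank(T - Z*T*Z')                 % at most 2: Toeplitz structure has low displacement rank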
  • arXiv:2104.06123v2 [nlin.SI] 19 Jul 2021
    LINEAR INTEGRAL EQUATIONS AND TWO-DIMENSIONAL TODA SYSTEMS. YUE YIN AND WEI FU.

    Abstract. The direct linearisation framework is presented for the two-dimensional Toda equations associated with the infinite-dimensional Lie algebras A_∞, B_∞ and C_∞, as well as the Kac–Moody algebras A_r^(1), A_{2r}^(2), C_r^(1) and D_{r+1}^(2) for arbitrary integers r ∈ Z_+, from the aspect of a set of linear integral equations in a certain form. Such a scheme not only provides a unified perspective to understand the underlying integrability structure, but also induces the direct linearising type solution potentially leading to the universal solution space, for each class of the two-dimensional Toda system. As particular applications of this framework to the two-dimensional Toda lattices, we rediscover the Lax pairs and the adjoint Lax pairs and simultaneously construct the generalised Cauchy matrix solutions.

    1. Introduction. In the modern theory of integrable systems, the notion of integrability of nonlinear equations often refers to the property that a differential/difference equation is exactly solvable under an initial-boundary condition, which in many cases allows us to construct explicit solutions. Motivated by this, many mathematical methods were invented and developed to search for explicit solutions of nonlinear models, for instance, the inverse scattering transform, the Darboux transform, Hirota's bilinear method, as well as the algebro-geometric method, etc., see e.g. the monographs [1,11,16,26]. These techniques not only explained the nonlinear phenomena such as solitary waves and periodic waves in nature mathematically, but also motivated the discovery of a huge number of nonlinear equations that possess "nice" algebraic and geometric properties.
  • Fast Approximate Computations with Cauchy Matrices and Polynomials
    Fast Approximate Computations with Cauchy Matrices and Polynomials. Victor Y. Pan, Departments of Mathematics and Computer Science, Lehman College and the Graduate Center of the City University of New York, Bronx, NY 10468 USA. [email protected], http://comet.lehman.cuny.edu/vpan/ (arXiv:1506.02285v3 [math.NA] 17 Apr 2017)

    Abstract. Multipoint polynomial evaluation and interpolation are fundamental for modern symbolic and numerical computing. The known algorithms solve both problems over any field of constants in nearly linear arithmetic time, but the cost grows to quadratic for numerical solution. We fix this discrepancy: our new numerical algorithms run in nearly linear arithmetic time. At first we restate our goals as the multiplication of an n × n Vandermonde matrix by a vector and the solution of a Vandermonde linear system of n equations. Then we transform the matrix into a Cauchy structured matrix with some special features. By exploiting them, we approximate the matrix by a generalized hierarchically semiseparable matrix, which is a structured matrix of a different class. Finally we accelerate our solution to the original problems by applying the Fast Multipole Method to the latter matrix. Our resulting numerical algorithms run in nearly optimal arithmetic time when they perform the above fundamental computations with polynomials, Vandermonde matrices, transposed Vandermonde matrices, and a large class of Cauchy and Cauchy-like matrices. Some of our techniques may be of independent interest.

    Key words: Polynomial evaluation; Rational evaluation; Interpolation; Vandermonde matrices; Transformation of matrix structures; Cauchy matrices; Fast Multipole Method; HSS matrices; Matrix compression. AMS Subject Classification: 12Y05, 15A04, 47A65, 65D05, 68Q25.

    1 Introduction. 1.1 The background and our progress. Multipoint polynomial evaluation and interpolation are fundamental for modern symbolic and numerical computing.
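    The restatement of multipoint evaluation as a Vandermonde matrix-vector product is easy to spell out in MATLAB (a small illustration of the problem statement only, not of the paper's nearly-linear-time algorithm):

    x = [1; 2; 3; 4];               % evaluation points
    a = [2; 0; 1];                  % coefficients of p(t) = 2 + 0*t + t^2, lowest degree first
    V = x.^(0:2);                   % V(i,j) = x_i^(j-1), by implicit expansion
    V*a                             % multipoint evaluation: equals polyval([1 0 2], x)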
  • Positivity Properties of the Matrix [(i + j)^(i+j)]
    isid/ms/2015/11, September 21, 2015, http://www.isid.ac.in/~statmath/index.php?module=Preprint

    Positivity properties of the matrix [(i + j)^(i+j)]. Rajendra Bhatia and Tanvi Jain, Indian Statistical Institute, Delhi Centre, 7, SJSS Marg, New Delhi 110 016, India.

    Abstract. Let p_1 < p_2 < ··· < p_n be positive real numbers. It is shown that the matrix whose (i, j) entry is (p_i + p_j)^(p_i+p_j) is infinitely divisible, nonsingular and totally positive.

    1. Introduction. Matrices whose entries are obtained by assembling natural numbers in special ways often possess interesting properties. The most famous example of such a matrix is the Hilbert matrix H = [1/(i + j − 1)], which has inspired a lot of work in diverse areas. Some others are the min matrix M = [min(i, j)], and the Pascal matrix P = [binom(i+j, i)]. There is a considerable body of literature around each of these matrices, a sample of which can be found in [3], [5] and [7]. In this note we initiate the study of one more matrix of this type. Let A be the n × n matrix with its (i, j) entry equal to (i + j − 1)^(i+j−1). Thus

    \[
    A = \begin{bmatrix}
    1      & 2^2    & 3^3    & \cdots & n^n \\
    2^2    & 3^3    & 4^4    & \cdots & (n+1)^{n+1} \\
    3^3    & 4^4    & 5^5    & \cdots & \cdots \\
    \vdots &        &        &        & \vdots \\
    n^n    & \cdots & \cdots & \cdots & (2n-1)^{2n-1}
    \end{bmatrix}. \tag{1}
    \]

    More generally, let p_1 < p_2 < ··· < p_n be positive real numbers, and consider the n × n matrix

    B = [(p_i + p_j)^(p_i+p_j)].  (2)

    The special choice p_i = i − 1/2 in (2) gives us the matrix (1).
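    A small MATLAB check of these claims (my illustration, not from the note; since A is symmetric, infinite divisibility plus nonsingularity should show up as all-positive eigenvalues):

    n = 4;
    [I, J] = ndgrid(1:n, 1:n);
    A = (I + J - 1).^(I + J - 1);   % the matrix (1), entries (i+j-1)^(i+j-1)
    eig(A)                          % all positive for this symmetric matrix
    det(A)                          % nonzero: A is nonsingular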
  • Combined Matrices of Matrices in Special Classes
    Combined matrices of matrices in special classes. Miroslav Fiedler, Institute of Computer Science AS CR, May 2010.

    Let A be a nonsingular (complex, or even more general) matrix. In [3], we called the Hadamard (entrywise) product A ∘ (A^(−1))^T the combined matrix of A, and denoted it as C(A). It has good properties: All its row- and column-sums are equal to one. If we multiply A by a nonsingular diagonal matrix from the left or from the right, C(A) will not change. It is a long standing problem to characterize the range of C(A) if A runs through the set of all positive definite n × n matrices. A partial answer describing the diagonal entries of C(A) in this case was given in [2]:

    Theorem 1. A necessary and sufficient condition that the numbers a_11, a_22, ..., a_nn are the diagonal entries of a positive definite matrix and α_11, α_22, ..., α_nn the diagonal entries of its inverse matrix, is the fulfilling of:
    1. a_ii > 0, α_ii > 0 for all i,
    2. a_ii α_ii − 1 ≥ 0 for all i,
    3. 2 max_i (√(a_ii α_ii) − 1) ≤ Σ_i (√(a_ii α_ii) − 1).

    In the notation above, it means that a matrix M = [m_ik] can serve as C(A) for A positive definite only if its diagonal entries satisfy m_ii ≥ 1 and 2 max_i (√m_ii − 1) ≤ Σ_i (√m_ii − 1). In fact, there is also another necessary condition, namely, that the matrix M − I, I being the identity matrix, is positive semidefinite.
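    The stated properties are easy to verify in MATLAB (a minimal sketch with an arbitrary nonsingular example of mine, not from the paper):

    A = [4 1 0; 2 5 1; 1 1 3];          % any nonsingular matrix
    C = A .* inv(A).';                  % combined matrix C(A) = A o (A^(-1))^T
    sum(C, 2).'                         % row sums: all ones
    sum(C, 1)                           % column sums: all ones
    D = diag([2 5 7]);
    norm(C - (D*A) .* inv(D*A).')       % ~0: C(A) is unchanged by diagonal scaling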
  • arXiv:1811.08406v1 [math.NA]
    Björck-Pereyra-type methods and total positivity. José-Javier Martínez, Departamento de Física y Matemáticas, Universidad de Alcalá, Alcalá de Henares, Madrid 28871, Spain. E-mail: [email protected] (arXiv:1811.08406v1 [math.NA] 20 Nov 2018)

    Abstract. The approach to solving linear systems with structured matrices by means of the bidiagonal factorization of the inverse of the coefficient matrix is first considered, the starting point being the classical Björck-Pereyra algorithms for Vandermonde systems, published in 1970 and carefully analyzed by Higham in 1987. The work of Higham showed the crucial role of total positivity for obtaining accurate results, which led to the generalization of this approach to totally positive Cauchy, Cauchy-Vandermonde and generalized Vandermonde matrices. Then, the solution of other linear algebra problems (eigenvalue and singular value computation, least squares problems) is addressed, a fundamental tool being the bidiagonal decomposition of the corresponding matrices. This bidiagonal decomposition is related to the theory of Neville elimination, although for achieving high relative accuracy the algorithm of Neville elimination is not used. Numerical experiments showing the good behaviour of these algorithms when compared with algorithms which ignore the matrix structure are also included.

    Keywords: Björck-Pereyra algorithm · Structured matrix · Totally positive matrix · Bidiagonal decomposition · High relative accuracy. Mathematics Subject Classification (2010): 65F05 · 65F15 · 65F20 · 65F35 · 15A23 · 15B05 · 15B48.

    1 Introduction. The second edition of the Handbook of Linear Algebra [25], edited by Leslie Hogben, is substantially expanded from the first edition of 2007 and, in connection with our work, it contains a new chapter by Michael Stewart entitled Fast Algorithms for Structured Matrix Computations (chapter 62) [41].
  • On the q-Analogue of Cauchy Matrices, Alessandro Neri
    On the q-Analogue of Cauchy Matrices. Alessandro Neri ("I am a friend of Finite Geometry!"), 17-21 June 2019, VUB.

    q-Analogues. Model: finite set → finite-dimensional vector space over F_q; element → 1-dimensional subspace; ∅ → {0}; cardinality → dimension; intersection → intersection; union → sum.

    Examples.
    1. Binomials and q-binomials:
    \[
    \binom{n}{k} = \prod_{i=0}^{k-1} \frac{n-i}{k-i}, \qquad
    \begin{bmatrix} n \\ k \end{bmatrix}_q = \prod_{i=0}^{k-1} \frac{1-q^{n-i}}{1-q^{k-i}}.
    \]
    2. (Chu-)Vandermonde and q-Vandermonde identity:
    \[
    \binom{m+n}{k} = \sum_{j=0}^{k} \binom{m}{j}\binom{n}{k-j}, \qquad
    \begin{bmatrix} m+n \\ k \end{bmatrix}_q = \sum_{j=0}^{k} \begin{bmatrix} m \\ k-j \end{bmatrix}_q \begin{bmatrix} n \\ j \end{bmatrix}_q q^{j(m-k+j)}.
    \]
    3. Polynomials and q-polynomials:
    \[
    a_0 + a_1 x + \cdots + a_k x^k, \qquad a_0 x + a_1 x^q + \cdots + a_k x^{q^k}.
    \]
    4. Gamma and q-Gamma functions:
    \[
    \Gamma(z+1) = z\,\Gamma(z), \qquad \Gamma_q(x+1) = \frac{1-q^x}{1-q}\,\Gamma_q(x).
    \]

    Vandermonde Matrix. Let k ≤ n.
    \[
    V = \begin{bmatrix}
    1 & 1 & \cdots & 1 \\
    \alpha_1 & \alpha_2 & \cdots & \alpha_n \\
    \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\
    \vdots & & & \vdots
    \end{bmatrix}
    \]
    For n = k, det(V) ≠ 0 if and only if the α_i's are all distinct, i.e. |{α_1, ..., α_n}| = n. In particular, all the k × k minors of V are non-zero.
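    A quick numerical companion in MATLAB (my illustration; it checks the classical determinant formula det V = prod_{i<j} (alpha_j − alpha_i) for the square case, writing points by rows rather than by columns as on the slide):

    alpha = [2 3 5 7];
    V = alpha(:).^(0:3);            % V(i,j) = alpha_i^(j-1); det agrees up to transpose
    d = 1;
    for i = 1:3
        for j = i+1:4
            d = d * (alpha(j) - alpha(i));   % product of differences
        end
    end
    [det(V), d]                     % agree up to rounding: nonzero iff the points are distinct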
  • Indian Institute of Technology, Bombay, MA 106
    INDIAN INSTITUTE OF TECHNOLOGY, BOMBAY. MA 106, Autumn 2012-13, Note 6: APPLICATIONS OF ROW REDUCTION.

    As we saw at the end of the last note, the Gaussian reduction provides a very practical method for solving systems of linear equations. Its superiority can be seen, for example, by comparing it with other methods, e.g. Cramer's rule. This rule has a very limited applicability, since it applies only when the number of equations equals the number of unknowns, i.e. when the coefficient matrix A in the system

    Ax = b  (1)

    is a square matrix (of order n, say). Even when this is so, the rule is silent about what happens if A is not invertible, i.e. when |A| = 0. And even when it does speak, the formula it gives for the solution, viz.

    x_1 = Δ_1/Δ, x_2 = Δ_2/Δ, ..., x_n = Δ_n/Δ  (2)

    (where Δ = |A|, the determinant of A, and for i = 1, 2, ..., n, Δ_i is the determinant of the matrix obtained by replacing the i-th column of A by the column vector b), involves the computation of n + 1 determinants of order n each. This is a horrendous task even for small values of n. With row reduction, we not only get a complete answer applicable regardless of the size of A, we get it very efficiently. The efficiency of an algorithm depends on the number of various elementary operations such as addition and multiplication that have to be carried out in its execution. This number, in general, depends not only on the algorithm but also on the particular piece of data to which it is applied.
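    Formula (2) translates directly into code. A minimal MATLAB sketch comparing it with row reduction (the system is an arbitrary example of mine, not from the note):

    A = [2 1 1; 1 3 2; 1 0 0];  b = [4; 5; 6];
    x_rowred = A \ b;                % Gaussian reduction (backslash)
    n = 3;  Delta = det(A);  x_cramer = zeros(n, 1);
    for i = 1:n
        Ai = A;  Ai(:, i) = b;       % replace the i-th column of A by b
        x_cramer(i) = det(Ai) / Delta;   % x_i = Delta_i / Delta, as in (2)
    end
    [x_rowred, x_cramer]             % agree up to rounding; Cramer needs n+1 determinants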