Applied Linear Algebra

MAT 3341, Spring/Summer 2019
Alistair Savage
Department of Mathematics and Statistics, University of Ottawa

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Contents

Preface
1 Matrix algebra
  1.1 Conventions and notation
  1.2 Matrix arithmetic
  1.3 Matrices and linear transformations
  1.4 Gaussian elimination
  1.5 Matrix inverses
  1.6 LU factorization
2 Matrix norms, sensitivity, and conditioning
  2.1 Motivation
  2.2 Normed vector spaces
  2.3 Matrix norms
  2.4 Conditioning
3 Orthogonality
  3.1 Orthogonal complements and projections
  3.2 Diagonalization
  3.3 Hermitian and unitary matrices
  3.4 The spectral theorem
  3.5 Positive definite matrices
  3.6 QR factorization
  3.7 Computing eigenvalues
4 Generalized diagonalization
  4.1 Singular value decomposition
  4.2 Fundamental subspaces and principal components
  4.3 Pseudoinverses
  4.4 Jordan canonical form
  4.5 The matrix exponential
5 Quadratic forms
  5.1 Definitions
  5.2 Diagonalization of quadratic forms
  5.3 Rayleigh's principle and the min-max theorem
Index

Preface

These are notes for the course Applied Linear Algebra (MAT 3341) at the University of Ottawa. This is a third course in linear algebra. The prerequisites are uOttawa courses MAT 1322 and (MAT 2141 or MAT 2342).

In this course we will explore aspects of linear algebra that are of particular use in concrete applications. For example, we will learn how to factor matrices in various ways that aid in solving linear systems. We will also learn how one can effectively compute estimates of eigenvalues when solving for precise ones is impractical. In addition, we will investigate the theory of quadratic forms.

The course will involve a mixture of theory and computation. It is important to understand why our methods work (the theory) in addition to being able to apply the methods themselves (the computation).

Acknowledgements: I would like to thank Benoit Dionne, Monica Nevins, and Mike Newman for sharing with me their lecture notes for this course.

Alistair Savage

Course website: https://alistairsavage.ca/mat3341

Chapter 1: Matrix algebra

We begin this chapter by briefly recalling some matrix algebra that you learned in previous courses. In particular, we review matrix arithmetic (matrix addition, scalar multiplication, the transpose, and matrix multiplication), linear transformations, and Gaussian elimination (row reduction). Next we discuss matrix inverses. Although you have seen the concept of a matrix inverse in previous courses, we delve into the topic in further detail. In particular, we will investigate the concept of one-sided inverses.
We then conclude the chapter with a discussion of LU factorization, which is a very useful technique for solving linear systems.

1.1 Conventions and notation

We let Z denote the set of integers, and let N = {0, 1, 2, ...} denote the set of natural numbers.

In this course we will work over the field R of real numbers or the field C of complex numbers, unless otherwise specified. To handle both cases simultaneously, we will use the notation F to denote the field of scalars. So F = R or F = C, unless otherwise specified. We call the elements of F scalars. We let F^× = F \ {0} denote the set of nonzero scalars.

We will use uppercase roman letters to denote matrices: A, B, C, M, N, etc. We will use the corresponding lowercase letters to denote the entries of a matrix. Thus, for instance, a_{ij} is the (i, j)-entry of the matrix A. We will sometimes write A = [a_{ij}] to emphasize this. In some cases, we will separate the indices with a comma when there is some chance for confusion, e.g. a_{i,i+1} versus a_{ii+1}.

Recall that a matrix A has size m × n if it has m rows and n columns:

    A = \begin{bmatrix}
        a_{11} & a_{12} & \cdots & a_{1n} \\
        a_{21} & a_{22} & \cdots & a_{2n} \\
        \vdots & \vdots & \ddots & \vdots \\
        a_{m1} & a_{m2} & \cdots & a_{mn}
    \end{bmatrix}.

We let M_{m,n}(F) denote the set of all m × n matrices with entries in F. We let GL(n, F) denote the set of all invertible n × n matrices with entries in F. (Here 'GL' stands for general linear group.) If a_1, ..., a_n ∈ F, we define

    diag(a_1, ..., a_n) = \begin{bmatrix}
        a_1 & 0 & \cdots & 0 \\
        0 & a_2 & & \vdots \\
        \vdots & & \ddots & 0 \\
        0 & \cdots & 0 & a_n
    \end{bmatrix}.

We will use boldface lowercase letters a, b, x, y, etc. to denote vectors. (In class, we will often write vectors as ~a, ~b, etc., since bold is hard to write on the blackboard.) Most of the time, our vectors will be elements of F^n. (Although, in general, they can be elements of any vector space.) For vectors in F^n, we denote their components with the corresponding non-bold letter with subscripts. We will write vectors x ∈ F^n in column notation:

    x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},  x_1, x_2, ..., x_n ∈ F.

Sometimes, to save space, we will also write this vector as x = (x_1, x_2, ..., x_n).

For 1 ≤ i ≤ n, we let e_i denote the i-th standard basis vector of F^n. This is the vector

    e_i = (0, ..., 0, 1, 0, ..., 0),

where the 1 is in the i-th position. Then {e_1, e_2, ..., e_n} is a basis for F^n. Indeed, every x ∈ F^n can be written uniquely as the linear combination

    x = x_1 e_1 + x_2 e_2 + ··· + x_n e_n.

1.2 Matrix arithmetic

We now quickly review the basic matrix operations. Further detail on the material in this section can be found in [Nic, §§2.1–2.3].

1.2.1 Matrix addition and scalar multiplication

We add matrices of the same size componentwise:

    A + B = [a_{ij} + b_{ij}].

If A and B are of different sizes, then the sum A + B is not defined. We define the negative of a matrix A by

    −A = [−a_{ij}].

Then the difference of matrices of the same size is defined by

    A − B = A + (−B) = [a_{ij} − b_{ij}].

If k ∈ F is a scalar, then we define the scalar multiple

    kA = [k a_{ij}].

We denote the zero matrix by 0. This is the matrix with all entries equal to zero. Note that there is some possibility for confusion here, since we will use 0 to denote the real (or complex) number zero as well as the zero matrices of different sizes. The context should make it clear which zero we mean, and also what size of zero matrix we are considering. For example, if A ∈ M_{m,n}(F) and we write A + 0, then 0 must denote the m × n zero matrix.
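Since all of these operations are componentwise, they are easy to check on small examples. The following is a minimal NumPy sketch, added here as an illustration (the matrices and values are arbitrary; this code is not part of the original notes):

```python
import numpy as np

# Two matrices of the same size (arbitrary example values) and a scalar.
A = np.array([[1.0, -2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.5], [-1.0, 2.0]])
k = 3.0

# Addition and scalar multiplication are componentwise.
assert np.array_equal(A + B, np.array([[5.0, -1.5], [-1.0, 5.0]]))
assert np.array_equal(k * A, np.array([[3.0, -6.0], [0.0, 9.0]]))

# A - B = A + (-B), and A + 0 = A for the zero matrix of matching size.
assert np.array_equal(A - B, A + (-B))
assert np.array_equal(A + np.zeros((2, 2)), A)

# From Section 1.1: every x in F^n is uniquely x_1 e_1 + ... + x_n e_n.
x = np.array([7.0, -1.0, 2.0])
e = np.eye(3)  # row e[i] is the standard basis vector e_{i+1}
assert np.array_equal(sum(x[i] * e[i] for i in range(3)), x)
```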
The following proposition summarizes the important properties of matrix addition and scalar multiplication.

Proposition 1.2.1. Let A, B, and C be m × n matrices and let k, p ∈ F be scalars. Then we have the following:
(a) A + B = B + A (commutativity)
(b) A + (B + C) = (A + B) + C (associativity)
(c) 0 + A = A (0 is an additive identity)
(d) A + (−A) = 0 (−A is the additive inverse of A)
(e) k(A + B) = kA + kB (scalar multiplication is distributive over matrix addition)
(f) (k + p)A = kA + pA (scalar multiplication is distributive over scalar addition)
(g) (kp)A = k(pA)
(h) 1A = A

Remark 1.2.2. Proposition 1.2.1 can be summarized as stating that the set M_{m,n}(F) is a vector space over the field F under the operations of matrix addition and scalar multiplication.

1.2.2 Transpose

The transpose of an m × n matrix A, written A^T, is the n × m matrix whose rows are the columns of A in the same order. In other words, the (i, j)-entry of A^T is the (j, i)-entry of A. So,

    if A = [a_{ij}], then A^T = [a_{ji}].

We say the matrix A is symmetric if A^T = A. Note that this implies that all symmetric matrices are square, that is, they are of size n × n for some n.

Example 1.2.3. We have

    \begin{bmatrix} \pi & i & -1 \\ 5 & 7 & 3/2 \end{bmatrix}^T
    = \begin{bmatrix} \pi & 5 \\ i & 7 \\ -1 & 3/2 \end{bmatrix}.

The matrix

    \begin{bmatrix} 1 & -5 & 7 \\ -5 & 0 & 8 \\ 7 & 8 & 9 \end{bmatrix}

is symmetric.

Proposition 1.2.4. Let A and B denote matrices of the same size, and let k ∈ F. Then we have the following:
(a) (A^T)^T = A
(b) (kA)^T = kA^T
(c) (A + B)^T = A^T + B^T

1.2.3 Matrix-vector multiplication

Suppose A is an m × n matrix with columns a_1, a_2, ..., a_n ∈ F^m:

    A = [a_1 a_2 ··· a_n].

For x ∈ F^n, we define the matrix-vector product

    Ax := x_1 a_1 + x_2 a_2 + ··· + x_n a_n ∈ F^m.

Example 1.2.5. If

    A = \begin{bmatrix} 2 & -1 & 0 \\ 3 & 1/2 & \pi \\ -2 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}
    and x = (-1, 1, 2),

then

    Ax = -1 \begin{bmatrix} 2 \\ 3 \\ -2 \\ 0 \end{bmatrix}
       + 1 \begin{bmatrix} -1 \\ 1/2 \\ 1 \\ 0 \end{bmatrix}
       + 2 \begin{bmatrix} 0 \\ \pi \\ 1 \\ 0 \end{bmatrix}
       = \begin{bmatrix} -3 \\ -5/2 + 2\pi \\ 5 \\ 0 \end{bmatrix}.

1.2.4 Matrix multiplication

Suppose A ∈ M_{m,n}(F) and B ∈ M_{n,k}(F).
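Before continuing to matrix multiplication, here is a quick numerical check of the column-combination definition of Ax. This is a minimal NumPy sketch, added as an illustration (not part of the original notes); it reproduces Example 1.2.5 and two of the transpose identities from Proposition 1.2.4:

```python
import numpy as np

# The matrix and vector from Example 1.2.5.
A = np.array([[ 2.0, -1.0, 0.0],
              [ 3.0,  0.5, np.pi],
              [-2.0,  1.0, 1.0],
              [ 0.0,  0.0, 0.0]])
x = np.array([-1.0, 1.0, 2.0])

# Ax as the linear combination x_1 a_1 + x_2 a_2 + x_3 a_3 of the columns of A.
combo = sum(x[j] * A[:, j] for j in range(3))

# It agrees with the built-in matrix-vector product and with the value
# (-3, -5/2 + 2*pi, 5, 0) computed in the example.
assert np.allclose(combo, A @ x)
assert np.allclose(A @ x, [-3.0, -2.5 + 2 * np.pi, 5.0, 0.0])

# Two transpose identities from Proposition 1.2.4.
B = np.ones((4, 3))  # arbitrary matrix of the same size as A
assert np.array_equal((A.T).T, A)
assert np.allclose((A + B).T, A.T + B.T)
```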