Applied Linear Algebra
MAT 3341, Spring/Summer 2019

Alistair Savage
Department of Mathematics and Statistics
University of Ottawa

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Contents

Preface
1 Matrix algebra
  1.1 Conventions and notation
  1.2 Matrix arithmetic
  1.3 Matrices and linear transformations
  1.4 Gaussian elimination
  1.5 Matrix inverses
  1.6 LU factorization
2 Matrix norms, sensitivity, and conditioning
  2.1 Motivation
  2.2 Normed vector spaces
  2.3 Matrix norms
  2.4 Conditioning
3 Orthogonality
  3.1 Orthogonal complements and projections
  3.2 Diagonalization
  3.3 Hermitian and unitary matrices
  3.4 The spectral theorem
  3.5 Positive definite matrices
  3.6 QR factorization
  3.7 Computing eigenvalues
4 Generalized diagonalization
  4.1 Singular value decomposition
  4.2 Fundamental subspaces and principal components
  4.3 Pseudoinverses
  4.4 Jordan canonical form
  4.5 The matrix exponential
5 Quadratic forms
  5.1 Definitions
  5.2 Diagonalization of quadratic forms
  5.3 Rayleigh's principle and the min-max theorem
Index

Preface

These are notes for the course Applied Linear Algebra (MAT 3341) at the University of Ottawa. This is a third course in linear algebra. The prerequisites are uOttawa courses MAT 1322 and (MAT 2141 or MAT 2342).

In this course we will explore aspects of linear algebra that are of particular use in concrete applications. For example, we will learn how to factor matrices in various ways that aid in solving linear systems. We will also learn how one can effectively compute estimates of eigenvalues when solving for precise ones is impractical. In addition, we will investigate the theory of quadratic forms. The course will involve a mixture of theory and computation. It is important to understand why our methods work (the theory) in addition to being able to apply the methods themselves (the computation).

Acknowledgements: I would like to thank Benoit Dionne, Monica Nevins, and Mike Newman for sharing with me their lecture notes for this course.

Alistair Savage

Course website: https://alistairsavage.ca/mat3341

Chapter 1: Matrix algebra

We begin this chapter by briefly recalling some matrix algebra that you learned in previous courses. In particular, we review matrix arithmetic (matrix addition, scalar multiplication, the transpose, and matrix multiplication), linear transformations, and Gaussian elimination (row reduction). Next we discuss matrix inverses. Although you have seen the concept of a matrix inverse in previous courses, we delve into the topic in further detail. In particular, we will investigate the concept of one-sided inverses. We then conclude the chapter with a discussion of LU factorization, which is a very useful technique for solving linear systems.
1.1 Conventions and notation

We let Z denote the set of integers, and let N = {0, 1, 2, ...} denote the set of natural numbers.

In this course we will work over the field R of real numbers or the field C of complex numbers, unless otherwise specified. To handle both cases simultaneously, we will use the notation F to denote the field of scalars. So F = R or F = C, unless otherwise specified. We call the elements of F scalars. We let F^× = F \ {0} denote the set of nonzero scalars.

We will use uppercase roman letters to denote matrices: A, B, C, M, N, etc. We will use the corresponding lowercase letters to denote the entries of a matrix. Thus, for instance, a_{ij} is the (i, j)-entry of the matrix A. We will sometimes write A = [a_{ij}] to emphasize this. In some cases, we will separate the indices with a comma when there is some chance for confusion, e.g. a_{i,i+1} versus a_{ii+1}.

Recall that a matrix A has size m × n if it has m rows and n columns:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}.$$

We let M_{m,n}(F) denote the set of all m × n matrices with entries in F. We let GL(n, F) denote the set of all invertible n × n matrices with entries in F. (Here `GL' stands for general linear group.) If a_1, ..., a_n ∈ F, we define
$$\operatorname{diag}(a_1, \dots, a_n) = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & a_n \end{bmatrix}.$$

We will use boldface lowercase letters a, b, x, y, etc. to denote vectors. (In class, we will often write vectors as ~a, ~b, etc. since bold is hard to write on the blackboard.) Most of the time, our vectors will be elements of F^n. (Although, in general, they can be elements of any vector space.) For vectors in F^n, we denote their components with the corresponding non-bold letter with subscripts. We will write vectors x ∈ F^n in column notation:
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad x_1, x_2, \dots, x_n \in \mathbb{F}.$$
Sometimes, to save space, we will also write this vector as x = (x_1, x_2, ..., x_n).

For 1 ≤ i ≤ n, we let e_i denote the i-th standard basis vector of F^n. This is the vector
$$\mathbf{e}_i = (0, \dots, 0, 1, 0, \dots, 0),$$
where the 1 is in the i-th position. Then {e_1, e_2, ..., e_n} is a basis for F^n. Indeed, every x ∈ F^n can be written uniquely as the linear combination
$$\mathbf{x} = x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 + \cdots + x_n \mathbf{e}_n.$$

1.2 Matrix arithmetic

We now quickly review the basic matrix operations. Further detail on the material in this section can be found in [Nic, §§2.1–2.3].

1.2.1 Matrix addition and scalar multiplication

We add matrices of the same size componentwise:
$$A + B = [a_{ij} + b_{ij}].$$
If A and B are of different sizes, then the sum A + B is not defined.

We define the negative of a matrix A by
$$-A = [-a_{ij}].$$
Then the difference of matrices of the same size is defined by
$$A - B = A + (-B) = [a_{ij} - b_{ij}].$$
If k ∈ F is a scalar, then we define the scalar multiple
$$kA = [k a_{ij}].$$

We denote the zero matrix by 0. This is the matrix with all entries equal to zero. Note that there is some possibility for confusion here since we will use 0 to denote the real (or complex) number zero, as well as the zero matrices of different sizes. The context should make it clear which zero we mean. The context should also make clear what size of zero matrix we are considering. For example, if A ∈ M_{m,n}(F) and we write A + 0, then 0 must denote the m × n zero matrix here.
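As a brief computational aside (not part of the original notes), here is a minimal NumPy sketch of these componentwise operations; the library choice and the particular 2 × 3 matrices are assumptions made purely for illustration.

```python
import numpy as np

# Two arbitrary 2 x 3 example matrices (illustrative only, not from the notes).
A = np.array([[1.0, -2.0, 0.5],
              [3.0,  0.0, 4.0]])
B = np.array([[0.0,  7.0, -1.0],
              [2.0, -3.0,  5.0]])
k = 3.0

print(A + B)   # componentwise sum: [a_ij + b_ij]
print(-A)      # negative: [-a_ij]
print(A - B)   # difference: A + (-B) = [a_ij - b_ij]
print(k * A)   # scalar multiple: [k * a_ij]

# The zero matrix of the same size acts as an additive identity.
Z = np.zeros_like(A)
print(np.array_equal(A + Z, A))  # True
```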
The following proposition summarizes the important properties of matrix addition and scalar multiplication.

Proposition 1.2.1. Let A, B, and C be m × n matrices and let k, p ∈ F be scalars. Then we have the following:
(a) A + B = B + A (commutativity)
(b) A + (B + C) = (A + B) + C (associativity)
(c) 0 + A = A (0 is an additive identity)
(d) A + (−A) = 0 (−A is the additive inverse of A)
(e) k(A + B) = kA + kB (scalar multiplication is distributive over matrix addition)
(f) (k + p)A = kA + pA (scalar multiplication is distributive over scalar addition)
(g) (kp)A = k(pA)
(h) 1A = A

Remark 1.2.2. Proposition 1.2.1 can be summarized as stating that the set M_{m,n}(F) is a vector space over the field F under the operations of matrix addition and scalar multiplication.

1.2.2 Transpose

The transpose of an m × n matrix A, written A^T, is the n × m matrix whose rows are the columns of A in the same order. In other words, the (i, j)-entry of A^T is the (j, i)-entry of A. So,
$$\text{if } A = [a_{ij}], \text{ then } A^T = [a_{ji}].$$
We say the matrix A is symmetric if A^T = A. Note that this implies that all symmetric matrices are square, that is, they are of size n × n for some n.

Example 1.2.3. We have
$$\begin{bmatrix} \pi & i & -1 \\ 5 & 7 & 3/2 \end{bmatrix}^T = \begin{bmatrix} \pi & 5 \\ i & 7 \\ -1 & 3/2 \end{bmatrix}.$$
The matrix
$$\begin{bmatrix} 1 & -5 & 7 \\ -5 & 0 & 8 \\ 7 & 8 & 9 \end{bmatrix}$$
is symmetric.

Proposition 1.2.4. Let A and B denote matrices of the same size, and let k ∈ F. Then we have the following:
(a) (A^T)^T = A
(b) (kA)^T = kA^T
(c) (A + B)^T = A^T + B^T

1.2.3 Matrix-vector multiplication

Suppose A is an m × n matrix with columns a_1, a_2, ..., a_n ∈ F^m:
$$A = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{bmatrix}.$$
For x ∈ F^n, we define the matrix-vector product
$$A\mathbf{x} := x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n \in \mathbb{F}^m.$$

Example 1.2.5. If
$$A = \begin{bmatrix} 2 & -1 & 0 \\ 3 & 1/2 & \pi \\ -2 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad \mathbf{x} = \begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix},$$
then
$$A\mathbf{x} = -1 \begin{bmatrix} 2 \\ 3 \\ -2 \\ 0 \end{bmatrix} + 1 \begin{bmatrix} -1 \\ 1/2 \\ 1 \\ 0 \end{bmatrix} + 2 \begin{bmatrix} 0 \\ \pi \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -3 \\ -5/2 + 2\pi \\ 5 \\ 0 \end{bmatrix}.$$

1.2.4 Matrix multiplication

Suppose A ∈ M_{m,n}(F) and B ∈ M_{n,k}(F).
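As another computational aside (again not part of the original notes), the following NumPy sketch checks Example 1.2.5 by forming Ax both as the linear combination of the columns of A, exactly as in the definition above, and with NumPy's built-in product. It then illustrates the standard fact that each column of a product AB is A applied to the corresponding column of B; the 3 × 2 matrix B here is an arbitrary example chosen purely for illustration.

```python
import numpy as np

# The matrix A and vector x from Example 1.2.5.
A = np.array([[ 2.0, -1.0, 0.0   ],
              [ 3.0,  0.5, np.pi ],
              [-2.0,  1.0, 1.0   ],
              [ 0.0,  0.0, 0.0   ]])
x = np.array([-1.0, 1.0, 2.0])

# Ax as the linear combination of the columns of A (the definition above).
Ax_by_columns = sum(x[j] * A[:, j] for j in range(A.shape[1]))

# The same product computed with NumPy's built-in matrix-vector product.
Ax_builtin = A @ x

print(Ax_by_columns)                           # [-3, -2.5 + 2*pi, 5, 0]
print(np.allclose(Ax_by_columns, Ax_builtin))  # True

# An arbitrary 3 x 2 example matrix B (not from the notes): the j-th column
# of AB is the matrix-vector product of A with the j-th column of B.
B = np.array([[1.0,  0.0],
              [2.0, -1.0],
              [0.0,  3.0]])
AB_by_columns = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
print(np.allclose(AB_by_columns, A @ B))       # True
```

Assembling these products column by column is of course not how one would compute them in practice; it is only meant to make the definitions concrete.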
