Applied Matrix Theory, Math 464/514, Fall 2019

Jens Lorenz

September 23, 2019

Department of Mathematics and Statistics, UNM, Albuquerque, NM 87131

Contents

1 Gaussian Elimination and LU Factorization
  1.1 Gaussian Elimination Without Pivoting
  1.2 Application to $Ax = b + \varepsilon F(x)$
  1.3 Initial Boundary Value Problems
  1.4 Partial Pivoting and the Effect of Rounding
  1.5 First Remarks on Permutations and Permutation Matrices
  1.6 Formal Description of Gaussian Elimination with Partial Pivoting
  1.7 Fredholm's Alternative for Linear Systems Ax = b
  1.8 Application to Strictly Diagonally Dominant Matrices
  1.9 Application of MATLAB
2 Conditioning of Linear Systems
  2.1 Vector Norms and Induced Matrix Norms
  2.2 The Condition Number
  2.3 The Perturbed System $A(x + \tilde x) = b + \tilde b$
  2.4 Example: A Discretized 4th Order Boundary-Value Problem
  2.5 The Neumann Series
  2.6 Data Error and Solution Error
3 Examples of Linear Systems: Discretization Error and Conditioning
  3.1 Difference Approximations of Boundary Value Problems
  3.2 An Approximation Problem and the Hilbert Matrix
4 Rectangular Systems: The Four Fundamental Subspaces of a Matrix
  4.1 Dimensions of Ranges and Rank
  4.2 Conservation of Dimension
  4.3 On the Transpose $A^T$
  4.4 Reduction to Row Echelon Form: An Example
  4.5 The Row Echelon Form and Bases of the Four Fundamental Subspaces
5 Direct Sums and Projectors
  5.1 Complementary Subspaces and Projectors
  5.2 The Matrix Representation of a Projector
  5.3 Orthogonal Complements in $\mathbb{C}^n$
  5.4 The Four Fundamental Subspaces of $A \in \mathbb{C}^{m \times n}$
  5.5 Orthogonal Projectors
6 Variational Problems with Equality Constraints
  6.1 First Order Conditions
  6.2 An Application of Lagrange Multipliers to a Quadratic Form
  6.3 Second Order Conditions for a Local Minimum
  6.4 Supplement
7 Least Squares; Gram–Schmidt and QR Factorization
  7.1 Example of Data Fitting
  7.2 Least Squares Problems and the Normal Equations
  7.3 The Gram–Schmidt Process and QR Factorization
  7.4 Solution of the Normal Equations Using the QR Factorization
  7.5 Householder Reflectors
  7.6 Householder Reduction
8 The Singular Value Decomposition
  8.1 Theoretical Construction of an SVD
  8.2 The SVD and the Four Fundamental Subspaces
  8.3 SVD and Least Squares
  8.4 SVD and Rank
  8.5 SVD and Filtering of Noisy Data
9 Determinants
  9.1 Permutations and Their Signs
    9.1.1 The Group $S_n$
    9.1.2 The Sign of a Permutation
    9.1.3 Transpositions
  9.2 Volumes and Orientation: Intuitive Meaning of the Determinant
    9.2.1 Orientation
    9.2.2 The Case n = 2
  9.3 The Determinant as a Multilinear Function
  9.4 Rules for Determinants
    9.4.1 Product Formula
    9.4.2 The Cases n = 1, 2, 3
    9.4.3 Triangular Matrices
    9.4.4 Existence of $A^{-1}$
    9.4.5 Transpose
    9.4.6 Block Matrices
    9.4.7 Cramer's Rule
    9.4.8 Determinant Formula for $A^{-1}$
    9.4.9 Column Expansion and Row Expansion
  9.5 Remarks on Computing Determinants
  9.6 The Permanent of a Matrix
  9.7 The Characteristic Polynomial
  9.8 Vandermonde Determinants
10 Eigenvalues, Eigenvectors, and Transformation to Block-Diagonal Form
  10.1 Eigenvalues Are Zeros of the Characteristic Polynomial
  10.2 The Geometric and Algebraic Multiplicities of Eigenvalues
  10.3 Similarity Transformations
  10.4 Schur's Transformation to Upper Triangular Form
  10.5 Transformation of Normal Matrices to Diagonal Form
  10.6 Special Classes of Matrices
  10.7 Applications to ODEs
  10.8 Hadamard's Inequality
  10.9 Diagonalizable Matrices
  10.10 Transformation to Block-Diagonal Form
    10.10.1 Two Auxiliary Results
    10.10.2 The Blocking Lemma
    10.10.3 Repeated Blocking
  10.11 Generalized Eigenspaces
  10.12 Summary
  10.13 The Cayley–Hamilton Theorem
11 Similarity Transformations and Systems of ODEs
  11.1 The Scalar Case
  11.2 Introduction of New Variables
12 The Jordan Normal Form
  12.1 Preliminaries and Examples
  12.2 The Rank of a Matrix Product
  12.3 Nilpotent Jordan Matrices
  12.4 The Jordan Form of a Nilpotent Matrix
  12.5 The Jordan Form of a General Matrix
  12.6 Application: The Matrix Exponential
  12.7 Application: Powers of Matrices
13 Complementary Subspaces and Projectors
  13.1 Complementary Subspaces
  13.2 Projectors
  13.3 Matrix Representations of a Projector
  13.4 Orthogonal Complementary Subspaces
  13.5 Invariant Complementary Subspaces and Transformation to Block Form
  13.6 The Range-Nullspace Decomposition
  13.7 The Spectral Theorem for Diagonalizable Matrices
  13.8 Functions of $A$
  13.9 Spectral Projectors in Terms of Right and Left Eigenvectors
14 The Resolvent and Projectors
  14.1 The Resolvent of a Matrix
  14.2 Integral Representation of Projectors
  14.3 Proof of Theorem 14.2
  14.4 Application: Sums of Eigenprojectors under Perturbations
15 Approximate Solution of Large Linear Systems: GMRES
  15.1 GMRES
    15.1.1 The Arnoldi Process
    15.1.2 Application to Minimization over $K_m$
  15.2 Error Estimates
  15.3 Research Project: GMRES and Preconditioning
16 The Courant–Fischer Min-Max Theorem
  16.1 The Min-Max Theorem
  16.2 Eigenvalues of Perturbed Hermitian Matrices
  16.3 Eigenvalues of Submatrices
17 Introduction to Control Theory
  17.1 Controllability
  17.2 General Initial Data
  17.3 Control of the Reversed Pendulum
  17.4 Derivation of the Controlled Reversed Pendulum Equation via Lagrange
  17.5 The Reversed Double Pendulum
  17.6 Optimal Control
  17.7 Linear Systems with Quadratic Cost Function
18 The Discrete Fourier Transform
  18.1 Fourier Expansion
  18.2 Discretization
  18.3 DFT as a Linear Transformation
  18.4 Fourier Series and DFT
19 Fast Fourier Transform
20 Eigenvalues Under Perturbations
  20.1 Right and Left Eigenvectors
  20.2 Perturbations of $A$
21 Perron–Frobenius Theory
  21.1 Perron's Theory
  21.2 Frobenius's Theory
  21.3 Discrete-Time Markov Processes

1 Gaussian Elimination and LU Factorization

Consider a linear system of equations $Ax = b$, where $A \in \mathbb{C}^{n \times n}$ is a square $n \times n$ matrix, $b \in \mathbb{C}^n$ is a given vector, and $x \in \mathbb{C}^n$ is unknown. Gaussian elimination remains one of the most basic and important algorithms to compute the solution $x$. If the algorithm does not break down and one ignores round-off errors, then the solution $x$ is computed in $O(n^3)$ arithmetic operations. For simplicity, we describe Gaussian elimination first without pivoting, i.e., without exchanges of rows and columns.

We will explain that the elimination process (if it does not break down) leads to a matrix factorization, $A = LU$, the so-called LU-factorization of $A$. Here $L$ is unit lower triangular and $U$ is upper triangular. The triangular matrices $L$ and $U$ with $A = LU$ are computed in $O(n^3)$ steps. Once the factorization $A = LU$ is known, the solution of the system $Ax = LUx = b$ can be computed in $O(n^2)$ steps. This observation is important if one wants to solve linear systems $Ax = b$ with the same matrix $A$ but different right-hand sides $b$. For example, if one wants to solve a nonlinear system $Ax = b + \varepsilon F(x)$ by an iterative process

\[
Ax^{(j+1)} = b + \varepsilon F(x^{(j)}), \qquad j = 0, 1, 2, \ldots,
\]

then the LU factorization of $A$ is very useful.

Gaussian elimination without pivoting may break down for very simple invertible systems. An example is

\[
\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} 1 \\ 2 \end{pmatrix}
\]

with unique solution $x_1 = x_2 = 1$. We will introduce permutations and permutation matrices and then describe Gaussian elimination with row exchanges, i.e., with partial pivoting. It corresponds to a matrix factorization $PA = LU$, where $P$ is a permutation matrix, $L$ is unit lower triangular, and $U$ is upper triangular. The algorithm is practically and theoretically important. On the theoretical side, it leads to Fredholm's alternative for any system $Ax = b$ where $A$ is a square matrix. On the practical side, partial pivoting is recommended even if the algorithm without pivoting does not break down. Partial pivoting typically leads to better numerical stability.
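Since MATLAB appears later in these notes (Section 1.9), the following sketch may help make this workflow concrete. It is an added illustration, not code from the notes: it solves the breakdown example above with MATLAB's built-in lu, which performs the row exchanges automatically, and then reuses the factorization inside an iteration of the form $Ax^{(j+1)} = b + \varepsilon F(x^{(j)})$. The nonlinearity F, the value of epsilon, and the starting guess are hypothetical stand-ins.

    % Solve the breakdown example via partial pivoting: P*A = L*U.
    A = [0 1; 1 1];
    b = [1; 2];
    [L, U, P] = lu(A);       % built-in LU factorization with row exchanges
    x = U \ (L \ (P*b));     % two triangular solves, O(n^2); here x = [1; 1]

    % Reuse the same L, U, P in every step of the fixed-point iteration
    % A*x^(j+1) = b + epsilon*F(x^(j)).  F and epsilon are illustrative.
    epsilon = 0.1;
    F = @(x) sin(x);         % hypothetical nonlinearity
    x = b;                   % arbitrary starting guess
    for j = 1:20
        x = U \ (L \ (P*(b + epsilon*F(x))));   % O(n^2) per step
    end

The point of the loop is the cost structure: the $O(n^3)$ factorization is done once, while each iteration step costs only the two $O(n^2)$ triangular solves.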
1.1 Gaussian Elimination Without Pivoting

Example 1.1 Consider the system

\[
\begin{pmatrix} 2 & 1 & 1 \\ 6 & 2 & 1 \\ -2 & 2 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} 1 \\ -1 \\ \cdot \end{pmatrix}
\tag{1.1}
\]
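To make the elimination loop of this subsection concrete, here is a minimal MATLAB sketch of LU factorization without pivoting. The helper name lu_nopivot is illustrative, not from the notes, and the code assumes that every pivot it encounters is nonzero.

    % LU factorization without pivoting: returns unit lower triangular L
    % and upper triangular U with A = L*U.  Assumes every pivot U(k,k)
    % met during elimination is nonzero; otherwise the method breaks down.
    function [L, U] = lu_nopivot(A)
        n = size(A, 1);
        L = eye(n);
        U = A;
        for k = 1:n-1
            for i = k+1:n
                L(i,k) = U(i,k) / U(k,k);           % multiplier for row i
                U(i,:) = U(i,:) - L(i,k) * U(k,:);  % zero out U(i,k)
            end
        end
    end

For the coefficient matrix of (1.1), lu_nopivot([2 1 1; 6 2 1; -2 2 1]) runs to completion: the pivots are 2, -1, and -4, all nonzero.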
