4 Gaussian Elimination 4.1 Lower and Upper Triangular Matrices

4 Gaussian elimination

Goal: solve, for given A ∈ R^{n×n} and b ∈ R^n, the linear system of equations

    Ax = b.                                                          (4.1)

In the sequel, we will often denote the entries of matrices by lower case letters, e.g., the entries of A are (A)_{ij} = a_{ij}. Likewise, for vectors we sometimes write (x)_i = x_i.

Remark 4.1 In Matlab, the solution of (4.1) is realized by x = A\b. In Python, the function numpy.linalg.solve performs this. In both cases, a routine from LAPACK^1 realizes the actual computation. The Matlab realization of the backslash operator \ is in fact very complex. A very good discussion of many aspects of the realization of \ can be found in [1].

^1 Linear Algebra PACKage, see en.wikipedia.org/wiki/LAPACK

4.1 Lower and upper triangular matrices

A matrix A ∈ R^{n×n} is

• an upper triangular matrix if A_{ij} = 0 for j < i;
• a lower triangular matrix if A_{ij} = 0 for j > i;
• a normalized lower triangular matrix if, in addition to being lower triangular, it satisfies A_{ii} = 1 for i = 1, ..., n.

Figure 4.1: schematic representation of lower (left) and upper (right) triangular matrices (n = 4); blank spaces represent a 0

Linear systems where A is a lower or an upper triangular matrix are easily solved by "forward substitution" or "back substitution":

Algorithm 4.2 (solution of Lx = b using forward substitution)
Input: L ∈ R^{n×n} lower triangular, invertible; b ∈ R^n
Output: solution x ∈ R^n of Lx = b

for j = 1:n do
    x_j := ( b_j − Σ_{k=1}^{j−1} l_{jk} x_k ) / l_{jj}      ⟦convention: empty sum = 0⟧
end for

Algorithm 4.3 (solution of Ux = b using back substitution)
Input: U ∈ R^{n×n} upper triangular, invertible; b ∈ R^n
Output: solution x ∈ R^n of Ux = b

for j = n:-1:1 do
    x_j := ( b_j − Σ_{k=j+1}^{n} u_{jk} x_k ) / u_{jj}
end for

The cost of Algorithms 4.2 and 4.3 is O(n^2):

Exercise 4.4 Compute the number of multiplications and additions in Algorithms 4.2 and 4.3.
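Algorithms 4.2 and 4.3 can be sketched in Python with NumPy as follows (a minimal sketch without error handling; the function names are ours, not from any library):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for a lower triangular, invertible L (Alg. 4.2)."""
    n = len(b)
    x = np.zeros(n)
    for j in range(n):
        # L[j, :j] @ x[:j] is the sum over k = 1, ..., j-1 (empty sum = 0)
        x[j] = (b[j] - L[j, :j] @ x[:j]) / L[j, j]
    return x

def back_substitution(U, b):
    """Solve U x = b for an upper triangular, invertible U (Alg. 4.3)."""
    n = len(b)
    x = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # U[j, j+1:] @ x[j+1:] is the sum over k = j+1, ..., n
        x[j] = (b[j] - U[j, j+1:] @ x[j+1:]) / U[j, j]
    return x
```

Note that the slices L[j, :j] @ x[:j] delegate the inner sums to optimized dot-product routines rather than explicit Python loops.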
The sets of upper and lower triangular matrices are closed under addition and matrix multiplication^2:

Exercise 4.5 Let L_1, L_2 ∈ R^{n×n} be two lower triangular matrices. Show: L_1 + L_2 and L_1 L_2 are lower triangular matrices. If L_1 is additionally invertible, then its inverse L_1^{−1} is also a lower triangular matrix. Analogous results hold for upper triangular matrices.

^2 That is, they have the mathematical structure of a ring.

Remark 4.6 (representation via scalar products) Alg. 4.2 (and analogously Alg. 4.3) can be written using scalar products:

for j = 1:n do
    x(j) := [ b(j) − L(j, 1:j−1) ∗ x(1:j−1) ] / L(j, j)
end for

The advantage of such a formulation is that efficient libraries such as BLAS level 1^3 are available. More generally, rather than realizing dot products, matrix-vector products, or matrix-matrix products directly by loops, it is typically advantageous to employ optimized routines such as BLAS.

^3 Basic Linear Algebra Subprograms, see en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms

Remark 4.7 In Remark 4.6, the matrix L is accessed in a row-oriented fashion. One can reorganize the two loops so as to access L in a column-oriented way. The following algorithm overwrites b with the solution x of Lx = b (at j = n the update involves an empty range):

for j = 1:n do
    b(j) := b(j) / L(j, j)
    b(j+1:n) := b(j+1:n) − b(j) · L(j+1:n, j)
end for

4.2 Classical Gaussian elimination                               (slide 23)

The classical Gaussian elimination process transforms the linear system (4.1) into upper triangular form, which can then be solved by back substitution. We illustrate the procedure:

    a11 x1 + a12 x2 + ··· + a1n xn = b1
    a21 x1 + a22 x2 + ··· + a2n xn = b2
    ⋮                                                            (4.2)
    an1 x1 + an2 x2 + ··· + ann xn = bn

Multiplying the 1st equation by

    l21 := a21/a11

and subtracting the result from the 2nd equation produces

    (a21 − l21 a11) x1 + (a22 − l21 a12) x2 + ··· + (a2n − l21 a1n) xn = b2 − l21 b1,    (4.3)

where the first coefficient equals 0 and we set a22^(2) := a22 − l21 a12, ..., a2n^(2) := a2n − l21 a1n and b2^(2) := b2 − l21 b1. Multiplying the 1st equation by

    l31 := a31/a11

and subtracting the result from the 3rd equation produces

    (a31 − l31 a11) x1 + (a32 − l31 a12) x2 + ··· + (a3n − l31 a1n) xn = b3 − l31 b1,    (4.4)

where again the first coefficient equals 0 and the remaining terms define a32^(2), ..., a3n^(2) and b3^(2). Generally, multiplying, for i = 2, ..., n, the 1st equation by li1 := ai1/a11 and subtracting the result from the i-th equation yields the following equivalent system of equations:

    a11 x1 + a12 x2 + ··· + a1n xn = b1
             a22^(2) x2 + ··· + a2n^(2) xn = b2^(2)
             ⋮                                                   (4.5)
             an2^(2) x2 + ··· + ann^(2) xn = bn^(2)

Repeating this process for the (n−1) × (n−1) subsystem

    a22^(2) x2 + ··· + a2n^(2) xn = b2^(2)
    ⋮
    an2^(2) x2 + ··· + ann^(2) xn = bn^(2)

of (4.5) yields

    a11 x1 + a12 x2 + a13 x3 + ··· + a1n xn = b1
             a22^(2) x2 + a23^(2) x3 + ··· + a2n^(2) xn = b2^(2)
                          a33^(3) x3 + ··· + a3n^(3) xn = b3^(3)
                          ⋮                                      (4.6)
                          an3^(3) x3 + ··· + ann^(3) xn = bn^(3)

Figure 4.2: Gaussian elimination: reduction to upper triangular form

One repeats this procedure until triangular form is reached. The following Alg. 4.8 realizes the above procedure: it overwrites the matrix A so that its upper triangle contains the final triangular form, which we will denote U below; the entries lik computed along the way are collected in a normalized lower triangular matrix L.

Algorithm 4.8 (Gaussian elimination without pivoting)
Input: A ∈ R^{n×n}
Output: non-trivial entries of L and U; A is overwritten by U

for k = 1:(n−1) do
    for i = (k+1):n do
        lik := aik/akk
        A(i, k+1:n) := A(i, k+1:n) − lik · A(k, k+1:n)
    end for
end for

Remark 4.9 In (4.8) below, we will see that A = LU, where U and L are computed in Alg. 4.8.
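Alg. 4.8 translates almost line by line into Python with NumPy. The sketch below (our own helper function, assuming all pivots a_kk are nonzero) stores the multipliers l_ik in place in the strict lower triangle of the working array:

```python
import numpy as np

def gaussian_elimination(A):
    """Alg. 4.8 without pivoting: return (L, U) with A = L U.

    The multipliers l_ik overwrite the strict lower triangle of the
    working copy of A; assumes all pivots a_kk are nonzero.
    """
    A = np.array(A, dtype=float)     # work on a copy
    n = A.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            A[i, k] /= A[k, k]                      # l_ik := a_ik / a_kk
            A[i, k+1:] -= A[i, k] * A[k, k+1:]      # update row i
    L = np.eye(n) + np.tril(A, -1)   # normalized lower triangular factor
    U = np.triu(A)                   # upper triangular factor
    return L, U
```

The inner update acts only on columns k+1:n; column k of row i need not be zeroed explicitly, since that slot is reused to hold l_ik.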
Typically, the off-diagonal entries of L are stored in the lower triangular part of A so that, effectively, A is overwritten by its LU-factorization.

Exercise 4.10 Expand Alg. 4.8 so as to include the modifications of the right-hand side b.

4.2.1 Interpretation of Gaussian elimination as an LU-factorization

With the coefficients lij computed above (e.g., in Alg. 4.8) one can define, for k = 1, ..., n−1, the matrix L^(k) that coincides with the identity matrix except for the entries lk+1,k, ..., ln,k placed in column k below the diagonal:

    L^(k) := I + (0, ..., 0, lk+1,k, ..., ln,k)^T e_k^T,

where e_k denotes the k-th unit vector.

Exercise 4.11 Check that the inverse of L^(k) is obtained by negating these entries:

    (L^(k))^{−1} = I − (0, ..., 0, lk+1,k, ..., ln,k)^T e_k^T.   (4.7)

Each step of the Gaussian elimination process sketched in Fig. 4.2 is realized by a multiplication from the left by a matrix, in fact by the matrix (L^(k))^{−1} (cf. (4.7)). That is, the Gaussian elimination process can be described as

    A = A^(1) → A^(2) = (L^(1))^{−1} A^(1)
              → A^(3) = (L^(2))^{−1} A^(2) = (L^(2))^{−1} (L^(1))^{−1} A^(1)
              → ...
              → A^(n) = (L^(n−1))^{−1} A^(n−1) = (L^(n−1))^{−1} (L^(n−2))^{−1} ··· (L^(1))^{−1} A^(1) =: U,

where U is upper triangular. Rewriting this yields the LU-factorization

    A = L^(1) ··· L^(n−1) U =: L U.

The matrix L := L^(1) ··· L^(n−1) is a lower triangular matrix as the product of lower triangular matrices (cf. Exercise 4.5). In fact, due to the special structure of the matrices L^(k), it is the normalized lower triangular matrix that collects all the multipliers,

    L = [ 1
          l21   1
          ⋮           ⋱
          ln1   ···   ln,n−1   1 ],

as the following exercise shows:

Exercise 4.12 Show that, for each k, the product L^(k) L^(k+1) ··· L^(n−1) coincides with the identity matrix except for the entries lij, j = k, ..., n−1, i = j+1, ..., n, below the diagonal in columns k through n−1.

Thus, we have shown that Gaussian elimination produces a factorization

    A = LU,                                                      (4.8)

where L and U are lower and upper triangular matrices determined by Alg. 4.8.

4.3 LU-factorization

In numerical practice, linear systems are solved by computing the factors L and U in (4.8); the system is then solved with one forward and one back substitution:

1.
compute L, U such that A = LU
2. solve Ly = b using forward substitution
3. solve Ux = y using back substitution

Remark 4.13 In Matlab, the LU-factorization is realized by lu(A). In Python, one can use scipy.linalg.lu.

As we have seen in Section 4.2.1, the LU-factorization can be computed with Gaussian elimination. An alternative way of computing the factors L, U is given in the following section.^4

^4 One reason for studying different algorithms is that the entries of L and U are computed in a different order, so that these algorithms differ in their memory access patterns and thus potentially in actual timings.

4.3.1 Crout's algorithm for computing the LU-factorization

We seek L, U such that

    [ 1                    ] [ u11  ···  ···  u1n ]   [ a11  ···  ···  a1n ]
    [ l21   1              ] [       ⋱         ⋮  ] = [  ⋮               ⋮ ]
    [  ⋮         ⋱         ] [            ⋱    ⋮  ]   [  ⋮               ⋮ ]
    [ ln1  ···  ln,n−1   1 ] [                 unn ]   [ an1  ···  ···  ann ]

This represents n^2 equations for n^2 unknowns, i.e., we are looking for lij, ujk such that

    aik = Σ_{j=1}^{n} lij ujk    ∀ i, k = 1, ..., n.

Since L is lower triangular and U is upper triangular (lij = 0 for j > i and ujk = 0 for j > k), this reduces to

    aik = Σ_{j=1}^{min(i,k)} lij ujk    ∀ i, k = 1, ..., n.      (4.9)
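One possible way of resolving (4.9) is to determine the unknowns row by row; the sketch below (our own function name, no pivoting, assuming all pivots u_ii are nonzero) does this and then carries out the three-step solve of Ax = b from above, using numpy.linalg.solve in place of dedicated triangular solvers:

```python
import numpy as np

def lu_via_49(A):
    """Determine normalized L and U from a_ik = sum_{j<=min(i,k)} l_ij u_jk."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        # k >= i: min(i,k) = i, so (4.9) determines row i of U
        U[i, i:] = A[i, i:] - L[i, :i] @ U[:i, i:]
        # k = i, rows i+1, ..., n: (4.9) determines column i of L
        L[i+1:, i] = (A[i+1:, i] - L[i+1:, :i] @ U[:i, i]) / U[i, i]
    return L, U

# three-step solve of A x = b
A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])
L, U = lu_via_49(A)          # step 1: A = L U
y = np.linalg.solve(L, b)    # step 2: forward substitution
x = np.linalg.solve(U, y)    # step 3: back substitution
```

Each entry of L and U is computed exactly once; only the order of traversal distinguishes such variants from Alg. 4.8, which matters for memory access.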
