Math 361S Lecture notes
Linear systems II

Jeffrey Wong
February 13, 2019

Contents

1 Vector and matrix norms
   1.1 Vector norms
   1.2 Matrix norms
   1.3 Some important matrix norms
2 Sensitivity of linear systems
   2.1 Bound with exact A
   2.2 Properties of the condition number
   2.3 Examples
   2.4 Residuals vs. error
3 An important special case
   3.1 Positive definite matrices
   3.2 Diagonal dominance
   3.3 Gerschgorin's theorem (optional)
4 Iterative methods
   4.1 Jacobi method
      4.1.1 Implementation
   4.2 Gauss-Seidel
   4.3 An example
   4.4 SOR (successive over-relaxation; optional)
   4.5 When to stop?
5 Cholesky factorization
   5.1 Existence of an LU factorization
   5.2 Cholesky: theory
   5.3 Algorithm
   5.4 Example
6 Full rank least squares, briefly
   6.1 Derivation of the normal equations
7 Additional (optional) notes
   7.1 Proof of error bound
   7.2 Numerical stability: Gaussian elimination (details)

Motivation

To study error, we need to start with a measure of distance between two quantities. For scalars, there is not much to say; recall that the absolute and relative errors between an approximation $\tilde{x}$ and the exact value $x$ are
$$\text{abs. err.} = |x - \tilde{x}|, \qquad \text{rel. err.} = \frac{|x - \tilde{x}|}{|x|}.$$
For vectors and matrices, there are more options, and not all choices are equally advantageous for error analysis.

1 Vector and matrix norms

Hereafter, we consider two flavors of vector spaces:

• $\mathbb{R}^n$, the space of real-valued vectors of dimension $n$,
• $\mathbb{R}^{m \times n}$, the space of real-valued $m \times n$ matrices (for a given $m$ and $n$).

(The definitions have to be adjusted for vectors/matrices with complex entries; the modifications will be developed later if needed.) Note that each $m$ and $n$ defines a distinct space, e.g. $\mathbb{R}^{2 \times 3}$ is different from $\mathbb{R}^{3 \times 4}$.

Definition: A norm on a vector space $V$ (e.g. $V = \mathbb{R}^n$) is a function $\|\cdot\| : V \to \mathbb{R}$ with the following properties:

i) $\|x\| \geq 0$ for all $x \in V$, and $\|x\| = 0$ if and only if $x = 0$;
ii) $\|ax\| = |a|\,\|x\|$ for all $a \in \mathbb{R}$ and $x \in V$;
iii) $\|x + y\| \leq \|x\| + \|y\|$ for all $x, y \in V$ (the triangle inequality).

The triangle inequality is essential for breaking up norms of expressions into simpler parts!

1.1 Vector norms

A norm on $\mathbb{R}^n$ is called a vector norm. For instance, the Euclidean norm (or 2-norm) is
$$\|x\|_2 = \sqrt{x_1^2 + \cdots + x_n^2}.$$
The most important vector norms are extensions of this norm:

Definition: Let $p \in [1, \infty)$. Then the $p$-norm
$$\|x\|_p = \left( |x_1|^p + \cdots + |x_n|^p \right)^{1/p}$$
defines a norm on $\mathbb{R}^n$. The $\infty$-norm (or max norm) is
$$\|x\|_\infty = \max_{1 \leq i \leq n} |x_i|.$$
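As a quick numerical check (a sketch using NumPy, not part of the original notes), the norms above map directly onto `numpy.linalg.norm`:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 2.0, -2.0])

# p-norms: compare the definition (|x_1|^p + ... + |x_n|^p)^(1/p)
# against NumPy's built-in implementation
for p in (1, 2, 3):
    by_hand = np.sum(np.abs(x) ** p) ** (1.0 / p)
    assert np.isclose(by_hand, np.linalg.norm(x, p))

# The max norm is the p -> infinity limit of the p-norms
assert np.isclose(np.abs(x).max(), np.linalg.norm(x, np.inf))

# Property (iii), the triangle inequality, in the 2-norm
assert np.linalg.norm(x + y, 2) <= np.linalg.norm(x, 2) + np.linalg.norm(y, 2)
```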
• The unit ball in $\mathbb{R}^n$ in a given norm is
$$\{x \in \mathbb{R}^n : \|x\| = 1\}.$$

• A sequence $\{x_n\} \subset V$ is said to converge to $x$ (in norm) if
$$\lim_{n \to \infty} \|x_n - x\| = 0.$$

• Two norms $\|\cdot\|_a$ and $\|\cdot\|_b$ are called equivalent if there are constants $m$ and $M$ such that
$$m \|x\|_a \leq \|x\|_b \leq M \|x\|_a \quad \text{for all } x \in V.$$
Equivalent norms measure distance in a qualitatively similar way. For instance, if a sequence $\{x_n\}$ converges to $x$ in one norm, it also converges in any other equivalent norm.

For vectors/matrices, it turns out that there is no concern about a sequence converging in one norm but not another, due to the following:

Theorem: Any two norms on a finite-dimensional vector space $V$ are equivalent.

Throughout, we will have some results that are stated for a generic vector norm (applicable to any of them). However, we will see that depending on the problem, there are reasons to favor one particular norm over another.

1.2 Matrix norms

A matrix norm is a norm on $\mathbb{R}^{m \times n}$, the space of $m \times n$ matrices. One way to create a matrix norm is by "flattening" the matrix and using a norm for vectors of dimension $mn$, e.g.
$$\|A\| = \max_{i,j} |a_{ij}|. \qquad (1)$$
However, this construction lacks some important structure (see the example below). The more useful approach is to take a vector norm $\|\cdot\|$ and define the induced norm
$$\|A\| = \max_{x \in \mathbb{R}^n,\ x \neq 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|. \qquad (2)$$
It is easy to verify that (2) defines a matrix norm (satisfying the properties (i)-(iii)) for any choice of vector norm. Intuitively, the induced matrix norm $\|A\|$ measures the amount that $A$ can stretch a vector $x$, i.e. the maximum size of $\|Ax\|$ compared to $\|x\|$.

Notation: It is common to use the same symbol for the vector norm $\|x\|$ and the induced matrix norm $\|A\|$, even though they are not the same thing. There is no ambiguity, because it is clear which one applies from the dimensions of the variable.

Induced matrix norms satisfy the key inequality
$$\|Ax\| \leq \|A\|\,\|x\| \quad \text{for all } A \in \mathbb{R}^{m \times n} \text{ and } x \in \mathbb{R}^n \qquad (3)$$
and, for norms on square matrices,
$$\|AB\| \leq \|A\|\,\|B\| \quad \text{for all } A, B \in \mathbb{R}^{n \times n}. \qquad (4)$$
Lastly, from the definition (2) we have that $\|I\| = 1$.

For this course, we will consider only induced norms and make use of their properties. However, it is worth noting that not all matrix norms are induced by a vector norm!

Example (induced norms): Consider the norm on $\mathbb{R}^{n \times n}$ obtained by treating the elements of $A$ like a flat vector and using the $\infty$-norm:
$$\|A\| = \max_{i,j} |a_{ij}|.$$
Let
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad AB = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}.$$
Then
$$\|A\| = \|B\| = 1, \qquad \|AB\| = 2,$$
so $\|AB\| \leq \|A\|\,\|B\|$ does not hold; thus this norm is not induced by any vector norm. On the other hand (see the next section),
$$\|A\|_\infty = \|B\|_\infty = 2, \qquad \|AB\|_\infty = 3.$$

1.3 Some important matrix norms

$\infty$-norm: The matrix $\infty$-norm is the norm induced by the vector $\infty$-norm:
$$\|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty.$$
The matrix $\infty$-norm is particularly useful because it is easy to compute:

Computing the $\infty$-norm: Let $A$ be an $m \times n$ matrix. Then
$$\|A\|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}|.$$
That is, $\|A\|_\infty$ is the maximum absolute row sum.

Proof. Suppose $x \in \mathbb{R}^n$ has $\|x\|_\infty = 1$. Then $|x_j| \leq 1$ for all $j$, so
$$|(Ax)_i| \leq \sum_{j=1}^{n} |a_{ij}|\,|x_j| \leq \sum_{j=1}^{n} |a_{ij}| \quad \text{for } 1 \leq i \leq m.$$
Taking the maximum over $i$, it follows that
$$\|Ax\|_\infty \leq \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}|. \qquad (5)$$
It remains to show that the inequality can be an equality. Let $r$ be an index for which the maximum on the right is achieved. We can construct a vector $x^*$ such that $\|x^*\|_\infty = 1$ and the upper bound is achieved by setting
$$x^*_j = \begin{cases} 1 & a_{rj} \geq 0 \\ -1 & a_{rj} < 0 \end{cases}.$$
Then
$$|(Ax^*)_r| = \sum_{j=1}^{n} |a_{rj}| = \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}|,$$
so $\|Ax^*\|_\infty$ equals the upper bound in (5), which proves the result.

1-norm: The matrix 1-norm is
$$\|A\|_1 = \max_{\|x\|_1 = 1} \|Ax\|_1.$$
By a similar argument to the one used for $\|A\|_\infty$, it can be shown that (see HW)
$$\|A\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^{m} |a_{ij}| = \text{max. abs. column sum of } A.$$
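Both the counterexample and the row/column-sum formulas are easy to check numerically. Here is a short sketch (not from the notes) using NumPy, whose `np.linalg.norm(A, np.inf)` and `np.linalg.norm(A, 1)` compute exactly these induced norms:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = A.copy()

# The flattened max-element "norm" from the example: ||AB|| > ||A|| ||B||,
# so submultiplicativity fails and it cannot be an induced norm
def elem_max(M):
    return np.abs(M).max()

print(elem_max(A @ B), "vs", elem_max(A) * elem_max(B))   # 2.0 vs 1.0

# Induced norms: infinity-norm = max absolute row sum,
#                1-norm        = max absolute column sum
assert np.isclose(np.abs(A).sum(axis=1).max(), np.linalg.norm(A, np.inf))
assert np.isclose(np.abs(A).sum(axis=0).max(), np.linalg.norm(A, 1))

# For the induced norm, ||AB|| <= ||A|| ||B|| does hold: 3 <= 2 * 2
assert (np.linalg.norm(A @ B, np.inf)
        <= np.linalg.norm(A, np.inf) * np.linalg.norm(B, np.inf))
```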
2-norm: The 2-norm is more complicated. For an $n \times n$ real-valued matrix, define the spectral radius $\rho(A) = \max_{1 \leq i \leq n} |\lambda_i|$, the largest magnitude among the eigenvalues of $A$.

Theorem: Let $A \in \mathbb{R}^{n \times n}$ and let $\|A\|_2$ be the matrix 2-norm. Then
$$\|A\|_2 = \sqrt{\rho(A^T A)}.$$
If $A$ is (real-valued and) symmetric, then
$$\|A\|_2 = \rho(A) = \max_{1 \leq i \leq n} |\lambda_i|.$$
The proof requires some work and is omitted here. In general, $\|A\|_2$ is the largest singular value of $A$ (which is beyond the scope of the current discussion).

2 Sensitivity of linear systems

We now study the sensitivity of the linear system
$$Ax = b$$
to errors in $A$ and $b$, where $A \in \mathbb{R}^{n \times n}$ is invertible and $b \in \mathbb{R}^n$. Throughout, $\|x\|$ will refer to a vector norm (any one) and $\|A\|$ will be the induced matrix norm.

Let $\tilde{x}$ be an approximate solution. To start, we assume that $\tilde{x}$ is the exact solution to a perturbed system
$$(A + \delta A)\tilde{x} = b + \delta b,$$
where $\delta A \in \mathbb{R}^{n \times n}$ and $\delta b \in \mathbb{R}^n$ are small perturbations to $A$ and $b$. The goal is to find relations between the error in $\tilde{x}$ and the perturbations $\delta A$ and $\delta b$.

2.1 Bound with exact A

For the simpler case, let's first suppose that $A$ is exact, so
$$Ax = b, \qquad A\tilde{x} = b + \delta b.$$
Subtracting and multiplying by $A^{-1}$, we get (with $\tilde{x} = x + \delta x$)
$$\delta x = A^{-1} \delta b.$$
Taking the norm and using the properties of induced norms (eq. (3)), we find that
$$\|\delta x\| \leq \|A^{-1}\|\,\|\delta b\|. \qquad (6)$$
Now, to get relative errors, note that from $\|Ax\| = \|b\|$ and (3) again we have
$$\|x\| \geq \frac{\|b\|}{\|A\|}.$$
Dividing this into the bound (6) yields
$$\frac{\|\delta x\|}{\|x\|} \leq \kappa(A)\,\frac{\|\delta b\|}{\|b\|},$$
where $\kappa(A)$ is the condition number of the matrix $A$, defined as
$$\kappa(A) = \|A\|\,\|A^{-1}\|.$$
If $A$ is not invertible, then $\kappa(A) = \infty$ by convention. The bound says that the relative error in $x$ (the solution) can be at most $\kappa(A)$ times as large as the relative error in $b$.
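To see the bound in action, here is a small numerical experiment (a sketch, not from the notes; the Hilbert matrix is a standard ill-conditioned test case): perturb $b$ slightly, solve the perturbed system, and compare the relative error in $x$ to $\kappa(A)$ times the relative error in $b$.

```python
import numpy as np

# Hilbert matrix: a classically ill-conditioned test matrix, A_ij = 1/(i+j+1)
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

x = np.ones(n)
b = A @ x

# Perturb b slightly and solve the perturbed system A x~ = b + db
rng = np.random.default_rng(0)
db = 1e-10 * rng.standard_normal(n)
x_tilde = np.linalg.solve(A, b + db)

rel_err_x = np.linalg.norm(x_tilde - x) / np.linalg.norm(x)
rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
kappa = np.linalg.cond(A)  # 2-norm condition number ||A||_2 ||A^{-1}||_2

print(f"kappa(A)                      = {kappa:.2e}")
print(f"relative error in x           = {rel_err_x:.2e}")
print(f"bound kappa * rel. error in b = {kappa * rel_err_b:.2e}")
```

Here $\kappa(A) \approx 10^{10}$, so even a tiny relative perturbation in $b$ can be amplified enormously; the observed error typically sits below the bound but far above the size of the perturbation itself.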
