Errors for Linear Systems

When we solve a linear system $Ax = b$ we often do not know $A$ and $b$ exactly, but have only approximations $\hat A$ and $\hat b$ available. Then the best thing we can do is to solve $\hat A \hat x = \hat b$ exactly, which gives a different solution vector $\hat x$. We would like to know how the errors of $\hat A$ and $\hat b$ influence the error in $\hat x$.

Example: Consider the linear system $Ax = b$ with
\[
\begin{pmatrix} 1.01 & .99 \\ .99 & 1.01 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} 2 \\ 2 \end{pmatrix}.
\]
We can easily see that the solution is $x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Now let us use the slightly different right hand side vector $\hat b = \begin{pmatrix} 2.02 \\ 1.98 \end{pmatrix}$ and solve the linear system $A \hat x = \hat b$. This gives the solution vector $\hat x = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$. In this case a small change in the right hand side vector has caused a large change in the solution vector.

Vector norms

In order to measure errors in vectors by a single number we use a so-called vector norm. A vector norm $\|x\|$ measures the size of a vector $x \in \mathbb{R}^n$ by a nonnegative number and has the following properties:
\[
\|x\| = 0 \;\Rightarrow\; x = 0, \qquad
\|\alpha x\| = |\alpha|\,\|x\|, \qquad
\|x + y\| \le \|x\| + \|y\|
\]
for any $x, y \in \mathbb{R}^n$, $\alpha \in \mathbb{R}$. There are many possible vector norms. We will use the three norms $\|x\|_1$, $\|x\|_2$, $\|x\|_\infty$ defined by
\[
\|x\|_1 = |x_1| + \cdots + |x_n|, \qquad
\|x\|_2 = \bigl(|x_1|^2 + \cdots + |x_n|^2\bigr)^{1/2}, \qquad
\|x\|_\infty = \max\{|x_1|, \ldots, |x_n|\}.
\]
If we write $\|x\|$ in an equation without any subscript, then the equation is valid for all three norms (using the same norm everywhere).

If the exact vector is $x$ and the approximation is $\hat x$ we can define the relative error with respect to a vector norm as $\|\hat x - x\| / \|x\|$.

Example: Note that in the above example we have $\|\hat b - b\|_\infty / \|b\|_\infty = 0.01$, but $\|\hat x - x\|_\infty / \|x\|_\infty = 1$. That means that the relative error of the solution is 100 times as large as the relative error in the given data, i.e., the condition number of the problem is at least 100.

Matrix norms

A matrix norm $\|A\|$ measures the size of a matrix $A \in \mathbb{R}^{n \times n}$ by a nonnegative number. We would like to have the property
\[
\|Ax\| \le \|A\|\,\|x\| \quad \text{for all } x \in \mathbb{R}^n \qquad (1)
\]
where $\|x\|$ is one of the above vector norms $\|x\|_1$, $\|x\|_2$, $\|x\|_\infty$. We define $\|A\|$ as the smallest number satisfying (1):
\[
\|A\| := \sup_{x \in \mathbb{R}^n,\; x \ne 0} \frac{\|Ax\|}{\|x\|} = \max_{x \in \mathbb{R}^n,\; \|x\| = 1} \|Ax\|.
\]
By using the 1, 2, $\infty$ vector norm in this definition we obtain the matrix norms $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$ (which are in general different numbers). It turns out that $\|A\|_1$ and $\|A\|_\infty$ are easy to compute:

Theorem:
\[
\|A\|_\infty = \max_{i=1,\ldots,n} \sum_{j=1,\ldots,n} |a_{ij}| \quad \text{(maximum of row sums of absolute values)}
\]
\[
\|A\|_1 = \max_{j=1,\ldots,n} \sum_{i=1,\ldots,n} |a_{ij}| \quad \text{(maximum of column sums of absolute values)}
\]

Proof: For the infinity norm we have
\[
\|Ax\|_\infty = \max_i \Bigl| \sum_j a_{ij} x_j \Bigr| \le \max_i \sum_j |a_{ij}|\,|x_j| \le \Bigl( \max_i \sum_j |a_{ij}| \Bigr) \|x\|_\infty
\]
implying $\|A\|_\infty \le \max_i \sum_j |a_{ij}|$. Let $i_*$ be the index where the maximum occurs and define $x_j = \operatorname{sign} a_{i_* j}$; then $\|x\|_\infty = 1$ and $\|Ax\|_\infty = \max_i \sum_j |a_{ij}|$.

For the 1-norm we have
\[
\|Ax\|_1 = \sum_i \Bigl| \sum_j a_{ij} x_j \Bigr| \le \sum_j \Bigl( \sum_i |a_{ij}| \Bigr) |x_j| \le \Bigl( \max_j \sum_i |a_{ij}| \Bigr) \|x\|_1
\]
implying $\|A\|_1 \le \max_j \sum_i |a_{ij}|$. Let $j_*$ be the index where the maximum occurs and define $x_{j_*} = 1$ and $x_j = 0$ for $j \ne j_*$; then $\|x\|_1 = 1$ and $\|Ax\|_1 = \max_j \sum_i |a_{ij}|$.

We will not use $\|A\|_2$ since it is more complicated to compute (it involves eigenvalues).

Note that for $A, B \in \mathbb{R}^{n \times n}$ we have $\|AB\| \le \|A\|\,\|B\|$ since $\|ABx\| \le \|A\|\,\|Bx\| \le \|A\|\,\|B\|\,\|x\|$.

The following results about matrix norms will be useful later:

Lemma 1: Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and $E \in \mathbb{R}^{n \times n}$. Then $\|E\| < 1/\|A^{-1}\|$ implies that $A + E$ is nonsingular.

Proof: Assume that $A + E$ is singular. Then there exists a nonzero $x \in \mathbb{R}^n$ such that $(A + E)x = 0$ and hence
\[
\frac{\|x\|}{\|A^{-1}\|} \le \|Ax\| = \|Ex\| \le \|E\|\,\|x\|.
\]
The left inequality for $b := Ax$ follows from $\|x\| = \|A^{-1} b\| \le \|A^{-1}\|\,\|b\|$. As $\|x\| > 0$ we obtain $1/\|A^{-1}\| \le \|E\|$.
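The norm formulas and the relative errors from the example above are easy to check numerically. The following sketch assumes NumPy is available; the variable names and the commented values are purely illustrative:

    import numpy as np

    # matrix and right hand sides from the example above
    A  = np.array([[1.01, 0.99],
                   [0.99, 1.01]])
    b  = np.array([2.0, 2.0])     # exact right hand side, solution x = (1, 1)
    bh = np.array([2.02, 1.98])   # perturbed right hand side b-hat

    x  = np.linalg.solve(A, b)
    xh = np.linalg.solve(A, bh)   # gives x-hat = (2, 0) up to rounding

    # ||A||_inf = maximum row sum, ||A||_1 = maximum column sum of absolute values
    print(np.max(np.sum(np.abs(A), axis=1)))   # 2.0
    print(np.max(np.sum(np.abs(A), axis=0)))   # 2.0

    # relative errors in the infinity norm: about 0.01 for the data, 1.0 for the solution
    print(np.max(np.abs(bh - b)) / np.max(np.abs(b)))
    print(np.max(np.abs(xh - x)) / np.max(np.abs(x)))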
Lemma 2: For given vectors $x, y \in \mathbb{R}^n$ with $x \ne 0$ there exists a matrix $E \in \mathbb{R}^{n \times n}$ with $Ex = y$ and $\|E\| = \|y\| / \|x\|$.

Proof: For the infinity norm we have $\|x\|_\infty = |x_j|$ for some $j$. Let $a \in \mathbb{R}^n$ be the vector with $a_j = \operatorname{sign}(x_j)$ and $a_k = 0$ for $k \ne j$, and let
\[
E = \frac{1}{\|x\|_\infty}\, y a^\top;
\]
then (i) $a^\top x = \|x\|_\infty$ implies $Ex = y$, and (ii) $\|y a^\top v\|_\infty = |a^\top v|\,\|y\|_\infty$ with $|a^\top v| \le \|v\|_\infty$ implies $\|E\| \le \|y\|_\infty / \|x\|_\infty$. Since $Ex = y$ also gives $\|E\| \ge \|y\|_\infty / \|x\|_\infty$, equality holds.

For the 1-norm we use $a \in \mathbb{R}^n$ with $a_j = \operatorname{sign}(x_j)$ since $a^\top x = \|x\|_1$ and $|a^\top v| \le \|v\|_1$.

For the 2-norm we use $a = x / \|x\|_2$ since $a^\top x = \|x\|_2$ and $|a^\top v| \le \|a\|_2\,\|v\|_2 = \|v\|_2$.

Condition numbers

Let $x$ denote the solution vector of the linear system $Ax = b$. If we choose a slightly different right hand side vector $\hat b$ then we obtain a different solution vector $\hat x$ satisfying $A \hat x = \hat b$. We want to know how the relative error $\|\hat b - b\| / \|b\|$ influences the relative error $\|\hat x - x\| / \|x\|$ ("error propagation").

We have $A(\hat x - x) = \hat b - b$ and therefore
\[
\|\hat x - x\| = \|A^{-1}(\hat b - b)\| \le \|A^{-1}\|\,\|\hat b - b\|.
\]
On the other hand we have $\|b\| = \|Ax\| \le \|A\|\,\|x\|$. Combining this we obtain
\[
\frac{\|\hat x - x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\, \frac{\|\hat b - b\|}{\|b\|}.
\]
The number $\operatorname{cond}(A) := \|A\|\,\|A^{-1}\|$ is called the condition number of the matrix $A$. It determines how much the relative error of the right hand side vector can be amplified. The condition number depends on the choice of the matrix norm: in general $\operatorname{cond}_1(A) := \|A\|_1\,\|A^{-1}\|_1$ and $\operatorname{cond}_\infty(A) := \|A\|_\infty\,\|A^{-1}\|_\infty$ are different numbers.

Example: In the above example we have
\[
\operatorname{cond}_\infty(A) = \|A\|_\infty\,\|A^{-1}\|_\infty
= \left\| \begin{pmatrix} 1.01 & .99 \\ .99 & 1.01 \end{pmatrix} \right\|_\infty
  \left\| \begin{pmatrix} 25.25 & -24.75 \\ -24.75 & 25.25 \end{pmatrix} \right\|_\infty
= 2 \cdot 50 = 100
\]
and therefore
\[
\frac{\|\hat x - x\|}{\|x\|} \le 100\, \frac{\|\hat b - b\|}{\|b\|}
\]
which is consistent with our results above ($b$ and $\hat b$ were chosen so that the worst possible error magnification occurs).

The fact that the matrix $A$ in our example has a large condition number is related to the fact that $A$ is close to the singular matrix $B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$.

The following result shows that $\operatorname{cond}(A)$ indicates how close $A$ is to a singular matrix:

Theorem:
\[
\min_{B \in \mathbb{R}^{n \times n},\; B \text{ singular}} \frac{\|A - B\|}{\|A\|} = \frac{1}{\operatorname{cond}(A)}
\]

Proof: (1) Lemma 1 shows: $B$ singular implies $\|A - B\| \ge 1/\|A^{-1}\|$, i.e., $\|A - B\| / \|A\| \ge 1/\operatorname{cond}(A)$.
(2) By the definition of $\|A^{-1}\|$ there exist $x, y \in \mathbb{R}^n$ such that $x = A^{-1} y$ and $\|A^{-1}\| = \|x\| / \|y\|$. By Lemma 2 there exists a matrix $E \in \mathbb{R}^{n \times n}$ such that $Ex = y$ and $\|E\| = \|y\| / \|x\| = 1/\|A^{-1}\|$. Then $B := A - E$ satisfies $Bx = Ax - Ex = y - y = 0$, hence $B$ is singular and $\|A - B\| = \|E\| = 1/\|A^{-1}\|$.

Example: The matrix $A = \begin{pmatrix} 1.01 & .99 \\ .99 & 1.01 \end{pmatrix}$ is close to the singular matrix $B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, so that $\|A - B\|_\infty / \|A\|_\infty = .02 / 2 = .01$. By the theorem we have $1/\operatorname{cond}_\infty(A) \le 0.01$, or $\operatorname{cond}_\infty(A) \ge 100$. As we saw above we have $\operatorname{cond}_\infty(A) = 100$, i.e., the matrix $B$ is really the closest singular matrix to the matrix $A$.

When we solve a linear system $Ax = b$ we have to store the entries of $A$ and $b$ in the computer, yielding a matrix $\hat A$ with rounded entries $\hat a_{ij} = \mathrm{fl}(a_{ij})$ and a rounded right hand side vector $\hat b$. If the original matrix $A$ is singular then the linear system has no solution or infinitely many solutions, so that any computed solution is meaningless. How can we recognize this on a computer? Note that the matrix $\hat A$ which the computer uses may no longer be singular.

Answer: We should compute (or at least estimate) $\operatorname{cond}(\hat A)$. If $\operatorname{cond}(\hat A) < 1/\varepsilon_M$ then we can guarantee that any matrix $A$ which is rounded to $\hat A$ must be nonsingular: $|\hat a_{ij} - a_{ij}| \le \varepsilon_M |a_{ij}|$ implies $\|\hat A - A\| \le \varepsilon_M \|A\|$ for the infinity or 1-norm. Therefore
\[
\frac{\|\hat A - A\|}{\|\hat A\|} \le \frac{\varepsilon_M}{1 - \varepsilon_M} \approx \varepsilon_M
\quad\text{and}\quad
\operatorname{cond}(\hat A) < \frac{1 - \varepsilon_M}{\varepsilon_M} \approx \frac{1}{\varepsilon_M}
\]
imply $\|\hat A - A\| / \|\hat A\| < 1/\operatorname{cond}(\hat A)$. Hence the matrix $A$ must be nonsingular by the theorem.
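The condition number of the example matrix and its distance to the nearest singular matrix can also be confirmed numerically. The following NumPy sketch is only an illustration; np.linalg.cond is used here merely as a cross-check of the hand computation:

    import numpy as np

    A = np.array([[1.01, 0.99],
                  [0.99, 1.01]])
    Ainv = np.linalg.inv(A)

    def norm_inf(M):
        # matrix infinity norm: maximum row sum of absolute values
        return np.max(np.sum(np.abs(M), axis=1))

    cond_inf = norm_inf(A) * norm_inf(Ainv)
    print(cond_inf)                      # 100.0 up to rounding
    print(np.linalg.cond(A, np.inf))     # same value via NumPy's built-in routine

    # distance to the singular matrix B = [[1, 1], [1, 1]] from the example
    B = np.ones((2, 2))
    print(norm_inf(A - B) / norm_inf(A))  # about 0.01 = 1/cond_inf(A)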
Now we assume that we perturb both the right hand side vector $b$ and the matrix $A$:

Theorem: Assume $Ax = b$ and $\hat A \hat x = \hat b$. If $A$ is nonsingular and $\|\hat A - A\| < 1/\|A^{-1}\|$ there holds
\[
\frac{\|\hat x - x\|}{\|x\|} \le
\frac{\operatorname{cond}(A)}{1 - \operatorname{cond}(A)\,\frac{\|\hat A - A\|}{\|A\|}}
\left( \frac{\|\hat b - b\|}{\|b\|} + \frac{\|\hat A - A\|}{\|A\|} \right)
\]

Proof: Let $E = \hat A - A$, hence $A \hat x = \hat b - E \hat x$. Subtracting $Ax = b$ gives $A(\hat x - x) = (\hat b - b) - E \hat x$ and therefore, using $\|\hat x\| \le \|x\| + \|\hat x - x\|$,
\[
\|\hat x - x\| \le \|A^{-1}\| \bigl( \|\hat b - b\| + \|E\|\,\|\hat x\| \bigr),
\]
\[
\bigl( 1 - \|A^{-1}\|\,\|E\| \bigr)\, \|\hat x - x\| \le \|A^{-1}\| \bigl( \|\hat b - b\| + \|E\|\,\|x\| \bigr).
\]
Dividing by $\|x\|$ and using $\|b\| \le \|A\|\,\|x\|$, i.e., $\|x\| \ge \|b\| / \|A\|$, gives
\[
\bigl( 1 - \|A^{-1}\|\,\|E\| \bigr)\, \frac{\|\hat x - x\|}{\|x\|} \le \|A^{-1}\| \left( \frac{\|\hat b - b\|}{\|b\| / \|A\|} + \|E\| \right),
\]
\[
\left( 1 - \operatorname{cond}(A)\, \frac{\|E\|}{\|A\|} \right) \frac{\|\hat x - x\|}{\|x\|} \le \operatorname{cond}(A) \left( \frac{\|\hat b - b\|}{\|b\|} + \frac{\|E\|}{\|A\|} \right).
\]
If $\|E\| < 1/\|A^{-1}\|$, i.e., $\|E\| / \|A\| < 1/\operatorname{cond}(A)$, we can divide by $1 - \operatorname{cond}(A)\,\|E\|/\|A\|$.
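The bound of this theorem can be verified numerically. The sketch below assumes NumPy and uses arbitrarily chosen small perturbations of the example system (the perturbation entries are not from the notes, they are only for illustration):

    import numpy as np

    def mat_norm_inf(M):
        # matrix infinity norm: maximum row sum of absolute values
        return np.max(np.sum(np.abs(M), axis=1))

    def vec_norm_inf(v):
        # vector infinity norm: maximum absolute component
        return np.max(np.abs(v))

    A = np.array([[1.01, 0.99],
                  [0.99, 1.01]])
    b = np.array([2.0, 2.0])
    x = np.linalg.solve(A, b)

    # perturb both A and b (illustrative perturbations of size about 1e-4)
    Ah = A + 1e-4 * np.array([[1.0, -2.0],
                              [0.5,  1.5]])
    bh = b + 1e-4 * np.array([3.0, -1.0])
    xh = np.linalg.solve(Ah, bh)

    cond_A = mat_norm_inf(A) * mat_norm_inf(np.linalg.inv(A))
    rel_A  = mat_norm_inf(Ah - A) / mat_norm_inf(A)
    rel_b  = vec_norm_inf(bh - b) / vec_norm_inf(b)
    rel_x  = vec_norm_inf(xh - x) / vec_norm_inf(x)

    # bound from the theorem (valid here since cond_A * rel_A < 1)
    bound = cond_A / (1 - cond_A * rel_A) * (rel_b + rel_A)
    print(rel_x, bound)   # the observed relative error does not exceed the bound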
