
3 Solution of Nonlinear Equations

3.1 Bisection Method

The main idea is to make use of the Intermediate Value Theorem (IVT): For $f \in C[a,b]$ with $f(a)f(b) < 0$ there exists a number $c \in (a,b)$ such that $f(c) = 0$. This leads to a simple algorithm:

1. Take $c = \frac{a+b}{2}$.
2. If $f(a)f(c) < 0$ (i.e., the root lies in $(a,c)$), let $a = a$, $b = c$.
   If $f(c)f(b) < 0$ (i.e., the root lies in $(c,b)$), let $a = c$, $b = b$.
   If $f(c) = 0$ (i.e., $c$ is the root), stop.
3. Repeat.

Convergence Analysis

We shall label the intervals used by the algorithm as $[a,b] = [a_0,b_0], [a_1,b_1], [a_2,b_2], \ldots$ By construction,
\[
b_n - a_n = \frac{1}{2}(b_{n-1} - a_{n-1}), \quad n \ge 1.
\]
Thus, recursively,
\[
b_n - a_n = \frac{1}{2^n}(b_0 - a_0), \quad n \ge 1. \tag{1}
\]
We also know $a_0 \le a_1 \le a_2 \le \ldots \le b$ and $b_0 \ge b_1 \ge b_2 \ge \ldots \ge a$, which shows us that the sequences $\{a_n\}$ and $\{b_n\}$ are both bounded and monotonic, and therefore convergent. Using standard limit laws, equation (1) gives us
\[
\lim_{n\to\infty} b_n - \lim_{n\to\infty} a_n = (b_0 - a_0) \lim_{n\to\infty} \frac{1}{2^n} = 0.
\]
So we now also know that the sequences $\{a_n\}$ and $\{b_n\}$ have the same limit, i.e.,
\[
\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n =: r. \tag{2}
\]
It remains to be shown that this number $r$ is a root of the function $f$. From the bisection algorithm we know $f(a_n)f(b_n) < 0$, or, taking limits,
\[
\lim_{n\to\infty} f(a_n)f(b_n) \le 0.
\]
Finally, using (2) and the continuity of $f$, we have
\[
[f(r)]^2 \le 0 \implies f(r) = 0.
\]
Summarizing, the bisection method always converges (provided the initial interval contains a root), and it produces a root of $f$.

Errors

If the algorithm is stopped after $n$ iterations, then $r \in [a_n, b_n]$. Moreover, $c_n = \frac{a_n + b_n}{2}$ is an approximation to the exact root. Note that the error can be bounded by
\[
|r - c_n| \le \frac{1}{2}(b_n - a_n) = \frac{1}{2^{n+1}}(b_0 - a_0).
\]
Therefore, the error after $n$ steps of the bisection method is guaranteed to satisfy
\[
|r - c_n| \le \frac{1}{2^{n+1}}(b - a). \tag{3}
\]
Note: This bound is independent of the function $f$.

Remark: Recall that linear convergence requires
\[
e_n \le C e_{n-1} \tag{4}
\]
with some constant $C < 1$. Thus,
\[
e_n \le C^2 e_{n-2} \le \ldots \le C^n e_0 \tag{5}
\]
is necessary (but not sufficient) for linear convergence. Now, for the bisection method,
\[
e_n = |r - c_n| \le \frac{1}{2^{n+1}}(b - a), \quad \text{and} \quad e_0 = \frac{b-a}{2}.
\]
Thus, condition (5) is satisfied, but we know from observation (e.g., in the Maple worksheet on convergence) that the bisection method does not converge linearly, i.e., condition (4) is not satisfied at each step.

Remark: The previous discussion may "explain" why so many textbooks wrongly attribute linear convergence to the bisection method.

Remark: The Maple worksheet 577 convergence.mws contains some code that produces an animation of several steps of this iterative procedure.

Implementation of the Bisection Method

Some details to consider (or slight modifications of the basic algorithm); a code sketch incorporating all three follows the list:

1. Do not compute $c_n = \frac{a_n + b_n}{2}$. This formula may become unstable. It is more stable to use $c_n = a_n + \frac{b_n - a_n}{2}$, since here the second summand acts as a small correction to the first one.

2. When picking the "correct" sub-interval to continue with, do not test the sign of the product $f(a)f(c)$. Instead, use $\operatorname{sign} f(a) \ne \operatorname{sign} f(c)$. The multiplication is more expensive than a simple sign lookup (recall the standard scientific notation, in which the sign is stored explicitly), and it can also produce over- or underflow.

3. Implement some kind of (practical) stopping criterion. All of the following three may be used:

   (a) Specify a maximum number of allowable iterations in the for-loop construction.

   (b) Check if the error is small enough. We know the bound (3), so check, e.g., if
   \[
   \frac{1}{2^{n+1}}(b - a) < \delta,
   \]
   where $\delta$ is some specified tolerance. This can also be used as an a-priori estimator for the number of iterations you may need.

   (c) Check if $f(c)$ is close enough to zero, i.e., check if $|f(c)| < \varepsilon$, where $\varepsilon$ is another user-specified tolerance.

Note that the stopping criteria in 3, if used by themselves, may fail (see the explanation and figure on p. 77 of the textbook).
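The code accompanying these notes is in Maple; the following Python sketch is merely illustrative (the name `bisect` and the tolerances `delta`, `eps` are not from the notes). It combines the stable midpoint formula from item 1, the sign comparison from item 2, and all three stopping criteria from item 3.

```python
import math

def bisect(f, a, b, max_iter=100, delta=1e-12, eps=1e-12):
    """Bisection with the three safeguards from the list above (sketch)."""
    fa = f(a)
    if math.copysign(1.0, fa) == math.copysign(1.0, f(b)):
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):                    # criterion (a): iteration cap
        c = a + (b - a) / 2.0                    # stable midpoint (item 1)
        fc = f(c)
        if (b - a) / 2.0 < delta or abs(fc) < eps:   # criteria (b) and (c)
            return c
        if math.copysign(1.0, fa) != math.copysign(1.0, fc):  # sign test (item 2)
            b = c                                # root lies in (a, c)
        else:
            a, fa = c, fc                        # root lies in (c, b)
    return a + (b - a) / 2.0

# Example: the root of cos(x) - x in [0, 1] is ~0.7390851
print(bisect(lambda x: math.cos(x) - x, 0.0, 1.0))
```

Criterion (b) doubles as an a-priori estimate: by (3) the error is guaranteed to be below $\delta$ as soon as $2^{n+1} > (b-a)/\delta$, i.e., after about $\log_2 \frac{b-a}{\delta}$ steps.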
3.1.1 Modification of the Bisection Method: Regula Falsi

The following discussion cannot be found in our textbook. The modification comes from taking $c_n$ not as the average of $a_n$ and $b_n$, but as the weighted average
\[
c_n = \frac{|f(b_n)|}{|f(a_n)| + |f(b_n)|} a_n + \frac{|f(a_n)|}{|f(a_n)| + |f(b_n)|} b_n. \tag{6}
\]
Note: We still have $f(a_n)f(b_n) < 0$ (by assumption), and therefore (6) is equivalent to
\[
c_n = \frac{f(b_n) a_n - f(a_n) b_n}{f(b_n) - f(a_n)},
\]
or
\[
c_n = b_n - \frac{f(b_n)(b_n - a_n)}{f(b_n) - f(a_n)}.
\]
Notice that this last formula contains the reciprocal of the slope of the secant line through $(a_n, f(a_n))$ and $(b_n, f(b_n))$, and the choice of $c_n$ can be illustrated by Figure 1. We will come across another secant-based method later on, namely the secant method.

[Figure 1: Choice of $c_n$ for regula falsi.]

We determine the new interval as for the bisection method, i.e.,
if $f(a)f(c) < 0$ (i.e., the root lies in $(a,c)$), let $a = a$, $b = c$;
if $f(c)f(b) < 0$ (i.e., the root lies in $(c,b)$), let $a = c$, $b = b$;
if $f(c) = 0$ (i.e., $c$ is the root), stop.

Remark 1: For concave functions one of the endpoints remains fixed. Thus, the interval $[a_n, b_n]$ does not get arbitrarily small.

Remark 2: It can be shown that regula falsi converges linearly (see an example later on when we discuss general fixed point iteration).
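A Python sketch of regula falsi under the same conventions as the bisection sketch above (names and tolerances again illustrative, not from the notes). One design consequence of Remark 1: since the bracket need not shrink, the length-based criterion (b) from the bisection method is replaced by a residual test.

```python
import math

def regula_falsi(f, a, b, max_iter=100, eps=1e-12):
    """Regula falsi: bisection scheme with the secant point (6) as c_n (sketch)."""
    fa, fb = f(a), f(b)
    if math.copysign(1.0, fa) == math.copysign(1.0, fb):
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant-based choice of c_n
        fc = f(c)
        # Because of Remark 1 the interval length need not shrink, so we
        # stop on the residual |f(c)| rather than on the interval length.
        if abs(fc) < eps:
            return c
        if math.copysign(1.0, fa) != math.copysign(1.0, fc):
            b, fb = c, fc                  # root lies in (a, c)
        else:
            a, fa = c, fc                  # root lies in (c, b)
    return c

# Example: same test function as for bisection
print(regula_falsi(lambda x: math.cos(x) - x, 0.0, 1.0))
```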
3.2 Newton's Method

Let $r$ be such that $f(r) = 0$, and let $x$ be an approximation of the root close to $r$, i.e.,
\[
x + h = r, \quad h \text{ small}.
\]
The quantity $h$ can be interpreted as the correction which needs to be added to $x$ to get the exact root $r$. Recall Taylor's expansion:
\[
f(r) = f(x + h) = f(x) + h f'(x) + O(h^2),
\]
or
\[
f(r) \approx f(x) + h f'(x). \tag{7}
\]
Now $r$ is a root of $f$, i.e., $f(r) = 0$, and so (7) can be restated as
\[
0 \approx f(x) + h f'(x),
\]
or
\[
h \approx -\frac{f(x)}{f'(x)}. \tag{8}
\]
Thus, using (8), an improved approximation to the root $r$ is
\[
r = x + h \approx x - \frac{f(x)}{f'(x)}.
\]
If we embed this into an iterative scheme and also provide an initial guess $x_0$, then we obtain the Newton iteration:
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n \ge 0, \tag{9}
\]
with $x_0$ as initial guess.

Graphical Interpretation: Consider the tangent line to the graph of $f$ at $x_n$,
\[
y - f(x_n) = f'(x_n)(x - x_n), \quad \text{or} \quad y = f(x_n) + (x - x_n) f'(x_n).
\]
Now we intersect with the $x$-axis, i.e., set $y = 0$. This yields
\[
0 = f(x_n) + (x - x_n) f'(x_n) \iff x = x_n - \frac{f(x_n)}{f'(x_n)}.
\]
The last formula coincides with the Newton formula (9). Thus, in Newton's method, a new approximation to the root of $f$ is obtained by intersecting the tangent line to $f$ at a previous approximate root with the $x$-axis. Figure 2 illustrates this. The entire iterative procedure can also be viewed as an animation in the Maple worksheet 577 convergence.mws on convergence.

[Figure 2: Graphical interpretation of Newton's method.]

Convergence

Theorem 3.1 If $f$ has a simple zero at $r$ and $f \in C^2[r - \delta, r + \delta]$ for a suitably small $\delta$, then Newton's method will converge to the root $r$ provided it is started with $x_0 \in [r - \delta, r + \delta]$. Moreover, convergence is quadratic, i.e., there exists a constant $C$ such that
\[
|r - x_{n+1}| \le C |r - x_n|^2, \quad n \ge 0.
\]

Proof: We will use the notation $e_n = r - x_n$ for the error at step $n$. Then, following (9),
\[
e_{n+1} = r - x_{n+1} = r - x_n + \frac{f(x_n)}{f'(x_n)} = e_n + \frac{f(x_n)}{f'(x_n)} = \frac{e_n f'(x_n) + f(x_n)}{f'(x_n)}. \tag{10}
\]
On the other hand, via Taylor expansion we know
\[
0 = f(r) = f(x_n + e_n) = f(x_n) + e_n f'(x_n) + \frac{e_n^2}{2} f''(\xi_n),
\]
with $\xi_n$ between $x_n$ and $x_n + e_n = r$. This immediately implies
\[
e_n f'(x_n) + f(x_n) = -\frac{1}{2} e_n^2 f''(\xi_n). \tag{11}
\]
By inserting (11) into (10) we get
\[
e_{n+1} = -\frac{1}{2} \frac{f''(\xi_n)}{f'(x_n)} e_n^2. \tag{12}
\]
Now, if the algorithm converges, then for $x_n$ and $\xi_n$ close to $r$ we have
\[
|e_{n+1}| \approx \underbrace{\frac{1}{2} \frac{|f''(r)|}{|f'(r)|}}_{\text{const} =: C} e_n^2,
\]
which establishes quadratic convergence.

Now we get to the rather technical part of verifying convergence. We begin by letting $\delta > 0$ and picking $x_0$ such that
\[
|r - x_0| \le \delta \iff |e_0| \le \delta. \tag{13}
\]
Then $\xi_0$ in (11) satisfies $|r - \xi_0| \le \delta$. Now consider (12) for $n = 0$:
\[
|e_1| = \frac{1}{2} \frac{|f''(\xi_0)|}{|f'(x_0)|} e_0^2,
\]
and define
\[
c(\delta) := \frac{1}{2} \, \frac{\max_{|r - \xi| \le \delta} |f''(\xi)|}{\min_{|r - x| \le \delta} |f'(x)|}.
\]
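Since the proof rests on the error recursion (12), it is instructive to watch the ratios $|e_{n+1}|/e_n^2$ in practice. Below is a minimal Python sketch, not from the notes (which use Maple); the names `newton`, `fprime`, and the test problem $f(x) = x^2 - 2$ are illustrative choices.

```python
import math

def newton(f, fprime, x0, max_iter=20, eps=1e-15):
    """Newton iteration (9): x_{n+1} = x_n - f(x_n)/f'(x_n). Sketch only."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:          # residual-based stopping test
            break
        x -= fx / fprime(x)
    return x

# Demonstrate quadratic convergence on f(x) = x^2 - 2 with root r = sqrt(2):
# by (12) the ratios |e_{n+1}| / e_n^2 should approach |f''(r)| / (2|f'(r)|).
f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
r = math.sqrt(2.0)

x = 2.0                            # initial guess x_0
e = abs(r - x)
for n in range(5):
    x -= f(x) / fp(x)              # one Newton step
    e_new = abs(r - x)
    if e > 0.0:
        print(n + 1, e_new, e_new / e ** 2)
    e = e_new

print("newton() result:", newton(f, fp, 2.0))
```

The printed ratios settle near $1/(2\sqrt{2}) \approx 0.354$, which is exactly $|f''(r)|/(2|f'(r)|)$ for this $f$, matching the constant $C$ in the proof.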