Numerical Methods
Andrew Kobin
Fall 2014

Contents

1 Introduction and Background
    1.1 Basic Calculus
    1.2 Floating Point Numbers
    1.3 Absolute and Relative Error
    1.4 Finite Digit Arithmetic
2 Solving Equations
    2.1 The Bisection Method
    2.2 Fixed Point Iteration
    2.3 Newton's Method
    2.4 The Secant Method
    2.5 Convergence of Iterative Algorithms
3 Interpolation and Approximation
    3.1 Lagrange Interpolating Polynomials
    3.2 Chebyshev Nodes
    3.3 Hermite Interpolation
    3.4 Splines
4 Numerical Differentiation and Integration
    4.1 Numerical Derivatives
    4.2 Richardson's Process
    4.3 Numerical Integration
    4.4 Round-Off Error
    4.5 Romberg Integration
5 Numerical Linear Algebra
    5.1 Linear Systems of Equations
    5.2 Iterative Methods for Solving Linear Systems
    5.3 Norms of Vectors and Matrices
    5.4 Convergence of Iterative Methods
    5.5 Least Squares Solutions
    5.6 Estimating Functions
    5.7 The Chebyshev Polynomials

1 Introduction and Background

These notes are taken from a course taught by Dr. Matt Mastin at Wake Forest University in the fall of 2014. The main text used for the course is Numerical Analysis, 9th ed., by Burden and Faires. The main topics covered in this course are:

- Review of calculus
- Floating point arithmetic and rounding error
- Convergence of algorithms
- Solving equations
- Interpolation and approximation
- Numerical derivatives and integrals
- Numerical linear algebra

1.1 Basic Calculus

A primary tool in calculus is the following.

Theorem 1.1.1 (Taylor). Suppose a function $f$ has $n$ continuous derivatives and a (not necessarily continuous) $(n+1)$-st derivative on an interval $[a,b]$, and let $x_0 \in [a,b]$. Then for every $x \in (a,b)$ there exists a number $\xi(x)$ between $x$ and $x_0$ such that we can write
$$f(x) = P_n(x) + R_n(x),$$
where $P_n(x)$ is a polynomial and $R_n(x)$ is called the error term. In particular, for $x_0 \in [a,b]$,
$$P_n(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$
and
$$R_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x - x_0)^{n+1},$$
where $\xi(x)$ is the value chosen above.

A goal will be to further understand the error term $R_n(x)$. In nontechnical terms: if we fix an $x_0 \in [a,b]$, then for any $x$ sufficiently close to $x_0$, the value of $f(x)$ will look like that of a polynomial.

Example 1.1.2. Suppose we want to compute the hypotenuse of a right triangle.

[Figure: a right triangle with legs labeled $\alpha$ and $\beta$.]

If we only know $\alpha$ and $\beta$ with some error of $\leq \varepsilon$, what is the error in a computation of the hypotenuse? In a perfect world, this length would be exactly $\sqrt{\alpha^2 + \beta^2}$, but in the real world our measurements aren't quite so precise. The thing we would actually compute is
$$\sqrt{(\alpha \pm \varepsilon)^2 + (\beta \pm \varepsilon)^2}.$$
Then the question becomes: how does this estimate compare (in terms of $\varepsilon$) to the "ideal" $\sqrt{\alpha^2 + \beta^2}$? To answer this, note that
$$\sqrt{(\alpha + \varepsilon)^2 + (\beta + \varepsilon)^2} = \sqrt{\alpha^2 + 2\alpha\varepsilon + \varepsilon^2 + \beta^2 + 2\beta\varepsilon + \varepsilon^2} = \sqrt{(\alpha^2 + \beta^2) + 2\varepsilon(\alpha + \beta) + 2\varepsilon^2},$$
which shows what our hypotenuse "function" looks like a little bit away from $(\alpha, \beta)$. We can use Taylor's Theorem (1.1.1) to estimate this in terms of $\alpha$ and $\beta$, applying it to the function $f(x) = \sqrt{x}$ at $x = \alpha^2 + \beta^2 + 2\varepsilon(\alpha + \beta) + 2\varepsilon^2$, expanded about $x_0 = \alpha^2 + \beta^2$. Let's try it with $n = 1$:
$$\begin{aligned}
f(x) &= f(\alpha^2 + \beta^2) + f'(\alpha^2 + \beta^2)\bigl(2\varepsilon(\alpha + \beta) + 2\varepsilon^2\bigr) + R_1(x) \\
&= \sqrt{\alpha^2 + \beta^2} + \frac{2\varepsilon(\alpha + \beta) + 2\varepsilon^2}{2\sqrt{\alpha^2 + \beta^2}} + R_1(x) \\
&= \sqrt{\alpha^2 + \beta^2} + \frac{\varepsilon(\alpha + \beta)}{\sqrt{\alpha^2 + \beta^2}} + \frac{\varepsilon^2}{\sqrt{\alpha^2 + \beta^2}} + R_1(x).
\end{aligned}$$
Other theorems related to Taylor's Theorem (1.1.1) state that the error term $R_1(x)$ is "small", loosely on the order of $\varepsilon^2$ or higher.
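To see the size of these terms concretely, here is a small numerical check in Python (a sketch of ours, not from the notes; the function name and the sample legs 3 and 4 are arbitrary choices). It compares the actual change in the hypotenuse against the first-order term $\varepsilon(\alpha + \beta)/\sqrt{\alpha^2 + \beta^2}$; the leftover shrinks like $\varepsilon^2$, consistent with the claim above.

    import math

    def hypotenuse_error_demo(alpha, beta, eps):
        """Compare the true perturbation of sqrt(alpha^2 + beta^2)
        against the first-order Taylor estimate."""
        ideal = math.hypot(alpha, beta)                  # exact hypotenuse
        perturbed = math.hypot(alpha + eps, beta + eps)  # what we would actually compute
        actual_error = perturbed - ideal
        first_order = eps * (alpha + beta) / ideal       # the eps(alpha+beta)/sqrt(...) term
        return actual_error, first_order, actual_error - first_order

    for eps in (1e-1, 1e-2, 1e-3):
        actual, estimate, leftover = hypotenuse_error_demo(3.0, 4.0, eps)
        print(f"eps={eps:g}: actual={actual:.3e}, first-order={estimate:.3e}, leftover={leftover:.3e}")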
In practical terms, this means that the only error that really plays a role in the above is $\dfrac{\varepsilon(\alpha + \beta)}{\sqrt{\alpha^2 + \beta^2}}$.

Exercise. Prove that if $\alpha, \beta > 0$, then $\dfrac{\alpha + \beta}{\sqrt{\alpha^2 + \beta^2}} \leq \sqrt{2}$.

Definition. A function $f$ on a set $X \subseteq \mathbb{R}$ has a limit $L$ at $x_0$, denoted
$$\lim_{x \to x_0} f(x) = L,$$
if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that $|x - x_0| < \delta$ implies $|f(x) - L| < \varepsilon$.

[Figure: the graph of $f$ near $x_0$, with the band $L - \varepsilon < y < L + \varepsilon$ lying over the interval $(x_0 - \delta, x_0 + \delta)$.]

Definition. A function $f$ is continuous at $x_0$ if $\lim_{x \to x_0} f(x) = f(x_0)$.

The set of functions that are continuous on a set $X$ is denoted $C(X)$. For example, if $X$ is an interval we will write $C[a,b]$.

Definition. Let $\{x_n\}_{n=1}^\infty$ be a sequence of real numbers. Then $\{x_n\}$ converges to some number $x$ if for every $\varepsilon > 0$ there is some $N \in \mathbb{N}$ such that $|x_n - x| < \varepsilon$ whenever $n \geq N$.

Intuitively, $\{x_n\} \to x$ if later values of $x_n$ land in smaller and smaller neighborhoods of the "target" $x$. We will also denote this by $\lim_{n \to \infty} x_n = x$.

Theorem 1.1.3. If $f$ is a function on a set of real numbers $X$, then the following are equivalent:

(1) $f$ is continuous at $x_0$.
(2) If $\{x_n\}$ is a sequence in $X$ converging to $x_0$, then the sequence $\{f(x_n)\} \subset f(X)$ converges to $f(x_0)$.

In topology, (2) is known as the sequential definition of continuity. This theorem shows that analyzing functions is really the same as looking at limits of sequences.

Theorem 1.1.4 (Mean Value Theorem). If $f \in C[a,b]$ and $f$ is differentiable on $(a,b)$, then there exists a number $c \in (a,b)$ so that
$$f'(c) = \frac{f(b) - f(a)}{b - a}.$$

[Figure: a curve over $[a,b]$ with the tangent line at $c$ parallel to the secant line through the endpoints.]

What this says is that there is a point $c$ between $a$ and $b$ such that the slope of the tangent line at $c$ is equal to the slope of the secant line through $(a, f(a))$ and $(b, f(b))$.

Theorem 1.1.5 (Intermediate Value Theorem). If $f \in C[a,b]$ and $k$ is any number between $f(a)$ and $f(b)$, then there is some $x \in (a,b)$ such that $f(x) = k$.

This is sometimes stated as: "If you draw a continuous function from $a$ to $b$, you don't pick up your pencil."

Recall Taylor's Theorem (1.1.1) from the start of the section. This is a supremely useful tool in calculus, and especially in the computational methods that will be explored later. Think about it this way: a calculator (even a high-powered computer) can only do four arithmetic operations, $+, -, \times$ and $\div$. How does a calculator compute $\sin(4)$, for example? Taylor's Theorem gives us a way of approximating values such as $\sin(4)$ in terms of basic arithmetic operations to a high degree of accuracy.

So far we have not described the error term $R_n(x)$. Taylor's Theorem tells us that
$$R_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x - x_0)^{n+1}$$
for some choice of $\xi(x)$ between $x$ and $x_0$. We write $\xi(x)$ because the value will depend on the $x$ chosen (it's a moving target). $P_n$ is called the $n$th Taylor polynomial for $f$ and $R_n$ is called the remainder or error term.
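As an illustration of how such a computation can be reduced to the four arithmetic operations, here is a short Python sketch (ours, not from the notes) that evaluates the Taylor polynomial of $\sin$ about $x_0 = 0$ term by term; each new term comes from the previous one by multiplication and division alone.

    import math

    def taylor_sin(x, n_terms):
        """Approximate sin(x) by its Taylor polynomial about x0 = 0,
        using only +, -, * and /."""
        total, term = 0.0, x          # the first term of the series is x
        for k in range(n_terms):
            total += term
            # next odd-degree term: multiply by -x^2 / ((2k+2)(2k+3))
            term *= -x * x / ((2 * k + 2) * (2 * k + 3))
        return total

    for n in (3, 6, 10):
        approx = taylor_sin(4.0, n)
        print(f"{n} terms: {approx:.10f}  (error {abs(approx - math.sin(4)):.2e})")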
Before proving Taylor's Theorem, we need a lemma.

Lemma 1.1.6. Suppose $f$ has $n + 1$ derivatives on $[a,b]$ and that there is a number $c$ so that
$$f(c) = f'(c) = \cdots = f^{(n)}(c) = 0,$$
and there is a number $d \neq c$ so that $f(d) = 0$. Then there is a number $\beta$ so that $f^{(n+1)}(\beta) = 0$.

Proof. Repeatedly use the MVT: since $f(c) = f(d) = 0$, there is a point between $c$ and $d$ where $f'$ vanishes; since $f'(c) = 0$ as well, the same argument applied to $f'$ produces a zero of $f''$, and continuing in this way yields a number $\beta$ with $f^{(n+1)}(\beta) = 0$.

Onto the proof of Taylor's Theorem (1.1.1).

Proof. Let $P_n(x)$ be the $n$th Taylor polynomial for $f$, and define $g(x) = f(x) - P_n(x)$. Then proving Taylor's Theorem comes down to showing that $g(x) = R_n(x)$ for a choice of $\xi(x)$. First,
$$g(x_0) = f(x_0) - P_n(x_0) = f(x_0) - (f(x_0) + 0 + 0 + \cdots) = 0.$$
Similarly, $g'(x_0)$ is also $0$, and in fact the first $n$ derivatives of $g$ at $x_0$ are all $0$.

So $g$ almost satisfies Lemma 1.1.6, but we don't have the extra place where $g(x) = 0$. To remedy this, take $x_1 \neq x_0$ and let $C$ be the unique constant so that
$$g(x_1) = -C(x_1 - x_0)^{n+1},$$
namely $C = \dfrac{-g(x_1)}{(x_1 - x_0)^{n+1}}$. We define a new function $h$ by $h(x) = g(x) + C(x - x_0)^{n+1}$. We claim that $h$ satisfies Lemma 1.1.6. First, $h(x_0) = g(x_0) + C(x_0 - x_0)^{n+1} = 0 + 0 = 0$, and for the first $n$ derivatives of $h$,
$$h^{(k)}(x_0) = g^{(k)}(x_0) + C \cdot \frac{(n+1)!}{(n+1-k)!}(x_0 - x_0)^{n+1-k},$$
so these are clearly all zero as well. Plugging in $x_1$ yields $h(x_1) = g(x_1) + C(x_1 - x_0)^{n+1} = g(x_1) - g(x_1) = 0$, so the conditions of Lemma 1.1.6 are satisfied for $h$.
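The claim about $h$ can be checked symbolically. The following sympy sketch (ours, not part of the notes; the choices $f = \sin$, $n = 2$, $x_0 = 0$, $x_1 = 1$ are arbitrary) builds $g = f - P_n$, picks $C$ as above, and confirms that $h$ and its first $n$ derivatives vanish at $x_0$ while $h(x_1) = 0$.

    import sympy as sp

    x = sp.symbols('x')
    f = sp.sin(x)                      # an arbitrary sufficiently differentiable f
    n = 2
    x0, x1 = sp.Integer(0), sp.Integer(1)

    # nth Taylor polynomial of f about x0
    Pn = sum(sp.diff(f, x, k).subs(x, x0) / sp.factorial(k) * (x - x0)**k
             for k in range(n + 1))
    g = f - Pn

    # the unique constant with g(x1) = -C*(x1 - x0)^(n+1)
    C = -g.subs(x, x1) / (x1 - x0)**(n + 1)
    h = g + C * (x - x0)**(n + 1)

    # h(x0) = h'(x0) = ... = h^(n)(x0) = 0, and h(x1) = 0
    print([sp.simplify(sp.diff(h, x, k).subs(x, x0)) for k in range(n + 1)])  # [0, 0, 0]
    print(sp.simplify(h.subs(x, x1)))                                         # 0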