Chapter 4

Numerical Differentiation and Integration

4.1 Numerical Differentiation

In this section, we introduce how to numerically calculate the derivative of a function. First, the derivative of the function $f$ at $x_0$ is defined as

$$f'(x_0) := \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}.$$

This formula gives an obvious way to generate an approximation to $f'(x_0)$: simply compute
$$\frac{f(x_0 + h) - f(x_0)}{h}$$
for small values of $h$. Although this way may be obvious, it is not very successful, due to our old nemesis round-off error. But it is certainly a place to start.

To approximate $f'(x_0)$, suppose that $x_0 \in (a, b)$, where $f \in C^2[a, b]$, and that $x_1 = x_0 + h$ for some $h \neq 0$ that is sufficiently small to ensure that $x_1 \in [a, b]$. We construct the first Lagrange polynomial $P_{0,1}(x)$ for $f$ determined by $x_0$ and $x_1$, with its error term:
$$f(x) = P_{0,1}(x) + \frac{(x - x_0)(x - x_1)}{2!} f''(\xi(x)) = \frac{f(x_0)(x - x_0 - h)}{-h} + \frac{f(x_0 + h)(x - x_0)}{h} + \frac{(x - x_0)(x - x_0 - h)}{2} f''(\xi(x))$$
for some $\xi(x)$ between $x_0$ and $x_1$. Differentiating gives
$$f'(x) = \frac{f(x_0 + h) - f(x_0)}{h} + D_x\!\left[\frac{(x - x_0)(x - x_0 - h)}{2} f''(\xi(x))\right] = \frac{f(x_0 + h) - f(x_0)}{h} + \frac{2(x - x_0) - h}{2} f''(\xi(x)) + \frac{(x - x_0)(x - x_0 - h)}{2} D_x\big(f''(\xi(x))\big).$$
Deleting the terms involving $\xi(x)$ gives
$$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}.$$

One difficulty with this formula is that we have no information about $D_x\big(f''(\xi(x))\big)$, so the truncation error cannot be estimated. When $x$ is $x_0$, however, the coefficient of $D_x\big(f''(\xi(x))\big)$ is 0, and the formula simplifies to

$$f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{h}{2} f''(\xi). \tag{4.1}$$

For small values of $h$, the difference quotient $[f(x_0 + h) - f(x_0)]/h$ can be used to approximate $f'(x_0)$ with an error bounded by $M|h|/2$, where $M$ is a bound on $|f''(x)|$ for $x$ between $x_0$ and $x_0 + h$. This formula is known as the forward-difference formula if $h > 0$ and the backward-difference formula if $h < 0$.

Example 4.1.1. Use the forward-difference formula to approximate the derivative of $f(x) = \ln x$ at $x_0 = 1.8$ using $h = 0.1$, $h = 0.05$, and $h = 0.01$, and determine bounds for the approximation errors.

Solution. The forward-difference formula
$$\frac{f(1.8 + h) - f(1.8)}{h}$$
with $h = 0.1$ gives
$$\frac{\ln 1.9 - \ln 1.8}{0.1} = \frac{0.64185389 - 0.58778667}{0.1} = 0.5406722.$$
Because $f''(x) = -1/x^2$ and $1.8 < \xi < 1.9$, a bound for this approximation error is
$$\frac{|h f''(\xi)|}{2} = \frac{|h|}{2\xi^2} < \frac{0.1}{2(1.8)^2} = 0.0154321.$$
The approximation and error bounds when $h = 0.05$ and $h = 0.01$ are found in a similar manner and the results are shown in the following table.

    h      f(1.8+h)      [f(1.8+h) - f(1.8)]/h     |h|/(2(1.8)^2)
  0.10    0.64185389         0.5406722                0.0154321
  0.05    0.61518564         0.5479795                0.0077160
  0.01    0.59332685         0.5540180                0.0015432

Since $f'(x) = 1/x$, the exact value of $f'(1.8)$ is $0.\overline{5}$, and in this case the error bounds are quite close to the true approximation error.
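The computations in this example are easy to reproduce. The following is a minimal Python sketch (the function and step sizes are those of the example; the error bound uses the fact that $|f''(x)| = 1/x^2$ is largest at the left endpoint $x = 1.8$):

```python
import math

def forward_difference(f, x0, h):
    """Approximate f'(x0) by the forward-difference formula (f(x0+h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

f = math.log
x0 = 1.8
for h in (0.1, 0.05, 0.01):
    approx = forward_difference(f, x0, h)
    # Error bound |h| * max|f''| / 2, with |f''(x)| = 1/x^2 maximized at x = 1.8 on [1.8, 1.8 + h]
    bound = h / (2 * x0**2)
    print(f"h = {h:5.2f}  approx = {approx:.7f}  bound = {bound:.7f}  true error = {abs(approx - 1/x0):.7f}")
```

Running this reproduces the approximations and error bounds in the table above.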

To obtain general derivative approximation formulas, suppose that $x_0, x_1, \cdots, x_n$ are $n + 1$ distinct numbers in some interval $I$ and that $f \in C^{n+1}(I)$. Using the Lagrange interpolation formula, we have
$$f(x) = \sum_{k=0}^{n} f(x_k) L_k(x) + \frac{(x - x_0)\cdots(x - x_n)}{(n+1)!} f^{(n+1)}(\xi(x)),$$
for some $\xi(x) \in I$, where $L_k(x)$ denotes the $k$-th Lagrange polynomial for $f$ at $x_0, x_1, \cdots, x_n$. Differentiating this expression gives
$$f'(x) = \sum_{k=0}^{n} f(x_k) L_k'(x) + D_x\!\left[\frac{(x - x_0)\cdots(x - x_n)}{(n+1)!}\right] f^{(n+1)}(\xi(x)) + \frac{(x - x_0)\cdots(x - x_n)}{(n+1)!}\, D_x\!\left[f^{(n+1)}(\xi(x))\right].$$

We again have a problem estimating the truncation error unless $x$ is one of the numbers $x_j$. In this case, the last term vanishes, and the formula becomes

$$f'(x_j) = \sum_{k=0}^{n} f(x_k) L_k'(x_j) + \frac{f^{(n+1)}(\xi(x_j))}{(n+1)!} \prod_{k=0,\, k\neq j}^{n} (x_j - x_k), \tag{4.2}$$
which is called an $(n+1)$-point formula to approximate $f'(x_j)$.

In general, using more evaluation points in (4.2) produces greater accuracy, although the number of functional evaluations and the growth of round-off error discourage this somewhat. The most common formulas are those involving three and five evaluation points.

We first derive some useful three-point formulas and consider aspects of their errors. Since

$$L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} \implies L_0'(x) = \frac{2x - x_1 - x_2}{(x_0 - x_1)(x_0 - x_2)}.$$
Similarly, we have

$$L_1'(x) = \frac{2x - x_0 - x_2}{(x_1 - x_0)(x_1 - x_2)} \quad\text{and}\quad L_2'(x) = \frac{2x - x_0 - x_1}{(x_2 - x_0)(x_2 - x_1)}.$$
Hence, equation (4.2) becomes (with $n = 2$)
$$f'(x_j) = f(x_0)\left[\frac{2x_j - x_1 - x_2}{(x_0 - x_1)(x_0 - x_2)}\right] + f(x_1)\left[\frac{2x_j - x_0 - x_2}{(x_1 - x_0)(x_1 - x_2)}\right] + f(x_2)\left[\frac{2x_j - x_0 - x_1}{(x_2 - x_0)(x_2 - x_1)}\right] + \frac{1}{6} f^{(3)}(\xi_j) \prod_{k=0,\, k\neq j}^{2} (x_j - x_k), \tag{4.3}$$
for each $j = 0, 1, 2$, where the notation $\xi_j$ indicates that this point depends on $x_j$.

Three-point formulas

The formulas from (4.3) become especially useful if the nodes are equally spaced, that is, when

$$x_1 = x_0 + h \quad\text{and}\quad x_2 = x_0 + 2h \quad\text{for some } h \neq 0.$$
We will assume equally-spaced nodes throughout the remainder of this section.

Using (4.3) with $x_j = x_0$, $x_1 = x_0 + h$, and $x_2 = x_0 + 2h$ gives
$$f'(x_0) = \frac{1}{h}\left[-\frac{3}{2} f(x_0) + 2 f(x_1) - \frac{1}{2} f(x_2)\right] + \frac{h^2}{3} f^{(3)}(\xi_0).$$
Doing the same for $x_j = x_1$ gives
$$f'(x_1) = \frac{1}{h}\left[-\frac{1}{2} f(x_0) + \frac{1}{2} f(x_2)\right] - \frac{h^2}{6} f^{(3)}(\xi_1),$$
and for $x_j = x_2$,
$$f'(x_2) = \frac{1}{h}\left[\frac{1}{2} f(x_0) - 2 f(x_1) + \frac{3}{2} f(x_2)\right] + \frac{h^2}{3} f^{(3)}(\xi_2).$$
Since $x_1 = x_0 + h$ and $x_2 = x_0 + 2h$, these formulas can also be expressed as
$$f'(x_0) = \frac{1}{h}\left[-\frac{3}{2} f(x_0) + 2 f(x_0 + h) - \frac{1}{2} f(x_0 + 2h)\right] + \frac{h^2}{3} f^{(3)}(\xi_0),$$

$$f'(x_0 + h) = \frac{1}{h}\left[-\frac{1}{2} f(x_0) + \frac{1}{2} f(x_0 + 2h)\right] - \frac{h^2}{6} f^{(3)}(\xi_1),$$
and
$$f'(x_0 + 2h) = \frac{1}{h}\left[\frac{1}{2} f(x_0) - 2 f(x_0 + h) + \frac{3}{2} f(x_0 + 2h)\right] + \frac{h^2}{3} f^{(3)}(\xi_2).$$
As a matter of convenience, the variable substitution $x_0$ for $x_0 + h$ is used in the middle equation to change this formula to an approximation for $f'(x_0)$. A similar change, $x_0$ for $x_0 + 2h$, is used in the last equation. This gives three formulas for approximating $f'(x_0)$:
$$f'(x_0) = \frac{1}{2h}\left[-3 f(x_0) + 4 f(x_0 + h) - f(x_0 + 2h)\right] + \frac{h^2}{3} f^{(3)}(\xi_0),$$
$$f'(x_0) = \frac{1}{2h}\left[f(x_0 + h) - f(x_0 - h)\right] - \frac{h^2}{6} f^{(3)}(\xi_1),$$
and
$$f'(x_0) = \frac{1}{2h}\left[3 f(x_0) - 4 f(x_0 - h) + f(x_0 - 2h)\right] + \frac{h^2}{3} f^{(3)}(\xi_2).$$
Finally, note that the last of these equations can be obtained from the first by simply replacing $h$ with $-h$, so there are actually only two formulas:

Three-point Endpoint formula

$$f'(x_0) = \frac{1}{2h}\left[-3 f(x_0) + 4 f(x_0 + h) - f(x_0 + 2h)\right] + \frac{h^2}{3} f^{(3)}(\xi_0), \tag{4.4}$$
where $\xi_0$ lies between $x_0$ and $x_0 + 2h$.

Three-point Midpoint formula

$$f'(x_0) = \frac{1}{2h}\left[f(x_0 + h) - f(x_0 - h)\right] - \frac{h^2}{6} f^{(3)}(\xi_1), \tag{4.5}$$
where $\xi_1$ lies between $x_0 - h$ and $x_0 + h$.

Although the errors in both (4.4) and (4.5) are $O(h^2)$, the error in (4.5) is approximately half the error in (4.4). This is because (4.5) uses data on both sides of $x_0$ and (4.4) uses data on only one side. Note also that $f$ needs to be evaluated at only two points in (4.5), whereas in (4.4) three evaluations are needed. The approximation in (4.4) is useful near the ends of an interval, because information about $f$ outside the interval may not be available.

The methods presented in (4.4) and (4.5) are called three-point formulas. Similarly, there are five-point formulas that involve evaluating the function at two additional points. The error term for these formulas is $O(h^4)$. One common five-point formula is used to determine approximations for the derivative at the midpoint.

Five-point Midpoint formula

$$f'(x_0) = \frac{1}{12h}\left[f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h)\right] + \frac{h^4}{30} f^{(5)}(\xi), \tag{4.6}$$
where $\xi$ lies between $x_0 - 2h$ and $x_0 + 2h$. The derivation of this formula is considered in a later section. The other five-point formula is used for approximations at the endpoints.

Five-point Endpoint formula

$$f'(x_0) = \frac{1}{12h}\left[-25 f(x_0) + 48 f(x_0 + h) - 36 f(x_0 + 2h) + 16 f(x_0 + 3h) - 3 f(x_0 + 4h)\right] + \frac{h^4}{5} f^{(5)}(\xi), \tag{4.7}$$
where $\xi$ lies between $x_0$ and $x_0 + 4h$. Left-endpoint approximations are found using this formula with $h > 0$ and right-endpoint approximations with $h < 0$.

Example 4.1.2. Values for $f(x) = x e^x$ are given in the following table. Use all the applicable three-point and five-point formulas to approximate $f'(2.0)$.

    x      1.8          1.9          2.0          2.1          2.2
  f(x)    10.889365    12.703199    14.778112    17.148957    19.855030

Solution. The data in the table permit us to find four different three-point approximations. We can use the endpoint formula (4.4) with $h = 0.1$ or with $h = -0.1$, and we can use the midpoint formula (4.5) with $h = 0.1$ or with $h = 0.2$.

• Using the endpoint formula (4.4) with $h = 0.1$ gives
$$f'(2.0) \approx \frac{1}{0.2}\left[-3 f(2.0) + 4 f(2.1) - f(2.2)\right] = 5\left[-3(14.778112) + 4(17.148957) - 19.855030\right] = 22.032310.$$

• Using the endpoint formula (4.4) with $h = -0.1$ gives $f'(2.0) \approx 22.054525$.

• Using the midpoint formula (4.5) with $h = 0.1$ gives
$$f'(2.0) \approx \frac{1}{0.2}\left[f(2.1) - f(1.9)\right] = 5\left(17.148957 - 12.703199\right) = 22.228790.$$

• Using the midpoint formula (4.5) with $h = 0.2$ gives $f'(2.0) \approx 22.414163$.

• The only five-point formula for which the table gives sufficient data is the midpoint formula (4.6) with $h = 0.1$. This gives
$$f'(2.0) \approx \frac{1}{1.2}\left[f(1.8) - 8 f(1.9) + 8 f(2.1) - f(2.2)\right] = \frac{1}{1.2}\left[10.889365 - 8(12.703199) + 8(17.148957) - 19.855030\right] = 22.166999.$$

If we had no other information, we would accept the five-point midpoint approximation using $h = 0.1$ as the most accurate, and expect the true value to be between that approximation and the three-point midpoint approximation, that is, in the interval $[22.166, 22.229]$.

The true value in this case is $f'(2.0) = (2 + 1)e^2 = 22.167168$, so the approximation errors are actually:

• Three-point endpoint with $h = 0.1$: $1.35 \times 10^{-1}$;
• Three-point endpoint with $h = -0.1$: $1.13 \times 10^{-1}$;

• Three-point midpoint with $h = 0.1$: $6.16 \times 10^{-2}$;
• Three-point midpoint with $h = 0.2$: $2.47 \times 10^{-1}$;
• Five-point midpoint with $h = 0.1$: $1.69 \times 10^{-4}$.
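The computations in Example 4.1.2 can be checked with a short script. This is a sketch that evaluates $f(x) = x e^x$ directly rather than using only the tabulated values:

```python
import math

def f(x):
    return x * math.exp(x)

x0, h = 2.0, 0.1

# Three-point endpoint formula (4.4)
endpoint = (-3*f(x0) + 4*f(x0 + h) - f(x0 + 2*h)) / (2*h)
# Three-point midpoint formula (4.5)
midpoint3 = (f(x0 + h) - f(x0 - h)) / (2*h)
# Five-point midpoint formula (4.6)
midpoint5 = (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12*h)

exact = (x0 + 1) * math.exp(x0)   # d/dx (x e^x) = (x + 1) e^x
for name, value in [("3-pt endpoint", endpoint),
                    ("3-pt midpoint", midpoint3),
                    ("5-pt midpoint", midpoint5)]:
    print(f"{name}: {value:.6f}  error = {abs(value - exact):.2e}")
```

The five-point midpoint error printed here is several orders of magnitude smaller than the three-point errors, as the table of errors above indicates.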

Methods can also be derived to find approximations to higher derivatives of a function using only tabulated values of the function at various points. The derivation is algebraically tedious, however, so only a representative procedure will be presented.

Expand a function $f$ in a third Taylor polynomial about a point $x_0$ and evaluate at $x_0 + h$ and $x_0 - h$. Then we have
$$f(x_0 + h) = f(x_0) + f'(x_0) h + \frac{1}{2} f''(x_0) h^2 + \frac{1}{6} f^{(3)}(x_0) h^3 + \frac{1}{24} f^{(4)}(\xi_1) h^4,$$
and
$$f(x_0 - h) = f(x_0) - f'(x_0) h + \frac{1}{2} f''(x_0) h^2 - \frac{1}{6} f^{(3)}(x_0) h^3 + \frac{1}{24} f^{(4)}(\xi_{-1}) h^4,$$
where $x_0 - h < \xi_{-1} < x_0 < \xi_1 < x_0 + h$. If we add these equations, the terms involving $f'(x_0)$ and $f^{(3)}(x_0)$ cancel, so we have

$$f(x_0 + h) + f(x_0 - h) = 2 f(x_0) + f''(x_0) h^2 + \frac{1}{24}\left[f^{(4)}(\xi_1) + f^{(4)}(\xi_{-1})\right] h^4.$$

Solving this equation for $f''(x_0)$ gives
$$f''(x_0) = \frac{1}{h^2}\left[f(x_0 - h) - 2 f(x_0) + f(x_0 + h)\right] - \frac{h^2}{24}\left[f^{(4)}(\xi_1) + f^{(4)}(\xi_{-1})\right]. \tag{4.8}$$

Suppose $f^{(4)}$ is continuous on $[x_0 - h, x_0 + h]$. Since $\frac{1}{2}\left[f^{(4)}(\xi_1) + f^{(4)}(\xi_{-1})\right]$ is between $f^{(4)}(\xi_1)$ and $f^{(4)}(\xi_{-1})$, the Intermediate Value Theorem implies that a number $\xi$ exists between $\xi_1$ and $\xi_{-1}$, and hence in $(x_0 - h, x_0 + h)$, with

$$f^{(4)}(\xi) = \frac{1}{2}\left[f^{(4)}(\xi_1) + f^{(4)}(\xi_{-1})\right].$$
This allows us to rewrite (4.8) in its final form

$$f''(x_0) = \frac{1}{h^2}\left[f(x_0 - h) - 2 f(x_0) + f(x_0 + h)\right] - \frac{h^2}{12} f^{(4)}(\xi) \tag{4.9}$$

for some $\xi \in (x_0 - h, x_0 + h)$. If $f^{(4)}$ is continuous on $[x_0 - h, x_0 + h]$, it is also bounded, and the approximation is $O(h^2)$.

Example 4.1.3. In Example 4.1.2, we used the data to approximate the first derivative of $f(x) = x e^x$ at $x = 2.0$. Use the formula (4.9) to approximate $f''(2.0)$.

Solution. The data permit us to determine two approximations for $f''(2.0)$. Using (4.9) with $h = 0.1$ gives
$$f''(2.0) \approx \frac{1}{0.01}\left[f(1.9) - 2 f(2.0) + f(2.1)\right] = 100\left[12.703199 - 2(14.778112) + 17.148957\right] = 29.593200,$$
and using (4.9) with $h = 0.2$ gives
$$f''(2.0) \approx \frac{1}{0.04}\left[f(1.8) - 2 f(2.0) + f(2.2)\right] = 25\left[10.889365 - 2(14.778112) + 19.855030\right] = 29.704275.$$
Because $f''(x) = (x + 2) e^x$, the exact value is $f''(2.0) = 29.556224$. Hence, the actual errors are $-3.7 \times 10^{-2}$ and $-1.48 \times 10^{-1}$, respectively.
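A brief sketch of formula (4.9) applied to this example, again evaluating $f(x) = x e^x$ directly rather than from the table:

```python
import math

def second_derivative(f, x0, h):
    """Central approximation (4.9): f''(x0) ~ [f(x0-h) - 2 f(x0) + f(x0+h)] / h^2."""
    return (f(x0 - h) - 2*f(x0) + f(x0 + h)) / h**2

f = lambda x: x * math.exp(x)
exact = (2.0 + 2) * math.exp(2.0)          # f''(x) = (x + 2) e^x
for h in (0.1, 0.2):
    approx = second_derivative(f, 2.0, h)
    print(f"h = {h}: {approx:.6f}  error = {abs(approx - exact):.2e}")
```

Halving $h$ reduces the error by roughly a factor of four, consistent with the $O(h^2)$ error term in (4.9).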

Round-off error instability

It is particularly important to pay attention to round-off error when approximating derivatives. To illustrate the situation, let us examine the three-point midpoint formula (4.5),
$$f'(x_0) = \frac{1}{2h}\left[f(x_0 + h) - f(x_0 - h)\right] - \frac{h^2}{6} f^{(3)}(\xi_1),$$
more closely. Suppose that in evaluating $f(x_0 + h)$ and $f(x_0 - h)$ we encounter round-off errors $e(x_0 + h)$ and $e(x_0 - h)$. Then our computations actually use the values $\tilde{f}(x_0 + h)$ and $\tilde{f}(x_0 - h)$, which are related to the true values by

$$f(x_0 + h) = \tilde{f}(x_0 + h) + e(x_0 + h) \quad\text{and}\quad f(x_0 - h) = \tilde{f}(x_0 - h) + e(x_0 - h).$$
The total error in the approximation,
$$f'(x_0) - \frac{\tilde{f}(x_0 + h) - \tilde{f}(x_0 - h)}{2h} = \frac{e(x_0 + h) - e(x_0 - h)}{2h} - \frac{h^2}{6} f^{(3)}(\xi_1),$$
is due both to round-off error, the first part, and to truncation error. If we assume that the round-off errors $e(x_0 \pm h)$ are bounded by some number $\varepsilon > 0$ and that the third derivative of $f$ is bounded by a number $M > 0$, then
$$\left|f'(x_0) - \frac{\tilde{f}(x_0 + h) - \tilde{f}(x_0 - h)}{2h}\right| \leq \frac{\varepsilon}{h} + \frac{h^2}{6} M.$$
To reduce the truncation error, $h^2 M/6$, we need to reduce $h$. But as $h$ is reduced, the round-off error $\varepsilon/h$ grows. In practice, then, it is seldom advantageous to let $h$ be too small, because in that case the round-off error will dominate the calculations.

Example 4.1.4. Consider using the values in the following table to approximate $f'(0.900)$, where $f(x) = \sin x$. The true value is $\cos 0.900 = 0.62161$.

     x       sin x          x       sin x
   0.800    0.71736       0.901    0.78395
   0.850    0.75128       0.902    0.78457
   0.880    0.77074       0.905    0.78643
   0.890    0.77707       0.910    0.78950
   0.895    0.78021       0.920    0.79560
   0.898    0.78208       0.950    0.81342
   0.899    0.78270       1.000    0.84147

Solution. The formula
$$f'(0.900) \approx \frac{f(0.900 + h) - f(0.900 - h)}{2h}$$
with different values of $h$ gives the approximations listed in the following table.

     h      Approximation to f'(0.900)     Error
   0.001           0.62500                0.00339
   0.002           0.62250                0.00089
   0.005           0.62200                0.00039
   0.010           0.62150                0.00011
   0.020           0.62150                0.00011
   0.050           0.62140                0.00021
   0.100           0.62055                0.00106

The optimal choice for $h$ appears to lie between 0.005 and 0.05. We can use calculus to verify that a minimum for the error function

" h2 e h M, p q“h ` 6 occurs at h 3" M 1 3,where “p { q {

$$M = \max_{x \in [0.800,\, 1.000]} |f^{(3)}(x)| = \max_{x \in [0.800,\, 1.000]} |\cos x| = \cos 0.8 \approx 0.69671.$$
Because values of $f$ are given to five decimal places, we will assume that the round-off error is bounded by $\varepsilon = 5 \times 10^{-6}$. Therefore, the optimal choice of $h$ is approximately
$$h = \left(\frac{3\varepsilon}{M}\right)^{1/3} \approx 0.028,$$
which is consistent with the results in the table above.

In practice, we cannot compute an optimal $h$ to use in approximating the derivative, since we have no knowledge of the third derivative of the function. But we must remain aware that reducing the step size will not always improve the approximation.
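The interplay between truncation and round-off error is easy to reproduce. The sketch below mimics the table by rounding $\sin x$ to five decimal places before differencing, so the data carry roughly the same round-off error as the tabulated values; the error first decreases and then stops improving as $h$ shrinks:

```python
import math

x0, true_value = 0.900, math.cos(0.900)

def f_rounded(x):
    # Mimic function values known only to five decimal places
    return round(math.sin(x), 5)

for h in (0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1):
    approx = (f_rounded(x0 + h) - f_rounded(x0 - h)) / (2*h)
    print(f"h = {h:5.3f}  approx = {approx:.5f}  error = {abs(approx - true_value):.5f}")
```

With exact (unrounded) values of $\sin x$, by contrast, the error would keep shrinking like $h^2$; the flattening seen here is entirely due to the limited precision of the data.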

4.2 Richardson’s Extrapolation

In this section, we introduce a method to generate high-accuracy results while using low-order formulas. This method is called Richardson’s extrapolation. It can be applied whenever it is known that an approximation technique has an error term with a predictable form, one that depends on a parameter, usually the step size h.

Suppose that for each number $h \neq 0$ we have a formula $N_1(h)$ that approximates an unknown constant $M$, and that the truncation error involved with the approximation has the form

$$M - N_1(h) = K_1 h + K_2 h^2 + K_3 h^3 + \cdots,$$
for some collection of (unknown) constants $K_1, K_2, \cdots$. The truncation error is $O(h)$, so unless there were a large variation in magnitude among the constants, in general $M - N_1(h) \approx K_1 h$.

The object of extrapolation is to find an easy way to combine these rather inaccurate $O(h)$ approximations in an appropriate way to produce formulas with a higher-order truncation error.

To see specifically how we can generate the extrapolation formulas, consider the $O(h)$ formula for approximating $M$:

$$M = N_1(h) + K_1 h + K_2 h^2 + K_3 h^3 + \cdots. \tag{4.10}$$

The formula is assumed to hold for all positive $h$, so we replace the parameter $h$ by half its value. Then we have
$$M = N_1\!\left(\frac{h}{2}\right) + K_1 \frac{h}{2} + K_2 \frac{h^2}{4} + K_3 \frac{h^3}{8} + \cdots. \tag{4.11}$$
Subtracting (4.10) from twice (4.11) eliminates the term involving $K_1$ and gives

$$M = N_1\!\left(\frac{h}{2}\right) + \left[N_1\!\left(\frac{h}{2}\right) - N_1(h)\right] + K_2\!\left(\frac{h^2}{2} - h^2\right) + K_3\!\left(\frac{h^3}{4} - h^3\right) + \cdots. \tag{4.12}$$
Define
$$N_2(h) = N_1\!\left(\frac{h}{2}\right) + \left[N_1\!\left(\frac{h}{2}\right) - N_1(h)\right] = 2 N_1\!\left(\frac{h}{2}\right) - N_1(h).$$
Then (4.12) is an $O(h^2)$ approximation formula for $M$:
$$M = N_2(h) - \frac{K_2}{2} h^2 - \frac{3 K_3}{4} h^3 - \cdots. \tag{4.13}$$

Example 4.2.1. In Example 4.1.1, we used the forward-difference method with $h = 0.1$ and $h = 0.05$ to find approximations to $f'(1.8)$ for $f(x) = \ln x$. Assume that this formula has truncation error $O(h)$ and use extrapolation on these values to see if this results in a better approximation.

Solution. In Example 4.1.1, we found that $f'(1.8) \approx 0.5406722$ with $h = 0.1$ and $f'(1.8) \approx 0.5479795$ with $h = 0.05$. This implies that
$$N_1(0.1) = 0.5406722 \quad\text{and}\quad N_1(0.05) = 0.5479795.$$
Extrapolating these results gives the new approximation

$$N_2(0.1) = N_1(0.05) + \left(N_1(0.05) - N_1(0.1)\right) = 0.555287.$$
The $h = 0.1$ and $h = 0.05$ results were found to be accurate to within $1.5 \times 10^{-2}$ and $7.7 \times 10^{-3}$, respectively. Because $f'(1.8) = 1/1.8 = 0.5555\cdots$, the extrapolated value is accurate to within $2.7 \times 10^{-4}$.

In general, extrapolation can be applied whenever the truncation error for a formula has the form
$$\sum_{j=1}^{m-1} K_j h^{\alpha_j} + O(h^{\alpha_m}),$$
for a collection of constants $K_j$ and when $\alpha_1 < \alpha_2 < \alpha_3 < \cdots < \alpha_m$. Many formulas used for extrapolation have truncation errors that contain only even powers of $h$, that is, have the form

$$M = N_1(h) + K_1 h^2 + K_2 h^4 + K_3 h^6 + \cdots. \tag{4.14}$$
Assume that the approximation has the form of (4.14). Replacing $h$ with $h/2$ gives the approximation formula
$$M = N_1\!\left(\frac{h}{2}\right) + K_1 \frac{h^2}{4} + K_2 \frac{h^4}{16} + K_3 \frac{h^6}{64} + \cdots.$$
Subtracting (4.14) from 4 times this equation eliminates the $h^2$ term:

$$3M = \left[4 N_1\!\left(\frac{h}{2}\right) - N_1(h)\right] + K_2\!\left(\frac{h^4}{4} - h^4\right) + K_3\!\left(\frac{h^6}{16} - h^6\right) + \cdots.$$

Dividing this equation by 3 produces an $O(h^4)$ formula
$$M = \frac{1}{3}\left[4 N_1\!\left(\frac{h}{2}\right) - N_1(h)\right] + \frac{K_2}{3}\!\left(\frac{h^4}{4} - h^4\right) + \frac{K_3}{3}\!\left(\frac{h^6}{16} - h^6\right) + \cdots.$$
Defining
$$N_2(h) = \frac{1}{3}\left[4 N_1\!\left(\frac{h}{2}\right) - N_1(h)\right] = N_1\!\left(\frac{h}{2}\right) + \frac{1}{3}\left[N_1\!\left(\frac{h}{2}\right) - N_1(h)\right]$$
produces the approximation formula with truncation error $O(h^4)$:
$$M = N_2(h) - K_2 \frac{h^4}{4} - K_3 \frac{5 h^6}{16} + \cdots. \tag{4.15}$$

Now replace $h$ in (4.15) with $h/2$ to produce a second $O(h^4)$ formula
$$M = N_2\!\left(\frac{h}{2}\right) - K_2 \frac{h^4}{64} - K_3 \frac{5 h^6}{1024} - \cdots.$$
Subtracting (4.15) from 16 times this equation eliminates the $h^4$ term and gives

$$15 M = \left[16 N_2\!\left(\frac{h}{2}\right) - N_2(h)\right] + K_3 \frac{15 h^6}{64} + \cdots.$$
Dividing this equation by 15 produces the new $O(h^6)$ formula
$$M = \frac{1}{15}\left[16 N_2\!\left(\frac{h}{2}\right) - N_2(h)\right] + K_3 \frac{h^6}{64} + \cdots.$$
We now have the $O(h^6)$ approximation formula
$$N_3(h) = \frac{1}{15}\left[16 N_2\!\left(\frac{h}{2}\right) - N_2(h)\right] = N_2\!\left(\frac{h}{2}\right) + \frac{1}{15}\left[N_2\!\left(\frac{h}{2}\right) - N_2(h)\right].$$
Continuing this procedure gives, for each $j = 2, 3, \cdots$, the $O(h^{2j})$ approximation
$$N_j(h) = N_{j-1}\!\left(\frac{h}{2}\right) + \frac{N_{j-1}(h/2) - N_{j-1}(h)}{4^{j-1} - 1}.$$

Example 4.2.2. Taylor's expansion can be used to show that the three-point midpoint formula (4.5) to approximate $f'(x_0)$ can be expressed with an error formula:
$$f'(x_0) = \frac{1}{2h}\left[f(x_0 + h) - f(x_0 - h)\right] - \frac{h^2}{6} f^{(3)}(x_0) - \frac{h^4}{120} f^{(5)}(x_0) - \cdots.$$
Find approximations of order $O(h^2)$, $O(h^4)$, and $O(h^6)$ for $f'(2.0)$ when $f(x) = x e^x$ and $h = 0.2$.

Solution. The constants $K_1 = -f^{(3)}(x_0)/6$, $K_2 = -f^{(5)}(x_0)/120$, $\cdots$, are not likely to be known, but this is not important. We only need to know that these constants exist in order to apply extrapolation. We have the $O(h^2)$ approximation
$$f'(x_0) = N_1(h) - \frac{h^2}{6} f^{(3)}(x_0) - \frac{h^4}{120} f^{(5)}(x_0) - \cdots,$$
where
$$N_1(h) = \frac{1}{2h}\left[f(x_0 + h) - f(x_0 - h)\right].$$

This gives us the first $O(h^2)$ approximations
$$N_1(0.2) = \frac{1}{0.4}\left[f(2.2) - f(1.8)\right] = 22.414160 \quad\text{and}\quad N_1(0.1) = \frac{1}{0.2}\left[f(2.1) - f(1.9)\right] = 22.228786.$$

Combining these to produce the first $O(h^4)$ approximation gives
$$N_2(0.2) = N_1(0.1) + \frac{1}{3}\left(N_1(0.1) - N_1(0.2)\right) = 22.166995.$$

To determine an $O(h^6)$ formula we need another $O(h^4)$ result, which requires us to find the third $O(h^2)$ approximation
$$N_1(0.05) = \frac{1}{0.1}\left[f(2.05) - f(1.95)\right] = 22.182564.$$

We can now find the $O(h^4)$ approximation
$$N_2(0.1) = N_1(0.05) + \frac{1}{3}\left(N_1(0.05) - N_1(0.1)\right) = 22.167157$$
and finally the $O(h^6)$ approximation
$$N_3(0.2) = N_2(0.1) + \frac{1}{15}\left(N_2(0.1) - N_2(0.2)\right) = 22.167168.$$

We would expect the final approximation to be accurate to at least 22.167 because $N_2(0.2)$ and $N_3(0.2)$ agree to these digits. In fact, $N_3(0.2)$ is accurate to all the listed digits.

The following table shows the order in which the approximations are generated when

$$M = N_1(h) + K_1 h^2 + K_2 h^4 + K_3 h^6 + \cdots.$$

   O(h^2)         O(h^4)         O(h^6)         O(h^8)
  1: N_1(h)
  2: N_1(h/2)    3: N_2(h)
  4: N_1(h/4)    5: N_2(h/2)    6: N_3(h)
  7: N_1(h/8)    8: N_2(h/4)    9: N_3(h/2)   10: N_4(h)

It is conservatively assumed that the true result is accurate at least to within the agreement of the bottom two results in the diagonal, in this case, to within $|N_3(h) - N_4(h)|$.

Each column beyond the first in the extrapolation table is obtained by a simple averaging process, so the technique can produce high-order approximations with minimal computational cost. However, as $k$ increases, the round-off error in $N_1(h/2^k)$ will generally increase, because the instability of numerical differentiation is related to the step size $h/2^k$. Also, the higher-order formulas depend increasingly on the entry to their immediate left in the table, which is the reason we recommend comparing the final diagonal entries to ensure accuracy.

The technique of extrapolation is used throughout the course. The most prominent applications occur in approximating integrals and in determining approximate solutions to differential equations.
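The table above translates directly into a short routine. This is a sketch that assumes an error expansion in even powers of $h$ as in (4.14); the base formula $N_1$ used for illustration is the centered difference from Example 4.2.2:

```python
import math

def richardson_table(N1, h, rows=4):
    """Build the extrapolation table row by row: entry (k, j) is N_{j+1}(h / 2^(k-j)).
    Assumes the truncation error of N1 contains only even powers of h."""
    table = [[N1(h / 2**k)] for k in range(rows)]
    for k in range(1, rows):
        for j in range(1, k + 1):
            improved = table[k][j-1] + (table[k][j-1] - table[k-1][j-1]) / (4**j - 1)
            table[k].append(improved)
    return table

# Illustrative driver: centered difference for f(x) = x e^x at x0 = 2 with h = 0.2
f = lambda x: x * math.exp(x)
N1 = lambda h: (f(2 + h) - f(2 - h)) / (2*h)
for row in richardson_table(N1, 0.2, rows=3):
    print("  ".join(f"{v:.6f}" for v in row))
```

The three rows printed by this sketch reproduce the values $N_1$, $N_2$, and $N_3$ computed in Example 4.2.2, with the bottom-right entry 22.167168.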

4.3 Elements of Numerical Integration

In this section, we introduce methods for evaluating the definite integral of a function that has no explicit antiderivative or whose antiderivative is not easy to obtain. The basic method involved is called numerical quadrature. It uses a sum $\sum_{i=0}^{n} a_i f(x_i)$ to approximate the integral $\int_a^b f(x)\,dx$.

The methods of quadrature are based on the interpolation polynomials. The basic idea is to select a set of distinct nodes $\{x_0, x_1, \cdots, x_n\}$ from the interval $[a, b]$. Then, integrate the Lagrange interpolating polynomial
$$P_n(x) = \sum_{i=0}^{n} f(x_i) L_i(x)$$
and its truncation error term over $[a, b]$ to obtain
$$\int_a^b f(x)\,dx = \int_a^b \sum_{i=0}^{n} f(x_i) L_i(x)\,dx + \int_a^b \prod_{i=0}^{n} (x - x_i)\, \frac{f^{(n+1)}(\xi(x))}{(n+1)!}\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{1}{(n+1)!} \int_a^b \prod_{i=0}^{n} (x - x_i)\, f^{(n+1)}(\xi(x))\,dx,$$
where $\xi(x)$ is in $[a, b]$ for each $x$ and
$$a_i = \int_a^b L_i(x)\,dx, \quad\text{for each } i = 0, 1, \cdots, n.$$
The quadrature formula is, therefore,
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} a_i f(x_i),$$
with error given by
$$E(f) = \frac{1}{(n+1)!} \int_a^b \prod_{i=0}^{n} (x - x_i)\, f^{(n+1)}(\xi(x))\,dx.$$
We first consider formulas produced by using first and second Lagrange polynomials with equally-spaced nodes. This gives the Trapezoidal rule and Simpson's rule.

Trapezoidal rule

Let $x_0 = a$, $x_1 = b$, and $h = b - a$. Denote the linear Lagrange polynomial
$$P_1(x) = \frac{(x - x_1)}{(x_0 - x_1)} f(x_0) + \frac{(x - x_0)}{(x_1 - x_0)} f(x_1).$$
Then,

$$\int_a^b f(x)\,dx = \int_{x_0}^{x_1} \left[\frac{(x - x_1)}{(x_0 - x_1)} f(x_0) + \frac{(x - x_0)}{(x_1 - x_0)} f(x_1)\right] dx + \frac{1}{2}\int_{x_0}^{x_1} f''(\xi(x))(x - x_0)(x - x_1)\,dx. \tag{4.16}$$
The product $(x - x_0)(x - x_1)$ does not change sign on $[x_0, x_1]$, so the Weighted Mean Value Theorem for Integrals (Theorem 1.1.21) can be applied to the error term to give, for some $\xi \in (x_0, x_1)$,
$$\int_{x_0}^{x_1} f''(\xi(x))(x - x_0)(x - x_1)\,dx = f''(\xi)\int_{x_0}^{x_1} (x - x_0)(x - x_1)\,dx = -\frac{h^3}{6} f''(\xi).$$

Consequently, (4.16) becomes

$$\int_a^b f(x)\,dx = \left[\frac{(x - x_1)^2}{2(x_0 - x_1)} f(x_0) + \frac{(x - x_0)^2}{2(x_1 - x_0)} f(x_1)\right]_{x_0}^{x_1} - \frac{h^3}{12} f''(\xi) = \frac{(x_1 - x_0)}{2}\left[f(x_0) + f(x_1)\right] - \frac{h^3}{12} f''(\xi).$$

Using the notation $h = x_1 - x_0$ gives the Trapezoidal rule:
$$\int_a^b f(x)\,dx = \frac{h}{2}\left[f(x_0) + f(x_1)\right] - \frac{h^3}{12} f''(\xi).$$
When $f$ is a positive function, $\int_a^b f(x)\,dx$ is approximated by the area in a trapezoid, as shown in the following figure. The error term for the Trapezoidal rule involves $f''$, so the rule gives

Figure 4.1: Graphical illustration for the Trapezoidal rule.

the exact result when applied to any function whose second derivative is identically zero, that is, any polynomial of degree one or less.

Simpson’s rule

Simpson's rule results from integrating over $[a, b]$ the second Lagrange polynomial with equally-spaced nodes $x_0 = a$, $x_2 = b$, and $x_1 = a + h$, where $h = (b - a)/2$.

Figure 4.2: Graphical illustration for Simpson's rule.

Using a similar technique, one can derive the following Simpson's rule:

$$\int_a^b f(x)\,dx = \frac{h}{3}\left[f(x_0) + 4 f(x_1) + f(x_2)\right] - \frac{h^5}{90} f^{(4)}(\xi).$$
The error term in Simpson's rule involves the fourth derivative of $f$, so it gives exact results when applied to any polynomial of degree three or less.

Example 4.3.1. Compare the Trapezoidal rule and Simpson's rule approximations to $\int_0^2 f(x)\,dx$ when $f(x)$ is
(a) $x^2$;   (b) $x^4$;   (c) $(x + 1)^{-1}$;
(d) $\sqrt{1 + x^2}$;   (e) $\sin x$;   (f) $e^x$.

Solution. On $[0, 2]$ the Trapezoidal and Simpson's rules have the forms
$$\text{Trapezoid: } \int_0^2 f(x)\,dx \approx f(0) + f(2), \quad\text{and}\quad \text{Simpson's: } \int_0^2 f(x)\,dx \approx \frac{1}{3}\left[f(0) + 4 f(1) + f(2)\right].$$
When $f(x) = x^2$, they give
$$\text{Trapezoid: } \int_0^2 f(x)\,dx \approx 0^2 + 2^2 = 4, \quad\text{and}\quad \text{Simpson's: } \int_0^2 f(x)\,dx \approx \frac{1}{3}\left[0^2 + 4 \cdot 1^2 + 2^2\right] = \frac{8}{3}.$$
The approximation from Simpson's rule is exact because its truncation error involves $f^{(4)}$, which is identically 0 when $f(x) = x^2$. The results to three places for the functions are summarized in the following table. Notice that in each instance Simpson's rule is significantly superior.

               (a)      (b)      (c)      (d)      (e)      (f)
 Exact value  2.667    6.400    1.099    2.958    1.416    6.389
 Trapezoidal  4.000   16.000    1.333    3.326    0.909    8.389
 Simpson's    2.667    6.667    1.111    2.964    1.425    6.421
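The table can be reproduced with a few lines of Python. This sketch codes the single-interval Trapezoidal and Simpson's rules on $[0, 2]$ and applies them to the six test functions:

```python
import math

def trapezoid(f, a, b):
    """Trapezoidal rule: (b - a)/2 * [f(a) + f(b)]."""
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    """Simpson's rule with h = (b - a)/2: h/3 * [f(a) + 4 f((a+b)/2) + f(b)]."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4*f(m) + f(b))

tests = {
    "x^2":         lambda x: x**2,
    "x^4":         lambda x: x**4,
    "1/(x+1)":     lambda x: 1 / (x + 1),
    "sqrt(1+x^2)": lambda x: math.sqrt(1 + x**2),
    "sin x":       lambda x: math.sin(x),
    "e^x":         lambda x: math.exp(x),
}
for name, f in tests.items():
    print(f"{name:12s}  Trapezoid = {trapezoid(f, 0, 2):7.3f}  Simpson = {simpson(f, 0, 2):7.3f}")
```

The output matches the Trapezoidal and Simpson's rows of the table to the three places shown.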

The standard derivation of quadrature error formulas is based on determining the class of poly- nomials for which these formulas produce exact results. The next definition is used to facilitate the discussion of this derivation.

Definition 4.3.2. The degree of accuracy, or precision, of a quadrature formula is the largest positive integer $n$ such that the formula is exact for $x^k$, for each $k = 0, 1, \cdots, n$.

The Trapezoidal and Simpson's rules have degrees of precision one and three, respectively, by definition. The Trapezoidal and Simpson's rules are examples of a class of methods known as Newton-Cotes formulas. There are two types of Newton-Cotes formulas, open and closed.

Closed Newton-Cotes Formulas

The $(n+1)$-point closed Newton-Cotes formula uses nodes $x_i = x_0 + ih$, for $i = 0, 1, \cdots, n$, where $x_0 = a$, $x_n = b$, and $h = (b - a)/n$. It is called closed because the endpoints of the closed interval $[a, b]$ are included as nodes. See the following figure for illustration.

Figure 4.3: Graphical illustration for the closed Newton-Cotes formula.

The formula assumes the form
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} a_i f(x_i), \quad\text{where}\quad a_i = \int_a^b L_i(x)\,dx = \int_a^b \prod_{j=0,\, j\neq i}^{n} \frac{(x - x_j)}{(x_i - x_j)}\,dx.$$
The error analysis associated with the closed Newton-Cotes formulas can be summarized in the following theorem.

Theorem 4.3.3. Suppose that $\sum_{i=0}^{n} a_i f(x_i)$ denotes the $(n+1)$-point closed Newton-Cotes formula with $x_0 = a$, $x_n = b$, and $h = (b - a)/n$. There exists $\xi \in (a, b)$ for which
$$\int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+3} f^{(n+2)}(\xi)}{(n+2)!} \int_0^n t^2 (t - 1)\cdots(t - n)\,dt,$$
if $n$ is even and $f \in C^{n+2}[a, b]$, and
$$\int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+2} f^{(n+1)}(\xi)}{(n+1)!} \int_0^n t (t - 1)\cdots(t - n)\,dt,$$
if $n$ is odd and $f \in C^{n+1}[a, b]$.

Note that when $n$ is an even number the degree of precision is $n + 1$, although the interpolation polynomial is of degree at most $n$. When $n$ is odd, the degree of precision is only $n$. We remark that when $n = 1$ the Newton-Cotes formula is the Trapezoidal rule, and when $n = 2$ the formula becomes Simpson's rule.

Open Newton-Cotes Formulas

The open Newton-Cotes formulas do not include the endpoints of $[a, b]$ as nodes. They use the nodes $x_i = x_0 + ih$, for each $i = 0, 1, \cdots, n$, where $h = (b - a)/(n + 2)$ and $x_0 = a + h$. This implies that $x_n = b - h$, so we label the endpoints by setting $x_{-1} = a$ and $x_{n+1} = b$.

Theorem 4.3.4. Suppose that $\sum_{i=0}^{n} a_i f(x_i)$ denotes the $(n+1)$-point open Newton-Cotes formula with $x_{-1} = a$, $x_{n+1} = b$, and $h = (b - a)/(n + 2)$. There exists $\xi \in (a, b)$ for which
$$\int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+3} f^{(n+2)}(\xi)}{(n+2)!} \int_{-1}^{n+1} t^2 (t - 1)\cdots(t - n)\,dt,$$
if $n$ is even and $f \in C^{n+2}[a, b]$, and
$$\int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + \frac{h^{n+2} f^{(n+1)}(\xi)}{(n+1)!} \int_{-1}^{n+1} t (t - 1)\cdots(t - n)\,dt,$$
if $n$ is odd and $f \in C^{n+1}[a, b]$.

As in the case of the closed methods, the degree of precision is comparatively higher for the even methods than for the odd methods. Some of the common open Newton-Cotes formulas with their error terms are as follows:

$n = 0$: Midpoint rule
$$\int_{x_{-1}}^{x_1} f(x)\,dx = 2h f(x_0) + \frac{h^3}{3} f''(\xi), \quad\text{where } x_{-1} < \xi < x_1.$$

$n = 1$:
$$\int_{x_{-1}}^{x_2} f(x)\,dx = \frac{3h}{2}\left[f(x_0) + f(x_1)\right] + \frac{3h^3}{4} f''(\xi), \quad\text{where } x_{-1} < \xi < x_2.$$

4.4 Composite Numerical Integration

The Newton-Cotes formulas are generally unsuitable for use over large integration intervals. High-degree formulas would be required, and the values of the coefficients in these formulas are difficult to obtain. In this section, we discuss a piecewise approach to numerical integration that uses the low-order Newton-Cotes formulas. These are the techniques most often applied.

We choose an even integer $n$, subdivide the interval $[a, b]$ into $n$ subintervals, and apply Simpson's rule on each consecutive pair of subintervals. See Figure 4.4. With $h = (b - a)/n$ and

Figure 4.4: Graphical illustration for the composite Simpson's rule.

$x_j = a + jh$, for each $j = 0, 1, \cdots, n$, we have
$$\int_a^b f(x)\,dx = \sum_{j=1}^{n/2} \int_{x_{2j-2}}^{x_{2j}} f(x)\,dx = \sum_{j=1}^{n/2} \left\{\frac{h}{3}\left[f(x_{2j-2}) + 4 f(x_{2j-1}) + f(x_{2j})\right] - \frac{h^5}{90} f^{(4)}(\xi_j)\right\},$$

for some $\xi_j \in (x_{2j-2}, x_{2j})$, provided that $f \in C^4[a, b]$. Using the fact that for each $j = 1, 2, \cdots, (n/2) - 1$ we have $f(x_{2j})$ appearing in the term corresponding to the interval $[x_{2j-2}, x_{2j}]$ and also in the term corresponding to the interval $[x_{2j}, x_{2j+2}]$, we can reduce this sum to
$$\int_a^b f(x)\,dx = \frac{h}{3}\left[f(x_0) + 2\sum_{j=1}^{(n/2)-1} f(x_{2j}) + 4\sum_{j=1}^{n/2} f(x_{2j-1}) + f(x_n)\right] - \frac{h^5}{90}\sum_{j=1}^{n/2} f^{(4)}(\xi_j).$$
The error associated with this approximation is

$$E(f) = -\frac{h^5}{90}\sum_{j=1}^{n/2} f^{(4)}(\xi_j),$$
where $x_{2j-2} < \xi_j < x_{2j}$, for each $j = 1, 2, \cdots, n/2$. If $f \in C^4[a, b]$, the Extreme Value Theorem implies that $f^{(4)}$ attains its maximum and minimum in $[a, b]$. Since
$$\min_{x \in [a, b]} f^{(4)}(x) \leq f^{(4)}(\xi_j) \leq \max_{x \in [a, b]} f^{(4)}(x),$$
we have
$$\frac{n}{2} \min_{x \in [a, b]} f^{(4)}(x) \leq \sum_{j=1}^{n/2} f^{(4)}(\xi_j) \leq \frac{n}{2} \max_{x \in [a, b]} f^{(4)}(x)$$
and
$$\min_{x \in [a, b]} f^{(4)}(x) \leq \frac{2}{n} \sum_{j=1}^{n/2} f^{(4)}(\xi_j) \leq \max_{x \in [a, b]} f^{(4)}(x).$$
By the Intermediate Value Theorem, there is a $\mu \in (a, b)$ such that
$$f^{(4)}(\mu) = \frac{2}{n} \sum_{j=1}^{n/2} f^{(4)}(\xi_j).$$
Thus,
$$E(f) = -\frac{h^5}{90}\left(\frac{n}{2}\right) f^{(4)}(\mu) = -\frac{h^5}{180}\, n f^{(4)}(\mu),$$
or, since $h = (b - a)/n$, we have
$$E(f) = -\frac{(b - a)}{180} h^4 f^{(4)}(\mu).$$
These observations produce the following result.

Theorem 4.4.1. Let $f \in C^4[a, b]$, $n$ be even, $h = (b - a)/n$, and $x_j = a + jh$, for each $j = 0, 1, \cdots, n$. There exists a $\mu \in (a, b)$ for which the Composite Simpson's rule for $n$ subintervals can be written with its error term as

$$\int_a^b f(x)\,dx = \frac{h}{3}\left[f(x_0) + 2\sum_{j=1}^{(n/2)-1} f(x_{2j}) + 4\sum_{j=1}^{n/2} f(x_{2j-1}) + f(x_n)\right] - \frac{(b - a)}{180} h^4 f^{(4)}(\mu).$$
Note that the error term for the Composite Simpson's rule is $O(h^4)$, whereas it was $O(h^5)$ for the standard Simpson's rule. However, these rates are not comparable, because for the standard Simpson's rule we have $h$ fixed at $(b - a)/2$, but for the Composite Simpson's rule we have $h = (b - a)/n$, for $n$ an even integer. This permits us to considerably reduce the value of $h$ when the Composite Simpson's rule is used.

The subdivision approach can be applied to any of the Newton-Cotes formulas. We summarize the results for the Trapezoidal and Midpoint rules.

Theorem 4.4.2. Let $f \in C^2[a, b]$, $h = (b - a)/n$, and $x_j = a + jh$, for each $j = 0, 1, \cdots, n$. There exists a $\mu \in (a, b)$ for which the Composite Trapezoidal rule for $n$ subintervals can be written with its error term as
$$\int_a^b f(x)\,dx = \frac{h}{2}\left[f(a) + 2\sum_{j=1}^{n-1} f(x_j) + f(b)\right] - \frac{(b - a)}{12} h^2 f''(\mu).$$

Theorem 4.4.3. Let $f \in C^2[a, b]$, $n$ be even, $h = (b - a)/(n + 2)$, and $x_j = a + (j + 1)h$, for each $j = -1, 0, 1, \cdots, n + 1$. There exists a $\mu \in (a, b)$ for which the Composite Midpoint rule for $n + 2$ subintervals can be written with its error term as
$$\int_a^b f(x)\,dx = 2h \sum_{j=0}^{n/2} f(x_{2j}) + \frac{(b - a)}{6} h^2 f''(\mu).$$

Example 4.4.4. Determine values of $h$ that will ensure an approximation error of less than 0.00002 when approximating $\int_0^\pi \sin x\,dx$ and employing (a) the Composite Trapezoidal rule and (b) the Composite Simpson's rule.

(a) The error form for the Composite Trapezoidal rule for $f(x) = \sin x$ on $[0, \pi]$ is
$$\left|\frac{\pi h^2}{12} f''(\mu)\right| = \frac{\pi h^2}{12} |\sin \mu| \leq \frac{\pi h^2}{12} < 2 \times 10^{-5}$$
to ensure sufficient accuracy with this technique. Since $h = \pi/n$ implies that $n = \pi/h$, we need
$$\frac{\pi^3}{12 n^2} < 2 \times 10^{-5} \implies n > \left(\frac{\pi^3}{12(2 \times 10^{-5})}\right)^{1/2} \approx 359.44,$$
and the Composite Trapezoidal rule requires $n \geq 360$.

(b) The error form for the Composite Simpson's rule for $f(x) = \sin x$ on $[0, \pi]$ is
$$\left|\frac{\pi h^4}{180} f^{(4)}(\mu)\right| = \frac{\pi h^4}{180} |\sin \mu| \leq \frac{\pi h^4}{180} < 2 \times 10^{-5}$$
to ensure sufficient accuracy with this technique. Using again the fact that $n = \pi/h$, we have
$$\frac{\pi^5}{180 n^4} < 2 \times 10^{-5} \implies n > \left(\frac{\pi^5}{180(2 \times 10^{-5})}\right)^{1/4} \approx 17.07.$$
Hence, the Composite Simpson's rule requires only $n \geq 18$. Moreover, the Composite Simpson's rule with $n = 18$ gives
$$\int_0^\pi \sin x\,dx \approx \frac{\pi}{54}\left[2\sum_{j=1}^{8} \sin\!\left(\frac{j\pi}{9}\right) + 4\sum_{j=1}^{9} \sin\!\left(\frac{(2j - 1)\pi}{18}\right)\right] = 2.0000104.$$
This is accurate to within about $10^{-5}$ because the true value of the integral is 2.

The Composite Simpson's rule is the clear choice if you wish to minimize computation. For comparison purposes, consider the Composite Trapezoidal rule using $h = \pi/18$ for the integral in the example above. This approximation uses the same function evaluations as the Composite Simpson's rule, but the approximation in this case,
$$\int_0^\pi \sin x\,dx \approx \frac{\pi}{36}\left[2\sum_{j=1}^{17} \sin\!\left(\frac{j\pi}{18}\right) + \sin 0 + \sin \pi\right] = 1.9949205,$$
is accurate only to about $5 \times 10^{-3}$.
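The composite rules are straightforward to implement. This is a sketch applied to $\int_0^\pi \sin x\,dx$ with $n = 18$ as above:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with an even number n of subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + j*h) for j in range(1, n, 2))   # odd-index nodes
    total += 2 * sum(f(a + j*h) for j in range(2, n, 2))   # interior even-index nodes
    return h / 3 * total

def composite_trapezoid(f, a, b, n):
    """Composite Trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + j*h) for j in range(1, n)))

print(composite_simpson(math.sin, 0, math.pi, 18))    # about 2.0000104
print(composite_trapezoid(math.sin, 0, math.pi, 18))  # about 1.9949205
```

Both calls use the same 19 function evaluations, yet the Simpson result is accurate to about $10^{-5}$ while the Trapezoidal result is off by about $5 \times 10^{-3}$, as computed above.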

Round-off error stability

In Example 4.4.4 we saw that ensuring an accuracy of $2 \times 10^{-5}$ for approximating the integral required 360 subdivisions of $[0, \pi]$ for the Composite Trapezoidal rule and only 18 for the Composite Simpson's rule. In addition to the fact that less computation is needed for Simpson's technique, you might suspect that because of fewer computations this method would also involve less round-off error. However, an important property shared by all the composite integration techniques is stability with respect to round-off error. That is, the round-off error does not depend on the number of calculations performed.

To demonstrate this fact, suppose we apply the Composite Simpson's rule with $n$ subintervals to a function $f$ on $[a, b]$ and determine the maximum bound for the round-off error. Assume that $f(x_i)$ is approximated by $\tilde{f}(x_i)$ and that
$$f(x_i) = \tilde{f}(x_i) + e_i, \quad\text{for each } i = 0, 1, \cdots, n,$$
where $e_i$ denotes the round-off error associated with using $\tilde{f}(x_i)$ to approximate $f(x_i)$. Then the accumulated error, $e(h)$, in the Composite Simpson's rule is
$$e(h) = \left|\frac{h}{3}\left[e_0 + 2\sum_{j=1}^{(n/2)-1} e_{2j} + 4\sum_{j=1}^{n/2} e_{2j-1} + e_n\right]\right| \leq \frac{h}{3}\left[|e_0| + 2\sum_{j=1}^{(n/2)-1} |e_{2j}| + 4\sum_{j=1}^{n/2} |e_{2j-1}| + |e_n|\right].$$
If the round-off errors are uniformly bounded by $\varepsilon$, then
$$e(h) \leq \frac{h}{3}\left[\varepsilon + 2\left(\frac{n}{2} - 1\right)\varepsilon + 4\left(\frac{n}{2}\right)\varepsilon + \varepsilon\right] = \frac{h}{3}\, 3n\varepsilon = nh\varepsilon.$$
But $nh = b - a$, so we have
$$e(h) \leq (b - a)\varepsilon,$$
a bound independent of $h$ and $n$. This means that, even though we may need to divide an interval into more parts to ensure accuracy, the increased computation that is required does not increase the round-off error. This result implies that the procedure is stable as $h$ approaches zero. Recall that this was not true of the numerical differentiation procedures.

4.7 Gaussian Quadrature

The Newton-Cotes formulas were derived by integrating interpolating polynomials. The error term in the interpolating polynomial of degree $n$ involves the $(n+1)$-st derivative of the function being approximated, so a Newton-Cotes formula is exact when approximating the integral of any polynomial of degree less than or equal to $n$.

All the Newton-Cotes formulas use values of the function at equally-spaced points. This restriction is convenient when the formulas are combined to form the composite rules, but it can significantly decrease the accuracy of the approximation.

Gaussian quadrature chooses the points for evaluation in an optimal, rather than equally-spaced, way. The nodes $x_0, x_1, \cdots, x_n$ in the interval $[a, b]$ and coefficients $c_0, c_1, \cdots, c_n$ are chosen to minimize the expected error obtained in the approximation
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i f(x_i).$$

To measure the degree of accuracy, we assume that the best choice of these values produces the exact result for the largest class of polynomials, that is, the choice that gives the greatest degree of precision.

The coefficients $c_0, c_1, \cdots, c_n$ in the formula are arbitrary, and the nodes $x_0, x_1, \cdots, x_n$ are restricted only by the fact that they must lie in $[a, b]$, the interval of integration. This gives us $2n + 2$ parameters to choose. If the coefficients of a polynomial are considered parameters, the class of polynomials of degree at most $2n + 1$ also contains $2n + 2$ parameters. This, then, is the largest class of polynomials for which it is reasonable to expect a formula to be exact.

We show how to select the coefficients and nodes when $n = 1$ and the interval of integration is $[-1, 1]$. We will discuss the more general situation later. Suppose that we want to determine $c_0, c_1, x_0$, and $x_1$ so that the integration formula

$$\int_{-1}^{1} f(x)\,dx \approx c_0 f(x_0) + c_1 f(x_1)$$
gives the exact result whenever $f(x)$ is a polynomial of degree $2(1) + 1 = 3$ or less, that is, when $f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$, for some collection of constants $a_0, a_1, a_2$, and $a_3$. Since we have

$$\int (a_0 + a_1 x + a_2 x^2 + a_3 x^3)\,dx = a_0 \int 1\,dx + a_1 \int x\,dx + a_2 \int x^2\,dx + a_3 \int x^3\,dx,$$
this is equivalent to showing that the formula gives exact results when $f(x)$ is $1$, $x$, $x^2$, and $x^3$. Hence, we need $c_0, c_1, x_0$, and $x_1$ so that

$$c_0 \cdot 1 + c_1 \cdot 1 = \int_{-1}^{1} 1\,dx = 2, \qquad c_0 x_0 + c_1 x_1 = \int_{-1}^{1} x\,dx = 0,$$
$$c_0 x_0^2 + c_1 x_1^2 = \int_{-1}^{1} x^2\,dx = \frac{2}{3}, \qquad c_0 x_0^3 + c_1 x_1^3 = \int_{-1}^{1} x^3\,dx = 0.$$
One can show that this system of equations has the unique solution

$$c_0 = 1, \quad c_1 = 1, \quad x_0 = -\frac{\sqrt{3}}{3}, \quad\text{and}\quad x_1 = \frac{\sqrt{3}}{3},$$
which gives the approximation formula

1 ?3 ?3 f x dx f f . 1 p q « ´ 3 ` 3 ª´ ˆ ˙ ˆ ˙ This formula has degree of accuracy 3, that is, it produces the exact result for every polynomial of degree 3 or less.

Legendre Polynomials

The technique we have described could be used to determine the nodes and coefficients for formulas that give exact results for higher-degree polynomials. The set that is relevant to our problem is the Legendre polynomials, a collection of polynomials $\{P_0(x), P_1(x), \cdots, P_n(x), \cdots\}$ with the following properties:

1. For each $n$, $P_n(x)$ is a monic polynomial of degree $n$ (i.e., the coefficient of the highest-order term is 1).

2. $\int_{-1}^{1} P(x) P_n(x)\,dx = 0$ whenever $P(x)$ is a polynomial of degree less than $n$.

3. We have $P_n(x) = \dfrac{n!}{(2n)!} \dfrac{d^n}{dx^n}\left[(x^2 - 1)^n\right]$ for $n \geq 1$ and $P_0(x) = 1$.

The first few Legendre polynomials are

$$P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = x^2 - \frac{1}{3}, \quad\text{and}\quad P_3(x) = x^3 - \frac{3}{5} x.$$
The roots of these polynomials are distinct, lie in the interval $(-1, 1)$, have a symmetry with respect to the origin, and, most importantly, are the correct choice for determining the parameters that give us the nodes and coefficients for our quadrature method.

The nodes $x_0, x_1, \cdots, x_n$ needed to produce an integral approximation formula that gives exact results for any polynomial of degree less than $2n + 2$ are the roots of the $(n+1)$-st degree Legendre polynomial. This is established by the following result.

Theorem 4.7.1. Suppose that $x_0, x_1, \cdots, x_n$ are the roots of the $(n+1)$-st Legendre polynomial $P_{n+1}(x)$ and that for each $i = 0, 1, \cdots, n$, the numbers $c_i$ are defined by
$$c_i = \int_{-1}^{1} \prod_{j=0,\, j\neq i}^{n} \frac{x - x_j}{x_i - x_j}\,dx.$$
If $P(x)$ is any polynomial of degree less than $2n + 2$, then
$$\int_{-1}^{1} P(x)\,dx = \sum_{i=0}^{n} c_i P(x_i).$$
That is, the degree of accuracy of the quadrature formula $\int_{-1}^{1} f(x)\,dx \approx \sum_{i=0}^{n} c_i f(x_i)$ is $2n + 1$.

Proof. Let us first consider the situation for a polynomial $P(x)$ of degree less than $n + 1$. Rewrite $P(x)$ in terms of the $n$-th Lagrange coefficient polynomials with nodes at the roots of the $(n+1)$-st Legendre polynomial $P_{n+1}(x)$. The error term for this representation involves the $(n+1)$-st derivative of $P(x)$. Since $P(x)$ is of degree less than $n + 1$, its $(n+1)$-st derivative is 0, and the representation is exact. So

$$P(x) = \sum_{i=0}^{n} P(x_i) L_i(x) = \sum_{i=0}^{n} \prod_{j=0,\, j\neq i}^{n} \frac{x - x_j}{x_i - x_j}\, P(x_i)$$
and

$$\int_{-1}^{1} P(x)\,dx = \int_{-1}^{1} \left[\sum_{i=0}^{n} \prod_{j=0,\, j\neq i}^{n} \frac{x - x_j}{x_i - x_j}\, P(x_i)\right] dx = \sum_{i=0}^{n}\left[\int_{-1}^{1} \prod_{j=0,\, j\neq i}^{n} \frac{x - x_j}{x_i - x_j}\,dx\right] P(x_i) = \sum_{i=0}^{n} c_i P(x_i).$$
Hence, the result is true for polynomials of degree less than $n + 1$.

Now consider a polynomial $P(x)$ of degree at least $n + 1$ but less than $2n + 2$. Divide $P(x)$ by the $(n+1)$-st Legendre polynomial $P_{n+1}(x)$. This gives two polynomials $Q(x)$ and $R(x)$, each of degree less than $n + 1$, with
$$P(x) = Q(x) P_{n+1}(x) + R(x).$$

Note that $x_i$ is a root of $P_{n+1}(x)$ for each $i = 0, 1, \cdots, n$, so we have
$$P(x_i) = Q(x_i) P_{n+1}(x_i) + R(x_i) = R(x_i).$$
We now invoke the unique power of the Legendre polynomials. First, the degree of the polynomial $Q(x)$ is less than $n + 1$, so by Legendre property (2),
$$\int_{-1}^{1} Q(x) P_{n+1}(x)\,dx = 0.$$
Then, since $R(x)$ is a polynomial of degree less than $n + 1$, the opening argument implies that
$$\int_{-1}^{1} R(x)\,dx = \sum_{i=0}^{n} c_i R(x_i).$$
Putting these facts together verifies that the formula is exact for the polynomial $P(x)$:
$$\int_{-1}^{1} P(x)\,dx = \int_{-1}^{1} \left[Q(x) P_{n+1}(x) + R(x)\right] dx = \int_{-1}^{1} R(x)\,dx = \sum_{i=0}^{n} c_i R(x_i) = \sum_{i=0}^{n} c_i P(x_i).$$
This completes the proof.

The constants $c_i$ needed for the quadrature rule can be generated from the equation in Theorem 4.7.1, but both these constants and the roots of the Legendre polynomials are extensively tabulated. The quadrature formulas formed by these special nodes (called Gaussian nodes) and weights are called Gaussian quadrature. The following table lists these values for $n = 2, 3, 4$, and 5.

  n    Roots r_{n,i}     Coefficients c_{n,i}
  2    0.5773502692      1.0000000000
      -0.5773502692      1.0000000000
  3    0.7745966692      0.5555555556
       0.0000000000      0.8888888889
      -0.7745966692      0.5555555556
  4    0.8611363116      0.3478548451
       0.3399810436      0.6521451549
      -0.3399810436      0.6521451549
      -0.8611363116      0.3478548451
  5    0.9061798459      0.2369268850
       0.5384693101      0.4786286705
       0.0000000000      0.5688888889
      -0.5384693101      0.4786286705
      -0.9061798459      0.2369268850

Example 4.7.2. Approximate $\int_{-1}^{1} e^x \cos x\,dx$ using Gaussian quadrature with $n = 3$.

Solution. Let $f(x) = e^x \cos x$. The entries in the table above give us
$$\int_{-1}^{1} e^x \cos x\,dx \approx \frac{5}{9} f(0.7745966692) + \frac{8}{9} f(0) + \frac{5}{9} f(-0.7745966692) = 1.9333904.$$
Integration by parts can be used to show that the true value of the integral is 1.9334214, so the absolute error is less than $3.2 \times 10^{-5}$.
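In practice, the Gaussian nodes and weights need not be typed in by hand. The sketch below assumes NumPy's `leggauss` routine, which returns the $n$-point Gauss-Legendre nodes and weights on $[-1, 1]$, and reproduces Example 4.7.2:

```python
import math
import numpy as np

# Nodes and weights for 3-point Gauss-Legendre quadrature (the n = 3 row of the table)
nodes, weights = np.polynomial.legendre.leggauss(3)

f = lambda x: math.exp(x) * math.cos(x)
approx = sum(w * f(x) for x, w in zip(nodes, weights))
print(approx)   # about 1.9333904; the true value is about 1.9334214
```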

An integral $\int_a^b f(x)\,dx$ over an arbitrary interval $[a, b]$ can be transformed into an integral over $[-1, 1]$ by using the change of variables
$$t = \frac{2x - a - b}{b - a} \iff x = \frac{1}{2}\left[(b - a) t + a + b\right].$$
This permits Gaussian quadrature to be applied to any interval $[a, b]$, because
$$\int_a^b f(x)\,dx = \int_{-1}^{1} f\!\left(\frac{(b - a) t + a + b}{2}\right) \frac{(b - a)}{2}\,dt. \tag{4.17}$$

Example 4.7.3. Consider the integral
$$\int_1^3 \left(x^6 - x^2 \sin(2x)\right) dx = 317.3442466.$$
(a) Compare the results for the closed Newton-Cotes formula with $n = 1$, the open Newton-Cotes formula with $n = 1$, and Gaussian quadrature when $n = 2$.
(b) Compare the results for the closed Newton-Cotes formula with $n = 2$, the open Newton-Cotes formula with $n = 2$, and Gaussian quadrature when $n = 3$.

Solution. (a) Each of the formulas in this part requires 2 evaluations of the function $f(x) = x^6 - x^2 \sin(2x)$. The Newton-Cotes approximations are
$$\text{Closed } n = 1: \quad \frac{2}{2}\left[f(1) + f(3)\right] = 731.6054420;$$
$$\text{Open } n = 1: \quad \frac{3(2/3)}{2}\left[f(5/3) + f(7/3)\right] = 188.7856682.$$
Gaussian quadrature applied to this problem requires that the integral first be transformed into a problem whose interval of integration is $[-1, 1]$. Using (4.17) gives
$$\int_1^3 \left(x^6 - x^2 \sin(2x)\right) dx = \int_{-1}^{1} \left[(t + 2)^6 - (t + 2)^2 \sin(2(t + 2))\right] dt.$$
Let $F(t) = (t + 2)^6 - (t + 2)^2 \sin(2t + 4)$. Gaussian quadrature with $n = 2$ then gives
$$\int_1^3 \left(x^6 - x^2 \sin(2x)\right) dx \approx F(-0.5773502692) + F(0.5773502692) = 306.8199344.$$

(b) Each of the formulas in this part requires 3 evaluations. The Newton-Cotes approximations are
$$\text{Closed } n = 2: \quad \frac{1}{3}\left[f(1) + 4 f(2) + f(3)\right] = 333.2380940;$$
$$\text{Open } n = 2: \quad \frac{4(1/2)}{3}\left[2 f(1.5) - f(2) + 2 f(2.5)\right] = 303.5912023.$$
Gaussian quadrature with $n = 3$, once the transformation has been done, gives
$$\int_1^3 \left(x^6 - x^2 \sin(2x)\right) dx \approx 317.2641516.$$
The Gaussian quadrature results are clearly superior in each instance.
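A sketch of Gaussian quadrature on a general interval via the change of variables (4.17), applied to the integral of Example 4.7.3 (again assuming NumPy's `leggauss` for the nodes and weights):

```python
import math
import numpy as np

def gauss_on_interval(f, a, b, npts):
    """Gaussian quadrature on [a, b] using the change of variables (4.17)."""
    t, w = np.polynomial.legendre.leggauss(npts)
    x = ((b - a) * t + a + b) / 2          # map nodes from [-1, 1] to [a, b]
    return (b - a) / 2 * sum(wi * f(xi) for xi, wi in zip(x, w))

f = lambda x: x**6 - x**2 * math.sin(2*x)
for npts in (2, 3):
    print(npts, gauss_on_interval(f, 1, 3, npts))   # about 306.82 and 317.26
```

With only two and three function evaluations, these reproduce the Gaussian quadrature values 306.8199344 and 317.2641516 quoted in the example.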