Numerical Integration and the Redemption of the Trapezoidal Rule

S. G. Johnson, MIT Applied Math, IAP Math Lecture Series 2011
Created January 2011; updated May 8, 2020

1 Numerical integration (“quadrature”)

Freshman calculus revolves around differentiation and integration. Unfortunately, while you can almost always differentiate functions by hand (if the derivative exists at all), most functions cannot be integrated by hand in closed form [e.g. try integrating $\sin(x + \cos(x))$]. Instead, they must be integrated approximately on a computer, a process known as numerical integration or quadrature. (Historically, “quadrature” was a synonym for integration in general—literally, converting areas into equivalent squares—but in modern usage “quadrature” almost exclusively refers to computational algorithms.) In particular, suppose we are computing the following definite integral, over an interval $[0, \pi]$ (chosen for convenience below):¹

$$I = \int_0^\pi f(x)\,dx$$

for some given function $f(x)$ that is a “black box” (we may only know how to evaluate it given any $x$, not how to manipulate it symbolically). In numerical integration, we want to approximate this integral $I$ by a sum $I_N$:

$$I \approx \sum_{n=0}^{N} w_n f(x_n) = I_N$$

for $N + 1$ quadrature points $x_n \in [0, \pi]$ and corresponding quadrature weights $w_n$. The central problem in numerical integration is this:

• How can we choose the points $x_n$ and the weights $w_n$ so that the error $|I_N - I|$ goes to zero as rapidly as possible as we increase $N$?

¹Note that there is no loss of generality in the choice of the $[0, \pi]$ interval: if we are computing some arbitrary integral $\int_a^b g(y)\,dy$, we can convert it back to the $[0, \pi]$ form by a change of variables: $\int_a^b g(y)\,dy = \frac{|b-a|}{\pi} \int_0^\pi g\!\left(\frac{x}{\pi}[b-a] + a\right) dx = \int_0^\pi f(x)\,dx$ for $f(x) = \frac{|b-a|}{\pi}\, g\!\left(\frac{x}{\pi}[b-a] + a\right)$.
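To make this setup concrete, here is a minimal Python sketch of the framework (the notes themselves contain no code, and the helper names here are my own): a quadrature rule is just a weighted sum of black-box function evaluations, and the footnote’s change of variables is a one-line wrapper.

```python
import numpy as np

def quadrature(f, x, w):
    """Approximate an integral by the weighted sum sum_n w[n] * f(x[n]).

    f -- black-box integrand (assumed vectorized over NumPy arrays)
    x -- quadrature points x_n in [0, pi]
    w -- corresponding quadrature weights w_n
    """
    return np.dot(w, f(x))

def to_standard_interval(g, a, b):
    """Change of variables from the footnote: wrap g on [a, b] into an
    equivalent integrand f on [0, pi], f(x) = |b-a|/pi * g(x/pi * (b-a) + a)."""
    return lambda x: abs(b - a) / np.pi * g(x / np.pi * (b - a) + a)
```

For example, the composite trapezoidal rule introduced below corresponds to the particular choice $x_n = n\pi/N$ with weights $w_n = \pi/N$, except for halved weights $\pi/(2N)$ at the two endpoints.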
This problem has a long history, dating back to ancient approximations for $\pi$ by approximating the area or circumference of a circle with polygons, and involves beautiful and sometimes surprising mathematics—the exponential blowup of polynomial (“Newton–Cotes”) approximations (Runge phenomena), orthogonal bases of polynomials (Gaussian quadrature, Chebyshev approximation, etc.), and deep forays into Fourier analysis (e.g., Clenshaw–Curtis quadrature). Quadrature is closely related to other important numerical algorithms such as numerical linear algebra (e.g. Lanczos iterations), approximation theory, and fast Fourier transform algorithms (FFTs, which themselves encompass a host of group theory, number theory, polynomial algebras, and other fascinating topics). For higher-dimensional numerical integration (cubature), the story becomes even more intricate, ranging from statistics (Monte Carlo integration) and number theory (low-discrepancy sequences and quasi-Monte Carlo methods) to fractals (sparse grids).

Here, we will just dip our toes into the problem, beginning by analyzing one of the simplest—deceptively simple!—quadrature methods, the trapezoidal rule. Not only does a full analysis of the accuracy of this method lead us directly into the far-reaching topic of Fourier series, but we also find that a simple transformation turns the lowly trapezoidal rule from one of the crudest quadrature schemes into one of the best, Clenshaw–Curtis quadrature.

Figure 1: Illustration of (a) the trapezoidal rule and (b) the composite trapezoidal rule for integrating $f(x)$ on $[0, \pi]$. In each case, we approximate the area under $f(x)$ by the area of (a) one or (b) $N$ trapezoids. That is, we evaluate $f(x)$ at $N + 1$ points $x_n = n\pi/N$ for $n = 0, 1, \ldots, N$, connect the points by straight lines, and approximate the integral by the integral of this interpolated piecewise-linear function.

2 The trapezoidal rule

The trapezoidal rule, in its most basic form, connects the endpoints $(0, f(0))$ and $(\pi, f(\pi))$ by a straight line and approximates the area by the area of a trapezoid:

$$I \approx \pi\,\frac{f(0) + f(\pi)}{2},$$

as shown in figure 1(a). Of course this approximation is rather crude, so we refine it by increasing the number of trapezoids: by “trapezoidal rule” one usually means a composite trapezoidal rule: divide $[0, \pi]$ into $N$ intervals and apply the trapezoidal rule to each one, as shown in figure 1(b). In the common case of equal intervals of width $\Delta x = \pi/N$, summing these trapezoid areas yields the following approximate integral (whose error is classically analyzed via the Euler–Maclaurin formula):

$$I_N = \frac{\pi}{N}\left[\frac{f(0) + f(\pi)}{2} + \sum_{n=1}^{N-1} f(n\pi/N)\right].$$

Note that the $1/2$ factors cancelled except for the first and last points. Clearly, as $N \to \infty$ we must have $I_N \to I$ (at least, for any Riemann-integrable function). The question now is, how fast does the error $|I - I_N|$ decrease with $N$?
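Before attacking that question, it may help to see the composite rule in code. The following Python sketch (again my own, not from the notes) is a direct transcription of the formula for $I_N$ above:

```python
import numpy as np

def trapezoidal(f, N):
    """Composite trapezoidal rule on [0, pi] with N equal intervals:
    I_N = (pi/N) * [(f(0) + f(pi))/2 + sum_{n=1}^{N-1} f(n*pi/N)]."""
    x = np.linspace(0.0, np.pi, N + 1)  # the N + 1 points x_n = n*pi/N
    fx = f(x)
    return (np.pi / N) * (0.5 * (fx[0] + fx[-1]) + fx[1:-1].sum())
```

As a quick sanity check, `trapezoidal(np.sin, 100)` gives about 1.99984, close to the exact $\int_0^\pi \sin x\,dx = 2$.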
2.1 A simple, pessimistic error estimate

A crude, but perhaps too pessimistic, upper bound on the error is as follows. The trapezoidal rule corresponds to approximating $f(x)$ by a straight line on each interval $\Delta x = \pi/N$. If we look at the Taylor expansion of $f(x)$, the lowest-order deviation from a straight line is the quadratic ($f''$) term, and this term means that $f$ deviates from a straight line by at most $\sim \Delta x^2$ within the interval (multiplied by some coefficient proportional to $f''$), as depicted in figure 2. The corresponding error area is then proportional to $\Delta x^2 \times \Delta x = \Delta x^3 \sim 1/N^3$. This is the local error from a single interval. As there are $N$ such intervals, the total error should be bounded above by $N \times 1/N^3 = 1/N^2$: the error of the trapezoidal rule decreases at worst proportional to $1/N^2$ (for continuous integrands). This is an upper bound on the error, because we have neglected the possibility that the errors from different intervals will be of opposite signs and mostly cancel if we are very lucky.

Figure 2: Schematic estimate of the local error of the trapezoidal rule. In a small interval $\Delta x = \pi/N$, the function $f(x)$ can be approximated by a Taylor expansion, and the lowest-order correction to the trapezoidal rule’s linear approximation is represented by the quadratic term. Correspondingly, the maximum deviation of $f(x)$ from the trapezoid is $\sim \Delta x^2$ within the interval (multiplied by $f''$ at some point). The resulting error area (red shaded region) is therefore proportional to $\Delta x^2 \times \Delta x = \Delta x^3$.

At first glance, however, it seems unlikely that such cancellations will occur to such an extent that the error will decrease faster than $1/N^2$, and indeed this turns out to be the case—for most $f(x)$, the trapezoidal-rule error is exactly proportional to $1/N^2$ for large $N$. However, it turns out there are some functions that do much, much better; if we can understand these special cases, and in general understand the error more rigorously, we can try to rearrange the computation so that this improvement occurs almost always.

2.2 A numerical experiment and a Miracle

In figure 3(left) is plotted the fractional error $|I_N - I|/|I|$ in the trapezoidal-rule approximation versus $N$ on a log–log scale for an arbitrarily chosen nasty-looking function $e^{\sin[(x+1)^2 + 2\cos(4x+1)]}$ shown in the inset. As expected from the crude analysis above, the errors decrease asymptotically at a rate that is almost exactly proportional to $1/N^2$ (an exact $1/N^2$ dependence is shown as a dashed line for reference). If we want eight decimal places of accuracy we need around $10^4$ function evaluations, but this is not too bad on a computer (unless we need to evaluate millions of such integrals, or unless our integrand is a much nastier function like the output of a planetary climate simulation!).

Figure 3: Left: Fractional error $|I_N - I|/|I|$ versus $N$ for the trapezoidal rule $I_N$ when integrating the function $e^{\sin[(x+1)^2 + 2\cos(4x+1)]}$ (inset). For reference, the dashed line shows $1/N^2$ dependence, demonstrating that the $I_N$ errors indeed decrease asymptotically at this rate. Right: the same fractional error, but this time for the integrand $e^{\sin[\sin^2(x+1) + 2\cos(4x+1)]}$ (inset). Note that this is a semi-log scale: the $I_N$ errors appear to be decreasing at least exponentially fast with $N$—a miracle has occurred in the trapezoidal rule?

For “fun,” we try a slightly different integrand in the right panel of figure 3: $e^{\sin[\sin^2(x+1) + 2\cos(4x+1)]}$ (inset). Again plotting error versus $N$, it seems again to be roughly a straight line (a little more wiggly than before)—but wait, this is no longer a log–log scale, this is a log–linear scale.
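The experiment is easy to reproduce. Below is a rough Python sketch (my own code; since the notes give no closed form for the exact $I$, it uses a very high-$N$ trapezoidal result as the reference value, which is an assumption on my part):

```python
import numpy as np

def trapezoidal(f, N):
    """Composite trapezoidal rule on [0, pi] with N equal intervals (as above)."""
    x = np.linspace(0.0, np.pi, N + 1)
    fx = f(x)
    return (np.pi / N) * (0.5 * (fx[0] + fx[-1]) + fx[1:-1].sum())

# The two integrands from figure 3:
f1 = lambda x: np.exp(np.sin((x + 1)**2 + 2 * np.cos(4*x + 1)))        # errors ~ 1/N^2
f2 = lambda x: np.exp(np.sin(np.sin(x + 1)**2 + 2 * np.cos(4*x + 1)))  # the "miracle" case

for name, f in [("f1", f1), ("f2", f2)]:
    ref = trapezoidal(f, 10**6)  # high-N result standing in for the exact integral
    for N in [10, 20, 50, 100, 1000, 10**4]:
        err = abs(trapezoidal(f, N) - ref) / abs(ref)
        print(f"{name}: N = {N:>6}, fractional error = {err:.1e}")
```

Running this, the `f1` errors shrink roughly as $1/N^2$, while the `f2` errors plunge far faster, consistent with the two panels of figure 3.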
