
ECE 3040 Lecture 20: Numerical Integration II © Prof. Mohamad Hassoun

This lecture covers the following topics:

- Introduction
- Richardson extrapolation & Romberg integration
- Gauss quadrature*: Two-point Gauss-Legendre formula
- Adaptive quadrature
- Matlab built-in numerical integration function: integral
- Matlab polynomial and symbolic integration: polyint and int
- Taylor series-based integration
- Multiple integrals: integral2 & integral3
- Monte Carlo integration

* Numerical integration is sometimes referred to as quadrature. This is an old term that originally meant constructing a square that has the same area as a given circle. Today, the term quadrature refers to numerical integration.

Introduction

In the previous lecture, the integration formulas could be applied either to a table of values representing an unknown function or to a known function. For tabulated data, we are limited by the number of points, and the spacing between the points, that are available. On the other hand, if the analytical form of the function is available, then we can generate as many values of the function as are required to improve the accuracy of the numerical integration.

This lecture capitalizes on the ability to generate function values to develop efficient techniques for numerical integration. Three such techniques are presented: Romberg integration, Gauss quadrature and adaptive quadrature.

High-efficiency built-in Matlab numerical integration functions (integral, integral2 and integral3) are presented. Integration of polynomials and Matlab symbolic integration are discussed. Also, Taylor series-based integration is presented.

This lecture extends the results of the previous lecture to numeric multiple integration. For example, the midpoint, trapezoidal or Simpson’s 1/3 method would be applied in the first dimension, with the values in the second dimension held constant. Then, the method would be applied to integrate the resulting numerical data as if it was one dimensional.

The lecture concludes with the Monte Carlo integration method.

Richardson Extrapolation & Romberg Integration

Richardson extrapolation is a technique that uses two numerical estimates of an integral to compute a third, more accurate approximation. Such extrapolation can be employed with any of the integration formulas from Lecture 19. In the following, Richardson extrapolation is developed in the context of trapezoidal-rule integration. Recall that the trapezoidal rule with step size $h$ has a truncation error

$$E_t(h) \cong -\frac{1}{12}h^2\,[f'(b) - f'(a)] \qquad (1)$$

If we refer to the integral estimate as $I(h)$ and the exact value of the integral as $I$, then we may write

$$I = I(h) + E_t(h)$$

Let us make two estimates for $I$ using two different step sizes, $h_1$ and $h_2$, with $h_2 < h_1$. Then, we can relate the two estimates by

$$I(h_1) + E_t(h_1) = I(h_2) + E_t(h_2) \qquad (2)$$

The ratio of the truncation errors can be easily determined and, employing Eqn. (1), may be expressed as

$$\frac{E_t(h_1)}{E_t(h_2)} \cong \left(\frac{h_1}{h_2}\right)^2$$

Solving for $E_t(h_1)$ gives

$$E_t(h_1) \cong E_t(h_2)\left(\frac{h_1}{h_2}\right)^2 \qquad (3)$$

Now, substitute this result in Eqn. (2),

$$I(h_1) + E_t(h_2)\left(\frac{h_1}{h_2}\right)^2 \cong I(h_2) + E_t(h_2)$$

and solve for $E_t(h_2)$ to obtain,

$$E_t(h_2) \cong \frac{I(h_1) - I(h_2)}{1 - \left(\frac{h_1}{h_2}\right)^2}$$

Thus, we have developed an estimate of the truncation error with the smaller step size, $h_2$, in terms of the integral estimates and their step sizes. This estimate can then be substituted into

$$I = I(h_2) + E_t(h_2)$$

to yield an improved estimate of the integral,

$$I \cong I(h_2) + \frac{1}{1 - \left(\frac{h_1}{h_2}\right)^2}\,[I(h_1) - I(h_2)]$$

It has been shown that the error of this estimate is $O(h^4)$. Thus, we have combined two trapezoidal rule estimates of $O(h^2)$ to yield a significantly improved estimate of $O(h^4)$. For the special case $h_2 = \frac{h_1}{2}$, this equation becomes

$$I \cong \frac{4}{3}I(h_2) - \frac{1}{3}I(h_1)$$

Richardson extrapolation can be thought of as a weighted average of the approximations (note that the weight coefficients 4/3 and −1/3 add up to one). The following example illustrates the application of this method.
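As a concrete sketch of the formulas above (written in Python for illustration rather than the lecture's Matlab; the function names `trapz_rule` and `richardson` are assumptions, not from the lecture), the following combines the $n$- and $2n$-segment trapezoidal estimates with the weights $4/3$ and $-1/3$:

```python
# Sketch: Richardson extrapolation of two trapezoidal estimates with h2 = h1/2.
def trapz_rule(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

def richardson(f, a, b, n):
    """I ~ (4/3) I(h/2) - (1/3) I(h), combining n- and 2n-segment estimates."""
    I1 = trapz_rule(f, a, b, n)       # step h
    I2 = trapz_rule(f, a, b, 2 * n)   # step h/2
    return (4.0 * I2 - I1) / 3.0

# Because this combination reproduces Simpson's 1/3 rule (proved later in this
# lecture), it is exact for cubics: the exact value below is 1/4.
I = richardson(lambda x: x**3, 0.0, 1.0, 2)
```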

Your turn: Show that Richardson extrapolation for Simpson's 1/3 rule with $h_2 = \frac{h_1}{2}$ is given by

$$I \cong \frac{16}{15}I(h_2) - \frac{1}{15}I(h_1)$$

(Recall that Simpson's rule has $E_t(h) \cong -\frac{1}{180}h^4\,[f'''(b) - f'''(a)]$.)

Example. Use Richardson extrapolation to improve the trapezoidal rule evaluation of the following integral and compute the absolute relative true error (employ $n = 4, 8, 16$).

$$I = \int_0^4 [10 - x^3(x-1)(x-2)(x-3)(x-4)]\,dx$$

The exact value of the integral is 64.3810, computed as follows (recall Lecture 7 and refer to the section on polyint later in this lecture),

The evaluations of the integral using the trapezoidal rule (using function int_trapz) are tabulated, along with the corresponding absolute relative true error $\varepsilon_t$, in the following table. [Recall that $h = \frac{b-a}{n} = \frac{4}{n}$.]

Segments    h       I(h)       ε_t
   4        1       40         37.9%
   8        0.5     56.8750    11.7%
  16        0.25    62.4121    3.1%

Combining the estimates for the 4 and 8 segments using Richardson extrapolation yields

$$I_l = \frac{4}{3}(56.8750) - \frac{1}{3}(40) = 62.5000 \qquad (\varepsilon_t = 2.9\%)$$

Similarly, combining the estimates for the 8 and 16 segments yields

$$I_m = \frac{4}{3}(62.4121) - \frac{1}{3}(56.8750) = 64.2578 \qquad (\varepsilon_t = 0.19\%)$$

The subscripts '$l$' and '$m$' signify the less accurate and more accurate integrals, respectively. The accuracy of $I_m$ should be comparable to that obtained using Simpson's 1/3 rule with $n = 16$ (that would be $I_S = 64.2578$ at $\varepsilon_t = 0.19\%$), which confirms the $O(h^4)$ convergence of the hybrid trapezoidal/Richardson extrapolation integration method. In fact, $I_m = I_S$, which is proved formally in the next section.

The integral value $I_m = 64.2578$ could have been obtained using basic trapezoidal rule integration with an approximate step size $h$ given by the solution of the following theoretical equation

$$E_t = 64.381 - 64.2578 \cong -\frac{1}{12}h^2\,[f'(4) - f'(0)] = -\frac{1}{12}h^2(-384) = 32h^2$$

Solving, we obtain $h \cong 0.062$. This value requires $n = \frac{4-0}{0.062} \cong 65$ segments. On the other hand, employing Richardson extrapolation is more efficient than a single application of the trapezoidal rule, because it only required $8 + 16 = 24$ segments.

The above method to improve the accuracy of the trapezoidal rule is a subset of a more general method for combining integrals to generate improved estimates, known as Romberg integration. In fact, the two $O(h^4)$ improved integrals with values $I_l = 62.5000$ and $I_m = 64.2578$ can be combined to yield an even better value with $O(h^6)$ accuracy. Here, the second-level Richardson extrapolation equation used to achieve the $O(h^6)$ accuracy is

$$I \cong \frac{16}{15}I_m - \frac{1}{15}I_l$$

where $I_m$ and $I_l$ are the more and less accurate $O(h^4)$ estimates, respectively. So, for the above example, applying the second-level Richardson extrapolation leads to the improved accuracy value of the integral,

$$I \cong \frac{16}{15}(64.2578) - \frac{1}{15}(62.5000) = 64.3750 \qquad (\varepsilon_t = 0.009\%)$$

Similarly, according to the Romberg integration technique, two $O(h^6)$ results $(I_m, I_l)$ can be combined to compute an estimate that is $O(h^8)$ using the formula

$$I \cong \frac{64}{63}I_m - \frac{1}{63}I_l$$

The Romberg formulas can be conveniently expressed as

$$I \cong \frac{4^k I_m - I_l}{4^k - 1}, \qquad k = 1, 2, 3, \ldots$$

where $k$ is the level of Richardson extrapolation ($k = 1$ combines the $O(h^2)$ trapezoidal estimates, $k = 2$ the $O(h^4)$ estimates, and so on).
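A compact sketch of the full Romberg procedure (in Python rather than the lecture's Matlab; the name `romberg` and its interface are assumptions for illustration) builds trapezoidal estimates with $n = 1, 2, 4, \ldots$ segments and applies successive levels of Richardson extrapolation:

```python
# Sketch: Romberg integration. R[i][0] are trapezoidal estimates with 2^i
# segments; R[i][j] applies the j-th level of Richardson extrapolation.
def romberg(f, a, b, levels=5):
    """Return the highest-order Romberg estimate, R[levels-1][levels-1]."""
    R = [[0.0] * levels for _ in range(levels)]
    R[0][0] = 0.5 * (b - a) * (f(a) + f(b))   # 1-segment trapezoidal rule
    for i in range(1, levels):
        n = 2 ** i                            # double the number of segments
        h = (b - a) / n
        # Reuse the previous estimate; only the new midpoint samples are added.
        R[i][0] = 0.5 * R[i - 1][0] + h * sum(f(a + (2 * k - 1) * h)
                                              for k in range(1, n // 2 + 1))
        for j in range(1, i + 1):
            # Level-j extrapolation: (4^j * more_accurate - less_accurate) / (4^j - 1)
            R[i][j] = (4 ** j * R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

# The example integrand from this lecture; its exact integral over [0, 4]
# is 1352/21 = 64.3810..., which five Romberg levels recover essentially exactly.
f = lambda x: 10 - x**3 * (x - 1) * (x - 2) * (x - 3) * (x - 4)
I = romberg(f, 0.0, 4.0, levels=5)
```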

Your turn: Employ Romberg integration and the above table in order to arrive at an $O(h^8)$ solution for the above example. Do you expect the solution to be exact? Explain. Also, determine the number of segments required by the standard trapezoidal rule to achieve such accuracy.

Richardson Extrapolation of the Trapezoidal Rule Leads to Simpson’s Rule

Let us consider the integral

$$I = \int_a^{2a} f(x)\,dx$$

The one-segment ($h_1 = \frac{2a-a}{1} = a$) application of the trapezoidal rule gives the approximation

$$I(h_1) = \frac{a}{2}\,(f(a) + f(2a))$$

The two-segment ($h_2 = \frac{2a-a}{2} = a/2$) application of the trapezoidal rule gives the approximation

$$I(h_2) = \frac{a}{4}\left(f(a) + f\!\left(a + \frac{a}{2}\right)\right) + \frac{a}{4}\left(f\!\left(a + \frac{a}{2}\right) + f(2a)\right) = \frac{a}{4}f(a) + \frac{a}{2}f\!\left(a + \frac{a}{2}\right) + \frac{a}{4}f(2a)$$

Applying Richardson extrapolation to the above estimates leads to

$$I_R = \frac{4}{3}I(h_2) - \frac{1}{3}I(h_1) = \frac{4}{3}\left(\frac{a}{4}f(a) + \frac{a}{2}f\!\left(a + \frac{a}{2}\right) + \frac{a}{4}f(2a)\right) - \frac{1}{3}\left(\frac{a}{2}f(a) + \frac{a}{2}f(2a)\right)$$

$$= \frac{a}{6}f(a) + \frac{2a}{3}f\!\left(a + \frac{a}{2}\right) + \frac{a}{6}f(2a) = \frac{(a/2)}{3}\left(f(a) + 4f\!\left(a + \frac{a}{2}\right) + f(2a)\right)$$

$$= \frac{1}{3}h_2\,(f(a) + 4f(a + h_2) + f(a + 2h_2))$$

The last expression is identical to a single (three-point/two-segment) application, $I_S$, of Simpson's 1/3 rule with integration step $h = \frac{a}{2}$. Therefore, we have just proved that Richardson extrapolation applied to the trapezoidal rule leads to Simpson's 1/3 rule.
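The algebraic proof above can be spot-checked numerically; here is a small Python sketch (an assumption for illustration; the lecture itself works in Matlab) showing that the extrapolated trapezoidal estimate and Simpson's 1/3 rule agree to machine precision:

```python
# Sketch: one Richardson step applied to the 1- and 2-segment trapezoidal
# estimates reproduces a single application of Simpson's 1/3 rule.
import math

def trap(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

def simpson3(f, a, b):
    """Single (three-point/two-segment) application of Simpson's 1/3 rule."""
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

f, a, b = math.sin, 1.0, 2.0
I_R = (4 * trap(f, a, b, 2) - trap(f, a, b, 1)) / 3   # Richardson extrapolation
I_S = simpson3(f, a, b)                                # agrees with I_R
```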

Your turn: Show that the application of Richardson extrapolation to the 푂(ℎ4) Simpson 1/3 rule leads to the 푂(ℎ6) Boole’s rule (refer to Lecture 19 for the formulas for those rules).

Your turn: Derive the two-segment integration rule that results from an application of Richardson extrapolation to the simple integration rule with $I(h_1) = a f(a)$ and $I(h_2) = \frac{a}{2}f(a) + \frac{a}{2}f\!\left(a + \frac{a}{2}\right)$ for approximating $\int_a^{2a} f(x)\,dx$. Compare the absolute relative error of the resulting rule to that of the two-segment trapezoidal rule for the following test integral and its associated true value,

$$I = \int_1^2 \frac{x^3}{e^x - 1}\,dx = 0.9515$$

Gauss Quadrature: Two-Point Gauss-Legendre Formula

The trapezoidal rule approximates the integral $\int_a^b f(x)\,dx$ as the area under the straight line connecting the function values at the ends of the integration segment $[a, b]$, as shown in the following figure.

The area under the straight line, generated by the trapezoidal rule, can be expressed as

$$I_T = \frac{b-a}{2}\,[f(a) + f(b)] = \frac{b-a}{2}f(a) + \frac{b-a}{2}f(b)$$

$$= \alpha_0 f(a) + \beta_0 f(b)$$

where $\alpha_0 = \beta_0 = \frac{b-a}{2}$. Because the trapezoidal rule must pass through the segment's end points, there are cases where the formula results in large error.

Suppose that the constraint of having the line pass through the end points is relaxed. This allows us to fit the straight line through two points in the neighborhood of $[a, b]$, such that the approximation of the area is enhanced by balancing the positive and negative errors. Hence, we would arrive at an improved estimate of the integral (refer to the following figure). Gauss quadrature is the name of the technique that implements such a strategy. The particular formula derived next is called the two-point Gauss-Legendre formula.

The objective of Gauss quadrature is to fit the straight line through two points, $(x_0, f(x_0))$ and $(x_1, f(x_1))$, such that the area

$$I_G = \alpha f(x_0) + \beta f(x_1)$$

is exact when the function $f(x)$ being integrated is linear or constant. Since we have four parameters ($\alpha, \beta, x_0, x_1$) to determine, we need two more constraints (equations). Let those constraints be that $I_G$ is also the exact definite integral of $f(x)$ when the function is quadratic and cubic. This allows us to formulate the following system of four equations

$$f(x) = K \;\rightarrow\; I_G = \alpha K + \beta K = \int_a^b K\,dx = K(b-a), \;\text{ or }\; \alpha + \beta = b - a \qquad (1)$$

$$f(x) = Kx \;\rightarrow\; I_G = \alpha K x_0 + \beta K x_1 = \int_a^b Kx\,dx = \frac{K}{2}(b^2 - a^2), \;\text{ or }\; \alpha x_0 + \beta x_1 = \frac{1}{2}(b^2 - a^2) \qquad (2)$$

$$f(x) = Kx^2 \;\rightarrow\; I_G = \alpha K x_0^2 + \beta K x_1^2 = \int_a^b Kx^2\,dx = \frac{K}{3}(b^3 - a^3), \;\text{ or }\; \alpha x_0^2 + \beta x_1^2 = \frac{1}{3}(b^3 - a^3) \qquad (3)$$

$$f(x) = Kx^3 \;\rightarrow\; I_G = \alpha K x_0^3 + \beta K x_1^3 = \int_a^b Kx^3\,dx = \frac{K}{4}(b^4 - a^4), \;\text{ or }\; \alpha x_0^3 + \beta x_1^3 = \frac{1}{4}(b^4 - a^4) \qquad (4)$$

Solving the above set of nonlinear algebraic equations [(1)–(4)] using Matlab's symbolic solve function (where we use 'a1' for $\alpha$ and 'b1' for $\beta$) gives

Now, selecting the solution that satisfies $b > a$ and $x_1 > x_0$ leads to the two-point Gauss-Legendre formula

$$I_G = \frac{b-a}{2}\left[f\!\left(\frac{a+b}{2} - \frac{\sqrt{3}}{6}(b-a)\right) + f\!\left(\frac{a+b}{2} + \frac{\sqrt{3}}{6}(b-a)\right)\right]$$

For $n$ segments, it can be shown that the above formula generalizes to {with $b-a$ replaced by $\frac{b-a}{n} = h$, and $a+b$ replaced by $x_k + x_{k+1} = [a + (k-1)h] + [a + kh] = 2a + (2k-1)h$},

$$I_G = \frac{1}{2}h\sum_{k=1}^{n}\left[f\!\left(\frac{2a+(2k-1)h}{2} - \frac{\sqrt{3}}{6}h\right) + f\!\left(\frac{2a+(2k-1)h}{2} + \frac{\sqrt{3}}{6}h\right)\right]$$

$$= \frac{1}{2}h\sum_{k=1}^{n}\left[f\!\left(a + \left(k - \frac{1}{2}\right)h - \frac{\sqrt{3}}{6}h\right) + f\!\left(a + \left(k - \frac{1}{2}\right)h + \frac{\sqrt{3}}{6}h\right)\right]$$

$$= \frac{1}{2}h\sum_{k=1}^{n}\left[f\!\left(a + \left(k - \frac{3+\sqrt{3}}{6}\right)h\right) + f\!\left(a + \left(k - \frac{3-\sqrt{3}}{6}\right)h\right)\right]$$

Example. Estimate the following integral using the two-point Gauss-Legendre formula with one segment and compare it to the two-segment Simpson's rule. Compute the absolute relative true error in percent.

$$\int_1^2 \sin(\sin x)\,dx = 0.8164$$

Solution.

$$f\!\left(\frac{a+b}{2} - \frac{\sqrt{3}}{6}(b-a)\right) = \sin\!\left(\sin\!\left(1.5 - \frac{\sqrt{3}}{6}\right)\right) = 0.8052$$

$$f\!\left(\frac{a+b}{2} + \frac{\sqrt{3}}{6}(b-a)\right) = \sin\!\left(\sin\!\left(1.5 + \frac{\sqrt{3}}{6}\right)\right) = 0.8285$$

$$I_G = \frac{2-1}{2}(0.8052 + 0.8285) = 0.8169 \qquad (\varepsilon_t = 0.06\%)$$

The error is comparable to that using [the $O(h^4)$] Simpson's 1/3 rule (with $n = 2$):

$$I_S = \frac{1}{3}h\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right] = \frac{1}{3}\left(\frac{2-1}{2}\right)\left[\sin(\sin(1)) + 4\sin\!\left(\sin\!\left(\frac{3}{2}\right)\right) + \sin(\sin(2))\right] = 0.8159 \qquad (\varepsilon_t = 0.06\%)$$

Also, the Gauss-Legendre rule has saved one function evaluation. Note that the two-point Gauss-Legendre rule has the same convergence rate [$O(h^4)$] as Simpson's 1/3 rule, but does not require $n$ to be even valued. However, whereas Simpson's rule can be used to integrate data as well as analytic functions, the Gauss-Legendre rule is only appropriate when the function to be integrated is specified analytically. It should also be noted that higher-point versions of the Gauss-Legendre method can be derived that lead to higher convergence rates.
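The multi-segment formula above can be sketched as follows (in Python for illustration; the lecture's int_gauss exercise asks for a Matlab version, so the name `gauss2` here is an assumption):

```python
# Sketch: composite two-point Gauss-Legendre quadrature with n equal segments.
import math

def gauss2(f, a, b, n):
    """Two-point Gauss-Legendre rule applied to each of n equal segments."""
    h = (b - a) / n
    c = math.sqrt(3) / 6          # node offset from each segment midpoint
    total = 0.0
    for k in range(1, n + 1):
        mid = a + (k - 0.5) * h   # midpoint of segment k
        total += f(mid - c * h) + f(mid + c * h)
    return 0.5 * h * total

# Reproduces the worked example: one segment gives ~0.8169.
I = gauss2(lambda x: math.sin(math.sin(x)), 1.0, 2.0, 1)
```

Like Simpson's 1/3 rule, the two-point rule is exact for cubics, which is a handy sanity check.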

Your turn: Write a Matlab function (call it int_gauss) that implements multiple-segment, two-point Gauss-Legendre integration and use it to verify the above result (set $n = 1$). Repeat with $n = 2$. Compare your results to the (4-decimal-place accurate) solution $I = 0.8164$.

Adaptive Quadrature

Consider the following integral,

$$I = \int_0^4 [10 - x^3(x-1)(x-2)(x-3)(x-4)]\,dx$$

The trapezoidal rule with eight equal segments can be used to approximate the integral, as depicted in the following figure.

In order to improve the approximation, we can double the number of the segments (or triple, etc.) as depicted in the next figure.

However, a steep increase in the number of segments will slow down the numerical method. One strategy to increase the accuracy of the integration method, and at the same time keep the method efficient, would be to increase the number of segments (i.e., decrease their width) only in regions where the function exhibits increased variations. This idea is depicted in the following figure.

The above technique is the basis for adaptive quadrature methods. These adaptive methods automatically adjust the step-size ℎ so that small steps are taken in regions of sharp variations and larger steps are taken where the function changes slowly. Usually, Simpson’s 1/3 rule (or Gauss quadrature) is used to approximate the integral of a given segment. Then the rule is applied at a second level of refinement and the difference between these two levels is used to estimate the relative error. If the error is acceptable, no further refinement is required and the integral estimate for that particular segment is deemed acceptable. If the error estimate is too large, the step size is decreased and the process is repeated.

Matlab Built-in Numerical Integration Function integral

Matlab has a built-in function named “integral” that implements adaptive quadrature. This function uses a technique known as Lobatto quadrature and elements of Gauss quadrature.

Note: Older versions of Matlab (before release R2012a) used function quad for numeric integration.

The syntax used by integral, for integrating the function $f(x)$ between the limits $a$ and $b$, is integral(f, a, b, 'AbsTol', tol), where tol is the desired absolute error tolerance (default is $10^{-6}$). Here, the "dot notation" must be used when entering the Matlab expression for f.

Example. Integrate the following function between the limits 0 and 1.

$$f(x) = \frac{1}{(x-0.3)^2 + 0.01} + \frac{1}{(x-0.9)^2 + 0.04}$$
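For readers without Matlab, a rough analogue of the integral call using SciPy's adaptive quadrature might look like this (an assumption for illustration; SciPy's algorithm differs from Matlab's Gauss-Lobatto approach):

```python
# Sketch: adaptive quadrature of the example function with SciPy's quad,
# requesting the same 1e-6 absolute tolerance that integral uses by default.
from scipy.integrate import quad

f = lambda x: 1 / ((x - 0.3)**2 + 0.01) + 1 / ((x - 0.9)**2 + 0.04)
val, err_est = quad(f, 0, 1, epsabs=1e-6)
# Analytically, the integral is 10*(atan(7) + atan(3)) + 5*(atan(0.5) + atan(4.5)),
# i.e. approximately 35.8583.
```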

In order to achieve this accuracy, Simpson's 1/3 rule requires 40 segments and the trapezoidal rule requires 425 segments (your turn: confirm it).

Matlab Polynomial and Symbolic Integration: polyint and int

Lecture 7 (on polynomials) has already introduced Matlab tools for integrating polynomials and for symbolic integration of analytic functions. The Matlab functions associated with such analytic integration are revisited in this section.

The integration of polynomials is simple and is based on the elementary integral

$$\int x^n\,dx = \frac{1}{n+1}x^{n+1}$$

Therefore, the definite integral of a polynomial of order $n$ is given by

$$\int_a^b (a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0)\,dx = \left[\frac{a_n}{n+1}x^{n+1} + \frac{a_{n-1}}{n}x^n + \cdots + \frac{a_1}{2}x^2 + a_0 x\right]_a^b$$

$$= \left[\frac{a_n}{n+1}b^{n+1} + \frac{a_{n-1}}{n}b^n + \cdots + \frac{a_1}{2}b^2 + a_0 b\right] - \left[\frac{a_n}{n+1}a^{n+1} + \frac{a_{n-1}}{n}a^n + \cdots + \frac{a_1}{2}a^2 + a_0 a\right]$$

Matlab’s built-in polyint function can integrate a polynomial from its coefficient vector. If the integral is definite, then the polyval function can be used to evaluate the integrated polynomial at the integration limits.

Example. Evaluate the integral

$$\int_1^4 (-2x^3 + 3x^2 + 5x + 1)\,dx$$

Solution.
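The polyint/polyval workflow can be mirrored in Python as a sketch (an assumption for illustration; note that numpy's Polynomial class stores coefficients lowest degree first, the reverse of Matlab's convention):

```python
# Sketch: integrate a polynomial from its coefficients, then evaluate the
# antiderivative at the integration limits.
from numpy.polynomial import Polynomial

p = Polynomial([1, 5, 3, -2])   # 1 + 5x + 3x^2 - 2x^3
P = p.integ()                   # antiderivative (integration constant 0)
I = P(4) - P(1)                 # definite integral from 1 to 4; equals -24
```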

Matlab can also perform symbolic integration of a wide range of analytic functions using the built-in symbolic toolbox int function. The following are examples of using function int.

int can also be used to evaluate definite integrals:

Of course, non-elementary integrals have no explicit (analytic) answer. Here is how Matlab’s int responds to such integrals:

Taylor Series-Based Integration

We may also integrate a function employing the Taylor series approximation of the function (expanded at a point chosen, say, at the center of the integration interval), followed by Matlab’s symbolic integration to obtain a polynomial expression for the answer as well as a numeric answer. This is illustrated in the following example.

Example. Evaluate the following non-elementary integral based on Taylor series expansion and symbolic integration. Compare the result to the one obtained using numerical integration employing Matlab's integral.

$$\int_0^{\pi} \sin(\sin(x))\,dx$$

Solution. Perform Taylor series expansion at $\pi/2$ (with polynomial degree, say, 13) and apply symbolic integration (recall that the Matlab function double converts an exact symbolic number to its double-precision floating-point value),
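The same Taylor-then-integrate idea can be sketched with sympy in place of Matlab's symbolic toolbox (an assumption for readers without Matlab):

```python
# Sketch: expand sin(sin(x)) to degree 13 about pi/2, then integrate the
# resulting polynomial exactly over [0, pi].
import sympy as sp

x = sp.symbols('x')
f = sp.sin(sp.sin(x))
taylor = sp.series(f, x, sp.pi / 2, 14).removeO()   # degree-13 Taylor polynomial
I = float(sp.integrate(taylor, (x, 0, sp.pi)))
# Close to the reference value 1.7865 quoted later for this integral.
```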

For comparison purposes, here is the result using numerical integration employing integral:

Your turn: Repeat the above example employing a polynomial of order 18.

As we saw earlier, Matlab has two built-in functions (sym and matlabFunction) that help convert between symbolic and anonymous function representations. Let us say we start with an anonymous function $f(x) = \sin(\sin(x))$ which we are interested in integrating based on a Taylor series expansion. Then, we would like to evaluate the integrated function at $x = 0, \pi/6, \pi/4$. Here is how it is done using Matlab:

It should be noted here that, in the last two instructions in the above code, "fun" is an analytic polynomial (represented as an anonymous function). The following is a plot of $f(x)$ and its running integral for $0 \le x \le \frac{\pi}{2}$.

Multiple Integrals & Matlab’s Functions: integral2 & integral3

Multiple integrals are common in engineering and science. For example, the formula for computing the average, $\langle f \rangle$, of a two-dimensional function $f(x, y)$ can be written as

$$\langle f \rangle = \frac{\int_a^b \int_c^d f(x,y)\,dx\,dy}{(b-a)(d-c)}$$

The numerator takes the form of a double integral. The solution of a double integral is straightforward: (1) integrate with respect to one of the variables, say $x$, treating $y$ as a constant, and (2) integrate the resulting expression with respect to $y$. These ideas are illustrated in the following example.

Example. Find the average of the function $f(x,y) = xy^2$ over the limits $-1 \le x \le 2$ and $1 \le y \le 2$. Also, plot the function. We start by evaluating the double integral $I = \int_1^2 \int_{-1}^2 xy^2\,dx\,dy$. First, express the double integral as $\int_1^2\left(\int_{-1}^2 xy^2\,dx\right)dy = \int_1^2 I(y)\,dy$ and solve the inner integral, $I(y)$, to obtain

$$I(y) = \int_{-1}^{2} xy^2\,dx = y^2\int_{-1}^{2} x\,dx = y^2\left[\frac{x^2}{2}\right]_{-1}^{2} = \frac{3}{2}y^2$$

Next, integrate 퐼(푦) with respect to the variable 푦 to obtain,

$$I = \int_1^2 \frac{3}{2}y^2\,dy = \frac{3}{2}\left[\frac{y^3}{3}\right]_1^2 = \frac{3}{2}\left(\frac{8}{3} - \frac{1}{3}\right) = \frac{7}{2} = 3.5$$

Finally, the average is computed as

$$\langle f \rangle = \frac{3.5}{(2-(-1))(2-1)} = \frac{3.5}{3} = 1.1667$$

So, basically, the integral is first evaluated in one dimension. The result of this first integration is integrated in the second dimension. Note that the order of integration is not important (but we have to be careful so as to include the proper integration limits for each integral).

A 3-D plot of 푓(푥, 푦) = 푥푦2 can be obtained using the following Matlab script that utilizes Matlab’s built-in function surf.

A numerical double-integral is based on the same ideas. First, an integration method (say, trapezoidal or Simpson 1/3 rule) is applied in the first dimension with each value of the second dimension held constant. Then, the method would be applied to integrate in the second dimension. The following example solves the last example using the trapezoidal rule, with two segments in each dimension.

The $x$ samples start at $-1$ and increment by $h_x = \frac{2-(-1)}{2} = 1.5$, so we get the $x$ data as $\{-1, 0.5, 2\}$. Similarly, for the $y$ values, the first sample is at 1 and the remaining two samples are separated by $h_y = \frac{2-1}{2} = 0.5$, resulting in the three samples $\{1, 1.5, 2\}$. The following figures show a grid representation of the $x$ and $y$ samples (left figure) and the values of the function at those points (right figure).

The trapezoidal rule, applied for 푥, with 푦 = 1, can be expressed as

$$\frac{h_x}{2}\sum_{k=1}^{2}[f(x_k, y) + f(x_{k+1}, y)] = \frac{1.5}{2}[(x_1 y^2 + x_2 y^2) + (x_2 y^2 + x_3 y^2)]$$

$$= \frac{3}{4}[x_1 y^2 + 2x_2 y^2 + x_3 y^2] = \frac{3}{4}\left[(-1)1^2 + 2\left(\frac{1}{2}\right)1^2 + (2)1^2\right] = \frac{3}{2}$$

Similarly, for 푦 = 1.5 and 푦 = 2 we obtain, respectively,

$$\frac{3}{4}\left[(-1)1.5^2 + 2\left(\frac{1}{2}\right)1.5^2 + (2)1.5^2\right] = \frac{27}{8}$$

and

$$\frac{3}{4}\left[(-1)2^2 + 2\left(\frac{1}{2}\right)2^2 + (2)2^2\right] = 6$$

These values are then integrated, using the trapezoidal rule with $h_y = 0.5$, along the $y$ direction

to obtain

$$\frac{0.5}{2}\left(\frac{3}{2} + \frac{27}{8}\right) + \frac{0.5}{2}\left(\frac{27}{8} + 6\right) = 3.5625$$

The average is then $3.5625/3 = 1.1875$, which has an absolute relative true error (in percent) of

$$\left|\frac{1.1667 - 1.1875}{1.1667}\right| \times 100\% = 1.78\%$$

If we repeat this problem using Simpson's 1/3 rule, we should expect to get the exact value for the integral. This is so because Simpson's 1/3 rule is exact for cubic functions (which include linear and quadratic functions). Refer to Problem 8 in Assignment 12.
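The hand computation above can be reproduced with a short sketch (Python is used here for illustration; `trap_samples` is a name assumed for this sketch): apply the composite trapezoidal rule along $x$ for each $y$ sample, then along $y$.

```python
# Sketch: 2-D trapezoidal integration of f(x, y) = x*y^2 on the 3x3 grid above.
def trap_samples(values, h):
    """Composite trapezoidal rule for equally spaced samples."""
    return h * (0.5 * values[0] + sum(values[1:-1]) + 0.5 * values[-1])

xs = [-1.0, 0.5, 2.0]   # h_x = 1.5
ys = [1.0, 1.5, 2.0]    # h_y = 0.5
f = lambda x, y: x * y**2

# Integrate along x for each y (giving 3/2, 27/8, 6), then along y.
row_integrals = [trap_samples([f(x, y) for x in xs], 1.5) for y in ys]
I = trap_samples(row_integrals, 0.5)   # 3.5625, as computed above
avg = I / 3.0                          # 1.1875
```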

We could also have arrived at a rough estimate of the average by simply averaging the nine $f(x,y)$ grid values:

$$\frac{-4 + 2 + 8 - \frac{9}{4} + \frac{9}{8} + \frac{9}{2} - 1 + \frac{1}{2} + 2}{9} = 1.2083$$

$$\left|\frac{1.1667 - 1.2083}{1.1667}\right| \times 100\% = 3.57\%$$

Matlab has built-in functions to implement both double (integral2) and triple (integral3) integration. The syntax for integral2 is z = integral2(f, xmin, xmax, ymin, ymax, 'AbsTol', tol), where f is an anonymous function representation of $f(x,y)$. If tol is not specified, a default tolerance of $10^{-6}$ is used. The following is an example of solving the double-integral problem using integral2,

$$I = \int_1^2 \int_{-1}^2 xy^2\,dx\,dy$$

The first set of integration limits $(-1, 2)$ corresponds to the first variable in the anonymous function definition (x in this case).
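A rough SciPy analogue of the integral2 call (an assumption for illustration; the lecture itself uses Matlab) is dblquad. Note the argument order: dblquad integrates func(y, x), where the first argument is the inner integration variable.

```python
# Sketch: double integral of x*y^2 with x in [-1, 2] (outer) and y in [1, 2]
# (inner). dblquad's integrand takes the INNER variable first: func(y, x).
from scipy.integrate import dblquad

val, err_est = dblquad(lambda y, x: x * y**2, -1, 2, 1, 2)
# val should match the analytic result 3.5 from the example above.
```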

Note: Older versions of Matlab used dblquad and triplequad for integral2 and integral3, respectively.

Monte Carlo Integration

The average, $\langle f \rangle$, of a function $f(x)$ over the interval $[a, b]$ is given by

$$\langle f \rangle = \frac{I}{b-a} = \frac{1}{b-a}\int_a^b f(x)\,dx$$

The average can also be approximated by sampling the function $f(x)$ at many, many points $x_i$ and using the $N$ samples to approximate the average:

$$\langle f \rangle \cong \frac{1}{N}\sum_{i=1}^{N} f(x_i)$$

The error (which is a function of $N$) in the approximation is then

$$\varepsilon(N) = \frac{1}{b-a}\int_a^b f(x)\,dx - \frac{1}{N}\sum_{i=1}^{N} f(x_i)$$

If we assume that the error is normally distributed with standard deviation $\sigma$, then (refer to any standard text on statistics) the error can be computed as

$$\varepsilon(N) = \pm\frac{\sigma}{\sqrt{N}} = \pm\frac{\sqrt{\langle f^2 \rangle - \langle f \rangle^2}}{\sqrt{N}}$$

and the integral $I$ would have the following approximation

$$I = \int_a^b f(x)\,dx = \frac{b-a}{N}\sum_{i=1}^{N} f(x_i) \pm \frac{(b-a)\sigma}{\sqrt{N}}$$

Therefore, in the limit of large $N$, we may approximate the integral as

$$I = \int_a^b f(x)\,dx \cong \frac{b-a}{N}\sum_{i=1}^{N} f(x_i)$$

Selecting uniformly distributed random samples 푥𝑖 and computing the integral 퐼 according to the above formula is referred to as the Monte Carlo integration method, named after the casino at Monte Carlo.

As an example, consider the integral (whose exact value is 2)

$$I = \int_0^{\pi} \sin(x)\,dx$$

The averages of $\sin(x)$ and $\sin^2(x)$ over the interval $[0, \pi]$ are, respectively,

$$\langle f \rangle = \frac{1}{\pi}\int_0^{\pi} \sin(x)\,dx = \frac{2}{\pi}$$

$$\langle f^2 \rangle = \frac{1}{\pi}\int_0^{\pi} \sin^2(x)\,dx = \frac{1}{2}$$

leading to the error standard deviation

$$\sigma = \sqrt{\langle f^2 \rangle - \langle f \rangle^2} = \sqrt{\frac{1}{2} - \frac{4}{\pi^2}} \cong 0.3078$$

Now, we may compute the absolute relative error in the approximation of the integral 퐼 as

$$\left|\frac{\int_0^{\pi}\sin(x)\,dx - \frac{\pi-0}{N}\sum_{i=1}^{N}\sin(x_i)}{\int_0^{\pi}\sin(x)\,dx}\right| = \left|\frac{\pm\frac{(\pi-0)(0.3078)}{\sqrt{N}}}{2}\right| = \frac{\pi(0.3078)}{2\sqrt{N}} \cong \frac{0.967}{2\sqrt{N}}$$

Then, to achieve an accuracy of at least three significant digits (refer to the last slide in Lecture 6), $\frac{0.967}{2\sqrt{N}} < 5\times 10^{-4}$, which requires $N > 967^2$, i.e., on the order of $10^6$; a huge calculation compared, say, to Simpson's 1/3 integration method.
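For the running example $\int_0^\pi \sin(x)\,dx$, the method can be sketched in a couple of lines (Python with numpy is assumed here for illustration; the exercise below asks for the Matlab version):

```python
# Sketch: Monte Carlo integration via the sample-mean formula
# I ~ (b - a) * mean(f(x_i)) with uniform samples x_i in [a, b].
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for reproducibility

def mc_integrate(f, a, b, N):
    x = a + (b - a) * rng.random(N)     # N uniform samples in [a, b]
    return (b - a) * np.mean(f(x))

# With N = 10^6 the estimate typically lands within ~0.001 of the exact value 2.
I = mc_integrate(np.sin, 0.0, np.pi, 10**6)
```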

Your turn: Write a Matlab function that implements the Monte Carlo integration method. (Note: The program can be as short as 2 lines of code!) Recall that rand(N,1) generates a vector of $N$ uniformly distributed numbers in the interval $[0, 1]$. The (anonymous) function $f(x)$ and the number of samples ($N$) should be input arguments to your function. Test the function on $\int_0^{\pi} \sin(\sin(x))\,dx$ with $N = 10^6$ samples and compute the absolute relative error. Estimate (via simulation) the required number of samples for the method to converge to at least 3 significant digits for the integral $\int_0^{\pi} \sin(\sin(x))\,dx$ (assume an exact value, $I = 1.7865$).