
ECE 3040 Lecture 8: Taylor Approximations I © Prof. Mohamad Hassoun

This lecture covers the following topics:

 The  Taylor series expansion for some basic functions  Evidence of digital machine rounding error  Elementary function computation using Taylor series  Rearrangement of functions to control rounding error  Significance of the Taylor series expansion point a  Taylor expansion of multivariate functions

The Taylor Series

The concept of a Taylor series was discovered by the Scottish mathematician James Gregory and formally introduced by the English mathematician Brook Taylor in 1715. Taylor's series is of great value in the study of numerical methods and the implementation of numerical algorithms. In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point. The series is based on Taylor's Theorem, which states that any smooth function, f(x), can be approximated, in the neighborhood of an expansion point a, as the polynomial:

$$f(x) = f(a) + f^{(1)}(a)\frac{(x-a)}{1!} + f^{(2)}(a)\frac{(x-a)^2}{2!} + \cdots + f^{(n)}(a)\frac{(x-a)^n}{n!} + \cdots$$

where f^(k)(a) is the kth derivative of f, evaluated at x = a. We may truncate the infinite series at the nth term and use the resulting nth-degree polynomial p_n(x) as an approximation for the original function f(x); i.e., p_n(x) ≅ f(x):

$$p_n(x) = f(a) + f^{(1)}(a)\frac{(x-a)}{1!} + f^{(2)}(a)\frac{(x-a)^2}{2!} + \cdots + f^{(n)}(a)\frac{(x-a)^n}{n!}$$

It can be shown that the truncation error, 푅푛 = 푓(푥) − 푝푛(푥), which is the sum of all series terms above the 푛th term, is given by:

$$R_n = \frac{f^{(n+1)}(\gamma)}{(n+1)!}(x-a)^{n+1}$$

where γ is an unknown value that lies between x and a. Note that if |x − a| < 1, then only a small number of series terms suffices to give a very accurate approximation for f(x) (the powers of (x − a) shrink while the factorial in the denominator grows).

If the Taylor series is centered at zero (i.e., the expansion point a is set to zero), then the series is also called a Maclaurin series (or power series), named after the Scottish mathematician Colin Maclaurin, who made extensive use of this special case of the Taylor series in the 18th century. The power series is then

$$f(x) = f(0) + f^{(1)}(0)\frac{x}{1!} + f^{(2)}(0)\frac{x^2}{2!} + \cdots + f^{(n)}(0)\frac{x^n}{n!} + \cdots$$

It should be noted that the Maclaurin series (and, as a matter of fact, the Taylor series centered at any point a) of a polynomial function is finite and is identical to the polynomial itself.

Example. Find and compare the Taylor series approximations for f(x) = sin(x) at a = 0.

Solution: The derivatives of sin(x) are: f^(1)(x) = cos(x), f^(2)(x) = −sin(x), f^(3)(x) = −cos(x), f^(4)(x) = sin(x), f^(5)(x) = cos(x), … Evaluating the derivatives at 0, we obtain f^(1)(0) = 1, f^(2)(0) = 0, f^(3)(0) = −1, f^(4)(0) = 0, f^(5)(0) = 1, … which leads to the following polynomial approximations:

$$p_0(x) = f(0) = \sin(0) = 0$$
$$p_1(x) = f(0) + f^{(1)}(0)\frac{x}{1!} = 0 + x(1) = x$$
$$p_2(x) = f(0) + f^{(1)}(0)\frac{x}{1!} + f^{(2)}(0)\frac{x^2}{2!} = 0 + x(1) + \frac{x^2}{2}(0) = x \quad (\text{same as } p_1)$$
$$p_3(x) = f(0) + f^{(1)}(0)\frac{x}{1!} + f^{(2)}(0)\frac{x^2}{2!} + f^{(3)}(0)\frac{x^3}{3!} = x - \frac{x^3}{3!}$$
$$p_4(x) = x - \frac{x^3}{3!} \quad (\text{same as } p_3)$$
$$p_5(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!}$$

It is not surprising that only odd terms are present in the expansion, since sin(x) is an odd function (it has symmetry with respect to the origin).

The following is a Matlab plot comparing the various 푝푛(푥) approximations of sin(푥):

sin(푥) (red), 푝1(푥) (green), 푝3(푥) (magenta) and 푝5(푥) (blue). Note how the higher degree polynomial approximations are closer to sin(푥), over a wider range of 푥.
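The behavior shown in the plot can also be checked numerically. The following is a Python sketch (an illustration; the lecture's own computations use Matlab) that evaluates the truncated Maclaurin polynomials p1, p3, and p5 of sin(x) at a sample point and confirms that the higher-degree polynomials are more accurate:

```python
import math

def sin_maclaurin(n, x):
    """Evaluate the degree-n truncated Maclaurin polynomial of sin(x).
    Only odd powers appear: x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range((n + 1) // 2))

x = 1.5
errors = {n: abs(math.sin(x) - sin_maclaurin(n, x)) for n in (1, 3, 5)}
print(errors)  # the error shrinks as the degree grows
```

At x = 1.5 the errors drop from roughly 0.5 (p1) to about 0.003 (p5), consistent with the plot.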

Matlab has a convenient function for generating the Taylor polynomial: taylor(f,x,a,'Order',n). Here, f represents the function f(x) defined symbolically, or as an anonymous function, x is the label for the independent variable, a is the point at which the expansion is to be performed (the default value is zero), and 'Order',n generates an approximation up to and including the (n−1)th-order term. If the order is not specified, then the default value n = 6 is used. Here are a few examples of the use of the taylor function.

As would be expected, the Taylor series for a polynomial function is the function itself, regardless of the expansion point:

The following is an 11-degree polynomial approximation for 푓(푥) = sin(푥).

We may plot this 11-degree polynomial by first defining it as an anonymous function:

Alternatively, we may utilize the sym2poly command which converts a symbolic polynomial to coefficient vector representation, and then use the polyval command to evaluate the polynomial, as follows:

Taylor Series Expansion for Some Basic Functions

The following is a list of Taylor/Maclaurin/power series expansions (at 푎 = 0) for several frequently encountered analytic functions.

Notes:

 The power series can be used to generate infinite sums and their corresponding values. For example, setting x = 1 in the series for e^x leads to $e^1 = e = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots$, or $e = \sum_{k=0}^{\infty}\frac{1}{k!}$.

 (1 + x)^n is an n-degree polynomial function in factored form. Hence, it is not surprising that its power series is an exact, finite polynomial of degree n. Here, the binomial coefficient is $\binom{n}{k} = \frac{n!}{k!(n-k)!}$.

 The last series gives rise to the geometric sum $\sum_{k=0}^{\infty} a^k = \frac{1}{1-a}$ if −1 < a < 1.

 A power series can be expressed as an infinite sum that is useful in programming. For example, cos(x) can be expressed as
$$\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}x^{2n}$$
which can be approximated by truncating the infinite series as follows:
$$\cos(x) \cong \sum_{n=0}^{k} \frac{(-1)^n}{(2n)!}x^{2n}$$

The following Matlab function cos_approx accepts a scalar x and a positive, even integer m and generates a table of values for the approximations of cos(x) using polynomials of varying order: 0, 2, 4, …, m. The function also computes the absolute relative error. This is essentially programming Problem 7 in Lecture 6, but with an alternative expression for the approximating sum.
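For readers following along outside Matlab, here is a hedged Python sketch of the same idea; the name cos_approx and the table layout follow the description above, and the return format is an assumption:

```python
import math

def cos_approx(x, m):
    """Approximate cos(x) with truncated series of orders 0, 2, ..., m.
    Returns a list of (order, approximation, absolute relative error in %).
    m must be a positive, even integer."""
    rows, partial = [], 0.0
    for n in range(m // 2 + 1):
        partial += (-1)**n * x**(2*n) / math.factorial(2*n)
        rel_err = abs((math.cos(x) - partial) / math.cos(x)) * 100
        rows.append((2*n, partial, rel_err))
    return rows

for order, approx, err in cos_approx(0.5, 4):
    print(order, approx, err)
```

Running the partial sum in place (rather than recomputing each polynomial from scratch) mirrors the "alternative expression for the approximating sum" mentioned above.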

Example from Electric Circuits

The characteristic i–v relationship for a semiconductor diode can be expressed as

$$i(v) = a\left(e^{bv} - 1\right)$$

where i is the current through the diode and v is the voltage drop. The above equation is plotted in the following figure. The plot also depicts the graphical solution for the current when the voltage applied across the diode is v(t) = A cos(ωt).

We may expand 푖(푣) employing a power series as

$$i(v) = a\left[\left(1 + bv + \frac{(bv)^2}{2} + \frac{(bv)^3}{6} + \cdots\right) - 1\right]$$

When the voltage 푣(푡) has small values we may approximate the current as

$$i(v) \cong abv + \frac{ab^2}{2}v^2$$

So, for the case v(t) = A cos(ωt) with small A we obtain the current

$$i(t) \cong abA\cos(\omega t) + \frac{ab^2A^2}{2}\cos^2(\omega t)$$

Now, we may (optionally) employ the trigonometric identity

$$\cos^2(x) = \frac{1}{2} + \frac{1}{2}\cos(2x)$$

in the above approximation and obtain the truncated (Fourier series) form

$$i(t) \cong \frac{ab^2A^2}{4} + abA\cos(\omega t) + \frac{ab^2A^2}{4}\cos(2\omega t)$$

The following plot compares the exact current 푖(푡) = 0.1(푒cos(푡) − 1) to its approximation 푖(푡) ≅ 0.025 + 0.1cos(푡) + 0.025cos(2푡). We have assumed 푎 = 0.1, 푏 = 1, 퐴 = 1 and 휔 = 1. [Your turn: repeat for 퐴 = 0.5, 1.5.]
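As a numeric spot check of the comparison (a Python sketch; the plot itself was produced in Matlab), the exact diode current and its two-term Fourier approximation can be evaluated over one period using the assumed values a = 0.1, b = 1, A = 1, ω = 1:

```python
import math

a, b, A, w = 0.1, 1.0, 1.0, 1.0

def i_exact(t):
    # i(t) = a*(exp(b*A*cos(w*t)) - 1)
    return a * (math.exp(b * A * math.cos(w * t)) - 1)

def i_approx(t):
    # truncated Fourier-series form derived above
    c = a * b**2 * A**2 / 4
    return c + a * b * A * math.cos(w * t) + c * math.cos(2 * w * t)

worst = max(abs(i_exact(t) - i_approx(t)) for t in
            [k * 0.1 for k in range(63)])  # t samples over [0, 6.2]
print(worst)
```

The worst-case gap over one period is a few hundredths of an ampere, consistent with the small-signal assumption.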

Your turn: Repeat the above example assuming a cubic expansion for i(t).

Evidence of Digital Machine Rounding Error

Let us consider a polynomial and see what happens when we apply a Taylor series expansion to it. Let f(x) be simply equal to (x − 1)^7. If we expand this function, we obtain a 7th-degree polynomial:

Let us now apply Taylor series expansion at 푎 = 0, with ‘order’ set to 21:

As expected, the expansion is exact since f(x) is a polynomial. Here is a plot of the function (x − 1)^7 and its expansion, for −4 < x < 6 (the plots look identical):

Let us now zoom-in and plot the two function forms in the neighborhood of 푥 = 1:

The blue plot is for (x − 1)^7 and the green one is for its expanded version:

x^7 - 7*x^6 + 21*x^5 - 35*x^4 + 35*x^3 - 21*x^2 + 7*x - 1

For example, if x = 1.01, then the two expressions will return, respectively:

thus resulting in a surprising absolute relative error of 42.11%. What is the problem?! The two functions are identical and, theoretically, their plots must be identical. You are witnessing machine round-off error in action. This error is due to the inability of digital machines to represent some numbers perfectly: only a finite number of binary bits is available to represent decimal numbers. (Machine round-off errors are the topic of Lecture 10.)

Elementary Function Computation Using Taylor Series
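The cancellation is easy to reproduce in any IEEE double-precision environment; here is a Python sketch (the exact error magnitude depends on the evaluation order, so it is illustrative rather than definitive):

```python
x = 1.01
direct = (x - 1)**7  # factored form: tiny, but accurately computed
expanded = (x**7 - 7*x**6 + 21*x**5 - 35*x**4
            + 35*x**3 - 21*x**2 + 7*x - 1)  # suffers catastrophic cancellation
rel_err = abs((direct - expanded) / direct) * 100
print(direct, expanded, rel_err)  # rel_err is large: the O(1) terms nearly cancel
```

The expanded form sums terms of magnitude up to about 36 whose rounding errors (each near 10^−15) dwarf the true result of 10^−14.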

Many microprocessors have an adder circuit (built out of logic gates) that performs addition. They do not have additional logic circuits to perform the other arithmetic operations (−, *, /) or elementary functions [such as sin(x), cos(x), e^x, ln(x), etc.]. Subtraction is performed via addition after converting negative numbers using a special binary code called two's complement. Multiplication and division can then be performed based on repeated addition and subtraction, respectively. Exponentiation (say, x^4) can be performed as repeated multiplication (x·x·x·x). But how does a microprocessor compute a function such as f(x) = √x or f(x) = sin(x)? One method is to use a program that calculates f(x) using its polynomial approximation (Taylor series). Notice that a polynomial can be evaluated at a given point x by performing multiplications, additions, and subtractions. For example, the evaluation of the polynomial x² − 2x + 1 at x = 3 can be performed as 3*3 − 2*3 + 1. For f(x) = sin(x), if x is close to a (the point of expansion; set to zero here), then the polynomial approximation sin(x) ≅ p5(x) = x − x³/3! + x⁵/5! can be used (remember that the angle is measured in radians): sin(0.2) = 0.198669330795061

$$p_5(0.2) = 0.2 - \frac{0.2^3}{6} + \frac{0.2^5}{120} = 0.198669333333333 \quad (\text{8-decimal-digit accuracy})$$

True error = sin(0.2) − p5(0.2) ≅ (−2.54)×10⁻⁹

Absolute relative error (in %) = |sin(0.2) − p5(0.2)| / |sin(0.2)| × 100% ≅ (1.28)×10⁻⁶ %

Similarly, sin(1) = 0.841470984807897

$$p_5(1) = 1 - \frac{1^3}{6} + \frac{1^5}{120} = 0.841666666666667 \quad (\text{3-digit accuracy})$$

True error = sin(1) − p5(1) ≅ −1.96×10⁻⁴

And the absolute relative error (%) = |sin(1) − p5(1)| / |sin(1)| × 100% ≅ 0.02%

On the other hand, as x moves away from 0, say x = 3π/2, the approximation fails: sin(3π/2) − p5(3π/2) ≅ −7.636666525534631, which is a completely useless approximation:

Absolute relative error (%) = |sin(3π/2) − p5(3π/2)| / |sin(3π/2)| × 100% ≅ 764%
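These error figures are straightforward to reproduce; the following Python sketch mirrors the Matlab computations above:

```python
import math

def p5(x):
    # fifth-degree Maclaurin polynomial of sin(x)
    return x - x**3 / 6 + x**5 / 120

for x in (0.2, 1.0, 3 * math.pi / 2):
    err = math.sin(x) - p5(x)
    rel = abs(err / math.sin(x)) * 100
    print(x, p5(x), err, rel)
```

The printed errors match the figures above: about −2.54e−9 at x = 0.2, −1.96e−4 at x = 1, and a useless −7.64 at x = 3π/2.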

Note that the approximation also applies to complex arguments, 푥: sin(푖) = 1.175201193643801푖

$$p_5(i) = i - \frac{i^3}{6} + \frac{i^5}{120} = i + \frac{i}{6} + \frac{i}{120} = 1.175000000000000\,i$$

|True error| = |sin(i) − p5(i)| ≅ |(2.01)×10⁻⁴ i| = (2.01)×10⁻⁴

For |x − a| > 1, more power series terms must be added in order for the polynomial approximation of f(x) to be accurate. In fact, we can utilize the Taylor series truncation error, R_n = f(x) − p_n(x), introduced earlier:

$$R_n = \frac{f^{(n+1)}(\gamma)}{(n+1)!}(x-a)^{n+1}$$

to determine beforehand the required polynomial order for a given desired accuracy. Since every derivative of sin(x) is a sinusoid with values between −1 and 1 (i.e., −1 ≤ f^(n+1)(γ) ≤ 1), the absolute residual error is bounded from above by:

$$|R_n| \le \frac{\left|(x-a)^{n+1}\right|}{(n+1)!}$$

In the above approximations, we had chosen 푎 = 0 and 푛 = 5. So, theoretically, the absolute truncation error, for 푥 = 0.2, is bounded from above by

$$|R_n| \le \frac{\left|(0.2-0)^{5+1}\right|}{(5+1)!} = \frac{0.2^6}{720} \cong (89)\times 10^{-9}$$

This result is in line with the absolute true truncation error of (2.54)×10⁻⁹, computed earlier.

Similarly, for x = 1, the absolute truncation error is bounded by 1/6! ≅ (14)×10⁻⁴, which is consistent with the computed absolute true error of (1.96)×10⁻⁴.

What is the smallest degree (n) of a polynomial approximation of sin(π/2) that leads to an absolute true error = |sin(π/2) − p_n(π/2)| < 10⁻⁶? From the above truncation error formula, we need n that satisfies:

$$\frac{\left|(\pi/2)^{n+1}\right|}{(n+1)!} < 10^{-6}$$

Here is a table of values of the left-hand side of the inequality for various n. From this table, we find that n should be equal to 11.

n      |(π/2)^(n+1)| / (n+1)!
5      0.020863480763353
7      9.192602748394263e-04
9      2.520204237306060e-05
11     4.710874778818170e-07
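The table entries follow directly from the bound; a quick Python check (illustrative):

```python
import math

# evaluate the truncation-error bound (pi/2)^(n+1) / (n+1)! for odd n
for n in (5, 7, 9, 11):
    bound = (math.pi / 2)**(n + 1) / math.factorial(n + 1)
    print(n, bound)
```

Only at n = 11 does the bound drop below the 10⁻⁶ target.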

Matlab verification (note: the argument 'Order',10 leads to a 9-degree polynomial):

Rearrangement of Formulas to Control Rounding Error

A significant loss of accuracy due to machine round-off error occurs when two numbers of about the same value are subtracted. Therefore, before computing a formula [a mathematical expression, y = f(x)] on a digital machine, the formula should be rearranged to avoid severe cancellation (in some region of the argument x). For example, for large values of x, the expression √(x+1) − √x leads to severe cancellation because √(x+1) ≅ √x. One trick from algebra is to multiply and divide by the conjugate, √(x+1) + √x:

$$\left(\sqrt{x+1} - \sqrt{x}\right)\frac{\sqrt{x+1} + \sqrt{x}}{\sqrt{x+1} + \sqrt{x}}$$

which leads to the expression

$$\frac{1}{\sqrt{x+1} + \sqrt{x}}$$

that has no cancellation and, therefore, leads to a more accurate numerical answer (due to reduced round-off error). The following is Matlab verification for x = 10⁹.
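The same verification can be sketched in Python, using the standard decimal module as a high-precision reference:

```python
import math
from decimal import Decimal, getcontext

x = 1e9
naive = math.sqrt(x + 1) - math.sqrt(x)         # severe cancellation
stable = 1 / (math.sqrt(x + 1) + math.sqrt(x))  # rearranged conjugate form

# high-precision reference using the decimal module
getcontext().prec = 40
ref = float((Decimal(10**9) + 1).sqrt() - Decimal(10**9).sqrt())
print(naive, stable, ref)
```

The rearranged form agrees with the high-precision reference to essentially full double precision, while the naive difference loses several digits.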

Sometimes a rearrangement cannot be found to remove the cancellation. For the case of cancellations due to very small values of 푥, we may use the Taylor series expansion of the expression (about some suitable expansion point) as a way to control machine round-off error. This is illustrated in the following example.

Example. Evaluate y = e^x − 1 for x = 10⁻⁶. Since the cancellation occurs for small values of x, we perform a Taylor series expansion of e^x at the origin (i.e., a Maclaurin series) to obtain

$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$

But since the x⁴ term contributes a very tiny amount at x = 10⁻⁶, on the order of 10⁻²⁴, we may truncate the series and use the approximation

$$e^x \cong 1 + x + \frac{x^2}{2} + \frac{x^3}{6}$$

Therefore, we employ the expression $y = x\left(1 + \frac{x}{2} + \frac{x^2}{6}\right)$ as opposed to the original expression, y = e^x − 1. The following is Matlab verification.

The reader might want to verify that including the x⁴/24 term does not change the numerical result. Doing so will also verify that 1.000000500000167e-06 is the more accurate answer. The improved accuracy in the above example is due to the fact that in the Taylor representation/approximation there is no subtraction of close-valued terms. In the original representation, e^x − 1, however, we subtract very close quantities (when x is small).
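A Python sketch of the same comparison; note that Python's math library happens to provide expm1, a routine built precisely for the stable evaluation of e^x − 1, which serves as a reference here:

```python
import math

x = 1e-6
naive = math.exp(x) - 1              # subtracts nearly equal numbers
series = x * (1 + x / 2 + x**2 / 6)  # rearranged truncated Maclaurin series
builtin = math.expm1(x)              # library routine designed for e^x - 1
print(naive, series, builtin)
```

The series form agrees with expm1 to full precision, while the naive subtraction loses several trailing digits.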

Your turn: Derive more (numerically) appropriate formulations for the following expressions. Verify the reduction in round-off error using Matlab (employ 푥 = 10−9).

a. sin(x)/x

b. ln(1 − x)/ln(1 + x)

Significance of the Expansion Point a

In the above examples dealing with the Taylor series approximation of the sin(x) function, we chose to set the expansion point a to zero. This generates a polynomial that provides maximum accuracy for points x in the proximity of zero. If we want the approximation to work well close to π/2, then we should pick a to be π/2, or we can keep a = 0 and use a relatively high-degree polynomial approximation of sin(x). But the latter choice would slow down the computations and is prone to numerical round-off error (as we saw earlier). It would make sense to choose a so that we strike a compromise: one polynomial that provides an acceptable approximation for x between 0 and π/2. This suggests generating the approximating polynomial from a Taylor series expansion (halfway) at a = π/4, and picking a sufficiently large order n so that the truncation error at the end points is acceptable (say, n > 11). But how about the accuracy of the approximation for |x| > π/2? Fortunately, sin(x) is periodic and has odd symmetry. Recall that sin(x + 2kπ) = sin(x) for any integer k; also, sin(−x) = −sin(x) and sin(x ± π) = −sin(x). We may then take advantage of these properties so that, for any x, the approximation will never be worse than that at 0 or π/2. One can then write a Matlab function, call it sin_poly, which computes sin(x) via polynomial approximation. Here is an example that will help in developing such a function. Let us apply the above ideas to approximate sin(31π/6). We first generate (computed only once) a 13-degree polynomial from an "'Order',14" Taylor series expansion at a = π/4. Here is the Matlab-generated polynomial:

Next, we need to reduce the angle 31π/6 to its equivalent positive, acute angle (i.e., 0 < angle < π/2). We can accomplish that as follows: 31π/6 = π/6 + π + 4π. Therefore, sin(31π/6) = sin(π/6 + π) = −sin(π/6). We can take advantage of Matlab's command rem(x,2*pi), which returns the remainder (r) of dividing x by 2π, and then set x = r − π if π/2 < r ≤ 3π/2. On the other hand, if r > 3π/2, we set x = 2π − r. In either case, the sine of the transformed angle must be multiplied by −1. The following Matlab code illustrates the angle transformation for x = 31π/6:
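The angle-reduction logic can be sketched as follows (a Python illustration of the rem-based transformation described above; the function name is ours, and sin_poly itself remains a Matlab function):

```python
import math

def reduce_angle(x):
    """Map any angle x to (r, sign) with |r| <= pi/2 and
    sin(x) = sign * sin(r), using periodicity and symmetry."""
    r = math.fmod(x, 2 * math.pi)        # analogous to Matlab's rem(x, 2*pi)
    if r < 0:
        r += 2 * math.pi                 # work with a positive remainder
    sign = 1.0
    if math.pi / 2 < r <= 3 * math.pi / 2:
        r, sign = r - math.pi, -1.0      # sin(x) = -sin(x - pi)
    elif r > 3 * math.pi / 2:
        r, sign = 2 * math.pi - r, -1.0  # sin(2*pi - r) = -sin(r)
    return r, sign

r, sign = reduce_angle(31 * math.pi / 6)
print(r, sign)  # r is pi/6 (up to round-off), sign is -1.0
```

The polynomial centered at π/4 is then evaluated at the reduced angle r, and the result is multiplied by sign.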

Now, we are ready to approximate sin(31휋/6),

And we have to remember to multiply the above answer by −1, since sin(x) = −sin(x − π). Similarly, sin(−π/3) = −sin(π/3), so evaluating the approximation polynomial at π/3 and multiplying the result by −1 gives:

A function is provided to you, sin_poly, which incorporates the above properties of the sine function to compute sin(푥) for any real (scalar or vector) 푥. The following is a version that generates a table (sin_poly_table).

Approximation of π: π is most famous for representing the ratio of the circumference C to the diameter d of any circle (π = C/d). The approximation π ≅ 3.16 has been known since the time of ancient Egypt, over 4000 years ago. Since then, this number (and its approximations) has appeared in many different mathematical contexts. The first theoretical calculation seems to have been carried out by Archimedes (an ancient Greek mathematician, physicist, engineer, inventor, and astronomer, 287–212 BC). He obtained the interval approximation 223/71 < π < 22/7 based on bounding the circumference of a unit-diameter circle between 96-sided inscribed and circumscribed polygons (see figure below).

Almost 2000 years later, John Machin (1680–1752), a professor of astronomy at Gresham College, London, discovered the approximation

$$\pi \cong 16\,\mathrm{atan}\!\left(\frac{1}{5}\right) - 4\,\mathrm{atan}\!\left(\frac{1}{239}\right)$$

which has the true error:

Your turn: Generate the power (Maclaurin) series of order n for the atan(x) function [for instance, for n = 5 the approximation is atan(x) ≅ x − x³/3 + x⁵/5]. Write a Matlab script that approximates π using Machin's approximation formula and employing a power series for atan(x). Your script should generate a table of true errors for n = 1, 3, 5, 7, and 9. How many decimal places of accuracy did you find for n = 9? (It is interesting to note that, in October 2014, π was calculated to 13.3 trillion digits!)
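A sketch of what such a script might look like, written here in Python rather than Matlab (the function name atan_series is illustrative):

```python
import math

def atan_series(x, n):
    """Maclaurin polynomial of atan(x) up to (and including) the x^n term,
    for odd n: x - x^3/3 + x^5/5 - ..."""
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range((n + 1) // 2))

# Machin's formula with truncated atan series; the true error shrinks rapidly
for n in (1, 3, 5, 7, 9):
    pi_approx = 16 * atan_series(1/5, n) - 4 * atan_series(1/239, n)
    print(n, pi_approx, math.pi - pi_approx)
```

Because the series arguments 1/5 and 1/239 are small, the truncation error falls by roughly a factor of 25 per added term, which is why Machin's formula converges so quickly.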

Taylor Expansion of Multivariate Functions

The Taylor series can be generalized to handle functions of n variables, f(x) = f(x1, x2, x3, …, xn). The quadratic approximation, f̃(x), is given by

$$\tilde{f}(\mathbf{x}) = f(\mathbf{a}) + (\mathbf{x}-\mathbf{a})^T\nabla f(\mathbf{a}) + \frac{1}{2}(\mathbf{x}-\mathbf{a})^T H(\mathbf{a})(\mathbf{x}-\mathbf{a})$$

The above expression assumes a column vector, x = [x1 x2 x3 … xn]^T, and an expansion point a = [a1 a2 a3 … an]^T. The notation ∇f(x) stands for the gradient vector and is defined as

$$\nabla f(\mathbf{a}) = \nabla f(\mathbf{x})\big|_{\mathbf{x}=\mathbf{a}} = \left[\frac{\partial f(\mathbf{x})}{\partial x_1}\bigg|_{\mathbf{x}=\mathbf{a}} \;\; \frac{\partial f(\mathbf{x})}{\partial x_2}\bigg|_{\mathbf{x}=\mathbf{a}} \;\cdots\; \frac{\partial f(\mathbf{x})}{\partial x_n}\bigg|_{\mathbf{x}=\mathbf{a}}\right]^T$$

The notation $H(\mathbf{a}) = H(\mathbf{x})|_{\mathbf{x}=\mathbf{a}} = \nabla^2 f(\mathbf{x})|_{\mathbf{x}=\mathbf{a}}$ stands for the Hessian matrix and is defined as

$$H(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial^2 f(\mathbf{x})}{\partial x_1^2} & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_1 \partial x_n} \\ \dfrac{\partial^2 f(\mathbf{x})}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_2 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f(\mathbf{x})}{\partial x_n \partial x_1} & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_n \partial x_2} & \cdots & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_n^2} \end{bmatrix}$$

Example: Give a quadratic function that approximates the following function in the vicinity of the origin. Plot the function and its approximation for −1 ≤ x1 ≤ 1 and −1 ≤ x2 ≤ 1.

$$f(x_1, x_2) = (x_2 - x_1)^4 + 8x_1x_2 - x_1 + x_2 + 3$$

The function evaluated at the origin gives: 푓(0,0) = 3

$$\nabla f(\mathbf{0}) = \left[\frac{\partial f(\mathbf{x})}{\partial x_1}\bigg|_{\mathbf{x}=\mathbf{0}} \;\; \frac{\partial f(\mathbf{x})}{\partial x_2}\bigg|_{\mathbf{x}=\mathbf{0}}\right]^T = \begin{bmatrix} -4(x_2 - x_1)^3 + 8x_2 - 1 \\ 4(x_2 - x_1)^3 + 8x_1 + 1 \end{bmatrix}_{x_1=0,\,x_2=0} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$$

The Hessian is

$$H(\mathbf{0}) = \begin{bmatrix} \dfrac{\partial^2 f(\mathbf{x})}{\partial x_1^2} & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 f(\mathbf{x})}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f(\mathbf{x})}{\partial x_2^2} \end{bmatrix}_{\mathbf{x}=\mathbf{0}} = \begin{bmatrix} 12(x_2-x_1)^2 & -12(x_2-x_1)^2 + 8 \\ -12(x_2-x_1)^2 + 8 & 12(x_2-x_1)^2 \end{bmatrix}_{x_1=0,\,x_2=0} = \begin{bmatrix} 0 & 8 \\ 8 & 0 \end{bmatrix}$$

Finally, the quadratic approximation is

$$\tilde{f}(x_1,x_2) = 3 + [x_1 \;\; x_2]\begin{bmatrix} -1 \\ 1 \end{bmatrix} + \frac{1}{2}[x_1 \;\; x_2]\begin{bmatrix} 0 & 8 \\ 8 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

or,

$$\tilde{f}(x_1,x_2) = 8x_1x_2 - x_1 + x_2 + 3$$
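As a numerical sanity check (a Python sketch), the quadratic approximation should differ from f only by the quartic term (x2 − x1)^4, which is negligible near the origin:

```python
def f(x1, x2):
    return (x2 - x1)**4 + 8*x1*x2 - x1 + x2 + 3

def f_tilde(x1, x2):
    # quadratic Taylor approximation about the origin, derived above
    return 8*x1*x2 - x1 + x2 + 3

x1, x2 = 0.1, -0.1
gap = f(x1, x2) - f_tilde(x1, x2)   # equals (x2 - x1)**4 = 0.2**4
print(gap)
```

At (0.1, −0.1) the gap is only 0.0016, and it shrinks as the fourth power of the distance from the origin.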

The plotted functions are f(x1, x2) = (x2 − x1)^4 + 8x1x2 − x1 + x2 + 3 and f̃(x1, x2) = 8x1x2 − x1 + x2 + 3.

Plots superimposed: