
ECE 3040 Lecture 17: © Prof. Mohamad Hassoun

This lecture covers the following topics:

 Introduction
 Interpolation using a single polynomial
 Newton’s interpolation polynomials
 Matlab built-in polynomial interpolation: polyfit
 The curse of high-dimensional polynomials
 Cubic spline interpolation
 Matlab built-in cubic spline interpolation: spline
 Interpolation using rational functions

Introduction

Polynomial interpolation is a procedure for modeling a set of precise data points using a polynomial, 푝(푥), that fits the data exactly (passes through all provided data points). The data points are normally obtained from a complicated mathematical model, 푓(푥), of an engineering or scientific system derived from physical principles. Once an interpolation polynomial is computed, it can be used to replace the complicated mathematical model for the purpose of analysis and design. For instance, the interpolating polynomial can be used to estimate the value of the function at a new point 푥′, as 푓(푥′) ≅ 푝(푥′). The solution for the coefficients of the interpolation polynomial 푝(푥) can be determined by solving an associated system of linear equations, or can be computed using formulas.

The data points used for interpolation can also be a set of very accurately measured experimental values. In practice, if the set of data points is large, different polynomials are used to construct a piece-wise interpolation function; this procedure is known as spline interpolation. Rational functions may also be used for interpolation.

This lecture also introduces two built-in Matlab polynomial-based interpolation functions: polyfit and spline.

Interpolation Using a Single Polynomial

A straight line can be completely defined using two points on the straight line. The slope, 푎1, and 푦-intercept, 푎0, coefficients in the representation 푝(푥) = 푎1푥 + 푎0 are sufficient to define the straight line. The coefficients of the straight line that passes through the points (푥1, 푓(푥1)) and (푥2, 푓(푥2)) are unique and are the solution of the system of two linear equations:

푓(푥1) = 푎1푥1 + 푎0

푓(푥2) = 푎1푥2 + 푎0

or, in matrix form,

[푥1  1] [푎1]   [푓(푥1)]
[푥2  1] [푎0] = [푓(푥2)]

Similarly, three points are required to compute the coefficients of a parabola, 푝(푥) = 푎2푥^2 + 푎1푥 + 푎0. For example, the three coefficients of the parabola that passes through the distinct (non-collinear) points (푥1, 푓(푥1)), (푥2, 푓(푥2)) and (푥3, 푓(푥3)) are unique and are the solution of the system of three linear equations:

푓(푥1) = 푎2푥1^2 + 푎1푥1 + 푎0
푓(푥2) = 푎2푥2^2 + 푎1푥2 + 푎0
푓(푥3) = 푎2푥3^2 + 푎1푥3 + 푎0

or, in matrix form,

[푥1^2  푥1  1] [푎2]   [푓(푥1)]
[푥2^2  푥2  1] [푎1] = [푓(푥2)]
[푥3^2  푥3  1] [푎0]   [푓(푥3)]

In fact, fitting a polynomial to a set of given data points is not new to us. Recall that we had encountered this problem in the context of parabolic interpolation-based optimization in Lecture 14. The following is an example of second-order polynomial interpolation.

Example. Use parabolic interpolation to approximate the function

푓(푥) = 푥^2/10 − 2sin(푥)

from three of its points at 푥1 = 0, 푥2 = 1 and 푥3 = 2. Then, employ the quadratic model to predict the value of 푓(푥) at 푥 = 1.5. Also, compute the true error.

Solution. Substituting the values of 푥 in the function 푓(푥) leads to the three interpolation points: (0,0), (1, −1.583) and (2, −1.419). The coefficients of the parabola can then be obtained (employing the above formulation) as the solution of

[0  0  1] [푎2]   [ 0    ]
[1  1  1] [푎1] = [−1.583]
[4  2  1] [푎0]   [−1.419]

The (green) plot of the interpolation parabola 푝(푥) = 0.874푥^2 − 2.457푥 and the (blue) plot of 푓(푥) are shown below, within the interval [-0.5 3].
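As a numerical check of this example, the 3×3 system above can be solved with a short script. The lecture's own computations use Matlab; the following is a Python sketch with a small Gaussian-elimination helper standing in for Matlab's "\" operator.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

f = lambda x: x ** 2 / 10 - 2 * math.sin(x)
xs = [0.0, 1.0, 2.0]
a2, a1, a0 = solve([[x ** 2, x, 1.0] for x in xs], [f(x) for x in xs])
p = lambda x: a2 * x ** 2 + a1 * x + a0
print(a2, a1, a0)   # approximately 0.874, -2.457, 0
print(p(1.5))       # approximately -1.719, versus f(1.5) of about -1.770
```
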

The interpolated value at 푥 = 1.5 is 푝(1.5) = −1.719, giving a true error of 푓(1.5) − 푝(1.5) = −1.770 − (−1.719) = −0.051.

The above formulation can be extended to interpolating the 푛 points

{(푥1, 푓(푥1)), (푥2, 푓(푥2)), … , (푥푛, 푓(푥푛))}

using an (푛 − 1)-order polynomial

푝(푥) = 푎푛−1푥^(푛−1) + ⋯ + 푎2푥^2 + 푎1푥 + 푎0

which results in the following 푛×푛 system of linear equations:

[푥1^(푛−1)  ⋯  푥1^2  푥1  1] [푎푛−1]   [푓(푥1)]
[푥2^(푛−1)  ⋯  푥2^2  푥2  1] [  ⋮ ]   [푓(푥2)]
[   ⋮          ⋮    ⋮   ⋮] [ 푎1 ] = [  ⋮  ]
[푥푛^(푛−1)  ⋯  푥푛^2  푥푛  1] [ 푎0 ]   [푓(푥푛)]

Coefficient matrices of this form are referred to as Vandermonde matrices. Matlab can generate them using the built-in function vander(x), where x is a vector of the 푥푖 data points. The following is an example.
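The vander screenshot is not reproduced in this text version. As a sketch of the matrix it produces, the helper below (a Python stand-in for Matlab's vander, illustrative rather than the built-in) uses the same highest-power-first column ordering:

```python
def vander(x):
    """Python stand-in for Matlab's vander(x): row i holds
    [x_i^(n-1), ..., x_i^2, x_i, 1], highest power first."""
    n = len(x)
    return [[xi ** (n - 1 - j) for j in range(n)] for xi in x]

V = vander([2.0, 3.0, 5.0])
for row in V:
    print(row)   # each row is [x^2, x, 1] for that data point
```
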

The Vandermonde matrix can become ill-conditioned (especially for large 푛), and therefore the solution of the above system formulation is very sensitive to round-off errors. The Matlab instruction cond(A) can be used to return the condition number of a matrix A; a high condition number indicates a nearly singular matrix. For example,
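The cond example is likewise missing from this text version. The following Python sketch estimates the ∞-norm condition number ‖A‖·‖A⁻¹‖ (analogous to Matlab's cond(A,inf)) for Vandermonde matrices built from equispaced points, showing the rapid growth with 푛; all helper functions here are illustrative, not a library API.

```python
def vander(x):
    """Vandermonde matrix, highest power first (Matlab vander ordering)."""
    n = len(x)
    return [[xi ** (n - 1 - j) for j in range(n)] for xi in x]

def inverse(A):
    """Matrix inverse via Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]

def cond_inf(A):
    """Infinity-norm condition number ||A|| * ||inv(A)||."""
    norm = lambda B: max(sum(abs(v) for v in row) for row in B)
    return norm(A) * norm(inverse(A))

# the condition number grows rapidly with the number of equispaced points
for n in (4, 8, 12):
    x = [2.0 * i / (n - 1) for i in range(n)]
    print(n, cond_inf(vander(x)))
```
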

The next section presents an alternative, numerically robust formulation for the polynomial interpolation problem.

Newton’s Interpolation Polynomials

There are a variety of alternative forms for expressing an interpolation polynomial beyond the previous formulation. Newton’s interpolation polynomial is among the most popular.

The simplest form of interpolation is to start with two points (푥1, 푓(푥1)) and (푥2, 푓(푥2)) and connect them with a straight line. The equation for a straight line that passes through two points can be easily derived and is expressed as

푝푁1(푥) = [(푓(푥2) − 푓(푥1)) / (푥2 − 푥1)] (푥 − 푥1) + 푓(푥1)

We will refer to 푝푁1(푥) as Newton’s linear-interpolation formula. It allows us to approximate the value of 푓(푥) at some point 푥 inside [푥1 푥2]. The notation 푝푁1 designates that this is a first-order Newton interpolating polynomial. The nominal form for 푝푁1 is

푝푁1(푥) = 푏1 + 푏2(푥 − 푥1)

where the coefficients are given by

푏1 = 푓(푥1),   푏2 = (푓(푥2) − 푓(푥1)) / (푥2 − 푥1)

It is obvious here that the smaller the interval between the data points, the better the approximation. Alternatively, one can increase the order of the polynomial for increased accuracy if more data points are available. For three data points, the second-order Newton’s interpolation polynomial, 푝푁2(푥), is

푝푁2(푥) = 푏1 + 푏2(푥 − 푥1) + 푏3(푥 − 푥1)(푥 − 푥2)

with coefficients

푏1 = 푓(푥1)

푏2 = (푓(푥2) − 푓(푥1)) / (푥2 − 푥1)

푏3 = [ (푓(푥3) − 푓(푥2))/(푥3 − 푥2) − (푓(푥2) − 푓(푥1))/(푥2 − 푥1) ] / (푥3 − 푥1)

Notice that the coefficients 푏1 and 푏2 are the same in the second- and first-order Newton’s polynomials. This means that if two points are given and a first-order Newton’s polynomial is fitted to pass through these points and then a third point is added, the polynomial 푝푁2 would then be computed as

푝푁2 = 푝푁1 + 푏3(푥 − 푥1)(푥 − 푥2)

and only one new coefficient (푏3) would need to be computed. This observation is also true for higher-order Newton polynomials. It is interesting to note that 푏3 is the second finite divided difference, which is related to the finite-difference approximation of the second derivative, 푓′′(푥).

The general form for Newton’s (푛 − 1)-order interpolating polynomial on the 푛 data points {(푥1, 푓(푥1)), (푥2, 푓(푥2)), … , (푥푛, 푓(푥푛))}, is given by

푝푁(푛−1)(푥) = 푏1 + 푏2(푥 − 푥1) + 푏3(푥 − 푥1)(푥 − 푥2) + ⋯ + 푏푛(푥 − 푥1)(푥 − 푥2) ⋯ (푥 − 푥푛−1)

where the 푏푖 constants are evaluated directly (and efficiently) from the data points, using the formulas:

푏1 = 푓(푥1)

푏2 = (푓(푥2) − 푏1) / (푥2 − 푥1)

푏3 = [ (푓(푥3) − 푓(푥2))/(푥3 − 푥2) − 푏2 ] / (푥3 − 푥1)

푏4 = { [ (푓(푥4) − 푓(푥3))/(푥4 − 푥3) − (푓(푥3) − 푓(푥2))/(푥3 − 푥2) ] / (푥4 − 푥2) − 푏3 } / (푥4 − 푥1)

and so on.

Your turn: Deduce the formula for 푏5.

Example. Solve the last example using a second-order Newton’s interpolation polynomial. Again, the three interpolation points are (0,0), (1, −1.583) and (2, −1.419).

Solution. The 푏 coefficients are:

푏1 = 푓(푥1) = 푓(0) = 0

푏2 = (푓(푥2) − 푏1)/(푥2 − 푥1) = (−1.583 − 0)/(1 − 0) = −1.583

푏3 = [ (푓(푥3) − 푓(푥2))/(푥3 − 푥2) − 푏2 ] / (푥3 − 푥1) = [ (−1.419 − (−1.583))/(2 − 1) − (−1.583) ] / (2 − 0) = 0.874

which leads to the interpolation polynomial,

푝푁2(푥) = 0 − 1.583(푥 − 0) + 0.874(푥 − 0)(푥 − 1) = −1.583푥 + 0.874푥(푥 − 1) = 0.874푥^2 − 2.457푥

The interpolated value at 푥 = 1.5 is 푝푁2(1.5) = −1.719. These results are consistent with those obtained based on solving the Vandermonde matrix formulation (refer to the previous example).

The coefficients of the Newton polynomial can also be computed systematically by solving (using forward substitution, utilizing the “\” operator) the lower-triangular system of equations shown below. Your turn: Use it to find {푏1, 푏2, 푏3} for the last example.

[1  0          0                  ⋯  0                               ] [푏1]   [푓(푥1)]
[1  (푥2 − 푥1)  0                  ⋯  0                               ] [푏2]   [푓(푥2)]
[1  (푥3 − 푥1)  (푥3 − 푥1)(푥3 − 푥2)  ⋯  0                               ] [푏3] = [푓(푥3)]
[⋮  ⋮          ⋮                     ⋮                               ] [ ⋮]   [  ⋮  ]
[1  (푥푛 − 푥1)  (푥푛 − 푥1)(푥푛 − 푥2)  ⋯  (푥푛 − 푥1)(푥푛 − 푥2) ⋯ (푥푛 − 푥푛−1)] [푏푛]   [푓(푥푛)]

Function newtint is a Matlab implementation of Newton’s interpolation.

The following is an application of the function newtint to the solution of the last example:
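The Matlab listing for newtint is not reproduced in this text-only version. The following Python sketch implements the same divided-difference algorithm; the name newtint and the argument order are borrowed from the Matlab version, but the code itself is an illustration, not the original listing.

```python
def newtint(x, y, xi):
    """Newton interpolating polynomial through (x[k], y[k]), evaluated at xi.

    First builds the divided-difference coefficients b1..bn in place,
    then evaluates the nested (Horner-like) form of the Newton polynomial.
    """
    n = len(x)
    b = list(y)                        # b[0] = f(x1)
    for j in range(1, n):              # successive divided differences
        for i in range(n - 1, j - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (x[i] - x[i - j])
    p = b[n - 1]                       # nested evaluation at xi
    for i in range(n - 2, -1, -1):
        p = b[i] + (xi - x[i]) * p
    return p

# the worked example: points (0, 0), (1, -1.583), (2, -1.419)
print(newtint([0.0, 1.0, 2.0], [0.0, -1.583, -1.419], 1.5))  # about -1.7194
```
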

Your turn: Rewrite the above function using the matrix formulation shown at the end of the last page. Email your Matlab function to the grader before the due date for Assignment 11.

Matlab Built-in Polynomial Interpolation: polyfit

For the case where the number of data points is equal to the number of coefficients of the fit polynomial, the built-in Matlab function polyfit performs interpolation (polyfit can also be used to solve regression problems, as discussed later in Lecture 18). So, for interpolation, this function returns the coefficients of the (푛 − 1)-degree polynomial that passes through 푛 data points.

The basic function-call is p=polyfit(x,y,m), where x is a vector of 푛 independent values, y is the corresponding vector of 푛 dependent values and 푚 = 푛 − 1. The function returns the coefficient vector of the (푛 − 1)-order interpolation polynomial.

The last example can be solved using polyfit and polyval as follows:
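The polyfit/polyval listing is not reproduced in this text version. As a sketch of what polyval does under the hood, the following Python snippet evaluates the interpolating parabola found earlier using Horner's rule; the name polyval and its highest-power-first coefficient convention are borrowed from Matlab, but this is an illustration, not the built-in.

```python
def polyval(p, x):
    """Evaluate a polynomial with coefficients p (highest power first) at x,
    using Horner's rule -- the same convention Matlab's polyval uses."""
    result = 0.0
    for c in p:
        result = result * x + c
    return result

p = [0.874, -2.457, 0.0]   # parabola coefficients from the worked example
print(polyval(p, 1.5))     # about -1.719
```
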

The Curse of High-Dimensional Polynomials

When the data set is large, a high-order interpolating polynomial is required. Interpolating with high-order polynomials should be avoided, because they tend to be highly sensitive to round-off error and may also lead to overfitting. The latter phenomenon is illustrated in the next example. The next section introduces spline interpolation as a way to avoid the overfitting problem encountered when interpolating a large number of data points.

Example. Generate three sets of 5, 7 and 11 equidistant points within [-1 1] from the function 푓(푥) = 1/(1 + 25푥^2). For each data set, plot the function, its interpolation polynomial and the interpolation points on the same graph.

Solution. Employing polyfit and polyval, we can solve the problem, as follows:

For 5 data points the solution is:

[푓(푥) is the plot in red]

Similarly, for 7 data points the solution is:

And for 11 data points, the solution is:
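The Matlab plots for this example are not reproduced in this text version, but the edge-of-interval error growth they illustrate can be checked numerically. The following Python sketch (equispaced nodes assumed, with a small Gaussian-elimination solver standing in for polyfit) compares the interpolation error at 푥 = 0.95 for 5 and 11 points:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

f = lambda x: 1.0 / (1.0 + 25.0 * x * x)   # Runge's function

def edge_error(n, xprobe=0.95):
    """Interpolate f at n equispaced nodes on [-1, 1] with a degree n-1
    polynomial, then return |f - p| at a probe point near the edge."""
    nodes = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    coeffs = solve([[x ** (n - 1 - j) for j in range(n)] for x in nodes],
                   [f(x) for x in nodes])
    p = 0.0
    for c in coeffs:                       # Horner evaluation
        p = p * xprobe + c
    return abs(f(xprobe) - p)

print(edge_error(5), edge_error(11))       # the error grows sharply with n
```
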

As can be seen from the above plots, as the degree of the interpolation polynomial increases, the interpolation accuracy deteriorates significantly for points near the edges of the interval.

Cubic Spline Interpolation

When a set of 푛 data points is given and a single (푛 − 1)-degree polynomial is used for interpolation, then the polynomial gives the exact values at the points (passes through the points) and gives estimated (interpolated) values between the points. When the number of points is small, the order of the polynomial is low and, typically, that leads to reasonably accurate interpolation. However, as already demonstrated in the last example, large interpolation errors (overfitting) may occur when a high-degree polynomial is used for interpolation.

When a large number of points is involved, a better interpolation can be obtained using multiple low-order polynomials (linear, quadratic or cubic) instead of a single high-order polynomial. Here, each polynomial is valid in one interval between two or several points. Typically, all polynomials are of the same degree, but each has different coefficients. Interpolation in this way is called piecewise or spline interpolation. For example, quadratic polynomials employed to connect each pair of adjacent points are called quadratic splines. These polynomials can be constructed so that the connections (knots) between adjacent quadratic equations are visually smooth. The concept of the spline originated from the drafting technique of using a thin, flexible tool (called a spline) to draw smooth curves through a set of points. See below for a picture of a typical spline drafting tool.

Linear splines. The notation used for splines is presented in the following figure. For 푛 data points, there are 푛 − 1 intervals. Each interval 푖 has its own spline function, 푠푖, resulting in a total of 푛 − 1 splines.

[Figure courtesy: Applied Numerical Methods with Matlab, 2nd edition, S. C. Chapra (McGraw-Hill, 2008)]

With linear splines, interpolation is carried out by using a straight line between adjacent points. The 푖th spline can then be expressed by Newton’s formula

푠푖(푥) = [(푓(푥푖+1) − 푓(푥푖)) / (푥푖+1 − 푥푖)] (푥 − 푥푖) + 푓(푥푖),   푖 = 1, 2, … , 푛 − 1

The interpolated value at a given point 푥′ is computed by simply “looking up” the interval 푖 that 푥′ belongs to, and then evaluating 푠푖(푥′).

Example. Fit the four data points in the following table with linear splines. Then, interpolate at 푥 = 3.5 and 5.

푖   푥푖    푓(푥푖)
1   3.0   2.5
2   4.5   1.0
3   7.0   2.5
4   9.0   0.5

Solution. The data is plotted in the following figure and the three linear splines are shown.

The equations for the 푛 − 1 linear splines are:

푠1(푥) = [(1.0 − 2.5)/(4.5 − 3.0)] (푥 − 3.0) + 2.5 = −푥 + 5.5

푠2(푥) = [(2.5 − 1.0)/(7.0 − 4.5)] (푥 − 4.5) + 1.0 = 0.6푥 − 1.7

푠3(푥) = [(0.5 − 2.5)/(9.0 − 7.0)] (푥 − 7.0) + 2.5 = −푥 + 9.5

The first interpolation point 푥 = 3.5 belongs to the first interval, [3.0 4.5], so we employ spline 푠1 to solve for the interpolated value of the function: 푠1(3.5) = −3.5 + 5.5 = 2.

The second interpolation point 푥 = 5.0 belongs to the second interval, [4.5 7.0], so we employ spline 푠2 to solve for the interpolated value of the function: 푠2(5.0) = 0.6(5.0) − 1.7 = 1.3.
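The look-up-then-evaluate procedure for this example can be sketched in a few lines of Python (illustrative, not the lecture's Matlab code):

```python
# knots and function values from the worked example
xk = [3.0, 4.5, 7.0, 9.0]
fk = [2.5, 1.0, 2.5, 0.5]

def linear_spline(xp):
    """Locate the interval containing xp, then evaluate that segment's
    straight line (Newton's linear-interpolation formula)."""
    for i in range(len(xk) - 1):
        if xk[i] <= xp <= xk[i + 1]:
            slope = (fk[i + 1] - fk[i]) / (xk[i + 1] - xk[i])
            return slope * (xp - xk[i]) + fk[i]
    raise ValueError("xp is outside the data range")

print(linear_spline(3.5))   # 2.0
print(linear_spline(5.0))   # 1.3
```
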

Cubic splines. Visual inspection of the above figure indicates that linear spline interpolation is not smooth. At the data point at which two adjacent splines meet (called a knot), the slope changes abruptly. This implies that the derivative is not continuous at the knot, which violates the practical assumption that the underlying function being interpolated is smooth. This deficiency is overcome by using higher-order polynomial splines that allow the first derivative at the knot to be continuous. For improved smoothness, we may also require that the second derivative be continuous at each knot. These requirements can’t be achieved with a linear spline, since a straight line does not have enough parameters (coefficients) to allow for such flexibility. Therefore, higher-order polynomial splines must be used.

The following derivation captures the spirit of the spline method. Here we seek the lowest-order polynomial spline 푠푖 that passes through the two interpolating points (푥푖, 푓(푥푖)) and (푥푖+1, 푓(푥푖+1)), has a slope (first derivative) equal to the slope of spline 푠푖−1 at 푥푖, and has a concavity (second derivative) equal to that of spline 푠푖−1 at 푥푖. Mathematically, we require that spline 푠푖 satisfy all of the following four constraints,

푠푖(푥푖) = 푓(푥푖)

푠푖(푥푖+1) = 푓(푥푖+1)

푑푠푖(푥푖)/푑푥 = 푑푠푖−1(푥푖)/푑푥

푑^2푠푖(푥푖)/푑푥^2 = 푑^2푠푖−1(푥푖)/푑푥^2

The polynomial that satisfies these four constraints must have at least four degrees of freedom (coefficients). That would be a 3rd-order polynomial, or a cubic. Therefore, 푠푖 has the form,

푠푖(푥) = 푎푖,3푥^3 + 푎푖,2푥^2 + 푎푖,1푥 + 푎푖,0,   푖 = 1, 2, … , 푛 − 1

(Note that in order to fit 푛 data points, we would require 푛 − 1 splines.) In order to compute all 4(푛 − 1) spline coefficients, we would need to solve 4(푛 − 1) equations. The computation of the cubic splines is illustrated next for three data points 푥1, 푥2, 푥3. For three points, we need [푛 − 1 = 3 − 1 = 2] two cubic splines:

푠1(푥) = 푎1,3푥^3 + 푎1,2푥^2 + 푎1,1푥 + 푎1,0   (spline between 푥1 and 푥2)

푠2(푥) = 푎2,3푥^3 + 푎2,2푥^2 + 푎2,1푥 + 푎2,0   (spline between 푥2 and 푥3)

whose first and second derivatives are given by, respectively,

푑푠1(푥)/푑푥 = 3푎1,3푥^2 + 2푎1,2푥 + 푎1,1

푑^2푠1(푥)/푑푥^2 = 6푎1,3푥 + 2푎1,2

푑푠2(푥)/푑푥 = 3푎2,3푥^2 + 2푎2,2푥 + 푎2,1

푑^2푠2(푥)/푑푥^2 = 6푎2,3푥 + 2푎2,2

In order to satisfy the four constraints, we would need to solve eight equations. Let us write down these equations. Each spline must pass through two end data points, resulting in the four equations

푠1(푥1) = 푎1,3푥1^3 + 푎1,2푥1^2 + 푎1,1푥1 + 푎1,0 = 푓(푥1)   (1)

푠1(푥2) = 푎1,3푥2^3 + 푎1,2푥2^2 + 푎1,1푥2 + 푎1,0 = 푓(푥2)   (2)

푠2(푥2) = 푎2,3푥2^3 + 푎2,2푥2^2 + 푎2,1푥2 + 푎2,0 = 푓(푥2)   (3)

푠2(푥3) = 푎2,3푥3^3 + 푎2,2푥3^2 + 푎2,1푥3 + 푎2,0 = 푓(푥3)   (4)

The first derivatives (also the second derivatives) of splines 푠1(푥) and 푠2(푥) at the interior knot (푥2) must be equal, resulting in two additional equations:

3푎1,3푥2^2 + 2푎1,2푥2 + 푎1,1 = 3푎2,3푥2^2 + 2푎2,2푥2 + 푎2,1

6푎1,3푥2 + 2푎1,2 = 6푎2,3푥2 + 2푎2,2

Or, equivalently,

3푎1,3푥2^2 + 2푎1,2푥2 + 푎1,1 − 3푎2,3푥2^2 − 2푎2,2푥2 − 푎2,1 = 0   (5)

3푎1,3푥2 + 푎1,2 − 3푎2,3푥2 − 푎2,2 = 0   (6)

So far, we have a total of six equations and we need two more equations. These two additional equations can be obtained in different ways, depending on how we want the end splines 푠1 and 푠푛−1 to be constrained at the endpoints, 푥1 and 푥푛, respectively. One common approach is to provide user-specified first derivatives, 푓′(푥1) and 푓′(푥푛). For example, specifying zero derivatives 푓′(푥1) = 푓′(푥푛) = 0 forces the end splines to have flat tails. This specification (termed the clamped-end condition) leads to the last two equations (for the present case of 푛 = 3),

3푎1,3푥1^2 + 2푎1,2푥1 + 푎1,1 = 푓′(푥1) = 0   (7)

3푎2,3푥3^2 + 2푎2,2푥3 + 푎2,1 = 푓′(푥3) = 0   (8)

The solution for the cubic spline coefficients, for the three-data-point problem, becomes the solution of the following 8×8 matrix system [Equations (1)–(8)]:

[푥1^3    푥1^2   푥1  1   0       0      0    0] [푎1,3]   [푓(푥1) ]
[푥2^3    푥2^2   푥2  1   0       0      0    0] [푎1,2]   [푓(푥2) ]
[0       0      0   0   푥2^3    푥2^2   푥2   1] [푎1,1]   [푓(푥2) ]
[0       0      0   0   푥3^3    푥3^2   푥3   1] [푎1,0] = [푓(푥3) ]
[3푥2^2   2푥2    1   0   −3푥2^2  −2푥2   −1   0] [푎2,3]   [0     ]
[3푥2     1      0   0   −3푥2    −1     0    0] [푎2,2]   [0     ]
[3푥1^2   2푥1    1   0   0       0      0    0] [푎2,1]   [푓′(푥1)]
[0       0      0   0   3푥3^2   2푥3    1    0] [푎2,0]   [푓′(푥3)]

The above cubic spline formulation can be extended to 푛 points.

Example. Fit the data in the following table using cubic splines. Assume the spline derivatives at the end points, 푥1 and 푥3, to be 0. Then, interpolate at 푥 = 3.5 and 5.

푖   푥푖    푓(푥푖)
1   3.0   2.5
2   4.5   1.0
3   7.0   2.5

The cubic spline formulation applied to the above data leads to the system of linear equations (verify it),

[27      9      3    1   0       0      0    0] [푎1,3]   [2.5]
[91.125  20.25  4.5  1   0       0      0    0] [푎1,2]   [1.0]
[0       0      0    0   91.125  20.25  4.5  1] [푎1,1]   [1.0]
[0       0      0    0   343     49     7    1] [푎1,0] = [2.5]
[60.75   9      1    0   −60.75  −9     −1   0] [푎2,3]   [0  ]
[13.5    1      0    0   −13.5   −1     0    0] [푎2,2]   [0  ]
[27      6      1    0   0       0      0    0] [푎2,1]   [0  ]
[0       0      0    0   147     14     1    0] [푎2,0]   [0  ]

Matlab left-division solution:

Therefore, the two fitted splines are

푠1(푥) ≅ 0.622푥^3 − 7.200푥^2 + 26.400푥 − 28.700

푠2(푥) ≅ −0.288푥^3 + 5.088푥^2 − 28.896푥 + 54.244
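Since the Matlab left-division output is not reproduced in this text version, the 8×8 system can be checked with a Python sketch; the Gaussian-elimination helper below stands in for Matlab's "\" operator.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (stand-in for Matlab A\\b)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[27.0, 9.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0],
     [91.125, 20.25, 4.5, 1.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0, 91.125, 20.25, 4.5, 1.0],
     [0.0, 0.0, 0.0, 0.0, 343.0, 49.0, 7.0, 1.0],
     [60.75, 9.0, 1.0, 0.0, -60.75, -9.0, -1.0, 0.0],
     [13.5, 1.0, 0.0, 0.0, -13.5, -1.0, 0.0, 0.0],
     [27.0, 6.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0, 147.0, 14.0, 1.0, 0.0]]
b = [2.5, 1.0, 1.0, 2.5, 0.0, 0.0, 0.0, 0.0]
a = solve(A, b)   # [a13, a12, a11, a10, a23, a22, a21, a20]

s1 = lambda x: ((a[0] * x + a[1]) * x + a[2]) * x + a[3]
s2 = lambda x: ((a[4] * x + a[5]) * x + a[6]) * x + a[7]
print(s1(3.5))   # about 2.1778
print(s2(5.0))   # about 0.9640
```
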

The plot for the interpolating cubic spline is shown below.

The first interpolation point, 푥 = 3.5, belongs to the first interval, [3.0 4.5], so we employ spline 푠1 to solve for the interpolated value of the function: 푠1(3.5) = 2.1682. (Note: Retaining more decimal digits for the coefficients of 푠1(푥) leads to the more accurate value of 2.1778.) Similarly, the second interpolation point, 푥 = 5.0, belongs to the second interval, [4.5 7.0], so we employ spline 푠2 to solve for the interpolated value of the function: 푠2(5.0) = 0.9640.

Matlab Built-in Cubic Spline Interpolation: spline

Matlab has several built-in functions that implement piecewise interpolation. The spline function performs cubic spline interpolation. spline has the general syntax ys = spline(x,y,xx), where x and y are vectors containing the data points and ys is a vector containing the results of the spline interpolation evaluated at the points of vector xx. By default, the spline function uses a “not-a-knot” condition. Here, the idea is to force the continuity of the third derivative at the second and the next-to-last knots. This introduces the two additional equations needed by the preceding formulation. With this method, the first derivative constraints at the first and the last data points are not enforced. On the other hand, if y contains two more points than x has entries, then the first and last values of y are used as the derivatives at the end points 푥1 and 푥푛, respectively, leading to the clamped-end-condition formulation described earlier. Verification of last example:

Example. Consider the interpolation problem employing the nine data points generated by evaluating the function 푓(푥) = (푥 − 2)(2푥 + 1)/(1 + 푥^2) at the following nine 푥 values: [−5  −3  −1  −1/2  0  1/3  1  3  6]. Plot the 8-degree polynomial interpolation solution (use the polyfit function), and compare it (graphically) to cubic spline interpolation (employ the spline function). Try both the clamped-end and not-a-knot end options.

Solution with “clamped-end” condition:

x = [-5 -3 -1 -1/2 0 1/3 1 3 6];
y = (x-2).*(2*x+1)./(1+x.^2);
xx = linspace(-5,6);
yy = (xx-2).*(2*xx+1)./(1+xx.^2);
p = polyfit(x,y,8);        % 8-degree interpolating polynomial
y8 = polyval(p,xx);
yc = [0 y 0];              % end slopes of 0 (clamped-end condition)
ys = spline(x,yc,xx);
plot(x,y,'or',xx,ys,xx,y8,xx,yy)
axis([-5 6 -3 3]);

Cubic Spline vs. Polynomial Interpolation

푓(푥) (red). Spline interpolation (blue). Polynomial interpolation (green).

Effect of cubic spline end condition:

x = [-5 -3 -1 -1/2 0 1/3 1 3 6];
y = (x-2).*(2*x+1)./(1+x.^2);
xx = linspace(-5,6);
yy = (xx-2).*(2*xx+1)./(1+xx.^2);
ys1 = spline(x,y,xx);      % not-a-knot end condition
yc = [0 y 0];              % end slopes of 0 (clamped-end condition)
ys2 = spline(x,yc,xx);
plot(x,y,'or',xx,ys2,xx,ys1,xx,yy)
axis([-5 6 -3 3]);

The following graph compares the two spline solutions. Here, since 푓(푥) is flat at its left and right ends, the clamped-end condition leads to a more appropriate interpolation solution.

Spline interpolation with the clamped-end condition (blue), and with the not-a-knot end condition (green).

Matlab has a convenient utility for generating and visualizing cubic splines. It employs the not-a-knot end condition by default. First, the user generates a plot of the data.

Next, from the Figure Window menu, the user would select Tools>Basic Fitting and then choose the “spline interpolant” option. The following screen capture shows the results using the data from the previous example. The red trace is the exact function and the blue plot is the cubic spline fit (to the data points shown as red circles). This blue trace is identical to the green trace in the above figure.

Interpolation Using Rational Functions

As we have demonstrated earlier, interpolation with a polynomial can be problematic when a large number of points is used. Polynomials also lack the ability to realize horizontal asymptotes and/or vertical ones (singularities). A rational function of the form

푓̃(푥) = (푎0 + 푎1푥 + 푎2푥^2 + ⋯ + 푎푛푥^푛) / (1 + 푏1푥 + 푏2푥^2 + ⋯ + 푏푚푥^푚)

can easily realize asymptotes. Rational functions can also be much less oscillatory. Therefore, we should expect rational functions to be superior interpolants to polynomials.

We have encountered such rational functions (in Lecture 9) in connection with Padé approximation, where the 푓̃(푥) parameters (coefficients 푎푖, 푏푖) were obtained by solving a system of equations. There, we assumed that the function being approximated, 푓(푥), is known and that the system of equations for the coefficients was obtained by setting: 푓̃(0) = 푓(0), 푓̃′(0) = 푓′(0), 푓̃′′(0) = 푓′′(0), etc.

Rational functions with 푁 parameters can also be used for interpolating 푁 points. The following is a simple example that illustrates the method of rational function interpolation.

Example. Interpolate the points (0,1), (0.5, √1.5) and (1, √2) [obtained from 푓(푥) = √(1 + 푥)] using the rational function

푓̃(푥) = (푎0 + 푎1푥) / (1 + 푏1푥)

Rewriting the equation we obtain

푓̃(푥)(1 + 푏1푥) = 푎0 + 푎1푥

or,

푎0 + 푥푎1 − 푥푓̃(푥)푏1 = 푓̃(푥)

Evaluating the last equation at (푥, 푓̃(푥)) = (0,1) we obtain 푎0 = 1. The remaining two parameters are obtained as the solution of the following set of two linear algebraic equations generated from the points (0.5, √1.5) and (1, √2), as follows,

0.5푎1 − 0.5√1.5 푏1 = √1.5 − 1

푎1 − √2 푏1 = √2 − 1

The solution is 푎1 ≅ 0.6775 and 푏1 ≅ 0.1862 leading to

푓̃(푥) = (1 + 0.6775푥) / (1 + 0.1862푥)
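The small 2×2 solve can be checked quickly; the following Python sketch uses the symbols 푎1, 푏1 from the text and eliminates 푏1 by hand rather than calling a solver.

```python
import math

r15, r2 = math.sqrt(1.5), math.sqrt(2.0)

# With a0 = 1, the two remaining equations are
#   0.5*a1 - 0.5*r15*b1 = r15 - 1
#       a1 -      r2*b1 = r2  - 1
# Doubling the first equation and subtracting the second isolates b1:
b1 = (2.0 * (r15 - 1.0) - (r2 - 1.0)) / (r2 - r15)
a1 = (r2 - 1.0) + r2 * b1

ftilde = lambda x: (1.0 + a1 * x) / (1.0 + b1 * x)
print(a1, b1)   # about 0.6775 and 0.1862
```

By construction the interpolant reproduces the three data points, e.g. ftilde(0.5) equals √1.5 to machine precision.
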

Note that this function has a horizontal asymptote (as 푥 → ∞) at 0.6775/0.1862 ≅ 3.639 and a singularity at 푥 = −1/0.1862 ≅ −5.3710 (vertical asymptote due to a zero denominator). The interpolation solution is depicted in the following figures.

Note that 푓̃(푥) is only accurate for 푥 ∈ [0 2].

Your turn: Approximate the function 푓(푥) = ln(1 + 푥) employing the four-parameter rational function

푓̃(푥) = (푎1푥 + 푎2푥^2) / (1 + 푏1푥 + 푏2푥^2)

over the interval 1/2 ≤ 푥 ≤ 2. Choose the 푥 values {1/2, 1, 3/2, 2} when performing the interpolation. Plot 푓(푥) and 푓̃(푥) for 푥 ∈ [−0.85 5]. Does 푓̃(푥) have a singularity close to 푥 = −1 that matches the singularity in ln(1 + 푥)?