
4. Interpolation and Approximation

Most functions cannot be evaluated exactly: √x, e^x, ln x, trigonometric functions, ...

since on a computer we are limited to the elementary arithmetic operations +, −, ×, ÷. With these operations we can only evaluate polynomials and rational functions (polynomials divided by polynomials).

Interpolation

Given points x0, x1, . . . , xn and corresponding values

y0, y1, . . . , yn

find a function f(x) such that

f(xi) = yi, i = 0, . . . , n.

The interpolation function f is usually taken from a restricted class of functions: polynomials.

4.1 Theory: Interpolation of functions

Given a function f(x) and nodes

x0, x1, . . . , xn

with known values

f(x0), f(x1), . . . , f(xn),

find a polynomial (or other special function) p(x) such that

p(xi) = f(xi), i = 0, . . . , n.

What is the error f(x) − p(x)?

4.1.1 Linear interpolation

Given two data points (x0, y0) and (x1, y1) with x0 ≠ x1, draw a line through them, i.e., the graph of the linear polynomial

ℓ(x) = ((x − x1)/(x0 − x1)) y0 + ((x − x0)/(x1 − x0)) y1 = ((x1 − x)y0 + (x − x0)y1)/(x1 − x0)   (5.1)

We say that ℓ(x) interpolates the value yi at the point xi, i = 0, 1, i.e., ℓ(xi) = yi, i = 0, 1.

Figure: Linear interpolation

Example

Let the data points be (1, 1) and (4, 2). The polynomial P1(x) is given by

P1(x) = ((4 − x) · 1 + (x − 1) · 2)/3   (5.2)

The figure shows the graph of y = P1(x) together with y = √x, from which the data points were taken.

Figure: y = √x and its linear interpolating polynomial (5.2)

Example

Obtain an estimate of e^0.826 using the function values

e^0.82 ≈ 2.270500,   e^0.83 ≈ 2.293319

Denote x0 = 0.82, x1 = 0.83. The polynomial P1(x) interpolating e^x at x0 and x1 is

P1(x) = ((0.83 − x) · 2.270500 + (x − 0.82) · 2.293319)/0.01   (5.3)

and P1(0.826) = 2.2841914, while the true value is

e^0.826 ≈ 2.2841638

to eight significant digits.
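As a quick check of (5.3) (a sketch, not part of the original notes), the linear interpolant can be evaluated directly in MATLAB:

% Sketch: evaluate the linear interpolant (5.3) at x = 0.826
x0 = 0.82;  x1 = 0.83;
y0 = 2.270500;  y1 = 2.293319;
P1 = @(x) ((x1 - x).*y0 + (x - x0).*y1) / (x1 - x0);
P1(0.826)      % approx. 2.2841914
exp(0.826)     % approx. 2.2841638, for comparison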

4.1.2 Quadratic Interpolation

Assume three data points (x0, y0), (x1, y1), (x2, y2), with x0, x1, x2 distinct. We construct the quadratic polynomial passing through these points using Lagrange's formula

P2(x) = y0 L0(x) + y1 L1(x) + y2 L2(x)   (5.4)

with the Lagrange interpolation basis functions for the quadratic interpolating polynomial

L0(x) = (x − x1)(x − x2) / ((x0 − x1)(x0 − x2))
L1(x) = (x − x0)(x − x2) / ((x1 − x0)(x1 − x2))   (5.5)
L2(x) = (x − x0)(x − x1) / ((x2 − x0)(x2 − x1))

Each Li(x) has degree 2, so P2(x) has degree ≤ 2. Moreover,

Li(xj) = δij = 1 if i = j, 0 if i ≠ j,   for 0 ≤ i, j ≤ 2,

the Kronecker delta function.

P2(x) interpolates the data: P2(xi) = yi, i = 0, 1, 2.
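The quadratic Lagrange form (5.4)–(5.5) can be coded directly; the following helper is only a sketch (the name lagrange2 is ours, not from the notes) and would be saved as lagrange2.m:

function p = lagrange2(xi, yi, x)
% Evaluate the quadratic Lagrange form (5.4)-(5.5) at the points x,
% given the three nodes xi(1:3) and values yi(1:3).
L0 = (x - xi(2)).*(x - xi(3)) / ((xi(1) - xi(2))*(xi(1) - xi(3)));
L1 = (x - xi(1)).*(x - xi(3)) / ((xi(2) - xi(1))*(xi(2) - xi(3)));
L2 = (x - xi(1)).*(x - xi(2)) / ((xi(3) - xi(1))*(xi(3) - xi(2)));
p  = yi(1)*L0 + yi(2)*L1 + yi(3)*L2;
end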

Example

Construct P2(x) for the data points (0, −1), (1, −1), (2, 7). Then

P2(x) = ((x − 1)(x − 2)/2) · (−1) + (x(x − 2)/(−1)) · (−1) + (x(x − 1)/2) · 7   (5.6)

Figure: The quadratic interpolating polynomial (5.6)


With linear interpolation it is obvious that there is only one straight line passing through two given data points. With three data points there is only one quadratic interpolating polynomial whose graph passes through them. Indeed, assume there exists Q2(x) with deg(Q2) ≤ 2 passing through (xi, yi), i = 0, 1, 2; then it is equal to P2(x). The polynomial

R(x) = P2(x) − Q2(x)

has deg(R) ≤ 2 and

R(xi) = P2(xi) − Q2(xi) = yi − yi = 0, for i = 0, 1, 2

So R(x) is a polynomial of degree ≤ 2 with three roots ⇒ R(x) ≡ 0


Example

Calculate a quadratic interpolant to e^0.826 from the function values

e^0.82 ≈ 2.270500,   e^0.83 ≈ 2.293319,   e^0.84 ≈ 2.316367

With x0 = 0.82, x1 = 0.83, x2 = 0.84 and the values above, we have

P2(0.826) ≈ 2.2841639

to eight digits, while the true answer is e^0.826 ≈ 2.2841638, and P1(0.826) ≈ 2.2841914.
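This result can be reproduced with the hypothetical lagrange2 helper sketched earlier:

xi = [0.82 0.83 0.84];
yi = exp(xi);
lagrange2(xi, yi, 0.826)   % approx. 2.2841639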

4.1.3 Higher-degree interpolation: Lagrange's formula

Given n + 1 data points (x0, y0), (x1, y1),..., (xn, yn) with all xi’s distinct, ∃ unique Pn, deg(Pn) ≤ n such that Pn(xi) = yi, i = 0, . . . , n given by Lagrange’s Formula

Pn(x) = Σ_{i=0}^{n} yi Li(x) = y0 L0(x) + y1 L1(x) + · · · + yn Ln(x)   (5.7)

where

Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj) = ((x − x0) · · · (x − xi−1)(x − xi+1) · · · (x − xn)) / ((xi − x0) · · · (xi − xi−1)(xi − xi+1) · · · (xi − xn)),

where Li(xj) = δij
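For general n, formula (5.7) can be coded directly; the helper below is a sketch (the name lagrange_eval is ours), mainly useful for small n since it costs O(n^2) operations per evaluation point:

function p = lagrange_eval(x_nodes, y_values, x)
% Evaluate the Lagrange form (5.7) at a scalar point x.
n = length(x_nodes);
p = 0;
for i = 1:n
  Li = 1;                               % build L_i(x)
  for j = [1:i-1, i+1:n]
    Li = Li*(x - x_nodes(j))/(x_nodes(i) - x_nodes(j));
  end
  p = p + y_values(i)*Li;
end
end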

4.1.4 Divided differences

Remark: Lagrange's formula (5.7) is well suited for theoretical uses, but it is impractical for computing the value of an interpolating polynomial: knowing P2(x) does not lead to a less expensive way to compute P3(x). Newton's divided difference formula (Section 4.1.6) avoids this problem, but for it we need some preliminaries, and we start with a discrete version of the derivative of a function f(x).

Definition (First-order divided difference)

Let x0 ≠ x1; we define the first-order divided difference of f(x) by

f[x0, x1] = (f(x1) − f(x0))/(x1 − x0)   (5.8)

If f(x) is differentiable on an interval containing x0 and x1, then the mean value theorem gives

f[x0, x1] = f′(c),   for some c between x0 and x1.

Also, if x0 and x1 are close together, then

f[x0, x1] ≈ f′((x0 + x1)/2),

usually a very good approximation.


Example

Let f(x) = cos(x), x0 = 0.2, x1 = 0.3.

Then

f[x0, x1] = (cos(0.3) − cos(0.2))/(0.3 − 0.2) ≈ −0.2473009   (5.9)

while

f′((x0 + x1)/2) = −sin(0.25) ≈ −0.2474040,

so f[x0, x1] is a very good approximation of this derivative.
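The numbers above are easy to reproduce (a sketch, not from the notes):

x0 = 0.2;  x1 = 0.3;
(cos(x1) - cos(x0))/(x1 - x0)    % approx. -0.2473009
-sin((x0 + x1)/2)                % approx. -0.2474040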

Higher-order divided differences are defined recursively.

Let x0, x1, x2 ∈ R distinct. The second-order divided difference

f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0)   (5.10)

Let x0, x1, x2, x3 ∈ R distinct. The third-order divided difference

f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2])/(x3 − x0)   (5.11)

In general, let x0, x1, . . . , xn ∈ R, n + 1 distinct numbers. The divided difference of order n

f[x0, . . . , xn] = (f[x1, . . . , xn] − f[x0, . . . , xn−1])/(xn − x0)   (5.12)

also called the Newton divided difference.
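The recursion (5.12) translates directly into code; the helper below is a sketch (the name divdiff_rec is ours) that computes a single divided difference, whereas the divdif program given later computes all orders at once:

function d = divdiff_rec(x, y)
% f[x(1),...,x(end)] from the recursion (5.12); y contains f(x(i)).
if length(x) == 1
  d = y(1);
else
  d = (divdiff_rec(x(2:end), y(2:end)) - divdiff_rec(x(1:end-1), y(1:end-1))) ...
      / (x(end) - x(1));
end
end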

Theorem

Let n ≥ 1, f ∈ C^n[α, β], and let x0, x1, . . . , xn be n + 1 distinct numbers in [α, β]. Then

f[x0, x1, . . . , xn] = (1/n!) f^(n)(c)   (5.13)

for some unknown point c between the minimum and the maximum of x0, . . . , xn.

Example

Let f(x) = cos(x), x0 = 0.2, x1 = 0.3, x2 = 0.4.

Then f[x0, x1] is given by (5.9), and

f[x1, x2] = (cos(0.4) − cos(0.3))/(0.4 − 0.3) ≈ −0.3427550,

hence from (5.10)

f[x0, x1, x2] ≈ (−0.3427550 − (−0.2473009))/(0.4 − 0.2) = −0.4772705   (5.14)

For n = 2, (5.13) becomes

f[x0, x1, x2] = (1/2) f″(c) = −(1/2) cos(c) ≈ −(1/2) cos(0.3) ≈ −0.4776682,

which is nearly equal to the result in (5.14).
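A quick check of (5.14) (a sketch):

x = [0.2 0.3 0.4];  y = cos(x);
f01 = (y(2) - y(1))/(x(2) - x(1));
f12 = (y(3) - y(2))/(x(3) - x(2));
(f12 - f01)/(x(3) - x(1))        % approx. -0.4772705
-cos(0.3)/2                      % approx. -0.4776682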

4.1.5 Properties of divided differences

The divided differences (5.12) have special properties that help simplify work with them. (1) Let (i0, i1, . . . , in) be a permutation (rearrangement) of the integers (0, 1, . . . , n). It can be shown that

f[xi0 , xi1 , . . . , xin ] = f[x0, x1, . . . , xn] (5.15)

The original definition (5.12) seems to imply that the order of x0, x1, . . . , xn is important, but (5.15) asserts that this is not true. For n = 1,

f[x0, x1] = (f(x0) − f(x1))/(x0 − x1) = (f(x1) − f(x0))/(x1 − x0) = f[x1, x0].

For n = 2 we can expand (5.10) to get

f[x0, x1, x2] = f(x0)/((x0 − x1)(x0 − x2)) + f(x1)/((x1 − x0)(x1 − x2)) + f(x2)/((x2 − x0)(x2 − x1)).

(2) The definitions (5.8), (5.10), (5.11) and (5.12) extend to the case where some or all of the xi coincide, provided that f(x) is sufficiently differentiable.


For example, define

f[x0, x0] = lim_{x1→x0} f[x0, x1] = lim_{x1→x0} (f(x1) − f(x0))/(x1 − x0) = f′(x0)

For an arbitrary n ≥ 1, letting all xi → x0 leads to the definition

f[x0, . . . , x0] = (1/n!) f^(n)(x0)   (5.16)

For the cases where only some of the nodes coincide, (5.15) and (5.16) let us extend the definition of the divided difference. For example,

f[x0, x1, x0] = f[x0, x0, x1] = (f[x0, x1] − f[x0, x0])/(x1 − x0) = (f[x0, x1] − f′(x0))/(x1 − x0)

MATLAB program: evaluating divided differences

Given a set of values f(x0), . . . , f(xn) we need to calculate the set of divided differences

f[x0, x1], f[x0, x1, x2], . . . , f[x0, x1, . . . , xn]

We can use the MATLAB function divdif below with the call

divdif_y = divdif(x_nodes, y_values)

Note that MATLAB does not allow zero subscripts, hence x_nodes and y_values have to be defined as vectors containing n + 1 components:

x_nodes = [x0, x1, . . . , xn],   x_nodes(i) = xi−1, i = 1, . . . , n + 1

y_values = [f(x0), f(x1), . . . , f(xn)],   y_values(i) = f(xi−1), i = 1, . . . , n + 1

function divdif_y = divdif(x_nodes,y_values)
%
% This is a function
%    divdif_y = divdif(x_nodes,y_values)
% It calculates the divided differences of the function
% values given in the vector y_values, which are the values of
% some function f(x) at the nodes given in x_nodes. On exit,
%    divdif_y(i) = f[x_1,...,x_i], i=1,...,m
% with m the length of x_nodes. The input values x_nodes and
% y_values are not changed by this program.
%
divdif_y = y_values;
m = length(x_nodes);
for i=2:m
  for j=m:-1:i
    divdif_y(j) = (divdif_y(j)-divdif_y(j-1)) ...
                  /(x_nodes(j)-x_nodes(j-i+1));
  end
end
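A sample call (a sketch, using the nodes of the table below):

x_nodes  = [0.0 0.1 0.4 0.6 0.8 1.0 1.2];
y_values = cos(x_nodes);
divdif_y = divdif(x_nodes, y_values)   % reproduces the column D_i below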


This program is illustrated in this table.

i    xi    cos(xi)     Di
0    0.0   1.000000    0.1000000E+1
1    0.1   0.980067   -0.9966711E-1
2    0.4   0.921061   -0.4884020E+0
3    0.6   0.825336    0.4900763E-1
4    0.8   0.696707    0.3812246E-1
5    1.0   0.540302   -0.3962047E-2
6    1.2   0.362358   -0.1134890E-2

Table: Values and divided differences for cos(x)

4.1.6 Newton's Divided Differences: Newton interpolation formula

Interpolation of f at x0, x1, . . . , xn. Idea: use Pn−1(x) in the definition of Pn(x):

Pn(x) = Pn−1(x) + C(x),   C(x) ∈ Pn a correction term

⇒ C(xi) = 0,   i = 0, . . . , n − 1

⇒ C(x) = a(x − x0)(x − x1) · · · (x − xn−1),   Pn(x) = a x^n + · · ·


Definition: The divided difference of f(x) at the points x0, . . . , xn, denoted by f[x0, x1, . . . , xn], is defined to be the coefficient of x^n in Pn(x; f).

C(x) = f[x0, x1, . . . , xn](x − x0) ... (x − xn−1)

pn(x; f) = pn−1(x; f)+f[x0, x1, . . . , xn](x−x0) ... (x−xn−1) (5.17)

p1(x; f) = f(x0) + f[x0, x1](x − x0) (5.18)

p2(x; f) = f(x0) + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1)   (5.19)
  ⋮

pn(x; f) = f(x0) + f[x0, x1](x − x0) + · · · + f[x0, . . . , xn](x − x0) · · · (x − xn−1)   (5.20)

⇒ the Newton interpolation formula.


For (5.18), consider p1(x0) and p1(x1). Easily p1(x0) = f(x0), and

p1(x1) = f(x0) + (x1 − x0) · (f(x1) − f(x0))/(x1 − x0) = f(x0) + [f(x1) − f(x0)] = f(x1).

So: deg(p1) ≤ 1 and satisfies the interpolation conditions. Then the uniqueness of polynomial interpolation ⇒ (5.18) is the linear interpolation polynomial to f(x) at x0, x1. For (5.19), note that

p2(x) = p1(x) + (x − x0)(x − x1)f[x0, x1, x2]

It satisfies deg(p2) ≤ 2 and

p2(xi) = p1(xi) + 0 = f(xi), i = 0, 1

p2(x2) = f(x0) + (x2 − x0)f[x0, x1] + (x2 − x0)(x2 − x1)f[x0, x1, x2]

       = f(x0) + (x2 − x0)f[x0, x1] + (x2 − x1){f[x1, x2] − f[x0, x1]}

       = f(x0) + (x1 − x0)f[x0, x1] + (x2 − x1)f[x1, x2]

       = f(x0) + {f(x1) − f(x0)} + {f(x2) − f(x1)} = f(x2)

By the uniqueness of polynomial interpolation, this is the quadratic interpolating polynomial to f(x) at {x0, x1, x2}.

Example

Find p(x) ∈ P2 such that p(−1) = 0, p(0) = 1, p(1) = 4.

p(x) = p(x0) + p[x0, x1](x − x0) + p[x0, x1, x2](x − x0)(x − x1)

xi    p(xi)   p[xi, xi+1]   p[xi, xi+1, xi+2]
−1    0       1             1
 0    1       3
 1    4

p(x) = 0 + 1 · (x + 1) + 1 · (x + 1)(x − 0) = x^2 + 2x + 1
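The result is easy to verify numerically (a sketch):

x = [-1 0 1];  D = [0 1 1];   % D = [p(x0), p[x0,x1], p[x0,x1,x2]]
p = @(t) D(1) + D(2)*(t - x(1)) + D(3)*(t - x(1)).*(t - x(2));
p([-1 0 1])                    % returns 0 1 4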

Example

Let f(x) = cos(x). The previous table contains a set of nodes xi, the values f(xi) and the divided differences computed with divdif Di = f[x0, . . . , xi] i ≥ 0

n      pn(0.1)     pn(0.3)     pn(0.5)
1      0.9900333   0.9700999   0.9501664
2      0.9949173   0.9554478   0.8769061
3      0.9950643   0.9553008   0.8776413
4      0.9950071   0.9553351   0.8775841
5      0.9950030   0.9553369   0.8775823
6      0.9950041   0.9553365   0.8775825
True   0.9950042   0.9553365   0.8775826

Table: Interpolation to cos(x) using (5.20)

This table contains the values of pn(x) for various values of n, computed with interp, and the true values of f(x).


In general, the interpolation node points xi need not be evenly spaced, nor arranged in any particular order, to use the divided difference interpolation formula (5.20). To evaluate (5.20) efficiently we can use a nested multiplication algorithm:

Pn(x) = D0 + (x − x0)D1 + (x − x0)(x − x1)D2

+ ... + (x − x0) ··· (x − xn−1)Dn

with D0 = f(x0),Di = f[x0, . . . , xi] for i ≥ 1

Pn(x) = D0 + (x − x0)[D1 + (x − x1)[D2 + · · · + (x − xn−2)[Dn−1 + (x − xn−1)Dn] · · · ]]   (5.21)

For example,

P3(x) = D0 + (x − x0)[D1 + (x − x1)[D2 + (x − x2)D3]]

(5.21) uses only n multiplications to evaluate Pn(x) and is more convenient for a fixed degree n. To compute a sequence of interpolation polynomials of increasing degree, it is more efficient to use the original form (5.20).

MATLAB program: evaluating the Newton divided difference form for polynomial interpolation

function p_eval = interp(x_nodes,divdif_y,x_eval)
%
% This is a function
%    p_eval = interp(x_nodes,divdif_y,x_eval)
% It calculates the Newton divided difference form of
% the interpolation polynomial of degree m-1, where the
% nodes are given in x_nodes, m is the length of x_nodes,
% and the divided differences are given in divdif_y. The
% points at which the interpolation is to be carried out
% are given in x_eval; and on exit, p_eval contains the
% corresponding values of the interpolation polynomial.
%
m = length(x_nodes);
p_eval = divdif_y(m)*ones(size(x_eval));
for i=m-1:-1:1
  p_eval = divdif_y(i) + (x_eval - x_nodes(i)).*p_eval;
end
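A sample call (a sketch) combining divdif and interp to reproduce the last row of computed values in the cos(x) table above:

x_nodes  = [0.0 0.1 0.4 0.6 0.8 1.0 1.2];
divdif_y = divdif(x_nodes, cos(x_nodes));
interp(x_nodes, divdif_y, [0.1 0.3 0.5])   % approx. 0.9950041  0.9553365  0.8775825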

4.2 Error in polynomial interpolation

Formula for error E(t) = f(t) − pn(t; f)

Pn(x) = Σ_{j=0}^{n} f(xj) Lj(x)

Theorem

Let n ≥ 0, f ∈ C^(n+1)[a, b], and let x0, x1, . . . , xn be distinct points in [a, b]. Then

f(x) − Pn(x) = (x − x0) · · · (x − xn) f[x0, x1, . . . , xn, x]   (5.22)
             = ((x − x0)(x − x1) · · · (x − xn)/(n + 1)!) f^(n+1)(cx)   (5.23)

for x ∈ [a, b], cx unknown between the minimum and maximum of x0, x1, . . . , xn.

Proof of Theorem

Fix t ∈ R. Consider

pn+1(x; f) − interpolates f at x0, . . . , xn, t

pn(x; f) − interpolates f at x0, . . . , xn

From Newton

pn+1(x; f) = pn(x; f) + f[x0, . . . , xn, t](x − x0) ... (x − xn)

Let x = t.

Since pn+1 interpolates f at t,

f(t) = pn+1(t; f) = pn(t; f) + f[x0, . . . , xn, t](t − x0) · · · (t − xn),

hence

E(t) = f(t) − pn(t; f) = f[x0, x1, . . . , xn, t](t − x0) · · · (t − xn).  □

Examples

f(x) = x^n: by (5.13), f[x0, . . . , xn] = (1/n!) f^(n)(c) = n!/n! = 1; equivalently, pn(x; f) = 1 · x^n, so its leading coefficient gives f[x0, . . . , xn] = 1.

f(x) ∈ Pn−1: f[x0, x1, . . . , xn] = 0.

f(x) = x^(n+1): f[x0, . . . , xn] = x0 + x1 + . . . + xn. Indeed,

R(x) = x^(n+1) − pn(x; f) ∈ Pn+1,   R(xi) = 0, i = 0, . . . , n,

so

R(x) = (x − x0) · · · (x − xn) = x^(n+1) − (x0 + . . . + xn) x^n + . . .

Comparing the coefficients of x^n (the coefficient of x^n in pn(x; f) is f[x0, . . . , xn]) gives f[x0, . . . , xn] = x0 + x1 + . . . + xn.

If f ∈ Pm, then

f[x0, . . . , xn, x] = a polynomial of degree m − n − 1 in x, if n ≤ m − 1;   = 0, if n > m − 1.


Example

Take f(x) = e^x, x ∈ [0, 1], and consider the error in linear interpolation to f(x) using x0, x1 satisfying 0 ≤ x0 ≤ x1 ≤ 1.

From (5.23)

e^x − P1(x) = ((x − x0)(x − x1)/2) e^cx

for some cx between the maximum and minimum of x0, x1 and x. Assuming x0 ≤ x ≤ x1, the error is negative and behaves approximately like a quadratic polynomial:

e^x − P1(x) = −((x1 − x)(x − x0)/2) e^cx

Since x0 ≤ cx ≤ x1,

((x1 − x)(x − x0)/2) e^x0 ≤ |e^x − P1(x)| ≤ ((x1 − x)(x − x0)/2) e^x1


For a bound independent of x

max_{x0 ≤ x ≤ x1} ((x1 − x)(x − x0)/2) = h^2/8,   h = x1 − x0,

and e^x1 ≤ e on [0, 1], so

|e^x − P1(x)| ≤ (h^2/8) e,   0 ≤ x0 ≤ x ≤ x1 ≤ 1,

independent of x, x0, x1. Recall that we estimated e^0.826 ≈ 2.2841914 using e^0.82 and e^0.83, i.e., h = 0.01. Then

|e^x − P1(x)| ≤ (0.01^2/8) · 2.72 ≈ 0.0000340.

The actual error being −0.0000276, it satisfies this bound.
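Numerically (a sketch):

h = 0.01;
bound  = h^2/8*exp(1)              % approx. 3.40e-5
actual = exp(0.826) - 2.2841914    % approx. -2.76e-5, within the bound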


Example

Again let f(x) = e^x on [0, 1], but consider quadratic interpolation.

e^x − P2(x) = ((x − x0)(x − x1)(x − x2)/6) e^cx

for some cx between the min and max of x0, x1, x2 and x. Assuming evenly spaced points, h = x1 − x0 = x2 − x1, and 0 ≤ x0 < x < x2 ≤ 1, we have as before

|e^x − P2(x)| ≤ (|(x − x0)(x − x1)(x − x2)|/6) e

while

max_{x0 ≤ x ≤ x2} |(x − x0)(x − x1)(x − x2)|/6 = h^3/(9√3)   (5.24)

hence

|e^x − P2(x)| ≤ (h^3/(9√3)) e ≈ 0.174 h^3

For h = 0.01, 0 ≤ x ≤ 1,

|e^x − P2(x)| ≤ 1.74 × 10^−7

Let

w2(x) = (x + h) x (x − h)/6 = (x^3 − x h^2)/6,

a translation along the x-axis of the polynomial in (5.24). Then x = ±h/√3 satisfies

0 = w2′(x) = (3x^2 − h^2)/6,   which gives   |w2(±h/√3)| = h^3/(9√3)

Figure: y = w2(x)

4.2.2 Behaviour of the error

When we consider the error formula (5.23) or (5.22), the polynomial

ψn(x) = (x − x0) ··· (x − xn)

is crucial in determining the behaviour of the error. Let us assume that x0, . . . , xn are evenly spaced and x0 ≤ x ≤ xn.

Figure: y = ψ6(x)

Note that the interpolation error

is relatively larger in [x0, x1] and [x5, x6], and is likely to be smaller near the middle of the node points. ⇒ In practical interpolation problems, high-degree polynomial interpolation with evenly spaced nodes is seldom used. But when the set of nodes is suitably chosen, high-degree polynomials can be very useful in obtaining polynomial approximations to functions.

Example

Let f(x) = cos(x), h = 0.2, n = 8, and interpolate at x = 0.9.

Case (i): x0 = 0.8, x8 = 2.4, so x = 0.9 ∈ [x0, x1]. By direct calculation of P8(0.9),

cos(0.9) − P8(0.9) ≈ −5.51 × 10^−9

Case (ii): x0 = 0.2, x8 = 1.8, so x = 0.9 ∈ [x3, x4], where x4 is the midpoint. Then

cos(0.9) − P8(0.9) ≈ 2.26 × 10^−10,

a factor of 24 smaller than in the first case.

Example

Let f(x) = 1/(1 + x^2), x ∈ [−5, 5], n > 0 an even integer, h = 10/n,

xj = −5 + jh, j = 0, 1, 2, . . . , n

It can be shown that for many points x in [−5, 5], the sequence of {Pn(x)} does not converge to f(x) as n → ∞.

Figure: The interpolation to 1/(1 + x^2)

4.3 Interpolation using spline functions

x    0     1     2     2.5   3     3.5    4
y    2.5   0.5   0.5   1.5   1.5   1.125  0

The simplest method of interpolating data in a table: connecting the node points by straight lines. The resulting curve y = ℓ(x) is not very smooth.

Figure: y = ℓ(x), piecewise linear interpolation

We want to construct a smooth curve that interpolates the given data points and follows the shape of y = ℓ(x).


Next choice: use polynomial interpolation. With seven data points, we consider the interpolating polynomial P6(x) of degree 6.

Figure: y = P6(x): polynomial interpolation Smooth, but quite different from y = `(x)!!!


A third choice: connect the data in the table using a succession of quadratic interpolating polynomials, one on each of [0, 2], [2, 3], [3, 4].

Figure: y = q(x): piecewise quadratic interpolation

It is smoother than y = ℓ(x) and follows it more closely than y = P6(x), but at x = 2 and x = 3 the graph has corners, i.e., q′(x) is discontinuous.

4.3.1 Spline interpolation: Cubic spline interpolation

Suppose n data points (xi, yi), i = 1, . . . , n are given and

a = x1 < . . . < xn = b

We seek a function S(x) defined on [a, b] that interpolates the data

S(xi) = yi, i = 1, . . . , n

Find S(x) ∈ C^2[a, b], a natural cubic spline, such that

S(x) is a polynomial of degree ≤ 3 on each subinterval [xj−1, xj], j = 2, 3, . . . , n;

S(xi) = yi, i = 1, . . . , n, and S″(a) = S″(b) = 0.

On [xi−1, xi]: 4 degrees of freedom (cubic spline) ⇒ 4(n − 1) DOF.


S(xi) = yi, i = 1, . . . , n:   n constraints

S(xi − 0) = S(xi + 0)
S′(xi − 0) = S′(xi + 0)     for i = 2, . . . , n − 1 (continuity):   3n − 6 constraints
S″(xi − 0) = S″(xi + 0)

In total n + (3n − 6) = 4n − 6 constraints.

Two more constraints needed to be added, Boundary Conditions:

S″ = 0 at a = x1 and at xn = b,

for a natural cubic spline. (Linear system, with symmetric, positive definite, diagonally dominant matrix.)

4.3.2 Construction of the cubic spline

Introduce the variables M1, . . . , Mn with

Mi ≡ S″(xi), i = 1, 2, . . . , n.

Since S(x) is cubic on each [xj−1, xj], S″(x) is linear there, hence determined by its values at two points:

S″(xj−1) = Mj−1,   S″(xj) = Mj

⇒ S″(x) = ((xj − x)Mj−1 + (x − xj−1)Mj)/(xj − xj−1),   xj−1 ≤ x ≤ xj   (5.25)

Form the second antiderivative of S″(x) on [xj−1, xj] and apply the interpolating conditions

S(xj−1) = yj−1,   S(xj) = yj;

we get

S(x) = ((xj − x)^3 Mj−1 + (x − xj−1)^3 Mj)/(6(xj − xj−1)) + ((xj − x)yj−1 + (x − xj−1)yj)/(xj − xj−1)
       − (1/6)(xj − xj−1)[(xj − x)Mj−1 + (x − xj−1)Mj]   (5.26)

for x ∈ [xj−1, xj], j = 2, . . . , n.

Formula (5.26) implies that S(x) is continuous over all of [a, b] and satisfies the interpolating conditions S(xi) = yi. Similarly, formula (5.25) for S″(x) implies that S″ is continuous on [a, b].

To ensure the continuity of S′(x) over [a, b], we require that the formulas for S′(x) obtained from (5.26) on [xj−1, xj] and on [xj, xj+1] give the same value at their common point xj, j = 2, 3, . . . , n − 1, leading to the tridiagonal linear system

((xj − xj−1)/6) Mj−1 + ((xj+1 − xj−1)/3) Mj + ((xj+1 − xj)/6) Mj+1
    = (yj+1 − yj)/(xj+1 − xj) − (yj − yj−1)/(xj − xj−1),   j = 2, 3, . . . , n − 1   (5.27)

These n − 2 equations together with the assumption S″(a) = S″(b) = 0:

M1 = Mn = 0 (5.28)

leads to the values M1,...,Mn, hence the function S(x).

Example

Calculate the natural cubic spline interpolating the data

(1, 1), (2, 1/2), (3, 1/3), (4, 1/4)

Here n = 4 and all xj − xj−1 = 1. The system (5.27) becomes

(1/6)M1 + (2/3)M2 + (1/6)M3 = 1/3
(1/6)M2 + (2/3)M3 + (1/6)M4 = 1/12

and with (5.28) this yields

M2 = 1/2,   M3 = 0,

which by (5.26) gives

S(x) = (1/12)x^3 − (1/4)x^2 − (1/3)x + 3/2,      1 ≤ x ≤ 2
S(x) = −(1/12)x^3 + (3/4)x^2 − (7/3)x + 17/6,    2 ≤ x ≤ 3
S(x) = −(1/12)x + 7/12,                          3 ≤ x ≤ 4

Are S′(x) and S″(x) continuous?
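The system above can be set up and solved directly (a sketch, not from the notes):

x = [1 2 3 4];  y = [1 1/2 1/3 1/4];
A = [2/3 1/6; 1/6 2/3];                      % coefficients of M2, M3 in (5.27)
b = [(y(3)-y(2))/(x(3)-x(2)) - (y(2)-y(1))/(x(2)-x(1));
     (y(4)-y(3))/(x(4)-x(3)) - (y(3)-y(2))/(x(3)-x(2))];
M = [0; A\b; 0]                              % M = [0; 1/2; 0; 0], as above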

Example

Calculate the natural cubic spline interpolating the data

x    0     1     2     2.5   3     3.5    4
y    2.5   0.5   0.5   1.5   1.5   1.125  0

Here n = 7 and the system (5.27) has 5 equations.

Figure: Natural cubic spline interpolation y = S(x) Compared to the graph of linear and quadratic interpolation y = `(x) and y = q(x), the cubic spline S(x) no longer contains corners.

4.3.3 Other interpolation spline functions

So far we only interpolated data points, wanting a smooth curve. When we seek a spline to interpolate a known function, we are also interested in the accuracy. Let f(x) be given on [a, b], to be interpolated at evenly spaced values of x. For n > 1, let

h = (b − a)/(n − 1),   xj = a + (j − 1)h,   j = 1, 2, . . . , n,

and Sn(x) be the natural cubic spline interpolating f(x) at x1, . . . , xn. It can be shown that

max_{a ≤ x ≤ b} |f(x) − Sn(x)| ≤ c h^2   (5.29)

where c depends on f″(a), f″(b), and max_{a ≤ x ≤ b} |f^(4)(x)|. The reason Sn(x) does not converge more rapidly (i.e., have an error bound with a higher power of h) is that f″(a) ≠ 0 ≠ f″(b), while by definition Sn″(a) = Sn″(b) = 0. For functions f(x) with f″(a) = f″(b) = 0, the right-hand side of (5.29) can be replaced with c h^4.


To improve on Sn(x), we look for other interpolating functions S(x) that interpolate f(x) on a = x1 < x2 < . . . < xn = b. Recall the definition of the natural cubic spline:

1. S(x) is cubic on each subinterval [xj−1, xj];
2. S(x), S′(x) and S″(x) are continuous on [a, b];
3. S″(x1) = S″(xn) = 0.

We say that S(x) is a cubic spline on [a, b] if

1. S(x) is cubic on each subinterval [xj−1, xj];
2. S(x), S′(x) and S″(x) are continuous on [a, b].

With the interpolating conditions

S(xi) = yi, i = 1, . . . , n

the representation formula (5.26) and the tridiagonal system (5.27) are still valid.


This system has n − 2 equations and n unknowns: M1, . . . , Mn. By replacing the end conditions (5.28), S″(x1) = S″(xn) = 0, we can obtain other interpolating cubic splines.

If the data (xi, yi) is obtained by evaluating a function f(x)

yi = f(xi), i = 1, . . . , n,

then we choose endpoint (boundary) conditions for S(x) that yield a better approximation to f(x). We require

S′(x1) = f′(x1),   S′(xn) = f′(xn)

or

S″(x1) = f″(x1),   S″(xn) = f″(xn)

When combined with (5.26)–(5.27), either of these conditions leads to a unique interpolating spline S(x), depending on which condition is used. In both cases, the RHS of (5.29) can be replaced by c h^4, where c depends on max_{x∈[a,b]} |f^(4)(x)|.


If the derivatives of f(x) are not known, then extra interpolating conditions can be used to ensure that the error bound of (5.29) is proportional to h^4. In particular, suppose that

x1 < z1 < x2, xn−1 < z2 < xn

and f(z1), f(z2) are known. Then use the formula for S(x) in (5.26) and

S(z1) = f(z1),   S(z2) = f(z2)   (5.30)

This adds two new equations to the system (5.27), one involving M1 and M2, and another involving Mn−1 and Mn. This form is preferable to the interpolating natural cubic spline, and is almost equally easy to produce. It is the default form of spline interpolation implemented in MATLAB. A spline formed in this way is said to satisfy the not-a-knot interpolation boundary conditions.


Interpolating cubic spline functions are a popular way to represent data analytically because they

1. are relatively smooth: C^2;
2. do not have the rapid oscillation that sometimes occurs with high-degree polynomial interpolation;
3. are relatively easy to work with on a computer.

They do not replace polynomials, but are a very useful extension of them.

4.3.4 The MATLAB program spline

The standard MATLAB package contains the function spline. The standard calling sequence is

y = spline(x_nodes, y_nodes, x)

which produces the cubic spline function S(x) whose graph passes through the points {(ξi, ηi): i = 1, . . . , n} with

(ξi, ηi) = (x_nodes(i), y_nodes(i))

and n the length of x_nodes (and y_nodes). The not-a-knot interpolation conditions of (5.30) are used: the point (ξ2, η2) is the point (z1, f(z1)) of (5.30), and (ξn−1, ηn−1) is the point (z2, f(z2)).
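For example (a sketch), the spline through the data table used earlier in this section can be produced and plotted with:

x_nodes = [0 1 2 2.5 3 3.5 4];
y_nodes = [2.5 0.5 0.5 1.5 1.5 1.125 0];
xx = linspace(0, 4, 200);
yy = spline(x_nodes, y_nodes, xx);
plot(x_nodes, y_nodes, 'o', xx, yy, '-')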


Example

Approximate the function f(x) = e^x on the interval [a, b] = [0, 1]. For n > 0, define h = 1/n and interpolation nodes

x1 = 0, x2 = h, x3 = 2h, . . . , xn+1 = nh = 1

Using spline, we produce the cubic spline interpolant Sn,1(x) to f(x). With the not-a-knot interpolation conditions, the nodes x2 and xn are the points z1 and z2 in (5.30). For a general smooth function f(x), it turns out that the magnitude of the error f(x) − Sn,1(x) is largest around the endpoints of the interval of approximation.

Example

Two interpolating nodes are inserted, the midpoints of the subintervals [0, h] and [1 − h, 1]:

x1 = 0, x2 = h/2, x3 = h, x4 = 2h, . . . , xn+1 = (n − 1)h, xn+2 = 1 − h/2, xn+3 = 1

Using spline results in a cubic spline function Sn,2(x); with the not-a-knot interpolation conditions, the nodes x2 and xn+2 are the points z1 and z2 of (5.30). Generally, Sn,2(x) is a more accurate approximation than Sn,1(x). The cubic polynomials produced for Sn,2(x) by spline on the intervals [x1, x2] and [x2, x3] are the same, thus we can use the polynomial for [0, h/2] for the entire interval [0, h]. Similarly for [1 − h, 1].
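A sketch of how such an error could be estimated numerically (the sampling grid and variable names are ours):

n = 10;  h = 1/n;
x_nodes = [0, h/2, h:h:1-h, 1-h/2, 1];     % nodes with the two extra midpoints
S = spline(x_nodes, exp(x_nodes));         % not-a-knot spline in pp form
t = linspace(0, 1, 1001);
E = max(abs(exp(t) - ppval(S, t)))         % should be of the size of E_n^(2) below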

n     En^(1)      Ratio   En^(2)      Ratio
5     1.01E-4             1.11E-5
10    6.92E-6     14.6    7.88E-7     14.1
20    4.56E-7     15.2    5.26E-8     15.0
40    2.92E-8     15.6    3.39E-9     15.5

Table: Cubic spline approximation to f(x) = e^x
