Polynomial Interpolation: Summary
• Statement of problem. We are given a table of n + 1 data points (x_i, y_i):

      x | x_0   x_1   x_2   ...   x_n
      y | y_0   y_1   y_2   ...   y_n

  and we seek a polynomial p of lowest degree such that p(x_i) = y_i for 0 ≤ i ≤ n. Such a polynomial is said to interpolate the data.

• Finding the interpolating polynomial using the Vandermonde matrix. Here p_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n, where the coefficients are found by imposing p(x_i) = y_i for each i = 0, ..., n. The resulting system is

  \[
  \begin{pmatrix}
    1 & x_0 & x_0^2 & \cdots & x_0^n \\
    1 & x_1 & x_1^2 & \cdots & x_1^n \\
    1 & x_2 & x_2^2 & \cdots & x_2^n \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    1 & x_n & x_n^2 & \cdots & x_n^n
  \end{pmatrix}
  \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}
  =
  \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
  \]

  The matrix above is called the Vandermonde matrix. If it were singular, some nonzero set of coefficients would give a polynomial of degree ≤ n with n + 1 zeros, which is impossible; therefore the system can be solved for the unknown coefficients of the polynomial.

• The Lagrange interpolation polynomial.

  \[
  p_n(x) = y_0 \ell_0(x) + y_1 \ell_1(x) + \cdots + y_n \ell_n(x),
  \qquad
  \ell_i(x) = \prod_{\substack{j = 0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}.
  \]

• The Newton interpolation polynomial (divided differences).

  \[
  p_n(x) = c_0 + c_1 (x - x_0) + c_2 (x - x_0)(x - x_1) + \cdots + c_n (x - x_0)(x - x_1) \cdots (x - x_{n-1}),
  \]

  where the coefficients are found using divided differences. It is clear that c_0 = y_0 and c_1 = (y_1 - y_0)/(x_1 - x_0); the remaining coefficients can be found by the recursion below (see the sketch that follows). Let p_0(x) = y_0 = c_0, and for 1 ≤ k ≤ n

  \[
  p_k(x) = p_{k-1}(x) + c_k \prod_{i=0}^{k-1} (x - x_i),
  \qquad
  c_k = \frac{y_k - p_{k-1}(x_k)}{\prod_{i=0}^{k-1} (x_k - x_i)}.
  \]
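To make the two constructions concrete, here is a minimal Python sketch (not part of the original summary) that builds the interpolant in the Lagrange form and in the Newton form via the recursion above. The function names (lagrange_eval, newton_coeffs, newton_eval) and the sample data are illustrative assumptions, not taken from the summary.

```python
# Minimal sketch: Lagrange form vs. Newton (divided-difference) form.

def lagrange_eval(xs, ys, x):
    """Evaluate p_n(x) = sum_i y_i * l_i(x), with l_i the Lagrange basis."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def newton_coeffs(xs, ys):
    """Coefficients c_k from the recursion c_k = (y_k - p_{k-1}(x_k)) / prod_{i<k}(x_k - x_i)."""
    cs = [ys[0]]
    for k in range(1, len(xs)):
        # Evaluate the Newton form built so far, p_{k-1}, at x_k,
        # accumulating the basis product prod_{i<k}(x_k - x_i) as we go.
        pk, basis = 0.0, 1.0
        for i in range(k):
            pk += cs[i] * basis
            basis *= (xs[k] - xs[i])
        cs.append((ys[k] - pk) / basis)
    return cs

def newton_eval(xs, cs, x):
    """Evaluate c_0 + c_1(x-x_0) + ... by nested (Horner-like) multiplication."""
    result = cs[-1]
    for k in range(len(cs) - 2, -1, -1):
        result = result * (x - xs[k]) + cs[k]
    return result

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 4.0]    # illustrative nodes
    ys = [1.0, 2.0, 5.0, 17.0]   # illustrative values
    cs = newton_coeffs(xs, ys)
    for x in (0.5, 3.0):
        print(x, lagrange_eval(xs, ys, x), newton_eval(xs, cs, x))
    # The two forms agree (up to rounding) at every x, as the uniqueness
    # theorem below guarantees.
```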
• Existence and Uniqueness Theorem. If x_0, x_1, ..., x_n are n + 1 distinct real numbers, then for arbitrary values y_0, y_1, ..., y_n there is a unique polynomial p_n of degree at most n such that p_n(x_i) = y_i for 0 ≤ i ≤ n.

  Proof: Existence has been proven by constructing such polynomials above. To prove uniqueness, suppose p_n and q_n are two polynomials of degree ≤ n which both interpolate the same data. Then p_n - q_n is a polynomial of degree ≤ n that vanishes at the n + 1 data points. But a polynomial of degree ≤ n has at most n zeros unless it is the zero polynomial. Therefore p_n - q_n = 0, and so p_n = q_n. ♦

  Conclusion: All three methods of finding an interpolating polynomial result in the same polynomial; they are just expressed differently.

• Approximation Error Theorem. The error associated with approximating a function f ∈ C^{n+1}[a,b] by the polynomial that interpolates f at n + 1 distinct points in [a,b] is given by

  \[
  f(x) = P_n(x) + E_n(x),
  \qquad
  E_n(x) = \frac{f^{(n+1)}(\eta)}{(n+1)!} (x - x_0)(x - x_1) \cdots (x - x_n),
  \]

  where \eta ∈ (a,b) depends on x.

  Proof: If x = x_i for some i, both sides are zero, so assume x ≠ x_i for i = 0, ..., n. Define w(t) = \prod_{i=0}^{n} (t - x_i) and notice that w(x) ≠ 0. Define \varphi(t) = f(t) - p(t) - \lambda w(t) with \lambda = (f(x) - p(x))/w(x), and notice that \varphi ∈ C^{n+1}[a,b] and vanishes at the n + 2 points x_0, ..., x_n, x. By Rolle's theorem \varphi' vanishes at n + 1 points in (a,b); continuing in this way (the generalized Rolle's theorem), \varphi^{(n+1)} vanishes at (at least) one point, call it \eta. Now \varphi^{(n+1)}(\eta) = f^{(n+1)}(\eta) - \lambda (n+1)! = 0. Recalling the value of \lambda and the form of w(t) gives the desired result. ♦

• Benefits of the various interpolation polynomials

  – The Vandermonde matrix is not a good method in any situation. The system is ill-conditioned, so the coefficients may be computed very inaccurately, and the amount of work is excessive.

  – The Lagrange interpolating polynomial is well suited to reusing the same set of x-values with various sets of y-values: the basis functions \ell_i(x) stay the same, and only the y-value weights change.

  – The Newton interpolating polynomial is usually the best choice. It has the advantage that a new data pair can be incorporated by merely adding one additional term to the previous interpolating polynomial. Under additional assumptions the coefficients also give information about the derivatives of the function being approximated, as well as about the error.

• Chebyshev Polynomials

  \[
  T_0(x) = 1, \qquad T_1(x) = x, \qquad T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x).
  \]

  It can be shown (not easily) that

  \[
  T_n(x) = \cos\bigl(n \cos^{-1}(x)\bigr) \quad \text{for } x \in [-1,1].
  \]

• Minimizing Error by Choosing Chebyshev Nodes. In the Approximation Error Theorem we saw

  \[
  E_n(x) = \frac{f^{(n+1)}(\eta)}{(n+1)!} \prod_{i=0}^{n} (x - x_i).
  \]

  The maximum size of the second factor, the product \prod_{i=0}^{n}(x - x_i), can be minimized by choosing scaled Chebyshev nodes. On [-1,1] these are the roots of T_{n+1}(x), namely

  \[
  x_i = \cos\left(\frac{2i + 1}{2(n + 1)}\,\pi\right) \quad \text{for } 0 \le i \le n.
  \]

  If we are working on a different interval, say [a,b] instead of [-1,1], we apply the transformation

  \[
  t_i = a + \frac{b - a}{2}\,(x_i + 1).
  \]

  If we now choose the t_i as the nodes at which we interpolate our function, the error in the approximation is minimized in this sense; a small numerical check follows below.
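As a quick numerical illustration (not part of the original summary), the sketch below computes Chebyshev nodes, maps them to an interval [a, b], and compares the maximum of |(x - x_0)...(x - x_n)| over a fine sample grid against equally spaced nodes. The function names, the interval, and the grid size are illustrative assumptions.

```python
# Minimal sketch: Chebyshev nodes vs. equally spaced nodes.
import math

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """Roots of T_{n+1}: x_i = cos((2i+1)*pi / (2(n+1))), mapped to [a, b]."""
    xs = [math.cos((2 * i + 1) * math.pi / (2 * (n + 1))) for i in range(n + 1)]
    return [a + (b - a) * (x + 1) / 2 for x in xs]

def max_node_polynomial(nodes, a, b, samples=2001):
    """Maximum of |(x - x_0)(x - x_1)...(x - x_n)| over a fine grid on [a, b]."""
    best = 0.0
    for k in range(samples):
        x = a + (b - a) * k / (samples - 1)
        w = 1.0
        for xi in nodes:
            w *= (x - xi)
        best = max(best, abs(w))
    return best

if __name__ == "__main__":
    a, b, n = 0.0, 2.0, 8
    equal = [a + (b - a) * i / n for i in range(n + 1)]
    cheb = chebyshev_nodes(n, a, b)
    print("equally spaced:", max_node_polynomial(equal, a, b))
    print("Chebyshev     :", max_node_polynomial(cheb, a, b))
    # The Chebyshev nodes give a noticeably smaller maximum, which is why
    # they shrink the interpolation error bound E_n(x).
```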