Interpolation Methods


Chapter 3  Interpolation methods
Peter-Wolfgang Gräber, Systems Analysis in Water Management

Problem: Measured values (the dependent variable) depend on independent variables in one-, two-, three- or four-dimensional measurement space. In general these independent variables are the three space coordinates (depending on the coordinate system, e.g. x_n, y_n, z_n or r_n, α_n, z_n or r_n, α_n, ϑ_n; see Chapter 2, vector analysis) and the time t_n. In this case we have a discontinuous value table. For the one-dimensional case, for example:

    independent variable    dependent variable
    x_0                     y_0 = f(x_0)
    x_1                     y_1 = f(x_1)
    ...                     ...
    x_n                     y_n = f(x_n)

The points x_0, x_1, …, x_n are called supporting points, and the values y_0, y_1, …, y_n are the basic or supporting values. If we are looking for function values whose arguments lie within the range (x_0, x_n), we speak of interpolation. If, in contrast, the function values we are looking for lie outside the range (x_0, x_n), we call this extrapolation. By interpolation or extrapolation we try to find a continuous function w = p(x) which reproduces the original function y_n = f(x_n) as exactly as possible (see Figure 3.1). It is always assumed that the interpolation function matches the original function only at the supporting points. The accuracy in the intervals in between, i.e. how well the two functions match, depends on the number and the distribution of the supporting points. According to the sampling theorem, the quantisation error increases in proportion to the gradient of the function.

Note: No interpolation algorithm can replace an increase in the density of measured values. An interpolation algorithm always yields approximate values.

Figure 3.1: Representation of the discontinuous measured-data acquisition

Example of an application of interpolation: The pollutant concentration C(x) emanating from a refuse dump is measured at the points x_0, x_1, x_2 (see Figure 3.2). The pollutant concentration at the point x_Fl, which poses a hazard by flowing into the river, is to be estimated by interpolation. A conclusion is to be drawn as to whether this value exceeds the limiting value.

Figure 3.2: Representation of an interpolation problem

    x_0     C_0 = f(x_0)
    x_1     C_1 = f(x_1)
    x_Fl    ?
    x_2     C_2 = f(x_2)

For the solution of this problem an interpolation function w = p(x) is sought as a "replacement" for the function C_n = f(x_n). This function should fulfil the following conditions:

    w_i = p(x_i) = C_i    for all i = 0, 1, …, n    (3.1)

i.e.

    w_0 = p(x_0) = C_0
    w_1 = p(x_1) = C_1
    ⋮                                               (3.2)
    w_n = p(x_n) = C_n

It is then assumed that the intermediate values of the function w = p(x) are a good approximation of the intermediate values of the function C_n = f(x_n).

For the determination of the function w = p(x) different interpolation methods can be used. We distinguish between one- and multi-dimensional procedures. The multidimensional methods play an important role in connection with geographical information systems (GIS) and are also often applied in connection with geostatistics. In the following, some methods are introduced in connection with questions of water management:

•  Polynomial interpolation
•  Spline interpolation (piecewise polynomial interpolation)
•  Kriging method

3.1  Polynomial interpolation

In this method p(x) has the form of an algebraic polynomial of order n:

    w = p(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n    (3.3)

The advantage of this method is that the intermediate values can be computed very easily.
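As an illustration of Eq. (3.3), the following minimal Python sketch (using NumPy; the supporting values are hypothetical and not taken from the text) determines the coefficients a_0, …, a_n of a polynomial through four supporting points and then evaluates it once inside the range (x_0, x_n) (interpolation) and once outside it (extrapolation):

```python
# Polynomial interpolation in the sense of Eq. (3.3):
# w = p(x) = a_0 + a_1*x + ... + a_n*x^n through the given supporting points.
# The supporting values below are purely illustrative.
import numpy as np

x_nodes = np.array([0.0, 1.0, 2.0, 3.0])   # supporting points x_0 ... x_n
y_nodes = np.array([0.0, 0.8, 0.9, 0.1])   # supporting values y_0 ... y_n (hypothetical)

n = len(x_nodes) - 1                        # n + 1 supporting points -> order n
coeffs = np.polyfit(x_nodes, y_nodes, deg=n)

x_in = 1.5     # lies within (x_0, x_n): interpolation
x_out = 4.0    # lies outside (x_0, x_n): extrapolation

print("p(%.1f) =" % x_in, np.polyval(coeffs, x_in))
print("p(%.1f) =" % x_out, np.polyval(coeffs, x_out))
```

The value outside the range is shown only for contrast; as noted above, both are approximations, and extrapolated values in particular should be treated with caution.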
Based on a value table with n+1 pairs of variates, at most an n-th order polynomial can be determined exactly:

    y := p(x) = \sum_{k=0}^{n} a_k \cdot x^k    (3.4)

with the property

    y(x_i) = p(x_i) = \sum_{k=0}^{n} a_k \cdot x_i^k = w_i    (3.5)

This polynomial is the interpolation polynomial for the given system of interpolation supporting points. Normally we look for polynomials of lower order (n ≤ 3), which fit the pairs of values together at least pairwise:

    p(x) = a_0 + a_1 x                          linear interpolation
    p(x) = a_0 + a_1 x + a_2 x^2                quadratic interpolation
    p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3      cubic interpolation

The application of polynomials of higher order makes the calculations more difficult and leads to very large fluctuations.

From the different representations of the polynomial follow different interpolation methods for determining the coefficients a_i of an n-th order polynomial. All of these methods lead to the same polynomial. These interpolation methods are:

•  analytical power function
•  Lagrange
•  Aitken
•  Newton

3.1.1  Analytical power function

This method assumes that at each supporting point the polynomial w = p(x) fulfils the condition y(x_i) = p(x_i). In this case we obtain for the n+1 supporting points a system of n+1 equations with n+1 unknown coefficients a_0, …, a_n:

    a_0 + a_1 x_0 + a_2 x_0^2 + \dots + a_n x_0^n = y_0
    a_0 + a_1 x_1 + a_2 x_1^2 + \dots + a_n x_1^n = y_1    (3.6)
    ⋮
    a_0 + a_1 x_n + a_2 x_n^2 + \dots + a_n x_n^n = y_n

This equation system can be written as a matrix equation A \cdot a = y with

    A = \begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix},
    a = \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix},
    y = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix}

The matrix A and the vector y on the right-hand side contain the known quantities, while a is the solution vector. The linear equation system can be solved by all the known methods (see Section 1.3, solution of equation systems, page 22).

The determinant of this linear equation system is

    D = \begin{vmatrix} 1 & x_0 & \cdots & x_0^n \\ 1 & x_1 & \cdots & x_1^n \\ \vdots & & & \vdots \\ 1 & x_n & \cdots & x_n^n \end{vmatrix}
      = (x_1 - x_0)(x_2 - x_0) \cdots (x_n - x_0) \cdot (x_2 - x_1)(x_3 - x_1) \cdots (x_n - x_1) \cdot \dots \cdot (x_n - x_{n-1})    (3.7)

and is named the Vandermonde determinant. Since all supporting points are (must be) pairwise different, D ≠ 0 and the linear equation system is uniquely solvable. There is exactly one polynomial of order n which fulfils the property y_i = f(x_i) = p(x_i); its coefficients are (see Section 1.2.3, determinants, page 19)

    a_0 = D_{a_0} / D,   a_1 = D_{a_1} / D,   …,   a_n = D_{a_n} / D    (3.8)

where D_{a_i} denotes the determinant obtained from D by replacing the column belonging to a_i with the right-hand side y (Cramer's rule). With these coefficients the interpolation polynomial is

    y(x) ≈ p(x) = a_0 + a_1 \cdot x + a_2 \cdot x^2 + \dots + a_n \cdot x^n

and the interpolation value at the place x_P is

    y(x_P) ≈ p(x_P) = a_0 + a_1 \cdot x_P + a_2 \cdot x_P^2 + \dots + a_n \cdot x_P^n

Although the starting point of this method is very simple, the final determination of the interpolation polynomial requires a relatively large amount of computation, particularly if a great number of supporting points has to be taken into account.

Example of interpolation with the analytical power function method: Find a quadratic polynomial using the values of the following table and calculate the value y = f(1/2) at the place x = 1/2.

    x    0    1    2
    y    0    1    0

Since only three supporting points are given, the polynomial can be at most a second-order polynomial.
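The hand calculation below can be cross-checked numerically. The following minimal sketch (in Python with NumPy; the code and variable names are illustrative and not part of the original text) sets up the system A \cdot a = y of Eq. (3.6) for the three supporting points of the example and solves it:

```python
# Analytical power function method for the example supporting points
# (0, 0), (1, 1), (2, 0): set up the system A * a = y of Eq. (3.6) and solve it.
import numpy as np

x = np.array([0.0, 1.0, 2.0])   # supporting points
y = np.array([0.0, 1.0, 0.0])   # supporting values

# Vandermonde matrix with columns 1, x, x^2 (increasing powers)
A = np.vander(x, N=3, increasing=True)

a = np.linalg.solve(A, y)        # coefficients a_0, a_1, a_2
print("coefficients:", a)        # -> [ 0.  2. -1.]

x_P = 0.5
print("p(0.5) =", a @ x_P ** np.arange(3))   # -> 0.75
```

The solver reproduces the coefficients derived by hand in the following steps.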
A quadratic polynomial has the form

    p(x) = a_0 + a_1 x + a_2 x^2

It must hold that y_i = p(x_i), i.e. y_i = a_0 + a_1 x_i + a_2 x_i^2:

    p(0) = 0:   a_0 + a_1 \cdot 0 + a_2 \cdot 0^2 = 0   ⇒   a_0 = 0
    p(1) = 1:   a_0 + a_1 \cdot 1 + a_2 \cdot 1^2 = 1   ⇒   a_1 + a_2 = 1
    p(2) = 0:   a_0 + a_1 \cdot 2 + a_2 \cdot 2^2 = 0   ⇒   2 a_1 + 4 a_2 = 0

From these three equations follows

    a_0 = 0,   a_1 = 2,   a_2 = -1

Thus the interpolation polynomial is

    p(x) = 2x - x^2

With this function the value at the place x = 1/2 can be computed:

    f(1/2) ≈ p(1/2) = 2 \cdot (1/2) - (1/2)^2 = 3/4

3.1.2  Lagrange interpolation formula

Lagrange wrote the interpolation in the following form:

    y(x_P) ≈ p(x_P) = L_0(x_P) \cdot y_0 + L_1(x_P) \cdot y_1 + \dots + L_n(x_P) \cdot y_n    (3.9)

With the Lagrange interpolation no closed analytical function is computed, but only single values p(x_P) for each interpolation point x_P. The coefficients L_i(x_P) of the interpolation values y_i (for i = 0, 1, …, n) are n-th order polynomials of x_P. They are computed from the supporting points x_i and are called the Lagrange polynomials. The Lagrange polynomials of n-th order are:

    L_0(x_P) = \frac{(x_P - x_1)(x_P - x_2) \cdots (x_P - x_n)}{(x_0 - x_1)(x_0 - x_2) \cdots (x_0 - x_n)}

    L_1(x_P) = \frac{(x_P - x_0)(x_P - x_2) \cdots (x_P - x_n)}{(x_1 - x_0)(x_1 - x_2) \cdots (x_1 - x_n)}
    ⋮
    L_i(x_P) = \frac{(x_P - x_0)(x_P - x_1) \cdots (x_P - x_{i-1})(x_P - x_{i+1}) \cdots (x_P - x_n)}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)}    (3.10)
    ⋮
    L_n(x_P) = \frac{(x_P - x_0)(x_P - x_1)(x_P - x_2) \cdots (x_P - x_{n-1})}{(x_n - x_0)(x_n - x_1) \cdots (x_n - x_{n-1})}

Thus the Lagrange interpolation polynomial is

    y = f(x_P) ≈ p(x_P) = L_0(x_P) y_0 + L_1(x_P) y_1 + \dots + L_n(x_P) y_n

        = \frac{(x_P - x_1)(x_P - x_2) \cdots (x_P - x_n)}{(x_0 - x_1)(x_0 - x_2) \cdots (x_0 - x_n)} y_0
        + \frac{(x_P - x_0)(x_P - x_2) \cdots (x_P - x_n)}{(x_1 - x_0)(x_1 - x_2) \cdots (x_1 - x_n)} y_1
        + \dots    (3.11)
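The Lagrange form lends itself directly to a short implementation, since p(x_P) is evaluated from the supporting points without first determining the coefficients a_i. The following minimal Python sketch (illustrative, not part of the original text) evaluates Eq. (3.11) for the supporting points of the worked example above:

```python
# Lagrange interpolation (Eqs. (3.9)-(3.11)): evaluate p(x_P) directly from the
# supporting points, without computing polynomial coefficients first.

def lagrange_value(x_nodes, y_nodes, x_P):
    """Return p(x_P) = sum_i L_i(x_P) * y_i for the given supporting points."""
    p = 0.0
    for i, (x_i, y_i) in enumerate(zip(x_nodes, y_nodes)):
        L_i = 1.0
        for j, x_j in enumerate(x_nodes):
            if j != i:
                L_i *= (x_P - x_j) / (x_i - x_j)
        p += L_i * y_i
    return p

# Supporting points of the worked example above
x_nodes = [0.0, 1.0, 2.0]
y_nodes = [0.0, 1.0, 0.0]

print(lagrange_value(x_nodes, y_nodes, 0.5))   # -> 0.75, i.e. p(1/2) = 3/4
```

For the example data this returns 0.75, in agreement with the value p(1/2) = 3/4 obtained with the analytical power function method. Note that every new interpolation point x_P requires the full double loop, which reflects the remark that no closed analytical function is obtained.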