
Appl. Math. Inf. Sci. 9, No. 1, 1-8 (2015)
Applied Mathematics & Information Sciences, An International Journal

http://dx.doi.org/10.12785/amis/090101

Two and Three Dimensional Partition of Unity Interpolation by Product-Type Functions

Roberto Cavoretto∗ Department of Mathematics “G. Peano”, University of Torino, Via Carlo Alberto 10, 10123 Torino, Italy

Received: 3 Mar. 2014, Revised: 3 Jun. 2014, Accepted: 4 Jun. 2014 Published online: 1 Jan. 2015

Abstract: In this paper we analyze the behavior of product-type radial basis functions (RBFs) and splines, which are used in a partition of unity interpolation scheme as local approximants. In particular, we deal with the case of bivariate and trivariate interpolation on a relatively large number of scattered data points. Thus, we propose the local use of compactly supported RBF and spline interpolants, which take advantage of being expressible in the multivariate setting as a product of univariate functions. Numerical experiments show good accuracy and stability of the partition of unity method combined with these product-type interpolants, comparing it with the one obtained by replacing compactly supported RBFs and splines with Gaussians.

Keywords: multivariate approximation, local interpolation schemes, partition of unity methods, radial basis functions, splines, scattered data

1 Introduction

We consider the problem of interpolating a continuous function g : Ω → R on a compact domain Ω ⊂ R^m, m = 2, 3, defined on a finite set X_N = {x_i, i = 1, 2, ..., N} of data points or nodes, which are situated in Ω. It consists of finding an interpolant I : Ω → R such that, given the x_i and the corresponding function values g_i, the interpolation conditions I(x_i) = g(x_i), i = 1, 2, ..., N, are satisfied.

In particular, we are interested in the interpolation of large scattered data sets, a problem which has gained much interest in several areas of applied sciences and scientific computing, where the need for accurate, fast and stable algorithms is often essential (see, e.g., [19,22,26,27]). Among the various multivariate approximation techniques, partition of unity methods such as Shepard-type interpolants turn out to be particularly effective in meshfree or meshless interpolation (see [2,8,12,13,14,16,23,24,28,30,31]).

In this paper, we propose the use of product-type compactly supported radial basis functions (RBFs) and splines, which are used in a partition of unity interpolation scheme as local approximants. Specifically, we consider two families of basis functions, known as Wendland functions and Lobachevsky splines, which can be expressed in the multivariate setting as products of univariate functions. The former are well known in approximation theory and practice, and are usually used as radial functions in the field of multivariate interpolation and approximation (see, e.g., [10,19,21,34]). The latter, first considered in probability theory [20,29], have successfully been proposed for multivariate scattered data interpolation and integration in [4,5,6] and for landmark-based image registration in [1,3]. We remark that both of these families of univariate functions (depending on a shape parameter) are compactly supported, strictly positive definite, and enjoy noteworthy theoretical and computational properties, such as the convergence of the spline to the Gaussian function. Furthermore, we observe that the multivariate spline interpolant is noteworthy because it is neither a mesh-based formula nor a radial one, but it asymptotically behaves like a Gaussian interpolant (see [4,9]). Numerical experiments point out that, for sufficiently regular basis functions such as the product-type radial basis function and spline of smoothness C^4, the partition of unity scheme combined with RBF and spline interpolants is comparable in accuracy with that obtained by using Gaussians, while being usually much better conditioned.

∗ Corresponding author e-mail: [email protected]

The paper is organized as follows. In Section 2 we first consider the problem of scattered data interpolation by product-type Wendland functions and Lobachevsky splines, recalling their analytic expressions and some properties; then, we describe the partition of unity method, which makes use of product-type RBF and spline interpolants as local approximants. In Section 3 we describe the corresponding partition of unity algorithms designed for bivariate and trivariate interpolation. Section 4 summarizes several numerical results in order to analyze accuracy and stability of the local approximation scheme combined with compactly supported radial basis function and spline interpolants, also comparing their errors with those of the Gaussian. Finally, Section 5 deals with conclusions and future work.

2 Partition of unity scheme

Let us consider a function g : Ω → R on a compact domain Ω ⊂ R^m, m ≥ 1, a set X_N = {x_i = (x_{1i}, x_{2i}, ..., x_{mi}), i = 1, 2, ..., N} ⊂ Ω of scattered data points, and the set G_N = {g(x_i), i = 1, 2, ..., N} of the corresponding function values.

2.1 Product-type functions

2.1.1 Radial basis functions

In order to construct an interpolation formula generated by compactly supported RBFs, for even p ≥ 0 we consider the product-type interpolant of the form

F_p(x) = ∑_{j=1}^{N} c_j φ_{pj}(x),  x ∈ Ω,

requiring F_p(x_i) = g(x_i), i = 1, 2, ..., N. The interpolant F_p is a linear combination of products of univariate shifted and rescaled functions ζ_p, called Wendland functions [32], i.e.

φ_{pj}(x) ≡ φ_{pj}(x; δ) = ∏_{h=1}^{m} ζ_p(δ(x_h − x_{hj})),   (1)

where, setting r = (x_h − x_{hj}), for j = 1, 2, ..., N

ζ_0(δr) = (1 − δr)_+,
ζ_2(δr) = (1 − δr)_+^3 (3δr + 1),
ζ_4(δr) = (1 − δr)_+^5 (8(δr)^2 + 5δr + 1),
ζ_6(δr) = (1 − δr)_+^7 (21(δr)^3 + 19(δr)^2 + 7δr + 1),

and δ ∈ R^+ is a shape parameter. Note that the support of ζ_p is [−1/δ, 1/δ]. The coefficients c = {c_j} are computed by solving the linear system

A c = g,

where the interpolation matrix

A = {a_{ij}} = {φ_{pj}(x_i)},  i, j = 1, 2, ..., N,   (2)

is symmetric and depends on the values of p and δ in (1). Since these compactly supported RBFs are strictly positive definite, the interpolation matrix A in (2) is positive definite for any set of distinct nodes (see [19]). Furthermore, since Wendland functions are (univariate) strictly positive definite functions, we can construct multivariate strictly positive definite functions from univariate ones (see, e.g., [34]).

Theorem 1. Suppose that λ_1, λ_2, ..., λ_m are strictly positive definite and integrable functions on R; then

Λ(x) = λ_1(x_1) λ_2(x_2) ··· λ_m(x_m),  x = (x_1, x_2, ..., x_m) ∈ R^m,

is a strictly positive definite function on R^m.

2.1.2 Spline functions

For even n ≥ 2, we construct the product-type spline interpolant of g at the nodes x_i in the form

F̃_n(x) = ∑_{j=1}^{N} c̃_j φ̃_{nj}(x),  x ∈ Ω,

requiring F̃_n(x_i) = g(x_i), i = 1, 2, ..., N. The interpolant F̃_n is a linear combination of products of univariate shifted and rescaled functions f_n^*, known as Lobachevsky splines [4], i.e.

φ̃_{nj}(x) ≡ φ̃_{nj}(x; α) = ∏_{h=1}^{m} f_n^*(α(x_h − x_{hj})),   (3)

where for j = 1, 2, ..., N

f_n^*(α(x_h − x_{hj})) = [√(n/3) / (2^n (n − 1)!)] ∑_{k=0}^{n} (−1)^k C(n,k) [√(n/3) α(x_h − x_{hj}) + (n − 2k)]_+^{n−1},

or, equivalently,

f_n^*(α(x_h − x_{hj})) = [(n/3)^{n/2} / (2^n (n − 1)!)] ∑_{k=0}^{n} (−1)^k C(n,k) [α(x_h − x_{hj}) + (n − 2k)√(3/n)]_+^{n−1},

and α ∈ R^+ is a shape parameter. Here the support of f_n^* is given by [−√(3n)/α, √(3n)/α]. The coefficients c̃ = {c̃_j} are obtained by solving the system of linear equations

Ã c̃ = g,

whose interpolation matrix

Ã = {ã_{ij}} = {φ̃_{nj}(x_i)},  i, j = 1, 2, ..., N,   (4)

is symmetric and depends on the choice of n and α in (3). As these splines also turn out to be strictly positive definite for any even n ≥ 2, the interpolation matrix Ã in (4) is positive definite for any distinct node set.
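For concreteness, the following is a minimal Python/NumPy sketch of the product-type construction of Section 2.1: the univariate Wendland functions ζ_p of (1), the Lobachevsky splines f_n^* of (3), and the assembly of the interpolation matrix of (2). It is an illustrative reading of the formulas rather than the author's code: the function names and the toy data are ours, and the truncated powers are applied to |δr| (the even extension of the univariate Wendland functions, an assumption of this sketch).

```python
import numpy as np
from math import comb, factorial, sqrt   # comb requires Python >= 3.8

def zeta(p, t):
    """Univariate Wendland functions zeta_p of (1); t = delta*(x_h - x_hj).
    The even extension (argument |t|) is used so that zeta_p is supported on [-1, 1]."""
    a = np.abs(np.asarray(t, dtype=float))
    u = np.maximum(1.0 - a, 0.0)                       # truncated power (1 - |t|)_+
    if p == 0:
        return u
    if p == 2:
        return u**3 * (3*a + 1)
    if p == 4:
        return u**5 * (8*a**2 + 5*a + 1)
    if p == 6:
        return u**7 * (21*a**3 + 19*a**2 + 7*a + 1)
    raise ValueError("p must be 0, 2, 4 or 6")

def lobachevsky(n, t):
    """Univariate Lobachevsky spline f_n^* of (3) at t = alpha*(x_h - x_hj); n even."""
    t = np.asarray(t, dtype=float)
    c = sqrt(n / 3.0) / (2**n * factorial(n - 1))
    s = np.zeros_like(t)
    for k in range(n + 1):
        s += (-1)**k * comb(n, k) * np.maximum(sqrt(n / 3.0) * t + (n - 2*k), 0.0)**(n - 1)
    return c * s

def product_basis(phi1d, centers, x):
    """phi_j(x) = prod_h phi1d(x_h - x_hj) for all centers x_j; centers: (N, m), x: (m,)."""
    return np.prod(phi1d(np.asarray(x)[None, :] - centers), axis=1)

# Toy interpolation problem in the unit square with the C^4 Wendland product basis (p = 4).
rng = np.random.default_rng(0)
nodes = rng.random((50, 2))
g = np.sin(np.pi * nodes[:, 0]) * nodes[:, 1]          # illustrative data values
delta = 2.0                                            # illustrative shape parameter
phi1d = lambda r: zeta(4, delta * r)
A = np.array([product_basis(phi1d, nodes, xi) for xi in nodes])   # A_ij = phi_j(x_i), cf. (2)
c = np.linalg.solve(A, g)                              # coefficients of the interpolant F_p
F = lambda x: product_basis(phi1d, nodes, x) @ c       # F_p(x) = sum_j c_j phi_j(x)
```

The spline interpolant of Section 2.1.2 is obtained in the same way by replacing phi1d with, e.g., lambda r: lobachevsky(6, alpha * r) for some chosen α.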


From the central limit theorem (see [20,29]), we remark that the m-variate spline converges for n → ∞ to the m-variate Gaussian, i.e.

lim_{n→∞} ∏_{i=1}^{m} f_n^*(αx_i) = (1/(2π)^{m/2}) exp(−(α^2 ∑_{i=1}^{m} x_i^2)/2).

Hence, these product-type functions asymptotically behave like radial functions, though they are not radial themselves. Finally, since these splines are also (univariate) strictly positive definite functions for even n ≥ 2, from Theorem 1 it follows that we can construct multivariate strictly positive definite functions from univariate ones.

2.2 Interpolation method

In this subsection we present the partition of unity method. It was first suggested in [7,25] in the context of meshfree Galerkin methods for the solution of partial differential equations, but it then became a common tool also used in the field of approximation theory (see [34]).

The basic idea of the partition of unity method is to start with a partition of the open and bounded domain Ω ⊆ R^m into d subdomains Ω_j such that Ω ⊆ ∪_{j=1}^{d} Ω_j, with some mild overlap among the subdomains. First, we take a partition of unity, which consists of a family of compactly supported, non-negative, continuous functions W_j with supp(W_j) ⊆ Ω_j such that

∑_{j=1}^{d} W_j(x) = 1,  x ∈ Ω.

Then, we consider the global approximant of the form

I(x) = ∑_{j=1}^{d} H_j(x) W_j(x),  x ∈ Ω,   (5)

where

H_j(x) = ∑_{k=1}^{m_j} c_k φ_k(x)

is a local interpolant, which is constructed using the m_j nodes belonging to the subdomain Ω_j and solving a local interpolation problem. In particular, H_j denotes either a RBF interpolant F_{p_j} or a spline interpolant F̃_{n_j}, whereas φ_k represents the generic k-th basis, φ_{pk} or φ̃_{nk}. In fact, we note that if the local approximants satisfy the interpolation conditions at a data point x_i, i.e.

H_j(x_i) = g(x_i),

the global approximant also interpolates at this node, i.e.

I(x_i) = g(x_i),  i = 1, 2, ..., N.

Then, we give the following definition (see [33]).

Definition 1. Let Ω ⊆ R^m be a bounded set. Let {Ω_j}_{j=1}^{d} be an open and bounded covering of Ω. This means that all Ω_j are open and bounded and that Ω ⊆ ∪_{j=1}^{d} Ω_j. Set δ_j = diam(Ω_j) = sup_{x,y ∈ Ω_j} ||x − y||_2. We call a family of nonnegative functions {W_j}_{j=1}^{d} with W_j ∈ C^k(R^m) a k-stable partition of unity with respect to the covering {Ω_j}_{j=1}^{d} if

1) supp(W_j) ⊆ Ω_j;
2) ∑_{j=1}^{d} W_j(x) ≡ 1 on Ω;
3) for every β ∈ N_0^m with |β| ≤ k there exists a constant C_β > 0 such that

||D^β W_j||_{L^∞(Ω_j)} ≤ C_β / δ_j^{|β|},

for all 1 ≤ j ≤ d.

In accordance with the statements in [33], we require additional regularity assumptions on the covering {Ω_j}_{j=1}^{d}.

Definition 2. Suppose that Ω ⊆ R^m is bounded and X_N = {x_i, i = 1, 2, ..., N} ⊆ Ω is given. An open and bounded covering {Ω_j}_{j=1}^{d} is called regular for (Ω, X_N) if the following properties are satisfied:

1) for each x ∈ Ω, the number of subdomains Ω_j with x ∈ Ω_j is bounded by a global constant K;
2) each subdomain Ω_j satisfies an interior cone condition;
3) the local fill distances h_{X_j,Ω_j} are uniformly bounded by the global fill distance h_{X_N,Ω}, where X_j = X_N ∩ Ω_j.

Therefore, after defining the space C^k_ν(R^m) of all functions g ∈ C^k whose derivatives of order |β| = k satisfy D^β g(x) = O(||x||_2^ν) for ||x||_2 → 0, we consider the following convergence result (see, e.g., [19,34]).

Theorem 2. Let Ω ⊆ R^m be open and bounded and suppose that X_N = {x_i, i = 1, 2, ..., N} ⊆ Ω. Let φ ∈ C^k_ν(R^m) be a strictly positive definite function. Let {Ω_j}_{j=1}^{d} be a regular covering for (Ω, X_N) and let {W_j}_{j=1}^{d} be k-stable for {Ω_j}_{j=1}^{d}. Then the error between g ∈ N_φ(Ω), where N_φ is the native space of φ, and its partition of unity interpolant (5) can be bounded by

|D^β g(x) − D^β I(x)| ≤ C h_{X_N,Ω}^{(k+ν)/2 − |β|} |g|_{N_φ(Ω)},

for all x ∈ Ω and all |β| ≤ k/2.

Remark. We observe that the partition of unity method preserves the local approximation order for the global fit. In fact, we can efficiently compute large product-type interpolants by solving small interpolation problems and then combine them along with the global partition of unity {W_j}_{j=1}^{d}. This approach allows us to decompose a large problem into many small problems, ensuring at the same time that the accuracy obtained for the local fits is carried over to the global one. In particular, the partition of unity method can be thought of as a Shepard's method with higher-order data, since local approximations H_j instead of data values g_j are used.
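As a complement to (5), here is a small Python sketch of how the global partition of unity fit can be assembled from precomputed local interpolants. It assumes spherical subdomains of equal radius and Shepard-normalized weights generated by a compactly supported univariate function; all names are illustrative and not taken from the paper.

```python
import numpy as np

# Compactly supported generating function for the weights: the C^2 Wendland function.
wendland_c2 = lambda t: np.maximum(1.0 - t, 0.0)**3 * (3.0 * t + 1.0)

def pu_evaluate(x, centres, radius, local_fits, weight1d=wendland_c2):
    """Global PU fit (5): I(x) = sum_j H_j(x) W_j(x), with the W_j normalized to sum to 1.

    centres    : (d, m) array of subdomain centres (spherical subdomains of equal radius)
    local_fits : list of d callables, local_fits[j](x) = H_j(x), the local interpolants
    weight1d   : compactly supported univariate function generating the weights
    """
    dist = np.linalg.norm(centres - np.asarray(x), axis=1)
    active = np.nonzero(dist < radius)[0]          # subdomains Omega_j containing x
    w = weight1d(dist[active] / radius)            # unnormalized weights
    w = w / np.sum(w)                              # Shepard normalization: sum_j W_j(x) = 1
    # assumes x lies in at least one subdomain, as required by the covering of Definition 2
    return sum(wj * local_fits[j](x) for wj, j in zip(w, active))
```

Because each W_j vanishes outside Ω_j, only the few local fits whose subdomains contain x contribute to I(x), which is what makes the scheme local.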


3 Algorithms

In this section, we describe in detail the partition of unity algorithms used in the interpolation processes, first referring to the bivariate algorithm and then to the trivariate one. They are characterized by the partition of the domain Ω in square or cube cells, enabling us to use efficient searching procedures. The basic versions of these interpolation algorithms have been proposed and widely tested in [17,18]. Here, such algorithms have been modified and efficiently updated to locally apply the product-type interpolants.

3.1 Bivariate interpolation

INPUT:
– N, number of data;
– X_N = {(x_{1i}, x_{2i}), i = 1, 2, ..., N}, set of data points;
– G_N = {g_i, i = 1, 2, ..., N}, set of data values;
– d, number of subdomains;
– C_d = {(x̄_{1i}, x̄_{2i}), i = 1, 2, ..., d}, set of subdomain points;
– s, number of evaluation points;
– E_s = {(x̃_{1i}, x̃_{2i}), i = 1, 2, ..., s}, set of evaluation points.

OUTPUT:
– A_s = {I(x̃_{1i}, x̃_{2i}), i = 1, 2, ..., s}, set of approximated values.

Stage 1. The set X_N of nodes and the set E_s of evaluation points are ordered with respect to the x_2-axis direction by applying a quicksort_{x_2} procedure.

Stage 2. For each subdomain point (x̄_{1i}, x̄_{2i}), i = 1, 2, ..., d, a local circular subdomain is constructed, whose radius depends on the subdomain number d, that is

δ_subdom = √(2/d).   (6)

This value is suitably chosen, supposing to have a nearly uniform node distribution and assuming that the ratio N/d ≈ 4.

Stage 3. A double structure of crossed strips is constructed as follows (a code sketch of the resulting cell structure is given after Stage 8):

i) a first family of q strips, parallel to the x_1-axis direction, is considered taking

q = ⌈1/δ_subdom⌉,   (7)

and a quicksort_{x_1} procedure is applied to order the nodes belonging to each strip;
ii) a second family of q strips, parallel to the x_2-axis direction, is considered.

Note that each of the two strip structures is ordered and numbered from 1 to q; moreover, the choice in (7) follows directly from the side length of the domain Ω (unit square), which here is 1, and the subdomain radius δ_subdom in (6).

Stage 4. The unit square is partitioned by a square-based structure consisting of q^2 squares, whose side length is given by δ_square ≡ δ_subdom. Then, the following structure is considered:
– the sets X_N, C_d and E_s are partitioned by the square structure into q^2 subsets X_{N_k}, C_{d_k} and E_{s_k}, k = 1, 2, ..., q^2, where N_k, d_k and s_k are the number of points in the k-th square.

Stage 5. In order to identify the squares to be examined in the searching procedure, we consider the following two steps:
(I) since δ_square ≡ δ_subdom, the ratio between these quantities is denoted by i* = δ_subdom/δ_square = 1. Thus, the number j* = (2i* + 1)^2 of squares to be examined for each node is 9.
(II) for each square k = [v, w], v, w = 1, 2, ..., q, a square-based searching procedure is considered, examining the points from the square [v − i*, w − i*] to the square [v + i*, w + i*], with the exception of those points close to the boundary of Ω, where we reduce the total number of squares to be examined.

Then, after defining which and how many squares are to be examined, the square-based searching procedure is applied:
– for each subdomain point of C_{d_k}, k = 1, 2, ..., q^2, to determine all nodes belonging to a subdomain. The number of nodes of the subdomain centred at (x̄_{1i}, x̄_{2i}) is counted and stored in m_i, i = 1, 2, ..., d;
– for each evaluation point of E_{s_k}, k = 1, 2, ..., q^2, to find all those belonging to a subdomain of centre (x̄_{1i}, x̄_{2i}) and radius δ_subdom. The number of subdomains containing the i-th evaluation point is counted and stored in r_i, i = 1, 2, ..., s.

Stage 6. A local interpolant H_j, j = 1, 2, ..., d, is found for each subdomain point.

Stage 7. A local approximant H_j and a weight function W_j, j = 1, 2, ..., d, is found for each evaluation point.

Stage 8. Applying the global fit (5), one can find the approximated values computed at any evaluation point (x̃_1, x̃_2) ∈ E_s.
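The cell-based organization of Stages 4–5 can be sketched in a few lines of Python; the bucketing into q × q square cells and the 9-cell neighbourhood scan below are an illustrative reading of the procedure, with hypothetical function names, not the author's original code.

```python
import numpy as np
from math import ceil, sqrt

def build_cells(points, q):
    """Assign each point of the unit square to one of the q*q cells of side 1/q (Stage 4)."""
    idx = np.minimum((points * q).astype(int), q - 1)      # (v, w) cell indices in 0..q-1
    cells = {}
    for i, vw in enumerate(map(tuple, idx)):
        cells.setdefault(vw, []).append(i)
    return cells

def points_near(points, query, cells, q, radius, i_star=1):
    """Indices of points within `radius` of `query`, scanning only the (2*i_star+1)^2 = 9
    cells around the query's cell (Stage 5), clipped near the boundary of the unit square."""
    v, w = np.minimum((np.asarray(query) * q).astype(int), q - 1)
    cand = []
    for a in range(max(v - i_star, 0), min(v + i_star, q - 1) + 1):
        for b in range(max(w - i_star, 0), min(w + i_star, q - 1) + 1):
            cand += cells.get((a, b), [])
    cand = np.array(cand, dtype=int)
    keep = np.linalg.norm(points[cand] - np.asarray(query), axis=1) <= radius
    return cand[keep]

# Illustrative use with the choices (6)-(7): delta_subdom = sqrt(2/d), q = ceil(1/delta_subdom).
d = 256
delta_subdom = sqrt(2.0 / d)
q = ceil(1.0 / delta_subdom)
nodes = np.random.default_rng(1).random((1000, 2))
cells = build_cells(nodes, q)
neighbours = points_near(nodes, np.array([0.5, 0.5]), cells, q, delta_subdom)
```

Counting the m_i and r_i of Stage 5 then amounts to calling points_near once per subdomain centre and once per evaluation point.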


3.2 Trivariate interpolation

INPUT:
– N, number of data;
– X_N = {(x_{1i}, x_{2i}, x_{3i}), i = 1, 2, ..., N}, set of data points;
– G_N = {g_i, i = 1, 2, ..., N}, set of data values;
– d, number of subdomains;
– C_d = {(x̄_{1i}, x̄_{2i}, x̄_{3i}), i = 1, 2, ..., d}, set of subdomain points;
– s, number of evaluation points;
– E_s = {(x̃_{1i}, x̃_{2i}, x̃_{3i}), i = 1, 2, ..., s}, set of evaluation points.

OUTPUT:
– A_s = {I(x̃_{1i}, x̃_{2i}, x̃_{3i}), i = 1, 2, ..., s}, set of approximated values.

Stage 1. The set X_N of nodes and the set E_s of evaluation points are ordered with respect to the x_3-axis direction by applying a quicksort_{x_3} procedure.

Stage 2. For each subdomain point (x̄_{1i}, x̄_{2i}, x̄_{3i}), i = 1, 2, ..., d, a local spherical subdomain is constructed, whose radius depends on the subdomain number d, that is

δ_subdom = √2 / ∛d.   (8)

Although other choices of δ_subdom are possible, this value is suitably chosen, supposing to have a nearly uniform node distribution and assuming that the ratio N/d ≈ 8.

Stage 3. A triple structure of intersecting parallelepipeds is constructed as follows:

i) a first family of q parallelepipeds, parallel to the x_1-axis direction, is considered taking

q = ⌈1/δ_subdom⌉,   (9)

and a quicksort_{x_1} procedure is applied to order the nodes belonging to each parallelepiped;
ii) a second family of q parallelepipeds, parallel to the x_2-axis direction, is constructed and a quicksort_{x_2} procedure is used to order the nodes belonging to each of the resulting parallelepipeds;
iii) a third family of q parallelepipeds, parallel to the x_3-axis, is considered.

Note that each of the three families of parallelepipeds is ordered and numbered from 1 to q; the choice in (9) follows directly from the side length of the domain, i.e. the unit cube, and the subdomain radius δ_subdom in (8).

Stage 4. The unit cube is partitioned by a cube-based structure consisting of q^3 cubes, whose side length is δ_cube ≡ δ_subdom. Then, the following structure is considered:
– the sets X_N, C_d and E_s are partitioned by the cube structure into q^3 subsets X_{N_k}, C_{d_k} and E_{s_k}, k = 1, 2, ..., q^3, where N_k, d_k and s_k are the number of points in the k-th cube.

Stage 5. In order to identify the cubes to be examined in the searching procedure, we consider the following two steps:
(I) since δ_cube ≡ δ_subdom, the ratio between these quantities is denoted by i* = δ_subdom/δ_cube = 1. Thus, the number j* = (2i* + 1)^3 of cubes to be examined for each node is 27.
(II) for each cube k = [u, v, w], u, v, w = 1, 2, ..., q, a cube-partition searching procedure is considered, examining the points from the cube [u − i*, v − i*, w − i*] to the cube [u + i*, v + i*, w + i*], with the exception of those points close to the boundary of Ω, where we reduce the total number of cubes to be examined.

Then, after defining which and how many cubes are to be examined, the cube-partition searching procedure is applied:
– for each subdomain point of C_{d_k}, k = 1, 2, ..., q^3, to determine all nodes belonging to a subdomain. The number of nodes of the subdomain centred at (x̄_{1i}, x̄_{2i}, x̄_{3i}) is counted and stored in m_i, i = 1, 2, ..., d;
– for each evaluation point of E_{s_k}, k = 1, 2, ..., q^3, to find all those belonging to a subdomain of centre (x̄_{1i}, x̄_{2i}, x̄_{3i}) and radius δ_subdom. The number of subdomains containing the i-th evaluation point is counted and stored in r_i, i = 1, 2, ..., s.

Stage 6. A local interpolant H_j, j = 1, 2, ..., d, is found for each subdomain point.

Stage 7. A local approximant H_j and a weight function W_j, j = 1, 2, ..., d, is found for each evaluation point.

Stage 8. Applying the global interpolant (5), one can find the approximated values computed at any evaluation point (x̃_1, x̃_2, x̃_3) ∈ E_s.

3.3 Complexity

The algorithms make repeated use of the quicksort routine along different directions, which requires on average a time complexity O(M log M), M being the number of nodes to be sorted. In particular, we need a preprocessing phase to build the data structure, whose computational cost is of order O(N log N) for the sorting of all N nodes and O(s log s) for the sorting of all s evaluation points in Stage 1. Then, to compute the local product-type interpolants, we solve d linear systems of (relatively) small sizes, and the computational cost is of order O(m_i^3), i = 1, 2, ..., d, for each subdomain, where m_i is the number of nodes in the i-th subdomain (see Stage 6). Moreover, in Stages 5, 7 and 8 we also have a cost of r_k · O(m_i), i = 1, 2, ..., d, k = 1, 2, ..., s, for the k-th evaluation point of E_s.
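For the trivariate case, the cube indexing of Stages 4–5 in Section 3.2, whose per-point cost drives the estimates above, can be sketched as follows; the radius and q below follow (8)–(9), while the function names are again illustrative and the snippet is only a sketch of the idea.

```python
import numpy as np
from math import ceil, sqrt

def cube_index(point, q):
    """Map a point of the unit cube to its cell (u, v, w) in the q^3 cube partition (Stage 4)."""
    return tuple(np.minimum((np.asarray(point) * q).astype(int), q - 1))

def neighbour_cubes(u, v, w, q, i_star=1):
    """The up to (2*i_star+1)^3 = 27 cubes scanned around cube (u, v, w) in Stage 5,
    clipped at the boundary of the unit cube."""
    span = lambda c: range(max(c - i_star, 0), min(c + i_star, q - 1) + 1)
    return [(a, b, c) for a in span(u) for b in span(v) for c in span(w)]

# Choices (8)-(9): delta_subdom = sqrt(2)/d**(1/3), q = ceil(1/delta_subdom).
d = 512
delta_subdom = sqrt(2.0) / d ** (1.0 / 3.0)
q = ceil(1.0 / delta_subdom)
centre_cell = cube_index([0.5, 0.5, 0.5], q)
print(centre_cell, len(neighbour_cubes(*centre_cell, q)))   # interior cell: 27 neighbours
```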


4 Numerical results

In this section we investigate the performance of the partition of unity interpolants obtained by using product-type interpolants as local approximants. In doing so, we focus on accuracy and stability of such interpolation formulas, considering some sets X_N of Halton scattered data points contained in Ω = [0,1]^m ⊂ R^m, for m = 2, 3. They are uniformly distributed random points, generated by using the MATLAB program haltonseq.m [19]. In particular, here we consider some sets of Halton points of size N = 16641, 66049, and N = 35937, 274625, for bivariate and trivariate interpolation, respectively. Note that in this local interpolation scheme we need to solve d linear systems, one for each subdomain Ω_j, j = 1, 2, ..., d, of relatively small size m_j × m_j.

Thus, we analyze the behavior of the partition of unity scheme combined with radial basis function (R) and spline (S) interpolants of smoothness C^2 (p = 2 and n = 4) and C^4 (p = 4 and n = 6), which are denoted by R2, R4 and S2, S4, respectively. Moreover, for comparison we also report the results obtained by using the Gaussian (G) as local approximant.

All the results reported below are obtained by computing, on the following bivariate and trivariate Franke test functions (see, e.g., [2,11]), i.e.,

g_1(x_1, x_2) = (3/4) exp(−((9x_1 − 2)^2 + (9x_2 − 2)^2)/4) + (3/4) exp(−(9x_1 + 1)^2/49 − (9x_2 + 1)/10)
              + (1/2) exp(−((9x_1 − 7)^2 + (9x_2 − 3)^2)/4) − (1/5) exp(−(9x_1 − 4)^2 − (9x_2 − 7)^2),

and

g_2(x_1, x_2, x_3) = (3/4) exp(−((9x_1 − 2)^2 + (9x_2 − 2)^2 + (9x_3 − 2)^2)/4)
                   + (3/4) exp(−(9x_1 + 1)^2/49 − (9x_2 + 1)/10 − (9x_3 + 1)/10)
                   + (1/2) exp(−((9x_1 − 7)^2 + (9x_2 − 3)^2 + (9x_3 − 5)^2)/4)
                   − (1/5) exp(−(9x_1 − 4)^2 − (9x_2 − 7)^2 − (9x_3 − 5)^2),

the root mean square error (RMSE), whose formula is given by

RMSE = √( (1/s) ∑_{i=1}^{s} |g(x_i) − I(x_i)|^2 ).

However, in order to have a more complete picture of the local interpolation scheme, we have also tested other functions, which we omit here for shortness, since accuracy and stability have shown a uniform behavior of the errors.

In Figures 1–2 we show the behavior of the compactly supported radial basis function and spline interpolation errors, both in the 2D and 3D cases, by varying the values of the shape parameters α ∈ [1,10] and δ ∈ [0.1,1.9]. Note that there is an exact correspondence between the spline and Gaussian shape parameter α, whereas this does not hold when varying δ; hence, though this study turns out to be significant from a numerical standpoint, a direct comparison of δ and α should be viewed as purely indicative.

Numerical experiments highlight that the variation of α and δ may greatly influence the quality of the approximation results. Moreover, the behavior of such errors turns out to be more regular and uniform for compactly supported radial basis functions and splines of product type, whereas it is quite unstable for the Gaussian, above all when we take small values of α. More precisely, comparing for example spline and Gaussian results, this study points out that the interpolation scheme with product-type interpolants is more stable than the one with the Gaussian. In fact, in the latter case we have interpolation matrices that are very ill-conditioned, thus producing the fast and sudden error variation shown in Figure 1 for (i) α ∈ [1,3.9), (ii) α ∈ [1,6.3), (iii) α ∈ [1,2.5) and (iv) α ∈ [1,3.9). Thus, in practice, this error analysis also provides useful information on the conditioning of the different local approximants. Furthermore, analyzing all the tests done, we observe that product-type functions such as R4 and S4 (of smoothness C^4) are comparable in accuracy with the Gaussians.

Specifically, we remark that the best level of Gaussian accuracy is reached when α ∈ [1,6], which is exactly the interval where the Gaussian is unstable. Conversely, though the optimality interval remains the same also for the spline, in that region we have complete stability; similar considerations, even if depending on the value of δ, can also be made for the radial basis function case. Hence, it follows that these tests point out the reliability of product-type interpolants also when they are used as local approximants in partition of unity interpolation schemes.

Finally, in Tables 1–2 we report the errors obtained considering optimal values of δ and α, i.e. taking values for which we obtain the smallest RMSEs.

Table 1 RMSEs obtained by using optimal values of δ and α for g_1.

        N = 16641                    N = 66049
        RMSE         δ_opt, α_opt    RMSE         δ_opt, α_opt
R2      2.4893E−5    0.88            6.9639E−6    0.84
R4      1.8259E−6    0.46            2.9154E−7    0.66
S2      2.3715E−5    2.9             6.5901E−6    2.7
S4      1.5553E−6    2.5             2.5635E−7    2.6
G       7.9092E−7    2.9             3.1658E−7    2.6
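As an aside, the test setup of this section can be reproduced in a few lines. The sketch below assumes Python with NumPy and SciPy (whose qmc.Halton generator plays the role of haltonseq.m, possibly with a slightly different starting point) and uses the bivariate Franke function g_1 and the RMSE defined above; it is illustrative only.

```python
import numpy as np
from scipy.stats import qmc   # requires SciPy >= 1.7

def franke_2d(x1, x2):
    """Bivariate Franke test function g_1 of Section 4."""
    return (0.75 * np.exp(-((9*x1 - 2)**2 + (9*x2 - 2)**2) / 4)
            + 0.75 * np.exp(-(9*x1 + 1)**2 / 49 - (9*x2 + 1) / 10)
            + 0.5  * np.exp(-((9*x1 - 7)**2 + (9*x2 - 3)**2) / 4)
            - 0.2  * np.exp(-(9*x1 - 4)**2 - (9*x2 - 7)**2))

def rmse(exact, approx):
    """Root mean square error over the s evaluation points."""
    return np.sqrt(np.mean((np.asarray(exact) - np.asarray(approx))**2))

# Halton data points in the unit square and the corresponding data values for g_1.
nodes = qmc.Halton(d=2, scramble=False).random(16641)
g_vals = franke_2d(nodes[:, 0], nodes[:, 1])
# rmse(franke_2d(eval_pts[:, 0], eval_pts[:, 1]), I_vals) then gives the tabulated error.
```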


Fig. 1 RMSEs obtained by varying the shape parameter α for the bivariate and trivariate Franke test functions. Panels: (i) N = 16641, 2D; (ii) N = 66049, 2D; (iii) N = 35937, 3D; (iv) N = 274625, 3D.

Fig. 2 RMSEs obtained by varying the shape parameter δ for the bivariate and trivariate Franke test functions. Panels: (i) N = 16641, 2D; (ii) N = 66049, 2D; (iii) N = 35937, 3D; (iv) N = 274625, 3D.

Table 2 RMSEs obtained by using optimal values of δ and α for g_2.

        N = 35937                    N = 274625
        RMSE         δ_opt, α_opt    RMSE         δ_opt, α_opt
R2      1.2433E−4    0.70            3.4168E−5    0.74
R4      2.4558E−5    0.96            4.3705E−6    1.16
S2      1.1088E−4    2.5             2.8425E−5    2.5
S4      2.2148E−5    4.0             3.3542E−6    4.6
G       8.8797E−6    2.7             1.4928E−6    2.8

5 Conclusions and future work

In this paper we studied the performance of product-type interpolants when they are applied as local approximants in a local interpolation scheme such as the partition of unity method. More precisely, we analyzed the behavior of such interpolants in the bivariate and trivariate setting, considering some sets of scattered data points. This study highlighted that these product-type functions work well also in local approaches. Finally, as confirmed by several numerical experiments, they turn out to be more stable than Gaussians and, in general, for sufficiently regular basis functions, comparable in accuracy.

As future work, starting from [15] we are going to implement adaptive partition of unity algorithms for scattered data interpolation in high dimensions, adopting suitable data structures like kd-trees and range trees.

Acknowledgement

The author thanks the financial support of the Department of Mathematics “G. Peano”, University of Torino, project “Numerical analysis for life sciences” (2012).

References

[1] G. Allasia, R. Cavoretto, A. De Rossi, B. Quatember, W. Recheis, M. Mayr and S. Demertzis, In: T.E. Simos et al. (Eds.), Proceedings of the ICNAAM 2010, AIP Conf. Proc. 1281, 716–719 (2010).
[2] G. Allasia, R. Besenghi, R. Cavoretto and A. De Rossi, Appl. Math. Comput. 217, 5949–5966 (2011).
[3] G. Allasia, R. Cavoretto and A. De Rossi, Math. Methods Appl. Sci. 35, 923–934 (2012).
[4] G. Allasia, R. Cavoretto and A. De Rossi, Comput. Appl. Math. 32, 71–87 (2013).
[5] G. Allasia, R. Cavoretto and A. De Rossi, Int. J. Comput. Math. 90, 2003–2018 (2013).
[6] A.A. Borisenko, Cylindrical multidimensional surfaces in Lobachevsky space, Journal of Soviet Mathematics, Springer (1991).
[7] I. Babuška and J.M. Melenk, Internat. J. Numer. Methods Engrg. 40, 727–758 (1997).
[8] M.W. Berry and K.S. Minser, ACM Trans. Math. Software 25, 353–366 (1999).
[9] R. Brinks, Comput. Appl. Math. 27, 79–92 (2008).


[10] M.D. Buhmann, Radial Basis Functions: Theory and Implementations, Cambridge Univ. Press, Cambridge, (2003).
[11] R. Cavoretto and A. De Rossi, J. Comput. Appl. Math. 234, 1505–1521 (2010).
[12] R. Cavoretto and A. De Rossi, Appl. Math. Sci. 4, 3425–3435 (2010).
[13] R. Cavoretto and A. De Rossi, Appl. Math. Lett. 25, 1251–1256 (2012).
[14] R. Cavoretto, In: T.E. Simos et al. (Eds.), Proceedings of the ICNAAM 2012, AIP Conf. Proc. 1479, 1054–1057 (2012).
[15] R. Cavoretto, Comput. Appl. Math., (2014). DOI: 10.1007/s40314-013-0104-9
[16] R. Cavoretto and A. De Rossi, Math. Methods Appl. Sci., (2014). DOI: 10.1002/mma.2906
[17] R. Cavoretto and A. De Rossi, Comput. Math. Appl. 67, 1024–1038 (2014).
[18] R. Cavoretto and A. De Rossi, submitted for publication (2014).
[19] G.E. Fasshauer, Meshfree Approximation Methods with MATLAB, World Scientific Publishing, Singapore, (2007).
[20] B.V. Gnedenko, The Theory of Probability, MIR, Moscow, (1976).
[21] A. Iske, Rend. Sem. Mat. Univ. Pol. Torino 61, 247–285 (2003).
[22] A. Iske, Rend. Sem. Mat. Univ. Pol. Torino 69, 217–246 (2011).
[23] M.A. Iyer, L.T. Watson and M.W. Berry, In: R. Menezes et al. (Eds.), Proceedings of the Annual Southeast Conference, 476–481 (2006).
[24] D. Lazzaro and L.B. Montefusco, J. Comput. Appl. Math. 140, 521–536 (2002).
[25] J.M. Melenk and I. Babuška, Comput. Methods Appl. Mech. Engrg. 139, 289–314 (1996).
[26] V.P. Nguyen, T. Rabczuk, S. Bordas and M. Duflot, Math. Comput. Simulation 79, 763–813 (2008).
[27] J. Pouderoux, I. Tobor, J.-C. Gonzato and P. Guitton, In: D. Pfoser et al. (Eds.), GIS 2004: Proceedings of the Twelfth ACM International Symposium on Advances in Geographic Information Systems, 232–240 (2004).
[28] R.J. Renka, ACM Trans. Math. Software 14, 139–148 (1988).
[29] A. Rényi, Calcul des Probabilités, Dunod, Paris, (1966).
[30] E. Sáenz-de-Cabezón, L.J. Hernández, M.T. Rivas, E. García-Ruiz, V. Marco, I. Pérez-Moreno and F. Javier Sáenz-de-Cabezón, Math. Comput. Simulation 82, 2–14 (2011).
[31] W.I. Thacker, J. Zhang, L.T. Watson, J.B. Birch, M.A. Iyer and M.W. Berry, ACM Trans. Math. Software 37, Art. 34, 1–20 (2010).
[32] H. Wendland, Adv. Comput. Math. 4, 389–396 (1995).
[33] H. Wendland, In: C.K. Chui et al. (Eds.), Approximation Theory X: Wavelets, Splines, and Applications, Vanderbilt Univ. Press, Nashville, TN, 473–483 (2002).
[34] H. Wendland, Scattered Data Approximation, Cambridge University Press, Cambridge, (2005).

Roberto Cavoretto is a Fixed-Term Research Fellow of Numerical Analysis at the University of Torino, Italy. He got his Ph.D. degree in Science and High Technology (Mathematics) at the University of Torino in 2010. In the same University he held a post-doc position until 2011. His research activity is mainly focused on topics of numerical analysis and applied mathematics such as: scattered data approximation, spherical interpolation, and applications to image registration. He has published research articles in international refereed journals of applied mathematics.
