
MATH 590: Meshfree Methods
Chapter 34: Improving the Condition Number of the Interpolation Matrix
Greg Fasshauer, Department of Applied Mathematics, Illinois Institute of Technology
Fall 2010, [email protected]

Outline
1 Preconditioning: Two Simple Examples
2 Early Preconditioners
3 Preconditioned GMRES via Approximate Cardinal Functions
4 Change of Basis
5 Effect of the “Better” Basis on the Condition Number
6 Effect of the “Better” Basis on the Accuracy of the Interpolant

In Chapter 16 we noted that the system matrices arising in scattered data interpolation with radial basis functions tend to become very ill-conditioned as the minimal separation distance $q_X$ between the data sites $x_1, \ldots, x_N$ is reduced. Therefore it is natural to devise strategies to prevent such instabilities, either by preconditioning the system or by finding a better basis for the approximation space we are using.

The preconditioning approach is standard procedure in numerical linear algebra. In fact, we can use any of the well-established methods (such as preconditioned conjugate gradient iteration) to improve the stability and convergence of the interpolation systems that arise for strictly positive definite functions.

Example
The sparse systems that arise in (multilevel) interpolation with compactly supported radial basis functions can be solved efficiently with the preconditioned conjugate gradient method.

The second approach to improving the condition number of the interpolation system, i.e., the idea of using a more stable basis, is well known from univariate polynomial and spline interpolation.
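The compactly supported case mentioned in the example can be sketched as follows. This is an illustrative sketch, not code from the text: the choice of Wendland's $C^2$ function $\varphi(r) = (1-r)_+^4(4r+1)$, the 1D data sites, and the simple Jacobi (diagonal) preconditioner are all assumptions made here for concreteness.

```python
# Sketch: sparse CSRBF interpolation system solved with preconditioned CG.
# Assumed ingredients (not fixed by the text): Wendland C2 basis function,
# uniform 1D sites, Jacobi preconditioner.
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg

def wendland_c2(r):
    """Wendland's C2 function (1 - r)_+^4 (4r + 1), strictly PD for s <= 3."""
    return np.maximum(1.0 - r, 0.0) ** 4 * (4.0 * r + 1.0)

N = 100
x = np.linspace(0.0, 1.0, N)                   # data sites
support = 0.05                                 # support radius of scaled basis
R = np.abs(x[:, None] - x[None, :]) / support  # pairwise scaled distances
A = csr_matrix(wendland_c2(R) * (R < 1.0))     # sparse, symmetric positive definite

y = np.sin(2 * np.pi * x)                      # data values
M = diags(1.0 / A.diagonal())                  # Jacobi preconditioner ~ inv(diag(A))
c, info = cg(A, y, M=M, maxiter=5000)          # preconditioned conjugate gradient
print(info)                                    # info == 0 means CG converged
```

Because the basis functions have small support, $A$ is sparse and the preconditioned iteration converges quickly; for badly scaled supports a more sophisticated preconditioner would be needed.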
The Lagrange basis functions for univariate polynomial interpolation are the ideal basis for stably solving the interpolation equations, since the resulting interpolation matrix is the identity matrix (which is much better conditioned than, e.g., the Vandermonde matrix stemming from a monomial basis). B-splines give rise to diagonally dominant, sparse system matrices which are much easier to deal with than the matrices resulting from a truncated power basis representation of the spline interpolant.

Both of these examples are studied in great detail in standard numerical analysis texts (see, e.g., [Kincaid and Cheney (2002)]) or in the literature on splines (see, e.g., [Schumaker (1981)]). We discuss an analogous approach for RBFs below.

Before we describe any of the specialized preconditioning procedures for radial basis function interpolation matrices, we give two examples presented in the early RBF paper [Jackson (1989)] to illustrate the effects of and motivation for preconditioning in the context of radial basis functions.

Preconditioning: Two Simple Examples

Example
Let $s = 1$ and consider interpolation based on $\varphi(r) = r$ with no polynomial terms added. As data sites we choose $X = \{1, 2, \ldots, 10\}$. This leads to the system matrix
\[
A = \begin{bmatrix}
0 & 1 & 2 & 3 & \cdots & 9 \\
1 & 0 & 1 & 2 & \cdots & 8 \\
2 & 1 & 0 & 1 & \cdots & 7 \\
3 & 2 & 1 & 0 & \cdots & 6 \\
\vdots & & & & \ddots & \vdots \\
9 & 8 & 7 & 6 & \cdots & 0
\end{bmatrix}
\]
with $\ell_2$-condition number $\mathrm{cond}(A) \approx 67$.

Example (cont.)
Instead of solving the linear system $Ac = y$, where $y = [y_1, \ldots, y_{10}]^T \in \mathbb{R}^{10}$ is a vector of given data values, we can find a suitable matrix $B$ to pre-multiply both sides of the equation such that the system is simpler to solve. Ideally, the new system matrix $BA$ should be the identity matrix, i.e., $B$ should be an approximate inverse of $A$.
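The matrix of this example is easy to reproduce numerically. The following sketch (assuming NumPy) builds $A$ and checks the condition number quoted above.

```python
# Sketch: the distance matrix A_jk = |x_j - x_k| for X = {1,...,10}
# and its l2-condition number (approximately 67, as stated in the text).
import numpy as np

x = np.arange(1, 11)                 # data sites 1,...,10
A = np.abs(x[:, None] - x[None, :])  # A[j,k] = phi(|x_j - x_k|) with phi(r) = r
print(A[0])                          # first row: 0 1 2 ... 9
print(np.linalg.cond(A))             # approximately 67
```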
Once we have found an appropriate matrix $B$, we solve the linear system
\[
BAc = By.
\]
The matrix $B$ is usually referred to as the (left) preconditioner of the linear system.

Example (cont.)
For the matrix $A$ above we can choose the preconditioner
\[
B = \begin{bmatrix}
1 & 0 & 0 & 0 & \cdots & 0 & 0 \\
\frac{1}{2} & -1 & \frac{1}{2} & 0 & \cdots & 0 & 0 \\
0 & \frac{1}{2} & -1 & \frac{1}{2} & \cdots & 0 & 0 \\
0 & 0 & \frac{1}{2} & -1 & \cdots & 0 & 0 \\
\vdots & & & & \ddots & & \vdots \\
0 & 0 & 0 & 0 & \cdots & -1 & \frac{1}{2} \\
0 & 0 & 0 & 0 & \cdots & 0 & 1
\end{bmatrix}.
\]
This leads to the preconditioned system matrix
\[
BA = \begin{bmatrix}
0 & 1 & 2 & \cdots & 8 & 9 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0 \\
9 & 8 & 7 & \cdots & 1 & 0
\end{bmatrix}
\]
in the system $BAc = By$.

Example (cont.)
Note that $BA$ is almost an identity matrix. One can easily check that now $\mathrm{cond}(BA) \approx 45$.

The motivation for this choice of $B$ is the following. The function $\varphi(r) = r$, or $\Phi(x) = |x|$, is a fundamental solution of the Laplacian ($\Delta = \frac{d^2}{dx^2}$ in the one-dimensional case), i.e.,
\[
\Delta \, \frac{1}{2}\Phi(x) = \frac{d^2}{dx^2}\, \frac{1}{2}|x| = \delta_0(x),
\]
where $\delta_0$ is the Dirac delta function centered at zero. Thus $B$ is chosen as a discretization of the Laplacian with special choices at the endpoints of the data set.

Example
For non-uniformly distributed data we can use a different discretization of the Laplacian $\Delta$ for each row of $B$. To see this, let $s = 1$, $X = \{1, \frac{3}{2}, \frac{5}{2}, 4, \frac{9}{2}\}$, and again consider interpolation with the radial function $\varphi(r) = r$. Then
\[
A = \begin{bmatrix}
0 & \frac{1}{2} & \frac{3}{2} & 3 & \frac{7}{2} \\
\frac{1}{2} & 0 & 1 & \frac{5}{2} & 3 \\
\frac{3}{2} & 1 & 0 & \frac{3}{2} & 2 \\
3 & \frac{5}{2} & \frac{3}{2} & 0 & \frac{1}{2} \\
\frac{7}{2} & 3 & 2 & \frac{1}{2} & 0
\end{bmatrix}
\]
with $\mathrm{cond}(A) \approx 18.15$.
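The uniform-grid preconditioner above can be verified directly. The following sketch (assuming NumPy) builds $B$ with identity rows at the endpoints and the second-difference stencil $[\frac{1}{2}, -1, \frac{1}{2}]$ in the interior, and confirms that the interior rows of $BA$ are exactly identity rows.

```python
# Sketch: the preconditioner B built from scaled second differences
# (stencil [1/2, -1, 1/2] in rows 2..9, identity rows at the endpoints),
# applied to the distance matrix A of the first example.
import numpy as np

N = 10
x = np.arange(1, N + 1)
A = np.abs(x[:, None] - x[None, :]).astype(float)

B = np.zeros((N, N))
B[0, 0] = B[-1, -1] = 1.0                 # special choices at the endpoints
for i in range(1, N - 1):                 # discrete Laplacian, scaled by 1/2
    B[i, i - 1 : i + 2] = [0.5, -1.0, 0.5]

BA = B @ A
print(np.allclose(BA[1:-1], np.eye(N)[1:-1]))  # interior rows are identity rows
print(np.linalg.cond(A), np.linalg.cond(BA))   # roughly 67 vs. 45
```

The interior rows come out exactly right because the second difference of $j \mapsto |i - j|$ is zero away from $j = i$ and equals one at $j = i$, mirroring $\frac{1}{2}|x|'' = \delta_0$.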
Example (cont.)
If we choose
\[
B = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & -\frac{3}{2} & \frac{1}{2} & 0 & 0 \\
0 & \frac{1}{2} & -\frac{5}{6} & \frac{1}{3} & 0 \\
0 & 0 & \frac{1}{3} & -\frac{4}{3} & 1 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix},
\]
based on second-order backward differences of the points in $X$, then the preconditioned system to be solved becomes
\[
\begin{bmatrix}
0 & \frac{1}{2} & \frac{3}{2} & 3 & \frac{7}{2} \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
\frac{7}{2} & 3 & 2 & \frac{1}{2} & 0
\end{bmatrix} c = By.
\]

Example (cont.)
Once more, this system is almost trivial to solve, and it has an improved condition number of $\mathrm{cond}(BA) \approx 8.94$.

Early Preconditioners

Ill-conditioning of the interpolation matrices was identified as a serious problem very early, and Nira Dyn along with some of her co-workers (see, e.g., [Dyn (1987), Dyn (1989), Dyn and Levin (1983), Dyn et al. (1986)]) provided some of the first preconditioning strategies tailored especially to RBF interpolants.

For the following discussion we consider the general interpolation problem that includes polynomial reproduction (see Chapter 6). We have to solve the following system of linear equations:
\[
\begin{bmatrix} A & P \\ P^T & O \end{bmatrix}
\begin{bmatrix} c \\ d \end{bmatrix}
=
\begin{bmatrix} y \\ 0 \end{bmatrix},  \qquad (1)
\]
with
$A_{jk} = \varphi(\|x_j - x_k\|)$, $j, k = 1, \ldots, N$,
$P_{j\ell} = p_\ell(x_j)$, $j = 1, \ldots, N$, $\ell = 1, \ldots, M$,
$c = [c_1, \ldots, c_N]^T$,
$d = [d_1, \ldots, d_M]^T$,
$y = [y_1, \ldots, y_N]^T$,
$O$ an $M \times M$ zero matrix,
and $0$ a zero vector of length $M$, with $M = \dim \Pi^s_{m-1}$.
Here $\varphi$ should be strictly conditionally positive definite of order $m$ and radial on $\mathbb{R}^s$, and the set $X = \{x_1, \ldots, x_N\}$ should be $(m-1)$-unisolvent.

The preconditioning scheme proposed by Dyn and her co-workers is a generalization of the simple differencing scheme discussed above. It is motivated by the fact that the polyharmonic splines (i.e., thin plate splines and radial powers)
\[
\varphi(r) = \begin{cases} r^{2k-s} \log r, & s \text{ even}, \\ r^{2k-s}, & s \text{ odd}, \end{cases} \qquad 2k > s,
\]
are fundamental solutions of the $k$-th iterated Laplacian in $\mathbb{R}^s$, i.e.,
\[
\Delta^k \varphi(\|x\|) = c\,\delta_0(x),
\]
where $\delta_0$ is the Dirac delta function centered at the origin, and $c$ is an appropriate constant.

Remark
For the (inverse) multiquadrics $\varphi(r) = (1 + r^2)^{\pm 1/2}$, which are also discussed in the papers mentioned above, application of the Laplacian yields a similar limiting behavior, i.e.,
\[
\lim_{r \to \infty} \Delta^k \varphi(r) = 0,
\]
and for $r \to 0$, $\Delta^k \varphi(r) \gg 1$.

One now wants to discretize the Laplacian on the (irregular) mesh given by the (scattered) data sites in $X$. To this end the authors of [Dyn et al. (1986)] suggest the following procedure for the case of scattered data interpolation over $\mathbb{R}^2$.
1 Start with a triangulation of the set $X$; e.g., the Delaunay triangulation will do. This triangulation can be visualized as follows.
  1 Begin with the points in $X$ and construct their Dirichlet tessellation or Voronoi diagram. The Dirichlet tile of a particular point $x$ is that subset of points in $\mathbb{R}^2$ which are closer to $x$ than to any other point in $X$.
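The first step of this procedure, the triangulation together with its dual Dirichlet tessellation, can be carried out with standard tools. The following sketch uses SciPy's computational-geometry routines; the point set is an arbitrary illustration, not data from the text.

```python
# Sketch: Delaunay triangulation and Voronoi (Dirichlet) diagram of scattered
# 2D sites, as in step 1 of the procedure of [Dyn et al. (1986)].
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
X = rng.random((12, 2))   # 12 scattered data sites in the unit square

tri = Delaunay(X)         # triangulation of the set X
vor = Voronoi(X)          # its dual Dirichlet tessellation

print(len(tri.simplices))  # number of triangles in the Delaunay triangulation
# Each site has its own Dirichlet tile: the set of points closer to it
# than to any other site; vor.point_region maps sites to their tiles.
```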