
Efficient kriging for real-time spatio-temporal interpolation
P.228, 20th Conference on Probability and Statistics in the Atmospheric Sciences
Balaji Vasan Srinivasan, Ramani Duraiswami∗, Raghu Murtugudde†
University of Maryland, College Park, MD

∗ Balaji Vasan Srinivasan and Ramani Duraiswami are with the Perceptual Interfaces and Reality Laboratory, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, Email: [balajiv,ramani]@umiacs.umd.edu
† Raghu Murtugudde is with the Earth System Science Interdisciplinary Center, University of Maryland, College Park, MD 20742, Email: [email protected]

Abstract

Atmospheric data is often recorded at scattered station locations. While the data is generally available over a long period of time, it cannot be used directly for extracting coherent patterns and mechanistic correlations. The only recourse is to spatially and temporally interpolate the data, both to organize the station recordings onto a regular grid and to query the data for predictions at a particular location or time of interest. Spatio-temporal interpolation approaches require the evaluation of weights at each point of interest. A widely used interpolation approach is kriging. However, kriging has a computational cost that scales as the cube of the number of data points N, resulting in cubic time complexity for each point of interest, which leads to a time complexity of O(N^4) for interpolation at O(N) points. In this work, we formulate the kriging problem to first reduce the computational cost to O(N^3). We use an iterative solver (Saad, 2003), and further accelerate the solver using fast summation algorithms like GPUML (Srinivasan and Duraiswami, 2009) or FIGTREE (Morariu et al., 2008). We illustrate the speedup on synthetic data and compare the performance with other standard kriging approaches to demonstrate the substantial improvement offered by our approach. We then apply the developed approach to ocean color data from the Chesapeake Bay and present some quantitative analysis of the kriged results.

1 Introduction

Kriging (Isaaks and Srivastava, 1989) is a group of geostatistical techniques to interpolate the value of a random field (e.g., the elevation, z, of the landscape as a function of the geographic location) at an unobserved location from observations of its value at nearby locations. It belongs to a family of linear least squares estimation algorithms that are used in several geostatistical applications. It has its origin in mining applications, where it was used to estimate the changes in ore grade within the mine (Krige, 1951). Kriging has since been applied in several scientific disciplines including atmospheric science, environmental monitoring and soil management.

Kriging can be linear or non-linear. Simple kriging, ordinary kriging and universal kriging are linear variants. Indicator kriging, log-normal kriging and disjunctive kriging are non-linear variants that were developed to account for models where the best predictor is not linear. Moyeed and Papritz (2002) show that the performance of linear and non-linear kriging is comparable, except for skewed data where non-linear kriging performs better.

With improved sensors and ease of data collection, the amount of data available to krige has increased severalfold. One drawback of kriging is its computational cost for large data sizes. One approach to allow kriging of large data is local neighborhood kriging, where only the closest observations are used for each prediction. Although computationally attractive, these methods require a local neighborhood for each location where the prediction is made, and predicting on a fine grid is still computationally demanding. Another disadvantage is the discontinuity in prediction along the peripheries of the local regions.

Another strategy for acceleration is to approximate the covariance matrix to obtain a sparser kriging system. Furrer et al. (2006) use tapering to sparsify the covariance matrix in simple kriging and thus reduce the complexity of the least squares. Memarsadeghi and Mount (2007) extend a similar idea to ordinary kriging. Kammann and Wand (2003) use a low-rank approximation to the covariance matrix to reduce the space and time complexity. Sakata et al. (2004) use the Sherman-Morrison-Woodbury formula on the sparsified covariance matrix with spatial sorting. The performance of all these approaches depends on the underlying data, and degrades dramatically whenever the recording stations are located close to each other.

Alternatively, fast algorithms to solve the exact kriging problem have also been proposed. Hartman and Hössjer (2008) build a Gaussian Markov random field to represent the station and kriging points of interest and thereby accelerate the kriging solution. Memarsadeghi et al. (2008) use fast summation algorithms (Yang et al., 2004; Raykar and Duraiswami, 2007) to krige in linear time, O(N). However, the speedup in these approaches is dependent on the distribution of the station and kriged locations. In this paper, we propose a graphical processing unit (GPU) based fast algorithm to solve the exact kriging system. We first formulate the kriging problem to solve it in O(kN^2) using iterative solvers (Saad, 2003) and then parallelize the computations on a GPU to achieve "near" real-time performance. Unlike the other approaches discussed above, our acceleration is independent of the station locations and distribution and does not rely on covariance approximation.

The paper is organized as follows: In Section 2, we introduce the kriging problem and formulate the mean and covariances for an ordinary kriging system. In Section 3, we propose our modification to the kriging formulation to solve it in O(kN^2). In Section 4, we discuss the acceleration of our formulation on a graphical processor. In Section 5, we discuss our experiments on various synthetic, spatial and spatio-temporal datasets to illustrate the performance of our kriging algorithm.
2 Linear kriging

Linear kriging is divided into simple kriging (known mean), ordinary kriging (unknown but constant mean) and universal kriging (the mean is an unknown linear combination of known functions), depending on the mean value specification. We restrict the discussion here to ordinary kriging; however, the approach that we propose in Section 3 and the accelerations in Section 4 are generic and apply to the other categories as well.

Ordinary kriging is widely used because it is statistically the best linear unbiased estimator (B.L.U.E.). Ordinary kriging is linear because its estimates are linear combinations of the available data. It is unbiased because it attempts to keep the mean residual at zero. Finally, it is called best because it tries to minimize the residual variance.

2.1 B.L.U.E. Formulation

Let the data be sampled at N locations (x_1, x_2, \ldots, x_N), and let the corresponding values be v_1, v_2, \ldots, v_N. The value \hat{v}_j at an unknown location \hat{x}_j is estimated as a weighted linear combination of the v's, given by

\[ \tilde{v}_j = \sum_{i=1}^{N} w_i v_i. \tag{1} \]

Here, \tilde{v}_j is the estimate, and \hat{v}_j is the actual (unknown) value at \hat{x}_j. To find the weights, the values (v_i and \hat{v}_j) are assumed to be stationary random functions,

\[ E[v_i] = E[\hat{v}_j] = E[v]. \tag{2} \]

For unbiased estimates,

\[ E[\hat{v}_j - \tilde{v}_j] = 0 \;\Rightarrow\; E[\hat{v}_j] - E[\tilde{v}_j] = 0 \;\Rightarrow\; E[v] - E\left[\sum_{i=1}^{N} w_i v_i\right] = 0 \;\Rightarrow\; E[v] - \sum_{i=1}^{N} w_i E[v] = 0, \]

which gives the constraint

\[ \sum_{i=1}^{N} w_i = 1. \tag{3} \]

Let the residue be r_j,

\[ r_j = \tilde{v}_j - \hat{v}_j. \tag{4} \]

Therefore, the residual variance is given by

\[ \mathrm{Var}(r_j) = \mathrm{Cov}\{\tilde{v}_j\tilde{v}_j\} - 2\,\mathrm{Cov}\{\tilde{v}_j\hat{v}_j\} + \mathrm{Cov}\{\hat{v}_j\hat{v}_j\}. \tag{5} \]

The first term can be further simplified as follows,

\[ \mathrm{Cov}\{\tilde{v}_j\tilde{v}_j\} = \mathrm{Var}\{\tilde{v}_j\} = \mathrm{Var}\left\{\sum_{i=1}^{N} w_i v_i\right\} = \sum_{i=1}^{N}\sum_{k=1}^{N} w_i w_k\,\mathrm{Cov}\{v_i v_k\} = \sum_{i=1}^{N}\sum_{k=1}^{N} w_i w_k \hat{C}_{ik}. \tag{6} \]

The second term can be written as

\[ \mathrm{Cov}\{\tilde{v}_j\hat{v}_j\} = \mathrm{Cov}\left\{\left(\sum_{i=1}^{N} w_i v_i\right)\hat{v}_j\right\} = \sum_{i=1}^{N} w_i\,\mathrm{Cov}\{v_i\hat{v}_j\} = \sum_{i=1}^{N} w_i \hat{C}_{i0}. \]

Finally, assuming that the random variables have the same variance \sigma_v^2, the third term can be expressed as

\[ \mathrm{Cov}\{\hat{v}_j\hat{v}_j\} = \sigma_v^2. \tag{7} \]

Substituting from Eqs. (6)-(7) in Eq. (5),

\[ \mathrm{Var}(r_j) = \sum_{i=1}^{N}\sum_{k=1}^{N} w_i w_k \hat{C}_{ik} - 2\sum_{i=1}^{N} w_i \hat{C}_{i0} + \sigma_v^2. \tag{8} \]

For ordinary kriging, the weights w are found by minimizing Var(r_j) with respect to w subject to the constraint in Eq. (3). This can be written as the minimization of the penalized cost function

\[ J(\mathbf{w}) = \sum_{i=1}^{N}\sum_{k=1}^{N} w_i w_k \hat{C}_{ik} - 2\sum_{i=1}^{N} w_i \hat{C}_{i0} + \sigma_v^2 + 2\lambda\left(\sum_{i=1}^{N} w_i - 1\right), \tag{9} \]

with 2\lambda the Lagrange multiplier. Taking derivatives of J with respect to w and \lambda,

\[ \frac{\partial J}{\partial w_i} = 2\sum_{k=1}^{N} w_k \hat{C}_{ik} - 2\hat{C}_{i0} + 2\lambda, \tag{10} \]

\[ \frac{\partial J}{\partial \lambda} = \sum_{i=1}^{N} w_i - 1. \tag{11} \]

Setting these derivatives to zero, we get the following system to solve for the weights w and \lambda,

\[ \begin{pmatrix} \hat{C}_{11} & \cdots & \hat{C}_{1N} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ \hat{C}_{N1} & \cdots & \hat{C}_{NN} & 1 \\ 1 & \cdots & 1 & 0 \end{pmatrix} \begin{pmatrix} w_1 \\ \vdots \\ w_N \\ \lambda \end{pmatrix} = \begin{pmatrix} \hat{C}_{10} \\ \vdots \\ \hat{C}_{N0} \\ 1 \end{pmatrix} \;\Rightarrow\; \hat{C}\mathbf{w} = \hat{\mathbf{c}}. \tag{12} \]
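To make the system in Eq. (12) concrete, the following minimal NumPy sketch assembles the augmented covariance matrix for a single prediction point and solves it with a direct O(N^3) solve. The Gaussian covariance model and the helper names (gaussian_cov, ordinary_kriging_weights) are illustrative assumptions of ours, not part of the paper.

```python
import numpy as np

def gaussian_cov(xa, xb, sigma2=1.0, length=1.0):
    """Gaussian covariance between point sets xa (m, d) and xb (n, d)."""
    d2 = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-d2 / (2.0 * length ** 2))

def ordinary_kriging_weights(x, x0, cov=gaussian_cov):
    """Solve the augmented system of Eq. (12) for one prediction point x0.

    Returns the N kriging weights and the Lagrange multiplier lambda.
    """
    n = x.shape[0]
    C = np.empty((n + 1, n + 1))
    C[:n, :n] = cov(x, x)                # \hat{C}_{ik}
    C[:n, n] = 1.0                       # constraint column of ones
    C[n, :n] = 1.0                       # constraint row of ones
    C[n, n] = 0.0
    c = np.empty(n + 1)
    c[:n] = cov(x, x0[None, :]).ravel()  # \hat{C}_{i0}
    c[n] = 1.0
    sol = np.linalg.solve(C, c)          # direct O(N^3) solve
    return sol[:n], sol[n]

# Usage: interpolate a 1-D field at x0 = 0.5 from five scattered stations.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=(5, 1))
    v = np.sin(2 * np.pi * x[:, 0])
    w, lam = ordinary_kriging_weights(x, np.array([0.5]))
    print("sum of weights:", w.sum())    # ~1, the constraint of Eq. (3)
    print("estimate:", w @ v)
```

The row and column of ones enforce the unbiasedness constraint of Eq. (3); repeating this direct solve for every point of interest is precisely the cost that the formulation in Section 3 and the acceleration in Section 4 are designed to avoid.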
3 Proposed approach

Without loss of generality, the system in Eq. (13) can be transposed,

\[ \mathbf{v}_* = \hat{C}_*^{T}\hat{C}^{-1}\hat{\mathbf{v}}. \tag{15} \]

The covariance functions C_{ij} are assumed to be symmetric in our discussion, therefore \hat{C}^T = \hat{C} and the transposition makes no difference; however, this strategy applies to asymmetric covariance matrices as well.
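Under our own reading of Eq. (15), the sketch below shows how the transposed formulation pairs with an iterative Krylov solver: the augmented system is solved once against the observation vector, and the predictions at all query points are then obtained with a single matrix product. Each solver iteration costs one matrix-vector product, O(N^2) for a dense covariance, so k iterations cost O(kN^2). The paper cites Saad (2003) without committing to a particular Krylov method; we use SciPy's GMRES here and form the covariance matrix explicitly for simplicity, whereas the paper's accelerated version evaluates this product with GPUML or FIGTREE and on the GPU instead. Function names, the small nugget term, and the covariance choice are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def krige_iterative(x, v, xq, cov, nugget=1e-6, maxiter=200):
    """Ordinary kriging at all query points xq via one iterative solve.

    Rather than solving Eq. (12) separately for every query point, we solve
    y = C^{-1} [v; 0] once with GMRES and read off the predictions as
    v_* = C_*^T y, following the transposed form of Eq. (15).
    """
    n = x.shape[0]
    C = np.empty((n + 1, n + 1))          # augmented covariance matrix
    C[:n, :n] = cov(x, x) + nugget * np.eye(n)  # small nugget for stability
    C[:n, n] = 1.0                        # constraint column of ones
    C[n, :n] = 1.0                        # constraint row of ones
    C[n, n] = 0.0

    # Wrapping the product as a LinearOperator marks the hook where a fast
    # summation (GPUML/FIGTREE) or GPU kernel product would be substituted;
    # in this sketch it is just a dense O(N^2) matrix-vector product.
    A = LinearOperator((n + 1, n + 1), matvec=lambda p: C @ p)
    rhs = np.append(v, 0.0)               # observations augmented with 0
    y, info = gmres(A, rhs, restart=n + 1, maxiter=maxiter)
    if info != 0:
        raise RuntimeError("GMRES did not converge")

    Cq = np.empty((n + 1, xq.shape[0]))   # augmented \hat{C}_*
    Cq[:n, :] = cov(x, xq)
    Cq[n, :] = 1.0
    return Cq.T @ y                       # v_* = \hat{C}_*^T y

# Usage with a simple Gaussian covariance, 200 stations, 50 query points.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(size=(200, 2))
    v = np.cos(2 * np.pi * x[:, 0]) * np.sin(2 * np.pi * x[:, 1])
    xq = rng.uniform(size=(50, 2))
    cov = lambda a, b: np.exp(-((a[:, None, :] - b[None, :, :]) ** 2).sum(-1) / 0.05)
    print(krige_iterative(x, v, xq, cov)[:5])
```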