
A Parallel Implementation of the Ensemble Kalman Filter Based on Modified Cholesky Decomposition

Elias D. Nino-Ruiz(a,b), Adrian Sandu(b), Xinwei Deng(c)

(a) Department of Computer Science, Universidad del Norte, Barranquilla, Colombia, [email protected]
(b) Computational Science Laboratory, Department of Computer Science, Virginia Tech, Blacksburg, VA 24060, USA, {enino,sandu}@cs.vt.edu
(c) Department of Statistics, Virginia Tech, Blacksburg, VA 24060, USA, [email protected]

Abstract

This paper discusses an efficient parallel implementation of the ensemble Kalman filter based on the modified Cholesky decomposition. The proposed implementation starts by decomposing the domain into sub-domains. In each sub-domain a sparse estimate of the inverse background error covariance matrix is computed via a modified Cholesky decomposition; the estimates are computed concurrently on separate processors. The sparsity of this estimator is dictated by the conditional independence of model components beyond some radius of influence. The assimilation step is then carried out in parallel without the need for inter-processor communication. Once the local analysis states are computed, the analysis sub-domains are mapped back onto the global domain to obtain the analysis ensemble. Computational experiments are performed using the atmospheric general circulation model SPEEDY with the T-63 resolution on the Blueridge cluster at Virginia Tech. The number of processors used in the experiments ranges from 96 to 2,048. The proposed implementation outperforms the well-known local ensemble transform Kalman filter (LETKF) in accuracy for all model variables. The computational time of the proposed implementation is similar to that of the parallel LETKF method (where no covariance estimation is performed). Finally, for the largest number of processors, the proposed parallel implementation is 400 times faster than the serial version of the method.
Keywords: ensemble Kalman filter, covariance matrix estimation, local domain analysis

2010 MSC: 62L20, 62M05, 62M20, 62P35

Preprint submitted to Journal of Computational Science, September 7, 2020

1. Introduction

In operational data assimilation, sequential and variational methods are required to possess the ability to be performed in parallel [?, ?, ?]. This is a consequence of current atmospheric and oceanic model resolutions, in which the total number of model components reaches the order of millions and the daily volume of information to be assimilated is on the order of terabytes [?, ?]. Thus, serial data assimilation methods are impractical under realistic operational scenarios. In sequential data assimilation, one of the best parallel ensemble Kalman filter (EnKF) implementations is the local ensemble transform Kalman filter (LETKF) [?]. This method is based on domain localization given a radius of influence ζ. Usually, the assimilation process is performed for each model component in parallel, making use of a deterministic formulation of the EnKF in the ensemble space. In this formulation, the unknown background error covariance matrix is estimated by the rank-deficient ensemble covariance matrix which, in ensemble space, is well-defined. The LETKF relies on the assumption that local domain analyses avoid the impact of spurious correlations, for instance, by considering only small values of ζ. However, in operational data assimilation, ζ can be large owing to circumstances such as sparse observational networks and/or long-distance data error correlations (e.g., in pressure fields). In such cases, the accuracy of the LETKF can be negatively impacted by spurious correlations. We think there is an opportunity to provide a more robust parallel ensemble Kalman filter implementation via a better estimation of background error correlations.
When two model components (i.e., grid points) are assumed to be conditionally independent, their corresponding entry in the estimated inverse background error covariance matrix is zero. Conditional dependence/independence of model components can be enforced making use of local domain analyses. For instance, when the distance between two model components in physical space is larger than ζ, their corresponding entry in the inverse background error covariance matrix is zero. This can be exploited in order to obtain sparse estimators of that matrix, which implies large savings in terms of memory and computation. Moreover, high performance computing can be used to speed up the assimilation process: the global domain can be decomposed according to the available number of processors; on each processor, a local inverse background error covariance matrix is estimated, and then the stochastic EnKF formulation [?] can be used to compute the local domain analyses. The local analyses are then mapped back onto the global domain, from which the global analysis state is obtained.

This paper is organized as follows. Section 2 presents basic concepts regarding sequential data assimilation and covariance matrix estimation; Section 3 proposes a parallel implementation of the ensemble Kalman filter based on the modified Cholesky decomposition; experimental results are discussed in Section 4; future research directions are presented in Section 5; and conclusions are drawn in Section 6.

2. Preliminaries

2.1. Modified Cholesky decomposition

Let S = [s_1, s_2, ..., s_N] ∈ R^{n×N} be the matrix whose columns are n-dimensional random Gaussian vectors with probability distribution N(0_n, Q), where the number of columns N denotes the number of samples. Denote by x^{[j]} ∈ R^{N×1} the vector holding the j-th component across all the columns of S, for 2 ≤ j ≤ n. The modified Cholesky decomposition [?
] arises from regressing each variable x^{[j]} on its predecessors x^{[j−1]}, x^{[j−2]}, ..., x^{[1]}, that is, fitting the regressions

x^{[j]} = \sum_{q=1}^{j-1} \beta_{jq} \, x^{[q]} + \varepsilon^{[j]} \in R^{N×1},   (1)

where ε^{[j]} denotes the error in the regression of the j-th component. Let D = diag{var(ε^{[j]})} ∈ R^{n×n} be the diagonal matrix of error variances, and let T = {−β_{jq}} ∈ R^{n×n} denote the unit lower-triangular matrix containing the negated regression coefficients, for 2 ≤ q < j ≤ n. An approximation of the inverse covariance matrix Q^{−1} ∈ R^{n×n} reads

Q^{−1} ≈ \hat{Q}^{−1} = T^T · D^{−1} · T,   (2)

and, making use of basic linear algebra, an approximation of Q ∈ R^{n×n} is

Q ≈ \hat{Q} = T^{−1} · D · T^{−T}.   (3)

2.2. Local ensemble transform Kalman filter

Localization is commonly used in the context of sequential data assimilation in order to mitigate the impact of spurious correlations in the assimilation process. In general, two forms of localization are used: covariance matrix localization and domain localization; both have been proven to be equivalent [?]. In practice, covariance matrix localization can be very difficult owing to the explicit representation in memory of the ensemble covariance matrix. On the other hand, domain localization methods avoid spurious correlations by considering only observations within a given radius of influence ζ: in the two-dimensional case, each model component is surrounded by a local box of dimension (2·ζ+1, 2·ζ+1), and the information within the scope of ζ (observed components and background error correlations) is used in the assimilation process; conversely, the information outside the local box is discarded. In Figure 1, local boxes for different radii of influence ζ are shown. The red grid point is the one to be assimilated; blue points are used in the assimilation process while black points are discarded. Based on this idea, the local ensemble transform Kalman filter (LETKF) is proposed [?
]. The global formulation of the LETKF is defined as follows: for a given background ensemble

X^b = [x^{b[1]}, x^{b[2]}, ..., x^{b[N]}] ∈ R^{n×N},   (4)

and ensemble perturbation matrix

U^b = X^b − \bar{x}^b ⊗ 1_N^T ∈ R^{n×N},   (5)

[Figure 1: Local boxes for different radii of influence ζ: (a) ζ = 1, (b) ζ = 3.]

where n is the number of model components, N is the ensemble size, x^{b[i]} ∈ R^{n×1} is the i-th ensemble member for 1 ≤ i ≤ N, \bar{x}^b is the ensemble mean, 1_N is the N-dimensional vector whose components are all ones, and ⊗ denotes the outer product of two vectors, an estimate of the analysis error covariance matrix in the ensemble space reads

\hat{P}^a = [(N − 1) · I_{N×N} + Z^T · R^{−1} · Z]^{−1},   (6a)

where Z = H · U^b ∈ R^{m×N}, H ∈ R^{m×n} is the linear observational operator, m is the number of observed components, and R ∈ R^{m×m} is the estimated data error covariance matrix. The optimal weights in this space read

r^a = \hat{P}^a · Z^T · R^{−1} · (y − H · \bar{x}^b),   (6b)

therefore, the optimal perturbations can be computed as follows:

W^a = r^a ⊗ 1_N^T + [(N − 1) · \hat{P}^a]^{1/2} ∈ R^{N×N},   (6c)

from which, in model space, the analysis reads

X^a = \bar{x}^b ⊗ 1_N^T + U^b · W^a ∈ R^{n×N}.   (6d)

The set of equations (6) is applied to each model component in order to compute the global analysis state.

2.3. Ensemble Kalman filter based on modified Cholesky

In [?], the modified Cholesky decomposition is used in order to obtain sparse estimators of the inverse background error covariance matrix. The columns of the matrix (5) are assumed normally distributed with moments

u^{b[i]} ∼ N(0_n, B), for 1 ≤ i ≤ N,   (7)

where B ∈ R^{n×n} is the true, unknown background error covariance matrix.
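As a concrete reference for the LETKF update of Section 2.2, Eqs. (6a)–(6d) for a single local box can be sketched as follows. This is an illustrative sketch of the standard equations only, not the parallel implementation discussed in this paper; the function name and the small test values are hypothetical.

```python
import numpy as np

def letkf_analysis(Xb, y, H, R):
    """One LETKF analysis step, Eqs. (6a)-(6d), for a single local box.

    Xb : background ensemble (n x N),  y : observations (m,),
    H  : linear observation operator (m x n),  R : data error covariance (m x m).
    """
    n, N = Xb.shape
    xb = Xb.mean(axis=1, keepdims=True)          # ensemble mean
    U = Xb - xb                                  # perturbation matrix, Eq. (5)
    Z = H @ U                                    # perturbations in observation space
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((N - 1) * np.eye(N) + Z.T @ Rinv @ Z)   # Eq. (6a)
    ra = Pa @ Z.T @ Rinv @ (y - (H @ xb).ravel())              # Eq. (6b)
    # symmetric square root of (N-1)*Pa via eigendecomposition
    vals, vecs = np.linalg.eigh((N - 1) * Pa)
    Wa = ra[:, None] + vecs @ np.diag(np.sqrt(vals)) @ vecs.T  # Eq. (6c)
    return xb + U @ Wa                                         # Eq. (6d)

# Small synthetic example (hypothetical sizes):
rng = np.random.default_rng(0)
n, N, m = 4, 6, 3
Xb = rng.standard_normal((n, N))
H = np.eye(m, n)                  # observe the first m components
y = np.zeros(m)
Xa = letkf_analysis(Xb, y, H, 0.1 * np.eye(m))
print(Xa.shape)                   # (4, 6)
```

Note that the symmetric square root in Eq. (6c) is computed here via an eigendecomposition of (N−1)·P̂ᵃ, which is symmetric positive definite; any matrix square root with the same symmetry property would do.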
Denote by x^{[j]} ∈ R^{N×1} the vector holding the j-th model component across all the columns of the matrix (5), for 1 ≤ j ≤ n. Following the analysis of Section 2.1, i.e., with S = U^b, an estimate of the inverse background error covariance matrix reads

B^{−1} ≈ \hat{B}^{−1} = T^T · D^{−1} · T ∈ R^{n×n},   (8)

and, similar to (3),

B ≈ \hat{B} = T^{−1} · D · T^{−T} ∈ R^{n×n}.   (9)

Based on (1), the resulting estimator \hat{B}^{−1} can be dense.
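The estimator (8) can be sketched in code as follows. This is an illustrative sketch, not the authors' implementation: it fits the regressions (1) by least squares and, optionally, restricts each regression to predecessors within a Chebyshev radius of influence `zeta` (using hypothetical grid coordinates `coords`), which makes the factor T, and hence the estimate of B^{−1}, sparse. With no radius the regressions are full and the estimator is dense, as noted above.

```python
import numpy as np

def modified_cholesky_precision(U, coords=None, zeta=None):
    """Estimate B^{-1} = T^T D^{-1} T from perturbations U (n x N), Eq. (8).

    Each row j of U is regressed on its predecessors, Eq. (1).  If grid
    coordinates `coords` (n x d array) and a radius of influence `zeta` are
    given, only predecessors within Chebyshev distance `zeta` are used,
    so T is sparse.  Illustrative sketch, not the authors' code.
    """
    n, N = U.shape
    U = U - U.mean(axis=1, keepdims=True)        # center the perturbations
    T = np.eye(n)                                # unit lower-triangular factor
    d = np.empty(n)                              # diagonal of error variances
    d[0] = U[0] @ U[0] / N
    for j in range(1, n):
        if coords is None or zeta is None:
            pred = list(range(j))                # dense case: all predecessors
        else:
            pred = [q for q in range(j)
                    if np.abs(coords[q] - coords[j]).max() <= zeta]
        if pred:
            X = U[pred]                          # predecessors, (|pred|, N)
            beta, *_ = np.linalg.lstsq(X.T, U[j], rcond=None)
            T[j, pred] = -beta                   # negated regression coefficients
            resid = U[j] - beta @ X
        else:
            resid = U[j]
        d[j] = resid @ resid / N                 # residual (error) variance
    return T.T @ np.diag(1.0 / d) @ T

# Dense case: full regressions recover the inverse of the sample covariance.
rng = np.random.default_rng(0)
U = rng.standard_normal((5, 2000))
Binv = modified_cholesky_precision(U)
C = np.cov(U, ddof=0)
print(np.allclose(Binv, np.linalg.inv(C)))       # True
```

In the dense case the construction is exactly the LDL^T factorization of the inverse sample covariance; the localized case zeroes the entries of T for components farther apart than ζ, which is the sparsity pattern implied by conditional independence.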