
Understanding the Ensemble Kalman Filter

Matthias Katzfuss (a), Jonathan R. Stroud (b), and Christopher K. Wikle (c)

(a) Department of Statistics, Texas A&M University, College Station, TX, USA; (b) Department of Statistics, George Washington University, Washington, DC, USA; (c) Department of Statistics, University of Missouri, Columbia, MO, USA

ABSTRACT
The ensemble Kalman filter (EnKF) is a computational technique for approximate inference in state-space models. In typical applications, the state vectors are large spatial fields that are observed sequentially over time. The EnKF approximates the Kalman filter by representing the distribution of the state with an ensemble of draws from that distribution. The ensemble members are updated based on newly available data by shifting instead of reweighting, which allows the EnKF to avoid the degeneracy problems of reweighting-based algorithms. Taken together, the ensemble representation and shifting-based updates make the EnKF computationally feasible even for extremely high-dimensional state spaces. The EnKF is successfully used in data-assimilation applications with tens of millions of dimensions. While it implicitly assumes a linear Gaussian state-space model, it has also turned out to be remarkably robust to deviations from these assumptions in many applications. Despite its successes, the EnKF is largely unknown in the statistics community. We aim to change that with the present article, and to entice more statisticians to work on this topic.

ARTICLE HISTORY
Received June; Revised November

KEYWORDS
Bayesian inference; Forecasting; Kalman smoother; Sequential Monte Carlo; State-space models

1. Introduction

Data assimilation involves combining observations with "prior knowledge" (e.g., mathematical representations of mechanistic relationships; numerical models; model output) to obtain an estimate of the true state of a system and the associated uncertainty of that estimate (see, e.g., Nychka and Anderson 2010, for a review). Although data assimilation is required in many fields, its origins as an area of scientific inquiry arose out of the weather forecasting problem in geophysics. With the advent of the digital computer, one of the first significant applications was the integration of the partial differential equations that described the evolution of the atmosphere, for the purposes of short-to-medium range weather forecasting. Such a numerical model requires initial conditions from real-world observations that are physically plausible. Observations of the atmosphere have varying degrees of measurement uncertainty and are fairly sparse in space and time, yet numerical models require initial conditions that match the relatively dense spatial domain of the model. Data assimilation seeks to provide these "interpolated" fields while accounting for the uncertainty of the observations and using the numerical model itself to evolve the atmospheric state variables in a physically plausible manner. Thus, data assimilation considers an equation for measurement error in the observations and an equation for the state evolution, a so-called state-space model (see, e.g., Wikle and Berliner 2007).

In the geophysical problems that motivated the development of data assimilation, the state and observation dimensions are huge and the evolution operators associated with the numerical models are highly nonlinear. From a statistical perspective, obtaining estimates of the true system state and its uncertainty in this environment can be carried out, in principle, by a type of inference called filtering, which attempts to obtain sequentially the posterior distribution of the state at the current time point based on all observations collected so far. The combination of high dimensionality and nonlinearity makes this a very challenging problem.

The ensemble Kalman filter (EnKF) is an approximate filtering method introduced in the geophysics literature by Evensen (1994). In contrast to the standard Kalman filter (Kalman 1960), which works with the entire distribution of the state explicitly, the EnKF stores, propagates, and updates an ensemble of vectors that approximates the state distribution. This ensemble representation is a form of dimension reduction, in that only a small ensemble is propagated instead of the joint distribution including the full covariance matrix. When new observations become available, the ensemble is updated by a linear "shift" based on the assumption of a linear Gaussian state-space model. Hence, additional approximations are introduced when non-Gaussianity or nonlinearity is involved. However, the EnKF has been highly successful in many extremely high-dimensional, nonlinear, and non-Gaussian data-assimilation applications. It is an embodiment of the principle that an approximate solution to the right problem is worth more than a precise solution to the wrong problem (Tukey 1962). For many realistic, highly complex systems, the EnKF is essentially the only way to do (approximate) inference, while alternative exact inference techniques can only be applied to highly simplified versions of the problem.

The key difference between the EnKF and other sequential Monte Carlo algorithms (e.g., particle filters) is the use of a linear updating rule that converts the prior ensemble to a posterior ensemble after each observation. Most other sequential Monte Carlo algorithms use a reweighting or resampling step, but it is well known that the weights degenerate (i.e., all but one weight are essentially zero) in high-dimensional problems (Snyder et al. 2008).
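To make the contrast with reweighting concrete, the following is a minimal sketch of a shift-based ensemble update in the stochastic, perturbed-observation form associated with Burgers, van Leeuwen, and Evensen (1998); the article derives its own version of the update later, so this is an illustrative assumption rather than the authors' exact algorithm. The notation (H for the observation matrix, R for the observation error covariance) anticipates the state-space model of Section 2, and the function name and interface are ours.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Shift-based (perturbed-observation) EnKF update -- illustrative sketch.

    X : (n, N) array, forecast ensemble (N members of an n-dimensional state)
    y : (m,) observation vector
    H : (m, n) observation matrix
    R : (m, m) observation error covariance
    rng : numpy random Generator
    Returns the updated (posterior) ensemble with the same shape as X.
    """
    n, N = X.shape
    # Sample covariance of the forecast ensemble (ensemble estimate of the forecast covariance)
    S = np.cov(X)
    # Estimated Kalman gain: S H' (H S H' + R)^{-1}
    K = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)
    # Each member is shifted toward its own perturbed copy of the observation
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y_pert - H @ X)
```

Note that only the N ensemble members need to be stored and propagated; the sketch forms the full n-by-n sample covariance only for clarity, whereas practical implementations work directly with the ensemble anomalies to avoid it.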
Most of the technical development and application of the EnKF has been in the geophysics literature (e.g., Burgers, van Leeuwen, and Evensen 1998; Houtekamer and Mitchell 1998; Bishop, Etherton, and Majumdar 2001; Tippett et al. 2003; Ott et al. 2004; see also Anderson 2009, for a review), and it has received relatively little attention in the statistics literature. This is at least partially because the jargon and notation in the geophysics literature can be daunting upon first glance. Yet, with the increased interest in approximate computational methods for "big data" in statistics, it is important that statisticians be aware of the power of this relatively simple methodology. We also believe that statisticians have much to contribute to this area of research. Thus, our goal in this article is to provide the elementary background and concepts behind the EnKF in a notation that is more common to the statistical state-space literature. Although the real strength of the EnKF is its application to high-dimensional nonlinear and non-Gaussian problems, for pedagogical purposes, we focus primarily on the case of linear measurement and evolution models with Gaussian errors to better illustrate the approach.

In Section 2, we review the standard Kalman filter for linear Gaussian state-space models. We then show in Section 3 how the basic EnKF can be derived based on the notions of conditional simulation, with an interpretation of shifting samples from the prior to the posterior based on observations. In Section 4, we discuss issues, extensions, and operational variants of the basic EnKF, and Section 5 concludes.

2. State-Space Models and the Kalman Filter

For discrete time points t = 1, 2, ..., assume a linear Gaussian state-space model,

$$y_t = H_t x_t + v_t, \qquad v_t \sim \mathcal{N}_{m_t}(0, R_t), \qquad (1)$$
$$x_t = M_t x_{t-1} + w_t, \qquad w_t \sim \mathcal{N}_n(0, Q_t), \qquad (2)$$

where y_t is the observed m_t-dimensional data vector at time t, x_t is the n-dimensional unobserved state vector of interest, and the observation and innovation errors v_t and w_t are mutually and serially independent. We call Equations (1)-(2) the observation and evolution equations, respectively.
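As a concrete illustration of model (1)-(2), here is a minimal simulation sketch. For simplicity the model matrices are taken to be constant in time, and the specific dimensions, matrices, and function name are illustrative choices of ours, not taken from the article.

```python
import numpy as np

def simulate_lgssm(M, H, Q, R, x0, T, rng):
    """Simulate states and observations from the linear Gaussian state-space
    model (1)-(2): x_t = M x_{t-1} + w_t and y_t = H x_t + v_t."""
    n, m = M.shape[0], H.shape[0]
    xs, ys = np.empty((T, n)), np.empty((T, m))
    x = x0
    for t in range(T):
        x = M @ x + rng.multivariate_normal(np.zeros(n), Q)   # evolution equation (2)
        y = H @ x + rng.multivariate_normal(np.zeros(m), R)   # observation equation (1)
        xs[t], ys[t] = x, y
    return xs, ys

# Example: a 2-dimensional state observed through its first coordinate only
rng = np.random.default_rng(0)
M = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(1)
xs, ys = simulate_lgssm(M, H, Q, R, x0=np.zeros(2), T=100, rng=rng)
```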
The Kalman filter proceeds by cycling through a forecast step and an update step at each time point. Assuming that the filtering distribution at the previous time t-1 is given by

$$x_{t-1} \mid y_{1:t-1} \sim \mathcal{N}_n(\hat{\mu}_{t-1}, \hat{\Sigma}_{t-1}), \qquad (3)$$

the forecast step computes the forecast distribution for time t based on (2) as

$$x_t \mid y_{1:t-1} \sim \mathcal{N}_n(\tilde{\mu}_t, \tilde{\Sigma}_t), \qquad \tilde{\mu}_t := M_t \hat{\mu}_{t-1}, \qquad \tilde{\Sigma}_t := M_t \hat{\Sigma}_{t-1} M_t' + Q_t. \qquad (4)$$

The update step modifies the forecast distribution using the new data y_t. The update formula can be easily derived by considering the joint distribution of x_t and y_t conditional on the past data y_{1:t-1}, which is given by a multivariate normal distribution,

$$\begin{pmatrix} x_t \\ y_t \end{pmatrix} \Bigg| \, y_{1:t-1} \sim \mathcal{N}_{n+m_t}\!\left( \begin{pmatrix} \tilde{\mu}_t \\ H_t \tilde{\mu}_t \end{pmatrix}, \begin{pmatrix} \tilde{\Sigma}_t & \tilde{\Sigma}_t H_t' \\ H_t \tilde{\Sigma}_t & H_t \tilde{\Sigma}_t H_t' + R_t \end{pmatrix} \right). \qquad (5)$$

Using well-known properties of the multivariate normal distribution, it follows that $x_t \mid y_{1:t} \sim \mathcal{N}_n(\hat{\mu}_t, \hat{\Sigma}_t)$, where the update equations are given by

$$\hat{\mu}_t := \tilde{\mu}_t + K_t (y_t - H_t \tilde{\mu}_t), \qquad (6)$$
$$\hat{\Sigma}_t := (I_n - K_t H_t) \tilde{\Sigma}_t, \qquad (7)$$

and $K_t := \tilde{\Sigma}_t H_t' (H_t \tilde{\Sigma}_t H_t' + R_t)^{-1}$ is the so-called Kalman gain matrix of size n x m_t.

An alternative expression for the update equations is given by

$$\hat{\mu}_t = \hat{\Sigma}_t (\tilde{\Sigma}_t^{-1} \tilde{\mu}_t + H_t' R_t^{-1} y_t), \qquad (8)$$
$$\hat{\Sigma}_t^{-1} = \tilde{\Sigma}_t^{-1} + H_t' R_t^{-1} H_t, \qquad (9)$$

where the second equation is obtained from (7) using the Sherman-Morrison-Woodbury formula (Sherman and Morrison 1950; Woodbury 1950). These equations allow a nice interpretation of the Kalman filter update. The filtered mean in (8) is a weighted average of the prior mean $\tilde{\mu}_t$ and the observation vector y_t, where the weights are proportional to the prior precision $\tilde{\Sigma}_t^{-1}$ and to $H_t' R_t^{-1}$, a combination of the observation matrix and the data precision.
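The forecast step (4) and the update step (6)-(7) translate directly into code. The sketch below assumes time-invariant model matrices and a moderate state dimension n, so that the covariance matrices can be formed and inverted explicitly; the function names and the example quantities at the end are ours.

```python
import numpy as np

def kf_forecast(mu_hat, Sigma_hat, M, Q):
    """Forecast step (4): propagate the filtering moments through the evolution model."""
    mu_tilde = M @ mu_hat
    Sigma_tilde = M @ Sigma_hat @ M.T + Q
    return mu_tilde, Sigma_tilde

def kf_update(mu_tilde, Sigma_tilde, y, H, R):
    """Update step (6)-(7): condition the forecast distribution on the new data y."""
    # Kalman gain K_t = Sigma~_t H' (H Sigma~_t H' + R)^{-1}
    S = H @ Sigma_tilde @ H.T + R
    K = Sigma_tilde @ H.T @ np.linalg.inv(S)
    mu_hat = mu_tilde + K @ (y - H @ mu_tilde)
    Sigma_hat = (np.eye(len(mu_tilde)) - K @ H) @ Sigma_tilde
    return mu_hat, Sigma_hat

# One forecast-update cycle with illustrative 2-dimensional quantities
M, Q = np.array([[1.0, 0.1], [0.0, 0.9]]), 0.01 * np.eye(2)
H, R = np.array([[1.0, 0.0]]), 0.1 * np.eye(1)
mu, Sigma = kf_forecast(np.zeros(2), np.eye(2), M, Q)
mu, Sigma = kf_update(mu, Sigma, y=np.array([0.5]), H=H, R=R)
```

Running this cycle over t = 1, 2, ... yields the filtering means and covariances sequentially; the EnKF described in the Introduction replaces the explicit covariance recursions with an ensemble-based approximation of the same two steps.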