
Assimilation Algorithms: Ensemble Kalman Filters

Massimo Bonavita, ECMWF
[email protected]

Acknowledgements: Mats Hamrud, Mike Fisher

Outline
• The Standard Kalman Filter
• Kalman Filters for large dimensional systems
• Approximate Kalman Filters: the Ensemble Kalman Filter
• Ensemble Kalman Filters in hybrid Data Assimilation

Standard Kalman Filter
• In the Overview of Assimilation Methods lecture we saw that, assuming all errors have Gaussian statistics, the posterior (i.e., analysis) distribution $p(\mathbf{x}|\mathbf{y})$ can also be expressed as a Gaussian distribution:

$$p(\mathbf{y}|\mathbf{x}) = \frac{1}{(2\pi)^{N/2}|\mathbf{R}|^{1/2}}\exp\left[-\frac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^T\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big)\right]$$

$$p(\mathbf{x}_b|\mathbf{x}) = \frac{1}{(2\pi)^{N/2}|\mathbf{P}^b|^{1/2}}\exp\left[-\frac{1}{2}(\mathbf{x}_b-\mathbf{x})^T(\mathbf{P}^b)^{-1}(\mathbf{x}_b-\mathbf{x})\right]$$

$$p(\mathbf{x}|\mathbf{y}) \propto p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x}_b|\mathbf{x}) \propto \exp\left[-\frac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^T\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big) - \frac{1}{2}(\mathbf{x}_b-\mathbf{x})^T(\mathbf{P}^b)^{-1}(\mathbf{x}_b-\mathbf{x})\right]$$

• Kalman Filter methods try to find the mean and covariance of this posterior distribution
• Note that, under this Gaussian assumption, knowing the mean and covariance of $p(\mathbf{x}|\mathbf{y})$ means knowing the full posterior pdf

Standard Kalman Filter
• Let us start with a simple univariate example. Assume we are analysing a single state variable $x$, whose errors are zero-mean and Gaussian distributed around its background forecast $x_b$:

$$p(x_b|x) = \frac{1}{(2\pi\sigma_b^2)^{1/2}}\exp\left[-\frac{1}{2}\frac{(x_b-x)^2}{\sigma_b^2}\right] \sim N(x_b,\sigma_b^2)$$

We have one observation of the state variable, also with Gaussian errors:

$$p(y|x) = \frac{1}{(2\pi\sigma_o^2)^{1/2}}\exp\left[-\frac{1}{2}\frac{(y-x)^2}{\sigma_o^2}\right] \sim N(y,\sigma_o^2)$$

Applying Bayes' theorem we find:

$$p(x|y) \propto p(y|x)\,p(x_b|x) \propto \exp\left[-\frac{1}{2}\frac{(y-x)^2}{\sigma_o^2} - \frac{1}{2}\frac{(x_b-x)^2}{\sigma_b^2}\right] \propto \exp\left\{-\frac{1}{2}\left[\left(\frac{1}{\sigma_b^2}+\frac{1}{\sigma_o^2}\right)x^2 - 2\left(\frac{x_b}{\sigma_b^2}+\frac{y}{\sigma_o^2}\right)x\right]\right\}$$

Comparing to a standard Gaussian distribution:

$$p(x) = \frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left[-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}\right] \propto \exp\left\{-\frac{1}{2}\left[\frac{x^2}{\sigma^2} - 2\frac{\mu}{\sigma^2}x + \frac{\mu^2}{\sigma^2}\right]\right\}$$

we see that the posterior distribution is also Gaussian, with mean and variance:

$$\mathrm{Var}(x|y) = \sigma^2 = \left(\frac{1}{\sigma_b^2}+\frac{1}{\sigma_o^2}\right)^{-1} = \frac{\sigma_o^2\,\sigma_b^2}{\sigma_o^2+\sigma_b^2}$$

$$E(x|y) = \mu = \sigma^2\left(\frac{x_b}{\sigma_b^2}+\frac{y}{\sigma_o^2}\right) = \frac{\sigma_o^2\,\sigma_b^2}{\sigma_o^2+\sigma_b^2}\left(\frac{x_b}{\sigma_b^2}+\frac{y}{\sigma_o^2}\right) = \frac{\sigma_o^2}{\sigma_o^2+\sigma_b^2}\,x_b + \frac{\sigma_b^2}{\sigma_o^2+\sigma_b^2}\,y$$

Standard Kalman Filter
• A simple univariate example (continued). Introducing the Kalman gain

$$K = \frac{\sigma_b^2}{\sigma_o^2+\sigma_b^2}$$

the equations for the mean and variance can be recast as:

$$\mathrm{Var}(x|y) = \frac{\sigma_o^2\,\sigma_b^2}{\sigma_o^2+\sigma_b^2} = (1-K)\,\sigma_b^2$$

$$E(x|y) = \frac{\sigma_o^2}{\sigma_o^2+\sigma_b^2}\,x_b + \frac{\sigma_b^2}{\sigma_o^2+\sigma_b^2}\,y = x_b + K(y - x_b)$$

The posterior variance is thus reduced with respect to the prior (background) variance, since $0 < 1-K < 1$, while the posterior mean is a linear, weighted average of the prior and the observation.

Standard Kalman Filter
• These Kalman Filter analysis update equations can be generalised to the multi-dimensional and multivariate case (Wikle and Berliner, 2007; Bromiley, 2014):

$$E(\mathbf{x}|\mathbf{y}) = \mathbf{x}^a = \mathbf{x}^b + \mathbf{K}\big(\mathbf{y} - H(\mathbf{x}^b)\big)$$

$$\mathrm{Var}(\mathbf{x}|\mathbf{y}) = \mathbf{P}^a = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^b(\mathbf{I}-\mathbf{K}\mathbf{H})^T + \mathbf{K}\mathbf{R}\mathbf{K}^T = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^b - \mathbf{P}^b\mathbf{H}^T\mathbf{K}^T + \mathbf{K}\big(\mathbf{H}\mathbf{P}^b\mathbf{H}^T+\mathbf{R}\big)\mathbf{K}^T = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^b$$

$$\mathbf{K} = \mathbf{P}^b\mathbf{H}^T\big(\mathbf{H}\mathbf{P}^b\mathbf{H}^T+\mathbf{R}\big)^{-1} = \left[(\mathbf{P}^b)^{-1} + \mathbf{H}^T\mathbf{R}^{-1}\mathbf{H}\right]^{-1}\mathbf{H}^T\mathbf{R}^{-1}$$

• These are the same update equations obtained in Assimilation Algorithms (1). In that context they were derived without making assumptions of Gaussianity, but by looking for the analysis estimate which has minimum error variance and can be expressed as a linear combination of the background and the observations (we called it the BLUE, Best Linear Unbiased Estimate). Linearity of the observation operator and of the model was invoked.
• If the background and observation errors are normally distributed, the Kalman Filter update equations give us the mean and the covariance of the posterior distribution. Under these hypotheses the posterior distribution is also Gaussian, so we have completely solved the problem!
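To make the multivariate update concrete, here is a minimal NumPy sketch of the analysis step. The function name `kf_analysis_update` and the numbers in the scalar check are illustrative choices, not part of the lecture:

```python
import numpy as np

def kf_analysis_update(xb, Pb, y, H, R):
    """Kalman Filter analysis update: posterior mean and covariance.

    xb: background state (n,)      Pb: background error covariance (n, n)
    y:  observations (p,)          H:  linear observation operator (p, n)
    R:  observation error covariance (p, p)
    """
    S = H @ Pb @ H.T + R                  # innovation covariance H Pb H^T + R
    K = Pb @ H.T @ np.linalg.inv(S)       # Kalman gain
    xa = xb + K @ (y - H @ xb)            # posterior mean (the BLUE)
    Pa = (np.eye(len(xb)) - K @ H) @ Pb   # posterior covariance (I - KH) Pb
    return xa, Pa

# Scalar sanity check against the univariate formulas:
# x_b = 1, sigma_b^2 = 2, y = 2, sigma_o^2 = 1 gives
# mu = (sigma_o^2 x_b + sigma_b^2 y) / (sigma_o^2 + sigma_b^2) = 5/3
# sigma^2 = sigma_o^2 sigma_b^2 / (sigma_o^2 + sigma_b^2) = 2/3
xa, Pa = kf_analysis_update(np.array([1.0]), np.array([[2.0]]),
                            np.array([2.0]), np.array([[1.0]]), np.array([[1.0]]))
print(xa, Pa)  # [1.6667] [[0.6667]]
```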
Standard Kalman Filter
• In NWP applications of data assimilation we want to update our estimate of the state and its uncertainty at later times, as new observations come in: we want to cycle the analysis
• For each analysis in this cycle we require a background $\mathbf{x}^b_t$ (i.e., a prior estimate of the state valid at time $t$)
• Usually, our best prior estimate of the state at time $t$ is given by a forecast from the preceding analysis at time $t-1$ (the "background"):

$$\mathbf{x}^b_t = M(\mathbf{x}^a_{t-1})$$

• What is the error covariance matrix (i.e., the uncertainty) associated with this background?

Standard Kalman Filter
• What is the error covariance matrix associated with the background forecast $\mathbf{x}^b_t = M(\mathbf{x}^a_{t-1})$?
• Subtract the true state $\mathbf{x}^*_t$ from both sides of the equation:

$$\mathbf{x}^b_t - \mathbf{x}^*_t = \boldsymbol{\varepsilon}^b_t = M(\mathbf{x}^a_{t-1}) - \mathbf{x}^*_t$$

• Since $\mathbf{x}^a_{t-1} = \mathbf{x}^*_{t-1} + \boldsymbol{\varepsilon}^a_{t-1}$ we have:

$$\boldsymbol{\varepsilon}^b_t = M(\mathbf{x}^*_{t-1} + \boldsymbol{\varepsilon}^a_{t-1}) - \mathbf{x}^*_t \approx M(\mathbf{x}^*_{t-1}) + \mathbf{M}\boldsymbol{\varepsilon}^a_{t-1} - \mathbf{x}^*_t = \mathbf{M}\boldsymbol{\varepsilon}^a_{t-1} + \boldsymbol{\eta}_t$$

• Here we have defined the model error $\boldsymbol{\eta}_t = M(\mathbf{x}^*_{t-1}) - \mathbf{x}^*_t$
• We will also assume that no systematic errors are present in our system: $\langle\boldsymbol{\varepsilon}^a\rangle = \langle\boldsymbol{\eta}\rangle = 0 \Rightarrow \langle\boldsymbol{\varepsilon}^b\rangle = 0$

Standard Kalman Filter
• The background error covariance matrix will then be given by:

$$\mathbf{P}^b_t = \langle\boldsymbol{\varepsilon}^b_t(\boldsymbol{\varepsilon}^b_t)^T\rangle = \langle(\mathbf{M}\boldsymbol{\varepsilon}^a_{t-1}+\boldsymbol{\eta}_t)(\mathbf{M}\boldsymbol{\varepsilon}^a_{t-1}+\boldsymbol{\eta}_t)^T\rangle = \mathbf{M}\langle\boldsymbol{\varepsilon}^a_{t-1}(\boldsymbol{\varepsilon}^a_{t-1})^T\rangle\mathbf{M}^T + \langle\boldsymbol{\eta}_t\boldsymbol{\eta}_t^T\rangle = \mathbf{M}\mathbf{P}^a_{t-1}\mathbf{M}^T + \mathbf{Q}_t$$

• Here we have assumed $\langle\boldsymbol{\varepsilon}^a_{t-1}\boldsymbol{\eta}_t^T\rangle = 0$ and defined the model error covariance matrix $\mathbf{Q}_t = \langle\boldsymbol{\eta}_t\boldsymbol{\eta}_t^T\rangle$
• Note how the background error is described as the sum of the errors of the previous analysis propagated by the model dynamics to the time of the update ($\mathbf{M}\mathbf{P}^a_{t-1}\mathbf{M}^T$) and the new errors introduced by the model integration ($\mathbf{Q}_t$)
• We now have all the equations necessary to propagate and update both the state and its error estimates

Standard Kalman Filter
• The filter cycles between a propagation step and an update step as new observations arrive (a toy implementation of this loop is sketched after these slides):

Propagation (from the analysis at time $t-1$):
1. Predict the state ahead: $\mathbf{x}^b_t = M(\mathbf{x}^a_{t-1})$
2. Predict the state error covariance: $\mathbf{P}^b_t = \mathbf{M}\mathbf{P}^a_{t-1}\mathbf{M}^T + \mathbf{Q}_t$

Update (with the new observations valid at time $t$):
3. Compute the Kalman gain: $\mathbf{K} = \mathbf{P}^b\mathbf{H}^T(\mathbf{H}\mathbf{P}^b\mathbf{H}^T + \mathbf{R})^{-1}$
4. Update the state estimate: $\mathbf{x}^a_t = \mathbf{x}^b_t + \mathbf{K}\big(\mathbf{y} - H(\mathbf{x}^b_t)\big)$
5. Update the state error estimate: $\mathbf{P}^a_t = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^b_t(\mathbf{I}-\mathbf{K}\mathbf{H})^T + \mathbf{K}\mathbf{R}\mathbf{K}^T$

The cycle then repeats: $\mathbf{x}^b_{t+1} = M(\mathbf{x}^a_t)$, $\mathbf{P}^b_{t+1} = \mathbf{M}\mathbf{P}^a_t\mathbf{M}^T + \mathbf{Q}_{t+1}$

Standard Kalman Filter
• Under the assumption that the model $\mathbf{M}$ and the observation operator $\mathbf{H}$ are linear operators (i.e., they do not depend on the state $\mathbf{x}$), the Kalman Filter produces an optimal sequence of analyses $(\mathbf{x}^a_1, \mathbf{x}^a_2, \ldots, \mathbf{x}^a_{t-1}, \mathbf{x}^a_t)$
• The KF analysis $\mathbf{x}^a_t$ is the best (minimum variance) linear unbiased estimate of the state at time $t$, given the initial background $\mathbf{x}^b_0$ and all observations up to time $t$ $(\mathbf{y}_0, \mathbf{y}_1, \ldots, \mathbf{y}_t)$
• Note that Gaussianity of errors is not required. If errors are Gaussian, the Kalman Filter provides the correct posterior conditional probability estimate (according to Bayes' law), i.e. $p(\mathbf{x}^a_t\,|\,\mathbf{x}^b_0; \mathbf{y}_0, \mathbf{y}_1, \ldots, \mathbf{y}_t)$. This also implies that if errors are Gaussian, the state estimated with the KF is also the most likely state (the mode of the pdf)
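The following sketch runs the five-step cycle on a two-variable toy system in NumPy. All names and numerical values (the model matrix `M`, the covariances `Q` and `R`, the observation operator `H`) are illustrative choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
M = np.array([[1.0, 0.1],
              [0.0, 0.95]])           # toy linear model
Q = 0.01 * np.eye(n)                  # model-error covariance
H = np.array([[1.0, 0.0]])            # observe the first state component only
R = np.array([[0.05]])                # observation-error covariance

x_true = np.array([1.0, 0.5])         # synthetic "truth" used to generate observations
xa, Pa = np.zeros(n), np.eye(n)       # initial analysis and its error covariance

for t in range(10):
    # --- Propagation ---
    x_true = M @ x_true + rng.multivariate_normal(np.zeros(n), Q)
    xb = M @ xa                                        # 1. predict the state
    Pb = M @ Pa @ M.T + Q                              # 2. predict the error covariance
    # --- Update ---
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)     # 3. Kalman gain
    xa = xb + K @ (y - H @ xb)                         # 4. update the state estimate
    Pa = (np.eye(n) - K @ H) @ Pb                      # 5. update the error covariance
    print(f"t={t}: analysis={xa}, error variances={np.diag(Pa)}")
```

In this toy setting the analysis error variances shrink over the first few cycles and then level off where the information gained from each new observation balances the error growth injected through $\mathbf{Q}$.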
Extended Kalman Filter
• The Extended Kalman Filter (EKF) is a non-linear extension of the Kalman Filter in which the model and observation operators are not required to be linear operators (independent of the state), as they are in the standard KF:

$$\mathbf{y} = H(\mathbf{x}^b) + \boldsymbol{\varepsilon}^o \;\text{(EKF)} \qquad \mathbf{y} = \mathbf{H}\mathbf{x}^b + \boldsymbol{\varepsilon}^o \;\text{(KF)}$$
$$\mathbf{x}^b = M(\mathbf{x}^a) + \boldsymbol{\eta} \;\text{(EKF)} \qquad \mathbf{x}^b = \mathbf{M}\mathbf{x}^a + \boldsymbol{\eta} \;\text{(KF)}$$

• The covariance update and prediction steps of the KF equations use the Jacobians of the model and observation operators, linearised around the analysed/predicted state, i.e.:

$$\mathbf{M} = \frac{\partial M}{\partial \mathbf{x}}\big(\mathbf{x}^a_{t-1}\big), \qquad \mathbf{H} = \frac{\partial H}{\partial \mathbf{x}}\big(\mathbf{x}^b_t\big)$$

• The EKF is thus a first-order linearisation of the KF equations around the current state estimates. It is only as good as a first-order Taylor expansion is a good approximation of the nonlinear system we are dealing with (a sketch of a single EKF cycle is given at the end of this section)
• A type of EKF is used at ECMWF in the analysis of soil variables (the Simplified Extended Kalman Filter, SEKF). More on this in the dedicated lecture.

Kalman Filters for large dimensional systems
• The Kalman Filter (standard or extended) is unfeasible for large-dimensional systems
• The size $N$ of the analysis/background state in the ECMWF 4D-Var is $O(10^8)$: the KF requires us to store and evolve in time state covariance matrices ($\mathbf{P}^{a/b}$) of size $N \times N$
  ➢ The world's fastest computers can sustain $\sim 10^{15}$ operations per second
  ➢ An efficient implementation of the matrix multiplication of two $10^8 \times 10^8$ matrices requires $\sim 10^{22}$ operations ($O(N^{2.8})$): about 4 months on the fastest computer!
  ➢ Evaluating $\mathbf{P}^b_t = \mathbf{M}\mathbf{P}^a_{t-1}\mathbf{M}^T + \mathbf{Q}_t$ requires $2N \approx 2 \times 10^8$ model integrations!
• A range of approximate Kalman Filters has been developed for use with large-dimensional systems.
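For illustration, here is a minimal sketch of one EKF cycle in which the Jacobians are approximated by finite differences: a hypothetical stand-in for the hand-coded tangent-linear and adjoint operators used operationally, with all function names being illustrative. Note that even this crude Jacobian costs one extra model integration per state dimension, which is exactly why the explicit covariance propagation on the preceding slide is hopeless at $N \sim 10^8$:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x: one extra f-evaluation per state dimension."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def ekf_cycle(xa, Pa, y, model, obs_op, Q, R):
    """One EKF propagation + update, linearising around the current state estimates."""
    # Propagation: full nonlinear model for the state, Jacobian for the covariance.
    xb = model(xa)
    M = jacobian_fd(model, xa)            # M = dM/dx evaluated at x_a(t-1)
    Pb = M @ Pa @ M.T + Q
    # Update: full nonlinear observation operator, Jacobian for gain and covariance.
    H = jacobian_fd(obs_op, xb)           # H = dH/dx evaluated at x_b(t)
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)
    xa_new = xb + K @ (y - obs_op(xb))
    Pa_new = (np.eye(xb.size) - K @ H) @ Pb
    return xa_new, Pa_new

# Example with a weakly nonlinear toy model and a direct observation of component 0.
model = lambda x: np.array([x[0] + 0.1 * x[1], 0.95 * x[1] + 0.05 * x[0]**2])
obs_op = lambda x: x[:1]
xa, Pa = ekf_cycle(np.array([1.0, 0.5]), np.eye(2), np.array([1.2]),
                   model, obs_op, 0.01 * np.eye(2), np.array([[0.05]]))
```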