Ensemble Kalman Filters


Assimilation Algorithms: Ensemble Kalman Filters
Massimo Bonavita, ECMWF ([email protected])
Acknowledgements: Mats Hamrud, Mike Fisher

Outline
• The Standard Kalman Filter
• Kalman Filters for large dimensional systems
• Approximate Kalman Filters: the Ensemble Kalman Filter
• Ensemble Kalman Filters in hybrid Data Assimilation

Standard Kalman Filter

• In the Overview of Assimilation Methods lecture we saw that, assuming all errors have Gaussian statistics, the posterior (i.e., analysis) distribution $p(\mathbf{x}|\mathbf{y})$ can also be expressed as a Gaussian distribution:

$p(\mathbf{y}|\mathbf{x}) = \frac{1}{(2\pi)^{N/2}|\mathbf{R}|^{1/2}}\exp\left[-\frac{1}{2}\left(\mathbf{y}-H(\mathbf{x})\right)^T\mathbf{R}^{-1}\left(\mathbf{y}-H(\mathbf{x})\right)\right]$

$p(\mathbf{x}_b|\mathbf{x}) = \frac{1}{(2\pi)^{N/2}|\mathbf{P}_B|^{1/2}}\exp\left[-\frac{1}{2}\left(\mathbf{x}_b-\mathbf{x}\right)^T\mathbf{P}_B^{-1}\left(\mathbf{x}_b-\mathbf{x}\right)\right]$

$p(\mathbf{x}|\mathbf{y}) \propto p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x}_b|\mathbf{x}) \propto \exp\left[-\frac{1}{2}\left(\mathbf{y}-H(\mathbf{x})\right)^T\mathbf{R}^{-1}\left(\mathbf{y}-H(\mathbf{x})\right)-\frac{1}{2}\left(\mathbf{x}_b-\mathbf{x}\right)^T\mathbf{P}_B^{-1}\left(\mathbf{x}_b-\mathbf{x}\right)\right]$

• Kalman Filter methods try to find the mean and covariance of this posterior distribution
• Note that, under this Gaussian assumption, knowing the mean and covariance of $p(\mathbf{x}|\mathbf{y})$ means knowing the full posterior pdf

• Let us start with a simple univariate example. Assume we are analysing a single state variable $x$, whose errors are zero-mean and Gaussian distributed around its background forecast $x_b$:

$p(x_b|x) = \frac{1}{(2\pi\sigma_b^2)^{1/2}}\exp\left[-\frac{1}{2}\frac{(x_b-x)^2}{\sigma_b^2}\right] \sim N(x_b,\sigma_b^2)$

We have one observation of the state variable, also with Gaussian errors:

$p(y|x) = \frac{1}{(2\pi\sigma_o^2)^{1/2}}\exp\left[-\frac{1}{2}\frac{(y-x)^2}{\sigma_o^2}\right] \sim N(y,\sigma_o^2)$

Applying Bayes' theorem we find:

$p(x|y) \propto p(y|x)\,p(x_b|x) \propto \exp\left[-\frac{1}{2}\frac{(y-x)^2}{\sigma_o^2}-\frac{1}{2}\frac{(x_b-x)^2}{\sigma_b^2}\right] \propto \exp\left[-\frac{1}{2}\left(\left(\frac{1}{\sigma_b^2}+\frac{1}{\sigma_o^2}\right)x^2-2\left(\frac{x_b}{\sigma_b^2}+\frac{y}{\sigma_o^2}\right)x\right)\right]$

Comparing with a standard Gaussian distribution,

$p(x) = \frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left[-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}\right] \propto \exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma^2}-2\frac{\mu}{\sigma^2}x+\frac{\mu^2}{\sigma^2}\right)\right]$

we see that the posterior distribution is also Gaussian, with variance and mean:

$Var(x|y) = \sigma^2 = \left(\frac{1}{\sigma_b^2}+\frac{1}{\sigma_o^2}\right)^{-1} = \frac{\sigma_o^2\sigma_b^2}{\sigma_o^2+\sigma_b^2}$

$E(x|y) = \mu = \sigma^2\left(\frac{x_b}{\sigma_b^2}+\frac{y}{\sigma_o^2}\right) = \frac{\sigma_o^2\sigma_b^2}{\sigma_o^2+\sigma_b^2}\left(\frac{x_b}{\sigma_b^2}+\frac{y}{\sigma_o^2}\right) = \frac{\sigma_o^2}{\sigma_o^2+\sigma_b^2}\,x_b+\frac{\sigma_b^2}{\sigma_o^2+\sigma_b^2}\,y$

• Introducing the Kalman gain $K = \frac{\sigma_b^2}{\sigma_o^2+\sigma_b^2}$, the equations for the mean and variance can be recast as:

$Var(x|y) = \frac{\sigma_o^2\sigma_b^2}{\sigma_o^2+\sigma_b^2} = (1-K)\,\sigma_b^2$

$E(x|y) = \frac{\sigma_o^2}{\sigma_o^2+\sigma_b^2}\,x_b+\frac{\sigma_b^2}{\sigma_o^2+\sigma_b^2}\,y = x_b+K\,(y-x_b)$

The posterior variance is thus reduced with respect to the prior (background) variance, since $0 < 1-K < 1$, while the posterior mean is a linear, weighted average of the prior and the observation.

• These Kalman Filter analysis update equations can be generalised to the multi-dimensional and multivariate case (Wikle and Berliner, 2007; Bromiley, 2014):

$E(\mathbf{x}|\mathbf{y}) = \mathbf{x}_a = \mathbf{x}_b+\mathbf{K}\left(\mathbf{y}-\mathbf{H}(\mathbf{x}_b)\right)$

$Var(\mathbf{x}|\mathbf{y}) = \mathbf{P}^a = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^b(\mathbf{I}-\mathbf{K}\mathbf{H})^T+\mathbf{K}\mathbf{R}\mathbf{K}^T = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^b-\mathbf{P}^b\mathbf{H}^T\mathbf{K}^T+\mathbf{K}\left(\mathbf{H}\mathbf{P}^b\mathbf{H}^T+\mathbf{R}\right)\mathbf{K}^T = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}^b$

$\mathbf{K} = \mathbf{P}^b\mathbf{H}^T\left(\mathbf{H}\mathbf{P}^b\mathbf{H}^T+\mathbf{R}\right)^{-1} = \left((\mathbf{P}^b)^{-1}+\mathbf{H}^T\mathbf{R}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{R}^{-1}$

These are the same update equations obtained in Assimilation Algorithms (1). In that context they were derived without making assumptions of Gaussianity, but by looking for the analysis estimate with minimum error variance that could be expressed as a linear combination of the background and the observations (we called it the BLUE, Best Linear Unbiased Estimate); linearity of the observation operator and of the model was invoked. If the background and observations are normally distributed, the Kalman Filter update equations give us the mean and the covariance of the posterior distribution. Under these hypotheses the posterior distribution is also Gaussian, so we have completely solved the problem!

• In NWP applications of data assimilation we want to update our estimate of the state and its uncertainty at later times, as new observations come in: we want to cycle the analysis
• For each analysis in this cycle we require a background $x_t^b$ (i.e.
a prior estimate of the state valid at time $t$)
• Usually, our best prior estimate of the state at time $t$ is given by a forecast from the preceding analysis at time $t-1$ (the "background"): $x_t^b = M(x_{t-1}^a)$
• What is the error covariance matrix (i.e., the uncertainty) associated with this background? Subtract the true state $x^*$ from both sides of the equation:

$x_t^b-x_t^* = \varepsilon_t^b = M(x_{t-1}^a)-x_t^*$

• Since $x_{t-1}^a = x_{t-1}^*+\varepsilon_{t-1}^a$, we have:

$\varepsilon_t^b = M(x_{t-1}^*+\varepsilon_{t-1}^a)-x_t^* \approx M(x_{t-1}^*)+\mathbf{M}\varepsilon_{t-1}^a-x_t^* = \mathbf{M}\varepsilon_{t-1}^a+\eta_t$

• Here we have defined the model error $\eta_t = M(x_{t-1}^*)-x_t^*$
• We will also assume that no systematic errors are present in our system: $\langle\varepsilon^a\rangle = \langle\eta\rangle = 0 \;\Rightarrow\; \langle\varepsilon^b\rangle = 0$
• The background error covariance matrix will then be given by:

$\mathbf{P}_t^b = \langle\varepsilon_t^b(\varepsilon_t^b)^T\rangle = \langle(\mathbf{M}\varepsilon_{t-1}^a+\eta_t)(\mathbf{M}\varepsilon_{t-1}^a+\eta_t)^T\rangle = \mathbf{M}\langle\varepsilon_{t-1}^a(\varepsilon_{t-1}^a)^T\rangle\mathbf{M}^T+\langle\eta_t\eta_t^T\rangle = \mathbf{M}\mathbf{P}_{t-1}^a\mathbf{M}^T+\mathbf{Q}_t$

• Here we have assumed $\langle\varepsilon_{t-1}^a\eta_t^T\rangle = 0$ and defined the model error covariance matrix $\mathbf{Q}_t = \langle\eta_t\eta_t^T\rangle$
• Note how the background error is described as the sum of the errors of the previous analysis propagated by the model dynamics to the time of the update ($\mathbf{M}\mathbf{P}_{t-1}^a\mathbf{M}^T$) and the new errors introduced by the model integration ($\mathbf{Q}_t$)
• We now have all the equations necessary to propagate and update both the state and its error estimates. Starting from the analysis $x_{t-1}^a$, $\mathbf{P}_{t-1}^a$, one assimilation cycle runs:

Propagation:
1. Predict the state ahead: $x_t^b = M(x_{t-1}^a)$
2. Predict the state error covariance: $\mathbf{P}_t^b = \mathbf{M}\mathbf{P}_{t-1}^a\mathbf{M}^T+\mathbf{Q}_t$

Update (as the new observations $\mathbf{y}$ arrive):
3. Compute the Kalman gain: $\mathbf{K} = \mathbf{P}^b\mathbf{H}^T(\mathbf{H}\mathbf{P}^b\mathbf{H}^T+\mathbf{R})^{-1}$
4. Update the state estimate: $x_t^a = x_t^b+\mathbf{K}\left(\mathbf{y}-H(x_t^b)\right)$
5. Update the state error estimate: $\mathbf{P}_t^a = (\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}_t^b(\mathbf{I}-\mathbf{K}\mathbf{H})^T+\mathbf{K}\mathbf{R}\mathbf{K}^T$

The cycle then repeats from step 1: $x_{t+1}^b = M(x_t^a)$, $\mathbf{P}_{t+1}^b = \mathbf{M}\mathbf{P}_t^a\mathbf{M}^T+\mathbf{Q}_{t+1}$.

• Under the assumption that the model $M$ and the observation operator $H$ are linear operators (i.e., they do not depend on the state $x$), the Kalman Filter produces an optimal sequence of analyses $(x_1^a, x_2^a, \ldots, x_{t-1}^a, x_t^a)$
• The KF analysis $x_t^a$ is the best (minimum variance) linear unbiased estimate of the state at time $t$, given $x_t^b$ and all observations up to time $t$ ($y_0, y_1, \ldots, y_t$)
• Note that Gaussianity of errors is not required. If errors are Gaussian, the Kalman Filter provides the correct posterior conditional probability estimate (according to Bayes' law), i.e. $p(x_t^a \mid x_0^b; y_0, y_1, \ldots, y_t)$. This also implies that if errors are Gaussian then the state estimated with the KF is also the most likely state (the mode of the pdf).

Extended Kalman Filter

• The extended Kalman Filter (EKF) is a non-linear extension of the Kalman Filter in which the model and observation operators are not required to be linear operators (independent of the state) as in the standard KF:

$\mathbf{y} = H(\mathbf{x}_b)+\boldsymbol{\varepsilon}_o$ (EKF) vs. $\mathbf{y} = \mathbf{H}\mathbf{x}_b+\boldsymbol{\varepsilon}_o$ (KF)
$\mathbf{x}_b = M(\mathbf{x}_a)+\boldsymbol{\eta}$ (EKF) vs. $\mathbf{x}_b = \mathbf{M}\mathbf{x}_a+\boldsymbol{\eta}$ (KF)

• The covariance update and prediction steps of the KF equations then use the Jacobians of the model and observation operators, linearised around the analysed/predicted state:

$\mathbf{M} = \frac{\partial M}{\partial\mathbf{x}}(x_{a,t-1}), \qquad \mathbf{H} = \frac{\partial H}{\partial\mathbf{x}}(x_{b,t})$

• The EKF is thus a first-order linearisation of the KF equations around the current state estimates. As such, it is only as good as a first-order Taylor expansion is a good approximation of the nonlinear system we are dealing with.
• A type of EKF is used at ECMWF in the analysis of soil variables (the Simplified Extended Kalman Filter, SEKF). More on this later in the dedicated lecture.
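The five-step propagation/update cycle described above can be sketched end-to-end in a few lines of NumPy. This is a toy illustration, not any operational configuration: the two-variable linear model M, the operators H, Q, R, the initial state, and the synthetic observations are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system: two state variables, only the first is observed
M = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # model operator (linear, state-independent)
H = np.array([[1.0, 0.0]])        # observation operator
Q = 0.01 * np.eye(2)              # model error covariance
R = np.array([[0.04]])            # observation error covariance

x_true = np.array([1.0, 0.5])     # "true" state used to synthesise observations
x_a = np.zeros(2)                 # initial analysis
P_a = np.eye(2)                   # initial analysis error covariance

for t in range(20):
    x_true = M @ x_true
    # 1. Predict the state ahead
    x_b = M @ x_a
    # 2. Predict the state error covariance
    P_b = M @ P_a @ M.T + Q
    # 3. Compute the Kalman gain
    K = P_b @ H.T @ np.linalg.inv(H @ P_b @ H.T + R)
    # 4. Update the state estimate with a synthetic, noisy observation
    y = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    x_a = x_b + K @ (y - H @ x_b)
    # 5. Update the state error estimate (Joseph form)
    I_KH = np.eye(2) - K @ H
    P_a = I_KH @ P_b @ I_KH.T + K @ R @ K.T
```

Note that the analysis covariance at each step is smaller (in the trace sense) than the background covariance, exactly as the scalar result $Var(x|y) = (1-K)\sigma_b^2$ suggests.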
Kalman Filters for large dimensional systems

• The Kalman Filter (standard or extended) is unfeasible for large dimensional systems
• The size $N$ of the analysis/background state in the ECMWF 4D-Var is $O(10^8)$: the KF requires us to store and evolve in time state covariance matrices ($\mathbf{P}^{a/b}$) of size $N \times N$
➢ The World's fastest computers can sustain ~$10^{15}$ operations per second
➢ An efficient implementation of matrix multiplication of two $10^8 \times 10^8$ matrices requires ~$10^{22}$ ($O(N^{2.8})$) operations: about 4 months on the fastest computer!
➢ Evaluating $\mathbf{P}_t^b = \mathbf{M}\mathbf{P}_{t-1}^a\mathbf{M}^T+\mathbf{Q}_t$ requires $2N \approx 2 \times 10^8$ model integrations!
• A range of approximate Kalman Filters has been developed for use with large-dimensional systems.
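One way such approximations sidestep these costs is to carry a small ensemble of states and estimate $\mathbf{P}^b$ from its sample covariance, so the full $N \times N$ matrix is never formed or propagated explicitly. Below is a minimal sketch of this idea in the stochastic (perturbed-observation) ensemble Kalman filter flavour, on a toy problem; the model, state size, ensemble size, and error statistics are all illustrative, not an operational setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n, n_ens = 3, 50                      # state size and ensemble size (toy values)
M = np.eye(n)                          # trivial model: persistence
H = np.array([[1.0, 0.0, 0.0]])        # observe the first variable only
R = np.array([[0.1]])

x_true = np.ones(n)
X = x_true[:, None] + rng.standard_normal((n, n_ens))   # background ensemble

for t in range(10):
    x_true = M @ x_true
    X = M @ X                          # propagate members: no explicit P^b needed
    # Estimate the background covariance from the ensemble perturbations
    A = X - X.mean(axis=1, keepdims=True)
    P_b = A @ A.T / (n_ens - 1)
    K = P_b @ H.T @ np.linalg.inv(H @ P_b @ H.T + R)
    # Stochastic EnKF: each member assimilates its own perturbed observation
    y = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    Y = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(1, n_ens))
    X = X + K @ (Y - H @ X)

x_a = X.mean(axis=1)                   # analysis = ensemble mean
```

The $N \times N$ covariance only ever appears through products with the $N \times n_{ens}$ perturbation matrix, which is what makes the approach affordable for large $N$.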
Recommended publications
  • Introduction to Ensemble Kalman Filters and the Data Assimilation Research Testbed
    Introduction to Ensemble Kalman Filters and the Data Assimilation Research Testbed. Jeffrey Anderson, Tim Hoar, Nancy Collins, NCAR Institute for Math Applied to Geophysics. ICAP Workshop; 11 May 2011. What is Data Assimilation? Observations combined with a model forecast to produce an analysis (the best possible estimate). Example: estimating the temperature outside. An observation has a value (*) and an error distribution (red curve) that is associated with the instrument. A thermometer outside measures 1 C; the instrument builder says the thermometer is unbiased with +/- 0.8 C Gaussian error. The red plot is $P(T|T_o)$, the probability of temperature given that $T_o$ was observed. We also have a prior estimate of temperature: the green curve is $P(T|C)$, the probability of temperature given all available prior information $C$. Prior information $C$ can include: 1. observations of things besides $T$; 2. a model forecast made using observations at earlier times; 3. a priori physical constraints ($T > -273.15$ C); 4. climatological constraints ($-30$ C $< T < 40$ C). Combining the prior estimate and observation, Bayes' theorem gives $P(T|T_o,C) = P(T_o|T,C)\,P(T|C)\,/\,P(T_o|C)$: the posterior (the probability of $T$ given the observations and prior) is the likelihood (the probability that $T_o$ is observed if $T$ is the true value) times the prior, normalised by $P(T_o|C)$.
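For two Gaussians, the prior-times-likelihood product in the abstract above has a closed form, which can be checked in a few lines. The 1 C observation with 0.8 C error comes from the example; the prior mean and spread below are invented purely for illustration.

```python
def combine_gaussians(mu_prior, var_prior, obs, var_obs):
    """Posterior mean and variance for a Gaussian prior and Gaussian likelihood.

    This is the scalar Bayes update: precisions add, and the posterior mean is
    the precision-weighted average of the prior mean and the observation.
    """
    var_post = 1.0 / (1.0 / var_prior + 1.0 / var_obs)
    mu_post = var_post * (mu_prior / var_prior + obs / var_obs)
    return mu_post, var_post

# Thermometer reads 1 C with 0.8 C (std) Gaussian error; assume an
# illustrative prior of 2 C with 1.2 C std (not a value from the slides)
mu, var = combine_gaussians(2.0, 1.2**2, 1.0, 0.8**2)
```

The posterior variance is always smaller than both the prior and observation variances, so the analysis is sharper than either input.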
  • Localization in the Ensemble Kalman Filter
    Department of Meteorology. Localization in the ensemble Kalman Filter. Ruth Elizabeth Petrie. A dissertation submitted in partial fulfilment of the requirement for the degree of MSc. Atmosphere, Ocean and Climate. August 2008. Abstract: Data assimilation in meteorology seeks to provide a current analysis of the state of the atmosphere to use as initial conditions in a weather forecast. This is achieved by using an estimate of a previous state of the system and merging that with observations of the true state of the system. Ensemble Kalman filtering is one method of data assimilation. Ensemble Kalman filters operate by using an ensemble, or statistical sample, of the state of a system. A known prior state of a system is forecast to an observation time, and the observations are then assimilated. Observations are assimilated according to a ratio of the errors in the prior state and the observations. An analysis estimate of the system and an analysis estimate of the errors associated with the analysis state are also produced. This project looks at some problems within ensemble Kalman filtering and how they may be overcome. Undersampling is a key issue: the ensemble is so small that it is not statistically representative of the state of the system. Undersampling can lead to inbreeding, filter divergence and the development of long-range spurious correlations. It is possible to implement counter measures. Firstly, covariance inflation is used to combat inbreeding and the subsequent filter divergence. Covariance localization is primarily used to remove long-range spurious correlations, but also has the benefit of increasing the effective ensemble size.
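Covariance localization as described above is usually implemented as a Schur (elementwise) product of the sample covariance with a compactly supported correlation function. The sketch below is illustrative only: a Gaussian-shaped taper with a hard cutoff stands in for the Gaspari-Cohn function commonly used in practice, and the state size, ensemble size, and length scale are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ens = 40, 10           # many variables, few members: undersampling regime

# Sample covariance from a small ensemble drawn from an identity-covariance truth
X = rng.standard_normal((n, n_ens))
A = X - X.mean(axis=1, keepdims=True)
P_sample = A @ A.T / (n_ens - 1)   # noisy: spurious long-range correlations

# Localization: elementwise product with a distance-based taper.
L = 5.0                                       # localization length scale (illustrative)
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
C = np.exp(-0.5 * (d / L) ** 2)               # Gaussian-shaped taper, C[i,i] = 1
C[d > 3 * L] = 0.0                            # zero out long-range entries entirely
P_loc = C * P_sample                          # Schur product
```

The taper leaves the diagonal (the variances) untouched while damping, and beyond the cutoff removing, the spurious long-range covariances.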
  • Adaptive Ensemble Kalman Filtering of Nonlinear Systems
    SERIES A: DATA ASSIMILATION AND PREDICTABILITY. PUBLISHED BY THE INTERNATIONAL METEOROLOGICAL INSTITUTE IN STOCKHOLM. Adaptive ensemble Kalman filtering of non-linear systems. By TYRUS BERRY and TIMOTHY SAUER*, Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, USA. (Manuscript received 22 December 2012; in final form 4 June 2013.) ABSTRACT: A necessary ingredient of an ensemble Kalman filter (EnKF) is covariance inflation, used to control filter divergence and compensate for model error. There is an on-going search for inflation tunings that can be learned adaptively. Early in the development of Kalman filtering, Mehra (1970, 1972) enabled adaptivity in the context of linear dynamics with white noise model errors by showing how to estimate the model error and observation covariances. We propose an adaptive scheme, based on lifting Mehra's idea to the non-linear case, that recovers the model error and observation noise covariances in simple cases, and in more complicated cases, results in a natural additive inflation that improves state estimation. It can be incorporated into non-linear filters such as the extended Kalman filter (EKF), the EnKF and their localised versions. We test the adaptive EnKF on a 40-dimensional Lorenz96 model and show the significant improvements in state estimation that are possible. We also discuss the extent to which such an adaptive filter can compensate for model error, and demonstrate the use of localisation to reduce ensemble sizes for large problems. Keywords: ensemble Kalman filter, data assimilation, non-linear dynamics, covariance inflation, adaptive filtering. 1. Introduction: The Kalman filter is provably optimal for systems where the dynamics and observations are linear with Gaussian noise, with white noise model error and observation error. One goal of this article is to introduce an adaptive scheme in the Gaussian white noise setting that is a natural generalisation
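The adaptive tuning proposed in the abstract above builds on plain multiplicative covariance inflation, which can be sketched in a few lines. The inflation factor below is illustrative, not a tuned or recommended value.

```python
import numpy as np

def inflate(X, rho):
    """Multiplicative covariance inflation: scale perturbations about the mean.

    Inflating the perturbations by rho multiplies the ensemble sample
    covariance by rho**2, counteracting the variance underestimation
    (inbreeding) that can lead to filter divergence.
    """
    xbar = X.mean(axis=1, keepdims=True)
    return xbar + rho * (X - xbar)

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 20))   # toy ensemble: 4 variables, 20 members
X_infl = inflate(X, 1.1)           # illustrative inflation factor
```

The ensemble mean is unchanged; only the spread about it grows, so the filter trusts its background a little less and the observations a little more.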
  • Stochastic Ensemble Kalman Filter with Ensemble Generation Using a Square Root Form
    Stochastic ensemble Kalman filter with ensemble generation using a square root form: ensemble π-algorithm Ekaterina Klimova Institute of Computational Technologies SB RAS, Ac. Lavrentjev Ave., 6, Novosibirsk, 630090, Russia [email protected] 1. Introduction The Kalman filter algorithm is one of the most popular approaches to observation data assimilation. To implement the ensemble algorithm, the number of ensemble members must not be too large. Also, the ensemble must have a covariance matrix consistent with the covariances of analysis errors. There exist two approaches to the ensemble Kalman filter: a “stochastic filter” and a “deterministic filter”. A large body of research has been made to compare stochastic and deterministic filters. As shown in several studies, the stochastic ensemble Kalman filter has advantages over a deterministic filter. In [6], a stochastic version of the ensemble Kalman filter, which is implemented in the square root form (the ensemble π-algorithm) is proposed. The ensemble π–algorithm is what is called a stochastic filter in which an ensemble of analysis errors is generated by transforming an ensemble of forecast errors using a square root form. The transformation matrix does not depend on grid nodes. Therefore, the algorithm can be used locally, in a similar way as the LETKF algorithm [4] and can be implemented involving operations with matrices of size not greater than that of the ensemble. A new numerical implementation scheme of the ensemble π-algorithm is proposed. In particular, we consider a more general approach than that proposed in [6] to calculate the square root of a non-symmetric matrix.
  • Ensemble Consider Kalman Filtering*
    Ensemble Consider Kalman Filtering*. Tai-shan Lou, Nan-hua Chen, Hua Xiong, Ya-xi Li, and Lei Wang. Abstract—For nonlinear systems, the ensemble Kalman filter can avoid using the Jacobian matrices and reduce the computational complexity. However, the state estimates still suffer greatly from the negative effects of uncertain parameters of the dynamic and measurement models. To mitigate these negative effects, an ensemble consider Kalman filter (EnCKF) is designed by using the "consider" approach and resampling the ensemble members in each step to incorporate the statistics of the uncertain parameters into the state estimation formulations. The effectiveness of the proposed EnCKF is verified by two numerical simulations. I. INTRODUCTION: Nonlinear Kalman filtering plays an important role in information and communication systems, control systems, and many other areas, such as target tracking, navigation of aerospace vehicles [1, 2], fault diagnosis [3], chemical plant … filtering [7], desensitized Kalman filtering [8], set-valued estimation [9] and consider Kalman filtering (also called Schmidt-Kalman filtering) [6, 10, 11]. A consider method is proposed by Schmidt to account for the parameter uncertainties by incorporating the covariance of the parameters into the Kalman filtering formulations. To overcome the drawbacks of the EnKF coming from the unknown parameters, this paper proposes an ensemble consider Kalman filter (EnCKF) for nonlinear dynamic systems with uncertain parameters. The covariance matrices between the uncertain parameters and the states and the measurements are computed and propagated by using the ensemble integration in the filtering algorithm. The formulas of the EnCKF are derived by using augmented-state methods [12] in Section II. Two numerical simulations are shown in Section III.
  • Assimilating Vortex Position with an Ensemble Kalman Filter
    1828 MONTHLY WEATHER REVIEW VOLUME 135. Assimilating Vortex Position with an Ensemble Kalman Filter. YONGSHENG CHEN AND CHRIS SNYDER, National Center for Atmospheric Research,* Boulder, Colorado. (Manuscript received 14 July 2005, in final form 10 July 2006.) ABSTRACT: Observations of hurricane position, which in practice might be available from satellite or radar imagery, can be easily assimilated with an ensemble Kalman filter (EnKF) given an operator that computes the position of the vortex in the background forecast. The simple linear updating scheme used in the EnKF is effective for small displacements of forecasted vortices from the true position; this situation is operationally relevant since hurricane position is often available frequently in time. When displacements of the forecasted vortices are comparable to the vortex size, non-Gaussian effects become significant and the EnKF's linear update begins to degrade. Simulations using a simple two-dimensional barotropic model demonstrate the potential of the technique and show that the track forecast initialized with the EnKF analysis is improved. The assimilation of observations of the vortex shape and intensity, along with position, extends the technique's effectiveness to larger displacements of the forecasted vortices than when assimilating position alone. 1. Introduction: Hurricane track forecasts have improved 1%–2% yr⁻¹ over the last 3 decades (Franklin et al. 2003). These improvements have followed from refinements of both numerical models and their initial conditions. Moreover, hurricane forecast models, even the simplest 2D barotropic model, are highly nonlinear. Small initial errors in the vortex position, motion, and strength as well as the environmental flows may lead to large discrepancies in the short-term track and intensity forecast (Kurihara et al.
  • A Two-Stage Ensemble Kalman Filter for Smooth Data Assimilation∗
    A Two-Stage Ensemble Kalman Filter for Smooth Data Assimilation∗ Craig J. Johns† Jan Mandel‡ March 2005, revised September 2005, corrected in proof September 2007 Abstract The ensemble Kalman Filter (EnKF) applied to a simple fire propagation model by a nonlinear convection-diffusion-reaction partial differential equation breaks down because the EnKF creates nonphysical ensemble members with large gradients. A modification of the EnKF is proposed by adding a regularization term that penalizes large gradients. The method is implemented by applying the EnKF formulas twice, with the regularization term as another observation. The regularization step is also interpreted as a shrinkage of the prior distribution. Numerical results are given to illustrate success of the new method. Keywords: Data Assimilation, Ensemble Kalman Filter, State-Space Model, Penalty, Tikhonov Regularization, Wildfire, Convection-Reaction-Diffusion, Shrinkage, Bayesian 1 Introduction The discrete time state-space model in its most general form is an application of the Bayesian update problem: the modeled system is advanced in time until an analysis time, when the distribution of the system state before the update, called the prior or the forecast distribution, and the data likelihood are combined to give the new system state distribution, called the posterior or the analysis distribution. The system is then advanced until the next analysis time. Kalman [20] and Kalman and Bucy [21] provided simple recursive formulas for the system mean and covariance under the assumptions that the probability distributions are normal and the system is linear. The Kalman filter is popular in areas as diverse as medicine [19], economics [29] and geosciences [11].
  • Practical Ensemble Data Assimilation 1 Introduction 2 Link with the Kalman
    Practical ensemble data assimilation. P.L. Houtekamer and Herschel L. Mitchell, Service Météorologique du Canada / Meteorological Service of Canada, Montréal, Québec, Canada. [email protected], [email protected]. 1 Introduction: The ensemble Kalman filter (EnKF) is a 4-dimensional data-assimilation method that uses a Monte-Carlo ensemble of short-range forecasts to estimate the covariances of the forecast error (Evensen 1994; Houtekamer and Mitchell 1998, hereafter HM98). It is a close approximation to the Kalman filter, which provides the least-squares solution in the case of small errors with Gaussian distribution (Maybeck 1979). The approximation becomes "automatically" more accurate as bigger ensembles are used. This is a desirable property in an environment where massively parallel computers become increasingly powerful. The EnKF does not depend strongly on the validity of questionable hypotheses (linearity of the model dynamics) and is conceptually simple. It does not require the development or existence of an adjoint or tangent linear model. It is therefore an extremely attractive method. An EnKF is being developed for operational implementation at the Meteorological Service of Canada (MSC). The different stages of this project have been well documented (HM98; Mitchell and Houtekamer 2000, hereafter MH; Houtekamer and Mitchell 2001, hereafter HM01; Mitchell et al. 2002, hereafter MHP; and Houtekamer et al. 2003, hereafter HEA). We shall summarize some key results from the above studies. The main point of this paper is that the development of an EnKF is feasible for an operational centre with perhaps limited resources. A more complete overview of the literature on the EnKF algorithm can be found in Evensen (2003) and Tippett et al.
  • Ensemble Kalman Filter Assimilation of Radar Observations of the 8 May 2003 Oklahoma City Supercell: Influences of Reflectivity Observations on Storm-Scale Analyses
    272 MONTHLY WEATHER REVIEW VOLUME 139. Ensemble Kalman Filter Assimilation of Radar Observations of the 8 May 2003 Oklahoma City Supercell: Influences of Reflectivity Observations on Storm-Scale Analyses. DAVID C. DOWELL, Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma, and NOAA/OAR National Severe Storms Laboratory, Norman, Oklahoma, and National Center for Atmospheric Research,* Boulder, Colorado. LOUIS J. WICKER, NOAA/OAR National Severe Storms Laboratory, Norman, Oklahoma. CHRIS SNYDER, National Center for Atmospheric Research,* Boulder, Colorado. (Manuscript received 31 March 2010, in final form 23 July 2010.) ABSTRACT: Ensemble Kalman filter (EnKF) techniques have been proposed for obtaining atmospheric state estimates on the scale of individual convective storms from radar and other observations, but tests of these methods with observations of real convective storms are still very limited. In the current study, radar observations of the 8 May 2003 Oklahoma City tornadic supercell thunderstorm were assimilated into the National Severe Storms Laboratory (NSSL) Collaborative Model for Multiscale Atmospheric Simulation (NCOMMAS) with an EnKF method. The cloud model employed 1-km horizontal grid spacing, a single-moment bulk precipitation-microphysics scheme, and a base state initialized with sounding data. A 50-member ensemble was produced by randomly perturbing base-state wind profiles and by regularly adding random local perturbations to the horizontal wind, temperature, and water vapor fields in and near observed
  • Properties of the Ensemble Kalman Filter
    Properties of the Ensemble Kalman Filter. Laura Baker. October 5, 2007. Abstract: Data assimilation is used in numerical weather prediction to improve weather forecasts by incorporating observation data into the model forecast. The Ensemble Kalman Filter (EnKF) is a method of data assimilation which updates an ensemble of states and uses this to provide a state estimate and associated error estimate at each step. Some implementations of the EnKF, such as the Ensemble Transform Kalman Filter (ETKF), have been found to exhibit undesirable behaviour. This dissertation investigates two such problems exhibited by the ETKF: ensemble collapse and bias. A solution to the ensemble collapse problem using random rotations is given, and experimental results with the ETKF with the random rotations applied are presented. The conditions for an EnKF to be unbiased are examined, and these conditions are used to determine further restrictions on the random rotations to ensure that applying them to the filter will not introduce a bias. An unbiased filter with random rotations is developed, and experimental results with this new filter show that it does not exhibit the ensemble collapse problem, and that it is unbiased. Declaration: I confirm that this is my own work, and the use of all material from other sources has been properly and fully acknowledged. Laura Baker. Acknowledgments: I would like to thank my supervisor Sarah Dance for being about as helpful and enthusiastic as it is possible for a supervisor to be (and also for laughing at my silly program names!). I would also like to thank Nancy Nichols and Amos Lawless for their help and suggestions.
  • Displacement Data Assimilation
    Displacement Data Assimilation. W. Steven Rosenthal¹, Shankar C. Venkataramani², Arthur J. Mariano³, Juan M. Restrepo⁴*. ¹ Pacific Northwest Laboratory, Richland WA USA 99354; ² Department of Mathematics and Program in Applied Mathematics, University of Arizona, Tucson AZ USA 85721; ³ Rosenstiel School of Marine & Atmospheric Science, University of Miami, Miami FL USA 33149; ⁴ Department of Mathematics, Oregon State University, Corvallis OR USA 97331. Abstract: We show that modifying a Bayesian data assimilation scheme by incorporating kinematically-consistent displacement corrections produces a scheme that is demonstrably better at estimating partially observed state vectors in a setting where feature information is important. While the displacement transformation is not tied to any particular assimilation scheme, here we implement it within an ensemble Kalman Filter and demonstrate its effectiveness in tracking stochastically perturbed vortices. Keywords: displacement assimilation, data assimilation, uncertainty quantification, ensemble Kalman filter, vortex dynamics. 2010 MSC: 93E11, 93B40, 76B47, 37N10. 1. Bayesian Estimation and Displacement Assimilation: Most sequential estimation strategies are Bayesian. In these we seek to find estimates of moments (or of the whole probability density function, pdf) of the state variable $X(t)$, a random vector that is time dependent, subject to (usually discrete) observations of the state, $Y(t)$. The posterior probability is thus $P(X|Y) \propto P(Y|X)\,P(X)$, where the first term on the right hand side is the likelihood, and is informed by observations, and $P(X)$ is the prior, which is informed by the model. The model is usually an evolution equation for $X$. Both the model and the observations involve stochastic processes. The model "error" might be an explicit stochastic term representing uncertainties in the evolution equation or its initial or boundary data.