Estimation II
Ian Reid
Hilary Term, 2001

1 Discrete-time Kalman filter

We ended the first part of this course by deriving the Discrete-Time Kalman Filter as a recursive Bayes' estimator. In this lecture we will go into the filter in more detail, and provide a new derivation for the Kalman filter, this time based on the idea of Linear Minimum Variance (LMV) estimation of discrete-time systems.

1.1 Background

The problem we are seeking to solve is the continual estimation of a set of parameters whose values change over time. Updating is achieved by combining a set of observations or measurements z(t) which contain information about the signal of interest x(t). The role of the estimator is to provide an estimate \hat{x}(t + \tau) at some time t + \tau. If \tau > 0 we have a prediction filter, if \tau < 0 a smoothing filter, and if \tau = 0 the operation is simply called filtering.

Recall that an estimator is said to be unbiased if the expectation of its output is the expectation of the quantity being estimated: E[\hat{x}] = E[x].

Also recall that a minimum variance unbiased estimator (MVUE) is an estimator which is unbiased and minimises the mean square error:

    \hat{x} = \arg\min_{\hat{x}} E[ ||\hat{x} - x||^2 | z ] = E[x | z]

The term E[ ||x - \hat{x}||^2 ], the so-called variance of error, is closely related to the error covariance matrix, E[(x - \hat{x})(x - \hat{x})^T]. Specifically, the variance of error of an estimator is equal to the trace of the error covariance matrix:

    E[ ||x - \hat{x}||^2 ] = trace E[(x - \hat{x})(x - \hat{x})^T]

The Kalman filter is a linear minimum variance of error filter (i.e. it is the best linear filter over the class of all linear filters), whether time-varying or time-invariant. In the case of the state vector x and the observations z being jointly Gaussian distributed, the MVUE estimator is a linear function of the measurement set z, and thus the MVUE (sometimes written MVE for Minimum Variance of Error estimator) is also an LMV estimator, as we saw in the first part of the course.

Notation

The following notation will be used.

    z_k              observation vector at time k.
    Z_k              the set of all observations up to (and including) time k.
    x_k              system state vector at time k.
    \hat{x}_{i|j}    estimate of x at time i based on information up to time j, E[x_i | Z_j].
    \tilde{x}_{i|j}  estimation error, \hat{x}_{i|j} - x_i (tilde notation).
    P                covariance matrix.
    F                state transition matrix.
    G                input (control) transition matrix.
    H                output transition matrix.
    w                process (or system, or plant) noise vector.
    v                measurement noise vector.
    Q                process (or system, or plant) noise covariance matrix.
    R                measurement noise covariance matrix.
    K                Kalman gain matrix.
    \nu_k            innovation at time k.
    S_k              innovation covariance matrix at time k.

1.2 System and observation model

We now begin the analysis of the Kalman filter. Refer to figure 1. We assume that the system can be modelled by the state transition equation

    x_{k+1} = F_k x_k + G_k u_k + w_k                                        (1)

where x_k is the state at time k, u_k is an input control vector, w_k is additive system or process noise, G_k is the input transition matrix and F_k is the state transition matrix.

We further assume that the observations of the state are made through a measurement system which can be represented by a linear equation of the form

    z_k = H_k x_k + v_k                                                      (2)

where z_k is the observation or measurement made at time k, x_k is the state at time k, H_k is the observation matrix and v_k is additive measurement noise.

Figure 1: State-space model. [Block diagram: the input u_k enters through G; summed with the process noise w_k and the fed-back state F x_k (via a unit delay) it forms x_{k+1}, which is observed through H with additive measurement noise v_k to give z_k.]
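To make the model concrete, the following sketch simulates equations (1) and (2) for a constant-velocity state x = [position, velocity]^T. This is a minimal illustration, not part of the original notes: the language (Python with numpy), the matrix values and the dimensions are all assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    dt = 1.0

    # Illustrative model matrices (assumed for this example).
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix
    G = np.array([[0.5 * dt**2], [dt]])     # input (control) transition matrix
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = 0.01 * np.eye(2)                    # process noise covariance
    R = np.array([[0.25]])                  # measurement noise covariance

    x = np.array([0.0, 1.0])                # initial state
    u = np.array([0.0])                     # no control input in this example

    for k in range(5):
        w = rng.multivariate_normal(np.zeros(2), Q)   # zero-mean white process noise
        v = rng.multivariate_normal(np.zeros(1), R)   # zero-mean white measurement noise
        x = F @ x + G @ u + w                         # state transition, eq (1)
        z = H @ x + v                                 # observation, eq (2)
        print(k + 1, x, z)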
1.3 Assumptions

We make the following assumptions:

• The process and measurement noise random processes w_k and v_k are uncorrelated, zero-mean white-noise processes with known covariance matrices. That is,

    E[w_k w_l^T] = Q_k  if k = l,  0 otherwise                               (3)

    E[v_k v_l^T] = R_k  if k = l,  0 otherwise                               (4)

    E[w_k v_l^T] = 0  for all k, l                                           (5)

  where Q_k and R_k are symmetric positive semi-definite matrices.

• The initial system state x_0 is a random vector that is uncorrelated with both the system and measurement noise processes.

• The initial system state has a known mean and covariance matrix:

    \hat{x}_{0|0} = E[x_0]  and  P_{0|0} = E[(\hat{x}_{0|0} - x_0)(\hat{x}_{0|0} - x_0)^T]    (6)

Given the above assumptions, the task is to determine, given a set of observations z_1, ..., z_{k+1}, the estimation filter that at the (k+1)th instant in time generates an optimal estimate of the state x_{k+1}, which we denote by \hat{x}_{k+1}, that minimises the expectation of the squared-error loss function:

    E[ ||x_{k+1} - \hat{x}_{k+1}||^2 ] = E[ (x_{k+1} - \hat{x}_{k+1})^T (x_{k+1} - \hat{x}_{k+1}) ]    (7)

1.4 Derivation

Consider the estimation of the state x_{k+1} based on the observations up to time k, z_1 ... z_k, namely \hat{x}_{k+1|Z_k}. This is called a one-step-ahead prediction, or simply a prediction. Now, the solution to the minimisation of equation (7) is the expectation of the state at time k+1 conditioned on the observations up to time k. Thus,

    \hat{x}_{k+1|k} = E[x_{k+1} | z_1 ... z_k] = E[x_{k+1} | Z_k]            (8)

The predicted state is then given by

    \hat{x}_{k+1|k} = E[x_{k+1} | Z_k]
                    = E[F_k x_k + G_k u_k + w_k | Z_k]
                    = F_k E[x_k | Z_k] + G_k u_k + E[w_k | Z_k]
                    = F_k \hat{x}_{k|k} + G_k u_k                            (9)

where we have used the fact that the process noise w_k has zero mean and that u_k is known precisely.

The estimate variance P_{k+1|k} is the mean squared error in the estimate \hat{x}_{k+1|k}. Thus, using the fact that w_k and \tilde{x}_{k|k} are uncorrelated:

    P_{k+1|k} = E[(x_{k+1} - \hat{x}_{k+1|k})(x_{k+1} - \hat{x}_{k+1|k})^T | Z_k]
              = F_k E[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T | Z_k] F_k^T + E[w_k w_k^T]
              = F_k P_{k|k} F_k^T + Q_k                                      (10)

Having obtained a predictive estimate \hat{x}_{k+1|k}, suppose that we now take another observation z_{k+1}. How can we use this information to update the prediction, i.e. find \hat{x}_{k+1|k+1}? We assume that the estimate is a linear weighted sum of the prediction and the new observation, described by the equation

    \hat{x}_{k+1|k+1} = K'_{k+1} \hat{x}_{k+1|k} + K_{k+1} z_{k+1}           (11)

where K'_{k+1} and K_{k+1} are weighting or gain matrices (of different sizes). Our problem is now reduced to finding the K'_{k+1} and K_{k+1} that minimise the conditional mean squared estimation error, where of course the estimation error is given by

    \tilde{x}_{k+1|k+1} = \hat{x}_{k+1|k+1} - x_{k+1}                        (12)

1.5 The Unbiased Condition

For our filter to be unbiased, we require that E[\hat{x}_{k+1|k+1}] = E[x_{k+1}]. Let's assume (and argue by induction) that \hat{x}_{k|k} is an unbiased estimate. Then combining equations (11) and (2) and taking expectations yields

    E[\hat{x}_{k+1|k+1}] = E[K'_{k+1} \hat{x}_{k+1|k} + K_{k+1} H_{k+1} x_{k+1} + K_{k+1} v_{k+1}]
                         = K'_{k+1} E[\hat{x}_{k+1|k}] + K_{k+1} H_{k+1} E[x_{k+1}] + K_{k+1} E[v_{k+1}]    (13)

Note that the last term on the right-hand side of the equation is zero, and further note that the prediction is unbiased:

    E[\hat{x}_{k+1|k}] = E[F_k \hat{x}_{k|k} + G_k u_k]
                       = F_k E[\hat{x}_{k|k}] + G_k u_k
                       = E[x_{k+1}]                                          (14)

Hence, by combining equations (13) and (14),

    E[\hat{x}_{k+1|k+1}] = (K'_{k+1} + K_{k+1} H_{k+1}) E[x_{k+1}]

and the condition that \hat{x}_{k+1|k+1} be unbiased reduces to the requirement

    K'_{k+1} + K_{k+1} H_{k+1} = I,  or  K'_{k+1} = I - K_{k+1} H_{k+1}      (15)

We now have that for our estimator to be unbiased it must satisfy

    \hat{x}_{k+1|k+1} = (I - K_{k+1} H_{k+1}) \hat{x}_{k+1|k} + K_{k+1} z_{k+1}
                      = \hat{x}_{k+1|k} + K_{k+1} [z_{k+1} - H_{k+1} \hat{x}_{k+1|k}]    (16)

where K is known as the Kalman gain.

Note that since H_{k+1} \hat{x}_{k+1|k} can be interpreted as a predicted observation \hat{z}_{k+1|k}, equation (16) can be interpreted as the sum of a prediction and a fraction of the difference between the predicted and actual observations.
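The prediction step and the unbiased update form translate directly into code. The sketch below (Python/numpy, continuing the assumed conventions of the earlier example; the function names are my own) implements equations (9), (10) and (16). The gain K is taken as a given argument here, since it is not derived until equation (19):

    import numpy as np

    def predict(x_est, P_est, F, G, u, Q):
        """Time-update: equations (9) and (10)."""
        x_pred = F @ x_est + G @ u                # predicted state, eq (9)
        P_pred = F @ P_est @ F.T + Q              # prediction covariance, eq (10)
        return x_pred, P_pred

    def update(x_pred, z, H, K):
        """Unbiased linear update, equation (16), for a given gain K."""
        innovation = z - H @ x_pred               # actual minus predicted observation
        return x_pred + K @ innovation            # eq (16)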
1.6 Finding the Error Covariance

We determined the prediction error covariance in equation (10). We now turn to the updated error covariance:

    P_{k+1|k+1} = E[\tilde{x}_{k+1|k+1} \tilde{x}_{k+1|k+1}^T | Z_{k+1}]
                = E[(x_{k+1} - \hat{x}_{k+1|k+1})(x_{k+1} - \hat{x}_{k+1|k+1})^T]
                = (I - K_{k+1} H_{k+1}) E[\tilde{x}_{k+1|k} \tilde{x}_{k+1|k}^T] (I - K_{k+1} H_{k+1})^T
                  + K_{k+1} E[v_{k+1} v_{k+1}^T] K_{k+1}^T
                  + 2 (I - K_{k+1} H_{k+1}) E[\tilde{x}_{k+1|k} v_{k+1}^T] K_{k+1}^T

and with

    E[v_{k+1} v_{k+1}^T] = R_{k+1}
    E[\tilde{x}_{k+1|k} \tilde{x}_{k+1|k}^T] = P_{k+1|k}
    E[\tilde{x}_{k+1|k} v_{k+1}^T] = 0

we obtain

    P_{k+1|k+1} = (I - K_{k+1} H_{k+1}) P_{k+1|k} (I - K_{k+1} H_{k+1})^T + K_{k+1} R_{k+1} K_{k+1}^T    (17)

Thus the covariance of the updated estimate is expressed in terms of the prediction covariance P_{k+1|k}, the observation noise covariance R_{k+1} and the Kalman gain matrix K_{k+1}.

1.7 Choosing the Kalman Gain

Our goal is now to minimise the conditional mean-squared estimation error with respect to the Kalman gain K:

    L_{k+1} = min_{K_{k+1}} E[\tilde{x}_{k+1|k+1}^T \tilde{x}_{k+1|k+1} | Z_{k+1}]
            = min_{K_{k+1}} trace E[\tilde{x}_{k+1|k+1} \tilde{x}_{k+1|k+1}^T | Z_{k+1}]
            = min_{K_{k+1}} trace(P_{k+1|k+1})                               (18)

For any matrix A and symmetric matrix B,

    d/dA [ trace(A B A^T) ] = 2 A B

(to see this, write the trace as \sum_i a_i^T B a_i, where the a_i are the rows of A, and then differentiate with respect to the a_i).

Combining equations (17) and (18), differentiating with respect to the gain matrix (using the relation above) and setting the result equal to zero yields

    dL/dK_{k+1} = -2 (I - K_{k+1} H_{k+1}) P_{k+1|k} H_{k+1}^T + 2 K_{k+1} R_{k+1} = 0

Re-arranging gives an equation for the gain matrix:

    K_{k+1} = P_{k+1|k} H_{k+1}^T [H_{k+1} P_{k+1|k} H_{k+1}^T + R_{k+1}]^{-1}    (19)

Together with equation (16), this defines the optimal linear mean-squared error estimator.

1.8 Summary of key equations

At this point it is worth summarising the key equations which underlie the Kalman filter algorithm. The algorithm consists of two steps: a prediction step and an update step.

Prediction: also known as the time-update. This projects the state estimate and its covariance from time k to time k+1, using equations (9) and (10):

    \hat{x}_{k+1|k} = F_k \hat{x}_{k|k} + G_k u_k
    P_{k+1|k} = F_k P_{k|k} F_k^T + Q_k

Update: also known as the measurement-update. This corrects the prediction using the new observation z_{k+1}, with the gain given by equation (19), using equations (16) and (17):

    \hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} [z_{k+1} - H_{k+1} \hat{x}_{k+1|k}]
    P_{k+1|k+1} = (I - K_{k+1} H_{k+1}) P_{k+1|k} (I - K_{k+1} H_{k+1})^T + K_{k+1} R_{k+1} K_{k+1}^T
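The summary above maps one-to-one onto code. The sketch below (Python/numpy, again with assumed example matrices and function names of my own choosing) performs one full predict-update cycle, computing the gain from equation (19) and the updated covariance in the symmetric form of equation (17):

    import numpy as np

    def kalman_step(x_est, P_est, z, u, F, G, H, Q, R):
        """One cycle of the filter: time-update then measurement-update."""
        # Prediction (time-update), equations (9) and (10).
        x_pred = F @ x_est + G @ u
        P_pred = F @ P_est @ F.T + Q

        # Gain, equation (19): K = P H^T (H P H^T + R)^{-1}.
        S = H @ P_pred @ H.T + R                      # innovation covariance S
        K = P_pred @ H.T @ np.linalg.inv(S)

        # Update (measurement-update), equations (16) and (17).
        x_new = x_pred + K @ (z - H @ x_pred)         # eq (16)
        I_KH = np.eye(len(x_est)) - K @ H
        P_new = I_KH @ P_pred @ I_KH.T + K @ R @ K.T  # eq (17)
        return x_new, P_new

    # Example: track position and velocity from noisy position measurements
    # (all numbers below are illustrative assumptions).
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])
    G = np.zeros((2, 1)); u = np.zeros(1)             # no control input
    H = np.array([[1.0, 0.0]])
    Q = 0.01 * np.eye(2); R = np.array([[0.25]])

    x_est, P_est = np.zeros(2), np.eye(2)             # \hat{x}_{0|0} and P_{0|0}
    for z in [np.array([1.1]), np.array([2.0]), np.array([2.9])]:
        x_est, P_est = kalman_step(x_est, P_est, z, u, F, G, H, Q, R)
        print(x_est)

Note that equation (17) holds for any gain, so this form of the covariance update remains valid (and numerically well-behaved, being a sum of symmetric terms) even if K is perturbed from its optimal value.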
