Monte Carlo Filter Particle Filter
2015 European Control Conference (ECC), July 15-17, 2015, Linz, Austria

Masaya Murata†, Hidehisa Nagano† and Kunio Kashino†‡

†Masaya Murata, Hidehisa Nagano and Kunio Kashino are research scientists at NTT Communication Science Laboratories, NTT Corporation, 3-1, Morinosato Wakamiya, Atsugi-Shi, Kanagawa 243-0198, Japan. murata.masaya at lab.ntt.co.jp
‡Kunio Kashino is also a visiting professor at the National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan.

Abstract— We propose a new realization method of the sequential importance sampling (SIS) algorithm to derive a new particle filter. The new filter constructs the importance distribution by the Monte Carlo filter (MCF) using sub-particles, so its non-Gaussian nature can be adequately considered, whereas other particle filters such as the unscented Kalman filter particle filter (UKF-PF) assume Gaussianity for the importance distribution. Since the state estimation accuracy of the SIS algorithm theoretically improves as the estimated importance distribution becomes closer to the true posterior probability density function of the state, the new filter is expected to outperform the existing, state-of-the-art particle filters. We call the new filter the Monte Carlo filter particle filter (MCF-PF) and confirm its effectiveness through numerical simulations.

I. INTRODUCTION

A. Problem Definition

In this paper, we consider the sequential state estimation problem for the following nonlinear, non-Gaussian state space model:

  X_t = f(X_{t−1}) + Φ_t,  Φ_t ∼ p(Φ_t)   (1)
  Y_t = h(X_t) + Ω_t,  Ω_t ∼ p(Ω_t)   (2)

Here, X_t is the L-dimensional random vector called the state at time t; it is the estimation target given the past series of M-dimensional observations {Y_{t−1}} = {Y_{t−1}, Y_{t−2}, ..., Y_1} or {Y_t} = {Y_t, Y_{t−1}, ..., Y_1}. The former case yields the predicted state estimate X̄_t := E[X_t | {Y_{t−1}}], the latter the filtered state estimate X̂_t := E[X_t | {Y_t}]. Their approximation errors are E[(X_t − X̄_t)(X_t − X̄_t)^T | {Y_{t−1}}] and E[(X_t − X̂_t)(X_t − X̂_t)^T | {Y_t}], respectively. f(·) and h(·) are nonlinear functions called the state transition and state observation functions, respectively. Φ_t and Ω_t are independent white noises following arbitrary probability density functions (PDFs); they are called the system noise and observation noise, respectively. Equations (1) and (2) are referred to as the system model and observation model, and together they constitute the state space model of interest. For this model, since neither the state nor the noises need follow Gaussian distributions, the well-studied, state-of-the-art Gaussian filters[1] such as the extended Kalman filter (EKF)[2] and the unscented Kalman filter (UKF)[3] cannot be adequately employed. Therefore, designing a sequential state estimator for this model remains a challenging problem.

B. Sequential Importance Sampling (SIS)

To tackle the problem of interest, the sequential importance sampling (SIS) algorithm[4][5] is often used to obtain the filtered state estimate. The SIS algorithm approximates p(X_t | {Y_t}), that is, the posterior PDF of the state at time t given the observations up to time t, as follows:

  p(X_t | {Y_t}) ≈ Σ_{i=1}^{N} w'_t^{(i)} δ(X_t − X_t^{(i)})

where

  X_t^{(i)} ∼ p(X_t | X_{t−1} = X_{t−1}^{(i)}, Y_t),  i = 1, 2, ..., N

  w_t^{(i)} = w'_{t−1}^{(i)} · [ p(Y_t | X_t = X_t^{(i)}) p(X_t = X_t^{(i)} | X_{t−1} = X_{t−1}^{(i)}) ] / p(X_t = X_t^{(i)} | X_{t−1} = X_{t−1}^{(i)}, Y_t)

  w'_t^{(i)} = w_t^{(i)} / Σ_{j=1}^{N} w_t^{(j)}

Here, {X_t^{(i)}, w'_t^{(i)}}, i = 1, 2, ..., N is the weighted set of particles (particle values and their corresponding normalized importance weights) at time t, and Σ_{i=1}^{N} w'_t^{(i)} = 1 holds. δ(·) is the Dirac delta function, and this PDF approximation is called the Dirac point-mass estimate. p(X_t | X_{t−1} = X_{t−1}^{(i)}, Y_t) is the optimal importance distribution specified by eqs. (1) and (2); it provably minimizes the variance of the importance weights. X_t^{(i)} ∼ p(X_t | X_{t−1} = X_{t−1}^{(i)}, Y_t), i = 1, 2, ..., N denotes random (i.i.d.) sampling from the optimal importance distribution. p(Y_t | X_t = X_t^{(i)}) and p(X_t = X_t^{(i)} | X_{t−1} = X_{t−1}^{(i)}) are the likelihood and transition PDFs specified by eqs. (2) and (1), respectively. The importance weight equation above also indicates that the current weight is calculated from the previous weight; together with the fact that the current particle value depends on the previous value, this means the algorithm can be performed sequentially once an appropriate importance distribution has been selected. The filtered state estimate at time t (denoted X̂_t) is then obtained as

  X̂_t = Σ_{i=1}^{N} w'_t^{(i)} X_t^{(i)}   (3)

Note that the SIS algorithm is not designed to provide the predicted state estimate at time t (denoted X̄_t). The algorithm is known to easily encounter the degeneracy problem, in which almost all particles have zero or nearly zero weights as time proceeds; therefore, a particle resampling procedure is often performed after obtaining {X_t^{(i)}, w'_t^{(i)}}, i = 1, 2, ..., N. The resampling makes the importance weights of all the new particles become 1/N: the new particles are randomly drawn from the discrete set {X_t^{(i)}}, i = 1, 2, ..., N with probabilities {w'_t^{(i)}}, i = 1, 2, ..., N. Note that after the resampling procedure, the sequential weight equation becomes

  w_t^{(i)} ∝ [ p(Y_t | X_t = X_t^{(i)}) p(X_t = X_t^{(i)} | X_{t−1} = X_{t−1}^{(i)}) ] / p(X_t = X_t^{(i)} | X_{t−1} = X_{t−1}^{(i)}, Y_t)   (4)

since the resampling makes the previous normalized weights {w'_{t−1}^{(i)}}, i = 1, 2, ..., N all equal to 1/N.
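For concreteness, the following minimal Python sketch (an illustration on our part, not the authors' implementation) performs one SIS update followed by multinomial resampling. The callables sample_q, q_pdf, trans_pdf and lik_pdf are hypothetical user-supplied functions realizing the chosen importance distribution and the densities specified by eqs. (1) and (2).

    import numpy as np

    def sis_step(particles, weights, y_t, sample_q, q_pdf, trans_pdf, lik_pdf, rng):
        """One SIS update with multinomial resampling.

        particles : (N, L) array of X_{t-1}^{(i)}
        weights   : (N,) normalized weights w'_{t-1}^{(i)}
        """
        # Draw X_t^{(i)} from the importance distribution q(. | X_{t-1}^{(i)}, Y_t).
        new = np.stack([sample_q(xp, y_t, rng) for xp in particles])
        # Weight recursion: w_t = w'_{t-1} * likelihood * transition / importance.
        w = weights * np.array([
            lik_pdf(y_t, x) * trans_pdf(x, xp) / q_pdf(x, xp, y_t)
            for x, xp in zip(new, particles)
        ])
        w /= w.sum()                       # normalized weights, sum_i w'^{(i)}_t = 1
        x_filt = w @ new                   # filtered state estimate, eq. (3)
        # Multinomial resampling: draw N indices with probabilities w'^{(i)}_t,
        # after which every weight is reset to 1/N, cf. eq. (4).
        idx = rng.choice(len(new), size=len(new), p=w)
        return new[idx], np.full(len(new), 1.0 / len(new)), x_filt

With the optimal importance distribution of eqs. (1) and (2) plugged into sample_q and q_pdf, this is exactly the recursion above; the final resampling step is what reduces the weight update to the proportionality in eq. (4) at the next time step.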
C. Importance Distribution Selection

The selection of the importance distribution affects the overall state estimation performance of the SIS algorithm: generally speaking, the smaller the difference between the assumed importance distribution and the true posterior PDF of the state, the better the state estimation accuracy becomes. The simplest choice is p(X_t | X_{t−1} = X_{t−1}^{(i)}, Y_t) ≈ p(X_t | X_{t−1} = X_{t−1}^{(i)}), which yields the bootstrap filter (BF)[6]; a code sketch of this special case is given at the end of this section. Here, p(X_t | X_{t−1} = X_{t−1}^{(i)}) is the state transition PDF specified by eq. (1), and the BF is one of the most popular particle filters for the problem of interest. One obvious drawback, however, is that a large deviation between the assumed importance distribution and the true one is likely to arise. Alleviating this gap may require a large number of particles, which sometimes results in a severe computational burden.

The other often-used selection is to assume p(X_t | X_{t−1} = X_{t−1}^{(i)}, Y_t) ≈ p(X_t | X_{(i),t−1}, Y_t), where X_{(i),t−1} is supposed to follow eqs. (1) and (2) and E[X_{(i),t−1} | {Y_{t−1}}] is set to X_{t−1}^{(i)}. We call X_{(i),t−1} the ith particle state. Moreover, p(X_t | X_{(i),t−1}, Y_t) is assumed to be Gaussian distributed. Then, using the N particles, which specify the expectations of the N particle states, the N different p(X_t | X_{(i),t−1}, Y_t) are estimated by a Gaussian filter such as the EKF or the UKF. The former filter is called the extended Kalman filter particle filter (EKF-PF)[7] and the latter the unscented Kalman filter particle filter (UKF-PF)[7][8]. Since the latest observation Y_t is incorporated when constructing the importance distribution, these filters are theoretically expected to be more accurate than the BF, which does not take Y_t into account. The UKF-PF is also theoretically better than the EKF-PF, since the unscented transformation (UT)[3] in the UKF algorithm handles the nonlinear state transformation.

In contrast, the filter proposed in this paper constructs the importance distribution by the MCF, so that its non-Gaussian nature is adequately considered, and it is therefore expected to be superior to the existing filters. We call the new filter the Monte Carlo filter particle filter (MCF-PF) and confirm its effectiveness using numerical examples.

The rest of this paper is organized as follows. Section II describes the MCF algorithm and its relationship to the BF algorithm. Section III explains the proposed filter formulation and summarizes the sequential state estimation algorithm. Numerical examples are shown in section IV, and section V summarizes the basic findings and conclusions of this paper.
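As an illustration of the simplest choice above, the sketch below (again ours, with hypothetical callables f, sample_noise and lik_pdf for eqs. (1) and (2)) specializes the SIS step to the bootstrap filter: sampling from the transition PDF makes the transition/importance ratio in the weight recursion cancel, leaving only the likelihood as the unnormalized weight.

    import numpy as np

    def bf_step(particles, weights, y_t, f, sample_noise, lik_pdf, rng):
        """Bootstrap filter update: importance distribution = transition PDF."""
        # Propagate each particle through the system model, eq. (1).
        new = np.stack([f(xp) + sample_noise(rng) for xp in particles])
        # With q equal to the transition PDF, the weight update reduces
        # to multiplication by the likelihood p(Y_t | X_t = X_t^{(i)}).
        w = weights * np.array([lik_pdf(y_t, x) for x in new])
        w /= w.sum()
        x_filt = w @ new                   # filtered state estimate, eq. (3)
        idx = rng.choice(len(new), size=len(new), p=w)   # resample, weights -> 1/N
        return new[idx], np.full(len(new), 1.0 / len(new)), x_filt

Note that Y_t never enters the sampling step here, which is precisely the weakness discussed above: when the likelihood is sharp relative to the transition PDF, many propagated particles receive negligible weight.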
II. MONTE CARLO FILTER (MCF)

A. Algorithm

Like the SIS algorithm, the Monte Carlo filter (MCF) is a promising solution to the aforementioned sequential state estimation problem. Unlike the SIS, the MCF approximates the Bayesian filter algorithm using randomly drawn samples called particles, and it provides not only the filtered state estimate but also the predicted state estimate. The MCF algorithm can be summarized as follows:

Monte Carlo Filter (MCF)

  Initialization: X̂_0^{(i)} ∼ N(X̄_0, P̄_0), ŵ'_0^{(i)} = 1/N, i = 1, 2, ..., N

  For each t = 1, 2, ...:

  Predicted state estimate:
    Φ_t^{(i)} ∼ p(Φ_t), i = 1, 2, ..., N
    X̄_t^{(i)} = f(X̂_{t−1}^{(i)}) + Φ_t^{(i)}
    w̄'_t^{(i)} = ŵ'_{t−1}^{(i)}
    X̄_t = Σ_{j=1}^{N} w̄'_t^{(j)} X̄_t^{(j)}

  Filtered state estimate:
    X̂_t^{(i)} = X̄_t^{(i)}
    ŵ_t^{(i)} = w̄'_t^{(i)} p(Y_t | X_t = X̂_t^{(i)})
    ŵ'_t^{(i)} = ŵ_t^{(i)} / Σ_{j=1}^{N} ŵ_t^{(j)}
    X̂_t = Σ_{j=1}^{N} ŵ'_t^{(j)} X̂_t^{(j)}

Here, X̂_0^{(i)} ∼ p(X_0) and Φ_t^{(i)} ∼ p(Φ_t) are the ith particles randomly drawn from the initial state PDF and the system noise PDF, respectively.
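The following is a minimal sketch of one MCF recursion mirroring the summary above (an illustration under the same hypothetical callables f, sample_noise and lik_pdf as before, not the authors' code): prediction propagates the particles through eq. (1) while the weights carry over unchanged, and filtering keeps the particle values and reweights them by the likelihood.

    import numpy as np

    def mcf_step(particles, weights, y_t, f, sample_noise, lik_pdf, rng):
        """One MCF recursion: prediction step, then filtering step.

        particles : (N, L) array of filtered particles X̂_{t-1}^{(i)}
        weights   : (N,) normalized weights ŵ'_{t-1}^{(i)}
        """
        # Prediction: draw system noise and propagate, eq. (1); w̄'_t = ŵ'_{t-1}.
        pred = np.stack([f(xp) + sample_noise(rng) for xp in particles])
        x_pred = weights @ pred            # predicted state estimate X̄_t
        # Filtering: particle values unchanged (X̂_t^{(i)} = X̄_t^{(i)});
        # reweight by the likelihood and renormalize.
        w = weights * np.array([lik_pdf(y_t, x) for x in pred])
        w /= w.sum()
        x_filt = w @ pred                  # filtered state estimate X̂_t
        return pred, w, x_pred, x_filt

Unlike the SIS sketch earlier, this recursion also returns the predicted state estimate, which the SIS algorithm is not designed to provide.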