
The Iterated Extended Kalman Particle Filter

Li Liang-qun, Ji Hong-bing, Luo Jun-hui
School of Electronic Engineering, Xidian University, Xi'an 710071, China
Email: [email protected]

Abstract—Particle filtering shows great promise in addressing a wide variety of non-linear and/or non-Gaussian problems. A crucial issue in particle filtering is the selection of the importance proposal distribution. In this paper, the iterated extended Kalman filter (IEKF) is used to generate the proposal distribution. The proposal distribution integrates the latest observation into the system state transition density, so it can match the posterior density well. The simulation results show that the new particle filter is superior to the standard particle filter and to other filters such as the unscented particle filter (UPF), the extended Kalman particle filter (PF-EKF), and the EKF.

1. INTRODUCTION

Nonlinear filtering problems arise in many fields of statistics and engineering, such as communications, radar tracking, sonar ranging, target tracking, and satellite navigation [1-7]. The best-known algorithm for the nonlinear filtering problem is the extended Kalman filter (EKF). This filter is based upon the principle of linearizing the measurement and evolution models using Taylor series expansions. The series approximations in the EKF algorithm can, however, lead to poor representations of the nonlinear functions and probability distributions of interest. As a result, this filter can diverge. Another popular solution strategy for the general nonlinear filtering problem is to use sequential Monte Carlo methods, also known as particle filters. Particle filtering is a class of methods for filtering, smoothing, etc. in non-linear and/or non-Gaussian state space models that may perform significantly better than traditional methods like the extended Kalman filter. A key issue in particle filtering is the selection of the proposal distribution function. In general, it is hard to design such proposals, and several proposal distributions have been suggested in the literature. For example, the transition prior, the EKF Gaussian approximation, and the UKF approximation have been used as proposal distributions for the particle filter [2,3,4]. In this paper, we follow the same approach, but replace the EKF proposal by an IEKF proposal. The IEKF computes the updated state not as an approximate conditional mean (a linear combination of the prediction and the innovation) but as a maximum a posteriori (MAP) estimate [1]. The IEKF can therefore generate proposal distributions that are closer to the true mean of the target distribution.

This work is organized as follows. The generic particle filter is formulated in Section 2. The proposed iterated extended Kalman particle filter is presented in Section 3. Simulation results are given in Section 4, and conclusions are presented in Section 5.

2. PARTICLE FILTER

2.1 Modeling Assumption

Consider the nonlinear discrete-time dynamic system:

x_k = f_k(x_{k-1}, v_{k-1})    (1)

z_k = h_k(x_k, e_k)    (2)

where f_k : R^{n_x} x R^{n_v} -> R^{n_x} and h_k : R^{n_x} x R^{n_e} -> R^{n_z} represent the system evolution function and the measurement model function. x_k ∈ R^{n_x} is the system state at time k, and z_k ∈ R^{n_z} is the measurement vector at time k. v_{k-1} ∈ R^{n_v} and e_k ∈ R^{n_e} represent the process noise and the measurement noise, respectively.

2.2 Particle Filter

The sequential importance sampling (SIS) algorithm is a Monte Carlo method that forms the basis for most sequential MC filters developed over the past decades; see [3]. The sequential MC approach is known variously as bootstrap filtering, the condensation algorithm, particle filtering, etc. The key idea is to represent the required posterior density function by a set of random samples with associated weights and to compute estimates based on these samples and weights.

Let {x^i_{0:k}, w^i_k}, i = 1, ..., N_s, denote a random measure that characterizes the posterior pdf p(x_{0:k} | z_{1:k}), where {x^i_{0:k}, i = 1, ..., N_s} is a set of support points with associated weights {w^i_k, i = 1, ..., N_s}, and x_{0:k} = {x_j, j = 0, ..., k} is the set of all states up to time k. The weights are normalized such that sum_i w^i_k = 1. Then, the posterior density at time k can be approximated as

p(x_{0:k} | z_{1:k}) ≈ sum_{i=1}^{N_s} w^i_k δ(x_{0:k} - x^i_{0:k})    (3)

where

w^i_k ∝ w^i_{k-1} p(z_k | x^i_k) p(x^i_k | x^i_{k-1}) / q(x^i_k | x^i_{k-1}, z_k)    (4)

Here p(z_k | x^i_k) is the likelihood of the measurement z_k, and q(x^i_k | x^i_{k-1}, z_k) is the importance density. It can be shown that as N_s -> infinity, the approximation (3) approaches the true posterior density p(x_{0:k} | z_{1:k}).

Equations (3) and (4) show that the particle filter consists of a recursive propagation of the weights and support points as each measurement is received sequentially. A common problem with the particle filter is the degeneracy phenomenon. In order to avoid the degeneracy of the particles, the resampling scheme is important to the particle filter. Many resampling schemes exist, such as sampling importance resampling, residual resampling, and minimum variance sampling. In this paper, residual resampling is used in all of the experiments. A generic particle filter is then described by Algorithm 1. The detailed derivation of the particle filter can be found in [3].

Algorithm 1: Generic Particle Filter
1. Initialization: k = 0
   • For i = 1, ..., N_s, draw the states x^i_0 from the prior p(x_0)
2. FOR k = 1, 2, ...
   • FOR i = 1 : N_s
       Draw x^i_k ~ q(x_k | x^i_{k-1}, z_k)
       Assign the particle a weight w^i_k according to (4)
     END FOR
   • FOR i = 1 : N_s
       Normalize the weights: w^i_k = w^i_k / sum_{j=1:N_s} w^j_k
   • END FOR
   • Resample:
       Multiply/suppress samples x^i_{0:k} with high/low importance weights w^i_k, respectively, to obtain N_s random samples x^i_{0:k} approximately distributed according to p(x_{0:k} | z_{1:k})
       For i = 1 : N_s, set w^i_k = 1/N_s

3. THE ITERATED EXTENDED KALMAN PARTICLE FILTER

The choice of proposal function is one of the most critical design issues in a particle filter algorithm. Doucet et al. [7] proved that the optimal proposal distribution q(x_k | x_{0:k-1}, z_{1:k}) = p(x_k | x_{0:k-1}, z_{1:k}) minimizes the variance of the importance weights conditional on x_{0:k-1} and z_{1:k}. This choice of proposal distribution has also been advocated by other researchers. However, this optimal importance density suffers from two major drawbacks: it requires the ability to sample from p(x_k | x_{0:k-1}, z_{1:k}) and to evaluate an integral over the new state. In the general case, it may not be straightforward to do either of these things. Therefore, the transition density p(x_k | x_{0:k-1}) is the most popular choice of proposal function. Compared with the optimal proposal distribution, it is attractive due to its simplicity both in sampling from the prior densities and in the calculation of the weights. But because it does not incorporate the most recent observation, this proposal distribution is sometimes very inefficient and the estimation results are poor. Several techniques have been proposed to solve this problem; for example, the EKF or UKF approximation has been used as the proposal distribution for a particle filter. In this section, we use the IEKF to generate proposal distributions with better performance than the EKF or UKF.

3.1 The Iterated Extended Kalman Filter

The density function of x(k+1) given Z^{k+1} can be written, assuming all the pertinent random variables to be Gaussian, as

p[x(k+1) | Z^{k+1}] = p[x(k+1) | z(k+1), Z^k]
  = (1/c) p[z(k+1) | x(k+1)] p[x(k+1) | Z^k]
  = (1/c) N[z(k+1); h[k+1, x(k+1)], R(k+1)] N[x(k+1); x̂(k+1|k), P(k+1|k)]    (5)

Maximizing the above with respect to x(k+1) is equivalent to minimizing

J[x(k+1)] = (1/2) {z(k+1) - h[k+1, x(k+1)]}' R(k+1)^{-1} {z(k+1) - h[k+1, x(k+1)]}
          + (1/2) [x(k+1) - x̂(k+1|k)]' P(k+1|k)^{-1} [x(k+1) - x̂(k+1|k)]    (6)

The iterative minimization of (6), say, using a Newton-Raphson algorithm, will yield an approximate MAP estimate of x(k+1). This is done by expanding J in a Taylor series up to second order about the i-th iterated value of the estimate of x(k+1), denoted (without the time argument) as x^i:

J = J^i + (J^i_x)'(x - x^i) + (1/2)(x - x^i)' J^i_{xx} (x - x^i)    (7)

where, using abbreviated notation,

J^i = J |_{x = x^i}
J^i_x = grad_x J |_{x = x^i}
J^i_{xx} = grad_x grad'_x J |_{x = x^i}

are the gradient and Hessian of J with respect to x(k+1). Setting the gradient of (7) with respect to x to zero yields the next value of x in the iteration:

x^{i+1} = x^i - (J^i_{xx})^{-1} J^i_x    (8)
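As a scalar illustration of the Newton-Raphson iteration (8), consider a toy problem (our own illustrative numbers, not from the paper) with quadratic measurement h(x) = x^2. Note that the toy uses the full second derivative of J, whereas the Hessian (10) used later in the paper keeps only the first-derivative term of h:

```python
import numpy as np

# Toy scalar MAP problem (hypothetical values): minimize
#   J(x) = (z - h(x))**2 / (2*R) + (x - x_pred)**2 / (2*P),  h(x) = x**2,
# via the Newton-Raphson iteration (8): x_{i+1} = x_i - J''(x_i)**-1 * J'(x_i).
z, R, x_pred, P = 4.1, 0.01, 1.8, 0.5
h  = lambda x: x ** 2
h1 = lambda x: 2 * x          # dh/dx
h2 = lambda x: 2.0            # d2h/dx2

x = x_pred                    # start at the predicted state, as in (13)
for _ in range(20):
    r  = z - h(x)                                   # innovation
    J1 = -h1(x) * r / R + (x - x_pred) / P          # gradient of J
    J2 = (h1(x) ** 2 - h2(x) * r) / R + 1.0 / P     # full second derivative
    x  = x - J1 / J2                                # Newton step (8)
# x now sits near the MAP point, close to sqrt(z), where the gradient vanishes
```

The iterate moves from the prediction 1.8 toward the measurement-consistent value near sqrt(4.1), balancing the two quadratic penalty terms in J.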

The gradient J^i_x is, in full notation,

J^i_x = -h_x[k+1, x̂^i(k+1|k+1)]' R(k+1)^{-1} {z(k+1) - h[k+1, x̂^i(k+1|k+1)]}
      + P(k+1|k)^{-1} [x̂^i(k+1|k+1) - x̂(k+1|k)]    (9)

The Hessian of J, retaining only terms up to the first derivative of h, is

J^i_{xx} = h_x[k+1, x̂^i(k+1|k+1)]' R(k+1)^{-1} h_x[k+1, x̂^i(k+1|k+1)] + P(k+1|k)^{-1}
         = H^i(k+1)' R(k+1)^{-1} H^i(k+1) + P(k+1|k)^{-1}    (10)

where

H^i(k+1) = h_x[k+1, x̂^i(k+1|k+1)]

is the Jacobian of the relinearized measurement equation. Using the matrix inversion lemma [1], one has

(J^i_{xx})^{-1} = P(k+1|k) - P(k+1|k) H^i(k+1)' [H^i(k+1) P(k+1|k) H^i(k+1)' + R(k+1)]^{-1} H^i(k+1) P(k+1|k)
                = P^i(k+1|k+1)    (11)

Substituting (9) and (11) into (8) yields

x̂^{i+1}(k+1|k+1) = x̂^i(k+1|k+1)
  + P^i(k+1|k+1) H^i(k+1)' R(k+1)^{-1} {z(k+1) - h[k+1, x̂^i(k+1|k+1)]}
  - P^i(k+1|k+1) P(k+1|k)^{-1} [x̂^i(k+1|k+1) - x̂(k+1|k)]    (12)

which is the iterated extended Kalman filter. Starting the iteration for i = 0 with

x̂^0(k+1|k+1) = x̂(k+1|k)    (13)

causes the last term in (12) to be zero and yields after the first iteration x̂^1(k+1|k+1), that is, the same estimate as the first-order (noniterated) EKF. The covariance associated with x̂^i(k+1|k+1) is, from (11), given by

P^i(k+1|k+1) = P(k+1|k) - P(k+1|k) H^i(k+1)' [H^i(k+1) P(k+1|k) H^i(k+1)' + R(k+1)]^{-1} H^i(k+1) P(k+1|k)    (14)

3.2 The Iterated Extended Kalman Particle Filter

As shown in the section above, an approximate MAP estimate can be obtained by an iteration that amounts to relinearization of the measurement equation, so the iterated extended Kalman filter is more accurate than the EKF. The distribution generated by the IEKF generally has a bigger support overlap with the true posterior distribution than the overlap achieved by the EKF estimates. This makes the IEKF a better candidate for accurate proposal distribution generation within the particle filter framework. The new filter that results from using an IEKF for proposal distribution generation within a particle filter framework is called the iterated extended Kalman particle filter (PF-IEKF). The algorithm is as follows.

Algorithm 2: The Iterated Extended Kalman Particle Filter
1. Initialization: k = 0
   • For i = 1, ..., N_s, draw the states x^i_0 from the prior p(x_0)
2. FOR k = 1, 2, ...
   • FOR i = 1 : N_s
       -- Compute the Jacobians F^i_k and G^i_k of the process model
       -- Update the particles with the IEKF:
            x̂^i_{k+1|k} = f(x^i_{k|k})
            P^i(k+1|k) = F^i_k P^i(k|k) (F^i_k)' + G^i_k Q_k (G^i_k)'
          FOR j = 1 : c (c is the number of iterations)
            Compute the Jacobians H^i_{k_j} and U^i_{k_j} of the measurement model
            Update the covariance P^i_{k_j} with equation (14)
            Update the state estimate x̂^i_{k_j} with equation (12)
          END FOR
       -- Draw x^i_k ~ q(x_k | x^i_{k-1}, z_k) = N(x̂^i_{k_j}, P^i_{k_j})
       -- Assign the particle a weight w^i_k according to (4)
     END FOR
   • FOR i = 1 : N_s
       Normalize the weights: w^i_k = w^i_k / sum_{j=1:N_s} w^j_k
   • END FOR
   • Resample:
       Multiply/suppress samples x^i_{0:k} with high/low importance weights w^i_k, respectively, to obtain N_s random samples x^i_{0:k} approximately distributed according to p(x_{0:k} | z_{1:k})
       For i = 1 : N_s, set w^i_k = 1/N_s

4. SIMULATION

This section presents the simulation results of the proposed algorithm. In order to compare its performance with that of the conventional methods, the system models were taken from [4] as follows:

x_k = 1 + sin(0.4π k) + 0.5 x_{k-1} + v_k

y_k = 0.2 x_k^2 + e_k       for k <= 30
y_k = 0.5 x_k - 2 + e_k     for k > 30    (15)

where v_k is a Gamma(3, 2) random variable modeling the process noise, and the measurement noise e_k is drawn from a Gaussian distribution N(0, 0.0001). The experiment was repeated 100 times with random re-initialization for each run. All of the particle filters used 200 particles and residual resampling. The output of each algorithm is the mean of the sample set,

x̂_k = (1/N_s) sum_{j=1}^{N_s} x^j_k

and the mean square error of each run is defined as

MSE = [ (1/T) sum_{k=1}^{T} (x_k - x̂_k)^2 ]^{1/2}    (16)
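The benchmark model (15) and the error measure (16) are straightforward to reproduce. A minimal NumPy sketch follows (function names are ours; the Gamma shape/scale convention and the trajectory length are assumptions, not stated in the paper):

```python
import numpy as np

def simulate(T=60, rng=None):
    """Generate one trajectory from the benchmark model (15):
    x_k = 1 + sin(0.4*pi*k) + 0.5*x_{k-1} + v_k,  v_k ~ Gamma(3, 2)
    y_k = 0.2*x_k**2 + e_k        for k <= 30
    y_k = 0.5*x_k - 2 + e_k       for k >  30,    e_k ~ N(0, 1e-4)
    (Gamma shape=3, scale=2 is an assumed convention.)"""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(T + 1)
    y = np.zeros(T + 1)
    for k in range(1, T + 1):
        x[k] = (1 + np.sin(0.4 * np.pi * k) + 0.5 * x[k - 1]
                + rng.gamma(shape=3.0, scale=2.0))
        e = rng.normal(0.0, np.sqrt(1e-4))       # std dev = sqrt(variance)
        y[k] = 0.2 * x[k] ** 2 + e if k <= 30 else 0.5 * x[k] - 2 + e
    return x[1:], y[1:]

def mse(x_true, x_hat):
    """Per-run root mean square error, eq. (16)."""
    return np.sqrt(np.mean((np.asarray(x_true) - np.asarray(x_hat)) ** 2))
```

Running each filter on trajectories produced by `simulate` and averaging `mse` over repeated runs reproduces the experimental protocol described above.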

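The per-particle IEKF proposal step of Algorithm 2, i.e. equations (12) and (14), can be written in an equivalent Kalman-gain form that avoids inverting P(k+1|k). A minimal NumPy sketch (function and argument names are ours, not the paper's):

```python
import numpy as np

def iekf_update(x_pred, P_pred, z, h, H_jac, R, n_iter=3):
    """Iterated EKF measurement update: relinearize h at each iterate.
    The gain-form state update below is algebraically equal to (12),
    and P_i is the covariance (14). Starting from x_pred, a single
    iteration reproduces the plain (noniterated) EKF update."""
    x_i = x_pred
    for _ in range(n_iter):
        H = H_jac(x_i)                          # relinearized Jacobian H^i
        S = H @ P_pred @ H.T + R                # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)     # gain
        P_i = P_pred - K @ H @ P_pred           # eq. (14)
        # eq. (12) rewritten in gain form:
        x_i = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
    return x_i, P_i
```

For a linear measurement model the iteration converges after the first step, so the result coincides with the ordinary Kalman filter update, which is a convenient sanity check.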
Fig. 1. Estimates of the system state

Fig. 1 compares the estimates of the system state generated from a single run of the different particle filters. The results show that the estimates of the PF and the PF-EKF are sometimes strongly biased away from the true state. Because the UKF and the IEKF calculate the posterior covariance accurately to the second order, the UPF and the PF-IEKF provide an improvement over the results of the PF and the PF-EKF, and the PF-IEKF clearly achieves the better performance. Table 1 summarizes the performance of the different filters. The table shows the means and variances of the mean-square error (MSE) of the state estimates. For non-linear and/or non-Gaussian problems, the performance of the PF-IEKF is superior to that of the other solutions, and the EKF is the worst.

Table 1: The mean and variance of the MSE

Algorithm    MSE mean    MSE var
EKF          0.3933      0.0182
UKF          0.2969      0.0127
IEKF         0.1504      0.0098
PF           0.2205      0.0477
PF-EKF       0.3231      0.0183
PF-UKF       0.0675      0.0062
PF-IEKF      0.0495      0.0012

5. CONCLUSION

In this paper, a new particle filter that uses the IEKF to generate the proposal distribution is proposed. Because the IEKF can generate an approximate MAP estimate of the system state, the proposal distribution based on the IEKF is closer to the true posterior distribution than those of the other methods such as the EKF and the UKF. The simulation results show that the proposed particle filter is evidently better than the standard particle filter. At the same time, because the performance of the IEKF is superior to that of the EKF and the UKF, the precision of the proposed particle filter is higher than that of the PF-EKF and the UPF.

REFERENCES

[1] Bar-Shalom, Y., Li, X.R., and Kirubarajan, T., Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software. New York: Wiley, 2001.
[2] Gordon, N.J., Salmond, D.J., and Smith, A.F.M., "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proceedings F: Radar and Signal Processing, vol. 140, no. 2, pp. 107-113, April 1993.
[3] Arulampalam, M.S., Maskell, S., Gordon, N., and Clapp, T., "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, Feb. 2002.
[4] van der Merwe, R., and Doucet, A., "The Unscented Particle Filter," Advances in Neural Information Processing Systems, MIT Press, 2000.
[5] Carpenter, J., Clifford, P., and Fearnhead, P., "Improved particle filter for nonlinear problems," IEE Proc. Radar, Sonar, Navig., 1999.
[6] Morelande, M.R., and Challa, S., "Manoeuvring target tracking in clutter using particle filters," IEEE Transactions on Aerospace and Electronic Systems, vol. 41, no. 1, pp. 252-270, Jan. 2005.
[7] Doucet, A., "On sequential Monte Carlo methods for Bayesian filtering," Dept. Eng., Univ. Cambridge, UK, Tech. Rep., 1998.