Particle Filtering: A Brief Introductory Tutorial
Frank Wood, Gatsby, August 2007

Problem: Target Tracking
• A ballistic projectile has been launched in our direction and may or may not land near enough to kill us
• We have orders to launch an interceptor only if there is a >5% chance it will land near enough to kill us (the projectile's kill radius is 100 meters)
– the interceptor requires 5 seconds to destroy incoming projectiles
• We live under an evil dictator who will kill us himself if we unnecessarily launch an interceptor (they're expensive, after all)

Problem Schematic
[Figure: projectile at position (x_t, y_t); we observe its range r_t and bearing θ_t from our position at the origin (0,0)]

Problem problems
• Only noisy observations are available
• The true trajectory is unknown and must be inferred from the noisy observations
• Full details will be given at the end of the lecture

Probabilistic approach
• Treat the true trajectory as a sequence of latent variables
• Specify a model and do inference to recover the distribution over the latent variables (trajectories)

Linear State Space Model (LSSM)
• Discrete time
• First-order Markov chain

x_{t+1} = a x_t + ε,   ε ∼ N(μ_ε, σ_ε²)
y_t = b x_t + η,   η ∼ N(μ_η, σ_η²)

[Figure: graphical model — a chain x_{t-1} → x_t → x_{t+1}, with each state x_t emitting an observation y_t]

LSSM
• Joint distribution
p(x_{1:N}, y_{1:N}) = ∏_{i=1}^N p(y_i | x_i) p(x_i | x_{i-1})
• For prediction we want the posterior predictive distribution
p(x_i | y_{1:i-1})
• and the posterior (filtering) distribution
p(x_{i-1} | y_{1:i-1})

Inferring the distributions of interest
• Many methods exist to infer these distributions
– Markov chain Monte Carlo (MCMC)
– Variational inference
– Belief propagation
– etc.
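The LSSM defined above is easy to simulate. The following is a minimal sketch of the generative model; the parameter values are illustrative, and the noise means are taken to be zero for simplicity:

```python
import numpy as np

def simulate_lssm(a, b, sigma_eps, sigma_eta, T, x0=0.0, seed=0):
    """Simulate the linear state space model
       x_{t+1} = a*x_t + eps,  eps ~ N(0, sigma_eps^2)
       y_t     = b*x_t + eta,  eta ~ N(0, sigma_eta^2)
    (zero-mean noise assumed for simplicity)."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    y = np.empty(T)
    x[0] = x0
    for t in range(T):
        # emit a noisy observation of the current hidden state
        y[t] = b * x[t] + rng.normal(0.0, sigma_eta)
        # advance the first-order Markov chain
        if t + 1 < T:
            x[t + 1] = a * x[t] + rng.normal(0.0, sigma_eps)
    return x, y

x, y = simulate_lssm(a=0.95, b=1.0, sigma_eps=0.5, sigma_eta=1.0, T=100)
```

Only the noisy y_t would be available to the tracker; the hidden trajectory x_t is what inference must recover.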
• In this setting, sequential inference is possible because of the structure of the model, and preferable because of the problem's requirements

Exploiting the LSSM model structure for sequential inference
Use Bayes' rule and the conditional independence structure dictated by the LSSM graphical model:

p(x_i | y_{1:i}) = ∫ p(x_i, x_{1:i-1} | y_{1:i}) dx_{1:i-1}
             ∝ ∫ p(y_i | x_i, x_{1:i-1}, y_{1:i-1}) p(x_i, x_{1:i-1} | y_{1:i-1}) dx_{1:i-1}
             ∝ p(y_i | x_i) ∫ p(x_i | x_{1:i-1}, y_{1:i-1}) p(x_{1:i-1} | y_{1:i-1}) dx_{1:i-1}
             ∝ p(y_i | x_i) ∫ p(x_i | x_{i-1}) p(x_{1:i-1} | y_{1:i-1}) dx_{1:i-1}
             ∝ p(y_i | x_i) ∫ p(x_i | x_{i-1}) p(x_{i-1} | y_{1:i-1}) dx_{i-1}
             ∝ p(y_i | x_i) p(x_i | y_{1:i-1})

Particle filtering steps
• Start with a discrete representation of the posterior up to observation i-1
• Use Monte Carlo integration to represent the posterior predictive distribution as a finite mixture model
• Use importance sampling with the posterior predictive distribution as the proposal distribution to sample the posterior distribution up to observation i
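The three particle filtering steps above can be sketched as a single filtering update. This is a bootstrap-style sketch, not the lecture's exact scheme; `transition_sample` and `likelihood` are hypothetical helpers assumed to be supplied by the model:

```python
import numpy as np

def particle_filter_step(particles, weights, y, transition_sample, likelihood, rng):
    """One sequential update: propagate each particle through the state
    model (Monte Carlo posterior predictive), re-weight by the likelihood
    (importance sampling), then resample to an unweighted particle set."""
    # 1. Discrete representation of the posterior predictive:
    proposed = transition_sample(particles, rng)
    # 2. Importance weights: the proposal is the posterior predictive, so
    #    the unnormalized weight is the old weight times the likelihood.
    w = weights * likelihood(y, proposed)
    w = w / w.sum()
    # 3. Multinomial resampling to a uniformly weighted particle set.
    idx = rng.choice(len(proposed), size=len(proposed), p=w)
    return proposed[idx], np.full(len(proposed), 1.0 / len(proposed))
```

Each call consumes the particle approximation of p(x_{i-1} | y_{1:i-1}) and emits one for p(x_i | y_{1:i}).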
What?
p(x_i | y_{1:i}) ∝ p(y_i | x_i) ∫ p(x_i | x_{i-1}) p(x_{i-1} | y_{1:i-1}) dx_{i-1}
             ∝ p(y_i | x_i) p(x_i | y_{1:i-1})
Here p(y_i | x_i) is the likelihood, p(x_i | x_{i-1}) is the state model, and p(x_i | y_{1:i-1}) is the posterior predictive distribution. We start with a discrete representation of p(x_{i-1} | y_{1:i-1}).

Monte Carlo integration
General setup:
p(x) ≈ Σ_{ℓ=1}^L w_ℓ δ_{x_ℓ},   lim_{L→∞} Σ_{ℓ=1}^L w_ℓ f(x_ℓ) = ∫ f(x) p(x) dx

As applied in this stage of the particle filter:
p(x_i | y_{1:i-1}) = ∫ p(x_i | x_{i-1}) p(x_{i-1} | y_{1:i-1}) dx_{i-1} ≈ Σ_{ℓ=1}^L w_ℓ^{i-1} p(x_i | x_ℓ^{i-1})
(the samples come from p(x_{i-1} | y_{1:i-1}); the result is a finite mixture model)

Intuitions
• The particle filter name comes from the physical interpretation of the samples and the fact that the filtering distribution is often the distribution of interest
• Start with samples representing the hidden state: {w_ℓ^{i-1}, x_ℓ^{i-1}}_{ℓ=1}^L ∼ p(x_{i-1} | y_{1:i-1})
• Evolve them according to the state model: x̂_m^i ∼ p(· | x_ℓ^{i-1}),  ŵ_m^i ∝ w_m^{i-1}
• Re-weight them by the likelihood: w_m^i ∝ ŵ_m^i p(y_i | x̂_m^i)
• Down-sample / resample: {w_ℓ^i, x_ℓ^i}_{ℓ=1}^L ≈ {w_m^i, x_m^i}_{m=1}^M

Particle filtering
• Consists of two basic elements:
– Monte Carlo integration: p(x) ≈ Σ_{ℓ=1}^L w_ℓ δ_{x_ℓ},  lim_{L→∞} Σ_{ℓ=1}^L w_ℓ f(x_ℓ) = ∫ f(x) p(x) dx
– Importance sampling

Importance sampling
E_x[f(x)] = ∫ p(x) f(x) dx = ∫ (p(x) / q(x)) f(x) q(x) dx ≈ (1/L) Σ_{ℓ=1}^L (p(x_ℓ) / q(x_ℓ)) f(x_ℓ),   x_ℓ ∼ q(·)
• p(x) is the original distribution: hard to sample from, easy to evaluate
• q(x) is the proposal distribution: easy to sample from
• Importance weights: r_ℓ = p(x_ℓ) / q(x_ℓ)

Importance sampling with un-normalized distributions
• p(x) = p̃(x) / Z_p: un-normalized distribution to sample from — still hard to sample from, easy to evaluate
• q(x) = q̃(x) / Z_q: un-normalized proposal distribution — still easy to sample from
x_ℓ ∼ q̃(·)
E_x[f(x)] ≈ (1/L) Σ_{ℓ=1}^L (p(x_ℓ) / q(x_ℓ)) f(x_ℓ) = (Z_q / Z_p) (1/L) Σ_{ℓ=1}^L (p̃(x_ℓ) / q̃(x_ℓ)) f(x_ℓ)
• New term: the ratio of normalizing constants Z_q / Z_p

Normalizing the importance weights
• Un-normalized importance weights: r̃_ℓ = p̃(x_ℓ) / q̃(x_ℓ)
• A little algebra shows Z_p / Z_q ≈ (1/L) Σ_{ℓ=1}^L r̃_ℓ, giving normalized importance weights w_ℓ = r̃_ℓ / Σ_{ℓ'=1}^L r̃_{ℓ'}
E_x[f(x)] ≈ (Z_q / Z_p) (1/L) Σ_{ℓ=1}^L (p̃(x_ℓ) / q̃(x_ℓ)) f(x_ℓ) ≈ Σ_{ℓ=1}^L w_ℓ f(x_ℓ)
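Importance sampling as set up above, in a toy example: estimate E_p[x²] = 1 for the target p = N(0,1) using a wider Gaussian proposal q = N(0,2²). The target and proposal here are illustrative choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 200_000

# Target p = N(0,1): (pretend it is) hard to sample from, easy to evaluate.
log_p = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
# Proposal q = N(0, 2^2): easy to sample from.
log_q = lambda x: -0.5 * (x / 2.0)**2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)

x = rng.normal(0.0, 2.0, size=L)          # x_l ~ q(.)
r = np.exp(log_p(x) - log_q(x))           # importance weights r_l = p(x_l)/q(x_l)
f = lambda x: x**2
estimate = np.mean(r * f(x))              # approximates E_p[x^2] = 1
```

Because q has heavier tails than p, the weights r_ℓ are bounded and the estimator is well behaved; a proposal narrower than the target would instead produce occasional huge weights.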
PF'ing: forming the posterior predictive
Posterior up to observation i-1: {w_ℓ^{i-1}, x_ℓ^{i-1}}_{ℓ=1}^L ∼ p(x_{i-1} | y_{1:i-1})
p(x_i | y_{1:i-1}) = ∫ p(x_i | x_{i-1}) p(x_{i-1} | y_{1:i-1}) dx_{i-1} ≈ Σ_{ℓ=1}^L w_ℓ^{i-1} p(x_i | x_ℓ^{i-1})
The proposal distribution for importance sampling of the posterior up to observation i is this approximate posterior predictive distribution.

Sampling the posterior predictive
• Generating samples from the posterior predictive distribution is the first place where we can introduce variance reduction techniques
{ŵ_m^i, x̂_m^i}_{m=1}^M ∼ p(x_i | y_{1:i-1}),   p(x_i | y_{1:i-1}) ≈ Σ_{ℓ=1}^L w_ℓ^{i-1} p(x_i | x_ℓ^{i-1})
• For instance, sample from each mixture component twice, so that M, the number of samples drawn, is two times L, the number of densities in the mixture model, and assign weights ŵ_m^i = w_ℓ^{i-1} / 2

Not the best
• The most efficient Monte Carlo estimator of a function Γ(x):
– From survey sampling theory: Neyman allocation
– The number drawn from each mixture density is proportional to the weight of the mixture density times the standard deviation of the function Γ over that mixture density
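The "two samples per mixture component" scheme described above can be sketched as follows, assuming Gaussian transition densities p(x_i | x_ℓ^{i-1}) = N(a x_ℓ^{i-1}, σ_ε²) as in the LSSM (the parameter values below are illustrative):

```python
import numpy as np

def sample_posterior_predictive(particles, weights, a, sigma_eps, rng, k=2):
    """Draw k samples from each component w_l * p(x_i | x_l^{i-1}) of the
    posterior predictive mixture; each sample inherits weight w_l / k."""
    means = a * np.repeat(particles, k)        # k copies of each component mean
    proposed = rng.normal(means, sigma_eps)    # x_hat_m ~ p(. | x_l^{i-1})
    w_hat = np.repeat(weights, k) / k          # w_hat_m = w_l / k
    return proposed, w_hat

rng = np.random.default_rng(0)
parts = np.array([0.0, 1.0, 2.0])
wts = np.array([0.5, 0.3, 0.2])
xh, wh = sample_posterior_predictive(parts, wts, a=0.95, sigma_eps=0.5, rng=rng)
# M = 2L samples; the weights still sum to one
```

With k = 2 this is exactly the M = 2L over-sampling of the slide; Neyman allocation would instead make k vary per component.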
• Take home: smarter sampling is possible [Cochran 1963]

Over-sampling from the posterior predictive distribution
{ŵ_m^i, x̂_m^i}_{m=1}^M ∼ p(x_i | y_{1:i-1})

Importance sampling the posterior
• Recall that we want samples from
p(x_i | y_{1:i}) ∝ p(y_i | x_i) ∫ p(x_i | x_{i-1}) p(x_{i-1} | y_{1:i-1}) dx_{i-1} ∝ p(y_i | x_i) p(x_i | y_{1:i-1})
• and make the following importance sampling identifications:
– Distribution from which we want to sample: p̃(x_i) = p(y_i | x_i) p(x_i | y_{1:i-1})
– Proposal distribution: q(x_i) = p(x_i | y_{1:i-1}) ≈ Σ_{ℓ=1}^L w_ℓ^{i-1} p(x_i | x_ℓ^{i-1})

Sequential importance sampling
• Weighted posterior samples arise as {ŵ_m^i, x̂_m^i} ∼ q(·)
• Normalization of the weights takes place as before:
w_m^i = r̂_m^i / Σ_{ℓ=1}^L r̂_ℓ^i,   r̂_m^i = p̃(x̂_m^i) / q(x̂_m^i) = p(y_i | x̂_m^i) ŵ_m^i
• We are left with M weighted samples from the posterior up to observation i:
{w_m^i, x_m^i}_{m=1}^M ∼ p(x_i | y_{1:i})

An alternative view
{ŵ_m^i, x̂_m^i} ∼ Σ_{ℓ=1}^L w_ℓ^{i-1} p(x_i | x_ℓ^{i-1})
p(x_i | y_{1:i-1}) ≈ Σ_{m=1}^M ŵ_m^i δ_{x̂_m^i}
p(x_i | y_{1:i}) ≈ Σ_{m=1}^M p(y_i | x̂_m^i) ŵ_m^i δ_{x̂_m^i}

Importance sampling from the posterior distribution
{w_m^i, x_m^i}_{m=1}^M ∼ p(x_i | y_{1:i})

Sequential importance re-sampling
• Down-sample L particles and weights from the collection of M particles and weights:
{w_ℓ^i, x_ℓ^i}_{ℓ=1}^L ≈ {w_m^i, x_m^i}_{m=1}^M
• This can be done via multinomial sampling, or in a way that provably minimizes estimator variance [Fearnhead 04]

Down-sampling the particle set
{w_ℓ^i, x_ℓ^i}_{ℓ=1}^L ∼ p(x_i | y_{1:i})

Recap
• Starting with (weighted) samples from the posterior up to observation i-1,
• Monte Carlo integration was used to form a mixture model representation of the posterior predictive distribution
• The posterior predictive distribution was used as a proposal distribution for importance sampling of the posterior up to observation i
• M > L samples were drawn and re-weighted according to the likelihood (the importance weight), then the collection of particles was down-sampled to L weighted samples
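The down-sampling step described above, in its simple multinomial form (the variance-minimizing scheme of [Fearnhead 04] is more involved and not shown):

```python
import numpy as np

def downsample(particles, weights, L, rng):
    """Multinomial down-sampling: draw L indices with probability
    proportional to the weights; survivors are uniformly weighted."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    idx = rng.choice(len(w), size=L, p=w)
    return np.asarray(particles)[idx], np.full(L, 1.0 / L)

rng = np.random.default_rng(0)
parts, wts = downsample(np.arange(10.0), np.arange(1.0, 11.0), L=5, rng=rng)
```

High-weight particles tend to be duplicated and low-weight particles discarded, which combats weight degeneracy at the cost of some Monte Carlo variance.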
SIR Particle Filter
• The sequential importance sampling (SIS) particle filter multiplicatively updates weights at every iteration and never resamples particles (skipped)
– This leads to concentrated weight / particle degeneracy
• The sequential importance resampling (SIR) particle filter avoids many of the problems associated with SIS particle filtering, but always uses representations with L particles and does not maintain a weighted particle set

Particle evolution
[Figure sequence showing one filtering iteration:]
{w_ℓ^{i-1}, x_ℓ^{i-1}}_{ℓ=1}^L ∼ p(x_{i-1} | y_{1:i-1})
{ŵ_m^i, x̂_m^i}_{m=1}^M ∼ p(x_i | y_{1:i-1})
{w_m^i, x_m^i}_{m=1}^M ∼ p(x_i | y_{1:i})
{w_ℓ^i, x_ℓ^i}_{ℓ=1}^L ∼ p(x_i | y_{1:i})

LSSM not alone
• Various other models are amenable to sequential inference; Dirichlet process mixture modelling is one example, and dynamic Bayes nets are another

Tricks and Variants
• Reduce the dimensionality of the integrand through analytic integration
– Rao-Blackwellization
• Reduce the variance of the Monte Carlo estimator through:
– Maintaining a weighted particle set
– Stratified sampling
– Over-sampling
– Optimal re-sampling

Rao-Blackwellization
• In models where parameters can be analytically marginalized
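Putting the pieces together, a complete SIR (bootstrap-style) particle filter for the scalar LSSM might look like the following sketch. The N(0,1) initial particle draw and the parameter defaults are assumptions for illustration, not the lecture's exact implementation:

```python
import numpy as np

def sir_filter(y, a, b, sigma_eps, sigma_eta, L=500, seed=0):
    """SIR particle filter for the scalar LSSM
       x_{t+1} = a*x_t + N(0, sigma_eps^2),  y_t = b*x_t + N(0, sigma_eta^2).
    Returns the filtered posterior mean E[x_i | y_{1:i}] at each step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=L)   # initial draw for x_1 (assumed prior)
    means = []
    for t, obs in enumerate(y):
        if t > 0:
            # propagate through the state model: samples of the posterior predictive
            particles = a * particles + rng.normal(0.0, sigma_eps, size=L)
        # re-weight by the likelihood p(y_i | x_i) (log-space for stability)
        logw = -0.5 * ((obs - b * particles) / sigma_eta) ** 2
        w = np.exp(logw - logw.max())
        w = w / w.sum()
        means.append(np.sum(w * particles))
        # resample back to an unweighted set of L particles
        particles = particles[rng.choice(L, size=L, p=w)]
    return np.array(means)
```

Run against data simulated from the same model, the returned means should track the hidden trajectory to within roughly the observation noise.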