
Sequential Monte Carlo Techniques and Bayesian Filtering: Applications in Tracking

Samarjit Das

Abstract—Visual tracking is an important problem in various applications involving robotics and/or computer vision. The basic idea is to estimate a hidden state sequence from noisy observations under a given state space model. The state space model comprises a system model and an observation model. The system models discussed here have the Markov property, and the corresponding state space model is known as a hidden Markov model. Due to non-linearities in the system and the observation model, Bayesian filtering (also known as particle filtering) is an ideal candidate for tracking. Particle filtering is essentially a non-linear, non-Gaussian analog of the Kalman filter, and it is based on sequential Monte Carlo techniques. In this article we discuss Monte Carlo simulation techniques for solving numerical problems, especially the computation of posterior probability distributions involving complicated integrals. We then extend these ideas to sequential Monte Carlo techniques and develop the algorithms leading to particle filtering. We demonstrate applications of particle filtering in areas such as target tracking in a wireless sensor network.

Index Terms—Monte Carlo simulation, sequential Monte Carlo techniques, particle filtering, visual tracking

(This article was submitted as a term paper for the course ComS 577, Fall 2008 at Iowa State University, Ames, IA 50010.)

1 MONTE CARLO METHODS

Monte Carlo methods (also referred to as Monte Carlo simulation) are a class of computational algorithms that rely on repeated random sampling to compute their results. These methods are quite often used to simulate physical and mathematical systems. They are well suited to computation by computers as they rely on repeated computation and random or pseudo-random numbers. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. A simple example is the numerical computation of complicated integrals using Monte Carlo simulation.

Consider the following case. Say we want to compute

I = \int_a^b \psi(x)\, dx

where \psi(x) is a function with a very complicated closed form expression, making it impossible to integrate in closed form. We can solve this problem numerically using the Monte Carlo technique. First, consider a random variable X distributed over the interval [a, b] with probability density function (pdf) f_X(x). The closed form expression of f_X is known and we can draw samples w.r.t. f_X(\cdot). We draw N (sufficiently large) samples from the pdf,

x^i \sim f_X(x), \quad i = 1, \ldots, N

\hat{f}_X(x) \triangleq \frac{1}{N} \sum_{i=1}^{N} \delta(x - x^i)

where \delta(\cdot) denotes a Dirac delta function. Thus \hat{f}_X(x) is the discretized PMF (probability mass function) approximation of the continuous pdf f_X. Now the integral can be written as

I = \int_a^b \psi(x)\, dx = \int_a^b \frac{\psi(x)}{f_X(x)}\, f_X(x)\, dx = E_{f_X}\left[\frac{\psi(X)}{f_X(X)}\right]

where E_{f_X}[\cdot] denotes the expected value w.r.t. the distribution f_X. Now, we can estimate the expectation, and thus the integral, using the discretized version of the pdf, i.e. \hat{I} \approx E_{\hat{f}_X}\left[\frac{\psi(X)}{f_X(X)}\right], which can be computed as follows:

\hat{I} = \int_a^b \frac{\psi(x)}{f_X(x)} \left( \frac{1}{N} \sum_{i=1}^{N} \delta(x - x^i) \right) dx

\hat{I} = \frac{1}{N} \sum_{i=1}^{N} \frac{\psi(x^i)}{f_X(x^i)} \qquad (1)

Using the law of large numbers it can be shown that \hat{I} \to I as N \to \infty. Thus, by drawing a large number of samples in the interval [a, b], we are able to numerically approximate the integral I using the Monte Carlo technique.
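To make eq. (1) concrete, here is a minimal sketch of the Monte Carlo estimator in Python; the particular integrand psi(x), the interval [a, b] and the choice of a uniform sampling density f_X are assumptions made purely for illustration, not quantities from the text.

import numpy as np

# Illustrative choices (assumptions): psi(x) is some function that is awkward
# to integrate in closed form, and f_X is the uniform pdf on [a, b].
a, b = 0.0, 2.0
psi = lambda x: np.exp(-x**2) * np.sin(3.0 * x)**2
f_X = lambda x: np.full_like(x, 1.0 / (b - a))     # uniform density on [a, b]

N = 100_000
rng = np.random.default_rng(0)
x = rng.uniform(a, b, size=N)                      # x^i ~ f_X, i = 1, ..., N

# Eq. (1): I_hat = (1/N) * sum_i psi(x^i) / f_X(x^i)
I_hat = np.mean(psi(x) / f_X(x))
print(I_hat)    # approaches the true integral as N grows (law of large numbers)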
The process of sampling from the distribution f_X can be extended to the importance sampling technique, by which we can estimate the properties of a particular distribution while only having samples from a different distribution (known as the importance density or importance function) rather than the distribution of interest. This will be discussed in detail while developing the particle filtering algorithm.

2 BAYESIAN FILTERING PROBLEM

Consider a state-space model with t = 0, \ldots, T,

x_t = \phi(x_{t-1}) + n_t \qquad (2)

y_t = g(x_t) + v_t \qquad (3)

where x_t and y_t denote the state and the observation at the current instant, respectively. In real-life applications, for example, the state can be the position of a target while the observation is the noisy sensor data about the current position, and our goal could be to extract the true 'hidden' state information using the observations and the state dynamical model (the system model, eq (2)). The functions \phi(\cdot) and g(\cdot) can be any linear or non-linear functions. The following are assumed to be known: the initial state distribution p(x_0), the state transition density p(x_t | x_{t-1}) and the observation likelihood p(y_t | x_t). Here, p(\cdot) is used to denote a probability density function. The transition density and the observation likelihood can be determined from the known distributions of the i.i.d. noise sequences n_t and v_t. The state space is assumed to follow a hidden Markov model (HMM). In other words, the state sequence x_t is a Markov process and the observations y_t, t = 1, 2, \ldots are conditionally independent given the state at t. The states are hidden (i.e. not observable); all we can know about the system is through the observations available at each instant. The graphical model of the HMM is shown in Fig. 1. Under the HMM assumption, p(x_t | x_{t-1}, \text{past}) = p(x_t | x_{t-1}) and p(y_t | x_t, \text{past}) = p(y_t | x_t), e.g. p(y_t | x_t, y_{t-1}) = p(y_t | x_t).

Fig. 1. The hidden Markov model structure of the state space. The state sequence x_t is a Markov process and the observations y_t, t = 1, 2, ... are conditionally independent given the state at t.

The goal of particle filtering (or Bayesian filtering) is to estimate the joint posterior state distribution at time t, i.e. p(x_{1:t} | y_{1:t}), or quite often its marginal p(x_t | y_{1:t}). Here, x_{1:t} = \{x_1, x_2, \ldots, x_t\}. Finally, we would like to compute

I_t = E_{p(x_t | y_{1:t})}[f(X_t)] = \int f(x_t)\, p(x_t | y_{1:t})\, dx_t \qquad (4)

When the posterior can be assumed to be Gaussian and the functions \phi(\cdot) and g(\cdot) to be linear, the same problem can be solved recursively using the Kalman filter [1]. To tackle non-linearity of \phi(\cdot) and g(\cdot), the Extended Kalman filter [1] can be used, but it still assumes Gaussianity of the posterior distribution. In many practical problems, however, the posterior is highly non-Gaussian (quite often multimodal) together with non-linear \phi(\cdot) and g(\cdot). Under such circumstances, the sequential Monte Carlo technique based particle filtering algorithm gives us a way to solve this posterior estimation problem. In the next few sections, we shall develop the particle filter starting with Monte Carlo sampling for pdf approximation. We also discuss the sequential Monte Carlo method (especially sequential importance sampling) and how it leads to particle filtering.
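For concreteness, the following is a minimal sketch of the state space model in eqs. (2) and (3): it simulates one hidden state trajectory and the corresponding noisy observations. The specific choices of \phi(\cdot), g(\cdot), the Gaussian noise and the initial distribution are assumptions made for this illustration only.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative (assumed) non-linear system and observation functions phi(.) and g(.),
# driven by zero-mean Gaussian i.i.d. noise sequences n_t and v_t.
phi = lambda x: 0.9 * x + 2.0 * np.sin(x)   # state transition function phi(.)
g   = lambda x: x**2 / 10.0                 # observation function g(.)
sigma_n, sigma_v = 0.5, 0.3                 # assumed noise standard deviations

T = 50
x = np.zeros(T + 1)                         # hidden states x_0, ..., x_T
y = np.zeros(T + 1)                         # observations y_1, ..., y_T (y[0] unused)
x[0] = rng.normal(0.0, 1.0)                 # draw x_0 from the initial distribution p(x_0)

for t in range(1, T + 1):
    x[t] = phi(x[t - 1]) + rng.normal(0.0, sigma_n)   # eq. (2): x_t = phi(x_{t-1}) + n_t
    y[t] = g(x[t]) + rng.normal(0.0, sigma_v)         # eq. (3): y_t = g(x_t) + v_t

# A filter observes only y[1:]; the task is to estimate the hidden states x[1:] from it.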
3 DERIVATION OF THE PARTICLE FILTER

Similar to eq (4), say we are trying to compute the expectation w.r.t. the joint posterior distribution p(x_{1:t} | y_{1:t}),

E_{p(x_{1:t} | y_{1:t})}[f(X_{1:t})] = \int f(x_{1:t})\, p(x_{1:t} | y_{1:t})\, dx_{1:t} = \; ? \qquad (5)

In order to solve this problem, we have to look into two very important issues: a) Do we have a closed form expression for p(x_{1:t} | y_{1:t})? b) Can we sample from p(x_{1:t} | y_{1:t})? Depending on these two issues, we can use three different methods to solve this problem, namely simple Monte Carlo sampling, importance sampling and Bayesian importance sampling. The method involving Bayesian importance sampling is our main focus, as it is exactly what Bayesian/particle filtering does. These methods are discussed below.

3.1 Simple Monte Carlo Sampling

Consider the case when we can sample from the joint posterior distribution p(x_{1:t} | y_{1:t}). In that case we can approximate the distribution as a discretized version (as done in Sec. 1) in terms of the samples drawn from that distribution. Or,

x^i_{1:t} \sim p(x_{1:t} | y_{1:t}), \quad i = 1, \ldots, N

\hat{p}(x_{1:t} | y_{1:t}) \approx \frac{1}{N} \sum_{i=1}^{N} \delta(x_{1:t} - x^i_{1:t}) \qquad (6)

Now, we can approximate the expectation in equation (5) as follows,

E_{p(x_{1:t} | y_{1:t})}[f(X_{1:t})] \approx E_{\hat{p}(x_{1:t} | y_{1:t})}[f(X_{1:t})] = \frac{1}{N} \sum_{i=1}^{N} f(x^i_{1:t})

This is the most ideal case. In reality, we almost always cannot sample from the posterior, but we might have a closed form expression for the distribution. This leads to the second method, known as importance sampling.

3.2 Importance Sampling

Importance sampling can be used to estimate the properties of a particular distribution while only having samples from a different distribution (known as the importance density or importance function) rather than the distribution of interest. Thus, when we have a closed form expression for p(x_{1:t} | y_{1:t}) but cannot sample from it, we use an importance density to draw the samples.

When we do not have a closed form expression for the posterior p(x_{1:t} | y_{1:t}), it can still be expressed in the following manner,

p(x_{1:t} | y_{1:t}) = \frac{p(x_{1:t}, y_{1:t})}{p(y_{1:t})}, \quad \text{thus} \quad p(x_{1:t} | y_{1:t}) \propto p(x_{1:t}, y_{1:t}) \qquad (8)

It turns out that we can at least compute p(x_{1:t}, y_{1:t}) ...
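As a simple numerical illustration of the importance sampling idea of Sec. 3.2, the following sketch estimates an expectation under a one-dimensional target density p using samples drawn only from a different importance density q; the particular densities and the test function are assumptions made for this example, not quantities from the derivation above. Dividing by the sum of the weights (a self-normalized estimate) means p needs to be known only up to a proportionality constant, which is precisely the situation expressed in eq. (8).

import numpy as np

rng = np.random.default_rng(2)

def gauss_pdf(x, mu, sigma):
    # Gaussian density, used here only to build the illustrative p and q.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Assumed target p: a bimodal mixture we can evaluate but choose not to sample directly.
p = lambda x: 0.5 * gauss_pdf(x, -2.0, 0.7) + 0.5 * gauss_pdf(x, 2.0, 0.7)
# Assumed importance density q: a broad Gaussian that is easy to sample from.
q_pdf    = lambda x: gauss_pdf(x, 0.0, 3.0)
q_sample = lambda n: rng.normal(0.0, 3.0, size=n)

f = lambda x: x**2                        # we want E_p[f(X)]

N = 100_000
x = q_sample(N)                           # samples come from q, not from the target p
w = p(x) / q_pdf(x)                       # importance weights p(x^i) / q(x^i)

estimate = np.sum(w * f(x)) / np.sum(w)   # self-normalized importance sampling estimate
print(estimate)                           # close to E_p[X^2] = 0.7^2 + 2^2 = 4.49 for large N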