Markov Chain Distributed Particle Filters (MCDPF)

Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, P.R. China, December 16-18, 2009 (ThC07.6)

Sun Hwan Lee* and Matthew West†

Abstract— Distributed particle filters (DPF) are known to provide robustness for the state estimation problem and can reduce the amount of information communication compared to centralized approaches. Due to the difficulty of merging multiple distributions represented by particles and associated weights, however, most uses of DPF to date tend to approximate the posterior distribution using a parametric model or to use a predetermined message path. In this paper, the Markov Chain Distributed Particle Filter (MCDPF) algorithm is proposed, based on particles performing random walks across the network. This approach maintains robustness, since every sensor only needs to exchange particles and weights locally, and furthermore enables more general representations of posterior distributions because there are no a priori assumptions on distribution form. The paper provides a proof of weak convergence of the MCDPF algorithm to the corresponding centralized particle filter and the optimal filtering solution, and concludes with a numerical study showing that MCDPF leads to reliable estimation of the posterior distribution of a nonlinear system.

I. INTRODUCTION

Distributed particle filters (DPF) have been emerging as an efficient tool for state estimation, for instance in target tracking with a robotic navigation system [1][2]. The general benefits of distributed estimation include robustness of the estimation, reduction of the amount of information flow, and estimation results comparable to the centralized approach. Much effort has been directed toward the realization of decentralized Kalman filtering [3][4][5], but decentralized particle filters were thought to be challenging due to the difficulty of merging probability distributions represented by particles and weights [6]. The currently existing distributed particle filtering methods, however, are not able to gain all of these advantages, or turn out to benefit from these properties only for relatively low-dimensional systems by introducing an assumption such as a Gaussian Mixture Model (GMM). The methods developed so far try to avoid exchanging the raw data, namely particles and associated weights, mainly due to the large amount of information that implies. The communication of such raw data scales better with system dimension, however, than the existing methods do. The distributed particle filter proposed in this paper exchanges particles and weights only between nearest-neighbor nodes and estimates the true state by assimilating data with an algorithm based on a Markov chain random walk.

Past work on distributed particle filters can be broadly categorized into two approaches, namely message passing approaches and consensus-based local information exchange methods. Message passing approaches transfer information along a predetermined route covering an entire network. For example, [7] passes parameters of a parametric model of the posterior distribution, while [8] transmits the raw information, particles and weights, or the parameters of a GMM approximation of the posterior distribution. Consensus-based methods communicate information only between the nearest nodes and achieve global consistency by consensus filtering. The type of exchanged information can be, for example, the parameters of a GMM approximation [2] or the local mean and covariance [9].

The message passing approaches [7][8] can have reduced robustness because the distributed algorithms cannot themselves cope with the failure of even one node, since the system uses fixed message paths. Furthermore, the assumption of synchronization with identical particles at every node can cause fragility. On the other hand, the consensus-based approaches so far proposed [2][9] all approximate the posterior distribution with a GMM in order to reduce information flow. The amount of information saved, however, is not significant compared with transmitting full particles when the dimension of the system is very large, due to the covariance matrix of the posterior distribution. For an n-dimensional system, a consensus-based DPF with a GMM approximation has to transmit O(cn²|E|) data through the entire network per consensus step, where c is the number of Gaussian mixture components and |E| is the number of network edges. If the DPF is realized by exchanging N particles, however, then the amount of information per Markov chain iteration is O(nmN), where m is the number of nodes. A detailed description of the Markov chain iteration is given below. For a system with cn|E| ≫ mN, a GMM approximation no longer benefits from the effect of reduced information flow. Furthermore, it frequently happens that the posterior distribution is not well approximated by a combination of a small number of Gaussian distributions.

In this paper we propose a distributed particle filter based on exchanging particles and associated weights according to a Markov chain random walk. This approach maintains the robustness of a distributed system, since each node only needs local information, and it scales well in the case of non-Gaussian and high-dimensional systems.

The contents of this paper are as follows. In Section II, a review of basic properties and theorems for random walks on graphs is provided. Section III contains the centralized particle filter (CPF) algorithm and the proposed decentralized particle filter algorithm. In addition, the weak convergence of the posterior distribution of the DPF to that of the CPF and the optimal filter is proved in this section. A numerical example using a bearing-only tracking problem with four sensors is given in Section IV. Conclusions and future work are discussed in Section V.

*S. H. Lee is with the Department of Aeronautics and Astronautics, Stanford University, [email protected]
†M. West is with the Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, [email protected]

978-1-4244-3872-3/09/$25.00 ©2009 IEEE

II. RANDOM WALKS ON GRAPHS

A sensor network system can be modeled as a graph G = (V, E) with a normalized adjacency matrix A. The vertices V = {1, ..., m} correspond to nodes or sensors in the network, and the edges E represent the connections between sensors. The neighbors of node i are given by N_i = {j ∈ V : A_ij ≠ 0}. The matrix A is a Markov transition probability matrix defined on the graph because it satisfies A ≥ 0 and A1 = 1. Consequently a random walk on the network can be defined according to the matrix A, and we assume no self-loops in the chain. Here we review several properties of random walks on graphs which are useful for the development of the DPF.

Proposition 2.1: If A is the normalized adjacency matrix of an undirected, connected graph G, then the Markov chain defined by A has a unique stationary distribution Π and Π_i > 0 for all i ∈ V. Moreover, for any initial distribution,

    lim_{k→∞} M(i, k)/k = Π_i,    (1)

where M(i, k) is the number of visits to state i during k steps.

Theorem 2.2: If A is the normalized adjacency matrix of an undirected, connected graph G, then the stationary distribution of the Markov chain defined by A is given by Π = (Π_1, Π_2, ..., Π_m), where Π_i = d(i)/(2|E|). Here d(i) is the degree of node i and |E| is the number of edges in the graph.

Proof: We compute

    (ΠA)_j = Σ_{i=1}^m Π_i A_ij = Σ_{i=1}^m [d(i)/(2|E|)] [|E_ij|/d(i)] = Π_j,    (2)

where |E_ij| is the number of edges connecting nodes i and j, and Σ_{i=1}^m |E_ij| = d(j). Since Π satisfies Π = ΠA, Π is the stationary distribution of the Markov chain defined by A.

III. PARTICLE FILTERS

We assume that the transition kernel and the conditional probability distribution of Y admit densities with respect to Lebesgue measure:

    Pr(X_t ∈ A | X_{t−1} = x_{t−1}) = ∫_A K(x_t | x_{t−1}) dx_t    (5)

    Pr(Y_t ∈ B | X_t = x_t) = ∫_B ρ(y_t | x_t) dy_t    (6)

where ρ(y_t | x_t) is the transition probability density of a measurement y_t given the state x_t.

The filtering problem is to estimate the true state x_t at time t given the time series of observations y_{1:t}. The prediction and update steps of optimal filtering based on Bayes' recursion are given as follows:

    p(x_t | y_{1:t−1}) = ∫_{R^n} p(x_{t−1} | y_{1:t−1}) K(x_t | x_{t−1}) dx_{t−1}    (7)

    p(x_t | y_{1:t}) = ρ(y_t | x_t) p(x_t | y_{1:t−1}) / ∫_{R^n} ρ(y_t | x_t) p(x_t | y_{1:t−1}) dx_t.    (8)

Analytic solutions for the posterior distribution in (8) do not generally exist except in special cases, such as linear dynamical systems with Gaussian noise. In the particle filtering setting, the posterior distribution is represented by a group of particles and associated weights, so that the integral in (8) is approximated by a sum over discrete values.

A. Centralized Particle Filters

Particle filtering is a recursive method to estimate the true state given the time series of measurements [10][11]. Suppose the posterior distribution at time t−1, π_{t−1|t−1}(dx_{t−1}), is approximated by N particles {x^i_{t−1}}_{i=1}^N. Then we have

    p(x_{t−1} | y_{1:t−1}) ≜ π_{t−1|t−1}(dx_{t−1}) ≈ π^N_{t−1|t−1}(dx_{t−1}) = (1/N) Σ_{i=1}^N δ_{x^i_{t−1}}(dx_{t−1}),    (9)

where particle i is at position x^i_{t−1} in state space. Now, the particles go through the prediction and measurement update steps to approximate the posterior distribution at time t. Given N particles, new particles are sampled from the transition kernel density, x̃^i_t ∼ π^N_{t−1|t−1}K(dx_t) = (1/N) Σ_{i=1}^N K(x_t | x^i_{t−1}). This set of particles is the approximation of π_{t|t−1},
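The communication-cost comparison from the Introduction can be made concrete with a small calculation. The sizes below are hypothetical, chosen only to illustrate the claimed scaling; they are not values from the paper.

```python
# Illustrative sizes (hypothetical, chosen only to show the scaling):
c, m, N = 5, 20, 500          # mixture components, nodes, particles
num_edges = 40                # |E| for the sensor network graph

def gmm_cost(n):
    # O(c n^2 |E|): each edge carries c covariance matrices per consensus step.
    return c * n * n * num_edges

def particle_cost(n):
    # O(n m N): every node forwards its N n-dimensional particles per iteration.
    return n * m * N

# For small state dimension n the GMM approximation is cheaper, but it loses
# once c * n * |E| exceeds m * N (here: 200 n vs 10000, i.e. n > 50).
assert gmm_cost(10) < particle_cost(10)
assert gmm_cost(100) > particle_cost(100)
```

With these numbers the crossover occurs exactly at n = mN/(c|E|) = 50, matching the cn|E| ≫ mN condition in the text.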
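Proposition 2.1 and Theorem 2.2 can be checked numerically. The sketch below uses an arbitrary four-node graph (an illustrative choice, not the network from the paper): it simulates the random walk defined by the normalized adjacency matrix and compares the empirical visit frequencies M(i, k)/k with the predicted stationary distribution d(i)/(2|E|).

```python
import random

# Undirected connected graph on m = 4 nodes, given as adjacency lists
# (illustrative example only).
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
num_edges = sum(len(v) for v in neighbors.values()) // 2  # |E| = 4

# Stationary distribution predicted by Theorem 2.2: Pi_i = d(i) / (2|E|).
pi_theory = {i: len(neighbors[i]) / (2 * num_edges) for i in neighbors}

# Simulate the random walk defined by the normalized adjacency matrix A:
# from node i, move to each neighbor with probability 1/d(i).
random.seed(0)
k = 200_000
visits = {i: 0 for i in neighbors}
node = 0
for _ in range(k):
    node = random.choice(neighbors[node])
    visits[node] += 1

# Proposition 2.1: M(i, k)/k -> Pi_i, regardless of the starting node.
for i in neighbors:
    assert abs(visits[i] / k - pi_theory[i]) < 0.01
```

Row-normalizing the adjacency matrix gives exactly these uniform-over-neighbors transition probabilities, so no explicit matrix is needed for the simulation.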
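The prediction and update steps of Section III-A can be sketched as one iteration of a bootstrap particle filter. The scalar model, noise levels, and particle count below are assumptions made for illustration; this is not the bearing-only tracking example of Section IV.

```python
import math
import random

random.seed(1)

# Illustrative scalar model: x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t,
# with Gaussian noises (a hypothetical system, not the paper's example).
def sample_kernel(x_prev):
    # Draw a new particle from K(. | x_prev), cf. the mixture in (9).
    return 0.9 * x_prev + random.gauss(0.0, 0.5)

def likelihood(y, x, sigma=0.5):
    # rho(y | x): Gaussian measurement density, as in (6).
    return math.exp(-0.5 * ((y - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def cpf_step(particles, y):
    # Prediction (7): propagate every particle through the transition kernel.
    predicted = [sample_kernel(x) for x in particles]
    # Update (8): weight by the likelihood of the observation, then normalize.
    weights = [likelihood(y, x) for x in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample with probability proportional to the weights, recovering an
    # equally weighted particle approximation of p(x_t | y_{1:t}).
    return random.choices(predicted, weights=weights, k=len(particles))

# One filtering step from an initial cloud of N particles.
N = 1000
particles = [random.gauss(0.0, 1.0) for _ in range(N)]
particles = cpf_step(particles, y=1.2)
estimate = sum(particles) / N  # posterior mean estimate of x_t
```

After observing y = 1.2 the posterior mean estimate lies between the prior mean 0 and the measurement, as expected for a Gaussian update.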
