Parallel Stochastic Particle Methods Using Markov Chain Random Walks

PARALLEL STOCHASTIC PARTICLE METHODS USING MARKOV CHAIN RANDOM WALKS

A DISSERTATION SUBMITTED TO THE DEPARTMENT OF AERONAUTICS AND ASTRONAUTICS AND THE COMMITTEE ON GRADUATE STUDIES OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

Sun Hwan Lee
December 2010

© 2011 by Sun Hwan Lee. All Rights Reserved. Re-distributed by Stanford University under license with the author. This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License. http://creativecommons.org/licenses/by-nc/3.0/us/

This dissertation is online at: http://purl.stanford.edu/jn897hc5058

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Matthew West, Primary Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Peter Glynn, Co-Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Juan Alonso

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Sanjay Lall

Approved for the Stanford University Committee on Graduate Studies.
Patricia J. Gumport, Vice Provost Graduate Education

This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.

Abstract

Particle methods, also known as Monte Carlo methods in the statistical community, have become a powerful tool in a variety of research areas such as chemistry, astronomy, and finance, to list a few. This is mainly due to the enormous advances in computational resources in recent years. In this work, we consider an efficient and robust parallel methodology that can be applied to particle methods in a general setting. The parallel methodology proposed in this thesis takes advantage of Markov chain random walks and the corresponding Markov chain theory. We develop parallel stochastic particle methods in two different areas: (1) the optimal filtering problem, and (2) the simulation of particle coagulation. In each application, a mathematical proof of convergence as well as a numerical example is provided.

After a brief review of Markov chain random walks and an explanation of the two application areas in Chapter 1, the Markov Chain Distributed Particle Filter (MCDPF) algorithm is introduced. The performance of this method is demonstrated with a bearing-only-measurement target-tracking numerical example and is further compared with an existing method, the Distributed Extended Kalman Filter (DEKF), using a flocking model for the target vehicles. We study the convergence of MCDPF to the Centralized Particle Filter (CPF) and to the optimal filtering solution by using results from Markov chain theory. In addition, the robustness of the MCDPF method is highlighted for practical problems.

As the second application area, we develop a parallel stochastic particle method for the stochastic simulation of Smoluchowski's coagulation equation.
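For reference (using generic notation, which may differ slightly from the convention adopted in Chapter 4), the discrete form of Smoluchowski's coagulation equation for the concentration c(t, k) of particles of size k is

\frac{\partial c(t,k)}{\partial t} = \frac{1}{2}\sum_{j=1}^{k-1} K(j,\,k-j)\,c(t,j)\,c(t,k-j) \;-\; c(t,k)\sum_{j=1}^{\infty} K(k,j)\,c(t,j),

where K(i, j) is the coagulation kernel specifying the rate at which particles of sizes i and j coalesce; the first term counts coagulation events that create a particle of size k, and the second counts events that consume one.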
This equation is used in many broad areas, and for high-dimensional problems the stochastic particle solution is more accurate, more stable, and computationally cheaper than classical numerical integration schemes. In this application, the simulated particles can be considered as representing physical particles. Since more particles result in more accurate and useful solutions, it is desirable to simulate this equation with a greater number of particles. By applying the parallel stochastic particle method, a comparable solution is obtained more efficiently using multiple processors, where each processor maintains many fewer particles and communicates with neighboring processors. A numerical study as well as a theoretical analysis is provided to demonstrate the convergence of the parallel stochastic particle algorithm.

Acknowledgement

During the six years I spent on my M.S. and Ph.D. degrees, I rarely paused to notice the importance of the people around me who helped me in various ways. One nice thing about defending the Ph.D. is that it gives me an opportunity to pause and appreciate these valuable people as I wrap up my studies.

First, I would like to thank Professor Matt West, my advisor, for giving me a great research opportunity and a great deal of valuable advice. He introduced me to the areas of numerical computation, stochastic systems, and probability, and his deep knowledge and generous support allowed me to explore fields that were entirely new to me. I could not have finished my degree without his guidance and patience. I also want to thank Professor Peter Glynn, who advised me after Professor Matt West left Stanford. I learned a great deal from his classes on stochastic systems and calculus, which equipped me with the theoretical background in those subjects. It was a great help to have someone I could consult about research in person. Thanks to Professors Sanjay Lall, Juan Alonso, and James Primbs for generously serving on the committee for my Ph.D. oral examination.

The Samsung Scholarship Foundation supported me for four years of my graduate studies. Along with the financial support, I really appreciate the opportunities to meet other Korean students studying across the world and the great experiences I shared with them.

My thanks also go to the friends I met here at Stanford: Younggeun Cho, Taemie Kim, Taesup Moon, Jeeyoung Peck, Chunki Park, Jinsung Kwon, Jongyoon Peck, Kahye Song, Daeseok Nam, Minyong Shin, Hyungsik Shin, Jonghan Kim, Jaeheung Park, and all SGBT members. I always miss Korea because of my friends there: Taesung Choi, Jiyoung Kang, Keum-Dong Jung, Yoonkyoung Hur, Jisun Peck, Hyejung Lee, Seungmin Wie, and Sehyuk Kwak.

Words are not enough to thank my family in Korea for their spiritual support and love. My sincere thanks go to my parents, Joowon Lee and Hwasook Park, whom I respect the most in the world, and to my older brother, Daehwan Lee, who is a good competitor in all kinds of sports and with whom I hope to play many rounds of golf. I also would like to thank my parents-in-law for their love and care.

Last but not least, I would like to thank my family, YeoMyoung and Yuna. My marriage and the birth of my daughter changed my Ph.D. life dramatically, but in a very positive way. From our first moment at the Stanford West tennis court to the completion of my Ph.D., we enjoyed life at Stanford as a student family, and I am excited about the journey toward the new stage of our life from here on. I love you and thank you, YeoMyoung and Yuna.
Contents

Abstract iv
Acknowledgement vi

1 Introduction 1
  1.1 Problem description 1
  1.2 Dissertation overview 3

2 Background 4
  2.1 Markov chain random walk 4
  2.2 Steady state of Markov chains 6

3 Markov Chain Distributed Particle Filter 9
  3.1 Introduction 10
  3.2 Random walks on a graph 12
  3.3 Particle filters 14
    3.3.1 Centralized particle filters 15
    3.3.2 The Markov Chain Distributed Particle Filter (MCDPF) 16
    3.3.3 Convergence to CPF and algorithm 18
    3.3.4 Convergence to optimal filtering 20
  3.4 Strong convergence 23
    3.4.1 Preliminaries 23
    3.4.2 Proof of strong convergence 27
  3.5 Numerical certificate of strong convergence 35
  3.6 Performance comparison 38
    3.6.1 Extended Kalman filter 38
    3.6.2 Numerical example 41
  3.7 Conclusions 48

4 Parallel stochastic simulation of coagulation 50
  4.1 Introduction 51
  4.2 Gillespie's method 52
    4.2.1 Numerical example 53
  4.3 Parallel stochastic particle algorithm 55
    4.3.1 Numerical example 58
  4.4 Convergence of parallel stochastic particle method 61
  4.5 Conclusions 74

Bibliography 75

List of Tables

3.1 Table of the algorithms, RMSE values, and the fraction of divergence (averaged over 1000 Monte Carlo runs) 43
3.2 Table of the algorithms, RMSE values, and the fraction of divergence (averaged over 1000 Monte Carlo runs) 46
3.3 Table of the algorithms, RMSE values, and the fraction of divergence 47

List of Figures

3.1 The trajectory estimation by CPF and DPF with Markov chain steps k = 4. 37
3.2 RMSE of MCDPF and CPF with respect to the number of executions (left) and different Markov chain steps k (right). 37
3.3 Trajectory of the flocking model and its position estimation by EKF, DEKF, REKF, RDEKF, CPF, and MCDPF. 44
3.4 RMSE versus time for EKF, DEKF, REKF, RDEKF, CPF, and MCDPF. 45
3.5 RMSE with respect to BW with changing N = 50, 100, 200, 500 (CPF), k_mc = 5, 10, 20, 50 (MCDPF), and k_con = 2, 6, 10, 14 (DEKF). The decrease in RMSE is observed with increased BW. 48
4.1 c(t, k) of the linear kernel with 2 ≤ k ≤ 10. 54
4.2 The stochastic solution with M_0 = 500, 120 and the histogram of τ_n. For M_0 = 120 and M_0 = 500 the portion of τ_n ≤ 0.01 is 0.4311 and 0.8016, respectively. 56
4.3 The plot of c_{10^4}(t, 5) and c̃_{10^4}(t, 5) with 3 different values of τ_mix. 59
4.4 The plot of c_{10^4}(5, k) and c̃_{10^4}(5, k) with 3 different values of τ_mix. 60
4.5 ‖e_R‖_2 defined in (4.22) up to an ensemble size of R = 10^4. 60
4.6 Particular realization of particle coagulation using Gillespie's algorithm.

Details

  • File Type
    pdf
  • Upload Time
    -
  • Content Languages
    English
  • Upload User
    Anonymous/Not logged-in
  • File Pages
    91 Pages
  • File Size
    -

Copyright

We respect the copyrights and intellectual property rights of all users. All uploaded documents are either original works of the uploader or authorized works of the rightful owners.

  • Content must not be reproduced or distributed without explicit permission.
  • Content must not be used for commercial purposes outside of approved use cases.
  • Content must not be used to infringe on the rights of the original creators.
  • If you believe any content infringes your copyright, please contact us immediately.

Support

For questions, suggestions, or problems, please contact us.