
Particle Filters in Robotics

Sebastian Thrun
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
http://www.cs.cmu.edu/~thrun

Abstract

In recent years, particle filters have solved several hard perceptual problems in robotics. Early successes of particle filters were limited to low-dimensional estimation problems, such as the problem of robot localization in environments with known maps. More recently, researchers have begun exploiting structural properties of robotic domains that have led to successful particle filter applications in spaces with as many as 100,000 dimensions. The fact that every model, no matter how detailed, fails to capture the full complexity of even the simplest robotic environments has led to specific tricks and techniques essential for the success of particle filters in robotic domains. This article surveys some of these recent innovations, and provides pointers to in-depth articles on the use of particle filters in robotics.

1 INTRODUCTION

One of the key developments in robotics has been the adoption of probabilistic techniques. In the 1970s, the predominant paradigm in robotics was model-based. Most research at that time focused on planning and control problems under the assumption of fully modeled, deterministic robots and robot environments. This changed radically in the mid-1980s, when the paradigm shifted towards reactive techniques. Approaches such as Brooks's behavior-based architecture generated control directly in response to sensor measurements [4]. Rejection of models quickly became typical of this approach.
Reactive techniques were arguably as limited as model-based ones, in that they replaced the unrealistic assumption of perfect models by an equally unrealistic one of perfect perception. Since the mid-1990s, robotics has shifted its focus towards techniques that utilize imperfect models and that incorporate imperfect sensor data. An important paradigm since the mid-1990s, whose origin can easily be traced back to the 1960s, is that of probabilistic robotics. Probabilistic robotics integrates imperfect models and imperfect sensors through probabilistic laws, such as Bayes rule. Many recently fielded state-of-the-art robotic systems employ probabilistic techniques for perception [12, 46, 52]; some go as far as using probabilistic techniques at all levels of perception and decision making [39].

This article focuses on particle filters and their role in robotics. Particle filters [9, 30, 40] comprise a broad family of sequential Monte Carlo algorithms for approximate inference in partially observable Markov chains (see [9] for an excellent overview of particle filters and applications). In robotics, early successes of particle filter implementations can be found in the area of robot localization, in which a robot's pose has to be recovered from sensor data [51]. Particle filters were able to solve two important, previously unsolved problems known as the global localization [2] and kidnapped robot [14] problems, in which a robot has to recover its pose under global uncertainty. These advances have led to a critical increase in the robustness of mobile robots, and the localization problem with a given map is now widely considered solved. More recently, particle filters have been at the core of solutions to much higher dimensional robot problems. Prominent examples include the simultaneous localization and mapping problem [8, 27, 36, 45], which, phrased as a state estimation problem, involves a variable number of dimensions. A recent particle-filter algorithm known as FastSLAM [34] has been demonstrated to solve problems with more than 100,000 dimensions in real time. Similar techniques have been developed for robustly tracking other moving entities, such as people in the proximity of a robot [35, 44].

However, the application of particle filters to robotics problems is not without caveats. A range of problems arise from the fact that no matter how detailed the probabilistic model, it will still be wrong, and in particular make false independence assumptions. In robotics, all models lack important state variables that systematically affect sensor and actuator noise. Probabilistic inference is further complicated by the fact that robot systems must make decisions in real time. This prohibits, for example, the use of vanilla (exponential-time) particle filters in many perceptual problems.

This article surveys some of the recent developments, and points out some of the opportunities and pitfalls specific to robotic problem domains.

Figure 1: Monte Carlo localization, a particle filter algorithm for state-of-the-art mobile robot localization. (a) Global uncertainty, (b) approximately bimodal uncertainty after navigating in the (symmetric) corridor, and (c) unimodal uncertainty after entering a uniquely-looking office.

Figure 2: Particle filters have been used successfully for on-board localization of soccer-playing Aibo robots with as few as 50 particles [26].

2 PARTICLE FILTERS

Particle filters are approximate techniques for calculating posteriors in partially observable, controllable Markov chains with discrete time. Suppose the state of the Markov chain at time t is given by x_t. Furthermore, the state x_t depends on the previous state x_{t-1} according to the probabilistic law p(x_t | u_t, x_{t-1}), where u_t is the control asserted in the time interval (t-1, t]. The state of the Markov chain is not observable. Instead, one can measure z_t, which is a stochastic projection of the true state x_t generated via the probabilistic law p(z_t | x_t). Furthermore, the initial state x_0 is distributed according to some distribution p(x_0). In robotics, p(x_t | u_t, x_{t-1}) is usually referred to as the actuation or motion model, and p(z_t | x_t) as the measurement model. They are usually highly geometric and generalize classical robotics notions such as kinematics and dynamics by adding non-deterministic noise.

The classical problem in partially observable Markov chains is to recover a posterior distribution over the state x_t at any time t from all available sensor measurements z^t = z_0, ..., z_t and controls u^t = u_0, ..., u_t. A solution to this problem is given by Bayes filters [20], which compute this posterior recursively:

  p(x_t | z^t, u^t) = const. · p(z_t | x_t) ∫ p(x_t | u_t, x_{t-1}) p(x_{t-1} | z^{t-1}, u^{t-1}) dx_{t-1}    (1)

under the initial condition p(x_0 | z^0, u^0) = p(x_0). If states, controls, and measurements are all discrete, the Markov chain is equivalent to a hidden Markov model (HMM) [41] and (1) can be implemented exactly. Representing the posterior takes space exponential in the number of state features, though more efficient approximations exist that can exploit conditional independences that might exist in the model of the Markov chain [3].
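To make the recursion (1) concrete, here is a minimal sketch, in Python, of one exact Bayes filter update for the discrete case just described. The state set and the two model functions (p_trans for p(x_t | u_t, x_{t-1}) and p_meas for p(z_t | x_t)) are hypothetical placeholders introduced only for this illustration; they are not part of the article.

    # One recursive update of equation (1) for a finite state space.
    # states, p_trans, and p_meas are illustrative placeholders.
    def bayes_filter_update(belief, u_t, z_t, states, p_trans, p_meas):
        # belief maps each state to p(x_{t-1} | z^{t-1}, u^{t-1}).
        # Prediction: the integral in (1) becomes a sum over x_{t-1}.
        predicted = {
            x_t: sum(p_trans(x_t, u_t, x_prev) * belief[x_prev] for x_prev in states)
            for x_t in states
        }
        # Measurement update: multiply by p(z_t | x_t) ...
        unnormalized = {x_t: p_meas(z_t, x_t) * predicted[x_t] for x_t in states}
        # ... and normalize; the normalizer plays the role of "const." in (1).
        eta = sum(unnormalized.values())
        return {x_t: p / eta for x_t, p in unnormalized.items()}

In this discrete form the update is exact, but its cost grows with the square of the number of states, which is why the exponential size of the state space in the number of state features quickly renders it impractical.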
In robotics, particle filters are usually applied in continuous state spaces. For continuous state spaces, closed-form solutions for calculating (1) are only known for highly specialized cases. If p(x_0) is Gaussian and p(x_t | u_t, x_{t-1}) and p(z_t | x_t) are linear in their arguments with added independent Gaussian noise, (1) is equivalent to the Kalman filter [23, 32]. Kalman filters require O(d^3) time for d-dimensional state spaces, although in many robotics problems the locality of sensor data allows for O(d^2) implementations. A common approximation in non-linear, non-Gaussian systems is to linearize the actuation and measurement models. If the linearization is obtained via a first-order Taylor series expansion, the result is known as the extended Kalman filter, or EKF [32]. Unscented filters [21] often obtain a better linear model through (non-random) sampling. However, all these techniques are confined to cases where the Gaussian-linear assumption is a suitable approximation.

Particle filters address the more general case of (nearly) unconstrained Markov chains. The basic idea is to approximate the posterior by a set of sample states {x_t^[i]}, or particles. Here each x_t^[i] is a concrete state sample of index i, where i ranges from 1 to M, the size of the particle filter. The most basic version of particle filters is given by the following algorithm (a code sketch appears below).

• Initialization: At time t = 0, draw M particles according to p(x_0). Call this set of particles X_0.

• Recursion: At time t > 0, generate a particle x_t^[i] for each particle x_{t-1}^[i] ∈ X_{t-1} by drawing from the actuation model p(x_t | u_t, x_{t-1}^[i]). Call the resulting set X̄_t. Subsequently, draw M particles from X̄_t, so that each x_t^[i] ∈ X̄_t is drawn (with replacement) with a probability proportional to p(z_t | x_t^[i]). Call the resulting set of particles X_t.

In the limit as M → ∞, this recursive procedure leads to particle sets X_t that converge uniformly to the desired posterior p(x_t | z^t, u^t), under some mild assumptions on the nature of the Markov chain.

Particle filters are attractive to roboticists for more than one reason.

Figure 3: MCL with the standard proposal distribution (dashed curve) compared to MCL with a hybrid mixture distribution (solid line) [51]. Shown here is the error over a 4,000-second episode of camera-based MCL of a museum tour-guide robot operating in the Smithsonian museum [50].

The most difficult variant of the localization problem, however, is the kidnapped robot problem [14], in which a well-localized robot is teleported to some other location without being told. This problem was reported, for example, in the context of the RoboCup soccer competition.
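Returning to the basic particle filter algorithm listed above, the following is a minimal sketch, again in Python, of the initialization and of one recursion step. The three model functions (sample_initial for p(x_0), sample_motion for p(x_t | u_t, x_{t-1}), and measurement_likelihood for p(z_t | x_t)) are assumed placeholders for this illustration, not code from the article.

    import random

    # Basic particle filter (sampling / importance resampling), following the
    # Initialization and Recursion steps above. The model functions are
    # hypothetical placeholders for p(x_0), p(x_t | u_t, x_{t-1}), and p(z_t | x_t).
    def initialize(M, sample_initial):
        # At time t = 0, draw M particles according to p(x_0).
        return [sample_initial() for _ in range(M)]

    def particle_filter_step(particles, u_t, z_t, sample_motion, measurement_likelihood):
        # Proposal: propagate each particle through the motion model p(x_t | u_t, x_{t-1}).
        proposed = [sample_motion(u_t, x_prev) for x_prev in particles]
        # Importance weights: w^[i] is proportional to p(z_t | x_t^[i]).
        weights = [measurement_likelihood(z_t, x) for x in proposed]
        # Resampling: draw M particles with replacement, with probability
        # proportional to the weights; this yields the new particle set X_t.
        return random.choices(proposed, weights=weights, k=len(proposed))

Note that the measurement model enters only through the resampling weights; if it assigns near-zero likelihood to all proposed particles, resampling degenerates, and mitigating such failures is one motivation for alternative proposal distributions such as the mixture proposal compared in Figure 3.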