Kalman Filtering

Localization, Mapping, SLAM and the Kalman Filter (according to George)
Robotics Institute 16-735, Howie Choset
http://voronoi.sbp.ri.cmu.edu/~motion
http://voronoi.sbp.ri.cmu.edu/~choset
Slides adapted from George Kantor, G.D. Hager, and D. Fox.

The Problem
• What is the world around me? (mapping)
  – sense from various positions
  – integrate measurements to produce a map
  – assumes perfect knowledge of position
• Where am I in the world? (localization)
  – sense
  – relate sensor readings to a world model
  – compute location relative to the model
  – assumes a perfect world model
• Together, these are SLAM (Simultaneous Localization and Mapping).

Localization
• Tracking: known initial position
• Global localization: unknown initial position
• Re-localization: incorrect known position (the kidnapped robot problem)
• SLAM: mapping while tracking locally and globally
Challenges: sensor processing, position estimation, control scheme, exploration scheme, cycle closure, autonomy, tractability, scalability.

Representations for Robot Localization (spanning robotics and AI)
• Kalman filters (late '80s): Gaussians, approximately linear models, position tracking
• Discrete approaches ('95):
  – Topological representation ('95): uncertainty handling (POMDPs), occasional global localization, recovery
  – Grid-based, metric representation ('96): global localization, recovery
• Particle filters ('99): sample-based representation, global localization, recovery
• Multi-hypothesis ('00): multiple Kalman filters, global localization, recovery

Three Major Map Models
• Grid-Based: collection of discretized obstacle/free-space pixels (Elfes, Moravec, Thrun, Burgard, Fox, Simmons, Koenig, Konolige, etc.)
• Feature-Based: collection of landmark locations and correlated uncertainty (Smith/Self/Cheeseman, Durrant-Whyte, Leonard, Nebot, Christensen, etc.)
• Topological: collection of nodes and their interconnections (Kuipers/Byun, Chong/Kleeman, Dudek, Choset, Howard, Mataric, etc.)

Comparing the three models (a minimal data-structure sketch follows the H-SLAM slide below):
• Resolution vs. scale: grid-based – discrete localization; feature-based – arbitrary localization; topological – localize to nodes
• Computational complexity: grid-based – grows with grid size and resolution; feature-based – landmark covariance (N^2); topological – minimal
• Exploration strategies: grid-based – frontier-based exploration; feature-based – no inherent exploration; topological – graph exploration

Atlas Framework
• Hybrid solution:
  – Local features extracted from a local grid map.
  – A new local map frame is created when the current one reaches its complexity limit.
  – The topology consists of connected local map frames.
Authors: Chong, Kleeman; Bosse, Newman, Leonard, Soika, Feiten, Teller.

H-SLAM
(figure slide)
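To make the three map models concrete, here is a minimal sketch of how each might be stored in code. The class names and fields are illustrative assumptions for this sketch, not part of the original slides.

```python
# Illustrative data structures for the three map models (names/fields are assumptions).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GridMap:
    """Grid-based: discretized obstacle/free-space cells."""
    resolution: float                                              # meters per cell
    occupancy: np.ndarray = field(
        default_factory=lambda: np.zeros((100, 100)))              # 0 = free, 1 = occupied

@dataclass
class FeatureMap:
    """Feature-based: landmark locations plus correlated uncertainty."""
    landmarks: np.ndarray     # (N, 2) landmark positions
    covariance: np.ndarray    # (2N, 2N) joint covariance -> the O(N^2) cost noted above

@dataclass
class TopologicalMap:
    """Topological: nodes (places) and their interconnections."""
    nodes: list
    edges: dict               # adjacency list of traversable connections
```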
What does a Kalman Filter do, anyway?

Given the linear dynamical system
$$x(k+1) = F(k)\,x(k) + G(k)\,u(k) + v(k)$$
$$y(k) = H(k)\,x(k) + w(k)$$
where
• $x(k)$ is the n-dimensional state vector (unknown),
• $u(k)$ is the m-dimensional input vector (known),
• $y(k)$ is the p-dimensional output vector (known, measured),
• $F(k), G(k), H(k)$ are appropriately dimensioned system matrices (known),
• $v(k), w(k)$ are zero-mean, white Gaussian noise with known covariance matrices $Q(k), R(k)$,
the Kalman filter is a recursion that provides the "best" estimate of the state vector $x$.

What's so great about that?
• noise smoothing (improve noisy measurements)
• state estimation (for state feedback)
• recursive (computes the next estimate using only the most recent measurement)

How does it work?
1. Predict based on the last estimate:
   $$\hat{x}(k+1\,|\,k) = F(k)\,\hat{x}(k\,|\,k) + G(k)\,u(k), \qquad \hat{y}(k) = H(k)\,\hat{x}(k+1\,|\,k)$$
2. Calculate a correction based on the prediction and the current measurement:
   $$\Delta x = f\big(y(k+1),\ \hat{x}(k+1\,|\,k)\big)$$
3. Update the prediction:
   $$\hat{x}(k+1\,|\,k+1) = \hat{x}(k+1\,|\,k) + \Delta x$$

Finding the correction (no noise!)
Take $y = Hx$. Given the prediction $\hat{x}(k+1\,|\,k)$ and the output $y$, find $\Delta x$ so that $\hat{x} = \hat{x}(k+1\,|\,k) + \Delta x$ is the "best" estimate of $x$.
We want the estimate to be consistent with the sensor readings, i.e. to lie on $\Omega = \{x \mid Hx = y\}$. The "best" estimate comes from the shortest $\Delta x$, and the shortest $\Delta x$ is perpendicular to $\Omega$. (Figure: the prediction $\hat{x}(k+1\,|\,k)$ sits off the set $\Omega$, with $\Delta x$ the perpendicular correction onto it.)

Some linear algebra
• $a$ is parallel to $\Omega$ if $Ha = 0$, i.e. if $a$ lies in the null space $\mathrm{Null}(H) = \{a \neq 0 \mid Ha = 0\}$.
• For all $v \in \mathrm{Null}(H)$, $v \perp b$ if $b \in \mathrm{column}(H^T)$, i.e. if $b = H^T\gamma$ is a weighted sum of the columns of $H^T$.

Since the shortest $\Delta x$ is perpendicular to $\Omega$,
$$\Delta x \in \mathrm{null}(H)^{\perp} \;\Rightarrow\; \Delta x \in \mathrm{column}(H^T) \;\Rightarrow\; \Delta x = H^T\gamma .$$

The difference between the real output and the estimated output is the innovation:
$$\nu = y - H\,\hat{x}(k+1\,|\,k) = H\big(x - \hat{x}(k+1\,|\,k)\big).$$
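To make the prediction step and the innovation concrete, here is a minimal numeric sketch. The matrices F, G, H and all values are made-up assumptions for illustration, not from the slides.

```python
# Sketch of the prediction step and the innovation (made-up example numbers).
import numpy as np

# Assumed example system: 2-D state (position, velocity), scalar input, scalar output.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # state transition F(k)
G = np.array([[0.0],
              [0.1]])             # input matrix G(k)
H = np.array([[1.0, 0.0]])        # output matrix H(k): only position is measured

x_est = np.array([2.0, 1.0])      # last estimate   x_hat(k|k)
u = np.array([0.5])               # known input     u(k)
y = np.array([2.3])               # measured output y(k+1)

# 1. Prediction based on the last estimate: x_hat(k+1|k) = F x_hat(k|k) + G u(k)
x_pred = F @ x_est + G @ u
y_pred = H @ x_pred               # estimated output

# Innovation: real output minus estimated output, nu = y - H x_hat(k+1|k)
nu = y - y_pred
print("prediction:", x_pred, "innovation:", nu)
```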
Now assume $\gamma$ is a linear function of the innovation $\nu$, so that
$$\Delta x = H^T K \nu$$
for some $p \times p$ matrix $K$. (A guess, a hope; let's face it, the correction has to be some function of the innovation.)

In the noise-free case we require the corrected estimate to be consistent with the measurement:
$$H\big(\hat{x}(k+1\,|\,k) + \Delta x\big) = y
\;\Rightarrow\; H\,\Delta x = y - H\,\hat{x}(k+1\,|\,k) = H\big(x - \hat{x}(k+1\,|\,k)\big) = \nu .$$

Substituting $\Delta x = H^T K \nu$ yields
$$H H^T K \nu = \nu \;\Rightarrow\; K = (H H^T)^{-1},$$
and therefore
$$\Delta x = H^T (H H^T)^{-1}\,\nu .$$

Notes: $\Delta x$ must be perpendicular to $\Omega$ because anything in the range space of $H^T$ is perpendicular to $\Omega$; and the fact that this linear solution satisfies the constraint is what makes assuming a linear $K$ a kosher guess.
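A short numeric check of the noise-free correction, continuing the made-up example above; it verifies that the corrected estimate reproduces the measurement exactly.

```python
# Sketch: apply the noise-free correction  dx = H^T (H H^T)^(-1) nu
# and check that the corrected estimate is consistent with the measurement.
import numpy as np

H = np.array([[1.0, 0.0]])                 # output matrix (same assumed example)
x_pred = np.array([2.1, 1.05])             # prediction x_hat(k+1|k) from the sketch above
y = np.array([2.3])                        # measured output

nu = y - H @ x_pred                        # innovation
dx = H.T @ np.linalg.solve(H @ H.T, nu)    # dx = H^T (H H^T)^{-1} nu
x_new = x_pred + dx                        # updated estimate x_hat(k+1|k+1)

# Consistency check: H (x_pred + dx) should equal y (up to round-off).
assert np.allclose(H @ x_new, y)
print("correction:", dx, "updated estimate:", x_new)
```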
