
Localization, Mapping, SLAM and the Kalman Filter According to George

Robotics Institute 16-735 http://voronoi.sbp.ri.cmu.edu/~motion

Howie Choset http://voronoi.sbp.ri.cmu.edu/~choset

With slides from George Kantor, G.D. Hager, and D. Fox

The Problem

• What is the world around me? (mapping)
  – sense from various positions
  – integrate measurements to produce a map
  – assumes perfect knowledge of position

• Where am I in the world? (localization)
  – sense
  – relate sensor readings to a world model
  – compute location relative to the model
  – assumes a perfect world model

• Together, these are SLAM (Simultaneous Localization and Mapping)

Localization

Tracking: known initial position
Global Localization: unknown initial position
Re-Localization: incorrect known position (kidnapped robot problem)
SLAM: mapping while tracking locally and globally

Challenges
– Sensor processing
– Position estimation
– Control scheme
– Exploration scheme
– Cycle closure
– Autonomy
– Tractability
– Scalability

Representations for Robot Localization

Kalman filters (late-80s?)
• Gaussians
• approximately linear models
• position tracking

Discrete approaches ('95)
• Topological representation ('95): uncertainty handling (POMDPs), occasional global localization, recovery
• Grid-based, metric representation ('96): global localization, recovery

Particle filters ('99)
• sample-based representation
• global localization, recovery

Multi-hypothesis ('00)
• multiple Kalman filters
• global localization, recovery

Three Major Map Models

Grid-Based: collection of discretized obstacle/free-space pixels
Feature-Based: collection of landmark locations and correlated uncertainty
Topological: collection of nodes and their interconnections

(Elfes, Moravec, Smith/Self/Cheeseman, Kuipers/Byun, Thrun, Burgard, Fox, Durrant-Whyte, Leonard, Chong/Kleeman, Simmons, Koenig, Nebot, Christensen, Dudek, Choset, Konolige, Howard, Mataric, etc.)

Three Major Map Models

                          Grid-Based                   Feature-Based               Topological
Resolution vs. Scale      Discrete localization        Arbitrary localization      Localize to nodes
Computational Complexity  Grid size and resolution     Landmark covariance (N^2)   Minimal complexity
Exploration Strategies    Frontier-based exploration   No inherent exploration     Graph exploration

Atlas Framework

• Hybrid solution:
  – Local features extracted from a local grid map.
  – Local map frames created at a complexity limit.
  – Topology consists of connected local map frames.

Authors: Chong, Kleeman; Bosse, Newman, Leonard, Soika, Feiten, Teller

H-SLAM

What does a Kalman Filter do, anyway?

Given the linear dynamical system:

  x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
  y(k)   = H(k) x(k) + w(k)

  x(k) is the n-dimensional state vector (unknown)
  u(k) is the m-dimensional input vector (known)
  y(k) is the p-dimensional output vector (known, measured)
  F(k), G(k), H(k) are appropriately dimensioned system matrices (known)
  v(k), w(k) are zero-mean, white Gaussian noise with (known) covariance matrices Q(k), R(k),

the Kalman Filter is a recursion that provides the “best” estimate of the state vector x.

What's so great about that?

  x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
  y(k) = H(k) x(k) + w(k)

• noise (improves noisy measurements)
• state estimation (of the unknown state x)
• recursive (computes the next estimate using only the most recent measurement)

How does it work?

  x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
  y(k) = H(k) x(k) + w(k)

1. predict based on the last estimate:
   x̂(k+1|k) = F(k) x̂(k|k) + G(k) u(k)
   ŷ(k+1) = H(k) x̂(k+1|k)

2. calculate a correction based on the prediction and the current measurement:
   Δx = f(y(k+1), x̂(k+1|k))

3. update the prediction:
   x̂(k+1|k+1) = x̂(k+1|k) + Δx

Finding the correction (no noise!)

y = Hx

Given prediction x̂(k+1|k) and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

We want the best estimate to be consistent with the sensor readings, i.e. to lie on Ω = {x | Hx = y}. The "best" estimate comes from the shortest Δx, and the shortest Δx is perpendicular to Ω.

Some linear algebra

a is parallel to Ω if Ha = 0

Null(H) = {a ≠ 0 | Ha = 0}

Ω = {x | Hx = y}

a is parallel to Ω if it lies in the null space of H.

For all v ∈ Null(H), v ⊥ b if b ∈ column(H^T).

b ∈ column(H^T) means b = H^T γ, a weighted sum of the columns of H^T.

Finding the correction (no noise!)

y = Hx

Given prediction x̂(k+1|k) and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

The "best" estimate comes from the shortest Δx; the shortest Δx is perpendicular to Ω = {x | Hx = y}:
  Δx ⊥ Ω ⇒ Δx ∈ null(H)^⊥ ⇒ Δx ∈ column(H^T) ⇒ Δx = H^T γ

The real output minus the estimated output is the innovation:
  ν = y − H x̂(k+1|k) = H(x − x̂(k+1|k))

Assume γ is a linear function of ν ⇒ Δx = H^T K ν
(Guess, hope, let's face it: the correction has to be some function of the innovation; here we assume it is linear, for some p × p matrix K.)

Finding the correction (no noise!)

y = Hx

Given prediction x̂(k+1|k) and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

We require H(x̂(k+1|k) + Δx) = y
⇒ HΔx = y − H x̂(k+1|k) = H(x − x̂(k+1|k)) = ν

Substituting Δx = H^T K ν yields
  H H^T K ν = ν

(Δx must be perpendicular to Ω, because anything in the range space of H^T is perpendicular to Ω.)

⇒ K = (H H^T)^{-1}

(The fact that this linear choice of K solves the equation makes the assumption that the correction is linear in the innovation a kosher guess.)

  Δx = H^T (H H^T)^{-1} ν
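As a quick numerical check of this formula (a minimal sketch; the matrix H and the vectors below are made up for illustration), the correction Δx = H^T (H H^T)^{-1} ν moves the prediction exactly onto the hyperplane Ω:

```python
import numpy as np

# Illustrative 1x2 output map (one sensor, two states); values are arbitrary.
H = np.array([[1.0, 2.0]])
x_true = np.array([3.0, 1.0])       # unknown true state
x_pred = np.array([2.0, 0.0])       # prediction x_hat(k+1|k)

y = H @ x_true                      # noiseless measurement
nu = y - H @ x_pred                 # innovation

# Minimum-norm correction: dx = H^T (H H^T)^{-1} nu
dx = H.T @ np.linalg.solve(H @ H.T, nu)

x_new = x_pred + dx
print(H @ x_new, y)                 # equal: the corrected estimate lies on Omega
```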

A Geometric Interpretation

[Figure: the correction Δx orthogonally projects the prediction x̂(k+1|k) onto the hyperplane Ω = {x | Hx = y}.]

A Simple State Observer

System:
  x(k+1) = F x(k) + G u(k)
  y(k) = H x(k)

Observer:
1. prediction: x̂(k+1|k) = F x̂(k|k) + G u(k)

2. compute correction: Δx = H^T (H H^T)^{-1} (y(k+1) − H x̂(k+1|k))

3. update: x̂(k+1|k+1) = x̂(k+1|k) + Δx
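A minimal Python sketch of this observer (the double-integrator F, G, H and the input sequence are invented for illustration; this is the noiseless observer above, not yet the Kalman filter):

```python
import numpy as np

def simple_observer_step(x_hat, u, y_next, F, G, H):
    """One predict/correct/update cycle of the simple (noise-free) observer."""
    # 1. prediction
    x_pred = F @ x_hat + G @ u
    # 2. correction: minimum-norm dx with H (x_pred + dx) = y_next
    nu = y_next - H @ x_pred
    dx = H.T @ np.linalg.solve(H @ H.T, nu)
    # 3. update
    return x_pred + dx

# Example system (made up): position/velocity state, force input, position sensor.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])

x_true = np.array([0.0, 1.0])
x_hat = np.array([5.0, -2.0])        # deliberately wrong initial estimate
for k in range(20):
    u = np.array([0.1])
    x_true = F @ x_true + G @ u
    y = H @ x_true                   # perfect measurement
    x_hat = simple_observer_step(x_hat, u, y, F, G, H)

# The position estimate is snapped onto the measured hyperplane each step,
# but the velocity error is never corrected: exactly Caveat #1 on the next slide.
print(x_true, x_hat)
```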

Caveat #1

Note: The observer presented here is not a very good observer; specifically, it is not guaranteed to converge for all systems. Still, the intuition behind this observer is the same as the intuition behind the Kalman filter, and the problems will be fixed in the following slides.

It really corrects only toward the current sensor information, so if you are on the hyperplane but not at the right place, you get no correction. (I am waving my hands here; look in the book.)

Estimating a distribution for x

Our estimate of x is not exact! We can do better by estimating a joint Gaussian distribution p(x):

  p(x) = 1 / ((2π)^{n/2} |P|^{1/2}) exp(−½ (x − x̂)^T P^{-1} (x − x̂))

where P = E((x − x̂)(x − x̂)^T) is the covariance matrix.

Finding the correction (geometric intuition)

Given prediction x̂(k+1|k), covariance P, and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" (i.e. most probable) estimate of x.

Ω = {x | Hx = y}

  p(x) = 1 / ((2π)^{n/2} |P|^{1/2}) exp(−½ (x − x̂)^T P^{-1} (x − x̂))

The most probable Δx is the one that:
1. satisfies x̂(k+1|k+1) = x̂(k+1|k) + Δx ∈ Ω
2. minimizes Δx^T P^{-1} Δx

A new kind of distance

Suppose we define a new inner product on R^n to be:
  ⟨x1, x2⟩ = x1^T P^{-1} x2    (this replaces the old inner product x1^T x2)

Then we can define a new norm:
  ||x||² = ⟨x, x⟩ = x^T P^{-1} x

The x̂ in Ω that minimizes ||Δx|| is the orthogonal projection of x̂(k+1|k) onto Ω, so Δx is orthogonal to Ω:
  ⟨ω, Δx⟩ = 0 for all ω in TΩ = null(H)
  ⟨ω, Δx⟩ = ω^T P^{-1} Δx = 0 iff Δx ∈ column(P H^T)

Finding the correction (for real this time!)

Assuming that Δx is linear in ν = y − H x̂(k+1|k):
  Δx = P H^T K ν

The condition y = H(x̂(k+1|k) + Δx) ⇒ HΔx = y − H x̂(k+1|k) = ν

Substitution yields:
  HΔx = ν = H P H^T K ν

⇒ K = (H P H^T)^{-1}

∴ Δx = P H^T (H P H^T)^{-1} ν

A Better State Observer

System:
  x(k+1) = F x(k) + G u(k) + v(k)    (v(k) is a sample of a Gaussian distribution with covariance Q)
  y(k) = H x(k)

We can create a better state observer following the same 3 steps, but now we must also estimate the covariance matrix P. We start with x̂(k|k) and P(k|k).

Step 1: Prediction
  x̂(k+1|k) = F x̂(k|k) + G u(k)    (Where did the noise go?)

What about P? From the definition:
  P(k|k) = E((x(k) − x̂(k|k))(x(k) − x̂(k|k))^T)
and
  P(k+1|k) = E((x(k+1) − x̂(k+1|k))(x(k+1) − x̂(k+1|k))^T)

Continuing Step 1

To make life a little easier, let's shift notation slightly: write x̂⁻_{k+1} for the prediction x̂(k+1|k) and P⁻_{k+1} for P(k+1|k).

  P⁻_{k+1} = E((x_{k+1} − x̂⁻_{k+1})(x_{k+1} − x̂⁻_{k+1})^T)
           = E((F x_k + G u_k + v_k − (F x̂_k + G u_k))(F x_k + G u_k + v_k − (F x̂_k + G u_k))^T)
           = E((F(x_k − x̂_k) + v_k)(F(x_k − x̂_k) + v_k)^T)
           = E(F(x_k − x̂_k)(x_k − x̂_k)^T F^T + 2 F(x_k − x̂_k) v_k^T + v_k v_k^T)
           = F E((x_k − x̂_k)(x_k − x̂_k)^T) F^T + E(v_k v_k^T)    (the cross term vanishes: v_k is zero-mean and independent of the estimation error)
           = F P_k F^T + Q

i.e.  P(k+1|k) = F P(k|k) F^T + Q

Step 2: Computing the correction

From step 1 we get xˆ(k +1| k) and P(k +1| k). Now we use these to compute Δx :

  Δx = P(k+1|k) H^T (H P(k+1|k) H^T)^{-1} (y(k+1) − H x̂(k+1|k))

For ease of notation, define W so that

Δx = Wν

Step 3: Update

  x̂(k+1|k+1) = x̂(k+1|k) + Wν

  P_{k+1} = E((x_{k+1} − x̂_{k+1})(x_{k+1} − x̂_{k+1})^T)
          = E((x_{k+1} − x̂⁻_{k+1} − Wν)(x_{k+1} − x̂⁻_{k+1} − Wν)^T)

(just take my word for it…)

  P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T

Just take my word for it…

  P_{k+1} = E((x_{k+1} − x̂_{k+1})(x_{k+1} − x̂_{k+1})^T)

          = E((x_{k+1} − x̂⁻_{k+1} − Wν)(x_{k+1} − x̂⁻_{k+1} − Wν)^T)

          = E(((x_{k+1} − x̂⁻_{k+1}) − Wν)((x_{k+1} − x̂⁻_{k+1}) − Wν)^T)

          = E((x_{k+1} − x̂⁻_{k+1})(x_{k+1} − x̂⁻_{k+1})^T − 2 Wν (x_{k+1} − x̂⁻_{k+1})^T + Wν (Wν)^T)

          = P⁻_{k+1} + E(−2 W H (x_{k+1} − x̂⁻_{k+1})(x_{k+1} − x̂⁻_{k+1})^T + W H (x_{k+1} − x̂⁻_{k+1})(x_{k+1} − x̂⁻_{k+1})^T H^T W^T)

          = P⁻_{k+1} − 2 W H P⁻_{k+1} + W H P⁻_{k+1} H^T W^T

          = P⁻_{k+1} − 2 P⁻_{k+1} H^T (H P⁻_{k+1} H^T)^{-1} H P⁻_{k+1} + W H P⁻_{k+1} H^T W^T

          = P⁻_{k+1} − 2 P⁻_{k+1} H^T (H P⁻_{k+1} H^T)^{-1} (H P⁻_{k+1} H^T)(H P⁻_{k+1} H^T)^{-1} H P⁻_{k+1} + W H P⁻_{k+1} H^T W^T

          = P⁻_{k+1} − 2 W H P⁻_{k+1} H^T W^T + W H P⁻_{k+1} H^T W^T

          = P⁻_{k+1} − W H P⁻_{k+1} H^T W^T

Better State Observer Summary

System:
  x(k+1) = F x(k) + G u(k) + v(k)
  y(k) = H x(k)

1. Predict:
   x̂(k+1|k) = F x̂(k|k) + G u(k)
   P(k+1|k) = F P(k|k) F^T + Q

2. Correct:
   W = P(k+1|k) H^T (H P(k+1|k) H^T)^{-1}
   Δx = W (y(k+1) − H x̂(k+1|k))

3. Update:
   x̂(k+1|k+1) = x̂(k+1|k) + Wν
   P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T

• Note: there is a problem with the previous slide, namely that the covariance matrix of the estimate P will be singular. This makes sense, because with perfect sensor measurements the uncertainty in some directions will be zero: there is no uncertainty in the directions perpendicular to Ω.

P lives in the state space, and the directions pinned down by the (noise-free) sensing go to zero. In the update step, a zero-noise measurement ends up squeezing P down.

In most cases, when you do the next prediction step, the process covariance matrix Q gets added to P(k|k), the result is nonsingular, and everything is OK again.

There's actually nothing wrong with this, except that you can't really call the result a "covariance" matrix, because sometimes it is not a valid covariance matrix.

Plus, let's not be ridiculous: all sensors have noise.
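A small numerical illustration of this point (a sketch with made-up matrices): with a noise-free measurement the updated P loses rank along the measured direction, and adding Q in the next prediction step restores it.

```python
import numpy as np

F = np.eye(2)
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])           # perfect (noise-free) position sensor
P = np.eye(2)

# Correction/update with no measurement noise (the "better observer"):
W = P @ H.T @ np.linalg.inv(H @ P @ H.T)
P_upd = P - W @ (H @ P @ H.T) @ W.T
print(np.linalg.matrix_rank(P_upd))  # 1: singular, no uncertainty along the measured direction

# The next prediction step adds Q and makes P nonsingular again:
P_pred = F @ P_upd @ F.T + Q
print(np.linalg.matrix_rank(P_pred)) # 2
```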

Finding the correction (with output noise)

y = Hx + w

The previous results require that you know which hyperplane to aim for. Because there is now sensor noise, we don't know where to aim, so we can't directly use our method.

If we can determine which hyperplane to aim for, then the previous result applies. We find the hyperplane in question as follows:
1. project the estimate into output space
2. find the most likely point in output space based on the measurement and the projected prediction
3. the desired hyperplane is the preimage of this point

Projecting the prediction (putting current state estimates into sensor space)

  x̂(k+1|k) → ŷ = H x̂(k+1|k)
  P(k+1|k) → R̂ = H P(k+1|k) H^T

[Figure: the prediction x̂(k+1|k) with covariance P(k+1|k) in the n-dimensional state space maps to ŷ with covariance R̂ in the p-dimensional output space.]

Finding most likely output

The objective is to find the most likely output resulting from two independent Gaussian distributions:
  (y, R): the measurement and its associated covariance
  (ŷ, R̂): the projected prediction (what you think your measurements should be and how confident you are)

(They are independent, so multiply them, because we want them both to be true at the same time.)

Fact (Kalman Gains): The product of two Gaussian distributions given by mean/covariance pairs (x1, C1) and (x2, C2) is proportional to a third Gaussian with mean
  x3 = x1 + K(x2 − x1)
and covariance
  C3 = C1 − K C1    (strange, but true: this is symmetric)
where
  K = C1 (C1 + C2)^{-1}
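A quick numerical check of this fact in one dimension (the values are arbitrary): fusing two Gaussians with the K above gives the same mean and variance as the usual inverse-variance-weighted product of Gaussians.

```python
x1, C1 = 2.0, 4.0      # first Gaussian: mean, variance
x2, C2 = 5.0, 1.0      # second Gaussian

K = C1 / (C1 + C2)                 # K = C1 (C1 + C2)^{-1}
x3 = x1 + K * (x2 - x1)            # fused mean
C3 = C1 - K * C1                   # fused covariance

# Compare against the textbook product-of-Gaussians formulas
x3_check = (x1 / C1 + x2 / C2) / (1 / C1 + 1 / C2)
C3_check = 1 / (1 / C1 + 1 / C2)
print(x3, x3_check)                # both 4.4
print(C3, C3_check)                # both 0.8
```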

Most likely output (cont.)

Using the Kalman gains, the most likely output is

  y* = ŷ + (R̂ (R̂ + R)^{-1}) (y − ŷ)

     = H x̂(k+1|k) + (H P H^T (H P H^T + R)^{-1}) (y − H x̂(k+1|k))

Finding the Correction

Now we can compute the correction as we did in the noiseless case, this time using y* instead of y. In other words, y* tells us which hyperplane to aim for.

Ω = {x | Hx = y*}

The result is:
  Δx = P H^T (H P H^T)^{-1} (y* − H x̂(k+1|k))

(We are not going all the way to y; we split the difference according to how confident we are in the sensor versus the process noise.)

Finding the Correction (cont.)

  Δx = P H^T (H P H^T)^{-1} (y* − H x̂(k+1|k))

     = P H^T (H P H^T)^{-1} (H x̂(k+1|k) + H P H^T (H P H^T + R)^{-1} (y − H x̂(k+1|k)) − H x̂(k+1|k))

     = P H^T (H P H^T + R)^{-1} (y − H x̂(k+1|k))

For convenience, we define
  W = P H^T (H P H^T + R)^{-1}
so that
  Δx = W (y − H x̂(k+1|k))

Correcting the Covariance Estimate

The covariance estimate correction is computed from the definition of the covariance matrix, in much the same way that we computed the correction for the "better observer". The answer turns out to be:

  P(k+1|k+1) = P(k+1|k) − W (H P(k+1|k) H^T + R) W^T

LTI Kalman Filter Summary

System:
  x(k+1) = F x(k) + G u(k) + v(k)
  y(k) = H x(k) + w(k)

1. Predict:
   x̂(k+1|k) = F x̂(k|k) + G u(k)
   P(k+1|k) = F P(k|k) F^T + Q

2. Correct:
   S = H P(k+1|k) H^T + R
   W = P(k+1|k) H^T S^{-1}
   Δx = W (y(k+1) − H x̂(k+1|k))

3. Update:
   x̂(k+1|k+1) = x̂(k+1|k) + Wν
   P(k+1|k+1) = P(k+1|k) − W S W^T
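A compact Python/numpy sketch of these three steps (it mirrors the summary above; the function and variable names are mine, not from the slides):

```python
import numpy as np

def kalman_step(x_hat, P, u, y_next, F, G, H, Q, R):
    """One predict/correct/update cycle of the LTI Kalman filter."""
    # 1. Predict
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Q

    # 2. Correct
    S = H @ P_pred @ H.T + R                 # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)      # gain
    nu = y_next - H @ x_pred                 # innovation
    dx = W @ nu

    # 3. Update
    x_new = x_pred + dx
    P_new = P_pred - W @ S @ W.T
    return x_new, P_new
```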

Kalman Filters

Kalman Filter for

• Robot moves along a straight line with state x = [x_r, v_r]^T

• u is the force applied to the robot

• Newton tells us a = u/m, or, equivalently, a linear discrete-time model x(k+1) = F x(k) + G u(k)

Process noise from a zero mean Gaussian V

• Robot has velocity sensor

Sensor noise from a zero mean Gaussian W
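A sketch of this system in code (the mass, time step, and noise levels below are made-up values; the velocity-only sensor is the one described above):

```python
import numpy as np

m, dt = 1.0, 0.1
# State x = [x_r, v_r]^T, input u = force; Newton (a = u/m) discretized:
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[dt**2 / (2 * m)], [dt / m]])
H = np.array([[0.0, 1.0]])                 # velocity sensor only
Q = np.diag([1e-4, 1e-3])                  # covariance of the zero-mean process noise
R = np.array([[0.05]])                     # covariance of the zero-mean sensor noise

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])                   # true state
x_hat, P = np.zeros(2), np.eye(2)          # estimate and covariance at some time k

for k in range(50):
    u = np.array([0.5])
    x = F @ x + G @ u + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x + rng.multivariate_normal(np.zeros(1), R)

    # predict
    x_hat, P = F @ x_hat + G @ u, F @ P @ F.T + Q
    # correct / update
    S = H @ P @ H.T + R
    W = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + W @ (y - H @ x_hat)
    P = P - W @ S @ W.T

# The velocity estimate is informed by the sensor; position is only dead-reckoned
# (see the observability slide below).
print(x, x_hat)
```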

Set up

Assume that at some time k we have an estimate x̂(k|k) and covariance P(k|k).

PUT ELLIPSE FIGURE HERE

Observability

Recall from last time

Actually, the previous example is not observable, but it is still nice to use the Kalman filter.
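This can be checked directly: for the straight-line robot with a velocity-only sensor, the observability matrix built from H and F loses rank, so position cannot be recovered (a small sketch; F and H as in the example above):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H_vel = np.array([[0.0, 1.0]])   # velocity sensor: not observable
H_pos = np.array([[1.0, 0.0]])   # position sensor: observable

def observability_rank(F, H):
    n = F.shape[0]
    O = np.vstack([H @ np.linalg.matrix_power(F, i) for i in range(n)])
    return np.linalg.matrix_rank(O)

print(observability_rank(F, H_vel))  # 1 < 2: position is unobservable
print(observability_rank(F, H_pos))  # 2: full state observable
```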

Extended Kalman Filter

• Life is not linear

• Predict

Extended Kalman Filter

• Update

EKF for Range-Bearing Localization

• State: position and orientation

• Input: forward and rotational velocity

• Process Model

• n_l landmarks (with an association map); the robot can only see p(k) of them at time k

Be wise, and linearize..
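The slide's equations are not reproduced above, so the following is only a hedged illustration: a common range-bearing measurement model for a landmark at (l_x, l_y) seen from pose (x, y, θ), together with its Jacobian with respect to the robot pose, which is what the EKF linearization needs. The names and the specific model are my assumptions, not taken from the slides.

```python
import numpy as np

def range_bearing(pose, landmark):
    """Measurement model h(x): range and bearing to one landmark."""
    x, y, theta = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - theta
    return np.array([r, phi])

def range_bearing_jacobian(pose, landmark):
    """Jacobian dh/dx of the measurement w.r.t. the robot pose (x, y, theta)."""
    x, y, theta = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])

# Linearize about the current estimate:
pose = np.array([1.0, 2.0, 0.3])
landmark = np.array([4.0, 6.0])
print(range_bearing(pose, landmark))
print(range_bearing_jacobian(pose, landmark))
```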

Data Association

• BIG PROBLEM

The i-th measurement corresponds to the j-th landmark; the innovation is

where

Pick the smallest
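The association equations themselves are not shown above, so this is only a sketch of the standard nearest-neighbour approach the text hints at: for each measurement, compute the innovation against every landmark, weight it by the innovation covariance S (a Mahalanobis distance), and pick the landmark with the smallest value. The function and argument names are mine.

```python
import numpy as np

def associate(z, landmark_predictions, S_list):
    """Nearest-neighbour data association.

    z: one measurement; landmark_predictions[j]: predicted measurement of landmark j;
    S_list[j]: innovation covariance for landmark j. Returns (best j, its distance).
    """
    best_j, best_d = None, np.inf
    for j, (z_hat, S) in enumerate(zip(landmark_predictions, S_list)):
        nu = z - z_hat                               # innovation
        d = float(nu.T @ np.linalg.solve(S, nu))     # Mahalanobis distance
        if d < best_d:
            best_j, best_d = j, d
    return best_j, best_d
```

A real system would also gate the match: if even the smallest distance is too large, treat the measurement as a new landmark or an outlier.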

Kalman Filter for SLAM (simple)

State

Inputs are commanded x and y velocities (a bit naive).

Process model

Kalman Filter for SLAM

Range Bearing

Inputs are forward and rotational velocities

Greg's Notes: Some Examples

• Point moving on the line according to f = ma
  – state is position and velocity
  – input is force
  – sensing should be position

• Point in the plane under Newtonian laws

• Nonholonomic kinematic system (no dynamics)
  – state is workspace configuration
  – input is velocity command
  – sensing could be direction and/or distance to beacons

• Note that all of these dynamical systems are "open-loop" integration
• The role of sensing is to "close the loop" and pin down the state

Kalman Filters

Bel(x_t) = N(μ_t, σ_t²)

Bel⁻(x_{t+1}):  μ⁻_{t+1} = μ_t + B u_t,    (σ⁻_{t+1})² = A² σ_t² + σ²_act

Bel(x_{t+1}):   μ_{t+1} = μ⁻_{t+1} + K_{t+1} (z_{t+1} − μ⁻_{t+1}),    σ²_{t+1} = (1 − K_{t+1}) (σ⁻_{t+1})²

Bel⁻(x_{t+2}):  μ⁻_{t+2} = μ_{t+1} + B u_{t+1},    (σ⁻_{t+2})² = A² σ²_{t+1} + σ²_act

Kalman Filter

1. Algorithm Kalman_filter( <μ,Σ>, d ):

2. If d is a perceptual data item z then
3.      K = ΣC^T (CΣC^T + Σ_obs)^{-1}
4.      μ = μ + K(z − Cμ)
5.      Σ = (I − KC)Σ
6. Else if d is an action data item u then
7.      μ = Aμ + Bu
8.      Σ = AΣA^T + Σ_act
9. Return <μ,Σ>

Limitations

• Very strong assumptions:
  – Linear state dynamics: X⁻_{t+1} = A X_t + B u_t + Σ_act
  – Observations linear in the state: Z_t = C X_t + Σ_obs

• What can we do if the system is not linear?
  – Non-linear state dynamics: X⁻_{t+1} = f(X_t, u_t, Σ_act)
  – Non-linear observations: Z_t = c(X_t, Σ_obs)

• Linearize it!
• Determine the Jacobians of the dynamics f and the observation function c w.r.t. the current state x and the noise:

  A_ij = ∂f_i/∂x_j (x_t, u_t, 0)        C_ij = ∂c_i/∂x_j (x_t, 0)
  W_ij = ∂f_i/∂Σ_act,j (x_t, u_t, 0)    V_ij = ∂c_i/∂Σ_obs,j (x_t, 0)

Extended Kalman Filter Algorithm

1. Algorithm Extended_Kalman_filter(<μ,Σ>, d):

2. If d is a perceptual data item z then
3.      K = ΣC^T (CΣC^T + VΣ_obsV^T)^{-1}        [linear KF: K = ΣC^T (CΣC^T + Σ_obs)^{-1}]
4.      μ = μ + K(z − c(μ,0))                    [linear KF: μ = μ + K(z − Cμ)]
5.      Σ = (I − KC)Σ                            [linear KF: Σ = (I − KC)Σ]
6. Else if d is an action data item u then
7.      μ = f(μ,u,0)                             [linear KF: μ = Aμ + Bu]
8.      Σ = AΣA^T + WΣ_actW^T                    [linear KF: Σ = AΣA^T + Σ_act]
9. Return <μ,Σ>
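A Python sketch of this algorithm (same structure as the pseudocode above; the callbacks f, c and the Jacobian function, and the way they are passed in, are my own choices):

```python
import numpy as np

def extended_kalman_filter(mu, Sigma, d, f, c, jacobians, Sigma_act, Sigma_obs):
    """One EKF step in the (mu, Sigma) form above.

    d = ('obs', z) for a perceptual item or ('act', u) for an action item.
    jacobians(mu, u) must return A, W, C, V evaluated at the current estimate.
    """
    kind, value = d
    if kind == 'obs':
        z = value
        A, W, C, V = jacobians(mu, None)
        S = C @ Sigma @ C.T + V @ Sigma_obs @ V.T
        K = Sigma @ C.T @ np.linalg.inv(S)
        mu = mu + K @ (z - c(mu, 0))
        Sigma = (np.eye(len(mu)) - K @ C) @ Sigma
    else:
        u = value
        A, W, C, V = jacobians(mu, u)
        mu = f(mu, u, 0)
        Sigma = A @ Sigma @ A.T + W @ Sigma_act @ W.T
    return mu, Sigma
```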

Kalman Filter-based Systems (2)

• [Arras et al. 98]:
  – Laser range-finder and vision
  – High precision (<1 cm accuracy)

Courtesy of K. Arras

Unscented Kalman Filter

• Instead of linearizing, pass several points from the Gaussian through the non-linear transformation and re-compute a new Gaussian.
• Better performance (in theory and practice).

[Figure: the EKF propagates the mean through the linearization, μ' = f(μ,u,0), Σ' = AΣA^T + WΣ_actW^T; the UKF instead passes sigma points drawn around μ through f and refits a Gaussian to obtain μ'.]

Kalman Filters and SLAM

• Localization: state is the location of the robot

• Mapping: state is the location of beacons

• SLAM: state combines both

• Consider a simple fully-observable holonomic robot

– x(k+1) = x(k) + u(k) dt + v

– y_i(k) = p_i − x(k) + w

• If the state is (x(k), p_1, p_2, ...) then we can write a linear observation system (see the sketch below)
  – note that if we don't have some fixed beacons, our system is unobservable (we can't fully determine all unknown quantities)
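A sketch of that linear observation system in code (a 1-D world with two landmarks for concreteness; the stacked-state layout is my own illustration): each beacon measurement y_i = p_i − x contributes a row of H over the combined state, and the rank check shows exactly the unobservability noted above.

```python
import numpy as np

# 1-D world for simplicity: state X = [x, p1, p2] (robot position and two beacon positions).
F = np.eye(3)                        # x(k+1) = x(k) + u(k)*dt enters via G; beacons are static
G = np.array([[1.0], [0.0], [0.0]])  # (times dt)
H = np.array([[-1.0, 1.0, 0.0],      # y_1 = p_1 - x
              [-1.0, 0.0, 1.0]])     # y_2 = p_2 - x

# Observability check: relative measurements alone leave a common shift undetermined.
O = np.vstack([H @ np.linalg.matrix_power(F, i) for i in range(3)])
print(np.linalg.matrix_rank(O))      # 2 < 3: unobservable without a fixed/known beacon

# Anchoring beacon 1 at a known location adds an absolute measurement of p1:
H_anchored = np.vstack([H, [0.0, 1.0, 0.0]])
O2 = np.vstack([H_anchored @ np.linalg.matrix_power(F, i) for i in range(3)])
print(np.linalg.matrix_rank(O2))     # 3: observable
```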
