ROBOT PATH PLANNING AND TRACKING OF A MOVING TARGET USING KALMAN FILTER
Adya Shrotriya B.E., University of Pune, India, 2006
PROJECT
Submitted in partial satisfaction of the requirements for the degree in
MASTER OF SCIENCE
in
ELECTRICAL AND ELECTRONIC ENGINEERING
at
CALIFORNIA STATE UNIVERSITY, SACRAMENTO
SPRING 2010
ROBOT PATH PLANNING AND TRACKING OF A MOVING TARGET USING KALMAN FILTER
A Project
by
Adya Shrotriya
Approved by:
______, Committee Chair Dr. Fethi Belkhouche
______, Second Reader Dr. Preetham B. Kumar
______Date
Student: Adya Shrotriya
I certify that this student has met the requirements for format contained in the university format manual, that this project is suitable for shelving in the library, and that credit is to be awarded for the project.
______, Graduate Coordinator Dr. Preetham B. Kumar ______ Date
Department of Electrical and Electronic Engineering
Abstract
of
ROBOT PATH PLANNING AND TRACKING OF A MOVING TARGET USING KALMAN FILTER
by
Adya Shrotriya
This project develops a method for path planning and tracking of a moving target by a wheeled robot. The main kinematic variables updated in each step are the orientation and the relative position of the robot and the target. A discrete extended Kalman filter is used to predict and update the states of the robot and the target and their uncertainties. The robot uses a linear proportional navigation law to track the target. Simulation of the motion kinematics of the robot and the target is performed using MATLAB. Multiple simulation scenarios show that the robot is able to track and reach the moving goal successfully.
______, Committee Chair Dr. Fethi Belkhouche
______Date
ACKNOWLEDGEMENT
I would like to take this opportunity to thank the people behind the successful completion of this project. First and foremost, I would like to thank my professor and mentor for this project, Dr. Fethi Belkhouche, for his tremendous support and guidance throughout the project. I would like to thank him for always being there to help me develop a better understanding of the topic and for encouraging me to give my best. I am also extremely grateful to my Graduate Coordinator, Dr. Preetham Kumar, for his consistent support during my academic years at the School, guiding me through the entire process, from enrolling in a class to submitting my master's project. Finally, I would like to thank my professors throughout my Master's, my department, and my school for giving me this opportunity to learn and grow towards attainment of this degree.
Lastly, I would like to express my heartfelt gratitude to my family in India who have constantly supported me and blessed me. I am extremely thankful to my husband Puru, who helped me make my dreams come true.
TABLE OF CONTENTS

Acknowledgement
List of Figures

Chapter
1. INTRODUCTION
2. ROBOT NAVIGATION FUNDAMENTALS
    Odometry
    Probabilistic Estimation
3. KALMAN FILTER
    State Vector
    Dynamic Model
    Observation Model
    Kalman Filtering
    Predict-Match-Update Process
4. KALMAN FILTER ALGORITHM
    Extended Kalman Filter
5. PREVIOUS RESEARCH IN IMPLEMENTATION OF KALMAN FILTERING
    Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot [1]
    Kalman Filtering for Positioning and Heading Control of Ships and Offshore Rigs [5]
    Accurate Odometry and Error Modeling for a Mobile Robot [10]
6. TRACKING–NAVIGATION OF A MOVING GOAL
    Linear Navigation Law
7. KALMAN FILTERING IMPLEMENTATION IN NAVIGATION/TRACKING
    A Kalman Filter Model for Odometric Position Estimation [1]
8. SIMULATION RESULTS
    Implementation of Navigation Law Equations
    Representation of Error Ellipses
    Robot Target Path
    Covariance Distribution
    Error Ellipses
9. CONCLUSION
Appendix
    MATLAB Code Implementing Kalman Filter to show Error Ellipses
    MATLAB Code Implementing the Navigation Law
    MATLAB Code Implementing Kalman Filter for Path Planning
References
LIST OF FIGURES

1. Growing error ellipses indicate growing position uncertainty with odometry
2. Kalman Filter diagram
3. A framework of dynamic world modeling
4. State space diagram for an odometric system
5. Geometrical representation of robot/target positions
6. Probability density function for noises v1 and v2
7. Probability density function for r_GR and Ψ_GR
8. Probability density function for positions x_GR and y_GR
9. Linear Navigation law simulation for constant c and variable N
10. Linear Navigation law simulation for constant N and variable c
11. Simulation results depicting uncertainties through error ellipses
12. Robot Target path
13. Covariance matrix distribution
14. Error Ellipse for the first and the 15th step robot/target position
Chapter 1
INTRODUCTION
This project is concerned with the application of one of the most important techniques from estimation theory to the problem of navigation and tracking for a mobile robot. In this project, probabilistic estimation is used to predict the next step of a robot that follows a moving target under uncertainty. The translation and orientation of the moving target with respect to the global axes serve as the reference for estimating the robot position. Estimating the position of the vehicle with respect to the external world is fundamental to navigation. Modeling the contents of the immediate environment is equally fundamental. Estimation theory provides a basic set of tools for position estimation and environmental modeling. These tools provide an elegant and formally sound method for combining internal and external sensor information from different sources operating at different rates. The foundations of estimation theory are reviewed, and a virtual system is assumed in deriving the mathematical tools for combining sensory information. In particular, a predict-match-update cycle is derived as a framework for perception. The Kalman filter provides the mathematical basis for this process [1].

The use of a Kalman filter as the basis for a virtual vehicle controller makes it possible to correct errors in the odometric position using external perception. Example cases are derived for correcting a position estimate from different forms of perception. In particular, a technique is presented for correcting the estimated position using the angle and distance to the moving target.

This project is organized as follows. Chapter 2 discusses the fundamentals of odometry and robot navigation. Probabilistic estimation of the robot and target states and the resulting uncertainties is examined. Chapters 3 and 4 elaborate the implementation of the Kalman filter, its operation, and its significance in path planning, with simulation examples and mathematical models. The Kalman filter algorithm and the extended Kalman filter are explained. Chapter 5 discusses several case studies that have used the Kalman filter and odometry to perform navigation for wheeled robots, ships, and other real-time applications. Chapter 6 introduces the linear navigation laws, along with several simulation examples, which form the basis of the robot and target path planning operation. Chapter 7 shows, through simulation examples, the implementation of the Kalman filter for odometric position estimation of the robot and the target. Finally, the appendix lists the MATLAB code written in this project to implement the Kalman filter and achieve successful tracking so that the robot reaches its target.

Chapter 2
ROBOT NAVIGATION FUNDAMENTALS

Robot navigation refers to a robot's ability to determine its own position in its frame of reference and then to plan a path towards some goal location. In order to navigate in its environment, the robot or any other mobility device requires a representation, i.e., a map of the environment, and the ability to interpret that representation. Path planning is a robot navigation technique that extends localization: it requires the determination of the robot's current position and of the goal position, both within the same frame of reference or coordinate system.

In this project, the goal location is also moving and is characterized by uncertainty in its motion. The position of the target is estimated and updated with respect to its observed values using the discrete Kalman filter. The Kalman filter provides tools to estimate and update the position and orientation of the robot, accounting for uncertainty characterized by mean values and a covariance matrix. The path planning algorithm uses motion models for probabilistic estimation of the robot position. The Kalman filter is also used to track the position and orientation of the moving goal.

Odometry-based models are used when the system is equipped with wheel encoders, whereas velocity-based models are applied when no wheel encoders are present and observation values come from motion sensors built into the robot [9].

Odometry

Odometry is the process of estimating one's current position based upon a previously determined position, or fix, and advancing that position based upon known or estimated speeds and course over the elapsed time. In mobile robotics, odometry typically refers to a position estimate obtained by integrating odometry measurements. The current pose of the vehicle is estimated based on its velocities and the time elapsed. Since no wheel encoders are present in this project, a velocity-based model is applied [9].

Odometry is the simplest and the most widely used navigation method for mobile robot positioning. The fundamental idea is the integration of incremental motion information over time, which leads inevitably to the accumulation of errors. In particular, the accumulation of orientation errors causes large position errors, which increase proportionally with the distance traveled by the robot. Despite these limitations, most researchers agree that odometry is an important part of a robot navigation system and that navigation tasks will be simplified if odometric accuracy can be improved [7]. Odometry is used in almost all mobile robots, for various reasons:

1. Odometry data can be fused with absolute position measurements to provide better and more reliable position estimation [1].

2. Odometry can be used in between absolute position updates with landmarks. Given a required positioning accuracy, increased accuracy in odometry allows for less frequent absolute position updates. As a result, fewer landmarks are needed for a given travel distance [1].

3. Many mapping and landmark matching algorithms assume that the robot can maintain its position well enough to allow the robot to look for landmarks in a limited area and to match features in that limited area, in order to achieve short processing times and to improve matching correctness [1].

4. In some cases, odometry is the only navigation information available; for example, when no external reference is available, when circumstances preclude the placing or selection of landmarks in the environment, or when another sensor subsystem fails to provide usable data [4].

In a typical indoor environment with a flat floor, localization becomes a matter of determining the Cartesian coordinates (x, y) and the orientation θ of the robot on a two-dimensional floor plan. For a wheeled robot, odometry is one of the most important means of achieving this task. In practice, optical encoders mounted on both drive wheels feed discretized wheel increment information to a processor, which correspondingly updates the robot's state using geometric equations. Over time, however, odometric localization accumulates errors in an unbounded fashion due to wheel slippage, floor roughness, and discretized sampling of wheel increments.

To begin, consider how the x and y coordinates (and the orientation) change with respect to time. At any instant, the x and y coordinates of the robot's center point change based on its speed and orientation. We treat orientation as an angle measured in radians, counterclockwise from the x-axis. The vector giving the direction of forward motion of the robot is simply (cos θ, sin θ). The x and y coordinates of the robot's center point change depending on the speed of its motion along this vector.

These observations suggest that, taking v(t) and θ(t) as time-dependent functions for the robot's speed and orientation, the solution takes the form [8]:

dx/dt = v(t) cos θ(t)
dy/dt = v(t) sin θ(t)

or, equivalently,

dx = v(t) cos θ(t) dt
dy = v(t) sin θ(t) dt

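These equations can be integrated numerically to propagate the pose estimate between velocity readings. The following is a minimal sketch of Euler dead reckoning, written in Python rather than the project's MATLAB, with a constant speed and turn rate assumed purely for illustration:

```python
import math

def dead_reckon(x, y, theta, v, omega, dt, steps):
    """Propagate a planar pose by Euler-integrating
    dx = v*cos(theta)*dt, dy = v*sin(theta)*dt, dtheta = omega*dt."""
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
    return x, y, theta

# Straight-line motion along the x-axis: theta and y stay zero.
x, y, theta = dead_reckon(0.0, 0.0, 0.0, v=1.0, omega=0.0, dt=0.1, steps=10)
```

Because the integration is discrete, each step introduces a small truncation error on top of the sensor noise, which is exactly why odometric uncertainty grows with distance traveled.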
Theoretical research normally involves error modeling, so that a mathematical treatment is possible. For example, different versions of the Kalman filter require that the odometry errors be expressed in the form of an error covariance matrix. Odometry errors can be classified as either systematic or non-systematic [3].

It is noteworthy that many researchers develop algorithms that estimate the position uncertainty or odometry errors of a robot. With this approach, each computed robot position is surrounded by a characteristic "error ellipse", which indicates a region of uncertainty for the robot's actual position. Typically, these ellipses grow with travel distance, until an absolute position measurement reduces the growing uncertainty and thereby "resets" the size of the error ellipse. These error estimation techniques must rely on error estimation parameters derived from observations of the vehicle's odometric performance. These parameters can take into account only systematic errors, because the magnitude of non-systematic errors is unpredictable [4].

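Such an error ellipse can be computed directly from the 2×2 position covariance matrix: the eigenvalues give the squared semi-axis lengths and the eigenvectors give the orientation. A sketch in Python/NumPy (the diagonal covariance below is an assumed example, not project data):

```python
import numpy as np

def error_ellipse(P, n_sigma=1.0):
    """Semi-axis lengths and major-axis orientation of the n-sigma
    error ellipse for a 2x2 position covariance matrix P."""
    vals, vecs = np.linalg.eigh(P)          # eigenvalues in ascending order
    axes = n_sigma * np.sqrt(vals)          # semi-axis lengths (minor, major)
    major = vecs[:, 1]                      # eigenvector of the largest eigenvalue
    angle = np.arctan2(major[1], major[0])  # orientation of the major axis
    return axes, angle

# Diagonal covariance: the semi-axes are simply the standard deviations.
axes, angle = error_ellipse(np.array([[4.0, 0.0],
                                      [0.0, 1.0]]))
```

Plotting these ellipses at successive poses reproduces the growing-uncertainty picture of Figure 1.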
Probabilistic Estimation

Probabilistic localization is a probabilistic algorithm: instead of maintaining a single hypothesis as to where in the world the robot might be, probabilistic localization maintains a probability distribution over the space of all such hypotheses. The probabilistic representation also allows the uncertainties that arise from uncertain motion models and noisy sensor readings to be accounted for. The challenge is then to maintain a position probability density over all possible robot poses. Such a density can have arbitrary forms representing various kinds of information about the robot's position. For example, the robot can start with a uniform distribution representing that it is completely uncertain about its position, i.e., that it could be in any location with equal probability. The density can also represent ambiguous situations. In the usual case, in which the robot is highly certain about its position, it consists of a unimodal distribution centered on the true position of the robot. Kalman filtering is a form of probabilistic estimation where the estimate is assumed to be a Gaussian (unimodal) probability density function (PDF) [2].

Figure 1 illustrates the error ellipses and the increasing uncertainties with odometry.

1. Growing error ellipses indicate growing position uncertainty with odometry [10].

Chapter 3
KALMAN FILTER

The Kalman filter is an online, recursive algorithm designed to estimate the true state of a system where only noisy observations or measurements are available. It contains a linear model for the process, as well as a linear model for the measurement. The former describes how the current state of the tracker changes, given the previous instance. To improve the estimated state, the Kalman filter uses measurements that are related to the state but are themselves disturbed by noise.

Thus the Kalman filter consists of two steps:

1) Prediction
2) Correction (or update)

In the first step, the state is predicted with the dynamic model. In the second step, it is corrected with the observation model, so that the error covariance of the estimator is minimized. In this sense it is an optimal estimator [3].

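For intuition, the two steps can be written out for a one-dimensional random-walk state. The following Python sketch is purely illustrative; the variable names and numbers are assumptions, not values from this project:

```python
def kalman_step(x, p, u, z, q, r):
    """One predict/correct cycle for a scalar random-walk state.
    x, p: prior estimate and its variance; u: control (drift);
    z: measurement; q, r: process and measurement noise variances."""
    # Predict: propagate the state and inflate the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Correct: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Equal prior and measurement variances: the estimate lands midway
# between the prediction (0.0) and the measurement (1.0).
x, p = kalman_step(x=0.0, p=1.0, u=0.0, z=1.0, q=0.0, r=1.0)
```

Note how the corrected variance `p_new` is smaller than either the prior or the measurement variance alone: the update step always adds information.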
The application areas of the Kalman filter span aerospace, marine navigation, demographic modeling, weather science, manufacturing, and many other fields. Due to this wide utility, the Kalman filter has been the subject of extensive research [2].

2. Kalman Filter diagram [3].

The procedure is repeated for each time step, with the state of the previous time step as the initial value. Therefore, the Kalman filter is called a recursive filter. The basic components of the Kalman filter are the state vector, the dynamic model and the observation model [3].

State Vector

The state vector contains the variables of interest. In general, it describes the state of the dynamic system. The variables in the state vector cannot be measured directly, but they can be inferred from values that are measurable. Elements of the state vector can be position, velocity, orientation angles, etc. For example, x_R(t) = [x_R, y_R, θ_R]^T is a state vector with x_R and y_R as translation variables and θ_R as the orientation variable. The state vector has two values at the same time: the a priori value, which is the predicted value before the update, and the a posteriori value, which is the corrected value after the update [3].

Dynamic Model

The dynamic model describes the transformation of the state vector over time. It can usually be represented by a system of difference or differential equations. The discrete model can be written as follows:

x(k+1) = F(k) x(k) + G(k) u(k) + v(k)

The vector x(k) ∈ R^n is the state vector, and u(k) ∈ R^m is the input vector, which represents system inputs such as velocity commands, torques, or forces intentionally applied to the robot. The matrix F(k) ∈ R^(n×n) encodes the dynamics of the system, and G(k) ∈ R^(n×m) describes how the inputs drive the dynamics. The vector v(k) ∈ R^n is called the process noise and is assumed to be white Gaussian noise with zero mean and covariance matrix V(k). This process noise is used to account for unmodeled disturbances (such as slipping wheels) that affect the system dynamics [3].

Observation Model

The observation model represents the relationship between the state and the measurements. In the linear case, the measurements can be described by a system of linear equations that depend on the state variables. Usually the observations are made at discrete time steps. The discrete measurement model can be written as follows:

y(k) = H(k) x(k) + w(k)

The matrix H(k) ∈ R^(p×n) describes how state vectors are mapped into outputs. The measurement noise vector w(k) ∈ R^p is assumed to be white Gaussian noise with zero mean and covariance matrix W(k) [3].

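To see the dynamic and observation models side by side, consider a hypothetical one-dimensional robot whose state is position and velocity, driven by an acceleration input, with only the position measured. The matrices and sampling time below are assumptions for illustration, not the project's model:

```python
import numpy as np

T = 0.1  # sampling time (assumed)

# State x = [position, velocity]^T driven by an acceleration input u.
F = np.array([[1.0, T],
              [0.0, 1.0]])          # F(k): system dynamics
G = np.array([[0.5 * T ** 2],
              [T]])                 # G(k): how the input drives the dynamics
H = np.array([[1.0, 0.0]])          # H(k): only the position is measured

x = np.array([[0.0],
              [1.0]])               # start at position 0 with unit velocity
u = np.array([[0.0]])               # zero input; noise terms omitted for clarity

x_next = F @ x + G @ u              # x(k+1) = F(k) x(k) + G(k) u(k)
y = H @ x_next                      # y = H x, the measurable output
```

With unit velocity and T = 0.1, one step moves the position to 0.1, and the measurement y sees only that position component.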
When the commands are executed perfectly and the robot's starting position is perfectly known, the odometry method gives a perfect estimate of position. However, perfect performance and knowledge are impossible to achieve in the real world. Errors between the velocity commands and the actual robot velocities accumulate over time. In other words, as the robot moves, it continuously loses information about its location. Eventually, the robot will lose so much information that the command integration estimate becomes meaningless for any practical purpose. A similar approach is to integrate robot velocity measurements reported by onboard odometry sensors.

With just a few extensions, command integration can be thought of as a probabilistic estimation method. First, consider the robot's starting location. Since, in the real world, the starting position cannot be perfectly known, it makes sense to represent the starting location as a PDF over the state space, i.e., the space of possible robot positions. If a good estimate of the starting position is available, the PDF will have a peak at that location, and the "sharpness" of the peak represents the certainty of the initial estimate: the more certain the estimate, the higher and narrower the peak. The location of the peak is propagated by integrating the velocity commands. However, since the commands are not executed perfectly, the estimate becomes more uncertain as time progresses, and the peak of the PDF becomes smaller and more spread out. The rate at which the peak spreads is determined by the amount of error (noise) in the velocity command; the greater the noise, the faster the peak spreads. Eventually, the peak becomes so flat that the estimate provided by the PDF is meaningless [2].

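For a Gaussian (Kalman-style) estimate this spreading has a very simple form: each prediction step adds the process-noise variance, and with no update step the variance grows without bound. A small sketch, with an assumed per-step noise value:

```python
# Dead reckoning as prediction-only estimation: every predict step adds
# the process-noise variance q, and no update step ever removes it, so
# the position variance (and hence the error ellipse) grows unboundedly.
q = 0.01                 # per-step process noise variance (assumed)
p = 0.0                  # variance of a perfectly known starting position
history = []
for _ in range(100):
    p = p + q            # predict step: uncertainty only ever grows
    history.append(p)
```

After 100 steps the variance has grown from 0 to 1.0; the sequence is strictly increasing because nothing in the loop ever injects new information.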
The goal of probabilistic estimation (or any estimation method) is to provide a meaningful estimate of the system state, which in this case is the robot pose. The problem with the command integration method is that information is continually lost and no new information is ever gained. The solution is to inject new information into the system through the use of sensors that gather information about the environment. Consider, for example, a robot equipped with a sensor capable of measuring the range and bearing to a landmark with a known location. Such a measurement adds new information to the system and can be used (at least partially) to compensate for the information that was lost by integrating. The new information can be represented as a PDF in sensor space, usually with a peak at the value of the sensor reading. Knowledge about the robot location prior to the sensor measurement is described by a PDF in the state space. The crux of the probabilistic estimation problem is to merge these two distributions in a meaningful way.

Generally, any probabilistic estimation method can be thought of as a two-step process of prediction and update. Given an estimate of the system state in the form of a PDF, the prediction propagates the PDF according to the robot commands together with a motion model for the robot. The update step then corrects the prediction by merging the predicted PDF with information collected by the sensors. The "new" estimate is given by the updated PDF, and the process is iterated. Note that in each iteration, the prediction step accounts for the information lost due to errors in the motion model, while the update step incorporates information gained by the sensors.

The Kalman filter is a well-known probabilistic estimation technique. In Kalman filtering, the motion model is assumed to be a linear function of the state variables and the inputs (commands). The quantities measured by the sensors are assumed to be linear functions of the state variables. Errors in both the motion model and the sensor model are assumed to be zero-mean white Gaussian noise. Because of this simple form, it is possible to derive closed-form equations to perform the prediction and update steps, making Kalman filter implementation a straightforward process [2].

Kalman Filtering

In order to apply Kalman filtering to the problem of robot localization, it is necessary to define equations that can be used to model the dynamics and sensors of the robot system. The vector x is used to denote the system (robot) state as it evolves through time. This project uses discrete-time models, meaning that the continuously varying robot state is sampled at discrete, regularly spaced intervals of time to create the sequence x(k), where k is the discrete time index, k = 0, 1, 2, …. Specifically, if x(0) is the value of the state at time t = t_0, then x(k) is the value of the state at time t_0 + Tk, where T is the sampling time.

The evolution of the robot state and the values measured by the robot sensors can be modeled as a linear dynamical discrete-time system as follows [2]:

x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k) = H(k) x(k) + w(k)

Experience in building systems within this linear dynamic model framework has led to a set of principles for integrating perceptual information. These principles follow directly from the nature of the cyclic process of dynamic world modeling, and they have been elaborated in the paper "Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot" by James L. Crowley [1].

1) Primitives in the world model should be expressed as a set of properties [1]

A model primitive expresses an association of properties which describe the state of some part of the world. This association is typically based on spatial position, for example, the co-occurrence of a surface with a certain normal vector, a yellow color, and a certain temperature. For numerical quantities, each property can be listed as an estimate accompanied by a precision. For symbolic entities, the property slot can be filled with a list of possible values from a finite vocabulary. This association of properties is known as a "state vector" in estimation theory.

2) Observation and model should be expressed in the same coordinate system [1]

The common coordinate system may be scene based or observer based. The choice of reference frame should be determined by considering the total cost of the transformations involved in each predict-match-update cycle. For example, in the case of a single stationary observer, a sensor-based coordinate system may minimize the transformation cost. For a moving observer with a model which is small relative to the size of the observations, it may be cheaper to transform the model to the current sensor coordinates during each cycle of modeling. On the other hand, when the model is large compared to the number of observations, using an external scene-based system may yield fewer computations.

Transformations between coordinate frames generally require a precise model of the entire sensing process. This description, often called a "sensor model", is essential to transform a prediction into the observation coordinates, or to transform an observation into a model-based coordinate system.

3) Observation and model should be expressed in a common vocabulary [1]

In order to match or to add information to a model, an observation needs to be transformed into the terms of the database in order to serve as a key. It is possible to calculate such information as needed. However, since the information is used both in matching and in updating, it makes more sense to save it between phases. Hence, the observation should be expressed in a subset of the properties used in the model.

An efficient way to integrate information from different sensors is to define a standard "primitive" element which is composed of the different properties that may be observed or inferred from different sensors. Any one sensor might supply observations for only a subset of these properties. Transforming the observation into the common vocabulary allows the fusion process to proceed independently of the source of the observations.

4) Properties should include an explicit representation of uncertainty [1]

Dynamic world modeling involves two kinds of uncertainty: precision and confidence. Precision can be thought of as a form of spatial uncertainty. By explicitly listing the precision of an observed property, the system can determine the extent to which an observation is providing new information to a model. Unobserved properties can be treated as observations which are very imprecise. Having a model of the sensing process permits an estimate of the uncertainties to be calculated directly from the geometric situation.

5) Primitives should be accompanied by a confidence factor [1]

A confidence factor provides the world modeling system with a simple mechanism for non-monotonic reasoning. Observations which do not correspond to expectations may initially be considered uncertain. If confirmation is received from further observations, their confidence is increased. If no further confirmation is received, they can be eliminated from the model.

The application of these principles leads to a set of techniques for the processes of dynamic world modeling.

Predict-Match-Update Process

In a dynamic world modeling framework, independent observations are "transformed" into a common coordinate space and vocabulary. These observations are then integrated (fused) into a model (or internal perception) by a cyclic process composed of three phases: Predict, Match and Update [1].

3. A framework of dynamic world modeling [1].

Predict: In the prediction phase, the current state of the model is used to predict the state of the external world at the time that the next observation is taken.

Match: In the match phase, the transformed observation is brought into correspondence with the predictions. Such matching requires that the observation and the prediction express information which is qualitatively similar. Matching requires that the predictions and observations be transformed to the same coordinate space and to a common vocabulary.

Update: The update phase integrates the observed information with the predicted state of the model to create an updated description of the environment composed of hypotheses.

Chapter 4
KALMAN FILTER ALGORITHM

The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation x̂_{n|m} represents the estimate of x at time n given observations up to, and including, time m.

The state of the filter is represented by two variables:

x̂_{k|k}, the a posteriori state estimate at time k given observations up to and including time k;

C_{k|k}, the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).

The Kalman filter has two distinct phases: Predict and Update. As mentioned previously, the predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current time step, it does not include observation information from the current time step. In the update phase, the current a priori prediction is combined with the current observation information to refine the state estimate. This improved estimate is termed the a posteriori state estimate.

Usually, the update phase follows the predict phase, incorporating the observation at each step. This observation value helps provide an updated state of the system which is nearer to the true value. However, in situations such as the absence of an observation value or the presence of multiple observation values, the update phase may change. Either the update step is skipped and several prediction steps are performed in succession, or the observation values are user-defined in order to provide a beacon point for the predictions. Similarly, multiple update values can be evaluated, depending on the number of available observations (typically with different observation matrices H_k). In order to perform the predict and update steps explained above, the Kalman filter algorithm is used, which is summarized as follows [12].

Predict
Predicted (a priori) state:
x̂_{k|k−1} = F_k x̂_{k−1|k−1} + G_k u_k
Predicted (a priori) estimate covariance:
C_{k|k−1} = F_k C_{k−1|k−1} F_k^T + W_k
Update
Innovation or measurement residual:
y_k = z_k − H_k x̂_{k|k−1}
Innovation (or residual) covariance:
S_k = H_k C_{k|k−1} H_k^T + V_k
Optimal Kalman gain:
K_k = C_{k|k−1} H_k^T S_k^{−1}
Updated (a posteriori) state estimate:
x̂_{k|k} = x̂_{k|k−1} + K_k y_k
Updated (a posteriori) estimate covariance:
C_{k|k} = (I − K_k H_k) C_{k|k−1}
The formula for the updated estimate covariance above is only valid for the optimal Kalman gain. Usage of other gain values requires a more complex formula, found in the derivations section.
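As a concrete sketch of the predict–update cycle above, the following Python/NumPy functions (Python is used here purely for illustration; the project's own simulations are in MATLAB) follow the same notation, with W and V the process and measurement noise covariances:

```python
import numpy as np

def kf_predict(x, C, F, G, u, W):
    """Predicted (a priori) state and estimate covariance."""
    x_pred = F @ x + G @ u
    C_pred = F @ C @ F.T + W
    return x_pred, C_pred

def kf_update(x_pred, C_pred, z, H, V):
    """Updated (a posteriori) state and estimate covariance."""
    y = z - H @ x_pred                      # innovation
    S = H @ C_pred @ H.T + V                # innovation covariance
    K = C_pred @ H.T @ np.linalg.inv(S)     # optimal Kalman gain
    x_upd = x_pred + K @ y
    C_upd = (np.eye(len(x_pred)) - K @ H) @ C_pred
    return x_upd, C_upd
```

One cycle calls `kf_predict` and then, when a measurement z is available, `kf_update`; skipping the update (as discussed above) simply means carrying x_pred, C_pred forward.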
Extended Kalman Filter
The robot and goal motion models used in this project are highly non-linear; thus, the linear Kalman filter cannot be used. To solve this problem we use the extended Kalman filter (EKF). The extended Kalman filter is a filter where the state transition and observation models are non-linear functions of the state and input. The dynamic model is given by
x_k = f(x_{k−1}, u_k) + w_k
z_k = h(x_k) + v_k
The function f can be used to compute the predicted state from the previous estimate, and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed.
The Jacobian of a function describes the orientation of the tangent plane to the function at a given point. For example, for a one-dimensional map x_{n+1} = f(x_n) with a fixed point x*, the Jacobian is simply the first-order derivative of f(x_n) evaluated at that fixed point. For the Jacobian to exist, only the partial derivatives of the function need to exist. Hence, the Jacobian gives the best possible linear approximation to a differentiable function near a given point. Since the Jacobian is the derivative of a multivariate function, for n variables, n > 1, the derivative is matrix-valued, with one partial derivative per entry.
At each time step the Jacobian is evaluated with the current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around the current estimate [12].
Predict
Predicted state:
x̂_{k|k−1} = f(x̂_{k−1|k−1}, u_{k−1})
Predicted estimate covariance:
C_{k|k−1} = F_{k−1} C_{k−1|k−1} F_{k−1}^T + W_{k−1}
Update
Innovation or measurement residual:
y_k = z_k − h(x̂_{k|k−1})
Innovation (or residual) covariance:
S_k = H_k C_{k|k−1} H_k^T + V_k
Optimal Kalman gain:
K_k = C_{k|k−1} H_k^T S_k^{−1}
Updated state estimate:
x̂_{k|k} = x̂_{k|k−1} + K_k y_k
Updated estimate covariance:
C_{k|k} = (I − K_k H_k) C_{k|k−1}
where the state transition and observation matrices are defined to be the following Jacobians:
F_{k−1} = ∂f/∂x evaluated at (x̂_{k−1|k−1}, u_{k−1})
H_k = ∂h/∂x evaluated at x̂_{k|k−1}
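As a sketch of how the EKF linearizes f and h around the current estimate at each step, the following Python/NumPy code approximates the Jacobians by finite differences (an illustrative choice; analytic Jacobians are equally valid and are what the equations above denote):

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Numerical Jacobian of func at x via forward differences."""
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (func(x + dx) - fx) / eps
    return J

def ekf_step(x, C, u, z, f, h, W, V):
    # Predict: propagate the state through f; linearize f for the covariance
    F = jacobian(lambda s: f(s, u), x)
    x_pred = f(x, u)
    C_pred = F @ C @ F.T + W
    # Update: linearize h around the predicted state
    H = jacobian(h, x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ C_pred @ H.T + V                # innovation covariance
    K = C_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_upd = x_pred + K @ y
    C_upd = (np.eye(len(x)) - K @ H) @ C_pred
    return x_upd, C_upd
```

The model functions f and h are supplied by the caller (for this project, the unicycle kinematics and range/bearing measurements introduced in Chapter 6 would play those roles).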
Chapter 5
PREVIOUS RESEARCH IN IMPLEMENTATION OF KALMAN FILTERING
In this section, we present case studies where odometry and Kalman filtering are used to predict and update the system states and eventually achieve the desired goal of the project.
Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot [1]
This work concerns the application of estimation theory to the problem of navigation and perception of a mobile robot. A hierarchical structure is presented for the design of a mobile robot navigation system. The control system of a mobile robot is found to decompose naturally into a set of layered control loops, where the layers are defined by the level of abstraction of data and the cycle time of the feedback control. The levels that occur naturally are identified as the levels of signal, device, behavior and task.
The vehicle controller provides asynchronous, independent control of forward translation and rotation. The controller accepts both velocity and displacement commands, and can support a variety of navigation techniques. In addition, the controller operates in terms of a standardized "virtual vehicle" which may be easily adapted to a large variety of vehicle geometries. All vehicle-specific information is contained in the low-level interface which translates the virtual vehicle into the specific vehicle geometry.
The use of a Kalman filter as the basis for a vehicle controller makes it possible to correct errors in odometric position estimation using external perception. The concepts of this project are largely adapted from this paper.
Kalman Filtering for Positioning and Heading Control of Ships and Offshore Rigs [5]
This application mainly discusses the role of the Kalman filter in dealing with uncertainties in ship motion control caused by environmental conditions like waves, wind and current. Modern marine vessels are equipped with sophisticated motion control systems that have different objectives depending on the particular operations being performed. Some of these control objectives include position and heading regulation, path following, trajectory tracking and wave-induced motion reduction [5].
The stochastic nature of environmentally induced forces has led to extensive use of the Kalman filter for estimating ship-motion-related quantities and for filtering disturbances, both of which are of paramount importance for implementing marine motion-control systems. In this application, elements of a ship motion-control system are introduced, describing the models used for position regulation and course-keeping control, and how the Kalman filter is used for wave filtering, that is, removing oscillatory wave-induced motion from velocity and positioning measurements [5].
Accurate Odometry and Error Modeling for a Mobile Robot [10]
This case study examines the important steps involved in the design, calibration and error modeling of a low-cost odometry system capable of achieving high-accuracy path planning. A consistent error model is developed for updating every step of position and orientation of the odometric system. The approach taken in this case takes into account the noise over the entire path track to produce simple closed-form expressions, allowing efficient correction or update of the covariance matrix after the completion of each path segment [10].
An accurate odometry system is presented which is comparable to the best reported systems but can be fabricated at low cost. A new first-order odometry error model is derived to propagate the state error covariance following a motion. The Kalman filter can implement this procedure easily, since a covariance matrix is developed, which is a common approach to representing uncertainties or noise in many statistical and filtering theories. However, this model fails to consider unexpected errors such as hitting a bump on the floor, or any other noise that violates the flat-floor assumption. External tools or sensors may be required to detect such errors for certain applications like mapping.
Chapter 6
TRACKING–NAVIGATION OF A MOVING GOAL
The robot and the moving goal move in the horizontal plane. The goal maneuvers are not known a priori to the robot. The aim is to design a closed-loop control law for the robot steering angle which ensures reaching the moving goal. We assume that the following conditions are satisfied.
1. The robot is faster than the moving goal, and the goal moves along a smooth path.
2. The minimum turning radius of the robot is smaller than the minimum turning radius of the moving goal.
3. The robot has a sensory system which provides the control system with the necessary information about the target and the environment. The target's speed, orientation, and position are not exactly known, but can be measured and estimated using a Kalman filter.
4. State space diagram for an odometric system [9]
Point O(0,0) is the origin of an inertial reference frame of coordinates. The robot is a wheeled mobile robot of the unicycle type with the following kinematics:
ẋ_R = v_R cos θ_R
ẏ_R = v_R sin θ_R
θ̇_R = ω_R
where (x_R, y_R) represents the position of the robot's reference point in the inertial frame of reference and θ_R is the robot's orientation angle. v_R and ω_R represent the linear and angular velocities of the robot, respectively. The state of the robot is characterized by s_R = [x_R, y_R, θ_R]^T. The state has a Gaussian distribution with mean value ŝ_R = [x̂_R, ŷ_R, θ̂_R]^T and covariance matrix C_R. The goal moves according to the following kinematics equations:
ẋ_G = v_G cos θ_G
ẏ_G = v_G sin θ_G
θ̇_G = ω_G
where (x_G, y_G) represents the position of the goal in the inertial frame of reference and θ_G is the goal's orientation angle. v_G is the goal's linear velocity. The state of the goal is characterized by s_G = [x_G, y_G, θ_G]^T. A Gaussian distribution is assumed for the state, with mean value ŝ_G = [x̂_G, ŷ_G, θ̂_G]^T and covariance matrix C_G.
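A minimal discrete-time (Euler) integration of these unicycle kinematics can be sketched as follows; the sampling time T is illustrative:

```python
import numpy as np

def unicycle_step(state, v, omega, T):
    """One Euler step of the unicycle kinematics
    x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    x, y, theta = state
    return np.array([x + v * np.cos(theta) * T,
                     y + v * np.sin(theta) * T,
                     theta + omega * T])
```

The same function serves for both the robot (v_R, ω_R) and the goal (v_G, ω_G), each propagated from its own state.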
5. Geometrical representation of robot/target positions (the X–Y frame, showing the robot at (x_R, y_R), the target at (x_G, y_G), the range r_GR and the line-of-sight angle Ψ_GR)
The geometrical representation of the target and robot coordinates is shown above. The orientation angle of the robot relative to the target is represented by Ψ. The following equations show the implementation of the linear navigation law, which describes the motion of the robot with respect to a moving target. The relative velocity between the target and the robot is given by
ẋ_GR = v_G cos θ_G − v_R cos θ_R
ẏ_GR = v_G sin θ_G − v_R sin θ_R
The orientation angle between the robot and the target is given by
tan Ψ_GR = (y_G − y_R) / (x_G − x_R)
The distance between the robot and the target is given by
r_GR = √((y_G − y_R)² + (x_G − x_R)²)
The onboard sensory system returns values for Ψ_GR and r_GR. These observation or measurement values are corrupted by noise, which is Gaussian with zero mean. Thus, the equations for Ψ_GR and r_GR can be written as follows:
Ψ_GR = atan2(y_G − y_R, x_G − x_R) + v_1
r_GR = √((y_G − y_R)² + (x_G − x_R)²) + v_2
where v_1 and v_2 are both measurement noise components.
Figure 6 shows the probability density function (PDF) for the noise. The standard deviation of v_1 is 0.4 with mean 0, whereas the standard deviation of v_2 (distance) is 0.5 with a mean value of 0. Figure 7 shows the uncertainty propagation for Ψ_GR and r_GR. Figure 8 shows the PDFs for the relative positions, where the PDFs are obtained using Monte Carlo simulation for r_GR = 10 and Ψ_GR = 45°. Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs. To perform the Monte Carlo simulation, the following equations are evaluated:
x_GR = r_GR cos Ψ_GR
y_GR = r_GR sin Ψ_GR
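The Monte Carlo procedure just described can be sketched as follows; the sample count and random seed are illustrative, while the noise standard deviations (0.4 for the angle, 0.5 for the range) are the values quoted above for v_1 and v_2:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
r_true = 10.0
psi_true = np.deg2rad(45.0)    # mean line-of-sight angle

# noisy sensor returns: v1 on the angle, v2 on the range
psi = psi_true + rng.normal(0.0, 0.4, n_samples)
r = r_true + rng.normal(0.0, 0.5, n_samples)

# propagate the samples through the nonlinear polar-to-Cartesian map
x_gr = r * np.cos(psi)
y_gr = r * np.sin(psi)

# histogramming x_gr and y_gr yields the (non-Gaussian) PDFs of figure 8
print(x_gr.mean(), y_gr.mean())
```

Note that even though v_1 and v_2 are Gaussian, the nonlinear map makes the PDFs of x_GR and y_GR non-Gaussian, which is what the figures below illustrate.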
The following graphs illustrate the PDFs for v_1, v_2, r_GR, Ψ_GR, x_GR and y_GR.
6. Probability density functions for the noises v_1 (σ = 0.4) and v_2 (σ = 0.5)
7. Probability density functions for r_GR and Ψ_GR (mean values r_GR = 10, Ψ_GR = 45°)
8. Probability density functions for the positions x_GR and y_GR (for mean values r_GR = 10, Ψ_GR = 45°)
Linear Navigation Law
The other contribution of this project consists of suggesting a solution to the navigation-tracking problem using the linear navigation law. In our formulation, the navigation law is defined in terms of the orientation angle or the angular velocity, which is more suitable for robotics applications. This formulation allows an easy integration between the navigation-tracking and obstacle avoidance modes. The integration is accomplished by tuning the control variables of the proportional navigation law. The aim of the navigation law is to make the robot angular velocity proportional to the rate of turn of the angle of the robot-goal line of sight, which means that the robot's steering angle is given by
θ_R(t) = N Ψ̂_GR(t) + c
where Ψ̂_GR(t) is the robot-goal line-of-sight angle as shown in figure 6. N is a positive real number called the proportionality factor or constant, with N ≥ 1. Note that N = 1 corresponds to pure pursuit. c is a real constant called the deviation constant. The aim of the controller is to drive the line-of-sight angle rate to zero, so that the line-of-sight angle tracks an asymptotically stable equilibrium solution that results in a decreasing range. In some situations, this equilibrium solution approximates the goal's orientation angle. The navigation constant N plays an important role: different values of N result in different paths. For small values of N, small path corrections are required at the beginning of the navigation-tracking process; however, larger corrections are required near the interception. A rendezvous behavior is obtained for larger values of N, where the robot's path tends to be more curved at the beginning, and large path corrections are needed; these corrections decrease with time. The robot navigating under the proportional navigation law reaches the goal from almost all initial states. Several examples illustrating the method are shown in our simulation. The MATLAB code implementing the navigation law is included in the appendix, and its corresponding simulation results are shown in figures 9 and 10.
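The navigation law above can be sketched compactly in code. The following Python function mirrors the MATLAB implementation in the appendix (same initial positions and speeds); the sampling interval, step count and capture radius are illustrative choices:

```python
import math

def navigate(N=3.0, c=0.0, vR=4.0, vG=2.0, h=0.1, steps=2000, capture=0.5):
    """Robot steered by theta_R = N * psi_GR + c toward a goal moving
    straight up at constant speed. Returns True if the robot closes to
    within the capture radius."""
    xR, yR = 0.0, 0.0          # robot start
    xG, yG = 20.0, 20.0        # goal start
    thetaG = math.pi / 2.0     # goal orientation (moving along +Y)
    for _ in range(steps):
        xG += vG * math.cos(thetaG) * h
        yG += vG * math.sin(thetaG) * h
        psi = math.atan2(yG - yR, xG - xR)   # line-of-sight angle
        thetaR = N * psi + c                 # linear navigation law
        xR += vR * math.cos(thetaR) * h
        yR += vR * math.sin(thetaR) * h
        if math.hypot(xG - xR, yG - yR) < capture:
            return True
    return False
```

Plotting (xR, yR) against (xG, yG) for different N and c reproduces the qualitative behavior of figures 9 and 10: larger N curves the initial path more strongly.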
Chapter 7
KALMAN FILTERING IMPLEMENTATION IN NAVIGATION/TRACKING
Estimation of the position of the vehicle with respect to the external world is fundamental to navigation. It provides the basic set of tools for position estimation and environmental modeling. These tools provide an elegant and formally sound method for combining internal and external sensor information from different sources, operating at different rates. With the Kalman filter as the basis for these processes, a predict-match-update cycle is derived as a framework for perception.
Position estimation also includes a model of the errors of the position estimation process. Thus the estimated position is accompanied by an estimate of uncertainty, in the form of a covariance matrix. The covariance matrix makes it possible to correct the estimated position using a Kalman filter. The correction is provided as a side effect of the process of dynamic modeling of the local environment.
The experiment performed in this project is path planning for a robot that follows a target moving under uncertainty. The probabilistic estimation is performed by recurring predict and update stages, implemented with odometry and a discrete Kalman filter.
In general, the configuration of a robot can be described by six parameters: three-dimensional Cartesian coordinates plus the Euler angles pitch, roll and yaw. Throughout this experiment, we consider robots operating on a planar surface. The state space of such systems is three-dimensional (x_R, y_R, θ_R). The angle θ_R is the orientation of the vehicle heading with respect to the X axis [9].
This is a simulation of a virtual robot following a target. The main focus of this project is the implementation of a Kalman filter to successfully estimate the next step and track the moving destination.
The target also moves in an uncertain manner, based on predicted values obtained from the Kalman filter equations. The Kalman filter equations predict the next-step values for the moving target. The observed values are obtained from one of the assumptions made in the experiment.
The predicted state vector values for the robot depend strongly on the target's predicted next state. Here, partial prediction is done on account of the target movement. Since the orientation is the quantity used to steer the robot, the next state values of the robot are predicted by following the orientation of the target. The translation of the robot is then updated depending on the orientation of the target and the equations of motion.
The uncertainty estimate is based on a model of the errors which corrupt the prediction and observation processes. Estimating these errors is both difficult and essential to the function of such a system.
The uncertainty estimate serves two crucial roles:
1) the tolerance bounds for matching observations to predictions, and
2) the relative strength of prediction and observation when calculating a new estimate.
Because the covariance determines the tolerance for matching, system performance will degrade rapidly if we under-estimate the covariance. On the other hand, overestimating the covariance may increase the computing time for finding a match. Here, the covariance values are stored in the array "arr", which helps predict the next-step covariance. Later, the covariance values are updated based on the discrete Kalman filter update equations [1].
The state transition matrices for the robot and the destination are kept different in this experiment. This determines the interval at which the next values of the robot and target movement are predicted. Also, the measurement noise and the process noise are user-defined constants.
A Kalman Filter Model for Odometric Position Estimation [1]
The robot estimates and commands a state vector composed of the incremental translation and rotation (Δx_R, Δy_R, Δθ_R) and their derivatives.
The robot also maintains estimates of the absolute position and orientation (the pose) and its uncertainty, represented by a covariance. The "pose" of a vehicle is defined as the estimated values for position and orientation:
X = [x_R, y_R, θ_R]^T
The uncertainty in pose is represented by a covariance matrix:
C_x = [ σ_x²  σ_xy  σ_xθ
        σ_xy  σ_y²  σ_yθ
        σ_xθ  σ_yθ  σ_θ² ]
The vehicle controller responds to commands. The command to correct the estimated position applies a Kalman filter to the position using the "innovation" ΔY:
X ← X̂ + K ΔY
C_x ← Ĉ_x − K Ĉ_x
The Kalman gain matrix is a 3x3 matrix which tells how much of the change in each parameter should be taken into account in correcting the other parameters. The process for estimating and correcting the vehicle's pose and uncertainty from odometry is also described.
Estimating Position and Orientation and their Corresponding Uncertainties
The odometric position estimation process is activated at a regular interval. The process requests the translator to provide the change in translation, ΔS = [Δx_R, Δy_R], and rotation, Δθ_R. The process also reads the system real-time clock to determine the change in time, ΔT_k. The accumulated translation and rotation are calculated as:
S ← S + ΔS
θ ← θ + Δθ
The vehicle pose is then updated by means of algebraic manipulation, using the average orientation during a cycle, given by the previous estimate plus half the incremental change. The overall state vector is updated as
X ← X + ΔX
The estimated position is accompanied by an estimate of its uncertainty, represented by a covariance matrix C_x. Updating the position uncertainty requires expressing a linear transformation which estimates the update in position:
Y = [ΔS, Δθ]^T
The contribution of Y to X can be expressed in linear form using a matrix which expresses the sensitivity of each parameter of X to the observed values, by use of the Jacobian of X with respect to Y.
The uncertainty in orientation also contributes to an uncertainty in position. This is also captured by a Jacobian. This linear expression permits the covariance of the estimated position to be updated by:
C_X ← J_X C_X J_X^T + J_Y [ σ_s²  0 ; 0  σ_θ² ] J_Y^T
These matrices are a discrete model of the odometric process. They give the sensitivity of the covariance matrix to uncertainty in the terms ΔS and Δθ. For pose estimation, it is generally the case that an uncertainty in orientation strongly contributes to an uncertainty in Cartesian position. In particular, the uncertainty in orientation dominates the odometric error model [1].
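This covariance propagation can be sketched as follows, assuming the mid-step orientation update described above; the variances var_s and var_th for ΔS and Δθ are illustrative inputs:

```python
import numpy as np

def propagate_pose_cov(pose, C, ds, dth, var_s, var_th):
    """One odometric update of the pose (x, y, theta) and its covariance C,
    using the mid-step orientation theta + dth/2."""
    x, y, th = pose
    tm = th + dth / 2.0
    new_pose = np.array([x + ds * np.cos(tm),
                         y + ds * np.sin(tm),
                         th + dth])
    # Jacobian of the new pose with respect to the old pose
    Jx = np.array([[1.0, 0.0, -ds * np.sin(tm)],
                   [0.0, 1.0,  ds * np.cos(tm)],
                   [0.0, 0.0,  1.0]])
    # Jacobian of the new pose with respect to the observed (ds, dth)
    Jy = np.array([[np.cos(tm), -0.5 * ds * np.sin(tm)],
                   [np.sin(tm),  0.5 * ds * np.cos(tm)],
                   [0.0,         1.0]])
    Cy = np.diag([var_s, var_th])
    new_C = Jx @ C @ Jx.T + Jy @ Cy @ Jy.T
    return new_pose, new_C
```

Iterating this over several motion segments shows the dominance of orientation error: the −ds·sin and ds·cos terms in Jx couple the θ variance into the x and y variances at every step.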
Correcting the Estimated Position from Observations of the Environment using the Kalman Gain
Different sensing modalities provide different kinds of information about position. Each of the different kinds of observations can be used to correct an estimate of the position and orientation of a vehicle. The choice of perceptual modality depends on the environment, the task and the available computing power [1].
The new information that can be introduced into the estimated position is the difference between the observed coordinates and the predicted coordinates. This information, called the innovation, is denoted ΔY.
This innovation is a composition of three sources of errors:
1) error in the perception of the target, C_o;
2) error in the knowledge of the target's position, C_b;
3) error in the estimated position of the vehicle, C_x.
A direct computation of the new uncertainty in the vehicle position C_x is given by
C_x = (C_b⁻¹ + C_o⁻¹)⁻¹
By means of some algebraic manipulation, it is possible to transform this update into a recursive form expressed by the Kalman filter. In the recursive form, a Kalman gain is computed as:
K = Ĉ_x (C_b + C_o)⁻¹
and the position and its uncertainty are given by the familiar forms [1]:
X = X̂ + K ΔY
C_x = Ĉ_x − K Ĉ_x
Some of the assumptions made in this project are:
1) The velocity is kept constant.
The equation of motion used in this experiment is:
x_k = x_{k−1} + v cos(θ_k) T
y_k = y_{k−1} + v sin(θ_k) T
where T is the sampling time, v is the constant velocity of the robot/target, k is the discrete time index in the Kalman filter and θ_k is the orientation of the robot at time k. Here we assume the velocities of the robot and the target to be constant. Hence, the translational movement of the robot and the target remains essentially the same in each recurring step.
2) The first two steps are user-defined in order to be able to find the covariance.
In this experiment, there are three variables of motion: x and y for the translation and θ for the orientation. Hence the covariance matrix is a 3x3 diagonal matrix, which needs three different values in order to compute the covariance. Hence, the first two values of the robot/target movement are user-defined, whereas the third step onwards is predicted and updated by the rules of the Kalman filter, where the covariance matrix is calculated at each step based on the previous, present and next-state predicted values.
3) Since this is a simulation of the actual vehicle controller, the observation values provided by the sensors are simulated here by adding a noise value to the predicted value. This observed value is used to obtain the updated value.
Chapter 8
SIMULATION RESULTS
Implementation of Navigation Law Equations
This section shows the various paths covered by the robot to eventually reach its target by use of the linear navigation law. In the following simulation graphs, the red color depicts the robot path and the blue color depicts the target path. Here, N corresponds to the proportionality factor, whereas c is the deviation constant. The larger the value of N, the more curved the initial part of the path, which reflects the need for large path corrections.
For variable N and constant c = 0:
Simulation 1: N=1, c=0    Simulation 2: N=3, c=0
Simulation 3: N=10, c=0
9. Linear navigation law simulation for constant c and variable N
For constant N=3 and variable c:
Simulation 1: N=3, c=10    Simulation 2: N=3, c=-25
10. Linear navigation law simulation for constant N and variable c
Hence, the simulation graphs above show how changing the values of N, the proportionality factor, and c, the deviation constant, affects the path tracked by the robot to reach its target.
Representation of Error Ellipses
The following figure shows the error ellipses obtained from the implementation of the Kalman filter in this project. They depict the effect of uncertainties in odometry.
11. Simulation results depicting uncertainties through error ellipses
The above figure is the zoomed view of the error ellipses, showing 5 recurring steps of the Kalman filter implementation. The red dot depicts the true/observed value, whereas the inverted triangle is the value predicted by the Kalman filter.
Robot Target Path
The starting points of the robot and the target are encircled in red.
12. Robot target path
This result shows the paths of the robot and the target, where the robot path is shown in green and the target path is shown in red. Also, the "+" and "*" signs show the locations of the target and robot, respectively, at discrete time intervals. The robot follows the target path, as shown, by following its orientation angle. The translational movement of the robot and the target remains essentially the same. Successful implementation of the Kalman filter in predicting and updating the next step of both the target and the robot is also shown.
Covariance Distribution
13. Covariance matrix distribution
This simulation result shows the covariance matrix distribution. Statistical analyses of multivariate data often involve exploratory studies of the way in which the variables change in relation to one another, and this may be followed up by explicit statistical models involving the covariance matrix of the variables. Thus, the estimation of covariance matrices directly from observational data plays two roles:
Estimates of covariance matrices are required at the initial stages of principal component analysis and factor analysis, and are also involved in versions of regression analysis that treat the dependent variables in a data set, jointly with the independent variable, as the outcome of a random sample.
The covariance matrix here is effectively a variance matrix, since only the diagonal terms are considered, under the assumption that the off-diagonal terms of the covariance matrix are uncorrelated.
Error Ellipses
14. Error ellipses for the first and the 15th step robot/target position
This is the error ellipse for the first step of the covariance matrix.
Error ellipses are derived from the covariance matrix, and provide a graphical means of viewing the results of a network adjustment. Error ellipses can show [6]:
orientation weakness (minor axes pointing to the datum)
scale weakness (major axes pointing to the datum)
etc.
Generally, the standard deviation in x, y, and z is used as a guide to the precision of a point; the error ellipse gives more detailed information: the maximum and minimum standard deviations and their associated directions. The orientation of the ellipse is basically given by the correlation term. The error ellipse is an approximation of the pedal curve, which is the true shape of the standard error in all directions.
A standard error ellipse shows the region, if centered at the true point position, where the least squares estimate falls with a confidence of 39.4%. Since the truth is rarely known, the error ellipse is taken to represent the 39.4% confidence region, and when drawn it is centered at the least squares estimate of the position (rather than the true position). To obtain different confidence regions, the length of the ellipse axes is simply multiplied by an appropriate factor:
Confidence region 39.4% 86.5% 95.0% 98.9%
Factor 1.000 2.000 2.447 3.000
These multiplicative factors are determined from the χ² distribution. Generally, the 95% confidence region ellipses are plotted.
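The factors in the table can be reproduced from the two-dimensional Gaussian relation P = 1 − exp(−k²/2) (equivalent to the χ² form with two degrees of freedom), solved for the axis multiplier k; a small sketch:

```python
import math

def ellipse_scale(p):
    """Axis multiplier for a 2-D Gaussian confidence region:
    p = 1 - exp(-k**2 / 2)  =>  k = sqrt(-2 ln(1 - p))."""
    return math.sqrt(-2.0 * math.log(1.0 - p))
```

For example, ellipse_scale(0.394) recovers the unit factor of the standard error ellipse, and ellipse_scale(0.95) the commonly plotted 95% factor of about 2.447.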
Absolute error ellipses show the effects of the datum; for example, points further away from the datum point in a survey network generally have larger error ellipses than points close to the datum point.
Relative error ellipses are not influenced by the choice of datum. Relative error ellipses are still derived from the covariance matrix, only they involve two points and an exercise in propagation of variance [6].
The second figure above shows the error ellipse for the step-15 covariance matrix. The equations used for calculating the error ellipses are given below. It can be shown that the square roots of the eigenvalues of the covariance matrix correspond to the error ellipse axis lengths, and that the eigenvectors corresponding to those eigenvalues define the error ellipse axis directions.
For example, for the covariance matrix
C = [ σ_n²  σ_ne
      σ_ne  σ_e² ]
the eigenvalues λ satisfy
λ² − (σ_n² + σ_e²) λ + (σ_n² σ_e² − σ_ne²) = 0 [11]
with solutions
λ_{1,2} = ( σ_n² + σ_e² ± √((σ_n² − σ_e²)² + 4 σ_ne²) ) / 2
and the orientation of the ellipse is given by
tan 2φ = 2 σ_ne / (σ_n² − σ_e²)
where φ is the angle the ellipse makes with the N axis [11].
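A sketch of this computation with NumPy; the function returns the semi-axis lengths (square roots of the eigenvalues, major axis first) and the orientation angle φ from the formula above:

```python
import numpy as np

def error_ellipse(C):
    """Semi-axis lengths and orientation of the error ellipse of a
    2x2 covariance matrix [[sn2, sne], [sne, se2]]."""
    lam = np.linalg.eigvalsh(C)                  # eigenvalues, ascending
    axes = np.sqrt(lam[::-1])                    # major, minor semi-axes
    sn2, se2, sne = C[0, 0], C[1, 1], C[0, 1]
    phi = 0.5 * np.arctan2(2.0 * sne, sn2 - se2) # angle to the N axis
    return axes, phi
```

For a diagonal covariance the ellipse is axis-aligned (φ = 0); a nonzero correlation term σ_ne rotates it, exactly as stated above.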
Chapter 9
CONCLUSION
This project presents a successful implementation of the principles of odometry and Kalman filtering on a robot to track and reach a moving target. The robot uses a linear navigation law to navigate in the workspace. Several simulation examples using MATLAB are shown to illustrate the approach. The effects of uncertainties have been taken into account and represented by error ellipses. Factors affecting the robot path and the effects of increasing uncertainties are also considered and reported. Also, this project elaborates on the importance of Kalman filtering in robot path planning, and explains the prediction and correction (update) steps performed under the effects of uncertainties or noise to eventually track the moving target.
APPENDIX

MATLAB Code Implementing Kalman Filter to show Error Ellipses

T = 0.5;
A = [1 T; 0 1];
B = [0; T];
C = [1 0; 0 1];
D = 0;
Q = [0.2 0.05; 0.05 0.1];       % process noise covariance (Q)
R = [0.5 0; 0 0.5];             % sensor noise covariance (R)

u = 1.5;
randn('seed',0);

P = [1 0; 0 2];                 % initial error covariance
xhat = [2; 4];                  % initial condition on the state estimate
x0 = [2.9; 3.5];                % initial true state

for i = 1:0.1:10

    x1 = A*x0 + B*u;            % true state propagation

    % Prediction
    xhat1 = A*xhat + B*u;       % x[n+1|n]
    P1 = A*P*A' + Q;            % P[n+1|n]

    % Update
    y1 = C*x1;
    M = y1 - C*xhat1;           % innovation
    S = C*P1*C' + R;            % innovation covariance
    Z = P1*C'*inv(S);           % Kalman gain
    xhat11 = xhat1 + Z*M;
    P11 = P1 - Z*C*P1;          % P[n|n]
    pause
    xhat = xhat11;
    plot(x1(1), x1(2), 'r.')
    hold on
    plot(xhat11(1), xhat11(2), 'kv')

    x0 = x1;
    P = P11;

    % error ellipse from the updated covariance
    n = P11;
    lm = eig(n);
    majmin = sqrt(lm);                            % semi-axis lengths
    phi = 0.5*atan2(2*n(1,2), n(1,1) - n(2,2));   % ellipse orientation
    a = majmin(1);
    b = majmin(2);
    timpo = 0:0.05:30;
    x = xhat1(1) + a*cos(timpo)*cos(phi) - b*sin(timpo)*sin(phi);
    y = xhat1(2) + a*cos(timpo)*sin(phi) + b*sin(timpo)*cos(phi);
    plot(x, y, 'k')
    hold on
end


MATLAB Code Implementing the Navigation Law

vG = 2;                 % velocity of the goal/target
vR = 4;                 % velocity of the robot
thetaG = pi/2;          % orientation angle of the goal
xG0 = 20;               % initial position of goal (x,y)
yG0 = 20;
xR0 = 0;                % initial position of robot (x,y)
yR0 = 0;

xG = xG0;
yG = yG0;
xR = xR0;
yR = yR0;

h = 0.25;               % discrete time interval between measurements
tf = 100;               % total number of steps
N = 3;
c = -25;

for time = 0:h:tf
    pause

    xG = vG*cos(thetaG)*h + xG0;    % new goal position
    yG = vG*sin(thetaG)*h + yG0;

    psi = atan2(yG-yR, xG-xR);
    thetaR = N*psi + c;            % linear proportional navigation law

    xR = vR*cos(thetaR)*h + xR0;   % new robot position
    yR = vR*sin(thetaR)*h + yR0;

    plot(xG, yG, '.')
    hold on
    plot(xR, yR, 'r.')

    % Current positions become the old values for the next step
    xG0 = xG;
    yG0 = yG;
    xR0 = xR;
    yR0 = yR;

end
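For readers without MATLAB, the same navigation law can be sketched in self-contained Python. This is an illustrative reimplementation, not the project's code; the function name, the capture threshold, and the choice N = 1, c = 0 (which reduces the law to pure pursuit) are assumptions made for the example.

```python
import math

def simulate_pn(vG=2.0, vR=4.0, thetaG=math.pi/2, N=1.0, c=0.0,
                goal=(20.0, 20.0), robot=(0.0, 0.0), h=0.25, steps=400):
    """Proportional navigation: the robot heading is N times the
    line-of-sight angle to the goal plus a deviation term c."""
    xG, yG = goal
    xR, yR = robot
    for _ in range(steps):
        # Goal moves with constant speed and heading
        xG += vG * math.cos(thetaG) * h
        yG += vG * math.sin(thetaG) * h
        psi = math.atan2(yG - yR, xG - xR)   # line-of-sight angle
        thetaR = N * psi + c                 # navigation law
        xR += vR * math.cos(thetaR) * h
        yR += vR * math.sin(thetaR) * h
        # Declare capture once the goal is within one robot step
        if math.hypot(xG - xR, yG - yR) < vR * h:
            return True, (xR, yR), (xG, yG)
    return False, (xR, yR), (xG, yG)

captured, robot_pos, goal_pos = simulate_pn()
```

Because the robot is faster than the goal (vR > vG), the range shrinks every step under pure pursuit, so the sketch terminates with a capture.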
MATLAB Code Implementing Kalman Filter for Path Planning

vr = 1;                  % velocity of the robot
vd = 0.9;                % velocity of the destination

tr(1) = pi;              % initial orientation of the robot

xr(1) = vr*cos(tr(1));   % initial X position of the robot
yr(1) = vr*sin(tr(1));   % initial Y position of the robot
wr(1) = vr*tr(1);

td(1) = pi;              % initial orientation of the destination
xd(1) = vd*cos(td(1));   % initial X position of the destination
yd(1) = vd*sin(td(1));   % initial Y position of the destination
wd(1) = vd*td(1);

tr(2) = pi;
xr(2) = vr*cos(tr(2));
yr(2) = vr*sin(tr(2));
wr(2) = vr*tr(2);

td(2) = pi;
xd(2) = vd*cos(td(2));
yd(2) = vd*sin(td(2));
wd(2) = vd*td(2);

X_R(:,1) = [xr(1); yr(1); wr(1)];
Y_R(:,1) = [xr(1); yr(1); wr(1)];

X_D(:,1) = [xd(1); yd(1); wd(1)];
Y_D(:,1) = [xd(1); yd(1); wd(1)];
X_R(:,2) = [xr(2); yr(2); wr(2)];
Y_R(:,2) = [xr(2); yr(2); wr(2)];

X_D(:,2) = [xd(2); yd(2); wd(2)];
Y_D(:,2) = [xd(2); yd(2); wd(2)];

X_C(:,:,1) = 0.84*eye(3);   % initial covariance constant of the robot
X_CD(:,:,1) = 0.74*eye(3);  % initial covariance constant of the destination

X_C(:,:,2) = 0.93*eye(3);   % initial covariance constant of the robot
X_CD(:,:,2) = 0.93*eye(3);  % initial covariance constant of the destination

X_W = 0.3*eye(3);           % measurement noise covariance

H = 0.492*eye(3);           % state transition matrix of the robot
HD = 0.5*eye(3);            % state transition matrix of the destination
arr = 1;

m = 1;
n = 1;

% Kalman filter implementation

for i = 2:1:50

    % Predict

    wd(i+1) = HD(3,3)*vd*td(i) + wd(i);
    td(i+1) = wd(i+1)/2;
    xd(i+1) = HD(1,1)*vd*cos(td(i+1)) + xd(i);
    yd(i+1) = HD(2,2)*vd*sin(td(i+1)) + yd(i);

    X_D(:,i+1) = [xd(i+1); yd(i+1); wd(i+1)];
    Y_D(:,i+1) = [xd(i+1); yd(i+1); wd(i+1)] + [0.2; 0.1; 0.1]; % observed/measured value for the moving destination

    C_D = [X_D(:,i-1) X_D(:,i) X_D(:,i+1)];
    CD(:,:,m) = cov(C_D);
    X_CD(:,:,arr) = cov(C_D);   % calculated covariance of the moving destination
    X_CD(:,:,arr) = (HD*X_CD(:,:,arr)*HD') + X_CD(:,:,arr);
    % predicted covariance of the moving destination

    tr(i+1) = td(i+1);
    wr(i+1) = 2*tr(i+1);
    xr(i+1) = H(1,1)*vr*cos(tr(i+1)) + xr(i);
    yr(i+1) = H(2,2)*vr*sin(tr(i+1)) + yr(i);

    X_R(:,i+1) = [xr(i+1); yr(i+1); wr(i+1)];
    Y_R(:,i+1) = [xr(i+1); yr(i+1); wr(i+1)] + [0.2; 0.1; 0.1];

    C_R = [X_R(:,i-1) X_R(:,i) X_R(:,i+1)];
    CR(:,:,n) = cov(C_R);
    X_C(:,:,arr) = cov(C_R);    % calculated covariance of the robot
    X_C(:,:,arr) = (H*X_C(:,:,arr)*H') + X_C(:,:,arr);
    % predicted covariance of the robot

    % Update: destination state
    X_D(:,i+1) = X_D(:,i+1) + ((X_CD(:,:,arr).*eye(3))*HD' / (HD*(X_CD(:,:,arr).*eye(3))*HD' + X_W)) * (Y_D(:,i+1) - HD*X_D(:,i+1));

    % Update: covariance of the moving destination
    X_CD(:,:,arr) = (X_CD(:,:,arr).*eye(3)) - ((X_CD(:,:,arr).*eye(3))*HD' / ((HD*(X_CD(:,:,arr).*eye(3))*HD') + X_W)) * HD*(X_CD(:,:,arr).*eye(3));

    % Update: robot state
    X_R(:,i+1) = X_R(:,i+1) + ((X_C(:,:,arr).*eye(3))*H' / (H*(X_C(:,:,arr).*eye(3))*H' + X_W)) * (Y_R(:,i+1) - H*X_R(:,i+1));

    % Update: covariance of the robot
    X_C(:,:,arr) = (X_C(:,:,arr).*eye(3)) - ((X_C(:,:,arr).*eye(3))*H' / ((H*(X_C(:,:,arr).*eye(3))*H') + X_W)) * H*(X_C(:,:,arr).*eye(3));

    m = m+1;
    n = n+1;
    arr = arr+1;
end

subplot(2,1,1)
for p = 1:1:arr-1          % arr-1 covariance matrices were stored
    surfc(X_CD(:,:,p));
    hold on
end
hold off

subplot(2,1,2)
plot(X_R(1,:), X_R(2,:), 'r');
hold on
plot(X_R(1,:), X_R(2,:), '+');
plot(X_D(1,:), X_D(2,:), 'g');
plot(X_D(1,:), X_D(2,:), '*');
plot(X_D(1,1), X_D(2,1), 'O');
plot(X_R(1,1), X_R(2,1), 'O');
hold off
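For comparison, the textbook predict/update cycle that the listing above approximates can be written compactly in Python with NumPy. This is an illustrative sketch, not the project's code: the function name `kf_step` and all numerical constants below are assumptions chosen for the example.

```python
import numpy as np

def kf_step(x, P, u, A, B, Q, H, R, z):
    """One predict/update cycle of the discrete Kalman filter."""
    # Predict
    x_pred = A @ x + B @ u               # x[n+1|n]
    P_pred = A @ P @ A.T + Q             # P[n+1|n]
    # Update
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = P_pred - K @ H @ P_pred      # covariance shrinks after the update
    return x_new, P_new

# Illustrative constants (not taken from the project):
A = np.array([[1.0, 0.5], [0.0, 1.0]])
B = np.array([[0.0], [0.5]])
Q = 0.1 * np.eye(2)
H = np.eye(2)
R = 0.5 * np.eye(2)
x = np.array([[2.0], [4.0]])
P = np.diag([1.0, 2.0])
z = np.array([[2.2], [3.9]])
x1, P1 = kf_step(x, P, np.array([[1.5]]), A, B, Q, H, R, z)
```

The subtraction in the covariance update is what makes the filter's uncertainty decrease whenever a measurement is incorporated; the predicted covariance A P Aᵀ + Q always has a larger trace than the updated one.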
REFERENCES

[1] Crowley James L., "Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot", International Workshop on Reasoning with Uncertainty in Robotics, University of Amsterdam, The Netherlands, December 4-6, 1995.

[2] Choset Howie, Lynch Kevin, Hutchinson Seth, Kantor George, Burgard Wolfram, Kavraki Lydia, Thrun Sebastian, Principles of Robot Motion: Theory, Algorithms, and Implementation, Chapter 8: Kalman Filtering, The MIT Press, Cambridge, Massachusetts, London, England, 2004.

[3] Kleinbauer Rachel, "Kalman Filtering Implementation in Matlab", study report in the field of study Geodesy and Geoinformatics at Universität Stuttgart, Helsinki, November 2004.

[4] Borenstein J., Everett H. R., Feng L., Navigating Mobile Robots: Systems and Techniques, Chapter 5: Odometry and Other Dead Reckoning Methods, A K Peters, Wellesley, MA, 1996.

[5] Fossen Thor I., Perez Tristan, "Kalman Filtering for Positioning and Heading Control of Ships and Offshore Rigs", IEEE Control Systems Magazine, December 2009: 33, 40-42.

[6] Jones Nicole, "Error Ellipses", Department of Geomatics, The University of Melbourne, Australia, 25 May 1999. Accessed February 2010. http://www.geom.unimelb.edu.au/nicole/surveynetworks/02a/notes09_01.html
[7] Bowditch Nathaniel, LL.D., "Dead Reckoning", The American Practical Navigator: An Epitome of Navigation, National Imagery and Mapping Agency, Bethesda, Maryland, 1995.

[8] Lucas G. W., "A Tutorial and Elementary Trajectory Model for the Differential Steering System of Robot Wheel Actuators", SourceForge, 2001. http://rossum.sourceforge.net/papers/DiffSteer/DiffSteer.html

[9] Thrun Sebastian, Burgard Wolfram, Fox Dieter, Probabilistic Robotics, Chapter 5: Robot Motion, The MIT Press, Cambridge, Massachusetts, London, England, 2006.

[10] Chong K. S., Kleeman L., "Accurate Odometry and Error Modelling for a Mobile Robot", IEEE International Conference on Robotics and Automation, Albuquerque, USA, April 1997, pp. 2783-2788.

[11] Herring Thomas, "12.215 Modern Navigation, Lecture 14", Massachusetts Institute of Technology, MA, 2009. Accessed March 2010. http://geoweb.mit.edu/~tah/12.215/12.215.Lec14.html

[12] Orderud F., "Comparison of Kalman Filter Estimation Approaches for State Space Models with Nonlinear Measurements", Norwegian University of Science and Technology, Proceedings of the Scandinavian Conference on Simulation and Modeling (SIMS 2005).