Chapter 11
Tutorial: The Kalman Filter
Tony Lacey.
11.1 Introduction
The Kalman filter [1] has long been regarded as the optimal solution to many tracking and data prediction
tasks [2]. Its use in the analysis of visual motion has been documented frequently. The standard Kalman
filter derivation is given here as a tutorial exercise in the practical use of some of the statistical techniques
outlined in previous sections. The filter is constructed as a mean squared error minimiser, but an alternative
derivation of the filter is also provided, showing how the filter relates to maximum likelihood statistics.
Documenting this derivation furnishes the reader with further insight into the statistical constructs within
the filter.
The purpose of filtering is to extract the required information from a signal, ignoring everything else.
How well a filter performs this task can be measured using a cost or loss function. Indeed we may define
the goal of the filter to be the minimisation of this loss function.
11.2 Mean squared error
Many signals can be described in the following way:

    y_k = a_k x_k + n_k                                (11.1)

where y_k is the time dependent observed signal, a_k is a gain term, x_k is the information bearing signal
and n_k is the additive noise.
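As a concrete sketch of the signal model in equation 11.1 (the gain, signal, and noise values below are invented purely for illustration), the observed signal can be simulated as:

```python
import numpy as np

rng = np.random.default_rng(0)

# All parameter values here are hypothetical, chosen only to illustrate
# the model y_k = a_k * x_k + n_k of equation 11.1.
n_steps = 200
a = np.ones(n_steps)                   # gain term a_k (unity here)
x = np.sin(0.05 * np.arange(n_steps))  # information bearing signal x_k
n = rng.normal(0.0, 0.2, n_steps)      # additive noise n_k

y = a * x + n                          # observed signal y_k
```

Only y would be available to a filter in practice; x is retained here solely so that the estimation error of the following sections can be measured against it.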
The overall objective is to estimate x_k. The difference between the estimate x̂_k and x_k itself is termed
the error:

    f(e_k) = f(x_k − x̂_k)                             (11.2)
The particular shape of f(e_k) is dependent upon the application, however it is clear that the function
should be both positive and increase monotonically [3]. An error function which exhibits these charac-
teristics is the squared error function:

    f(e_k) = (x_k − x̂_k)^2                            (11.3)
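A minimal sketch of the squared error function of equation 11.3, confirming numerically that it is positive and increases monotonically with the magnitude of the error (the function name is my own):

```python
import numpy as np

def squared_error(x, x_hat):
    # f(e_k) = (x_k - x_hat_k)^2, as in equation 11.3.
    return (x - x_hat) ** 2

# The squared error is never negative, is zero only for a perfect
# estimate, and grows with the magnitude of the error.
errors = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
f = squared_error(errors, 0.0)   # → array([4., 1., 0., 1., 4.])
```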
Since it is necessary to consider the ability of the filter to predict many data over a period of time, a more
meaningful metric is the expected value of the error function:

    loss function = E[f(e_k)]                          (11.4)
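In practice the expectation in equation 11.4 is approximated by a sample average over many time steps. A minimal sketch, with an invented noise level chosen so that the true expected squared error is exactly 0.25:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: estimates x_hat differ from the truth by zero-mean
# noise of standard deviation 0.5, so E[(x - x_hat)^2] = 0.25 exactly.
x_true = np.zeros(10_000)
x_hat = x_true + rng.normal(0.0, 0.5, x_true.size)

# Sample-mean approximation of the loss function E[f(e_k)] of
# equation 11.4, with f taken as the squared error of equation 11.3.
mse = np.mean((x_true - x_hat) ** 2)   # close to the true value 0.25
```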
This results in the mean squared error (MSE) function: