Tracking Filters for Radar Systems

Wing Ip Tam

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science, Graduate Department of Electrical and Computer Engineering, University of Toronto

© Copyright by Wing Ip Tam 1997


Tracking Filters for Radar Systems

Wing Ip Tam

Master of Applied Science, 1997

Department of Electrical and Computer Engineering, University of Toronto

Abstract

In this paper we discuss the problem of target tracking in Cartesian coordinates with polar measurements and propose two efficient tracking algorithms. The first algorithm uses the multidimensional Gauss-Hermite quadrature to evaluate the optimal estimate as a weighted sum of functional values. To reduce the computational requirements of this quadrature technique we suggest several ways to reduce the dimension, the number and the order of the quadratures required for a given accuracy. The second tracking algorithm is based on the Gaussian sum filter. To alleviate the computational burden associated with the Gaussian sum filter, we have found two efficient and systematic ways to approximate a non-Gaussian, measurement-dependent function by a weighted sum of Gaussian density functions; we have derived the formulas for updating the parameters involved in the bank of Kalman-type filters; and we have proposed new techniques to control the number of terms in the Gaussian mixture at each iteration. Simulation results show that these two proposed methods are more accurate than classical methods such as the extended Kalman filter (EKF).

Acknowledgements

I am especially grateful for the extensive and generous assistance of my supervisor, Professor Dimitrios Hatzinakos, who spent time with me each week to guide me through the thesis and made numerous helpful suggestions.

I wish to thank Kostas Plataniotis for his thorough reviews and perceptive comments, which greatly improved the quality of my thesis work. I also wish to thank Professor Pasupathy and Professor Kwong for their consultations.

I thank my friends William Ma, Erwin Pang, Jacky Yan, Richard Lee, Wilson Wong and Ryan Lee and his family for all the fun times.

Lastly, much credit must be given to the members of my family, who have supported me and have borne with me during this period.

Contents

1 Introduction

2 Problem Definition and Literature Survey
  2.1 Basic Model for Radar Systems
    2.1.1 Target Dynamics
    2.1.2 Radar Measurement
  2.2 The Bayesian Approach
  2.3 Sub-Optimal Nonlinear Filters for Radar Tracking

3 An Efficient Radar Tracking Algorithm Using Multidimensional Gauss-Hermite Quadratures
  3.1 Introduction
  3.2 Gauss-Hermite Quadrature
  3.3 Basic Principles of the Proposed Filter
  3.4 Complexity Analysis
  3.5 Filter Structure
  3.6 Simulation Results

4 Adaptive Gaussian Sum Algorithm for Radar Tracking
  4.1 Introduction
  4.2 Theoretical Foundations
  4.3 Basic Principles of the Proposed Filter

    4.3.1 Approximation of Densities
    4.3.2 Bank of Kalman Filters
    4.3.3 Growing Memory Problem

  4.4 Filter Structure
  4.5 Simulation Results

5 Conclusion

A Discretization of the Continuous-Time Equation of Motion

B Track Initialization

C Detailed Derivations of the Curve-fitting Approach

D Detailed Derivation of the Transformation Approach

E Reproducing Property of Gaussian Densities

F Bank of Kalman Filters

List of Figures

2.1 Flow Diagram for the Bayesian Estimator

3.1 With the factorization the order of the quadrature is reduced for a given accuracy
3.2 Flow Diagram of the Proposed Filter
3.3 Comparison of the position errors
3.4 Comparison of the velocity errors

4.1 Plots for the curve-fitting approach
4.2 Basic Principles of the Proposed Gaussian Mixture Approximation Method
4.3 Fundamentals of the proposed Gaussian sum approximation method
4.4 A. The function p(z_n|x_n) can be obtained through a nonlinear transformation from the density p(w_n). B. Similarly, some initial parameters from the statistics of the noise w_n can be transformed into a Gaussian sum approximation for the function p(z_n|x_n)
4.5 Plots of the original function p(z_n|x_n) and the approximation
4.6 Mixture before Reduction (50 components)
4.7 Mixture after Reduction (10 components)
4.8 Flow Diagram of the Proposed Gaussian Sum Filter
4.9 Comparison of the position errors for measurement noise level 1
4.10 Comparison of the velocity errors for measurement noise level 1
4.11 Comparison of the position errors for measurement noise level 2
4.12 Comparison of the velocity errors for measurement noise level 2
4.13 Superposition of 100 realizations of experiment based on Extended ...
4.14 Superposition of 100 realizations of experiment based on Extended Kalman Filter
4.15 Comparison of the position errors for 3-D tracking scenario
4.16 Comparison of the velocity errors for 3-D tracking scenario

C.1 Weighted Gaussian Function
C.2 Affine Transformation of the ellipsoidal base

List of Tables

4.1 Comparison of the accuracy and the efficiency of the proposed Gaussian sum approximation method and the classical method. Note that the classical method is based on the Marquardt algorithm and the number of Gaussian terms N is fixed to be 5 for both methods.
4.2 Comparison of the accuracy and the efficiency of the proposed Gaussian sum approximation method and the classical method. Note that the classical method is based on the Marquardt algorithm and the number of Gaussian terms N is fixed to be 5 for both methods.

Chapter 1

Introduction

For nearly three decades the target tracking-trajectory estimation problem has been a fruitful applications area for state estimation. It has found wide applications in both military and commercial areas [1, 2] such as inertial navigation, guidance and control; the global positioning system (GPS); the differential global positioning system (DGPS); the wide area augmentation system (WAAS); inertial navigation systems (INS); missile guidance systems; satellite orbit determination; maritime surveillance; air traffic control; freeway traffic systems; fire control systems; automobile navigation systems; fleet management; and underwater target tracking systems. Many problems have been solved, yet new and diversified applications still challenge engineers.

This paper addresses the problem of target tracking in Cartesian coordinates with polar measurements. In tracking applications the target motion is usually best modeled in a simple fashion using Cartesian coordinates. Unfortunately, in most radar systems the target position measurements are provided in polar coordinates (range and azimuth) with respect to the sensor location. Tracking in Cartesian coordinates using polar measurements is a problem of nonlinear estimation. A rigorous treatment of the nonlinear estimation problem requires the use of stochastic integrals and stochastic differential equations [3]. In this paper, we will adopt the formal manipulation of the white noise process and omit the rigorous derivations using Itô calculus. The nonlinear estimation problem is very challenging because the distribution of the state is generally non-Gaussian. Without the Gaussian property, efficient computation of the conditional mean is very difficult because the optimal (conditional mean) nonlinear estimator cannot be realized with a finite-dimensional implementation; consequently all practical nonlinear filters are suboptimal. These suboptimal nonlinear filters can be divided into two categories in general: the most popular technique uses a Taylor-series expansion to approximate the nonlinear system model, e.g. the extended Kalman filter (EKF); the other approximates the conditional probability density function in such a fashion that makes the computation of the conditional mean efficient, e.g. the Gaussian sum filter. Both approaches yield computationally feasible algorithms; however, each has shortcomings that must be addressed. The first approach is efficient but not accurate and often results in filter divergence, whereas the second approach yields very accurate results but its complexity is so high that it precludes practical real-time applications.

To improve the performance of the existing approaches, we propose two suboptimal tracking algorithms which are believed to be more accurate and efficient than the most widely used Taylor series methods, e.g. the EKF. The first method uses the multidimensional Gauss-Hermite quadrature to evaluate the optimal (conditional mean) estimate of the target states directly from the Bayesian equations. This quadrature technique replaces the integrals in the Bayesian equations with a weighted sum of samples of the integrand, and this approximation can be very accurate if the integrand is smooth. However, this method has a major drawback, namely its computational complexity: because of the dimension of the problem and the number and the order of the quadratures involved, the number of samples required is usually too large to be handled in real time. In this paper we suggest several ways to reduce the computational requirements of this quadrature technique by reducing the dimension, the number and the order of the quadratures required for a given accuracy. The second tracking algorithm is based on the Gaussian sum filter. The Gaussian sum approximation method can approximate any probability density function as closely as desired [4]. Moreover, the optimal estimate (conditional mean) can be computed in a simple manner as a weighted sum of the estimates from a bank of Kalman-type filters operating in parallel. However, this method has a very high complexity, since the number of Gaussian terms required in the approximation increases exponentially in time.
To alleviate the computational burden associated with the Gaussian sum filter, we have found two efficient and systematic ways to approximate a non-Gaussian, measurement-dependent function by a weighted sum of Gaussian density functions; we have derived the formulas for updating the parameters involved in the bank of Kalman-type filters; and we have suggested several ways to control the number of terms in the Gaussian mixture at each iteration. Simulation results show that these two proposed methods are more accurate than classical methods such as the EKF.

The rest of the paper is organized as follows: In chapter 2, mathematical models for target tracking applications and the existing tracking algorithms are discussed. In chapter 3, the tracking algorithm based on the multidimensional Gauss-Hermite quadrature is introduced. Motivation and implementation issues for this technique are discussed and simulation results are also presented in this chapter. In chapter 4, the tracking algorithm based on the Gaussian sum filter is discussed. The modified Gaussian sum approximation method is analyzed and several mixture reduction algorithms are presented in this chapter. Simulation results for this technique and the performance of the proposed tracking filter are described. Finally, chapter 5 summarizes our conclusions and future work.

Chapter 2

Problem Definition and Literature Survey

2.1 Basic Model for Radar Systems

The tracking problem is a state estimation problem, i.e. we assume the state of a target evolves in continuous time according to the equation

ẋ(t) = f(x(t), t) + v(t)    (2.1)

The state vector x(t) usually contains target position, velocity and sometimes acceleration as state variables, and its corresponding discrete measurement vector is given by

z_n = h(x_n) + w_n

where v(t) and w_n are the system and measurement noise processes respectively; they are assumed to be zero-mean white noise processes. The covariance of the system noise, Q, is selected to compensate for modelling errors (discrepancies between the model and the actual process). The covariance of the measurement noise process sequence, R_n, should also be chosen to represent all possible excursions such as measurement biases, false measurements, etc.

2.1.1 Target Dynamics

Constant Velocity (CV) Model For some targets such as airplanes, a constant velocity (CV) model is sufficient, i.e. the state vector contains six variables

x(t) = [x(t), ẋ(t), y(t), ẏ(t), z(t), ż(t)]^T

where (x(t), y(t), z(t)) are the coordinates of a Cartesian coordinate system used to describe the target dynamics. The state equations are

ẋ_i(t) = x_{i+1}(t)
ẋ_{i+1}(t) = v_i(t),    i = 1, 3, 5

where v_i(t) are the system noise terms used to characterize modelling errors.

Constant Acceleration (CA) Model If the target being tracked is maneuvering (accelerating), a constant acceleration (CA) model is sometimes used, i.e.,

x(t) = [x(t), ẋ(t), ẍ(t), y(t), ẏ(t), ÿ(t), z(t), ż(t), z̈(t)]^T

The state equations can be written accordingly, i.e.,

ẋ_i(t) = x_{i+1}(t)
ẋ_{i+1}(t) = x_{i+2}(t),    i = 1, 4, 7    (2.6)
ẋ_{i+2}(t) = v_i(t)

These two state equations are also referred to as the first- and second-order polynomial dynamics respectively.

Target with Sudden Maneuvers Targets with sudden maneuvers can be modelled as systems with abrupt changes [5]. We can modify equation (2.1) to become two sets of equations, one representing the premaneuver dynamics and the other incorporating the maneuver feature, where x_m(t) is the vector representing the maneuvering force and satisfies

x_m(t) = 0 for t < t_m, where t_m is the time the maneuver begins. f_m(·) is the maneuvering dynamics, and v_m(t) is the system noise for the maneuvering dynamics f_m(·). For targets with sudden maneuvers, t_m is unknown, and f_m(·) and v_m(t) may be unknown or partially known.

The target maneuver x_m(t) is correlated in time; namely, if a target is accelerating at time t, it is likely to be accelerating at time t + τ for sufficiently small τ. For example, a lazy turn will often give rise to correlated acceleration inputs for up to one minute, evasive maneuvers will provide correlated acceleration inputs for periods between ten and thirty seconds, and atmospheric turbulence may provide correlated acceleration inputs for one or two seconds. A typical representative model often used in airplane tracking is

ȧ(t) = −α a(t) + v_a(t)

where α is the correlation constant and v_a(t) is a noise process. Methods for selecting values for α and the statistics of v_a(t) can be found in [6].
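As an illustration (not from the thesis), the exponentially correlated acceleration model above can be discretized over a scan interval as a first-order autoregression; the parameter values below are hypothetical:

```python
import numpy as np

# Exponentially correlated acceleration: da/dt = -alpha * a(t) + v_a(t).
# Over a scan interval dt, the correlation decays by a factor exp(-alpha * dt).
alpha = 1.0 / 20.0        # correlation constant (1/s): evasive maneuver ~20 s
sigma_a = 2.0             # steady-state acceleration std (m/s^2), hypothetical
dt = 1.0                  # radar scan interval (s)

rho = np.exp(-alpha * dt)             # one-step correlation coefficient
q = sigma_a**2 * (1.0 - rho**2)       # driving-noise variance keeping Var(a) constant

rng = np.random.default_rng(0)
a = np.zeros(500)
for n in range(1, 500):
    # a_{n+1} = rho * a_n + discretized driving noise
    a[n] = rho * a[n - 1] + np.sqrt(q) * rng.standard_normal()
```

The sampled sequence keeps the stationary variance sigma_a**2 while its autocorrelation decays exponentially with lag, which is the behavior the continuous-time model describes.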

Discrete-Time Equations of Motion The above target equations of motion must first be discretized [7] into appropriate discrete-time equations for the application of suitable digital filters [6]. In this paper, we assume the motion of the target (such as aircraft and missiles) generally follows a straight-line constant-velocity (CV) trajectory. Turns, evasive maneuvers, and acceleration due to turbulence of the surrounding environment may be modeled as perturbations upon the constant-velocity trajectory. Assume the target is moving in a two-dimensional plane, i.e. the state vector x_n contains four variables¹

x_n = [x_n, v_x,n, y_n, v_y,n]^T

¹The discretization of the continuous-time system is presented in Appendix A.

where (x_n, y_n) are the target positions and (v_x,n, v_y,n) are the target velocities at time step n. The discrete-time target equations of motion are given as

x_{n+1} = x_n + ΔT v_x,n + (ΔT²/2) a_x,n
v_x,n+1 = v_x,n + ΔT a_x,n
y_{n+1} = y_n + ΔT v_y,n + (ΔT²/2) a_y,n
v_y,n+1 = v_y,n + ΔT a_y,n

where the target accelerations a_x,n and a_y,n are modeled as stationary Gaussian distributed random processes with zero mean and variance σ_a². It is assumed that the accelerations in the x and y directions are mutually statistically independent. ΔT is the radar scan interval. In matrix notation,

x_{n+1} = F x_n + G v_n    (2.12)

where v_n is the noise process sequence [a_x,n, a_y,n]^T with zero mean and covariance Q_n. F is called the transition matrix and G the noise gain matrix.
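A minimal NumPy sketch of the discrete-time CV model above, assuming the standard transition and noise-gain matrices for the state ordering [x, vx, y, vy]; the function name is mine:

```python
import numpy as np

def cv_model(dt, sigma_a):
    """Discrete-time constant-velocity model for the state [x, vx, y, vy]^T.

    F is the transition matrix, G the noise gain matrix, and Q the covariance
    of the process noise G v_n with v_n = [a_x, a_y]^T and
    Var(a_x) = Var(a_y) = sigma_a**2 (independent accelerations).
    """
    F = np.array([[1, dt, 0,  0],
                  [0,  1, 0,  0],
                  [0,  0, 1, dt],
                  [0,  0, 0,  1]], dtype=float)
    G = np.array([[dt**2 / 2, 0],
                  [dt,        0],
                  [0, dt**2 / 2],
                  [0,        dt]], dtype=float)
    Q = sigma_a**2 * (G @ G.T)          # covariance of G v_n
    return F, G, Q

F, G, Q = cv_model(dt=1.0, sigma_a=0.5)
x = np.array([0.0, 10.0, 0.0, 5.0])          # illustrative initial state
x_next = F @ x + G @ np.array([0.1, -0.2])   # one propagation step
```

One propagation step advances each position by velocity times ΔT plus the acceleration perturbation, exactly as in the component equations above.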

2.1.2 Radar Measurement

When a radar is used to measure the position of a target moving in the two-dimensional plane, the measurements are reported in range and bearing, which are polar coordinates. The polar coordinate measurement of the target position related to the Cartesian coordinate target state is denoted as

r_n = √(x_n² + y_n²) + w_r,n
θ_n = arctan(y_n / x_n) + w_θ,n

where r_n and θ_n are the measured range and bearing of the moving target and w_r,n and w_θ,n are the measurement noises of the range and the bearing; they are assumed to be zero-mean white Gaussian processes with variances σ_r² and σ_θ² respectively. Moreover, the range and the bearing measurements are assumed to be uncorrelated. In matrix notation:

z_n = h(x_n) + w_n    (2.14)

where z_n is the vector of polar coordinate measurements [r_n, θ_n]^T, h(·) is the Cartesian-to-polar coordinate transformation, and w_n is the observation noise process [w_r,n, w_θ,n]^T, which is assumed to be a zero-mean white Gaussian noise process with covariance matrix R_n.

The basic problem is to estimate the current state x_n based on the sequence of observations Z^n = {z_k}, k = 1, ..., n, the statistics of the noise inputs and the measurement and system models. Generally, one chooses the optimal estimate by extremizing some performance criterion such as the mean-square error. Regardless of the performance criterion, given the a posteriori density function p(x_n|Z^n) any type of estimate can be determined. Thus, the estimation problem can be approached as the problem of determining the a posteriori density. This approach is generally referred to as the Bayesian approach.

2.2 The Bayesian Approach

In this paper, we will choose the optimal estimate based on the minimum mean squared error (MMSE) criterion. In this case, the optimal estimate x̂_{n|n} at time n is just the mean of the state density function conditioned on the measurement history:

x̂_{n|n} = E[x_n | Z^n] = ∫ x_n p(x_n | Z^n) dx_n

Similarly, because the measurement z_n depends on the state x_n and a white observation noise sequence w_n,

p(z_n | x_n, Z^{n-1}) = p(z_n | x_n)

Utilizing the above property, the a posteriori density p(x_n|Z^n) can be determined recursively by the following Bayesian equations in two stages: prediction and update [9].


Figure 2.1: Flow Diagram for the Bayesian Estimator

Prediction: Suppose that the conditional density p(x_{n-1}|Z^{n-1}) at time n−1 is known. Then using the state equation (2.12) it is possible to obtain the conditional density of the state p(x_n|Z^{n-1}) at time n:

p(x_n | Z^{n-1}) = ∫ p(x_n | x_{n-1}) p(x_{n-1} | Z^{n-1}) dx_{n-1}    (2.18)

This equation is called the Chapman-Kolmogorov equation.

The probabilistic model of the state evolution, p(x_n|x_{n-1}), can be defined by the state equation and the known statistics of v_{n-1}:

p(x_n | x_{n-1}) = ∫ δ(x_n − F x_{n-1} − G v_{n-1}) p(v_{n-1}) dv_{n-1}

where δ(·) is the Dirac delta function. The delta function is used because if x_{n-1} and v_{n-1} are known, then x_n can be obtained deterministically. Since the noise sequence v_n is assumed to be a zero-mean white Gaussian random process with covariance matrix Q, and the system dynamics f(·) is linear, the density p(x_n|x_{n-1}) is also Gaussian, of the form

p(x_n | x_{n-1}) = N(x_n; F x_{n-1}, G Q G^T)

Update: The conditional density p(x_n|Z^n) can be written using Bayes' formula:

p(x_n | Z^n) = p(z_n | x_n) p(x_n | Z^{n-1}) / c_n    (2.22)

where the normalizing constant is given by

c_n = p(z_n | Z^{n-1}) = ∫ p(z_n | x_n) p(x_n | Z^{n-1}) dx_n    (2.23)

The conditional density of z_n given x_n, p(z_n|x_n), is defined by the measurement model (2.14) and the known statistics of w_n.

The observation noise sequence w_n is white Gaussian and the measurement equation is nonlinear; this density becomes

p(z_n | x_n) = N(z_n; h(x_n), R_n)

The initial conditional density p(x_0|Z^0) is given by p(x_0|Z^0) = p(x_0), where p(x_0) is usually assumed to be Gaussian.²
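The prediction and update stages can be illustrated with a toy point-mass (grid) implementation of the Bayesian recursion for a scalar linear-Gaussian problem; all names and numerical values below are illustrative, not from the thesis:

```python
import numpy as np

# Point-mass illustration of the Bayesian recursion on a 1-D grid:
# prediction is a convolution with the transition density p(x_n | x_{n-1}),
# update multiplies by the likelihood p(z_n | x_n) and renormalizes.
x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]

def gauss(u, mean, var):
    return np.exp(-(u - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

prior = gauss(x, 0.0, 4.0)              # p(x_{n-1} | Z^{n-1}), N(0, 4)
q_var, r_var = 1.0, 0.5                 # process / measurement noise variances

# Prediction (Chapman-Kolmogorov): here x_n = x_{n-1} + v_{n-1}
trans = gauss(x[:, None], x[None, :], q_var)   # p(x_n | x_{n-1}) on the grid
pred = trans @ prior * dx                      # p(x_n | Z^{n-1})

# Update (Bayes): measurement z_n = x_n + w_n, observed z = 1.2
z = 1.2
post = gauss(z, x, r_var) * pred
post /= post.sum() * dx                        # normalize

mean_post = np.sum(x * post) * dx              # conditional-mean estimate
```

For this linear-Gaussian case the grid estimate matches the closed-form Kalman answer, mean = (5/5.5)·1.2; for a nonlinear h the same two grid operations still apply, which is exactly why the global methods of section 2.3 work at all.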

This optimal (conditional mean) nonlinear filter cannot be realized with a finite-dimensional implementation. This is because the distribution of the state is generally non-Gaussian; thus, the integrations indicated in equations (2.18) and (2.23) are generally impossible to accomplish in closed form except when the measurement equation is linear (i.e. the measurement function h(x_n) is linear) and the statistics of the initial state and the noise sequences are all white Gaussian. In that case, equations (2.18) and (2.22) can be evaluated in closed form and the conditional density p(x_n|Z^n) reduces to a Gaussian density function. The propagation of its mean and covariance matrix is collectively known as the Kalman filter equations.

The Kalman Filter Assume the measurement equation is linear:

z_n = H_n x_n + w_n

where H_n is the measurement matrix and the noise process w_n is modeled as a zero-mean white Gaussian random process with covariance matrix R_n.

The linear-Gaussian assumption of the model leads to the preservation of the Gaussian property of the conditional density p(x_n|Z^n). Thus, the densities are completely characterized by their means and their covariance matrices, which can be obtained recursively in two stages: prediction and update [10].

Prediction

x̂_{n|n-1} = F x̂_{n-1|n-1}    (2.28)
P_{n|n-1} = F P_{n-1|n-1} F^T + G Q G^T    (2.29)

Update

x̂_{n|n} = x̂_{n|n-1} + K_n (z_n − H_n x̂_{n|n-1})
P_{n|n} = (I − K_n H_n) P_{n|n-1}

²The track is initialized from the first two measurements using the algorithm presented in Appendix B.

where

K_n = P_{n|n-1} H_n^T (H_n P_{n|n-1} H_n^T + R_n)^{-1}

is called the Kalman gain.
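The two stages above can be sketched as a generic linear Kalman filter; the symbols follow the text (F, Q, H, R, K) but the implementation is mine:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction: x_{n|n-1} = F x_{n-1|n-1},  P_{n|n-1} = F P F^T + Q."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Update with Kalman gain K = P H^T (H P H^T + R)^{-1}."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)             # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x_new, P_new
```

Here Q stands for the full process-noise covariance of the state (i.e. G Q_n G^T in the notation of the text). A usage example for a 2-state position/velocity model: predict with F = [[1, 1], [0, 1]], then update with a scalar position measurement.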

Most of the sub-optimal nonlinear filters are based on the linear Kalman filter equations, transforming the nonlinear measurement equation into a linear equation with additive white Gaussian noise, i.e. forcing the requirements of the Kalman filter equations to be satisfied. The next section presents several sub-optimal filters used in target tracking in Cartesian coordinates with noisy polar measurements.

2.3 Sub-Optimal Nonlinear Filters for Radar Tracking

The most widely used tracking filter is the extended Kalman filter (EKF) [10, 8], which employs the first-order Taylor series approximation to adapt the linear Kalman filter to the nonlinear system described by equations (2.12) and (2.14). Since the state is in Cartesian coordinates and the measurements are in polar coordinates, the measurement function h(x_n) is nonlinear and linearization of the measurement equation is required for the state and covariance update. The prediction stage of the EKF proceeds as in equations (2.28) and (2.29), but the update stage is described as follows:

x̂_{n|n} = x̂_{n|n-1} + K_n (z_n − h(x̂_{n|n-1}))
P_{n|n} = (I − K_n H_n) P_{n|n-1}

where

K_n = P_{n|n-1} H_n^T (H_n P_{n|n-1} H_n^T + R_n)^{-1}

is the Kalman gain and

H_n = ∂h/∂x evaluated at x = x̂_{n|n-1}

is the Jacobian of the nonlinear function h(x_n). Its accuracy however depends heavily on the stability of the Jacobian matrix. In practice, due to modelling error, the Jacobian matrix is often numerically unstable, resulting in filter divergence. This Taylor series expansion method can be extended further to include higher order terms in the expansion of the nonlinear function h(x_n) to seek a better approximation, such as the second-order filter [9]. It is however questionable whether higher-order approximations would improve performance in cases where the extended Kalman filter (EKF) diverges [9]. The crucial defect in these approximations is that they replace global properties of a function by its local properties (its derivatives). Clearly, if the true state lies outside the region in which the nonlinear system is accurately represented by its Taylor series, not only will the first-order approximation be inadequate but so will any higher-order approximation.
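A sketch of the EKF update for the range-bearing measurement function, assuming the four-state CV layout [x, vx, y, vy]; the bearing-wrapping step and all names are my additions:

```python
import numpy as np

def h(x):
    """Cartesian state [x, vx, y, vy] -> polar measurement [range, bearing]."""
    px, py = x[0], x[2]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])

def H_jacobian(x):
    """Jacobian of h evaluated at the predicted state (4-state CV layout)."""
    px, py = x[0], x[2]
    r = np.hypot(px, py)
    return np.array([[px / r,     0.0, py / r,    0.0],
                     [-py / r**2, 0.0, px / r**2, 0.0]])

def ekf_update(x_pred, P_pred, z, R):
    Hj = H_jacobian(x_pred)
    S = Hj @ P_pred @ Hj.T + R              # innovation covariance
    K = P_pred @ Hj.T @ np.linalg.inv(S)    # Kalman gain from the linearization
    innov = z - h(x_pred)
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residual
    x_new = x_pred + K @ innov
    P_new = (np.eye(4) - K @ Hj) @ P_pred
    return x_new, P_new
```

The Jacobian rows are the gradients of range and bearing with respect to the Cartesian position; note how both blow up as r shrinks, which is one concrete source of the numerical instability discussed above.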

There exist similar approaches to the EKF which utilize mixed-coordinate filters, i.e. the state is in Cartesian coordinates and the measurements are in polar coordinates. They all share the same filter structure; they differ only in the way their filter gains K_n are computed. One technique is the iterated extended Kalman filter [10]. It improves the accuracy of the extended Kalman filter (EKF) by repeatedly updating the estimate x̂_{n|n} and the Kalman gain K_n based on a first-order Taylor series expansion about the most recent estimate. Denote the i-th estimate of x̂_{n|n} by x̂_{n|n,i}, i = 0, 1, ..., N, with x̂_{n|n,0} = x̂_{n|n-1}. The following equation summarizes the iterated extended Kalman filter (IKF) measurement update:

x̂_{n|n,i+1} = x̂_{n|n-1} + K_{n,i} (z_n − h(x̂_{n|n,i}) − H_{n,i} (x̂_{n|n-1} − x̂_{n|n,i}))

Similar to the extended Kalman filter (EKF), its accuracy also depends heavily on the stability of the Jacobian matrix. If the Jacobian matrix is numerically unstable, it may result in slowed or even non-convergence of the estimates. A new nonlinear iterated filter based on Julier et al.'s discrete approximation and the Levenberg-Marquardt algorithm [11] has been shown to be more accurate and robust than the EKF and the IKF. There is also a "quasi-extended" Kalman filter [12], which shows improvements when tracking maneuvering targets at close range. Another technique, the modified gain extended Kalman filter (MGEKF) [13], has been shown to guarantee stability and exponential convergence for a special class of nonlinearities.

Another technique uses statistical linearization, as opposed to the Taylor series expansion [10], to approximate the nonlinear function h(x_n) such that the mean square error between the linear approximation and the nonlinear function is minimized. The computational requirements of this method are greater than those of the extended Kalman filter; however, the performance advantages offered by this method may make the additional computation worthwhile. There exist similar techniques based on various series expansions, such as the Edgeworth and Gram-Charlier expansions. In some cases, because of computational constraints, it may be impractical to compute the filter gain in real time. Under such conditions one must use either a set of precomputed filter gains or a constant gain filter. One popular constant gain filter is the α-β-γ filter (or α-β filter when using a CV model) [14, 15].

The extended Kalman filter (EKF) uses the polar coordinate measurements directly. An alternative approach transforms the polar coordinate measurements to the Cartesian coordinate system, and then the conventional Kalman filter is applied. This approach is generally called the converted measurement Kalman filter (CMKF). With the converted measurement Kalman filter [16], the polar coordinate measurement z_n^p is first converted to the Cartesian coordinate measurement z_n^c using the inverse transformation h^{-1}(z_n^p). The original noise process w_n acting on the converted measurement z_n^c no longer behaves rigorously as an additive term, but in some complicated fashion. However, at least when the covariance of the noise w_n is small, the new Cartesian coordinate measurement equation can be written as follows:

z_n^c = D x_n + w̄_n    (2.41)

where

D = [ 1 0 0 0
      0 0 1 0 ]

and w̄_n is approximated as a white Gaussian noise process acting on the converted measurement z_n^c with zero mean and covariance matrix M_n:

M_n = E[w̄_n w̄_n^T] = J_{h^{-1}} R_n J_{h^{-1}}^T    (2.42)

where J_{h^{-1}} is the Jacobian of the inverse transformation h^{-1}(·).

As a result, the new measurement equation in Cartesian coordinates becomes linear and the noise process is Gaussian, so the standard Kalman filter can be applied to the problem with the measurement equation defined in equation (2.41) and the statistics of the measurement noise given as in equation (2.42). This method however is an acceptable approximation only for moderate cross-range errors (the cross-range error is defined as the product of the range and the bearing angle). An improved method using debiased converted measurements [17] accounts for the sensor inaccuracies over all practical geometries and accuracies and provides consistent estimates (i.e. compatible with the filter-calculated covariances) for all practical situations. There is another technique [2] which preprocesses the transformed measurements to reduce measurement uncertainty and increase state estimation accuracy by explicitly employing the knowledge available about target motion and trajectory.
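The measurement conversion of the CMKF can be sketched as follows, assuming the small-error linearization of eq. (2.42); the function name and arguments are mine:

```python
import numpy as np

def convert_measurement(r, theta, sigma_r, sigma_theta):
    """Polar measurement -> Cartesian measurement plus approximate covariance.

    Uses z_c = h^{-1}(z_p) = [r cos(theta), r sin(theta)] and the small-error
    linearization M = J R J^T, with J the Jacobian of h^{-1} evaluated at the
    measured (r, theta).
    """
    zc = np.array([r * np.cos(theta), r * np.sin(theta)])
    J = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])
    R = np.diag([sigma_r**2, sigma_theta**2])
    M = J @ R @ J.T
    return zc, M

zc, M = convert_measurement(1000.0, np.pi / 4, 5.0, 0.01)
```

Note how the cross-range variance term grows as r² σ_θ², which is exactly why the approximation degrades for large cross-range errors, as stated above.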

So far we have discussed various algorithms based on first-order approximation techniques. For the rest of the section, we will present several well-known nonlinear algorithms based on global approximation techniques. These nonlinear algorithms are not commonly used for real-time applications, because the computation time and data storage requirements are too excessive. One global approach is to approximate the densities directly in such a manner that makes the integrations involved in the Bayesian equations (2.22) and (2.18) as tractable as possible. The simplest method is the point-mass method [18], where the densities are approximated by point masses located on a rectangular grid. As a result, the integrations in the Bayesian equations can be accomplished numerically as a discrete nonlinear convolution which requires O(N²) operations to compute, where N is the number of grid points. An improved method approximates the densities with a piecewise constant function [19]; this approximation is slightly more sophisticated than the point-mass method, but it converts the Bayesian equations to a discrete linear convolution which requires only O(N log N) operations. There is another technique which uses a Gaussian-sum approximation for the density [4]. In this case, the Bayesian equations become a bank of Kalman filter equations. However, the number of components in the approximating sum grows exponentially with time. More sophisticated interpolation schemes such as various spline approximations [20, 21, 22, 23], which require fewer grid points for a given accuracy, have also been studied. There exists an alternative technique, called the bootstrap filter [24], which represents and recursively propagates the required density function as a set of random samples, rather than as a function over state space. As the number of samples becomes very large, they effectively provide an exact, equivalent representation of the required density function.
However, the number of samples required to give a satisfactory representation of the densities for the filter operation is very difficult to determine. Another global approach is to approximate the integrations involved in the Bayesian equations directly. All such methods amount to replacing the integral with a weighted sum of samples of the integrand based on some particular quadrature formula [25, 26, 27]. The basic issue again is trading off computational complexity against the number of grid points required to achieve a desired accuracy.

Fuzzy logic and neural networks have been considered extensively for target tracking recently [28, 29]. A fuzzy-Kalman estimator [30], which uses a fuzzy system to estimate the current target maneuvering acceleration and a Kalman filter to estimate the target position, has been studied. The resulting estimator keeps the features of the Kalman filter but with improved accuracy if good estimates of the target maneuvering acceleration can be produced. Similarly, neurofuzzy estimators [31] are used to initialize the Kalman filter and extended Kalman filter. It is shown that filters which have been initialized with neurofuzzy estimates converge faster and give improved performance. Fuzzy logic has also been introduced into a conventional α-β constant gain filter [32] in an attempt to improve its tracking ability for maneuvering targets.

Chapter 3

An Efficient Radar Tracking Algorithm Using Multidimensional Gauss-Hermite Quadratures

3.1 Introduction

In this chapter, a new tracking algorithm based on the multidimensional Gauss-Hermite quadrature is presented. This quadrature technique evaluates the optimal (conditional mean) estimate by replacing the integrals involved in the Bayesian equations with a weighted sum of samples of the integrand, and this approximation can be very accurate if the integrand is smooth. However, this technique requires excessive computational time and data storage: because of the dimension, the number and the order of the quadratures involved, the number of samples required is usually too large to be handled in real time. In this chapter, we suggest several ways to reduce the computational requirements of this quadrature technique by reducing the dimension, the number and the order of the quadratures required for a given accuracy.

3.2 Gauss-Hermite Quadrature

Based upon the orthogonality property of the Hermite polynomials, one can develop the Gauss-Hermite quadrature [33].

where B_i are the weights and u_i are the roots of the Hermite polynomial of degree k; they are chosen so that the sum of the k appropriately weighted functional values yields the integral exactly when f(u) is a polynomial of degree 2k - 1 or less.
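As a quick illustration (not from the thesis), numpy's built-in Gauss-Hermite nodes and weights can be used to check this exactness property; the test function u^4 and the 5-point rule are chosen for demonstration only.

```python
import numpy as np

# Nodes and weights for the k-point Gauss-Hermite rule, which integrates
# exp(-u^2) * f(u) over the real line exactly when f is a polynomial of
# low enough degree.
k = 5
u, B = np.polynomial.hermite.hermgauss(k)

# Exact value of the integral of exp(-u^2) * u^4 is (3/4) * sqrt(pi).
approx = np.sum(B * u**4)
exact = 0.75 * np.sqrt(np.pi)
print(abs(approx - exact) < 1e-12)
```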

The above one-dimensional quadrature formula can be easily extended to the more general multi-dimensional quadrature formula [34]. Consider the evaluation of the following multidimensional integral

where x and x̂ are n-dimensional vectors and P is an n x n nonsingular matrix. If P is a positive definite diagonal matrix, then the multidimensional quadrature formula of Stroud [34] may be applied, which is obtained by successively applying the one-dimensional Gauss-Hermite formula. If P is not diagonal, an affine transformation of variables can make P diagonal. One approach is to find the eigenvector decomposition of the matrix P:

where D is a diagonal matrix of eigenvalues and V is a full matrix whose columns are the corresponding eigenvectors. The expression

achieves the diagonalization. A simpler approach is to find the square root W of P using the Cholesky algorithm [35] such that and the expression

also achieves diagonalization. Then the multidimensional integral in equation (3.2) can be evaluated numerically by applying the one-dimensional Gauss-Hermite quadrature formula to one variable at a time, and it becomes

where

where r_{i_1,...,i_N} are the N-dimensional grid points and B_{i_1,...,i_N} are the corresponding weights. B_{i_m} (m = 1,...,N) and u_{i_m} (m = 1,...,N) are the weights and the grid points for the one-dimensional Gauss-Hermite quadrature from equation (3.1). The grid r is called a floating grid and it is centered at the current mean x̂. The square-root approach has the advantage over the eigenvector approach in that the computation of the square root of a matrix is faster than the computation of the eigenvectors of a matrix.
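The floating-grid construction just described can be sketched in a few lines of numpy. This is an illustrative implementation, not the thesis's code: after computing the Cholesky factor W of P, the change of variables x = x̂ + sqrt(2) W u converts the Gaussian-weighted integral into the exp(-u^T u)-weighted form that the tensor-product Gauss-Hermite rule handles.

```python
import numpy as np
from itertools import product

def gauss_hermite_expectation(f, mean, cov, order=5):
    """Approximate E[f(x)] for x ~ N(mean, cov) with a tensor-product
    Gauss-Hermite rule on a floating grid centred at the mean."""
    n = len(mean)
    u1, b1 = np.polynomial.hermite.hermgauss(order)  # 1-D nodes/weights
    W = np.linalg.cholesky(cov)                      # W @ W.T == cov
    total = 0.0
    for idx in product(range(order), repeat=n):      # order**n grid points
        u = np.array([u1[i] for i in idx])
        w = np.prod([b1[i] for i in idx])
        x = mean + np.sqrt(2.0) * W @ u              # affine transformation
        total += w * f(x)
    return total / np.pi ** (n / 2.0)

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
# E[x0 * x1] = Cov(x0, x1) + mean0 * mean1 = 0.5 - 2.0 = -1.5
est = gauss_hermite_expectation(lambda x: x[0] * x[1], mean, cov)
print(round(est, 6))
```

Because the integrand here is a degree-2 polynomial, the rule reproduces the expectation exactly up to rounding.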

The quadrature formula in equation (3.7) involves the summation of k^N weighted functional values, where N is the dimension of the quadrature and k is the order of the quadrature. For high-dimensional problems it is desirable to keep the order of the quadrature small in order to reduce the complexity of the quadrature. However, the result will not be accurate if the function f(·) cannot be well approximated by a low-order algebraic expression. In this case, one may factor f(·) into two components: one nearly algebraic and the other Gaussian. The Gaussian component can then be combined with the original Gaussian weighting function to obtain a new Gaussian weighting function.

3.3 Basic Principles of the Proposed Filter

The central issue here is to compute the optimal (conditional mean) estimate based on the Bayesian equations accurately and efficiently using the multidimensional Gauss-Hermite quadrature technique. To reduce the number of quadratures required, analytic results are applied to the prediction stage (time update), and the application of quadrature techniques is restricted only to the update stage (measurement update), because the equation of target motion (2.12) is linear and the measurement equation

(2.14) is nonlinear. Assume at time step n that the mean x̂_{n-1|n-1} and the covariance matrix P_{n-1|n-1} of the conditional density p(x_{n-1}|Z^{n-1}) are known. Then we approximate the distribution of the state prediction p(x_n|Z^{n-1}) as a Gaussian distribution with mean x̂_{n|n-1} and covariance matrix P_{n|n-1}.

This algorithm rests on the assumption that the state prediction density is Gaussian at each step. This assumption is closely satisfied in many situations of practical interest, in particular if the state noise v_n is Gaussian [36].
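Since the motion model is linear, the analytic prediction step referred to here is the standard Kalman time update. Below is a minimal sketch with a hypothetical constant-velocity model; the matrices F and Q are illustrative stand-ins, not the thesis's equation (2.12).

```python
import numpy as np

def time_update(x_prev, P_prev, F, Q):
    """Analytic (Kalman) prediction step: with a linear motion model and
    Gaussian state noise, the predicted density stays Gaussian."""
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + Q
    return x_pred, P_pred

# Illustrative 1-D constant-velocity model, sampling interval T = 1 s.
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
x_pred, P_pred = time_update(np.array([0.0, 10.0]), np.eye(2), F, Q)
print(x_pred)  # position advances by velocity * T: [10. 10.]
```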

Then the conditional mean (optimal estimate) x̂_{n|n} and the conditional covariance P_{n|n} can be determined from the Bayesian equations as follows:

The above integrations generally cannot be accomplished in closed form because the densities of the state are non-Gaussian. To make the calculations of the conditional mean and the conditional covariance possible and efficient, we replace the integrals involved in the Bayesian equations with a weighted sum of samples of the integrand based on the Gauss-Hermite quadrature formula. This requires the evaluation of integrals of the form

with F_1(x_n) as the Gaussian weighting function, but the remaining expression

is not algebraic. To determine the value of the integral in equation (3.15) accurately with F_1(x_n) as the Gaussian weighting function would require a high-order quadrature formula. However, if the expression F_2(x_n) is factored into two expressions, one Gaussian, F_2^G(x_n), and the other nearly algebraic within the desired region, F_2^A(x_n), and a new weighting function determined by F_1(x_n) F_2^G(x_n) is used, then the same accuracy can be obtained with a lower-order integration formula (see Figure 3.1).

To determine F_2^G(x_n) and F_2^A(x_n), we use the linear approximation to the function h(x_n)

where the matrices H and H_0 can be determined either by first-order Taylor series expansion or by statistical linearization. The error of this linear approximation is then given by

Figure 3.1: With the factorization, the order of the quadrature is reduced for a given accuracy. For example, in A, if F_2(x) is the weighting function, the number of grid points required to cover the integrand F_1(x) completely is 7; however, if the integrand F_1(x) is factorized and the new weighting function is F_2(x) x F_2^G(x), then the number of grid points required is reduced to 4, as shown in B.

The expression F_2(x_n) can be rewritten as

where

As a result, the new weighting function determined by F(x_n) = F_1(x_n) F_2^G(x_n) becomes

where

These equations are exactly the linearized Kalman filter equations, and the constant term C_2 need not be evaluated as it will be canceled later.
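The linearized Kalman filter update used in this first pass can be sketched as follows; the range-bearing measurement function, its Jacobian (playing the role of the matrix H), and all numbers are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def linearized_update(x_pred, P_pred, z, h, H, R):
    """One linearized-Kalman measurement update for z = h(x) + w, w ~ N(0, R),
    with H the Jacobian of h evaluated at the prediction."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_upd = x_pred + K @ (z - h(x_pred))  # corrected mean
    P_upd = P_pred - K @ H @ P_pred       # corrected covariance
    return x_upd, P_upd

# Range-bearing measurement of a planar position (hypothetical numbers).
def h(x):
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def jacobian(x):
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r],
                     [-x[1] / r**2, x[0] / r**2]])

x_pred = np.array([1000.0, 1000.0])
P_pred = 100.0 * np.eye(2)
R = np.diag([10.0**2, 0.01**2])
z = h(np.array([1010.0, 990.0]))  # noiseless measurement of the true state
x_upd, P_upd = linearized_update(x_pred, P_pred, z, h, jacobian(x_pred), R)
```

With these numbers the updated mean moves toward the true state and the covariance shrinks, as expected of a Kalman correction.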

Applying the multidimensional Gauss-Hermite quadrature formula to the integral in equation (3.15), with F(x_n) in equation (3.20) as the new weighting function and A(x_n) F_2^A(x_n) in equation (3.19) as the new integrand, the integral in equation (3.15) becomes

where

W_n W_n^T = P_{n|n}   (3.25)

x̂_{n,i_1,...,i_N} = W_n u_{i_1,...,i_N} + x̂_{n|n}   (3.26)

u_{i_1,...,i_N} = [u_{i_1}, ..., u_{i_N}]^T   (3.27)

B_{i_1,...,i_N} = |W_n| B_{i_1} ··· B_{i_N}   (3.28)

where W_n is the square root of P_{n|n} from the Cholesky algorithm; x̂_{n,i_1,...,i_N} is an N-dimensional grid point constructed from N one-dimensional grid points u_{i_j} (j = 1,...,N), and B_{i_1,...,i_N} is the corresponding weight, which is just the product of N one-dimensional weights B_{i_j} (j = 1,...,N). Utilizing the results from equation (3.24), the conditional mean (optimal estimate) and the conditional covariance

3.4 Complexity Analysis

This tracking algorithm basically consists of two parts: the first part is based on the linearized Kalman filter to obtain a rough estimate of the conditional mean and the conditional covariance; the second part is based on the multidimensional Gauss-Hermite quadrature formula to compensate for any error introduced by the linear approximation.

The evaluation of the initial grid points u_{i_1,...,i_N} and their corresponding weights B_{i_1,...,i_N} in equations (3.27) and (3.28) requires the most computation; they are determined from a set of N one-dimensional grid points u_{i_j} and weights B_{i_j}. Fortunately, these initial N-dimensional grid points and weights can be computed off-line. Assume the dimension of the quadrature is N and the order of the quadrature is k. The computational requirements at each iteration are as follows:

1. the calculation of the square root W_n of P_{n|n} by the Cholesky algorithm (3.25) requires O(N^2) operations;

2. the affine transformation (3.26) requires N^2 scalar multiplications and N^2 scalar additions for each grid point u_{i_1,...,i_N};

3. the calculation of the conditional mean x̂_{n|n} and the conditional covariance P_{n|n} requires k^N evaluations of F_2^A(·), (N^2 + N + 1) k^N scalar multiplications and (N^2 + N + 1)(k^N - 1) scalar additions.

The whole algorithm requires an excessive number of additions and multiplications and a large amount of data storage.

3.5 Filter Structure

The block diagram of the proposed filter is presented in Figure 3.2. It consists of the following three stages:

Stage O: Previous estimates:

Assume at step n that the previous mean and covariance estimates x̂_{n-1|n-1} and P_{n-1|n-1} are known.

Figure 3.2: Flow Diagram of the Proposed Filter

Stage 1: Linearized Kalman filter: The new mean x̂_{n|n} and the new covariance P_{n|n} are first estimated by the linearized Kalman filter equations.

Stage 2: Error Compensation: To compensate for the errors introduced by the linear approximation in Stage 1, multidimensional Gauss-Hermite quadrature is used to evaluate the conditional mean x̂_{n|n} and the conditional covariance P_{n|n}.

3.6 Simulation Results

To compare the performance of our proposed filter with that of currently popular approximate filters, a two-dimensional target tracking application described by the system equation (2.12) and the measurement equation (2.14) with the following parameters is simulated. The following algorithms are used in the simulation: the extended Kalman filter (EKF), the converted measurement Kalman filter (CMKF) and the proposed quadrature-based filter. All these filters require an initial filtered estimate x̂_{0|0} and an initial error covariance P_{0|0}. They are estimated from the first two measurements using the algorithm presented in Appendix B. The dimension and the order of the quadrature we used are 4 and 16 respectively, i.e. the proposed filter has a computational complexity of order 16^4. The results presented in Figures 3.3 and 3.4 are based on 100 measurements averaged over 500 independent realizations of the experiment with a sampling interval of one second, and the initial state x_0 is chosen randomly for each realization of the experiment.

The proposed filter is compared with the well-known classical filters, e.g. the EKF and the converted measurement Kalman filter (CMKF). The position errors and the velocity errors for each filter are shown in Figs. 3.3 and 3.4, where the error is defined as the root mean square of the difference between the actual value and the estimated value. Our proposed method converges faster and yields results of smaller error than the EKF and the converted measurement Kalman filter (CMKF) do, whereas the EKF on average diverges due to the instability of the Jacobian matrix. Although this quadrature-based filter does yield more accurate estimates, the computations and the data storage it requires are still excessive. To reduce this computational burden, we must investigate some other means of representing the densities such that the optimal estimate can be computed in a simple and efficient fashion.

Figure 3.3: Comparison of the position errors


Figure 3.4: Comparison of the velocity errors

Chapter 4

Adaptive Gaussian Sum Algorithm for Radar Tracking

4.1 Introduction

Recall that it is generally impossible to accomplish the integration involved in the Bayesian equations in closed form when the measurement equation is nonlinear. This problem leads either to the numerical evaluation of the conditional densities or to the investigation of density approximations for which the required integration can be accomplished in a straightforward manner. In this chapter, the Gaussian sum approximation method is used to approximate the conditional densities, such as p(x_n|Z^n), such that the integrations indicated in equations (2.18) and (2.22) can be accomplished in closed form, i.e. the optimal estimate can be computed analytically. The Gaussian sum approximation method can approximate any probability density function as closely as desired. Moreover, it is comprised of a bank of Kalman-type filters operating in parallel, with each individual estimate possessing its own corresponding weighting term; hence, the optimal estimate can be computed efficiently as a weighted sum of the estimates from the Kalman-type filters.

4.2 Theoretical Foundations

Lemma: Given any probability density function p(x), it is possible to approximate it as closely as desired in the L_1(R^n) norm by a linear combination of Gaussian densities.

p_A(x) = Σ_{i=1}^{N} α_i N(x - m_i, B_i)   (4.1)

where

N(x - m_i, B_i) = (2π)^{-n/2} |B_i|^{-1/2} exp[-(1/2)(x - m_i)^T B_i^{-1} (x - m_i)]   (4.2)

and the weighting factors α_i have the constraints:

and m_i is a mean vector and B_i is a positive definite covariance matrix. For a proof of this result, see Sorenson and Alspach [4]. These properties ensure that the approximation is always positive and integrates to unity. This approximation is thus a true probability density function in its own right, and when this representation is used the Bayesian recursion relations can be solved exactly in closed form.
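As a quick numerical check of the two properties the lemma relies on (positivity and unit area), one can evaluate an arbitrary mixture on a fine grid; the weights, means and variances below are made up for illustration.

```python
import numpy as np

def gaussian(x, m, b):
    """1-D Gaussian density with mean m and variance b."""
    return np.exp(-0.5 * (x - m) ** 2 / b) / np.sqrt(2.0 * np.pi * b)

def mixture(x, alphas, means, variances):
    """Weighted sum of Gaussians; a valid density when the non-negative
    weights sum to one."""
    return sum(a * gaussian(x, m, b)
               for a, m, b in zip(alphas, means, variances))

# A 3-term mixture with made-up parameters (weights sum to one).
alphas, means, variances = [0.5, 0.3, 0.2], [-1.0, 1.0, 0.0], [0.3, 0.3, 2.0]
x = np.linspace(-12, 12, 20001)
p = mixture(x, alphas, means, variances)
dx = x[1] - x[0]
area = np.sum(p) * dx  # numerical area under the mixture
print(p.min() >= 0.0, abs(area - 1.0) < 1e-4)
```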

4.3 Basic Principles of the Proposed Filter

To evaluate the state prediction density p(x_n|Z^{n-1}) efficiently from equation (2.18), we will assume the conditional density p(x_{n-1}|Z^{n-1}) to be Gaussian with mean x̂_{n-1|n-1} and covariance matrix P_{n-1|n-1}. Based on this assumption, equation (2.18) can be evaluated in closed form and the resulting state prediction density p(x_n|Z^{n-1}) is a Gaussian density function with mean x̂_{n|n-1} and covariance matrix P_{n|n-1}. Even though the density p(z_n|x_n) is a Gaussian density function with mean h(x_n) and covariance matrix R_n, the integration involved in equation (2.22) cannot be accomplished in closed form, because the integration is done with respect to x_n, not z_n. The density p(z_n|x_n), when expressed in terms of x_n, is a non-Gaussian function. The word function is used instead of density function because p(z_n|x_n), when expressed in terms of x_n, has all the properties of a density function except that it does not have an area of unity, i.e. ∫ p(z_n|x_n) dx_n ≠ 1. Since the function p(z_n|x_n) is non-Gaussian, the integration involved in equation (2.22) cannot be accomplished analytically. The Gaussian sum approximation method is used to approximate this function such that the integration involved in the Bayesian equations can be accomplished in a simple manner, and the optimal state estimate x̂_{n|n} can be computed analytically and efficiently as a weighted sum of the estimates from a bank of Kalman-type filters.

4.3.1 Approximation of Densities

Since the function p(z_n|x_n) has all the properties of a density function except that it does not have an area of unity, the Gaussian sum approximation may be modified such that it can be used to approximate this function with a weighted sum of Gaussian density functions, where the sum of the weights is equal to the area of the function, instead of unity.

where Σ_{i=1}^{N} α_{n,i} = ∫ p(z_n|x_n) dx_n. It is, however, very difficult to choose the "best" parameters α_{n,i}, m_{n,i} and B_{n,i} efficiently, because the function p(z_n|x_n) depends on the current value of the measurement z_n.

Classical Optimization Approach

This approach selects the parameters B_{n,i}, m_{n,i} and α_{n,i} so that the L_k norm between the actual function p(z_n|x_n) and the Gaussian sum approximation p_GS(x_n) is minimized. As the number of terms N increases and as the covariance B_{n,i} decreases to zero, the norm between the actual density and the approximation must vanish. However, for finite N and a nonzero covariance matrix, it is reasonable to attempt to minimize an L_k norm such as this. In doing so, the stochastic estimation problem becomes a deterministic optimization problem.

To actually carry out this optimization procedure, we must obtain a set of equally spaced samples from the function p(z_n|x_n), and from these samples we determine the "best" sum of Gaussians [37]. Thus, the problem becomes to choose the parameters m_{n,i} and α_{n,i} such that

where {x_{n,j} : j = 1,...,K} is the set of uniformly spaced points and ε is the prescribed accuracy. Note that in this approximation the number of Gaussians used, N, also has to be determined, and it is desirable to find the smallest N possible.

Assume the dimension of the problem is m and the L_2 norm is used; then a Gaussian term determined by the parameters B_{n,i}, m_{n,i} and α_{n,i} has (m + 1)(m + 2)/2 unknowns. For a known number of Gaussian terms N, this optimization problem is equivalent to solving a system of N(m + 1)(m + 2)/2 nonlinear equations by the least-squares method using K >> N(m + 1)(m + 2)/2 data points. To solve a system of nonlinear equations, we can use the Marquardt algorithm [38]. In this algorithm, starting from an initial set of parameters, the parameters are iteratively modified until a local minimum is reached in the sum-of-squared error between the data and the approximating function. This approach is not very suitable for radar tracking applications, because the function p(z_n|x_n) we want to approximate depends on the current value of the measurement z_n. Thus, the optimization procedure indicated in equation (4.7) must be accomplished on-line for each new measurement, which is impossible because the dimension of the problem and the number of parameters involved are too large to be handled in real time. Consequently, we propose two efficient and yet accurate methods to approximate the function p(z_n|x_n).
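To make the sample-based fitting idea concrete, here is a deliberately simplified numpy sketch: instead of running the full Marquardt optimization over all parameters, it fixes the component means and variances on a grid and solves only for the weights by linear least squares. The target curve and every number in it are illustrative, not the thesis's data.

```python
import numpy as np

def gauss(x, m, b):
    """1-D Gaussian density with mean m and variance b."""
    return np.exp(-0.5 * (x - m) ** 2 / b) / np.sqrt(2.0 * np.pi * b)

# Target: a non-Gaussian (Laplace-shaped) curve sampled on a uniform grid,
# standing in for samples of p(z_n | x_n).  Illustrative numbers only.
x = np.linspace(-5, 5, 201)
target = 0.5 * np.exp(-np.abs(x))

# Fix the means and variances on a grid and solve only for the weights
# alpha_i by linear least squares (a simplification of the full nonlinear
# Marquardt optimization over all parameters).
means = np.linspace(-4, 4, 9)
var = 0.5
A = np.stack([gauss(x, m, var) for m in means], axis=1)  # design matrix
alpha, *_ = np.linalg.lstsq(A, target, rcond=None)
fit = A @ alpha
rms = np.sqrt(np.mean((fit - target) ** 2))
print(round(rms, 4))
```

Even this restricted fit tracks the heavy-tailed target closely, which illustrates why a small number of Gaussian terms can suffice.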

Curve-fitting Approach

Our method utilizes the special symmetric geometry of the function p(z_n|x_n) to select the parameters α_{n,i}, m_{n,i} and B_{n,i} to obtain a near-optimal Gaussian sum approximation for the function p(z_n|x_n). The function p(z_n|x_n) has a heavy-tailed structure and its projection onto the x_n-plane has a crescent shape. If we can fit this crescent shape with ellipses geometrically, then we can represent the function p(z_n|x_n) as a Gaussian mixture obtained from those ellipses, because the projection of a Gaussian function onto the x_n-space is an ellipse (see Figure 4.1).

Figure 4.1: (a) Plots of the function p(z_n|x_n) and (b) its projection

Figure 4.2: Basic Principles of the Proposed Gaussian Mixture Approximation Method

Step 0: Define the contour of the projection as shown in Figure 4.1b. Here we assume the region outside three standard deviations of the density p(z_n|x_n) has negligible probability; i.e.

This contour, when plotted in terms of z_n, is an ellipse, but when plotted in terms of x_n it has the shape shown in Figure 4.1b. Define Pt. 1 to be the measurement z_n; Line 1 to be the line joining Pt. 1 and the origin; and Line 2 to be the line perpendicular to Line 1 and passing through Pt. 1.

Step 1: Denote the four intersections of Contour C with Line 1 and Line 2 (see Figure 4.2a) as Pt. 1a-d. Then the first Gaussian term is defined to have an ellipsoidal base bounded by Pt. 1a-d and a height equal to the value of the function p(z_n|x_n) evaluated at Pt. 1. That is, the first Gaussian term can be defined by the following weight α_1, mean m_1 and covariance matrix B_1:¹

where

ᾱ_1 = p(z_n | x_n = m_1), k_1 = 9/(9 + 2 ln(2π |R_n|^{1/2} ᾱ_1))

σ_{1a} = |Pt. 1 - Pt. 1a| / 3

σ_{1b} = |Pt. 1 - Pt. 1b| / 3

Step 2: Define Arc A to be the arc r = r_n; Line 3 to be the line joining Pt. 1a and the origin; and Line 4 to be the line perpendicular to Line 3 and passing through Pt. 2, where Pt. 2 is defined as the intersection of Line 3 and Arc A. Repeat Step 1 with Line 1 replaced by Line 3, Line 2 by Line 4 and Pt. 1 by Pt. 2, and we obtain the second Gaussian term (see Figure 4.2b). The remaining Gaussian terms are obtained in a similar manner. The algorithm terminates either when the number of terms exceeds a certain prescribed value or when Pt. 2 is close to the tips of Contour C (see Figure 4.2d).
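The geometric construction above can be caricatured in code. The sketch below is a hypothetical, simplified stand-in for Steps 0-2: instead of intersecting lines with Contour C, it simply places the component centres along the measured range arc and spreads them over the bearing uncertainty. The spacing rule and all numbers are illustrative assumptions, not the thesis's derivation.

```python
import numpy as np

def crescent_mixture(z_r, z_th, sigma_r, sigma_th, n_terms=5):
    """Hypothetical simplified sketch of the curve-fitting idea: place
    Gaussian components along the arc r = z_r, each with radial std sigma_r
    and a tangential std chosen so the components jointly cover the
    crescent-shaped region."""
    # Spread the component centres over +/- 3 sigma_th of bearing.
    angles = z_th + np.linspace(-3.0, 3.0, n_terms) * sigma_th
    spacing = 6.0 * sigma_th / (n_terms - 1)
    comps = []
    for a in angles:
        mean = z_r * np.array([np.cos(a), np.sin(a)])
        # Covariance aligned with the radial/tangential directions at angle a.
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        C = R @ np.diag([sigma_r**2, (z_r * spacing / 2.0) ** 2]) @ R.T
        comps.append((1.0 / n_terms, mean, C))
    return comps

comps = crescent_mixture(z_r=10000.0, z_th=np.pi / 4, sigma_r=10.0, sigma_th=0.1)
# Every component centre lies exactly on the measured range circle.
for _, mean, _ in comps:
    print(round(np.hypot(mean[0], mean[1]), 6))
```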

Numerical Examples

We compare the accuracy and the efficiency of the proposed approximation method with the classical optimization method in approximating the function p(z_n|x_n) with a sum of Gaussian terms. We assume the noise w_n is a white Gaussian process sequence with zero mean and covariance matrix R_n = diag(σ_r², σ_θ²). We further assume the measurement z_n is given by r_n = 10 km and θ_n = π/4 rad. The comparison is based on three different scenarios: in Case 1, σ_r = 10 m and σ_θ = 0.1 rad; in Case 2, σ_r = 20 m and σ_θ = 0.2 rad; in Case 3, σ_r = 50 m and σ_θ = 0.25 rad. The simulation results are summarized in Table 4.1. They show that the proposed Gaussian sum approximation method is very efficient and yet yields accurate results comparable to the classical optimization method.

% error              Classical method    Proposed method
Case 1               1.21                4.66
Case 2
Case 3               1.76                4.78

processing time (s)  Classical method    Proposed method
Case 1               3.5641e3            1.59
Case 2
Case 3               3.4165e3            1.49

Table 4.1: Comparison of the accuracy and the efficiency of the proposed Gaussian sum approximation method and the classical method. Note that the classical method is based on the Marquardt algorithm and the number of Gaussian terms N is fixed at 5 for both methods.

¹The detailed derivation of this curve-fitting approach is presented in Appendix C.

Transformation Approach

Recall that the function p(z_n|x_n) can be defined by the measurement equation (2.14) and the known statistics of w_n, where δ(·) is the Dirac delta function. The delta function is used because if x_n and w_n are known, then z_n can be obtained deterministically. Thus, the function p(z_n|x_n) can be obtained by applying the transformation w_n = z_n - h(x_n) to the density function p_w(w_n) (see Figures 4.4A and 4.3A). Utilizing this observation, we select some initial parameters m_{n,i} and B_{n,i} from the known statistics of the noise w_n, transform these parameters from the w_n-space to the x_n-space based on the transformation w_n = z_n - h(x_n), and finally collect them as a Gaussian sum approximation for the function p(z_n|x_n) (see Figures 4.4B and 4.3B). This approach is more efficient than the classical approach because most of the computation is involved in the initialization, which can be done off-line; the only computation required at each iteration is for updating the initial parameters.
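A hedged sketch of this transformation idea for a range-bearing measurement: each component mean selected in w_n-space is carried to x_n-space through the inverse measurement map, with the covariance propagated through the (linearized) Jacobian of that map. The function name, numbers, and the linearized covariance treatment are illustrative simplifications, not the derivation in Appendix D.

```python
import numpy as np

def transform_components(z, w_means, w_covs):
    """Map Gaussian components from w-space to x-space for a range-bearing
    measurement z = [r, theta]: each mean is pushed through x = h^{-1}(z - w)
    and each covariance through the Jacobian of the inverse map (linearized,
    illustrative treatment)."""
    comps = []
    for mw, Cw in zip(w_means, w_covs):
        r, th = z - mw  # polar coordinates of the transformed centre
        mean = np.array([r * np.cos(th), r * np.sin(th)])
        J = np.array([[np.cos(th), -r * np.sin(th)],
                      [np.sin(th),  r * np.cos(th)]])  # d(x)/d(r, theta)
        comps.append((mean, J @ Cw @ J.T))
    return comps

# Illustrative measurement and two w-space components.
z = np.array([1000.0, np.pi / 4])
w_means = [np.zeros(2), np.array([5.0, 0.02])]
w_covs = [np.diag([10.0**2, 0.01**2])] * 2
comps = transform_components(z, w_means, w_covs)
```

The zero-mean component maps exactly onto the measured position, while offset components land elsewhere on the crescent, which is how the mixture comes to cover the non-Gaussian shape.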

Figure 4.3: Fundamentals of the proposed Gaussian sum approximation method

Step 0: For initialization, select the parameters m_{n,i} and B_{n,i} for a prescribed value of N such that the following sum-of-squared error is minimized.

where {w_{n,j} : j = 1,...,K} is the set of uniformly spaced points distributed through the region containing non-negligible probability and ε is the prescribed accuracy.

Figure 4.4: A. The function p(z_n|x_n) can be obtained through a nonlinear transformation from the density p(w_n). B. Similarly, some initial parameters from the statistics of the noise w_n can be transformed into a Gaussian sum approximation for the function p(z_n|x_n)

Step 1: For each new measurement z_n, update the new parameters α_{n,i}, m_{n,i} and B_{n,i} such that

where

m_{n,i} = z_n - m̄_{n,i}

²The detailed derivation of this transformation approach is presented in Appendix D.

Numerical Examples

We compare the accuracy and the efficiency of the proposed approximation method with the classical optimization method in approximating the function p(z_n|x_n) with a sum of Gaussian terms. We assume the noise w_n is a white Gaussian process sequence with zero mean and covariance matrix R_n = diag(σ_r², σ_θ²), where σ_r = 10 m and σ_θ = 0.1 rad. We further assume the current measurement z_n = [r_n, θ_n]^T is known, where r_n = 1 km and θ_n = π/4 rad.

(a) Original function p(z_n|x_n) (b) Gaussian sum approximation

Figure 4.5: Plots of the original function p(z_n|x_n) and the approximation

The simulation results are summarized in Figure 4.5 and Table 4.2. They show that the proposed Gaussian sum approximation method is very efficient and yet yields accurate results comparable to the classical optimization method.

                     Classical method    Proposed method
% error              1.21                2.23
processing time (s)  3.5641e3            0.17

Table 4.2: Comparison of the accuracy and the efficiency of the proposed Gaussian sum approximation method and the classical method. Note that the classical method is based on the Marquardt algorithm and the number of Gaussian terms N is fixed at 5 for both methods.

4.3.2 Bank of Kalman Filters

Using either the curve-fitting approach or the transformation approach, we end up with a Gaussian sum approximation for the density p(z_n|x_n). If we substitute this approximation into equation (2.22) and complete the squares,³ the a posteriori density p(x_n|Z^n) becomes

where

³The detailed derivation is presented in Appendix E.

4.3.3 Growing Memory Problem

Inherent in the Gaussian sum algorithm is a serious growing memory problem that causes the number of terms in the Gaussian sum to increase exponentially at each iteration. In this section we present two approaches to alleviate this problem.

Equivalent Gaussian Approach

For simple applications the Gaussian sum approximation of the density p(x_n|Z^n) is collapsed into one equivalent Gaussian term [39]. Consequently, the total number of Gaussian terms is fixed at N. The mean and the covariance matrix of the equivalent Gaussian term, denoted x̂_{n|n} and P_{n|n} respectively, are given as follows:
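These moment-matching formulas can be sketched as follows; this is a generic collapse of a weighted Gaussian mixture into a single Gaussian, with illustrative numbers.

```python
import numpy as np

def collapse(weights, means, covs):
    """Moment-matched collapse of a Gaussian mixture into one Gaussian:
    the mean is the weighted mean, and the covariance adds the spread of
    the component means to the weighted component covariances."""
    w = np.asarray(weights) / np.sum(weights)
    means = np.asarray(means)
    x = np.sum(w[:, None] * means, axis=0)
    P = np.zeros_like(covs[0])
    for wi, mi, Pi in zip(w, means, covs):
        d = (mi - x)[:, None]
        P += wi * (Pi + d @ d.T)
    return x, P

x, P = collapse([0.5, 0.5],
                [np.array([-1.0, 0.0]), np.array([1.0, 0.0])],
                [np.eye(2), np.eye(2)])
print(x, P[0, 0])  # mean [0, 0]; variance 1 (within) + 1 (between) = 2
```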

This method is simple and elegant and it usually yields very accurate estimates, because if the a posteriori density p(x_n|Z^n) can be approximated very closely by a sum of Gaussian density functions, then its mean can be estimated very accurately as a weighted sum of the means of those Gaussian density functions. Furthermore, to control the number of terms in the approximation we collapse the Gaussian mixture into one equivalent Gaussian term at the end of each iteration. However, by doing so we actually collapse the non-Gaussian nature of the a posteriori density p(x_n|Z^n) into a Gaussian density function, which of course can seriously degrade the performance of the filter. Hence, it would be desirable to reduce the number of terms in the approximation while at the same time retaining the general structure of the distribution of the a posteriori density p(x_n|Z^n). This approach is called the mixture reduction approach.

Mixture Reduction Approach

For complex applications the number of Gaussian terms in the approximation is reduced to some specified limit M, for M < N. As a result, the total number of Gaussian terms is fixed at N x M.⁴ This approach results in another Gaussian sum approximation with fewer terms, without modifying the structure of the distribution beyond some acceptable limit, and it should preserve the mean and the covariance matrix of the original Gaussian sum. Suppose that the number of terms in the original Gaussian sum approximation is reduced by merging several terms together. If c is the set of Gaussian terms to be merged, then in order to preserve the mean and covariance matrix of the original Gaussian sum, the weight γ̄, the mean x̄_{n|n} and the covariance matrix P̄_{n|n} should be chosen [40] as follows:

Ideally the partition of the Gaussian terms for merging should be such that some cost function is minimized. However, to reduce the Gaussian sum from N terms to M terms, this could involve the evaluation of the criterion for every possible partition to identify the minimum. Such a procedure would be far too time-consuming; so a suboptimal approach has been adopted, in which a pair of Gaussian terms is merged at every iteration of the algorithm to minimize some chosen criterion.

The Joining Algorithm. This algorithm uses the Mahalanobis distance [40] as the cost function

d_{ij} = (γ_i γ_j / (γ_i + γ_j)) (x̄_i - x̄_j)^T (P_i + P_j)^{-1} (x̄_i - x̄_j)

where γ_i, x̄_i and P_i are the weight, the mean and the covariance matrix of the ith Gaussian term, and γ_j, x̄_j and P_j are the weight, the mean and the covariance matrix of the jth Gaussian term. At each iteration, the two Gaussian terms which are closest in the sense of the Mahalanobis distance are combined to form a new Gaussian defined by equations (4.30-4.32). To implement this algorithm, a symmetric matrix containing the distance between every pair of Gaussian terms in the original Gaussian sum is evaluated. At each iteration, the smallest element of the matrix is found and the corresponding pair of components is merged. This merging proceeds until the minimum distance exceeds some specified threshold. This algorithm has a complexity of order N², where N is the number of components in the original Gaussian sum. It can be further simplified by merging the terms with the smallest and the second smallest weights at each iteration until the number of remaining terms has been reduced below the specified value M. This method works because the terms with the smallest and the second smallest weights carry the least important information; thus, merging them will not modify the original mixture very much. This simplified joining algorithm has a complexity of order N.

⁴When this mixture reduction approach is used, the Gaussian sum filter equations have to be modified. The modification is presented in Appendix F.
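A compact sketch of the joining idea: repeatedly find the pair with the smallest weighted Mahalanobis-type distance and replace it with a moment-preserving merge. The component list and the stop-at-M rule (rather than a distance threshold) are illustrative choices for the demonstration.

```python
import numpy as np

def merge_pair(g1, g2):
    """Moment-preserving merge of two weighted Gaussians (w, mean, cov)."""
    (w1, m1, P1), (w2, m2, P2) = g1, g2
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = (m1 - m)[:, None], (m2 - m)[:, None]
    P = (w1 * (P1 + d1 @ d1.T) + w2 * (P2 + d2 @ d2.T)) / w
    return (w, m, P)

def join(components, M):
    """Joining-style reduction: merge the closest pair (by the weighted
    Mahalanobis-type cost) until only M components remain."""
    comps = list(components)
    while len(comps) > M:
        best, pair = np.inf, None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                wi, mi, Pi = comps[i]
                wj, mj, Pj = comps[j]
                d = mi - mj
                cost = (wi * wj / (wi + wj)) * d @ np.linalg.inv(Pi + Pj) @ d
                if cost < best:
                    best, pair = cost, (i, j)
        i, j = pair
        merged = merge_pair(comps[i], comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

# Three components, two of them nearly coincident (illustrative numbers).
comps = [(0.4, np.array([0.0, 0.0]), np.eye(2)),
         (0.4, np.array([0.1, 0.0]), np.eye(2)),
         (0.2, np.array([5.0, 5.0]), np.eye(2))]
reduced = join(comps, 2)
```

As expected, the two overlapping components are merged first, while the distant component survives intact.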

The Clustering Algorithm. This algorithm is based on the K-means clustering algorithm [41], which partitions the original Gaussian mixture into M clusters such that the Mahalanobis distance between the Gaussian terms and each cluster center is minimized. Each cluster is then approximated by a single Gaussian defined by equations (4.30-4.32).

Modified K-means Clustering Algorithm

- Choose randomly M of the Gaussian terms as the centers of the clusters c_1, ..., c_M.
- While the cluster centers continue to change:
  - Use the cluster centers to classify the Gaussian terms into clusters by evaluating an N x M matrix D with the entry d_{ij} defined as the Mahalanobis distance between the ith Gaussian term and the jth cluster center c_j. To classify the ith Gaussian term, for instance, find the smallest element in the ith row of the matrix D; the column that element belongs to corresponds to the cluster to which that Gaussian term should be assigned.
  - Update the new cluster centers by collapsing each cluster into one single Gaussian using equations (4.30-4.32).

One drawback of this algorithm is that the results produced depend on the initial values of the cluster centers, and it frequently happens that suboptimal partitions are found.
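For concreteness, here is an illustrative numpy sketch of this K-means-style reduction; the random mixture, the pairwise-distance handling, and the fixed iteration cap are assumptions made for the demonstration, not the thesis's implementation.

```python
import numpy as np

def collapse(cluster):
    """Moment-matched collapse of a list of (weight, mean, cov) terms."""
    w = sum(c[0] for c in cluster)
    m = sum(c[0] * c[1] for c in cluster) / w
    P = sum(c[0] * (c[2] + np.outer(c[1] - m, c[1] - m)) for c in cluster) / w
    return (w, m, P)

def cluster_reduce(components, M, iters=20, seed=0):
    """K-means-style reduction: pick M components as initial cluster centres,
    assign every term to the nearest centre by a Mahalanobis-type distance,
    and collapse each cluster into one Gaussian."""
    rng = np.random.default_rng(seed)
    centres = [components[i]
               for i in rng.choice(len(components), M, replace=False)]
    for _ in range(iters):
        clusters = [[] for _ in range(M)]
        for comp in components:
            dists = [(comp[1] - c[1]) @ np.linalg.inv(comp[2] + c[2])
                     @ (comp[1] - c[1]) for c in centres]
            clusters[int(np.argmin(dists))].append(comp)
        new_centres = [collapse(cl) if cl else centres[k]
                       for k, cl in enumerate(clusters)]
        if all(np.allclose(a[1], b[1]) for a, b in zip(centres, new_centres)):
            break  # centres stopped moving
        centres = new_centres
    return centres

# Reduce a 50-component random mixture to 10 components.
rng = np.random.default_rng(1)
comps = [(1.0 / 50, rng.normal(size=2), np.eye(2)) for _ in range(50)]
reduced = cluster_reduce(comps, M=10)
```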

Numerical Examples. To demonstrate this clustering algorithm we generate a Gaussian mixture with 50 randomly chosen components and then apply the algorithm to reduce the number of components to 10. Figure 4.6 shows the original Gaussian mixture and Figure 4.7 shows the Gaussian mixture after applying the clustering algorithm.

Figure 4.6: Mixture before reduction (50 components). Figure 4.7: Mixture after reduction (10 components)

4.4 Filter Structure

The block diagram of the proposed Gaussian sum filter is presented in Figure 4.8. It consists of four stages:

Gaussian Sum Approximation

Figure 4.8: Flow Diagram of the Proposed Gaussian Sum Filter

Stage 0: Previous estimate. Assume at time n the Gaussian mixture approximation of the conditional density p(x_{n-1}|Z^{n-1}) is known.

Stage 1: Gaussian Sum Approximation. The density p(z_n|x_n) is approximated systematically by a weighted sum of Gaussian terms, either by the curve-fitting approach or by the transformation approach.

Stage 2: Bank of Kalman Filters. The Gaussian mixtures from Stage 0 and Stage 1 are passed to a bank of N x M Kalman filters which evaluate the new Gaussian mixture for the density p(x_n|Z^n).

Stage 3: Mixture Reduction. To control the number of Gaussian terms, the density p(x_n|Z^n) is either collapsed into one equivalent Gaussian term or reduced to M Gaussian terms by the joining algorithm or the clustering algorithm.

4.5 Simulation Results

To compare the performance of our proposed filter with that of currently popular approximate filters, two different scenarios are considered.

Scenario 1:

A two-dimensional target tracking application described by the system equation (2.12) and the measurement equation (2.14) with the following parameters and two different noise levels is simulated.

The following algorithms are used in the simulation: the extended Kalman filter (EKF), the converted measurement Kalman filter (CMKF) and the proposed adaptive Gaussian sum filter (AGSF), which uses the transformation approach in the Gaussian sum approximation and the equivalent Gaussian approach to control the number of terms in the approximations. All these filters require an initial filtered estimate x̂_{0|0} and an initial error covariance P_{0|0}. They are estimated from the first two measurements using the algorithm presented in Appendix B. The number of Gaussian terms used in the approximation is 9, and they are collapsed into one equivalent Gaussian at the end of each iteration. The results presented in Figures 4.9-4.12 are based on 100 measurements averaged over 500 independent realizations of the experiment with a sampling interval of one second and two different measurement noise levels, and the initial state x_0 is chosen randomly for each realization of the experiment.

The proposed filter is compared with the well-known classical filters, e.g. the extended Kalman filter (EKF) and the converted measurement Kalman filter (CMKF). The position errors and the velocity errors for each filter at each measurement noise level are shown in Figs. 4.9, 4.10, 4.11 and 4.12, where the error is defined as the root mean square of the difference between the actual value and the estimated value. Our proposed method converges faster and yields estimates of smaller error than the EKF and the converted measurement Kalman filter (CMKF) do in all cases. Note that the extended Kalman filter (EKF) diverges in only some of the cases due to the instability of the Jacobian matrix, as shown in Figures 4.13 and 4.14.

Scenario 2:

A three-dimensional target tracking application described by the system equation (2.12) and the measurement equation (2.14) with the following parameters is simulated.

xn+~ =

The following algorithms are used in the simulation: the extended Kalman filter (EKF), the converted measurement Kalman filter (CMKF) and the proposed adaptive Gaussian sum filter (AGSF), which uses the transformation approach in the Gaussian sum approximation and the mixture reduction approach to control the number of terms in the approximations. All these filters require an initial filtered estimate x̂_{0|0} and an initial error covariance P_{0|0}. They are estimated from the first two measurements using the algorithm presented in Appendix B. The number of Gaussian terms used in the approximation is 49, and they are reduced to 25 equivalent Gaussian terms at the end of each iteration. The results presented in Figures 4.15 and 4.16 are based on 100 measurements averaged over 10 independent realizations of the experiment, with a sampling interval of one second; the initial state x_0 is chosen randomly for each realization of the experiment.

The proposed filter is compared with the well-known classical filters, e.g. the extended Kalman filter (EKF) and the converted measurement Kalman filter (CMKF). The position errors and the velocity errors for each filter are shown in Figs. 4.15 and 4.16, where the error is defined as the root mean square of the difference between the actual value and the estimated value. Our proposed method clearly outperforms the others by converging faster and yielding estimates of smaller error. Note that both the extended Kalman filter (EKF) and the converted measurement Kalman filter (CMKF) diverge due to the highly nonlinear nature of the problem, whereas our proposed adaptive Gaussian sum filter (AGSF) converges because it is able to retain the non-Gaussian nature of the a posteriori density p(x_n|Z^n) at each iteration.

Figure 4.9: Comparison of the position errors for measurement noise level 1

Figure 4.10: Comparison of the velocity errors for measurement noise level 1

Figure 4.11: Comparison of the position errors for measurement noise level 2

Figure 4.12: Comparison of the velocity errors for measurement noise level 2

Figure 4.13: Superposition of 100 realizations of the experiment based on the Extended Kalman Filter

Figure 4.14: Superposition of 100 realizations of the experiment based on the Extended Kalman Filter

Figure 4.15: Comparison of the position errors for the 3-D tracking scenario

Figure 4.16: Comparison of the velocity errors for the 3-D tracking scenario

Chapter 5

Conclusion

In this paper we have discussed the problem of target tracking in Cartesian coordinates with polar measurements. Two suboptimal tracking algorithms, which are believed to be more accurate and efficient than the most widely used Taylor series methods, e.g. the EKF, are introduced. The first method uses the multidimensional Gauss-Hermite quadrature to evaluate the optimal (conditional mean) estimate of the target states directly from the Bayesian equations. This quadrature technique replaces the integrals in the Bayesian equations with a weighted sum of samples of the integrand, and this approximation can be very accurate if the integrand is smooth. However, this method has a major drawback, namely its computational complexity: because of the dimension of the problem and the number and order of the quadratures, the number of samples involved is usually too large to be handled in real time. In this paper we have suggested several ways to reduce the computational requirements of this quadrature technique by reducing the dimension, the number and the order of the quadratures required for a given accuracy. To reduce the number of quadratures required, analytic results are applied to the prediction stage and the application of quadrature techniques is restricted to the update stage, because only the measurement equation is nonlinear. To reduce the order of quadrature, the integrand is factored into two components: one is Gaussian and the other is nearly algebraic within the desired region. In this case, the Gaussian component can be combined with the original Gaussian weighting function to obtain a new Gaussian weighting function. Simulation results show that this quadrature approach is more accurate than the classical methods, such as the EKF and the converted measurement Kalman filter (CMKF), but it still has a high computational complexity.
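As a one-dimensional sketch of the quadrature idea (our own minimal example, not the thesis's multidimensional implementation), a Gauss-Hermite rule replaces the Gaussian-weighted integral with a weighted sum of integrand samples:

```python
import numpy as np

def gh_expectation(f, mean, var, order=10):
    """Approximate E[f(x)] for x ~ N(mean, var) with an order-point
    Gauss-Hermite rule (probabilists' weight exp(-x^2/2))."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    samples = f(mean + np.sqrt(var) * nodes)
    # the hermegauss weights integrate against exp(-x^2/2),
    # so divide by sqrt(2*pi) to recover a probability expectation
    return weights @ samples / np.sqrt(2.0 * np.pi)

# The rule is exact for low-order polynomials: E[x^2] = 1 for x ~ N(0, 1)
val = gh_expectation(lambda x: x ** 2, mean=0.0, var=1.0)
```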

The second tracking algorithm is based on the Gaussian sum filter. The Gaussian sum approximation method yields very accurate results because it can approximate any probability density function as closely as desired; moreover, the optimal estimate (conditional mean) can be computed in a simple manner as a weighted sum of the estimates from a bank of Kalman-type filters operating in parallel. However, this method has a very high complexity, since the number of Gaussian terms required in the approximation increases exponentially in time. To alleviate the computational burden associated with the Gaussian sum filter, we have found two efficient and systematic ways to approximate a non-Gaussian and measurement-dependent function by a weighted sum of Gaussian density functions. The first approach utilizes the special symmetric geometry of the function to select the number and the parameters of the Gaussians in the mixture. The second approach selects some initial parameters from the known statistics of the noise and transforms these parameters based on the measurement equation to obtain the Gaussian sum approximation. We have derived the formulas for updating the weights and the parameters involved in the bank of Kalman-type filters, and we have also suggested several ways to control the number of terms in the Gaussian mixture at each iteration. The first approach is to collapse the mixture into one equivalent Gaussian, and the second approach is to reduce the number of Gaussians to some specified limit using the proposed joining algorithm or the k-means clustering algorithm. Simulation results show that this Gaussian sum filter is more accurate than the classical methods, such as the EKF and the converted measurement Kalman filter (CMKF), and yet as efficient as they are.

Future Works

First of all, there are still some problems that remain unsolved and require more attention:

What is the optimal number of Gaussian terms we should use in the approximation?

What is the number of terms we should keep in the mixture reduction process?

So far these numbers are pre-determined, and therefore it would be desirable to have some means to determine them analytically. However, in the long run we should investigate

A new adaptive and intelligent Gaussian sum approximation method for a general non-Gaussian and non-stationary distribution;

A new adaptive and intelligent mixture reduction method which reduces the number of Gaussian terms but retains the general structure of the distribution.

We should also apply this filter to other applications such as underwater target tracking [42].

Appendix A

Discretization of the Continuous-Time Equation of Motion

Assume that in a single physical dimension the target equations of motion may be represented by

ẋ(t) = F x(t) + G v(t)    (A.1)

where x(t) = [x(t), ẋ(t), ẍ(t)]^T and v(t) is a white noise driving function with variance

The sensors have a constant scan interval, sampling the target position every ΔT seconds, i.e. the measurements are taken in discrete time. The appropriate discrete-time target equations of motion are given by where Φ(ΔT, α) is the target state transition matrix and u_n is the inhomogeneous driving input. Since

x(t + ΔT) = e^{FΔT} x(t) + ∫_t^{t+ΔT} e^{F(t+ΔT-τ)} G v(τ) dτ

it follows that

It can be verified that

When αΔT is small, Φ(ΔT, α) reduces to the Newtonian matrix

Φ(ΔT) = [ 1  ΔT  ΔT²/2 ;  0  1  ΔT ;  0  0  1 ]
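This limit can be checked numerically; the sketch below assumes the Singer-model system matrix F = [[0,1,0],[0,0,1],[0,0,-α]] with maneuver constant α (the values of α and ΔT are illustrative), computing Φ(ΔT, α) = e^{FΔT} by a truncated power series and comparing it with the Newtonian matrix:

```python
import numpy as np

def transition_matrix(alpha, dT, terms=25):
    """Phi(dT, alpha) = exp(F * dT) via a truncated Taylor series,
    with F = [[0,1,0],[0,0,1],[0,0,-alpha]] (Singer model assumed)."""
    F = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, -alpha]])
    Phi = np.eye(3)
    term = np.eye(3)
    for k in range(1, terms):
        term = term @ (F * dT) / k   # accumulate (F*dT)^k / k!
        Phi = Phi + term
    return Phi

newtonian = np.array([[1.0, 1.0, 0.5],
                      [0.0, 1.0, 1.0],
                      [0.0, 0.0, 1.0]])
Phi = transition_matrix(alpha=1e-4, dT=1.0)
# Phi is close to the Newtonian matrix when alpha * dT is small
```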

The noise sequence u_n satisfies

Since v(t) is white noise, E[u_n u_{n+i}^T] = 0 for i ≠ 0, so that u_n is a discrete-time white noise sequence.

Appendix B

Track Initialization

Assume the initial density p(x_0) is white Gaussian. Its mean x̄_0 and its covariance matrix P_0 can be estimated from the first two measurements z_{-1} and z_0.

Recall that the polar measurement z_n^p at time n is related to the Cartesian target state as follows:

This polar coordinate measurement z_n^p is first converted to the Cartesian coordinate measurement z_n^c using the inverse transformation h^{-1}(z_n^p)

The noise w̃_n on the converted measurement z_n^c can be assumed to be a zero-mean white Gaussian noise process with covariance matrix M_n defined by equation (2.42).
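In the two-dimensional range/bearing case, the inverse transformation h^{-1} is just the usual polar-to-Cartesian conversion (a sketch; the 3-D case adds an elevation angle):

```python
import numpy as np

def polar_to_cartesian(rng, bearing):
    """h^{-1}: convert a polar measurement (range, bearing) to a
    Cartesian position measurement."""
    return np.array([rng * np.cos(bearing), rng * np.sin(bearing)])

z = polar_to_cartesian(np.sqrt(2.0), np.pi / 4.0)
# z is approximately [1.0, 1.0]
```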

Then the initial mean x̄_0 is defined as where

E[x_0] = E[z_{x,0} - ε_{x,0}] = z_{x,0}

E[v_{x,0}] = E[(z_{x,0} - z_{x,-1})/ΔT] = (z_{x,0} - z_{x,-1})/ΔT
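In a single coordinate, the two-point initialization above amounts to the following sketch (the function name and values are illustrative):

```python
import numpy as np

def two_point_init(z_prev, z_curr, dT):
    """Initial state [position, velocity] from the first two converted
    measurements: position from the latest measurement, velocity from
    the first difference divided by the sampling interval."""
    return np.array([z_curr, (z_curr - z_prev) / dT])

x0 = two_point_init(z_prev=100.0, z_curr=103.0, dT=1.0)
# x0 = [103.0, 3.0]
```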

Similarly, the initial covariance matrix P_0 is defined as

where

and the matrices M_{-1} and M_0 are evaluated from equation (2.42) based on the first two measurements z_{-1} and z_0.

Appendix C

Detailed Derivations of the Curve-Fitting Approach

The objective here is to determine a weighted Gaussian density function such that it has the ellipsoidal base defined by Pts. 1a-d as shown in Figure C.1 and a weight equal to the value of the function p(z_n|x_n) at Pt. 1, where Pt. 1 is just the mean of the first Gaussian term.


Figure C.1: Weighted Gaussian Function

The ellipsoidal base can be determined by rotating the ellipse defined by the lengths σ_{1a} and σ_{1b} in Figure C.2A through the rotation angle; as a result, the equivalent covariance matrix can be determined by applying a rotation transformation to the diagonal covariance matrix diag(σ_{1a}², σ_{1b}²) as follows:

where

Figure C.2: Transformation of the ellipsoidal base

The value of k_1 and the weight α_1 are determined as follows: At Pt. 1,

At the three-standard-deviation level,

Substituting equation (C.6) into equation (C.5), the value of k_1 becomes

k_1 = 9 / (9 + 2 ln(2π σ_{1a} σ_{1b}))

Appendix D

Detailed Derivation of the Transformation Approach

The function p(z_n|x_n) can be rewritten as

where m̃_{n,i} = z_n - m_{n,i}, and m_{n,i} and B_{n,i} are the initial parameters determined from the known statistics of the noise w_n from equation (4.17), and

is a non-Gaussian function of x_n.

Denote F_i(x_n) as the exponential component of the function:


Let us consider the Taylor series expansion of the function F_i(x_n) about m_{n,i}: where J_{F_i}(m_{n,i}) and He_{F_i}(m_{n,i}) are the Jacobian and the Hessian of the function F_i(x_n), respectively.

where J_h(x_n) and He_h(m_{n,i}) are the Jacobian and the Hessian of the function h(x_n), respectively. If m_{n,i} is chosen such that m_{n,i} = h^{-1}(m̃_{n,i})¹, the constant term F_i(m_{n,i}) becomes zero and the first-order term J_{F_i}(m_{n,i}) also becomes zero; only the second-order term and the higher-order terms remain. Thus, the function N(m̃_{n,i} - h(x_n), B_{n,i}) can be approximated as a Gaussian function by taking only the second-order term and ignoring the remaining higher-order terms.

where

Finally, substituting equation (D.7) into equation (D.1), the density p(z_n|x_n) becomes

¹ Here, we assume the function is invertible; however, if the inverse does not exist, then we must choose m_{n,i} to be the most likely solution given m̃_{n,i} = h(m_{n,i}).

Appendix E

Reproducing Property of Gaussian Densities

First we make some definitions:

Let a be an r x 1 matrix (column vector),

A be an r x r symmetric, positive definite matrix,

b be an s x 1 matrix (column vector),

B be an s x s symmetric, positive definite matrix, D be an r x s matrix,

and x be an s x 1 random matrix (column vector).

Also define N_r(x - a, A) to be an r-dimensional Gaussian density function with mean a and covariance matrix A.

Theorem. Let the variables a, A, r, b, B, s, c, C and D be defined as above. Then

N_r(a - Dx, A) N_s(x - b, B) = N_r(Db - a, DBD^T + A) N_s(x - c, C)

Proof. The proof is straightforward, though somewhat tedious, and involves completing the square and applying the matrix inversion lemma.

Using the definitions of c and C from equations (E.3) and (E.1), the above equation becomes

= 1/((2π)^{r/2}|A|^{1/2}) · 1/((2π)^{s/2}|B|^{1/2}) exp(-½[(x - c)^T C^{-1}(x - c) - c^T C^{-1} c + a^T A^{-1} a + b^T B^{-1} b])

= 1/((2π)^{s/2}|C|^{1/2}) exp(-½(x - c)^T C^{-1}(x - c)) · |C|^{1/2}/((2π)^{r/2}|A|^{1/2}|B|^{1/2}) exp(-½[a^T A^{-1} a + b^T B^{-1} b - c^T C^{-1} c])

Knowing the relations

|C|^{1/2} / (|A|^{1/2}|B|^{1/2}) = 1 / |DBD^T + A|^{1/2}    (E.7)

a^T A^{-1} a + b^T B^{-1} b - c^T C^{-1} c = (Db - a)^T (A + DBD^T)^{-1} (Db - a)    (E.8)

the above equation can be simplified as follows:
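The theorem can also be checked numerically. The sketch below assumes the standard forms C = (B^{-1} + D^T A^{-1} D)^{-1} and c = C(B^{-1} b + D^T A^{-1} a) for the quantities defined in equations (E.1) and (E.3), which are consistent with relations (E.7) and (E.8):

```python
import numpy as np

def gauss(x, mean, cov):
    """Multivariate Gaussian density N(x - mean, cov) at the point x."""
    d = np.asarray(x) - np.asarray(mean)
    k = d.size
    norm = (2.0 * np.pi) ** (-k / 2.0) / np.sqrt(np.linalg.det(cov))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(0)
r, s = 2, 3
a, b = rng.normal(size=r), rng.normal(size=s)
D = rng.normal(size=(r, s))
A = 0.5 * np.eye(r)                      # r x r symmetric positive definite
B = 2.0 * np.eye(s)                      # s x s symmetric positive definite

# assumed forms for C and c (equations (E.1) and (E.3))
C = np.linalg.inv(np.linalg.inv(B) + D.T @ np.linalg.inv(A) @ D)
c = C @ (np.linalg.inv(B) @ b + D.T @ np.linalg.inv(A) @ a)

x = rng.normal(size=s)
lhs = gauss(a, D @ x, A) * gauss(x, b, B)                  # N_r(a - Dx, A) N_s(x - b, B)
rhs = gauss(a, D @ b, A + D @ B @ D.T) * gauss(x, c, C)    # N_r(Db - a, ...) N_s(x - c, C)
# lhs and rhs agree to machine precision
```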

Note that this theorem can be used to shift the dependence on x from a pair of Gaussians to a single Gaussian. From the theorem, the product of two Gaussians is a Gaussian whose normalizing constant is the reciprocal of another Gaussian independent of x.

Appendix F

Bank of Kalman Filters

When the mixture reduction algorithm is used instead of the equivalent Gaussian algorithm, the resulting conditional density p(x_n|Z^n) will be expressed as a weighted sum of M Gaussian terms

Using this result, the state prediction density p(x_{n+1}|Z^n) can be evaluated analytically from equation (2.18).

where

Utilizing the proposed Gaussian sum approximation, we can approximate the density p(z_{n+1}|x_{n+1}) as the following weighted sum of Gaussian terms. Substituting this result into equation (2.22), the conditional density p(x_{n+1}|Z^{n+1}) becomes

where

Finally, the optimal estimate x̂_{n+1|n+1} at time step n + 1 becomes

Bibliography

[1] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons, 2nd ed., 1992.

[2] T. C. Wang and P. K. Varshney, "Measurement Preprocessing for Nonlinear Target Tracking," IEE Proc. Part F: Radar and Signal Proc., vol. 140, pp. 316-322, October 1993.

[3] E. Wong, Stochastic Processes in Information and Dynamical Systems. New York: McGraw-Hill, 1971.

[4] H. W. Sorenson and D. L. Alspach, "Recursive Bayesian Estimation using Gaussian Sums," Automatica, vol. 7, pp. 465-479, 1971.

[5] C.-B. Chang and J. A. Tabaczynski, "Application of State Estimation to Target Tracking," IEEE Transactions on Automatic Control, vol. AC-29, pp. 98-109, February 1984.

[6] R. A. Singer, "Estimating Optimal Tracking Filter Performance for Manned Maneuvering Targets," IEEE Trans. on Aero. and Elect. Syst., vol. AES-6, pp. 473-483, July 1970.

[7] B. W. Dickinson, Systems: Analysis, Design, and Computation. Prentice Hall, 1991.

[8] Y. Bar-Shalom and X.-R. Li, Estimation and Tracking: Principles, Techniques, and Software. Artech House, 1993.

[9] A. Jazwinski, Stochastic Processes and Filtering Theory. Academic Press, 1970.

[10] A. Gelb, ed., Applied Optimal Estimation. The M.I.T. Press, 1974.

[11] R. L. Bellaire, E. W. Kamen, and S. M. Zabin, "A New Nonlinear Iterated Filter with Applications to Target Tracking," SPIE, vol. 2561, pp. 240-251, 1995.

[12] E. Cortina, D. Otero, and C. D'Attellis, "Maneuvering Target Tracking using Extended Kalman Filter," IEEE Trans. on Aero. and Elect. Syst., vol. 29, pp. 1015-1022, July 1991.

[13] T. L. Song and J. L. Speyer, "The Modified Gain Extended Kalman Filter and Parameter Identification in Linear Systems," Automatica, vol. 22, no. 1, pp. 59-75, 1986.

[14] T. R. Benedict and G. W. Bordner, "Synthesis of an Optimal Set of Radar Track-While-Scan Smoothing Equations," IEEE Trans. Automatic Control, vol. AC-7, pp. 27-32, 1962.

[15] S. R. Neal, "Discussion on 'Parametric Relations for the α-β-γ Filter Predictor'," IEEE Trans. Automatic Control, vol. AC-12, pp. 315-317, June 1967.

[16] A. Farina and F. A. Studer, Radar Data Processing: Volume I - Introduction and Tracking. John Wiley & Sons, 1985.

[17] D. Lerro and Y. Bar-Shalom, "Tracking with Debiased Consistent Converted Measurements Versus EKF," IEEE Trans. on Aero. and Elect. Syst., vol. 29, pp. 1015-1022, July 1993.

[18] R. S. Bucy and K. D. Senne, "Digital Synthesis of Nonlinear Filters," Automatica, vol. 7, pp. 287-298, 1971.

[19] S. C. Kramer and H. W. Sorenson, "Recursive Bayesian Estimation using Piecewise Constant Approximation," Automatica, vol. 24, no. 6, pp. 789-801, 1988.

[20] R. S. Bucy and H. Youssef, "Nonlinear Filter Representation via Spline Functions," 5th Symp. Nonlinear Estimation, pp. 51-60, 1974.

[21] R. J. P. de Figueiredo and J. G. Jan, "Spline Filters," 2nd Symp. on Nonlinear Estimation, pp. 127-138, 1971.

[22] D. G. Lainiotis and J. G. Deshpande, "Parameter Estimation using Splines," Inf. Sciences, vol. 7, pp. 291-315, 1974.

[23] A. H. Wang and R. L. Klein, "Implementation of Nonlinear Estimators using Monospline," 13th IEEE Conf. on Decision and Control, pp. 1305-1307, 1976.

[24] N. J. Gordon, D. J. Salmond, and A. F. M. Smith, "Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation," IEE Proceedings-F, vol. 140, pp. 107-113, April 1993.

[25] R. L. Klein and W. Kasemratanasunti, "Quadrature Formulae and the Implementation of Optimal Nonlinear Estimation," 4th Symp. on Nonlinear Estimation, pp. 151-160, 1973.

[26] A. H. Wang and R. L. Klein, "Optimal Quadrature Formula Nonlinear Estimators," Inf. Sciences, vol. 16, pp. 169-184, 1978.

[27] J. Y. Liang and R. L. Klein, "Nonlinear Estimation Implemented with Optimal Quadrature Formulae," 6th Symp. on Nonlinear Estimation, pp. 160-170, 1975.

[28] B. Kosko, Neural Networks and Fuzzy Systems. Prentice Hall, 1992.

[29] C.-W. Tao and W. E. Thompson, "A Fuzzy Logic Approach to Multidimensional Target Tracking," IEEE Int. Conf. on Fuzzy Systems, pp. 1350-1354, 1993.

[30] C. G. Moore, C. J. Harris, and E. Rogers, "Utilising Fuzzy Models in the Design of Estimators and Predictors: An Agile Target Tracking Example," IEEE Int. Conf. on Fuzzy Systems, pp. 679-684, 1993.

[31] J. M. Roberts, D. J. Mills, D. Charnley, and C. J. Harris, "Improved Kalman Filter Initialisation using Neurofuzzy Estimation," IEEE Int. Conf. on Fuzzy Systems, 1995.

[32] P. Ott, K. C. C. Chan, and H. Leung, "A Study on Fuzzy Tracking of Maneuvering Aircraft Using Real Data," IEEE Int. Conf. on Fuzzy Systems, pp. 1671-1675, 1994.

[33] A. H. Stroud, Numerical Quadrature and Solution of Ordinary Differential Equations. Springer-Verlag, 1974.

[34] A. H. Stroud, Approximate Calculation of Multiple Integrals. Englewood Cliffs, NJ: Prentice Hall, 1971.

[35] F. B. Hildebrand, Introduction to Numerical Analysis. Prentice Hall, 1956.

[36] C. J. Masreliez, "Approximate Non-Gaussian Filtering with Linear State and Observation Relations," IEEE Trans. on Auto. Contr., pp. 107-110, February 1975.

[37] A. Goshtasby and W. D. O'Neill, "Curve Fitting by a Sum of Gaussians," Graphical Models and Image Processing, vol. 56, pp. 281-288, July 1994.

[38] D. W. Marquardt, "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," J. SIAM, vol. 11, no. 2, pp. 431-441, 1963.

[39] Y. Bar-Shalom and K. Birmiwal, "Consistency and Robustness of PDAF for Target Tracking in Cluttered Environments," Automatica, no. 19, pp. 431-437, 1983.

[40] D. J. Salmond, D. P. Atherton, and J. A. Bather, "Mixture Reduction Algorithms for Uncertain Tracking," IFAC Identification and System Parameter Estimation, pp. 775-780, 1988.

[41] R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis. Prentice Hall, 1988.

[42] F. El-Hawary, F. Aminzadeh, and G. A. N. Mbamalu, "The Generalized Kalman Filter Approach to Adaptive Underwater Target Tracking," IEEE Journal of Oceanic Engineering, vol. 17, pp. 129-137, January 1992.
