Guidance Filter Fundamentals
Neil F. Palumbo, Gregg A. Harrison, Ross A. Blauwkamp, and Jeffrey K. Marquart

When designing missile guidance laws, all of the states necessary to mechanize the implementation are assumed to be directly available for feedback to the guidance law and uncorrupted by noise. In practice, however, this is not the case. The separation theorem states that the solution to this problem separates into the optimal deterministic controller driven by the output of an optimal state estimator. Thus, this article serves as a companion to our other article in this issue, "Modern Homing Missile Guidance Theory and Techniques," wherein optimal guidance laws are discussed and the aforementioned assumptions hold. Here, we briefly discuss the general nonlinear filtering problem and then turn our focus to the linear and extended Kalman filtering approaches; both are popular filtering methodologies for homing guidance applications.

INTRODUCTION

Our companion article in this issue, "Modern Homing Missile Guidance Theory and Techniques," discusses linear-quadratic optimal control theory as it is applied to the derivation of a number of different homing guidance laws. Regardless of the specific structure of the guidance law (e.g., proportional navigation (PN) versus the "optimal guidance law"), all of the states necessary to mechanize the implementation are assumed to be (directly) available for feedback and uncorrupted by noise. We refer to this case as the "perfect state information problem," and the resulting linear-quadratic optimal controller is deterministic. For example, consider the Cartesian version of PN, derived in the abovementioned companion article and repeated below for convenience:

u_{PN}(t) = \frac{3}{t_{go}^{2}} [x_1(t) + x_2(t) t_{go}] .   (1)

Examining Eq. 1, and referring to Fig. 1, the states of the PN guidance law are x_1(t) ≜ r_Ty - r_My and x_2(t) ≜ v_Ty - v_My (that is, components of relative position and relative velocity perpendicular to the reference x axis shown in Fig. 1). Recall from the companion article that the relative position "measurement" is really a pseudo-measurement composed of noisy line-of-sight (LOS) angle and relative range measurements. In addition to relative position, however, this (Cartesian) form of the deterministic PN controller also requires relative velocity and time-to-go, both of which are not (usually) directly available quantities. Hence, in words, the relative position pseudo-measurement must be filtered to mitigate noise effects, and a relative velocity state must be derived (estimated) from the noisy relative position pseudo-measurement (we dealt with how to obtain time-to-go in the companion article mentioned above). Consequently, a critical question to ask is: Will deterministic linear optimal control laws (such as those derived by using the techniques discussed in our companion article in this issue) still produce optimal results given estimated quantities derived from noisy measurements? Fortunately, the separation theorem states that the solution to this problem separates into the optimal deterministic controller driven by the output of an optimal state estimator.1–6 In this article, we will introduce the optimal filtering concepts necessary to meet these needs.

Figure 1. Planar engagement geometry. The planar intercept problem is illustrated along with most of the angular and Cartesian quantities necessary to derive modern guidance laws. The x axis represents downrange, while the y/z axis can represent either crossrange or altitude. A flat-Earth model is assumed with an inertial coordinate system that is fixed to the surface of the Earth. The positions of the missile (M) and target (T) are shown with respect to the origin (O) of the coordinate system. Differentiation of the target–missile relative position vector yields relative velocity; double differentiation yields relative acceleration. (Legend: γ_M = missile flight path angle; γ_T = target flight path angle; λ = LOS angle; L = lead angle; a_M = missile acceleration, normal to LOS; a_T = target acceleration, normal to v_T; r_M = missile inertial position vector; r_T = target inertial position vector; v_M = missile velocity vector; v_T = target velocity vector; r_x = relative position x (r_Tx - r_Mx); r_y = relative position y (r_Ty - r_My); R = range to target.)
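To fix ideas before turning to filtering itself, the short Python sketch below shows how the PN law of Eq. 1 might be mechanized when only estimated quantities are available. The function name pn_command, the variables x1_hat, x2_hat, and t_go, and the numerical values are illustrative assumptions introduced here; they stand in for filter-supplied estimates of relative position, relative velocity, and time-to-go and do not appear in the article.

```python
def pn_command(x1_hat: float, x2_hat: float, t_go: float) -> float:
    """Cartesian PN acceleration command of Eq. 1, evaluated with estimated
    states: x1_hat ~ relative position and x2_hat ~ relative velocity normal
    to the reference x axis, plus an estimate of time-to-go."""
    if t_go <= 0.0:
        raise ValueError("time-to-go must be positive")
    return (3.0 / t_go**2) * (x1_hat + x2_hat * t_go)

# Example with made-up estimates (SI units assumed).
a_cmd = pn_command(x1_hat=120.0, x2_hat=-15.0, t_go=4.0)
print(f"commanded acceleration: {a_cmd:.2f} m/s^2")  # 11.25 m/s^2
```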
In general terms, the purpose of filtering is to develop estimates of certain states of the system, given a set of noisy measurements that contain information about the states to be estimated. In many instances, one wants to perform this estimation process in some kind of "optimal" fashion. Depending on the assumptions made about the dynamic behavior of the states to be estimated, the statistics of the (noisy) measurements that are taken, and how optimality is defined, different types of filter structures (sometimes referred to as "observers") and equations can be developed. In this article, Kalman filtering is emphasized, but we first provide some brief general comments about optimal filtering and the more general (and implementationally complex) Bayesian filter.

BAYESIAN FILTERING

Bayesian filtering is a formulation of the estimation problem that makes no assumptions about the nature (e.g., linear versus nonlinear) of the dynamic evolution of the states to be estimated, the structure of the uncertainties involved in the state evolution, or the statistics of the noisy measurements used to derive the state estimates. It does assume, however, that models of the state evolution (including uncertainty) and of the measurement-noise distribution are available.

In the subsequent discussions on filtering, discrete-time models of the process and measurements will be the preferred representation. Discrete-time processes may arise in one of two ways: (i) the sequence of events takes place in discrete steps or (ii) the continuous-time process of interest is sampled at discrete times. For our purposes, both options come into play. For example, a radar system may provide measurements at discrete (perhaps unequally spaced) intervals. In addition, the filtering algorithm is implemented in a digital computer, thus imposing the need to sample a continuous-time process. Thus, we will begin by assuming very general discrete-time models of the following form:

x_k = f_{k-1}(x_{k-1}, w_{k-1})
y_k = c_k(x_k, ν_k) .   (2)

In Eq. 2, x_k is the state vector at (discrete) time k, the process noise w_{k-1} is a functional representation of the (assumed) uncertainty in the knowledge of the state evolution from time k - 1 to time k, y_k is the vector of measurements made at time k, and the vector ν_k is a statistical representation of the noise that corrupts the measurement taken at time k. We will revisit continuous-to-discrete model conversion later.
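As a concrete, purely illustrative instance of Eq. 2, the Python sketch below assumes a nearly-constant-velocity relative-motion model driven by process noise, together with a noisy relative-position pseudo-measurement. The sample period, noise levels, and state values are assumptions made here for illustration, not quantities taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1  # assumed sample period (s)

def f(x_prev, w_prev):
    """x_k = f_{k-1}(x_{k-1}, w_{k-1}): nearly-constant-velocity relative motion,
    with state = [relative position, relative velocity]."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    return F @ x_prev + w_prev

def c(x_k, nu_k):
    """y_k = c_k(x_k, nu_k): noisy relative-position pseudo-measurement."""
    return x_k[0] + nu_k

# One step of the model in Eq. 2, with illustrative noise draws.
x_prev = np.array([1000.0, -250.0])   # relative position (m), relative velocity (m/s)
w_prev = rng.normal(0.0, [0.5, 2.0])  # process noise sample
nu_k = rng.normal(0.0, 10.0)          # measurement noise sample
x_k = f(x_prev, w_prev)
y_k = c(x_k, nu_k)
```

Both f and c happen to be linear in this example; Eq. 2 places no such restriction on either function.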
Eq. 2 models how the states of the system are assumed to evolve with time. The function f_{k-1} is not assumed to have a specific structure other than being of closed form. In general, it will be nonlinear. Moreover, no assumption is made regarding the statistical structure of the uncertainty involved in the state evolution; we assume only that a reasonably accurate model of it is available. The second statement in Eq. 2 models how the measurements are related to the states. Again, no assumptions are made regarding the structure of c_k or the statistics of the measurement noise.

Suppose that at time k - 1 one has a probability density that describes the knowledge of the system state at that time, based on all measurements up to and including that at time k - 1. This density is referred to as the prior density of the state, expressed as p(x_{k-1} | Y_{k-1}), where Y_{k-1} represents all measurements taken up to and including that at time k - 1. Then, suppose a new measurement becomes available at time k. The problem is to update the probability density of the state, given all measurements up to and including that at time k. The update is accomplished in a propagation step and a measurement-update step.

The propagation step predicts the probability density forward from time k - 1 to k via the Chapman–Kolmogorov equation7 (Eq. 3):

p(x_k | Y_{k-1}) = \int p(x_k | x_{k-1}) p(x_{k-1} | Y_{k-1}) dx_{k-1} .   (3)

The measurement-update step then incorporates the new measurement y_k by applying Bayes' rule (Eq. 4):

p(x_k | Y_k) = \frac{p(y_k | x_k) p(x_k | Y_{k-1})}{p(y_k | Y_{k-1})} .   (4)

Given a measurement y_k, the likelihood function (see Eq. 4) characterizes the probability of obtaining that value of the measurement, given a state x_k. The likelihood function is derived from the sensor-measurement model. Equations 3 and 4, when applied recursively, constitute the Bayesian nonlinear filter. The posterior density encapsulates all current knowledge of the system state and its associated uncertainty. Given the posterior density, optimal estimators of the state can be defined.

Generally, use of a Bayesian recursive filter paradigm requires a methodology for estimating the probability densities involved that often is nontrivial. Recent research has focused on the use of particle filtering techniques as a way to accomplish this. Particle filtering has been applied to a range of tracking problems and, in some instances, has been shown to yield superior performance as compared with other filtering techniques. For a more detailed discussion of particle filtering techniques, the interested reader is referred to Ref. 7.

KALMAN FILTERING

For the purpose of missile guidance filtering, the more familiar Kalman filter is widely used.3, 6, 8, 9 The Kalman filter is, in fact, a special case of the Bayesian filter. Like the Bayesian filter, the Kalman filter (i) requires models of the state evolution and the relationship between states and measurements and (ii) is a two-step recursive process.
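To show how the recursion of Eqs. 3 and 4 can be carried out numerically, the sketch below implements one step of a simple bootstrap-style particle filter, which approximates the propagation and measurement-update steps with a cloud of weighted samples. The sampling scheme, Gaussian likelihood, resampling step, and all names and values are illustrative assumptions made here; they are not drawn from the article or from Ref. 7.

```python
import numpy as np

def particle_filter_step(particles, weights, y_k, f_sample, likelihood, rng):
    """One Monte Carlo approximation of the Bayesian recursion: propagation
    (Eq. 3) by sampling the state-transition model, and measurement update
    (Eq. 4) by reweighting with the likelihood p(y_k | x_k)."""
    # Propagation: draw each particle forward through the process model.
    particles = np.array([f_sample(p) for p in particles])
    # Measurement update: reweight by the likelihood and renormalize.
    weights = weights * np.array([likelihood(y_k, p) for p in particles])
    weights = weights / np.sum(weights)
    # Resample to limit weight degeneracy, then reset to uniform weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # A point estimate of the state: the (approximate) posterior mean.
    x_hat = particles.mean(axis=0)
    return particles, weights, x_hat

# Illustrative usage: scalar random-walk process, Gaussian measurement noise.
rng = np.random.default_rng(1)
f_sample = lambda x: x + rng.normal(0.0, 1.0)                  # draw from p(x_k | x_{k-1})
likelihood = lambda y, x: np.exp(-0.5 * ((y - x) / 2.0) ** 2)  # p(y_k | x_k), up to a constant
particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
particles, weights, x_hat = particle_filter_step(particles, weights, 1.3, f_sample, likelihood, rng)
```

Resampling after each measurement update is one common device for keeping the particle weights from collapsing onto a few samples as the recursion proceeds.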