Estimation II
Ian Reid
Hilary Term, 2001

1 Discrete-time Kalman filter

We ended the first part of this course deriving the Discrete-Time Kalman Filter as a recursive Bayes' estimator. In this lecture we will go into the filter in more detail, and provide a new derivation for the Kalman filter, this time based on the idea of Linear Minimum Variance (LMV) estimation of discrete-time systems.

1.1 Background

The problem we are seeking to solve is the continual estimation of a set of parameters whose values $x(t)$ change over time. Updating is achieved by combining a set of observations or measurements $z(t)$ which contain information about the signal of interest $x(t)$. The role of the estimator is to provide an estimate $\hat{x}(t + \tau)$ at some time $t + \tau$. If $\tau > 0$ we have a prediction filter, if $\tau < 0$ a smoothing filter, and if $\tau = 0$ the operation is simply called filtering.

Recall that an estimator is said to be unbiased if the expectation of its output is the expectation of the quantity being estimated, $E[\hat{x}] = E[x]$.

Also recall that a minimum variance unbiased estimator (MVUE) is an estimator which is unbiased and minimises the mean square error:

$$\hat{x} = \arg\min_{\hat{x}} E\left[ \|\hat{x} - x\|^2 \mid z \right] = E[x \mid z]$$

The term $E[\|x - \hat{x}\|^2]$, the so-called variance of error, is closely related to the error covariance matrix, $E[(x - \hat{x})(x - \hat{x})^T]$. Specifically, the variance of error of an estimator is equal to the trace of the error covariance matrix,

$$E[\|x - \hat{x}\|^2] = \mathrm{trace}\, E[(x - \hat{x})(x - \hat{x})^T].$$

The Kalman filter is a linear minimum variance of error filter (i.e. it is the best linear filter over the class of all linear filters) over time-varying and time-invariant filters. In the case of the state vector $x$ and the observations $z$ being jointly Gaussian distributed, the MVUE estimator is a linear function of the measurement set $z$ and thus the MVUE (sometimes written MVE for Minimum Variance of Error estimator) is also an LMV estimator, as we saw in the first part of the course.

Notation

The following notation will be used.

- $z_k$: observation vector at time $k$.
- $Z^k$: the set of all observations up to (and including) time $k$.
- $x_k$: system state vector at time $k$.
- $\hat{x}_{k|j}$: estimate of $x$ at time $k$ based on observations up to time $j$.
- $\tilde{x}_{k|j}$: estimation error, $\tilde{x}_{k|j} = \hat{x}_{k|j} - x_k$ (tilde notation).
- $P$: covariance matrix.
- $F$: state transition matrix.
- $G$: input (control) transition matrix.
- $H$: output transition matrix.
- $w$: process (or system, or plant) noise vector.
- $v$: measurement noise vector.
- $Q$: process (or system, or plant) noise covariance matrix.
- $R$: measurement noise covariance matrix.
- $K$: Kalman gain matrix.
- $\nu_k$: innovation at time $k$.
- $S_k$: innovation covariance matrix at time $k$.

1.2 System and observation model

We now begin the analysis of the Kalman filter. Refer to Figure 1. We assume that the system can be modelled by the state transition equation,

$$x_{k+1} = F_k x_k + G_k u_k + w_k \qquad (1)$$

where $x_k$ is the state at time $k$, $u_k$ is an input control vector, $w_k$ is additive system or process noise, $G_k$ is the input transition matrix and $F_k$ is the state transition matrix.

We further assume that the observations of the state are made through a measurement system which can be represented by a linear equation of the form,

$$z_k = H_k x_k + v_k \qquad (2)$$

where $z_k$ is the observation or measurement made at time $k$, $x_k$ is the state at time $k$, $H_k$ is the observation matrix and $v_k$ is additive measurement noise.

Figure 1: State-space model (block diagram: control input $u$ enters through $G$, process noise $w$ is added to the state, the state recirculates through a unit delay and $F$, and the output passes through $H$ with measurement noise $v$ added to give $z$).
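To make equations (1) and (2) concrete, the following sketch simulates one step of such a state-space model in Python/NumPy. The particular matrices (a constant-velocity model driven by an acceleration input) and all numerical values are invented for illustration only; they are not part of the original notes.

```python
import numpy as np

# Hypothetical constant-velocity model illustrating equations (1) and (2):
#   x_{k+1} = F x_k + G u_k + w_k,      z_k = H x_k + v_k
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])      # state transition matrix
G = np.array([[0.5 * dt**2],
              [dt]])            # input (control) transition matrix
H = np.array([[1.0, 0.0]])      # observation matrix (only position is measured)
Q = 0.01 * np.eye(2)            # process noise covariance
R = np.array([[0.25]])          # measurement noise covariance

rng = np.random.default_rng(0)

def step_true_state(x, u):
    """Propagate the true state, equation (1), with w_k ~ N(0, Q)."""
    w = rng.multivariate_normal(np.zeros(2), Q)
    return F @ x + G @ u + w

def observe(x):
    """Generate a measurement, equation (2), with v_k ~ N(0, R)."""
    v = rng.multivariate_normal(np.zeros(1), R)
    return H @ x + v

x_true = np.array([0.0, 1.0])   # true initial state [position, velocity]
u = np.array([0.1])             # known control input (constant acceleration)
x_true = step_true_state(x_true, u)
z = observe(x_true)
```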
1.3 Assumptions

We make the following assumptions:

- The process and measurement noise random processes $w_k$ and $v_k$ are uncorrelated, zero-mean white-noise processes with known covariance matrices. Then,

$$E[w_k w_l^T] = \begin{cases} Q_k & k = l \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

$$E[v_k v_l^T] = \begin{cases} R_k & k = l \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$

$$E[w_k v_l^T] = 0 \quad \text{for all } k, l \qquad (5)$$

where $Q_k$ and $R_k$ are symmetric positive semi-definite matrices.

- The initial system state, $x_0$, is a random vector that is uncorrelated to both the system and measurement noise processes.

- The initial system state has a known mean and covariance matrix

$$\hat{x}_{0|0} = E[x_0] \quad \text{and} \quad P_{0|0} = E[(\hat{x}_{0|0} - x_0)(\hat{x}_{0|0} - x_0)^T] \qquad (6)$$

Given the above assumptions the task is to determine, given a set of observations $z_1, \ldots, z_{k+1}$, the estimation filter that at the $(k+1)$-th instance in time generates an optimal estimate of the state $x_{k+1}$, which we denote by $\hat{x}_{k+1}$, that minimises the expectation of the squared-error loss function,

$$E[\|x_{k+1} - \hat{x}_{k+1}\|^2] = E[(x_{k+1} - \hat{x}_{k+1})^T (x_{k+1} - \hat{x}_{k+1})] \qquad (7)$$

1.4 Derivation

Consider the estimation of the state $x_{k+1}$ based on the observations up to time $k$, $z_1 \ldots z_k$, namely $\hat{x}_{k+1|Z^k}$. This is called a one-step-ahead prediction, or simply a prediction. Now, the solution to the minimisation of Equation (7) is the expectation of the state at time $k+1$ conditioned on the observations up to time $k$. Thus,

$$\hat{x}_{k+1|k} = E[x_{k+1} \mid z_1 \ldots z_k] = E[x_{k+1} \mid Z^k] \qquad (8)$$

Then the predicted state is given by

$$\begin{aligned}
\hat{x}_{k+1|k} &= E[x_{k+1} \mid Z^k] \\
&= E[F_k x_k + G_k u_k + w_k \mid Z^k] \\
&= F_k E[x_k \mid Z^k] + G_k u_k + E[w_k \mid Z^k] \\
&= F_k \hat{x}_{k|k} + G_k u_k
\end{aligned} \qquad (9)$$

where we have used the fact that the process noise has zero mean value and $u_k$ is known precisely.

The estimate variance $P_{k+1|k}$ is the mean squared error in the estimate $\hat{x}_{k+1|k}$. Thus, using the facts that $w_k$ and $\hat{x}_{k|k}$ are uncorrelated:

$$\begin{aligned}
P_{k+1|k} &= E[(x_{k+1} - \hat{x}_{k+1|k})(x_{k+1} - \hat{x}_{k+1|k})^T \mid Z^k] \\
&= F_k E[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T \mid Z^k] F_k^T + E[w_k w_k^T] \\
&= F_k P_{k|k} F_k^T + Q_k
\end{aligned} \qquad (10)$$

Having obtained a predictive estimate $\hat{x}_{k+1|k}$, suppose that we now take another observation $z_{k+1}$. How can we use this information to update the prediction, i.e. find $\hat{x}_{k+1|k+1}$? We assume that the estimate is a linear weighted sum of the prediction and the new observation and can be described by the equation,

$$\hat{x}_{k+1|k+1} = K'_{k+1} \hat{x}_{k+1|k} + K_{k+1} z_{k+1} \qquad (11)$$

where $K'_{k+1}$ and $K_{k+1}$ are weighting or gain matrices (of different sizes). Our problem is now reduced to finding the $K'_{k+1}$ and $K_{k+1}$ that minimise the conditional mean squared estimation error, where of course the estimation error is given by:

$$\tilde{x}_{k+1|k+1} = \hat{x}_{k+1|k+1} - x_{k+1} \qquad (12)$$

1.5 The Unbiased Condition

For our filter to be unbiased, we require that $E[\hat{x}_{k+1|k+1}] = E[x_{k+1}]$. Let's assume (and argue by induction) that $\hat{x}_{k|k}$ is an unbiased estimate. Then combining equations (11) and (2) and taking expectations yields

$$\begin{aligned}
E[\hat{x}_{k+1|k+1}] &= E[K'_{k+1} \hat{x}_{k+1|k} + K_{k+1} H_{k+1} x_{k+1} + K_{k+1} v_{k+1}] \\
&= K'_{k+1} E[\hat{x}_{k+1|k}] + K_{k+1} H_{k+1} E[x_{k+1}] + K_{k+1} E[v_{k+1}]
\end{aligned} \qquad (13)$$

Note that the last term on the right-hand side of the equation is zero, and further note that the prediction is unbiased:

$$\begin{aligned}
E[\hat{x}_{k+1|k}] &= E[F_k \hat{x}_{k|k} + G_k u_k] \\
&= F_k E[\hat{x}_{k|k}] + G_k u_k \\
&= E[x_{k+1}]
\end{aligned} \qquad (14)$$

Hence by combining equations (13) and (14),

$$E[\hat{x}_{k+1|k+1}] = (K'_{k+1} + K_{k+1} H_{k+1}) E[x_{k+1}]$$

and the condition that $\hat{x}_{k+1|k+1}$ be unbiased reduces the requirement to

$$K'_{k+1} + K_{k+1} H_{k+1} = I \quad \text{or} \quad K'_{k+1} = I - K_{k+1} H_{k+1} \qquad (15)$$

We now have that for our estimator to be unbiased it must satisfy

$$\begin{aligned}
\hat{x}_{k+1|k+1} &= (I - K_{k+1} H_{k+1}) \hat{x}_{k+1|k} + K_{k+1} z_{k+1} \\
&= \hat{x}_{k+1|k} + K_{k+1} [z_{k+1} - H_{k+1} \hat{x}_{k+1|k}]
\end{aligned} \qquad (16)$$

where $K$ is known as the Kalman gain.

Note that since $H_{k+1} \hat{x}_{k+1|k}$ can be interpreted as a predicted observation $\hat{z}_{k+1|k}$, equation (16) can be interpreted as the sum of a prediction and a fraction of the difference between the predicted and actual observation.
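Continuing the hypothetical NumPy example from Section 1.2, the prediction equations (9) and (10) and the unbiased update form (16) can be sketched as follows; the gain `K` is deliberately left as an argument, since its optimal value is only derived in the next section.

```python
def predict(x_est, P, u):
    """Time update, equations (9) and (10):
    x_{k+1|k} = F x_{k|k} + G u_k,   P_{k+1|k} = F P_{k|k} F^T + Q."""
    x_pred = F @ x_est + G @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update_with_gain(x_pred, z, K):
    """Unbiased update form, equation (16):
    x_{k+1|k+1} = x_{k+1|k} + K (z_{k+1} - H x_{k+1|k}).
    Any gain K satisfying (15) keeps the estimate unbiased;
    the optimal choice of K follows in Section 1.7."""
    return x_pred + K @ (z - H @ x_pred)
```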
1.6 Finding the Error Covariance

We determined the prediction error covariance in equation (10). We now turn to the updated error covariance

$$\begin{aligned}
P_{k+1|k+1} &= E[\tilde{x}_{k+1|k+1} \tilde{x}_{k+1|k+1}^T \mid Z^{k+1}] \\
&= E[(x_{k+1} - \hat{x}_{k+1|k+1})(x_{k+1} - \hat{x}_{k+1|k+1})^T] \\
&= (I - K_{k+1} H_{k+1}) E[\tilde{x}_{k+1|k} \tilde{x}_{k+1|k}^T] (I - K_{k+1} H_{k+1})^T \\
&\quad + K_{k+1} E[v_{k+1} v_{k+1}^T] K_{k+1}^T + 2 (I - K_{k+1} H_{k+1}) E[\tilde{x}_{k+1|k} v_{k+1}^T] K_{k+1}^T
\end{aligned}$$

and with

$$E[v_{k+1} v_{k+1}^T] = R_{k+1}, \quad E[\tilde{x}_{k+1|k} \tilde{x}_{k+1|k}^T] = P_{k+1|k}, \quad E[\tilde{x}_{k+1|k} v_{k+1}^T] = 0$$

we obtain

$$P_{k+1|k+1} = (I - K_{k+1} H_{k+1}) P_{k+1|k} (I - K_{k+1} H_{k+1})^T + K_{k+1} R_{k+1} K_{k+1}^T \qquad (17)$$

Thus the covariance of the updated estimate is expressed in terms of the prediction covariance $P_{k+1|k}$, the observation noise $R_{k+1}$ and the Kalman gain matrix $K_{k+1}$.

1.7 Choosing the Kalman Gain

Our goal is now to minimise the conditional mean-squared estimation error with respect to the Kalman gain, $K$:

$$\begin{aligned}
L &= \min_{K_{k+1}} E[\tilde{x}_{k+1|k+1}^T \tilde{x}_{k+1|k+1} \mid Z^{k+1}] \\
&= \min_{K_{k+1}} \mathrm{trace}\, E[\tilde{x}_{k+1|k+1} \tilde{x}_{k+1|k+1}^T \mid Z^{k+1}] \\
&= \min_{K_{k+1}} \mathrm{trace}\, P_{k+1|k+1}
\end{aligned} \qquad (18)$$

For any matrix $A$ and a symmetric matrix $B$,

$$\frac{\partial}{\partial A} \mathrm{trace}(A B A^T) = 2 A B$$

(to see this, consider writing the trace as $\sum_i a_i^T B a_i$, where the $a_i^T$ are the rows of $A$, and then differentiating w.r.t. the $a_i$).

Combining equations (17) and (18) and differentiating with respect to the gain matrix (using the relation above) and setting equal to zero yields

$$\frac{\partial L}{\partial K_{k+1}} = -2 (I - K_{k+1} H_{k+1}) P_{k+1|k} H_{k+1}^T + 2 K_{k+1} R_{k+1} = 0$$

Re-arranging gives an equation for the gain matrix

$$K_{k+1} = P_{k+1|k} H_{k+1}^T \left[ H_{k+1} P_{k+1|k} H_{k+1}^T + R_{k+1} \right]^{-1} \qquad (19)$$

Together with Equation (16) this defines the optimal linear mean-squared error estimator.

1.8 Summary of key equations

At this point it is worth summarising the key equations which underlie the Kalman filter algorithm. The algorithm consists of two steps: a prediction step and an update step.

Prediction: also known as the time-update.
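As a concrete illustration of the two-step cycle summarised here, the sketch below collects equations (16), (17) and (19) into a measurement-update function and runs a few predict/update iterations. It assumes the hypothetical model matrices and the `predict`, `step_true_state` and `observe` functions from the earlier sketches, none of which appear in the original notes.

```python
def update(x_pred, P_pred, z):
    """Measurement update using equations (19), (16) and (17)."""
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain, equation (19)
    x_new = x_pred + K @ (z - H @ x_pred)         # state update, equation (16)
    I_KH = np.eye(P_pred.shape[0]) - K @ H
    P_new = I_KH @ P_pred @ I_KH.T + K @ R @ K.T  # covariance update, equation (17)
    return x_new, P_new

# A few cycles of the filter run against the simulated system:
x_est = np.zeros(2)     # initial estimate, corresponds to \hat{x}_{0|0}
P = np.eye(2)           # initial covariance, corresponds to P_{0|0}
for _ in range(10):
    x_true = step_true_state(x_true, u)       # true system evolves, equation (1)
    z = observe(x_true)                       # new measurement arrives, equation (2)
    x_pred, P_pred = predict(x_est, P, u)     # time update, equations (9) and (10)
    x_est, P = update(x_pred, P_pred, z)      # measurement update
```

The covariance update here keeps the symmetric form of equation (17), which holds for any gain and preserves positive semi-definiteness numerically; with the optimal gain of equation (19) it is algebraically equivalent to the shorter form $P_{k+1|k+1} = (I - K_{k+1} H_{k+1}) P_{k+1|k}$.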