Mathematics for Signal Processing (Signal Processing for Communication)


Mathematics for Signal Processing (Signal Processing for Communication)
2007, spatial processing / beam forming
reader: sections 3.1-3.3
Marc Geilen, PT 9.35, [email protected]
http://www.es.ele.tue.nl/education/5ME00/
Eindhoven University of Technology

2 overview
• optimal beam formers
  – deterministic approach
  – stochastic approach
• colored noise
• the matched filter

3 data model
assume we receive d (narrow-band) signals on an antenna array:
$x_k = \sum_{i=1}^{d} a_i s_{i,k} + n_k = A s_k + n_k$
objective: construct a receiver weight vector w such that $y_k = w^H x_k$ is an estimate of one of the sources, or all sources: $y_k = W^H x_k$

4 deterministic approach
noiseless case: $x_k = A s_k \Leftrightarrow X = A S$, with X of size M×N, A of size M×d, S of size d×N
objective: find W such that $W^H X = S$, or minimize $\|W^H X - S\|$
we consider two scenarios:
• A is known
• S is known

5 deterministic approach, scenario 1
use direction finding methods to determine how many signals there are and what their response vectors are
A is known: $A = [a_1 \cdots a_d]$
(figure: array response vectors $a(\alpha_1)$, $a(\alpha_2)$ as a function of direction $\alpha$)

6 deterministic approach, scenario 2
the sending signal makes use of a known training sequence, agreed in the protocol
S is known, goal: select W
alternatively, this could be the case via decision feedback
(figure: constellation of $s_k$, points at $\pm 1/\sqrt{2} \pm j/\sqrt{2}$)

7 deterministic approach
noiseless case: $x_k = A s_k \Leftrightarrow X = A S$
objective: find W such that $W^H X = S$, with A known
$X = A S \Leftrightarrow S = A^\dagger X$, with $A^\dagger = (A^H A)^{-1} A^H$
hence, we set $W^H = A^\dagger$
all interference is cancelled (if $M \ge d$): $W^H A = I_d$

8 deterministic approach
noiseless case: $x_k = A s_k \Leftrightarrow X = A S$
objective: find W such that $W^H X = S$, with S known
$W^H = S X^\dagger = S X^H (X X^H)^{-1}$; equivalently $A = X S^\dagger$ and $W = (A^\dagger)^H$
(after training, W is used to estimate the unknown S)
again, all interference is cancelled ($M \ge d$, $N \ge d$): $W^H A = I$

9 deterministic approach
noisy case: $X = A S + N$
two possible optimization criteria:
• model matching: adapt the model of A or S to minimize the residual $\|X - A S\|_F^2$
• output error minimization: minimize $\|W^H X - S\|_F^2$

10 deterministic approach
noisy case: $X = A S + N$
model matching: adapting the model of A (or S) to minimize the residual $\|X - A S\|_F^2$
(figure: A versus its estimate $\hat{A}$)

11 deterministic approach
noisy case: $X = A S + N$
output error minimization: minimize $\|W^H X - S\|_F^2$
(figure: block diagram $s_k \to A \to x_k$, noise $n_k$ added, then $x_k \to W \to y_k$, with error $e_k = y_k - s_k$)

12 model matching
with A known: zero-forcing solution
$\hat{S}$ such that $\|X - A \hat{S}\|_F^2$ is minimal
$\hat{S} = A^\dagger X \Rightarrow W^H = A^\dagger$
$W^H A = I$: all interference is cancelled, hence zero-forcing (under what conditions?)
the ZF beamformer maximizes the output Signal-to-Interference Ratio (SIR)

13 example model matching
the ZF beamformer satisfies $W^H A = I$
let $w_1$ be the first column of W, the beamformer of the first signal:
$W^H A = I \Rightarrow w_1^H [a_1 \cdots a_d] = [1\ 0 \cdots 0]$, i.e. $w_1 \perp \{a_2, \ldots, a_d\}$
thus, $w_1$ projects out all other sources, except source 1. but what about noise?
$y_1(t) = w_1^H x(t) = \sum_{i=1}^{d} w_1^H a_i\, s_i(t) + w_1^H n(t) = s_1(t) + w_1^H n(t)$
the effect on the noise is not considered!

14 model matching
zero-forcing solution: $W^H X = S + A^\dagger N$
the output noise depends on $A^\dagger$ and can be large, since
$A = U_A \Sigma_A V_A^H \Rightarrow A^\dagger = V_A \Sigma_A^{-1} U_A^H$
this happens if $\Sigma_A^{-1}$ is large, i.e. if A is ill-conditioned, e.g. if directions are very close
(figure: singular values $\sigma_1, \ldots, \sigma_d$)

15 model matching
(figure: vectors $\sigma_1 u_1$ and $\sigma_2 u_2$ for two close directions $a_1$, $a_2$)
to discriminate $a_1$ and $a_2$, the ZF beamformer amplifies noise in the direction of $u_2$

16 model matching
with S known:
$\hat{A}$ such that $\|X - \hat{A} S\|_F^2$ is minimal
$\hat{A} = X S^\dagger = X S^H (S S^H)^{-1}$
this does not specify the beamformer, but it is natural to set $W^H = \hat{A}^\dagger$

17 output error minimization
objective: minimize the output error with S known
W such that $\|W^H X - S\|_F^2$ is minimal: $W^H = S X^\dagger$
note that $X^\dagger = X^H (X X^H)^{-1}$, so that
$W^H = \tfrac{1}{N} S X^H \left(\tfrac{1}{N} X X^H\right)^{-1} = \hat{R}_{XS}^H \hat{R}_X^{-1}$, i.e. $W = \hat{R}_X^{-1} \hat{R}_{XS}$
$\hat{R}_X = \tfrac{1}{N} X X^H$: sample data covariance matrix
$\hat{R}_{XS} = \tfrac{1}{N} X S^H$: sample correlation between the sources and the received data

18 output error minimization
objective: minimize the output error $\|W^H X - S\|_F^2$ with A known and assuming independent sources
observe that $\tfrac{1}{N} S S^H \to I$ and $\tfrac{1}{N} S N^H \to 0$ as $N \to \infty$, so
$\hat{R}_{XS} = \tfrac{1}{N} X S^H = A\, \tfrac{1}{N} S S^H + \tfrac{1}{N} N S^H \to A$
$W = \hat{R}_X^{-1} \hat{R}_{XS} \to R_X^{-1} A$
$R_X = \mathrm{E}[x_k x_k^H]$ is the true data covariance matrix.
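The two deterministic receivers above, zero-forcing with A known (slide 12) and the training-based least-squares receiver (slide 17), can be checked numerically. A minimal NumPy sketch; the array geometry, source angles, QPSK constellation and noise level are illustrative assumptions, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, N = 6, 2, 500                       # antennas, sources, snapshots

# hypothetical uniform linear array response a(alpha) = [1, e^{j*pi*sin(alpha)}, ...]
def steer(alpha):
    return np.exp(1j * np.pi * np.sin(alpha) * np.arange(M))

A = np.column_stack([steer(0.3), steer(-0.5)])                     # M x d
S = ((rng.integers(0, 2, (d, N)) * 2 - 1)
     + 1j * (rng.integers(0, 2, (d, N)) * 2 - 1)) / np.sqrt(2)     # unit-power QPSK

# --- zero forcing, noiseless: W^H = A†, so W^H A = I_d and W^H X = S exactly
X0 = A @ S
WH_zf = np.linalg.pinv(A)
assert np.allclose(WH_zf @ A, np.eye(d))   # all interference cancelled
assert np.allclose(WH_zf @ X0, S)          # exact source recovery

# --- training-based receiver, noisy data: W = R̂_X^{-1} R̂_XS equals W^H = S X†
Noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + Noise
Rx = X @ X.conj().T / N                    # sample covariance R̂_X
Rxs = X @ S.conj().T / N                   # sample correlation R̂_XS
W = np.linalg.solve(Rx, Rxs)
assert np.allclose(W.conj().T, S @ np.linalg.pinv(X))   # identical to S X†
```

Both identities hold to machine precision; with noise present, $W^H A$ is only close to $I_d$, not equal to it, which is exactly the ZF/Wiener distinction made on slide 19.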
with finite samples, we take the estimate from $X X^H$:
$W = \hat{R}_X^{-1} A$

19 deterministic approach
$W = \hat{R}_X^{-1} A$
this is the Linear Minimum Mean Square Error (LMMSE) or Wiener receiver.
• it maximizes the Signal-to-Interference-plus-Noise Ratio (SINR) at the output
• it does not cancel all interference: $W^H A \ne I$ (because the loss in interference is compensated by the gain in removing noise)

20 Wiener filtering
remember that for a convolution filter h(t) the optimal receiver in terms of output signal-to-noise ratio is given by:
$G^*(f) = \dfrac{H(f)\, S(f)}{|H(f)|^2 S(f) + N(f)}$
spectral decomposition of the array receiver: $X = A S + N$, $A = U \Sigma V^H$
• columns of V are vectors in 'signal-space'
• that obtain the same attenuation (singular values of $\Sigma$)
• their impact in 'antenna-space' are the columns of U

21 spectral decomposition
$A = U \Sigma V^H$
(figure: A maps the signal-space basis to the vectors $\sigma_1 u_1$, $\sigma_2 u_2$ in antenna-space)

22 Wiener filtering
spectral decomposition of the array receiver: $X = A S + N$, $A = U \Sigma V^H$
define $X^* = U^H X$, $S^* = V^H S$, $N^* = U^H N$; then
$X^* = U^H (A S + N) = U^H U \Sigma V^H S + U^H N = \Sigma S^* + N^*$

23 Wiener filtering
signal and noise are 'spatially' white in (transformed) signal-space and antenna-space respectively:
$R_{n^*} = \mathrm{E}[n^* n^{*H}] = \mathrm{E}[(U^H n)(U^H n)^H] = U^H\, \mathrm{E}[n n^H]\, U = U^H \sigma_n^2 I\, U = \sigma_n^2 I$
$R_{s^*} = \mathrm{E}[s^* s^{*H}] = \mathrm{E}[(V^H s)(V^H s)^H] = V^H\, \mathrm{E}[s s^H]\, V = V^H I V = I$

24 Wiener filtering
covariance of the transformed antenna samples:
$R_{x^*} = \mathrm{E}[x^* x^{*H}] = \mathrm{E}[(\Sigma s^* + n^*)(\Sigma s^* + n^*)^H] = \Sigma\, \mathrm{E}[s^* s^{*H}]\, \Sigma^H + \mathrm{E}[n^* n^{*H}] = \Sigma \Sigma^H + \sigma_n^2 I = \Sigma^2 + \sigma_n^2 I$
a diagonal matrix! in this space, 'virtual' antenna elements are independent!
$R_{x^* s^*} = \mathrm{E}[x^* s^{*H}] = \mathrm{E}[(\Sigma s^* + n^*) s^{*H}] = \Sigma\, \mathrm{E}[s^* s^{*H}] + \mathrm{E}[n^* s^{*H}] = \Sigma + 0 = \Sigma$

25 Wiener filtering
thus, the Wiener receiver in the transformed space is
$W^{*H} X^* = S^* \Rightarrow W^{*H} = S^* X^{*\dagger}$
$W^{*H} = \tfrac{1}{N} S^* X^{*H} \left(\tfrac{1}{N} X^* X^{*H}\right)^{-1} = \hat{R}_{X^*S^*}^H \hat{R}_{X^*}^{-1}$, i.e. $W^* = \hat{R}_{X^*}^{-1} \hat{R}_{X^*S^*}$
$W^* \to (\Sigma^2 + \sigma_n^2 I)^{-1} \Sigma$, a diagonal matrix (!), with
$w^*_{kk} = \dfrac{\sigma_k}{\sigma_k^2 + \sigma_n^2}$; compare: $G^*(f) = \dfrac{H(f)\, S(f)}{|H(f)|^2 S(f) + N(f)}$
(in this space, beam forming is trivial)
$y = V y^* = V W^{*H} X^* = V W^{*H} U^H x$, so $W = U W^* V^H$

26 comparing receivers
(figure: output error and residual interference of the zero-forcing receiver (solid) versus the Wiener receiver (dashed); what about ZF?)

27 comparing receivers
$A = U \Sigma V^H$, $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_d)$
• zero forcing: $W_{ZF} = U \tilde{W}_{ZF} V^H$, $\tilde{W}_{ZF}$ diagonal with $\tilde{w}_{ZF,kk} = \dfrac{1}{\sigma_k}$
• Wiener receiver: $W_{Wiener} = U \tilde{W}_{Wiener} V^H$, $\tilde{W}_{Wiener}$ diagonal with $\tilde{w}_{Wiener,kk} = \dfrac{\sigma_k}{\sigma_k^2 + \sigma_n^2}$

28 stochastic approach
assume a model with 1 source:
$x_k = a s_k + n_k$, $y_k = w^H x_k = (w^H a) s_k + w^H n_k$
we make the following assumptions: $\mathrm{E}[|s_k|^2] = 1$, $\mathrm{E}[s_k n_k^H] = 0$
and define (spatial noise 'color'): $R_n := \mathrm{E}[n_k n_k^H]$
so that
$\mathrm{E}[|y_k|^2] = \mathrm{E}[(w^H a\, s_k + w^H n_k)(w^H a\, s_k + w^H n_k)^H] = w^H a a^H w + w^H R_n w$

29 stochastic approach
$\mathrm{E}[|y|^2] = w^H a a^H w + w^H R_n w$ (signal term plus noise term)
so the Signal to Noise Ratio (SNR) at the output is:
$\mathrm{SNR}_{out}(w) = \dfrac{\mathrm{E}[|(w^H a) s_k|^2]}{\mathrm{E}[|w^H n_k|^2]} = \dfrac{w^H a a^H w}{w^H R_n w}$

30 stochastic approach
two stochastic optimization criteria:
• maximum likelihood — the likelihood of a set of data X is the probability (density) of obtaining that particular set of data, given the transmitted signal S. the Maximum Likelihood estimate of S is the value of S that maximizes the probability of receiving that particular X. (compare deterministic model matching)
• stochastic output error minimization — the error at the output y of the receiver is a stochastic process, which depends on the weight vector w. choose the weight vector w which minimizes the variance of this process. (compare deterministic output error minimization)

31 stochastic model matching
assume a model with d sources:
$x_k = A s_k + n_k \ (k = 1, \ldots, N) \Leftrightarrow X = A S + N$
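The comparison on slide 27 — both receivers share the singular vectors of A and differ only in the diagonal — can be verified directly, together with the identity $R_x^{-1} A = U\,\mathrm{diag}(\sigma_k/(\sigma_k^2+\sigma_n^2))\,V^H$ for the true covariance $R_x = A A^H + \sigma_n^2 I$ (unit-power sources, white noise). A sketch; the random A and the noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d, sigma_n = 6, 2, 0.5
A = rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d))

U, s, Vh = np.linalg.svd(A, full_matrices=False)     # economy SVD: A = U Σ V^H

# zero forcing: W_ZF^H = A†, i.e. W_ZF = U diag(1/σ_k) V^H
W_zf = np.linalg.pinv(A).conj().T
assert np.allclose(W_zf, U @ np.diag(1 / s) @ Vh)

# Wiener: W = R_x^{-1} A with the true covariance R_x = A A^H + σ_n² I
Rx = A @ A.conj().T + sigma_n**2 * np.eye(M)
W_wiener = np.linalg.solve(Rx, A)
assert np.allclose(W_wiener, U @ np.diag(s / (s**2 + sigma_n**2)) @ Vh)
```

Only the diagonal differs: $1/\sigma_k$ for ZF versus $\sigma_k/(\sigma_k^2 + \sigma_n^2)$ for Wiener, which is why the Wiener receiver stops amplifying noise on weak (small $\sigma_k$) modes.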
assume $s_k$ to be deterministic, and the noise i.i.d. in time (temporally white), spatially white ($R_n = \sigma^2 I$) and jointly complex Gaussian distributed:
$n_k \sim \mathcal{CN}(0, \sigma^2 I) \Leftrightarrow p(n_k) = \dfrac{1}{2\pi\sigma^2}\, e^{-\|n_k\|^2 / \sigma^2}$
(figure: surface plot of the complex Gaussian density)

32 stochastic model matching
$n_k = x_k - A s_k$, so the probability (density) to receive a certain vector $x_k$, when $s_k$ has been transmitted, is:
$p(x_k | s_k) = p(n_k = x_k - A s_k) = \dfrac{1}{2\pi\sigma^2}\, e^{-\|x_k - A s_k\|^2 / \sigma^2}$
$p(X | S) = \prod_{k=1}^{N} \dfrac{1}{2\pi\sigma^2}\, e^{-\|x_k - A s_k\|^2 / \sigma^2} = \left(\dfrac{1}{2\pi\sigma^2}\right)^{N} e^{-\sum_k \|x_k - A s_k\|^2 / \sigma^2} = \mathrm{const} \cdot e^{-\|X - A S\|_F^2 / \sigma^2}$
(p(X|S) is the likelihood of receiving a certain data matrix X, for a given transmitted data matrix S)

33 stochastic model matching
$p(X|S) = \mathrm{const} \cdot e^{-\|X - A S\|_F^2 / \sigma^2}$
deterministic maximum likelihood technique: estimate S as the one which maximizes the likelihood of the actually received X
$\hat{S}$ such that $p(X | \hat{S})$ is maximal
$\Leftrightarrow \hat{S}$ such that $e^{-\|X - A \hat{S}\|_F^2 / \sigma^2}$ is maximal
$\Leftrightarrow \hat{S}$ such that $\|X - A \hat{S}\|_F^2$ is minimal
for white Gaussian noise, Maximum Likelihood is equivalent to deterministic model matching (hence, the solutions are also the same)

34 stochastic derivation
stochastic output error minimization: minimize the Linear Minimum Mean Square Error cost
$J(w) = \mathrm{E}\,|w^H x_k - s_k|^2$
it can be worked out as follows:
$J(w) = \mathrm{E}\,[(w^H x_k - s_k)(w^H x_k - s_k)^H]$
$= w^H \mathrm{E}[x_k x_k^H]\, w - w^H \mathrm{E}[x_k s_k^*] - \mathrm{E}[s_k x_k^H]\, w + \mathrm{E}\,|s_k|^2$
$= w^H R_x w - w^H a - a^H w + \mathrm{E}\,|s_k|^2$, using $\mathrm{E}[x_k s_k^*] = a$
if $s_k$ is regarded stochastic with $\mathrm{E}[|s_k|^2] = 1$, then
$J(w) = w^H R_x w - w^H a - a^H w + 1$

35 stochastic output error minimization
$J(w) = w^H R_x w - w^H a - a^H w + 1$
differentiate with respect to w: let $w = u - jv$ with u and v real valued; then the gradient is
$\nabla_w J = \tfrac{1}{2} \nabla_u J - \tfrac{j}{2} \nabla_v J$
with properties $\nabla_w (w^H a) = a$, $\nabla_w (a^H w) = 0$, $\nabla_w (w^H R_x w) = R_x w$
$\Rightarrow \nabla_w J = R_x w - a$
the minimum of J(w) is attained for $\nabla_w J = 0 \Rightarrow w = R_x^{-1} a$
we thus obtain the Wiener receiver

36 stochastic output error minimization
the minimum of J(w) is attained for the Wiener receiver $w = R_x^{-1} a$; the expected output error becomes:
$J_{\min} = a^H R_x^{-1} R_x R_x^{-1} a - a^H R_x^{-1} a - a^H R_x^{-1} a + 1 = 1 - a^H R_x^{-1} a$
$= 1 - a^H (\sigma_s^2\, a a^H + R_n)^{-1} a = 1 - a^H (a a^H + \sigma_n^2 I)^{-1} a$

37 stochastic output error minimization
the expected output error becomes:
$J_{\min} = 1 - a^H (a a^H + \sigma_n^2 I)^{-1} a$
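The last steps can be checked numerically: the quadratic cost $J(w)$ is indeed minimized at $w = R_x^{-1} a$, and for $R_x = a a^H + \sigma_n^2 I$ the residual error collapses, via the matrix inversion lemma (a step the slides do not spell out), to the scalar $\sigma_n^2/(\sigma_n^2 + \|a\|^2)$. A sketch with an arbitrary random response vector a (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
M, sigma_n = 5, 0.7
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)
Rx = np.outer(a, a.conj()) + sigma_n**2 * np.eye(M)   # R_x = a a^H + σ_n² I

def J(w):
    # J(w) = w^H R_x w − w^H a − a^H w + 1   (slide 34)
    return (np.vdot(w, Rx @ w) - np.vdot(w, a) - np.vdot(a, w) + 1).real

w_opt = np.linalg.solve(Rx, a)                        # ∇_w J = R_x w − a = 0

# any perturbation increases the cost: w = R_x^{-1} a is the global minimum
for _ in range(100):
    dw = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    assert J(w_opt) < J(w_opt + dw)

# J_min = 1 − a^H R_x^{-1} a, and by the matrix inversion lemma
# (a a^H + σ_n² I)^{-1} a = a / (σ_n² + ‖a‖²), so J_min = σ_n² / (σ_n² + ‖a‖²)
Jmin = J(w_opt)
assert np.isclose(Jmin, 1 - np.vdot(a, w_opt).real)
assert np.isclose(Jmin, sigma_n**2 / (sigma_n**2 + np.linalg.norm(a)**2))
```

The closed form makes the behaviour visible: the residual error goes to 0 as the array gain $\|a\|^2$ grows and to 1 as the noise dominates.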