Digital Signal Processing Tom Davinson
PDF, 16 pages, 1020 KB
Recommended publications
Moving Average Filters
CHAPTER 15: Moving Average Filters

The moving average is the most common filter in DSP, mainly because it is the easiest digital filter to understand and use. In spite of its simplicity, the moving average filter is optimal for a common task: reducing random noise while retaining a sharp step response. This makes it the premier filter for time domain encoded signals. However, the moving average is the worst filter for frequency domain encoded signals, with little ability to separate one band of frequencies from another. Relatives of the moving average filter include the Gaussian, Blackman, and multiple-pass moving average. These have slightly better performance in the frequency domain, at the expense of increased computation time.

Implementation by Convolution

As the name implies, the moving average filter operates by averaging a number of points from the input signal to produce each point in the output signal. In equation form, this is written (Equation 15-1):

$$y[i] = \frac{1}{M} \sum_{j=0}^{M-1} x[i+j]$$

where x[ ] is the input signal, y[ ] is the output signal, and M is the number of points used in the average. This equation only uses points on one side of the output sample being calculated. For example, in a 5 point moving average filter, point 80 in the output signal is given by:

$$y[80] = \frac{x[80] + x[81] + x[82] + x[83] + x[84]}{5}$$

As an alternative, the group of points from the input signal can be chosen symmetrically around the output point:

$$y[80] = \frac{x[78] + x[79] + x[80] + x[81] + x[82]}{5}$$

This corresponds to changing the summation limits in Eq. 15-1.
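As a concrete illustration of Equation 15-1, here is a short Python sketch (an addition of this edit, not part of the original chapter) that implements the one-sided moving average directly and also by convolution with a length-M kernel of 1/M values; the two agree up to edge handling.

```python
import numpy as np

def moving_average(x, M):
    """One-sided moving average (Eq. 15-1): y[i] = mean(x[i], ..., x[i+M-1])."""
    x = np.asarray(x, dtype=float)
    y = np.empty(len(x) - M + 1)
    for i in range(len(y)):
        y[i] = x[i:i + M].mean()
    return y

def moving_average_conv(x, M):
    # Equivalent implementation by convolution with a rectangular kernel of height 1/M.
    kernel = np.ones(M) / M
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(0)
x = np.where(np.arange(200) < 100, 0.0, 1.0) + 0.2 * rng.standard_normal(200)  # noisy step
print(np.allclose(moving_average(x, 5), moving_average_conv(x, 5)))  # True
```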
Generalizing Sampling Theory for Time-Varying Nyquist Rates Using Self-Adjoint Extensions of Symmetric Operators with Deficiency Indices (1,1) in Hilbert Spaces
By Yufang Hao. A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Applied Mathematics. Waterloo, Ontario, Canada, 2011. © Yufang Hao 2011.

I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public.

Abstract

Sampling theory studies the equivalence between continuous and discrete representations of information. This equivalence is ubiquitously used in communication engineering and signal processing. For example, it allows engineers to store continuous signals as discrete data on digital media. The classical sampling theorem, also known as the theorem of Whittaker-Shannon-Kotel'nikov, enables one to perfectly and stably reconstruct continuous signals with a constant bandwidth from their discrete samples at a constant Nyquist rate. The Nyquist rate depends on the bandwidth of the signals, namely, the frequency upper bound. Intuitively, a signal's 'information density' and 'effective bandwidth' should vary in time. Adjusting the sampling rate accordingly should improve the sampling efficiency and information storage. While this old idea has been pursued in numerous publications, fundamental problems have remained: How can a reliable concept of time-varying bandwidth be defined? How can samples taken at a time-varying Nyquist rate lead to perfect and stable reconstruction of the continuous signals? This thesis develops a new non-Fourier generalized sampling theory which takes samples only as often as necessary at a time-varying Nyquist rate and maintains the ability to perfectly reconstruct the signals.
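To make the classical theorem referenced in the abstract concrete, here is a brief sketch (not from the thesis; the signal and rates are made-up illustration values) of Whittaker-Shannon sinc reconstruction of a bandlimited signal from samples taken at a constant Nyquist rate.

```python
import numpy as np

B = 4.0            # signal bandwidth in Hz (illustrative)
fs = 2 * B         # constant Nyquist rate
T = 1 / fs

def f(t):
    # A bandlimited test signal (all components below B).
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(-200, 201)          # sample indices (truncated series)
samples = f(n * T)

def reconstruct(t):
    # Whittaker-Shannon interpolation: f(t) = sum_n f(nT) * sinc((t - nT)/T)
    return np.sum(samples * np.sinc((t - n * T) / T))

t0 = 0.3712
print(f(t0), reconstruct(t0))     # nearly equal away from the truncation edges
```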
Designing Filters Using the Digital Filter Design Toolkit Rahman Jamal, Mike Cerna, John Hanks
National Instruments Application Note 097: Designing Filters Using the Digital Filter Design Toolkit. Rahman Jamal, Mike Cerna, John Hanks.

Introduction

The importance of digital filters is well established. Digital filters, and more generally digital signal processing algorithms, are classified as discrete-time systems. They are commonly implemented on a general purpose computer or on a dedicated digital signal processing (DSP) chip. Due to their well-known advantages, digital filters are often replacing classical analog filters. In this application note, we introduce a new digital filter design and analysis tool implemented in LabVIEW with which developers can graphically design classical IIR and FIR filters, interactively review filter responses, and save filter coefficients. In addition, real-world filter testing can be performed within the digital filter design application using a plug-in data acquisition board.

Digital Filter Design Process

Digital filters are used in a wide variety of signal processing applications, such as spectrum analysis, digital image processing, and pattern recognition. Digital filters eliminate a number of problems associated with their classical analog counterparts and thus are preferably used in place of analog filters. Digital filters belong to the class of discrete-time LTI (linear time invariant) systems, which are characterized by the properties of causality, recursibility, and stability. They can be characterized in the time domain by their unit-impulse response, and in the transform domain by their transfer function. The unit-impulse response sequence of a causal LTI system can be of either finite or infinite duration, and this property determines its classification as either a finite impulse response (FIR) or infinite impulse response (IIR) system.
Finite Impulse Response (FIR) Digital Filters (II) Ideal Impulse Response Design Examples Yogananda Isukapalli
FIR Filter Design Problem: Given H(z) or H(e^{jω}), find the filter coefficients {b_0, b_1, b_2, ..., b_{N-1}}, which are equal to {h_0, h_1, h_2, ..., h_{N-1}} in the case of FIR filters. [Figure: direct-form FIR structure, a tapped delay line in which x[n] passes through a chain of z^{-1} delays, the taps are weighted by h_0, h_1, ..., h_{N-1}, and the weighted taps are summed to form y[n].]

Consider a general (infinite impulse response) definition:

$$H(z) = \sum_{n=-\infty}^{\infty} h[n]\, z^{-n}$$

From complex variable theory, the inverse transform is

$$h[n] = \frac{1}{2\pi j} \oint_C H(z)\, z^{n-1}\, dz$$

where C is a counterclockwise closed contour in the region of convergence of H(z) and encircling the origin of the z-plane.

Evaluating H(z) on the unit circle (z = e^{jω}):

$$H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h[n]\, e^{-jn\omega}, \qquad h[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega})\, e^{jn\omega}\, d\omega \quad (\text{where } dz = j e^{j\omega}\, d\omega)$$

Design of an ideal low-pass FIR digital filter. [Figure: ideal low-pass response, H(e^{jω}) = K for |ω| ≤ ω_c and 0 for ω_c < |ω| ≤ π, periodic in 2π.] Find the ideal low-pass impulse response {h[n]}:

$$h_{LP}[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega})\, e^{jn\omega}\, d\omega = \frac{1}{2\pi} \int_{-\omega_c}^{\omega_c} K e^{jn\omega}\, d\omega$$

Hence

$$h_{LP}[n] = \frac{K}{n\pi} \sin(n\omega_c), \qquad n = 0, \pm 1, \pm 2, \ldots$$

(at n = 0 the limit gives h_LP[0] = K ω_c / π).

Let K = 1, ω_c = π/4, n = 0, ±1, ..., ±10. The impulse response coefficients are:

n = 0: 0.250; n = ±1: 0.225; n = ±2: 0.159; n = ±3: 0.075; n = ±4: 0; n = ±5: -0.045; n = ±6: -0.053; n = ±7: -0.032; n = ±8: 0; n = ±9: 0.025; n = ±10: 0.032

Non-Causal FIR Impulse Response: this h_LP[n] is non-causal. We can make it causal if we shift h_LP[n] by 10 units to the right:

$$h_{LP}[n] = \frac{K}{(n-10)\pi} \sin\big((n-10)\,\omega_c\big), \qquad n = 0, 1, 2, \ldots$$
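The coefficient list above can be reproduced numerically; the following sketch (an illustration added here, not part of the original slides) evaluates h_LP[n] = K sin(n ω_c)/(n π) for K = 1 and ω_c = π/4, and also forms the causal, shifted 21-tap version.

```python
import numpy as np

K, wc = 1.0, np.pi / 4

def h_lp(n):
    # h_LP[n] = K*sin(n*wc)/(n*pi); written via np.sinc so the n = 0 limit (K*wc/pi) is handled.
    return K * (wc / np.pi) * np.sinc(np.asarray(n) * wc / np.pi)

n = np.arange(-10, 11)
print(dict(zip(n.tolist(), np.round(h_lp(n), 3))))  # 0.25 at n=0, 0.225 at n=+-1, 0 at n=+-4 and +-8

# Causal 21-tap version: shift h_LP[n] by 10 samples to the right (truncation implied).
h_causal = h_lp(np.arange(21) - 10)
```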
ELEG 5173L Digital Signal Processing Ch. 5 Digital Filters
Department of Electrical Engineering, University of Arkansas. Dr. Jingxian Wu ([email protected])

OUTLINE
• FIR and IIR Filters
• Filter Structures
• Analog Filters
• FIR Filter Design
• IIR Filter Design

FIR vs. IIR
• LTI discrete-time system
– Difference equation in the time domain:
$$y(n) = \sum_{k=1}^{N} a_k\, y(n-k) + \sum_{k=0}^{M} b_k\, x(n-k)$$
– Transfer function in the z-domain:
$$Y(z) = \sum_{k=1}^{N} a_k\, Y(z)\, z^{-k} + \sum_{k=0}^{M} b_k\, X(z)\, z^{-k}
\quad\Rightarrow\quad
H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k\, z^{-k}}{1 - \sum_{k=1}^{N} a_k\, z^{-k}}$$

• Finite impulse response (FIR)
– Difference equation in the time domain:
$$y(n) = \sum_{k=0}^{M} b_k\, x(n-k)$$
– Transfer function in the z-domain:
$$H(z) = \frac{Y(z)}{X(z)} = \sum_{k=0}^{M} b_k\, z^{-k}$$
– Impulse response: h(n) = [b_0, b_1, ..., b_M]. The impulse response is of finite length, hence finite impulse response.

• Infinite impulse response (IIR)
– Difference equation in the time domain:
$$y(n) = \sum_{k=1}^{N} a_k\, y(n-k) + \sum_{k=0}^{M} b_k\, x(n-k)$$
– Transfer function in the z-domain:
$$H(z) = \frac{\sum_{k=0}^{M} b_k\, z^{-k}}{1 - \sum_{k=1}^{N} a_k\, z^{-k}}$$
– The impulse response can be obtained through the inverse z-transform, and it has infinite length.

• Example
– Find the impulse response of the following system. Is it an FIR or IIR filter? Is it stable?
$$y(n) = \frac{1}{4}\, y(n-2) + x(n)$$
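A brief worked check of the example above (my own addition, not from the slides): iterating y(n) = (1/4) y(n-2) + x(n) with a unit impulse input gives h(n) = (1/4)^(n/2) for even n ≥ 0 and 0 otherwise, an infinite-length (IIR) response; the poles of H(z) = 1/(1 - 0.25 z^{-2}) at z = ±1/2 lie inside the unit circle, so the filter is stable. The sketch below simulates this.

```python
import numpy as np

def impulse_response(n_samples=10):
    """Simulate y(n) = 0.25*y(n-2) + x(n) for a unit impulse input."""
    x = np.zeros(n_samples)
    x[0] = 1.0
    y = np.zeros(n_samples)
    for n in range(n_samples):
        y[n] = (0.25 * y[n - 2] if n >= 2 else 0.0) + x[n]
    return y

h = impulse_response()
print(h)                      # [1, 0, 0.25, 0, 0.0625, 0, ...] -- nonzero terms never end: IIR
print(np.abs([0.5, -0.5]))    # pole magnitudes of H(z) = 1/(1 - 0.25 z^-2), both < 1: stable
```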
Fourier Analysis and Sampling Theory
Fourier analysis and sampling theory. Brian Curless, CSE 557, Fall 2009.

Reading. Required: Shirley, Ch. 9. Recommended: Ron Bracewell, The Fourier Transform and Its Applications, McGraw-Hill; Don P. Mitchell and Arun N. Netravali, "Reconstruction Filters in Computer Graphics," Computer Graphics (Proceedings of SIGGRAPH 88), 22(4), pp. 221-228, 1988.

What is an image? Images as functions. We can think of an image as a function, f, from R2 to R: f(x,y) gives the intensity of a channel at position (x,y). Realistically, we expect the image only to be defined over a rectangle, with a finite range: f: [a,b] x [c,d] → [0,1]. A color image is just three functions pasted together. We can write this as a "vector-valued" function:

$$f(x,y) = \begin{bmatrix} r(x,y) \\ g(x,y) \\ b(x,y) \end{bmatrix}$$

We'll focus on grayscale (scalar-valued) images for now.

Digital images. In computer graphics, we usually create or operate on digital (discrete) images: sample the space on a regular grid, then quantize each sample (round to the nearest integer). If our samples are Δ apart, we can write this as:

f[n,m] = Quantize{ f(nΔ, mΔ) }

Motivation: filtering and resizing. What if we now want to: smooth an image? sharpen an image? enlarge an image? shrink an image? In this lecture, we will explore the mathematical underpinnings of these operations.

Convolution. One of the most common methods for filtering a function, e.g., for smoothing or sharpening, is called convolution. Convolution in 2D: in two dimensions, convolution becomes:
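The excerpt cuts off before the 2D convolution formula. As a hedged completion (mine, not the slide's own equation), the discrete form commonly used for image filtering is g[n,m] = Σ_k Σ_l f[k,l] h[n-k, m-l]; the sketch below applies a small smoothing kernel this way.

```python
import numpy as np

def convolve2d(f, h):
    """Discrete 2D convolution g[n,m] = sum_k sum_l f[k,l]*h[n-k,m-l] ('valid' region only)."""
    kh, kw = h.shape
    H = h[::-1, ::-1]                       # flip the kernel for true convolution
    rows, cols = f.shape[0] - kh + 1, f.shape[1] - kw + 1
    g = np.zeros((rows, cols))
    for n in range(rows):
        for m in range(cols):
            g[n, m] = np.sum(f[n:n + kh, m:m + kw] * H)
    return g

image = np.random.default_rng(1).random((6, 6))     # toy grayscale image
box = np.ones((3, 3)) / 9.0                          # 3x3 smoothing (box) kernel
print(convolve2d(image, box))                        # smoothed 4x4 'valid' output
```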
Filter Effects and Filter Artifacts in the Analysis of Electrophysiological Data
GENERAL COMMENTARY, published 09 July 2012, doi: 10.3389/fpsyg.2012.00233

Filter effects and filter artifacts in the analysis of electrophysiological data. Andreas Widmann* and Erich Schröger, Institute of Psychology, University of Leipzig, Leipzig, Germany. *Correspondence: [email protected]. Edited by: Guillaume A. Rousselet, University of Glasgow, UK. Reviewed by: Rufin VanRullen, Centre de Recherche Cerveau et Cognition, France; Lucas C. Parra, City College of New York, USA.

A commentary on: "Four conceptual fallacies in mapping the time course of recognition," by VanRullen, R. (2011). Front. Psychol. 2:365. doi: 10.3389/fpsyg.2011.00365; and "Does filtering preclude us from studying ERP time-courses?" by Rousselet, G. A. (2012). Front. Psychol. 3:131. doi: 10.3389/fpsyg.2012.00131.

... unity gain at DC (the step response never returns to one). These artifacts are due to a known misconception in FIR filter design in EEGLAB. The artifacts are further amplified by filtering twice, forward and backward, to achieve zero-phase. With more appropriate filters, the underestimation of signal onset latency due to the smoothing effect of low-pass filtering could be narrowed down to about 4-12 ms in the simulated dataset (see Figure 1 and Appendix for a simulation), that is, about an order of magnitude smaller than assumed.

FILTER EFFECTS VS. FILTER ARTIFACTS. We also recommend distinguishing between filter effects, that is, the obligatory effects any filter with equivalent properties (cutoff frequency, roll-off, ripple, and attenuation) would have on the data (e.g., smoothing of transients as demonstrated by the filter's step response), and filter artifacts, that is, effects which can be minimized by selection of filter type and parameters (e.g., ringing).

CAUSAL FILTERING. In a recent review, VanRullen (2011) ...
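To illustrate the distinction the commentary draws (this is not the authors' code, nor the EEGLAB implementation they critique; the cutoff, length, and threshold are arbitrary illustration values), the sketch below low-pass filters a step once causally and once forward-backward (zero-phase) and compares crude onset estimates.

```python
import numpy as np
from scipy.signal import firwin, lfilter, filtfilt

fs = 500.0                                  # sampling rate in Hz (illustrative)
h = firwin(101, 30.0, fs=fs)                # linear-phase low-pass FIR, 30 Hz cutoff

step = np.r_[np.zeros(300), np.ones(300)]   # step at sample 300

causal = lfilter(h, 1.0, step)              # one-pass (causal) filtering: delayed, smoothed step
zero_phase = filtfilt(h, 1.0, step)         # forward-backward filtering: no delay, but the
                                            # smoothing now spreads symmetrically before the step

onset = lambda y: np.argmax(y > 0.01)       # crude onset estimate: first sample above threshold
print(onset(step), onset(causal), onset(zero_phase))  # zero-phase onset appears earlier than the true step
```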
Digital Signal Processing Filter Design
2065-27 Advanced Training Course on FPGA Design and VHDL for Hardware Simulation and Synthesis, 26 October - 20 November 2009. Digital Signal Processing Filter Design. Massimiliano Nolich, DEEI, Facolta' di Ingegneria, Universita' degli Studi di Trieste, via Valerio 10, 34127 Trieste, Italy.

1 Design considerations: a framework

[Figure: filter tolerance scheme for |H(f)|, showing passband ripple δp, stopband ripple δs, passband edge fp, stopband edge fs, and the passband, transition band, and stopband regions.]

The design of a digital filter involves five steps:

Specification: The characteristics of the filter often have to be specified in the frequency domain. For example, for frequency selective filters (lowpass, highpass, bandpass, etc.) the specification usually involves tolerance limits as shown above.

Coefficient calculation: Approximation methods have to be used to calculate the values h[k] for an FIR implementation, or a_k, b_k for an IIR implementation. Equivalently, this involves finding a filter which has H(z) satisfying the requirements.

Realisation: This involves converting H(z) into a suitable filter structure. Block or flow diagrams are often used to depict filter structures, and show the computational procedure for implementing the digital filter.

Analysis of finite wordlength effects: In practice one should check that the quantisation used in the implementation does not degrade the performance of the filter to a point where it is unusable.

Implementation: The filter is implemented in software or hardware. The criteria for selecting the implementation method involve issues such as real-time performance, complexity, processing requirements, and availability of equipment.

2 Finite impulse response (FIR) filter design

A FIR filter is characterised by the equations

$$y[n] = \sum_{k=0}^{N-1} h[k]\, x[n-k], \qquad H(z) = \sum_{k=0}^{N-1} h[k]\, z^{-k}.$$

The following are useful properties of FIR filters: they are always stable, since the system function contains no poles.
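As a small illustration of the specification and coefficient-calculation steps (a sketch under assumed tolerance values, not taken from these notes), the snippet below turns a passband/stopband specification into a Kaiser-window FIR design using SciPy's kaiserord and firwin helpers, then checks the realised stopband attenuation.

```python
import numpy as np
from scipy.signal import kaiserord, firwin, freqz

fs = 8000.0                   # sampling rate in Hz (assumed)
fp, fstop = 1000.0, 1500.0    # passband and stopband edges in Hz (assumed)
atten_db = 60.0               # required stopband attenuation in dB (assumed)

# Coefficient calculation: estimate the order and Kaiser beta from the spec,
# then compute the impulse response h[k].
numtaps, beta = kaiserord(atten_db, (fstop - fp) / (fs / 2))
h = firwin(numtaps, (fp + fstop) / 2, window=("kaiser", beta), fs=fs)

# Check the realised response against the tolerance scheme.
w, H = freqz(h, worN=2048, fs=fs)
stopband_db = 20 * np.log10(np.max(np.abs(H[w >= fstop])))
print(numtaps, stopband_db)   # stopband gain in dB, roughly -60 dB
```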
Lecture 2 - Digital Filters
2.1 Digital filtering

Digital filtering can be implemented either in hardware or software; in the first case, the numerical processor is either a special-purpose chip or it is assembled out of a set of digital integrated circuits which provide the essential building blocks of a digital filtering operation: storage, delay, addition/subtraction and multiplication by constants. On the other hand, a general-purpose mini- or micro-computer can also be programmed as a digital filter, in which case the numerical processor is the computer's CPU, GPU and memory. Filtering may be applied to signals which are then transformed back to the analogue domain using a digital to analogue convertor, or worked with entirely in the digital domain.

2.1.1 Reasons for using a digital rather than an analogue filter

• The numerical processor can easily be (re-)programmed to implement a number of different filters.
• The accuracy of a digital filter is dependent only on the round-off error in the arithmetic. This has two advantages:
– the accuracy is predictable and hence the performance of the digital signal processing algorithm is known a priori.
– the round-off error can be minimized with appropriate design techniques, and hence digital filters can meet very tight specifications on magnitude and phase characteristics (which would be almost impossible to achieve with analogue filters because of component tolerances and circuit noise).
• The widespread use of mini- and micro-computers in engineering has greatly increased the number of digital signals recorded and processed. Power supply and temperature variations have no effect on a programme stored in a computer.
Lecture 5 - Digital Filters
5.1 Digital filtering

Digital filtering can be implemented either in hardware or software; in the first case, the numerical processor is either a special-purpose chip or it is assembled out of a set of digital integrated circuits which provide the essential building blocks of a digital filtering operation: storage, delay, addition/subtraction and multiplication by constants. On the other hand, a general-purpose mini- or micro-computer can also be programmed as a digital filter, in which case the numerical processor is the computer's CPU and memory. Filtering may be applied to signals which are then transformed back to the analogue domain using a digital to analogue convertor, or worked with entirely in the digital domain (e.g. biomedical signals are digitised, filtered and analysed to detect some abnormal activity).

5.1.1 Reasons for using a digital rather than an analogue filter

• The numerical processor can easily be (re-)programmed to implement a number of different filters.
• The accuracy of a digital filter is dependent only on the round-off error in the arithmetic. This has two advantages:
– the accuracy is predictable and hence the performance of the digital signal processing algorithm is known a priori.
– the round-off error can be minimized with appropriate design techniques, and hence digital filters can meet very tight specifications on magnitude and phase characteristics (which would be almost impossible to achieve with analogue filters because of component tolerances and circuit noise).
• The widespread use of mini- and micro-computers in engineering has greatly increased the number of digital signals recorded and processed.
Aliasing, Image Sampling and Reconstruction
Aliasing, Image Sampling and Reconstruction. (Video: https://youtu.be/yr3ngmRuGUc) Many of the slides are taken from Thomas Funkhouser course slides and the rest from various sources over the web…

Recall: a pixel is a point…
• It is NOT a box, disc or teeny wee light
• It has no dimension
• It occupies no area
• It can have a coordinate
• More than a point, it is a SAMPLE

Image Sampling
• An image is a 2D rectilinear array of samples
• Quantization due to limited intensity resolution
• Sampling due to limited spatial and temporal resolution
• Pixels are infinitely small point samples

Imaging devices area sample.
• In a video camera the CCD array is an area integral over a pixel.
• The eye: photoreceptors. (J. Liang, D. R. Williams and D. Miller, "Supernormal vision and high-resolution retinal imaging through adaptive optics," J. Opt. Soc. Am. A 14, 2884-2892 (1997))

Aliasing. [Image slides: "Good Old Aliasing" (a reconstruction artefact) and "Sampling and Reconstruction", slide © Rosalee Nerheim-Wolfe.]

Sources of Error
• Intensity quantization: not enough intensity resolution
• Spatial aliasing: not enough spatial resolution
• Temporal aliasing: not enough temporal resolution
(Under-sampling; Figure 14.17, FvDFH)

Aliasing (in general)
• In general: artifacts due to under-sampling or poor reconstruction
• Specifically, in graphics: spatial aliasing and temporal aliasing

Sampling & Aliasing. Spatial Aliasing
• Artifacts due to limited spatial resolution
• Real world is continuous
• The computer world is discrete
• Mapping a continuous function to a …
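To make under-sampling concrete (my own numeric example, not from the slides), the sketch below samples a 7 Hz sinusoid at 10 samples per second; because 7 Hz is above the 5 Hz Nyquist limit, the samples are indistinguishable from those of a 3 Hz alias.

```python
import numpy as np

fs = 10.0                      # sampling rate (samples per second)
t = np.arange(0, 1, 1 / fs)    # one second of sample times

f_true, f_alias = 7.0, 3.0     # 7 Hz is above Nyquist (fs/2 = 5 Hz); its alias is |7 - 10| = 3 Hz
x_true = np.sin(2 * np.pi * f_true * t)
x_alias = np.sin(2 * np.pi * f_alias * t)

# The two sampled sequences agree (up to sign), so reconstruction cannot tell them apart.
print(np.allclose(x_true, -x_alias))
```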
Oversampling with Noise Shaping: Place the Quantizer in a Feedback Loop
Oversampling Converters. David Johns and Ken Martin, University of Toronto ([email protected], [email protected]). © D.A. Johns, K. Martin, 1997.

Motivation
• Popular approach for medium-to-low speed A/D and D/A applications requiring high resolution.
• Easier analog: reduced matching tolerances, relaxed anti-aliasing specs, relaxed smoothing filters.
• More digital signal processing: needs to perform strict anti-aliasing or smoothing filtering; also removes shaped quantization noise and performs decimation (or interpolation).

Quantization Noise
• Quantizer model: the quantizer maps x(n) to y(n) and is modelled as adding an error signal, y(n) = x(n) + e(n), where e(n) = y(n) - x(n).
• The above model is exact; an approximation is made only when assumptions are made about e(n).
• Often assume e(n) is a white, uniformly distributed number between ±Δ/2, where Δ is the difference between two quantization levels.
• The white noise assumption is reasonable when: the quantization levels are fine; the signal crosses through many levels between samples; and the sampling rate is not synchronized to the signal frequency. A sample lands somewhere in the quantization interval, leading to a random error of ±Δ/2.
• The quantization noise power is shown to be Δ²/12 and is independent of the sampling frequency.
• If white, then the spectral density of the noise, S_e(f), is constant over -f_s/2 ≤ f ≤ f_s/2 with height
$$k_x = \sqrt{\frac{\Delta^2}{12} \cdot \frac{1}{f_s}}$$

Oversampling Advantage
• Oversampling occurs when the signal of interest is bandlimited to f_0 but we sample higher than 2f_0.
• Define the oversampling rate OSR = f_s / (2 f_0). (1)
• After quantizing the input signal, pass it through a brickwall digital filter with passband up to f_0: u(n) goes through the N-bit quantizer to give y_1(n), which goes through H(f) to give y_2(n), where |H(f)| = 1 for |f| ≤ f_0 and 0 elsewhere on -f_s/2 ≤ f ≤ f_s/2.
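A quick numerical check of these statements (an illustration added here, with arbitrary parameters): quantize a busy signal with step Δ, verify the total noise power is close to Δ²/12, and show that an ideal brickwall filter with passband f_0 keeps only about 1/OSR of that noise power, i.e. roughly 3 dB (0.5 bit) of resolution gained per doubling of the OSR for plain oversampling without noise shaping.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 2.0 / 2**8                 # quantizer step for an 8-bit quantizer over [-1, 1)
fs, f0 = 64.0, 1.0                 # sampling rate and signal band (arbitrary units), OSR = 32
OSR = fs / (2 * f0)

x = rng.uniform(-1, 1, 2**16)      # busy input so the white-noise model holds
y = delta * np.round(x / delta)    # uniform quantizer
e = y - x

print(np.mean(e**2), delta**2 / 12)                  # total noise power ~= delta^2/12

# In-band noise after an ideal brickwall low-pass with cutoff f0 (FFT-domain mask).
E = np.fft.rfft(e)
freqs = np.fft.rfftfreq(len(e), d=1 / fs)
E[freqs > f0] = 0.0
e_inband = np.fft.irfft(E, n=len(e))
print(np.mean(e_inband**2), (delta**2 / 12) / OSR)   # in-band power ~= (delta^2/12)/OSR
```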