The Case for Oversampling


EE247 Lecture 24 – Oversampled ADCs
– Why oversampling?
– Pulse-count modulation
– Sigma-delta modulation
  • 1-bit quantization
  • Quantization error (noise) spectrum
  • SQNR analysis
  • Limit cycle oscillations
– 2nd-order ΣΔ modulator
  • Dynamic range
  • Practical implementation
– Effect of various nonidealities on ΣΔ performance

EECS 247 Lecture 24: Oversampling Data Converters © 2005 H. K. Page 1

The Case for Oversampling

[Figure: two signal chains, each anti-alias filter → sampler → ADC → DSP. Nyquist sampling uses a "narrow"-transition anti-alias filter and fs > 2B + δ; oversampling uses a "wide"-transition filter and fs = M·fN >> fN.]
• Nyquist rate: fN = 2B
• Oversampling ratio: M = fs/fN >> 1

Nyquist vs. Oversampled Converters

[Figure: input signal spectrum |X(f)| with bandwidth fB. Nyquist-rate sampling (fs ≈ 2fB) requires a sharp anti-aliasing filter cutting off just above fB; oversampling (fs >> 2fB) leaves a wide gap between fB and fs − fB, so a gentle anti-aliasing filter suffices.]

Oversampling Benefits

• No stringent requirements imposed on analog building blocks
• Takes advantage of the availability of low-cost, low-power digital filtering
• Relaxed transition-band requirements for analog anti-aliasing filters
• Reduced baseband quantization noise power
• Allows trading speed for resolution

ADC Baseband Noise

For a quantizer with step size Δ and sampling rate fs:
– Quantization noise power is distributed uniformly across the Nyquist bandwidth (−fs/2 to +fs/2)
– Power spectral density:
$$N_e(f) = \frac{\overline{e^2}}{f_s} = \frac{\Delta^2}{12}\,\frac{1}{f_s}$$
– Noise is aliased into the Nyquist band −fs/2 to +fs/2
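The white-noise model above can be checked numerically. This is a sketch, not from the slides: the 8-bit step size, ±1 V range, and uniformly distributed input are illustrative assumptions. The model predicts a total error power of Δ²/12, spread as Δ²/(12·fs) across the Nyquist band.

```python
# Sketch: total quantization noise power is Delta^2/12 for an ideal
# uniform quantizer driven by a "busy" input; the PSD over the band
# -fs/2..fs/2 is then Delta^2/(12*fs).  All parameter values are
# illustrative assumptions.
import random

Delta = 2.0 / 2**8          # step of a hypothetical 8-bit quantizer, +/-1 V range
fs = 1.0e6                  # sampling rate (illustrative)

# Theoretical values
power_theory = Delta**2 / 12.0
psd_theory = power_theory / fs          # per-Hz density over the Nyquist band

# Monte-Carlo estimate of the error power for a busy input
random.seed(0)
errs = []
for _ in range(100_000):
    x = random.uniform(-1.0, 1.0)       # busy input exercises many code steps
    q = Delta * round(x / Delta)        # ideal mid-tread quantizer
    errs.append(x - q)                  # error bounded by +/- Delta/2
power_est = sum(e * e for e in errs) / len(errs)

print(power_theory, power_est)          # the two agree to within a few percent
```

The Monte-Carlo estimate lands within a few percent of Δ²/12, supporting the uniform-error model used in the rest of the derivation.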
Oversampled Converters Baseband Noise

In-band noise power:
$$S_B = \int_{-f_B}^{f_B} N_e(f)\,df = \int_{-f_B}^{f_B} \frac{\Delta^2}{12}\,\frac{1}{f_s}\,df = \frac{\Delta^2}{12}\left(\frac{2 f_B}{f_s}\right)$$
For fB = fs/2 (Nyquist-rate sampling):
$$S_{B0} = \frac{\Delta^2}{12}$$
so that
$$S_B = S_{B0}\left(\frac{2 f_B}{f_s}\right) = \frac{S_{B0}}{M}, \qquad M = \frac{f_s}{2 f_B} = \text{oversampling ratio}$$

A 2× increase in M → 3 dB reduction in S_B → ½ bit increase in resolution per octave of oversampling.

To increase the improvement in resolution, embed the quantizer in a feedback loop:
→ Predictive coding (delta modulation)
→ Noise shaping (sigma-delta modulation)

Pulse-Count Modulation

[Figure: a Nyquist-rate ADC output vIN(kT) versus an oversampled (M = 8) pulse-count output over two signal periods.] The mean of the pulse-count signal approximates the analog input.

Pulse-Count Spectrum

• Signal occupies low frequencies, f < B << fs
• Quantization error occupies high frequencies, B … fs/2
• The two can be separated with a low-pass filter!

Oversampled ADC — Predictive Coding

[Block diagram: vIN → (+) → ADC → DOUT, with a predictor feeding back to the subtraction node.]
• Quantize the difference signal rather than the signal itself
• Smaller input to the ADC → buys dynamic range
• Only works if combined with oversampling
• 1-bit digital output; a digital filter computes the "average" → N-bit output

Oversampled ADC

[Signal chain: analog anti-alias filter ("wide" transition) → sampler at fs1 = M·fN → pulse-count modulator (1-bit digital) → decimator with digital anti-alias filter ("narrow" transition) → DSP at fs2 = fN + δ (N-bit digital).]
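The S_B = S_B0/M relation can be tabulated directly. A minimal sketch, with Δ = 1 and the list of M values chosen purely for illustration:

```python
# Sketch: in-band quantization noise power S_B = S_B0 / M for plain
# oversampling (no noise shaping).  Each doubling of M buys 3 dB,
# i.e. roughly half a bit of resolution (~6 dB per bit).
import math

Delta = 1.0
S_B0 = Delta**2 / 12.0            # in-band noise power when f_B = fs/2 (M = 1)

for M in (1, 2, 4, 8, 16):
    S_B = S_B0 / M
    gain_db = 10 * math.log10(S_B0 / S_B)      # noise reduction relative to M = 1
    extra_bits = gain_db / 6.02                # ~6.02 dB per bit
    print(f"M={M:3d}  S_B={S_B:.3e}  gain={gain_db:5.2f} dB  (+{extra_bits:.2f} bit)")
```

The printout shows the slow payoff of plain oversampling (M = 16 buys only 2 bits), which is the motivation for the feedback-loop techniques introduced next.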
Decimator:
• Digital (low-pass) filter
• Removes quantization error for f > B
• Provides most of the anti-alias filtering
• Narrow transition band, high order
• 1-bit input, N-bit output (essentially computes an "average")

Modulator

• Objectives:
– Convert the analog input to a 1-bit pulse-density stream
– Move quantization error to high frequencies f >> B
• Operates at a high frequency fs >> fN; M = 8…256 typical, up to 1024
• Since the modulator operates at high frequency, its circuitry must be kept "simple" → ΣΔ (= ΔΣ) modulator

Sigma-Delta Modulators

Analog 1-bit ΣΔ modulators convert a continuous-time analog input vIN into a 1-bit sequence dOUT.

[Block diagram: vIN → (+) → loop filter H(z) → 1-bit quantizer (comparator, clocked at fs) → dOUT, with a DAC feeding dOUT back to the subtraction node.]

• The loop filter H can be either switched-capacitor or continuous-time
• Switched-capacitor filters are "easier" to implement, and their frequency characteristics scale with the clock rate
• Continuous-time filters provide anti-aliasing protection
• The loop filter can also be realized with passive LCs at very high frequencies
Oversampling A/D Conversion

[Chain: input signal of bandwidth B = fs/2M → oversampled 1-bit modulator at fs → decimation filter → n-bit output at fs/M; fs = sampling rate, M = oversampling ratio.]
• Analog front end → oversampled noise-shaping modulator; converts the original signal to a 1-bit digital output at the high rate 2·M·B
• Digital back end → digital filter; removes out-of-band quantization noise and provides anti-aliasing to allow re-sampling at a lower sampling rate

1st-Order ΣΔ Modulator

In a 1st-order modulator the simplest loop filter is an integrator:
$$H(z) = \frac{z^{-1}}{1 - z^{-1}}$$
[Block diagram: vIN → (+) → integrator → comparator → dOUT, with DAC feedback.]

A switched-capacitor implementation uses two clock phases φ1 and φ2, a comparator producing dOUT ∈ {1, 0}, and a 1-bit DAC switching between +Δ/2 and −Δ/2.

Properties of the first-order modulator:
– The analog input range equals the DAC reference: −Δ/2 ≤ vIN ≤ +Δ/2
– The average value of dOUT must equal the average value of vIN
– The density of +1's (or −1's) in dOUT is an inherently monotonic function of vIN → linearity is not dependent on component matching
– Alternative multi-bit DAC (and ADC) solutions reduce the quantization error but lose this inherent monotonicity and the relaxed matching requirements

[Simulation setup: sine-wave input X → integrator z⁻¹/(1 − z⁻¹) → comparator → 1-bit output stream Y ∈ {−1, +1}; implicit 1-bit DAC at ±Δ/2 (Δ = 2); Q is the running tally of the instantaneous quantization error. M is chosen to be 8 (low) to ease observability.]
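The first-order loop can be sketched behaviorally in a few lines. This is a simplified model, not the slides' switched-capacitor circuit: it uses a non-delayed integrator and a DC input whose value (0.3) is an illustrative assumption. It demonstrates the key property that the mean of dOUT tracks vIN.

```python
# Behavioral sketch of a 1st-order sigma-delta loop: integrator followed
# by a 1-bit quantizer (comparator) whose DAC levels are -Delta/2 and
# +Delta/2.  With Delta = 2 the output stream takes values -1 and +1,
# and the feedback forces its mean to approximate the input.

def sigma_delta_1st(x_samples, delta=2.0):
    """Return the 1-bit output stream (+/- delta/2) for the input samples."""
    w = 0.0                                       # integrator state
    y = 0.0                                       # previous DAC output
    out = []
    for x in x_samples:
        w += x - y                                # integrate the loop error vIN - dOUT
        y = delta / 2 if w >= 0 else -delta / 2   # comparator + 1-bit DAC
        out.append(y)
    return out

N = 4096
vin = 0.3                                # DC input, must lie within -1..+1
stream = sigma_delta_1st([vin] * N)
avg = sum(stream) / N
print(avg)                               # close to 0.3: mean of dOUT tracks vIN
```

Because the integrator state stays bounded, the accumulated difference between input and output is bounded too, so the average error shrinks as 1/N; no component matching enters anywhere, consistent with the inherent-linearity claim above.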
1st-Order Modulator Signals

[Plot: X (analog sine input), Q (tally of quantization error), and Y (digital/DAC output) versus time t/T, where T = 1/fs = 1/(M·fN). The mean of Y approximates X.]

ΣΔ Modulator Characteristics

• Quantization noise and thermal noise (kT/C) are distributed over −fs/2 to +fs/2 → total noise within the signal bandwidth is reduced by 1/M
• Very high SQNR achievable (> 20 bits!)
• Inherently linear for a 1-bit DAC
• To first order, quantization error is independent of component matching
• Limited to moderate and low speeds

Output Spectrum

[Plot: modulator output spectrum versus f/fs on a dBWN scale.]
• Definitely not white!
• Skewed towards higher frequencies
• Notice the distinct tones
• The dBWN (dB White Noise) scale sets the 0 dB line at the noise per bin of a random −1, +1 sequence

Quantization Noise Analysis

[Linearized model: x(kT) → (+) → integrator H(z) = z⁻¹/(1 − z⁻¹) → (+) ← quantization error e(kT) → y(kT).]
• Sigma-delta modulators are nonlinear systems with memory → difficult to analyze directly
• Representing the quantizer as an additive noise source linearizes the system

Signal Transfer Function

With a continuous-time integrator model H(jω) = ω₀/(jω), the signal transfer function is a low-pass function with cutoff ω₀:
$$H_{sig}(j\omega) = \frac{H(j\omega)}{1 + H(j\omega)} = \frac{1}{1 + s/\omega_0}$$
In the discrete-time loop:
$$H_{sig}(z) = \frac{Y(z)}{X(z)} = \frac{H(z)}{1 + H(z)} = z^{-1} \;\Rightarrow\; \text{delay}$$
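The linearized model can be verified numerically. A minimal sketch, with random signals chosen purely for illustration: simulating the loop with the quantizer replaced by an additive error e should reproduce the closed-form result y[n] = x[n−1] + e[n] − e[n−1], i.e. the signal is only delayed while the error is differentiated.

```python
# Check of the linearized loop with H(z) = z^-1/(1 - z^-1):
#   Y(z) = z^-1 X(z) + (1 - z^-1) E(z)
# so sample by sample y[n] = x[n-1] + e[n] - e[n-1].
import random

random.seed(1)
N = 64
x = [random.uniform(-1.0, 1.0) for _ in range(N)]    # arbitrary input
e = [random.uniform(-0.5, 0.5) for _ in range(N)]    # injected quantizer error

# Direct simulation of the linearized loop
w = 0.0                         # integrator state (H has a sample of delay)
x_prev = y_prev = 0.0
y_loop = []
for n in range(N):
    w += x_prev - y_prev        # delayed integrator z^-1/(1 - z^-1)
    y = w + e[n]                # quantizer modeled as adding e[n]
    y_loop.append(y)
    x_prev, y_prev = x[n], y

# Closed-form prediction from STF = z^-1 and NTF = 1 - z^-1
y_pred = [(x[n - 1] if n > 0 else 0.0) + e[n] - (e[n - 1] if n > 0 else 0.0)
          for n in range(N)]

print(max(abs(a - b) for a, b in zip(y_loop, y_pred)))   # ~0
```

The loop output and the closed-form prediction agree to machine precision, confirming the STF/NTF factorization derived next.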
Noise Transfer Function — Qualitative Analysis

[Model: noise v_n injected at the quantizer input, integrator gain ω₀/(jω) in the forward path.] Referring the noise to the input:
$$\overline{v_{eq}^2} = \overline{v_n^2}\left(\frac{f}{f_0}\right)^2$$
• The input-referred noise has a zero at DC (s-plane) → the loop suppresses quantization noise at low frequencies and pushes it toward high frequencies

STF and NTF

Signal transfer function:
$$STF = \frac{Y(z)}{X(z)} = \frac{H(z)}{1 + H(z)} = z^{-1} \;\Rightarrow\; \text{delay}$$
Noise transfer function:
$$NTF = \frac{Y(z)}{E(z)} = \frac{1}{1 + H(z)} = 1 - z^{-1} \;\Rightarrow\; \text{differentiator}$$

Noise Transfer Function

$$NTF(e^{j\omega T}) = 1 - e^{-j\omega T} = e^{-j\omega T/2}\left(e^{j\omega T/2} - e^{-j\omega T/2}\right) = 2j\,\sin(\omega T/2)\,e^{-j\omega T/2}$$
with T = 1/fs. Thus
$$\left|NTF(f)\right| = 2\left|\sin(\omega T/2)\right| = 2\left|\sin(\pi f/f_s)\right|$$
and the shaped output noise spectrum is
$$N_y(f) = \left|NTF(f)\right|^2 N_e(f)$$
EECS 247 Lecture 24: Oversampling Data Converters © 2005 H. K.
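The magnitude identity above is easy to verify numerically. A short sketch (fs normalized to 1; the spot frequencies are illustrative):

```python
# Check of the NTF magnitude: |1 - exp(-j*2*pi*f/fs)| = 2*sin(pi*f/fs)
# for 0 <= f <= fs/2.
import cmath
import math

fs = 1.0
for f in (0.01, 0.1, 0.25, 0.4, 0.5):
    ntf = 1 - cmath.exp(-2j * math.pi * f / fs)
    assert abs(abs(ntf) - 2 * math.sin(math.pi * f / fs)) < 1e-12
    print(f, abs(ntf))
# At f = 0 the gain is 0 (noise nulled at DC); at f = fs/2 it is 2
# (quantization noise pushed to high frequencies, where the decimator
# removes it).
```

This is exactly the high-pass shaping promised by the qualitative analysis: the first-order loop trades a noise null at DC for 6 dB of noise gain at the band edge.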