High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data


NASA Technical Memorandum 110340

Eugene A. Morelli
Langley Research Center, Hampton, Virginia

June 1997

National Aeronautics and Space Administration
Langley Research Center
Hampton, Virginia 23681-0001

Table of Contents

Abstract
Nomenclature
I. Introduction
II. Theoretical Development
III. Examples
IV. Summary
V. Acknowledgments
VI. References
Appendix - Derivation of the Cubic Interpolation Formulae
Figures

Abstract

Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results.
This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp z-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.

Nomenclature

f             cyclic frequency, Hz
FFT           Fast Fourier Transform
i             time index
j             imaginary number, \sqrt{-1}
k             frequency index
M             number of discrete frequencies
N             number of sampling intervals
t             time, sec
T             data record length, sec
P_n           nth order Lagrange interpolating polynomial
x(t)          real function of time t
x_i           sampled x(t) at time i\Delta t
\tilde{X}_k   discrete Fourier transform of x_i for frequency f_k
\tilde{x}(f)  finite Fourier transform of x(t) for frequency f
W             interior point cubic weighting function
\alpha        angle of attack, deg
\Delta t      sampling interval, sec
              endpoint interpolation function
\gamma        endpoint cubic weighting function
\theta        discrete Fourier transform angular step
\omega        frequency, rad/sec
              interior point interpolation function

superscripts

~             Fourier transform
*             complex conjugate

I. Introduction

Data analysis and linear parametric modeling can be carried out in either the time domain or the frequency domain [1]. Frequency domain analysis has many advantages, including physical insight, direct applicability to control system design, and lower dimensionality for model parameter estimation, among others [1-3]. Frequency domain techniques based on spectral estimation [3-5], output error [1-3], and equation error [1,6,7] have been used successfully for flight test data analysis and modeling. The basis for all of these frequency domain methods is the finite Fourier transform, which is the mechanism for transforming time domain data to the frequency domain.
Any errors in the transformation from the time domain to the frequency domain affect the accuracy of the raw data in the frequency domain, which in turn impacts data analysis and modeling results. Typically, the transformation is carried out using sampled time domain data and a simple Euler approximation of the finite Fourier integral. This calculation, called the discrete Fourier transform, can be implemented with a Fast Fourier Transform (FFT) algorithm [8] for increased computational speed. The simple Euler approximation makes the discrete Fourier transform increasingly inaccurate as the frequency and the sampling interval increase. Accuracy of the transformation to the frequency domain can be improved using various quadrature methods [8], which involve evaluating the Fourier integral separately for each individual frequency of interest.

For a given data record length, the discrete Fourier transform constrains the frequency resolution to equal the reciprocal of the record length. As the record length decreases, the frequency resolution becomes coarse, which can lead to omission of important data features in the frequency domain. One solution to this problem is to pad the time domain data with zeros, artificially increasing the data record length, but this is inconvenient and increases processing time. It is also possible to interpolate values in the frequency domain, but this too is inconvenient and can be inaccurate. When the frequency band of interest is relatively small, most of the processing involved in the discrete Fourier transform is spent on frequencies outside the range of interest, since the computations include all frequencies up to the Nyquist frequency.

The purpose of this work is to present a convenient and efficient method for computing the finite Fourier transform with high accuracy using sampled data, while allowing a wide range of choice for the frequency band and frequency resolution.
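The resolution constraint just discussed can be illustrated numerically. The sketch below is not from this memorandum; the signal and all numerical values are assumptions chosen for illustration. It shows that the discrete Fourier transform of a record of length T has frequency spacing 1/T, and that zero padding refines the frequency grid without adding information:

```python
import numpy as np

# Illustrative sketch (signal and numbers are assumptions): an N-point record
# of length T = N*dt gives a DFT frequency spacing of 1/T; zero padding the
# data artificially lengthens the record and refines the frequency grid.
dt = 0.01                               # sampling interval, sec
N = 200                                 # record length T = N*dt = 2 sec
t = np.arange(N) * dt
x = np.sin(2 * np.pi * 3.1 * t)         # 3.1 Hz tone, off the 0.5 Hz DFT grid

X = np.fft.fft(x)                       # frequencies spaced 1/T apart
X_padded = np.fft.fft(x, 8 * N)         # same data, 8x finer frequency grid

f_plain = np.fft.fftfreq(N, dt)         # spacing 1/(N*dt) = 1/T = 0.5 Hz
f_padded = np.fft.fftfreq(8 * N, dt)    # spacing 1/(8*T) = 0.0625 Hz
print(f_plain[1] - f_plain[0], f_padded[1] - f_padded[0])   # 0.5 0.0625
```

The 3.1 Hz tone falls between the 0.5 Hz grid points of the plain transform, so its peak is captured only on the padded grid, at the cost of an eightfold longer transform.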
This capability gives a solid underpinning to the many frequency domain techniques already developed and renders those methods commensurately more accurate and flexible. The next section outlines the theory for the flexible, high accuracy finite Fourier transform method. Following that, some realistic simulation examples are presented to demonstrate some of the issues discussed in the theory. Finally, an example using measured flight test data from the F-18 High Alpha Research Vehicle is included.

II. Theoretical Development

The Fourier transform of a continuous scalar time function x(t) on a finite time interval [0, T] is called the finite Fourier transform, and is defined by

\tilde{x}(f) = \int_0^T x(t)\, e^{-j 2\pi f t}\, dt    (1)

or

\tilde{x}(\omega) = \int_0^T x(t)\, e^{-j \omega t}\, dt    (2)

where

\omega = 2\pi f    (3)

When the time function x(t) is sampled at discrete, evenly spaced time intervals, the finite Fourier transform can be approximated by

\tilde{x}(f) \approx \Delta t \sum_{i=0}^{N-1} x_i\, e^{-j 2\pi f t_i}    (4)

where

t_i = i\Delta t, \quad x_i = x(t_i) = x(i\Delta t), \quad i = 0, 1, 2, \ldots, N    (5)

and

\Delta t = T/N    (6)

Eq. (4) is a simple Euler approximation for the integral in Eq. (1), using the first N discrete values of the continuous time function x(t). Note from Eq. (5) that the total number of samples is N + 1.

The discrete Fourier transform can be used to evaluate the summation in Eq. (4) at specific frequencies. The discrete Fourier transform is defined by

\tilde{X}_k = \sum_{i=0}^{N-1} x_i\, e^{-j 2\pi f_k t_i}, \quad k = 0, 1, 2, \ldots, M-1    (7)

where

f_k = \frac{k}{N \Delta t} = \frac{k}{T}, \quad k = 0, 1, 2, \ldots, M-1    (8)

and

M = N/2 \text{ for } N \text{ even}, \qquad M = (N+1)/2 \text{ for } N \text{ odd}    (9)

The values given for M in Eq. (9) represent the fundamental limitation that frequencies contained in a sampled time history must fall within the frequency band [0, f_n) (i.e., below f_n), where f_n is the Nyquist frequency, defined as one half the sampling frequency:

f_n = \frac{1}{2 \Delta t}    (10)

For the discrete frequencies f_k in Eq. (8), using Eq. (7) in Eq. (4) gives

\tilde{x}(f_k) \approx \Delta t\, \tilde{X}_k, \quad k = 0, 1, 2, \ldots, M-1    (11)

The discrete Fourier transform in Eq. (7) can be expressed in another form by substituting from Eqs.
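The accuracy of the Euler approximation in Eq. (4) can be checked numerically against a case with a closed-form finite Fourier transform. In the sketch below, the test signal x(t) = exp(-t) and all numerical values are assumptions chosen for illustration; the relative error of the Euler sum grows with frequency, as the text describes:

```python
import numpy as np

# Sketch (test signal and numbers are assumptions): Euler approximation of the
# finite Fourier transform, Eq. (4), versus the exact transform of exp(-t),
#   integral of exp(-(1 + j*2*pi*f)*t) over [0, T] = (1 - exp(-s*T)) / s.
T, N = 5.0, 50
dt = T / N
t = np.arange(N) * dt                   # first N samples, t_i = i*dt
x = np.exp(-t)

def euler_transform(f):
    # Eq. (4): dt * sum_i x_i * exp(-j*2*pi*f*t_i)
    return dt * np.sum(x * np.exp(-2j * np.pi * f * t))

def exact_transform(f):
    s = 1.0 + 2j * np.pi * f
    return (1.0 - np.exp(-s * T)) / s

for f in (0.2, 4.0):                    # well below vs. near the 5 Hz Nyquist
    rel_err = abs(euler_transform(f) - exact_transform(f)) / abs(exact_transform(f))
    print(f, rel_err)
```

The relative error near the Nyquist frequency is far larger than at low frequency, which is the inaccuracy the cubic interpolation method of the next section is designed to remove.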
(5) and (8) to obtain

\tilde{X}_k = \sum_{i=0}^{N-1} x_i\, e^{-j 2\pi i k / N}, \quad k = 0, 1, 2, \ldots, M-1    (12)

Now define

\theta \equiv \omega_k \Delta t = 2\pi f_k \Delta t = \frac{2\pi k}{N}, \quad k = 0, 1, 2, \ldots, M-1    (13)

where the dependence of \theta on k is not shown explicitly by the notation. Using Eq. (13), Eq. (12) can be written as

\tilde{X}_k = \sum_{i=0}^{N-1} x_i\, e^{-j \theta i}, \quad k = 0, 1, 2, \ldots, M-1    (14)

Combining Eqs. (12)-(14) and (4),

\tilde{x}(\theta) = \tilde{x}(2\pi f_k \Delta t) \approx \Delta t\, \tilde{X}_k, \quad k = 0, 1, 2, \ldots, M-1    (15)

When N is an integer power of 2, FFT algorithms [8] can be used to carry out the summation in Eq. (12) efficiently.

Eqs. (7) and (8) indicate that the discrete Fourier transform is defined for frequencies which correspond to an integral number of cycles over the time period T. Since the finite Fourier transform is an integration over the time interval [0, T], the values computed using the approximation in Eq. (15) can be small, and thus susceptible to round-off error accumulated during the summation in Eq. (12). In addition, since Eq. (15) represents a simple Euler approximation to the integral in Eq. (1), high frequencies f_k or large sampling intervals \Delta t introduce additional inaccuracy because of the oscillatory nature of the integrand. The latter difficulty was recognized as early as 1928 by Filon [9], who proposed an interpolation scheme for the sampled time domain data to improve the accuracy of finite Fourier integrals. The same basic approach is taken here, using a cubic Lagrange polynomial interpolation due to Press et al. [8].
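The equivalence stated in Eq. (15) between the scaled FFT output and direct evaluation of the Euler sum at the discrete frequencies f_k can be verified numerically. In this sketch the test signal and numerical values are assumptions chosen for illustration:

```python
import numpy as np

# Sketch verifying Eq. (15): at the discrete frequencies f_k = k/(N*dt) of
# Eq. (8), the scaled FFT output dt*X_k equals direct evaluation of the Euler
# sum of Eq. (4). (Test signal is an assumption for illustration.)
N, dt = 64, 0.05
t = np.arange(N) * dt
x = np.exp(-t) * np.cos(2 * np.pi * 2.0 * t)

X = np.fft.fft(x)                       # the DFT of Eq. (12), via an FFT
M = N // 2                              # Eq. (9) for N even
fk = np.arange(M) / (N * dt)            # Eq. (8)

# Direct evaluation of Eq. (4) at each f_k
direct = np.array([dt * np.sum(x * np.exp(-2j * np.pi * f * t)) for f in fk])
print(np.max(np.abs(direct - dt * X[:M])))   # agreement to round-off
```

The FFT route gives the same numbers in O(N log N) operations, but only on the fixed frequency grid f_k = k/T; the direct sum (and the chirp z-transform) can be evaluated at arbitrary frequencies.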
The resulting calculation uses the discrete Fourier transform with weighting factors as follows:

\tilde{x}(\theta) \approx \Delta t \Big\{ W(\theta)\big[\tilde{X}_k + x_N\, e^{-j\theta N}\big] + \gamma_0(\theta)\, x_0 + \gamma_1(\theta)\, x_1 + \gamma_2(\theta)\, x_2 + \gamma_3(\theta)\, x_3 + e^{-j\theta N}\big[\gamma_0^*(\theta)\, x_N + \gamma_1^*(\theta)\, x_{N-1} + \gamma_2^*(\theta)\, x_{N-2} + \gamma_3^*(\theta)\, x_{N-3}\big] \Big\}    (16)

where

W(\theta) = \frac{6+\theta^2}{3\theta^4}\,(3 - 4\cos\theta + \cos 2\theta) = 1 - \frac{11}{720}\theta^4 + \frac{23}{15120}\theta^6 - \cdots    (17)

and

\gamma_0(\theta) = \frac{(-42 + 5\theta^2) + (6+\theta^2)(8\cos\theta - \cos 2\theta)}{6\theta^4} - j\,\frac{(-12\theta + 6\theta^3) + (6+\theta^2)\sin 2\theta}{6\theta^4}
  = -\frac{2}{3} + \frac{\theta^2}{45} + \frac{103\,\theta^4}{15120} - \frac{169\,\theta^6}{226800} - j\theta\left(\frac{2}{45} + \frac{2\theta^2}{105} - \frac{8\theta^4}{2835} + \frac{86\,\theta^6}{467775}\right) + \cdots    (18)

\gamma_1(\theta) = \frac{14(3-\theta^2) - 7(6+\theta^2)\cos\theta}{6\theta^4} - j\,\frac{30\theta - 5(6+\theta^2)\sin\theta}{6\theta^4}
  = \frac{7}{24} - \frac{7\theta^2}{180} + \frac{5\theta^4}{3456} - \frac{7\theta^6}{259200} - j\theta\left(\frac{7}{72} - \frac{\theta^2}{168} + \frac{11\,\theta^4}{72576} - \frac{13\,\theta^6}{5987520}\right) + \cdots    (19)

\gamma_2(\theta) = \frac{-4(3-\theta^2) + 2(6+\theta^2)\cos\theta}{3\theta^4} - j\,\frac{-12\theta + 2(6+\theta^2)\sin\theta}{3\theta^4}
  = -\frac{1}{6} + \frac{\theta^2}{45} - \frac{5\theta^4}{6048} + \frac{\theta^6}{64800} - j\theta\left(-\frac{7}{90} + \frac{\theta^2}{210} - \frac{11\,\theta^4}{90720} + \frac{13\,\theta^6}{7484400}\right) + \cdots    (20)

\gamma_3(\theta) = \frac{2(3-\theta^2) - (6+\theta^2)\cos\theta}{6\theta^4} - j\,\frac{6\theta - (6+\theta^2)\sin\theta}{6\theta^4}
  = \frac{1}{24} - \frac{\theta^2}{180} + \frac{5\theta^4}{24192} - \frac{\theta^6}{259200} - j\theta\left(\frac{7}{360} - \frac{\theta^2}{840} + \frac{11\,\theta^4}{362880} - \frac{13\,\theta^6}{29937600}\right) + \cdots    (21)
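A minimal numerical sketch of the corrected transform of Eqs. (16)-(21) follows. Assumptions: the closed-form weights are used directly (near theta = 0 the series expansions should be used instead, to avoid cancellation in the theta^4 denominators), and the test signal x(t) = exp(-t) is chosen only because its finite Fourier transform is known analytically:

```python
import numpy as np

# Sketch of the cubic-interpolation correction, Eqs. (16)-(21), following the
# endpoint-correction approach of Press et al. cited in the text. Closed-form
# weights only (valid away from theta = 0); test signal is an assumption.

def weights(theta):
    t2, t4 = theta**2, theta**4
    c, s = np.cos(theta), np.sin(theta)
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    W = (6.0 + t2) / (3.0 * t4) * (3.0 - 4.0 * c + c2)            # Eq. (17)
    g0 = (((-42.0 + 5.0 * t2) + (6.0 + t2) * (8.0 * c - c2))
          - 1j * ((-12.0 * theta + 6.0 * theta**3) + (6.0 + t2) * s2)) / (6.0 * t4)
    g1 = ((14.0 * (3.0 - t2) - 7.0 * (6.0 + t2) * c)
          - 1j * (30.0 * theta - 5.0 * (6.0 + t2) * s)) / (6.0 * t4)
    g2 = ((-4.0 * (3.0 - t2) + 2.0 * (6.0 + t2) * c)
          - 1j * (-12.0 * theta + 2.0 * (6.0 + t2) * s)) / (3.0 * t4)
    g3 = ((2.0 * (3.0 - t2) - (6.0 + t2) * c)
          - 1j * (6.0 * theta - (6.0 + t2) * s)) / (6.0 * t4)
    return W, (g0, g1, g2, g3)

def corrected_transform(x, dt, f):
    # Eq. (16); x holds all N+1 samples x_0 .. x_N
    N = len(x) - 1
    theta = 2.0 * np.pi * f * dt
    W, g = weights(theta)
    i = np.arange(N + 1)
    S = np.sum(x * np.exp(-1j * theta * i))        # X_k + x_N*exp(-j*theta*N)
    front = sum(g[l] * x[l] for l in range(4))
    back = np.exp(-1j * theta * N) * sum(np.conj(g[l]) * x[N - l] for l in range(4))
    return dt * (W * S + front + back)

T, N = 5.0, 50
dt = T / N
t = np.arange(N + 1) * dt
x = np.exp(-t)
f = 2.0                                            # Hz; theta = 2*pi*f*dt ~ 1.26
s = 1.0 + 2j * np.pi * f
exact = (1.0 - np.exp(-s * T)) / s                 # analytic finite transform
euler = dt * np.sum(x[:N] * np.exp(-2j * np.pi * f * t[:N]))      # Eq. (4)
print(abs(corrected_transform(x, dt, f) - exact), abs(euler - exact))
```

For this smooth signal the corrected transform is accurate to roughly the fourth power of the sampling interval, while the plain Euler sum carries a large endpoint and oscillation error at this frequency; the same weights apply unchanged when the sum is evaluated at arbitrary frequencies via the chirp z-transform.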