Lecture 5 – Sampled Time Control
EE392m – Control Engineering, Spring 2005 (Gorinevsky)


• Sampled time vs. continuous time: implementation, aliasing
• Linear sampled systems: modeling refresher, DSP
• Sampled-time frequency analysis
• Sampled-time implementation of the basic controllers – I, PI, PD, PID
  – 80% (or more) of control loops in industry are digital PID

Sampled Time Models
• Time is often sampled because a digital computer is used – a digital (sampled-time) control system:
  $x(t + d) = f(x, u, t), \qquad y = g(x, u, t)$
• Numerical integration of a continuous-time ODE:
  $x(t + d) \approx x(t) + d \cdot f(x, u, t), \qquad t = kd$
• Time can also be sampled because that is how the system itself works.
• Example: bank account balance
  $x(t + 1) = x(t) + u(t), \qquad y = x$
  – $x(t)$: balance at the end of day t
  – $u(t)$: total of deposits and withdrawals that day
  – $y(t)$: balance displayed in the daily statement

Sampling: Continuous-Time View
• Sampled and continuous time together: a continuous-time physical system plus a digital controller
  – ZOH = zero-order hold
• (Figure: feedback loop – A/D (sample) → control computing → D/A (ZOH) → actuators → physical system → sensors → back to the A/D.)

Signal Sampling, Aliasing
• Nyquist frequency: $\omega_N = \tfrac{1}{2}\omega_S$, where $\omega_S = 2\pi/T$
• Frequency folding: the frequencies $k\omega_S \pm \omega$ all map to the same frequency $\omega$
• Sampling theorem: sampling is OK if there are no frequency components above $\omega_N$
• Practical approach to anti-aliasing: a low-pass filter (LPF) before the A/D
• Sampled → continuous: "impostoring"
• (Figure: LPF → A/D (sample) → digital computing → D/A (ZOH) → LPF.)

Linear State Space Model
• Generic state-space model: $x(t+1) = f(x,u,t)$, $y = g(x,u,t)$
• LTI state-space model: $x(t+1) = A x(t) + B u(t)$, $y(t) = C x(t)$
  – physics-based linear system model
  – obtained by sampling a continuous-time model $\dot{x}(t) = A_c x(t) + B_c u(t)$, $y(t) = C_c x(t)$ with a zero-order hold (ZOH):
    $$x(t+T) = \underbrace{e^{A_c T}}_{A}\, x(t) + \underbrace{\left(\int_0^T e^{A_c \tau} B_c\, d\tau\right)}_{B} u(t), \qquad y(t) = \underbrace{C_c}_{C}\, x(t)$$
• Matlab commands for model conversion: help ltimodels

Disk Read-Write Head Example
• Continuous-time model of the read-write head control:
  $$\ddot{y} + b\dot{y} = g u, \quad b = 20,\ g = 10 \qquad\Longleftrightarrow\qquad y = \frac{10}{s(s+20)}\, u$$
  $$\frac{dx}{dt} = \underbrace{\begin{bmatrix} -20 & 0 \\ 1 & 0 \end{bmatrix}}_{A_c} x + \underbrace{\begin{bmatrix} 10 \\ 0 \end{bmatrix}}_{B_c} u, \qquad y = \underbrace{\begin{bmatrix} 0 & 1 \end{bmatrix}}_{C_c} x$$
• Sampled-time model for $T = 0.0001$ (use Matlab ss and c2d):
  $$x(k+1) = \underbrace{\begin{bmatrix} 0.998 & 0 \\ 10^{-4} & 1 \end{bmatrix}}_{A} x(k) + \underbrace{\begin{bmatrix} 10^{-3} \\ 5\cdot 10^{-8} \end{bmatrix}}_{B} u(k), \qquad y(k) = \underbrace{\begin{bmatrix} 0 & 1 \end{bmatrix}}_{C} x(k)$$
  i.e., $x(k+1) = A x(k) + B u(k)$, $y(k) = C x(k)$.

Sampled-Time System Stability
• Initial-condition response of $x(k+1) = A x(k)$, $x(0) = x_0$: $x(k) = A^k x_0$
• Exponential stability condition: every eigenvalue $\lambda_j \in \mathrm{eig}(A)$ satisfies $|\lambda_j| < 1$
• Example (take A, B, C from the R-W head example):

  >> eig(A)
  ans =
      1.0000
      0.9980
  >> K = 10
  >> Ak = A - K*B*C;
  >> eig(Ak)
  ans =
      0.9990
      0.9990
  >> N = 2000;
  >> x(:,1) = [1; 0];
  >> for t=2:N,
  >>   x(:,t) = Ak*x(:,t-1);
  >> end
  >> plot(1:N,x(2,:))

  (Figure: plot of the closed-loop response x(2,:) over the 2000 samples.)
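As a complement to the read-write head example above, the sketch below shows how the sampled-time matrices could be obtained with the Matlab ss and c2d commands that the slide mentions. It is an added illustration, not part of the original notes; the variable names (sysc, sysd) are arbitrary and the Control System Toolbox is assumed to be available.

```matlab
% Sketch (added illustration): build the continuous-time read-write head
% model and sample it with a zero-order hold at T = 1e-4 s.
% Assumes the Control System Toolbox (ss, c2d) is available.
b  = 20;  g = 10;                % y'' + b*y' = g*u
Ac = [-b 0; 1 0];                % continuous-time state-space matrices
Bc = [ g; 0];
Cc = [ 0 1];
sysc = ss(Ac, Bc, Cc, 0);        % continuous-time LTI model
T    = 1e-4;                     % sampling period
sysd = c2d(sysc, T, 'zoh');      % ZOH discretization
[A, B, C, D] = ssdata(sysd);     % A ~ [0.998 0; 1e-4 1], B ~ [1e-3; 5e-8]
disp(eig(A))                     % sampled-time poles: 1.0000 and 0.9980
```

The eigenvalues of the resulting A should reproduce the values 1.0000 and 0.9980 used in the stability example above.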
Impulse Response
• Response to an input impulse: $\delta(\cdot) \xrightarrow{\;P\;} h(\cdot)$ (Figure: input impulse u and output response y vs. t.)
• Sampled time: $t = 1, 2, \dots$
• Control history = a linear combination of impulses ⇒ system response = the same linear combination of impulse responses:
  $$u(t) = \sum_{k=0}^{\infty} \delta(t-k)\, u(k), \qquad y(t) = \sum_{k=0}^{\infty} h(t-k)\, u(k) = (h * u)(t)$$

Convolution Representation of a Sampled-Time System
• Convolution (signal-processing notation): $y = h * u$, i.e.
  $$y(t) = \sum_{k=-\infty}^{t} h(t-k)\, u(k)$$
• Impulse response: $u(t) = \delta(t) \;\Rightarrow\; y(t) = h(t)$
• Step response ($u(t) = 1$ for $t \ge 0$):
  $$g(t) = \sum_{k=0}^{t} h(t-k) = \sum_{j=0}^{t} h(j), \qquad h(t) = g(t) - g(t-1)$$
• Matlab commands: g = cumsum(h); h = diff(g);

z-Transform vs. Laplace Transform
• Discrete (z-transform) transfer function: $H(z) = \sum_{k=0}^{\infty} h(k)\, z^{-k}$
  – a function of the complex variable $z$; $z$ is the forward shift operator
  – analytic outside the circle $|z| \ge r$; all poles are inside that circle
  – for a stable system, $r \le 1$
• Laplace-transform transfer function: $H(s) = \int_{-\infty}^{\infty} h(t)\, e^{-st}\, dt$
  – a function of the complex variable $s$; $s$ is the differentiation operator
  – analytic in the half-plane $\operatorname{Re} s \ge a$; all poles lie to the left of it
  – for a stable system, $a \le 0$
• (Figures: pole regions sketched in the complex z-plane and s-plane.)

Sampled Time vs. Continuous Time
• Continuous-time analysis (digital implementation of a continuous-time controller):
  – Tustin's method = trapezoidal rule of integration for $H(s) = \frac{1}{s}$:
    $$H(s) \;\to\; H_s(z) = H\!\left(s = \frac{2}{T}\cdot\frac{1-z^{-1}}{1+z^{-1}}\right)$$
  – Matched zero-pole: map each zero and each pole according to $z = e^{sT}$
• Sampled-time analysis (sampling of continuous signals and systems)
• System analysis is often performed in continuous time – this is where the controlled plant is.

FIR Model
$$y(t) = \sum_{k=0}^{N_h} h_{\mathrm{FIR}}(t-k)\, u(k) = (h_{\mathrm{FIR}} * u)(t)$$
• FIR = Finite Impulse Response
• Cut off the trailing part of the impulse response to obtain the FIR model
• The FIR filter state x is the shift-register content, $x_1 = u(t-1)$, $x_2 = u(t-2)$, $x_3 = u(t-3), \dots$, which gives a state-space form $x(t+1) = A x(t) + B u(t)$, $y(t) = C x(t)$
• (Figure: tapped delay line with unit delays $z^{-1}$ and taps $h_0, h_1, h_2, h_3$.)

IIR Model
• IIR model:
  $$y(t) = -\sum_{k=1}^{n_a} a_k\, y(t-k) + \sum_{k=0}^{n_b} b_k\, u(t-k)$$
• Filter states: $y(t-1), \dots, y(t-n_a),\; u(t-1), \dots, u(t-n_b)$
• (Figure: direct-form block diagram with unit delays $z^{-1}$, feedback gains $-a_1, -a_2, -a_3$ and feedforward gains $b_0, b_1, b_2, b_3$.)
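To tie the impulse-response, convolution, and FIR ideas above together, here is a small illustrative sketch. It is not from the original notes; it assumes the sampled matrices A, B, C of the read-write head example are already in the workspace, and the horizon N and variable names are arbitrary.

```matlab
% Sketch: impulse response of the sampled model, step response via
% cumsum, and the truncated-FIR/convolution representation y = h * u.
% Assumes A, B, C from the read-write head example are in the workspace.
N  = 200;
h  = zeros(1, N);              % h(k) = C*A^(k-1)*B for k = 1, 2, ...
xk = B;                        % state one step after a unit impulse
for k = 1:N
    h(k) = C * xk;
    xk   = A * xk;
end
g    = cumsum(h);              % step response: running sum of h
u    = ones(1, N);             % unit step input
yFIR = conv(h, u);             % truncated-FIR (convolution) response
plot(1:N, g, 1:N, yFIR(1:N), '--')   % the two curves coincide
```

Truncating h after N samples is exactly the FIR approximation discussed in the FIR-model slide above.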
IIR Model
• Matlab implementation of an IIR model: filter
• Transfer-function realization with the unit delay operator $z^{-1}$: $y(t) = H(z)\, u(t)$
  – Control convention (Matlab tf, step, etc.):
    $$H(z) = \frac{B(z)}{A(z)} = \frac{b_0 z^N + b_1 z^{N-1} + \dots + b_N}{z^N + a_1 z^{N-1} + \dots + a_N}$$
  – Signal-processing convention (Matlab filter, etc.):
    $$H(z) = \frac{b_0 + b_1 z^{-1} + \dots + b_N z^{-N}}{1 + a_1 z^{-1} + \dots + a_N z^{-N}}$$
  – Equivalently, in operator form:
    $$\underbrace{\left(1 + a_1 z^{-1} + \dots + a_N z^{-N}\right)}_{A(z)} y(t) = \underbrace{\left(b_0 + b_1 z^{-1} + \dots + b_N z^{-N}\right)}_{B(z)} u(t)$$
• The FIR model is a special case of an IIR model with $A(z) = 1$ (or $z^N$).

IIR Approximation Example
• Low-order IIR approximation of an impulse response (prony in the Matlab Signal Processing Toolbox)
• Fewer parameters than a FIR model
• Example: sideways heat transfer
  – impulse response $h(t)$
  – approximation with an IIR filter, $a = [a_1\ a_2]$, $b = [b_0\ b_1\ b_2\ b_3\ b_4]$:
    $$H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} + b_4 z^{-4}}{1 + a_1 z^{-1} + a_2 z^{-2}}$$
  – (Figure: impulse response vs. time over 0–100 samples.)

Linear State Space Model
• LTI state-space model: $x(t+1) = A x(t) + B u(t)$, $y(t) = C x(t) + D u(t)$
• Unit delay operator $z^{-1}$: $z^{-1} x(t) = x(t-1)$
• The transfer function of an LTI model defines an IIR representation:
  $$y = \left[ C (Iz - A)^{-1} B + D \right] u, \qquad H(z) = C (Iz - A)^{-1} B + D$$
• Stability: the poles are inside the unit circle.

Stability Analysis
• The transfer-function poles tell you everything about stability.
• Model-based analysis for a simple feedback example:
  $$y = H(z)\, u, \quad u = -K (y - y_d) \qquad\Longrightarrow\qquad y = \frac{H(z) K}{1 + H(z) K}\, y_d = L(z)\, y_d$$
• If $H(z)$ is a rational transfer function describing an IIR model, then $L(z)$ is also a rational transfer function describing an IIR model.

State Space Realization of an IIR Filter
• $y(t) = H(z)\, u(t)$ with
  $$H(z) = \frac{B(z)}{A(z)} = \frac{b_0 + b_1 z^{-1} + \dots + b_N z^{-N}}{1 + a_1 z^{-1} + \dots + a_N z^{-N}}$$
• (Figure: direct-form realization with a chain of unit delays $z^{-1}$, feedback gains $a_1, \dots, a_4$ and feedforward gains $b_0, \dots, b_4$.)
• For $N = 4$, one state-space realization is:
  $$\begin{bmatrix} x_1(t+1) \\ x_2(t+1) \\ x_3(t+1) \\ x_4(t+1) \end{bmatrix} = \begin{bmatrix} -a_1 & -a_2 & -a_3 & -a_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} u(t)$$
  $$y(t) = \begin{bmatrix} b_1 - a_1 b_0 & b_2 - a_2 b_0 & b_3 - a_3 b_0 & b_4 - a_4 b_0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \end{bmatrix} + b_0\, u(t)$$

Poles and Zeros
• The transfer-function poles tell you everything about stability… not quite so!
• Example:
  $$y = H(z)\, u = \frac{z}{z - 0.7}\, u$$
• (Figure: system impulse response, amplitude decaying from 1 toward 0 over about 25 time steps.)
• FIR model – a truncated IIR: keep only the leading terms of the expansion,
  $$z^{19} + 0.7 z^{18} + 0.49 z^{17} + \dots$$
  (the coefficients $1, 0.7, 0.49, \dots = 0.7^k$ are the impulse-response samples of $H(z)$).
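The following sketch is an added illustration, not part of the original slides. It uses the signal-processing convention of Matlab's filter to compare the IIR model $y = \frac{z}{z-0.7}\, u$ with a 20-tap FIR truncation of its impulse response; the tap count is an assumption chosen to match the $z^{19}$ leading term above. It makes the "not quite so" point concrete: the FIR model has all of its poles at $z = 0$, yet its response is practically the same as that of the IIR model.

```matlab
% Sketch: IIR model H(z) = z/(z - 0.7) vs. its truncated-FIR version,
% using the signal-processing convention filter(b, a, u).
N    = 25;
u    = [1 zeros(1, N-1)];          % unit impulse input
bIIR = 1;       aIIR = [1 -0.7];   % H(z) = z/(z-0.7) = 1/(1 - 0.7 z^-1)
yIIR = filter(bIIR, aIIR, u);      % impulse response: 1, 0.7, 0.49, ...
hFIR = 0.7 .^ (0:19);              % keep the first 20 impulse-response samples
yFIR = filter(hFIR, 1, u);         % FIR model: A(z) = 1
stem(0:N-1, yIIR); hold on
stem(0:N-1, yFIR, '--');           % nearly indistinguishable responses
hold off
```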