Introduction to Time Series Analysis. Lecture 18


Introduction to Time Series Analysis. Lecture 18.
1. Review: Spectral density, rational spectra, linear filters.
2. Frequency response of linear filters.
3. Spectral estimation.
4. Sample autocovariance.
5. Discrete Fourier transform and the periodogram.

Review: Spectral density

If a time series $\{X_t\}$ has autocovariance $\gamma$ satisfying $\sum_{h=-\infty}^{\infty} |\gamma(h)| < \infty$, then we define its spectral density as
$$f(\nu) = \sum_{h=-\infty}^{\infty} \gamma(h)\, e^{-2\pi i \nu h}$$
for $-\infty < \nu < \infty$. We have
$$\gamma(h) = \int_{-1/2}^{1/2} e^{2\pi i \nu h} f(\nu)\, d\nu.$$

Review: Rational spectra

For a linear time series with MA($\infty$) polynomial $\psi$,
$$f(\nu) = \sigma_w^2 \left|\psi\!\left(e^{2\pi i \nu}\right)\right|^2.$$
If it is an ARMA($p, q$), we have
$$f(\nu) = \sigma_w^2 \frac{\left|\theta\!\left(e^{-2\pi i \nu}\right)\right|^2}{\left|\phi\!\left(e^{-2\pi i \nu}\right)\right|^2}
= \sigma_w^2 \frac{\theta_q^2 \prod_{j=1}^{q} \left|e^{-2\pi i \nu} - z_j\right|^2}{\phi_p^2 \prod_{j=1}^{p} \left|e^{-2\pi i \nu} - p_j\right|^2},$$
where $z_1, \ldots, z_q$ are the zeros (roots of $\theta(z)$) and $p_1, \ldots, p_p$ are the poles (roots of $\phi(z)$).

Review: Time-invariant linear filters

A filter is an operator: given a time series $\{X_t\}$, it maps to a time series $\{Y_t\}$. A linear filter satisfies
$$Y_t = \sum_{j=-\infty}^{\infty} a_{t,j} X_j.$$
Time-invariant ($a_{t,t-j} = \psi_j$):
$$Y_t = \sum_{j=-\infty}^{\infty} \psi_j X_{t-j}.$$
Causal ($j < 0$ implies $\psi_j = 0$):
$$Y_t = \sum_{j=0}^{\infty} \psi_j X_{t-j}.$$

The operation
$$\sum_{j=-\infty}^{\infty} \psi_j X_{t-j}$$
is called the convolution of $X$ with $\psi$.

The sequence $\{\psi_t\}$ is also called the impulse response, since the output of the linear filter in response to a unit impulse,
$$X_t = \begin{cases} 1 & \text{if } t = 0, \\ 0 & \text{otherwise}, \end{cases}$$
is
$$Y_t = \psi(B) X_t = \sum_{j=-\infty}^{\infty} \psi_j X_{t-j} = \psi_t.$$

Frequency response of a time-invariant linear filter

Suppose that $\{X_t\}$ has spectral density $f_x(\nu)$ and $\psi$ is stable, that is, $\sum_{j=-\infty}^{\infty} |\psi_j| < \infty$. Then $Y_t = \psi(B) X_t$ has spectral density
$$f_y(\nu) = \left|\psi\!\left(e^{2\pi i \nu}\right)\right|^2 f_x(\nu).$$
The function $\nu \mapsto \psi(e^{2\pi i \nu})$ (the polynomial $\psi(z)$ evaluated on the unit circle) is known as the frequency response or transfer function of the linear filter. The squared modulus, $\nu \mapsto |\psi(e^{2\pi i \nu})|^2$, is known as the power transfer function of the filter.

We have seen that a linear process, $Y_t = \psi(B) W_t$, is a special case, since
$$f_y(\nu) = \left|\psi\!\left(e^{2\pi i \nu}\right)\right|^2 \sigma_w^2 = \left|\psi\!\left(e^{2\pi i \nu}\right)\right|^2 f_w(\nu).$$
When we pass a time series $\{X_t\}$ through a linear filter, the spectral density is multiplied, frequency by frequency, by the squared modulus of the frequency response $\nu \mapsto |\psi(e^{2\pi i \nu})|^2$. This is a version of the equality $\mathrm{Var}(aX) = a^2 \mathrm{Var}(X)$, except that here the equality holds for the component of the variance at every frequency. This is also the origin of the name 'filter.'

Frequency response of a filter: Details

Why is $f_y(\nu) = |\psi(e^{2\pi i \nu})|^2 f_x(\nu)$? First,
$$\gamma_y(h) = E\left[\sum_{j=-\infty}^{\infty} \psi_j X_{t-j} \sum_{k=-\infty}^{\infty} \psi_k X_{t+h-k}\right]
= \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_j \psi_k\, E[X_{t+h-k} X_{t-j}]$$
$$= \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_j \psi_k\, \gamma_x(h+j-k)
= \sum_{j=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} \psi_j \psi_{h+j-l}\, \gamma_x(l),$$
where we substituted $l = h + j - k$. It is easy to check that $\sum_{j=-\infty}^{\infty} |\psi_j| < \infty$ and $\sum_{h=-\infty}^{\infty} |\gamma_x(h)| < \infty$ imply that $\sum_{h=-\infty}^{\infty} |\gamma_y(h)| < \infty$. Thus, the spectral density of $Y$ is defined. Then
$$f_y(\nu) = \sum_{h=-\infty}^{\infty} \gamma_y(h)\, e^{-2\pi i \nu h}
= \sum_{h=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} \psi_j \psi_{h+j-l}\, \gamma_x(l)\, e^{-2\pi i \nu h}$$
$$= \sum_{j=-\infty}^{\infty} \psi_j e^{2\pi i \nu j} \sum_{l=-\infty}^{\infty} \gamma_x(l)\, e^{-2\pi i \nu l} \sum_{h=-\infty}^{\infty} \psi_{h+j-l}\, e^{-2\pi i \nu (h+j-l)}$$
$$= \psi\!\left(e^{2\pi i \nu}\right) f_x(\nu) \sum_{h=-\infty}^{\infty} \psi_h\, e^{-2\pi i \nu h}
= \left|\psi\!\left(e^{2\pi i \nu}\right)\right|^2 f_x(\nu).$$
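To make this relation concrete, here is a minimal numerical check, a sketch assuming numpy and scipy are available; the filter coefficients $\psi = (1, 0.5, 0.25)$ are arbitrary illustrative values, not from the lecture.

```python
# Sketch: empirically verify f_y(nu) = |psi(e^{2 pi i nu})|^2 f_x(nu)
# for a white-noise input. Assumes numpy and scipy.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(2**16)       # white noise input, flat spectral density

psi = np.array([1.0, 0.5, 0.25])     # arbitrary causal filter (psi_0, psi_1, psi_2)
y = signal.lfilter(psi, [1.0], x)    # Y_t = sum_j psi_j X_{t-j}

# Welch (averaged-periodogram) estimates of the input and output spectra
nu, fx = signal.welch(x, nperseg=1024)
_,  fy = signal.welch(y, nperseg=1024)

# Power transfer function |psi(e^{-2 pi i nu})|^2 on the same frequency grid
lags = np.arange(len(psi))
H2 = np.abs(np.exp(-2j * np.pi * np.outer(nu, lags)) @ psi) ** 2

# The spectra should agree frequency by frequency, up to estimation error
print(np.max(np.abs(fy - H2 * fx)))  # small relative to H2 (a few percent)
```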
Frequency response: Examples

For a linear process $Y_t = \psi(B) W_t$,
$$f_y(\nu) = \left|\psi\!\left(e^{2\pi i \nu}\right)\right|^2 \sigma_w^2.$$
For an ARMA model, $\psi(B) = \theta(B)/\phi(B)$, so $\{Y_t\}$ has the rational spectrum
$$f_y(\nu) = \sigma_w^2 \frac{\left|\theta\!\left(e^{-2\pi i \nu}\right)\right|^2}{\left|\phi\!\left(e^{-2\pi i \nu}\right)\right|^2}
= \sigma_w^2 \frac{\theta_q^2 \prod_{j=1}^{q} \left|e^{-2\pi i \nu} - z_j\right|^2}{\phi_p^2 \prod_{j=1}^{p} \left|e^{-2\pi i \nu} - p_j\right|^2},$$
where $p_j$ and $z_j$ are the poles and zeros of the rational function $z \mapsto \theta(z)/\phi(z)$.

Example: Moving average

Consider the moving average
$$Y_t = \frac{1}{2k+1} \sum_{j=-k}^{k} X_{t-j}.$$
This is a time-invariant linear filter (but it is not causal). Its transfer function is the Dirichlet kernel
$$\psi\!\left(e^{-2\pi i \nu}\right) = D_k(2\pi\nu) = \frac{1}{2k+1} \sum_{j=-k}^{k} e^{-2\pi i j \nu}
= \begin{cases} 1 & \text{if } \nu = 0, \\ \dfrac{\sin(2\pi(k+1/2)\nu)}{(2k+1)\sin(\pi\nu)} & \text{otherwise.} \end{cases}$$

[Figure: transfer function of the moving average ($k = 5$), plotted over $0 \le \nu \le 0.5$.]

[Figure: squared modulus of the transfer function of the moving average ($k = 5$).]

This is a low-pass filter: it preserves low frequencies and diminishes high frequencies. It is often used to estimate a monotonic trend component of a series.

Example: Differencing

Consider the first difference
$$Y_t = (1 - B) X_t.$$
This is a time-invariant, causal, linear filter. Its transfer function is
$$\psi\!\left(e^{-2\pi i \nu}\right) = 1 - e^{-2\pi i \nu},$$
so
$$\left|\psi\!\left(e^{-2\pi i \nu}\right)\right|^2 = 2(1 - \cos(2\pi\nu)).$$

[Figure: transfer function of the first difference, plotted over $0 \le \nu \le 0.5$.]

This is a high-pass filter: it preserves high frequencies and diminishes low frequencies. It is often used to eliminate a trend component of a series.

Estimating the Spectrum: Outline

• We have seen that the spectral density gives an alternative view of stationary time series.
• Given a realization $x_1, \ldots, x_n$ of a time series, how can we estimate the spectral density?
• One approach: replace $\gamma(\cdot)$ in the definition
$$f(\nu) = \sum_{h=-\infty}^{\infty} \gamma(h)\, e^{-2\pi i \nu h}$$
with the sample autocovariance $\hat\gamma(\cdot)$.
• Another approach, called the periodogram: compute $I(\nu)$, the squared modulus of the discrete Fourier transform (at frequencies $\nu = k/n$).

• These two approaches are identical at the Fourier frequencies $\nu = k/n$.
• The asymptotic expectation of the periodogram $I(\nu)$ is $f(\nu)$. We can derive some asymptotic properties, and hence do hypothesis testing.
• Unfortunately, the asymptotic variance of $I(\nu)$ is constant: it is not a consistent estimator of $f(\nu)$.
• We can reduce the variance by smoothing the periodogram, that is, averaging over adjacent frequencies. If we average over a narrower range as $n \to \infty$, we can obtain a consistent estimator of the spectral density.

Estimating the spectrum: Sample autocovariance

Idea: use the sample autocovariance $\hat\gamma(\cdot)$, defined by
$$\hat\gamma(h) = \frac{1}{n} \sum_{t=1}^{n-|h|} \left(x_{t+|h|} - \bar x\right)\left(x_t - \bar x\right), \qquad \text{for } -n < h < n,$$
as an estimate of the autocovariance $\gamma(\cdot)$, and then use a sample version of
$$f(\nu) = \sum_{h=-\infty}^{\infty} \gamma(h)\, e^{-2\pi i \nu h}.$$
That is, for $-1/2 \le \nu \le 1/2$, estimate $f(\nu)$ with
$$\hat f(\nu) = \sum_{h=-n+1}^{n-1} \hat\gamma(h)\, e^{-2\pi i \nu h}.$$
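A minimal sketch of this estimator, implemented directly from the definitions above (assuming numpy); the $O(n^2)$ loops mirror the formulas rather than using the FFT. Since $\hat\gamma(-h) = \hat\gamma(h)$, the sum collapses to a cosine series.

```python
# Sketch: sample autocovariance and the spectral estimate f_hat. Assumes numpy.
import numpy as np

def sample_acov(x):
    """gamma_hat(h) for h = 0, ..., n-1, with the 1/n divisor."""
    n = len(x)
    xc = x - x.mean()
    return np.array([np.sum(xc[h:] * xc[:n - h]) for h in range(n)]) / n

def f_hat(x, nus):
    """f_hat(nu) = sum_{|h| < n} gamma_hat(h) e^{-2 pi i nu h}.
    gamma_hat(-h) = gamma_hat(h), so this equals
    gamma_hat(0) + 2 sum_{h>=1} gamma_hat(h) cos(2 pi nu h)."""
    g = sample_acov(x)
    h = np.arange(1, len(x))
    return np.array([g[0] + 2 * np.sum(g[1:] * np.cos(2 * np.pi * nu * h))
                     for nu in nus])

# Example: for white noise (true f(nu) = 1), f_hat fluctuates around 1 and
# does not settle down as n grows; it is not consistent, as noted above.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
print(np.round(f_hat(x, np.linspace(0.0, 0.5, 6)), 2))
```

At the Fourier frequencies $\nu = k/n$ this estimate coincides with the periodogram, defined next.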
Estimating the spectrum: Periodogram

Another approach to estimating the spectrum is called the periodogram. It was proposed in 1897 by Arthur Schuster (at Owens College, which later became part of the University of Manchester), who used it to investigate periodicity in the occurrence of earthquakes and in sunspot activity.

Arthur Schuster, "On Lunar and Solar Periodicities of Earthquakes," Proceedings of the Royal Society of London, Vol. 61 (1897), pp. 455–465.

To define the periodogram, we need to introduce the discrete Fourier transform of a finite sequence $x_1, \ldots, x_n$.

Discrete Fourier transform

For a sequence $(x_1, \ldots, x_n)$, define the discrete Fourier transform (DFT) as $(X(\nu_0), X(\nu_1), \ldots, X(\nu_{n-1}))$, where
$$X(\nu_k) = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} x_t\, e^{-2\pi i \nu_k t},$$
and $\nu_k = k/n$ (for $k = 0, 1, \ldots, n-1$) are called the Fourier frequencies. (Think of $\{\nu_k : k = 0, \ldots, n-1\}$ as the discrete version of the frequency range $\nu \in [0, 1]$.)

First, let's show that we can view the DFT as a representation of $x$ in a different basis, the Fourier basis. Consider the space $\mathbb{C}^n$ of vectors of $n$ complex numbers, with inner product $\langle a, b \rangle = a^* b$, where $a^*$ is the complex conjugate transpose of the vector $a \in \mathbb{C}^n$. Suppose that a set $\{\phi_j : j = 0, 1, \ldots, n-1\}$ of $n$ vectors in $\mathbb{C}^n$ is orthonormal:
$$\langle \phi_j, \phi_k \rangle = \begin{cases} 1 & \text{if } j = k, \\ 0 & \text{otherwise.} \end{cases}$$
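A minimal sketch of the DFT and periodogram under the conventions just introduced, assuming numpy. In practice np.fft.fft is the standard tool; it indexes from $t = 0$ and omits the $1/\sqrt{n}$ factor, which changes phases but not moduli.

```python
# Sketch: DFT at the Fourier frequencies nu_k = k/n, with the 1/sqrt(n)
# normalization and t = 1, ..., n indexing used above. Assumes numpy.
import numpy as np

def dft(x):
    """X(nu_k) = n^{-1/2} sum_{t=1}^n x_t e^{-2 pi i k t / n}, k = 0,...,n-1."""
    n = len(x)
    t = np.arange(1, n + 1)
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, t) / n) @ x / np.sqrt(n)

def periodogram(x):
    """I(nu_k) = |X(nu_k)|^2. For k != 0 this is unchanged by subtracting
    the sample mean, since each nonconstant Fourier vector sums to zero."""
    return np.abs(dft(x)) ** 2

# The Fourier basis phi_k = n^{-1/2} (e^{2 pi i nu_k t})_{t=1..n} is
# orthonormal, so the DFT is just a change of basis: <phi_j, phi_k> = delta_jk.
n = 8
t = np.arange(1, n + 1)
Phi = np.exp(2j * np.pi * np.outer(t, np.arange(n)) / n) / np.sqrt(n)
print(np.allclose(Phi.conj().T @ Phi, np.eye(n)))   # True
```

Consistent with the outline's claim, for $k \ne 0$ this $I(\nu_k)$ agrees with $\hat f(\nu_k)$ computed from the sample autocovariance.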