Lab 7: Filters and Convolution

One of the most remarkable properties of a linear dynamical system is its response to sinusoidal inputs. As we learned in lecture, a sinusoidal input to a linear dynamical system produces a sinusoidal output having the same frequency as the input. However, the output sinusoid can differ in two ways from the input sinusoid: 1) its amplitude, and 2) its phase. Measurements of the phase and amplitude of the output to a sinusoidal input can be used to identify the differential equation which best captures the input-output relationships of the dynamical system. The graphical technique for estimating the transfer function from these experimental observations is the Bode plot. Once the transfer function has been determined, the response of the dynamical system to any realizable input can be predicted. The purpose of this laboratory is to use computer experiments to verify these observations and to introduce the concepts of a filter and convolution.

1 Background

The lecture briefly reviews the concepts of an impulse function, filtering and convolution. A practical, but important, detail concerns the representation of a sine wave in a computer program. We can write a simple program that plots a 10 second interval of a sine wave for a given frequency, f, and a given time step.

import numpy as np
import matplotlib.pyplot as plt

f = 0.1
N = 11
NN = 101
start = 0.0
stop = 100.0
t = np.linspace(start, stop, num=N, endpoint=True)
tt = np.linspace(start, stop, num=NN, endpoint=True)
plt.plot(t, np.sin(2*np.pi*f*t), 'k-o')
plt.plot(tt, np.sin(2*np.pi*f*tt), 'r-')
plt.axis([0, stop, -1.1, 1.1])
plt.show()

The period, T, is 1/f, so when f = 0.1 we have T = 10. If we choose start and stop to be, respectively, 0 and 100, then this program draws ten cycles of a 0.1 Hz sine wave, provided that N is chosen appropriately (red line in the figure produced by this program). The positive integer N is the number of equally spaced time samples between, in this case, 0 and 100. It is important to remember that by choosing the endpoint=True option for np.linspace() both the starting and the end point are included, N points altogether. (Note that to obtain, say, 10 evenly spaced time steps you need to set num to 11, i.e., one more than the number of steps; this is why the program above uses N = 11 and NN = 101.) The time step is

    dt = (stop - start)/N

What is a good choice of the number of time steps, N? If we choose N = 10, then dt = 10, i.e., one sample per period, and we would obtain 10 identical values of x, since we sample once per period and the time series is 10 periods long (compare the black and red lines). This is not a good choice of N, since we cannot uniquely represent a function using a single value. Next we could try N = 20, which would give us two points per period. In principle we can uniquely represent a sine wave if we sample at twice its frequency (this is the concept of the Nyquist frequency that we will discuss in the next lab); however, x would not look like a nice sine wave. So we must pick N larger: the larger N, the smaller dt, and hence the more frequently the sine wave is sampled. However, it is also true that the larger N, the more computer memory it takes to store x and t, and the greater the number of calculations the computer must make. Thus there is a practical trade-off between the accuracy with which functions are represented and the time it takes the computer to complete its calculations. The bottom line is that as we change f we need to adjust dt. A reasonable choice might be to sample the sine wave 10-100 times per period. Ideally we would like to keep the number of sample points per period the same as we change f.
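For example, one way to automate this choice is sketched below. This is not part of the lab handout; the helper name sample_times and the default of 50 samples per period are arbitrary illustrative choices.

import numpy as np

def sample_times(f, start=0.0, stop=100.0, samples_per_period=50):
    """Return sample times whose spacing tracks the frequency f."""
    T = 1.0 / f                                        # period of the sine wave
    n_steps = int(np.ceil((stop - start) / T * samples_per_period))
    # endpoint=True includes both start and stop, hence num = n_steps + 1
    return np.linspace(start, stop, num=n_steps + 1, endpoint=True)

t = sample_times(0.1)                                  # dt = 0.2, i.e. 50 samples per period
x = np.sin(2 * np.pi * 0.1 * t)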
Browser use: Have available the document intro_python.pdf on your browser for review, in particular the sections that deal with how to declare a function. As always, information concerning the use of Python and matplotlib functions can be obtained from the Internet using a browser.

Housekeeping: Labs 8 and 9 use some of the programs and results that we develop in this lab. Thus it is useful to put today's material in a subdirectory of ~\pyprogs called, for example, convol, namely ~\pyprogs\convol. This will make it easier for you to find these results when we need them.

Exercise 1: Bode plots: low-pass filter

1. Write a computer program to integrate

       dx/dt = k1 x + sin(2πft) ,    (1)

   where k1 is a constant. (One possible starting point is sketched after this exercise.)

2. Take k1 = -1.

3. Vary the frequency over a large range (say f = 0.01 to f = 100) and measure the amplitude. Plot the amplitude versus frequency.

4. Vary the frequency and measure the shift between the sine wave and the solution of the equation. For example, we can plot the input and output sine waves on the same figure. Plot the phase shift versus the frequency.

5. Now change k1 to, let's say, k1 = 10 and then k1 = 0.1. What happens?

6. Construct the Bode plot by choosing a suitable range of f. For each choice of f, record the amplitude and phase shift of the steady-state solution and write these values to a *.tsv file. Use this *.tsv file to construct the phase part of the Bode plot.

7. You have just constructed your first low-pass filter. Answer the following:

   • The frequency of the input sine wave and the output sine wave is the same. True or false?
   • The output amplitude changes as a function of frequency. True or false?
   • The output sine wave is phase shifted compared to the input sine wave. True or false?
   • Why is this type of filter called a low-pass filter?

8. What is the transfer function for (1)?

9. Show that the impulse function, I(t, t'), is

       I(t, t') = k1 exp(k1 t) .

   (Remember that the impulse function corresponds to the case when the input is a delta function.) Is this the same as the answer we derived in class (Section 7.3)?

10. The transfer function, C(s), for the thermometer is of the form

        C(s) = 1/(1 + αs) .

    Sketch the Bode plot. What is the corner frequency?

11. Consider the transfer function

        C(s) = 10 / [s(1 + 0.5s)(1 + 0.1s)] .

    Sketch the Bode plot for the amplitude showing the contribution of each term.
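One possible starting point for item 1 is sketched below. This assumes SciPy is available; the zero initial condition, the ten-period time window, and the crude peak-to-peak amplitude estimate are illustrative choices rather than the required method.

import numpy as np
from scipy.integrate import odeint

k1 = -1.0
f = 0.1

def dxdt(x, t):
    # right-hand side of equation (1)
    return k1 * x + np.sin(2 * np.pi * f * t)

t = np.linspace(0.0, 10.0 / f, 2000)      # ten periods of the input
x = odeint(dxdt, 0.0, t)[:, 0]

# discard the transient and estimate the steady-state amplitude;
# the phase shift can be read off by plotting x and the input together
steady = x[t > 5.0 / f]
amplitude = 0.5 * (steady.max() - steady.min())
print(f, amplitude)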
Exercise 2: The convolution integral

The concept of convolution is one of the most important concepts in science: every time one uses a device to make a measurement, one essentially performs a convolution. All physical observations are limited by the resolving power of instruments, and for this reason alone convolution is ubiquitous. This principle does not apply only to laboratory instruments: our senses, e.g. vision, touch, smell, audition, proprioception, are also convolved by the sensory receptors which translate physical stimuli into neural spike trains. This convolution is expressed mathematically as the operation

    y(t) = ∫_{-∞}^{∞} x(u) h(t - u) du ,    (2)

where x(t) describes the input to the dynamical system and h(t - u) describes the impulse function. In practice it is very easy to perform a convolution numerically. In Python this is accomplished using the single command np.convolve() (similar commands exist, for example, in MATLAB). In order to use the convolution integral to solve (1) we need to know the impulse function.

Convolution: Graphical interpretation

Although the mechanics of evaluating the convolution integral seem straightforward, it is quite difficult to understand in a simple way the process of evaluating the convolution integral. The purpose of this exercise is to demonstrate a graphical interpretation of convolution [1]. This graphical method is very useful and, for example, we can use it to understand issues related to zero-padding and to appreciate the difference between convolution and correlation (Lab 14).

Figure 1: Graphical method for convolving a delta-function input (vertical dashed line in the 'Multiplication' panel) with an alpha-type impulse response (solid line). See text for discussion.

The graphical convolution method divides the process of convolution into four steps [1]:

1. Folding: Reflect the impulse response, I(t), about the ordinate axis to obtain I(-t).

2. Displacement: Shift I(-t) by an amount ∆t to obtain I(-t + ∆t).

3. Multiplication: Multiply I(-t + ∆t) by the input, b(t).

4. Integration: Determine the area under the curve produced by I(-t + ∆t) and b(t).

Figure 1 illustrates this procedure in the evaluation of the alpha-function response, I(t), of a neuron to a single delta-function input, b(t). The step 'Folding' means that we reflect the impulse response about the y-axis. The step 'Displacement' means that we slowly shift the reflected impulse in the positive direction along the x-axis. The step 'Multiplication' means that we multiply the reflected impulse by the input. It is easiest to understand this step iteratively: for each time step do the multiplication (and then Step 4), then displace the reflected impulse response another time step to the right and repeat the process. In general this corresponds to the shaded region under the two curves for which both functions are non-zero. Finally, the step 'Integration' calculates the area under the two curves.
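The four steps can also be carried out literally in a few lines of NumPy. The sketch below is illustrative only: the alpha-function form alpha^2 * t * exp(-alpha*t), the pulse time, and the step size are assumptions, and the last two lines simply check the result against np.convolve().

import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
alpha = 1.0
I = alpha**2 * t * np.exp(-alpha * t)      # assumed alpha-type impulse response
b = np.zeros_like(t)
b[int(5.0 / dt)] = 1.0 / dt                # delta-function input at t = 5 (unit area)

# Step 1: folding -- reflect the impulse response to obtain I(-t)
I_folded = I[::-1]

y = np.zeros_like(t)
for k in range(len(t)):
    # Step 2: displacement -- shift the folded response by k time steps
    shifted = np.zeros_like(t)
    shifted[:k + 1] = I_folded[len(t) - k - 1:]
    # Step 3: multiplication by the input
    product = shifted * b
    # Step 4: integration -- area under the product
    y[k] = np.sum(product) * dt

# the same result in one line
y_check = np.convolve(b, I)[:len(t)] * dt
assert np.allclose(y, y_check)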