Useful MATLAB Functions


Appendix A: Useful MATLAB® Functions

The following are useful MATLAB® functions for circuit analysis and synthesis. The reader is encouraged to consult the reference guide and the "Help" directory in any MATLAB® window for a detailed description of each function. (The Academia area of The MathWorks Web site provides many resources for professors and students: contributed course materials, recommended products by curriculum, tutorials on MATLAB® software, and information on the MATLAB® Student Version. See www.mathworks.com/products/academia/)

Ax = b: The linear equation Ax = b is solved by invoking x = A\b. See also linsolve.

Bode plot: [mag, phs, w] = bode(n, d, w) places the magnitude in vector mag and the phase in vector phs, at each frequency in vector w, for a transfer function whose numerator polynomial is n and whose denominator polynomial is d.

Convolution: y = conv(p, q) obtains the product of two polynomials represented by their coefficient vectors p and q.

Eigenvalues: [V, D] = eig(A) places the eigenvalues of A in a diagonal matrix D and their corresponding eigenvectors in V.

Elliptic functions: [sn, cn, dn] = ellipj(u, m) yields sn(u, k), cn(u, k), and dn(u, k); [K, E] = ellipke(m) yields K(k) and E(k), where m = k².

Factors: y = factor(f) obtains the factors of a symbolic polynomial f.

Imaginary part: y = imag(z) returns the imaginary part of z.

Linear equations: y = linsolve(A, b) obtains the symbolic or numerical solution of the linear equation Ax = b, where A and b may be numeric or symbolic.

Matrix exponential: H = expm(A*t), where A is a matrix of simple rational numbers and t is declared symbolic, obtains the impulse response matrix H(t) in analytic form.

Numerator and denominator: [n, d] = numden(f) places the numerator of a symbolic rational function f in n and its denominator in d.

Ordinary differential equations: dsolve(...) obtains the symbolic solution of an ordinary differential equation.

Polynomial from its roots: p = poly(r) obtains the coefficients of a polynomial, in descending order, in a row vector p, given its roots in a column vector r.

Polynomial to symbolic: p = poly2sym(q, 's') converts a vector q of polynomial coefficients to its symbolic representation in s.

Random numbers (normal): y = m + s*randn generates a random number drawn from a normal distribution with mean m and variance s².

Random numbers (uniform): y = a + (b - a)*rand generates a random number drawn from a uniform distribution over [a, b].

Real part: y = real(z) returns the real part of z.

Residues: [r, p, k] = residue(b, a) obtains the partial fraction expansion of the rational function b/a, where b and a are the coefficient vectors of the numerator and denominator polynomials. The residues are placed in vector r and the poles in vector p; vector k contains the coefficients of any excess polynomial.

Roots of a polynomial: r = roots(p) obtains a column vector of the roots of polynomial p, expressed as a row vector of coefficients in descending order.

Step response: y = step(num, den, t) produces the step response of a transfer function whose numerator polynomial is num and whose denominator polynomial is den, both expressed as coefficient vectors, for the vector of time points t.

Simplify: y = simplify(f) obtains a simplified version of a symbolic function f.
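
As an illustration, the short script below applies several of the functions listed so far to a simple transfer function. It is a minimal sketch: the numerator, denominator, and frequency and time grids are assumed values chosen for the example, and bode and step as used here require the Control System Toolbox.

% Assumed example transfer function H(s) = 10 / ((s + 1)(s^2 + 2s + 10))
num = 10;
den = conv([1 1], [1 2 10]);       % polynomial product (s + 1)(s^2 + 2s + 10)

p  = roots(den);                   % poles of H(s), returned as a column vector
d2 = poly(p);                      % denominator coefficients rebuilt from its roots
[r, pr, k] = residue(num, den);    % partial fraction expansion: residues, poles, excess polynomial

w = logspace(-1, 2, 200);          % frequency points, rad/s
[mag, phs] = bode(num, den, w);    % magnitude and phase of H(jw) at each frequency

t = 0:0.01:8;                      % time points, s
y = step(num, den, t);             % step response at the times in t

x = [2 1; 1 3] \ [1; 2];           % the linear equation Ax = b solved as x = A\b
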
Substitution: y = subs(f, 's', w) replaces each occurrence of s by each member of vector w in the symbolic function f. This function is useful in computing the frequency response of a rational function f when w is a vector of frequency points.

Symbolic to polynomial: q = sym2poly(p) converts a symbolic representation of a polynomial p to its representation as a coefficient vector q.

Transfer function: [num, den] = ss2tf(A, B, C, D, k), where A, B, C, D are the matrices of a state equation in normal form, with the kth input component set to 1 and all others to zero, returns the transfer functions whose numerator polynomials are in num and whose denominator polynomial, the same for all of them, is in den.

Transfer function to zeros and poles: [z, p, k] = tf2zp(num, den) places the zeros of a transfer function with numerator num and denominator den in vector z, the poles in vector p, and the gain in k. The vector num contains the coefficients of the numerator polynomial and den those of the denominator polynomial.

Zeros and poles to transfer function: [num, den] = zp2tf(z, p, k) forms the transfer function with numerator polynomial num and denominator polynomial den from a vector of zeros z, a vector of poles p, and a multiplicative constant k.
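
The state-space and zero-pole conversions above can be chained together. The sketch below uses an assumed two-state, single-input example (the matrices and the frequency grid are illustrative, not taken from the text), and then evaluates the frequency response by symbolic substitution as described under Substitution.

% Assumed state equation x' = Ax + Bu, y = Cx + Du, with two states and one input
A = [0 1; -10 -2];
B = [0; 1];
C = [1 1];
D = 0;

[num, den] = ss2tf(A, B, C, D, 1);  % transfer function seen from input 1: num(s)/den(s)
[z, p, k]  = tf2zp(num, den);       % zeros, poles, and gain of that transfer function
[n2, d2]   = zp2tf(z, p, k);        % numerator and denominator rebuilt from z, p, k

% Frequency response by substituting s = jw into the symbolic rational function
syms s
F  = poly2sym(num, s) / poly2sym(den, s);   % symbolic form of num(s)/den(s)
w  = 0:0.5:10;                              % frequency points, rad/s
Fw = double(subs(F, s, 1i*w));              % F(jw) evaluated at each frequency point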