An Elementary Introduction to Linear Dynamical Systems


Bijan Bagchi
Department of Applied Mathematics, University of Calcutta
E-mail: bbagchi123@rediffmail.com
18.01.2012

What is a dynamical system (DS)?

Mathematically, a DS deals with an initial value problem of the type

$$\frac{d\vec{x}}{dt} = \vec{f}(t, \vec{x}, u)$$

where $\vec{x}$ denotes a vector with components $(x_1, x_2, \ldots, x_k) \in \mathbb{R}^k$, $t$ is the time, $\vec{f}$ is a vector flow, $\vec{f} = (f_1, f_2, \ldots, f_k)$, $\{u\}$ is a set of auxiliary objects, and there exists a set of initial conditions $\vec{x}_0 = [x_1(0), x_2(0), \ldots, x_k(0)]$. Thus we write

$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}, \qquad
\vec{f} = \begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_k \end{pmatrix}, \qquad
\vec{x}_0 = \begin{pmatrix} x_1(0) \\ x_2(0) \\ \vdots \\ x_k(0) \end{pmatrix}.$$

The case $k = 1$ is trivial, implying a scalar equation with the solution

$$x = x_0 + \int_0^t f(s, x, u)\, ds.$$

The case $k = 2$ has the form (with no explicit presence of $t$)

$$\dot{x}_i = f_i(x_1, x_2), \qquad i = 1, 2$$

where we also assume $f_i(x_1, x_2)$ to be continuously differentiable in the neighborhood of $[x_1(0), x_2(0)]$.

Physically, a DS addresses an evolutionary problem related to some physical, chemical or biological system, or to lattice maps or cellular automata, in which the values of the variables may be specified at uniformly spaced points of time (difference equations) or with respect to their time derivatives (differential equations).

Take the simplest canonical form, given by the equation

$$\frac{d\vec{x}}{dt} = \dot{\vec{x}} = \vec{f}(\vec{x})$$

It is called autonomous when $\vec{f}$ does not depend on time directly but only through the state variable $\vec{x}$. The space $\mathbb{R}^k$, or an appropriate subspace of dependent variables, is referred to as the state space, phase space or configuration space.

An autonomous system has the formal form

$$\frac{d\vec{x}}{dt} = \vec{f}(\vec{x})$$

where $\vec{f}$ has no dependence on $t$, $\vec{f}$ is a map $\mathbb{R}^k \to \mathbb{R}^k$ and $\vec{x} = (x_1, x_2, \ldots, x_k)$. If the map $\vec{f}$ is linear, the DS is termed a linear dynamical system. It then has the form

$$\frac{d\vec{x}}{dt} = \vec{f}(\vec{x}) = [A]\vec{x} \equiv A\vec{x}$$

where $[A]$ is a $k \times k$ matrix with constant elements: $A_{\alpha\beta} = \text{constant}$, $1 \le \alpha, \beta \le k$. For $k = 2$, $\vec{f}$ is given by

$$\vec{f}(\vec{x}) = A\vec{x} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \vec{x}$$

Note that a second-order ODE of the general type

$$\ddot{x} + a\dot{x} + bx + c = 0$$

can be decomposed into a set of coupled first-order ODEs:

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -a x_2 - b x_1 - c$$

where $x_1 = x$ and $x_2 = \dot{x}$. Thus the $k = 2$ case is of much interest.

Definitions

• $\vec{f}(t, \vec{x}, u)$ is called a vector flow or a vector field.

• A system of ODEs is called autonomous if the vector flow does not depend on $t$. Otherwise the system is non-autonomous.

• For an autonomous system with $k \le 3$, the representation of the phase space is useful:

  $k = 1$: $(x, dx/dt)$ phase plane
  $k = 2$: $(x, y)$ phase plane
  $k = 3$: $(x, y, z)$ phase space

  It is clear that if the vector flow is plotted at each point of the phase plane ($k = 2$), it depicts the phase portrait of the system of ODEs.

• $x^*$ is a fixed point or an equilibrium point if there is no movement at $x^*$, i.e. $f(t, x^*, u) = 0$.

Example 1

Consider the damped pendulum

$$\ddot{\theta} + \gamma\dot{\theta} + \frac{g}{l}\sin\theta = 0$$

With $x = \theta$, $y = \dot{\theta}$ it can be translated to the pair

$$\dot{x} = y, \qquad \dot{y} = -\frac{g}{l}\sin x - \gamma y$$

$\Rightarrow$ two fixed points $(x = 0, y = 0)$ and $(x = \pi, y = 0)$, and more generally $x = 2n\pi$, $y = 0$ and $x = (2n+1)\pi$, $y = 0$.
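As a quick numerical illustration of Example 1 (not part of the original notes), the coupled pair for the damped pendulum can be integrated directly. The sketch below uses scipy's `solve_ivp`; the parameter values $g/l = 1$, $\gamma = 0.5$ and the initial condition are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped pendulum of Example 1:  x' = y,  y' = -(g/l) sin x - gamma * y
# The parameter values below are illustrative assumptions, not from the notes.
G_OVER_L = 1.0
GAMMA = 0.5

def pendulum(t, state):
    x, y = state
    return [y, -G_OVER_L * np.sin(x) - GAMMA * y]

# Start near the fixed point (pi, 0) and integrate forward in time.
sol = solve_ivp(pendulum, t_span=(0.0, 40.0), y0=[np.pi - 0.1, 0.0], rtol=1e-8)

x_final, y_final = sol.y[:, -1]
print(f"state at t = 40: x = {x_final:.4f}, y = {y_final:.4f}")
# The damped trajectory settles onto one of the fixed points x = 2*n*pi, y = 0
# (here x close to 0), consistent with the fixed-point analysis above.
```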
Example 2

Consider a two-dimensional case in which

$$\dot{x}_1 = -x_1, \qquad \dot{x}_2 = -x_2$$

The solutions are

$$x_1 = k_1 \exp(-t), \qquad x_2 = k_2 \exp(-t)$$

The fixed point is clearly the origin $x_1 = 0 = x_2$. If we eliminate $t$ we get

$$x_1 = \frac{k_1}{k_2}\, x_2$$

$\Rightarrow$ the trajectories are straight lines. What is the relevance of the fixed point with regard to the trajectories? For this we notice

$$\lim_{t \to \infty} x_i(t) = \lim_{t \to \infty} k_i \exp(-t) = 0$$

$\Rightarrow$ the fixed point will be reached in the infinite future: the time directions point toward the origin (Figure 1: phase portrait).

Dynamical configurations for k = 2

We discuss two-variable linear autonomous systems. The general form, as noted earlier, can be written as

$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$

where $a_{11}, a_{12}, a_{21}, a_{22} \in \mathbb{R}$. The equilibrium point, for which $\dot{x} = 0$ and $\dot{y} = 0$, is $x = y = 0$ if $\det(A) \ne 0$.

The characteristic equation is $\det(A - \lambda I) = 0$, where $I$ is the $2 \times 2$ identity matrix:

$$\det \begin{pmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{pmatrix} = 0$$

$$\Rightarrow \lambda^2 - (a_{11} + a_{22})\lambda + (a_{11}a_{22} - a_{12}a_{21}) = 0$$

$$\Rightarrow \lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A) = 0$$

The eigenvalues are

$$\lambda_{1,2} = \frac{1}{2}\left(\operatorname{tr}(A) \pm \sqrt{\Delta}\right)$$

where

$$\Delta = (\operatorname{tr}(A))^2 - 4\det(A) = (a_{11} + a_{22})^2 - 4(a_{11}a_{22} - a_{12}a_{21}) = (a_{11} - a_{22})^2 + 4 a_{12} a_{21}$$

is the discriminant.

The different types of dynamic behavior of this $k = 2$ system are dictated by the signs of the eigenvalues $\lambda_1, \lambda_2$, which in turn are controlled by the trace and determinant of $A$. We distinguish the various features of the discriminant as shown in Figure 2; the qualitative behavior of the corresponding tendencies of $\lambda_1, \lambda_2$ is shown in Figures 3 and 4.

To illustrate how the above procedure works, let us consider the following example of a damped oscillator:

$$\ddot{x} + 2b\dot{x} + \omega^2 x = 0 \qquad (b > 0)$$

An equivalent coupled system is

$$\dot{x} = y, \qquad \dot{y} = -2by - \omega^2 x$$

Fixed point: $x = 0$, $y = 0$. The characteristic polynomial is given by

$$\det \begin{pmatrix} -\lambda & 1 \\ -\omega^2 & -2b - \lambda \end{pmatrix}$$

and the eigenvalues are determined from

$$\det \begin{pmatrix} -\lambda & 1 \\ -\omega^2 & -2b - \lambda \end{pmatrix} = 0 \;\Rightarrow\; \lambda_{1,2} = -b \pm i\sqrt{\omega^2 - b^2}$$

or from the relation $\lambda_{1,2} = \frac{1}{2}\left(\operatorname{tr}(A) \pm \sqrt{\Delta}\right)$, where in this case $\Delta = -4(\omega^2 - b^2)$ and $\operatorname{tr}(A) = -2b$. We therefore summarize:

• an unstable node for $b < -\omega < 0$
• an unstable spiral for $-\omega < b < 0$
• a stable spiral for $0 < b < \omega$
• a stable node for $b > \omega > 0$

There is a change in the character of the critical point from spiral to node at $b = \omega$ (see the code sketch following this section).

General character of a linear DS: phase plane analysis

Phase plane analysis was first developed by H. Poincaré; we will discuss it in the context of studying elementary singular points.

The $(x, y)$ plane is the phase plane (real domain). The pair

$$\frac{dx}{dt} = P(x, y, t), \qquad \frac{dy}{dt} = Q(x, y, t)$$

generates a second-order differential equation (with $y = dx/dt$). Eliminating $dt$ gives the phase curve

$$\frac{dy}{dx} = \frac{Q(x, y, t)}{P(x, y, t)}$$

A singular point $(x_0, y_0)$ is a point for which $P(x_0, y_0) = 0 = Q(x_0, y_0)$; an ordinary point is any point without this property.

Interpretation: For an ordinary point there is a definite slope of the tangent to the trajectory through it. For a singular point the tangent direction is indeterminate and the trajectory degenerates into a single point, the singular point itself.

Cauchy's criterion: Through an ordinary point of the phase plane passes one, and only one, phase trajectory.

Classical mechanics: We typically have the general solution

$$x = x(t - t_0; x_0, y_0) = x(t), \qquad y = y(t - t_0; x_0, y_0) = y(t)$$

where the initial point is $(x_0, y_0)$ at $t = t_0$. Eliminating $(t - t_0)$ gives the trajectory.

Summary: $dx/dt$ and $dy/dt$ prescribe the law of motion; $dy/dx$ prescribes a certain geometric curve passing through a given point.
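The trace, determinant and discriminant classification above translates directly into a few lines of code. The following sketch is an added illustration, not part of the original notes; the function name `classify_2d_linear`, the tolerance, and the sample values $b = 0.3$, $\omega = 1$ are assumptions chosen for demonstration.

```python
import numpy as np

def classify_2d_linear(A, tol=1e-9):
    """Classify the fixed point at the origin of x' = A x (A real 2x2)
    from the trace, determinant and discriminant, as in the notes."""
    tr = float(np.trace(A))
    det = float(np.linalg.det(A))
    disc = tr**2 - 4.0 * det              # Delta = (tr A)^2 - 4 det A
    if abs(det) < tol:
        return "degenerate (det A = 0)"
    if det < 0:
        return "saddle point"             # real eigenvalues of opposite sign
    if disc > tol:                        # real, distinct eigenvalues of one sign
        return "stable node" if tr < 0 else "unstable node"
    if disc < -tol:                       # complex conjugate eigenvalues
        if abs(tr) < tol:
            return "centre (vortex point)"
        return "stable spiral (focal point)" if tr < 0 else "unstable spiral (focal point)"
    return "borderline case (Delta = 0)"

# Damped oscillator x'' + 2 b x' + w^2 x = 0 with illustrative values b = 0.3, w = 1.0:
b, w = 0.3, 1.0
A = np.array([[0.0, 1.0], [-w**2, -2.0 * b]])
print(classify_2d_linear(A))              # expected: stable spiral, since 0 < b < w
print(np.linalg.eigvals(A))               # approximately -b +/- i*sqrt(w^2 - b^2)
```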
Elementary singular points

• Vortex point (centre): SHM equation (attractive force)

$$m\ddot{x} = -kx, \qquad k > 0 \quad \text{(stable motion)}$$

Factorized form:

$$\frac{dx}{dt} = y, \qquad \frac{dy}{dt} = -\left(\frac{k}{m}\right)x = -\omega^2 x$$

Elimination of $t$ gives

$$\frac{dy}{dx} = -\omega^2\,\frac{x}{y} \;\Rightarrow\; \frac{x^2}{\alpha^2} + \frac{y^2}{\beta^2} = 1 \quad \text{(ellipse)}$$

with $\alpha^2 = 2h/\omega^2$, $\beta^2 = 2h$, $h$ an integration constant.

$x = 0 = y$ is the centre of the ellipse and represents the singular point called the vortex point (centre). The trajectories, which are closed curves, can be seen to enclose the vortex point in their interior, but none of them approaches the vortex point.

Figure 5: $y = \dot{x}$ (centre)

• Saddle point: Consider the case of a repulsive force

$$m\ddot{x} = kx, \qquad k > 0$$

Factorized form:

$$\frac{dx}{dt} = y, \qquad \frac{dy}{dt} = \left(\frac{k}{m}\right)x = \omega^2 x$$

Elimination of $t$ gives

$$\frac{dy}{dx} = \omega^2\,\frac{x}{y} \;\Rightarrow\; y^2 - \omega^2 x^2 = h, \qquad h \text{ an integration constant (hyperbola)}$$

$h = 0$ gives the asymptotes. $x = 0 = y$ is a singular point called the saddle point or a critical point. There are two singular trajectories passing through the saddle point. Note that the motion on these two trajectories is asymptotic, approaching the saddle point for $t = +\infty$ or $t = -\infty$.

Figure 6: $y = \dot{x}$ (saddle point or critical point)

• Focal point: Let us focus on the damped SHM:

$$\ddot{x} = -\omega^2 x - 2b\dot{x} \qquad (b^2 - \omega^2 < 0:\ \text{underdamping})$$

It corresponds to the factorized pair

$$\frac{dx}{dt} = y, \qquad \frac{dy}{dt} = -2by - \omega^2 x$$

implying

$$\frac{dy}{dx} = -\frac{2by + \omega^2 x}{y}$$

$x = 0 = y$ is the singular point called the focal point. In the $(x, t)$ variables the elementary solution is

$$x = x_0 e^{-bt} \cos(\bar{\omega} t + \alpha), \qquad \bar{\omega} = \sqrt{\omega^2 - b^2}, \quad x_0, \alpha \ \text{integration constants}$$

which tends to $0$ as $t \to \infty$ (the second factor remains bounded). We are led to spirals approaching the focal point.
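The three elementary singular points can be visualized by plotting the vector flow $(y, \dot{y})$ over a grid, in the spirit of Figures 5 and 6. The matplotlib sketch below is an added illustration; the values $\omega = 1$ and $b = 0.25$ are assumptions chosen so that the closed orbits (centre), hyperbolic trajectories (saddle) and inward spirals (focal point) are clearly visible.

```python
import numpy as np
import matplotlib.pyplot as plt

# Phase portraits of the three elementary singular points discussed above.
# omega = 1 and b = 0.25 are illustrative choices, not values from the notes.
omega, b = 1.0, 0.25
x, y = np.meshgrid(np.linspace(-2, 2, 30), np.linspace(-2, 2, 30))

cases = {
    "vortex point (centre)": (y, -omega**2 * x),              # m x'' = -k x
    "saddle point":          (y,  omega**2 * x),              # m x'' = +k x
    "focal point":           (y, -2 * b * y - omega**2 * x),  # damped SHM
}

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (title, (u, v)) in zip(axes, cases.items()):
    ax.streamplot(x, y, u, v, density=1.2)
    ax.plot(0, 0, "ko")            # the singular point at the origin
    ax.set_title(title)
    ax.set_xlabel("x")
    ax.set_ylabel(r"$y = \dot{x}$")
plt.tight_layout()
plt.show()
```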