Lecture Notes for EE263
Introduction to Linear Dynamical Systems
Stephen Boyd
Autumn 2007-08

Copyright Stephen Boyd. Limited copying or use for educational purposes is fine, but please acknowledge source, e.g., "taken from Lecture Notes for EE263, Stephen Boyd, Stanford 2007."

Contents

Lecture 1 – Overview
Lecture 2 – Linear functions and examples
Lecture 3 – Linear algebra review
Lecture 4 – Orthonormal sets of vectors and QR factorization
Lecture 5 – Least-squares
Lecture 6 – Least-squares applications
Lecture 7 – Regularized least-squares and Gauss-Newton method
Lecture 8 – Least-norm solutions of underdetermined equations
Lecture 9 – Autonomous linear dynamical systems
Lecture 10 – Solution via Laplace transform and matrix exponential
Lecture 11 – Eigenvectors and diagonalization
Lecture 12 – Jordan canonical form
Lecture 13 – Linear dynamical systems with inputs and outputs
Lecture 14 – Example: Aircraft dynamics
Lecture 15 – Symmetric matrices, quadratic forms, matrix norm, and SVD
Lecture 16 – SVD applications
Lecture 17 – Example: Quantum mechanics
Lecture 18 – Controllability and state transfer
Lecture 19 – Observability and state estimation
Lecture 20 – Some final comments
Basic notation
Matrix primer
Crimes against matrices
Solving general linear equations using Matlab
Least-squares and least-norm solutions using Matlab
Exercises

Lecture 1 – Overview

• course mechanics
• outline & topics
• what is a linear dynamical system?
• why study linear systems?
• some examples

Course mechanics

• all class info, lectures, homeworks, announcements on class web page:
  www.stanford.edu/class/ee263

course requirements:
• weekly homework
• take-home midterm exam (date TBD)
• take-home final exam (date TBD)

Prerequisites

• exposure to linear algebra (e.g., Math 103)
• exposure to Laplace transform, differential equations

not needed, but might increase appreciation:
• control systems
• circuits & systems
• dynamics

Major topics & outline

• linear algebra & applications
• autonomous linear dynamical systems
• linear dynamical systems with inputs & outputs
• basic quadratic control & estimation

Linear dynamical system

continuous-time linear dynamical system (CT LDS) has the form

  dx/dt = A(t)x(t) + B(t)u(t),    y(t) = C(t)x(t) + D(t)u(t)

where:

• t ∈ R denotes time
• x(t) ∈ Rn is the state (vector)
• u(t) ∈ Rm is the input or control
• y(t) ∈ Rp is the output
• A(t) ∈ Rn×n is the dynamics matrix
• B(t) ∈ Rn×m is the input matrix
• C(t) ∈ Rp×n is the output or sensor matrix
• D(t) ∈ Rp×m is the feedthrough matrix

for lighter appearance, equations are often written

  ẋ = Ax + Bu,    y = Cx + Du

• CT LDS is a first order vector differential equation
• also called state equations, or 'm-input, n-state, p-output' LDS

Some LDS terminology

• most linear systems encountered are time-invariant: A, B, C, D are constant, i.e., don't depend on t
• when there is no input u (hence, no B or D) system is called autonomous
• very often there is no feedthrough, i.e., D = 0
• when u(t) and y(t) are scalar, system is called single-input, single-output (SISO); when input & output signal dimensions are more than one, MIMO

Discrete-time linear dynamical system

discrete-time linear dynamical system (DT LDS) has the form

  x(t + 1) = A(t)x(t) + B(t)u(t),    y(t) = C(t)x(t) + D(t)u(t)

where

• t ∈ Z = {0, ±1, ±2, ...}
• (vector) signals x, u, y are sequences

DT LDS is a first order vector recursion
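As a concrete illustration (not part of the original slides), a time-invariant DT LDS can be simulated directly from the recursion. The following minimal Matlab sketch uses a small made-up 3-state, single-input, single-output system and a step input:

  % simulate x(t+1) = A x(t) + B u(t), y(t) = C x(t) + D u(t)
  % illustrative made-up matrices, not from the course
  A = [0.9 0.1 0; -0.1 0.9 0.05; 0 0 0.8];
  B = [0; 0; 1];
  C = [1 0 0];
  D = 0;
  T = 50;                          % number of time steps
  x = zeros(3, T+1);               % x(:,t) holds the state at time t-1
  u = ones(1, T);                  % unit-step input
  y = zeros(1, T);
  for t = 1:T
      y(t)     = C*x(:,t) + D*u(t);
      x(:,t+1) = A*x(:,t) + B*u(t);
  end
  plot(0:T-1, y)                   % output sequence y(0), ..., y(T-1)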
Why study linear systems?

applications arise in many areas, e.g.,

• automatic control systems
• signal processing
• communications
• economics, finance
• circuit analysis, simulation, design
• mechanical and civil engineering
• aeronautics
• navigation, guidance

Usefulness of LDS

• depends on availability of computing power, which is large & increasing exponentially
• used for
  – analysis & design
  – implementation, embedded in real-time systems
• like DSP, was a specialized topic & technology 30 years ago

Origins and history

• parts of LDS theory can be traced to 19th century
• builds on classical circuits & systems (1920s on) (transfer functions ...) but with more emphasis on linear algebra
• first engineering application: aerospace, 1960s
• transitioned from specialized topic to ubiquitous in 1980s (just like digital signal processing, information theory, ...)

Nonlinear dynamical systems

many dynamical systems are nonlinear (a fascinating topic) so why study linear systems?

• most techniques for nonlinear systems are based on linear methods
• methods for linear systems often work unreasonably well, in practice, for nonlinear systems
• if you don't understand linear dynamical systems you certainly can't understand nonlinear dynamical systems

Examples (ideas only, no details)

• let's consider a specific system ẋ = Ax, y = Cx, with x(t) ∈ R16, y(t) ∈ R (a '16-state single-output system')
• model of a lightly damped mechanical system, but it doesn't matter

typical output:

[two plots of the output y versus t, over 0 ≤ t ≤ 350 and 0 ≤ t ≤ 1000]

• output waveform is very complicated; looks almost random and unpredictable
• we'll see that such a solution can be decomposed into much simpler (modal) components

[eight plots of the individual modal components versus t, 0 ≤ t ≤ 350]

(idea probably familiar from 'poles')

Input design

add two inputs, two outputs to system:

  ẋ = Ax + Bu,    y = Cx,    x(0) = 0

where B ∈ R16×2, C ∈ R2×16 (same A as before)

problem: find appropriate u : R+ → R2 so that y(t) → ydes = (1, −2)

simple approach: consider static conditions (u, x, y constant):

  ẋ = 0 = Ax + Bustatic,    y = ydes = Cx

solve for u to get:

  ustatic = −(C A^{-1} B)^{-1} ydes = (−0.63, 0.36)
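In Matlab this static calculation is essentially one line; the sketch below (not from the original slides) uses a small made-up stand-in system, since the matrices of the 16-state example are not reproduced in these notes:

  % static input for which the equilibrium output equals ydes
  % stand-in A, B, C (NOT the 16-state model from the lecture)
  A = [-1 0.5 0 0; 0 -2 1 0; 0 0 -1 0.3; 0.2 0 0 -3];
  B = [1 0; 0 1; 1 0; 0 1];
  C = [1 0 1 0; 0 1 0 1];
  ydes = [1; -2];
  % at equilibrium 0 = A x + B u and y = C x, so y = -C A^{-1} B u
  ustatic = (-C*(A\B)) \ ydes;     % solve the 2x2 system for the static input
  % check: the corresponding equilibrium state gives y = ydes
  xstatic = -(A\(B*ustatic));
  C*xstatic                        % should equal ydes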
let's apply u = ustatic and just wait for things to settle:

[plots of u1(t), u2(t), y1(t), y2(t) versus t, for −200 ≤ t ≤ 1800]

... takes about 1500 sec for y(t) to converge to ydes

using very clever input waveforms (EE263) we can do much better, e.g.,

[plots of u1(t), u2(t), y1(t), y2(t) versus t, for 0 ≤ t ≤ 60]

... here y converges exactly in 50 sec

in fact by using larger inputs we do still better, e.g.,

[plots of u1(t), u2(t), y1(t), y2(t) versus t, for −5 ≤ t ≤ 25]

... here we have (exact) convergence in 20 sec

in this course we'll study

• how to synthesize or design such inputs
• the tradeoff between size of u and convergence time

Estimation / filtering

[block diagram: u → H(s) → w → A/D → y]

• signal u is piecewise constant (period 1 sec)
• filtered by 2nd-order system H(s), step response s(t)
• A/D runs at 10 Hz, with 3-bit quantizer

[plots of u(t), step response s(t), w(t), and y(t) versus t, for 0 ≤ t ≤ 10]

problem: estimate original signal u, given quantized, filtered signal y

simple approach:

• ignore quantization
• design equalizer G(s) for H(s) (i.e., GH ≈ 1)
• approximate u as G(s)y

... yields terrible results

formulate as estimation problem (EE263) ...

[plot of u(t) (solid) and the estimate û(t) (dotted) versus t, for 0 ≤ t ≤ 10]

RMS error 0.03, well below quantization error (!)

Lecture 2 – Linear functions and examples

• linear equations and functions
• engineering examples
• interpretations

Linear equations

consider system of linear equations

  y1 = a11 x1 + a12 x2 + ··· + a1n xn
  y2 = a21 x1 + a22 x2 + ··· + a2n xn
  ...
  ym = am1 x1 + am2 x2 + ··· + amn xn

can be written in matrix form as y = Ax, where

      [ y1 ]       [ a11 a12 ··· a1n ]       [ x1 ]
  y = [ y2 ]   A = [ a21 a22 ··· a2n ]   x = [ x2 ]
      [ ⋮  ]       [  ⋮            ⋮ ]       [ ⋮  ]
      [ ym ]       [ am1 am2 ··· amn ]       [ xn ]
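As a quick illustration of the matrix form (not from the original notes; the numbers are made up), the Matlab fragment below evaluates y = Ax both as a matrix-vector product and component by component:

  % y = A x, with m = 2 equations and n = 3 variables (made-up data)
  A = [1 2 0; -1 3 4];
  x = [1; -1; 2];
  y = A*x;                                        % matrix-vector product
  % same result, written out componentwise:
  y1 = A(1,1)*x(1) + A(1,2)*x(2) + A(1,3)*x(3);   % equals y(1)
  y2 = A(2,1)*x(1) + A(2,2)*x(2) + A(2,3)*x(3);   % equals y(2)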