Linear Systems and Control: a First Course (Course Notes for AAE 564)

Total Pages: 16

File Type: PDF, Size: 1020 KB

Linear Systems and Control: A First Course (Course notes for AAE 564)
Martin Corless
School of Aeronautics & Astronautics, Purdue University, West Lafayette, Indiana
[email protected]
August 25, 2008

Contents

1 Introduction 1
  1.1 Ingredients 4
  1.2 Some notation 4
  1.3 MATLAB 4
2 State space representation of dynamical systems 5
  2.1 Linear examples 5
    2.1.1 A first example 5
    2.1.2 The unattached mass 6
    2.1.3 Spring-mass-damper 6
    2.1.4 A simple structure 7
  2.2 Nonlinear examples 8
    2.2.1 A first nonlinear system 8
    2.2.2 Planar pendulum 9
    2.2.3 Attitude dynamics of a rigid body 9
    2.2.4 Body in central force motion 10
    2.2.5 Ballistics in drag 11
    2.2.6 Double pendulum on cart 12
    2.2.7 Two-link robotic manipulator 13
  2.3 Discrete-time examples 14
    2.3.1 The discrete unattached mass 14
  2.4 General representation 15
    2.4.1 Continuous-time 15
    2.4.2 Discrete-time 18
    2.4.3 Exercises 20
  2.5 Vectors 22
    2.5.1 Vector spaces and IR^n 22
    2.5.2 IR^2 and pictures 24
    2.5.3 Derivatives 25
    2.5.4 MATLAB 25
  2.6 Vector representation of dynamical systems 26
  2.7 Solutions and equilibrium states 28
    2.7.1 Continuous-time 28
    2.7.2 Discrete-time 31
    2.7.3 Exercises 32
  2.8 Numerical simulation 33
    2.8.1 MATLAB 33
    2.8.2 Simulink 34
    2.8.3 IMSL 35
  2.9 Exercises 38
3 Linear time-invariant (LTI) systems 41
  3.1 Matrices 41
  3.2 Linear time-invariant systems 48
    3.2.1 Continuous-time 48
    3.2.2 Discrete-time 49
  3.3 Linearization about an equilibrium solution 50
    3.3.1 Derivative as a matrix 50
    3.3.2 Linearization of continuous-time systems 51
    3.3.3 Implicit linearization 54
    3.3.4 Linearization of discrete-time systems 55
4 Some linear algebra 59
  4.1 Linear equations 59
  4.2 Subspaces 66
  4.3 Basis 67
    4.3.1 Span (not Spam) 67
    4.3.2 Linear independence 68
    4.3.3 Basis 73
  4.4 Range and null space of a matrix 74
    4.4.1 Null space 74
    4.4.2 Range 76
    4.4.3 Solving linear equations revisited 79
  4.5 Coordinate transformations 82
    4.5.1 Coordinate transformations and linear maps 82
    4.5.2 The structure of a linear map 84
5 Behavior of (LTI) systems: I 87
  5.1 Initial stuff 87
    5.1.1 CT 87
    5.1.2 DT 88
  5.2 Scalar systems 89
    5.2.1 CT 89
    5.2.2 First order complex and second order real 90
    5.2.3 DT 91
  5.3 Eigenvalues and eigenvectors 92
    5.3.1 Real A 95
    5.3.2 MATLAB 96
    5.3.3 Companion matrices 97
  5.4 Behavior of continuous-time systems 98
    5.4.1 System significance of eigenvectors and eigenvalues 98
    5.4.2 Solutions for nondefective matrices 104
    5.4.3 Defective matrices and generalized eigenvectors 105
  5.5 Behavior of discrete-time systems 109
    5.5.1 System significance of eigenvectors and eigenvalues 109
    5.5.2 All solutions for nondefective matrices 110
    5.5.3 Some solutions for defective matrices 110
  5.6 Similarity transformations 112
  5.7 Hotel California 117
    5.7.1 Invariant subspaces 117
    5.7.2 More on invariant subspaces 118
  5.8 Diagonalizable systems 119
    5.8.1 Diagonalizable matrices 119
    5.8.2 Diagonalizable continuous-time systems 121
    5.8.3 Diagonalizable DT systems 123
  5.9 Real Jordan form for real diagonalizable systems 125
    5.9.1 CT systems 126
  5.10 Diagonalizable second order systems 126
    5.10.1 State plane portraits for CT systems 127
  5.11 Exercises 127
6 e^{At} 133
  6.1 State transition matrix 133
    6.1.1 Continuous time 133
    6.1.2 Discrete time 135
  6.2 Polynomials of a square matrix 136
    6.2.1 Polynomials of a matrix 136
    6.2.2 Cayley-Hamilton Theorem 137
  6.3 Functions of a square matrix 139
    6.3.1 The matrix exponential: e^A 142
    6.3.2 Other matrix functions 142
  6.4 The state transition matrix: e^{At} 143
  6.5 Computation of e^{At} 144
    6.5.1 MATLAB 144
    6.5.2 Numerical simulation 144
    6.5.3 Jordan form 144
    6.5.4 Laplace style 144
  6.6 Sampled-data systems 146
  6.7 Exercises 147
7 Behavior of (LTI) systems: II 149
  7.1 Introduction 149
  7.2 Jordan blocks 150
  7.3 Generalized eigenvectors 153
    7.3.1 Generalized eigenvectors 153
    7.3.2 Chain of generalized eigenvectors 154
    7.3.3 System significance of generalized eigenvectors 155
  7.4 The Jordan form of a square matrix 156
    7.4.1 Special case 156
    7.4.2 Block-diagonal matrices 158
    7.4.3 A less special case 159
    7.4.4 Another less special case 159
    7.4.5 General case 161
  7.5 LTI systems 162
  7.6 Examples on numerical computations with repeated roots 163
8 Stability and boundedness 167
  8.1 Continuous-time systems 167
    8.1.1 Boundedness of solutions 167
    8.1.2 Stability of an equilibrium state 169
    8.1.3 Asymptotic stability 171
  8.2 Discrete-time systems 173
    8.2.1 Stability of an equilibrium state 174
    8.2.2 Asymptotic stability 174
  8.3 A nice table 175
  8.4 Linearization and stability 175
  8.5 Exercises 177
9 Inner products 183
  9.1 The usual scalar product on IR^n and C^n 183
  9.2 Orthogonality 184
    9.2.1 Orthonormal basis 185
  9.3 Hermitian matrices 187
  9.4 Positive and negative (semi)definite matrices 188
    9.4.1 Quadratic forms 188
    9.4.2 Definite matrices 189
    9.4.3 Semi-definite matrices 190
  9.5 Singular value decomposition 191
10 Lyapunov theory 197
  10.1 Asymptotic stability results 197
    10.1.1 MATLAB 202
  10.2 Stability results 203
  10.3 Mechanical systems 204
    10.3.1 A simple stabilization procedure 206
  10.4 DT Lyapunov theory 209
    10.4.1 Stability results 209
11 Systems with inputs 213
  11.1 Examples 213
  11.2 General description 213
  11.3 Linear systems and linearization 214
    11.3.1 LTI systems 214
    11.3.2 Systems described by higher order differential equations 214
    11.3.3 Controlled equilibrium states 214
    11.3.4 Linearization 215
  11.4 Response of LTI systems 217
    11.4.1 Convolution integral 217
    11.4.2 Impulse response 218
    11.4.3 Response to exponentials and sinusoids 218
  11.5 Discrete-time 219
    11.5.1 Linear discrete-time systems 219
    11.5.2 Discretization and zero order hold 219
    11.5.3 General form of solution for LTI systems 222
12 Input output systems 225
  12.1 Examples 225
  12.2 General description 225
  12.3 Linear time-invariant systems and linearization 227
    12.3.1 Linear time invariant systems 227
    12.3.2 Linearization 227
  12.4 Response of LTI systems 228
    12.4.1 Convolution integral 228
    12.4.2 Impulse response 230
    12.4.3 State space realization of an impulse response 231
    12.4.4 Response to exponentials and sinusoids 232
  12.5 Discrete-time 233
    12.5.1 Linear discrete-time systems 233
    12.5.2 Discretization and zero-order hold 233
    12.5.3 General form of solution 234
13 Transfer functions 237
  13.1 Transfer functions 237
    13.1.1 Poles and zeros 244
    13.1.2 Discrete-time
Recommended publications
  • Scientific Computing: an Introductory Survey
    Scientific Computing: An Introductory Survey. Chapter 4 – Eigenvalue Problems. Prof. Michael T. Heath, Department of Computer Science, University of Illinois at Urbana-Champaign. Copyright © 2002. Reproduction permitted for noncommercial, educational use only.

    Outline: 1. Eigenvalue Problems; 2. Existence, Uniqueness, and Conditioning; 3. Computing Eigenvalues and Eigenvectors.

    Eigenvalue problems occur in many areas of science and engineering, such as structural analysis. Eigenvalues are also important in analyzing numerical methods. Theory and algorithms apply to complex matrices as well as real matrices. With complex matrices, we use the conjugate transpose, A^H, instead of the usual transpose, A^T.

    Standard eigenvalue problem: Given an n × n matrix A, find a scalar λ and a nonzero vector x such that A x = λ x. Here λ is an eigenvalue, and x is a corresponding eigenvector.
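    As a concrete illustration of the standard eigenvalue problem stated above, here is a minimal MATLAB sketch (MATLAB is the tool used throughout the course notes above; the matrix A is a made-up example, not one from the excerpt):

        % Find scalar lambda and nonzero vector x with A*x = lambda*x
        A = [2 1; 1 2];                    % hypothetical example matrix
        [X, L] = eig(A);                   % columns of X: eigenvectors; diag(L): eigenvalues
        lambda = diag(L);
        norm(A*X(:,1) - lambda(1)*X(:,1))  % ~0 up to roundoff, verifying A*x = lambda*x
        % For a complex matrix, A' in MATLAB is the conjugate transpose A^H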
  • Diagonalizable Matrix - Wikipedia, the Free Encyclopedia
    Diagonalizable matrix (from Wikipedia, the free encyclopedia; http://en.wikipedia.org/wiki/Matrix_diagonalization)

    In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that P^{-1}AP is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.[1] A square matrix which is not diagonalizable is called defective.

    Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known, and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power. Geometrically, a diagonalizable matrix is an inhomogeneous dilation (or anisotropic scaling): it scales the space, as does a homogeneous dilation, but by a different factor in each direction, determined by the scale factors on each axis (diagonal entries).

    Characterisation. The fundamental fact about diagonalizable maps and matrices is expressed by the following: An n-by-n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to n, which is the case if and only if there exists a basis of F^n consisting of eigenvectors of A.
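    The remark that powers of a diagonalizable matrix reduce to powers of diagonal entries can be made concrete with a short MATLAB sketch (the matrix is a hypothetical example, assuming MATLAB as in the course notes above):

        A = [4 1; 2 3];     % hypothetical diagonalizable example (eigenvalues 2 and 5)
        [P, D] = eig(A);    % A*P = P*D, so A = P*D*inv(P) when P is invertible
        A5 = P * D^5 / P;   % powering the diagonal D just powers its diagonal entries
        norm(A5 - A^5)      % ~0: the fifth power computed both ways agrees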
  • Mata Glossary of Common Terms
    Title: stata.com Glossary. Description: commonly used terms are defined here. Mata glossary.

    arguments. The values a function receives are called the function's arguments. For instance, in lud(A, L, U), A, L, and U are the arguments.

    array. An array is any indexed object that holds other objects as elements. Vectors are examples of 1-dimensional arrays: vector v is an array, and v[1] is its first element. Matrices are 2-dimensional arrays: matrix X is an array, and X[1, 1] is its first element. In theory, one can have 3-dimensional, 4-dimensional, and higher arrays, although Mata does not directly provide them. See [M-2] Subscripts for more information on arrays in Mata. Arrays are usually indexed by sequential integers, but in associative arrays the indices are strings that have no natural ordering. Associative arrays can be 1-dimensional, 2-dimensional, or higher. If A were an associative array, then A["first"] might be one of its elements. See [M-5] asarray() for associative arrays in Mata.

    binary operator. A binary operator is an operator applied to two arguments. In 2-3, the minus sign is a binary operator, as opposed to the minus sign in -9, which is a unary operator.

    broad type. Two matrices are said to be of the same broad type if the elements in each are numeric, are string, or are pointers. Mata provides two numeric types, real and complex. The term broad type is used to mask the distinction within numeric and is often used when discussing operators or functions.
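    For readers working in MATLAB rather than Mata, the array and associative-array notions defined above have rough analogues; a minimal sketch (containers.Map is only a MATLAB analogue of Mata's asarray(), not the same facility):

        v = [10 20 30];         % 1-dimensional array; v(1) is its first element
        X = eye(2);             % 2-dimensional array (matrix); X(1,1) is its first element
        M = containers.Map();   % associative array: string keys with no natural ordering
        M('first') = 42;        % rough analogue of Mata's A["first"]
        M('first')              % returns 42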
  • The Jacobian of the Exponential Function
    A service of EconStor (ZBW – Leibniz Information Centre for Economics), www.econstor.eu. Suggested citation: Magnus, Jan R.; Pijls, Henk G. J.; Sentana, Enrique (2020): The Jacobian of the exponential function, Tinbergen Institute Discussion Paper, No. TI 2020-035/III, Tinbergen Institute, Amsterdam and Rotterdam. This version is available at: http://hdl.handle.net/10419/220072. Terms of use: documents in EconStor may be saved and copied for personal and scholarly purposes, but may not be copied for public or commercial purposes, publicly exhibited, made publicly available on the internet, or otherwise distributed; if a document is made available under an Open Content Licence (especially a Creative Commons Licence), the usage rights granted by that licence apply instead.
  • Nonsymmetric Eigenvalue Problems
    Nonsymmetric Eigenvalue Problems. 4.1 Introduction. We discuss canonical forms (in section 4.2), perturbation theory (in section 4.3), and algorithms for the eigenvalue problem for a single nonsymmetric matrix A (in section 4.4). Chapter 5 is devoted to the special case of real symmetric matrices A = A^T (and the SVD). Section 4.5 discusses generalizations to eigenvalue problems involving more than one matrix, including motivating applications from the analysis of vibrating systems, the solution of linear differential equations, and computational geometry. Finally, section 4.6 summarizes all the canonical forms, algorithms, costs, applications, and available software in a list.

    One can roughly divide the algorithms for the eigenproblem into two groups: direct methods and iterative methods. This chapter considers only direct methods, which are intended to compute all of the eigenvalues and (optionally) eigenvectors. Direct methods are typically used on dense matrices and cost O(n^3) operations to compute all eigenvalues and eigenvectors; this cost is relatively insensitive to the actual matrix entries. The main direct method used in practice is QR iteration with implicit shifts (see section 4.4.8). It is interesting that after more than 30 years of dependable service, convergence failures of this algorithm have quite recently been observed, analyzed, and patched [25, 65]. But there is still no global convergence proof, even though the current algorithm is considered quite reliable. So the problem of devising an algorithm that is numerically stable and globally (and quickly!) convergent remains open. (Note that "direct" methods must still iterate, since finding eigenvalues is mathematically equivalent to finding zeros of polynomials, for which no noniterative methods can exist.)
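    A small MATLAB sketch of a dense direct method in action (MATLAB's built-in eig and schur are based on the implicitly shifted QR iteration discussed above; the test matrix is an arbitrary example of ours):

        n = 100;
        A = randn(n);                   % dense nonsymmetric test matrix
        e = eig(A);                     % all n eigenvalues, O(n^3) work
        [Q, T] = schur(A, 'complex');   % A = Q*T*Q' with T upper triangular
        norm(sort(e) - sort(diag(T)))   % small: diag(T) carries the same eigenvalues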
  • On the Construction of Nearest Defective Matrices to a Normal Matrix
    On the construction of nearest defective matrices to a normal matrix. Rafikul Alam, Department of Mathematics, Indian Institute of Technology Guwahati, Guwahati - 781 039, INDIA.

    Abstract. Let A be an n-by-n normal matrix with n distinct eigenvalues. We describe a simple procedure to construct a defective matrix whose distance from A, among all the defective matrices, is the shortest. Our construction requires only an appropriate pair of eigenvalues of A and their corresponding eigenvectors. We also show that the nearest defective matrices influence the evolution of spectral projections of A. Keywords: defective matrix, multiple eigenvalue, pseudospectra, eigenvector, spectral projection.

    1 Introduction. Let A ∈ C^{n×n} be a normal matrix with n distinct eigenvalues. Let d(A) be the distance of A from the set of defective matrices, that is,

        d(A) := inf { ||A − A₀|| : A₀ is defective },    (1.1)

    where ||·|| is either the 2-norm or the Frobenius norm. For a non-normal B, the determination of d(B) and a defective matrix B₀ such that ||B − B₀|| = d(B) is a nontrivial task and is widely known as Wilkinson's problem (see, for example, [8], [9], [10], [6], [2], [5], [4], [1]). By contrast, since A is normal, it is easy to see that for the 2-norm

        d(A) = (1/2) min_{i ≠ j} |λ_i − λ_j|,    (1.2)

    where λ_1, ..., λ_n are the eigenvalues of A. However, it appears that the construction of a defective matrix A₀ such that ||A − A₀|| = d(A) is nontrivial. Note that the existence of such an A₀ would imply that the infimum in (1.1) is actually the minimum.
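    Equation (1.2) is straightforward to evaluate numerically; a minimal MATLAB sketch (the normal matrix here is a made-up diagonal example, not one from the paper):

        lambda = eig([2 0 0; 0 3+1i 0; 0 0 5]);  % hypothetical normal example
        n = numel(lambda);
        gaps = abs(lambda - lambda.');           % all pairwise |lambda_i - lambda_j|
        gaps(1:n+1:end) = Inf;                   % exclude the i = j diagonal
        d = min(gaps(:))/2                       % 2-norm distance to the nearest defective matrix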
  • Consensus Guidelines for the Use and Interpretation of Angiogenesis Assays
    Angiogenesis (2018) 21:425–532, https://doi.org/10.1007/s10456-018-9613-x. Review paper: "Consensus guidelines for the use and interpretation of angiogenesis assays", by Patrycja Nowak-Sliwinska, Kari Alitalo, and a large international consortium of coauthors (the full author list is truncated in the source excerpt).
  • Department of Mathematics, University of Utah Analysis of Numerical Methods I MTH6610 – Section 001 – Fall 2017 Lecture Notes – Eigenvalues Wednesday October 25, 2017
    Department of Mathematics, University of Utah. Analysis of Numerical Methods I, MTH6610 – Section 001 – Fall 2017. Lecture notes – Eigenvalues, Wednesday October 25, 2017. These notes are not a substitute for class attendance; their main purpose is to provide a lecture overview summarizing the topics covered. Reading: Trefethen & Bau III, Lecture 24.

    Let A ∈ C^{n×n}. A scalar λ ∈ C is an eigenvalue of A if there exists a nonzero vector v such that Av = λv.

    For a fixed eigenvalue λ_j, the subspace V_j ⊂ C^n containing the vectors v satisfying the above equation is called the eigenspace associated to λ_j. The eigenspace V_j has dimension d_j, and this dimension is called the geometric multiplicity of the eigenvalue λ_j. Let λ_1, ..., λ_p be an enumeration of the eigenvalues of A, and let λ_j have corresponding eigenspace V_j of dimension d_j ≥ 1. Let v_{j1}, ..., v_{j d_j} be any basis for V_j. Then we have the matrix equality

        A V = V Λ,

    where Λ = diag(λ_1 I_{d_1}, λ_2 I_{d_2}, ..., λ_p I_{d_p}) and V = [ v_{11} ··· v_{1 d_1}  v_{21} ··· v_{2 d_2}  ···  v_{p d_p} ].

    If λ is an eigenvalue of A, the definition implies that

        p_A(λ) := det(λI − A) = 0,

    and any λ satisfying the above is an eigenvalue of A. The Laplace expansion of the determinant implies that z ↦ p_A(z) is a polynomial of degree n; p_A is called the characteristic polynomial of A. We see that A must therefore have exactly n eigenvalues, corresponding to the n roots of p_A, some of which may be repeated.
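    Both the matrix equality AV = VΛ and the characteristic polynomial can be checked directly in MATLAB; a minimal sketch (the example matrix is ours, not from the notes):

        A = [5 4; 1 2];   % hypothetical example with distinct eigenvalues 6 and 1
        [V, L] = eig(A);
        norm(A*V - V*L)   % ~0: verifies A V = V Lambda up to roundoff
        p = poly(A);      % coefficients of p_A(lambda) = det(lambda*I - A)
        roots(p)          % the n roots of p_A recover the eigenvalues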
  • MTH 2311 Linear Algebra Week 11 Resources
    MTH 2311 Linear Algebra, Week 11 Resources. Colin Burdine, 4/20/20. Major topics: 1. Diagonalization. Textbook material: Linear Algebra and Its Applications, 5th Edition, by Lay and McDonald, Sections 4.2-4.4.

    1 Conceptual Review. Recall from last week that we introduced the concept of characteristic equations with the following theorem:

    The Fundamental Theorem of Algebra: A polynomial of order n has at most n real roots. If we allow roots to be complex-valued (have an imaginary component) or to be counted multiple times (for example, −1 is a repeated root of x^2 + 2x + 1), then the polynomial has exactly n roots.

    If we allow for complex eigenvalues and repeated eigenvalues, then a matrix A ∈ R^{n×n} must have exactly n eigenvalues. In this course, we will not be working with complex eigenvalues; however, we will encounter repeated eigenvalues. Recall that we can also have eigenvalues of 0, which are addressed in the FAQ section below (repeated from last week). This week we will be introducing the idea of diagonalization, a kind of matrix factorization that is useful both in what it says about a matrix and in what it allows us to do. In some textbooks, diagonalization is referred to as the eigenvalue decomposition of a matrix, because it decomposes a matrix into the product of three matrices. As we will discuss below, however, this factorization is not always possible. Suppose we are given a square matrix A. If the matrix is diagonalizable, we can decompose it as the square matrix product A = PDP^{-1}, where the following conditions are met: 1.
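    A short MATLAB sketch of the A = PDP^{-1} factorization, including a repeated-eigenvalue case where it fails (both matrices are illustrative examples, not from the resource above):

        A = [2 0; 0 2];   % repeated eigenvalue 2, yet diagonalizable
        B = [2 1; 0 2];   % repeated eigenvalue 2, defective (not diagonalizable)
        [Pa, Da] = eig(A); rank(Pa)   % 2: full set of eigenvectors, so A = Pa*Da*inv(Pa)
        [Pb, Db] = eig(B); rank(Pb)   % 1: dependent eigenvectors, no P*D*inv(P) exists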
  • The Computation of Matrix Functions, in Particular the Matrix Exponential
    The Computation of Matrix Functions, in Particular, the Matrix Exponential. By Syed Muhammad Ghufran. A thesis submitted to The University of Birmingham for the Degree of Master of Philosophy, School of Mathematics, The University of Birmingham, October 2009.

    University of Birmingham Research Archive, e-theses repository. This unpublished thesis/dissertation is copyright of the author and/or third parties. The intellectual property rights of the author or third parties in respect of this work are as defined by The Copyright Designs and Patents Act 1988 or as modified by any successor legislation. Any use made of information contained in this thesis/dissertation must be in accordance with that legislation and must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the permission of the copyright holder.

    Acknowledgements. I am extremely grateful to my supervisor, Professor Roy Mathias, for sharing his knowledge, excellent guidance, patience, and advice throughout this dissertation. Without his support it would not have been possible in such a short span of time. I wish to thank Prof. C. Parker and Mrs. J. Lowe for their magnificent support, professional advice, and experience in helping me overcome my problems. I am grateful to my colleagues Ali Muhammad Farah, She Li, and Dr. Jamal ud Din for their help and support, and to my cousins Faqir Hussain and Faizan ur Rasheed, my uncle Haji Aziz, and my close friend Shah Jehan for his kindness. I am also very grateful to my parents, brothers, sisters, and my wife for their support and trust throughout these long hard times. Further, I would like to thank my family, especially my kids and my nephew Abdul Muqeet, who missed me a lot.
  • Matrix Decompositions
    MSc in Systems and Control Engineering. Lab component for the Mathematical Techniques module. Numerical Linear Algebra: Matrix Decompositions, by G.D. Halikias and J. Kiskiras, London, October 2006.

    1 Introduction. The purpose of this lab is twofold. The first aim is to familiarize you with the main linear-algebraic Matlab functions related to the MSc module "Mathematical Techniques", with special emphasis on eigenvalues and eigenvectors, spectral decomposition of square matrices, the Jordan and Schur forms of a matrix, and the singular value decomposition and some of its applications (computation of the rank and nullity of a matrix, generation of orthonormal bases of its range and kernel, solution of linear equations, reduced-rank matrix approximations); a sketch of these calls follows this excerpt. This material, along with some basic theory and simple computational exercises, is contained in sections 2 and 3. You can go through these exercises at your own pace (during or outside the formal lab sessions); for additional theory background consult your notes and the suggested textbooks. For difficulties with Matlab programming, consult the lab demonstrators and use Matlab's on-line help functions liberally. The exercises included in section 4 are more demanding, both analytically and computationally; do not attempt these before you are familiar with the basics of Matlab programming and the exercises in sections 2 and 3. The exercises include: 1. Programming a function that reduces a matrix into row echelon form using a sequence of elementary transformations, and using your programme to solve linear equations and calculate the numerical rank and row-span of a matrix. 2. Verifying "Sylvester's law of inertia" via simple numerical experiments.
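    The SVD applications listed in the lab description map onto one-line Matlab calls; a minimal sketch (the rank-deficient test matrix is our own example, not from the lab):

        A = [1 2 3; 2 4 6; 1 1 1];   % hypothetical rank-2 test matrix
        [U, S, V] = svd(A);          % singular value decomposition
        r = rank(A)                  % numerical rank, read off the singular values
        R = orth(A); N = null(A);    % orthonormal bases of the range and kernel
        A1 = U(:,1)*S(1,1)*V(:,1)';  % best reduced-rank (rank-1) approximation
        norm(A - A1)                 % 2-norm error equals the next singular value S(2,2)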
  • Linear Algebra in Maple®
    72 Linear Algebra in Maple®. David J. Jeffrey and Robert M. Corless, The University of Western Ontario. Contents: 72.1 Introduction; 72.2 Vectors; 72.3 Matrices; 72.4 Arrays; 72.5 Equation Solving and Matrix Factoring; 72.6 Eigenvalues and Eigenvectors; 72.7 Linear Algebra with Modular Arithmetic in Maple; 72.8 Numerical Linear Algebra in Maple; 72.9 Canonical Forms; 72.10 Structured Matrices; 72.11 Functions of Matrices; 72.12 Matrix Stability; References.

    72.1 Introduction. Maple® is a general-purpose computational system that combines symbolic computation with exact and approximate (floating-point) numerical computation, and offers a comprehensive suite of scientific graphics as well. The main library of functions is written in the Maple programming language, a rich language designed to allow easy access to advanced mathematical algorithms. A special feature of Maple is user access to the source code for the library, including the ability to trace Maple's execution and see its internal workings; only the parts of Maple that are compiled, for example the kernel, cannot be traced. Another feature is that users can link to LAPACK library routines transparently, and thereby benefit from fast and reliable floating-point computation. The development of Maple started in the early 1980s, and the company Maplesoft was founded in 1988. A strategic partnership with NAG Inc. in 2000 brought highly efficient numerical routines to Maple, including LAPACK.