University of Arizona
Recommended publications
-
Matrix Algebra and Control
Appendix A Matrix Algebra and Control

Boldface lower-case letters, e.g., a or b, denote vectors; boldface capital letters, e.g., A, M, denote matrices. A vector is a column matrix; containing m elements (entries), it is referred to as an m-vector. The numbers of rows and columns of a matrix A are n and m, respectively. Then A is an (n, m)-matrix or n × m-matrix (dimension n × m). The matrix A is called positive or non-negative if A > 0 or A ≥ 0, respectively, i.e., if the elements are real and positive or non-negative, respectively.

A.1 Matrix Multiplication

Two matrices A and B may only be multiplied, C = AB, if they are conformable: A has size n × m, B has size m × r, and C has size n × r. Two matrices are conformable for multiplication if the number m of columns of the first matrix A equals the number m of rows of the second matrix B. (Kronecker matrix products do not require conformable multiplicands.) The elements or entries of the matrices are related as follows: c_ij = Σ_{ν=1}^{m} a_iν b_νj for all i = 1…n, j = 1…r. The jth column vector C_{·j} of the matrix C as denoted in Eq. (A.13) can be calculated from the columns A_{·ν} and the entries b_νj; the jth row C_{j·} from the rows B_{ν·} and the entries a_jν:

column C_{·j} = Σ_{ν=1}^{m} A_{·ν} b_νj ,   row C_{j·} = ((Cᵀ)_{·j})ᵀ = Σ_{ν=1}^{m} a_jν B_{ν·}    (A.1)

A matrix product may be zero although neither the multiplicand A nor the multiplicator B is zero, e.g.,

AB = ( a  a ; a  a ) ( b  −b ; −b  b ) = 0 .
-
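The element formula above translates directly into code. A minimal sketch in pure Python (the function name `matmul` and the nested-list representation are illustrative assumptions, not from the appendix), including the zero-product example:

```python
def matmul(A, B):
    """C = AB via c_ij = sum_v a_iv * b_vj; requires cols(A) == rows(B)."""
    n, m, r = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "A and B are not conformable"
    return [[sum(A[i][v] * B[v][j] for v in range(m)) for j in range(r)]
            for i in range(n)]

# Neither factor is zero, yet the product is the zero matrix:
a, b = 2.0, 3.0
A = [[a, a], [a, a]]
B = [[b, -b], [-b, b]]
print(matmul(A, B))  # [[0.0, 0.0], [0.0, 0.0]]
```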
Classic Thesis
4 Eigenvalues and Eigenvectors

Because in this book we are interested in state spaces, many of the linear operations we consider will be performed on vectors within a single space, resulting in transformed vectors within that same space. Thus, we will consider operators of the form A: X → X. A special property of these operators is that they lead to special vectors and scalars known as eigenvectors and eigenvalues. These quantities are of particular importance in the stability and control of linear systems. In this chapter, we will discuss eigenvalues and eigenvectors and related concepts such as singular values.

4.1 A-Invariant Subspaces

In any space and for any operator A, there are certain subspaces in which, if we take a vector and operate on it using A, the result remains in that subspace. Formally:

A-Invariant Subspace: Let X1 be a subspace of a linear vector space X. This subspace is said to be A-invariant if for every vector x ∈ X1, Ax ∈ X1. (4.1)

When the operator A is understood from the context, X1 is sometimes said to be simply "invariant." Finite-dimensional subspaces can always be thought of as lines, planes, or hyperplanes that pass through the origin of their parent space. In the next section, we will consider A-invariant subspaces consisting of lines through the origin, i.e., the eigenvectors.

4.2 Definitions of Eigenvectors and Eigenvalues

Recall that a linear operator A is simply a rule that assigns a new vector Ax to an old vector x.
-
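The line spanned by an eigenvector is the simplest A-invariant subspace: applying A to any vector on that line returns a vector on the same line. A small numerical sketch (the particular matrix and helper name are assumptions chosen for illustration, not taken from the book):

```python
def apply(A, x):
    """Matrix-vector product Ax."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# A has eigenvector v = [1, 1] with eigenvalue 3, so span{v} is A-invariant.
A = [[2, 1], [1, 2]]
v = [1, 1]
Av = apply(A, v)
print(Av)                            # [3, 3] -- still on the line spanned by v
print(Av[0] * v[1] == Av[1] * v[0])  # True: Av is parallel to v
```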
Unified Canonical Forms for Matrices Over a Differential Ring
Unified Canonical Forms for Matrices Over a Differential Ring

J. Zhu and C. D. Johnson
Electrical and Computer Engineering Department
University of Alabama in Huntsville
Huntsville, Alabama 35899

Submitted by Russell Merris

ABSTRACT

Linear differential equations with variable coefficients of the vector form (i) ẋ = A(t)x and the scalar form (ii) y^(n) + a_n(t)y^(n−1) + ··· + a_2(t)ẏ + a_1(t)y = 0 can be studied as operators on a differential module over a differential ring. Using this differential algebraic structure and a classical result on differential operator factorizations developed by Cauchy and Floquet, a new (variable) eigenvalue theory and an associated set of matrix canonical forms are developed in this paper for matrices over a differential ring. In particular, eight new spectral and modal matrix canonical forms are developed that include as special cases the classical companion, Jordan (diagonal), and (generalized) Vandermonde canonical forms, as traditionally used for matrices over a number ring (field). Therefore, these new matrix canonical forms can be viewed as unifications of those classical ones. Two additional canonical matrices that perform order reductions of a Cauchy–Floquet factorization of a linear differential operator are also introduced. General and explicit formulas for obtaining the new canonical forms are derived in terms of the new (variable) eigenvalues. The results obtained here have immediate applications in diverse engineering and scientific fields, such as physics, control systems, communications, and electrical networks, in which linear differential equations (i) and (ii) are used as mathematical models.

1. INTRODUCTION

This paper is concerned with the class of finite-dimensional, homogeneous linear differential equations of the vector form

ẋ = A(t)x,  x(t₀) = x₀    (1.1)

Linear Algebra and Its Applications 147:201–248 (1991), © Elsevier Science Publishing Co., Inc., 1991.
-
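For reference, the classical companion canonical form that the abstract generalizes rewrites the scalar equation (ii) as a first-order vector system ẋ = Ax. A minimal sketch with constant coefficients (the function name and the coefficient ordering are assumptions for illustration, not from the paper):

```python
def companion(coeffs):
    """Companion matrix for y^(n) + a_n*y^(n-1) + ... + a_1*y = 0,
    with coeffs = [a_1, ..., a_n] and state x = (y, y', ..., y^(n-1))."""
    n = len(coeffs)
    A = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1              # shift rows: x_i' = x_{i+1}
    A[n - 1] = [-a for a in coeffs]  # last row carries the negated coefficients
    return A

# y'' + 3y' + 2y = 0  ->  x' = A x
print(companion([2, 3]))  # [[0, 1], [-2, -3]]
```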
Generalized Eigenvector - Wikipedia
Generalized eigenvector

In linear algebra, a generalized eigenvector of an n × n matrix A is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector.[1]

Let V be an n-dimensional vector space; let φ be a linear map in L(V), the set of all linear maps from V into itself; and let A be the matrix representation of φ with respect to some ordered basis. There may not always exist a full set of n linearly independent eigenvectors of A that form a complete basis for V. That is, the matrix A may not be diagonalizable.[2][3] This happens when the algebraic multiplicity of at least one eigenvalue λ is greater than its geometric multiplicity (the nullity of the matrix (A − λI), or the dimension of its nullspace). In this case, λ is called a defective eigenvalue and A is called a defective matrix.[4]

A generalized eigenvector corresponding to λ, together with the matrix A, generates a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of V.[5][6][7] Using generalized eigenvectors, a set of linearly independent eigenvectors of A can be extended, if necessary, to a complete basis for V.[8] This basis can be used to determine an "almost diagonal matrix" J in Jordan normal form, similar to A, which is useful in computing certain matrix functions of A.[9] The matrix J is also useful in solving the system of linear differential equations ẋ = Ax, where A need not be diagonalizable.[10][11]
-
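A small numerical sketch of the definition, using the defective matrix A = [[2, 1], [0, 2]] (a standard illustration chosen here, not taken from the article): v = (0, 1) is not an ordinary eigenvector, but (A − 2I)²v = 0, so v is a generalized eigenvector, and (A − 2I)v is an ordinary eigenvector that starts the Jordan chain.

```python
def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

lam = 2
A = [[2, 1], [0, 2]]  # eigenvalue 2: algebraic multiplicity 2, geometric multiplicity 1
N = [[A[i][j] - (lam if i == j else 0) for j in range(2)] for i in range(2)]  # A - lam*I

v = [0, 1]             # candidate generalized eigenvector
Nv = mat_vec(N, v)
NNv = mat_vec(N, Nv)
print(Nv)   # [1, 0] -- nonzero, so v is not an ordinary eigenvector
print(NNv)  # [0, 0] -- (A - 2I)^2 v = 0, so v is a generalized eigenvector
```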
Maxima by Example: Ch. 5, Matrix Solution Methods ∗
Edwin (Ted) Woollett
March 30, 2016

Contents

1 Introduction and Quick Start 3
1.1 Examples of Matrix Syntax 4
1.2 Supplementary Matrix Functions 6
1.3 Linear Inhomogeneous Set of Equations 7
1.4 Eigenvalues and Eigenvectors 8
1.5 Initial Value Problem for a Set of Three First Order Linear ODE's 13
1.6 Matrix Notation Used Here 15
1.7 References 18
2 Row-Echelon Form and Rank of a Matrix 19
3 Inverse of a Matrix 21
3.1 Inversion Using Maxima Core Functions 22
3.2 Inversion Using Elementary Row Operations 23
4 Set of Homogeneous Linear Algebraic Equations Ax = 0 27
4.1 Nullspace Basis Vectors of a Matrix 28
4.2 Examples of Eigenvalues and Eigenvectors 33
5 Generalized Eigenvectors, Jordan Chains, Canonical Basis, Jordan Canonical Form 41
5.1 Example 1 42
5.1.1 Example 1 Using mcol_solve(known_col, unknown_col, ukvarL) 42
5.1.2 Example 1 Using jordan_chain(A, eival)
-
Chapter 1 Matrix Algebra. Definitions and Operations
Chapter 1 Matrix Algebra. Definitions and Operations

1.1 Matrices

Matrices play very important roles in the computation and analysis of several engineering problems. First, matrices allow for compact notation. As discussed below, matrices are collections of objects arranged in rows and columns. The symbols representing these collections can then induce an algebra, in which different operations such as matrix addition or matrix multiplication can be defined. (This compact representation has become even more significant today with the advent of computer software that allows simple statements such as A+B or A*B to be evaluated directly.) Aside from the convenience in representation, once the matrices have been constructed, other internal properties can be derived and assessed. Properties such as determinant, rank, trace, eigenvalues and eigenvectors (all to be defined later) determine characteristics of the systems from which the matrices were obtained. These properties can then help in the analysis and improvement of the systems under study.

It can be argued that some problems may be solved without the use of matrices. However, as the complexity of the problem increases, matrices can help improve the tractability of the solution.

Definition 1.1 A matrix is a collection of objects, called the elements of the matrix, arranged in rows and columns. These elements could be numbers,

A = ( 1  0  0.3 ; −2  3+i  −1/2 )  with i = √−1,

or functions,

B = ( 1  2x(t)+a ; ∫ sin(ωt) dt  dy/dt ).

We restrict the discussion to matrices that contain elements for which binary operations such as addition, subtraction, multiplication and division among the elements make algebraic sense.

© 2006 Tomas Co, all rights reserved
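The compact A+B statement the chapter mentions is just elementwise addition once the matrices are conformable in size. A minimal sketch in pure Python (the function name `mat_add` and the nested-list representation are assumptions, not from the chapter; environments such as Maxima or NumPy provide this as the `+` operator directly):

```python
def mat_add(A, B):
    """Elementwise sum A + B; both matrices must have the same dimensions."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "size mismatch"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 0], [2, 3]]
B = [[4, 5], [6, 7]]
print(mat_add(A, B))  # [[5, 5], [8, 10]]
```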