University of Arizona

Total Pages: 16

File Type: PDF, Size: 1020 KB

DECOUPLING AND REDUCED ORDER MODELING OF TWO-TIME-SCALE CONTROL SYSTEMS

Item Type: text; Dissertation-Reproduction (electronic)
Authors: Anderson, Leonard Robert
Publisher: The University of Arizona
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Download date: 01/10/2021 20:26:31
Link to Item: http://hdl.handle.net/10150/298494

INFORMATION TO USERS

This was produced from a copy of a document sent to us for microfilming. While the most advanced technological means to photograph and reproduce this document have been used, the quality is heavily dependent upon the quality of the material submitted. The following explanation of techniques is provided to help you understand markings or notations which may appear on this reproduction.

1. The sign or "target" for pages apparently lacking from the document photographed is "Missing Page(s)". If it was possible to obtain the missing page(s) or section, they are spliced into the film along with adjacent pages. This may have necessitated cutting through an image and duplicating adjacent pages to assure you of complete continuity.

2. When an image on the film is obliterated with a round black mark, it is an indication that the film inspector noticed either blurred copy because of movement during exposure, or duplicate copy. Unless we meant to delete copyrighted materials that should not have been filmed, you will find a good image of the page in the adjacent frame.

3. When a map, drawing or chart, etc., is part of the material being photographed, the photographer has followed a definite method in "sectioning" the material. It is customary to begin filming at the upper left hand corner of a large sheet and to continue from left to right in equal sections with small overlaps.
If necessary, sectioning is continued again, beginning below the first row and continuing on until complete.

4. For any illustrations that cannot be reproduced satisfactorily by xerography, photographic prints can be purchased at additional cost and tipped into your xerographic copy. Requests can be made to our Dissertations Customer Services Department.

5. Some pages in any document may have indistinct print. In all cases we have filmed the best available copy.

University Microfilms International
300 N. Zeeb Road, Ann Arbor, MI 48106
18 Bedford Row, London WC1R 4EJ, England

7914436
ANDERSON, Leonard Robert
DECOUPLING AND REDUCED ORDER MODELING OF TWO-TIME-SCALE CONTROL SYSTEMS
The University of Arizona, Ph.D., 1979

DECOUPLING AND REDUCED ORDER MODELING OF TWO-TIME-SCALE CONTROL SYSTEMS

by

Leonard Robert Anderson

A Dissertation Submitted to the Faculty of the
AEROSPACE AND MECHANICAL ENGINEERING DEPARTMENT
In Partial Fulfillment of the Requirements
For the Degree of
DOCTOR OF PHILOSOPHY
WITH A MAJOR IN MECHANICAL ENGINEERING
In the Graduate College
THE UNIVERSITY OF ARIZONA
1979

STATEMENT BY AUTHOR

This dissertation has been submitted in partial fulfillment of requirements for an advanced degree at The University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.

Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.
SIGNED

ACKNOWLEDGMENTS

I wish to express my sincere gratitude to my dissertation director, Dr. Robert E. O'Malley, Jr., for his guidance in this work, for financial support provided at several stages, and for the opportunity to visit with Dr. Petar V. Kokotovic of the Coordinated Science Laboratory, University of Illinois, Urbana, whose research and discussions on singularly perturbed engineering systems strongly motivated this work. I would also like to thank the following persons: Dr. T. L. Vincent and Dr. M. R. Bottaccini for their careful review of this dissertation and many helpful comments; Dr. W. R. Sears for his assistance in obtaining fellowship support for this work; Dr. L. B. Scott, head of the Aerospace and Mechanical Engineering Department, for a teaching associateship and other financial assistance; Dr. B. S. Goh and Dr. J. M. Cushing, who served on my examining committee; Mrs. Ginger Bierman and Mrs. Sarah Oordt for their excellent typing; and special thanks to my wife, Laurie, for her continuing encouragement and understanding.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF ILLUSTRATIONS
ABSTRACT

CHAPTER

1. INTRODUCTION
2. MODAL DECOUPLING AND REDUCED ORDER MODELING
   2.1. A Review of Modal Variables
   2.2. The LK Decoupling Transformation
   2.3. The Reduced Order Models of Davison and Marshall
   2.4. A New Reduced Order Model from Singular Perturbations
   2.5. A Second-Order Example
   2.6. Feedback Matrix Design via the LK Transformation
3. EXISTENCE AND UNIQUENESS OF THE L AND K TIME-SCALE-DECOUPLING MATRICES
   3.1. A Review of Matrix Eigenstructure
   3.2. The General Solution to the Nonsymmetric Algebraic Riccati Equation
   3.3. Conditions for Matrices L and K to be Real
   3.4. Uniqueness of L and K Matrices
4. COMPUTING THE LK TRANSFORMATION
   4.1. Computing the L Matrix
   4.2. Computing the K Matrix
   4.3. Ordering and Partitioning of the State Variables
   4.4. The Relationship Between Ordering and Coupling
   4.5. The Computational Method Summarized
5. NUMERICAL EXAMPLES
   5.1. A Strongly Two-Time-Scale Example
   5.2. A Weakly Two-Time-Scale Example
6. EXTENSIONS TO TIME-VARYING CONTROL SYSTEMS
   6.1. The Time-Varying LK Transformation
   6.2. A Second-Order Example

APPENDIX A: DATA FOR EXAMPLES 5.1 AND 5.2
REFERENCES

LIST OF TABLES

5.1. Execution times corresponding to Figures 5.5 and 5.6
5.2. Data for Cases I, II and III
A.1. The A matrix for the turbofan engine model
A.2. The B matrix for the turbofan engine model

LIST OF ILLUSTRATIONS

2.1. Response of reduced-order models to initial conditions for Example 2.5
2.2. Response of reduced-order models to a step input for Example 2.5
2.3. Response of reduced-order models to a ramp input for Example 2.5
2.4. Response of reduced-order models to a sinusoidal input for Example 2.5
5.1. Convergence of Algorithms 4.1 and 4.2 for Example 5.1
5.2. Response of a slow variable, velocity variation, of Example 5.1 to initial conditions v(0) = 100 ft/sec, δ(0) = .5 rad/sec, α(0) = 0, γ(0) = 0
5.3. Response of a fast variable, angle of attack, of Example 5.1 to initial conditions v(0) = 100 ft/sec, δ(0) = .05 rad/sec, α(0) = 0, γ(0) = 0
5.4. Response of Example 5.1 to a .1 (rad) elevator deflection applied for one second
5.5. Convergence of Algorithms 4.1 and 4.2 for Example 5.2 with n2 = 3, μ = .383
5.6. Convergence of Algorithms 4.1 and 4.2 for Example 5.2 with n2 = 5, μ = .371
5.7. Convergence of Algorithm 4.1 for Example 5.2 with n2 = 5 for alternative ordering of the state and alternative initial L0
6.1. Eigenvalues of original and block-decoupled system for Example 6.2
6.2. L(t) and K(t) time histories for Example 6.2
6.3. Response of state variables to initial conditions for Example 6.2
6.4. Response of state variables to control inputs for Example 6.2

ABSTRACT

This study presents new analytical findings and a new computational method for time-scale decoupling and order reduction of linear control systems. A transformation of variables is considered which reduces the system to block-diagonal form with slow and fast modes decoupled. Once the decoupled form of the system is obtained, a small parameter can be identified which provides a measure of the system's time-scale separation, and the methods of singular perturbations can be applied to yield reduced-order approximations of the original system. A particular reduced-order model so obtained overcomes certain difficulties encountered with other well-known order-reduction methodologies. The type of control system considered here is a very general, widely used form. The ability to identify a specific small parameter in all such systems provides a useful link to singular perturbation theory, which requires that a small parameter be identified. Although the examples of two-time-scale systems presented here are relatively low order, recent advances in computational methods for matrices allow these decoupling techniques to be applied directly to large-scale systems of order 100 or more.

In Chapter 1, a two-time-scale property is defined in terms of system eigenvalues, and a general class of linear transformations which yield time-scale decoupling is also defined.
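The decoupling procedure summarized in the abstract can be sketched numerically. The fragment below is a minimal illustration, not code from the dissertation: it assumes a state matrix A already ordered so that the first n_slow states are slow, solves the nonsymmetric algebraic Riccati equation for L by a simple fixed-point iteration (a basic scheme in the spirit of, but not identical to, the algorithms of Chapter 4), and reads off the block-decoupled slow and fast matrices plus a small parameter mu. The function name, tolerances, and the 4-state test matrix are all invented for this sketch.

```python
import numpy as np

def lk_decouple(A, n_slow, tol=1e-10, max_iter=200):
    """Solve the nonsymmetric algebraic Riccati equation
        A21 - A22 L + L A11 - L A12 L = 0
    by the fixed-point iteration L <- A22^{-1} (A21 + L A11 - L A12 L),
    assuming A22 is invertible and the system is two-time-scale."""
    A11, A12 = A[:n_slow, :n_slow], A[:n_slow, n_slow:]
    A21, A22 = A[n_slow:, :n_slow], A[n_slow:, n_slow:]
    L = np.zeros_like(A21)
    for _ in range(max_iter):
        L_next = np.linalg.solve(A22, A21 + L @ A11 - L @ A12 @ L)
        if np.linalg.norm(L_next - L) < tol:
            L = L_next
            break
        L = L_next
    # With exact L the system block-triangularizes into slow and fast parts:
    A_slow = A11 - A12 @ L
    A_fast = A22 + L @ A12
    return L, A_slow, A_fast

# A contrived 4-state example: two slow states, two fast states.
A = np.array([[ 0.0,  1.0,   0.1,   0.0],
              [-1.0, -0.5,   0.0,   0.1],
              [ 0.1,  0.0, -20.0,   1.0],
              [ 0.0,  0.1,  -1.0, -30.0]])
L, A_slow, A_fast = lk_decouple(A, n_slow=2)

# A small parameter measuring time-scale separation:
mu = np.max(np.abs(np.linalg.eigvals(A_slow))) / np.min(np.abs(np.linalg.eigvals(A_fast)))
```

With an exact L, the union of the eigenvalues of A_slow and A_fast reproduces the spectrum of A, and mu << 1 signals strong time-scale separation.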
Recommended publications
  • Matrix Algebra and Control

Appendix A: Matrix Algebra and Control

Boldface lower case letters, e.g., a or b, denote vectors; boldface capital letters, e.g., A, M, denote matrices. A vector is a column matrix; containing m elements (entries), it is referred to as an m-vector. The number of rows and columns of a matrix A is n and m, respectively. Then A is an (n, m)-matrix or n x m-matrix (dimension n x m). The matrix A is called positive or non-negative if A > 0 or A ≥ 0, respectively, i.e., if the elements are real and positive or non-negative, respectively.

A.1 Matrix Multiplication

Two matrices A and B may only be multiplied, C = AB, if they are conformable. A has size n x m, B m x r, C n x r. Two matrices are conformable for multiplication if the number m of columns of the first matrix A equals the number m of rows of the second matrix B. (Kronecker matrix products do not require conformable multiplicands.) The elements or entries of the matrices are related as follows:

C_ij = Σ_{ν=1}^{m} A_iν B_νj,  for all i = 1 … n, j = 1 … r.

The jth column vector C_·j of the matrix C as denoted in Eq. (A.13) can be calculated from the columns A_·ν and the entries B_νj; the jth row C_j· from the rows B_ν·:

column C_·j = Σ_{ν=1}^{m} A_·ν B_νj,   row C_j· = (C^T)_·j = Σ_{ν=1}^{m} A_jν B_ν·   (A.1)

A matrix product may be zero although neither the multiplicand A nor the multiplicator B is zero, e.g., AB = (c c; c c)(b b; −b −b) = 0 (rows separated by semicolons).
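The appendix's closing observation, that AB can vanish although neither factor is zero, is easy to verify numerically. This NumPy check is an illustration added here, instantiating the pattern (c c; c c)(b b; −b −b) = 0 with c = b = 1:

```python
import numpy as np

# A and B are both nonzero and conformable: (2 x 2) times (2 x 2).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
B = np.array([[ 1.0,  1.0],
              [-1.0, -1.0]])

C = A @ B   # entrywise: C_ij = sum_v A_iv * B_vj
print(C)    # the zero matrix, although neither A nor B is zero
```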
  • Classic Thesis

4 Eigenvalues and Eigenvectors

Because in this book we are interested in state spaces, many of the linear operations we consider will be performed on vectors within a single space, resulting in transformed vectors within that same space. Thus, we will consider operators of the form A: X → X. A special property of these operators is that they lead to special vectors and scalars known as eigenvectors and eigenvalues. These quantities are of particular importance in the stability and control of linear systems. In this chapter, we will discuss eigenvalues and eigenvectors and related concepts such as singular values.

4.1 A-Invariant Subspaces

In any space and for any operator A, there are certain subspaces in which, if we take a vector and operate on it using A, the result remains in that subspace. Formally:

A-Invariant Subspace: Let X1 be a subspace of a linear vector space X. This subspace is said to be A-invariant if for every vector x ∈ X1, Ax ∈ X1. When the operator A is understood from the context, X1 is sometimes said to be simply "invariant." (4.1)

Finite-dimensional subspaces can always be thought of as lines, planes, or hyperplanes that pass through the origin of their parent space. In the next section, we will consider A-invariant subspaces consisting of lines through the origin, i.e., the eigenvectors.

4.2 Definitions of Eigenvectors and Eigenvalues

Recall that a linear operator A is simply a rule that assigns a new vector Ax to an old vector x.
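The one-dimensional invariant subspaces described above can be checked directly: for an eigenvector v of A, the image Av stays on the line span{v}. A brief NumPy illustration (the matrix is an arbitrary example chosen for this sketch, not taken from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
lam, v = eigvals[0], eigvecs[:, 0]   # an eigenvalue and its eigenvector

# span{v} is A-invariant: operating on v by A lands back on the same line.
w = A @ v
assert np.allclose(w, lam * v)
```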
  • Unified Canonical Forms for Matrices Over a Differential Ring

Unified Canonical Forms for Matrices Over a Differential Ring
J. Zhu and C. D. Johnson
Electrical and Computer Engineering Department, University of Alabama in Huntsville, Huntsville, Alabama 35899
Submitted by Russell Menis

ABSTRACT

Linear differential equations with variable coefficients of the vector form (i) ẋ = A(t)x and the scalar form (ii) y^(n) + a_n(t)y^(n−1) + ··· + a_2(t)ẏ + a_1(t)y = 0 can be studied as operators on a differential module over a differential ring. Using this differential algebraic structure and a classical result on differential operator factorizations developed by Cauchy and Floquet, a new (variable) eigenvalue theory and an associated set of matrix canonical forms are developed in this paper for matrices over a differential ring. In particular, eight new spectral and modal matrix canonical forms are developed that include as special cases the classical companion, Jordan (diagonal), and (generalized) Vandermonde canonical forms, as traditionally used for matrices over a number ring (field). Therefore, these new matrix canonical forms can be viewed as unifications of those classical ones. Two additional canonical matrices that perform order reductions of a Cauchy-Floquet factorization of a linear differential operator are also introduced. General and explicit formulas for obtaining the new canonical forms are derived in terms of the new (variable) eigenvalues. The results obtained here have immediate applications in diverse engineering and scientific fields, such as physics, control systems, communications, and electrical networks, in which linear differential equations (i) and (ii) are used as mathematical models.

1. INTRODUCTION

This paper is concerned with the class of finite-dimensional, homogeneous linear differential equations of the vector form

ẋ = A(t)x,  x(t0) = x0  (1.1)

Linear Algebra and Its Applications 147:201-248 (1991). © Elsevier Science Publishing Co., Inc., 1991.
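Equations of form (1.1) with a genuinely time-varying A(t) rarely have closed-form solutions, but they are easy to integrate numerically. The SciPy sketch below is an illustration added here, not code from the paper; the diagonal A(t) is contrived so the exact solution is known and the numerical result can be checked against it:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # Contrived diagonal A(t): exact solution x1(t) = e^{-t}, x2(t) = e^{-t^2/2}.
    return np.diag([-1.0, -t])

def rhs(t, x):
    return A(t) @ x              # the vector form x' = A(t) x

x0 = np.array([1.0, 1.0])        # x(t0) = x0 with t0 = 0
sol = solve_ivp(rhs, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)
x_final = sol.y[:, -1]           # numerical x(1)
```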
  • Generalized Eigenvector - Wikipedia

Generalized eigenvector

In linear algebra, a generalized eigenvector of an n × n matrix A is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector.[1]

Let V be an n-dimensional vector space; let φ be a linear map in L(V), the set of all linear maps from V into itself; and let A be the matrix representation of φ with respect to some ordered basis. There may not always exist a full set of n linearly independent eigenvectors of A that form a complete basis for V. That is, the matrix A may not be diagonalizable.[2][3] This happens when the algebraic multiplicity of at least one eigenvalue λ is greater than its geometric multiplicity (the nullity of the matrix (A − λI), or the dimension of its nullspace). In this case, λ is called a defective eigenvalue and A is called a defective matrix.[4] A generalized eigenvector corresponding to λ, together with the matrix (A − λI), generates a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of V.[5][6][7]

Using generalized eigenvectors, a set of linearly independent eigenvectors of A can be extended, if necessary, to a complete basis for V.[8] This basis can be used to determine an "almost diagonal matrix" J in Jordan normal form, similar to A, which is useful in computing certain matrix functions of A.[9] The matrix J is also useful in solving the system of linear differential equations ẋ = Ax, where A need not be diagonalizable.[10][11]
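The Jordan-chain construction the article describes can be carried out by hand for the simplest defective matrix. A short NumPy check (the matrix and chain vectors are chosen here for illustration): A = [[2, 1], [0, 2]] has the single eigenvalue 2 with geometric multiplicity 1, so one ordinary eigenvector v1 must be supplemented by a generalized eigenvector v2 with (A − 2I)v2 = v1.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0                      # defective eigenvalue: alg. mult. 2, geom. mult. 1
N = A - lam * np.eye(2)        # (A - lam*I), here nilpotent of index 2

v1 = np.array([2.0, 0.0])      # ordinary eigenvector:    N @ v1 = 0
v2 = np.array([1.0, 2.0])      # generalized eigenvector: N @ v2 = v1

# The Jordan chain {v1, v2} is a basis; in that basis A takes Jordan normal form.
M = np.column_stack([v1, v2])
J = np.linalg.inv(M) @ A @ M   # the Jordan block [[2, 1], [0, 2]]
```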
  • Maxima by Example: Ch. 5, Matrix Solution Methods ∗

Maxima by Example: Ch. 5, Matrix Solution Methods
Edwin (Ted) Woollett
March 30, 2016

Contents

1 Introduction and Quick Start
  1.1 Examples of Matrix Syntax
  1.2 Supplementary Matrix Functions
  1.3 Linear Inhomogeneous Set of Equations
  1.4 Eigenvalues and Eigenvectors
  1.5 Initial Value Problem for a Set of Three First Order Linear ODE's
  1.6 Matrix Notation Used Here
  1.7 References
2 Row-Echelon Form and Rank of a Matrix
3 Inverse of a Matrix
  3.1 Inversion Using Maxima Core Functions
  3.2 Inversion Using Elementary Row Operations
4 Set of Homogeneous Linear Algebraic Equations Ax = 0
  4.1 Nullspace Basis Vectors of a Matrix
  4.2 Examples of Eigenvalues and Eigenvectors
5 Generalized Eigenvectors, Jordan Chains, Canonical Basis, Jordan Canonical Form
  5.1 Example 1
    5.1.1 Example 1 Using mcol_solve(known_col, unknown_col, ukvarL)
    5.1.2 Example 1 Using jordan_chain(A, eival)
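The nullspace computation of Section 4.1 has a direct NumPy analogue via the singular value decomposition: the right-singular vectors belonging to (numerically) zero singular values span the nullspace. The helper below is a sketch added here (its name and tolerance are choices made for illustration, not part of the Maxima chapter):

```python
import numpy as np

def nullspace(A, rtol=1e-12):
    """Orthonormal basis for the nullspace of A, via the SVD."""
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > rtol * s[0])) if s.size else 0
    return Vt[rank:].T           # columns span {x : A x = 0}

# Rank-1 example: both rows are multiples of (1, 2, 3), so the
# nullspace of this 2 x 3 matrix is two-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
N = nullspace(A)                 # a (3, 2) basis matrix
```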
  • Chapter 1 Matrix Algebra. Definitions and Operations

Chapter 1: Matrix Algebra. Definitions and Operations

1.1 Matrices

Matrices play very important roles in the computation and analysis of several engineering problems. First, matrices allow for compact notations. As discussed below, matrices are collections of objects arranged in rows and columns. The symbols representing these collections can then induce an algebra, in which different operations such as matrix addition or matrix multiplication can be defined. (This compact representation has become even more significant today with the advent of computer software that allows simple statements such as A+B or A*B to be evaluated directly.) Aside from the convenience in representation, once the matrices have been constructed, other internal properties can be derived and assessed. Properties such as determinant, rank, trace, eigenvalues and eigenvectors (all to be defined later) determine characteristics of the systems from which the matrices were obtained. These properties can then help in the analysis and improvement of the systems under study.

It can be argued that some problems may be solved without the use of matrices. However, as the complexity of the problem increases, matrices can help improve the tractability of the solution.

Definition 1.1. A matrix is a collection of objects, called the elements of the matrix, arranged in rows and columns. These elements could be numbers (possibly complex, with i = √−1), for example

A = ( 1  0  0.3 ; −2  3+i  1 ),

or functions, for example

B = ( 1  2x(t)+a ; ∫ sin(ωt) dt  dy/dt ),

with rows separated by semicolons. We restrict the discussion to matrices that contain elements for which binary operations such as addition, subtraction, multiplication and division among the elements make algebraic sense.

© 2006 Tomas Co, all rights reserved.
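In a modern numerical language the "simple statements" mentioned above look exactly as advertised. A brief NumPy illustration with arbitrary values (note that in NumPy `@` is matrix multiplication while `*` is elementwise, a design choice worth knowing when translating textbook formulas):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 3.0]])
B = np.array([[0.5, 1.0],
              [1.0, 0.0]])

S = A + B    # elementwise matrix addition
P = A @ B    # matrix multiplication (rows of A against columns of B)

# Elements may also be complex, in the spirit of the chapter's i = sqrt(-1) example:
C = np.array([[ 1.0, 0.3],
              [-2.0, 3.0 + 1.0j]])
```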