Matrix Algebra and Control


Appendix A Matrix Algebra and Control

Boldface lower-case letters, e.g., a or b, denote vectors; boldface capital letters, e.g., A, M, denote matrices. A vector is a column matrix; containing m elements (entries), it is referred to as an m-vector. The number of rows and columns of a matrix A is n and m, respectively. Then, A is an (n, m)-matrix or n × m-matrix (dimension n × m). The matrix A is called positive or non-negative if $A > 0$ or $A \ge 0$, respectively, i.e., if its elements are real and positive or non-negative, respectively.

A.1 Matrix Multiplication

Two matrices A and B may only be multiplied, C = AB, if they are conformable: A has size n × m, B size m × r, and C size n × r. Two matrices are conformable for multiplication if the number m of columns of the first matrix A equals the number m of rows of the second matrix B. (Kronecker matrix products do not require conformable multiplicands.) The elements or entries of the matrices are related as follows:

$$C_{ij} = \sum_{\nu=1}^{m} A_{i\nu} B_{\nu j} \qquad \forall\, i = 1 \ldots n,\; j = 1 \ldots r .$$

The j-th column vector $C_{\cdot j}$ of the matrix C, in the notation of Eq. (A.13), can be calculated from the columns $A_{\cdot \nu}$ and the entries $B_{\nu j}$; the j-th row $C_{j\cdot}$ from the rows $B_{\nu \cdot}$ and the entries $A_{j\nu}$:

$$\text{column } C_{\cdot j} = \sum_{\nu=1}^{m} A_{\cdot\nu} B_{\nu j}, \qquad \text{row } C_{j\cdot} = \sum_{\nu=1}^{m} A_{j\nu} B_{\nu\cdot} . \tag{A.1}$$

A matrix product may be zero although neither the multiplicand A nor the multiplicator B is zero, e.g.,

$$AB = \begin{pmatrix} c & c \\ c & c \end{pmatrix} \begin{pmatrix} b & b \\ -b & -b \end{pmatrix} = \mathbf{0} .$$

Without A or B being the null matrix, the product AB can only vanish if both A and B are singular. The matrix $B = A^{\perp}$ is the (right) annihilator of A, i.e., $A A^{\perp} = \mathbf{0}$.

A.2 Properties of Matrix Operations

Distributivity: $A(B + C) = AB + AC$. Associativity of addition, $(A + B) + C = A + (B + C)$, and of multiplication, $(AB)C = A(BC)$. Commutativity of addition, but non-commutativity of multiplication (and hence of raising to higher powers):

$$A + B = B + A, \qquad AB \ne BA . \tag{A.2}$$

Exceptions: Consider, first, a multivariable control with transfer matrix G(s) in the forward path and unity feedback and, second, H in the forward path and F in the feedback, where G = FH. The overall transfer matrix is given by

$$(I + FH)^{-1} FH = FH (I + FH)^{-1} . \tag{A.3}$$

The inverse of the return-difference matrix and G commute unexpectedly. Another exceptional case is the product $A \exp(At)$, i.e., the coefficient matrix and the state transition matrix. Finally, suppose A and B are nonsingular. Then A and B commute if their product is the identity matrix: both AB = I and BA = I yield $A = B^{-1}$. Generally, the matrices A and B commute with respect to multiplication if B is a function of A, e.g., as given by a matrix polynomial or by the decomposition in Eq. (A.45). Note that

$$F(I + HF)^{-1} = (I + FH)^{-1} F \qquad \text{and} \qquad F(I + HF)^{-1} H = (I + FH)^{-1} FH ;$$

in particular, observe the change of order within the parentheses.

Properties when transposing or inverting a matrix product:

$$(AB)^T = B^T A^T, \qquad (AB)^{-1} = B^{-1} A^{-1} . \tag{A.4}$$

Inverse and transpose operations (symbols) may be permuted: $(A^T)^{-1} = (A^{-1})^T$. If $A^{-1} = A^T$ holds, then A is referred to as an orthogonal matrix.

An idempotent matrix A has the property $A^2 = A$. Such matrices occur in least squares, estimation, and sliding-mode theory, e.g.,

$$A = I - B\,(CB)^{-1} C . \tag{A.5}$$

A matrix A is nilpotent if $A^k = \mathbf{0}$ for some k. Such a matrix appears in the state-space representation of a k-tuple integrator.

A.3 Diagonal Matrices

A diagonal matrix A is a square matrix with non-zero entries $A_{ii}$ in the main diagonal only, e.g.,

$$A = \mathrm{diag}(A_{11}, A_{22}, \ldots, A_{nn}) . \tag{A.6}$$

If these entries $A_{ii}$ are all equal to each other, A is a scalar matrix.
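The identities of Sections A.1 and A.2 are easy to check numerically. Below is a minimal sketch in Python/NumPy (the language and library are our choice for illustration; the appendix itself prescribes none) that verifies the vanishing product of two nonzero singular matrices and the commutation exceptions around Eq. (A.3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Section A.1: AB may vanish although neither factor is the null matrix;
# both factors are then necessarily singular.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])       # rank 1, hence singular
B = np.array([[ 1.0,  1.0],
              [-1.0, -1.0]])     # rank 1, hence singular
assert np.allclose(A @ B, np.zeros((2, 2)))   # AB = 0, yet A != 0 and B != 0

# Section A.2: the return-difference exception of Eq. (A.3) and the
# push-through rule F (I + HF)^{-1} = (I + FH)^{-1} F.
n = 3
F = rng.standard_normal((n, n))
H = rng.standard_normal((n, n))
I = np.eye(n)

closed_loop_1 = np.linalg.solve(I + F @ H, F @ H)   # (I + FH)^{-1} FH
closed_loop_2 = F @ H @ np.linalg.inv(I + F @ H)    # FH (I + FH)^{-1}
assert np.allclose(closed_loop_1, closed_loop_2)    # the two factors commute

push_1 = F @ np.linalg.inv(I + H @ F)               # F (I + HF)^{-1}
push_2 = np.linalg.inv(I + F @ H) @ F               # (I + FH)^{-1} F
assert np.allclose(push_1, push_2)                  # note the order change HF -> FH
```

Applying $(I + FH)^{-1}$ via `np.linalg.solve` rather than forming the inverse explicitly is the numerically safer choice; the explicit `inv` calls above are kept only to mirror the formulas.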
The identity matrix $I_n$ is a scalar matrix with elements 1 and dimension n × n:

$$I_n = \mathrm{diag}(1, 1, \ldots, 1), \qquad I_n \in \mathbb{R}^{n \times n} .$$

Given a rectangular (n, r)-matrix B, premultiplying B by the identity matrix $I_n$ or postmultiplying B by $I_r$ yields $I_n B = B$ or $B I_r = B$. Premultiplying [postmultiplying] a matrix A by a diagonal matrix $D = \mathrm{diag}(d_i)$ yields

$$DA = \begin{pmatrix} d_1 A_{1\cdot} \\ d_2 A_{2\cdot} \\ \vdots \end{pmatrix}, \qquad AD = \begin{pmatrix} d_1 A_{\cdot 1} & d_2 A_{\cdot 2} & \cdots \end{pmatrix}, \tag{A.7}$$

i.e., a new matrix the rows [columns] of which are successively multiplied (scaled) by $d_i$ (i.e., the i-th row [column] by $d_i$).

A.4 Triangular Matrices

A lower triangular matrix is a square matrix having all elements zero above the main diagonal; an upper triangular matrix contains zero elements only below the main diagonal. The product of two triangular matrices of the same kind (both lower or both upper) is again triangular. If A is given as a diagonal matrix or an upper or lower triangular matrix, the eigenvalues $\lambda[A]$ are given directly by the entries of the main diagonal, $A_{ii}\ \forall\, i = 1 \ldots n$.

A.5 Column Matrices (Vectors) and Row Matrices

The unit m-vector with k-th component 1 is termed $e_k$,

$$e_k = (0, 0, \ldots, 1, \ldots, 0)^T \in \mathbb{R}^{m \times 1} = (I_m)_{\cdot k} .$$

Defining this m-vector $e_k$ and the n-vector $e_i$, the elementary matrix or Kronecker matrix $E_{ik}$ is given by the dyadic product

$$E_{ik} = e_i e_k^T \in \mathbb{R}^{n \times m} \tag{A.8}$$

as an (n, m)-matrix with entry 1 in the (i, k)-element only and zeros elsewhere. Thus, the (n, m)-matrix A can be established element by element: $A = \sum_{i=1}^{n} \sum_{k=1}^{m} A_{ik} E_{ik}$ where $A \in \mathbb{R}^{n \times m}$ or $\dim A = n \times m$. The identity matrix can be obtained by the sum $I_n = \sum_{i=1}^{n} e_i e_i^T = \sum_{i=1}^{n} E_{ii}$. The sum vector with elements throughout unity, $\mathbf{1} = (1, 1, \ldots, 1)^T$, serves as a summation operator for an m-vector: $\mathbf{1}^T a = \sum_{i=1}^{m} a_i$.

The inner product of two vectors a and b is a scalar, $a^T b = b^T a$. Orthogonal vectors have zero inner product. Assume a vector output signal y given by a linear combination of a vector input signal x, governed by the transfer equation

$$y = Cx \qquad \text{where} \quad y \in \mathbb{R}^r, \; x \in \mathbb{R}^n, \; C \in \mathbb{R}^{r \times n} . \tag{A.9}$$

The entry $C_{ij}$ of C is considered as an operational factor from the input component $x_j$ to the output component $y_i$. Note that the output subscript i (effect) is written first and the subscript j (source) second.

A partitioned vector u is denoted by the "vec" symbol

$$u = \begin{pmatrix} u_1 \\ \vdots \\ u_N \end{pmatrix} = \mathrm{vec}(u_1, u_2, \ldots, u_N) = \mathrm{vec}\, u_i = (u_1^T, u_2^T, \ldots, u_N^T)^T, \quad u_i \in \mathbb{R}^{m_i}, \; u \in \mathbb{R}^m, \; m = \sum_i m_i . \tag{A.10}$$

The norm or length of a is the distance to the null vector (origin) and is defined by the Frobenius norm $\|a\|_F = \sqrt{a^T a}$. The magnitude of the inner product never exceeds the product of the norms of the multiplicands (Schwarz inequality): $|a^T b| \le \|a\|_F \|b\|_F$. Triangle inequality (for any kind of norm): $\|a + b\| \le \|a\| + \|b\|$. The angle $\theta$ between two vectors u and v is defined by $\cos\theta = u^T v / (\|u\|_F \|v\|_F)$.

Mapping a matrix A, as displayed in Eq. (A.11), to a vector a is provided by the operator "col" (or by the operator "vec"):

$$\mathrm{col}\, A = \mathrm{vec}\, A = a = (A_{11} \ldots A_{m1} : A_{12} \ldots A_{m2} : \ldots : A_{mn})^T . \tag{A.12}$$

The operator col lists the entries of A column-wise. Separating the i-th column of A will be termed $(A)_{\cdot i}$. For abbreviation, $A_{\cdot i}$ is also used although, usually, only matrices are denoted by upper-case boldface letters, irrespective of the subscript; $A_{\cdot i}$ is defined as a column matrix (vector). The operator col (column string) can also be written as

$$\mathrm{col}\, A = (A_{\cdot 1}^T : A_{\cdot 2}^T : \ldots : A_{\cdot n}^T)^T . \tag{A.13}$$

The inner product of two real matrices A and B of equal dimension m × n is a scalar and coincides with the trace of the matrix product:

$$\langle A, B \rangle = (\mathrm{col}\, A)^T \mathrm{col}\, B = \mathrm{tr}\, A^T B = \sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ij} \le \|A\|_F \|B\|_F .$$
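The scaling rule of Eq. (A.7), the sum vector, and the col/trace characterization of the matrix inner product can likewise be confirmed in a few lines. This is again a hedged NumPy illustration rather than part of the original appendix; note that `order="F"` makes `flatten` stack column-wise, matching Eq. (A.12):

```python
import numpy as np

rng = np.random.default_rng(1)

# Eq. (A.7): premultiplying by diag(d_i) scales the rows of A,
# postmultiplying by diag(e_j) scales the columns.
A = rng.standard_normal((3, 4))
d = np.array([2.0, -1.0, 0.5])         # row scaling factors
e = np.array([1.0, 3.0, -2.0, 0.25])   # column scaling factors
assert np.allclose(np.diag(d) @ A, d[:, None] * A)   # i-th row times d_i
assert np.allclose(A @ np.diag(e), A * e[None, :])   # j-th column times e_j

# Section A.5: the sum vector 1 acts as a summation operator, 1^T a = sum_i a_i.
a = rng.standard_normal(4)
ones = np.ones(4)
assert np.isclose(ones @ a, a.sum())

# Eqs. (A.12)/(A.13) and the matrix inner product:
# <A, B> = (col A)^T col B = tr(A^T B) <= ||A||_F ||B||_F.
B = rng.standard_normal((3, 4))
col = lambda M: M.flatten(order="F")   # column-wise stacking, as in Eq. (A.12)
inner = col(A) @ col(B)
assert np.isclose(inner, np.trace(A.T @ B))
assert abs(inner) <= np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro")
```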
The Frobenius or Euclidean norm of a real matrix A is given by

$$\|A\|_F = \sqrt{\mathrm{tr}\, A^T A} = \sqrt{\sum_{i} \sum_{j} A_{ij}^2} . \tag{A.14}$$

A.6 Reduced Matrix, Minor, Cofactor, Adjoint

Given the (n, n)-matrix A, the reduced matrix $A^{\mathrm{red}}_{ik}$ of size (n − 1) × (n − 1) is obtained by cancelling row i and column k. Repeating this for all i = 1…n and k = 1…n yields $n^2$ different matrices. The minor associated with the (i, k)-component of A is defined as the determinant $\det A^{\mathrm{red}}_{ik}$. The cofactor of the (i, k)-element of A is obtained by adjusting the sign of the minor, precisely by multiplying the minor by $(-1)^{i+k}$, that is,

$$\mathrm{cof}_{ik} A = (-1)^{i+k} \det A^{\mathrm{red}}_{ik} .$$

Given the (n, n)-matrix A with elements $A_{ik}$, i.e., $A = (A_{ik}) = \mathrm{matrix}[A_{ik}]$, the adjoint is given as the transposed matrix of the cofactors: $\mathrm{adj}\, A = [\mathrm{matrix}(\mathrm{cof}_{ik} A)]^T$. The determinant can be decomposed with respect to row i or with respect to column k, that is,

$$\det A = \sum_{k=1}^{n} A_{ik}\, \mathrm{cof}_{ik} A \quad \forall\, i = 1 \ldots n \qquad \text{or} \qquad \det A = \sum_{i=1}^{n} A_{ik}\, \mathrm{cof}_{ik} A \quad \forall\, k = 1 \ldots n . \tag{A.15}$$

Interpretation of the equations above as a matrix multiplication yields

$$I \det A = A\, \mathrm{adj}\, A = (\mathrm{adj}\, A)\, A \qquad \Longrightarrow \qquad A^{-1} = \frac{\mathrm{adj}\, A}{\det A} . \tag{A.16}$$

A.7 Similar Matrices

Matrices A and $\tilde{A}$ are similar, i.e., $A \sim \tilde{A}$, if $\tilde{A} = T A T^{-1}$. The preceding equation is named a similarity transformation. Similar matrices A and $\tilde{A}$ are characterized by the property $\det A = \det \tilde{A}$, by the same eigenvalues, eigenvalue multiplicities and eigenvalue indices (and by the same number of generalized eigenvectors); see Eq. (B.107). Examples of similar matrices are A and $\tilde{A} = \mathrm{diag}\, \lambda_i[A] = T^{-1} A T$ in Eq. (B.13).
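To make the cofactor construction concrete, here is a short, hedged NumPy sketch of Eqs. (A.15)–(A.16) and of the invariants preserved by a similarity transformation. The function name `adjugate` is ours, chosen to avoid a clash with other meanings of "adjoint":

```python
import numpy as np

rng = np.random.default_rng(2)

def adjugate(A):
    """Adjoint of A built from cofactors: cof_ik(A) = (-1)**(i+k) * det(A^red_ik)."""
    n = A.shape[0]
    cof = np.empty_like(A)
    for i in range(n):
        for k in range(n):
            reduced = np.delete(np.delete(A, i, axis=0), k, axis=1)  # cancel row i, column k
            cof[i, k] = (-1) ** (i + k) * np.linalg.det(reduced)     # minor with adjusted sign
    return cof.T   # adj A = transposed matrix of cofactors

A = rng.standard_normal((4, 4))
adjA = adjugate(A)
detA = np.linalg.det(A)

# Eq. (A.16): A adj(A) = adj(A) A = det(A) I, hence A^{-1} = adj(A) / det(A).
assert np.allclose(A @ adjA, detA * np.eye(4))
assert np.allclose(adjA @ A, detA * np.eye(4))
assert np.allclose(np.linalg.inv(A), adjA / detA)

# Section A.7: Atilde = T A T^{-1} is similar to A and shares its
# determinant and eigenvalues (up to numerical noise).
T = rng.standard_normal((4, 4))
Atilde = T @ A @ np.linalg.inv(T)
assert np.isclose(np.linalg.det(Atilde), detA)
assert np.allclose(np.sort_complex(np.linalg.eigvals(Atilde)),
                   np.sort_complex(np.linalg.eigvals(A)))
```

The $O(n^4)$ cofactor expansion above is for exposition only; in practice the inverse would be computed by factorization, exactly as `np.linalg.inv` does internally.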