Complementarity Properties of Peirce-Diagonalizable Linear Transformations on Euclidean Jordan Algebras

Optimization Methods and Software, Vol. 00, No. 00, January 2009, 1-14

Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras

M. Seetharama Gowda^a*, J. Tao^b, and Roman Sznajder^c

^a Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland 21250, USA; ^b Department of Mathematics and Statistics, Loyola University Maryland, Baltimore, Maryland 21210, USA; ^c Department of Mathematics, Bowie State University, Bowie, Maryland 20715, USA

* Corresponding author. Email: [email protected]

Peirce-diagonalizable linear transformations on a Euclidean Jordan algebra are of the form $L(x) = A \bullet x := \sum a_{ij} x_{ij}$, where $A = [a_{ij}]$ is a real symmetric matrix and $x = \sum x_{ij}$ is the Peirce decomposition of an element $x$ in the algebra with respect to a Jordan frame. Examples of such transformations include Lyapunov transformations and quadratic representations on Euclidean Jordan algebras. The Schur (or Hadamard) product of symmetric matrices provides another example. Motivated by a recent generalization of the Schur product theorem, in this article we study general and complementarity properties of such transformations.

Keywords: Peirce-diagonalizable transformation, linear complementarity problem, Euclidean Jordan algebra, symmetric cone, Schur/Hadamard product, Lyapunov transformation, quadratic representation.

AMS Subject Classification: 15A33; 17C20; 17C65; 90C33.

1. Introduction

Let $L$ be a linear transformation on a Euclidean Jordan algebra $(V, \circ, \langle \cdot,\cdot \rangle)$ of rank $r$ and let $S^r$ denote the set of all real $r \times r$ symmetric matrices. We say that $L$ is Peirce-diagonalizable if there exist a Jordan frame $\{e_1, e_2, \ldots, e_r\}$ (with a specified ordering of its elements) in $V$ and a matrix $A = [a_{ij}] \in S^r$ such that for any $x \in V$ with its Peirce decomposition $x = \sum_{i \le j} x_{ij}$ with respect to $\{e_1, e_2, \ldots, e_r\}$, we have

  $L(x) = A \bullet x := \sum_{1 \le i \le j \le r} a_{ij}\, x_{ij}.$   (1)

The above expression defines a Peirce-diagonal transformation and a Peirce-diagonal representation of $L$.

Our first example is obtained by taking $V = S^r$ with the canonical Jordan frame (see Section 2 for various notations and definitions). In this case, $A \bullet x$ reduces to the well-known Schur (also known as Hadamard) product of two symmetric matrices, and $L$ is the corresponding induced transformation. On a general Euclidean Jordan algebra, two basic examples are the Lyapunov transformation $L_a$ and the quadratic representation $P_a$ corresponding to any element $a \in V$. These are, respectively, defined on $V$ by

  $L_a(x) = a \circ x$ and $P_a(x) = 2\,a \circ (a \circ x) - a^2 \circ x.$

If the spectral decomposition of $a$ is given by $a = a_1 e_1 + a_2 e_2 + \cdots + a_r e_r$, where $\{e_1, e_2, \ldots, e_r\}$ is a Jordan frame and $a_1, a_2, \ldots, a_r$ are the eigenvalues of $a$, then with respect to this Jordan frame and the induced Peirce decomposition $x = \sum_{i \le j} x_{ij}$ of any element, we have [20]

  $L_a(x) = \sum_{i \le j} \frac{a_i + a_j}{2}\, x_{ij}$ and $P_a(x) = \sum_{i \le j} (a_i a_j)\, x_{ij}.$   (2)

Thus, both $L_a$ and $P_a$ have the form (1), and the corresponding matrices are given, respectively, by $\left[\frac{a_i + a_j}{2}\right]$ and $[a_i a_j]$. We shall see (in Section 3) that every transformation $L$ given by (1) is a linear combination of quadratic representations. Our fourth example deals with Löwner functions.
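To make the Peirce-diagonal representations (1)-(2) concrete, here is a minimal numerical sketch (not taken from the paper; it assumes $V = S^r$ with the canonical Jordan frame, the Jordan product $x \circ y = (xy + yx)/2$, and NumPy): for a diagonal element $a$, both $L_a$ and $P_a$ act entrywise on $x$ as Hadamard products with the coefficient matrices of (2).

```python
import numpy as np

# Sketch: in V = S^r with the canonical Jordan frame {E_1, ..., E_r},
# the Jordan product is x o y = (xy + yx)/2 and A . x is the entrywise
# (Hadamard) product.  For a diagonal a = diag(a_1, ..., a_r), formula (2)
# says L_a and P_a act entrywise with coefficient matrices [(a_i + a_j)/2]
# and [a_i a_j], respectively.

rng = np.random.default_rng(0)
r = 4

def jordan(x, y):
    """Jordan product in S^r: x o y = (xy + yx)/2."""
    return (x @ y + y @ x) / 2

a_eig = rng.standard_normal(r)          # eigenvalues a_1, ..., a_r
a = np.diag(a_eig)                      # spectral decomposition uses E_i
x = rng.standard_normal((r, r))
x = (x + x.T) / 2                       # a generic element of S^r

# Lyapunov transformation and quadratic representation
La_x = jordan(a, x)
Pa_x = 2 * jordan(a, jordan(a, x)) - jordan(jordan(a, a), x)

# Peirce-diagonal (Schur product) form of (2)
A_lyap = (a_eig[:, None] + a_eig[None, :]) / 2   # [(a_i + a_j)/2]
A_quad = a_eig[:, None] * a_eig[None, :]         # [a_i a_j]

assert np.allclose(La_x, A_lyap * x)   # L_a(x) = [(a_i+a_j)/2] . x
assert np.allclose(Pa_x, A_quad * x)   # P_a(x) = [a_i a_j] . x
print("Peirce-diagonal representations (2) verified numerically.")
```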
Given a differentiable function $\phi : \mathbb{R} \to \mathbb{R}$, consider the corresponding Löwner function $\Phi : V \to V$ defined by the spectral decompositions

  $a = \lambda_1 e_1 + \lambda_2 e_2 + \cdots + \lambda_r e_r$ and $\Phi(a) = \phi(\lambda_1) e_1 + \phi(\lambda_2) e_2 + \cdots + \phi(\lambda_r) e_r.$

Then the directional derivative of $\Phi$ at $a = \sum \lambda_i e_i$ in the direction of $x = \sum x_{ij}$ (Peirce decomposition written with respect to $\{e_1, e_2, \ldots, e_r\}$) is given by

  $\Phi'(a; x) := \sum_{i \le j} a_{ij}\, x_{ij},$

where $a_{ij} := \frac{\phi(\lambda_i) - \phi(\lambda_j)}{\lambda_i - \lambda_j}$ (which, by convention, is the derivative of $\phi$ when $\lambda_i = \lambda_j$). We now note that (for a fixed $a$) $\Phi'(a; x)$ is of the form (1). In [16], Korányi studies the operator monotonicity of $\Phi$ based on such expressions. Expressions like (1) also appear in connection with the so-called uniform nonsingularity property; see the recent paper [2] for further details.

Our objective in this paper is to study Peirce-diagonalizable transformations and, in particular, to describe their complementarity properties. Consider a Euclidean Jordan algebra $V$ with the corresponding symmetric cone $K$. Given a linear transformation $L$ on $V$ and an element $q \in V$, the (symmetric cone) linear complementarity problem, denoted by LCP$(L, K, q)$, is to find an $x \in V$ such that

  $x \in K, \quad L(x) + q \in K, \quad \text{and} \quad \langle L(x) + q, x \rangle = 0.$   (3)

This problem is a generalization of the standard linear complementarity problem [3], which corresponds to a square matrix $M \in \mathbb{R}^{n \times n}$, the cone $\mathbb{R}^n_+$, and a vector $q \in \mathbb{R}^n$; it is a special case of a variational inequality problem [4].

In the theory of complementarity problems, global uniqueness and solvability issues are of fundamental importance. A linear transformation $L$ is said to have the Q-property (GUS-property) if for every $q \in V$, LCP$(L, K, q)$ has a solution (respectively, a unique solution); it is said to have the R0-property if LCP$(L, K, 0)$ has a unique solution (namely, zero). In the context of $L_a$, the GUS, Q, and R0 properties are all equivalent; moreover, they hold if and only if $a > 0$, that is, when $a$ belongs to the interior of the symmetric cone of $V$. Similarly, for $P_a$, the GUS, Q, and R0 properties are equivalent; see [8]. In this paper, we extend these results to certain Peirce-diagonalizable transformations. Our motivation for this study also comes from a recent result which generalizes the well-known Schur product theorem to Euclidean Jordan algebras [19]: if $A$ in (1) is positive semidefinite and $x \in K$, then $A \bullet x \in K$.

The organization of the paper is as follows. Section 2 covers some basic material. In Section 3, we cover general properties of Peirce-diagonalizable transformations. Section 4 deals with the copositivity property; in that section, we introduce certain cones lying between the cone of completely positive matrices and the cone of doubly nonnegative matrices. Section 5 deals with the Z-property, and finally, in Section 6, we cover the complementarity properties.
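As a small illustrative sketch of problem (3) (not from the paper; it assumes $V = S^r$, $K = S^r_+$, and NumPy, and the helper names proj_psd and is_lcp_solution are invented for this example), take $L = L_e$ with $e$ the unit element, so that $L(x) = x$; then the projection of $-q$ onto the PSD cone satisfies all three conditions in (3).

```python
import numpy as np

# Sketch (not from the paper): the symmetric cone LCP (3) for V = S^r with
# K = S^r_+.  We take L = L_e, the Lyapunov transformation of the unit
# element e = I, so L(x) = x.  For this L, a solution of LCP(L, K, q) is
# the projection of -q onto the PSD cone (by the Moreau decomposition),
# which we verify against the three conditions in (3).

rng = np.random.default_rng(3)
r = 5

def proj_psd(s):
    """Projection onto S^r_+ : keep the nonnegative part of the spectrum."""
    lam, Q = np.linalg.eigh(s)
    return Q @ np.diag(np.maximum(lam, 0.0)) @ Q.T

def is_lcp_solution(L, q, x, tol=1e-8):
    """Check x in K, L(x) + q in K, and <L(x) + q, x> = 0, as in (3)."""
    y = L(x) + q
    in_K = lambda s: np.linalg.eigvalsh(s).min() >= -tol
    return in_K(x) and in_K(y) and abs(np.trace(y @ x)) <= tol * r

q = rng.standard_normal((r, r)); q = (q + q.T) / 2
L = lambda x: x                      # L_e(x) = e o x = x for e = I
x = proj_psd(-q)                     # candidate solution

print(is_lcp_solution(L, q, x))      # True
```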
2. Some preliminaries

Throughout this paper, we fix a Euclidean Jordan algebra $(V, \circ, \langle \cdot,\cdot \rangle)$ of rank $r$ with the corresponding symmetric cone (of squares) $K$ [5], [9]. It is well known that $V$ is a product of simple algebras, each isomorphic to one of the matrix algebras $S^n$, $H^n$, $Q^n$ (the spaces of $n \times n$ Hermitian matrices over the real numbers, complex numbers, and quaternions, respectively), $O^3$ (the space of $3 \times 3$ Hermitian matrices over the octonions), or the Jordan spin algebra $L^n$ ($n \ge 3$). In the matrix algebras, the (canonical) inner product is given by $\langle X, Y \rangle = \mathrm{Re}\,\mathrm{trace}(XY)$, and in $L^n$ it is the usual inner product on $\mathbb{R}^n$.

The symmetric cone of $S^n$ will be denoted by $S^n_+$ (with a similar notation in the other spaces). In any matrix algebra, let $E_{ij}$ be the matrix with ones in the $(i,j)$ and $(j,i)$ slots and zeros elsewhere; we write $E_i := E_{ii}$. In a matrix algebra of rank $r$, $\{E_1, E_2, \ldots, E_r\}$ is the canonical Jordan frame. We use the notation $A = [a_{ij}]$ to say that $A$ is a matrix with components $a_{ij}$; we also write

  $A \succeq 0 \Leftrightarrow A \in S^r_+$ and $x \ge 0\ (> 0) \Leftrightarrow x \in K\ (\mathrm{int}(K)).$

For a given Jordan frame $\{e_1, e_2, \ldots, e_r\}$ in $V$, we consider the corresponding Peirce decomposition ([5], Theorem IV.2.1)

  $V = \sum_{1 \le i \le j \le r} V_{ij},$

where $V_{ii} = \mathbb{R}\, e_i$ and $V_{ij} = \{x \in V : x \circ e_i = \tfrac{1}{2} x = x \circ e_j\}$ when $i \ne j$. For any $x \in V$, we get the corresponding Peirce decomposition $x = \sum_{i \le j} x_{ij}$. We note that this is an orthogonal direct sum and that $x_{ii} = x_i e_i$ for all $i$, where $x_i \in \mathbb{R}$.

We recall a few new concepts and notations. Given a matrix $A \in S^r$, a Euclidean Jordan algebra $V$, and a Jordan frame $\{e_1, e_2, \ldots, e_r\}$ with a specific ordering, we define the linear transformation $D_A : V \to V$ and Schur (or Hadamard) products in the following way: for any two elements $x, y \in V$ with Peirce decompositions $x = \sum_{i \le j} x_{ij}$ and $y = \sum_{i \le j} y_{ij}$,

  $D_A(x) = A \bullet x := \sum_{1 \le i \le j \le r} a_{ij}\, x_{ij} \in V$   (4)

and

  $x \,\triangle\, y := \sum_{i=1}^{r} \langle x_{ii}, y_{ii} \rangle E_i + \frac{1}{2} \sum_{1 \le i < j \le r} \langle x_{ij}, y_{ij} \rangle E_{ij} \in S^r.$   (5)

We immediately note that

  $\langle D_A(x), x \rangle = \langle A \bullet x, x \rangle = \sum_{i \le j} a_{ij} \| x_{ij} \|^2 = \langle A, x \,\triangle\, x \rangle,$   (6)

where the inner product on the far right is taken in $S^r$.

When $V$ is $S^r$ with the canonical Jordan frame, the above products $A \bullet x$ and $x \,\triangle\, y$ coincide with the well-known Schur (or Hadamard) product of two symmetric matrices. This can be seen as follows. Consider matrices $x = [\alpha_{ij}]$ and $y = [\beta_{ij}]$ in $S^r$, so that $x = \sum_{i \le j} \alpha_{ij} E_{ij}$ and $y = \sum_{i \le j} \beta_{ij} E_{ij}$ are their corresponding Peirce decompositions with respect to the canonical Jordan frame. Then simple calculations show that

  $A \bullet x = \sum_{i \le j} a_{ij} \alpha_{ij} E_{ij} = [a_{ij} \alpha_{ij}]$ and $x \,\triangle\, y = [\alpha_{ij} \beta_{ij}],$

where the latter expression is obtained by noting that in $S^r$, $\langle E_{ij}, E_{ij} \rangle = \mathrm{trace}(E_{ij}^2) = 2$ for $i \ne j$. It is interesting to note that

  $(A \bullet x) \,\triangle\, x = A \bullet (x \,\triangle\, x),$   (7)

where the 'bullet' product on the right is taken in $S^r$ with respect to the canonical Jordan frame.
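The definitions (4)-(5) and the identities (6)-(7) can be checked numerically. The following sketch (again not from the paper; it assumes $V = S^r$, an arbitrary Jordan frame $e_i = q_i q_i^T$ built from an orthogonal matrix $Q$, and NumPy) uses the fact that in the coordinates $\tilde{x} = Q^T x Q$ the Peirce components become matrix entries, so both $A \bullet x$ and $x \,\triangle\, y$ reduce to Hadamard products.

```python
import numpy as np

# Sketch: the transformation D_A of (4), the product of (5), and the
# identities (6)-(7) in V = S^r, computed with respect to a Jordan frame
# e_i = q_i q_i^T built from an orthogonal Q = [q_1 ... q_r].  In the
# coordinates Q^T x Q the Peirce components are matrix entries, so both
# products become Hadamard products.

rng = np.random.default_rng(2)
r = 5

Q, _ = np.linalg.qr(rng.standard_normal((r, r)))   # Jordan frame e_i = q_i q_i^T
A = rng.standard_normal((r, r)); A = (A + A.T) / 2
x = rng.standard_normal((r, r)); x = (x + x.T) / 2

def DA(x):
    """D_A(x) = A . x, the Peirce-diagonal transformation of (4)."""
    return Q @ (A * (Q.T @ x @ Q)) @ Q.T

def tri(x, y):
    """The product of (5): a symmetric r x r matrix of scaled Peirce inner products."""
    return (Q.T @ x @ Q) * (Q.T @ y @ Q)

# Identity (6): <D_A(x), x> = <A, x tri x>, both inner products being traces.
lhs6 = np.trace(DA(x) @ x)
rhs6 = np.trace(A @ tri(x, x))
assert np.isclose(lhs6, rhs6)

# Identity (7): (A . x) tri x = A . (x tri x), the right-hand bullet taken
# in S^r with the canonical Jordan frame (i.e., the Hadamard product).
assert np.allclose(tri(DA(x), x), A * tri(x, x))
print("Identities (6) and (7) verified numerically.")
```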