Published in Linear Algebra and its Applications (LAA), Vol. 363, 237-250, 2003.

ON P-MATRICES

SIEGFRIED M. RUMP*

Abstract. We present some necessary and sufficient conditions for a real matrix to be a P-matrix. They are based on the sign-real spectral radius and on the regularity of a certain interval matrix. We show that no minor can be left out when checking for the P-property. Furthermore, a not necessarily exponential method for checking the P-property is given.

1. Introduction and notation. A real matrix A ∈ M_n(ℝ) is called a P-matrix if all its principal minors are positive. The class of P-matrices is denoted by P. The P-problem, namely the problem of checking whether a given matrix is a P-matrix, is important in many applications; see [1]. A straightforward algorithm evaluating the 2^n − 1 principal minors requires some n^3·2^n operations. This corresponds to the fact that the P-problem is NP-hard [2]. In Theorem 2.2 we will show that none of these minors can be left out.

However, there are other strategies. Recently, Tsatsomeros and Li [20] presented an algorithm based on Schur complements, reducing the computational complexity to 7·2^n. The algorithm always requires this number of operations if the matrix in question is a P-matrix. Otherwise, the computational cost is frequently much smaller, because one nonpositive minor suffices to prove A ∉ P.

In this paper we present characterizations of P-matrices related to the sign-real spectral radius and, based on these, some necessary conditions and some sufficient conditions. In case A ∉ P we also derive strategies to find a nonpositive minor. Finally, we give an algorithm which is not a priori exponential for A ∈ P, but can be so in the worst case. The method is tested for n = 100, where all other known methods require some 2^100 operations. However, this approach needs further analysis.

We use standard notation from matrix theory. In particular, A[μ] denotes the principal submatrix of A with rows and columns indexed by μ ⊆ {1, …, n}.
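To make the straightforward approach concrete, the following sketch (function name and test matrices are our own; NumPy assumed) checks the P-property by enumerating all 2^n − 1 principal minors. Its cost is exponential in n, as discussed above, so it is usable only for small n.

```python
import itertools

import numpy as np

def is_p_matrix(A):
    """Brute-force P-matrix test: evaluate all 2^n - 1 principal minors.

    Exponential cost; one nonpositive minor suffices to reject.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for k in range(1, n + 1):
        for mu in itertools.combinations(range(n), k):
            if np.linalg.det(A[np.ix_(mu, mu)]) <= 0:
                return False  # a nonpositive principal minor: not a P-matrix
    return True

print(is_p_matrix(np.eye(3)))                        # True
print(is_p_matrix(np.array([[1., 2.], [3., -1.]])))  # False: negative diagonal entry
```

Note that the loop returns as soon as one nonpositive minor is found, matching the observation above that rejection is frequently cheap.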
Absolute value and comparison of vectors and matrices are always to be understood componentwise. For example, signature matrices S are characterized by |S| = I.

2. Characterization of the P-property. In [17] we introduced and investigated the sign-real spectral radius ρ_0^S. In the meantime we have also introduced the sign-complex spectral radius. Therefore, for better readability, we change the notation to ρ^ℝ(A) for the sign-real and ρ^ℂ(A) for the sign-complex spectral radius. The sign-real spectral radius is defined by

(1)  ρ^ℝ(A) := max{ |λ| : SAx = λx, |S| = I, 0 ≠ x ∈ ℝ^n, λ ∈ ℝ }.

Note that the maximum is taken over the absolute values of real eigenvalues. Among the characterizations given in [17, Theorem 2.3] is the following. For 0 < r ∈ ℝ,

(2)  ρ^ℝ(A) < r  ⇔  det(rI + SA) > 0 for all |S| = I
(3)               ⇔  det(rI + DA) > 0 for all |D| ≤ I.

This leads to two characterizations of the P-property.

Theorem 2.1. For A ∈ M_n(ℝ) and a positive r such that det(rI − A) ≠ 0, the following are equivalent:
(i) C := (rI − A)^{-1}(rI + A) ∈ P.
(ii) ρ^ℝ(A) < r.
For nonsingular A, parts (i) and (ii) are equivalent to
(iii) All B ∈ M_n(ℝ) with A^{-1} − r^{-1}I ≤ B ≤ A^{-1} + r^{-1}I are nonsingular.

* Inst. f. Informatik III, Technical University Hamburg-Harburg, Schwarzenbergstr. 95, 21071 Hamburg, Germany

Remark. The assertions follow from [17, Theorem 2.13 and Lemma 2.11]. In the following we give different and simpler proofs; this also allows us to conclude the subsequent Theorem 2.2. As remarked by one referee, the assertions also follow from (2), (3) and [9, Theorem 3.4]; see also [10, 18].

Proof. Let a fixed but arbitrary signature matrix S be given and define μ ⊆ {1, …, n} by

(4)  μ := { i : S_ii = 1 }.

Define the diagonal matrix D by D := (I − S)/2, so that S = I − 2D, with D_ii = 0 for i ∈ μ and D_ii = 1 for i ∉ μ. Then (I − D)C + D comprises the rows of C indexed by μ and the rows of the identity matrix indexed by {1, …, n} \ μ.
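For small n, definition (1) and characterization (2) can be verified directly by enumerating all 2^n signature matrices. The sketch below (helper name and tolerance are our own; NumPy assumed) computes ρ^ℝ(A) this way and then checks that det(rI + SA) > 0 for every S once r exceeds ρ^ℝ(A).

```python
import itertools

import numpy as np

def sign_real_spectral_radius(A):
    """rho^R(A) per definition (1): the largest |lambda| over the real
    eigenvalues of SA, taken over all 2^n signature matrices S."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    best = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=n):
        lams = np.linalg.eigvals(np.diag(signs) @ A)
        real = lams.real[np.abs(lams.imag) < 1e-12]  # keep real eigenvalues only
        if real.size:
            best = max(best, np.max(np.abs(real)))
    return best

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
r = sign_real_spectral_radius(A) + 0.5
# Characterization (2): rho^R(A) < r implies det(rI + SA) > 0 for every S.
dets = [np.linalg.det(r * np.eye(4) + np.diag(s) @ A)
        for s in itertools.product((1.0, -1.0), repeat=4)]
print(min(dets) > 0)
```

The enumeration costs 2^n eigenvalue computations, so this is an illustration of the definition, not a practical method.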
Therefore,

(5)  det((I − D)C + D) = det C[μ].

On the other hand, C = (rI − A)^{-1}(rI + A) = (rI + A)(rI − A)^{-1} and

(I − D)C + D = {(I − D)(rI + A) + D(rI − A)}(rI − A)^{-1} = {rI + A − 2DA}(rI − A)^{-1} = (rI + SA)(rI − A)^{-1},

and in view of (5),

(6)  C ∈ P  ⇔  det(rI + SA)/det(rI − A) > 0 for all |S| = I.

Now

det(rI + SA) = Σ_ω det (SA)[ω] · r^{n−|ω|},

where the sum is taken over all ω ⊆ {1, …, n}, including ω = ∅. Summing the determinants over all S, all terms cancel except those for ω = ∅, so that

Σ_{|S|=I} det(rI + SA) = 2^n · r^n.

Therefore, not all det(rI + SA), |S| = I, can be negative. Together with (6) this implies

C ∈ P  ⇔  det(rI + SA) > 0 for all |S| = I,

and proves (i) ⇔ (ii). Concerning (iii), we use characterization (3) and a continuity argument to obtain

ρ^ℝ(A) ≥ r  ⇔  ∃ |D| ≤ I : det(rI + DA) = 0  ⇔  ∃ |D̃| ≤ r^{-1}I : det(A^{-1} + D̃) = 0.

As a result of the previous proof we have a one-to-one correspondence between the minors of C and the signature matrices S in (5) and (6): for det(rI − A) > 0,

(7)  det C[μ] > 0  ⇔  det(rI + SA) > 0

for μ as defined in (4). As a consequence we obtain a solution to a question posed at our meeting in Oberwolfach.

Theorem 2.2. For every n ≥ 2 and every ∅ ≠ μ ⊆ {1, …, n}, there exists a matrix C ∈ M_n(ℝ) with

det C[μ] < 0,  and  det C[ω] > 0 for all ω ⊆ {1, …, n}, ω ≠ μ.

Proof. Define B := (1) ∈ M_n(ℝ), the matrix all of whose components are 1. Obviously, ρ^ℝ(B) = ρ(B) = n. For every |S| = I, SB is of rank 1, so that the characteristic polynomial of SB is

χ_SB(x) = det(xI − SB) = x^n − tr(SB)·x^{n−1}.

Therefore, χ_SB(x) is positive for x > max(0, tr(SB)). But tr(SB) ≤ n − 2 for all |S| = I, S ≠ I, and tr(B) = n. Hence, for every n − 2 < r < n,

(8)  det(rI − B) < 0,  and  det(rI − SB) > 0 for all |S| = I, S ≠ I.

Let n ≥ 2 and ∅ ≠ μ ⊆ {1, …, n} be given. Define |S^0| = I by

S^0_ii = 1 for i ∈ μ,  S^0_ii = −1 otherwise,

and set A := −S^0B. For fixed r, n − 2 < r < n, define C := (rI − A)^{-1}(rI + A). Then S^0 ≠ −I because μ ≠ ∅, and det(rI − A) > 0 by (8).
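The construction in the proof of Theorem 2.2 is easy to reproduce numerically. The sketch below (NumPy; our own choices n = 4, μ = {0, 2} in 0-based indexing, and r = n − 1) builds C = (rI − A)^{-1}(rI + A) with A = −S^0B and confirms that det C[μ] is the only negative principal minor.

```python
import itertools

import numpy as np

n, mu = 4, {0, 2}               # any nonempty index set (0-based here)
B = np.ones((n, n))             # the all-ones matrix of the proof
S0 = np.diag([1.0 if i in mu else -1.0 for i in range(n)])
A = -S0 @ B
r = n - 1.0                     # any r with n - 2 < r < n
I = np.eye(n)
C = np.linalg.solve(r * I - A, r * I + A)   # C = (rI - A)^{-1}(rI + A)

# Exactly the principal minor for mu is negative; all others are positive.
for k in range(1, n + 1):
    for omega in itertools.combinations(range(n), k):
        d = np.linalg.det(C[np.ix_(omega, omega)])
        assert (d < 0) == (set(omega) == mu)
print("only det C[mu] is negative")
```

By the correspondence (7), the minor for ω is negative exactly when the matching signature matrix equals S^0, which happens only for ω = μ.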
Furthermore, by (8),

S ≠ S^0  ⇒  det(rI + SA) = det(rI − SS^0B) > 0,
S = S^0  ⇒  det(rI + SA) = det(rI − B) < 0.

Finally, the equivalence (7) finishes the proof.

The proof relies on the following fact. Let A ∈ M_n(ℝ) and r := ρ^ℝ(A). Then there is r′ < r with det(r′I − S̃A) < 0 for some |S̃| = I, and det(r′I − SA) > 0 for all |S| = I, S ≠ S̃. This is exploited in the proof for a specific matrix. We mention that, judging from numerical experience, this seems by no means a rare case but rather typical for generic A and r′ ≤ r, r′ ≈ r.

3. Necessary and sufficient conditions. In this section we present conditions for testing the P-property of a given matrix C ∈ M_n(ℝ). First we make sure that the spectral radius of C is less than one. Set

(9)  α = ‖C‖_∞ + 1,  β = 2^{⌈log_2 α⌉},  C = C/β.

The P-property of C is not changed by the scaling, so we may assume without loss of generality that I − C and I + C are invertible. We note that (9) is performed exactly (without rounding error) in IEEE 754 floating-point arithmetic [6]. The inverse Cayley transform of A := (C + I)^{-1}(C − I) is C = (I − A)^{-1}(I + A). Note that since ρ(C) < 1, A is well defined. By Theorem 2.1 for r = 1, a lower bound on ρ^ℝ(A) yields a necessary condition for C ∈ P, and an upper bound yields a sufficient condition for the P-property. This implies the following.

Theorem 3.1. For C ∈ M_n(ℝ) not having −1 as an eigenvalue, define A := (C + I)^{-1}(C − I). Then
(i) C ∈ P  ⇒  max_{i,j} |A_ij A_ji|^{1/2} < 1.
(ii) ‖D^{-1}AD‖_2 < 1 for some diagonal D  ⇒  C ∈ P.

Proof. Part (i) follows from max_{i,j} |A_ij A_ji|^{1/2} ≤ ρ^ℝ(A) [17, Lemma 5.1] and Theorem 2.1. Part (ii) follows, for a maximizing S in (1), from

ρ^ℝ(A) ≤ ρ(SA) = ρ(SD^{-1}AD) ≤ ‖D^{-1}AD‖_2.

The quantity

(10)  inf_D ‖D^{-1}AD‖_2

is a well-known upper bound for the structured singular value [3]. It can be computed efficiently [22] using the fact that ‖e^{−D}Ae^{D}‖_2 is a convex function of the D_ii [19].
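The scaling (9) and the two tests of Theorem 3.1 combine into a simple screening routine. The sketch below is our own (NumPy assumed; function name ours); for condition (ii) it only tries the trivial scaling D = I rather than minimizing (10), so it is weaker than the full sufficient condition.

```python
import numpy as np

def p_property_bounds(C):
    """Necessary and sufficient tests from Theorem 3.1.

    Returns (necessary, sufficient): necessary == False proves C is not a
    P-matrix; sufficient == True proves it is; otherwise inconclusive.
    """
    C = np.asarray(C, dtype=float)
    n = C.shape[0]
    # Scaling (9): dividing by a power of two leaves the signs of all minors
    # unchanged and is exact in IEEE 754 arithmetic.
    beta = 2.0 ** np.ceil(np.log2(np.linalg.norm(C, np.inf) + 1))
    C = C / beta
    A = np.linalg.solve(C + np.eye(n), C - np.eye(n))   # A = (C+I)^{-1}(C-I)
    necessary = np.sqrt(np.max(np.abs(A * A.T))) < 1    # Theorem 3.1 (i)
    sufficient = np.linalg.norm(A, 2) < 1               # Theorem 3.1 (ii), D = I
    return bool(necessary), bool(sufficient)

print(p_property_bounds(np.eye(3)))    # identity: certainly a P-matrix
print(p_property_bounds(-np.eye(3)))   # necessary test fails: not a P-matrix
```

Note that `A * A.T` is the componentwise product, so its (i, j) entry is A_ij·A_ji as required in part (i).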
Next we show that the sufficient condition (ii) in Theorem 3.1 is superior to certain other conditions for the P-property.

Theorem 3.2. Let C ∈ M_n(ℝ). Then
(i) C + C^T positive definite implies that A := (C + I)^{-1}(C − I) exists and ‖A‖_2 < 1.
(ii) C diagonally dominant with all diagonal elements positive implies that A := (C + I)^{-1}(C − I) exists and inf ‖D^{-1}AD‖_2 < 1, where the infimum is taken over all positive diagonal matrices.
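Part (i) can be confirmed numerically. The sketch below (NumPy; the random construction is our own) builds a C whose symmetric part C + C^T is positive definite and checks ‖A‖_2 < 1 for A = (C + I)^{-1}(C − I).

```python
import numpy as np

# Numerical check of Theorem 3.2 (i): if C + C^T is positive definite,
# then A = (C + I)^{-1}(C - I) exists and ||A||_2 < 1.
rng = np.random.default_rng(1)
n = 5
G = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
C = G @ G.T + np.eye(n) + (K - K.T)   # PD symmetric part plus a skew part
A = np.linalg.solve(C + np.eye(n), C - np.eye(n))
print(np.linalg.norm(A, 2) < 1)       # True, as the theorem asserts
```

Adding the skew-symmetric part K − K^T leaves C + C^T = 2(GG^T + I) unchanged, so the hypothesis of part (i) holds while C itself is far from symmetric.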
