
Complementary Bases in Symplectic Matrices and a Proof that their Determinant is One

Froilán M. Dopico
Departamento de Matemáticas, Universidad Carlos III de Madrid, Spain

Joint work with Charles R. Johnson
Department of Mathematics, The College of William and Mary, Williamsburg, Virginia, USA


Definition and Basic Properties

Let $I_n$ be the identity matrix and let
\[
J := \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} \in \mathbb{R}^{2n \times 2n}.
\]

Definition: A matrix $S \in \mathbb{R}^{2n \times 2n}$ is called symplectic if $S^T J S = J$.

Symplectic matrices have applications in linear control theory for discrete-time systems.

• The product of two symplectic matrices is also symplectic.
• If $S$ is symplectic, then $S^{-1}$ and $S^T$ are symplectic.
• If $\lambda \in \sigma(S)$, then $\lambda^{-1} \in \sigma(S)$.

From the definition, it is obvious that $\det S = \pm 1$. However, if $S$ is a symplectic matrix then always $\det S = 1$. This is equivalent to the fact that the algebraic multiplicities of the eigenvalues $1$ and $-1$ are both even.


No elementary proof available

The fact that $\det S = 1$ for symplectic matrices has long been known, but no proof seems entirely elementary. Let us quote, for instance:

"It is somewhat nontrivial to prove that the determinant itself is 1, and we will accomplish this by expressing the condition for a matrix to be symplectic in terms of a differential form.... Another proof may be found in Arnold [Mathematical methods in classical mechanics (1978)]."

from D. McDuff and D. Salamon, Introduction to Symplectic Topology, Clarendon Press, 1995.

Our purpose is to give a straightforward, matrix-theoretic proof that $\det S = 1$ when $S$ is symplectic. In the process, we also obtain some new information about the patterns of linearly independent rows (columns) among the blocks of $S$.


A summary of two classical proofs

First proof, appearing in the books by Arnold, and by R. Abraham and J. Marsden, Foundations of Mechanics, 2nd Edition, Addison-Wesley (1978):

• Use tensor algebra, more precisely antisymmetric tensor products.
• The goal is to prove that every symplectomorphism preserves the volume form.

Second proof, appearing in E. Artin, Geometric Algebra, Interscience Publishers (1957):

• Define a symplectic transvection as a symplectic matrix $S$ such that $Sx = x$ for all vectors $x$ in a given hyperplane.
• Any symplectic matrix is a product of symplectic transvections.
• The determinant of any symplectic transvection is one.


Determinants, Inverses and Schur Complements

For any square partitioned nonsingular matrix $A$ with $A_{11}$ nonsingular,
\[
\underbrace{\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}}_{A}
= \begin{bmatrix} I & 0 \\ A_{21} A_{11}^{-1} & I \end{bmatrix}
  \begin{bmatrix} A_{11} & A_{12} \\ 0 & C \end{bmatrix},
\]
with $C = A_{22} - A_{21} A_{11}^{-1} A_{12}$. Using the previous factorization, it is trivial that
\[
A^{-1} = \begin{bmatrix} * & * \\ * & C^{-1} \end{bmatrix}.
\]
As a consequence:

Theorem: Let $A$ and $A^{-1}$ be conformally partitioned,
\[
A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad
A^{-1} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}.
\]
If $A_{11}$ is nonsingular, then
\[
\det A = \frac{\det A_{11}}{\det B_{22}}.
\]


Block properties of symplectic matrices

Theorem: Let $S \in \mathbb{R}^{2n \times 2n}$ be partitioned as
\[
S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix},
\qquad S_{11}, S_{12}, S_{21}, S_{22} \in \mathbb{R}^{n \times n}.
\]
Then $S$ is symplectic if and only if
\[
S^{-1} = \begin{bmatrix} S_{22}^T & -S_{12}^T \\ -S_{21}^T & S_{11}^T \end{bmatrix}.
\]

Proof: First, note that $S$ is symplectic if and only if $S^{-1} = J^{-1} S^T J = -J S^T J$, where we have used $J^{-1} = J^T = -J$. Then expand the block product $-J S^T J$.

The easy case: $S_{11}$ nonsingular. By the previous theorem,
\[
\det S = \frac{\det S_{11}}{\det S_{11}^T} = 1.
\]
The same conclusion is easily obtained if any of the other blocks is nonsingular, by using the block structure of the inverse modulo some permutations.
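As a quick numerical sanity check of the block formula for $S^{-1}$ and of the easy case, the following NumPy sketch (ours, not part of the original slides) generates a random symplectic matrix as the exponential of a Hamiltonian matrix, a standard construction not discussed in the talk; the function name is our own.

```python
import numpy as np
from scipy.linalg import expm

def random_symplectic(n, rng=np.random.default_rng(0)):
    """Build a random symplectic matrix as expm(J @ K) with K symmetric.

    If K = K^T, then A = J K is Hamiltonian and expm(A) is symplectic;
    this standard fact is used here only to produce test data.
    """
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    K = rng.standard_normal((2 * n, 2 * n))
    K = (K + K.T) / 2
    return expm(J @ K), J

n = 3
S, J = random_symplectic(n)

# Symplecticity: S^T J S = J.
assert np.allclose(S.T @ J @ S, J)

# Block formula for the inverse: S^{-1} = [[S22^T, -S12^T], [-S21^T, S11^T]].
S11, S12 = S[:n, :n], S[:n, n:]
S21, S22 = S[n:, :n], S[n:, n:]
S_inv_blocks = np.block([[S22.T, -S12.T],
                         [-S21.T, S11.T]])
assert np.allclose(S_inv_blocks, np.linalg.inv(S))

# "Easy case": when S11 is nonsingular, det S = det S11 / det((S^{-1})_{22})
#              = det S11 / det(S11^T) = 1.
B22 = np.linalg.inv(S)[n:, n:]
print(np.linalg.det(S11) / np.linalg.det(B22))   # ~ 1
print(np.linalg.det(S))                          # ~ 1
```

For a randomly generated symplectic matrix the block $S_{11}$ is generically nonsingular, so the easy case applies; the slides that follow deal with the general situation.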
The four blocks can be singular

Unfortunately, all four blocks of a symplectic matrix may be singular, as the following example shows:
\[
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & -1 & 0 & 0
\end{bmatrix}.
\]
The notion of complementary bases allows us to remove the restriction that at least one block be nonsingular in the previous proof of $\det S = 1$, making it entirely general.


The Complementary Bases Theorem (I)

Let us consider again the symplectic matrix $S \in \mathbb{R}^{2n \times 2n}$ partitioned as
\[
S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix},
\qquad S_{11}, S_{12}, S_{21}, S_{22} \in \mathbb{R}^{n \times n}.
\]

Theorem: Suppose that $\operatorname{rank}(S_{11}) = k$ and that the columns of $S_{11}$ indexed by $\alpha$, $\alpha \subseteq \{1, \ldots, n\}$ with $|\alpha| = k$, are linearly independent. Then the columns of $S_{12}$ indexed by $\alpha'$, the complement of $\alpha$, together with the columns $\alpha$ of $S_{11}$, constitute a basis of $\mathbb{R}^n$, i.e., they form a nonsingular $n \times n$ matrix.

Idea: $\operatorname{rank}(S_{11}) = 2$, with $\{a_1, a_4\}$ linearly independent:
\[
S = \begin{bmatrix} a_1\, a_2\, a_3\, a_4\, a_5 & b_1\, b_2\, b_3\, b_4\, b_5 \\ S_{21} & S_{22} \end{bmatrix}
\;\Longrightarrow\;
[\, a_1\; b_2\; b_3\; a_4\; b_5 \,] \in \mathbb{R}^{5 \times 5} \text{ nonsingular.}
\]


The Complementary Bases Theorem (II)

THERE ARE EIGHT SIMILAR THEOREMS.
\[
S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix},
\qquad S_{11}, S_{12}, S_{21}, S_{22} \in \mathbb{R}^{n \times n}.
\]

Theorem: Suppose that $\operatorname{rank}(S_{pq}) = k$, $p, q \in \{1, 2\}$, and that the rows (columns) of $S_{pq}$ indexed by $\alpha$, $\alpha \subseteq \{1, \ldots, n\}$ with $|\alpha| = k$, are linearly independent. Then the rows (columns) of $S_{p'q}$ ($S_{pq'}$) indexed by $\alpha'$, the complement of $\alpha$, together with the rows (columns) $\alpha$ of $S_{pq}$, constitute a basis of $\mathbb{R}^n$, i.e., they form a nonsingular $n \times n$ matrix.

Once the result is proven for $(S_{11}, S_{12})$ and for $(S_{12}, S_{11})$, the other six results are a consequence of $S^T$, $S^{-1}$, and $S^{-T}$ being symplectic matrices.

The complementary bases theorem remains valid for COMPLEX symplectic matrices: $S^* J S = J$.


Proof of the Complementary Bases Theorem (I)

Only the result for a maximal linearly independent set of columns of $S_{11}$ is proven.
\[
S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix},
\qquad S_{11}, S_{12}, S_{21}, S_{22} \in \mathbb{R}^{n \times n}.
\]

Step 1: Select a permutation matrix $P \in \mathbb{R}^{n \times n}$ that moves the maximal linearly independent set of columns of $S_{11}$ we are considering to the first $k$ positions:
\[
S' := \underbrace{S}_{\text{symplectic}} \,
\underbrace{\begin{bmatrix} P & 0 \\ 0 & P \end{bmatrix}}_{\text{symplectic}}
= \begin{bmatrix} S_{11}' & S_{12}' \\ S_{21}' & S_{22}' \end{bmatrix}.
\]

Step 2: There exists a nonsingular matrix $Y \in \mathbb{R}^{n \times n}$ such that
\[
Y S_{11}' = \begin{bmatrix} I_k & Z \\ 0 & 0 \end{bmatrix}.
\]


Proof of the Complementary Bases Theorem (II)

\[
S'' := \underbrace{\begin{bmatrix} Y & 0 \\ 0 & Y^{-T} \end{bmatrix}}_{\text{symplectic}} \,
\underbrace{\begin{bmatrix} S_{11}' & S_{12}' \\ S_{21}' & S_{22}' \end{bmatrix}}_{\text{symplectic}}
= \begin{bmatrix} S_{11}'' & S_{12}'' \\ S_{21}'' & S_{22}'' \end{bmatrix},
\]
where
\[
S_{11}'' = \begin{bmatrix} I_k & Z \\ 0 & 0 \end{bmatrix}
\quad \text{and} \quad
S_{12}'' = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}.
\]

Step 3: We have to prove that the $(n-k) \times (n-k)$ matrix $X_{22}$ is nonsingular. Remember that
\[
S''^{-1} = \begin{bmatrix} S_{22}''^T & -S_{12}''^T \\ -S_{21}''^T & S_{11}''^T \end{bmatrix};
\]
then
\[
0 = \big(S'' S''^{-1}\big)_{12} = -S_{11}'' S_{12}''^T + S_{12}'' S_{11}''^T,
\]
and this implies
\[
X_{21} = -X_{22} Z^T.
\]
Finally, since the last $n-k$ rows of $[\, S_{11}''\;\; S_{12}'' \,]$ are $[\, 0\;\; 0\;\; X_{21}\;\; X_{22} \,]$ and $S''$ is nonsingular,
\[
n - k = \operatorname{rank}\,[\, X_{21}\;\; X_{22} \,] = \operatorname{rank}\big(X_{22}\,[\, -Z^T\;\; I \,]\big) = \operatorname{rank} X_{22}.
\]


Corollary I: zero columns (rows) in a block

Corollary: Suppose that the rows (columns) of $S_{pq}$, $p, q \in \{1, 2\}$, indexed by $\beta$, $\beta \subseteq \{1, \ldots, n\}$, are zero. Then the rows (columns) of $S_{p'q}$ ($S_{pq'}$) indexed by $\beta$ are linearly independent.

Idea:
\[
S = \begin{bmatrix} 0\, a_2\, a_3\, 0\, a_5 & b_1\, b_2\, b_3\, b_4\, b_5 \\ S_{21} & S_{22} \end{bmatrix}
\;\Longrightarrow\;
b_1, b_4 \in \mathbb{R}^5 \text{ are linearly independent.}
\]
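The all-blocks-singular example above can be checked directly. The following short NumPy sketch (ours, not from the slides) verifies that it is symplectic with all four blocks singular, and illustrates the Complementary Bases Theorem with $\alpha = \{1\}$, the index of the single linearly independent column of $S_{11}$.

```python
import numpy as np

# The 4x4 example from the slides (n = 2): every n x n block is singular.
S = np.array([[1,  0, 0, 0],
              [0,  0, 0, 1],
              [0,  0, 1, 0],
              [0, -1, 0, 0]], dtype=float)
n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# S is symplectic ...
assert np.allclose(S.T @ J @ S, J)

# ... yet all four blocks are singular.
S11, S12 = S[:n, :n], S[:n, n:]
S21, S22 = S[n:, :n], S[n:, n:]
for block in (S11, S12, S21, S22):
    assert np.linalg.matrix_rank(block) < n

# Complementary Bases Theorem: rank(S11) = 1 and its first column is
# linearly independent (alpha = {1}), so that column together with the
# column of S12 indexed by the complement alpha' = {2} is a basis of R^2.
alpha = [0]        # 0-based index of the independent column of S11
alpha_c = [1]      # its complement, indexing a column of S12
B = np.hstack([S11[:, alpha], S12[:, alpha_c]])
assert np.linalg.matrix_rank(B) == n
print(B)           # nonsingular 2 x 2 matrix
```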
Corollary II: $\det S = 1$ for $S \in \mathbb{R}^{2n \times 2n}$ symplectic (1)

\[
S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}
\quad \text{and} \quad
S^{-1} = \begin{bmatrix} S_{22}^T & -S_{12}^T \\ -S_{21}^T & S_{11}^T \end{bmatrix}.
\]

Let us assume that $\operatorname{rank}(S_{11}) = k$ and that the first $k$ columns of $S_{11}$ are linearly independent. (Otherwise, replace $S$ by $S(P \oplus P)$, with $P$ a permutation matrix; $P \oplus P$ is symplectic and $\det(P \oplus P) = (\det P)^2 = 1$.)

Then
\[
[\, S_{11} \mid S_{12} \,] = [\, \underbrace{E_1}_{k}\; \underbrace{E_2}_{n-k} \mid \underbrace{F_1}_{k}\; \underbrace{F_2}_{n-k} \,],
\]
with $[E_1\; F_2]$ nonsingular by the Complementary Bases Theorem.

Let $\Pi_j \in \mathbb{R}^{2n \times 2n}$ be the permutation matrix that interchanges column $j$ of $S_{11}$ with column $j$ of $S_{12}$. Then
\[
S\, \Pi_{k+1} \cdots \Pi_n = \begin{bmatrix} [E_1\; F_2] & * \\ * & * \end{bmatrix},
\]
\[
(S\, \Pi_{k+1} \cdots \Pi_n)^{-1} = \Pi_n^{-1} \cdots \Pi_{k+1}^{-1}\, S^{-1}
= \begin{bmatrix} * & * \\ * & \begin{bmatrix} E_1^T \\ -F_2^T \end{bmatrix} \end{bmatrix}.
\]


Corollary II: $\det S = 1$ for $S \in \mathbb{R}^{2n \times 2n}$ symplectic (2)

Finally, since each $\Pi_j$ interchanges two columns, and using the determinant theorem for conformally partitioned inverses,
\[
(-1)^{n-k} \det S = \det\!\big(S\, \Pi_{k+1} \cdots \Pi_n\big)
= \frac{\det [E_1\; F_2]}{\det \begin{bmatrix} E_1^T \\ -F_2^T \end{bmatrix}}
= (-1)^{n-k}\, \frac{\det [E_1\; F_2]}{\det [E_1\; F_2]^T}
= (-1)^{n-k}.
\]
Then $\det S = 1$. QED
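To close, here is a minimal NumPy sketch (ours, not part of the slides) that retraces the chain of Corollary II numerically on the all-blocks-singular example; the function name and the rank computations are our own choices, and the column swaps play the role of the permutations $\Pi_{k+1} \cdots \Pi_n$.

```python
import numpy as np

def det_via_complementary_bases(S):
    """Retrace the chain of Corollary II numerically (a sketch, not the proof).

    Assumes S is symplectic with the first k = rank(S11) columns of S11
    linearly independent (true for the example below; in general one first
    multiplies by P ⊕ P as in the slides).
    """
    n = S.shape[0] // 2
    S11 = S[:n, :n]
    k = np.linalg.matrix_rank(S11)

    # T = S * Pi_{k+1} ... Pi_n: swap column j of S11 with column j of S12
    # for j = k+1, ..., n.  Each swap contributes a factor -1 to det.
    T = S.copy()
    for j in range(k, n):                        # 0-based indices
        T[:, [j, n + j]] = T[:, [n + j, j]]

    E1F2 = T[:n, :n]                             # = [E1 F2], nonsingular by the theorem
    assert np.linalg.matrix_rank(E1F2) == n

    B22 = np.linalg.inv(T)[n:, n:]               # = [E1^T ; -F2^T]
    det_T = np.linalg.det(E1F2) / np.linalg.det(B22)   # Schur-complement theorem
    return (-1) ** (n - k) * det_T               # = det S

# All-blocks-singular example from the slides (n = 2, k = 1).
S = np.array([[1,  0, 0, 0],
              [0,  0, 0, 1],
              [0,  0, 1, 0],
              [0, -1, 0, 0]], dtype=float)
print(det_via_complementary_bases(S))   # ~ 1.0
print(np.linalg.det(S))                 # ~ 1.0
```

In exact arithmetic both printed values equal 1, which is exactly the content of Corollary II.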