A Square Matrix Is Congruent to Its Transpose

Total Pages: 16

File Type: pdf, Size: 1020 KB

A Square Matrix Is Congruent to Its Transpose

Journal of Algebra 257 (2002) 97–105, www.academicpress.com

Dragomir Ž. Ðoković a,∗,1 and Khakim D. Ikramov b,2

a Department of Pure Mathematics, University of Waterloo, Waterloo, ON, Canada N2L 3G1
b Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow 119899, Russia

Received 12 September 2001. Communicated by Efim Zelmanov.

Abstract. For any matrix X let X′ denote its transpose. We show that if A is an n by n matrix over a field K, then A and A′ are congruent over K, i.e., P′AP = A′ for some P ∈ GL_n(K). © 2002 Elsevier Science (USA). All rights reserved.

Keywords: Congruence of matrices; Bilinear forms; Sesquilinear forms; Kronecker modules

1. Introduction

For any matrix X let X′ denote its transpose. Let us start by recalling the following known fact (see, e.g., [2, Theorem 4, p. 205] or [3, Theorem 11]): if A is a complex n by n matrix, then A and A′ are congruent, i.e., P′AP = A′ for some invertible complex matrix P. Our main objective is to prove that the same assertion is valid for matrices over an arbitrary field K. In spite of its elementary character, the proof of this result is quite involved.

∗ Corresponding author. E-mail addresses: [email protected] (D.Ž. Ðoković), [email protected] (K.D. Ikramov).
1 The author was supported in part by the NSERC Grant A-5285.
2 The author was supported in part by the NSERC Grant OGP 000811.

Theorem 1.1. If A is an n by n matrix over a field K, then P′AP = A′ for some P ∈ GL_n(K).

The following result is an immediate consequence.

Corollary 1.2. For A ∈ GL_n(K), the matrices A and A⁻¹ are congruent over K.

Proof. Indeed, if P ∈ GL_n(K) is such that P′AP = A′, then Q′AQ = A⁻¹ for Q = PA⁻¹.
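Theorem 1.1 and Corollary 1.2 can be checked numerically on a toy example. The matrix A and the congruence matrix P below are hand-picked for this particular A (the theorem only asserts existence; it gives no algorithm), and numpy's floating-point arithmetic stands in for exact arithmetic over the rationals:

```python
import numpy as np

# A non-symmetric 2x2 matrix, so A and its transpose are distinct
# but, by Theorem 1.1, congruent.
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])

# For this particular A, the swap matrix P satisfies P' A P = A'
# (found by inspection; not a general construction).
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(P.T @ A @ P, A.T)

# Corollary 1.2: with Q = P A^{-1}, the same P yields Q' A Q = A^{-1}.
Q = P @ np.linalg.inv(A)
assert np.allclose(Q.T @ A @ Q, np.linalg.inv(A))
```

The second assertion is exactly the one-line computation in the proof of Corollary 1.2: Q′AQ = (A⁻¹)′P′APA⁻¹ = (A′)⁻¹A′A⁻¹ = A⁻¹.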
✷

The proof of Theorem 1.1 is based on the paper of Riehm [3] and an addendum to it by Gabriel [1]. This paper solves the equivalence problem for bilinear forms on finite-dimensional vector spaces over a field. For technical reasons we prefer to use the subsequent paper of Riehm and Shrader-Frechette [4], which gives a solution of the equivalence problem for sesquilinear forms on finitely generated modules over semisimple (Artinian) rings. We need only apply this general theory to bilinear forms over K.

Let us reformulate the above theorem in the language of bilinear forms. If f : V × V → K is a bilinear form on a finite-dimensional K-vector space V, we shall say that (V, f) is a bilinear space. The definition of equivalence of two bilinear forms is the usual one.

Definition 1.3. Two bilinear forms f : V × V → K and g : W × W → K are equivalent if there exists a vector space isomorphism ϕ : V → W such that g(ϕ(x), ϕ(y)) = f(x, y) for all x, y ∈ V. In that case, assuming that V and W are finite-dimensional, we also say that the bilinear spaces (V, f) and (W, g) are isometric and that ϕ is an isometry.

Let us define the transpose of a bilinear form.

Definition 1.4. The transpose of a bilinear form f : V × V → K is the bilinear form g : V × V → K such that g(x, y) = f(y, x) for all x, y ∈ V. We shall denote the transpose of f by f′.

Let f and g be as in Definition 1.3 and assume that dim(V) = dim(W) = n < ∞. We fix a basis {v1, v2, ..., vn} of V. Then the n by n matrix A = (aij), where aij = f(vi, vj), is the matrix of f with respect to this basis. The matrix of f′, with respect to the same basis, is A′. Similarly, let B = (bij) be the matrix of g with respect to a fixed basis {w1, w2, ..., wn} of W. To say that f and g are equivalent is the same as to say that P′AP = B for some P ∈ GL_n(K).

Theorem 1.5. Let V be a finite-dimensional K-vector space, f : V × V → K a bilinear form on V, and f′ its transposed form. Then f and f′ are equivalent.
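The correspondence between forms and matrices used here can be spelled out in a short numpy sketch (the matrix A below is an arbitrary illustrative choice, working over the reals):

```python
import numpy as np

# A bilinear form on R^3 given by f(x, y) = x' A y for a fixed matrix A.
A = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [5., 0., 4.]])
f = lambda x, y: x @ A @ y

# Matrix of f w.r.t. the standard basis: a_ij = f(e_i, e_j). It recovers A.
E = np.eye(3)
M = np.array([[f(E[i], E[j]) for j in range(3)] for i in range(3)])
assert np.allclose(M, A)

# The transposed form f'(x, y) = f(y, x) has matrix A' in the same basis.
ft = lambda x, y: f(y, x)
Mt = np.array([[ft(E[i], E[j]) for j in range(3)] for i in range(3)])
assert np.allclose(Mt, A.T)
```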
The orthogonality of subspaces of a bilinear space is defined as follows.

Definition 1.6. Let (V, f) be a bilinear space. We say that the subspaces U and W of V are orthogonal to each other if f(U, W) = 0 and f(W, U) = 0.

Answering the question of a referee, we point out that Theorem 1.1 does not generalize to "hermitian conjugacy," i.e., if σ ∈ Aut(K) with σ² = 1 and we set A* = (A′)^σ, then A and A* in general are not ∗-congruent. (Two n by n matrices A and B over K are said to be ∗-congruent if B = P*AP for some P ∈ GL_n(K).) A simple counterexample is provided by (K, σ) = (the complex field, the complex conjugation) and the 1 by 1 matrix A = [i], where i is the imaginary unit.

2. Kronecker modules

In this section we recall some facts about the Kronecker modules which are special cases of the general Kronecker modules discussed in [4]. The reader should consult this reference and [1] for more details.

We define a Kronecker module as a four-tuple (X, u, v, Y) where X and Y are finite-dimensional K-vector spaces and u, v : X → Y are linear maps. To a bilinear space (Z, h) we assign the Kronecker module K(Z, h) = K(Z) = (Z, h_l, h_r, Z*), where Z* is the dual space of Z and h_l, h_r : Z → Z* are defined by

h_l(x)(y) = h(x, y),   h_r(x)(y) = h(y, x),   ∀x, y ∈ Z.

Every Kronecker module is a direct sum of indecomposable ones which are unique up to ordering and isomorphism. There are five types of indecomposable Kronecker modules (X, u, v, Y):

I. Both u and v are isomorphisms and u⁻¹v is indecomposable (i.e., it has only one elementary divisor).

II. The spaces X and Y have the same dimension and the pencil λu + µv, with respect to suitable bases of X and Y, has the matrix

$$\begin{pmatrix} \lambda & & & \\ \mu & \lambda & & \\ & \ddots & \ddots & \\ & & \mu & \lambda \end{pmatrix}.$$

II*. Similar to II with the matrix

$$\begin{pmatrix} \mu & \lambda & & \\ & \mu & \ddots & \\ & & \ddots & \lambda \\ & & & \mu \end{pmatrix}.$$

III. In this case dim(Y) = dim(X) + 1 and the pencil λu + µv has the matrix

$$\begin{pmatrix} \lambda & & \\ \mu & \ddots & \\ & \ddots & \lambda \\ & & \mu \end{pmatrix}.$$

III*.
In this case dim(X) = dim(Y) + 1 and the matrix is

$$\begin{pmatrix} \mu & \lambda & & \\ & \ddots & \ddots & \\ & & \mu & \lambda \end{pmatrix}.$$

The following theorem is an immediate consequence of the theory of Kronecker modules (also known as the theory of matrix pencils).

Theorem 2.1. If (V, f) is a bilinear space, then the Kronecker modules K(V, f) and K(V, f′) are isomorphic.

In terms of matrices, this can be restated as follows.

Theorem 2.2. If A is an n by n matrix over a field K, then the matrix pencils λA + µA′ and λA′ + µA are equivalent.

If (Z, h) is a bilinear space and Z = Z₁ + Z₂ is a direct decomposition of Z such that h(Z₁, Z₂) = 0 and h(Z₂, Z₁) = 0, then we say that this space is the orthogonal direct sum of the bilinear spaces (Z₁, h₁) and (Z₂, h₂), where h₁ and h₂ are the corresponding restrictions of h.

The following theorem is a very special case of the general result stated in [4, Section 9].

Theorem 2.3. Every bilinear space (Z, h) can be decomposed into an orthogonal direct sum

Z = Z_I + Z_II + Z_III

such that all indecomposable direct summands of K(Z_I) are of type I, those of K(Z_II) are of type II or II*, and those of K(Z_III) are of type III or III*. If h_I is the restriction of h to Z_I × Z_I, etc., the bilinear spaces (Z_I, h_I), (Z_II, h_II), (Z_III, h_III) are uniquely determined by (Z, h) up to isometry. Moreover, if Z = Z_II or Z = Z_III, then K(Z) determines (Z, h) up to isometry.

3. Reduction to the nondegenerate case

We now begin the proof of Theorem 1.5. We warn the reader that the notation and the definitions of the invariants of bilinear spaces in [3] and [4] do not agree. The second paper is more general and we shall exclusively use the definitions given there.

By Theorem 2.3, there is an orthogonal direct decomposition

V = V_I + V_II + V_III

where the summands V_I, V_II, and V_III have the properties stated there. Let f_I be the restriction of f to V_I × V_I, etc., and f′_I the restriction of f′ to V_I × V_I, etc. Clearly f′_I is the transpose of f_I, etc.
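The non-type-I pencil blocks can be written down mechanically from the dimension statements in the list of types (square for II and II*, one extra row for III, one extra column for III*). The sketch below builds the matrix of λu + µv for concrete scalar values of λ and µ; the helper name and diagonal placements are this sketch's assumptions, not notation from the paper:

```python
import numpy as np

def pencil_block(kind, n, lam, mu):
    """Matrix of the pencil lam*u + mu*v for the indecomposable types:
    "II"  : n x n,       lam on the diagonal, mu on the subdiagonal
    "II*" : n x n,       mu on the diagonal, lam on the superdiagonal
    "III" : (n+1) x n,   lam on the diagonal, mu on the subdiagonal
    "III*": n x (n+1),   mu on the diagonal, lam on the superdiagonal
    """
    if kind == "II":
        return lam * np.eye(n) + mu * np.eye(n, k=-1)
    if kind == "II*":
        return mu * np.eye(n) + lam * np.eye(n, k=1)
    if kind == "III":
        return lam * np.eye(n + 1, n) + mu * np.eye(n + 1, n, k=-1)
    if kind == "III*":
        return mu * np.eye(n, n + 1) + lam * np.eye(n, n + 1, k=1)
    raise ValueError(kind)

# Type III blocks have one more row than column, type III* one more column.
assert pencil_block("III", 2, 1.0, 1.0).shape == (3, 2)
assert pencil_block("III*", 2, 1.0, 1.0).shape == (2, 3)
```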
We claim that the bilinear spaces (V_II, f_II) and (V_II, f′_II) are isometric, and so are (V_III, f_III) and (V_III, f′_III). The Kronecker module K(V_II, f_II) is a direct sum of indecomposable summands of type II or II*. By Theorem 2.2, K(V_II, f_II) ≅ K(V_II, f′_II), and the last assertion of Theorem 2.3 implies that (V_II, f_II) and (V_II, f′_II) are isometric.
Recommended publications
  • Majorana Spinors
    MAJORANA SPINORS. JOSÉ FIGUEROA-O'FARRILL. Contents: 1. Complex, real and quaternionic representations; 2. Some basis-dependent formulae; 3. Clifford algebras and their spinors; 4. Complex Clifford algebras and the Majorana condition; 5. Examples (one, two, three, four, six, ten, eleven and twelve dimensions ... and back!); Summary; References. These notes arose as an attempt to conceptualise the 'symplectic Majorana–Weyl condition' in 5+1 dimensions, but have turned into a general discussion of spinors. Spinors play a crucial role in supersymmetry. Part of their versatility is that they come in many guises: 'Dirac', 'Majorana', 'Weyl', 'Majorana–Weyl', 'symplectic Majorana', 'symplectic Majorana–Weyl', and their 'pseudo' counterparts. The traditional physics approach to this topic is a mixed bag of tricks using disparate aspects of representation theory of finite groups. In these notes we will attempt to provide a uniform treatment based on the classification of Clifford algebras, a work dating back to the early 60s and all but ignored by the theoretical physics community. Recent developments in superstring theory have made us re-examine the conditions for the existence of different kinds of spinors in spacetimes of arbitrary signature, and we believe that a discussion of this more uniform approach is timely and could be useful to the student meeting this topic for the first time or to the practitioner who has difficulty remembering the answer to questions like "when do symplectic Majorana–Weyl spinors exist?" The notes are organised as follows.
  • Chapter IX. Tensors and Multilinear Forms
    Notes © F.P. Greenleaf and S. Marques 2006–2016, LAII-s16-quadforms.tex, version 4/25/2016. Chapter IX. Tensors and Multilinear Forms. IX.1. Basic Definitions and Examples. 1.1. Definition. A bilinear form is a map B : V × V → C that is linear in each entry when the other entry is held fixed, so that B(αx, y) = αB(x, y) = B(x, αy), B(x₁ + x₂, y) = B(x₁, y) + B(x₂, y), and B(x, y₁ + y₂) = B(x, y₁) + B(x, y₂), for all α ∈ F, x_k ∈ V, y_k ∈ V. (This of course forces B(x, y) = 0 if either input is zero.) We say B is symmetric if B(x, y) = B(y, x) for all x, y, and antisymmetric if B(x, y) = −B(y, x). Similarly a multilinear form (aka a k-linear form, or a tensor of rank k) is a map B : V × ⋯ × V → F that is linear in each entry when the other entries are held fixed. We write V^(0,k) = V* ⊗ ⋯ ⊗ V* for the set of k-linear forms. The reason we use V* here rather than V, and the rationale for the "tensor product" notation, will gradually become clear. The set V* ⊗ V* of bilinear forms on V becomes a vector space over F if we define 1. Zero element: B(x, y) = 0 for all x, y ∈ V; 2. Scalar multiple: (αB)(x, y) = αB(x, y), for α ∈ F and x, y ∈ V; 3. Addition: (B₁ + B₂)(x, y) = B₁(x, y) + B₂(x, y), for x, y ∈ V.
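Once a basis is fixed, this vector-space structure on bilinear forms mirrors ordinary matrix operations. A small numpy sketch over the reals (matrices and vectors chosen at random purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two bilinear forms on R^3, each represented by a matrix: B(x, y) = x' M y.
M1 = rng.standard_normal((3, 3))
M2 = rng.standard_normal((3, 3))
B1 = lambda x, y: x @ M1 @ y
B2 = lambda x, y: x @ M2 @ y

x, y = rng.standard_normal(3), rng.standard_normal(3)
alpha = 2.5

# The vector-space operations on forms correspond to matrix operations.
assert np.isclose(alpha * B1(x, y), x @ (alpha * M1) @ y)   # scalar multiple
assert np.isclose(B1(x, y) + B2(x, y), x @ (M1 + M2) @ y)   # addition
# Linearity in the first slot with the second held fixed.
assert np.isclose(B1(alpha * x, y), alpha * B1(x, y))
```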
  • Compactification of Classical Groups
    COMMUNICATIONS IN ANALYSIS AND GEOMETRY, Volume 10, Number 4, 709–740, 2002. Compactification of Classical Groups. HONGYU HE. 0. Introduction. 0.1. Historical Notes. Compactifications of symmetric spaces and their arithmetic quotients have been studied by many people from different perspectives. Their history goes back to E. Cartan's original paper ([2]) on Hermitian symmetric spaces. Cartan proved that a Hermitian symmetric space of noncompact type can be realized as a bounded domain in a complex vector space. The compactification of Hermitian symmetric spaces was subsequently studied by Harish-Chandra and Siegel (see [16]). Thereafter various compactifications of noncompact symmetric spaces in general were studied by Satake (see [14]), Furstenberg (see [3]), Martin (see [4]) and others. These compactifications are more or less of the same nature, as proved by C. Moore (see [11]) and by Guivarc'h-Ji-Taylor (see [4]). In the meanwhile, compactification of the arithmetic quotients of symmetric spaces was explored by Satake (see [15]), Baily-Borel (see [1]), and others. It plays a significant role in the theory of automorphic forms. One of the main problems involved is the analytic properties of the boundary under the compactification. In all these compactifications that have been studied so far, the underlying compact space always has boundary. For a more detailed account of these compactifications, see [16] and [4]. In this paper, motivated by the author's work on the compactification of the symplectic group (see [9]), we compactify all the classical groups. One nice feature of our compactification is that the underlying compact space is a symmetric space of compact type.
  • Forms on Inner Product Spaces
    FORMS ON INNER PRODUCT SPACES. MARIA INFUSINO. PROSEMINAR ON LINEAR ALGEBRA WS2016/2017, UNIVERSITY OF KONSTANZ. Abstract. This note aims to give an introduction on forms on inner product spaces and their relation to linear operators. After briefly recalling some basic concepts from the theory of linear operators on inner product spaces, we focus on the space of forms on a real or complex finite-dimensional vector space V and show that it is isomorphic to the space of linear operators on V. We also describe the matrix representation of a form with respect to an ordered basis of the space on which it is defined, giving special attention to the case of forms on finite-dimensional complex inner product spaces and in particular to Hermitian forms. Contents: Introduction; 1. Preliminaries; 2. Sesquilinear forms on inner product spaces; 3. Matrix representation of a form; 4. Hermitian forms; Notes; References. Introduction. In this note we are going to introduce the concept of forms on an inner product space and describe some of their main properties (cf. [2, Section 9.2]). The notion of inner product is a basic notion in linear algebra which allows one to rigorously introduce on any vector space intuitive geometrical notions such as the length of an element and the orthogonality between two of them. We will just recall this notion in Section 1 together with some basic examples (for more details on this structure see e.g. [1, Chapter 3], [2, Chapter 8], [3]).
  • Orthogonal, Symplectic and Unitary Representations of Finite Groups
    TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 353, Number 12, Pages 4687–4727, S 0002-9947(01)02807-0. Article electronically published on June 27, 2001. ORTHOGONAL, SYMPLECTIC AND UNITARY REPRESENTATIONS OF FINITE GROUPS. CARL R. RIEHM. Abstract. Let K be a field, G a finite group, and ρ : G → GL(V) a linear representation on the finite dimensional K-space V. The principal problems considered are: I. Determine (up to equivalence) the nonsingular symmetric, skew symmetric and Hermitian forms h : V × V → K which are G-invariant. II. If h is such a form, enumerate the equivalence classes of representations of G into the corresponding group (orthogonal, symplectic or unitary group). III. Determine conditions on G or K under which two orthogonal, symplectic or unitary representations of G are equivalent if and only if they are equivalent as linear representations and their underlying forms are "isotypically" equivalent. This last condition means that the restrictions of the forms to each pair of corresponding isotypic (homogeneous) KG-module components of their spaces are equivalent. We assume throughout that the characteristic of K does not divide 2|G|. Solutions to I and II are given when K is a finite or local field, or when K is a global field and the representation is "split". The results for III are strongest when the degrees of the absolutely irreducible representations of G are odd (for example if G has odd order or is an Abelian group, or more generally has a normal Abelian subgroup of odd index) and, in the case that K is a local or global field, when the representations are split.
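The existence half of problem I is classically settled by averaging an arbitrary symmetric form over the group, which works precisely because char K does not divide 2|G|. A toy sketch with G = Z/2 acting on R² by the swap matrix (this example is illustrative, not taken from the paper):

```python
import numpy as np

# Representation of G = Z/2 on R^2: the identity and the swap matrix.
G = [np.eye(2), np.array([[0., 1.], [1., 0.]])]

# Start from an arbitrary symmetric form h(x, y) = x' S y ...
S = np.array([[2., 1.],
              [1., 5.]])

# ... and average it over the group: S_G = sum over g of g' S g.
S_G = sum(g.T @ S @ g for g in G)

# The averaged form is G-invariant: h_G(gx, gy) = h_G(x, y) for all g in G.
for g in G:
    assert np.allclose(g.T @ S_G @ g, S_G)
```

Dividing S_G by |G| (allowed when char K does not divide |G|) makes the averaging a projection onto the G-invariant forms; over R this also preserves positive definiteness.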
  • A PRIMER on SESQUILINEAR FORMS This Is an Alternative
    A PRIMER ON SESQUILINEAR FORMS. BRIAN OSSERMAN. This is an alternative presentation of most of the material from §§8.1, 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin's book. Any terminology (such as sesquilinear form or complementary subspace) which is discussed here but not in Artin is optional for the course, and will not appear on homework or exams. 1. Sesquilinear forms. The dot product is an important tool for calculations in Rⁿ. For instance, we can use it to measure distance and angles. However, it doesn't come from just the vector space structure on Rⁿ: to define it implicitly involves a choice of basis. In Math 67, you may have studied inner products on real vector spaces and learned that this is the general context in which the dot product arises. Now, we will make a more general study of the extra structure implicit in dot products and more generally inner products. This is the subject of bilinear forms. However, not all forms of interest are bilinear. When working with complex vector spaces, one often works with Hermitian forms, which toss in an extra complex conjugation. In order to handle both of these cases at once, we'll work in the context of sesquilinear forms. For convenience, we'll assume throughout that our vector spaces are finite dimensional. We first set up the background on field automorphisms. Definition 1.1. Let F be a field. An automorphism of F is a bijection from F to itself which preserves the operations of addition and multiplication.
  • Canonical Matrices of Bilinear and Sesquilinear Forms
    Canonical matrices of bilinear and sesquilinear forms. Roger A. Horn, Department of Mathematics, University of Utah, Salt Lake City, Utah 84112-0090, [email protected]. Vladimir V. Sergeichuk∗, Institute of Mathematics, Tereshchenkivska 3, Kiev, Ukraine, [email protected]. Abstract. Canonical matrices are given for: bilinear forms over an algebraically closed or real closed field; sesquilinear forms over an algebraically closed field and over real quaternions with any nonidentity involution; and sesquilinear forms over a field F of characteristic different from 2 with involution (possibly, the identity) up to classification of Hermitian forms over finite extensions of F; the canonical matrices are based on any given set of canonical matrices for similarity over F. A method for reducing the problem of classifying systems of forms and linear mappings to the problem of classifying systems of linear mappings is used to construct the canonical matrices. This method has its origins in representation theory and was devised in [V.V. Sergeichuk, Math. USSR-Izv. 31 (1988) 481–501]. This is the authors' version of a work that was accepted for publication in Linear Algebra and its Applications (2007), doi:10.1016/j.laa.2007.07.023. ∗The research was done while this author was visiting the University of Utah supported by NSF grant DMS-0070503 and the University of São Paulo supported by FAPESP, processo 05/59407-6. AMS classification: 15A21, 15A63. Keywords: Canonical matrices; Bilinear and sesquilinear forms; Congruence and *congruence; Quivers and algebras with involution. 1. Introduction. We give canonical matrices of bilinear forms over an algebraically closed or real closed field (familiar examples are C and R), and of sesquilinear forms over an algebraically closed field and over P-quaternions (P is a real closed field) with respect to any nonidentity involution.
  • Representation Theorems for Solvable Sesquilinear Forms
    REPRESENTATION THEOREMS FOR SOLVABLE SESQUILINEAR FORMS. ROSARIO CORSO AND CAMILLO TRAPANI. Abstract. New results are added to the paper [4] about q-closed and solvable sesquilinear forms. The structure of the Banach space D[||·||_Ω] defined on the domain D of a q-closed sesquilinear form Ω is unique up to isomorphism, and the adjoint of a sesquilinear form has the same property of q-closure or of solvability. The operator associated to a solvable sesquilinear form is the greatest which represents the form and it is self-adjoint if, and only if, the form is symmetric. We give more criteria of solvability for q-closed sesquilinear forms. Some of these criteria are related to the numerical range, and we analyse in particular the forms which are solvable with respect to inner products. The theory of solvable sesquilinear forms generalises those of many known sesquilinear forms in literature. Keywords: Kato's First Representation Theorem, q-closed/solvable sesquilinear forms, compatible norms, Banach–Gelfand triplet. MSC (2010): 47A07, 47A30. 1. Introduction. If Ω is a bounded sesquilinear form defined on a Hilbert space H, with inner product ⟨·|·⟩ and norm ||·||, then, as is well known, there exists a unique bounded operator T on H such that Ω(ξ, η) = ⟨Tξ|η⟩ for all ξ, η ∈ H. (1) In the unbounded case we are interested in determining a representation of type (1), but we confine ourselves to consider the forms defined on a dense subspace D of H and to search for a closed operator T, with dense domain D(T) ⊆ D in H, such that Ω(ξ, η) = ⟨Tξ|η⟩ for all ξ ∈ D(T), η ∈ D.
  • Download Article (PDF)
    DEMONSTRATIO MATHEMATICA, Vol. XXVII, No 1, 1994. Michał Muzalewski. ON ORTHOGONALITY RELATION DETERMINED BY A GENERALIZED SEMI-INNER PRODUCT. In the paper we are dealing with structures 𝔚 = (V, W; ⊥), where V and W are vector spaces over a skew-field K. We assume that the relation ⊥ ⊆ V × W satisfies some very weak axioms which are usually satisfied by the orthogonality relations, and we prove that then the relation ⊥ is induced by an appropriate form f : V × W → K, i.e. ⊥ = ⊥_f, where x ⊥_f y :⇔ f(x, y) = 0, for x ∈ V, y ∈ W. 1. Introductory definitions and facts. DEFINITION 1. Let V, W be left vector spaces over skew-fields K and L, respectively. Let us denote L(V) := {L : L is a subspace of V and dim L = 1}, H(V) := {H : H is a subspace of V and codim H = 1}, and S(V) := {U : U is a subspace of V}. In the case of finite dimensional projective spaces, by a correlation in 𝔓(V) := (L(V), H(V); ⊆) we mean any isomorphism κ : 𝔓(V) → (H(V), L(V); ⊇), which is usually identified with its natural extension κ : S(V) → S(V). If dim V = ∞, then there is no such function κ, therefore we must change and generalize our definition. A map κ : L(V) → H(W) [γ : H(V) → L(W)] is said to be a correlation [dual correlation] if it satisfies the following condition:
  • Forms and Operators
    Lecture 5. Forms and operators. Now we introduce the main object of this course, namely forms in Hilbert spaces. They are so popular in analysis because the Lax-Milgram lemma yields properties of existence and uniqueness which are best adapted for establishing weak solutions of elliptic partial differential equations. What is more, we already have the Lumer-Phillips machinery at our disposal, which allows us to go much further and to associate holomorphic semigroups with forms. 5.1 Forms: algebraic properties. In this section we introduce forms and put together some algebraic properties. As domain we consider a vector space V over K. A sesquilinear form on V is a mapping a : V × V → K such that a(u + v, w) = a(u, w) + a(v, w), a(λu, w) = λa(u, w), a(u, v + w) = a(u, v) + a(u, w), and a(u, λv) = $\overline{\lambda}$a(u, v), for all u, v, w ∈ V, λ ∈ K. If K = R, then a sesquilinear form is the same as a bilinear form. If K = C, then a is antilinear in the second variable: it is additive in the second variable but not homogeneous. Thus the form is linear in the first variable, whereas only half of the linearity conditions are fulfilled for the second variable. The form is 1½-linear, or sesquilinear, since the Latin 'sesqui' means 'one and a half'. For simplicity we will mostly use the terminology form instead of sesquilinear form. A form a is called symmetric if a(u, v) = $\overline{a(v, u)}$ (u, v ∈ V), and a is called accretive if Re a(u, u) ≥ 0 (u ∈ V).
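The half-linearity can be seen concretely for the standard form on C², following the lecture's convention (linear in the first variable, antilinear in the second); the numeric values below are arbitrary illustrative choices:

```python
import numpy as np

# Standard sesquilinear form on C^2: a(u, v) = sum of u_i * conj(v_i),
# linear in the first variable, antilinear in the second.
a = lambda u, v: u @ np.conj(v)

u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 1j])
lam = 0.5 + 1.5j

assert np.isclose(a(lam * u, v), lam * a(u, v))            # linear in u
assert np.isclose(a(u, lam * v), np.conj(lam) * a(u, v))   # antilinear in v
# This form is symmetric in the lecture's sense: a(u, v) = conj(a(v, u)),
assert np.isclose(a(u, v), np.conj(a(v, u)))
# and accretive: Re a(u, u) = ||u||^2 >= 0.
assert a(u, u).real >= 0
```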
  • The Unitary Group Let V Be a Complex Vector Space. a Sesquilinear Form Is
    The unitary group. Let V be a complex vector space. A sesquilinear form is a map ∗ : V × V → C obeying: ~u ∗ (~v + ~w) = ~u ∗ ~v + ~u ∗ ~w; (~u + ~v) ∗ ~w = ~u ∗ ~w + ~v ∗ ~w; λ(~u ∗ ~v) = ($\overline{\lambda}$~u) ∗ ~v = ~u ∗ (λ~v). A sesquilinear form is called Hermitian if it obeys ~u ∗ ~v = $\overline{~v ∗ ~u}$. Note that this implies that ~v ∗ ~v ∈ R. A Hermitian sesquilinear form is called positive definite if, for all nonzero ~v, we have ~v ∗ ~v > 0. If we identify V with Cⁿ, then sesquilinear forms are of the form ~v ∗ ~w = ~v†Q~w for an arbitrary Q ∈ Mat_{n×n}(C), the Hermitian condition says that Q = Q†, and the positive definite condition says that Q is positive definite Hermitian. The standard Hermitian form on Cⁿ is ~v† ~w. Let V be a finite complex vector space equipped with a positive definite Hermitian form ∗. We define A : V → V to be unitary if (A~v) ∗ (A~w) = ~v ∗ ~w for all ~v and ~w in V. We will show that V has a ∗-orthonormal basis of eigenvectors, whose eigenvalues are all of the form e^{iθ}. Our proof is by induction on n. Let λ be a complex eigenvalue of A, with eigenvector ~v. Problem 1. Show that |λ| = 1. Problem 2. Show that A takes {~w : ~v ∗ ~w = 0} to itself. Problem 3. Explain why we are done. The orthogonal group. On this page we make sure to get a clear record of a result which was done confusingly in homework: A real orthogonal matrix can be put into block diagonal form where the blocks are [±1] and $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$.
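Two claims from this page (unitarity as U†U = I, and eigenvalues of modulus one, i.e., of the form e^{iθ}) can be checked numerically. Building a random unitary matrix from a QR factorization is a standard trick assumed by this sketch, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random 4x4 unitary matrix: the Q factor of a random complex matrix.
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)

# U preserves the standard Hermitian form: (Uv)* (Uw) = v* w, i.e. U'U = I
# with ' denoting the conjugate transpose.
assert np.allclose(U.conj().T @ U, np.eye(4))

# Hence every eigenvalue of U has modulus 1.
eigvals = np.linalg.eigvals(U)
assert np.allclose(np.abs(eigvals), 1.0)
```

This is exactly Problem 1 in matrix form: if Uv = λv with v nonzero, then v∗v = (Uv)∗(Uv) = |λ|² v∗v, so |λ| = 1.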
  • Solutions to the Exercises of Lecture 5 of the Internet Seminar 2014/15
    Solutions to the Exercises of Lecture 5 of the Internet Seminar 2014/15. Exercise 5.1. Claim: Let V be a normed vector space, and let a : V × V → K be a sesquilinear form. Then a is bounded if and only if a is continuous. (Note that this is slightly more general than the statement given in the exercise, since V need not be a Hilbert space.) Proof. First assume that a is bounded, i.e., there exists a constant M ≥ 0 such that |a(u, v)| ≤ M ||u||_V ||v||_V for all u, v ∈ V. Let ((u_n, v_n))_{n∈N} ∈ (V × V)^N and u, v ∈ V such that (u_n, v_n) → (u, v) in V × V as n → ∞ (which is equivalent to u_n → u, v_n → v in V as n → ∞). Since (u_n)_{n∈N} is convergent there exists C ≥ 0 such that ||u_n||_V ≤ C for all n ∈ N. Hence we compute |a(u, v) − a(u_n, v_n)| ≤ |a(u, v) − a(u_n, v)| + |a(u_n, v) − a(u_n, v_n)| = |a(u − u_n, v)| + |a(u_n, v − v_n)| ≤ M ||u − u_n||_V ||v||_V + M ||u_n||_V ||v − v_n||_V ≤ M ||u − u_n||_V ||v||_V + MC ||v − v_n||_V for all n ∈ N, which tends to 0 as n → ∞. Thus we have shown that a is continuous. In order to show the converse, assume that a is not bounded. Then there exist sequences (u_n)_{n∈N}, (v_n)_{n∈N} ∈ V^N such that |a(u_n, v_n)| > n ||u_n||_V ||v_n||_V for all n ∈ N.
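The bound in the forward direction can be probed numerically: for a form induced by a matrix T on C³, the operator norm of T is a valid constant M, since |u†Tv| ≤ ||u|| ||Tv|| ≤ ||T|| ||u|| ||v|| by Cauchy-Schwarz. The matrix and the random sampling below are illustrative assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# A bounded sesquilinear form on C^3: a(u, v) = u† T v for a fixed matrix T
# (antilinear in the first variable under this convention).
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
a = lambda u, v: np.conj(u) @ T @ v

# The operator (spectral) norm of T is a bound M: |a(u,v)| <= M ||u|| ||v||.
M = np.linalg.norm(T, ord=2)
for _ in range(100):
    u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    assert abs(a(u, v)) <= M * np.linalg.norm(u) * np.linalg.norm(v) + 1e-9
```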