6.5 Unitary and Orthogonal Operators and their Matrices

In this section we focus on length-preserving transformations of an inner product space. Throughout, as usual, we assume $V$ is an inner product space.

Definition 6.36. A linear isometry of an inner product space $V$ over $\mathbb{F}$ is a linear map $T$ satisfying

$$\forall x \in V, \quad \|T(x)\| = \|x\|$$

It should be clear that every eigenvalue of an isometry must have modulus 1: if $T(w) = \lambda w$, then

$$\|w\|^2 = \|T(w)\|^2 = \|\lambda w\|^2 = |\lambda|^2 \|w\|^2$$

Example 6.37. Let $T = L_A \in \mathcal{L}(\mathbb{R}^2)$, where $A = \frac{1}{5}\begin{pmatrix} 4 & -3 \\ 3 & 4 \end{pmatrix}$. Then

$$\left\| T\begin{pmatrix} x \\ y \end{pmatrix} \right\| = \frac{1}{5}\left\| \begin{pmatrix} 4x-3y \\ 3x+4y \end{pmatrix} \right\| = \sqrt{\tfrac{1}{25}\bigl((4x-3y)^2 + (3x+4y)^2\bigr)} = \sqrt{x^2+y^2} = \left\| \begin{pmatrix} x \\ y \end{pmatrix} \right\|$$

The matrix in this example is very special in that its inverse is its transpose:

$$A^{-1} = \frac{5}{16+9}\begin{pmatrix} 4 & 3 \\ -3 & 4 \end{pmatrix} = \frac{1}{5}\begin{pmatrix} 4 & 3 \\ -3 & 4 \end{pmatrix} = A^T$$

We call such matrices orthogonal.

Definition 6.38. A unitary operator $T$ on an inner product space $V$ is an invertible linear map satisfying $T^*T = I = TT^*$. A unitary matrix is a matrix satisfying $A^*A = I$.

• If $V$ is real, we usually call these orthogonal operators/matrices: this isn't necessary, since unitary encompasses both real and complex spaces. Note that an orthogonal matrix satisfies $A^TA = I$.

• If $\beta$ is an orthonormal basis of a finite-dimensional $V$, then $T \in \mathcal{L}(V)$ is unitary if and only if the matrix $[T]_\beta$ is unitary.

• We need only assume $T^*T = I$ (or $TT^* = I$) if $V$ is finite-dimensional: if $\beta$ is an orthonormal basis, then

$$T^*T = I \iff [T^*]_\beta [T]_\beta = I \iff [T]_\beta [T^*]_\beta = I \iff TT^* = I$$

If $V$ is infinite-dimensional, we need $T^*$ to be both the left- and right-inverse of $T$. This isn't an empty requirement: see Exercise 6.5.12.

Example 6.39. The matrix $A = \frac{1}{3}\begin{pmatrix} i & 2+2i \\ 2-2i & i \end{pmatrix}$ is easily seen to be unitary:

$$A^*A = \frac{1}{9}\begin{pmatrix} -i & 2+2i \\ 2-2i & -i \end{pmatrix}\begin{pmatrix} i & 2+2i \\ 2-2i & i \end{pmatrix} = \frac{1}{9}\begin{pmatrix} 9 & 0 \\ 0 & 9 \end{pmatrix} = I$$

Theorem 6.40. Let $T$ be a linear operator on $V$.
1. If $T$ is a unitary/orthogonal operator, then it is a linear isometry.
2. If $T$ is a linear isometry and $V$ is finite-dimensional, then $T$ is unitary/orthogonal.

Proof. 1. If $T$ is unitary, then

$$\forall x, y \in V, \quad \langle x, y \rangle = \langle T^*T(x), y \rangle = \langle T(x), T(y) \rangle \qquad (\dagger)$$

In particular, taking $x = y$ shows that $T$ is an isometry.

2. $(I - T^*T)^* = I^* - (T^*T)^* = I - T^*T$ is self-adjoint. By the spectral theorem, there exists an orthonormal basis of $V$ of eigenvectors of $I - T^*T$. For any such $x$ with eigenvalue $\lambda$,

$$0 = \|x\|^2 - \|T(x)\|^2 = \langle x, x \rangle - \langle T(x), T(x) \rangle = \langle x, (I - T^*T)x \rangle = \lambda \|x\|^2 \implies \lambda = 0$$

Since $I - T^*T = 0$ on a basis, $T^*T = I$. Since $V$ is finite-dimensional, we also have $TT^* = I$, whence $T$ is unitary.

The finite-dimensional restriction is important in part 2: we use the existence of adjoints, the spectral theorem, and that a left-inverse is also a right-inverse. Again, see Exercise 6.5.12 for an example of a non-unitary isometry in infinite dimensions.

The proof shows a little more:

Corollary 6.41. On a finite-dimensional space, being unitary is equivalent to each of the following:

(a) Preservation of the inner product^a (†). In particular, in a real inner product space isometries also preserve the angle $\theta$ between vectors, since $\cos\theta = \frac{\langle x, y \rangle}{\|x\|\,\|y\|}$.

(b) The existence of an orthonormal basis $\beta = \{w_1, \ldots, w_n\}$ such that $T(\beta) = \{T(w_1), \ldots, T(w_n)\}$ is also orthonormal.

(c) That every orthonormal basis $\beta$ of $V$ is mapped to an orthonormal basis $T(\beta)$.

^a (†) is in fact equivalent to being an isometry in infinite dimensions: recall the polarization identity.

While (a) is simply (†), claims (b) and (c) are also worth proving explicitly: see Exercise 6.5.8. If $\beta$ is the standard orthonormal basis of $\mathbb{F}^n$ and $T = L_A$, then the columns of $A$ form the orthonormal set $T(\beta)$. This makes identifying unitary/orthogonal matrices easy:

Corollary 6.42. A matrix $A \in M_n(\mathbb{R})$ is orthogonal if and only if its columns form an orthonormal basis of $\mathbb{R}^n$ with respect to the standard (dot) inner product.
A matrix $A \in M_n(\mathbb{C})$ is unitary if and only if its columns form an orthonormal basis of $\mathbb{C}^n$ with respect to the standard (hermitian) inner product.

Examples 6.43. 1. The matrix $A_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \in M_2(\mathbb{R})$ is orthogonal for any $\theta$. Example 6.37 is this with $\theta = \tan^{-1}\frac{3}{4}$. More generally (Exercise 6.5.6), it can be seen that every real orthogonal $2 \times 2$ matrix has the form $A_\theta$ or

$$B_\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}$$

for some angle $\theta$. The effect of the linear map $L_{A_\theta}$ is to rotate counter-clockwise by $\theta$, while that of $L_{B_\theta}$ is to reflect across the line making angle $\frac{1}{2}\theta$ with the positive $x$-axis.

2. $A = \frac{1}{\sqrt{6}}\begin{pmatrix} \sqrt{2} & \sqrt{3} & 1 \\ \sqrt{2} & 0 & -2 \\ \sqrt{2} & -\sqrt{3} & 1 \end{pmatrix} \in M_3(\mathbb{R})$ is orthogonal: check the columns!

3. The matrix $A = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$ is unitary: indeed it maps the standard basis to the orthonormal basis

$$T(\beta) = \left\{ \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}, \ \frac{1}{\sqrt{2}}\begin{pmatrix} i \\ 1 \end{pmatrix} \right\}$$

It is also easy to check that the characteristic polynomial is

$$p(t) = \det\begin{pmatrix} \frac{1}{\sqrt{2}} - t & \frac{i}{\sqrt{2}} \\ \frac{i}{\sqrt{2}} & \frac{1}{\sqrt{2}} - t \end{pmatrix} = t^2 - \sqrt{2}\,t + 1 \implies t = \frac{1}{\sqrt{2}}(1 \pm i) = e^{\pm \pi i/4}$$

whence the eigenvalues of $T$ both have modulus 1.

4. Here is an example of an infinite-dimensional unitary operator. On the space $C[-\pi, \pi]$, the function $T(f(x)) = e^{ix}f(x)$ is linear. Moreover

$$\left\langle e^{ix}f(x), g(x) \right\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{ix}f(x)\overline{g(x)}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\overline{e^{-ix}g(x)}\,dx = \left\langle f(x), e^{-ix}g(x) \right\rangle$$

whence $T^*(f(x)) = e^{-ix}f(x)$. Indeed $T^* = T^{-1}$ and so $T$ is a unitary operator. Since $C[-\pi, \pi]$ is infinite-dimensional, we don't expect all parts of the Corollary to hold:

• Being unitary, $T$ preserves the inner product. However, in contrast to a unitary operator on a finite-dimensional complex space, $T$ has no eigenvalues/eigenvectors, since

$$T(f) = \lambda f \iff \forall x,\ e^{ix}f(x) = \lambda f(x) \iff f(x) \equiv 0$$

• $T$ certainly maps any orthonormal set to an orthonormal set; however, it can be seen that $C[-\pi, \pi]$ has no orthonormal basis!^a

^a An orthonormal set $\beta = \{f_k : k \in \mathbb{Z}\}$ can be found so that every function $f$ equals an infinite series in the sense that $\|f - \sum a_k f_k\| = 0$. However, these are not finite sums and so $\beta$ is not a basis. Moreover, given that the norm is defined by an integral, this also isn't quite the same as saying that $f = \sum a_k f_k$ as functions. Indeed there is no guarantee that such an infinite series is itself continuous! For these reasons, when working with Fourier series, one tends to consider a broader class than the continuous functions.

Unitary and Orthogonal Equivalence

Suppose $A \in M_n(\mathbb{R})$ is symmetric (self-adjoint): $A^T = A$. By the spectral theorem, $A$ has an orthonormal eigenbasis $\beta = \{w_1, \ldots, w_n\}$: $Aw_j = \lambda_j w_j$. If we write $U = (w_1 \cdots w_n)$, then the columns of $U$ are orthonormal and thus $U$ is an orthogonal matrix. We can therefore write

$$A = UDU^{-1} = U\begin{pmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix} U^T$$

The same approach works if $A \in M_n(\mathbb{C})$ is normal: we now have $A = UDU^*$ where $U$ is unitary.

Example 6.44. The matrix $A = \begin{pmatrix} 1+i & 1+i \\ -1-i & 1+i \end{pmatrix}$ is normal, as can easily be checked. Its characteristic polynomial is

$$p(t) = t^2 - 2(1+i)t + 4i = (t - 2i)(t - 2)$$

with corresponding orthonormal eigenvectors

$$w_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}, \qquad w_{2i} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}$$

We conclude that

$$A = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -i & i \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & 2i \end{pmatrix} \cdot \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ 1 & -i \end{pmatrix}$$

This is an example of unitary equivalence:

Definition 6.45. Square matrices $A, B$ are unitarily equivalent if there exists a unitary matrix $U$ such that $B = U^*AU$. Orthogonal equivalence is similar: $B = U^TAU$.

The above discussion proves half the following:

Theorem 6.46. $A \in M_n(\mathbb{C})$ is normal if and only if it is unitarily equivalent to a diagonal matrix (the matrix of its eigenvalues). Similarly, $A \in M_n(\mathbb{R})$ is symmetric if and only if it is orthogonally equivalent to a diagonal matrix.
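The column criterion of Corollary 6.42 and the unitary diagonalization $A = UDU^*$ are easy to verify numerically. Here is a minimal sketch with NumPy, using the matrices of Examples 6.39 and 6.44; the helper name `is_unitary` is ours, not part of any library:

```python
import numpy as np

def is_unitary(M, tol=1e-12):
    """Check M* M = I, i.e. the columns of M are orthonormal (Corollary 6.42)."""
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]), atol=tol)

# Example 6.39: a unitary matrix
A = np.array([[1j, 2 + 2j],
              [2 - 2j, 1j]]) / 3
assert is_unitary(A)

# Eigenvalues of a unitary matrix all have modulus 1
assert np.allclose(np.abs(np.linalg.eigvals(A)), 1)

# Example 6.44: a normal matrix and its unitary diagonalization B = U D U*
B = np.array([[1 + 1j, 1 + 1j],
              [-1 - 1j, 1 + 1j]])
assert np.allclose(B @ B.conj().T, B.conj().T @ B)  # normality: BB* = B*B

U = np.array([[1, 1],
              [-1j, 1j]]) / np.sqrt(2)   # columns are w_2 and w_{2i}
D = np.diag([2, 2j])                     # eigenvalues on the diagonal
assert is_unitary(U)
assert np.allclose(U @ D @ U.conj().T, B)
```

Note that the test `M* M = I` alone suffices here because these matrices are square and finite-dimensional, exactly as in the remark after Definition 6.38.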