Lecture Notes on Operator Algebras
John M. Erdman
Portland State University
Version March 12, 2011
© 2010 John M. Erdman
E-mail address: [email protected]

Contents

Chapter 1. LINEAR ALGEBRA AND THE SPECTRAL THEOREM
  1.1. Vector Spaces and the Decomposition of Diagonalizable Operators
  1.2. Normal Operators on an Inner Product Space
Chapter 2. THE ALGEBRA OF HILBERT SPACE OPERATORS
  2.1. Hilbert Space Geometry
  2.2. Operators on Hilbert Spaces
  2.3. Algebras
  2.4. Spectrum
Chapter 3. BANACH ALGEBRAS
  3.1. Definition and Elementary Properties
  3.2. Maximal Ideal Space
  3.3. Characters
  3.4. The Gelfand Topology
  3.5. The Gelfand Transform
  3.6. The Fourier Transform
Chapter 4. INTERLUDE: THE LANGUAGE OF CATEGORIES
  4.1. Objects and Morphisms
  4.2. Functors
  4.3. Natural Transformations
  4.4. Universal Morphisms
Chapter 5. C∗-ALGEBRAS
  5.1. Adjoints of Hilbert Space Operators
  5.2. Algebras with Involution
  5.3. C∗-Algebras
  5.4. The Gelfand-Naimark Theorem: Version I
Chapter 6. SURVIVAL WITHOUT IDENTITY
  6.1. Unitization of Banach Algebras
  6.2. Exact Sequences and Extensions
  6.3. Unitization of C∗-algebras
  6.4. Quasi-inverses
  6.5. Positive Elements in C∗-algebras
  6.6. Approximate Identities
Chapter 7. SOME IMPORTANT CLASSES OF HILBERT SPACE OPERATORS
  7.1. Orthonormal Bases in Hilbert Spaces
  7.2. Projections and Partial Isometries
  7.3. Finite Rank Operators
  7.4. Compact Operators
Chapter 8. THE GELFAND-NAIMARK-SEGAL CONSTRUCTION
  8.1. Positive Linear Functionals
  8.2. Representations
  8.3. The GNS-Construction and the Third Gelfand-Naimark Theorem
Chapter 9. MULTIPLIER ALGEBRAS
  9.1. Hilbert Modules
  9.2. Essential Ideals
  9.3. Compactifications and Unitizations
Chapter 10. FREDHOLM THEORY
  10.1. The Fredholm Alternative
  10.2. The Fredholm Alternative (continued)
  10.3. Fredholm Operators
  10.4. The Fredholm Alternative (concluded)
Chapter 11. EXTENSIONS
  11.1. Essentially Normal Operators
  11.2. Toeplitz Operators
  11.3. Addition of Extensions
  11.4. Tensor Products of C∗-algebras
  11.5. Completely Positive Maps
Chapter 12. K-THEORY
  12.1. Projections on Matrix Algebras
  12.2. The Grothendieck Construction
  12.3. K0(A): the Unital Case
  12.4. K0(A): the Nonunital Case
  12.5. Exactness and Stability Properties of the K0 Functor
  12.6. Inductive Limits
  12.7. Bratteli Diagrams
Bibliography
Index

It is not essential for the value of an education that every idea be understood at the time of its accession. Any person with a genuine intellectual interest and a wealth of intellectual content acquires much that he only gradually comes to understand fully in the light of its correlation with other related ideas. Scholarship is a progressive process, and it is the art of so connecting and recombining individual items of learning by the force of one's whole character and experience that nothing is left in isolation, and each idea becomes a commentary on many others.
- NORBERT WIENER

CHAPTER 1
LINEAR ALGEBRA AND THE SPECTRAL THEOREM

1.1. Vector Spaces and the Decomposition of Diagonalizable Operators

1.1.1. Convention. In this course, unless the contrary is explicitly stated, all vector spaces will be assumed to be vector spaces over C. That is, "scalar" will be taken to mean "complex number".

1.1.2. Definition. The triple (V, +, M) is a (complex) vector space if (V, +) is an Abelian group and M : C → Hom(V) is a unital ring homomorphism (where Hom(V) is the ring of group homomorphisms on V). A function T : V → W between vector spaces is linear if T(u + v) = Tu + Tv for all u, v ∈ V and T(αv) = αTv for all α ∈ C and v ∈ V. Linear functions are frequently called linear transformations or linear maps. When V = W we say that T is an operator on V.
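The two linearity conditions above are easy to check numerically on sample vectors. The following is a minimal pure-Python sketch (the helper names apply, add, and scale are illustrative, not from the notes) for the operator on C² induced by a 2×2 complex matrix; a finite check like this can of course only refute linearity, not prove it.

```python
# Checking T(u + v) = Tu + Tv and T(alpha v) = alpha(Tv) on sample vectors
# for the operator on C^2 induced by a 2x2 complex matrix.
# Helper names are illustrative only.

def apply(T, v):
    """Apply a 2x2 complex matrix T (list of rows) to a vector v in C^2."""
    return [T[0][0]*v[0] + T[0][1]*v[1],
            T[1][0]*v[0] + T[1][1]*v[1]]

def add(u, v):
    """Vector addition in C^2."""
    return [u[0] + v[0], u[1] + v[1]]

def scale(alpha, v):
    """Scalar multiplication in C^2."""
    return [alpha * v[0], alpha * v[1]]

T = [[1, 2j],
     [0, 3]]                    # an operator on C^2
u, v, alpha = [1, 1j], [2, -1], 2 - 1j

# additivity: T(u + v) == Tu + Tv
assert apply(T, add(u, v)) == add(apply(T, u), apply(T, v))
# homogeneity: T(alpha v) == alpha (Tv)
assert apply(T, scale(alpha, v)) == scale(alpha, apply(T, v))
```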
The collection of all linear maps from V to W is denoted by L(V, W) and the set of operators on V is denoted by L(V). Depending on context we denote the identity operator x ↦ x on V by idV or IV or just I. Recall that if T : V → W is a linear map, then the kernel of T, denoted by ker T, is {x ∈ V : Tx = 0}. Also, the range of T, denoted by ran T, is T(V) = {Tx : x ∈ V}.

1.1.3. Definition. A linear map T : V → W between vector spaces is invertible (or is an isomorphism) if there exists a linear map T⁻¹ : W → V such that T⁻¹T = idV and TT⁻¹ = idW.

Recall that if a linear map is invertible its inverse is unique. Recall also that for a linear operator T on a finite dimensional vector space the following are equivalent: (a) T is an isomorphism; (b) T is injective; (c) the kernel of T is {0}; and (d) T is surjective.

1.1.4. Definition. Two operators R and T on a vector space V are similar if there exists an invertible operator S on V such that R = S⁻¹TS.

1.1.5. Proposition. If V is a vector space, then similarity is an equivalence relation on L(V).

1.1.6. Definition. Let V be a finite dimensional vector space and B = {e1, . . . , en} be a basis for V. An operator T on V is diagonal if there exist scalars α1, . . . , αn such that Tek = αk ek for each k ∈ Nn = {1, . . . , n}. Equivalently, T is diagonal if its matrix representation [T] = [Tij] has the property that Tij = 0 whenever i ≠ j.

Asking whether a particular operator on some finite dimensional vector space is diagonal is, strictly speaking, nonsense. As defined, the operator property of being diagonal is definitely not a vector space concept. It makes sense only for a vector space for which a basis has been specified. This important, if obvious, fact seems to go unnoticed in beginning linear algebra courses, due, I suppose, to a rather obsessive fixation on Rⁿ in such courses. Here is the relevant vector space property.

1.1.7. Definition.
An operator T on a finite dimensional vector space V is diagonalizable if there exists a basis for V with respect to which T is diagonal. Equivalently, an operator on a finite dimensional vector space with basis is diagonalizable if it is similar to a diagonal operator.

1.1.8. Definition. Let M and N be subspaces of a vector space V. If M ∩ N = {0} and M + N = V, then V is the (internal) direct sum of M and N. In this case we write V = M ⊕ N. We say that M and N are complementary subspaces and that each is a (vector space) complement of the other. The codimension of the subspace M is the dimension of its complement N.

1.1.9. Example. Let C = C[−1, 1] be the vector space of all continuous real valued functions on the interval [−1, 1]. A function f in C is even if f(−x) = f(x) for all x ∈ [−1, 1]; it is odd if f(−x) = −f(x) for all x ∈ [−1, 1]. Let Co = {f ∈ C : f is odd} and Ce = {f ∈ C : f is even}. Then C = Co ⊕ Ce. (Indeed, f = fe + fo where fe(x) = (f(x) + f(−x))/2 and fo(x) = (f(x) − f(−x))/2, and the only function that is both even and odd is the zero function.)

1.1.10. Proposition. If M is a subspace of a vector space V, then there exists a subspace N of V such that V = M ⊕ N.

1.1.11. Proposition. Let V be a vector space and suppose that V = M ⊕ N. Then for every v ∈ V there exist unique vectors m ∈ M and n ∈ N such that v = m + n.

1.1.12. Definition. Let V be a vector space and suppose that V = M ⊕ N. We know from 1.1.11 that for each v ∈ V there exist unique vectors m ∈ M and n ∈ N such that v = m + n. Define a function EMN : V → V by EMN v = n. The function EMN is the projection of V along M onto N. (Frequently we write E for EMN. But keep in mind that E depends on both M and N.)

1.1.13. Proposition. Let V be a vector space and suppose that V = M ⊕ N. If E is the projection of V along M onto N, then (i) E is linear; (ii) E² = E (that is, E is idempotent); (iii) ran E = N; and (iv) ker E = M.

1.1.14. Proposition. Let V be a vector space and suppose that E : V → V is a function which satisfies (i) E is linear, and (ii) E² = E.
Then V = ker E ⊕ ran E and E is the projection of V along ker E onto ran E.

It is important to note that an obvious consequence of the last two propositions is that a function T : V → V from a finite dimensional vector space into itself is a projection if and only if it is linear and idempotent.

1.1.15. Proposition. Let V be a vector space and suppose that V = M ⊕ N. If E is the projection of V along M onto N, then I − E is the projection of V along N onto M.
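Propositions 1.1.13, 1.1.14, and 1.1.15 can be illustrated with a concrete idempotent. The pure-Python sketch below (matrices chosen for illustration, not taken from the notes) takes E to be the projection of C² along M = span{(1, 1)} onto N = span{(1, 0)} and checks idempotence, the direct-sum decomposition v = m + n, and that I − E is the complementary projection along N onto M.

```python
# The projection E of C^2 along M = span{(1,1)} onto N = span{(1,0)},
# as a concrete instance of Propositions 1.1.13-1.1.15.
# The matrices below are an illustrative choice, not from the notes.

def apply(E, v):
    """Apply a 2x2 matrix E (list of rows) to a vector v in C^2."""
    return [E[0][0]*v[0] + E[0][1]*v[1],
            E[1][0]*v[0] + E[1][1]*v[1]]

E = [[1, -1],
     [0,  0]]                  # E(x, y) = (x - y, 0): along M onto N

I_minus_E = [[0, 1],
             [0, 1]]           # (I - E)(x, y) = (y, y): along N onto M

v = [5, 2]
n = apply(E, v)                # the component of v in N = ran E
m = apply(I_minus_E, v)        # the component of v in M = ker E

# E is idempotent: E(Ev) = Ev
assert apply(E, n) == n
# v decomposes as m + n with m in ker E and n in ran E  (1.1.11, 1.1.14)
assert [m[0] + n[0], m[1] + n[1]] == v
assert apply(E, m) == [0, 0]   # m lies in ker E = M
# I - E is itself idempotent, fixing its range M  (1.1.15)
assert apply(I_minus_E, m) == m
```

Note that E here is written with respect to the standard basis; with respect to a basis adapted to the decomposition M ⊕ N its matrix would be diagonal with entries 0 and 1.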