<<

Notes on C∗-

Robert Yuncken

Updated: December 14, 2015 Contents

I Fundamentals 2

I.1 Motivation ...... 2

I.2 ∗-algebras ...... 3

I.3 Abelian C∗-algebras, part I ...... 8

I.4 Unitalizations ...... 12

I.5 Spectra in Banach algebras ...... 15

I.6 Abelian Banach algebras and the Gelfand Transform . . . 20

I.7 Abelian C∗-algebras and the continuous functional cal- culus ...... 24

I.8 Positivity and order ...... 29

I.9 Representations of C∗-algebras ...... 36

II The Toeplitz & the Toeplitz Index Theorem 47

II.1 Matrices of operators ...... 47

II.2 Compact operators & their representations ...... 47

II.3 Fredholm operators ...... 50

II.4 The Toeplitz algebra ...... 54

III C∗-algebras 60

III.1 algebras ...... 60

III.2 C∗-algebras of abelian groups ...... 69

III.3 The C∗-algebras of the free group ...... 75

1 Chapter I

Fundamentals

I.1 Motivation

There are various possible motivations for studying C∗-algebras.

1. C∗-algebras abstract the properties of the bounded op- erators on a . This is maybe not a very compelling motivation, but it is easy to describe, so we will start there.

2. C∗-algebras generalise the properties of locally compact Hausdorff topological spaces. This is a great motivation, but it is harder to explain. We will see more and more of this as we proceed.

3. Quantum Mechanics / Representation Theory. These were the original motivations, but let’s avoid those for now.

I.1.1 Bounded operators on Hilbert space.

Let H be a Hilbert space, B(H) the set of bounded operators on H.

Recall that B(H) has the following properties:

1. B(H) is a with

A = sp A = sp , Ay . k k  1 k k  , y 1 |〈 〉| k k≤ k k k k≤

2 2. B(H) has a product satisfying AB A B k k ≤ k kk k ∗ ∗ 3. B(H) has an involution A A defined by , A y = A, y for all , y H. It satisfies 7→ 〈 〉 〈 〉 ∈ ∗ (a) A = A ∀A B(H), k ∗k k k 2 ∈ (b) A A = A ∀A B(H), k k k k ∈

Let’s abstract these properties.

I.2 ∗-algebras

I.2.1 Definitions

We always work over the field C.

Definition I.2.1. An algebra A (over C) is a C- with an associative bilinear product : A A A, i.e. ∀, b, c A, λ C, · × → ∈ ∈ (b)c = (bc) (b + c) = b + c ( + b)c = c + bc (λ)b = λ(b) = (λb)

If b = b for all , b A then A is called commutative or abelian. ∈

Definition I.2.2. A ∗-algebra (or involutive algebra) is an algebra A equipped with an involution ∗ : A A s.t. ∀, b A, λ C, → ∈ ∈ ∗ ∗ ( ) =  (involution) ∗ ∗ ∗ ∗ ∗ ( + b) =  + b , (λ) = λ (conjugate-linear) ∗ ∗ ∗ (b) = b  (anti-homomorphism)

∗ Example I.2.3. 1. A = C with z = z, t C ∗ 2. A = Mn( ) with T = T , ∗ 3. A = B(H) with T = adjoint .

3 Definition I.2.4. A normed algebra is an algebra A with a vector space s.t. b  b , ∀, b A. k k ≤ k kk k ∈ If A is complete w.r.t. it is called a . k · k Definition I.2.5. A C∗-algebra is a Banach ∗-algebra which satisfies

∗ 2   =  ∀ A. (I.2.1) k k k k ∈ Equation (I.2.1) is called the C∗-identity. It is not clear yet why it is a good idea. In fact, it is a truly brilliant idea of Gelfand and Naimark (1943), but we will only see why later.

Example I.2.6. 1. A = C with z = z . k k | | 2. A = B(H) with operator norm T = sp  1 T . C Ckn k k k≤ k k This includes A = Mn( ) = B( ). 3. Any norm-closed ∗-subalgebra1 of a C∗-algebra is again a C∗- algebra. ∗ In particular, any closed subalgebra of B(H) is a C -algebra. We will refer to these as concrete C∗-algebras.

One of the major theorems about C∗-algebras (the Gelfand-Naimark Theorem) says that any C∗-algebra is isometrically isomorphic to an algebra of operators on some Hilbert space H, i.e. a concrete C∗- algebra. But it will take some time to prove this. Often it is more useful to treat C∗-algebras abstractly. Remark I.2.7. For examples of Banach algebras which are not C∗- algebras, see the exercises.

The C∗-identity can be weakened slightly without changing the defi- nition:

Lemma I.2.8. Let A be a Banach ∗-algebra s.t.

∗  2 ∀ A. k k ≥ k k ∈ Then A is a C∗-algebra.

Proof. For any  A, ∈  2 ∗ ∗  , k k ≤ k k ≤ k kk k  ∗ . ⇒ k k ≤ k k 1 A ∗-subalgebra of a ∗-algebra A is a subset which is closed with respect to all B bb B b∗ B the *-algebra operations, i.e. a linear subspace such that 0 and for all b, b B ∈ ∈ 0 . ∈

4 ∗ ∗ ∗ Exchanging  and  gives   , so  =  . Thus, k k ≤ k k k k k k 2 ∗ ∗ 2      =  . k k ≤ k k ≤ k kk k k k

It is worth repeating the following observation from the proof.

Lemma I.2.9. In a C∗-algebra A,

∗  =  , ∀ A. k k k k ∈

I.2.2 Homomorphisms; Representations

Definition I.2.10. A ∗-homomorphism φ : A B between ∗-algebras is a map which respects all ∗-algebra operations,→ i.e. a s.t. ,  A ∀ 0 , ∈ φ  φ  φ  , ( 0) = ( ) ( 0) ∗ ∗ φ( ) = φ() .

If A, B are Banach ∗-algebras, we usually demand that φ be bounded.

Remark I.2.11. Later, we will see that boundedness is automatic for ∗-homomorphisms between C∗-algebras.

Definition I.2.12. 1. Let A be a Banach algebra, E a Banach space. A homomorphism π : A B(E) is called a representation of A on E. →

2. Let A be a Banach ∗-algebra, H a Hilbert space. A ∗-homomorphism π : A B(H) is called a ∗-representation of A on H. → 3. If ker π = 0 then π is called a faithful representation.

Thus, a faithful ∗-representation of a C∗-algebra A is a realization of A as a concrete C∗-algebra of operators on H.

We will mostly be interested in Hilbert space representations. Here is one useful exception.

5 Definition I.2.13. Let A be a Banach algebra. The left multiplier representation of A on itself is

L : A B(A); L()b := b. 7→ Remark I.2.14. There is no obvious ∗-structure on B(A), so we are not talking about a ∗-homomorphism here. Nevertheless, this map has particularly nice properties when A is a C∗-algebra.

Proposition I.2.15. If A is a Banach algebra, L : A B(A) has L 1. 7→ k k ≤

If A is a C∗-algebra, then L is an isometry, i.e. ∀ A ∈  = L() B(A) = sp b . k k k k b A, b 1 k k ∈ k k≤

∗ Proof. Direct computation. For the C -algebra case, consider b = 1 ∗   . k k

I.2.3 Basic terminology

Definition I.2.16. A unit in an algebra is a nonzero element 1 A ∈ such that 1 =  = 1 for all  A. An algebra with unit is called unital. ∈

If A,B are unital algebras, a homomorphism φ : A B is called unital → unital unifère if φ(1) = 1. ≡ Lemma I.2.17. 1. Let A be a ∗-algebra. Suppose 1 A (nonzero) ∈ ∗ is a left unit, i.e. 1 =  for all  A. Then it is a unit and 1 = 1 . ∈ ∗ 2. In a unital C -algebra, 1 = 1. k k

Proof. Exercise .

Terminology for elements of a C∗-algebra A follows that of operators on Hilbert space:

∗ ∗ •  A is normal if   =  . ∈ ∗ ∗ •  A is unitary if   = 1 =  . ∈ ∗ •  A is self-adjoint if  =  . ∈ 2 ∗ • p A is a projection if p = p = p . ∈

6 Remark I.2.18. Normality is a very important property. It allows us to ∗ ∗ 2 2 ∗ allow = talk of in  and  , such as ( )  2  1 without permettre worrying about the order of  and ∗. − −

Example I.2.19. The classic example of a non- is the unilateral shift. shift = décalage 2 Let H = ℓ (N). The unilateral shift is the operator

T B(H); T(0, 1, 2,...) (0, 0, 1,...). ∈ 7→ Its adjoint is the left shift:

T B(H); T(0, 1, 2,...) (1, 2, 3 ...). ∈ 7→ ∗ Then T T = d while

∗ TT B(H); T(0, 1, 2,...) (0, 1, 2,...). ∈ 7→ The difference T∗T TT∗ is a rank-one projection. −

I.2.4 Ideals; Quotients

Definition I.2.20. An ideal  / A of an algebra will always mean a two-sided ideal, i.e. A , A . ⊆ ⊆ ∗ If A is a ∗-algebra, we say  is a ∗-ideal if  = .

An ideal  / A is proper if  = A. 6 Remark I.2.21. In a proper ideal , no element is invertible. For if b  b 1b      A ∈ is invertible, then 1 = − and so = 1 ∀ . ∈ ∈ ∈ Example I.2.22. Let H be a Hilbert space and T B(H). The rank of T is ∈ rnk(T) = dim im(T).

Denote the set of finite-rank operators by K0(H).

It is easy to check that ∀S,T B(H), ∈ rnk(ST) rnk(S). rnk(T), ∗ ≤ rnk(T ) = rnk(T).

Therefore K0(H) is a ∗-ideal in B(H).

It is not a closed ∗-ideal. The closure K(H) = K0(H) is the ideal of compact operators.

7 Remark I.2.23. Equivalently, T B(H) is compact iff the image of the unit ball in H under T has compact∈ closure. We won’t need this. Lemma I.2.24. The kernel of a bounded (∗-)homomorphism is a closed (∗-)ideal.

Proof. Direct calculation.

For a closed (∗-)ideal  / A, the quotient A/ = { +   A} is again a | ∈ Banach (∗-)algebra with the usual quotient norm: + = min  +  (direct check). k k ∈ k k Remark I.2.25. The analogous statement is also true for C∗-algebras: ∗ ∗ statement = if  is a closed ∗-ideal in a C -algebra A then A/ is a C -algebra. But énoncé this is harder to prove, since the C∗-identity isn’t obvious. We will prove it later.

I.3 Abelian C∗-algebras, part I

I.3.1 C0(X)

This is a very important class of examples.

Let X be a locally compact .

(Recall: X is locally compact if every  X has a relatively compact open neighbourhood, i.e. ∃U  open s.t.∈ U is compact. e.g. Rn, or any closed or open3 subset of Rn.) Definition I.3.1. A continuous ƒ : X C vanishes at ∞ if → ∀ε > 0, ∃K X compact s.t. ƒ () < ε ∀ X K. ⊂ | | ∈ \ C Notation: C0(X) = {ƒ : X continuous, vanishing at ∞}. → ∗ Proposition I.3.2. C0(X) is a commutative C -algebra with

• ƒ = ƒ ∞ = sp ƒ () (ƒ C0(X)), k k k k  X | | ∈ ∈ • Pointwise algebra operations:

(ƒ + g)() = ƒ () + g(), (λƒ )() = λƒ (), ∗ (ƒ g)() = ƒ ()g(), ƒ () = ƒ (), C for all ƒ , g C0(X), λ ,  X. ∈ ∈ ∈ 8 Proof. From a previous course we know C0(X) is a Banach space. The axioms of a ∗-algebra follow from those for C, since operations are pointwise. Finally, ∀ƒ , g C0(X), ∈ ‚ Œ ‚ Œ ƒ g = sp ƒ ()g() sp ƒ () sp g() = ƒ g . k k  X | | ≤  X | |  X | | k kk k ∈ ∈ ∈

∗ 2 ƒ ƒ = sp ƒ ()ƒ () = ƒ . k k  X | | k k ∈

Remark I.3.3. If X is compact, then every function ƒ : X C vanishes → at ∞, so C0(X) = C(X). Exercise I.3.4. (This is just a sketch. See the exercise sheet for details.)

1. Let X be a locally compact Hausdorff space. Show that X is compact if and only if C0(X) is unital. 2. Suppose X is not compact. Define the one-point compactifica- tion X˜ and show that it is compact, Hausdorff in which X sits as an open subset.

3. Show that C0(X) sits inside C(X˜) as a closed ideal and that C(X) = C1 1 C0(X) , where denotes the constant function 1 on X˜. Hint: ⊕ Show that C0(X) = ker(ev∞).

4. Show that RÝn Sn, e.g. by stereographic projection. '

I.3.2 Representations of C0(X)

Let X be locally compact Hausdorff space. Let μ be a Borel probability measure on X.

2 Then A = C(X) is represented on H = L (X; μ) by pointwise multiplica- tion: 2 πμ(ƒ )h := ƒ h, (ƒ C0(X), h L (X; μ)). ∈ ∈

Note that “functions on X” are being used in two ways:

∗ • ƒ is a in the C -algebra C0(X), 2 2 • h is an L -function in the Hilbert space L (X; μ).

9 2 In fact the space H = L (X; μ) is not really a space of functions, but of equivalence classes of functions (modulo equality almost every- where). almost everywhere μ δ This process can be quite brutal. For instance if =  (Dirac measure presque partout≡ 2 at  X) then L X; δ C, since two functions are equivalent if and ( ) ∼= only∈ if they have the same value at . The resulting representation

2 C C π : C0(X) B(L (; μ)) = B( ) = → ∼ is given by evaluation at , i.e. π(ƒ ) = ƒ () for ƒ C0(X). ∈ The representation πμ is faithful if and only if μ is a strictly positive measure, i.e. μ(U) > 0 for every open set U X (exercise). ⊆

I.3.3 *-Homomorphisms and proper maps

Definition I.3.5. A continuous map α : X Y between locally com- pact Hausdorff spaces is called proper if the→ preimage of any compact set is compact.

Remark I.3.6. If X is compact this condition is automatic.

Roughly, α is proper if α(n) ∞ whenever n ∞. This is literally → → true if we allow (n) to be a net (see later). Example I.3.7. Some proper maps:

• Any α : X Y if X is compact. → • α = id : R R. → • α : R R; α() =  . → | | • α : R R; α() = log(1 +  ). → | |

Not proper:

• α : R R; α() = c (constant map). →  • α : R R; α() = . → p1+2

∗ The pull-back by a proper map α is the map α : C0(Y) C0(X) is defined by →

∗ α ƒ () = ƒ α(), (ƒ C0(Y),  X). ◦ ∈ ∈

10 Remark I.3.8. The pull-back α∗ can be defined even if α is not proper, but the image will be in C(X) not C0(X). See the proof of the next lemma.

Lemma I.3.9. For any continuous proper map α : Y X the pull- ∗ → back α is a ∗-homomorphism C0(X) C0(Y). →

Proof. The first thing to prove is that α∗ is well-defined, i.e. its image lies in C0(X). Let ε > 0. If ƒ C0(Y) then there is K Y compact such ƒ y < ε y Y K ∈ α∗ƒ  ƒ α  ⊆ε  X α 1 K that ( ) ∀ . Then ( ) = ( ( ) ∀ − ( ). | α 1| K ∈ \ | | | | ≤ α∗ƒ ∈ C\ X Since − ( ) is compact by properness, this proves 0( ). ∈ To check that α∗ is a ∗-homomorphism is a direct calculation: e.g. ∗ ∗ ∗ α (ƒ + g)() = (ƒ + g)(α()) = ƒ (α()) + g(α()) = (α ƒ + α g)(). Example I.3.10. Let Y X be a closed subspace. ⊆ The inclusion α : Y , X is proper. (This needs Y to be closed. Why?) → ∗ ∗ The pull-back α : C0(X) C0(Y) is restriction α : ƒ ƒ Y . The kernel is → 7→ | ∗ ker α = {ƒ C0(X) ƒ Y 0} = C0(X Y). ∈ | | ≡ ∼ \ In particular, if  X then restriction to {} is evaluation: ∈ C ev : C0(X) C({}) = ; ƒ ƒ (). → ∼ 7→ The kernel ker(ev) is an ideal of codimension 1. Remark I.3.11. A (non-zero) ∗-homomorphism A C is called a character → of A. Evaluation at a point on C0(X) is a very important example.

I.3.4 Noncommutative

The above examples (which will be completely developed in a later lecture) could be viewed in two different ways:

∗ 1. C0(X) is a particular example of a C -algebra. ∗ 2. General C -algebras are noncommutative generalizations of C0(X). “” ⇒

This idea goes back to the foundations of quantum mechanics, and appears throughout its history. But it was developed greatly by in the ’80s & ’90s.

11 We will see that many algebraic properties of C∗-algebras are anal- ogous to topological properties of locally compact Hausdorff spaces. Here are some very simple examples. Let X, Y be locally compact Hausdorff spaces, A = C0(X), B = C0(Y) and α : X Y a continuous proper map. →

Topology Algebra

X is compact A is unital ⇐⇒ (For X compact) A has a non-triv. projection ⇐⇒ X is connected 0 = p = 1 6 6 α is surjective α∗ is injective α is injective ⇐⇒ α∗ is surjective ⇐⇒

Proof (first two lines of the table). If ƒ C0(X) is a unit then ƒ g = g 1 ∈ 1 ∀g C0(X). So ƒ = (constant function). But is in C0(X) iff X is compact.∈

2 If p C0(X) is a projection then p() = p() ∀ X. So p() {0, 1} ∀ ∈X.A {0, 1}-valued continuous function is constant∈ on connected∈ components.∈ The result follows.

Exercise I.3.12. Prove the last two lines.

I.4 Unitalizations

Non-unital algebras are important, e.g. K(H), C0(X). But we can always add a unit if we need to. This is called a unitalization. Lemma I.4.1. Any (∗-)algebra A can be embedded into a unital (∗- ˜ embedded = )algebra A as an ideal of codimension 1. plongé

Proof. Put A˜ = A C; We will write the element (, λ) A˜ formally as ⊕ ∈  + λ1. This notation suggests the following definition for a product on A˜: ( + λ1)(b + μ1) := (b + μ + λb) + (λμ)1. This is associative, bilinear and A is an ideal (direct check).

If A is a ∗-algebra, then so is A˜ with the involution

∗ ∗ ( + λ1) =  + λ1.

12 Proposition I.4.2. Any Banach algebra A can be isometrically em- bedded into a unital Banach algebra A˜ as a closed ideal of codimen- embedded = sion 1. plongé

Proof. We define a Banach space norm on A˜ by2

(, λ) A˜ =  A + λ . k k k k | |

It remains to check:

1 1 ( + λ )(b + μ ) A˜ = (b + μ + λb) A + λμ k k k k | |  A b A + μ  A + λ b A + λ μ ≤ k k k 1k | |k 1k | |k k | || | =  + λ A˜ b + μ A˜. k k k k

Proposition I.4.3. Any non-unital C∗-algebra can be isometrically embedded in a unital C∗-algebra as a closed ideal of codimension 1.

Proof. Again, we let A˜ = A C with the ∗-algebra structure as above. The hard part is defining⊕ a C∗-norm on A˜. For this we will use the left-multiplier representation L : A B(H). → We can extended L to A˜ as follows:

L˜ : A˜ B(A); L˜( + λ1) = L() + λ d . → i.e. L˜( + λ1)b =  + λb. This is an algebra homomorphism (direct check).

We then define 1 ˜ 1  + λ A˜ := L( + λ ) B(A). k k k k Since L() B(A) =  A (Proposition I.2.15), this agrees with the orig- inal normk onk A. k k

To be a well-defined norm, we need to check that L˜ is injective. Sup- L˜  λ1 λ L λ 1 λ 1 pose ( + ) = 0. If = 0 we get ( − ) = dA so − is a left 6 − − unit, which is absurd since A is non-unital. So λ = 0. But L is injective, so  = 0.

Finally we need to prove the C∗-identity. Let ˜ A˜. The definition of ∈ the norm on B(A) gives: ∀ε > 0, ∃b A with b 1 s.t. b˜ A ∈ k k ≤ k k ≥ 2 1 In fact, there are many possible norms that would work here, e.g.  + λ A˜ := mx{  A, λ }. k k k k | |

13 ˜ A˜ ε. Then k k − ∗ ∗ ∗ ˜ ˜ A˜ b A ˜ ˜ A˜ b A k k ≥ k ∗k ∗k k k k b (˜ b˜ ) A ≥ k ∗ k = (b˜ ) b˜ A k k b 2 = ˜ A k k 2 ( ˜ A˜ ε) . ≥ k k − ˜∗˜ ˜ 2 So A˜ A˜. By Lemma I.2.8, this completes the proof. k k ≥ k k Remark I.4.4. We could make the construction A˜ above even if A is unital. In that case, A˜ is isomorphic to A C with ∗-algebra structure ⊕ , z  , z  , zz , ( )( 0 0) = ( 0 0) , z ∗ ∗, z , ,  A, z, z C. ( ) = ( ) ∀ 0 0 ∈ ∈

I.4.1 The one-point compactification

Recall C0(X) is unital iff X is compact. Here we describe the unitaliza- tion Cã0(X) topologically. Definition I.4.5. Let X be locally compact Hausdorff. The one-point compactification of X is the space

X˜ := X {∞} t where ∞ is a formal element called the point at infinity, equipped with the topology of open sets

τ˜ := {U X open} {X˜ K K X is compact}. ⊂ t \ | ⊂

This means that n ∞ iff (n) is eventually outside any given com- → eventually = pact set. ultimement Lemma I.4.6. τ˜ is indeed a topology on X˜.

Proof. Exercise.

Example I.4.7. RÝn Sn. ∼=

Recall that Sn {N} ∼= Rn where N is the “north pole”, for instance by stereographic\ projection.→ Denote the homeomorphism by α. We augment this to a map n α˜ : S RÝn → 14 by sending N to ∞. This is a bijection between compact Haussdorff α α 1 spaces, so it suffices to show that one of ˜ or ˜− is continuous. n n Let U S be open. If U S N then α(U) is open. Otherwise, U N, ⊆ n ⊂ \ n n 3 so K = S U is a compact subset of S N, so R˜ α˜(U) = α˜(K) is \ α U \ α 1 \ compact, so ( ) is open. This proves that ˜− is continuous.

Note that C0(X) = {ƒ C(X˜) ƒ (∞) = 0} = ker ev∞ ∈ | is an ideal of codimension 1 in C(X˜).

Proposition I.4.8. For any locally compact Hausdorff space X, Cã0(X) = ∗ ∼ C0(X˜) as C -algebras.

Proof. Consider the map 1 1 φ : Cã0(X) C(X˜); ƒ λ ƒ + λ X˜ → ⊕ 7→ 1 where X˜ denotes the constant function 1. This is a ∗-homomorphism (direct check). It is bijective, since every ƒ C(X˜) can be written as ∈ 1 1 C1 ƒ = [ƒ ƒ (∞) X˜ ] + ƒ (∞) X˜ C0(X) + X˜ − ∈ and this decomposition is unique.

I.5 Spectra in Banach algebras

I.5.1 The set of invertible elements

Let A be a unital Banach algebra.  A  1 A As usual is invertible if it has a two-sided inverse − s.t.  1 ∈ 1 ∈ − = 1 = − . Remark I.5.1. A one-sided inverse is not enough: consider the unilat- eral shift. Lemma I.5.2. Let A be a unital Banach algebra. If  A with  < 1 then 1  is invertible, and ∈ k k −  1  2 (convergence in norm). (1 )− = 1 + + + − ··· Proof. The sum is absolutely convergent since n  n. k k ≤ k k Moreover, 1  lim 1  n lim 1 n+1 1, ( ) n ∞( + + + ) = n ∞ = − → ··· → − so the limit is a right-inverse to 1 . A similar calculation shows it is a left-inverse. −

15 Let  A be invertible. If   1 1 then   Corollary I.5.3. − − is invertible, with ∈ k k ≤ k k −

∞ X   1  1  1 n. ( )− = − ( − ) − n=0

Therefore the set G(A) of invertible elements is open in A.

Proof.    1 . Write = (1 − ) − −

I.5.2 Spectra

A is still a unital Banach algebra.

Definition I.5.4. The spectrum of  A is ∈ Sp() = {λ C (λ1 ) is not invertible}. ∈ | − The complement ρ() := C Sp() is called the resolvent set. The function \ ρ  A λ λ  1 R : ( ) ; ( 1 )− → 7→ − is called the resolvent function.

Exercise I.5.5. Let , b A. Show that (1 b) is invertible if and ∈ − only if (1 b) is invertible. Hence show that Sp(b) and Sp(b) are the same,− except possible for 0, that is

Sp(b) {0} = Sp(b) {0}. ∪ ∪ Theorem I.5.6. For any  A, Sp() is a non-empty compact subset ∈ of C, contained in B(0;  ), and the resolvent function is analytic on k k C Sp(). \

Proof. The resolvent set is the preimage of G(A) under the continuous map λ λ1 , so is open. Thus Sp() is closed. 7→ − If λ >  then λ1  is invertible (Lemma I.5.2) so Sp() is bounded, hence| | compact.k k −

Let λ0 ρ(). Then λ1  is invertible for λ in some ball around λ0. ∈ λ λ <− λ  1 1 Specifically, if 0 ( 01 )− − we have the formula | − | k − k λ λ  1 λ  λ λ 1 R( ) := ( 1 )− = [( 01 ) + ( 0)1]− − − X − λ  1 λ λ n λ  n. = ( 01 )− ( 0) ( 0 )− − − −

16 This is a power series in (λ λ0) so R is an analytic function. − power series = série entière If Sp() were empty, then R would be an entire function. Also, for λ >  , | | k k ∞ X λ  1 λ 1 λ nn λ 1 λ 1  1 ( 1 )− − − − (1 − )− k − k ≤ | | n=0 k k ≤ | | − | | k k λ  1 λ , = ( )− 0 as ∞ | | − k k → | | → so R is bounded. By Liouville’s Theorem, R(λ) would be constant, which is absurd.

If A is not unital, we define the spectrum of  A to be its spectrum in the unitalization A˜. ∈

Example I.5.7. If A = C(X) with X a compact Hausdorff space, then Sp() = im() for any  C(X). ∈ If A = C0(X) with X locally compact Hausdorff, then Sp() = im() ∪ {0} for any  C0(X). ∈ Theorem I.5.8 (Gelfand-Mazur). The only complex Banach algebra in which every non-zero element is invertible is C.

Proof. Suppose every element of A is invertible. Let  A. Since ∈ Sp() is non-empty, there is λ Sp() s.t. λ1  is not invertible, i.e. ∈ − λ1  = 0. Thus  is scalar. −

I.5.3

This section is not profound, but it prepares us for the much more powerful “continuous functional calculus” later on.

Let A be an algebra. Let C[z] be the algebra of polynomials in z. Fix  A and consider the map ∈ C[z] A X n → X n p = cnz p() = cn . n 7→ n This map is called the polynomial functional calculus (applied to ).

Proposition I.5.9. The map p p() is an algebra homomorphism, and satisfies the spectral mapping7→ property:

Sp(p()) = p(Sp()).

17 Proof. Exercise

2 3 This allows us to consider polynomial functions of , e.g.  ,  ++1.

In a Banach algebra, one can extend this to define ƒ () for any ƒ : C C which is holomorphic on some neighbourhood of the spectrum → Sp() via the Cauchy integral formula: 1 I ƒ  ƒ z z  1dz, ( ) := ( )( )− 2π  − for any curve (or union of curves)  encircling the spectrum. This is called the holomorphic functional calculus. We won’t do this in this course.

In a C∗-algebra, we will be able to define continuous functions of , e.g. p,  , exp(), log(), etc. This is the continuous functional calculus. | |

In a one can go even further to the Borel 1 functional calculus, e.g. phase Ph(), or indicator functions Y () for Y Sp(). We may look at this later if we have time (but probably not).⊂

I.5.4 formula

Let A be a Banach algebra.

Definition I.5.10. The spectral radius of  A is ∈ spectarl radius = rayon  λ λ  . spr( ) = sp{ Sp( )} spectral | | | ∈ Theorem I.5.11 (Spectral radius formula). For any  A, ∈ n 1 spr  lim  n . ( ) = n ∞ → k k

Proof. Put r = spr(). So Sp() B(0; r). ⊆ Recall that the resolvent function R : C A is an A-valued analytic → function on C Sp(), in particular on z > r. Also it tends to 0 in norm as z ∞. This\ means that the function| | | | → ∞ X z z 1 z 1  1 zn+1n R( − ) = ( − 1 )− = 7→ − n=0

18 B r 1 is analytic on (0; − {0} with removable singularity at 0. \ Cauchy’s nth root test implies

n 1 lim sp  n r. n ∞ k k ≤ →

n n At the same time, there is λ Sp() with λ = r. Then λ Sp( ) n n n ∈ | | n 1 ∈ for all n. So r = λ  for all n. Therefore,  n r for all n. The result follows.| | ≤ k k k k ≥

I.5.5 Spectral radius in C∗-algebras

For normal elements in a C∗-algebra, the spectral radius formula is even cleaner. (This is part of the power of the C∗-identity.)

Proposition I.5.12. Let A be a C∗-algebra. For any normal  A, ∈ spr() =  . k k

Proof. For  A normal ∈ 1 1 2 ∗ ∗ ∗ ∗ 2 ∗ 2 2  =   = ( ) ( ) 2 = ( ) ( ) 2 =  . k k k k k k k k k k By induction, 2k 2k  =  , ∀k N. k k k k ∈ Thus, k 1 2 k spr() = lim  2 =  . k ∞ k k k k →

Example I.5.13. To understand this it helps to think of the case A = C Mn( ).

A is normal iff it is diagonalizable (exercise!). So Proposition I.5.12 says that the operator norm of a diagonalizable matrix is equal to the largest absolute value of its eigenvalues.

A non- has non-diagonal Jordan form. For instance, the ‚ Œ 0 1 matrix  has only one eigenvalue, namely 0. But it does = 0 0 not have operator norm 0.

Nevertheless, we can still use the spectral radius formula to calculate the norm of a non-normal element.

19 Corollary I.5.14. For any  in a C∗-algebra A,

1 ∗  = spr( ) 2 . k k

1 ∗ ∗ Proof.   is normal and  =   2 . k k k k

This result is philosophically very important. It says that the norm on a C∗-algebra is determined completely by the algebra structure (invertibility of elements)!

Corollary I.5.15. If a ∗-algebra admits a C∗-norm, it is unique.

I.6 Abelian Banach algebras and the Gelfand Transform

I.6.1 Characters

Let A be a unital Banach algebra.

Definition I.6.1. A character is a nonzero homomorphism φ : A C. → Lemma I.6.2. Characters are automatically bounded of norm 1.

Proof. First, note that φ(1) = 1, so φ 1 (or φ is unbounded). k k ≥ Suppose φ = 1. Then ∃ A with  < 1 but φ() = 1. Put b k 1k 6 ∈ k k | | = (1 )− . − φ(1) = φ(b b) = φ(b) φ()φ(b) = 0. − − Contradiction.

Note that ker(φ) is a closed ideal of codimension 1. In particular it is a maximal ideal, i.e. it is a proper ideal and there are no ideals ker(φ)    A. Lemma I.6.3. Maximal ideals are automatically closed.

Proof. Let M be a maximal ideal. Recall that a proper ideal contains no invertible elements. But the open ball of radius 1 around 1 A ∈ consists entirely of invertible elements, so dist(1,M) = 1. Thus M is a closed proper ideal containing M. By maximality, M = M.

20 Theorem I.6.4. Let A be an abelian unital Banach algebra. The map

φ ker(φ) 7→ defines a bijective correspondence between multiplicative linear func- tionals and maximal ideals.

Proof. For any characer φ, ker(φ) is an ideal of codimension 1, so maximal. Moreover, φ is completely determined by ker(φ) and the value φ(1) = 1, so the map is injective.

Surjectivity: Let M be a maximal ideal. The quotient A/M is a simple Banach algebra, so every element is invertible. By Gelfand-Mazur, A/M C. The quotient map is a multiplicative linear functional with ∼= kernel M.

I.6.2 The maximal ideal space

Here A is an abelian Banach algebra, possible non-unital.

Definition I.6.5. The set

Aˆ := {characters of A} = { maximal ideals of A}.

is called the maximal ideal space or spectrum of A.

Example I.6.6. Consider A = C0(X).

Associated to each point  X is a character ∈ ev : ƒ ƒ (). 7→ The associated maximal ideal is  = {ƒ C0(X) ƒ () = 0}. ∈ | Thus we have a map X , Aˆ via  ev. An application of the Stone- Weierstrass Theorem shows→ that this7→ is a bijection.

What about the topology on Aˆ?

We have Aˆ A∗. But A∗ has several possible . ⊂ The norm topology is not a good choice. For instance, with A = C(X), given any , y X there is a continuous function ƒ C0(X) with ∈ ∈ ev(ƒ ) = ƒ () = 0 and evy(ƒ ) = ƒ (y) = 1. Therefore e evy 1. Therefore the norm topology makes X Aˆ into a discretek − space.k ≥ ⊂ 21 On the other hand, if n  in X, then for any fixed ƒ C0(X) we have → ∈

(evn ev)ƒ = ƒ (n) ƒ () 0. | − | | − | → That is evn ev in the weak*-topology, i.e. the topology of point- wise convergence.→

Proposition I.6.7. If A is a unital Banach algebra, then Aˆ is a com- pact Hausdorff space with the weak*-topology. If A is non-unital then Aˆ is locally compact Hausdorff.

Proof. The weak∗-topology is always Hausdorff.

Let A be unital. A weak*-limit of multiplicative linear functionals is again multiplicative (direct check) so Aˆ is a closed subset of the unit ball of A∗. By Banach-Alaoglu, this is compact.

If A is not unital, embed A , A˜. → If φ is a character of A then we can extend it to

φ˜ : A˜ C; φ˜( + λ1) := φ() + λ, → which is a character of A˜ (direct check). So we get an inclusion Aˆ , A˜ˆ. It is continuous for the weak*-topologies (direct check). →

Conversely, if ψ is a character of A˜ then we can restrict it to a linear functional ψ A on A. It is multiplicative, but may be zero! But there is only one character| of A˜ which maps to zero, since it must satisfy

ψ( + λ1) = 0 + λψ(1) = λ, ∀ + λ1 A.˜ ∈ Let us denote this particular character by ψ∞. Then restriction gives ˆ a continuous map A˜ {ψ∞} Aˆ, inverse to the above. \ → Remark I.6.8. If A is non-unital then A˜ˆ is the one-point compactifica- tion of Aˆ, with ψ∞ being the point at infinity.

I.6.3 The Gelfand Transform

We now have:

∗ • A loc. comp. Hausdorff space X gives us a C -algebra C0(X), • A Banach algebra A gives us a loc. comp. Hausdorff space Aˆ.

22 What is the relation? For a C∗-algebra we will see that these pro- cesses are inverse to one another (up to natural isomorphism).

Here we start with an abelian Banach algebra A.

Definition I.6.9. The Gelfand transform is the map  : A C0(Aˆ) defined by  ˆ where → 7→ ˆ(φ) := φ()(φ Aˆ character of A). ∈ (We will prove that ˆ really is in C0(Aˆ) in the Theorem below.) Theorem I.6.10. The Gelfand transform is an algebra homomor- phism A C0(Aˆ) of norm  1. The image separates the points of → k k ≤ Aˆ, i.e. ∀φ = ψ Aˆ, ∃ A st ˆ(φ) = ˆ(ψ). 6 ∈ ∈ 6

Proof. This is an exercise in following definitions.

Well defined: For  A, ˆ is continuous by definition of the weak*- ∈ topology. If A is non-unital then ˆ(ψ∞) = 0, so ˆ C0(X). ∈ Algebra homomorphism: Direct check, e.g.

(Ø+ b)(φ) = φ( + b) = φ() + φ(b) = ˆ(φ) + bˆ(φ). bounded of norm 1:

ˆ = sp ˆ(φ) = sp φ()  . k k φ Aˆ | | φ Aˆ | | ≤ k k ∈ ∈

Separates points: Tautology.

Corollary I.6.11.  A is invertible ˆ C0(Aˆ) is invertible. Con- sequently, ∈ ⇐⇒ ∈ Sp() = Sp(ˆ) = {φ() φ Aˆ}. | ∈

Proof.  Ô1 ˆ If is invertible, − is an inverse to . If  is not invertible then A is a proper ideal so is contained in some maximal ideal M. The character φ Aˆ with ker(φ) = M is s.t. ˆ(φ) = ∈ φ() = 0, so ˆ is not invertible.

Question: Is the Gelfand transform compatible with ∗-structures?

We equip C0(Aˆ) with the usual involution ∗ of pointwise conjugation.

Lemma I.6.12. Let A be a Banach ∗-algebra.  : A C0(Aˆ) is a ∗ → ∗-homomorphism iff ∀ A with  =  , Sp() R. ∈ ⊂ 23 ∗ ∗ Proof. If  is a ∗-homomorphism then  =  ˆ = ˆ so ˆ is real ⇒ valued. Thus Sp() = Sp(ˆ) R. ⊂ ∗ ∗ Suppose Sp() R for  =  . Then im(ˆ) Sp() R so ˆ = ˆ. For ⊂ ⊆ ⊂ general b A, write b =  + y with ∈  1 b b∗ , y 1 b b∗ . = 2 ( + ) = 2 ( ) − ∗ These are self-adjoint. So bÓ∗ = Ùy = ˆ yˆ = (bˆ) . − − Example I.6.13. The is an example for which  is not a ∗-homomorphism (see exercises).

But C∗-algebras are always okay. . .

I.7 Abelian C∗-algebras and the continu- ous functional calculus

I.7.1 Abelian C∗-algebras, part II

Now another very important consequence of the C∗-identity.

Lemma I.7.1. Every self-adjoint element in a C∗-algebra has real spectrum.

Proof. Let  A be self-adjoint. Let λ R. From the calculation ∈ ∈ 2 ∗ 2 2 2 2  λ = ( λ)( λ) =  + λ 1 λ +  , k − k k − − k k k ≤ k k p the spectrum of  is contained in the ball of radius λ2 +  2 about λ. The intersection of all these balls is thek k interval [  ,  ] R. −k k k k ⊂ Theorem I.7.2 (Commutative Gelfand-Naimark Theorem). Let A be a commutative C∗-algebra. Then A is isometrically ∗-isomorphic to C0(X) for some locally compact Hausdorff space X. Specifically X = Aˆ.

Proof. The Gelfand transform  : A C0(Aˆ) preserves the ∗ and all spectra. The norm on a C∗-algebra→ is determined completely by these (see Corollary I.5.14). So  is an isometry, hence injective.

The image of  separates points, so is dense by Stone-Weierstrass. It is also complete. So  is surjective.

24 Proposition I.7.3. Every non-zero ∗-homomorphism between unital abelian C∗-algebras is the pull-back of a continuous map between the underlying topological spaces.

Proof. Let ψ : C(Y) C(X) be a ∗-homomorphism. For any  X˜, → ˜ ∈ ˜ ev ψ is a character of C0(Y), so equals evα() for some α() Y. This◦ defines α : X Y. ∈ → U Y α 1 U To prove continuity, we take be open and show − ( ) is open. ⊆  α 1 U ƒ C Y ƒ Take − ( ). By Urysohn’s Lemma, there is ( ) with 0 1 ∈ ∈ ≤ ≤ s.t. ƒ (α()) = ψ(ƒ )() = 1 and ƒ (y) = 0 for all y / U. We get an open neighbourhood of : ∈ neighbourhood = voisinage  X ψ ƒ  >  X ƒ α  > α 1 U . { 0 ( )( 0) 0} = { 0 ( ( 0)) 0} − ( ) ∈ | ∈ | ⊆ α 1 U So − ( ) is open.

In categorical language, the Gelfand transform yields an equivalence of categories between the of unital abelian C∗-algebras with ∗-homomorphisms and the category of compact Hausdorff spaces with continuous maps.

I.7.2 Invariance of spectra

A — unital C∗-algebra.

Definition I.7.4. The C∗-algebra generated by  A is the smallest unital subalgebra containing . ∈

If  is normal then,

∗ C () := {p(, ∗) p C[z, z]}, | ∈ since every C∗-subalgebra containing  must contain all polynomials ∗ ∗ in  and  . Note that this algebra is abelian, so C () = C(X) for some compact Hausdorff space X. In the next section we will see that X = Sp(). ∗ Lemma I.7.5. Let  A be normal. The spectrum of  in C () is the same as the spectrum∈ of  in A.

∗ Proof. Let b C (). We show b is invertible in A iff it is invertible in ∗ ∈ C ().

25 ∗ If b is invertible in C () then b is invertible in A.

∗ b A C  ∗ b Suppose is invertible in but not in ( ). Then 0 SpC ()( ) = ∈ im(bˆ), where bˆ C(X) is the Gelfand transform of b. By Urysohn’s ∈ Lemma, let cˆ C(X) be a continuous function with 0 cˆ 1 and ∈ ≤ ≤ c   bˆ 1 , ˆ( ) = 1 for all − ({0}) ∈ c   bˆ 1 ε, , ˆ( ) = 0 for all − [ ∞) ∈ ε 1 b 1 1 where = 2 − − . Then k k c = cˆ = 1, k k k k bc bˆc 1 b 1 1, = ˆ 2 − − k k k k ≤ k k so c b 1 bc 1 . 1 = − 2 k k ≤ k kk k ≤ Contradiction.

Corollary I.7.6. The spectrum of  in any closed unital subalgebra  B A is the same. ∈ ⊆ Corollary I.7.7. Any injective ∗-homomomorphism φ : A B be- tween C∗-algebras is an isometry. →

Proof. WLOG, assume A and B are unitary. (If not, φ extends uniquely WLOG = to the unitalizations.) Since φ preserves spectra and ∗, the result “without loss of follows from the spectral radius formula. generality”

I.7.3 Continuous functional calculus

Let A be a unital ∗-algebra.

If  A is normal, we can extend the polynomial functional calculus ∈ C[z] A; p p() → 7→ to a polynomial functional calculus on  and ∗:

∗ C[z, z] A; p p(,  ). → 7→

Using the Gelfand transform on a C∗-algebra, we can extend even further.

26 ∗ Proposition I.7.8. Let  A be normal. Then C () is ∗-isomorphic ∈ to C(Sp()) via a map sending  to the function z : z z. 7→

Proof. Put X := CÚ∗(). Consider the function ˆ : X C. → ∗ Any character on C () is completely determined by its value on the generating element , so ˆ is injective.     Its image is im( ˆ) = SpC(X)( ˆ) = Sp( ). So ˆ is a homeomorphism ˆ : X Sp(A) C. → ⊂ Thus the Gelfand transform identifies ˆ∗ C∗  C X C Sp  ( ) ∼= ( ) ∼= ( ( )) and under this identification  ˆ z. 7→ ←[

Definition I.7.9. Fix  A normal. The map ∈ = ∗ C(Sp()) ∼ C () A → ⊆ is called the continuous functional calculus of . The image of ƒ A ∈ is denoted ƒ ().

Note that we have the spectral mapping property: ƒ  ƒ ƒ ƒ  . Sp( ( )) = SpC(Sp()) = im( ) = (Sp( )) Example I.7.10. Suppose  A has positive spectrum Sp() [0, ∞). 1 ∈ ⊂ The function ƒ : z z 2 is continuous on Sp(), so the functional cal- 1 7→ culus gives  2 := ƒ () A. ∈ Since ƒ .ƒ = z (the identity function on Sp()), the functional calculus 1 1 homomorphism gives z 2 z 2 = . Example I.7.11. The function exp : C C is continuous everywhere. → Thus, for any  A we can define exp() A. It inherits the usual ∈ ∈ properties of the function exp C(Sp()), e.g. ∈ exp(s) exp(t) = exp((s + t)), (s, t R). ∈ Pn zk The polynomials pn z : converge to exp z uniformly on ( ) = k=0 k! ( ) any bounded subset of C, so the functional calculus homomorphism gives the norm convergent series ∞ k X  exp() = . k! k=0

27 Remark I.7.12. If A is non-unital then we have a continuous functional calculus on A˜. A continuous function ƒ C(Sp()) has ∈ ƒ () A ƒ z ƒ (0) = 0, ∈ ⇐⇒ ∈ 〈 〉 ⇐⇒ where z denotes the ideal generated by z in C(Sp()). In other words,〈 we〉 have the functional calculus

C0(Sp(A) {0}) A; ƒ ƒ (A). \ → 7→

I.7.4 Spectral projections

One particularly useful application of the functional calculus is to pro- duce spectral projections.

2 ∗ Definition I.7.13. An element p A s.t. p = p = p is called a projection. ∈

Remark I.7.14. (Recall) in the case A = B(H),

2 • p = p p is a projection onto a closed subspace of H parallel to some⇒ complementary subspace;

∗ • p = p implies this projection is orthogonal. ⇒ Example I.7.15. A = C(X). If Y X is a clopen subset, then the indicator function ⊂ ( ,  Y, 1 1 if Y :  ∈ → 0, if  / Y, ∈ is continuous, and is a projection in C(X).

Example I.7.16. If  A is normal, and Y Sp() is a connected ∈ 1 ⊆ clopen subset of its spectrum, then Y () is a projection called the spectral projection associated to Y Sp(). ⊂ ∗ Exercise I.7.17. Let A B(H) be a concrete C -algebra. Suppose T A is normal and that⊆ λ is an isolated point in the spectrum of ∈ 1 . Show that P = {λ}(T) is the orthogonal projection onto the λ- eigenspace of T.

Hint: Consider the case λ = 0 and show that P is the projection onto the kernel of T.

28 I.8 Positivity and order

I.8.1 Positivity

Let A be a C∗-algebra. For simplicity, we will consider only unital A here, although all results work for non-unital C∗-algebras by passing to the unitalization.

There are several different equivalent definitions of a positive ele- ment.

∗ Theorem I.8.1. Let  A with  =  . The following are equivalent. ∈

1. Sp() [0, ∞). ⊂ 2. For all t  , t1  t. ≥ k k k − k ≤ 3. For some t  , t1  t. ≥ k k k − k ≤ 2 4.  = b for some b A self-adjoint. ∈ ∗ 5.  = b b for some b A. ∈ ∗ Remark I.8.2. If A B(H) is a concrete C -algebra, then these are equivalent to ⊆ ξ, ξ 0 ∀ξ H. 〈 〉 ≥ ∈

Proof of 1 2 3 4. ⇔ ⇔ ⇔ 1 2: Recall that Sp() [0,  ]. ⇒ ⊂ k k Thus Sp(t1 ) [t  , t]. So t1  t. − ⊂ − k k k − k ≤ 2 3: Obvious. ⇒ 3 1: Suppose t has t1  t. Then ⇒ k − k ≤ t t1  = (t z) C(Sp()). ≥ k − k k − k Thus z is non-negative on Sp().

1 4: Take b = p, by the functional calculus. ⇒ 2 4 1: Sp() = Sp(b) [0, ∞). ⇒ ⊂

Proving 5 takes more effort. We will need some lemmas. So, for now, we will call  A positive if it has positive spectrum. ∈ 29 Definition I.8.3.

A+ := { A positive }, s ∈ | A := { A self-adjoint}. ∈ | Remark I.8.4. The positive square root of a  is unique. If b is any positive square root, then functional calculus on b gives p b = b2 = p. Lemma I.8.5. Any self-adjoint element  A can be decomposed as ∈

 = +  , − − where  are positive and + = 0. ± −

Proof. Define ƒ : R C by ± →

ƒ+() = mx(0, ), ƒ () = mx(0, ). − −

So ƒ are positive functions with z = ƒ+ ƒ and ƒ+ƒ = 0. Put  := ± − − − ± ƒ (). ± Corollary I.8.6. Any  A is a linear combination of four positive elements. ∈

Proof.      1  ∗ Recall that = Re( ) + m( ) where Re( ) = 2 ( + ) and  1  ∗ m( ) = 2 ( ) are both self-adjoint. Using the lemma, we can − decompose both Re() and m() as a sum of two positive elements.

Proposition I.8.7. A+ is a salient , i.e.

1. , b A+  + b A+. ∈ ⇒ ∈ + R + 2.  A , λ + λ A . ∈ ∈ ⇒ ∈ 3.  A+ and  A+  = 0. ∈ − ∈ ⇒

Proof. 1. Let s  s.t. s1  s and t b s.t. t1 b t. Then ≥ k k k − k ≤ ≥ k k k − k ≤

(s + t)1 ( + b) s1  + t1 b s + t. k − k ≤ k − k k − k ≤ 2. Sp(λ) = λSp() [0, ∞). ⊂

30 3. Sp() {0}, so  = 0 by the Spectral Radius Formula. ⊂ k k

This means the positive elements can be used to define an order on the subspace As of self-adjoint elements.

Definition I.8.8. Write  b if  b A+. ≥ − ∈

So  0 means  is positive. We say  is negative if  0  is ≥ ≤ ⇐⇒ − positive Sp() ( ∞, 0]. ⇐⇒ ⊂ − More generally, note that

•  t1 Sp() ( ∞, t], ≤ ⇒ ⊂ − •  t1 Sp() [t, ∞), ≥ ⇒ ⊂

Warning! The order is well-behaved with respect to addition, but not multiplication. e.g.

0  b 2 b2 (in general). ≤ ≤ 6⇒ ≤ But it is well-behaved for multiplication of commuting operators.

Finally, we can complete the proof of Theorem I.8.1.

∗ Proof of Theorem I.8.1 (5 1). First we show  = b b is not negative ⇒ ∗ ∗ unless  = 0. Note that the spectra of b b and bb are the same (except perhaps 0), so if one is negative, both are.

Write b =  + y with , y self-adjoint. Then

∗ ∗ 2 2 b b + bb = 2 + 2y 0. ≥ We get  = 0.

1 2 Next, write  = +  as in Lemma I.8.5. Put c = b . Then − − − 1 1 ∗ 2 2 2 c c =   =  . − − − − ∗ So c c 0, hence  = 0. ≤ −

Some extra properties

Proposition I.8.9. 1. For any  As,   1. ∈ ≤ k k 31 2. If 0  b then  b . ≤ ≤ k k ≤ k k 3. Let , b As. If  b then ∗ ∗b∗ for all  A. ∈ ≤ ≤ ∈ 4. If , b As are invertible, then  b b 1  1. 0 − − ∈ ≤ ≤ ⇒ ≤ 5. If 0  1 then 2 . ≤ ≤ ≤

Proof. 1. Sp() [ , ] so Sp(  1 ) [0, 2  ]. ⊆ − k k − ⊆ k k 2. 0  b b 1. By the spectral radius formula,  Sp() so ≤ ≤ ≤ k k k k ∈ b  Sp( b 1 ) [0, ∞]. k k − k k ∈ k k − ⊆ 3. Let c = pb . Then − ∗ ∗ ∗ ∗  b   =  cc = (c) (c) 0. − ≥ 4.

1 1 b 2 b− 2 1 1 1 ≤ 1 1 ∗ = ( 2 b− 2 ) ( 2 b− 2 ) 1 ⇒ 1 1 ≤ =  2 b− 2 1 ⇒ k 1 1k ≤ 1 1 ∗ = ( 2 b− 2 ) = b− 2  2 1 ⇒ k 1 1 k k k ≤  2 b 1 2 = − 1 ⇒ ≤ b 1 1. = − ⇒ ≤

∗ = 2 5. Under Gelfand-Naimark, C () ∼ C(Sp()) with  z. But z → 7→ ≤ z on Sp() [0, 1]. ⊆

I.8.2 Moore-Smith convergence (Nets)

Sequences are insufficient to determine the topology of a general topological space. Note e.g. the difference between “compactness” and “sequential compactness”.

To resolve this, one trick is to introduce “generalized sequences” or “nets”. Remark I.8.10. The French typically prefer using filters, an equivalent technical device.

32 Remark I.8.11. If we restricted our attention to separable C∗-algebras, everything here would be possible with ordinary sequences. But it would be harder work.

Definition I.8.12. A directed set (“ensemble ordonné filtrant”) is a set  equipped with a partial order such that any two elements have an upper bound: ≤

∀, j , ∃k  s.t.  k and j k. ∈ ∈ ≤ ≤

A net (“suite généralisée”) valued in a space X is a function  :  X → from some directed set . We denote it by () . ∈ Example I.8.13. N is a directed set with its usual order. A net in- dexed by N is a sequence.

Example I.8.14. X – loc. compact Hausdorff. C0(X; [0, 1]) = {ƒ : X [0, 1] continuous} is a directed set with ƒ g → ≤ iff ƒ () g() ∀ X. ≤ ∈ Definition I.8.15. Let X be a topological space, U X a subset. We ⊆ say a net ()  valued in X is eventually in U if ∃0  s.t.  U ∈ ∈ ∈ ∀ 0. ≥ Say ()  converges to  X if for every open neighbourhood U  ∈ ∈ 3 ()  is eventually in U. ∈ Example I.8.16. A net indexed by N converges iff it converges as a sequence.

Example I.8.17. Let ƒ : [0, 1] R be piecewise cts. The Riemann R 1 ƒ  d → ƒ   integral 0 ( ) is (by definition) the limit of the net (k ( k)Δ k) indexed by the directed set of partitions (0 = 0 < 1 < < N = 1) ··· of [0, 1] ordered by refinement.

Example I.8.18. Let  C0(X). The net (ƒ )ƒ C0(X;[0,1]) converges ∈ ∈ to  in C0(X). Remark I.8.19. Nets are the correct alternative to sequences for gen- eral topological spaces: e.g. a space X is compact iff every net in X has a convergent subnet. The definition of subnet is subtle, though.

I.8.3 Approximate units (“Unités approchées”)

A — C∗-algebra.

33 Definition I.8.20. An approximate unit (“unités approchée”) for A is a net ()  of positive elements with  1 which is increasing and satisfies∈ k k ≤

lim   = 0, lim   = 0 ∀ A. (I.8.1)   k − k   k − k ∈ ∈ ∈

Example I.8.21. The net (ƒ )ƒ C0(X;[0,1]) is an approximate unit for ∈ C0(X).

The same idea works for any C∗-algebra.

Theorem I.8.22. Every C∗-algebra has an approximate unit.

Proof. A+  A+  < Let <1 := { 1} with the usual order on self- adjoint elements. ∈ | k k

A+ Step 1: We show <1 is directed. ,  A+ Let <1. Consider the bijection ∈ t g : [0, 1) [0, ∞), g(t) = . → 1 t − Note that 1 g 1 , , , g 1 t . − : [0 ∞) [0 1) − ( ) = 1 → − 1 + t g g 1 A A Since and − map 0 0, they will send to under the functional calculus, even if A is non-unital.7→

 g 1 g  g  g  g  1 Put := − ( ( ) + ( )) = 1 (1 + ( ) + ( ))− . −  ,  <  A+ g  g  Then Sp( ) [0 1) so 1 and <1. Also, 1 + ( ) + ( ) ⊂ k k ∈ ≥ 1 + g() implies  g  g  1 g  1 g 1 g  . = 1 (1 + ( ) + ( ))− 1 (1 ( ))− = − ( ( )) = − ≥ − −   A+ Likewise . Thus <1 is directed. ≥ + Step 2: Take the net   A+ of all elements of A . We need to ( ) <1 <1 prove the limits (I.8.1). ∈

 A  ∗ C∗  C X ƒ  Let with = . We have ( ) = 0( ). The net ( ( ))ƒ C0(X;[0,1)) ∈ ∼ ∈ is an approximate unit. In particular, ∀ε > 0 ∃ƒ C0(X; [0, 1)) s.t. ∈ ƒ ()  < ε. k − k  A+  ƒ  But now, ∀ <1 with ( ) we have ∈ ≥ 2 2 (1 ) = (1 )  (1 ) (1 ƒ ()) (1 ƒ ()) < ε. k − k k − k ≤ k − k ≤ k − k ≤ k − k 34 (working in the unitalization A˜). That implies,

s lim   = 0 ∀ A .  A+ <1 k − k ∈ ∈ and the other limit is similar.

∗ For  =  we have 6 2 ∗ lim (1 ) = lim (1 ) (1 ) = 0,  k − k  k − − k 2 and similarly for lim (1 ) . k − k Remark I.8.23. If A is separable, in fact it has an approximate unit which is a sequence: (n)n N. ∈ If A is unital then the unit (1) suffices.

We note that approximate units are “approximately central”, in the following sense.

Lemma I.8.24. Let () be an approximate unit for A. For any  A, ∈ lim [, ] = 0,   k k ∈ where [, ] =  . −

Proof. Follows from   and  . → →

I.8.4 Ideals and quotients in C∗-algebras

Ideals and quotients in C∗-algebras are nice because they are very rigid.

Proposition I.8.25. Any closed ideal J in a C∗-algebra A is closed ∗ under ∗, i.e. J = J.

Proof. Exercise.

Theorem I.8.26. The quotient A/J is a C∗-algebra with the quotient norm and obvious involution.

35 Proof. Denote the image of  A in the quotient A/J by []. Recall that the quotient norm is: ∈

[] A/J := inf  + j . k k j J k k ∈ This is a Banach algebra (direct check) with involution (because J = J∗).

Let () be an approximate unit for J.

Claim: ∀ A, ∈ [] A/J = lim (1 ) . k k   k − k ∈ Proof of Claim:

By defn of quotient norm: (1 ) [] ∀ . So [] A/J k − k ≥ k k ∈ k k ≤ lim  (1 ) . ∈ k − k For the reverse, fix ε > 0. For any j J, ∈ lim (1 ) lim ( ( + j)(1 ) + j(1 ) )  + j .   k − k ≤   k − k k − k ≤ k k ∈ ∈ The claim follows.

To conclude, we prove the C∗-inequality:

∗ ∗ [] [] = lim  (1 ) k k  k − k ∗ lim (1 ) (1 ) ≥  k − − k 2 2 = lim (1 ) = [] .  k − k k k

I.9 Representations of C∗-algebras

I.9.1 Basic definitions

Recall that a representation of a C∗-algebra A is a ∗-homomorphism π : A B(H). → H H π A H H Definition I.9.1. A subspace 0 is invariant if ( ) 0 0. ⊂ ⊂ H (Sometimes we abuse terminology and refer to 0 as a subrepresen- tation.)

36 Let H be an invariant subspace of a C∗-algebra rep- Lemma I.9.2. 0 resentation.

1. The orthocomplement H is invariant. 0⊥ 2. The closure H is invariant. 0

Proof.  A η H ξ H If , 0⊥ then ∀ 0, ∈ ∈ ∈ ∗ ξ, π()η = π( )ξ, η = 0, 〈 〉 〈 〉 π  η H so ( ) 0⊥. ∈ H H The second statement follows from 0 = ( 0⊥)⊥. H H Definition I.9.3. A closed invariant subspace 0 is called a H ⊂ π H subrepresentation of . (More correctly, the restriction of to 0 is called a subrepresentation.)

A representation is irreducible if it contains no non-trivial subrepre- sentations.

Example I.9.4.   ∗ ∗ 0   A M C M C ∗ ∗ 0 M C = 2( ) 1( ) ∼=   3( ) ⊕  0 0 ∗  ⊂

C3 H e , e is represented on in the obvious way. The subspaces 0 = spn{ 1 2} H e and 00 = spn{ 3} are subrepresentations. Remark I.9.5. The lemma above says that for a C∗-algebra, any sub- representation of a representation is complemented. This is not true for general algebras, e.g.   ∗ ∗ ∗   A   M C = ∗ ∗ ∗ 3( )  0 0 ∗  ⊂

C3 H e , e is represented on and 0 = spn{ 1 2} is a subrepresentation but it has no complementary subrepresentation. But this A is not a C∗-algebra. It is the ∗-operation which gives us complements.

Definition I.9.6. A vector ξ H is cyclic if π(A)ξ = {π()ξ  A} is dense in H. ∈ | ∈ cyclic vector = vecteur A representation π : A B(H) is cyclic if it has a cyclic vector. totalisateur →

37 NB: Every nonzero vector ξ in an irreducible representation is cyclic, since π(A)ξ is a subrepresentation. Example I.9.7. For the representation of Example I.9.4, the vectors e1 and e3 are not cyclic, since π(A)e1 = spn{e1, e2} and π(A){e3} = spn{e3}. But e1 + e3 is cyclic. So π is a cyclic representation.

The zero representation π() = 0 ∀ A is not interesting. Nor do we ∈ want π(A) acting by 0 on some non-trivial closed subspace. We make the following definition.

Definition I.9.8. A representation π : A B(H) is non-degenerate if → π(A)H is dense in H.

In fact, this is equivalent to the a priori stronger statement π(A)H = H. Theorem I.9.9 (Cohen Factorization Theorem). If π is a non-degenerate ∗ representation of a C -algebra A, then π(A)H = H.

Proof. Exercise.

Remark I.9.10. The name is because every  H can be factorized ∈ as  =  for some  A,  H. ∈ ∈ If A is unital, non-degeneracy is equivalent to the representation π : A B(H) being unital. →

I.9.2 The GNS Representation Theorem (statement & idea of proof)

Definition I.9.11. An representation is called faithful if it is injective. faithful fidèle ≡

Thus a faithful representation π : A , B(H) is a realization of A as a concrete C∗-algebra of bounded operators→ on H. A major result in C∗-algebras says that every C∗-algebra is isomorphic to a concrete C∗-algebra.

Theorem I.9.12 (Gelfand-Naimark-Segal). Every C∗-algebra A ad- mits a faithful representation π : A B(H) on some Hilbert space H. →

The proof will be given in the next few sections. To motivate it, let us recall the abelian case.

38 X — compact Hausdorff space, μ — Radon probability measure on X.

2 Then A = C(X) is represented on H = L (X; μ) by pointwise multiplica- tion: 2 π()ƒ := ƒ , ( C0(X), ƒ L (X; μ)). ∈ ∈ This already contains many of the key ideas of the proof. Recall that 2 the Hilbert space H = L (X; μ) needs to be constructed from A = C0(X) by

R ∗ 1. quotienting by the functions with ( ƒ ƒ dμ) = 0,

1 R ∗ 2. completing w.r.t. ƒ := ( ƒ ƒ dμ) 2 . k k w.r.t. with respect≡ to par We recall that this does not usually produce a faithful representation rapport à ≡ (think of μ = δ). But to make a faithful representation, we can take a of a collection of non-faithful representations, as long as each element of A is not killed by at least one of them.

The main difference for non-abelian C∗-algebras is that we replace probability measures by states.

I.9.3 States

Definition I.9.13. A linear functional φ; A C on a C∗-algebra is linear functional + R+ → positive if φ maps A to . = forme linéaire. ∗ Equivalently, φ is positive if φ( ) > 0 for all  A. ∈ Lemma I.9.14. Any positive linear functional on a C∗-algebra is bounded.

Proof. Suppose φ was positive but not bounded. Then we could find n N n A with n = 1 but φ(n) > 4 for all n . We can write n as a∈ sum of fourk k positive elements,| | so for at least∈ one of these, call it b φ  > n 1 n, we have ( n) 4 − . Consider X b (n 1)b . = 2− − n (absolutely convergent) n

b (n 1)b φ b > (n 1)φ b > n 1 n Then 2− − n so ( ) 2− − ( n) 2 − for all , which is a contradiction.≥

39 Definition I.9.15. A positive linear functional of norm 1 is called a state. state = état. The set of all states of A is called the state space and denoted S(A).

∗ A state σ is called faithful if φ( ) > 0 for all  = 0. 6 Example I.9.16. A regular Borel probability measure μ on a compact Hausdorff space X gives a state φμ on C(X) by Z φμ : ƒ ƒ dμ. 7→ X Every state on C(X) is of this form by the Riesz Representation Theo- rem.

In particular, if μ = δ is Dirac measure at a point  X, we get the state ev. ∈ Remark I.9.17. The name state comes from quantum physics.

Definition I.9.18. Let π : A B(H) be a representation of A on a → Hilbert space H, and let ξ H, ξ = 1. The functional ∈ k k Functional = forme linéaire φ() := ξ, π()ξ 〈 〉 is a state (direct check). It is called the vector state associated to ξ.

Example I.9.19. The state φμ on C(X) above is a vector state for the 2 constant function 1 L (X; μ), since ∈ Z 1 1 φμ(ƒ ) = ƒ dμ = , ƒ ∀ƒ C(X). X 〈 〉 ∈

I.9.4 Inner products from states

A — C∗-algebra. Definition I.9.20. Let φ : A C be a state. We define a sesquilinear form on A by → ∗ , b σ := σ( b). 〈 〉 Remark I.9.21. , σ is positive semidefinite (i.e. ,  σ 0 ∀ 〈· ·〉 〈 〉 ≥ ∈ A). But it is definite (i.e. ,  σ > 0 ∀ = 0) only if σ is faithful. 〈 〉 6 Semidefiniteness is enough to prove the Cauchy-Schwarz Inequality: Lemma I.9.22 (Cauchy-Schwarz Inequality).

1 1 2 2 , b σ ,  σ b, b σ ∀, b A. |〈 〉 | ≤ 〈 〉 〈 〉 ∈

40 Proof. Same as the classical proof.

Corollary I.9.23. Let  A. We have ,  σ = 0 iff , b σ = 0 for all b A. ∈ 〈 〉 〈 〉 ∈ Moreover, the set of null vectors

Nσ := { A ,  σ = 0} ∈ | 〈 〉 is a left ideal in A

Proof. The first statement follows immediately from Cauchy-Schwartz.

If  Nσ then ∈ ∗ ∗ ∗ c, b = σ( c b) = , c b = 0, ∀b, c A, 〈 〉 〈 〉 ∈ so c Nσ. ∈

I.9.5 A characterization of states

The next proposition is extremely useful for recognizing states.

∗ Proposition I.9.24. Let A be a C -algebra and (j) an approximate C unit. A bounded linear functional φ : A is positive iff limj φ(j) = → φ . In particular, if A is unital then φ is positive iff φ(1) = φ . k k k k

Thus a state on a unital C∗-algebra is a linear functional of norm 1 with φ(1) = 1.

Proof. After rescaling φ we may assume that φ = 1. k k rescale = faire une ( ): Let φ be positive. homothétie. ⇒ Then φ(j) is an increasing net of positive numbers bounded by 1, so converges. Write r = lim φ(j). j

Note that r φ = 1. ≤ k k For any  A with  1, we have φ() = limj φ(j). But, by Cauchy-Schwarz,∈ k k ≤

φ  2 φ ∗ φ 2 φ 2 φ  r. ( j) ( ) ( j ) ( j ) ( j) | | ≤ ≤ ≤ ≤ 2 2 So φ() r for all  1. Thus r φ = 1. | | ≤ k k ≤ ≥ k k 41 We thus have r = 1 as desired.

s R ( ): Suppose limj φ(j) = 1. First we prove φ : A (this is the hard⇐ part), then we prove φ : A+ R+. → → Let  A be self-adjoint with  1. Write φ() =  + y with , y R. ∈ k k ≤ ∈ Suppose y = 0. WLOG we assume y > 0 (otherwise replace  by ). 6 − WLOG = without loss of To get the idea of the proof, let us first the case where A is unital. generality = Consider the elements n1  where n N. Note that sans perte de − ∈ généralité. 2 ∗ 2 2 2 n1  = (n1 ) (n1 ) = n 1 +  n + 1. k − k k − − k k k ≤ Therefore 2 2 2 φ(n1 ) n1  n + 1. | − | ≤ k − k ≤ But

2 2 2 2 2 2 2 φ(n1 ) = n  + y = (n + y) +  = n + ny + ( + y ). | − | | − | 2 This is greater than n + 1 for sufficiently large n contradiction. ⇒ If A is not unital, we must use an approximate unit (j) instead of 1. R N Let us write φ(j) = ξj + ηj with ξj, ηj . Fix n . Since ξj + ηj 1, we may choose j “sufficiently large” s.t.∈ ∈ →

2 2 (nξj + y) + (nηj ) (n + y)  < 1 | − | − | − | and also (by Lemma I.8.24)

1 [, j] < . k k n We then have

n  2 n22 n ,  2 n2 . j = j + [ j] + + 2 k − k k k ≤ Therefore 2 2 φ(nj ) n + 2. | − | ≤ But

2 2 φ(nj ) = (nξj + y) + (nηj ) | − | | 2 − | (n + y)  1 ≥ | 2 − |2 − 2 = n + ny + ( + y ).

2 This is greater than n + 2 for n sufficiently large contradiction. Thus φ : As R. ⇒ →

42 Finally, we need to prove φ : A+ R+. Let  A+ with 0  1. Then, → ∈ ≤ ≤

1 j  1 ∀j − ≤ − ≤ = j  1 ∀j ⇒ k − k ≤ = φ(j ) 1 ∀j ⇒ − ≤ = 1 φ() 1 (by taking limj) ⇒ − ≤ = φ() 0. ⇒ ≥

Corollary I.9.25. Any state σ on a non-unital C∗-algebra A extends uniquely to a state σ˜ on the unitalization A˜.

Proof. By the proposition above, if σ˜ is an extension of σ to a state on A˜, we must have σ˜(1) = 1. This gives uniqueness.

Conversely, given σ on A, we can define σ˜ by

σ˜( + λ1) := σ() + λ.

For any  + λ1 A˜ we get ∈ 1 ∗ 1 ∗ σ˜(( + λ ) ( + λ )) = lim σ˜(( + λj) ( + λj)) 0.  ≥ Therefore σ˜ is positive, so bounded.

The next corollary shows the real power of the above characterization of states.

Corollary I.9.26. Let A be a C∗-algebra. For every self-adjoint  A, ∈ there is a state σ on A s.t. σ() =  . | | k k

Proof. By passing to A˜ we may assume that A is unital.

∗ ∗ ∗ Consiser the C -subalgebra C () A. Recall that C () = C(Sp()). ⊂ ∼ By the spectral radius formula, there is z Sp() with z =  . Put C ∈ | | k k φ = evz : C(Sp()) . It is bounded and φ(1) = φ = 1 so it is a ∗ → k k state on C (). It has φ() = z =  . | | | | k k By the Hahn-Banach Theorem φ can be extended to a bounded linear functional σ on A with the same norm: σ = 1. It is therefore a state k k on A with σ() =  . | | k k

∗ In particular, for every 0 = b A there is a state σ such that σ(b b) = 0. 6 ∈ 6

43 I.9.6 The GNS construction

The Gelfand-Naimark-Segal (GNS) construction produces a Hilbert space representation of A from a state on A, in the same way that 2 we produce L (X; μ) from A = C(X). Definition I.9.27. A — C∗-algebra. σ — state on A. ∗ Let Nσ := { A σ( ) = 0} be the set of null vectors for , σ. ∈ | 〈· ·〉 By Corollary I.9.23, if , b A, m, n Nσ then ∈ ∈  + m, b + n σ = , b σ, 〈 〉 〈 〉 so , σ descends to a it positive definite inner product on A/Nσ. 〈· ·〉 Define Hσ to be the completion of A/Nσ with this inner product. It is a Hilbert space, called the GNS space of A for the state σ. Let us write []σ =  + Nσ for the class of  in A/Nσ. Sometimes we will drop σ from the notation. Proposition I.9.28. The map

πσ : A End(A/Nσ); πσ()[b]σ = [b]σ → is well-defined and extends to a ∗-representation

πσ : A B(Hσ). → Moreover πσ is cyclic, with a unit cyclic vector Ωσ Hσ whose vector state is σ, i.e. ∈ Ωσ, π()Ωσ = σ(), ∀ A. 〈 〉 ∈ Definition I.9.29. πσ is called the GNS representation associated to σ.

Proof. Recall that Nσ is a left ideal. Thus left multiplication πσ() is well-defined on A/Nσ for any  A. ∈ It is bounded for the operator norm, since ∀b A, ∈ 2 πσ()[b] = πσ()[b], πσ()[b] σ k k 〈 ∗ ∗ 〉 = σ(b  b) 2 ∗  σ(b b) ≤ k k2 =  [b], [b] σ, k k 〈 〉 ∗ ∗ ∗ 2 where we have used b  b b  b. Therefore π() extends to a bounded linear map on the≤ completionk k Hσ. Moreover, it is a ∗-homomorphism: e.g.

∗ ∗ πσ( )[b], [c] σ = σ(b c) = [b], πσ()[c] ∀, b, c A 〈 〉 〈 〉 ∈ 44 ∗ ∗ so πσ( ) = πσ() ; the other identities are similar.

If A is unital then Ωσ := [1] is a unit vector, and [1], πσ()[1] = 〈 〉 σ().

If A is not unital we must use an approximate unit (j). Note that if  j ≥ 2 2 [] [j] = σ(( j) ) σ( j) 1 1 = 0 k − k − ≤ − → − shows that ([j]) is a Cauchy net in Hσ so admits a limit which we denote Ωσ. Then ∀ A, ∈ Ω, πσ()Ω = lim [j], πσ()[j] = lim σ(jj) = σ(). 〈 〉 j 〈 〉 j

Example I.9.30. Let A = C(X) and σ be the state associated to a μ σ ƒ R ƒ dμ , Radon probability measure , i.e. ( ) := X . Then σ is the usual L2-inner product (with respect to μ) and the GNS〈· space·〉 is 2 Hσ = L (X; μ).

The GNS representation is the usual multiplication representation:

πσ(ƒ )[g] = [ƒ g]

2 where here [g] denotes the L -class of a continuous function g ∈ C(X). 1 The cyclic vector is Ωσ = X, the constant function 1.

Following this example, for general A, the space Hσ could be reason- 2 ably denoted by L (A; σ). Theorem I.9.31. Every C∗-algebra is isomorphic to a concrete C∗- algebra of operators on some Hilbert space.

Proof. For every 0 = b A there is a state σ s.t. 6 ∈ ∗ σ(b b) = 0 6 = πσ(b)Ωσ = 0 in Hσ ⇒ 6 = πσ(b) = 0. ⇒ 6 Thus if we take the enormous direct sum of representations, M M  = πσ : A B( Hσ). σ S(A) → σ S(A) ∈ ∈ it is faithful (i.e. injective).

45 Remark . H L H I.9.32 The Hilbert space = σ S(A) σ is really enormous (typically non separable). Usually, a much∈ smaller selection of states σ suffices. For instance, if A admits a faithful state σ then the GNS representation πσ is already faithful.

The GNS construction shows that every state can be realized as a vector state of some cyclic representation. There is a uniqueness result too:

Proposition I.9.33. Let π1 : A B(H1) and π2 : A B(H2) be cyclic → → representations with cyclic vectors Ω1 and Ω2, resp. If the vector states σ() = Ω, π()Ω are equal then the representations are 〈 〉 isomorphic, i.e., there is a U : H1 H2 such that π  Uπ  U 1 for all  A. → 2( ) = 1( ) − ∈

Proof. Let σ = σ1 = σ2. Note that

π1()Ω1 = 0 π1(b)Ω1, π1()Ω1 = 0 ∀b A ⇐⇒ 〈 ∗ 〉 ∈ σ(b ) = 0 ∀b A ⇐⇒ ∈ π2()Ω2 = 0 ⇐⇒ So the map

U0 : π1(A)Ω1 π2(A)Ω2; π1()Ω1 π2()Ω2 → 7→ is well-defined between dense subspaces of H1 and H2, respectively. It is also isometric, so extends to an isometry U : H1 H2. The rest is a direct check. →

46 Chapter II

The Toeplitz Algebra & the Toeplitz Index Theorem

II.1 Matrices of operators

H H H H H H If = 1 2 and 0 = 10 20 are (orthogonal) direct sums of Hilbert ⊕ ⊕ T H H spaces, then any bounded linear operator : 0 can be written as → ‚ bŒ T = c d  H H b H H c H H d H H where : 1 10 , : 2 10 , : 1 20 , : 2 20 are → → → → bounded linear operators. Specifically, one can write  pH TpH , = 01 1 where pH1 : H H1 is the orthogonal projection, etc. → This extends in an obvious way to higher-rank decompositions. The usual laws of matrix multiplication hold.

II.2 Compact operators & their represen- tations

II.2.1 Compact operators

Here we recall some of the basic facts about compact operators. Here H is a Hilbert space.

47 1. The rank of an operator T B(H) is ∈ rnk(T) = dim im(T).

An operator is finite-rank if rnk(T) < ∞. 2. Any finite rank operator is a sum of rank-one operators. Every rank-one operator on H has the form

 ,  y 7→ 〈 〉 for some nonzero , y H. ∈ 3. The finite rank operators form a *-algebra, and in fact a ∗-ideal in B(H), since rnk(ST) min(rnk(S), rnk(T)}. But it is not a norm-closed ideal (unless≤ H is finite dimensional).

4. The closure of the finite-rank operators is the C∗-ideal of compact operators K(H). (The usual definition of a is an operator K ∈ B(H) such that K(B(0; 1)) has compact closure, where B(0; 1). But for C∗-algebras, the definition as limits of finite-rank opera- tors is often more useful.)

5. The for Compact Normal operators: Let T K(H) be normal. Then T admits an orthonormal eigen- ∈ basis (ek). Moreover, the corresponding eigenvalues tend to zero: λk 0. (If the Hilbert space is uncountable dimensional, i.e. non-separable,→ then only countably many eigenvalues are nonzero, and these converge to zero.) Otherwise stated: Sp(T) is discrete, except for a possible accu- mulation point at 0 C, and all nonzero λ Sp(T) are eigenval- ∈ ∈ accumulation ues with finite dimensional eigenspaces. point = valeur d’adhérence 6. Still with T K(H) normal, note that for any nonzero λ Sp(T) ∈ 1 ∈ the indicator function {λ} is continuous on Sp(T) and satisfies 12 1∗ 1 {λ} = {λ} = {λ}. Therefore, under functional calculus the ele- 1 ment pλ := {λ}(T) is a projection. It is the orthogonal projection onto the λ-eigenspace of T (exercise). This is called a spectral projection.

II.2.2 Representations of K(H)

The C∗-algebra of compact operators comes with a canonical repre- sentation π : K(H) , B(H). →

48 Lemma II.2.1. Let π : A B(H) be an irreducible representation of a ∗ → C -algebra. If π(A) contains a single nonzero compact operator, then it contains all compact operators.

Proof. It is no loss of generality to suppose A B(H). Note that A contains a finite-rank projection, namely any spectral⊆ projection of a nonzero compact operator. Let P A be a projection of minimal nonzero rank. ∈

For any self-adjoint T A, PTP is a self-adjoint compact operator. Any spectral projection of PTP∈ (for a nonzero spectral value) is a projection onto a subspace of P.H. By minimality, it must be a scalar multiple of C P, i.e. PTP = cT P for some cT . ∈ Now suppose rnk(P) > 1. Then there are unit vectors , y P.H with ∈ , y = 0. For any T A we have 〈 〉 ∈ , Ty = , PTPy = cT , y = 0. 〈 〉 〈 〉 〈 〉 Therefore Ay is a non-trivial closed invariant subspace. This contra- dicts irreducibility.

Therefore rnk(P) = 1. In other words, there is  H,  = 1 such that ∈ k k P = ,   for all  H. 〈 〉 ∈

Let y, z H. By irreducibility, A.H is dense in H, so for any ε > 0 there are S,T∈ A such that y S , z T < ε. Then ∈ k − k k − k ∗ SPT  = T,  S for all  nH. 〈 〉 One then has SPT∗ z, y < 2ε. Thus, A contains all rank one operators, andk hence− all 〈 compact·〉 k operators.

Remark II.2.2. We used here that A is dense in H. In fact, there is a stronger theorem—the Kadison Transitivity Theorem—which says that for any irreducible representation of a C∗-algebra A on a Hilbert space H, A acts transitively on H, i.e. for all , y H there is T A ∈ ∈ such that T = y. This is similar to the Cohen Factorization Theorem. We won’t prove it.

From this, we can also prove the simplicity of K(H). ∗ Corollary II.2.3. K(H) is a simple C -algebra, i.e. has no non-trivial closed ideals.

49 Proof. J K H H H Let ( ) be a nonzero ideal. Suppose 0 is a closed invariant subspace⊂ for J. By the Cohen Factorization⊆ Theorem, any  H  T j J  H 0 can be written as = for some , 0. Then for ∈ S K H S ST H ST∈ J ∈ H any ( ) we have = 0 since . Therefore 0 is ∈ K H H H ∈ H ∈ J invariant for ( ), so 0 = . That is, is irreducible for . So by Lemma II.2.1, J = K(H).

II.3 Fredholm operators

Recall the definition of the kernel and cokernel of a linear operator T : V W on a vector space. → ker(T) = { V T = 0} ∈ | coker(T) = W/ im(T)

From now on T : H1 H2 will be a between Hilbert spaces. →

Definition II.3.1. A linear operator T : H1 H2 is Fredholm if its kernel and its cokernel are both finite dimensional.→

The index of a is

ind(T) = dim ker(T) dim coker(T). − We will write Fred(H) for the space of bounded Fredholm operators on H.

The following observation is useful.

Lemma II.3.2. If T : H1 H2 is Fredholm then ker(T) and im(T) are closed subspaces. →

Proof. The kernel of a bounded operator is always closed. T T Note that restricts to a bounded linear bijection from ker( )⊥ to im(T).

Let W H2 be a complementary subspace for im(T) in H2. Note that ⊂ W is finite dimensional, since the quotient map H2 H2/ im(T) = coker T restricts to a bijection W coker T . → ( ) ∼= ( ) Now define T˜ T W H T˜ ,  T  . : ker( )⊥ 2; ( ) = ( ) + ⊕ → This is a bounded linear bijection, so is a topological isomorphism. It T T˜ T follows that im( ) = (ker( )⊥ 0) is closed. ⊕ 50 Remark II.3.3. It follows that one could take the complementary sub- W T T space of im( ) to be im( )⊥. In other words, if we wish, we could identify coker T im T when T is Fredholm. ( ) ∼= ( )⊥ T T∗ Recall also that im( )⊥ = ker( ). Example II.3.4. Every linear map between finite dimensional Hilbert spaces T : V1 V2 is Fredholm. The Rank-Nullity Theorem shows that → Rank-Nullity Theorem = ind(T) = dim ker(T) (dim V2 dim im(T)) = dim V1 dim V2, − − − Théorème du rang i.e. in this case the index depends only on the spaces V1 and V2, not the operator T.

In particular, any endomorphism of a finite dimensional Hilbert space has index 0.

2 2 Example II.3.5. The right shift T : ℓ (N) ℓ (N) is Fredholm with index 1. The left shift T∗ is Fredholm with→ index 1. − Remark . T H H T H H II.3.6 If : 1 2 and 0 : 10 20 are both Fredholm, then the operator → → ‚ Œ T 0 T T : H H H H 0 = T 1 10 2 20 ⊕ 0 0 ⊕ → ⊕ is Fredholm and

T T T T . ind( 0) = ind( ) + ind( 0) ⊕ Theorem II.3.7 (Atkinsons’ Theorem). Let T B(H). The following are equivalent: ∈

1. T is Fredholm;

2. T is invertible modulo finite rank operators, i.e. there exists S ∈ B(H) s.t.  ST and  TS are finite rank operators; − − 3. T is invertible modulo compact operators i.e. there exists S ∈ B(H) s.t.  ST and  TS are compact operators. − −

Proof. (1) (2): As noted in the previous proof, T restricts to a linear ⇒ T H T ∼= H T S H H homeomorphism 0 : 1 = (ker )⊥ 2 = im( ). Let : be defined by → →

S H H H H ,  T 1  . : = 2 2⊥ ; ( ) 0− ( ) ⊕ → 7→ It is easy to check that ST is the orthogonal projection onto H1 and TS is the orthogonal projection onto H2. Since both have finite codi- mension, the result follows.

51 (2) (3): Immediate. ⇒ (3) (1): Suppose  ST and  TS are compact. Then there are ⇒ − − F1,F2 finite rank such that  ST F1 < 1 and  TS F2 < 1. k − − k k − − k Hence ST + F1 and TS + F2 are invertible.

The restriction of ST + F1 to ker(T) is finite rank. So the restriction of  ST F 1 ST F T T = ( + 1)− ( + 1) to ker( ) is finite rank. Thus dim ker( ) is finite.

Similarly, writing

 TS F TS F 1 TS TS F 1 F TS F 1 = ( + 2)( + 2)− = ( + 2)− + 2( + 2)− shows that im(T)+im(F2) = H. Thus im(T) has finite codimension.

II.3.1 The Calkin algebra and essential spectra

∗ Definition II.3.8. The Calkin algebra is the quotient C -algebra Q(H) = B(H)/K(H). We denote the quotient map by q : B(H) Q(H). →

Atkinson’s Theorem immediate gives the following.

Corollary II.3.9. An operator T B(H) is Fredholm iff q(T) is invert- ∈ ible in Q(H). Definition II.3.10. The of a bounded operator T B(H) is ∈ T q T Spess( ) = SpQ(H)( ( ))

By Atkinson’s Theorem, this is equivalent to

C Spess(T) = {λ λ dH T is not Fredholm}. ∈ | −

II.3.2 Stability of the index

Theorem II.3.11. The space Fred(H) of Fredholm operators is open in B(H) (with the norm topology) and the index is constant on each connected component of Fred(H).

Proof. Fred(H) is the preimage by q of the set of invertibles in Q(H) which is open.

52 T H V T V T H V Let Fred( ). Put 1 = ker( ), 2 = im( )⊥ and  = ⊥ so that we ∈ have decompositions H = H1 V1 = H2 V2. With respect to these, T decomposes as ⊕ ⊕ ‚ Œ T 0 T = : H1 V1 H2 V2, 0 0 ⊕ → ⊕ T H H T T where 0 : 1 2 is just the restriction of . Note that 0 is an isomorphism. →

A B H A < T 1 1 A Let ( ) with 0− − . Decompose as ∈ k k k k ‚ bŒ A = : H1 V1 H2 V2. c d ⊕ → ⊕

 < T 1 T  Since 0− , we have 0 + is invertible. Set k k k k ‚  T  1bŒ ( 0 + )− N = − : H1 V1 H1 V1, 0  ⊕ → ⊕ ‚ Œ  0 N : H V H V . 0 = c T  1  2 2 2 2 ( 0 + )− ⊕ → ⊕ − These are both invertible (with inverses given by the usual 2 2- matrix law), and × ‚ Œ T  0 N T A N 0 + . 0( + ) = c T  1b d 0 ( 0 + )− + − From the finite dimensionality of V1 and V2, we get

T A N T A N ind( + ) = ind 0( + ) c T  1b d = ind( ( 0 + )− + ) − = dim(V1) dim(V2) − = ind(T).

We have thus proven that Fred(T) is open and ind(T) is constant on some open ball around any T Fred(T). It is therefore constant on connected components. ∈

Corollary II.3.12. Let T Fred(H) and K K(H). Then T + K is ∈ ∈ Fredholm and ind(T + K) = ind(T).

Proof. The linear path Tt = T + tK for t = [0, 1] is norm-continuous and every every Tt is Fredholm by Atkinson’s Theorem. The result follows from the previous theorem.

53 II.4 The Toeplitz algebra

II.4.1 The unilateral shift

2 2 Consider H = ℓ = ℓ (N).

Define the unilateral shift:

2 2 T : ℓ ℓ ; T(0, 1, 2,...) = (0, 0, 1 ...). → Then ker T 0 and coker T im T spn{e } where e ( ) = ( ) ∼= ( )⊥ = 0 0 = (1, 0, 0,...) is the first canonical basis vector. In particular,

nd(T) = 1. − Remark II.4.1. • T∗ is the left-shift, i.e.

∗ T (0, 1, 2,...) = (1, 2, 3 ...).

∗ It has nd(T ) = 1. ∗ ∗ • T T = 1 but TT = 1 P0 where − P0 = projection onto spn{e0},

with e0 = (1, 0, 0,...). In particular, T is not normal. Definition II.4.2. The Toeplitz algebra T is the C∗-algebra gener- ated by the unilateral shift T.

Remark II.4.3. The Toeplitz algebra is not abelian because T is not normal.

2 By definition T , B(ℓ ) and we shall call this the canonical representation of the Toeplitz algebra.→ (It is not the only possible representation.)

Lemma II.4.4. The canonical representation of T is irreducible, non- degenerate and contains the compact operators.

∗ 2 2 Proof. T (ℓ ) = ℓ , which is enough to show that the representation is non-degenerate.

H ℓ2 y H Let 0 be a nonzero subrepresentation. Let 0 nonzero. ⊆ ∗ ∈ By applying T sufficiently many times, we get  = (0, 1,...) = T∗ ny H  P  H e H ( ) 0 with 0 = 0. Then 0 0, so 0 0. Therefore Tne ∈e H 6 n N ∈ H ∈H 0 = n 0 for all . It follows that 0 = . This proves irreducibility.∈ ∈

Now T contains all compact operators by Lemma II.2.1.

54 Thus we have K / T . What it the quotient C∗-algebra?

Note that T /K is a subalgebra of the Calkin algebra Q. It is generated by a single

U = q(T) Q. ∈ Therefore T /K C X where X Sp U Sp T . ∼= ( ) = Q( ) = ess( ) Lemma II.4.5. The essential spectrum of the unilateral shift T is S1 (the unit circle in C).

Proof. We repeat that Spess(T) = SpQ(U) where U is unitary. The spectrum of any unitary U in a C∗-algebra is always contained in S1, ∗ because under the Gelfand Transform C (U) = C(Sp(U)), U maps to ∼ ∗ 2 the inclusion function z : Sp(U) , C, which must satisfy z z = z = 1. → | |

1 1 It remains to show that Spess(T) is all of S . Suppose ω S were not in the essential spectrum. Then the operators ∈

Tt := tω T (t [0, 2]) − ∈ is a continuous path of Fredholm operators. But

nd(T0) = nd(T) = 1, while T2 = 2ω T is invertible (since T < 2), so nd(T2) = 0. This contradicts the− stability of the index. k k

Corollary II.4.6. We have T /K C S1 . That is, there is a short ∼= ( ) exact sequence of C∗-algebras

1 0 K , T  C(S ) 0 → → → where the quotient map sends the unilateral shift T to the function z : S1 C. →

II.4.2 Toeplitz operators

Here is another point of view on the Toeplitz algebra, using Fourier series.

2 1 1 C R Z 1 Consider L (S ). Here, we use S = {z z = 1}, but also / = S 2πt ∈ | | | ∼ 1 via t e . The measure is Lebesgue measure λ = dt, so that S has total7→ measure 1.

55 1 As usual, put z C(S ), z(z) = z. ∈ NB: Via R/Z S1 we have zn t e2πnt. ∼= ( ) = n 2 1 Fourier Theory (z )n Z is an for L (S ). In other words, the map⇒ ∈

2 1 2 L (S ) ℓ (Z) 7→ ƒ (ˆƒn); 7→ is an isometric isomorphism, where Z ˆƒ n, ƒ e 2πntƒ t dt. n := z = − ( ) 〈 〉 S1 Definition II.4.7. The is

2 1 2 1 H (S ) = {ƒ L (S ) ˆƒn = 0 ∀n < 0}. ∈ |

By Fourier transform, H2 S1 ℓ2 N . ( ) ∼= ( ) 2 1 2 1 Let P be the orthogonal projection of L (S ) onto H (S ). On Fourier series, P acts by

P(..., ˆƒ 2, ˆƒ 1, ˆƒ0, ˆƒ1, ˆƒ2,...) = (..., 0, 0, ˆƒ0, ˆƒ1, ˆƒ2,...). − −

∗ 1 2 1 The C -algebra C(S ) is represented on L (S ) by pointwise multipli- cation. We denote the representation by  M, i.e. 7→ 1 2 1 M : ƒ ƒ ,  C(S ), ƒ L (S ). 7→ ∈ ∈ In particular z acts by n n+1 Mz : z z . 7→ n That is, z acts as the bilateral shift with respect to the basis (z )n Z. ∈ Remark II.4.8. This is a special case of the fact that the Fourier trans- 1 form converts multiplication into convolution: If  C(S ) then ∈ F ƒ €  ˆƒ Š . ( ) = ( ˆ ∗ )n n N ∈ where ˆ ∗ ˆƒ denotes convolution of Fourier coefficients: X (ˆ ∗ ˆƒ )n = ˆmˆƒn m. m Z − ∈ 1 Definition II.4.9. The Toeplitz operator with symbol  C(S ) is the operator ∈ 2 1 2 1 T := PMP : H (S ) H (S ). →

56 Remark II.4.10. Note that T  ∞. In fact, T =  ∞, but we will prove this later. k k ≤ k k k k k k

In particular,

• Tz = unilateral shift on (one-sided) Fourier series,

• Tzn = shift right by n places.

1 The map  T is not a representation of C(S ). For instance, 7→ TzTz 1 = T1 = d since − 6 T z 1 Tz ˆ ˆ ˆ − ˆ ˆ ˆ ˆ ˆ TzTz 1 : (ƒ0, ƒ1, ƒ2,...) (ƒ1, ƒ2, ƒ3,...) (0, ƒ1, ƒ2,...). − 7−→ 7−→

0 Note though that T1 TzTz 1 is the rank-one projection onto spn{z }. − In fact T is a representation− modulo compacts, in the following sense. 2 1 Proposition II.4.11. Write H = H (S ).

1 1. For any , b C(S ) we have TTb Tb K(H). ∈ − ∈ 1 2. The map C(S ) Q(H);  [T] is a ∗-homomorphism. → 7→ n Proof. 1. First consider the case  = z . We get

Tznb Tzn Tb = PMzn MbP PMzn PMbP − − = PMzn (d P)MbP. − P P Put d = ⊥, the projection onto the orthocomplement of 2 1 − H (S ), i.e. P ..., ˆƒ , ˆƒ , ˆƒ , ˆƒ , ˆƒ ,...... , ˆƒ , ˆƒ , , , ,... . ⊥( 2 1 0 1 2 ) = ( 2 1 0 0 0 ) − − − − Then ( 0,..., n 1 , n > , spn{z z − } if 0 im(PMzn P ) = ⊥ 0, if n 0. ≤ In particular, it is finite rank. Therefore Tznb Tzn Tb is finite-rank. − It follows that Tpb TpTb is finite-rank for any “trigonometric PN− n polynomial” p nz . = n= M − 1 Finally, these polynomials are dense in C(S ) (Stone-Weierstrass). 1 That is, for any  C(S ) and any ϵ > 0, ∃p trigonometric poly- nomial s.t.  p ∈< ϵ. We have k − k Tb TTb = (Tb Tpb) + (Tpb TpTb) + (TpTb TTb) − − − − where

57 • Tb Tpb = Tb pb = b pb ∞ ϵ b , k − k k − k ≤ k − k ≤ k k • TpTb TTb = Tp Tb ϵ b , k − k k − k ≤ k k • Tpb TpTb is finite-rank. − Since ϵ was arbitrary, Tb TTb is compact. − ∗ 2. Direct calculation T∗ = T , T + Tb = T+b, and TTb Tb ⇒ ≡ mod K(H).

1 Corollary II.4.12. The quotient map T C(S ) of Corollary II.4.6 sends T to . → 1 Corollary II.4.13. For any  C(S ), T =  ∞. ∈ k k k k

Proof. Recall that T = PMP. So T P M P =  ∞. k k ≤ k kk kk k k k Also, any C∗-homomorphism has norm 1, so Corollary II.4.12 gives ≤  ∞ T . k k ≤ k k

II.4.3 The Toeplitz Index Theorem

Proposition II.4.14. The Toeplitz operator T is Fredholm iff its sym- 1 bol  C(S ) is invertible (i.e. nowhere zero). ∈ Proof. This follows from Corollary II.4.12 and Atkinson’s Theorem.

Natural question: What is the index of T when Fredholm?

The answer, surprisingly, is calculated by topology, not by analysis. For  S1 C continu- Theorem II.4.15 (Toeplitz Index Theorem). : × ous, → nd(T) = Winding Nmber(). − 1 The winding number of  is the number of times (t) turns around 0 (anticlockwise) as t passes once around the circle (anticlockwise).

The winding number is most elegantly defined via algebraic topology: the fundamental group of C is π C Z with the generator given × 1( ×) ∼= S1 C by the inclusion z : × as the unit circle. Let’s give a quick overview of all this. →

Let X, Y be topological spaces. Two continuous maps ƒ0, ƒ1 : X Y → are homotopic to one another if ƒ0 can be continuously deformed into ƒ1. Precisely: 1 In French, the winding number is called l’indice, which leads to an unfortunate statement of the Toeplitz Index Theorem: nd(T) = nd(). −

58 Definition II.4.16. The continuous maps ƒ0, ƒ1 : X Y are homotopic if there is a continuous map →

F : X [0, 1] Y × → with restrictions F( , 0) = ƒ0 and F( , 1) = ƒ1. · ·

If we put ƒt = F( , t) for all t [0, 1] then the family of functions ƒt is · ∈ the continuous deformation of ƒ0 into ƒ1.

Homotopy is an equivalence relation on the set of continuous maps X Y, since they can be inverted and concatenated. → Any continuous function  S1 C is homotopic Theorem II.4.17. : × to one, and only one, of the functions zn for n Z→. The number n is the winding number of . ∈

I’ll give a rapid proof of this in class, but leave the details for the course on algebraic topology.

Proof of the Toeplitz Index Theorem. The key is to show that the in- dex of T depends only on the homotopy class of .

F S1 , C ƒ F , t Let : [0 1] × be continuous, and put t = ( ). Then × → 1 · t ƒt is a continuous function [0, 1] C(S ). since T =  ∞, the 7→ → k k k k map t Tƒt is continuous. Since Tƒt is Fredholm for all t [0, 1], their indices7→ are all equal. ∈

Z It therefore suffices to calculate the index of Tzn for each n . ∈ But Tzn is the right unilateral shift by n places if n 0 and the left unilateral shift by n places if n 0. An easy calculation≥ gives − ≤ n nd(Tzn ) = n = Winding Nmber(z ). − −

59 Chapter III

Group C∗-algebras

III.1 Convolution algebras

III.1.1 Topological groups; Haar measure

Definition III.1.1. A locally compact group is a group G equipped with a locally compact Hausdorff topology such that the group oper- ations

m :G G G; (g, h) gh × → 7→ ι G G g g 1 : ; − → 7→ are continuous maps. (Here G G has the .) × Any locally compact group G has a unique (up to scalar multiple) nonzero Borel measure μ which is finite on compact sets and invariant under left translations, i.e.

μ(gE) = μ(E) for any E G (measurable), g G, ⊂ ∈ or equivalently Z Z ƒ  dμ  ƒ y 1 dμ  ƒ L1 G , y G. ( ) ( ) = ( − ) ( ) for any ( )  G  G ∈ ∈ ∈ ∈ This μ is called Haar measure. Usually we will write d fo dμ(). NB: Existence and uniqueness of Haar measure on a general l.c. group is a rather difficult theorem in measure theory. But in prac- tice, there is often an obvious left-invariant measure, so the abstract theorem is not necessary. If G is compact, then the Haar measure is finite, and we normalize is so that μ(G) = 1.

60 Example III.1.2. 1. Any group G with the discrete topology. In n particular, finite groups or countable groups such as Z, Z , Fn (the free group on n generators). The Haar measure is counting measure.

2. The groups Rn with their usual topology. The Haar measure is Lebesgue measure.

n n n 3. The tori T = R /Z with their usual topology. Haar measure is Lebesgue measure. C C 4. Matrix groups G Mn( ) with the subspace topology from Mn( ) = 2 ∼ Cn . For instance:⊆

• GL(n, C), SL(n, C), GL(n, R), SL(nR), U(n), SU(n), O(n), SO(n),. . . • The Heisenberg group   1  b   H  c , b, c R , = 0 1  :  0 0 1 ∈ 

3 with the usual topology on R = {(, b, c)}. Haar measure 3 is Lebesgue measure on R , i.e. dμ = d db dc. • The “ + b”-group ¨‚ bŒ « G  R , b R . = : × 0 1 ∈ + ∈

In general, the Haar measure is not right-invariant. (Although it often is: of all the groups in the above list, the only one for which Haar measure is not right-invariant is the  + b-group.)

In any case, for any fixed g G, the measure g∗μ defined by ∈ g∗μ(E) = μ(Eg) is again left-invariant, so is a multiple of Haar measure. Therefore, there is a function G R Δ: × → + such that

μ(Eg) = Δ(g)μ(E) for all E G (measurable), g G. ⊂ ∈ It is an exercise to check that Δ is a continuous G R G ×. It is called the modular function of . → + Lemma III.1.3. Let g G be fixed. We have the following change- of-variable formulas for∈ the Haar measure:

61 1. dμ(g) = dμ(), 2. dμ(g) = Δ(g) dμ(), 3. dμ  1  1 dμ  . ( − ) = Δ( )− ( )

Proof. (1) is by definition of the Haar measure. (2) is by definition of the modular function. 1μ ƒ The second implies that Δ− is a right-invariant measure, since ∀ ∈ Cc(G), Z Z ƒ   1 dμ  ƒ g g 1 dμ g ( )Δ( )− ( ) = ( )Δ( )− ( )  G  G ∈ Z ∈ ƒ g  1 dμ  . = ( )Δ( )− ( )  G ∈ dμ  1 Now, ( − ) is also right-invariant, so by uniqueness it is a scalar 1μ dμ  1 1 dμ  multiple of Δ− . Considering (( − )− ) = ( ) one sees the scalar multiple must be 1.

III.1.2 The L1 algebra

Let G be a l.c.group, μ its Haar measure. 1 Lemma III.1.4. The Banach space L (G; μ) is a Banach ∗-algebra with convolution product and involution as follows: Z ƒ g  ƒ y g y 1 dy ∗ ( ) = ( ) ( − ) y G ∈ ƒ ∗   1ƒ  1 ( ) = Δ( )− ( − ) 1 ∗ for ƒ , g L (G; μ),  G. Also, ƒ L1 = ƒ L1 . ∈ ∈ k k k k Proof. The fact that L1 is an algebra is a direct consequence of ba- sic properties of Lebesgue integrals: linearity, change of variables, Fubini’s Theorem.

1 To show that it is a Banach algebra: ∀ƒ , g L (G), ∈ Z Z ƒ g ƒ y g y 1 dy d ∗ 1 = ( ) ( − ) k k  G y G Z ∈ Z ∈ ƒ y g y 1 d dy ( ) ( − ) (Fubini) ≤ y G  G | | | | Z ∈ Z ∈ = ƒ (y) g() d dy (left-invariance) y G  G | | | | ∈ ∈ = ƒ 1 g 1. k k k k

62 The ∗-algebra properties are left as an exercise.

1 ∗ ∗ 2 Remark . L G μ C ƒ ƒ 1 ƒ III.1.5 ( ; ) is not a -algebra, because L = L1 . k k 6 1 k k But (as we will soon see), we can put a different norm on L (G; μ) which does satisfy the C∗-identity, so the completion will be a C∗- algebra.

In fact, in general there are several possible C∗-norms. But we should leave that discussion until later.

1 1 If G is discrete then L (G) = ℓ (G). Let us write [] for the delta function δ supported at  G (which is continuous because G is ∈ 1 discrete). Then {[]  G} is basis of a dense subspace of ℓ (G). | ∈ Convolution of these elements is

[] ∗ [y] = [y].

Thus, convolution is just the linear extension of the group multiplica- tion law.

1 The delta function at the identity [e] is an unit for the algebra ℓ (G).

1 If G is not discrete, then the delta functions [] are not L -functions 1 1 (or rather, they are trivial as L -functions). The elements ƒ L (G) are not “sums of group elements” but “integrals of group elements:∈ formally we might write Z ƒ = “ ƒ ()[] d”. G

In this purely formal sense, Z ‚Z Œ ƒ g ƒ y g y 1 dy  d ∗ = “ ( ) ( − ) [ ] ” G G Z ƒ y g  y dy d . “ ( ) ( 0)[ 0] 0” G G × That is, convolution is an ”integrated” linear extension of the group law. This motivates the definition of convolution.

1 In this case L (G) is not unital. However, it does have an approximate unit consisting of positive continuous functions of norm 1: take a net () of bump functions of norm 1 supported in decreasing neighbour- hoods of e.

63 III.1.3

For a discrete group G (e.g. a finite group) a unitary representation is simply a group homomorphism from G to the group of unitary op- erators on some Hilbert space H. For a topological group, we should demand the representations are continuous homomorphisms. But the kind of continuity should be chosen carefully. There are several topologies on B(H).

The norm topology on B(H) is the usual metric topology defined by the operator norm. It is a bad choice:

Example III.1.6. Let G = R. It is a topological group with its usual topology.

The left regular representation is the representation λ of R on H = 2 L (R) by translations:

λ  ƒ y ƒ  1y , , y R. [ ( ) ]( ) = ( − ) ∈

It is not continuous in the norm topology. For, given any  R (very 2 R ∈ small) there is a function ƒ L ( ) with ƒ 2 = 1 but spp(ƒ ) 1 , 1  ∈ k k ⊆ [ 2 2 ]. Then − λ  λ ƒ 2 ( ( ) (0)) L2 = 2 k − k so p λ() λ(0) B(H) 2 ∀ = 0. k − k ≥ 6 Therefore λ() λ(0) as  0 in norm. 6→ → Definition III.1.7. The (SOT) on B(H) is the B H H T T coarsest topology such that the maps ( ) ; are contin- coarser = → 7→ uous for every  H. In other words, the net (T) B(H) converges moins fine ∈ ⊂ strongly to T B(H) iff T T for all  H. ∈ → ∈ A base of open neighbourhoods of 0 is the following: for any 1, . . . , n H, ε > 0 ∈

U(0; 1, . . . , n; ε) = {T B(H) T < ε ∀ = 1 dots, n}. ∈ | k k (One could take just ε = 1 here without changing anything.) Remark III.1.8. There is also a (WOT), which is the coarsest topology s.t. the maps B(H) C; T , T are continuous for every ,  H. → 7→ 〈 〉 ∈ For linear subspaces of B(H), closure in the WOT and SOT are equiv- alent. A ∗-subalgebra A of B(H) which is WOT or SOT closed is called

64 a (concrete) von Neumann algebra. The abelian von Neumann alge- ∞ bras (acting on separable H) are all of the form L (X; μ) for some probability measure space X, and in general von Neumann algebras behave like “noncommutative measure spaces” in the same way that C∗-algebras behave like “noncommutative” topological spaces. In particular, there is a Borel functional calculus

∞ L (Sp(T); μ) A; ƒ ƒ (T) → → for any T A, where μ is some Borel measure on the Sp(T). ∈ We won’t discuss von Neumann algebras any further in this course.

III.1.4 Unitary representations of topological groups

Definition III.1.9. By a unitary representation of a locally compact Hausdorff topological group G we shall mean a map π : G U(H) which is continuous for the strong operator topology. →

Example III.1.10. The left regular representation

λ G L2 G λ  ƒ y ƒ  1y . : ( ); [ ( ) ]( ) = ( − ) → is a strongly continuous unitary representation. Unitarity is immedi- ate from the left-translation invariance of the Haar measure. Strong continuity is a consequence of the Dominated Convergence Theorem.

Lemma III.1.11. Let G be a l.c. group. There is a bijective corre- spondence between:

• unitary representations of G, and

1 • non-degenerate ∗-representations of L (G).

Specifically, the correspondence sends a unitary representation π : G U(H) to the integrated representation → Z 1 π : L (G) B(H); π(ƒ ) := ƒ ()π() d. (III.1.1) → G

Proof. Full proof for G discrete:

Let π be a unitary representation of G. The integrated representation is X π(ƒ ) = ƒ ()π().  G ∈ 65 1 This series is norm-convergent since π() = 1 and ƒ ℓ (G), and k k ∈ π(ƒ ) ƒ 1 so π is bounded. Note that π([]) = π(). Therefore ≤ k k π([])π([y]) = π([y])

1 and by linearity, π(ƒ )π(g) = π(ƒ g) for all ƒ , g ℓ . By unitarity ∈ X π ƒ ∗ ƒ  π  1 d π ƒ ∗ . ( ) = ( ) ( − ) = ( )  G ∈ It is non-degenerate since π([e]) = .

1 Conversely, if π : ℓ (G) B(H) is a ∗-representation then we can → define a representation π : G U(H) by π() := π([]). →

Sketch proof for G locally compact: (Details in Davidson [Dav96, p.183] or Dixmier [Dix96, §13.3])

The integral (III.1.1) is bounded since Z

π(ƒ ) = ƒ ()π() d k k G Z ƒ () π() d = ƒ 1. ≤ G | |k k k k Thus π 1. The fact that π is a ∗-representation is a direct check (althoughk k ≤ one needs to use the modular function Δ when checking the ∗-invariance). For non-degeneracy, we need to use the approxi- mate unit  of norm 1 functions supported in decreasing neighbour- hoods of e.

1 Conversely, if π : L (G) B(H) is a ∗-representation then we can define a unitary representation→ of G as follows. Let  H be such 1 ∈ that  = π(ƒ ) for some ƒ L (G) and  H (such  are dense in H by non-degeneracy). Define∈ ∈

π   π  π ƒ  π ƒ  1 . ( ) = ( ) ( ) := ( ( − )) · One can check, that

π  π   1 , ( ) = SOT lim ( ( − )) −  · which implies in particular that π() is well-defined.

With the limit definition above, one gets π()π(y) = π(y) and π(e) =  π  π  1 p  1 π  . One also gets ( ) 1 and ( )− = ( − ) 1 so ( ) is π  1k πk ≤∗ k k k k ≤ unitary and ( − ) = ( ) .

66 Example III.1.12. The left regular representation

λ G U L2 G λ  ƒ y ƒ  1y : ( ( )); ( ( ) )( ) = ( − ) → integrates to the representation

1 2 λ : L (G) B(L (G)); λ(ƒ )g = ƒ ∗ g → 1 2 1 2 for ƒ L (G), g L (G). In particular, we get ∀ƒ L (G), g L (G) ∈ ∈ ∈ ∈ ƒ ∗ g 2 ƒ 1 g 2. k k ≤ k k k k One can also check this directly.

III.1.5 The C∗-enveloping algebra of a Banach ∗- algebra

We want to make a C∗-algebra out of the L1-algebra. Let us start by considering the general problem of embedding a Banach ∗-algebra in a C∗-algebra.

The easiest way to turn a Banach algebra into a C∗-algebra is to represent it on a Hilbert space. Given a ∗-representation of norm 1

π : A B(H) → ∗ the closure π(A)k · k is a C -algebra.

NB: This will not be a embedding of A unless π is faithful.

We would like to define a “universal representation” M M  = π : A B( Hπ) → which is the direct sum of all ∗-representations of norm 1. This has annoying set-theoretic problems (the class of all representations is too big to be a set), so we need to restrict the class of representations in the direct sum.

We could take all irreducible representations (up to unitary equiva- lence). But this is a little hard to work with because of the fact that not every infinite dimensional ∗-representation decomposes into ir- reducibles.

In the end, it is technically easiest to use the cyclic representations. These have the nice property that every ∗-representation decom- poses into a direct sum of cyclic representations (by Zorn’s Lemma).

67 On the other hand, the class of all cyclic representations (up to uni- tary equivalence) is not too large thanks to the GNS construction.

Let A be a Banach ∗-algebra. We can still talk of states: A state is ∗ a linear functional σ : A C of norm 1 with σ( ) = 1 for all  A. Then we can define the inner→ product ∈

∗ , b σ = σ(b ) 〈 〉 just as for a C∗-algebra. The GNS completion gives a representa- tion πσ : A B(Hσ). Moreover, any cyclic representation of A is iso- morphic to→ the GNS representation of the vector state for the cyclic vector. Definition III.1.13. The C∗-enveloping algebra of a Banach ∗-algebra A is the norm closure of (A), where M  = πσ σ S(A) ∈ ∗ is the direct sum of all GNS representations of A. It is denoted C (A). Lemma III.1.14. The enveloping C∗-algebra has the following uni- versal property: any ∗-homomorphism of a Banach ∗-algebra A into ∗ ∗ a C -algebra B factors through C (A): ∗ A C (A) B. → → ∗ Proof. It suffices to take B = B(H) since any C -algebra is isomor- phic to a subalgebra of some B(H). But now we can decompose the resulting representation of A on H into a direct sum of cyclic subrep- ∗ resentations, and these all factor through C (A). Remark III.1.15. What we cannot show for a Banach ∗-algebra is that there are a lot of states. For instance we cannot ensure there are enough states to make a faithful ∗-representation A , B(H) for some H. Constructing states needed the commutative Gelfand→ Theorem, and the characterization of states by σ(1) = σ = 1. Both of these only work for C∗-algebras k k

In fact, for a general Banach ∗-algebra, there are not always enough ∗ states. That means that A C (A) is not always injective. → But we will see that for the L1-algebras of groups this problem doesn’t occur.

III.1.6 The maximal group C∗-algebra

Let G be a l.c. group.

68 ∗ ∗ ∗ Definition III.1.16. The maximal group C -algebra C (G) is the C - 1 enveloping algebra of L (G). 1 ∗ Lemma III.1.17. The map L (G) C (G) is injective, with dense image. →

Proof. By Lemma III.1.14, it suffices to have one faithful ∗-representation 1 π : L (G) , B(H) on a Hilbert space. The regular representation 1 → 2 λ : L (G) B(L (H)) is faithful (exercise). → ∗ 1 Density of the image is automatic, since C (L (G)) is defined as the 1 closure of the image of L (G) in some representation. Proposition III.1.18. There is a bijective correspondence between

• Unitary representations of G, and ∗ • ∗-representations of C (G).

Proof. It suffices to show that there is a bijective correspondence be- 1 ∗ tween ∗-representations of L (G) and ∗-representations of C (G). This is immediate from Lemma III.1.14.

III.1.7 The reduced group C∗-algebra

Definition III.1.19. The reduced C∗-algebra of G is the closure of the regular representation:

C∗ G λ G B L2 G . r ( ) := ( )k · k ( ( )) ⊆ From the bijective correspondence of Proposition III.1.18, the regular C∗ G C∗ G representation gives a map ( )  r ( ). It is sometimes but not always an isomorphism. Remark III.1.20. The groups for which it is an isomorphism are called amenable groups. We will not define amenability in this course. F amenable = Abelian groups are amenable. The free group 2 is not. moyennable

III.2 C∗-algebras of abelian groups

III.2.1 The Pontryagin dual Gˆ

In this section we consider locally compact abelian groups G. Note that abelian groups are always unimodular (Δ 1), since left- and right-invariant Haar measure are equal. ≡

69 ∗ ∗ Lemma III.2.1. If G is abelian, then C (G) is an abelian C -algebra and C∗ G G,ˆ where Gˆ Hom G; T . ( ) ∼= = ( ) Here, Hom(G; T) denotes the set of continuous group homomorphisms from G to the circle. Its topology is induced from the weak-∗ topol- 1 T ogy on L (G). That is, a net of continuous homomorphisms φ : G converges to φ : G T iff → → Z Z 1 φ()ƒ () d φ()ƒ () d ∀ƒ L (G). (III.2.1) G → G ∈

If G is discrete, this is equivalent to pointwise convergence of φ φ. →

1 Proof. For any , b L (G) we have ∈ Z ƒ g  ƒ y g y 1 dy ∗ ( ) = ( ) ( − ) y G Z ∈ ƒ z 1 g z dz y z 1 = ( − ) ( ) (using = − ) z G ∈ = g ∗ ƒ ().

1 ∗ So L (G) is commutative, hence also C (G) by density.

By the commutative Gelfand Theorem

∗ C (G) = C0(X)

∗ where X is the space of characters of C (G), i.e. ∗-representations of 1. These are in bijective correspondence with one- dimensional unitary representations of G, so X Gˆ. ∼= Under the Gelfand transform the topology on X Gˆ is the weak-∗ ∼= topology, i.e.

φ φ → ∗ φ(ƒ ) φ(ƒ ) ∀ƒ C (G) ⇐⇒ → ∈ 1 φ(ƒ ) φ(ƒ ) ∀ƒ L (G), ⇐⇒ → ∈ 1 ∗ since L (G) is dense in C (G). This is exactly Equation (III.2.1) (see Equation III.1.1).

If G is discrete then pointwise convergence of φ is equivalent to (III.2.1) where ƒ is any delta function. But the delta functions have 1 dense linear span in L (G).

70 Definition III.2.2. The set Gˆ is also an with product, unit, and inverse as follows: for φ, ψ Gˆ, ∈ φψ() = φ()ψ() ∀ G, ∈ 1ˆ() = 1 ∀ G, ∈ φ 1  φ  1  G. − ( ) = ( )− ∀ ∈ Moreover, these operations are continuous w.r.t. the weak-∗ topology 1 on L (G) (exercise). The l.c. group Gˆ is called the Pontryagin dual of G.

One can also prove that Gˆ G for any l.c. abelian group G, but we ∼= won’t do that here.

Definition III.2.3. The Gelfand homomorphism

∗ F : C (G) C0(Gˆ) → is called the Fourier transform. In particular, the restriction of the 1 Fourier transform to L (G) is given by Z (Fƒ )(φ) = ƒ ()φ() d.  G ∈ Remark III.2.4. It is immediate that F(ƒ ∗ g) = F(ƒ ).F(g) for all ƒ , g 1 ∈ L (G).

Example III.2.5. 1. G = Z. A homomorphism φ : Z T is completely determined by the image of its generator: φ→1 . Therefore Zˆ T via φ φ 1 . The ( ) ∼= ( ) inverse map is 7→

T θ nθ Z T e [φθ : n e ] Hom( ; ) 3 7→ → ∈ ∗ Z T 1 Z The associated Fourier transform C ( ) C0( ) sends ƒ ℓ ( ) → ∈ to the corresponding Fourier series in C(T). More generally Zˆn Tn. ∼= 2. G = R. Some facts which I will add as exercises: Any continuous group homomorphism φ : R T factors through a group homomor- phism → φ˜ exp( ) R R · T. → → Moreover, any continuous homomorphism φ˜ : R R is automat- ically linear, i.e. φ˜ :  ξ for some θ R. → → ∈

71 R R Admitting this, we get ˆ = via φ ξ. The inverse map is ∼ 7→ R ∼= Rˆ ξ eξ ; • → 7→ The associated Fourier transform is the usual Fourier transform (up to factors of 2π). More generally, Rˆn Rn via ∼= Rn Rn  ξ, ˆ ; ξ e 〈 •〉 → 7→ 3. G = T. eξ R T The homomorphism · : descends to a homomorphism T T Z → Z iff ξ 2π . Therefore Tˆ = via → ∈ ∼ R Rˆ n e2πn ; • → 7→ 1 The associated Fourier transform sends ƒ L (T) to its Fourier Z ∈ coefficients in c0( ). Remark III.2.6. Gˆ is compact iff G is discrete, since

∗ 1 C (G) unital L (G) unital G discrete. ⇐⇒ ⇐⇒

C∗ G C∗ G That deals with ( ). What about r ( )?

Gˆ C∗ G Let us temporarily write r for the Gelfand dual of r ( ) which is also Gˆ Gˆ C∗ G commutative. (We will soon show r = .) The surjection r ( )  ∗ C (G) implies that Gˆr is a subset of Gˆ. If G is abelian then C∗ G C∗ G . Theorem III.2.7. r ( ) = ( )

Proof. We will show that Gˆr = Gˆ. That is, every unitary character φ G, T C∗ G Hom( ) extends continuously to a character of r ( ) via the ∈ 1 usual integration formula on L (G): Z φ(ƒ ) = ƒ ()φ() d.  G ∈ So we need to prove that

1 φ ƒ C ƒ C∗ G : C λ ƒ 2 ∀ƒ L G . ( ) r ( ) = ( ) B(L (G)) ( ) | | ≤ k k k k ∈

∞ 1 Note first that φ L (G) so φƒ L (G). We claim that, for any ƒ 1 ∈ ∈ ∈ L (G) we have λ(φƒ ) = λ(ƒ ) k k k k 72 where λ is the left regular representation of G. To see this, we calcu- 2 late ∀g L (G)), ∈ Z λ φƒ g  φ y ƒ y g y 1 d ( ( ) )( ) = ( ) ( ) ( − ) y G ∈ Z φ  ƒ y φ  1y g y 1 d = ( ) ( ) ( − ) ( − ) y G ∈ Mφλ ƒ Mφ 1 g  , = ( ( ) − )( ) where Mφ is (as usual) the operator of pointwise multiplication by φ 2 T on L (G). But Mφ is unitary since im(φ) . ⊆ Now fix ψ Gˆ some unitary character which extends continuously to C∗ G ∈ Gˆ r ( ) (at least one such character must exist, since r is not empty). φ ψ 1φ Put 0 = − . Then Z Z φ ƒ φ  ƒ  ψ  φ  ƒ  ψ φ ƒ . ( ) = ( ) ( ) = ( ) 0( ) ( ) = ( 0 )  G  G ∈ ∈ So φ ƒ ψ φ ƒ λ φ ƒ λ ƒ . ( ) = ( 0 ) ( 0 ) = ( ) | | | | ≤ k k k k

III.2.2 Plancherel’s Theorem

Theorem III.2.8 (Plancherel’s Theorem). The Fourier transform ex- tends to an isometric isomorphism

2 = 2 F : L (G) ∼ L (Gˆ), → for appropriate choice of Haar measure.

This statement is not quite precise.

1 2 If G is a discrete group, then ℓ (G) ℓ (G). In this case, the isometry 2 2 ⊆ 1 ℓ (G) L (Gˆ) really is an extension of the Fourier transform ℓ (G) → → C(Gˆ) of the previous paragraph. The appropriate Haar measures are counting measure on G and the Haar measure of total mass 1 on Gˆ.

If G is not discrete, then we must start with the restriction of the 1 2 Fourier transform to L (G) L (G) C0(Gˆ). There is no canonical normalization of the Haar measures∩ → in general, which is why all the problems about factors of 2π in the usual Fourier transform.

73 2 Proof (for G discrete). Let [e] ℓ (G) be the delta function at the ∈ identity. Note that the Fourier transform of [e] is the unit 1 C(Gˆ): ∈ F([e])(φ) = φ(e) = 1 ∀φ G.ˆ ∈ ∗ The corresponding vector state for the regular representation of C (G) is

τ(ƒ ) = ƒ ∗ [e], [e] 〈 〉 1 = ƒ (e)(when ƒ L (G)). ∈ It is also a state for C(Gˆ), so by the Riesz representation theorem there is a positive Borel probability measure ν on Gˆ such that Z ∗ τ(ƒ ) = F(ƒ )(φ) dν ∀ƒ C (G). φ Gˆ ∈ ∈ 1 If ψ Hom(G, T) and ƒ L (G) then ∈ ∈Z (F(ƒ ψ))(φ) = ƒ ()ψ()φ() d = (F(ƒ ))(ψϕ).  G ∈ Therefore Z Z Fƒ (ψφ) dν(φ) = F(ƒ ψ)(φ) dν(φ) φ Gˆ φ Gˆ ∈ ∈ Z = τ(ƒ ψ) = ƒ (e)ψ(e) = ƒ (e) = Fƒ (φ) dν(φ). φ Gˆ ∈ Therefore ν is a left-invariant measure, i.e. a Haar measure.

Both of these Hilbert spaces are cyclic representations of C∗ G ( ) ∼= C(Gˆ). ∗ The first is the regular representation of C (G), with cyclic vector [e].

The second is the multiplication representation of C(Gˆ) with cyclic vector 1.

The corresponding vector states are equal: Z (Fƒ )1, 1 = Fƒ (φ) dν(φ) = τ(ƒ ). 〈 〉 φ Gˆ ∈ The uniqueness part of the GNS theorem says that there is a unitary isomorphism 2 2 U : ℓ (G) L (Gˆ) → U e Uλ ƒ U 1 M with : [ ] 1 such that ( ) − = F(ƒ ). Then → 1 Uλ(ƒ )[e] = MF(ƒ ) = Uƒ = F(ƒ ) ⇒ 2 where ƒ denotes the function ƒ ℓ (G). ∈ 74 For the general proof, one needs to work with approximate units again.

III.3 The C∗-algebras of the free group

In this section, we study the maximal and reduced C∗-algebras of a very non-abelian group: the free group.

III.3.1 The free group on two generators

Definition III.3.1. Fix two symbols (called “letters”)  and b.A word  1 b 1 bb 1bb 1b 1b 1 will mean a finite string of the symbols ± , ± , e.g. − − − − , b2 1bb 3 which we simplify as − − . The empty word (of length zero) is also allowed, and is denoted e.

 1 bb 1 A word is reduced if it contains no substring of the form − , − , 1 b 1b , − .

The free group on two generators F2 is the set of all reduced words. Multiplication is given by concatenation of strings, and reduction, e.g. b2b 1 . b 2b2 b2 1b2. ( − ) ( − ) = − The inverse is given by reversing a word and inverting each letter: b2b 1 1 b 1b 2 1. ( − )− = − − − The unit is the empty word e.

The free group is the universal group on two generators, in the sense that if G is any group and g, h G are any two elements, then there is a homomorphism ∈ φ : F2 G → such that φ() = g and φ(b) = h. We simply send a word in , b to the corresponding word in g, h.

It is non-abelian (since b = b) and torsion-free: it contains no ele- ments of finite order. 6

III.3.2 The Cayley graph

Definition III.3.2. The Cayley graph of a group G with respect to a generating set S is the graph whose vertex set V and edge set E are vertex = sommet, edge 75 = arête V = G, E g, h G G h gs 1 s S . = {( ) = ± for some generator } ∈ × | ∈ s 1 g 1h The generator ± = − will be called the edge label of the edge (g, h) Example III.3.3. 1. The Cayley graph of Z2 (w.r.t. the canonical generators 1, 0 and 0, 1 ) is a grid. ( ) ( ) grid = grille 2. The Cayley graph of F2 (w.r.t. the defining generators , b) is an infinite tree of valence 4.

(See pictures in class.)

The Cayley graph of F2 is a tree, i.e. it contains no loops. Between ev- ery pair of vertices, there is a unique shortest path, called a geodesic. g 1h The edge labels of the geodesic are given by the reduced word − .

III.3.3 The canonical

Let G be a discrete group. The Dirac function at the identity [e] 2 ∈ ℓ (G) is a unit vector. It defines a vector state for the regular repre- sentation: τ ƒ λ ƒ e , e ƒ C∗ G . ( ) := ( )[ ] [ ] ( r ( )) 〈 〉 ∈ 1 If ƒ ℓ (G) then τ(ƒ ) = ƒ , [e] = ƒ (e) is just evaluation at the identity. In particular,∈ 〈 〉 ( 1, if  = e τ([]) = 0, otherwise.

Definition III.3.4. A trace on a C∗-algebra A is a linear functional tr : A C such that → tr(b) = tr(b) ∀nb A. ∈ τ is a trace on C∗ A . It is also faithful. Proposition III.3.5. r ( )

C∗ G It is called the canonical trace on r ( ).

Proof. First we prove that τ(ƒ ) = λ(ƒ )[], [] for any  G. It suf- 〈 〉 ∈ fices to prove this for ƒ = [y] another delta function, since these span 1 a dense subspace of ℓ (G); We calculate

λ([y])[], [] = [y], [] = δy, = δy,e = τ([y]). 〈 〉 〈 〉

76 1 Therefore, for any ƒ ℓ (G), any  G ∈ ∈ τ(ƒ []) = λ(ƒ [])[e], [e] 〈 〉 = λ([]ƒ [])[e], λ([])[e] because [] is unitary 〈 〉 = λ([]ƒ )[], [] 〈 〉 = τ([]ƒ ).

1 Again, by density, we get τ(ƒ g) = τ(gƒ ) for all ƒ , g ℓ (G). ∈ For faithfulness, observe that [e] is a cyclic vector for the regular representation, since

λ([])[e] = [] ∀ G ∈ 2 and these span a dense subspace of ℓ (G). Therefore, the GNS repre- sentation associated to τ is isomorphic to the regular representation, which is faithful.

III.3.4 The canonical trace for F2

In the case of the free group, there is another formula for the canoni- cal trace.

F In this section, I will just write  instead of λ() whenever  2. In  b C∗ G ∈ particular, and are the two unitary generators in r ( ). For any ƒ C∗ F we have Proposition III.3.6. r ( 2) ∈ m n 1 X X  j j  lim  b ƒ b  = τ(ƒ )1. m,n ∞ mn − − → =1 j=1

Here is the idea of the proof. As usual, it suffices to prove the case ƒ = [] for some  G. Then ∈ bjƒ b j  bjb j . − = − − − C∗ G These are all unitaries in r ( ).

 e bjb j  e , j If = then − − = for every , so the averages of the proposition are all equal to [e] = 1.

 e bjb j  If = then the words − − are (almost) all different. We will show6 that these averages tend to 0.

We need some lemmas about operators with orthogonal ranges

77 2 Lemma III.3.7. Let A,B B(H) with m(A) m(B). Then A + B 2 2 ∈ ⊥ k k ≤ A + B . k k k k

Proof.

2 A + B = sp A + B, A + B k k  1〈 〉 k k≤ 2 2 = sp ( A, A + B, B ) A + B .  1 〈 〉 〈 〉 ≤ k k k k k k≤

Lemma III.3.8. Let H = H1 H2 be an orthogonal decomposition ⊕ of a Hilbert space. Let T B(H) map H2 into H1. Let U1,...,Un be unitaries such that U∗U maps∈ H into H for all  j. Then j  1 2 = 6

1 Pn U TU∗ 2 T . n  1   n = ≤ p k k

Proof. First, suppose that T maps all of H into H1.

Then,

2 2 Pn ∗ P ∗ ∗ UTU T U UTU U1 =1  = + =1 1  6 2 2 P ∗ ∗ T U UTU U + =1 1  1 ≤ k k 6 2 2 P ∗ T UTU . + =1  ≤ k k 6 By induction,

2 Pn U TU∗ Pn T 2 n T 2.  1    1 = (III.3.1) = ≤ = k k k k

Now, suppose that T only maps H2 into H1.

Write T = TP1 + TP2 where P is the orthogonal projection onto H. ∗ Note that TP2 maps all of H into H1. Also P1T = (TP1) maps all of H into H1. So each of TP1 and TP2 satisfy the norm estimate (III.3.1). We get

Pn U TU∗ n T 2.  1   2p = ≤ k k The result follows.

2 2 Proof of proposition III.3.6. Put H = L (F ). Let n 1 X j j Sn : B(H) B(H); Sn(T) = b Tb n − → =1

78 denote the averaging of the conjugates by b, b2, . . . , bn. We consider Sn() where  G. ∈ k If  = b for some k then Sn() = . Otherwise, I claim that 2 Sn(λ()) . (III.3.2) k k ≤ pn  b  bk b k,  Z  Note that if the word = then = 0 for some where 0  6 1 ∈ starts and ends with ± . Then S λ  λ bk S λ  λ b S λ  . n( ( )) = ( ) n( ( 0)) ( ) = n( ( 0)) k k k k k k So it suffices to prove the estimate (III.3.2) for words  which begin  1 and end with ± .

Let us decompose H = H1 H2 where ⊕ H y y  1 , 1 = spn{[ ] begins with ± } | H y y b 1 y e . 2 = spn{[ ] begins with ± or = } |  G  1 λ  H H1 If begins and ends with ± then ( ) maps 2 to . The ∈ U j U∗U  j λ b j H operators j = are such that j  = − = ( − ) which maps 1 to H2 for every  = j. The previous lemma immediately gives (III.3.2). 6 Now let m 1 X   S : B(H) B(H); Sm(T) =  T m0 m − → =1 be the average of conjugates by . We get:

k 2 if  = b , S (Sn()) Sn() , 0m pn 6 k k k ≤ k k ≤ 2 if  = b (k = 0), S (Sn()) = S () , 0m 0m pm  e 6 Sk S e k e k k ≤ if = , 0m( n( )) = = 1

Therefore, ( e, if  = e lim S (Sn()) = m,n ∞ m0 0, otherwise. → This completes the proof.

C∗ F III.3.5 Simplicity of r ( 2)

C∗ F is simple, i.e. it has no non-trivial ideals. Theorem III.3.9. r ( 2) Remark . C∗ F III.3.10 It is equivalent to say r ( 2) has no non-trivial closed ideals, since maximal proper ideals are always closed.

79 Proof. J C∗ F C∗  J  τ ∗ Let / r ( 2) be a -ideal. Let , = 0. Then ( ) = 0 τ S S ∗ J ∈ 6 m, n τ ∗ 6 J since is faithful. But 0m( n( )) for all , so ( )1 . J J C∗ F ∈ ∈ Therefore 1 , i.e. = r ( 2). ∈ C∗ F is not isomomorphic to C∗ F . Corollary III.3.11. r ( 2) ( 2)

Proof. Recall that any unitary representation of F2 extends to a rep- resentation of the maximal C∗-algebra. For instance, the trivial rep- F C ∗ F C resentation π : 2 extends to a ∗-homomorphism π : C ( 2) . → ∗ F → The kernel is a non-trivial ideal, so C ( 2) is not simple. ∗ F Remark III.3.12. In fact, C ( 2) has many ideals. For instance, let G be any finite group with two generators, and π : G U(n) any irreducible unitary representation of G (which must be finite→ dimen- sional). Then there is a finite dimensional irreducible representation

F π π˜ : 2  G U(n) → by the universal property of the free group, and hence a finite dimen- ∗ F sional irreducible representation of C ( 2). None of these represen- tations factor through the reduced C∗-algebra.

C∗ F The simplicity of r ( 2) suggests that there is not much hope of do- ing “Fourier theory” on the free group. Ideals in a C∗-algebra play the role of open sets in topological spaces: recall that the ideals of C0(X) are all of the form C0(Y) where Y is an open subset of X. Thus, F roughly speaking, the topology on the reduced dual ( ˆ2)r has no non- trivial open or closed subsets.

The situation is very different for real reductive groups, like G = C SL(n, ), where Gˆr is a very nice (almost Hausdorff) locally . Harish-Chandra described Gˆr for these groups and showed there is a Plancherel measure on Gˆr which allows a generalization of the formula L2 G L2 Gˆ for abelian groups. These are the groups ( ) ∼= ( ) which interest physicists, for instance, so we’re kind of lucky that things work so nicely there.

III.3.6 The Kadison-Kaplansky conjecture

Definition III.3.13. A discrete group is called torsion-free if it has no non-trivial finite subgroups.

Equivalently, G is torsion-free iff it has no elements of finite order, other than the identity.

80 If G has a finite subgroup H, then the "average over H"

1 X pH := [] H  H | | ∈ C∗ G is a projection in r ( ) (direct check). The Kadison-Kaplansky Con- C∗ G jecture says that torsion is the only way to get projections in r ( ): If G is a torsion-free discrete group then C∗ G Conjecture III.3.14. r ( ) has no non-trivial projections.

Example III.3.15. G Z is torsion-free. In this case C∗ Z C T . = r ( ) ∼= ( ) A projection in C(T) is a {0, 1}-valued continuous function on T. But T is connected, so the only projections are the constant functions 0 and 1.

In this sense, the Kadison-Kaplansky Conjecture states that the re- duced dual of a torsion-free group is connected.

The Conjecture has been proven for a very large class of groups, but not all groups. We will prove it for the free group. The method of proof is perhaps as interesting as the result. It uses K-homology, which is a homology theory for C∗-algebras. Remark III.3.16. The existence of a simple unital C∗-algebra with no projections was not known for a long time. The first examples were C∗ F constructed using AF-algebras. But r ( 2) is a very pretty example.

III.3.7 Fredholm modules

C∗ F Here is the fundamental concept which will let us prove that r ( 2) is projectionless.

Definition III.3.17. A Fredholm module for a C∗-algebra A is a pair of Hilbert space representations

π : A B(H), ( = 0, 1) → with a unitary operator U : H0 H1 that essentially intertwines them in the sense that →

Uπ0() π1()U is a compact operator ∀ A. (III.3.3) − ∈

It is notationally convenient to combine H0 and H1 into a single Z/2Z-graded Hilbert space H := H0 H1. ⊕ 81 We keep track of the subspaces H0 and H1 by introducing the grading operator ‚ Œ 1 0 γ = B(H) 0 1 ∈ − so that H0 and H1 are the 1-eigenspaces of γ. A pair (H, γ) (where γ ± 2 is a self-adjoint operator with γ = 1) is called a Z/2Z-graded Hilbert space.

We then put

‚ Œ ‚ ∗Œ π0() 0 0 U π() = and F = 0 π1() U 0 in B(H). We then get the following equivalent definition: ∗ Definition III.3.18. A Fredholm module for a C -algebra A is M = (H, γ, π, F), where (H, γ) is a Z/2Z-graded Hilbert space, π : A B(H) → is a ∗-representation, and F B(H) is a bounded self-adjoint operator satisfying the following conditions:∈

• π() commutes with γ for all  A, ∈ • F anticommutes with γ, i.e. γF = Fγ, − 2 • F = 1

• [F, π()] K(H) for all  A. ∈ ∈ Example III.3.19. Let A C∗ Z C T . Let π π λ be the = r ( ) ∼= ( ) 0 = 1 = 2 Z 2 T 2 Z regular representation on H0 = H1 = ℓ ( ) = L ( ). Let U : ℓ ( ) 2 ∼ → ℓ (Z) be the operator of pointwise multiplication by ( , n , ∞ Z 1 if 0 χ ℓ ( ); χ(n) = ≥ ∈ 1, if n < 0. − Then ‚ ‚ ŒŒ 0 U∗ H, π, F = U 0 C∗ Z is a Fredholm module for r ( ). It turns out that it’s not a very inter- esting one. We’ll see a more interesting one soon. ‚ ‚ ŒŒ 0 U∗ Lemma III.3.20. Let H, π, F be a Fredholm module = U 0 for A. For any projection p A, the operator Up := π1(p)Uπ0(p) : ∈ π0(p)H0 π1(p)H1 is Fredholm. →

Proof. Using the essential intertwiner condition,
\[
U_p^* U_p = \pi_0(p)U^*\pi_1(p)U\pi_0(p)
\equiv \pi_0(p)U^*U\pi_0(p)\pi_0(p) = \pi_0(p) \mod K,
\]
which is the identity on π_0(p)H_0. Similarly, U_p U_p^* ≡ π_1(p) mod K, which is the identity on π_1(p)H_1. So U_p is invertible modulo compact operators, hence Fredholm.

Definition III.3.21. We define
\[
\mathrm{index}_M : \mathrm{Proj}(A) \to \mathbb{Z}, \qquad
\mathrm{index}_M(p) := \mathrm{index}\big(\pi_1(p)U\pi_0(p) : \pi_0(p)H_0 \to \pi_1(p)H_1\big),
\]
where Proj(A) is the set of projections in A.

Remark III.3.22. By the stability of the Fredholm index, index_M is locally constant on Proj(A).

In summary, Fredholm modules can be used to detect projections.
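To make Example III.3.19 a bit more concrete, here is a small numerical sanity check (not from the notes; a sketch assuming Python with numpy). It truncates ℓ²(Z) to the indices −N, ..., N and verifies that the commutator of the multiplication operator χ with the translation λ(1) has rank one, the finite-rank behaviour that makes [F, π(a)] compact on the group algebra (and, as in Example III.3.26 below, trace class).

import numpy as np

N = 20
idx = np.arange(-N, N + 1)           # truncation of l^2(Z) to indices -N..N
dim = len(idx)

# lambda(1): translation by one, e_n -> e_{n+1} (cut off at the boundary).
S = np.zeros((dim, dim))
for k in range(dim - 1):
    S[k + 1, k] = 1.0

# U: pointwise multiplication by chi(n) = +1 for n >= 0 and -1 for n < 0.
chi = np.diag(np.where(idx >= 0, 1.0, -1.0))

# The essential intertwining defect U lambda(1) - lambda(1) U:
C = chi @ S - S @ chi
print(np.linalg.matrix_rank(C))      # prints 1: the commutator is rank one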

III.3.8 Trace-class operators

Definition III.3.23. Let H be a Hilbert space with orthonormal basis (e_i). The trace norm or L¹-norm of an operator T ∈ B(H) is
\[
\|T\|_{L^1} := \sum_i \langle |T| e_i, e_i \rangle.
\]
An operator T ∈ B(H) is called trace class if ‖T‖_{L¹} < ∞. The set of trace class operators is denoted L¹(H).

Compare the Hilbert-Schmidt operators T ∈ HS(H) = L²(H), which are those operators s.t.
\[
\|T\|_{L^2}^2 := \sum_i \langle |T|^2 e_i, e_i \rangle = \sum_i \langle T e_i, T e_i \rangle < \infty.
\]

Some facts (analogous to those for Hilbert-Schmidt operators):

1. Trace-class operators are compact.

2. If T ∈ L¹(H) then the sum of "diagonal matrix entries"
\[
\mathrm{Tr}(T) := \sum_i \langle T e_i, e_i \rangle
\]
is finite and independent of the choice of orthonormal basis (e_i).

3. If P ∈ L¹(H) is a projection then it has finite rank and Trace(P) = rank(P).

4. If T ∈ L¹(H) and S ∈ B(H) then ST, TS ∈ L¹(H) and Tr(ST) = Tr(TS). Moreover,
\[
\|ST\|_{L^1} \le \|S\|_{B(H)} \|T\|_{L^1}.
\]
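In finite dimensions every operator is trace class, so facts 2 and 4 are easy to test numerically. A small sketch (not from the notes; it assumes numpy), computing the trace norm as the sum of the singular values:

import numpy as np

rng = np.random.default_rng(0)
n = 6
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def trace_norm(A):
    # ||A||_{L^1} = Tr|A| = sum of the singular values of A
    return np.linalg.svd(A, compute_uv=False).sum()

# Fact 4: Tr(ST) = Tr(TS), and ||ST||_{L^1} <= ||S||_{B(H)} ||T||_{L^1}.
print(np.isclose(np.trace(S @ T), np.trace(T @ S)))                      # True
print(trace_norm(S @ T) <= np.linalg.norm(S, 2) * trace_norm(T) + 1e-9)  # True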

The reason why we care about trace class operators here is that they will give us another way to calculate the index of Fredholm operators.

Proposition III.3.24. Let T ∈ B(H) be Fredholm. If S is an inverse for T modulo trace-class operators, then
\[
\mathrm{index}(T) = \mathrm{Tr}(1 - ST) - \mathrm{Tr}(1 - TS).
\]

Sketch of proof. One has to show that the right-hand side is independent of the choice of S. Given this, we saw in the section on Fredholm operators that one can choose S such that 1 − ST = P_{ker T} and 1 − TS = P_{(im T)^⊥}, for which the right-hand side is dim(ker(T)) − dim(coker(T)).
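As a quick illustration of the formula (not from the notes), take T to be the unilateral shift on ℓ²(N), T e_n = e_{n+1}, and S = T∗. Then ST = 1 and TS = 1 − P_0, where P_0 is the rank-one projection onto C e_0, so both error terms are trace class and
\[
\mathrm{Tr}(1 - ST) - \mathrm{Tr}(1 - TS) = 0 - \mathrm{Tr}(P_0) = -1 = \dim\ker(T) - \dim\mathrm{coker}(T),
\]
in agreement with the usual computation of the index of the shift.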

III.3.9 Summability of Fredholm modules

Definition III.3.25. The domain of summability of a Fredholm module M = (H, π, F) is the set
\[
\mathcal{A} := \{ a \in A \mid [F, \pi(a)] \in L^1(H) \}.
\]
A Fredholm module M is called summable if its domain of summability is dense in A.

Example III.3.26. The domain of summability of the Fredholm module of Example III.3.19 includes all finite sums
\[
a = \sum_{n \in \mathbb{Z}} a_n [n] \in \mathbb{C}[\mathbb{Z}],
\]
since the commutator [F, a] is finite rank (so trace class). Therefore, this Fredholm module is summable.

In fact, the domain of summability 𝒜 is always a ∗-subalgebra of A (though usually not closed). It is even a Banach ∗-algebra with the norm
\[
\|a\|_{\mathcal{A}} := \|a\|_A + \|[F, \pi(a)]\|_{L^1}
\]
(exercise).
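For instance, the submultiplicativity part of the exercise follows from the derivation property of the commutator together with fact 4 above (a sketch, not in the notes):
\[
\|ab\|_{\mathcal{A}}
= \|ab\|_A + \|[F, \pi(a)]\pi(b) + \pi(a)[F, \pi(b)]\|_{L^1}
\le \|a\|_A\|b\|_A + \|[F, \pi(a)]\|_{L^1}\|b\|_A + \|a\|_A\|[F, \pi(b)]\|_{L^1}
\le \|a\|_{\mathcal{A}}\,\|b\|_{\mathcal{A}},
\]
using ‖π(b)‖ ≤ ‖b‖_A since π is a ∗-representation.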

The point of summability is that it gives us a new way of calculating the M-index of a projection in A.

Lemma III.3.27. Let M be a Fredholm module for A. The linear functional τ_M defined on the domain of summability 𝒜 by
\[
\tau_M(a) := \tfrac{1}{2}\,\mathrm{Tr}(\gamma F [F, a])
\]
is a trace on the domain of summability 𝒜.

(I am suppressing π from the notation here.)

Proof. Recall that the commutator is a derivation in both variables, e.g. [x, yz] = [x, y]z + y[x, z].

For any , b A, ∈ τ b 1 γF F, b γF F,  b M( ) = 2 Tr( [ ] + [ ] ) 1 γF F, b γ F,  Fb F2,  ,  = 2 Tr( [ ] [ ] )(because [ ] = [1 ] = 0) − 1 γF F, b γFb F,  Fγ γF . = 2 Tr( [ ] + [ ])(because = ) −

This is clearly symmetric in a, b.

Lemma III.3.28. If p ∈ 𝒜 is a projection in the domain of summability, then
\[
\tau_M(p) = \mathrm{index}_M(p).
\]
In particular, τ_M(p) is an integer.

Proof. We have

\[
\begin{aligned}
\tau_M(p) = \tau_M(p^2) &= \mathrm{Tr}(\gamma F \pi(p)[F, \pi(p)]) \qquad \text{(see previous proof)} \\
&= \mathrm{Tr}(\gamma F \pi(p)^2 [F, \pi(p)]) \\
&= -\mathrm{Tr}(\gamma \pi(p)[F, \pi(p)] F \pi(p)) \\
&= -\mathrm{Tr}\big(\gamma(\pi(p) F \pi(p) F \pi(p) - \pi(p))\big) \\
&= \mathrm{Tr}\big(\gamma(\pi(p) - \pi(p) F \pi(p) F \pi(p))\big).
\end{aligned}
\]
A calculation gives
\[
\pi(p) F \pi(p) F \pi(p) =
\begin{pmatrix}
\pi_0(p) U^* \pi_1(p) U \pi_0(p) & 0 \\
0 & \pi_1(p) U \pi_0(p) U^* \pi_1(p)
\end{pmatrix}.
\]
So
\[
\begin{aligned}
\tau_M(p) &= \mathrm{Tr}(\pi_0(p) - U_p^* U_p) - \mathrm{Tr}(\pi_1(p) - U_p U_p^*) \\
&= \mathrm{index}\big(U_p : \pi_0(p)H_0 \to \pi_1(p)H_1\big) \qquad \text{(by Proposition III.3.24)} \\
&= \mathrm{index}_M(p).
\end{aligned}
\]

III.3.10 A Fredholm module for C∗_r(F_2)

We now apply the above machinery to C∗_r(F_2). The Fredholm module will be defined geometrically using the Cayley graph (V, E) of F_2. Let us fix a root vertex, say e. Let us also add one more element to E, which we will denote by ξ_e.

Put H_0 = L²(V) and H_1 = L²(E ∪ {ξ_e}). We will denote the basis vectors by [v] (for v ∈ V) and [ξ] (for ξ ∈ E) respectively.

Note that H_0 = L²(F_2), and we equip it with the regular representation π_0 = λ. On the other hand, we can separate the edges into horizontal and vertical edges, and each family is in bijection with F_2 (by taking left-hand vertices and bottom vertices, respectively). Thus, H_1 ≅ L²(F_2) ⊕ L²(F_2) ⊕ C. We equip H_1 with two copies of the regular representation, with the zero representation on span{[ξ_e]}. That is, π_1 = λ ⊕ λ ⊕ 0. In other words, π_0 and π_1 are the representations induced from the obvious translation actions of F_2 on V and E respectively.

Recall that we have fixed a root vertex e of the tree. If v ∈ V, we let ξ_v denote the first edge along the geodesic path from v back towards e. If v = e, we get ξ_e (the extra element). Note that the map v ↦ ξ_v is a bijection V → E ∪ {ξ_e}. Thus, the operator

U : H0 H1; [] [ξ] → 7→ is a unitary.

Let us show that it is an essential intertwiner. Consider π_1(g)Uπ_0(g^{-1}). This sends [v] to the first edge along the geodesic path from v to g. This is different from the edge towards e only if v lies on the geodesic between e and g. That is, π_1(g)Uπ_0(g^{-1})[v] = U[v] for all but a finite number of vertices v. Thus, π_1(g)U − Uπ_0(g) is finite rank for all g ∈ F_2. By density, we obtain a summable Fredholm module M = (H, π, F).
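The finiteness claim above can also be checked by a direct computation with reduced words. The following sketch (not from the notes; plain Python, with hypothetical helper names such as edge_towards) models vertices of the Cayley tree as reduced words over a, b and their inverses A, B, labels each edge by its endpoint further from e, and lists the vertices v where π_1(g)Uπ_0(g^{-1})[v] differs from U[v] for g = ab; only the vertices on the geodesic from e to g appear.

from itertools import product

def reduce_word(w):
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()              # cancel a pair x x^{-1}
        else:
            out.append(c)
    return ''.join(out)

def inv(w):
    return ''.join(c.swapcase() for c in reversed(w))

def edge_towards(v, g):
    """Label of the first edge on the geodesic from vertex v towards g.
    Returns None when v == g: that vector is the translate of [xi_e],
    which pi_1 = lambda + lambda + 0 sends to zero."""
    if v == g:
        return None
    step = reduce_word(v + reduce_word(inv(v) + g)[0])   # one step towards g
    return v if len(v) > len(step) else step              # endpoint further from e

def U_label(v):
    return 'xi_e' if v == '' else v                       # U[v] = [xi_v]

g = 'ab'                                                  # the group element ab
N = 4
vertices = [''] + [''.join(w) for n in range(1, N + 1)
                   for w in product('aAbB', repeat=n)
                   if reduce_word(''.join(w)) == ''.join(w)]

# pi_1(g) U pi_0(g^{-1}) [v] is the first edge on the geodesic from v to g.
differ = [v for v in vertices if edge_towards(v, g) != U_label(v)]
print(differ)   # only the vertices on the geodesic from e to g: ['', 'a', 'ab']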

Proposition III.3.29. The trace τ_M = τ (the canonical trace of C∗_r(F_2)) on the domain of summability.

Proof.  C∗ F It suffices to check this on the group elements [ ] r ( 2). ∈ If  = e, then [e] = 1 is a projection. We have π0(e) = d while π e L2 E ξ τ e 1( ) is the projection onto ( ) = [ e]⊥. One gets that M([ ]) = ndexM([e]) = 1.

If  = e then 6 τM([]) = Trce(γF[F, π()]) = Trce(γ(π() Fπ()F). − But translation by  fixes no vertices or edges, so the operators π(), Fπ()F have no nonzero diagonal entries. Thus, the trace is zero.

III.3.11 Kadison-Kaplansky for F_2

Now, the fruit of all our labour.

Theorem III.3.30. C∗_r(F_2) has no non-trivial projections.

Proof. Let p ∈ C∗_r(F_2) be a projection. Let 𝒜 be the domain of summability of the above Fredholm module M.

If p ∈ 𝒜, we get
\[
\tau(p) = \tau_M(p) = \mathrm{index}_M(p) \in \mathbb{Z}.
\]
But p is positive of norm at most 1, so τ(p) ∈ [0, 1]. If τ(p) = 0 then p = 0 since τ is faithful. If τ(p) = 1 then τ(1 − p) = 0, so p = 1.

Finally, a standard argument shows that the projections in 𝒜 are dense in the projections in A. Doing this requires the holomorphic functional calculus on the dense Banach ∗-subalgebra 𝒜. Here is the argument.

Let p ∈ A be a projection, and let ε > 0. We assume ε < 1/2. Choose any q = q∗ ∈ 𝒜 with ‖p − q‖ < ε/2 (possible since 𝒜 is a dense ∗-subalgebra; replace q by its real part if necessary). Since Sp(p) ⊆ {0, 1}, we get Sp(q) ⊆ B(0; ε/2) ∪ B(1; ε/2). The function h which is constant 0 on {Re(z) < 1/2} and constant 1 on {Re(z) > 1/2} is holomorphic on a neighbourhood of Sp(q). So p_0 = h(q) ∈ 𝒜 and has Sp(p_0) = h(Sp(q)) ⊆ {0, 1}, i.e. p_0 is a projection in 𝒜. Moreover, ‖h − z‖_∞ < ε/2 on Sp(q) (where z is the identity function on C as usual), so ‖p_0 − q‖ ≤ ε/2. Therefore ‖p_0 − p‖ < ε.

Therefore, by continuity of τ, we get τ(p) = 0 or 1 for all projections p ∈ C∗_r(F_2), and the result follows.
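In finite dimensions the element h(q) is just the spectral projection of q onto the part of its spectrum above 1/2, which makes the approximation step easy to visualise. A sketch (not from the notes; it assumes numpy):

import numpy as np

rng = np.random.default_rng(1)
n = 5
# p: orthogonal projection onto a random 2-dimensional subspace.
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))
p = Q @ Q.T
# q: a small self-adjoint perturbation of p.
E = rng.standard_normal((n, n))
E = (E + E.T) / 2
q = p + 0.05 * E / np.linalg.norm(E, 2)

# h(q): spectral projection of q onto eigenvalues greater than 1/2.
vals, vecs = np.linalg.eigh(q)
V = vecs[:, vals > 0.5]
p0 = V @ V.T

print(np.allclose(p0 @ p0, p0), np.allclose(p0, p0.T))   # p0 is a projection
print(np.linalg.norm(p0 - p, 2))                         # and it is close to p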
