
Simplicity and Minimality in Crossed Products

Jackson Morris

A Thesis Presented to the Honors Tutorial College in Partial Fulfillment of the Requirements for Graduation from the Honors Tutorial College with the degree of Bachelor of Science in .

April 2018

Contents

1 Introduction 2

2 General Background 7

2.1 Hilbert Spaces...... 7

2.2 Analysis...... 13

2.2.1 Measure Theory...... 13

2.2.2 Functional Analysis...... 15

2.3 Group Theory...... 20

2.3.1 Fundamentals of Group Theory...... 20

2.3.2 Topological Groups...... 23

2.4 Dynamical Systems...... 25

3 C*-Algebras 33

3.1 Definitions and Examples...... 33

3.2 Positivity...... 40

3.3 Representations...... 45

3.4 Completely Positive Maps...... 50

4 Crossed Products 56

4.1 Amenability...... 56

4.2 Crossed Fundamentals...... 62

4.3 Minimality and Simplicity...... 73

4.4 Minimality in Abelian Groups...... 81

Chapter 1

Introduction

Our objective in this document is to study a construction known as the crossed product of a dynamical system. This construction allows us to take a dynamical system and create a

C*-algebra which encodes information about the original dynamical system. Constructing such algebras out of changing spaces is an established technique, explored initially by Murray and von Neumann in [25], [26], and [24], and later by Zeller-Meier [27], Effros, and Hahn [7].

From this baseline, many, such as Elliott [8], Kishimoto [12], Power [18], Archbold and Spielberg

[1], and Quigg [19] began to explore the relationship between properties of the crossed product and the properties of the base system. For the purposes of this document, we will mostly be following Power [18] as adapted in Davidson [5], and Archbold-Spielberg [1].

But what is a dynamical system? What is a C*-algebra? How does this crossed product even work? Let us answer each of these questions in turn, beginning with dynamical systems.

In broad terms, dynamical systems is the study of iterated maps on spaces. To give the classical example from Brin [3], imagine taking a circle and rotating it by an angle, doing so again and again repeatedly. The field then asks questions about this system: how do points travel over time? Do points initially close together stay close together? How frequently does a point come back to where it started? These questions are easy if not trivial in this case, but give an idea as to the kind of things that are done in the study of dynamical systems.

As for what a C*-algebra is, that is a bit more difficult to pin down. The best way to illustrate the concept is to give some examples and break down why these examples are C*-algebras. For our first example, consider $M_2(\mathbb{R})$, the set of two-by-two matrices with real coefficients. Recall from linear algebra that such a matrix defines a linear transformation on $\mathbb{R}^2$, the real plane. This collection of matrices will be our first example of a C*-algebra.

The first important property this set has is that it is closed algebraically. You can multiply a matrix by a number to get another matrix, and you can take the sum or product of two matrices to get another matrix. It is also important that we can define a “norm” for each matrix. The word “norm” has a precise mathematical definition, but we needn’t worry about it here. We need only understand that a norm is a generalization of the idea of length. For our real matrices, the norm measures how much the matrix’s linear operation stretches vectors, without paying heed to how much it might rotate them. Two more important, but technical, properties this set satisfies are that it is closed and complete with respect to the norm. These properties can collectively be thought of as there not being any holes in our set, or anything missing that should be there.

The most important condition for a C*-algebra, however, is that of the adjoint, or “*”, operation. It is this * that distinguishes C*-algebras from other kinds of algebraic constructions and is, in fact, part of the name. For our matrices, the adjoint is the transpose, the operation that flips the matrix along the diagonal. The adjoint needs to satisfy a small collection of basic algebraic properties, which the transpose does satisfy in this case. It is this collection of norm, closure, completeness, and adjoint which characterizes a C*-algebra. In general, C*-algebras are defined over the complex numbers, not just the reals. Furthermore, we can generalize this example to matrices of any dimension, where the adjoint (now the conjugate transpose) acts analogously on $\mathbb{C}^n$. For the details of the matrix example, please see Example 3.1.3.

A second example which is illustrative of the breadth of these concepts is that of the algebra of continuous functions on a space, such as the circle. For a visualization of this, imagine the collection of various ways to stretch a short, circular length of string. The details are a bit more technical than matrices, but it’s also fairly easy to see that they are a C*-algebra in their own right.

A natural question at this point is what either of these concepts has to do with the other at all. And this is where the crossed product construction and the function algebra example come in. If we have a dynamical system, the set of continuous functions on the underlying space is again a C*-algebra, and one that the dynamical system acts upon. For a visualization, return to the example of the rotating circle with a string stretched over it. As we rotate the “base” circle of the string, we rotate the string itself. And this is where much of the theory comes in, giving us a means to consolidate this system into one C*-algebra that encodes information about the original system. The details of this are highly technical, but in broad strokes, we begin by simply smashing the structures together, which creates an algebraic structure that needs only a norm to be a C*-algebra. To obtain one, we represent the entire interwoven system on an appropriate type of space, and use the “largest” of these representations to define the norm.

After completing the “proto”-algebra with respect to this norm, we get a C*-algebra that we call the crossed product.

Using this, we will explore connections between the original dynamical system and this crossed product. In particular, we want to find equivalent conditions for the “simplicity” of the crossed product. Simplicity is an algebraic property concerning the lack of a certain kind of substructure, one that is invariant under multiplication. And as it turns out, there is also a property of dynamical systems based on the lack of invariant substructures, known as minimality. A dynamical system is minimal if it has no smaller part that the iterated map preserves. In fact, the minimality of a dynamical system turns out to be equivalent to the simplicity of its crossed product. Our primary objective in this text is to rigorously show this equivalence.

We begin with a basic overview of various background material. First, we discuss Hilbert spaces, which can be thought of as infinite-dimensional generalizations of $\mathbb{C}^n$. These will be primarily utilized for the aforementioned representations of C*-algebras. Next, we cover measure theory, which is a broadly applicable generalization of the idea of area or volume. After some useful but technical results, we cover groups, a generalized structure to which the integers belong. Finally, we cover many basic ideas in dynamical systems, providing formal definitions for the concepts we’ve discussed in this introduction and several other highly useful results.

In the next chapter, we begin to dig into the basics of C*-algebras. We naturally start with formalized definitions and examples, most of which have been discussed in this introduction. The most important addition here is that the bounded operators on a Hilbert space are a C*-algebra, in Example 3.1.7. The next major topic is that of the spectrum (Definition 3.2.1). This is a little technical, but we can see it as a mutual generalization of the idea of a function’s range and a matrix’s eigenvalues. And, in the vein of a positive function, we can call an element of a C*-algebra positive if its entire spectrum is positive. A big result from this section is the Continuous Functional Calculus (Theorem 3.2.18), which lets us take continuous functions on the spectrum of an element and apply them to the element in the C*-algebra. We can then use this in Lemma 3.2.20 to show the extremely useful characterization of positive elements as those of the form $b^*b$.

Next, we deal with those aforementioned representations of C*-algebras. Bounded operators on Hilbert space are quite easy to work with, so our representations will be maps from C*-algebras to these operator spaces. From our examples, this is also fairly natural: matrices are the bounded operators on $\mathbb{C}^n$, and Hilbert space is a generalization of $\mathbb{C}^n$. It is also in this section that we define ideals, which are the invariant structures mentioned earlier in connection with simplicity. The most important result from this section is the GNS construction (Example 3.3.17, Theorem 3.3.18), which gives us that for any C*-algebra, such a representation onto the bounded operators of a Hilbert space exists. The final section of this chapter deals with completely positive maps, which will come into play in later results. The most important result here is the Stinespring Dilation Theorem (Theorem 3.4.6), which gives us an easy characterization of all completely positive maps.

For our final chapter, we begin with a discussion of amenability. This is a technical concept that primarily serves as a means to make a finer distinction in the formal discussion of the crossed product. After this, we start the legwork to finally define the crossed product. We begin by defining a C*-dynamical system (Definition 4.2.1). The dynamical system construction we mentioned earlier is an example of this, but the construction is more general. From this, we go through the outlined steps to define the crossed product in Definition 4.2.8, with the remainder of the section hammering out the technical details. It is in the final two sections that the major results are shown. In Theorems 4.3.4 and 4.3.9 we show the equivalence between simplicity of the crossed product and minimality of the dynamical system. In the last section, we expand beyond those crossed products generated by classical dynamical systems, and in Theorem 4.4.14 we show the equivalence of simplicity and a generalized kind of minimality, subject to certain conditions.

Some may question the purpose of such an investigation. There are three general answers that apply here. The first is that the pursuit of expanded knowledge is always worthwhile. The second is that all abstract mathematical study comes with the potential for direct applications in physics, biology, or computer science. The third is that mathematical study comes with natural applications to abstract mathematics itself. In particular, the subjects we cover here have ramifications in functional analysis, group theory, dynamical systems, and operator algebras, among others. But beyond these, as this is a study mostly from the perspective of operator algebras, we can use the existing applications of C*-algebras (the modeling of phenomena in quantum mechanics) as a justification for further study into them.

Chapter 2

General Background

We begin the technical discussion of this document by delving into the wide swathe of background material. In this section, we will discuss the fundamental ideas of dynamical systems that we will be using, as well as the large amount of requisite background for C*-algebras. We will also discuss several more technical results that we will nonetheless need on occasion.

2.1 Hilbert Spaces

Our first background topic is the subject of Hilbert space. The simplest way to think of a Hilbert space is as a generalization of $\mathbb{C}^n$ to infinite dimensions. Historically, Hilbert spaces have been used frequently in quantum physics and optimization problems, notably for modeling spaces of motion. Mathematically, they are used frequently in the study of partial differential equations and in functional analysis. For our purposes, the concept will come up most frequently in our discussions of C*-algebras.

We begin our discussion with the definitions. The first characteristic that defines a Hilbert space is that it has an inner product structure.

Definition 2.1.1. (Inner Product Space) Let $X$ be a vector space over $\mathbb{C}$. We call $X$ an inner product space if there is a function $\langle \cdot, \cdot\rangle$ from $X \times X$ to $\mathbb{C}$ that satisfies the following conditions:

• $\forall x, y \in X,\ \langle x, y\rangle = \overline{\langle y, x\rangle}$,

• $\forall x, y, z \in X,\ a \in \mathbb{C},\ \langle ax + z, y\rangle = a\langle x, y\rangle + \langle z, y\rangle$,

• $\forall x \in X,\ \langle x, x\rangle \ge 0$, and $\langle x, x\rangle = 0 \iff x = 0$.

Example 2.1.2. ($\mathbb{C}^n$) An archetypal example of an inner product space (and eventually a Hilbert space) is $\mathbb{C}^n$. Naturally, $\mathbb{C}^n$ is a vector space over $\mathbb{C}$. Let $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n) \in \mathbb{C}^n$. We define an inner product on $\mathbb{C}^n$ by

$$\langle x, y\rangle = \sum_{i=1}^n x_i \overline{y_i}.$$

To verify this is an inner product, we first need to show that $\langle y, x\rangle = \overline{\langle x, y\rangle}$:

$$\langle y, x\rangle = \sum_{i=1}^n y_i \overline{x_i} = \sum_{i=1}^n \overline{x_i \overline{y_i}} = \overline{\sum_{i=1}^n x_i \overline{y_i}} = \overline{\langle x, y\rangle}.$$

We then need to show that it is linear in the first argument:

$$\langle ax + z, y\rangle = \sum_{i=1}^n (ax_i + z_i)\overline{y_i} = a\sum_{i=1}^n x_i \overline{y_i} + \sum_{i=1}^n z_i \overline{y_i} = a\langle x, y\rangle + \langle z, y\rangle.$$

Finally, to show $\langle x, x\rangle$ is nonnegative, note that

$$\langle x, x\rangle = \sum_{i=1}^n x_i \overline{x_i} = \sum_{i=1}^n |x_i|^2.$$

This is a sum of nonnegative terms, and as such, is nonnegative. If it is zero, then as each $|x_i|^2$ is nonnegative, each $|x_i|^2$ must be zero, so $x_i = 0$ for all $i$ and thus $x = 0$.
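The axioms just verified can also be checked numerically. The Python sketch below (the vectors and scalar are arbitrary illustrative choices, not taken from the text) confirms conjugate symmetry, linearity in the first argument, and positivity for this inner product on $\mathbb{C}^3$:

```python
def inner(x, y):
    # <x, y> = sum_i x_i * conj(y_i), the inner product from Example 2.1.2
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j, 0.5j]
y = [2 - 1j, 1j, 4.0]
z = [0.5, 1 + 1j, -2j]
a = 3 - 2j

# conjugate symmetry: <y, x> = conj(<x, y>)
assert abs(inner(y, x) - inner(x, y).conjugate()) < 1e-12
# linearity in the first argument: <a x + z, y> = a<x, y> + <z, y>
ax_z = [a * xi + zi for xi, zi in zip(x, z)]
assert abs(inner(ax_z, y) - (a * inner(x, y) + inner(z, y))) < 1e-12
# positivity: <x, x> is real and nonnegative
assert abs(inner(x, x).imag) < 1e-12 and inner(x, x).real >= 0
```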

To discuss the other defining characteristic of Hilbert space, we will need to work with the inner product a bit more. The second property is completeness, which is concerned with the convergence of sequences. However, with only an inner product, we do not have a basis for deciding whether or not a sequence converges. Thus, our first goal is to establish a notion of magnitude, or length, of elements in our space. We do this through the framework of what is known as a normed space.

Definition 2.1.3. (Normed Space) Let $X$ be a vector space over $\mathbb{C}$. We call $X$ a normed space if there is a “norm” function $\|\cdot\|$ from $X$ to $\mathbb{R}$ such that:

• $\forall x \in X,\ \|x\| \ge 0$, and $\|x\| = 0 \iff x = 0$,

• $\forall x \in X,\ \lambda \in \mathbb{R}$ or $\mathbb{C},\ \|\lambda x\| = |\lambda|\|x\|$,

• $\forall x, y \in X,\ \|x + y\| \le \|x\| + \|y\|$.

The final property is usually referred to as the triangle inequality.

We may then define a norm on any inner product space by setting $\|x\| = \sqrt{\langle x, x\rangle}$.

Lemma 2.1.4. An inner product space $X$ can be made into a normed space with the norm defined by $\|x\| = \sqrt{\langle x, x\rangle}$.

Proof. That $\|x\| \ge 0$ is given exactly by the property of the inner product that $\langle x, x\rangle \ge 0$. That $\|x\| = 0 \iff x = 0$ is similarly given by the equivalent $\langle x, x\rangle = 0 \iff x = 0$. To show the scaling property,

$$\|\lambda x\| = \sqrt{\langle \lambda x, \lambda x\rangle} = \sqrt{|\lambda|^2\langle x, x\rangle} = \sqrt{|\lambda|^2\|x\|^2} = |\lambda|\|x\|.$$

To show the triangle inequality, we will need to make use of the Cauchy-Schwarz inequality. This is a famous inequality which states that, for any inner product, $|\langle x, y\rangle| \le \|x\|\|y\|$. Thus,

$$\|x + y\| = \sqrt{\langle x + y, x + y\rangle} = \sqrt{\langle x, x\rangle + \langle y, y\rangle + \langle x, y\rangle + \langle y, x\rangle}$$
$$\le \sqrt{\|x\|^2 + \|y\|^2 + 2\|x\|\|y\|} = \sqrt{(\|x\| + \|y\|)^2} = \|x\| + \|y\|.$$

Example 2.1.5. ($L^p(\mathbb{R})$) Some examples of norms that are not generated by an inner product are the $p$-norms for $p \ne 2$. For $1 \le p < \infty$, define $L^p(\mathbb{R})$ to be the set of functions $f$ from $\mathbb{R}$ to $\mathbb{C}$ such that $\int_{\mathbb{R}} |f|^p\,dx < \infty$. Additionally, we identify $f$ and $g$ if $\int_{\mathbb{R}} |f - g|^p\,dx = 0$. On $L^p(\mathbb{R})$, we define $\|f\|_p = \left(\int_{\mathbb{R}} |f|^p\,dx\right)^{1/p}$. The details of verifying that this is a norm are on the technical side and beyond what we have currently established, but if interested, please see [22, Theorem 3.9] and the preceding discussion.
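One way to see that the $p$-norms for $p \ne 2$ cannot be generated by an inner product is the parallelogram law, $\|f + g\|^2 + \|f - g\|^2 = 2\|f\|^2 + 2\|g\|^2$, which holds for every norm of the form $\sqrt{\langle f, f\rangle}$. The Python sketch below checks it on discrete $p$-norms, an illustrative stand-in for the integral norms above; the failure for $p = 1$ is the point:

```python
def p_norm(v, p):
    # discrete p-norm: (sum |v_i|^p)^(1/p)
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

f, g = [1.0, 0.0], [0.0, 1.0]
s = [a + b for a, b in zip(f, g)]   # f + g
d = [a - b for a, b in zip(f, g)]   # f - g

def parallelogram_gap(p):
    # zero exactly when the parallelogram law holds for the p-norm
    lhs = p_norm(s, p) ** 2 + p_norm(d, p) ** 2
    rhs = 2 * p_norm(f, p) ** 2 + 2 * p_norm(g, p) ** 2
    return lhs - rhs

assert abs(parallelogram_gap(2)) < 1e-12   # p = 2: law holds
assert parallelogram_gap(1) > 1            # p = 1: law fails, no inner product
```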

From a norm, we can get a metric, a way of measuring how far apart elements of the space are, by setting $\delta(x, y) = \|x - y\|$. With a metric, we may then formally speak about notions of Cauchy sequences and convergence. A sequence $\{x_n\}$ is Cauchy if $\forall \epsilon > 0$, $\exists n \in \mathbb{N}$ such that for $m, k \ge n$, $\delta(x_m, x_k) < \epsilon$. A sequence $\{x_n\}$ converges to a point $x$ if for all $\epsilon > 0$ there exists an $n \in \mathbb{N}$ such that if $m \ge n$, $\delta(x, x_m) < \epsilon$. We then call a metric space where every Cauchy sequence converges complete.

Definition 2.1.6. (Complete) If $X$ is a metric space such that for every Cauchy sequence $\{x_n\}$ in $X$, there is an $x \in X$ that $\{x_n\}$ converges to, we call $X$ complete.

With this, we may finally give a formal definition of Hilbert space.

Definition 2.1.7. (Hilbert Space) A Hilbert space is an inner product space that is complete with respect to the metric generated by the inner product.

We now turn to discussing examples of Hilbert spaces. We begin by verifying that Hilbert space is indeed a generalization of $\mathbb{C}^n$.

Example 2.1.8. ($\mathbb{C}^n$) We established the inner product earlier. To establish that $\mathbb{C}^n$ is complete with respect to it, note that the norm generated by this inner product actually aligns with the standard vector length:

$$\|x\| = \sqrt{\sum_{i=1}^n |x_i|^2}.$$

Thus, as $\mathbb{C}^n$ is complete under that metric, we know it is complete under the norm generated by the inner product, and therefore $\mathbb{C}^n$ is a Hilbert space.

We have then shown that Hilbert space is indeed a generalization of Cn. It is possible to show that any Hilbert space can be thought of as a possibly infinite-dimensional version of Cn, but the ideas involved are beyond the scope of this document. If you are interested, please see

[9, Chapter 8] for the details.

Example 2.1.9. ($\ell^2(X,Y)$) A Hilbert space that will be useful to us later is the space known as $\ell^2(X,Y)$: the set of square-summable functions from a countable discrete set $X$ to a Hilbert space $Y$. We say a function $f$ from $X$ to $Y$ is “square-summable” if $\sum_{x \in X} \|f(x)\|^2 < \infty$. In fact, this will precisely be our norm on $\ell^2(X,Y)$:

$$\|f\|^2 = \sum_{x \in X} \|f(x)\|^2.$$

The norm is generated by the inner product

$$\langle f, g\rangle = \sum_{x \in X} \langle f(x), g(x)\rangle.$$

To show this is a Hilbert space, we must first show that $\ell^2(X,Y)$ is closed under scalar multiplication and sums. Both are fairly easy:

$$\|\lambda f\|^2 = \sum_{x \in X} \|\lambda f(x)\|^2 = \sum_{x \in X} |\lambda|^2\|f(x)\|^2 = |\lambda|^2\|f\|^2,$$

$$\|f + g\|^2 = \sum_{x \in X} \|f(x) + g(x)\|^2 \le \sum_{x \in X} \left(\|f(x)\|^2 + 2\|f(x)\|\|g(x)\| + \|g(x)\|^2\right)$$
$$= \|f\|^2 + \|g\|^2 + 2\sum_{x \in X} \|f(x)\|\|g(x)\| \le \|f\|^2 + \|g\|^2 + \sum_{x \in X} \left(\|f(x)\|^2 + \|g(x)\|^2\right) = 2\|f\|^2 + 2\|g\|^2.$$

For the second inequality, note that $(\|f(x)\| - \|g(x)\|)^2 \ge 0$ and thus $\|f(x)\|^2 + \|g(x)\|^2 \ge 2\|f(x)\|\|g(x)\|$. We also need to show that the inner product takes finite values. Using the Cauchy-Schwarz inequality,

$$|\langle f, g\rangle| = \left|\sum_{x \in X} \langle f(x), g(x)\rangle\right| \le \sum_{x \in X} |\langle f(x), g(x)\rangle| \le \sum_{x \in X} \|f(x)\|\|g(x)\| \le \frac{1}{2}\sum_{x \in X} \left(\|f(x)\|^2 + \|g(x)\|^2\right) = \frac{1}{2}\left(\|f\|^2 + \|g\|^2\right)$$

is finite.

Next, we wish to verify the properties of the inner product. These are largely inherited from those of $Y$:

$$\langle g, f\rangle = \sum_{x \in X} \langle g(x), f(x)\rangle = \sum_{x \in X} \overline{\langle f(x), g(x)\rangle} = \overline{\langle f, g\rangle},$$

$$\langle af + h, g\rangle = \sum_{x \in X} \langle af(x) + h(x), g(x)\rangle = a\sum_{x \in X} \langle f(x), g(x)\rangle + \sum_{x \in X} \langle h(x), g(x)\rangle = a\langle f, g\rangle + \langle h, g\rangle,$$

$$\langle f, f\rangle = \sum_{x \in X} \langle f(x), f(x)\rangle \ge \sum_{x \in X} 0 = 0,$$

and since this last is a sum of nonnegative terms, it is zero only when each summand is zero, i.e. when $f(x) = 0$ for all $x \in X$, or $f = 0$. Therefore, we have defined an inner product, and by Lemma 2.1.4, this is enough to define a norm, and thus a metric, as well.

It remains to show completeness. Let $\{f_n\}$ be a Cauchy sequence in $\ell^2(X,Y)$. In our metric, this means that for a given $\epsilon > 0$, there is a $k \in \mathbb{N}$ such that $\sum_{x \in X} \|f_n(x) - f_m(x)\|^2 < \epsilon$ for $n, m \ge k$. As the sum is of nonnegative values, this gives us in particular $\|f_n(x) - f_m(x)\|^2 < \epsilon$ for every $x \in X$. Thus, $\{f_n(x)\}$ is a Cauchy sequence in $Y$. As $Y$ is complete, we can find a limit for $\{f_n(x)\}$ for each $x \in X$; call this limit $f(x)$. We must now show that the resulting function $f$ is square-summable. As $\{f_n\}$ is Cauchy, we have that $\|f_n - f_m\|^2 = \sum_{x \in X} \|f_n(x) - f_m(x)\|^2 < \epsilon$, so in particular, for any finite subset $Z$ of $X$, $\sum_{x \in Z} \|f_n(x) - f_m(x)\|^2 < \epsilon$. Then, over this finite subset, we can safely let $m$ run to infinity to get $\sum_{x \in Z} \|f_n(x) - f(x)\|^2 \le \epsilon$. As $X$ is countable, we can enumerate the elements of $X$ as $\{x_i\}$, with $i$ running from $1$ to $\infty$. Then, we can take $Z_k = \{x_i \mid 1 \le i \le k\}$. The earlier discussion gives us that $\sum_{x \in Z_k} \|f_n(x) - f(x)\|^2 \le \epsilon$. We can then let $k$ run to $\infty$ to get that $\sum_{x \in X} \|f_n(x) - f(x)\|^2 \le \epsilon$, i.e. that $f_n - f$ is square-summable. Thus, as $\ell^2(X,Y)$ is closed under addition, $f$ is square-summable as well. This also gives us that $\|f_n - f\|$ goes to zero, so the proof is complete.
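The computations in this example are easy to test numerically. The following Python sketch takes the illustrative finite case $X = \{0, 1, 2\}$ and $Y = \mathbb{C}^2$ (these choices are mine, not from the text) and checks the Cauchy-Schwarz bound and the triangle inequality for the $\ell^2$ inner product:

```python
def y_inner(u, v):
    # inner product on Y = C^2
    return sum(a * b.conjugate() for a, b in zip(u, v))

def l2_inner(f, g):
    # <f, g> = sum over x of <f(x), g(x)>_Y
    return sum(y_inner(f[x], g[x]) for x in f)

def l2_norm(f):
    return l2_inner(f, f).real ** 0.5

# functions X -> C^2 modeled as dicts from {0, 1, 2} to pairs
f = {0: (1 + 1j, 0), 1: (2, -1j), 2: (0, 3)}
g = {0: (1j, 1), 1: (1, 1), 2: (2, -1)}

# Cauchy-Schwarz: |<f, g>| <= ||f|| ||g||
assert abs(l2_inner(f, g)) <= l2_norm(f) * l2_norm(g) + 1e-12
# triangle inequality for the induced norm
h = {x: tuple(a + b for a, b in zip(f[x], g[x])) for x in f}
assert l2_norm(h) <= l2_norm(f) + l2_norm(g) + 1e-12
```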

Finally, we would like to show that the distinction between an inner product space and a Hilbert space is nontrivial; namely, that there is an inner product space which is not complete.

Example 2.1.10. (Finite Sequence Space) Let $H$ be the space of complex sequences in which only finitely many entries are nonzero. It is trivial that $H$ is closed under scalar multiplication. That $H$ is closed under vector addition is also quite easy: if $a$ and $b$ are elements of $H$ with $n$ and $m$ nonzero entries respectively, then $a + b$ has at most $n + m$ nonzero entries, and thus is still in $H$. Similarly, $H$ carries the inner product given by the same formula as on $\mathbb{C}^n$. Thus, we now only need to find a Cauchy sequence in $H$ that does not have a limit in $H$. For this, take the sequence $\{x_i\}$ where $x_1 = (1, 0, 0, 0, \ldots)$, $x_2 = (1, 1/2, 0, 0, \ldots)$, $x_3 = (1, 1/2, 1/3, 0, \ldots)$, and so on. As $\sum_{i=1}^\infty (1/i)^2 = \pi^2/6$ and, for $n, m \ge k$, $\|x_n - x_m\|$ is bounded by $\sqrt{\sum_{i=k}^\infty (1/i)^2}$, the sequence $\{x_i\}$ is Cauchy. However, its limit, the sequence $x = (1, 1/2, 1/3, 1/4, 1/5, \ldots)$, is not in $H$, and thus $H$ is not complete. Therefore, $H$ is an inner product space that is not complete, and thus not a Hilbert space, and we are done.
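The tail estimate driving this example can be illustrated numerically. In the Python sketch below (the indices 50, 80, and the summation cutoff are arbitrary illustrative choices), $\|x_n - x_m\|$ is bounded by the vanishing tail of $\sum 1/i^2$:

```python
def x(n):
    # x_n = (1, 1/2, ..., 1/n, 0, 0, ...), truncated to its nonzero part
    return [1.0 / i for i in range(1, n + 1)]

def dist(u, v):
    # the l^2 distance, padding the shorter sequence with zeros
    big = max(len(u), len(v))
    u = u + [0.0] * (big - len(u))
    v = v + [0.0] * (big - len(v))
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def tail_bound(k, cutoff=100000):
    # numerical stand-in for sqrt(sum_{i > k} 1/i^2)
    return sum(1.0 / i ** 2 for i in range(k + 1, cutoff)) ** 0.5

# distances past index k are controlled by the tail of sum 1/i^2,
# and the tails shrink to zero, so the sequence is Cauchy
assert dist(x(50), x(80)) <= tail_bound(50)
assert tail_bound(1000) < 0.04
```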

The material in this section exclusively comes from [9].

2.2 Analysis

Next, we will need a good amount of background from analysis and functional analysis. We begin with measure theory.

2.2.1 Measure Theory

Measure theory is a fundamental part of analysis and has a wide range of applications. We will need it even to state many of the ideas dealt with later. We begin with the most natural starting point for a discussion of measure theory: the definition of a measure.

Definition 2.2.1. (Measure) A measure on a space $X$ is a construction with three core elements. The first is the space $X$ itself. The second is a $\sigma$-algebra of subsets of $X$ that we consider “measurable”. Finally, the third and most important piece is a function $\mu$ from these measurable sets to $\mathbb{R}^+ \cup \{\infty\}$ such that $\mu(\emptyset) = 0$ and

$$\mu\left(\bigcup_{i=1}^\infty E_i\right) = \sum_{i=1}^\infty \mu(E_i)$$

for any pairwise disjoint collection of measurable sets $E_i$.

To give an intuition of what a measure is, think of it as a generalization of the idea of “area” or “volume” in two- or three-dimensional space. We are assigning a size to each of our measurable sets in a similar vein to those ideas. In fact, a generalization of length/area/volume is one of the canonical examples of a measure.

Example 2.2.2. (Lebesgue Measure) The Lebesgue measure on $\mathbb{R}^n$ is a direct extension of the elementary-school concepts of length, area, and volume. The technical details are a touch in-depth for this document, but on standard open sets and cubes, the Lebesgue measure coincides with these ideas. For further reading, please see [10, Chapter 2]. But we can see that the basic principles of measures apply here: the empty set has no “area”, and if we combine disjoint sets, the “area” of the result is the sum of those of the individual pieces.

There are also a few more off-the-wall examples of measures that are useful for understanding the limits of what the idea covers.

Example 2.2.3. (Counting Measure) The most direct example of a measure is the counting measure. Let $X$ be a set, declare all subsets measurable, and for $E \subseteq X$ finite, let $\mu(E) = |E|$, with $\mu(E) = \infty$ otherwise. As the empty set has zero cardinality, $\mu(\emptyset) = 0$. If $\{E_i\} \subseteq X$ are pairwise disjoint, then the cardinality of their union is the sum of their cardinalities, so

$$\mu\left(\bigcup_{i=1}^\infty E_i\right) = \sum_{i=1}^\infty |E_i|.$$

Therefore, this is a measure. When dealing with discrete sets, the counting measure is often implicitly assumed.

Example 2.2.4. (Atomic Measure) Another example of a measure is one determined by particular points. For the simplest example of such an “atomic measure”, let $X$ be a set and $x$ a distinguished point in $X$. Let all subsets be measurable, and let $\mu(E)$ be $1$ if $x \in E$ and $0$ otherwise. Since $x \notin \emptyset$, $\mu(\emptyset) = 0$. Since at most one member of a pairwise disjoint collection $\{E_i\}$ can contain $x$, the summation property is also easy to see. The general class of atomic measures expands on this: measures with several distinguished points that contribute varying values to a set’s measure. For an example, pick two distinguished points $x_1$ and $x_2$. A subset’s measure will be zero if it contains neither point, $1/2$ if it contains $x_1$ but not $x_2$, $1$ if it contains $x_2$ but not $x_1$, and $3/2$ if it contains both.
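The two-point atomic measure just described is easy to sketch in Python (the point labels are arbitrary placeholders):

```python
def mu(E, x1="x1", x2="x2"):
    # x1 contributes 1/2 and x2 contributes 1 to any set containing it
    return (0.5 if x1 in E else 0.0) + (1.0 if x2 in E else 0.0)

assert mu(set()) == 0.0                    # mu(empty set) = 0
assert mu({"x1"}) == 0.5 and mu({"x2"}) == 1.0
assert mu({"x1", "x2"}) == 1.5             # contains both points
# additivity on a disjoint pair of sets
A, B = {"x1", "a"}, {"x2", "b"}
assert mu(A | B) == mu(A) + mu(B)
```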

One useful fact about measure spaces is that a subset of a measure space can itself be made into a measure space.

Lemma 2.2.5. Let $(X, \sigma, \mu)$ be a measure space and let $Y$ be a measurable subset of $X$. Then, if $\sigma'$ is $\sigma$ restricted to $Y$, i.e. $\sigma' = \{E \cap Y \mid E \in \sigma\}$, and $\mu'(E) = \mu(E)$ for each member $E$ of $\sigma'$, then $(Y, \sigma', \mu')$ is a measure space.

Proof. The first step is to show that $\sigma'$ is indeed a $\sigma$-algebra. This is fairly simple: $\emptyset \cap Y = \emptyset$, so $\emptyset \in \sigma'$, and

$$\bigcup_{i=1}^\infty (A_i \cap Y) = Y \cap \bigcup_{i=1}^\infty A_i, \qquad \bigcap_{i=1}^\infty (A_i \cap Y) = Y \cap \bigcap_{i=1}^\infty A_i.$$

It remains to show closure under complements, but this is fairly simple as well. If $E \in \sigma'$, there is an $E' \in \sigma$ such that $E' \cap Y = E$. Then $E'^c \in \sigma$, and $E'^c \cap Y \in \sigma'$, which is exactly $Y \setminus E$. That $\mu'(\emptyset) = 0$ and that $\mu'$ is countably additive are given entirely by the corresponding properties of $\mu$.

We may now speak about some potential properties that measures might have.

Definition 2.2.6. (Borel Measure) We call a measure “Borel” if its σ-algebra of measurable sets is precisely the Borel sets- the σ-algebra generated by all open sets of the space.

The Lebesgue measure can be shown to be not Borel in its full generality: there are Lebesgue measurable sets that are not Borel [10, Proposition 2.22]. However, since the Lebesgue measure corresponds with area or volume on basic open sets, open sets, and thus all Borel sets, are measurable. Thus, we can simply restrict it to the Borel sets to get a Borel measure. Whether or not the counting or atomic measures are Borel depends on the topology of the space they are defined on: given the discrete topology, for instance, they will be Borel, but if there is any non-Borel set in the $\sigma$-algebra, they will not be.

Definition 2.2.7. (Regular) We call a measure $\mu$ on a space $X$ regular if, for every measurable set $E$,

$$\mu(E) = \inf_{E \subset U} \mu(U) = \sup_{K \subset E} \mu(K),$$

where the $U$ range over open sets and the $K$ over compact sets.

This is essentially a “reasonability” condition. There are non-regular measures, but we will not be dealing with them in the main body of this document. All examples we have discussed are regular.

Definition 2.2.8. (Support) The support of a measure $\mu$ on a space $X$ is the largest closed subset $A$ of $X$ such that for every $a \in A$ and every open neighborhood $O$ of $a$, $\mu(O)$ is positive.

The support can be thought of as the region on which the measure is defined, or at least nonzero. For the Lebesgue or counting measures, this can easily be seen to be the entire space.

For atomic measures, the support is exactly the distinguished points.

Definition 2.2.9. (Translation Invariant Measure) A finite Borel measure $\mu$ on a space $X$ is translation invariant with respect to a homeomorphism $f$ from $X$ to itself if $\mu(f^{-1}(E)) = \mu(E)$ for every Borel subset $E$ of $X$.

It is easy to see that counting measures are translation invariant under any bijection. Similarly, it works out that the Lebesgue measure is invariant under translations. However, atomic measures are translation invariant only when their distinguished points are preserved.

2.2.2 Functional Analysis

Functional analysis is a far more advanced topic than measure theory, but it is where the ideas of operator algebras originated. As such, we will need many ideas from it, though many are complex results that we will need to reference only once. To begin, we wish to discuss the Hahn-Banach Theorem, an often-used theorem that deals with extensions of functionals. To do this, we first need to define a seminorm.

Definition 2.2.10. (Seminorm) A seminorm is a function $\rho$ from a vector space $V$ to $\mathbb{R}^+$ that satisfies the following conditions:

• $\forall v \in V,\ c \in \mathbb{C},\ \rho(cv) = |c|\rho(v)$,

• $\forall u, v \in V,\ \rho(u + v) \le \rho(u) + \rho(v)$.

Note that this is simply the definition of a norm without the requirement that an element of zero norm must be zero itself. With this, we can state the Hahn-Banach Theorem.

Theorem 2.2.11. (Hahn-Banach) Let $V$ be a vector space, $U$ a subspace of $V$, and $f$ a linear functional on $U$. If $\rho$ is a seminorm on $V$ such that $f(u) \le \rho(u)$ for all $u \in U$, then we can extend $f$ to a linear functional $g$ on all of $V$ such that $g(v) \le \rho(v)$ for all $v \in V$.

For a proof, please see [23, Theorem 3.2].

We will use the Hahn-Banach theorem in our discussion of amenability. Next, we wish to discuss the Markov-Kakutani fixed point theorem. Again, we first need to establish a definition, in this case, convexity.

Definition 2.2.12. (Convex) A set X in a vector space is convex if for every x, y ∈ X and every t ∈ [0, 1], tx + (1 − t)y ∈ X.

For an intuition, it is easier to describe sets that are not convex than those that are. A set that is not convex has a “hole” or an “indentation”: if we pick $x$ and $y$ on opposing sides of one of these, the straight line connecting them is not contained within the set.
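This intuition can be tested by sampling convex combinations $tx + (1-t)y$. The Python sketch below (the specific sets and sample points are illustrative choices) contrasts the unit disc, which is convex, with an annulus, a disc with a hole, which is not:

```python
def in_disc(p):
    # the closed unit disc in the plane
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def in_annulus(p):
    # the unit disc with an open hole of radius 1/2 removed
    return 0.25 <= p[0] ** 2 + p[1] ** 2 <= 1.0

def combo(x, y, t):
    # the convex combination t x + (1 - t) y
    return (t * x[0] + (1 - t) * y[0], t * x[1] + (1 - t) * y[1])

x, y = (1.0, 0.0), (-1.0, 0.0)       # opposite sides of the hole
ts = [i / 10 for i in range(11)]

# every combination of two disc points stays in the disc
assert all(in_disc(combo(x, y, t)) for t in ts)
# but the midpoint lands in the hole, so the annulus is not convex
assert not in_annulus(combo(x, y, 0.5))
```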

Theorem 2.2.13. (Markov-Kakutani) If $\{f_s\}$ is a family of continuous linear maps from a convex, compact set $X$ to itself such that $f_s f_{s'} = f_{s'} f_s$ for all $f_s, f_{s'} \in \{f_s\}$, then there is an $x \in X$ such that $f_s(x) = x$ for all $f_s \in \{f_s\}$.

Please see [5, Theorem 7.2.1] for a proof. We will use this in our discussions of amenability and of dynamical systems.

Another idea we can discuss with convex sets is that of their extreme points.

Definition 2.2.14. (Extreme Points) An extreme point $x$ of a convex set $X$ is a point such that $x \ne ta + (1 - t)b$ for any $a, b \in X$ distinct from $x$ and any $t \in [0, 1]$.

For the standard example of a convex set, a polyhedron, the extreme points are just its vertices. Another example is a closed ball, whose extreme points are exactly its boundary sphere.

16 Having established this, we can state a theorem we will use in our discussion of dynamical systems.

Theorem 2.2.15. (Krein-Milman) If $A$ is a nonempty, compact, and convex subset of a locally convex vector space $X$, then $A$ is the intersection of all closed convex subsets of $X$ that contain its extreme points; that is, $A$ is the closed convex hull of its extreme points. In particular, this also implies that $A$ has extreme points.

This is called the Krein-Milman Theorem. Please see [10, Section 14.6] for a proof.

We will also need a corollary of the Hahn-Banach theorem for our discussion of amenability.

Corollary 2.2.16. (Hahn-Banach Separation) Let $V$ be a locally convex vector space over the reals or complexes, and let $A, B$ be disjoint convex subsets of $V$. If $A$ is compact and $B$ is closed, then there exists a continuous linear map $\lambda$ from $V$ to $\mathbb{R}$ or $\mathbb{C}$ and $s, t \in \mathbb{R}$ such that

$$\operatorname{Re}(\lambda(a)) < s < t < \operatorname{Re}(\lambda(b)) \quad \forall a \in A,\ b \in B.$$

For a proof, please see [23, Theorem 3.4].

Something else we can do with topological vector spaces is take tensor products of them. We will have to wait for our discussion of group theory to handle this topic in full rigor, but for now, the concept is easy enough to grasp.

Definition 2.2.17. (Tensor Product) Let $A$ and $B$ be vector spaces. The tensor product $A \otimes B$ is the collection of all finite sums of elements of the form $(a, b)$, where $a \in A$ and $b \in B$, subject to the following identifications:

$$(a + a', b) = (a, b) + (a', b),$$
$$(a, b + b') = (a, b) + (a, b'),$$
$$(a, rb) = (ra, b), \text{ where } r \text{ is a scalar.}$$

Example 2.2.18. (Rnm) One example of this is when A and B are R^n and R^m. We can break down any element in A into sums over basis elements e_1, e_2, ..., e_n, and any element in B into sums over basis elements f_1, f_2, ..., f_m. With the identifications, we can see that we can break down any element of A ⊗ B into sums of r_{i,j}(e_i, f_j). It is also fairly easy to see that these are linearly independent, making {(e_i, f_j)} a basis for A ⊗ B. Since this still scales over the reals thanks to the third identification, this is a real vector space with a basis of size nm. Hence,

R^n ⊗ R^m = R^{nm}.
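The bookkeeping in this example can be checked numerically: NumPy's kron realizes the tensor product of coordinate vectors, so the products of basis vectors should give nm independent vectors in R^(nm). A small sketch (the dimensions n = 3, m = 4 are arbitrary choices):

```python
import numpy as np

# np.kron sends the pair (e_i, f_j) to a coordinate vector of R^(nm).
n, m = 3, 4
e = np.eye(n)  # standard basis e_1, ..., e_n of R^n
f = np.eye(m)  # standard basis f_1, ..., f_m of R^m

# The nm vectors e_i (x) f_j
products = np.array([np.kron(e[i], f[j]) for i in range(n) for j in range(m)])

# They are linearly independent and span R^(nm), so R^n (x) R^m = R^(nm).
assert products.shape == (n * m, n * m)
assert np.linalg.matrix_rank(products) == n * m

# Bilinearity: (a + a', b) = (a, b) + (a', b), and scalars move freely.
a, a2, b = np.random.rand(n), np.random.rand(n), np.random.rand(m)
assert np.allclose(np.kron(a + a2, b), np.kron(a, b) + np.kron(a2, b))
assert np.allclose(np.kron(a, 2.0 * b), np.kron(2.0 * a, b))
```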

We will need to make use of the Tietze Extension Theorem as well.

Theorem 2.2.19. (Tietze Extension) If K is a closed subset of a compact Hausdorff space X and f is a continuous function from K to R, then there exists a continuous function F from X to R such that F|_K = f and ||F||∞ = ||f||∞.

For a proof, see [22, Theorem 20.4].

Another general concept we will use throughout this document is the idea of the dual of a topological vector space.

Definition 2.2.20. (Dual) The dual X^* of a topological vector space X is the set of all continuous linear functionals on X. Note that this is also a vector space, where (f + g)(x) = f(x) + g(x) and (af)(x) = a(f(x)).

One useful example of this duality is between measures and continuous functions.

Example 2.2.21. (Measure-function duality) The idea of a measure as something that measures the size of sets is one appropriate way to think of measures, but another that is far more useful for our purposes is to think of a measure as something to integrate functions against. The details can be found in [10, Chapter 4], but broadly, one defines ∫ f dµ for positive real functions as the supremum of the easy-to-define integrals of simple (piecewise constant) functions that are less than f. One then extends linearly to complex-valued f. In this way, a measure µ defines a linear functional on the set of continuous functions of a space. In fact, if this space X is compact, the Riesz Representation Theorem ([10, Section 21.5]) gives us that the set of positive regular finite Borel measures corresponds exactly to the positive linear functionals in the dual of C(X).

We may then define a standard topology on this dual.

Definition 2.2.22. (Weak-* Topology) A topology T1 on a space X is weaker than a topology T2 on X if every set that is open in T1 is open in T2. The weak-* topology on the dual A^* of some space A is the weakest topology on A^* such that for every a ∈ A the evaluation map a(f) = f(a) is continuous.

We can also define several other topologies through convergence. However, we cannot always use sequences, so a structure of more generality is needed.

Definition 2.2.23. (Nets) Let A be a set with a relation ≤ such that

• For every a ∈ A, a ≤ a,

• For every a, b, c ∈ A, if a ≤ b and b ≤ c, a ≤ c,

• For every a, b ∈ A, there exists a c ∈ A such that a ≤ c and b ≤ c.

This is referred to as a directed set. A net is a function from A to a topological space X. The image of a point a ∈ A is usually denoted by xa, with the entire net denoted {xa}. We say that a net converges to a point x ∈ X if for every neighborhood U of x there exists an a ∈ A such that for every b ∈ A such that a ≤ b, xb ∈ U.

Definition 2.2.24. (Weak Convergence) We say a net {x_i} in a space X converges weakly to x ∈ X if for every f in the dual of X, {f(x_i)} converges to f(x) in R or C as appropriate.

Definition 2.2.25. (Strong Operator Topology) The strong operator topology on B(H), where H is a Hilbert space, is the topology generated by the open sets

S(T, x) = {A ∈ B(H) : ||(T − A)x|| < 1},

where T ∈ B(H) and x ∈ H. Furthermore, we say a net {T_a} converges to T with respect to the strong operator topology if and only if

lim_a T_a x = T x for all x ∈ H.
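A finite sketch may help separate strong from norm convergence. Assuming the (hypothetical) truncation operators P_n on finitely supported sequences, defined by keeping only coordinates below n, one sees ||(P_n − I)x|| → 0 for each fixed x even though ||P_n − I|| = 1 for every n:

```python
import math

def P(n, x):
    """Truncate the finitely supported sequence x (a dict k -> x_k) below n."""
    return {k: v for k, v in x.items() if k < n}

def norm(x):
    return math.sqrt(sum(abs(v) ** 2 for v in x.values()))

def diff(x, y):
    keys = set(x) | set(y)
    return {k: x.get(k, 0) - y.get(k, 0) for k in keys}

x = {k: 1.0 / (k + 1) for k in range(50)}  # a fixed vector

# ||(P_n - I)x|| shrinks as n grows: convergence at the fixed vector x.
errs = [norm(diff(P(n, x), x)) for n in (10, 20, 50)]
assert errs[0] > errs[1] and errs[2] == 0.0

# But ||P_n - I|| = 1 for every n: the unit vector e_n is killed by P_n.
e_n = {30: 1.0}                             # e_30 has norm 1
assert norm(diff(P(30, e_n), e_n)) == 1.0   # so the norm gap stays 1
```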

And finally, we will need the Banach-Alaoglu theorem.

Theorem 2.2.26. (Banach-Alaoglu) If V is some neighborhood of zero in a topological vector space X and if

K = {f ∈ X^* : |f(v)| ≤ 1 for all v ∈ V },

then K is compact in the weak-* topology.

Please see [23, Theorem 3.15] for a proof.

All results regarding measure theory come from [10]. All unlabeled results from functional analysis come from [23].

2.3 Group Theory

In a similar vein to analysis, group theory is another fundamental topic for our discussion of crossed products. In particular, a group is a critical part of the C*-dynamical system construction that we will need to construct crossed products in the first place. Group theory also frequently comes up in discussions of dynamical systems, and as such, we will need it there as well.

2.3.1 Fundamentals of Group Theory

Once again, we can start directly from the core definition.

Definition 2.3.1. (Group) A group G is a set equipped with an operation from G × G to G, usually denoted by concatenation, that satisfies the following properties:

• ∀a, b, c ∈ G, (ab)c = a(bc),

• ∃e ∈ G such that ∀a ∈ G, ae = ea = a,

• ∀a ∈ G, ∃a^{−1} ∈ G such that a^{−1}a = aa^{−1} = e.

If we have that ∀a, b ∈ G, ab = ba, we call the group commutative, or abelian.

Example 2.3.2. (Zn) One example of a group is that of the integers modulo some positive number n. Here, our group operation is addition modulo n. Since addition is associative, it is easy to see that we get associativity. We have an identity element in the class of zero, and inverses are similarly the class of −a.
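These axioms are finite enough to check exhaustively for a small n; a quick sketch for Z_5:

```python
# Verify the group axioms for Z_5 under addition mod 5 by brute force.
n = 5
G = range(n)
op = lambda a, b: (a + b) % n

# Associativity, identity 0, and inverse (-a) mod n for every element.
assert all(op(op(a, b), c) == op(a, op(b, c)) for a in G for b in G for c in G)
assert all(op(a, 0) == a and op(0, a) == a for a in G)
assert all(op(a, (-a) % n) == 0 for a in G)
# Z_n is abelian:
assert all(op(a, b) == op(b, a) for a in G for b in G)
```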

Example 2.3.3. (Rn and Cn) In a similar vein, the real line can also be made into a group under addition. We have associativity, inverses in negation, and an identity in zero. In fact, this can be extended componentwise to Rn. Associativity is still natural, negation can be extended componentwise, and (0, 0, ...0) is still an identity.

C^n can similarly be made into a group under addition. However, if we take out zero, we can turn C \ {0} into a group under multiplication. Associativity and inverses are natural, and our identity is 1.

Example 2.3.4. (Permutation Group) The canonical example of a group is that of a permutation group, namely, the set of ways that we can permute n elements. Our operation here will be composition of permutations. Thus, the identity and inverses are quite natural: the identity is the permutation that preserves all elements, while the inverse of a permutation p which sends a_1 to b_1, a_2 to b_2, and so on is the permutation that sends b_1 to a_1, b_2 to a_2, and so on. Note that this is not an abelian group for n greater than 2. For n = 3, let a be the permutation that flips 1 and 2 and let b be the permutation that sends 1 to 2, 2 to 3, and 3 to 1. Then, ab (first b, then a) is the permutation that flips 2 and 3, while ba is that which flips 1 and 3.
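The n = 3 computation can be checked directly, encoding a permutation as a tuple p with p[i] the image of i (zero-indexed, so the elements 1, 2, 3 become indices 0, 1, 2):

```python
def compose(p, q):
    """The permutation 'first q, then p', i.e. (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

a = (1, 0, 2)   # flips the first two elements (1 and 2 in 1-indexing)
b = (1, 2, 0)   # the 3-cycle sending 1 -> 2 -> 3 -> 1

ab = compose(a, b)
ba = compose(b, a)

assert ab != ba          # S_3 is not abelian
assert ab == (0, 2, 1)   # ab flips the last two elements (2 and 3)
assert ba == (2, 1, 0)   # ba flips the outer elements (1 and 3)
```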

Example 2.3.5. (Circle Group) Another example of a group is that of rotations on the circle.

Our operation here is iteration: performing the first rotation, then the second. Therefore, we naturally have an identity in the 0 rotation that does not move the circle at all, and inverses in rotations of the same magnitude in the opposite direction. We can also see that this is abelian.

In fact, a very natural way to think of the circle group is as the interval [0, 1] under addition modulo 1, by identifying x ∈ [0, 1] with e^{2πix}. This is useful in particular because it allows us to apply a natural measure and a topology on the circle, the former inherited from the Lebesgue measure of R and the latter from the standard topology on R. Another useful way is to think of it as the elements of modulus 1 in C under multiplication, though it should be noted that R^2 and C are identical in topology and measure. These various interpretations will be used interchangeably.

Definition 2.3.6. (Free Group) The free group F generated by a set G is the set of elements of the form ∏_{i=1}^{n} g_i^{±1} for some finite collection of g_i ∈ G. We further assume that any consecutive pairs of g_i and g_i^{−1} are removed. Our group operation is then defined by concatenating these products together, with potential reductions where the concatenation is made.

If you recall the tensor product from the last section, the formal definition is that of the free abelian group on A × B, subject to the identifications mentioned.

One of the most interesting and relevant things we can do with a group is to have them act upon a set.

Definition 2.3.7. (Group Action) We say a group G acts on a set X if for any g ∈ G and x ∈ X we can define a gx ∈ X such that

• ex = x ∀x ∈ X where e is the identity of G,

• h(gx) = (hg)x ∀g, h ∈ G.

In essence, we are representing our group in the set of maps from the set to itself.

The next thing which we will need is a dual for groups. The definition is much like that for topological vector spaces.

Definition 2.3.8. (Dual Group) If G is a group, we call the set of homomorphisms from G to the unit circle in C the dual group of G and denote it Gˆ.

We can then show that the dual of a group is itself a group.

Lemma 2.3.9. If G is a group, Gˆ is a group.

Proof. For two homomorphisms f, h from G to the unit circle in C, define fh(g) = f(g)h(g). It is then easy to see the identity in Gˆ is the homomorphism that takes everything to 1 and that inverses exist in the form of f −1(g) = f(g)−1. Associativity is similarly trivial, and thus, Gˆ is a group.
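As a concrete sketch of this lemma, take G = Z_6 (an assumed example): its characters are χ_j(k) = e^{2πijk/6}, and pointwise multiplication obeys χ_a · χ_b = χ_{(a+b) mod 6}, with χ_0 as identity and χ_{6−j} inverting χ_j:

```python
import cmath

n = 6
def chi(j):
    """The character chi_j of Z_6, a homomorphism into the unit circle."""
    return lambda k: cmath.exp(1j * 2 * cmath.pi * j * k / n)

# chi_a * chi_b = chi_{(a+b) mod n}: the product of characters is a character.
a, b = 2, 5
for k in range(n):
    assert abs(chi(a)(k) * chi(b)(k) - chi((a + b) % n)(k)) < 1e-9

# The trivial character chi_0 is the identity, and chi_{n-j} inverts chi_j.
for k in range(n):
    assert abs(chi(0)(k) - 1) < 1e-9
    assert abs(chi(a)(k) * chi(n - a)(k) - 1) < 1e-9
```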

Example 2.3.10. (Integer-Circle Duality) Let us first consider the dual group of the integers. Any homomorphism from Z to the unit circle is naturally defined entirely by the image of 1. Since 1 generates Z, if we have an image of 1, we are forced to define the image of n to be the nth iterate of the image of 1. Thus, the dual group of Z is parametrized exactly by the possible images of 1, and is therefore the circle group.

We can also show the dual group of the circle is Z. We do this by considering the circle as a subset of C. First, note that any homomorphism from the circle to itself must preserve the identity. It also must preserve the roots of unity, as if c^3 = 1, then 1 = f(1) = f(c^3) = f(c)^3.

Let f be a theoretical homomorphism from the circle to itself and let p and q be two distinct primes. Assume that f sends the first root of unity of order p, u_p, to u_p^n and that of order q, u_q, to u_q^m for distinct n, m. Then, consider the roots of unity of order pq. If we assume that f sends the first root of unity of order pq, u_{pq}, to u_{pq}^k, then we know by homomorphism properties that every root of unity x of order pq must be sent to x^k. However, u_p and u_q are both powers of u_{pq}, but are sent to distinct powers. Therefore, n must equal m, and for all primes p, f must send u_p to u_p^n for the same n. By a similar argument to that above, we can then show that f must send x to x^n for any root of unity x. Since these correspond to the rationals in the [0, 1] construction of the circle, they are dense, and thus any arbitrary point can be approximated by roots of unity. Therefore, by continuity, for any point x in the circle, f must send x to x^n. Thus, the dual of the circle is Z.

To further discuss the dual, we will need to begin mixing some topological concepts in with our group theory.

2.3.2 Topological Groups

This is done through the definition of a topological group.

Definition 2.3.11. (Topological Group) A topological group G is a group equipped with a topology such that

• The operation × from G × G to G that takes (a, b) to ab is continuous.

• The operation −1 from G to G that takes a to a−1 is continuous.

Example 2.3.12. (Discrete) Note first that any group can be made into a topological group by equipping it with the discrete topology. Since every set is open, every operation on the set is continuous, and thus, the topology satisfies the axioms of a topological group.

Example 2.3.13. (R) Another example of a topological group is the real line with respect to addition. Starting with R, the topology naturally gives that for any open set O, −O is open as well, so the inverse property holds. For the operation property, first note that the inverse image of a point x of R is the set of (a, b) in R^2 such that a + b = x, a line of slope −1. Therefore, if we take a basic open interval in R of the form {x | n < x < m}, its inverse image is the set {(a, b) | n < a + b < m}, which is itself an open set in R^2. Thus, our group operation is continuous and R is a topological group.

Example 2.3.14. (Circle) The circle group is also a topological group, using the natural topol- ogy on [0, 1]. Since 1 − x is a continuous function on the real numbers, we easily get that our inverse is continuous. Similarly, as the function that takes (a, b) to a + b on R2 is continuous, so is our group operation on the circle. In fact, this can be seen as a sub-result of the previous example thanks to this way of considering the circle group.

With this, we can discuss one more useful property of the dual group.

Theorem 2.3.15. If G is an abelian discrete group, then its dual group Gˆ is compact.

Please see [13, Theorem 12] for a proof.

Another thing we will need from the structure of topological groups is the concept of the Haar measure.

Definition 2.3.16. (Haar measure) A Haar measure on a locally compact topological group X is a nonzero regular Borel measure µ such that µ(K) is finite for every compact subset K of X and µ is translation invariant with respect to the group operation.

It is also established and useful to know that every locally compact group has a Haar measure.

Theorem 2.3.17. For a locally compact topological group G, there exists a Haar measure µ on G.

Please see [23, Theorem 5.14] for a proof of this.

Example 2.3.18. (Discrete Groups) Let G be a group endowed with the discrete topology. Note that this is locally compact, as any finite subset containing a given point is a compact neighborhood of that point. On this group, the counting measure serves as a Haar measure. This µ is naturally finite for any compact, and thus finite, subset of G. It is also naturally translation invariant, as translation does not affect the cardinality of a subset. Finally, as every set is open and every compact set is finite, regularity is entirely natural.

Example 2.3.19. (R) Let G be the real line under addition. G here is not compact, but it is fairly easily seen to be locally compact simply by taking bounded neighborhoods of points. And in fact, the Lebesgue measure restricted to the Borel sets suffices as a Haar measure in this case. Note first that it is invariant under translation by definition. To show that it is finite on compact sets, note that since R is a metric space, its subsets are compact if and only if they are closed and bounded. And since they are bounded, they are contained in the closed interval of their bounds. Since this has finite measure, our compact subset must itself have finite measure and we are done.

Example 2.3.20. (Circle Group) Let G be the circle group. Since G is compact, it is naturally locally compact. Here, the Lebesgue measure (considering G to be [0, 1]) works as a Haar measure. It is by definition translation invariant. It is also defined to be regular. And since it is finite on the entire circle, it is naturally finite on all compact subsets of G.

General group theory comes from [11]. Duality and topological groups come from [13] and [23].

2.4 Dynamical Systems

Finally, we want to cover a basic background in dynamical systems. While this document is mostly written from the perspective of operator algebras, there is a collection of ideas and results that we will need to use to prove our desired equivalence. Once again, we can start from the basic definition, that of a classical dynamical system.

Definition 2.4.1. (Classical Dynamical System) By a classical dynamical system, we mean a compact Hausdorff space X and a homeomorphism σ from X to itself. We will often denote this (X, σ).

Example 2.4.2. (Rotations) The archetypical example of a dynamical system is that of iterated rotations on the circle. Our space X will be the circle in C of elements of norm 1. Our homeomorphism will be a rotation of this circle. Considering the circle as a group, this can be thought of as repeatedly composing by a specified element α that corresponds to that rotation.

Example 2.4.3. (Sequence Space) Another example of a dynamical system is in sequence space. Let X be the set of all sequences of zeroes and ones indexed by Z. For a topology, let {x_i} be a sequence in this space. A basic open neighborhood of {x_i} will be the set of all sequences {y_i} such that y_j = x_j for m ≤ j ≤ n, where m and n are integers with m < n. Then, let σ be the map that shifts the entire sequence down one, i.e. sends {x_i} to {x_{i+1}}. It is clear that this is a homeomorphism.

Now, the association with the standard idea of “iterated map upon a system” for dynamical systems is clear. X is the space and σ is the function we are iterating upon X. A type of dynamical systems that is central to this paper is a minimal dynamical system.

Definition 2.4.4. (Minimality) A classical dynamical system (X, σ) is minimal if X contains no nonempty proper closed subsets Y such that σ(Y) = Y.

To give a rationalization for the name, this essentially means we cannot find a dynamical system using the same homeomorphism on a proper subset of X. Note that injectivity and continuity are inherited by the restriction, and because σ(Y) = Y, the restriction is also surjective. That the inverse is continuous is similarly inherited, and thus σ|_Y is a homeomorphism. In a similar vein to translation invariance of measure, we can define a translation invariant set under a dynamical system.

Definition 2.4.5. (Translation invariant set) A set E ⊆ X in a classical dynamical system (X, σ) is translation invariant if σ^{−1}(E) ⊆ E.

Example 2.4.6. (Rational Rotations) For a classical dynamical system that is not minimal, take the rotations on the circle from Example 2.4.2 when the rotation α in question is rational. If it is rational, then after some number n of iterations of α, the map returns to the identity. So, for a particular point x, {x, α(x), α^2(x), ..., α^{n−1}(x)} forms a proper closed subset that is invariant under α.

Example 2.4.7. (Irrational Rotations) For an example that is minimal, consider the iterations of an irrational rotation on the circle. Assume that there is a closed, nonempty α-invariant subset X of the circle. Then, as there is a point x in X and X is invariant under α, α^{−n}(x) ∈ X for every n ∈ N, since α^{−1}(X) ⊆ X. We wish to show that this orbit is dense in the circle. Begin by considering the circle as an interval through the identification x ↔ e^{2πix}. Here, our rotation α is represented by an a such that α = e^{2πia}, and α^n corresponds to taking na (mod 1). Note, considering it in this way, that the first time na crosses over 1, na (mod 1) ≤ a, as otherwise (n − 1)a would be an earlier crossing. It cannot be equal, however, as a is irrational. Thus, we can find an m ∈ N such that ma (mod 1) is arbitrarily small, which we can then iterate to get a power of a in any open subset of the interval. Therefore, {na} is dense in the interval, and thus {α^{−n}(x)} is dense in the circle. Since X is closed and contains a dense set, X must be the whole circle. Thus, there are no nonempty proper closed sets that are invariant under α, and thus, the system is minimal.
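The density claim can be tested numerically: the orbit {na mod 1} of an irrational a leaves only tiny gaps in [0, 1), while a rational rotation visits only finitely many points. A sketch, with a = √2 − 1 as an arbitrary irrational choice:

```python
import math

a = math.sqrt(2) - 1                      # an irrational rotation angle
orbit = sorted((n * a) % 1.0 for n in range(1, 2000))

# The largest gap between consecutive orbit points (wrapping around 1).
gaps = [orbit[i + 1] - orbit[i] for i in range(len(orbit) - 1)]
gaps.append(orbit[0] + 1.0 - orbit[-1])
assert max(gaps) < 0.01    # every interval of length 0.01 meets the orbit

# Contrast: a rational rotation by 2/5 has an orbit of only 5 points.
rational_orbit = {round((n * (2 / 5)) % 1.0, 9) for n in range(2000)}
assert len(rational_orbit) == 5
```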

Returning to the measure-theoretic definition of translation invariance, we can show that every dynamical system has an invariant measure.

Lemma 2.4.8. Every classical dynamical system (X, σ) has a translation invariant measure with respect to σ.

Proof. Denote by S the function from C(X) to itself given by Sf = f ◦ σ^{−1}. If we define a norm on functionals on C(X) by ||K|| = sup{||Kf||∞ : ||f||∞ = 1}, it is easy to see that S is an automorphism of norm 1. We can then consider the induced dual map S^* on the space M(X) of finite regular Borel measures on X defined by ∫ f dS^*µ = ∫ Sf dµ. From this, we know S^* is linear on M(X). Define a norm on M(X) by

||µ|| = sup{ |∫ f dµ| : ||f||∞ = 1 }.

Since S is of norm one, if we take f and µ to be of norm one,

|∫ f dS^*µ| = |∫ Sf dµ| ≤ 1.

Thus, ||S^*|| ≤ 1. To show the reverse inequality, let ε > 0, let µ be of norm 1, and let f be a function such that |∫ f dµ − 1| < ε. If we define f′(x) = f(σ(x)), then we can see ∫ f′ dS^*µ = ∫ Sf′ dµ = ∫ f dµ, and thus ||S^*|| ≥ 1 − ε. Thus, ||S^*|| = 1. Furthermore, S^*(µ)(X) = µ(σ^{−1}(X)) = µ(X), so if P is the subset of M(X) of “probability measures”, or measures such that µ(X) = 1, S^* maps P into itself. P is clearly convex, as

aµ_1(X) + (1 − a)µ_2(X) = a · 1 + (1 − a) · 1 = 1.

Our next step is to show that it is compact in the weak-* topology. We begin by describing the open unit ball of C(X) under the L∞ norm. These are exactly those functions such that |f(x)| < 1 for all x ∈ X. By the definition of the integral, |∫ f dµ| ≤ ||f||∞ µ(X), so for µ ∈ P and f in this open unit ball, |∫ f dµ| ≤ 1. Thus, the collection of measures such that µ(X) ≤ 1 is compact by Theorem 2.2.26. However, since P is a closed subset of this compact set, it must be compact itself. Thus, by Lemma 2.2.13, S^* has a fixed point, call it µ_0. For any Borel set E,

µ_0(σ^{−1}(E)) = S^*µ_0(E) = µ_0(E).

Therefore, µ_0 is a translation invariant measure and we are done.

Continuing with the theme of measures on a dynamical system, we want to discuss a particular kind of dynamical system with a measure that is known as ergodic.

Definition 2.4.9. (Ergodic) Let µ be a Borel measure on X and (X, φ) be a classical dynamical system. We call this entire system ergodic if µ is translation invariant with respect to φ, µ(X) = 1, and if E is a measurable set such that φ^{−1}(E) = E, then µ(E) = 0 or 1.

By an ergodic measure, we mean a measure on a classical dynamical system that makes the entire system ergodic.

Example 2.4.10. (Rotations) For an intuition for ergodicity, let us look at the familiar rotation examples. We will take µ to be the standard Lebesgue measure on the circle seen as the interval. First, let us show the rational rotations are not ergodic. Let α be a rational rotation and n be the denominator of the rotation, so that α^n(x) = x for every x. Let E be a subset of the line of measure less than 1/n but greater than zero. Let F = ⋃_{i=0}^{n−1} α^i(E). As the rotation does not change the measure of a set, 0 < µ(F) ≤ Σ_{i=0}^{n−1} µ(α^i(E)) = nµ(E) < 1. By construction F is invariant with respect to α but has measure strictly between 0 and 1; thus, the rational rotations are not an ergodic system.
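The invariant set in this construction can be built exactly for a concrete rational rotation; the sketch below takes n = 4 and E = [0, 1/(2n)), so the union F of the iterates has measure exactly 1/2:

```python
from fractions import Fraction

# Build F for the rotation a = 1/4, working with exact interval endpoints.
n = 4
a = Fraction(1, n)
E = (Fraction(0), Fraction(1, 2 * n))        # the interval [0, 1/(2n))
F = [((E[0] + i * a) % 1, (E[1] + i * a) % 1) for i in range(n)]

measure = sum(r - l for l, r in F)           # the intervals are disjoint
assert measure == Fraction(1, 2)             # strictly between 0 and 1

# Rotating F by a permutes its intervals, so F is invariant under the rotation.
rotated = sorted(((l + a) % 1, (r + a) % 1) for l, r in F)
assert rotated == sorted(F)
```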

Next, we wish to show the irrational rotations are ergodic. Let α be an irrational rotation and assume that F is an α-invariant set of nonzero measure. As F is of nonzero measure, there exists some x ∈ F. Then, as F is translation invariant, α^{−i}(x) ∈ F for every i ∈ N. Thus, F contains a dense set and is of nonzero measure, and so must be of measure 1 (more precisely, a Lebesgue density argument upgrades an invariant set of nonzero measure to one of full measure; see [3]). Therefore, the irrational rotations are ergodic.

With our earlier proof that every dynamical system has a translation-invariant measure, we can show that every system has an ergodic measure as well.

Lemma 2.4.11. Every classical dynamical system (X, σ) has an ergodic measure.

Proof. By Lemma 2.4.8, the set of σ-invariant probability measures is nonempty. We begin by showing this set T of σ-invariant probability measures is convex. For σ-invariant probability measures µ_1 and µ_2 and a ∈ [0, 1], aµ_1 + (1 − a)µ_2 is clearly also a σ-invariant probability measure. We also wish to show that T is closed in the weak-* topology. If we can show that T is the inverse image of a closed set in R under maps induced by functions in C(X), then we have that it is closed in the weak-* topology. Consider the constant function f = 1. The map induced by f is simply f(µ) = ∫ 1 dµ = µ(X), and the inverse image of 1 under this map is exactly the set of probability measures. Likewise, σ-invariance is a weak-* closed condition, as it is the intersection over g ∈ C(X) of the closed sets {µ : ∫ Sg dµ = ∫ g dµ}. Thus, T is closed, and as a closed subset of the weak-* compact set of measures from the proof of Lemma 2.4.8, T is compact. Then, we use Theorem 2.2.15 to see that T is the closed convex hull of its extreme points; notably, it has extreme points. Our goal is now to show these extreme points are ergodic. If a measure µ is not ergodic, then there is a translation invariant set E such that 0 < µ(E) < 1. Let

µ_1(A) = µ(E)^{−1} µ(A ∩ E), and

µ_2(A) = µ(E^c)^{−1} µ(A ∩ E^c),

where E^c refers to the complement of E. If we can show that both µ_1 and µ_2 are in T, we are done, as it is easy to see that µ = µ(E)µ_1 + (1 − µ(E))µ_2 = µ(E)µ_1 + µ(E^c)µ_2, so µ is a proper convex combination of two distinct elements of T and hence not an extreme point. µ_1 is fairly simple;

µ_1(σ^{−1}A) = µ(E)^{−1} µ(σ^{−1}A ∩ E) = µ(E)^{−1} µ(σ^{−1}A ∩ σ^{−1}E) = µ(E)^{−1} µ(σ^{−1}(A ∩ E)) = µ(E)^{−1} µ(A ∩ E) = µ_1(A).

Note the translation invariance of µ and E are used here. The proof that µ_2 is likewise translation invariant is entirely parallel; one must simply note that E^c is translation invariant as well.

We will also need a technical lemma known as Rohlin’s Lemma.

Lemma 2.4.12. (Rohlin's Lemma) If (X, σ, µ) is an ergodic dynamical system such that the support of µ is not finite, then for every ε > 0 and every positive integer n, there is a Borel subset F of X such that the σ^i(F) are disjoint for 0 ≤ i ≤ n − 1 and the measure of their union is greater than 1 − ε.

Proof. First, we select an integer k > ε^{−1}. By induction, we wish to create a set E of nonzero measure such that {σ^i(E)} is pairwise disjoint for 0 ≤ i < kn. For the base case, any set suffices: {E} is pairwise disjoint. Proceeding by induction, assume E is such that σ^i(E) is pairwise disjoint for 0 ≤ i < m. Consider the set

F = {x ∈ E : σ^{−1}(x) ∉ ⋃_{i=0}^{m−1} σ^i(E)}.

Assume that µ(F) is nonzero. By setting E′ = σ^{−1}(F), we get a set of nonzero measure such that E′ is disjoint from σ^i(E) for 0 ≤ i ≤ m and σ(E′) ⊆ E, and we are done. If µ(F) = 0, then by setting E′ = E \ ⋃_{i=0}^{∞} σ^i(F) and taking G = ⋃_{i=0}^{m−1} σ^i(E′), we have a translation invariant set of nonzero measure, and therefore by ergodicity, of measure one. Assume that for every subset H of E, µ(σ^{−1}(H) \ ⋃_{i=0}^{m−1} σ^i(H)) = 0. Then, by removing σ^i(σ^{−1}(H) \ H) for 0 < i and using ergodicity, we can see that every subset E′ of E is of either measure 0 or 1/m, and thus ⋃_{i=0}^{m−1} σ^i(E′) is of measure 0 or 1. Take a point u in the support of µ and take v to be another point in the support of µ such that v is distinct from σ^i(u) for 0 ≤ i < m. Select U and V to be open neighborhoods of u and v such that the collection {σ^i(U) : 0 ≤ i < m} ∪ {σ^i(V) : 0 ≤ i < m} is disjoint; we may do this since {σ^i(u) : 0 ≤ i < m} ∪ {σ^i(v) : 0 ≤ i < m} is finite. Then, let

U′ = ⋃_{i=0}^{m−1} σ^i(U) and V′ = ⋃_{i=0}^{m−1} σ^i(V).

Since U′ and V′ have nonzero measure, they must intersect G in sets of nonzero measure. However, since they are disjoint, this contradicts that every subset of G has measure 0 or 1. Therefore, there is a subset M of E such that σ^{−1}(M) \ ⋃_{i=0}^{m−1} σ^i(M) is of nonzero measure. By taking

E′ = σ^{−1}(M) \ ⋃_{i=0}^{m−1} σ^i(M),

we get a set of nonzero measure such that E′ is disjoint from σ^i(E) for 0 ≤ i ≤ m, and we are done. Eventually, by proceeding via this induction, we get up to m = kn.

Assume that {E_r} is an increasing chain of subsets such that the σ^i(E_r) are disjoint for 0 ≤ i < kn. From this, we extract a sequence {E_{r_n}} such that sup{µ(E_{r_n})} = sup{µ(E_r)}. Then, if E is the union of {E_{r_n}}, then E must contain each E_r up to a set of measure zero, as the chain was increasing. Therefore, by Zorn's lemma, there is a maximal such E, up to sets of measure zero, such that σ^i(E) is pairwise disjoint for 0 ≤ i < kn.

Let G = σ^{kn−1}(E). For each g ∈ G, define L(g) to be the smallest positive integer such that σ^{L(g)}(g) ∈ E. Suppose that σ^i(g) ∈ σ^j(E) for some i < L(g) and j < kn. Then, if i > j, then σ^{i−j}(g) ∈ E, and if i ≤ j, then g ∈ σ^{j−i}(E), both of which are impossible by the definition of L(g) and the disjointness of the σ^l(E) for 0 ≤ l < kn. Thus, we can define

G_l = {g ∈ G : L(g) = l}

for 1 ≤ l ≤ kn, and

G_0 = {g ∈ G : L(g) > kn}.

Note that G_0 must be of measure zero. If it were not, then E ∪ σ(G_0) would be a larger set than E with its first kn iterates disjoint, contradicting the Zorn's lemma construction. If we take the union

⋃_{j=0}^{kn−1} σ^j(E) ∪ ⋃_{j=1}^{kn} ⋃_{i=0}^{j−1} σ^i(G_j),

then by construction it is σ-invariant and of nonzero measure, so by ergodicity it is of measure one.

Finally, we define

F = ⋃_{s=0}^{k−1} σ^{sn}(E) ∪ ⋃_{j=n+1}^{kn} ⋃_{s ≥ 1, sn < j} σ^{(s−1)n+1}(G_j).

We wish to show that F has the properties we desire. The idea of this construction is that F consists of every nth iterate of the various parts of the full-measure union above. Thus, by construction, the σ^i(F) are disjoint for 0 ≤ i < n. It remains to show that their union has measure greater than 1 − ε.

X − ⋃_{i=0}^{n−1} σ^i(F) = ⋃_{s=1}^{k} ⋃_{1≤i≤j≤n} σ^{(s−1)n+i}(G_{(s−1)n+j}),

which is of measure

Σ_{s=1}^{k} Σ_{1≤i≤j≤n} µ(G_{(s−1)n+j}) ≤ Σ_{s=1}^{k} Σ_{1≤j≤n} nµ(G_{(s−1)n+j}) = nµ(G) = nµ(E).

Since the σ^i(E) are disjoint for 0 ≤ i < kn and have the same measure, nkµ(E) ≤ 1 and nµ(E) ≤ k^{−1} < ε, as desired.

Another thing we can talk about in dynamical systems is the idea of periodic points.

Definition 2.4.13. (Periodic points) A point x in a dynamical system (X, σ) is a periodic point if there is a positive integer n such that σ^n(x) = x.

And to tie this section together, we can show the following nice result.

Lemma 2.4.14. If (X, σ, µ) is an ergodic dynamical system such that the support of µ is not finite, the set of periodic points of X is of measure zero.

Proof. Let

X_n = {x ∈ X | σ^n(x) = x, σ^i(x) ≠ x for 0 < i < n}

be the subset of points in X whose period is exactly n. Then, as each X_n is clearly translation invariant, µ(X_n) = 0 or 1. Assume µ(X_n) = 1 for some n. Then, if I ⊆ X_n and E = ⋃_{i=1}^{n} σ^i(I), then µ(E) = 0 or µ(E) = 1. Pick some x in the support of µ and assume that σ^n(x) is distinct from σ^i(x) for all 0 ≤ i < n. As X is Hausdorff and {σ^i(x) | 0 ≤ i ≤ n} is a finite set, we may find open sets O_0 and O_n with x ∈ O_0 and σ^n(x) ∈ O_n such that σ^i(O_0) ∩ O_n = ∅ for 0 ≤ i < n. Note that because µ is translation invariant, x being in the support implies σ(x), and thus σ^n(x), must be as well. Therefore, O_0 and O_n are of nonzero measure and thus contain some collection of n-periodic points that are themselves of nonzero measure. Let

S_x = {σ^i(k) | 0 ≤ i < n, k ∈ O_0 ∩ X_n}, and

S_n = {σ^i(k) | 0 ≤ i < n, k ∈ O_n ∩ X_n}.

By construction, S_n and S_x are disjoint, translation invariant, and of measure greater than zero, a contradiction of ergodicity. Thus, σ^k(x) = x for some 0 < k ≤ n. As the support is infinite, we may repeat this construction to get a y in the support such that y is periodic as well. Since the set

S = {σ^i(x) | 0 ≤ i} ∪ {σ^i(y) | 0 ≤ i}

is finite and X is Hausdorff, we can find open sets O_x and O_y around x and y such that the collection {σ^i(O_x) | 0 ≤ i < n} ∪ {σ^i(O_y) | 0 ≤ i < n} is disjoint. Since x and y are in the support, the measures of O_x and O_y are nonzero, and by construction, E_x = ⋃_{i=0}^{n−1} σ^i(X_n ∩ O_x) and E_y = ⋃_{i=0}^{n−1} σ^i(X_n ∩ O_y) are disjoint and translation invariant. This contradicts ergodicity, and thus, µ(X_n) = 0 for all n. Therefore, the set of periodic points of X is of measure zero.

The material in this section comes entirely from [3].

Chapter 3

C*-Algebras

With the background out of the way, we may finally begin with the central ideas of this document.

We will be working largely from the perspective of operator algebras, and in particular, C*-algebras. We begin by establishing definitions for C*-algebras and core concepts related to them.

3.1 Definitions and Examples

The natural first step at this point is to ask exactly what a C*-algebra is. Before we can provide a formal definition, however, we need a preliminary definition.

Definition 3.1.1. (Banach Algebra) A Banach algebra B is a normed, complete algebra over the complex numbers such that

||xy|| ≤ ||x||||y||.

In essence, this is a normed space with a norm inequality on element multiplication as well.

Banach algebras then form the basis for the definition of a C*-algebra.

Definition 3.1.2. (C*-algebra) A C*-algebra A is a Banach algebra equipped with an involution * such that (x + y)^* = x^* + y^*, (λx)^* = λ̄x^*, (xy)^* = y^*x^*, (x^*)^* = x, and ||x^*x|| = ||x^*|| ||x||.

The * is often referred to as the “adjoint”, and the last identity is often known as the “C*-identity”. In general, a C*-algebra may or may not have a unit, i.e. an element 1 such that 1x = x1 = x for every x in the algebra. However, for the purpose of this paper, we will assume that every algebra we work with does.

Example 3.1.3. (Square Matrices) A fairly basic example of a C*-algebra is the square (n × n) matrices over the complex numbers. We can sum two elements, multiply by scalars, and multiply elements together as usual for matrices. For the adjoint, we take the conjugate transpose: if we write a matrix in terms of its entries as {a_{i,j}}_{i,j}, the conjugate transpose is {ā_{j,i}}_{i,j}. This representation allows us to easily verify a couple of the required properties:

(A + B)* = ({a_{i,j} + b_{i,j}}_{i,j})* = {ā_{j,i} + b̄_{j,i}}_{i,j} = {ā_{j,i}}_{i,j} + {b̄_{j,i}}_{i,j} = A* + B*,

(λA)* = ({λa_{i,j}}_{i,j})* = {λ̄ā_{j,i}}_{i,j} = λ̄{ā_{j,i}}_{i,j} = λ̄A*.

In order to define a norm on the matrices, we think of an n × n matrix as an operator on C^n by left multiplication. We can then define the norm of a matrix A by ||A|| = sup ||Ax|| over all elements x such that ||x|| = 1. Recall that the norm of an element (x_1, x_2, ..., x_n) in C^n is √(|x_1|^2 + |x_2|^2 + ... + |x_n|^2). We now verify that this is a norm.

To see that ||A|| ≥ 0 is fairly easy: since the norm on C^n is nonnegative, ||Ax|| ≥ 0 for all x, and thus ||A|| ≥ 0. Next, we must show that ||A|| = 0 if and only if A = 0. If A = 0, then Ax = 0 for all elements, so in particular ||Ax|| = 0 for all ||x|| = 1, and so ||A|| = 0. For the reverse implication, if A ≠ 0, there is an x such that Ax ≠ 0. Then we can take y = x/||x|| to get an element of norm 1, and then Ay = Ax/||x||, which is nonzero. Thus the supremum of ||Ax|| for ||x|| = 1 is nonzero and the norm of A is nonzero. For scalar multiplication,

||λA|| = sup ||λAx|| = |λ| sup ||Ax|| = |λ|||A||.

For the triangle inequality, first note that ||A + B|| = sup ||(A + B)x|| = sup ||Ax + Bx||. Since the norm on Cn has the triangle inequality, we know that ||Ax + Bx|| ≤ ||Ax|| + ||Bx||. Then, as these are suprema, we get that

||A + B|| = sup ||Ax + Bx|| ≤ sup(||Ax|| + ||Bx||) ≤ sup ||Ax|| + sup ||Bx|| = ||A|| + ||B||.

Finally, note that

||AB|| = sup ||A(Bx)|| ≤ sup ||A|| ||Bx|| = ||A|| sup ||Bx|| = ||A|| ||B||,

and all desired properties of our norm are satisfied.

It remains to show completeness and the C*-identity. The first step towards the C*-identity comes from the properties of the norm: ||A*A|| ≤ ||A*|| ||A||. To prove the reverse inequality, recall the inner product on C^n defined by ⟨x, y⟩ = Σ_{i=1}^n x_i ȳ_i. There are a few things to note here. First, for any matrix, ⟨Ax, y⟩ = ⟨x, A*y⟩. Second, the Cauchy-Schwarz inequality applies. With all of this, we can show that ||Ax||^2 = ⟨Ax, Ax⟩ = ⟨x, A*Ax⟩ ≤ ||x|| ||A*Ax|| ≤ ||A*A|| ||x||^2. By taking the square root, we get ||Ax|| ≤ √(||A*A||) ||x||. Then, as ||A|| is the supremum over all such x of norm 1, ||A||^2 ≤ ||A*A||. It remains to show that ||A*|| = ||A||. This is given by [9, Theorem 13.5] and its corollaries.

To show completeness, let (A_n) be a Cauchy sequence of matrices. Since the norm is a supremum, we know that ||(A_n − A_m)x|| ≤ ||A_n − A_m|| ||x||, and thus for an x such that ||x|| = 1, (A_n x) is itself a Cauchy sequence. In particular, this gives us Cauchy sequences (A_n e_i), where e_i = (0, 0, ..., 1, ..., 0) has a 1 in the i-th position. Since C^n is complete, there is a limit to this sequence for each i. We may then define a linear operator A on C^n (i.e. a matrix) by setting Ae_i to be the respective limit and extending by linearity to all of C^n. To show that A_n converges to A, we need to show that for any x such that ||x|| = 1, ||(A_n − A)x|| becomes smaller than any ε > 0. For a given x, write x = (x_1, x_2, ..., x_n). Then, since A is linear,

Ax = A(x_1, 0, ..., 0) + A(0, x_2, 0, ..., 0) + ... + A(0, 0, ..., x_n)

= x_1 A(e_1) + x_2 A(e_2) + ... + x_n A(e_n).

Similarly, we can break down each A_n(x) in the same way. Since A_n(e_i) converges to A(e_i) and there are only finitely many e_i, we may find a large enough n such that ||(A_n − A)e_i|| < ε/n for every i. Then

||(A_n − A)x|| = ||Σ_{i=1}^n (A_n − A)(x_i e_i)|| ≤ Σ_{i=1}^n |x_i| ||(A_n − A)e_i|| < Σ_{i=1}^n (ε/n)|x_i| ≤ ε,

since |x_i| ≤ ||x|| = 1 for each i. Thus, M_n(C) is complete.

Example 3.1.4. (Functions on the interval) Another example of a C*-algebra is the set of complex-valued continuous functions on the interval [0, 1]. Sums, scalar multiples, and products of elements are all defined pointwise. For the adjoint, simply take the complex conjugate of the function.

(f + g)* = f* + g* and (λf)* = λ̄f* are elementary properties of conjugation. For the norm, the

L^∞ norm will be used: the supremum of the modulus of elements in the range of f. By definition, this norm is nonnegative, and it is zero only when the function is zero; if f is zero everywhere but a set of measure zero, continuity implies it is zero everywhere. Similarly, since we are taking moduli,

||λf|| = |λ|||f|| is also easy to see. For the triangle inequality,

||f + g|| = sup |f(x) + g(x)| ≤ sup |f(x)| + sup |g(x)| = ||f|| + ||g||.

Finally,

||fg|| = sup |f(x)g(x)| = sup |f(x)||g(x)| ≤ sup |f(x)| sup |g(x)| = ||f||||g||.

The C*-identity is also fairly easy to see-

||f*f|| = sup |f*(x)f(x)| = sup |f(x)|^2 = ||f||^2 = ||f*|| ||f||.

Completeness is a little trickier. Let (f_n) be a Cauchy sequence in this space. This means that for a given ε > 0, there is a large enough n such that for every x ∈ [0, 1] and every m, k ≥ n, |f_m(x) − f_k(x)| < ε. Thus, for every such x, (f_n(x)) is a Cauchy sequence and thus converges. We can then define a function f by setting f(x) to be the limit of this sequence for every x. It remains to show that f is continuous. Select an n such that for every m ≥ n, ||f_m − f_n|| < ε. In particular, since f(x) = lim f_i(x), |f_n(x) − f(x)| ≤ ε for every x. Then, since f_n is continuous, there is a

δ > 0 such that if |y − x| < δ, then |f_n(y) − f_n(x)| < ε. From our choice of n, we have that both |f_n(y) − f(y)| ≤ ε and |f_n(x) − f(x)| ≤ ε. Therefore, for every y such that |y − x| < δ, |f(y) − f(x)| < 3ε, and f is continuous. It remains to show that (f_n) converges to f. Select an n such that ||f_m − f_k|| < ε for m, k ≥ n. Then |f_m(x) − f_k(x)| < ε for all x, and we can let k run to infinity to get that |f_m(x) − f(x)| ≤ ε for all x, and thus ||f_m − f|| ≤ ε.

In general, we can define this on any compact Hausdorff space, not just [0, 1]. In particular, we can perform this exact same construction on the circle group, viewing it as [0, 1] with the endpoints identified.
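The norm properties of this function algebra can also be illustrated numerically by sampling functions on a fine grid, where the sup norm becomes a maximum. A sketch (the grid resolution and the particular functions chosen are assumptions of this illustration):

```python
import numpy as np

# Approximate C([0,1]) by sampling on a fine grid; the sup norm becomes a max.
x = np.linspace(0.0, 1.0, 1001)
f = np.exp(2j * np.pi * x) * (1 + x)   # a continuous complex-valued function
g = np.sin(3 * x) + 1j * x ** 2

def sup(h):
    """Discretized L-infinity norm: the largest modulus over the grid."""
    return np.max(np.abs(h))

# Triangle inequality and submultiplicativity.
assert sup(f + g) <= sup(f) + sup(g) + 1e-12
assert sup(f * g) <= sup(f) * sup(g) + 1e-12

# C*-identity: ||conj(f) f|| = ||f||^2.
assert np.isclose(sup(np.conj(f) * f), sup(f) ** 2)
```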

Example 3.1.5. (L^∞(X)) In a similar vein to the continuous functions, another example of a C*-algebra is what is known as the L^∞ space of functions on a measure space X. First, consider the L^∞ norm defined in the previous example. In general, the previous definition does not quite work for non-continuous functions. Rather than taking the supremum of |f|, we want to take the supremum of |f| outside a set of measure zero. We call functions that, disregarding a set of measure zero, are bounded essentially bounded functions. Our L^∞(X) space will consist of these essentially bounded functions. Furthermore, we identify f and g if ||f − g||_∞ = 0, carving the space into equivalence classes. Once again, by definition, ||f||_∞ is nonnegative, and zero only when f is zero outside of a set of measure zero, and therefore in the equivalence class of the zero function. Scalar multiplication is trivial. The triangle and multiplicative inequalities follow from the fact that a union of two sets of measure zero has measure zero. Finally, the proof of the C*-identity is identical to the previous example. We now need only show completeness.

Let (f_n) be a Cauchy sequence in L^∞(X). By the same argument as in the last example, we may construct a function f as the pointwise limit of this sequence (outside a set of measure zero). Also by the last example, we know (f_n) must converge to f. We need only show that f is in L^∞(X), which is an easy consequence of (f_n) being a Cauchy sequence in L^∞(X): for every ε > 0, there exists an n ∈ N such that ||f_m − f_k|| < ε for m, k ≥ n. Thus, we can see that ||f_k − f|| ≤ ε, and thus, for almost every x, |f(x) − f_k(x)| ≤ ε. Since f_k is bounded almost everywhere, so too must f be, and we are done.

Example 3.1.6. (ℓ^∞(G)) The next example of a C*-algebra is ℓ^∞(G), the set of bounded functions on a group G given the discrete topology. By considering G under the counting measure, we can see that ℓ^∞(G) coincides with L^∞(G), so we may once again use the L^∞ norm and conjugation for the adjoint. This ℓ^∞(G) is clearly closed under scalar multiplication, products, and sums. That the norm works with the adjoint to produce a C*-algebra structure was verified in the continuous functions example. Completeness follows by the exact same technique as in the above example.

Example 3.1.7. (B(H)) A very important example of a C*-algebra is the set of bounded, linear maps from a Hilbert space to itself. We call such an operator A bounded if sup ||Ax|| is finite over all elements x of norm 1. In fact, this supremum will be our norm in B(H). One thing to note about this norm is that for any x ∈ H, ||Ax|| ≤ ||A|| ||x||, as x/||x|| is of norm one and A is linear. Addition and scalar multiplication work pointwise, and it is easy to check that B(H) is closed under these. For multiplication, we compose the operators together. It is worth checking that this still produces a bounded operator. If x is of norm one,

||(AB)x|| = ||A(Bx)|| ≤ ||A||||Bx|| ≤ ||A||||B||.

Thus, AB is still bounded. We next verify norm properties. Nonnegativity is inherited from the norm on the Hilbert space being nonnegative itself. If A = 0, then Ax = 0 for all x, so ||A|| = 0.

If ||A|| = 0, then sup ||Ax|| is zero over all norm 1 x, so Ax = 0 for each x of norm one, and thus, A = 0. For scalars,

||λA|| = sup ||λAx|| = |λ| sup ||Ax|| = |λ|||A||.

For the triangle inequality,

||A + B|| = sup ||Ax + Bx|| ≤ sup(||Ax|| + ||Bx||)

≤ sup ||Ax|| + sup ||Bx|| = ||A|| + ||B||.

And for multiplication,

||AB|| = sup ||A(Bx)|| ≤ sup ||A|| ||Bx|| = ||A|| sup ||Bx|| = ||A|| ||B||.

The adjoint of an operator A is an operator A* such that ⟨Ax, y⟩ = ⟨x, A*y⟩ for all x, y ∈ H.

We claim that this will be our adjoint in the C*-algebra as well. For the proof of an adjoint’s existence, please see [9, Theorem 14.1]. The same theorem also gives us that ||A|| = ||A∗||. For some basic properties, let us verify sums and scalar multiplication.

⟨x, (A* + B*)y⟩ = ⟨x, A*y + B*y⟩ = ⟨x, A*y⟩ + ⟨x, B*y⟩ = ⟨Ax, y⟩ + ⟨Bx, y⟩ = ⟨(A + B)x, y⟩,

⟨x, (λ̄A*)y⟩ = λ⟨x, A*y⟩ = λ⟨Ax, y⟩ = ⟨(λA)x, y⟩,

so (A + B)* = A* + B* and (λA)* = λ̄A*.

Attentive readers may notice that this is just a generalization of the matrices example from earlier. For the C*-identity,

||Af||^2 = ⟨Af, Af⟩ = ⟨f, A*Af⟩ ≤ ||f|| ||A*Af|| ≤ ||A*A|| ||f||^2.

Then, we take the square root to get ||Af|| ≤ √(||A*A||) ||f||. Thus, as ||A|| = sup ||Af|| over all f of norm 1, we get

||A*|| ||A|| = ||A||^2 ≤ ||A*A||.

The reverse inequality is again due to the multiplicative property of the norm.

For completeness, begin by taking a Cauchy sequence of operators (A_n). Since (A_n) is Cauchy, we get a Cauchy sequence (A_n x) for each x of norm 1. Thus, we can define an operator A by defining Ax to be the limit of this sequence for every x of norm 1. For any other y, we define Ay = ||y|| A(y/||y||). Thus, if A is a bounded linear operator that (A_n) converges to, we have established the completeness of B(H). Boundedness is fairly easy:

||A|| = sup_x ||Ax|| = sup_x lim_n ||A_n x|| ≤ lim_n sup_x ||A_n x|| = lim_n ||A_n||,

and as (A_n) is Cauchy, this limit is finite. The scalar-multiplication half of linearity is clear from the definition. We then need to show that A(x + y) = Ax + Ay. Since A is linear with respect to scalar multiplication, we may without loss of generality assume x and y are of norm one. Then,

A(x + y) = lim An(x + y) = lim(Anx + Any) = lim Anx + lim Any = Ax + Ay

and A is linear. It remains to show that (A_n) converges to A. For a given ε, select an n such that ||A_k − A_m|| < ε for m, k ≥ n. For any f ∈ H, we then get

||A_m f − A_k f|| ≤ ||A_m − A_k|| ||f|| < ε||f||.

We can then let k go to infinity to get that ||Af − A_m f|| ≤ ε||f||, and thus ||A − A_m|| ≤ ε. Thus, B(H) is complete.

The material in this section comes from [14].

3.2 Positivity

With the basics established, we can state our major goal for this chapter, the Gelfand-Naimark theorem. This states, in short, that we can view any C*-algebra as a C*-subalgebra of the bounded operators on a Hilbert space. This is a fair ways away, however, and we will need to get through a good body of material first. Our first goal will be to establish what it means for an element in a C*-algebra to be positive. To begin, we define the spectrum of an element.

Definition 3.2.1. (Spectrum) The spectrum of an element a in a unital algebra A, denoted σ_A(a), is the set of complex numbers λ such that λ1 − a is not invertible in A.

This may seem like an awkward definition, but in truth it is a generalization of several fairly fundamental concepts: the eigenvalues of a finite square matrix and the range of a function.

Example 3.2.2. (Function Spectrum) In the algebra of complex functions on the interval, when is a function non-invertible? Since our multiplication is pointwise, exactly when it has a zero somewhere in its range. Therefore, for a given f and an element a in C, a − f has a zero exactly when f(x) = a for some x ∈ [0, 1]. Thus, a − f is non-invertible for every a in the range of f, and so the range of f is a subset of σ(f). Conversely, if a ∈ σ(f), then a − f is non-invertible, so there is a zero in the range of a − f, and so a is in the range of f. Thus, we have σ_A(f) = ran(f).

Example 3.2.3. (Matrix Spectrum) In the algebra of matrices, when an element is invertible is a bit less cut-and-dried. However, if λ is an eigenvalue of a matrix A, then we know that Ax = λx for some nonzero vector x. Performing some algebra, we get that (λ1 − A)x = 0. This tells us that λ1 − A is non-invertible (it is a zero divisor, after all), and thus that λ ∈ σ(A) for any eigenvalue λ of A. Conversely, if λ ∈ σ(A), then λ1 − A is not invertible, so λ is a root of A's characteristic polynomial, and thus an eigenvalue of A.
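This identification of the spectrum with the set of eigenvalues can be checked directly. A small numerical sketch (in Python with numpy; the particular matrix and the point 5 tested off the spectrum are assumptions of this illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)   # the eigenvalues of A, here {2, 3}

# For each eigenvalue lam, lam*1 - A is singular (non-invertible),
# so lam lies in the spectrum of A.
for lam in eigenvalues:
    assert abs(np.linalg.det(lam * np.eye(2) - A)) < 1e-9

# A value not in the spectrum gives an invertible lam*1 - A.
assert abs(np.linalg.det(5.0 * np.eye(2) - A)) > 1e-9
```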

Theorem 3.2.4. The spectrum of an element is closed and nonempty.

We will implicitly be using this throughout the paper. Please see [14, Section 1.2] for a proof of this. Another thing we can do with the spectrum is define the spectral radius.

Definition 3.2.5. (Spectral Radius) The spectral radius r(a) of an element a of a C*-algebra A is sup_{λ∈σ(a)} |λ|.

For a simple example, note that the spectral radius of a function f is sup |f(x)| over all x in the domain of f. It can be shown that r(a) = lim_{n→∞} ||a^n||^{1/n}; see [14, Theorem 1.2.7] for a proof. With this, we can show something useful.

Lemma 3.2.6. If a is a self-adjoint element of a C*-algebra, then r(a) = ||a||.

Proof. We know that r(a) = lim_{n→∞} ||a^n||^{1/n}. Since a is self-adjoint, ||a||^2 = ||a|| ||a*|| = ||a*a|| = ||a^2||. Thus, by induction, ||a^{2^n}|| = ||a||^{2^n}. Therefore, r(a) = lim_{n→∞} ||a^n||^{1/n} = lim_{n→∞} ||a^{2^n}||^{1/2^n} = ||a||.

Example 3.2.7. (Examples and counterexamples) Consider the space C(X). Begin by noting that for every f ∈ C(X), since the spectrum of f is its range and its norm is the supremum of the moduli of elements in its range, we have r(f) = ||f||.

For an example of an element that does not have this property, consider the matrix

  0 1   V =   . 0 0

Note that ||V|| = 1, as V(0, 1) = (1, 0), both being norm-one elements. Consider an eigenvector (a, b) of V. We have V(a, b) = (b, 0) = k(a, b), and thus ka = b and kb = 0; since (a, b) is nonzero, we are forced to have k = 0, and thus r(V) = 0.
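Both halves of this example, the nilpotent counterexample and Lemma 3.2.6 for self-adjoint elements, can be verified numerically. A sketch (in Python with numpy; the hermitian matrix chosen is an assumption of this illustration):

```python
import numpy as np

V = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# ||V|| = 1, yet every eigenvalue of V is 0, so r(V) = 0 < ||V||.
assert np.isclose(np.linalg.norm(V, 2), 1.0)
assert np.isclose(max(abs(np.linalg.eigvals(V))), 0.0)

# For a self-adjoint element, Lemma 3.2.6 says r(a) = ||a||.
H = np.array([[1.0, 2.0],
              [2.0, -1.0]])          # H = H*, i.e. hermitian
r = max(abs(np.linalg.eigvals(H)))   # spectral radius
assert np.isclose(r, np.linalg.norm(H, 2))
```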

A useful corollary of this is that homomorphisms between C*-algebras are norm-decreasing.

Corollary 3.2.8. If φ is a unital ∗-homomorphism between two unital C*-algebras A and B, then ||φ(a)|| ≤ ||a|| for any a ∈ A.

Proof. We begin by showing that σ(φ(a)) ⊆ σ(a). If λ1_B − φ(a) = λφ(1_A) − φ(a) = φ(λ1_A − a) is noninvertible, then λ1_A − a must be noninvertible in A; for if λ1_A − a had an inverse c, then φ(c) would be an inverse of λ1_B − φ(a). Thus, as a*a is self-adjoint, by Lemma 3.2.6 we have

||φ(a)||^2 = ||φ(a)*φ(a)|| = ||φ(a*a)|| = r(φ(a*a)) ≤ r(a*a) = ||a*a|| = ||a||^2.

Our goal now is to leverage this material towards a discussion of positivity in C*-algebras.

We begin by showing a small, useful result.

Lemma 3.2.9. If a is an element of a C*-algebra A and ||a|| < 1, then 1 − a is invertible.

Proof. Consider the sequence (a_n) defined by a_n = Σ_{i=0}^n a^i. Since ||a|| < 1, Σ_{n=0}^∞ ||a||^n converges, and thus (a_n) is a Cauchy sequence with a limit α. Consider

α(1 − a) = lim a_n(1 − a) = lim Σ_{i=0}^n (a^i − a^{i+1}) = lim (1 − a^{n+1}) = 1,

since ||a^{n+1}|| ≤ ||a||^{n+1} → 0. By a similar calculation, (1 − a)α = 1. Thus, 1 − a is invertible.
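This Neumann-series argument is constructive, and for matrices the partial sums can be computed and compared against the actual inverse. A numerical sketch (in Python with numpy; the scaling 1/2 and the number of terms are assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3))
a *= 0.5 / np.linalg.norm(a, 2)      # rescale so that ||a|| = 1/2 < 1

# Partial sums a_n = sum_{i=0}^{n} a^i of the Neumann series.
partial = np.zeros((3, 3))
power = np.eye(3)
for _ in range(60):
    partial += power
    power = power @ a

# The limit is the inverse of 1 - a.
inverse = np.linalg.inv(np.eye(3) - a)
assert np.allclose(partial, inverse)
```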

Next, we wish to discuss some useful classifications of elements in a C*-algebra.

Definition 3.2.10. (Properties of elements in C*-algebras) An element a in a C*-algebra is normal if a∗a = aa∗. The element is hermitian if a∗ = a. It is unitary if aa∗ = a∗a = 1.

Note that hermitian and unitary elements are normal. Further note that these align with the standard definitions of these terms for matrices. For continuous functions, all elements are normal, real-valued functions are hermitian, and unitary functions are those that take values in the unit circle. At this point, we will focus on the unitary elements to prove some useful results.

Lemma 3.2.11. The collection U of unitary elements in a C*-algebra A is a group under the multiplication operation of A.

Proof. The first step is to show closure. If u and v are unitary, (uv)*(uv) = v*u*uv = v*1v = 1, and (uv)(uv)* = uvv*u* = u1u* = 1, so uv is unitary. Associativity is inherited from the associativity of A's multiplication. The identity of the group is 1, as it is clearly unitary. And finally, by the definition of unitary elements, u^{−1} = u*, which is itself unitary.

Lemma 3.2.12. If u is a unitary element of a C*-algebra A and λ ∈ σ(u), then |λ| = 1.

Proof. To begin with, note that for any x ∈ A, if |λ| > ||x||, then ||λ^{−1}x|| < 1, and by Lemma 3.2.9, 1 − λ^{−1}x is invertible. Multiplying by λ, we see that λ1 − x must be invertible as well. Thus, λ ∉ σ_A(x). Since u is unitary and of norm one, this gives us that if λ ∈ σ(u), then |λ| ≤ 1. Now suppose |λ| < 1 and λ ∈ σ_A(u). Since u is unitary, it is invertible, so 0 ∉ σ_A(u) and λ ≠ 0. As λ1 − u is noninvertible, so is the product λ^{−1}u^{−1}(λ1 − u) = u^{−1} − λ^{−1}1 = u* − λ^{−1}1. Therefore, λ^{−1} ∈ σ_A(u*). However, ||u*|| = 1 and |λ^{−1}| > 1, which contradicts our earlier remarks. Thus, all elements in the spectrum of u must be of modulus 1.
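For matrices, this lemma says the eigenvalues of a unitary matrix lie on the unit circle, which is easy to check numerically. A sketch (in Python with numpy, building a unitary via QR factorization; the random seed is an assumption of this illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)               # Gram-Schmidt yields a unitary matrix

# U is unitary: U*U = 1.
assert np.allclose(U.conj().T @ U, np.eye(4))

# Lemma 3.2.12: every spectral value of a unitary element has modulus 1.
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)
```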

With this out of the way, we can define positive elements.

Definition 3.2.13. (Positive Element) An element a in a C*-algebra A is positive if it is hermitian and σ_A(a) ⊆ [0, ∞).

The easy example here is functions: a function in the continuous function algebra is positive exactly when its range consists of nonnegative reals. From this, we can define a positive map.

Definition 3.2.14. (Positive Map) A linear map f from a C*-algebra A to a C*-algebra B is positive if for every positive element a of A, f(a) is positive in B.

With these established, we can prove a lemma that will be of central importance to the GNS construction.

Lemma 3.2.15. If τ is a positive linear functional on a C*-algebra A, then τ(a*a) = 0 if and only if τ(ba) = 0 for every b ∈ A.

Proof. The reverse direction is trivial: take b = a*. The forward direction is a consequence of the Cauchy-Schwarz inequality. Define a positive sesquilinear form by ⟨x, y⟩ = τ(x*y). Then

|τ(ba)| = |⟨b*, a⟩| ≤ √(⟨b*, b*⟩) √(⟨a, a⟩) = √(τ(bb*)) · √(τ(a*a)) = 0.

We can also show a few additional useful facts about positive elements and positive maps.

Lemma 3.2.16. If a is a positive element in a C*-algebra A, then ||a||1 − a is positive.

Proof. Note first that ||a||1 − a is hermitian since a is. Consider some λ ∈ σ(||a||1 − a). Then λ1 − (||a||1 − a) = −((||a|| − λ)1 − a) is noninvertible, and thus (||a|| − λ)1 − a is noninvertible, i.e. ||a|| − λ ∈ σ(a). Since a is positive, ||a|| − λ ≥ 0, so λ ≤ ||a||; and since a is hermitian, Lemma 3.2.6 gives ||a|| − λ ≤ r(a) = ||a||, so λ ≥ 0. Thus, σ(||a||1 − a) ⊆ [0, ∞) and ||a||1 − a is positive.

Lemma 3.2.17. If a is a normal element of a C*-algebra A, there is a positive linear functional τ on A such that ||a|| = |τ(a)|.

The proof of this lies outside the scope of this paper; for details, please consult [14, Theorem 3.3.6]. We can then use the methods from [14, Theorem 2.2.1] to prove

Theorem 3.2.18. (Continuous Functional Calculus) If a is a normal element in a C*-algebra A and f is a continuous function on σ(a), we can define an element f(a) in A such that the map π from C(σ(a)) to A defined by π(f) = f(a) is an injective morphism that preserves adjoints, sums, and products, and such that if x is the identity function and e is the constant one function, then π(x) = a and π(e) = 1.

While the details here are rather complicated, the idea is simple. Note that for any polynomial f, f(a) is easy to define as, quite literally, the appropriate sum of powers of a. As we can approximate any continuous function on σ(a) uniformly by polynomials, it is not a stretch to think that we can then approximate any f(a) by these polynomial expressions. A note: with the continuous functional calculus, we can describe the C*-algebra "generated" by a collection of elements via the continuous functions on their spectra. For a specific usage of this theorem, let us prove a consequence that will be of use.
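For hermitian matrices the functional calculus is completely concrete: diagonalize and apply f to the eigenvalues. A numerical sketch (in Python with numpy; the helper name `functional_calculus` and the test matrix are assumptions of this illustration):

```python
import numpy as np

def functional_calculus(f, A):
    """Apply a function f to a hermitian matrix A via its eigendecomposition.

    This realizes f(A) for hermitian (hence normal) matrices: f acts on
    the spectrum of A, matching the continuous functional calculus.
    """
    eigenvalues, U = np.linalg.eigh(A)
    return U @ np.diag(f(eigenvalues)) @ U.conj().T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # hermitian, with spectrum {1, 3}

# pi(e) = 1 and pi(x) = a:
assert np.allclose(functional_calculus(lambda t: np.ones_like(t), A), np.eye(2))
assert np.allclose(functional_calculus(lambda t: t, A), A)

# Multiplicativity: (f g)(A) = f(A) g(A); e.g. the square root squares back.
root = functional_calculus(np.sqrt, A)
assert np.allclose(root @ root, A)
```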

Corollary 3.2.19. If a is a positive element of a C*-algebra A, then there is a unique positive element b in A such that b^2 = a.

Proof. Since a is positive, it is both normal and has a nonnegative spectrum. Then, by Theorem 3.2.18, we can define an element a^{1/2} = π(√x), as the square root function is continuous on the nonnegative reals. We wish to show that (a^{1/2})^2 = a:

(a^{1/2})^2 = π(√x)^2 = π((√x)^2) = π(x) = a.

For the proof of uniqueness, please see [14, Theorem 2.2.1].

We can then use this to prove a useful characterization of positive elements.

Lemma 3.2.20. An element a in a C*-algebra A is positive if and only if it is equal to b*b for some element b ∈ A.

Proof. Please see [14, Theorem 2.2.4] for the proof that an element of the form b*b is positive. Conversely, if a is positive, then by Corollary 3.2.19 there is a positive b such that b^2 = b*b = a.
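For matrices, one half of this characterization is easy to observe numerically: any B*B is hermitian with nonnegative eigenvalues. A sketch (in Python with numpy; the random seed is an assumption of this illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
b = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
a = b.conj().T @ b                    # an element of the form b*b

# a is hermitian with nonnegative spectrum, i.e. positive.
assert np.allclose(a, a.conj().T)
assert np.all(np.linalg.eigvalsh(a) >= -1e-10)
```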

From this, we can get another useful lemma.

Lemma 3.2.21. If f : A → B is a ∗-homomorphism between C*-algebras, then f is a positive map.

Proof. Let α be a positive element in A. By Lemma 3.2.20, α = a*a for some a ∈ A. Then f(α) = f(a*a) = f(a)*f(a), which is again positive by Lemma 3.2.20.

And another useful fact about positive elements.

Lemma 3.2.22. If a is a positive element in a C*-algebra A, and v is any element of A, then v*av is a positive element.

Proof. By Corollary 3.2.19, a = a^{1/2}a^{1/2} with a^{1/2} positive, and in particular hermitian. Then v*av = v*a^{1/2}a^{1/2}v = (a^{1/2}v)*(a^{1/2}v), which is positive by Lemma 3.2.20.

We will need one more lemma for our discussion of the GNS construction. However, the proof delves into ideas that are largely unnecessary for our purposes here, so it will be omitted. For details, see [14, Theorem 3.3.7].

Lemma 3.2.23. If τ is a positive linear functional on a C*-algebra A, then τ(b*a*ab) ≤ ||a*a|| τ(b*b) for all a, b ∈ A.

The material in this section comes from [14].

3.3 Representations

We are almost to the GNS construction. The last piece we need before this is to define exactly what we mean by a representation of a C*-algebra.

Definition 3.3.1. (Representation) A representation of a C*-algebra is a *-preserving homo- morphism from a C*-algebra to the bounded operators on a Hilbert space. It is called a faithful representation if the homomorphism is injective.

Something to note is that from a collection of representations on a C*-algebra A, we can take their direct sum to get another representation on A, mapping component-wise to the operators over the direct sum of the respective base Hilbert spaces. There is another useful type of representation of a C*-algebra we can discuss.

Definition 3.3.2. (Unital representation) A unital representation of a C*-algebra A is a representation φ of A on a Hilbert space H such that φ(1) = I, where I is the identity map on H.

Note that for a general representation φ, φ(1) is not necessarily the identity, but we can restrict H to make it so. Consider φ(1) for some representation φ of A on H. We have φ(1) = φ(1^2) = φ(1)^2 and φ(1) = φ(1*) = φ(1)*. In fact, these are exactly the requirements for an operator in B(H) to be a projection onto some closed subspace [9, Theorem 15.2]. Therefore, since for any a ∈ A, φ(a) = φ(1a) = φ(1)φ(a), and our multiplication is composition, we can restrict H to φ(1)(H) in order to get a unital representation ψ of A on B(φ(1)(H)).

We can also talk about disjoint representations and subrepresentations.

Definition 3.3.3. (Disjoint) We say two representations π and φ from a C*-algebra A to the same B(H) are disjoint if there is no nonzero operator V ∈ B(H) such that V π(a) = φ(a)V for all a ∈ A.

While this definition provides a strong baseline idea, it is hard to work with. We will need some equivalent definitions of disjoint representations. For this purpose, we delve into some more theory.

Definition 3.3.4. (Subrepresentation) Let π be a representation of a C*-algebra A on the bounded operators on a Hilbert space H. If there is a closed subspace H_1 of H such that π(a)(h) ∈ H_1 for all a ∈ A and h ∈ H_1, then σ : A → B(H_1) defined by σ(a) = π(a)|_{H_1} is called a subrepresentation of π.

Lemma 3.3.5. If π and φ are representations of a C*-algebra A to the bounded operators on the same Hilbert space H, the following are equivalent:

• π and φ are disjoint;

• no subrepresentation of π is equal to a subrepresentation of φ;

• there is a net (x_i) in A such that π(x_i) → 1 and φ(x_i) → 0 in the strong operator topology.

For details, see [6, Proposition 5.2.1].

Finally, we can talk about irreducible representations and the spectrum of a C*-algebra.

Definition 3.3.6. (Irreducible Representation) A representation π of a C*-algebra A on a Hilbert space H is called irreducible if there is no nontrivial closed subspace K of H such that K is invariant under π(a) for every a ∈ A.

Definition 3.3.7. (Spectrum of a C*-algebra) The spectrum, or dual, of a C*-algebra A is the collection of irreducible representations of A identified under unitary equivalence. In other words, Â is the set of irreducible representations π of A, where we identify π and π′ if π(a) = uπ′(a)u* for every a ∈ A, for some unitary u.

46 This gives us the basic set, but we will also need a topology on the spectrum. For both this definition of the topology and the last piece we need before discussing the GNS construction, we turn to the idea of an ideal.

Definition 3.3.8. (Ideal) A left (respectively, right) ideal I of an algebra A is a closed vector subspace of A such that for any a ∈ A and i ∈ I, ai ∈ I (respectively, ia ∈ I). An unqualified ideal is such a subspace for which both ai and ia are in I.

We call an ideal proper if it is neither the entirety of A nor 0. The simplest, but least interesting, examples of ideals are the entire space and zero. A more illustrative example can be found in C(X).

Example 3.3.9. (Zero functions) Let X be a compact Hausdorff space and A be a subset of X. Consider the set C_A(X) of continuous functions on X that are zero outside of A. We claim that this is an ideal in C(X). This is easy to see: since multiplication is pointwise, if f ∈ C_A(X) and g ∈ C(X), then fg and gf must also be zero outside A, and thus be in C_A(X).

In general, we may also produce one-sided ideals in any algebra by one-sided multiplication: aA for a ∈ A is always a right ideal, and Aa is always a left ideal.

To define our topology on the spectrum, we will need a specific type of ideal known as a primitive ideal. A primitive ideal is constructed from another ideal, as follows.

Theorem 3.3.10. For every left ideal L in a unital algebra A, there is a largest ideal I contained in L. In particular, I can be characterized via

I = {a ∈ A | aA ⊆ L}.

Proof. We begin by showing I is an ideal of A. If aA ⊆ L, then for any b ∈ A, baA ⊆ bL ⊆ L and abA ⊆ aA ⊆ L, so ab, ba ∈ I. It is easy to show that I ⊆ L: for all a ∈ I, a = a1 ∈ aA ⊆ L. Now assume that J is another ideal of A that is a subset of L. By the definition of an ideal, if j ∈ J, then jA ⊆ J ⊆ L, so j ∈ I, and therefore I is the largest such ideal.

We call the ideals obtained in this way from maximal left ideals the primitive ideals of the algebra A, and we denote the set of them by Prim(A). To connect this with a topology on Â, we will need Theorem 5.4.2 from [14].

Theorem 3.3.11. An ideal I of a C*-algebra A is primitive if and only if there exists a non-zero irreducible representation φ of A such that I = ker(φ).

This gives us an equivalence between the spectrum of A and the primitive ideals of A. We then need only to define a topology on Prim(A). We do this through what is known as the hull-kernel topology. First, we define these operations in the context of Prim(A).

Definition 3.3.12. (Hull, Kernel) If A is a C*-algebra and X ⊆ A, define hull(X) to be the set of primitive ideals that contain X. Analogously, if R is a non-empty set of primitive ideals, we denote the intersection of all ideals in R by ker(R). Define ker(∅) = A.

Our objective is then to use these to define a topology on Prim(A). To do this, we need to show that primitive ideals are also a type of ideal known as a prime ideal.

Definition 3.3.13. (Prime Ideal) An ideal I in a C*-algebra A is prime if whenever J1 and J2 are ideals of A such that J1J2 ⊆ I, either J1 ⊆ I or J2 ⊆ I.

Lemma 3.3.14. If I is a primitive ideal of a C*-algebra A, then I is prime.

Proof. See [14, Theorem 5.4.5].

In particular, we will use hull(ker(·)) as a closure operation.

Theorem 3.3.15. (Prim(A) Topology) If A is a C*-algebra, then there is a unique topology on Prim(A) such that for each subset X of Prim(A), the set hull(ker(X)) is the closure of X.

Proof. For X ⊆ Prim(A), denote hull(ker(X)) by X′. It is easy to see that X ⊆ X′ and ker(X) = ker(X′), so X′ = (X′)′. Also easy to see is that ∅′ = ∅.

We want to show a few properties of this closure, beginning with its behavior under arbitrary intersections. Let {X_k}_{k∈K} be an arbitrary collection of subsets of Prim(A). Note that for every index j ∈ K, ⋂_{k∈K} X′_k ⊆ X′_j, so (⋂_{k∈K} X′_k)′ ⊆ (X′_j)′ = X′_j, and therefore (⋂_{k∈K} X′_k)′ ⊆ ⋂_{k∈K} X′_k. The reverse inclusion was discussed above, so we have that

(⋂_{k∈K} X′_k)′ = ⋂_{k∈K} X′_k.

Next, we show that our closure distributes over finite unions. Let X_1 and X_2 be subsets of Prim(A). Define I_1 to be the intersection of all elements of X_1, and I_2 similarly. Then the intersection of all elements of X_1 ∪ X_2 is I_1 ∩ I_2 = I_1 I_2. Select any ideal I ∈ Prim(A). Then I ∈ (X_1 ∪ X_2)′ if and only if I_1 I_2 ⊆ I, which by Lemma 3.3.14 happens iff I_1 ⊆ I or I_2 ⊆ I, which is equivalent to I ∈ X′_1 ∪ X′_2. Therefore, our closure distributes over finite unions. With this, we have the properties needed to define a closure operation, and thus a collection of closed sets in Prim(A). Their complements then form a topology on Prim(A), and we are done.

We can then use this to define a topology on Â. We can define a surjective map θ from Â to Prim(A) by θ(φ) = ker(φ) using Theorem 3.3.11. The topology on Â will be the weakest topology such that θ is continuous. Note that θ is well-defined, as unitary equivalence does not affect the kernel of a representation.

Definition 3.3.16. (Quotients by Ideals) If I is an ideal of an algebra A, then by A/I we refer to the collection of equivalence classes {a + I|a ∈ A}. This is itself a linear space if we define

(a + I) + (b + I) = (a + b + I) and λ(a + I) = (λa) + I.

We can expand this into a full C*-algebra by defining (a + I)(b + I) = (ab + I) and (a + I)∗ =

(a∗+I), but for our purposes we need only the linear structure. With this, we finally turn to GNS.

We begin by constructing the GNS representations, and from that, prove the Gelfand-Naimark theorem.

Example 3.3.17. (GNS Representation) Let A be a C*-algebra, and for each positive linear functional τ on A, let N_τ be the subset of A defined by

N_τ = {a ∈ A | τ(a*a) = 0}.

Lemma 3.2.23 gives us that these are closed left ideals of A. From there, we can define an inner product on the quotient space A/N_τ by ⟨a + N_τ, b + N_τ⟩ = τ(b*a). We can then complete this inner product space into a Hilbert space, which will be denoted H_τ. We can define a map φ from A to the bounded operators on A/N_τ by φ(a)(b + N_τ) = ab + N_τ. With Lemma 3.2.23 we can see that

||φ(a)(b + N_τ)||^2 = τ(b*a*ab) ≤ ||a*a|| τ(b*b) = ||a||^2 ||b + N_τ||^2.

Dividing by ||b + N_τ||^2 and taking square roots, we get that ||φ(a)|| ≤ ||a||, so φ(a) is bounded. We can then uniquely extend φ(a) to an operator on H_τ, and the resulting map φ_τ can easily be shown to be a ∗-homomorphism, and thus a representation of A.
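In finite dimensions the GNS construction can be carried out concretely. A numerical sketch, assuming A = M_2(C) with the normalized trace τ(a) = tr(a)/2 as the positive linear functional (here N_τ = 0, H_τ is M_2(C) with inner product ⟨a, b⟩ = τ(b*a), and φ(a) acts by left multiplication; the helper names are assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
tau = lambda a: np.trace(a) / 2            # normalized trace, a positive functional
inner = lambda a, b: tau(b.conj().T @ a)   # <a + N, b + N> = tau(b* a)
phi = lambda m: (lambda v: m @ v)          # phi(m) is left multiplication by m

a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
c = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# phi is multiplicative: phi(a)(phi(b)(c)) = phi(ab)(c).
assert np.allclose(phi(a)(phi(b)(c)), phi(a @ b)(c))

# phi(a*) is the adjoint of phi(a): <phi(a) b, c> = <b, phi(a*) c>.
assert np.isclose(inner(phi(a)(b), c), inner(b, phi(a.conj().T)(c)))

# The bound from Lemma 3.2.23: ||phi(a)(b)||^2 <= ||a||^2 ||b||^2.
lhs = inner(phi(a)(b), phi(a)(b)).real
assert lhs <= np.linalg.norm(a, 2) ** 2 * inner(b, b).real + 1e-9
```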

We can take the direct sum of the GNS representations for all τ to get what is called the universal representation of A. Our next goal is to show that this universal representation is faithful.

Theorem 3.3.18. (Gelfand-Naimark) If A is a C*-algebra, then it has a faithful representation.

In particular, the universal representation is faithful.

Proof. We want to show that the universal representation φ is injective, so begin by assuming a is an element such that φ(a) = 0. First, note that (a∗a)∗ = a∗a, so a∗a is hermitian and thus normal. We may then use Lemma 3.2.17 to find a τ such that ||a∗a|| = τ(a∗a). If we apply Theorem 3.2.18, we can find a b ∈ A such that b⁴ = a∗a. From this,

||a||² = ||a∗a|| = τ(a∗a) = τ(b⁴) = ||φτ(b)(b + Nτ)||² = 0.

Here the final equality holds because φτ(b⁴) = φτ(a∗a) = 0 forces φτ(b) = 0. Thus ||a||² = 0, so a = 0 and therefore the representation is faithful.

The material in this section comes from [14].

3.4 Completely Positive Maps

Before we move on to the crossed product, there’s one last thing we can discuss when it comes to C*-algebras alone. We wish to discuss completely positive maps, but before we can do that, we need to define what we mean by completely. To this end, we show that if A is a C*-algebra, then Mn(A) is a C*-algebra.

Example 3.4.1. (Mn(A)) Let A be a C*-algebra and consider Mn(A), the set of n×n matrices with entries in A. We would like to define a C*-algebra structure on this set. This is done largely as might be expected: since elements of A can be viewed as bounded operators on some H via the Gelfand-Naimark representation, we can view n × n matrices of elements of A as bounded operators on Hⁿ. If we denote an element in Mn(A) by {a_{i,j}}, this is done via

{a_{i,j}}(h_1, ..., h_n) = (∑_{j=1}^n a_{1,j}h_j, ..., ∑_{j=1}^n a_{n,j}h_j).

Multiplication here works as matrix multiplication, and the adjoint is defined by {a_{i,j}}∗ = {a∗_{j,i}}. It remains to show that there exists a norm that makes Mn(A) into a C*-algebra, and in particular, that that norm is unique. Because we can represent Mn(A) as a subalgebra of B(Hⁿ), we do have a norm that satisfies the conditions of a C*-algebra. We now must show that this norm is unique. First, note that for any a ∈ Mn(A), a∗a is self-adjoint and ||a||² = ||a∗a||. Therefore, if we can show that the norm of a∗a is uniquely determined, we are done. However, by Lemma 3.2.6, the norm of a∗a is given by its spectral radius, an inherent property of Mn(A) regardless of how we represent it. Therefore, the norm on Mn(A) that turns it into a C*-algebra is well-defined.
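The uniqueness argument can be illustrated numerically in the simplest case A = C, where Mn(A) = Mn(C) and the operator norm visibly coincides with the square root of the spectral radius of a∗a:

```python
import numpy as np

# For a in M_3(C): ||a||^2 = ||a*a|| = spectral radius of a*a, an
# algebraic quantity independent of the chosen representation.
rng = np.random.default_rng(1)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

op_norm = np.linalg.norm(a, 2)                             # operator norm of a
spec_radius = max(abs(np.linalg.eigvals(a.conj().T @ a)))  # r(a*a)

assert abs(op_norm**2 - spec_radius) < 1e-9
print("||a||^2 equals the spectral radius of a*a")
```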

With this, we can define the idea of a “completely-” prefix for maps on C*-algebras.

Definition 3.4.2. (Completely) A map φ on a C*-algebra A with a property P is said to be completely P if the induced map φn on Mn(A) given by taking {ai,j} to {φ(ai,j)} satisfies P for every n.

For example, a map φ on A is completely positive if for every n, φn is positive on Mn(A). We will primarily take interest in completely positive maps.

Example 3.4.3. (∗-representations are completely positive) Let A be a C*-algebra and let π be a ∗-representation of A into the bounded operators on a Hilbert space H. Let M be a positive element in Mn(A) and let πn be the induced map from π on Mn(A). Note that πn is a ∗-homomorphism on Mn(A):

πn(K)πn(L) = {∑_{k=1}^n π(K_{i,k})π(L_{k,j})} = {π(∑_{k=1}^n K_{i,k}L_{k,j})} = πn(KL)

for K, L ∈ Mn(A), and

πn(M∗) = {π(M∗_{j,i})}_{i,j} = {π(M_{j,i})∗}_{i,j} = πn(M)∗.

Therefore, by Lemma 3.2.21, πn is positive and π is completely positive.

Example 3.4.4. (Conjugation by operators is completely positive) Consider the C*-algebras B(H) and B(K) and let V be a bounded operator from H to K. We wish to show the map φ(A) = V∗AV from B(K) to B(H) is completely positive. Note that for any positive M ∈ Mn(B(K)),

φn(M) = {V∗M_{i,j}V} = diag(V)∗M diag(V) = diag(V)∗M^{1/2}∗M^{1/2} diag(V) = (M^{1/2} diag(V))∗(M^{1/2} diag(V)),

where diag(V) is the diagonal matrix whose diagonal entries all equal V. This is itself a positive element of Mn(B(H)), as it is of the form T∗T, so we are done.

Example 3.4.5. (Transposition is not completely positive) An example of a map that is positive but not completely positive is the transposition operation in Mn. Please see [2] for details.
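A concrete finite-dimensional witness (a standard one, sketched here rather than taken from [2]): the transpose map on M₂ is positive, but applying it to one tensor factor of the positive matrix C = ∑_{i,j} E_{i,j} ⊗ E_{i,j} in M₂(M₂) produces a negative eigenvalue, so transposition is not even 2-positive.

```python
import numpy as np

# Matrix units E_ij of M_2
E = [[np.zeros((2, 2)) for _ in range(2)] for _ in range(2)]
for i in range(2):
    for j in range(2):
        E[i][j][i, j] = 1.0

# C = sum_ij E_ij (x) E_ij is positive (a multiple of a rank-one projection)
C = sum(np.kron(E[i][j], E[i][j]) for i in range(2) for j in range(2))
assert min(np.linalg.eigvalsh(C)) >= -1e-12

# apply transposition to the second tensor factor: the result is the swap
# operator, which has eigenvalue -1 on the antisymmetric subspace
C_pt = sum(np.kron(E[i][j], E[i][j].T) for i in range(2) for j in range(2))
print(np.linalg.eigvalsh(C_pt))  # eigenvalues: [-1, 1, 1, 1]
```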

We now prove the Stinespring Dilation theorem, which broadly states that Examples 3.4.3 and 3.4.4 cover every completely positive map:

Theorem 3.4.6. (Stinespring Dilation) Let A be a C*-algebra and let φ : A → B(H) be a completely positive map. Then, there exists a Hilbert space K, a unital ∗-homomorphism π from A to B(K), and a bounded operator V : H → K with ||φ(1)|| = ||V||² such that

φ(a) = V∗π(a)V.

Proof. Consider the tensor product A ⊗ H, and define a sesquilinear form on it by

⟨a ⊗ h, b ⊗ g⟩ = ⟨φ(b∗a)h, g⟩_H,

extending by linearity, where the second inner product is the one taken in H. We wish to show that ⟨u, u⟩ is nonnegative for every u ∈ A ⊗ H. Since φ is completely positive,

⟨∑_{i=1}^n a_i ⊗ h_i, ∑_{j=1}^n a_j ⊗ h_j⟩ = ⟨φn({a_j∗a_i})(h_1, ..., h_n), (h_1, ..., h_n)⟩ ≥ 0.

Therefore, as ⟨·, ·⟩ consequently satisfies the Cauchy-Schwarz inequality, the set

N = {u ∈ A ⊗ H | ⟨u, u⟩ = 0} = {u ∈ A ⊗ H | ⟨u, v⟩ = 0 for all v ∈ A ⊗ H}

is a subspace of A ⊗ H. We can then define an inner product on (A ⊗ H)/N by ⟨u + N, v + N⟩ = ⟨u, v⟩. This will be an inner product, and we can complete this space to a Hilbert space which we will call K.

If a ∈ A, define an operator π(a): A ⊗ H → A ⊗ H by

π(a)(∑ a_i ⊗ x_i) = ∑ (aa_i) ⊗ x_i.

Consider the matrix {a_i∗ a∗a a_j}. We can factor this matrix as

R∗(a∗a)R, where R denotes the 1 × n row matrix (a_1 a_2 ⋯ a_n).

Remember from Lemma 3.2.16 that ||a||1 − a is positive and thus ||a||1 ≥ a. Thus, by Lemma 3.2.22, for any operator v, v∗(||a||1)v ≥ v∗av, and therefore {a_i∗ a∗a a_j} ≤ ||a∗a||{a_i∗a_j}. Therefore,

⟨π(a)(∑_j a_j ⊗ x_j), π(a)(∑_i a_i ⊗ x_i)⟩ = ∑_{i,j} ⟨φ(a_i∗ a∗a a_j)x_j, x_i⟩
≤ ||a∗a|| ∑_{i,j} ⟨φ(a_i∗a_j)x_j, x_i⟩ = ||a||² ⟨∑_j a_j ⊗ x_j, ∑_i a_i ⊗ x_i⟩.

Therefore, π(a) preserves N and thus induces a transformation on (A ⊗ H)/N, which for simplicity will still be denoted π(a). Another consequence of the above is that π(a) is bounded with

||π(a)|| ≤ ||a||. Therefore, we can extend π(a) to a bounded linear operator over all of K, which again, will be denoted π(a) for simplicity. It is easy to see that π : A → B(K) is still a unital

∗-homomorphism.

We now define V : H → K by

V (x) = 1 ⊗ x + N.

To show boundedness, note that

||Vx||² = ⟨1 ⊗ x, 1 ⊗ x⟩ = ⟨φ(1)x, x⟩ ≤ ||φ(1)|| ||x||².

In particular, ||V||² = sup{⟨φ(1)x, x⟩ : ||x|| ≤ 1} = ||φ(1)||. At this point, we simply note that

⟨V∗π(a)Vx, y⟩ = ⟨π(a)(1 ⊗ x), 1 ⊗ y⟩ = ⟨φ(a)x, y⟩.

Therefore, V ∗π(a)V = φ(a).
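In finite dimensions the dilation can be seen concretely. The sketch below does not follow the proof's construction; it assumes a map already given in the hypothetical Kraus form φ(a) = ∑ᵢ Kᵢ∗aKᵢ (completely positive by Examples 3.4.3 and 3.4.4) and checks that it equals V∗π(a)V for π(a) = a ⊗ I and a suitable V, mirroring the conclusion of the theorem.

```python
import numpy as np

# phi(a) = sum_i K_i* a K_i on M_2, dilated on K = C^2 (x) C^r via
# pi(a) = a (x) I_r and V = sum_i K_i (x) e_i.
rng = np.random.default_rng(4)
Ks = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(3)]
r = len(Ks)

# V : C^2 -> C^2 (x) C^r, built column-style as sum_i kron(K_i, e_i)
V = sum(np.kron(K, np.eye(r)[:, [i]]) for i, K in enumerate(Ks))

def phi(a):
    return sum(K.conj().T @ a @ K for K in Ks)

a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
pi_a = np.kron(a, np.eye(r))  # the *-homomorphism pi(a) = a (x) I_r
assert np.allclose(V.conj().T @ pi_a @ V, phi(a))
print("phi(a) = V* pi(a) V verified")
```

The identity (Kᵢ ⊗ eᵢ)∗(a ⊗ I)(Kⱼ ⊗ eⱼ) = Kᵢ∗aKⱼ ⟨eᵢ, eⱼ⟩ collapses the double sum to ∑ᵢ Kᵢ∗aKᵢ, which is what the assertion checks.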

Example 3.4.7. (Positive Linear Functionals are Completely Positive) Let A be a C*-algebra and let τ be a positive linear functional on A. Recall from Example 3.3.17 that we can define a Hilbert space Hτ and a representation φτ from A to B(Hτ). Consider the operator V : Hτ → C given by V(a + Nτ) = τ(a), extended by continuity. The adjoint of V is then the operator V∗ from C to Hτ such that

λτ(a) = ⟨V(a + Nτ), λ⟩ = ⟨a + Nτ, V∗λ⟩ = τ((V∗λ)∗a).

V∗λ = λ1 + Nτ clearly satisfies this by the linearity of τ, and since the adjoint is unique we are done. Finally, consider Vφτ(a)V∗ as an operator on C for a fixed a ∈ A. We get that

Vφτ(a)V∗λ = Vφτ(a)(λ1 + Nτ) = V(λa + Nτ) = λτ(a).

Thus, τ is a ∗-representation conjugated by an operator, and is therefore completely positive.

With the Stinespring Dilation Theorem, we can prove the following lemma:

Lemma 3.4.8. Let A and B be C*-algebras such that B ⊆ A. If H is a Hilbert space such that there exists a φ : A → B(H) that is norm one and completely positive such that φ|B is multiplicative, then for any a ∈ A and b ∈ B, φ(ab) = φ(a)φ(b) and φ(ba) = φ(b)φ(a).

Proof. By Theorem 3.4.6, there is a representation π of A on a Hilbert space H′ and a contractive bounded operator V from H to H′ such that φ(x) = V∗π(x)V for x ∈ A. Since φ|B is multiplicative,

V∗π(x)(1 − VV∗)π(y)V = V∗π(x)π(y)V − V∗π(x)VV∗π(y)V = V∗π(xy)V − φ(x)φ(y) = φ(xy) − φ(xy) = 0

for x, y ∈ B. Taking x = y∗, this says that ((1 − VV∗)^{1/2}π(y)V)∗((1 − VV∗)^{1/2}π(y)V) = 0, where 1 − VV∗ is positive because V is contractive; hence (1 − VV∗)π(B)V = 0. Then, for b ∈ B, π(b)V = VV∗π(b)V = Vφ(b). Thus,

φ(ab) = V ∗π(ab)V = V ∗π(a)π(b)V = V ∗π(a)V φ(b) = φ(a)φ(b).

The second equality is shown analogously.

The material from this section comes from [6].

Chapter 4

Crossed Products

And now, we can finally begin the breakdown of the core ideas of the document. Our objective is the construction of the crossed product, which allows us to entwine a group structure with a C*-algebra structure into one larger C*-algebra. However, we will first detour through a discussion of amenability, which will allow us to draw a finer distinction in our definition of the crossed product.

4.1 Amenability

For amenability, we cannot jump straight into the definition of what it means for a group to be amenable. First, we must define a few other concepts, starting with what it means for a functional on L∞(G) to be translation invariant.

Definition 4.1.1. (Translation invariant functional) Let G be a group. A linear functional µ on L∞(G) is translation invariant if for every g ∈ G and f ∈ L∞(G), µ(gf) = µ(f), where gf(x) = f(g−1x).

And at last, we can define amenability.

Definition 4.1.2. (Amenable) We call a group G amenable if there exists a translation invariant positive linear functional of norm one on L∞(G). A positive linear functional of norm one is usually shortened to just a mean.

This is a bit of an abstract definition, so the question naturally becomes, what is the motiva- tion here? Amenability was created in response to the Banach-Tarski paradox, the construction

that allows one to split the sphere into five parts and recombine them to create two identical spheres [21]. The idea would be that an amenable group is one where this cannot happen, where we can't split the group into distinct parts that are each just a translation of the whole group.

This is best illustrated by the key example of a group that is nonamenable, the free group.

Example 4.1.3. (Free Group) We begin by showing the free group generated by two elements is nonamenable. Let X = F2 be the free group generated by two elements a and b. Assume that X is amenable and let f be the translation invariant mean on X. Then, we can divide X into four subsets Xa, Xb, X−a, and X−b consisting of all words that begin with a, b, a⁻¹, and b⁻¹, respectively. Furthermore, we will include the identity element in Xa. Note that Xa is a subset of b⁻¹Xb, and in particular, of b⁻¹Xb \ Xb. By translation invariance of the mean, we have that

f(χ_{b⁻¹Xb}) = f(χ_{Xb}), where χ_E denotes the characteristic function of E. Thus,

f(χ_{Xa}) ≤ f(χ_{b⁻¹Xb}) − f(χ_{Xb}) = 0.

Therefore, f(χ_{Xa}) = 0. Similarly, f(χ_{Xb}) = f(χ_{X−a}) = f(χ_{X−b}) = 0. Since these four sets cover the entirety of X, we get that f(χ_X) = 0, which contradicts that ||f|| = 1. Thus, there cannot be a translation-invariant mean on X.

With the assistance of another lemma, we can use the free group’s non-amenability to show the nonamenability of a good class of groups.

Lemma 4.1.4. (Amenable Subgroups) If G is a discrete amenable group and H is a subgroup of G, H is amenable.

Proof. Consider the composition of the invariant mean and the projection operator onto H in

`∞(G)∗. The result will be of norm one, positive, translation invariant and linear on H. In other words, a translation invariant mean on H.

With this, if we can find a non-amenable subgroup in a group, we know the group itself cannot be amenable.

Example 4.1.5. (SL2(Z)) Let SL2(Z) be the group of 2 × 2 integer matrices of determinant one. Our objective is to show that we can embed the free group on two generators, F2, into SL2(Z). By the methodology of our previous proof, any invariant mean on all of SL2(Z) would have to be identically zero on this subgroup. Since the restriction of an invariant mean to a subgroup is itself an invariant mean, this provides the same contradiction.

The first step to showing that F2 can be embedded into SL2(Z) is to find two elements which represent our a and b. Our candidates will be the matrices

    1 3 1 0     A =   and B =   . 0 1 3 1

First, note that for n ∈ Z,

Aⁿ = [ 1 3n ; 0 1 ]  and  Bⁿ = [ 1 0 ; 3n 1 ].

Next, we consider the group generated by these two matrices. In order to show that it is isomorphic to F2, we need to show that we cannot find a relation in the elements of this group, or namely, that there is no finite collection of nonzero integers {u_i} such that

A^{u_1}B^{u_2}A^{u_3} ⋯ A^{u_{n−1}}B^{u_n} = 1.

We accomplish this by considering the action of SL2(Z) on Z² by

[ a b ; c d ](x, y) = (ax + by, cx + dy).

For our purposes, begin by taking two subsets of Z²: T1 = {(x1, x2) ∈ Z² : |x1| > |x2|} and T2 = {(x1, x2) ∈ Z² : |x1| < |x2|}. We first show that Aⁿ(1, 1) = (1 + 3n, 1) ∈ T1 and Bⁿ(1, 1) = (1, 1 + 3n) ∈ T2 for n ≠ 0. This is a simple result of the fact that |1 + 3n| > 1 for all nonzero n ∈ Z. Our next step is to show that if (x1, x2) ∈ T2, then Aⁿ(x1, x2) = (x1 + 3nx2, x2) ∈ T1, and if (x1, x2) ∈ T1, then Bⁿ(x1, x2) = (x1, 3nx1 + x2) ∈ T2, for n ≠ 0. This is once again a simple arithmetic result: if x1 and 3nx2 have the same sign, then clearly |x1 + 3nx2| > |x2|, and if not, |x2| > |x1| gives us that |x1 + 3nx2| ≥ 3|x2| − |x1| > 2|x2| > |x2|, so we are done. We have thus shown that, starting at (1, 1), a word A^{u_1}B^{u_2} ⋯ A^{u_{n−1}}B^{u_n} with nonzero exponents will send (1, 1) into one of T1 and T2 and then alternate between them as each successive factor is applied. However, as |1| = |1|, (1, 1) is in neither T1 nor T2. Thus, such a word cannot be the identity. Therefore, our A and B have no relations, and thus generate a copy of F2 in SL2(Z). The idea for this proof came from [4].
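The ping-pong argument can be spot-checked numerically: every alternating word in A and B with nonzero exponents moves the vector (1, 1), so no such word is the identity. A small sketch:

```python
import random

A = [[1, 3], [0, 1]]
B = [[1, 0], [3, 1]]

def mul(M, N):
    # 2x2 integer matrix product
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(M, n):
    # exact integer power; for n < 0 use the adjugate (det = 1)
    if n < 0:
        M, n = [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]], -n
    out = [[1, 0], [0, 1]]
    for _ in range(n):
        out = mul(out, M)
    return out

random.seed(0)
for _ in range(500):
    word = [[1, 0], [0, 1]]
    for k in range(random.randint(1, 6)):
        e = random.choice([-2, -1, 1, 2])       # nonzero exponents only
        word = mul(word, power([A, B][k % 2], e))
    x, y = word[0][0] + word[0][1], word[1][0] + word[1][1]  # word applied to (1, 1)
    assert (x, y) != (1, 1)
print("no sampled alternating word acts as the identity")
```

The assertion is guaranteed by the argument above: the image of (1, 1) always lands in T1 ∪ T2, and (1, 1) lies in neither.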

Now, we’ve established a lot of groups that are not amenable, but what about groups that are? With the Markov-Kakutani fixed point theorem, we can show that every discrete abelian group is amenable.

Lemma 4.1.6. If G is a discrete abelian group, then G is amenable.

Proof. Begin by considering the space S of positive norm-one linear functionals on `∞(G) under the weak-∗ topology. By Theorem 2.2.26, we know this is compact and convex. For a given g ∈ G, let Tg be the operator on `∞(G) defined by (Tg f)(x) = f(g⁻¹x). This has a dual operator Tg∗ on the dual of `∞(G), which we can show must preserve positive linear functionals on `∞(G): if φ is positive, then

Tg∗φ(f) = φ(Tg f),

which is still positive. Let 1 be the constant one function on G and note that any other function f ∈ `∞(G) with norm 1 satisfies f ≤ 1. For a given positive φ, since φ and Tg∗φ are positive, φ(f) ≤ φ(1) and Tg∗φ(f) ≤ Tg∗φ(1). Thus, |φ(1)| = ||φ|| and |Tg∗φ(1)| = ||Tg∗φ||. This gives us that Tg∗ preserves norm, as

||Tg∗φ|| = |Tg∗φ(1)| = |φ(1)| = ||φ||.

Therefore, each Tg∗ maps S into itself. Since G is abelian, {Tg∗} is a commutative family, so we can use Theorem 2.2.13 to get a common fixed point s ∈ S. For any f ∈ `∞(G),

s(Tg f) = Tg∗s(f) = s(f),

so we have constructed our translation invariant norm one linear functional on `∞(G).

For our later use of amenability regarding crossed products, we will need an equivalent definition.

Definition 4.1.7. (Følner Condition) A discrete group G with the counting (Haar) measure µ satisfies the Følner condition if for every finite set F ⊂ G and every ε > 0, there is a finite S ⊂ G such that µ(S) > 0 and µ(gS △ S) < εµ(S) for all g ∈ F, where △ is the symmetric difference.
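For a concrete instance, the group Z satisfies the Følner condition with the intervals S_N = {0, ..., N−1}: translating by g changes only 2|g| elements, so the defect becomes arbitrarily small relative to |S_N|. A quick sketch:

```python
# Folner sets for Z: intervals S_N = {0, ..., N-1}. For F = {-1, 1} and
# eps > 0, taking N large enough makes |gS_N symmetric-difference S_N|
# smaller than eps * |S_N|.
def folner_defect(S, g):
    gS = {g + s for s in S}
    return len(gS.symmetric_difference(S))

F = [-1, 1]
eps = 0.05
S = set(range(100))
for g in F:
    assert folner_defect(S, g) < eps * len(S)
print([folner_defect(S, g) for g in F])  # [2, 2]
```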

While the Følner condition is technically equivalent to amenability, we need only show that it follows from it.

Lemma 4.1.8. If a discrete group G is amenable, it satisfies the Følner condition.

Proof. Let µ be an invariant mean on G. We want to show that there is a net {µi} in `1(G) such that µi ≥ 0 and ∑_{g∈G} µi(g) = 1 for all i, and which converges to µ in the weak-* topology. We call this set of functions the probability measures on G and denote it Prob(G). If such a net does not exist, then µ is outside the weak-* closure of Prob(G). It's easy to see that Prob(G) is convex, so the Hahn-Banach separation theorem (Theorem 2.2.16) gives us an f ∈ `∞(G) and s, t ∈ R such that for all ν ∈ Prob(G),

Re(ν(f)) < t < s < Re(µ(f)).

By linearity, we may replace f by its real part f′ = (f + f̄)/2 to get ν(f′) < t < s < µ(f′). We may then also replace f′ with f″ = f′ + ||f′||∞ to get that ν(f″) < t + ||f′||∞ < s + ||f′||∞ < µ(f″). For simplicity, denote f″ by f, and t + ||f′||∞ and s + ||f′||∞ by t and s, respectively. Since f is now nonnegative, ||f||∞ = sup{ν(f) | ν ∈ Prob(G)}, but this gives us a contradiction, as

||f||∞ = sup{ν(f) | ν ∈ Prob(G)} ≤ t < s < µ(f) ≤ ||f||∞.

Thus, there is a net µi which converges to µ in the weak-* topology. We then have that for all g ∈ G and f ∈ `∞(G),

g.µi(f) − µi(f) → g.µ(f) − µ(f) = 0.

However, considering `1(G) as the pre-dual of `∞(G), this is exactly the statement of g.µi − µi converging weakly to zero. Therefore, for any finite subset E of G, the weak closure of

⊕_{g∈E}{g.ν − ν : ν ∈ Prob(G)}

contains zero. As this is convex, the Hahn-Banach Theorem 2.2.11 gives us that the weak and norm closures coincide, and thus, for every ε > 0, we can find a ν ∈ Prob(G) such that

∑_{g∈E} ||g.ν − ν||₁ < ε.

Given a finite subset E and an ε > 0, fix such a ν. We also have, since ν ∈ Prob(G), that

∑_{g∈G} ν(g) = 1.

For a positive function f ∈ `1(G) and an r ≥ 0, define a subset of G by

F(f, r) = {g ∈ G | f(g) > r}.

Let χ_{F(f,r)} be the characteristic function of this set. Note that for a pair of positive functions f, h ∈ `1(G), |χ_{F(f,r)}(g) − χ_{F(h,r)}(g)| = 1 if and only if r is between f(g) and h(g). Thus, if f and h are bounded above by one,

|f(g) − h(g)| = ∫₀¹ |χ_{F(f,r)}(g) − χ_{F(h,r)}(g)| dr.

Since our gν and ν are in `1(G) and bounded above by 1, we get that

||gν − ν||₁ = ∑_{h∈G} |gν(h) − ν(h)|
= ∑_{h∈G} ∫₀¹ |χ_{F(gν,r)}(h) − χ_{F(ν,r)}(h)| dr
= ∫₀¹ ∑_{h∈G} |χ_{gF(ν,r)}(h) − χ_{F(ν,r)}(h)| dr
= ∫₀¹ |gF(ν, r) △ F(ν, r)| dr.

Note that we can exchange the integral and the sum at the third step because, for each r > 0, only a finite number of the terms |χ_{gF(ν,r)}(h) − χ_{F(ν,r)}(h)| are nonzero. If infinitely many were, then one of ν(h) and gν(h) would be above r for infinitely many h, and thus ∑_{h∈G} ν(h) could not be 1.

We now want to show that ∫₀¹ |F(ν, r)| dr = 1. For each h ∈ G, by definition, h ∈ F(ν, r) if ν(h) > r. If we let χ_h(r) = 1 when ν(h) > r and zero elsewhere, then ν(h) = ∫₀¹ χ_h(r) dr. Once again, for each r > 0, only finitely many summands can be nonzero, as otherwise ∑_{h∈G} ν(h) could not be one. Thus, we can exchange the integral and sum to get

∫₀¹ |F(ν, r)| dr = ∫₀¹ ∑_{h∈G} χ_h(r) dr = ∑_{h∈G} ∫₀¹ χ_h(r) dr = ∑_{h∈G} ν(h) = 1.

From this and the last paragraph, we get that

ε∫₀¹ |F(ν, r)| dr = ε > ∑_{g∈E} ||gν − ν||₁ = ∫₀¹ ∑_{g∈E} |gF(ν, r) △ F(ν, r)| dr.

Therefore, for some r we have

∑_{g∈E} |gF(ν, r) △ F(ν, r)| < ε|F(ν, r)|,

and we are done.

This proof comes from [15], with assistance from notes from [16].

4.2 Crossed Product Fundamentals

We may now begin discussing the crossed product construction. We begin by generalizing the idea of a dynamical system to a C*-context, in what is known as a “C*-dynamical system.”

Definition 4.2.1. (C*-dynamical system) Let A be a C*-algebra and G be a countable discrete group such that there is a homomorphism φ from G into the automorphisms of A. We refer to such a triple of a C*-algebra, countable discrete group, and homomorphism as a C*-dynamical system. In general, we will denote φ(g) for g ∈ G by φg for brevity.

Example 4.2.2. (Rotations) Recall from Examples 2.4.2, 2.4.6, and 2.4.7 that if T is the circle and α is a rotation of T given by e^{2πiα}, then (T, α) is a dynamical system. We wish to turn this into a C*-dynamical system using a procedure that will soon become familiar. Let A = C(T), where T is the circle, and let G be Z. We verified that A is a C*-algebra in Example 3.1.4, and we know that Z is a group under addition. Our φ will be defined by φ_n f(x) = f(e^{−2nπiα}x). We must now verify that φ is a homomorphism and that φ_n is an automorphism for every n ∈ Z.

The former is fairly easy. As Z is abelian,

φ_{n+m}f(x) = f(e^{−2(n+m)πiα}x) = f(e^{−2nπiα}e^{−2mπiα}x) = φ_nφ_mf(x).

Next, we must show that φn is an automorphism for all n ∈ Z. Invertibility is straightforward.

φ_nφ_{−n}f(x) = f(e^{−2nπiα}e^{2nπiα}x) = f(x) = f(e^{2nπiα}e^{−2nπiα}x) = φ_{−n}φ_nf(x).

62 As is scalar multiplication.

φ_n(λf)(x) = (λf)(e^{−2nπiα}x) = λφ_nf(x).

Finally, for addition and multiplication of elements in C(T ),

φ_n(f + h)(x) = (f + h)(e^{−2nπiα}x) = f(e^{−2nπiα}x) + h(e^{−2nπiα}x) = φ_nf(x) + φ_nh(x),

φ_n(fh)(x) = (fh)(e^{−2nπiα}x) = f(e^{−2nπiα}x)h(e^{−2nπiα}x) = (φ_nf(x))(φ_nh(x)).

Something to note is that if α is rational, then there is an integer b such that bα ≡ 0 (mod 1), and thus φ_b is the identity. Essentially, this makes our group Z/bZ rather than Z. For irrational rotations, this does not occur, by definition. Ergo, for every n, e^{2nπiα} is distinct from every other e^{2mπiα}. Therefore, in these cases, we are using the entirety of Z.
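The verifications above are easy to spot-check numerically at sample points of the circle; the sketch below uses the irrational angle α = √2:

```python
import numpy as np

# phi_n f(x) = f(e^{-2 pi i n alpha} x); we check the homomorphism
# property phi_2 phi_3 = phi_5 and multiplicativity at sample points.
alpha = np.sqrt(2)  # irrational angle

def phi(n, f):
    return lambda x: f(np.exp(-2j * np.pi * n * alpha) * x)

f = lambda x: x**3 + 2 * x      # polynomial functions on the circle
g = lambda x: x**2 - 1

xs = np.exp(2j * np.pi * np.linspace(0, 1, 7, endpoint=False))
for x in xs:
    assert np.isclose(phi(2, phi(3, f))(x), phi(5, f)(x))
    assert np.isclose(phi(4, lambda t: f(t) * g(t))(x),
                      phi(4, f)(x) * phi(4, g)(x))
print("rotation action verified on sample points")
```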

Example 4.2.3. (General Homeomorphisms) We can generalize the last example even more.

Let X be any compact Hausdorff space and let A = C(X) be the continuous functions on it. We know from Example 3.1.4 that A is a C*-algebra. Finally, let G be a discrete group such that there exists a homomorphism φ from G to the group of homeomorphisms of X. We want to show that C(X), together with G, forms a C*-dynamical system. In the same spirit as Example 4.2.2, we can define φ_g(f)(x) = f(φ(g⁻¹)x). The proof that g ↦ φ_g is a homomorphism and that each φ_g is an automorphism of A is essentially identical to that in Example 4.2.2:

φ_{gh}f(x) = f(φ((gh)⁻¹)x) = f(φ(h⁻¹g⁻¹)x) = f(φ(h⁻¹)φ(g⁻¹)x) = φ_gφ_hf(x).

Note that the inverse in the definition of φ_g is exactly what keeps the composition in the right order; had we instead defined φ_g(f)(x) = f(φ(g)x), the order would reverse and we would obtain an action of the opposite group Gop, the group with the elements of G whose operation is given by g ∗ h = hg. (In any case, the majority of the groups we deal with in this paper are abelian, for which G and Gop coincide.) To show invertibility,

φ_gφ_{g⁻¹}f(x) = φ_{gg⁻¹}f(x) = f(x) = φ_{g⁻¹g}f(x) = φ_{g⁻¹}φ_gf(x).

And finally, for scalar multiplication, element addition, and element multiplication,

φ_g(λf)(x) = λf(φ(g⁻¹)x) = λφ_gf(x),

φ_g(f + h)(x) = (f + h)(φ(g⁻¹)x) = f(φ(g⁻¹)x) + h(φ(g⁻¹)x) = φ_gf(x) + φ_gh(x),

φ_g(fh)(x) = (fh)(φ(g⁻¹)x) = f(φ(g⁻¹)x)h(φ(g⁻¹)x) = φ_gf(x)φ_gh(x).

Another thing that we can show is that each φ_g preserves norm in the appropriate sense: if f is of norm 1 in the sup-norm sense, then so too is φ_g f. This is fairly elementary. First, by Lemma 3.2.8, φ_g is norm-decreasing, so ||φ_g f|| ≤ ||f||. To show the reverse inequality, let {x_n} be a sequence of points such that |f(x_n)| converges to 1, and consider the sequence y_n = φ(g)(x_n). Then

φ_g f(y_n) = f(φ(g⁻¹)φ(g)x_n) = f(x_n),

so |φ_g f(y_n)| tends to 1 as well, hence ||φ_g f|| ≥ 1 = ||f|| and therefore ||φ_g f|| = ||f|| = 1. In particular, Example 4.2.2 is in fact a special case of this one, as the circle is a compact Hausdorff space and each rotation is a homeomorphism.

Having a C*-dynamical system, we want to eventually consolidate it into a single C*-algebra that contains information on both pieces. Our first step is to define AG.

Definition 4.2.4. (AG) Let (A, G, φ) be a C*-dynamical system. Define AG to be the algebra consisting of finite sums of elements of the form a_g g, where g ∈ G and a_g ∈ A. Our multiplication is given by the rule gag⁻¹ = φ_g(a). Therefore, if f = ∑_{g∈G} a_g g and h = ∑_{t∈G} b_t t,

fh = ∑_{g∈G}∑_{t∈G} a_g g b_t t = ∑_{g∈G}∑_{t∈G} a_g(g b_t g⁻¹)gt = ∑_{g∈G}∑_{t∈G} a_gφ_g(b_t)gt.

If we let g∗ = g⁻¹, standard adjoint properties give us that

(ag)∗ = g∗a∗ = g⁻¹a∗ = (g⁻¹a∗g)g⁻¹ = φ_{g⁻¹}(a∗)g⁻¹.
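The twisted multiplication rule can be prototyped for a finite toy system, assuming A = C(Z₃) (functions stored as length-3 arrays), G = Z₃ acting by cyclic shift, and elements of AG stored as dictionaries {g: a_g}:

```python
import numpy as np

# phi_g(a)[x] = a[x - g], the shift action of Z_3 on C(Z_3); the product
# in AG uses (a_g g)(b_t t) = a_g phi_g(b_t) (g + t).
n = 3

def phi(g, a):
    return np.roll(a, g)

def ag_mul(f, h):
    out = {}
    for g, a in f.items():
        for t, b in h.items():
            k = (g + t) % n
            out[k] = out.get(k, np.zeros(n)) + a * phi(g, b)
    return out

a = {1: np.array([1.0, 2.0, 3.0])}
b = {2: np.array([4.0, 5.0, 6.0])}

prod = ag_mul(a, b)
assert set(prod) == {0}  # a_1 phi_1(b_2) sits at group element 1 + 2 = 0

# the twisted product is associative because phi acts by automorphisms
c = {0: np.array([1.0, 0.0, -1.0])}
lhs, rhs = ag_mul(ag_mul(a, b), c), ag_mul(a, ag_mul(b, c))
assert all(np.allclose(lhs[k], rhs[k]) for k in lhs)
print("twisted multiplication in AG verified")
```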

AG is then a kind of proto-C*-algebra, having the algebraic properties, but no norm, and thus no completion. Furnishing the norm that will complete AG into a C*-algebra will be our next goal. To accomplish this, we first want to talk about representations of AG, and in particular to define a way in which representations of the individual pieces can be intertwined. We begin with a kind of representation of G.

Definition 4.2.5. (Unitary Representation) A unitary representation of a group G is a homo- morphism ψ from G to the group of unitary elements of B(H) for some Hilbert space H.

And with this, we can define the larger structure.

Definition 4.2.6. (Covariant representation) A covariant representation of a C*-dynamical system (A, G, φ) is a pair of representations (π, s) of A and G. We require π to be a unital representation of A on a Hilbert space H and s to be a unitary representation from G to B(H). Finally, to be a covariant representation, the pair must also satisfy s(g)π(a)s(g)∗ = π(φ_g(a)) for all a ∈ A, g ∈ G.

An important thing to note is that given a covariant representation (π, s) of (A, G, φ), we can convert it into a unital representation of AG.

Definition 4.2.7. (Standard Representation) Let (A, G, φ) be a C*-dynamical system and (π, s) be a covariant representation of (A, G, φ). If f = ∑_{g∈G} a_g g ∈ AG, we define σ(f) = ∑_{g∈G} π(a_g)s(g). We first show σ preserves adjoints.

σ(f)∗ = ∑_{g∈G} s(g)∗π(a_g)∗ = ∑_{g∈G} s(g⁻¹)π(a_g∗)s(g)s(g⁻¹) = ∑_{g∈G} π(φ_{g⁻¹}(a_g∗))s(g⁻¹) = σ(f∗).

Next, products of elements.

σ(f)σ(h) = ∑_{g∈G}∑_{t∈G} π(a_g)s(g)π(b_t)s(t) = ∑_{g∈G}∑_{t∈G} π(a_g)(s(g)π(b_t)s(g)∗)s(g)s(t) = ∑_{g∈G}∑_{t∈G} π(a_g)π(φ_g(b_t))s(gt) = σ(fh).

Element addition is trivial. To verify that it is a unital representation, we simply note that σ(1) = π(1)s(e) = I, as π is a unital representation and s(e) = I. We call this σ the standard representation of AG generated by (π, s).

With all of this, we can finally define the crossed product.

Definition 4.2.8. (Crossed Product) Let (A, G, φ) be a C*-dynamical system. We define a norm on AG by ||f|| = sup ||σ(f)|| over all unital representations σ of AG. The crossed product A ×φ G is the completion of AG with respect to this norm.

While this provides a formal definition, it is obviously not the end of the line here. There are two things we must do to finish this definition. First, to show that this supremum is taken over a non-empty collection of unital representations, and second, to verify that this is indeed a norm that satisfies the properties required of a C*-algebra. We begin by verifying the collection is nonempty.

Lemma 4.2.9. The collection of unital representations of AG is non-empty.

Proof. By the remarks after Definition 4.2.6, if we can construct a covariant representation of (A, G, φ), we can construct a unital representation of AG. By the remarks after Definition 3.3.2 and Theorem 3.3.18, we know that there exists a unital representation π from A to a Hilbert space H. Our desired covariant representation will be from (A, G, φ) to the bounded operators on `2(G, H) (see Example 2.1.9), the set of square-summable functions from G to H. Define (ψ, s) by

ψ(a)f(g) = π(φ_{g⁻¹}(a))(f(g)),

s(t)f(g) = f(t⁻¹g).

Since ψ is a composition of maps that preserve sums, products, adjoints, and the identity, it is clearly a unital representation. Similarly, s is clearly a homomorphism, and it's easy to see the adjoint of s(t) is s(t⁻¹), so s(t) is unitary for all t. Finally, to check the covariance condition,

s(t)ψ(a)s(t)∗f(g) = (ψ(a)s(t)∗f)(t⁻¹g) = π(φ_{(t⁻¹g)⁻¹}(a))((s(t)∗f)(t⁻¹g)) = π(φ_{g⁻¹}φ_t(a))f(g) = ψ(φ_t(a))f(g).

Thus, we have constructed a covariant representation of (A, G, φ), and thus a unital representation of AG.
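The covariance computation above can be checked numerically for a small example, assuming G = Z₄ acting on A = C(Z₄) by shifts and π the multiplication representation on H = C⁴:

```python
import numpy as np

# psi(a)f(g) = pi(phi_{g^-1}(a)) f(g), s(t)f(g) = f(g - t); we verify
# s(t) psi(a) s(t)* = psi(phi_t(a)) with phi the shift action.
n = 4
rng = np.random.default_rng(3)

def phi(t, a):
    return np.roll(a, t)                       # shift action on A = C(Z_n)

def psi(a, f):                                 # f is n x n: f[g] is a vector in H = C^n
    return np.array([phi(-g, a) * f[g] for g in range(n)])

def s(t, f):
    return np.array([f[(g - t) % n] for g in range(n)])

a = rng.normal(size=n)
f = rng.normal(size=(n, n))
for t in range(n):
    lhs = s(t, psi(a, s(-t, f)))               # s(t) psi(a) s(t)*
    rhs = psi(phi(t, a), f)
    assert np.allclose(lhs, rhs)
print("covariance condition verified")
```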

We call the representations generated by this method regular representations.

Definition 4.2.10. (Reduced Crossed Product) In general, the collection of regular representa- tions is not enough to completely determine the crossed product of a C*-dynamical system. For

the purposes of this document, it will suffice, but in general, the crossed product generated by only these representations is referred to as the reduced crossed product, which we will denote A ×rφ G. The full crossed product is also often referred to as the universal crossed product, and will always be denoted A ×φ G.

Our next step is to verify that the norm we defined indeed works as needed for a C*-algebra.

Lemma 4.2.11. If we define ||f|| = sup ||σ(f)|| over all unital representations σ of AG, ||f|| is a norm. If we complete AG with respect to this norm, the result, A ×φ G is a C*-algebra.

Proof. We begin by verifying that ||f|| is a norm. Since ||f|| is the supremum of nonnegative values, it is nonnegative itself. If this supremum is zero, then ||σ(f)|| is zero for every σ, so we now show that for every nonzero f ∈ AG there is a unital representation σ such that σ(f) is nonzero. To begin, suppose f consists of a single term a_g g. By Theorem 3.3.18, there is an injective representation χ of A, and by the remarks after Definition 3.3.2, we may assume it is unital. Let ξ be the regular representation of AG generated from this χ. This is defined by

(ξ(a_g g)f)(x) = χ(φ_{x⁻¹}(a_g))f(g⁻¹x).

We only need to find a single x ∈ G and one h ∈ `2(G, H) such that (ξ(a_g g)h)(x) is nonzero, so for the purposes of simplicity we may assume x is the identity e. As χ is injective, χ(a_g) is zero only when a_g is. However, since a_g is nonzero, χ(a_g) is a nonzero operator. Thus, there is some function l ∈ `2(G, H) and some y ∈ G such that χ(a_g)l(y) is nonzero. Define h(z) = l(gzy). Then,

(ξ(a_g g)h)(e) = χ(a_g)h(g⁻¹) = χ(a_g)l(gg⁻¹y) = χ(a_g)l(y)

is nonzero. Hence, we have found a σ such that σ(f) is nonzero, and we are done in this case. If f is a general ∑_{g∈G} a_g g, repeat the previous construction for some nonzero term a_g g of f, and define h′(z) to be h(g⁻¹) at g⁻¹ and zero elsewhere. Then,

(ξ(f)h′)(e) = ∑_{t∈G} χ(a_t)h′(t⁻¹) = χ(a_g)l(y)

is nonzero and we are done. Note that we can collapse this sum because h′(z) is zero for all z ≠ g⁻¹.

This shows that the norm is definite. Our next steps are to show the additive, multiplicative, and scalar properties of the norm. To begin with,

||λf|| = sup ||σ(λf)|| = sup ||λσ(f)|| = |λ| sup ||σ(f)|| = |λ|||f||.

For the triangle inequality,

||f + g|| = sup ||σ(f + g)|| = sup ||σ(f) + σ(g)|| ≤ sup(||σ(f)|| + ||σ(g)||)

≤ sup ||σ(f)|| + sup ||σ(g)|| = ||f|| + ||g||.

Similarly,

||fg|| = sup ||σ(fg)|| = sup ||σ(f)σ(g)|| ≤ sup ||σ(f)||||σ(g)||

≤ sup ||σ(f)|| sup ||σ(g)|| = ||f||||g||.

It remains to show the C*-identity and that this norm does not produce infinite values. The

C*-identity is again a result of homomorphism properties and suprema. Note that because B(H) satisfies the C*-identity, ||σ(f)∗σ(f)|| = ||σ(f)∗||||σ(f)||. Remember that in B(H), ||A|| = ||A∗||, and consequently, by this norm ||f|| = ||f ∗||.

||f∗f|| = sup ||σ(f∗f)|| = sup ||σ(f)∗σ(f)|| = sup ||σ(f)∗|| ||σ(f)|| = sup ||σ(f)||² = ||f||² = ||f∗|| ||f||.

Now, to ensure that the supremum is finite, we want to show that ||σ(a_g g)|| ≤ ||a_g|| for any a_g ∈ A, g ∈ G. From the triangle inequality, this would get us that if f = ∑_{g∈G} a_g g, then ||f|| ≤ ∑_{g∈G} ||a_g||. To accomplish this, note that the restrictions of σ to elements of the form 1g and a1 give us a unitary representation of G and a unital representation π of A. Then, for any a_g g, σ(a_g g) = σ(a_g 1)σ(1g) = π(a_g)u, where u is a unitary element. We have that ||u||² = ||u∗u|| = ||1|| = 1, so ||u|| = 1. Hence,

||σ(a_g g)|| = ||σ(a_g 1)σ(1g)|| = ||π(a_g)u|| ≤ ||π(a_g)|| ||u|| = ||π(a_g)|| ≤ ||a_g||,

as ∗-homomorphisms are norm-decreasing (Lemma 3.2.8).

At this point, we have established the formal groundwork for the crossed product. A ×φ G is a C*-algebra, so by Theorem 3.3.18, we can represent it as bounded operators on some

Hilbert space H. With this, we want to consider what the images of G, or more specifically, 1g under this representation ψ are. In particular, we wish to show that ψ(1g) is unitary for every g ∈ G. This is fairly easy.

ψ(1g)ψ(1g)^∗ = ψ((1g)(1g)^∗) = ψ((1g)(1g^{−1})) = ψ(1e) = 1.

With this, we can create a simple way of thinking about elements of A ×φ G. An element f ∈ A ×φ G can be thought of as the limit of elements of the form f_n = ∑_{g∈G} a_g u_g, where a_g ∈ A and each u_g is a unitary element. We close this section with an example of a crossed product.
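To make the multiplication of such finite sums concrete, here is a small Python sketch (not from the thesis) of the dense subalgebra of finite sums inside C(T) ×φ Z. The angle and the sample functions are arbitrary illustrative choices; the product is driven entirely by the covariance rule u_m f u_m^∗ = φ_m(f).

```python
import math

# Sketch (not from the thesis): multiplication of finite sums Σ a_m·u_m in
# C(T) ×_φ Z, with T parametrized as [0, 1) and an arbitrary irrational θ.
THETA = math.sqrt(2) - 1

def phi(m, f):
    """The Z-action on A = C(T): φ_m(f)(x) = f(x − m·θ mod 1)."""
    return lambda x: f((x - m * THETA) % 1.0)

def multiply(a, b):
    """(Σ a_m u_m)(Σ b_n u_n) = Σ a_m·φ_m(b_n)·u_{m+n}; each finite sum is
    a dict mapping the integer group element to its coefficient function."""
    out = {}
    for m, am in a.items():
        for n, bn in b.items():
            term = lambda x, am=am, bn=bn, m=m: am(x) * phi(m, bn)(x)
            if m + n in out:
                prev = out[m + n]
                out[m + n] = lambda x, p=prev, t=term: p(x) + t(x)
            else:
                out[m + n] = term
    return out

a = {1: lambda x: math.cos(2 * math.pi * x)}   # f·u_1
b = {0: lambda x: math.sin(2 * math.pi * x)}   # g·u_0
prod = multiply(a, b)                          # all mass lands in degree 1
```

Note that the coefficient in degree 1 is f(x)·g(x − θ), not the pointwise product: the unitary u_1 twists everything to its right by the action.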

Example 4.2.12. Our example of a crossed product will be from the C*-dynamical system discussed in Example 4.2.2. Specifically, we will take A to be the continuous functions on the circle, and G to be the integers acting on the circle and through that on A via irrational rotation. Specifically, we consider φ_n(f)(x) = f(e^{−2πinθ}x), where θ is irrational. Now, define Aθ to be the algebra with unitary elements U, V ∈ Aθ such that VUV^∗ = e^{−2πiθ}U, and such that whenever B is another C*-algebra with unitary elements υ, ν ∈ B satisfying νυν^∗ = e^{−2πiθ}υ, there is a ∗-homomorphism from Aθ onto B that carries U to υ and V to ν. We call Aθ the irrational rotation algebra, and we claim both that Aθ exists and that A ×φ Z = Aθ.

By the previous remark, an element f of A ×φ Z is the limit of sums of elements of A times unitary elements. We represent such a sum as ∑_{i∈Z} f_{n,i} y_i. We can represent each f_{n,i} as the limit of a sequence of polynomials p_{n,i,k}, so f itself can be represented as the limit of ∑_{i∈Z} p_{n,i} y_i, where the p_{n,i} are polynomials. The polynomials are generated by the coordinate function x, and since our group is Z, the unitaries are generated by the image of 1. Denoting the image of 1 in A ×φ Z by u for simplicity, we then have that A ×φ Z is generated by u and x. In particular, we have that uxu^∗ = uxu^{−1} = φ_1(x) = e^{−2πiθ}x, so the generators of A ×φ Z satisfy the universal relation for Aθ. Thus, there is a ∗-homomorphism from Aθ to A ×φ Z that takes U to x and V to u.

Our goal is now to generate a homomorphism in the other direction. We begin by creating a covariant representation of (A, Z, φ) by π(f) = f(U) in Aθ (by Theorem 3.2.18 and Lemma 3.2.12) and s(n) = V^n. Each V^n is unitary itself, as V^{∗n}V^n = V^nV^{∗n} = 1, so s is a unitary representation. Similarly, it is easy to see that π is a ∗-homomorphism:

π(f + g) = (f + g)(U) = f(U) + g(U) = π(f) + π(g),

π(λf) = λf(U) = λπ(f),

π(f ∗) = f ∗(U) = f(U)∗ = π(f)∗,

π(fg) = fg(U) = f(U)g(U) = π(f)π(g).

It remains to show the covariance condition.

s(n)π(f)s(n)^∗ = V^n f(U)V^{∗n} = lim_{k→∞} V^n p_k(U)V^{∗n}

= lim_{k→∞} p_k(e^{−2πinθ}U) = f(e^{−2πinθ}U) = π(φ_n f),

where the p_k are a sequence of polynomials that converge to f. Note that as VUV^∗ = e^{−2πiθ}U, we have VU = e^{−2πiθ}UV, and thus, as V is unitary, V^n U^k V^{∗n} = e^{−2πinkθ}U^k. From this covariant representation, we can define a homomorphism σ from A ×φ Z to Aθ by σ(∑_{n∈Z} a_n n) = ∑_{n∈Z} π(a_n)s(n) and extending by continuity. Since there are pre-images for U and V in A ×φ Z, namely x and u respectively, and A ×φ Z is complete, this is a surjection onto Aθ. Thus, we have surjective ∗-homomorphisms both ways between Aθ and the crossed product, and it is easy to see that they are inverses of each other. Thus, they are isomorphic.

One last thing we can show is that if (A, G, φ) is a C*-dynamical system such that G is amenable, then A ×φ G = A ×rφ G.

Theorem 4.2.13. If (A, G, φ) is a C*-dynamical system such that G is discrete and amenable, then A ×rφ G is isomorphic to A ×φ G.

Proof. Let σ be the canonical homomorphism from A ×φ G onto A ×rφ G and let a be an element of A ×φ G such that a = ∑_{g∈G} a_g u_g and only finitely many a_g are nonzero. To show that σ is an isomorphism, we want to show that ||σ(a)|| ≥ ||a||. To do this, fix a given covariant representation (k, u) of (A, G, φ) onto a Hilbert space H with regular representation κ of AG. We need to show that for every ε > 0, there is a representation π of A whose usual shift representation (l, v) generates a unital representation λ of AG such that

||κ(a)|| − ε ≤ ||λ(a)||.

To accomplish this, we will begin by taking π = k.

Let ν be the representation of G on L^2(G) given by ν(g)f(x) = f(g^{−1}x), and let (l, v) be the usual shift representation generated by k onto L^2(G, H) = L^2(G, µ) ⊗ H. Note that v(g) = ν(g) ⊗ 1 for all g ∈ G. Let z be the unique unitary operator in B(L^2(G, H)) such that (zξ)(g) = u(g)^{−1}(ξ(g)) for ξ ∈ L^2(G, H). We want to show that z(v(h) ⊗ u(h))z^{−1} = v(h) ⊗ 1 for h ∈ G and z(1 ⊗ k(a))z^{−1} = l(a) for a ∈ A. To verify this, select a ξ ∈ L^2(G, H) and a g ∈ G and note that

(z(v(h) ⊗ u(h))ξ)(g) = u(g)^{−1}(((v(h) ⊗ u(h))ξ)(g)) = u(g)^{−1}(u(h)(ξ(h^{−1}g)))

= u(g^{−1}h)(ξ(h^{−1}g)) = (zξ)(h^{−1}g) = ((v(h) ⊗ 1)zξ)(g).

To show z(1 ⊗ k(a))z^{−1} = l(a), we need to use both the covariance of (k, u) and the definition of l:

(z(1 ⊗ k(a))ξ)(g) = u(g)^{−1}(((1 ⊗ k(a))ξ)(g)) = u(g)^{−1}k(a)(ξ(g))

= k(φ_{g^{−1}}(a))u(g)^{−1}(ξ(g)) = k(φ_{g^{−1}}(a))((zξ)(g))

= (l(a)zξ)(g).

Thus, we have shown that z(v(h) ⊗ u(h))z^{−1} = v(h) ⊗ 1 and z(1 ⊗ k(a))z^{−1} = l(a).

If we write 1 ⊗ k for the representation of A that takes a to 1 ⊗ k(a) on L^2(G) ⊗ H = L^2(G, H), the previous paragraph gives us that (v ⊗ u, 1 ⊗ k) is a covariant representation. It also gives us that if π is the standard representation of AG generated by (v ⊗ u, 1 ⊗ k), and a is such that only finitely many a_g are nonzero, then ||π(a)|| = ||λ(a)||.

It then remains to show that

||π(a)|| > ||κ(a)|| − ε.

If a = 0 or ε ≥ ||κ(a)||, this is immediate, so we may assume both are false. Set

δ = ((||κ(a)|| − ε/2) / (||κ(a)|| − ε))^2 − 1.

Then, as ||κ(a)|| − ε/2 > ||κ(a)|| − ε, we have δ > 0. Let S = supp(a) ∪ {1}. Then S and S^{−1} = {s^{−1} | s ∈ S} are finite subsets of G, and we may use the Følner condition to get a nonempty finite K ⊂ G such that |S^{−1}K △ K| < δ|K|. Since 1 ∈ S^{−1}, the latter condition gives us that |S^{−1}K \ K| < δ|K|, and thus that |S^{−1}K| < (1 + δ)|K|. By the definition of the norm, we can find a ξ_0 ∈ H such that ||ξ_0|| = 1 and ||κ(a)(ξ_0)|| > ||κ(a)|| − ε/2. If we then define ξ ∈ L^2(G, H) to be 0 outside S^{−1}K and ξ_0 within it, then

||ξ|| = |S^{−1}K|^{1/2} ||ξ_0|| < (1 + δ)^{1/2} |K|^{1/2}.

We now wish to estimate ||π(a)ξ||. For g ∈ K, we have

(π(a)ξ)(g) = ∑_{h∈G} (((1 ⊗ k)(a_h))(v(h) ⊗ u(h))ξ)(g)

= ∑_{h∈G} k(a_h)u(h)(ξ(h^{−1}g)) = ∑_{h∈G} k(a_h)u(h)ξ_0 = κ(a)ξ_0.

Therefore,

||π(a)ξ|| ≥ |K|^{1/2}||κ(a)ξ_0|| > |K|^{1/2}(||κ(a)|| − ε/2).

From this, we get

||π(a)|| > |K|^{1/2}(||κ(a)|| − ε/2) / ((1 + δ)^{1/2}|K|^{1/2}) = (1 + δ)^{−1/2}(||κ(a)|| − ε/2) = ||κ(a)|| − ε,

and we are done.

Note that this means that if we have a dynamical system (X, σ) and take its generated

C*-dynamical system (C(X), Z, σ), the reduced crossed product is equal to the universal one. The material in this section comes mostly from [5] and [18], with the amenability proof coming from [17].
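For G = Z the Følner condition invoked in the proof is easy to exhibit by hand: the intervals K_N = {0, ..., N − 1} have small symmetric difference with their translates. A small sketch (not from the thesis; the set S is an arbitrary example):

```python
# Sketch (not from the thesis): the intervals K_N = {0, ..., N−1} witness
# the Følner condition for Z.  For any finite S ⊂ Z the defect
# |S⁻¹K △ K| / |K| is O(1/N), so it drops below any δ > 0.
def folner_defect(S, N):
    K = set(range(N))
    SK = {k - s for s in S for k in K}   # S⁻¹K, writing Z additively
    return len(SK ^ K) / len(K)          # symmetric-difference ratio

S = {-3, 0, 1, 7}
assert folner_defect(S, 10) == 1.0       # too small a K is useless...
assert folner_defect(S, 1000) == 0.01    # ...but the defect is 10/N here
```

The defect for an interval is controlled by the diameter of S alone, which is why a single large K works simultaneously for every element of a fixed finite support.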

4.3 Minimality and Simplicity

Now, with the background on crossed products and C*-dynamical systems out of the way, we turn to showing the main result of this document. First, we establish how to view a classical dynamical system (X, σ) as a C*-dynamical system. By the same procedure used in Examples

4.2.2 and 4.2.3, we may define a C*-dynamical system (C(X), Z, φ), where

φ_n(f) = f ◦ σ^{−n}.

With this, we may naturally define the crossed product C(X) ×σ Z. Through this construction, then, we obtain a C*-algebra from any classical dynamical system. Our first goal is to show that the minimality of the original dynamical system is equivalent to the simplicity of the generated crossed product. We begin by defining simplicity.

Definition 4.3.1. (Simplicity) A C*-algebra A is simple if it has no nontrivial proper C*-ideals.

Example 4.3.2. (C) An example of a simple C*-algebra is C. Since all nonzero elements are invertible, if an ideal I contains a nonzero element x, then it must contain x^{−1}x = 1, and thus y·1 = y ∈ I for all y ∈ C. Hence I = C. Therefore, all ideals of C are either zero or the entire algebra, and thus C is simple.

Example 4.3.3. (C(X)) However, an example of a C*-algebra that is not simple is the algebra of continuous functions on a compact Hausdorff space X with |X| > 1 (if |X| = 1, then C(X) = C).

Let A be a proper, nonempty open subset of X and let C_A(X) be the set of continuous functions on X that are zero outside A. It is easy to see that, since multiplication is defined pointwise, for any f ∈ C_A(X) and g ∈ C(X), gf must be zero outside A, and thus gf ∈ C_A(X). Hence, C_A(X) is an ideal. By Urysohn's lemma there are nonzero functions in C_A(X), while clearly not every continuous function on X is zero outside A, so this ideal is neither all of C(X) nor trivial. Therefore, C(X) is in general not simple.
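The ideal C_A(X) can be played with in a toy finite model (not from the thesis); the whole argument is just that pointwise multiplication cannot create support outside A:

```python
# Toy finite model (not from the thesis) of the ideal C_A(X): functions on
# a sample space X that vanish outside a proper subset A are closed under
# pointwise multiplication by arbitrary functions on X.
X = range(10)
A = {2, 3, 4}

def in_ideal(f):
    """f belongs to C_A(X): it is zero at every point outside A."""
    return all(f(x) == 0 for x in X if x not in A)

f = lambda x: x + 1 if x in A else 0   # an element of the ideal
g = lambda x: x ** 2 + 1               # an arbitrary element of C(X)
gf = lambda x: g(x) * f(x)

assert in_ideal(f) and in_ideal(gf)    # gf stays in the ideal
assert not in_ideal(g)                 # so the ideal is nonzero and proper
```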

At this point, we can show that simplicity of the crossed product implies minimality of the original dynamical system. Or, more specifically, non-minimality implies non-simplicity of the crossed product.

Theorem 4.3.4. (Simplicity implies Minimality) If (X, σ) is a classical dynamical system and the crossed product C(X) ×σ Z is simple, then (X, σ) is minimal.

73 Proof. Assume (X, σ) is not minimal. Then there is a closed proper subset Y of X such that

σ(Y) = Y. Consider the subset K of C(X) ×σ Z generated by elements of the form f(x)s(n), where the f(x) are functions that are zero on Y and the s(n) are unitary. We wish to show that K is an ideal. A general element of K is the limit of elements of the form ∑_{i=1}^{k} f_i(x)s(i), where k is some integer and the f_i(x) are (potentially zero) continuous functions on X that are zero on Y. A general element of C(X) ×σ Z is the limit of elements of the form ∑_{i=1}^{k} g_i(x)s(i), where k is some integer and the g_i(x) are arbitrary continuous functions on X. We wish to show the product of these is in K. It suffices to show that the product of any such f_i(x)s(i) and g_j(x)s(j) is within K:

g_j(x)s(j)f_i(x)s(i) = g_j(x)s(j)f_i(x)s(−j)s(i + j) = g_j(x)f_i(σ^j x)s(i + j).

Since Y is σ-invariant and f_i(x) is zero on Y, f_i(σ^j x) is zero on Y as well, and so too must be g_j(x)f_i(σ^j x). Thus, g_j(x)f_i(σ^j x)s(i + j) is an element of K, and K is then a left ideal. To show that it is a right ideal is even easier:

f_i(x)s(i)g_j(x)s(j) = f_i(x)s(i)g_j(x)s(−i)s(i + j) = f_i(x)g_j(σ^i x)s(i + j).

Then, as f_i(x) is zero on Y, f_i(x)g_j(σ^i x) also must be zero on Y, so this is an element of K. Thus, K is an ideal. Since Y is a proper closed subset, Urysohn's lemma provides a nonzero continuous function vanishing on Y, so K is nonzero. It is also easy to see that 1s(1) is not in K, where 1 is the function that is 1 on all of X, so K is proper. Therefore, we have shown non-minimality implies non-simplicity.

To show the reverse implication takes a good deal more work. We will need a large collection of lemmas and theorems to finally reach it. We begin with a circle integration lemma in the spirit of the continuous functional calculus.

Lemma 4.3.5. If f is a continuous function from the circle T to a C*-algebra A, there exists a unique element ∫_T f(z)dz in A such that for any representation π of A on a Hilbert space H, and for h, k ∈ H,

⟨π(∫_T f(z)dz)h, k⟩ = ∫_T ⟨π(f(z))h, k⟩dz.

Here, the second integral is taken in the usual sense. Furthermore, ∫_T f(z)dz also satisfies the following, where b is an element of A:

b(∫_T f(z)dz) = ∫_T bf(z)dz,

||∫_T f(z)dz|| ≤ ∫_T ||f(z)||dz.

Proof. We begin by approximating f with simple step functions. Because f is a continuous function on a compact space, f is uniformly continuous. For a given ε > 0, find the corresponding δ such that ||f(x) − f(y)|| < ε/2 for any |x − y| < δ. Divide the circle evenly into half-open intervals such that the length of each interval is less than δ, and label these intervals T_i. Select x_i to be any point in T_i. Then, for any point y in T_i, ||f(y) − f(x_i)|| ≤ ε/2 < ε. Define f_i to be the characteristic function of T_i. Then for any z ∈ T, ||∑_i f_i(z)f(x_i) − f(z)|| < ε, as the T_i partition the circle.

Next, we use this approximation to approximate the integral of the inner products. By Theorem 3.3.18, A has a faithful representation τ : A → B(H) for some Hilbert space H. Note that the form that takes (h, k) ∈ H × H to ∫_T ⟨τ(f(z))h, k⟩dz is bounded and sesquilinear. Then, by [9, Theorem 13.5], there is a bounded operator Q on H such that

⟨Qh, k⟩ = ∫_T ⟨τ(f(z))h, k⟩dz.

From our initial approximation,

||Q − ∑_i (∫_T f_i(z)dz) τ(f(x_i))|| ≤ ε.

As A is complete, so too is τ(A), so Q ∈ τ(A). Therefore, we may tentatively define ∫_T f(z)dz to be τ^{−1}(Q). We have that

||∫_T f(z)dz − ∑_i (∫_T f_i dz) f(x_i)|| ≤ ε,

so if π is another representation of A,

|⟨π(∫_T f(z)dz)h, k⟩ − ∫_T ⟨π(f(z))h, k⟩dz| ≤ 2ε||h||||k||,

and as ε is arbitrary, this gets us that ⟨π(∫_T f(z)dz)h, k⟩ = ∫_T ⟨π(f(z))h, k⟩dz.

To show that b(∫_T f(z)dz) = ∫_T bf(z)dz, take a faithful representation π and note that we have just shown that

⟨π(∫_T bf(z)dz)h, k⟩ = ∫_T ⟨π(bf(z))h, k⟩dz = ∫_T ⟨π(f(z))h, π(b)^∗(k)⟩dz

= ⟨π(∫_T f(z)dz)h, π(b)^∗(k)⟩ = ⟨π(b ∫_T f(z)dz)h, k⟩.

Since two elements x and y in a Hilbert space H are equal if and only if ⟨x, h⟩ = ⟨y, h⟩ for every h ∈ H, and π is faithful, we have equality. We can also use this faithful π to get that ||π(∫_T f(z)dz)|| ≤ ∫_T ||f(z)||dz for any such π, and therefore ||∫_T f(z)dz|| ≤ ∫_T ||f(z)||dz.

We may then use this integral over the circle to define what is known as a conditional expectation on our C*-dynamical system, which we will denote Φ.

Lemma 4.3.6. If (A, Z, φ) is a C*-dynamical system, there is a canonical faithful, positive, unital map Φ from A ×φ Z to A such that Φ(Φ(a)) = Φ(a) for any a ∈ A ×φ Z and Φ sends nonzero positive elements to nonzero elements. This is usually referred to as a faithful conditional expectation.

Since A is unital, there is a canonical unitary u in the crossed product such that uau^∗ = φ_1(a) for all a ∈ A. We are then viewing A as a subalgebra of A ×φ Z by embedding it via a ↦ au^0.

Proof. For any λ such that |λ| = 1, we can define a unitary representation of Z into A ×φ Z by taking n to λ^n u^n. It's easy to see that

(λu)^n a(λu)^{−n} = u^n a u^{−n} = φ_n(a),

so the canonical embedding and λu form a covariant representation of (A, Z, φ). By the universality of the crossed product, we get a homomorphism ρ_λ from the crossed product onto itself such that ρ_λ(a) = a and ρ_λ(u) = λu. Since ρ_λ is onto and merely rotates the image of 1, it is clear that this is injective and thus an automorphism. By Lemma 3.2.21, ρ_λ is positive.

If we consider the function f(t) = ρ_{e^{2πit}}(X) for some fixed X in the crossed product, we can see that this is continuous with respect to the norm: for X that are finite sums, continuity is trivial, and as every X can be expressed as the limit of finite-sum elements, it must be continuous overall. Thus, by Lemma 4.3.5, we may define

Φ(X) = ∫_0^1 ρ_{e^{2πit}}(X)dt.

Since each ρ_{e^{2πit}} is injective and positive, Φ is faithful and positive. Note that for a, b ∈ A and X ∈ A ×φ Z,

Φ(aXb) = ∫_0^1 ρ_{e^{2πit}}(aXb)dt = a(∫_0^1 ρ_{e^{2πit}}(X)dt)b = aΦ(X)b.

In particular, this gives us that Φ(a1) = a. Also of note is that

Φ(u^k) = ∫_0^1 ρ_{e^{2πit}}(u^k)dt = ∫_0^1 e^{2πikt} u^k dt = 0,

for k ≠ 0. Therefore, since the integral is additive, if f = ∑_n a_n u^n, then

Φ(f) = ∑_n Φ(a_n u^n) = ∑_n a_n Φ(u^n) = a_0.

This defines Φ on AZ. Since this is dense in A ×φ Z and A is complete, we may extend Φ by continuity to be defined on the entire crossed product. By the last note, we have that

Φ(Φ(a)) = Φ(a). That Φ is unital is a consequence of its integral construction.
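The computation Φ(u^k) = 0 for k ≠ 0 rests on the scalar identity ∫_0^1 e^{2πikt} dt = 0, which a Riemann sum confirms numerically (a sketch, not part of the thesis):

```python
import cmath

# Riemann-sum sketch (not from the thesis) of the identity driving Φ:
# ∫₀¹ e^{2πikt} dt equals 1 for k = 0 and 0 for k ≠ 0, so averaging the
# rotated copies ρ_{e^{2πit}}(f) kills every coefficient of u^k with
# k ≠ 0 and returns a_0.
def circle_average(k, steps=10_000):
    return sum(cmath.exp(2j * cmath.pi * k * s / steps)
               for s in range(steps)) / steps

assert abs(circle_average(0) - 1) < 1e-9
assert all(abs(circle_average(k)) < 1e-9 for k in (1, -1, 5))
```

In other words, Φ is exactly the operator-valued analogue of extracting the zeroth Fourier coefficient.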

With our conditional expectation defined, the next piece we will need is the ability to extend

“unimodular” functions. Unimodular simply describes those functions f such that |f(x)| = 1 for all x in the domain of f.

Lemma 4.3.7. If K is a closed set of a compact Hausdorff space X and f is a continuous unimodular function from K to C, then there exists a continuous unimodular function F from

X to C such that F restricted to K is equal to f.

Proof. First, decompose f into continuous f_1 and f_2 from K to R such that f(x) = f_1(x) + if_2(x). Note that |f_1(x)| ≤ 1. By Theorem 2.2.19, we can extend f_1 to a g_1 on all of X such that g_1|_K = f_1 and |g_1(x)| ≤ 1. Define g_2(x) = √(1 − g_1(x)^2), where we are explicitly taking the positive square root. It is clear that g_2 is continuous. As f is unimodular, it is also clear that |g_2| = |f_2| on K. By Zorn's lemma, there exist maximal subsets U_i of X on which g_2 is positive. By continuity, g_2 is zero on each boundary ∂U_i. Therefore, we may change the sign of g_2 on each U_i without violating continuity. Note that this maintains the unimodularity of g_1 + ig_2. Hence, by changing the signs to align with f_2, we have that g_1 + ig_2 is a unimodular extension of f_1 + if_2 = f to all of X.

Finally, we use this to show one last lemma.

Lemma 4.3.8. If (X, σ) is a minimal dynamical system on an infinite compact Hausdorff space X, then for every c in C(X) ×σ Z and δ > 0, there are unimodular functions θ_1, ..., θ_m in C(X) such that

||Φ(c) − (1/m) ∑_{s=1}^{m} θ_s c θ_s^{−1}|| < δ.

Proof. Let µ be an ergodic measure for (X, σ). Such a measure exists by Lemma 2.4.11. We first wish to construct a continuous unimodular function ψ and a set F such that for a given ε > 0 and a nonzero integer k, we have Re ψ̄(ψ ◦ σ^{−k}) = 0 on F. To do this, let N > 2kε^{−1} and apply Lemma 2.4.12 to obtain pairwise disjoint sets F_j for 0 ≤ j ≤ N such that σ(F_j) = F_{j+1} and µ(∪_{j=0}^{N} F_j) > 1 − ε/2. By the regularity of µ, we may find a compact and thus closed subset of F_0 with measure within any δ > 0 of µ(F_0). Using this, we can replace F_0, and thus all of the F_j, with closed sets having these same properties. From there, since σ is continuous and the F_j are disjoint, we can find open neighborhoods of these closed sets with the same properties and disjoint closures. For simplicity's sake, we refer to these as F_j as well. Let

F = ∪_{j=0}^{N−k} F_j.

Since µ(∪_{j=0}^{N} F_j) = (N+1)µ(F_0) ≤ 1, we have µ(F_j) ≤ N^{−1} < ε/(2k), and so µ(F) > 1 − ε/2 − kN^{−1} > 1 − ε. Let λ = e^{πi/(2k)} and define ψ to be λ^j on each F_j. From there, extend ψ continuously to a unimodular function over all of X by Lemma 4.3.7. Then ψ̄(ψ ◦ σ^{−k}) = −i on each F_j for j between 0 and N − k, which is all of F, so we have accomplished our initial goal: 2Re(−i) = 0. Note that in particular,

ψu^kψ^{−1} + ψ^{−1}u^kψ = 2(Re ψ̄(ψ ◦ σ^{−k}))u^k

vanishes on F. Next, consider an element f = ∑_{j=−n}^{n} f_j u^j of C(X)Z such that Φ(f) = f_0 = 0. For every 1 ≤ |k| ≤ n, let ψ_k be constructed as above so that Re ψ̄_k(ψ_k ◦ σ^{−k}) = 0 on an open set F_k such that µ(F_k) > 1 − (2n)^{−1}. Note here that the F_j, and thus our entire construction, depend on n, which in turn depends on f. This gives us that the open set F = ∩_{1≤|k|≤n} F_k has positive measure and is therefore nonempty. Define

θ_s = ∏_{1≤|k|≤n} ψ_k^{s_k},

where s ∈ ∏_{1≤|k|≤n} {−1, 1}. For each |k| ≤ n, consider a pair s and s′ such that s_i = s′_i except when i = k. Therefore, θ_s = φψ_k and θ_{s′} = φψ_k^{−1}, where φ is θ_s sans the k component. Without loss of generality, assume s_k = 1. Then,

θ_s f_k u^k θ_s^{−1} + θ_{s′} f_k u^k θ_{s′}^{−1} = 2φf_k(Re ψ̄_k(ψ_k ◦ σ^{−k}))u^k φ^{−1},

which is zero on F_k by our earlier remark. Therefore, if we take the sum of this over all possible θ_s and all possible k, the result is still zero. Thus, 2^{−2n} ∑_s θ_s f θ_s^{−1} is zero on F. By the minimality of σ, the sets σ^j(F) form an open cover of X; if they only covered some subset, the complement would be a proper invariant closed set. Then, as X is compact, there is a finite subcover, and thus a highest index M of that subcover such that ∪_{i=0}^{M} σ^i(F) = X. Let θ_s^{(j)} = θ_s ◦ σ^{−j} for 1 ≤ j ≤ M. Consider the functions θ_{\{s_1,...,s_M\}} = ∏_{j=1}^{M} θ_{s_j}^{(j)}, where each s_j ∈ ∏_{1≤|k|≤n} {−1, 1}. Then, by the above,

2^{−2nM} ∑_{z=\{s_1,...,s_M\}} θ_z f θ_z^{−1}

is zero on ∪_{j=0}^{M} σ^j(F) = X for our f with f_0 = 0; that is, it is exactly equal to Φ(f). If Φ(f) is not zero, consider f − Φ(f):

Φ(f − Φ(f)) = 0 = 2^{−2nM} ∑_{z=\{s_1,...,s_M\}} θ_z(f − Φ(f))θ_z^{−1}.

Separating this out, we get that

2^{−2nM} ∑_{z=\{s_1,...,s_M\}} θ_z f θ_z^{−1} = 2^{−2nM} ∑_{z=\{s_1,...,s_M\}} θ_z Φ(f) θ_z^{−1} = Φ(f),

as Φ(f) ∈ C(X) and the θ_z are unimodular, so θ_z Φ(f) θ_z^{−1} = Φ(f). Therefore, for any f ∈ C(X)Z, this average is in fact equal to Φ(f). For a general element C of the crossed product, we can take an element c of C(X)Z that is δ/2-close to C, and as both Φ and the averaging operator are norm-reducing,

||Φ(c) − Φ(C)|| < δ/2 and

||2^{−2nM} ∑_{z=\{s_1,...,s_M\}} θ_z c θ_z^{−1} − 2^{−2nM} ∑_{z=\{s_1,...,s_M\}} θ_z C θ_z^{−1}|| < δ/2.

Thus, ||Φ(C) − 2^{−2nM} ∑_{z=\{s_1,...,s_M\}} θ_z C θ_z^{−1}|| < δ. Note that because n depends on our initial element (in this context, the c), we cannot in general get equality.

And with this, we can finally prove

Theorem 4.3.9. If (X, σ) is a dynamical system on an infinite compact Hausdorff space X, then if (X, σ) is minimal, C(X) ×σ Z is simple.

Proof. Let J be a non-zero ideal of C(X) ×σ Z. Pick a nonzero element j of J. Since J is an ideal, j∗j ∈ J, and thus, by replacing j with j∗j if necessary, we may assume that j is positive.

By Lemma 4.3.8, Φ(j) is arbitrarily close to an average of elements θ_s j θ_s^{−1}, each of which is an element of J, and thus the average is as well. Therefore, as J is closed, Φ(j) ∈ J. Since j is positive and nonzero, and as Φ is faithful, Φ(j) is a nonzero positive function. Define

j_n = ∑_{i=0}^{n} u^i(Φ(j))u^{i∗} = ∑_{i=0}^{n} σ^i(Φ(j)).

Since Φ(j) is nonzero and positive, it is strictly positive on a nonempty open set O. We wish to show that there exists an integer m such that {σ^i(O) | 0 ≤ i ≤ m} covers X. Similarly to the proof of Lemma 4.3.8, the sets σ^i(O) must cover X, as otherwise the complement of the union would be a σ-invariant closed proper subset, contradicting minimality. Thus, {σ^i(O)} is an open cover of X, which, by compactness of X, has a finite subcover {σ^i(O)}, i ∈ I. Since I is finite, it has a highest index m. Thus, ∪_{i=0}^{m} σ^i(O) covers X. Since Φ(j) is nonnegative and strictly positive on O, j_m is strictly positive on all of X, and thus there exists an inverse j_m^{−1} in C(X). Then, j_m^{−1} j_m = 1 ∈ J, and therefore J = C(X) ×σ Z. Hence C(X) ×σ Z is simple.

Example 4.3.10. (Simplicity of Irrational Rotations) As we established earlier in Example

2.4.7, the classical dynamical system consisting of the circle T and an irrational rotation α is minimal. We also showed in Example 4.2.12 that the crossed product generated by this system is the irrational rotation algebra Aθ. Therefore, we have shown that Aθ is simple without ever having to explicitly consider any ideals in it. This is quite a powerful result that tells us a lot about homomorphisms from Aθ, as it informs us that if there is a surjective homomorphism from

Aθ to another C*-algebra B, it has to be either an isomorphism or zero. Otherwise the kernel of the homomorphism would be a non-trivial ideal in Aθ, a contradiction.
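The minimality underlying this example can also be seen numerically (an illustration, not part of the thesis): the orbit of a point under an irrational rotation becomes ε-dense in the circle, so the only closed invariant sets are ∅ and T.

```python
import math

# Numeric illustration (not from the thesis): the orbit {nθ mod 1} of an
# irrational rotation is ε-dense in the circle.  θ is an arbitrary
# irrational; 1000 orbit points already come within 0.01 of every point.
theta = math.sqrt(2) - 1
orbit = [(n * theta) % 1.0 for n in range(1000)]

def dist_to_orbit(x):
    """Distance on the circle from x to the nearest orbit point."""
    return min(min(abs(x - p), 1.0 - abs(x - p)) for p in orbit)

assert all(dist_to_orbit(k / 50) < 0.01 for k in range(50))
```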

While [10] and [23] were consulted for details where noted in the above section, overall the material in this section draws mostly from [5] and [18]. The discussion of integrals, however, comes from [20].

4.4 Minimality in Abelian Groups

In this section, we will take a step back and provide another proof of the relation between minimality of a dynamical system and simplicity of the generated crossed product. However, this time we will be looking at general C*-dynamical systems rather than just those produced by a classical dynamical system, and at general abelian groups rather than just Z. To begin with, we define minimality in more generality.

Definition 4.4.1. (General Minimality) A C*-dynamical system (A, G, φ) is minimal if there does not exist a nontrivial ideal I ⊆ A such that φg(I) ⊆ I for all g ∈ G.

Note first that this does coincide with the definition of minimality in the case of a classical dynamical system (X, σ). Since an invariant subset of X would create an invariant ideal in the functions that are zero on that set, the lack of invariant ideals means there cannot be an invariant subset. Similarly, if there are no invariant sets and I is a translation-invariant ideal, then by the arguments in Theorem 4.3.9, I must be all of A.

We will need the idea of a topologically free action, but first, we must define what a free action is.

Definition 4.4.2. (Free) If G is a group that acts on a set X, we say the action is free if, for any x ∈ X and g ∈ G, gx = x implies that g = e.

Definition 4.4.3. (Topologically Free) Let (A, G, φ) be a C*-dynamical system. We say that the action is topologically free if, for any finite collection g_1, ..., g_n ∈ G \ {e}, the set X = ∩_{i=1}^{n} {x ∈ Â | g_i x ≠ x} is dense. (Recall the definition of Â from Definition 3.3.7.)

To see the connection, consider the induced action of G on Â given by gt(a) = t(φ_{g^{−1}}(a)) for t ∈ Â. Note that the action on the C*-dynamical system being topologically free means that this action on Â is almost free: while a given g or a finite collection {g_i} may fix some points in Â, every representation is arbitrarily close to one that g or {g_i} does not fix. Consider in particular the case of A = C(X), where Â = X. If a C*-dynamical system (A, G, φ) is topologically free, this means that while not every g moves every x, every g moves almost every x.
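For the rotation systems considered above, the dichotomy is easy to test numerically (a toy sketch, not from the thesis): a rational angle fails freeness at every point simultaneously, while an irrational angle is fixed by no nonzero group element.

```python
import math

# Toy check (not from the thesis): a rotation x ↦ x + α fixes one point
# iff it fixes every point, since n ∈ Z acts by translation.  A rational
# angle p/q is fixed by n = q everywhere — the action is far from
# topologically free — while an irrational angle is fixed by no n ≠ 0.
def n_acts_trivially(n, alpha, tol=1e-9):
    frac = (n * alpha) % 1.0
    return min(frac, 1.0 - frac) < tol   # x + n·α ≡ x (mod 1) iff n·α ∈ Z

assert n_acts_trivially(3, 1 / 3)                      # rational: q fixes all of T
assert not any(n_acts_trivially(n, math.sqrt(2) - 1)   # irrational: free action
               for n in range(1, 50))
```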

Our next lemma shows that shifts cannot create nontrivial disjoint representations.

Lemma 4.4.4. Let (A, G, φ) be a C*-dynamical system where G is discrete. Let π be a nonzero representation of A on a Hilbert space H and let ψ : A ×φ G → B(H) be a norm-one completely positive map that extends π. If g ∈ G is such that π ◦ φ_{g^{−1}} and π are disjoint, then ψ(au_g) = 0 for all a ∈ A.

Proof. Fix a g ∈ G such that π and π ◦ φ_{g^{−1}} are disjoint. Then, by Lemma 3.3.5, there is a net {x_i} such that π(x_i) → 1 and π ◦ φ_{g^{−1}}(x_i) → 0. Using Lemma 3.4.8, we can get that

ψ(au_g) = lim ψ(x_i)ψ(au_g)

= lim ψ(x_i au_g)

= lim ψ(u_g φ_{g^{−1}}(x_i a))

= lim ψ(u_g)ψ(φ_{g^{−1}}(x_i))ψ(φ_{g^{−1}}(a))

= 0.

For a C*-dynamical system (A, G, φ), recall the distinction between the reduced and the universal crossed product discussed in Definition 4.2.10. By the universality of A ×φ G, there is a canonical surjection σ from A ×φ G to A ×rφ G. We will denote the kernel of σ by Iσ. Our next step is to prove some facts about Iσ, but to do this, we first want to define a type of conditional expectation in general. Before that, we need a few lemmas.

Lemma 4.4.5. Let (A, G, φ) be a C*-dynamical system such that G is discrete, and let π be a representation of A on a Hilbert space H. Consider the regular representation σ generated by π. Furthermore, define s_g ∈ B(H, L^2(G, H)) by s_g(η)(h) = η for h = g and zero otherwise. If a = ∑_{g∈G} a_g u_g is such that only finitely many a_g are nonzero, then

s_h^∗ σ(a) s_k = π(φ_{h^{−1}}(a_{hk^{−1}}))

for all h, k ∈ G.

Proof. First, note that for η ∈ L^2(G, H), (σ(a)η)(h) = ∑_{g∈G} π(φ_{h^{−1}}(a_g))(η(g^{−1}h)). Then, for such an a, we have

s_h^∗ σ(a) s_k(x) = s_h^∗ (g ↦ π(φ_{g^{−1}}(a_{gk^{−1}}))(x)) = π(φ_{h^{−1}}(a_{hk^{−1}}))(x).

Lemma 4.4.6. If (A, G, φ) is a C*-dynamical system and a = ∑_{g∈G} a_g u_g ∈ A ×rφ G is such that only finitely many a_g are nonzero, then ||a_g|| ≤ ||a|| for all g ∈ G.

Proof. Let π be an injective, nondegenerate representation of A on some Hilbert space H. Then, using notation from Lemma 4.4.5,

||a_g|| = ||π(a_g)|| = ||s_1^∗ σ(a) s_{g^{−1}}|| ≤ ||σ(a)|| ≤ ||a||.

Definition 4.4.7. (E_g) If a = ∑_{g∈G} a_g u_g ∈ A ×rφ G is such that only finitely many a_g are nonzero, define E_g(a) to be a_g. By Lemma 4.4.6, if {a_n} is a Cauchy sequence of elements of this type, then {E_g(a_n)} is a Cauchy sequence in A. We may then extend E_g to all of A ×rφ G by continuity.

This works in general as a kind of conditional expectation, but for abelian G we can take a more familiar approach that gives a more elegant construction, with more theory behind it.

Lemma 4.4.8. If (A, G, φ) is a C*-dynamical system and G is a discrete group, then there is a canonical conditional expectation E from A ×rφ G to A.

Proof. This is largely analogous to the proof of Lemma 4.3.6. Begin by selecting some λ in the dual Ĝ of G, and for g ∈ G, let u_g be the canonical unitary such that u_g a u_g^∗ = φ_g(a). From this, we can define a covariant representation of (A, G, φ) via the canonical embedding of A into the crossed product and taking g ∈ G to λ(g)u_g. We may then let φ_λ be the standard representation generated by this. We wish to view φ_λ as a regular representation generated by some π, so that we may use the universality of the reduced crossed product. Select π to be a representation of A ×rφ G onto some Hilbert space H. We can define a ∗-homomorphism π_λ from A ×rφ G to B(L^2(G, H)) as follows:

(π_λ(u_g)x)(s) = λ(g)^{−1}x(g^{−1}s),

(π_λ(a)x)(s) = π(φ_{s^{−1}}(a))(x(s)).

We can thus see that φ_λ is equivalent to a regular representation, and by the relative universality of the crossed product, we get a homomorphism ρ_λ from A ×rφ G to itself. Then, note that in the proof of Lemma 4.3.5 we only used that T was compact and that the Lebesgue measure on T was translation invariant. Thanks to Theorems 2.3.17 and 2.3.15, we may thus define

E(X) = ∫_{Ĝ} ρ_λ(X)dλ.

The proof that E is a conditional expectation is entirely parallel to Lemma 4.3.6.

With this, we can then generalize to Eg for g ∈ G that pick out a particular coordinate of an element.

Definition 4.4.9. (E_g, Second Version) If a ∈ A ×rφ G and g ∈ G, define E_g(a) to be E(au_{g^{−1}}), where u_{g^{−1}} is the canonical unitary associated with g^{−1}.

Since E picks out the 0th term of a finite sum a, E_g picks out the gth. In fact, for the reduced crossed product, we can show that if all of these are zero, then the element itself is zero.

Lemma 4.4.10. If a ∈ A ×rφ G is such that E_g(a) = 0 for every g ∈ G, then a = 0.

Proof. Let π be a representation of A as bounded operators on some Hilbert space H and let σ be the regular representation of A ×rφ G generated by π. If for some a ∈ A ×rφ G, E_g(a) = 0 for all g ∈ G, then s_g^∗ σ(a) s_k = 0 for arbitrary g, k ∈ G by Lemma 4.4.5. Since we can arbitrarily approximate any function f in L^2(G, H) by a finite sum of elements s_k(h), we get that s_g^∗ σ(a)f = 0. Assume that there were some f ∈ L^2(G, H) such that σ(a)f is nonzero. Then, as s_g^∗ σ(a)f is zero,

0 = ⟨s_g^∗ σ(a)f, z⟩ = ⟨σ(a)f, s_g(z)⟩,

for arbitrary z ∈ H. Therefore, for any fixed g, (σ(a)f)(g) = 0, and therefore σ(a)f is zero and σ(a) = 0. We can then vary π over all such representations to see that a = 0.

And with this, we can finally prove things about Iσ.

Theorem 4.4.11. Let (A, G, φ) be a C*-dynamical system such that A is abelian and G is discrete. If the action of G on A is topologically free, then for every ideal I such that I ∩A = {0},

I ⊆ Iσ.

Proof. First, assume that I is an ideal such that I ∩ A = {0}, and suppose toward a contradiction that I ⊄ Iσ. Then, by Lemma 4.4.10, there is some a ∈ I and some g ∈ G such that E_g(a) ≠ 0. Since I is an ideal, au_{g^{−1}} ∈ I. We also have that E(au_{g^{−1}}) = E_1(au_{g^{−1}}) = E_g(a) is nonzero. Therefore, there is an element a in I such that E(a) ≠ 0. Approximate a by b = ∑_{s∈F} b_s u_s, where F is a finite subset of G such that ||a − b|| < ||E(a)||/2. Let

X = ∩_{t∈F\{e}} {x ∈ Â | tx ≠ x}.

By the definition of topological freeness, X is dense in Â. For some π ∈ X, define π̂ from A + I to B(H_π) by π̂(a + i) = π(a). Note that this is well-defined, as if a + i = a′ + i′, then a − a′ = i′ − i ∈ A ∩ I = {0}, and thus π̂(a + i) = π̂(a′ + i′). Let ψ be a completely positive map that extends π̂ to A ×φ G. To see that such a map exists, note that we may extend π̂ to all of A ×φ G by setting ψ(x) = π(E(x)). Since A is unital, ||ψ|| = 1. Since π ∈ X, we get, via Lemma 4.4.4, that

ψ(b) = ∑_{s∈F} ψ(b_s u_s) = ψ(b_e) = ψ(E(b)).

Therefore, ||π(E(b))|| = ||ψ(E(b))|| = ||ψ(b)|| = ||ψ(b − a)|| ≤ ||b − a||. This then holds for all π ∈ X, and as X is dense, ||E(b)|| ≤ ||b − a||. Therefore, we get that

||E(a)|| ≤ ||E(a − b)|| + ||E(b)|| ≤ 2||a − b|| < ||E(a)||,

a contradiction.

And from this, we immediately get

Corollary 4.4.12. Let (A, G, φ) be a C*-dynamical system such that G is discrete. If the action is topologically free and minimal, then A ×rφ G is simple.

We can then move on to show that these statements are equivalent for the full crossed product when A is abelian.

Lemma 4.4.13. Let (A, G, φ) be a C*-dynamical system such that G is discrete. The action is topologically free if and only if for every ideal I in A ×φ G such that I ∩ A = {0}, I ⊆ Iσ.

Proof. The first implication follows from Theorem 4.4.11. To show the second, pick some x ∈ Â and let H(Gx) be the Hilbert space generated by the orthonormal basis δ_{tx} for t ∈ G. Because A = C_0(Â) and G acts on A, we may define tx. We define a covariant representation π_x of (A, G, φ) on H(Gx) by

π_x(f)δ_{tx} = f(tx)δ_{tx},

π_x(u_r)δ_{tx} = δ_{rtx},

for f ∈ A and r ∈ G. Let I = ∩{ker(π_x) | x ∈ Â}. First, note that I is an ideal and I ∩ A = {0}: for any nonzero f ∈ A, there is an x such that f(x) is nonzero, so π_x(f)δ_{ex} = f(x)δ_{ex}, where e is the identity of G, is nonzero, and therefore π_x(f) is nonzero. Pick some s ∈ G \ {e} and f ∈ A such that the set of x ∈ Â where f(x) is nonzero, denoted the support of f or supp(f), is contained in {x ∈ Â | sx = x}. For a given t ∈ G, either tx ∈ supp(f) or it is not. If it is, then as supp(f) ⊆ {x ∈ Â | sx = x}, stx = tx and

πx(f − fus)δtx = f(tx)δtx − f(stx)δstx = f(tx)δtx − f(tx)δtx = 0.

If it is not in the support, then by the same containment, stx ∉ supp(f) and once again πx(f − fus)δtx is zero. Thus, f − fus ∈ I. By assumption, f − fus ∈ Iσ and thus f = E(f − fus) = 0. Since every continuous f supported in {x ∈ Â | sx = x} must vanish, the interior of {x ∈ Â | sx = x} is empty, and thus we have topological freeness.
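As a sanity check on the representation built in the proof, one can verify covariance directly. This verification is ours, and it assumes the induced action on functions is the standard one, (φr(f))(y) = f(r⁻¹y):

```latex
% Conjugating \pi_x(f) by \pi_x(u_r) implements the action \phi_r on A = C_0(\hat{A}):
\pi_x(u_r)\,\pi_x(f)\,\pi_x(u_r)^{*}\,\delta_{tx}
  = \pi_x(u_r)\,\pi_x(f)\,\delta_{r^{-1}tx}
  = f(r^{-1}tx)\,\pi_x(u_r)\,\delta_{r^{-1}tx}
  = f(r^{-1}tx)\,\delta_{tx}
  = \pi_x\bigl(\phi_r(f)\bigr)\,\delta_{tx},
```

since (φr(f))(tx) = f(r⁻¹tx). As the δtx span H(Gx), the pair (πx, πx ∘ u) is indeed covariant.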

And from this, we get our major result for this section:

Theorem 4.4.14. If (A, G, φ) is a C*-dynamical system such that A is abelian and G is discrete, then A ×φ G is simple if and only if the action is minimal and topologically free. Furthermore, A ×φ G and A ×rφ G are canonically isomorphic.

Note the similarity to Theorems 4.3.4 and 4.3.9. In fact, combining these results gives us that, if (X, σ) is a classical dynamical system, then (X, σ) is minimal if and only if (C(X), Z, σ) is minimal and topologically free. In other words, if (X, σ) is minimal, then (C(X), Z, σ) is topologically free. We can also show this directly.

Theorem 4.4.15. If (X, σ) is a classical dynamical system that is minimal, then the action in the C*-dynamical system (C(X), Z, σ) is topologically free.

Proof. Note first that, since in our case A = C(X), for a given x ∈ X, the representations πx of A onto C given by πx(f) = f(x) are dense in Â. Then, note that for these representations, zπx ≠ πx for z ∈ Z means that σ^z(x) ≠ x. Assume that, for some point x ∈ X and some nonzero n ∈ Z, nπx = πx. Then σ^n(x) = x, and {σ^i(x) : 0 ≤ i < n} is a proper invariant closed set. Since this contradicts minimality, nπx ≠ πx for all x ∈ X and nonzero n ∈ Z. Since {πx} is dense in Â, we are done.

The ideas in this section largely come from [1], with some of the results regarding expectations coming from [17].

Bibliography

[1] R.J. Archbold and J.S. Spielberg. Topologically Free Actions and Ideals in C*-Dynamical Systems. Proceedings of the Edinburgh Mathematical Society, 1993.

[2] William B. Arveson. Subalgebras of C*-algebras. Acta Mathematica, 1969.

[3] Michael Brin and Garrett Stuck. Introduction to Dynamical Systems. Cambridge University Press, 2002.

[4] Michel Coornaert. Topological Dimension and Dynamical Systems. Springer-Verlag, 2015.

[5] Kenneth R. Davidson. C*-Algebras by Example. American Mathematical Society, 1996.

[6] Jacques Dixmier. C*-Algebras. North-Holland Publishing Company, 1977.

[7] E.G. Effros and F. Hahn. Locally compact transformation groups and C*-algebras. Mem. Amer. Math. Soc. No. 75, 1967.

[8] George A. Elliott. Some simple C*-algebras constructed as crossed products with discrete outer automorphism groups. Publ. Res. Inst. Math. Sci., 1980.

[9] Gilbert Helmberg. Introduction to Spectral Theory in Hilbert Space. Dover Publications, 1969.

[10] H.L. Royden and P.M. Fitzpatrick. Real Analysis. PHI Learning, fourth edition, 2011.

[11] Thomas W. Hungerford. Algebra: A Graduate Course. Springer-Verlag, 1989.

[12] Akitaka Kishimoto. Outer automorphisms and reduced crossed products of simple C*-algebras. Commun. Math. Phys., 1981.

[13] Sidney A. Morris. Pontryagin Duality and the Structure of Locally Compact Abelian Groups. Cambridge University Press, 1977.

[14] Gerald J. Murphy. C*-Algebras and Operator Theory. Academic Press, 2004.

[15] Nathanial P. Brown and Narutaka Ozawa. C*-Algebras and Finite-Dimensional Approximations. American Mathematical Society, 2008.

[16] Brent Nelson. Amenable Groups. https://math.berkeley.edu/~brent/Amenable_Groups.pdf.

[17] N. Christopher Phillips. An Introduction to Crossed Product C*-Algebras and Minimal Dynamics. http://pages.uoregon.edu/ncp/Courses/CRMCrPrdMinDyn/Notes_20170205.pdf.

[18] S.C. Power. Simplicity of C*-Algebras of Minimal Dynamical Systems. Journal of the London Mathematical Society, 1978.

[19] John C. Quigg and J. Spielberg. Regularity and hyporegularity in C*-dynamical systems. Houston Journal of Mathematics, 1992.

[20] Iain Raeburn. Graph Algebras. American Mathematical Society, 2005.

[21] Avery Robinson. The Banach-Tarski Paradox. http://math.uchicago.edu/~may/REU2014/REUPapers/Robinson.pdf.

[22] Walter Rudin. Real and Complex Analysis. McGraw-Hill, third edition, 1987.

[23] Walter Rudin. Functional Analysis. McGraw-Hill, second edition, 1991.

[24] J. von Neumann. On rings of operators III. Annals of Mathematics, 1940.

[25] F.J. Murray and J. von Neumann. On rings of operators. Annals of Mathematics, 1936.

[26] F.J. Murray and J. von Neumann. On rings of operators II. Annals of Mathematics, Second Series, 1937.

[27] G. Zeller-Meier. Produits croisés d'une C*-algèbre par un groupe d'automorphismes. J. Math. Pures Appl., (47), 1968.
