
Operator theory and exotic Banach spaces
(Banach spaces with small spaces of operators)

Bernard Maurey

This set of Notes is a largely expanded version of the mini-course “Banach spaces with small spaces of operators”, given at the Summer School in Spetses, August 1994. The lectures were based on a forthcoming paper [GM2] with the same title by Tim Gowers and the speaker. A similar series of lectures, “Operator theory and exotic Banach spaces”, was given at Paris 6 during the spring of ’95 as a part of a program of three mini-courses organized by the “Equipes d’Analyse” of the Universities of Marne la Vallée and Paris 6.

We present in sections 10, 11 and 12 several examples of Banach spaces which we call “exotic”. The first class is the class of Hereditarily Indecomposable Banach spaces (in short H.I. spaces), introduced in [GM1]: a Banach space X is called H.I. if no subspace of X is the topological direct sum of two infinite dimensional closed subspaces. One of the main properties of a H.I. Banach space X is the following: every bounded linear operator T from X to itself is of the form λI_X + S, where λ ∈ C, I_X is the identity operator on X and S is strictly singular. It is well known that this implies that the spectrum of T is countable, and it follows easily that a H.I. space is not isomorphic to any proper subspace. More generally, we present in section 11 a class of examples of Banach spaces having “few” operators. The general principle is the following: given a relatively small semi-group of operators on the space of scalar sequences (for example, the semi-group generated by the right and left shifts), we construct a Banach space such that every bounded linear operator on this space is (or is almost) a strictly singular perturbation of an element of the algebra generated by the given semi-group. We obtain in this way in section 12 a new prime space, a space isomorphic to its subspaces with finite even codimension but not isomorphic to its hyperplanes, and a space isomorphic to its cube but not to its square.

We have chosen to present a fairly detailed account of all the tools of general interest that are necessary to the analysis, although these appear already in many classical books (but probably not in the same book); we develop elementary Banach algebra theory in section 2, basic Fredholm theory in section 4, strictly singular operators and strictly singular perturbations of Fredholm operators in section 6, and an incursion into the K-theory for Banach algebras in section 9. Ultraproducts and Krivine’s theorem about the finite representability of ℓ_p are also presented in sections 7 and 8, with some emphasis on the operator approach to these questions. The actual construction of our class of examples appears in section 11, and the applications to some specific examples in the last section 12.

1. Notation

We denote by X, Y, Z infinite dimensional Banach spaces, real or complex, and by E, F finite dimensional normed spaces, usually subspaces of the preceding. Subspaces are closed vector subspaces. We write X = Y ⊕ Z when X is the topological direct sum of two closed subspaces Y and Z. The unit ball of X is denoted by B_X. We denote by K the field of scalars for the space in question (K = R or K = C). We denote by L(X,Y) the space of bounded linear operators between two (real or complex) Banach spaces X and Y. When Y = X, we simply write L(X). We denote

by S, T, U, V bounded linear operators. Usually, S will be a “small” operator; it could be small in norm, or compact, or of finite rank, or strictly singular... By I_X we denote the identity operator from X to X. An into isomorphism from X to Y is a bounded linear operator T from X to Y which is an isomorphism between X and the image TX; this is equivalent to saying that there exists c > 0 such that ‖Tx‖ ≥ c‖x‖ for every x ∈ X. Let K(X,Y) denote the closed vector subspace of L(X,Y) consisting of compact operators (we write K(X) if Y = X).

A normalized sequence in a Banach space X is a sequence (x_n)_{n≥1} of norm one vectors. The closed linear span of a sequence (x_n)_{n≥1} is denoted [x_n]_{n≥1}. A basic sequence is a Schauder basis for its closed linear span [x_n]_{n≥1}. This is equivalent to saying that there exists a constant C such that for all integers m ≤ n and all scalars (a_k)_{k=1}^n we have

‖ Σ_{k=1}^m a_k x_k ‖ ≤ C ‖ Σ_{k=1}^n a_k x_k ‖.

The smallest possible constant C is called the basis constant of (x_n)_{n≥1}.
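For instance (a standard illustration, not taken from the text above): in ℓ_p, 1 ≤ p < ∞, the unit vector basis (e_n)_{n≥1} is a basic sequence with basis constant 1, since for all m ≤ n

‖ Σ_{k=1}^m a_k e_k ‖ = ( Σ_{k=1}^m |a_k|^p )^{1/p} ≤ ( Σ_{k=1}^n |a_k|^p )^{1/p} = ‖ Σ_{k=1}^n a_k e_k ‖.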

An unconditional basic sequence is an (infinite) sequence (x_n)_{n≥1} in a Banach space for which there exists a constant C such that for every integer n ≥ 1, all scalars (a_k)_{k=1}^n and all signs (η_k)_{k=1}^n, η_k = ±1, we have

‖ Σ_{k=1}^n η_k a_k x_k ‖ ≤ C ‖ Σ_{k=1}^n a_k x_k ‖.

The smallest possible constant C is called the unconditional basis constant of (x_n)_{n≥1}. A question that remained open until ’91 motivated much of the research contained in these Notes: does every Banach space contain an unconditional basic sequence (in short: UBS)? The answer turned out to be negative and led to the introduction of H.I. spaces.

2. Basic Banach Algebra theory

(For this paragraph, see for example Bourbaki, Théories spectrales, or [DS], or many others.) A Banach algebra A is a Banach space (real or complex) which is also an algebra, where the product (a, b) → ab is norm continuous from A × A to A. This means that the product and the norm are related in the following way: there exists a constant C such that for all a, b ∈ A, we have ‖ab‖ ≤ C‖a‖ ‖b‖. It is then possible to define an equivalent norm on A satisfying the sharper inequality

∀a, b ∈ A,   ‖ab‖ ≤ ‖a‖ ‖b‖.

We shall call a norm satisfying this second property a Banach algebra norm. In order to define an equivalent Banach algebra norm from a norm satisfying the first property with a constant C, we may for example consider

|||a||| = sup{‖ab + λa‖ : ‖b‖ ≤ 1, |λ| ≤ 1}.
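A quick verification that this works (a short sketch using only the two inequalities above): taking b = 0 and λ = 1 gives |||a||| ≥ ‖a‖, while ‖ab + λa‖ ≤ C‖a‖ ‖b‖ + ‖a‖ ≤ (C + 1)‖a‖ shows that ||| · ||| is equivalent to ‖ · ‖. Moreover ‖ay‖ ≤ |||a||| ‖y‖ for every y ∈ A (take b = y/‖y‖ and λ = 0 in the supremum), hence for ‖b‖ ≤ 1 and |λ| ≤ 1,

‖(a_1a_2)b + λ(a_1a_2)‖ = ‖a_1(a_2b + λa_2)‖ ≤ |||a_1||| ‖a_2b + λa_2‖ ≤ |||a_1||| |||a_2|||,

so that |||a_1a_2||| ≤ |||a_1||| |||a_2|||.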

We say that A is unital if there exists an element e ∈ A such that ea = ae = a for every a ∈ A; we usually write 1_A for this element e. If A is unital, we get an equivalent Banach algebra norm on A using the formula

|||a||| = sup{‖ab‖ : b ∈ A, ‖b‖ ≤ 1},

and for this norm |||1_A||| = 1. A Banach algebra norm with this additional property will be called a unital Banach algebra norm.
A C*-algebra is a complex Banach algebra A with a Banach algebra norm and with an anti-linear involution a → a^* (i.e. a^{**} = a, (a + b)^* = a^* + b^*, (λa)^* = λ̄ a^*, (ab)^* = b^* a^* for every a, b ∈ A and λ ∈ C) and such that

∀a ∈ A,   ‖a^*a‖ = ‖a‖^2.
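One immediate consequence, recorded here for later use (a one-line verification): ‖a‖^2 = ‖a^*a‖ ≤ ‖a^*‖ ‖a‖, hence ‖a‖ ≤ ‖a^*‖; applying this to a^* and using a^{**} = a gives ‖a^*‖ ≤ ‖a‖, so the involution of a C*-algebra is isometric, ‖a^*‖ = ‖a‖.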

If A is unital, then 1_A^* = 1_A and ‖1_A‖ = 1. An element x ∈ A is Hermitian (or self-adjoint) if x^* = x. Every a ∈ A can be written as a = x + iy, where x and y are Hermitian (x = (a + a^*)/2, y = i(a^* − a)/2). At some point we will need the self-explanatory notion of a C*-norm on a not necessarily complete algebra with involution.
The above definition is not satisfactory in the real case. Indeed, if A is any real unital subalgebra of the complex algebra C(K) of continuous functions on a compact topological space K, and if we define on A the trivial involution f^* = f for every f ∈ A, then all properties of the preceding definition hold (because λ is now restricted to R), but A is not necessarily what we want to call a real C*-algebra. In order to obtain a reasonable definition for the real case, we need to add an axiom which is a consequence of the others in the complex case, but not in the real case, for example

∀a, b ∈ A,   ‖a^*a‖ ≤ ‖a^*a + b^*b‖.
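A minimal example showing that this extra axiom is really needed (an illustration, not from the original text): view C as a two dimensional real algebra with the trivial involution z^* = z. Then ‖z^*z‖ = |z^2| = |z|^2 = ‖z‖^2, so the axioms of the preceding definition hold, but with a = 1 and b = i we get ‖a^*a + b^*b‖ = ‖1 + i^2‖ = 0 < 1 = ‖a^*a‖, so the additional axiom fails; and indeed the “hermitian” element i has empty real spectrum, which is not what one expects from a real C*-algebra.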

Adding 1

When A has no unit it is possible to embed A in a larger unital Banach algebra, by considering on A^+ = A ⊕ K the product (a, λ)(b, µ) = (ab + λb + µa, λµ). Then 1_{A^+} = (0, 1) is the unit of A^+ and A is a closed two-sided ideal in A^+. When A is a C*-algebra, it is possible to define on A^+ a C*-norm.
An important example of a unital Banach algebra is L(X), where X is a (real or complex) Banach space. The operator norm is a unital Banach algebra norm on L(X). The subspace K(X) is a non unital closed subalgebra of L(X), actually a closed two-sided ideal of L(X). The algebra (K(X))^+ is isomorphic to the subalgebra of L(X) consisting of all operators of the form T = λI_X + K, K compact (recall that X is infinite dimensional). The Calkin algebra C(X) = L(X)/K(X) is another important example. It will play a role for the notion of essential spectrum later in this section. When H is a Hilbert space, L(H) is a C*-algebra. It is a fundamental example since every C*-algebra can be ∗-embedded in some L(H). The quotient of a C*-algebra by a closed two-sided ideal is a C*-algebra for the quotient norm (it is true but not obvious that for any such ideal I, we have x ∈ I ⇒ x^* ∈ I); in particular the Calkin algebra C(H) of a Hilbert space H is a C*-algebra.

Complexification

Most of the theory is done when the field of scalars is C. It is therefore important to be able to pass from the real case to the complex case. If X is a real Banach space, the complexified space is X_C = X ⊕ X with the rule

i(x, y) = (−y, x).

In this way we have (0, y) = i(y, 0) and, identifying (x, 0) ∈ X_C with x ∈ X, we see that every z ∈ X_C can be written as z = x + iy with x, y ∈ X. In order to define a complex norm on X_C it is useful to think about X_C as being C ⊗ X; it is clear that any tensor norm on C ⊗ X will give in particular a complex norm on X_C (of course we choose the modulus as norm on the 2 dimensional real space C). We can consider for example the injective tensor norm C ⊗_ε X,

‖x + iy‖ = sup{|x^*(x) + i x^*(y)| : x^* ∈ X^*, ‖x^*‖ ≤ 1}.

With this norm, C ⊗_ε X is isometric to (the real space) L(X^*, C). A (real) linear functional x^* on X is extended to a complex linear functional on X_C by setting simply x^*(x + iy) = x^*(x) + i x^*(y), and it is easy to see that every complex linear functional on X_C can be obtained as x^* + iy^*, where x^*, y^* ∈ X^*. Given T ∈ L(X,Y), one gets a complex linear operator T_C ∈ L(X_C, Y_C) by setting T_C = Id ⊗ T in the tensor product language. In the previous language we can write T_C(x + iy) = Tx + iTy for all x, y ∈ X. When A is a real Banach algebra, it can be checked that if we norm A_C by A_C = C ⊗_π A, we get a complex Banach algebra norm (if A had one); if A = L(X), X real, then A_C identifies with L(X_C) and any complex norm on X_C gives a Banach algebra norm on A_C; given V = T + iU in A_C, T, U ∈ A = L(X), we associate the operator on X_C = X ⊕ X given by the matrix

V = ( T  −U
      U   T ).
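As a quick consistency check (a one-line computation, not in the original): the matrix above sends (x, y), that is x + iy, to (Tx − Uy, Ux + Ty), which is exactly (T + iU)(x + iy) = (Tx − Uy) + i(Ux + Ty) once the product is expanded using the rule i(x, y) = (−y, x).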

Invertible elements

Let A be a unital Banach algebra over K. We say that a ∈ A is invertible (in A) if there exists x ∈ A such that ax = xa = 1_A.
Lemma 2.1. Let A be a unital Banach algebra (with a Banach algebra norm). If b ∈ A and ‖b‖ < 1 then 1 − b is invertible in A.
Proof. (1 − b)^{-1} = Σ_{n=0}^∞ b^n.
Corollary 2.1. Let A be a unital Banach algebra. The set of invertible elements is open in A. Furthermore, if K = C, when a ∈ A is invertible, the function f(z) = (a − zb)^{-1} is analytic in a neighborhood of 0 in C for every b ∈ A.
Proof. Write a − b = a(1 − a^{-1}b) and use Lemma 2.1. If ‖b‖ < ‖a^{-1}‖^{-1}, then ‖a^{-1}b‖ < 1, and we obtain the following formula for the inverse

(a − b)^{-1} = u + ubu + ububu + ···,

where u = a^{-1}. Applying to zb instead of b clearly gives an analytic function of z in a neighborhood of 0 in C,

(a − zb)^{-1} = u + z ubu + z^2 ububu + ···.
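For later use, note the norm estimates that come with these series (a routine computation with a Banach algebra norm): if ‖b‖ < 1, then ‖(1 − b)^{-1}‖ ≤ Σ_{n≥0} ‖b‖^n = 1/(1 − ‖b‖) and ‖(1 − b)^{-1} − 1_A‖ ≤ ‖b‖/(1 − ‖b‖); similarly, in the situation of Corollary 2.1, ‖(a − b)^{-1}‖ ≤ ‖u‖/(1 − ‖u‖ ‖b‖) when ‖b‖ < ‖a^{-1}‖^{-1}, where u = a^{-1}.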

Remark 2.1. The same proof works when a is replaced by T ∈ L(X,Y) invertible, and b is replaced by a small operator S ∈ L(X,Y); the above formulas hold with u = T^{-1} ∈ L(Y,X), provided ‖S‖ < ‖T^{-1}‖^{-1}.

Spectrum of an element, resolvent set

Let A be unital over C and let a ∈ A. The resolvent set ρ(a) is the set of all λ ∈ C such that λ1_A − a is invertible in A. The spectrum σ(a) is the complementary set C \ ρ(a). This set ρ(a) is open by Corollary 2.1 and is clearly a neighborhood of infinity, hence σ(a) is a closed and bounded subset of C.
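In particular (a two-line remark for convenience), when the norm is a Banach algebra norm, σ(a) is contained in the closed disc of radius ‖a‖: if |λ| > ‖a‖, then λ1_A − a = λ(1_A − a/λ) with ‖a/λ‖ < 1, and this element is invertible by Lemma 2.1.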

Exercise 2.1. Let a, b ∈ A. Show that σ(ab) \ {0} = σ(ba) \ {0} (hint: if z(λ1_A − ab) = 1_A, find u ∈ A such that u(λ1_A − ba) = λ1_A). Why did we exclude 0?

Real spectrum

If A is a real unital Banach algebra and if a ∈ A we may define a real spectrum by

σ^R(a) = {λ ∈ R : (a − λ1_A) not invertible in A}.

The problem is that this spectrum may be empty (contrary to what happens in the complex case: we shall recall that the spectrum is non empty in the complex case). If we extend the scalars and consider the same a ∈ A as an element of A_C, we can work with the complex spectrum σ_{A_C}(a).
Exercise. Let A be a real unital Banach algebra and let a ∈ A.

1. Show that σ_{A_C}(a) = {λ̄ : λ ∈ σ_{A_C}(a)} (the complex spectrum is invariant under complex conjugation).
2. Show that σ^R(a) = R ∩ σ_{A_C}(a).
In what follows we shall denote by σ^K(a) the real spectrum when A is real and the complex spectrum if A is complex.

Changing the ambient algebra

Assume that A is a closed subalgebra of a unital Banach algebra B, with the induced norm and containing 1_B. If x ∈ A is invertible in A, it is obviously also invertible in B, but it is possible for x ∈ A to be invertible in B but not in A. It is necessary in this case to distinguish the spectrum of x relative to B or relative to A; we denote the spectrum by σ^K_A(x) or σ^K_B(x), and similarly for the resolvent sets. If x ∈ A, it is clear that σ^K_A(x) ⊃ σ^K_B(x) (equivalently, ρ^K_A(x) ⊂ ρ^K_B(x)). We also have ∂σ^K_A(x) ⊂ ∂σ^K_B(x) from the next Lemma.
Lemma 2.2. Let B be a unital Banach algebra, and let A be a closed subalgebra with 1_B ∈ A. For every x ∈ A, the resolvent set ρ^K_A(x) is open and closed in ρ^K_B(x). Hence, for every connected component ω of ρ^K_B(x), either ω ⊂ ρ^K_A(x) or ω ∩ ρ^K_A(x) = ∅. It follows that ∂ρ^K_A(x) = ∂σ^K_A(x) ⊂ ∂σ^K_B(x) = ∂ρ^K_B(x). If ρ^K_B(x) is connected, then σ^K_A(x) = σ^K_B(x).

Proof. Let x be fixed in A and write simply ρ_A, ρ_B for ρ^K_A(x) and ρ^K_B(x). We know that ρ_A is open. Let (λ_n) ⊂ ρ_A, λ_n → λ ∈ ρ_B. There exists b ∈ B such that (x − λ)b = b(x − λ) = 1_B. On the other hand, for every n there exists a_n ∈ A such that (x − λ_n)a_n = 1_B. Multiplying by b we get

b = (bx − λ_n b)a_n = (1_B + (λ − λ_n)b)a_n;

when n → ∞ we see that (1_B + (λ − λ_n)b)^{-1} tends to 1_B, thus a_n → b, hence b ∈ A and λ ∈ ρ_A.
Let λ ∈ ∂ρ_A. Then λ ∉ ρ_A since ρ_A is open in K, and therefore λ ∉ ρ_B since ρ_A is closed in ρ_B; this implies that λ ∈ ∂ρ_B.

Remark. When σ_A(x) ≠ σ_B(x), we see that the interior int(σ_A(x)) must be non empty.
Examples 2.1.
1. It will be shown later (section 6, Proposition 6.1) that the spectrum (in B = L(X)) of a strictly singular operator S on a complex Banach space X is a countable compact subset of C. This implies that the resolvent set ρ_B(S) is connected, hence the inverse of λI_X − S, when it exists, belongs to the closed subalgebra A of L(X) generated by I_X and the operator S.
2. Suppose that A is a complex unital C*-subalgebra of L(H) and let u ∈ A be hermitian. Since the spectrum of u in B = L(H) is contained in R and bounded, it is clear that the resolvent set ρ_B(u) is connected, hence σ_A(u) = σ_B(u). This remark implies easily that σ_A(x) = σ_B(x) for every x ∈ A (it is enough to show that when x ∈ A is invertible in B its inverse belongs to A; if x is invertible in B, then the hermitian operators x^*x and xx^* are invertible in B, hence in A, therefore x is invertible in A: there exist y, z ∈ A such that y(x^*x) = (xx^*)z = 1_A, and yx^* = x^*z is the inverse of x in A). More generally, if an isomorphism from H ⊕ H onto H is given by a matrix (a, b) with a, b ∈ A, the inverse operator from H to H ⊕ H is given by a matrix with two entries in A.
3. We can always embed a Banach algebra A in the space of bounded linear operators on some Banach space. Suppose that A is a unital Banach algebra with a unital Banach algebra norm. We simply embed A into L(A) by mapping each a ∈ A to the operator M_a of left multiplication by a, defined by M_a(x) = ax for every x ∈ A. It is clear that this gives an isometric embedding from A into L(A). We show now that the spectrum of a ∈ A is the same relative to A or to the larger algebra B = L(A). We only need to show that a is invertible in A iff M_a is invertible in B. One direction is obvious. In the other direction, assume that M_a is invertible in B. Then M_a is onto and there exists u ∈ A such that 1_A = M_a(u) = au. We see that M_a M_u = 1_B. Since M_a is invertible in B, this implies M_u M_a = 1_B and ua = au = 1_A shows that a is invertible in A.

4. Suppose that A is a real unital Banach algebra; if a ∈ A is invertible in AC, then a is invertible in A. 5. Consider the embedding of L(X) into L(X∗) given by the adjoint (this is not exactly an algebra morphism since (UT )∗ = T ∗U ∗); the spectrum of T ∗ in L(X∗) is the same as the spectrum of T ∈ L(X) (of course this is totally obvious when X is reflexive).

Let a ∈ A. The spectral radius of a is defined by

r(a) = lim_n ‖a^n‖^{1/n}.

Exercise. Show that the above limit exists.
Example 2.2. If u is hermitian in a C*-algebra, we get ‖u^{2^n}‖ = ‖u‖^{2^n} for every integer n ≥ 1, thus r(u) = ‖u‖. Notice that the spectral radius is not changed if the norm on A is replaced by an equivalent norm. Also, r(a) ≤ ‖a‖ if the norm is a Banach algebra norm.
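A small example showing that the inequality r(a) ≤ ‖a‖ may be strict (an illustration, not from the text): in L(C^2) with the Euclidean norm, let a be the nilpotent operator with matrix ( 0 1 ; 0 0 ). Then ‖a‖ = 1 while a^n = 0 for n ≥ 2, so r(a) = 0 and σ(a) = {0}.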

Proposition 2.1. (K = C and 1_A ≠ 0) The spectrum σ(a) is contained in the closed disc ∆(0, r(a)) in C centered at 0 and of radius r(a), and it intersects the circle of radius r(a). In other words, r(a) = max{|λ| : λ ∈ σ(a)}. In particular, σ(a) is non empty.
Proof. The function g(z) = (1_A − za)^{-1} is clearly defined and analytic when |z| < r(a)^{-1}, hence σ(a) ⊂ ∆(0, r(a)). Since 1_A ≠ 0 we may assume ‖1_A‖ = 1 for some equivalent unital Banach algebra norm. If a is invertible, it is then easy to see that r(a)r(a^{-1}) ≥ 1, hence r(a) > 0. If r(a) = 0, we know therefore that a is not invertible, thus 0 ∈ σ(a) and finally σ(a) = {0} in this case.
Assume now r(a) > 0. If the inverse (1_A − za)^{-1} exists for every z on the circle |z| = r(a)^{-1}, we can show that the function g is analytic in a neighborhood of a closed disc of radius R > r(a)^{-1}. It follows then from Cauchy’s inequalities that for some constant M, we have

∀n ≥ 0,   ‖a^n‖ ≤ M / R^n,

yielding r(a) ≤ 1/R and contradicting the choice of R.

Resolvent equation. Spectral projections

Let A be a unital Banach algebra over C and let a ∈ A. The resolvent operator of a is defined for z ∈ ρ(a) by R(z) = (z1_A − a)^{-1}. Note that R(z) and R(z′) commute and commute with a. We have

(R(z′) − R(z)) / (z′ − z) = −R(z′)R(z).
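This identity, the resolvent equation, follows from a one-line computation: since (z1_A − a)R(z) = 1_A,

R(z′) − R(z) = R(z′)[(z1_A − a) − (z′1_A − a)]R(z) = (z − z′)R(z′)R(z),

and dividing by z′ − z gives the formula above.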

Let λ ∈ C be isolated in σ(a). Let ∆_r be the closed disc ∆(λ, r) with radius r centered at λ; let r_0 > 0 be such that λ is the unique point of σ(a) contained in ∆_{r_0}; let γ_r be the boundary of ∆_r, oriented in the counterclockwise direction. Let f be holomorphic in a neighborhood V of λ; for r > 0 such that ∆_r ⊂ V and r ≤ r_0 we know that γ_r ⊂ ρ(a) and we set

Φ(f) = (1 / 2iπ) ∫_{γ_r} f(z)R(z) dz ∈ A.

Since f is holomorphic, the result does not depend upon the particular value r ∈ (0, r_0] such that ∆_r ⊂ V. Also Φ(f) commutes with a and with every R(µ). The mapping f → Φ(f) is clearly linear with respect to f. What is more interesting is that

Φ(fg) = Φ(f) ◦ Φ(g).

For the proof let 0 < r < s ≤ r_0 be such that f and g are holomorphic in a neighborhood of ∆_s and write

Φ(f) ◦ Φ(g) = (1 / 2iπ) ∫_{z∈γ_r} (1 / 2iπ) ∫_{z′∈γ_s} f(z)g(z′)R(z)R(z′) dz dz′;

use then the resolvent equation and Cauchy’s formula. It follows that for every r with 0 < r ≤ r_0,

p = Φ(1) = (1 / 2iπ) ∫_{γ_r} (z1_A − a)^{-1} dz

is an idempotent (p^2 = p), commuting with a and with every R(µ); furthermore for every element b = Φ(g) ∈ A we have pb = bp = b = pbp since pΦ(g) = Φ(1) ◦ Φ(g) = Φ(g).

To every idempotent p in A we may associate the Banach algebra A_p = pAp of all elements of the form pap, a ∈ A. As a Banach algebra, the norm and the product in A_p are those of A, but the unit of A_p is p. For example, let p be a bounded projection defined on some Banach space X, and let Y = p(X) be the range of p. We see that L(X)_p identifies with L(Y) (with an equivalent norm).
Let us come back to our isolated λ ∈ σ(a) and let again p = Φ(1). The above remark shows that Φ(f) ∈ A_p for every f holomorphic in a neighborhood of λ. Also notice that

Φ(z) − ap = (1 / 2iπ) ∫_{γ_r} (z1_A − a)R(z) dz = 0,

so that ap = Φ(z). Suppose that f(λ) ≠ 0; then g = 1/f is holomorphic in a neighborhood of λ, and Φ(f) ◦ Φ(1/f) = Φ(1) = p. It follows that Φ(f) is invertible in A_p when f(λ) ≠ 0. This applies in particular to f(z) = z − µ when µ ≠ λ to show that Φ(z − µ) = ap − µp is invertible in A_p. This shows that the spectrum of pa = pap in A_p reduces to {λ}. Letting q = 1_A − p, the spectrum of qa in A_q does not contain λ. This follows from

(1 / 2iπ) ∫_{γ_r} (λ1_A − a) (R(z) / (z − λ)) dz = q.

More generally, when the spectrum σ(a) can be decomposed into two subsets σ_1 and σ_2 open and closed in σ(a), we can construct similarly a spectral projection p by replacing the above circle γ_r by a curve γ around σ_1, such that σ_2 is exterior to γ. We still get pa = ap and σ(pa) = σ_1 (in A_p) and similarly in A_q we have that σ(qa) = σ_2.
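A completely explicit finite dimensional example may help fix ideas (not taken from the text): let A = L(C^2) and let a have two distinct eigenvalues λ and µ, with eigenvectors e and f. For z ∉ {λ, µ}, R(z) acts as (z − λ)^{-1} on Ce and as (z − µ)^{-1} on Cf, so integrating over a small circle γ around λ picks up only the residue at λ: p = Φ(1) satisfies pe = e and pf = 0, i.e. p is the spectral projection onto the eigenspace Ce along Cf.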

Exercise. Suppose that ‖a^2 − a‖ < ε < 1/4; show that there exists an idempotent p such that ap = pa and ‖p − a‖ < f(ε, ‖a‖) (prove that σ(a) is contained in the union of the interiors of two circles γ_1 and γ_0 with radius 1/2 and centered at 1 and 0, for example by considering (a − t)(a − (1 − t)) for |t(1 − t)| ≥ 1/4; give an upper bound for ‖(z − a)^{-1}‖ when z belongs to γ_0 or γ_1; let Φ_0 and Φ_1 be the operators as above associated with the two circles, and consider p = Φ_1(1); use b = Φ_1(1/z) for proving that (p − ap) = (1_A − a)abp is small, and similarly for (1_A − p)a).

Essential spectrum

Let X be an infinite dimensional Banach space. The ideal K(X) is then proper and the Calkin algebra C(X) = L(X)/K(X) is not {0}. For every T ∈ L(X), the essential spectrum σ̂^K(T) is the spectrum of the image T̂ of T in C(X). We also consider the corresponding resolvent set ρ̂^K(T). A scalar λ ∈ K belongs to this essential resolvent set iff T − λI_X is invertible modulo compact operators. We shall recall in section 6 that this happens iff T − λI_X is a Fredholm operator on X.

Commutative Banach algebras

A (complex) Banach algebra A which is a (skew-)field is isomorphic to C. Indeed, let a ∈ A and λ ∈ σ(a). Since a − λ1_A is not invertible, we must have a − λ1_A = 0. Hence every element of A is of the form λ1_A for some λ ∈ C.
Let A be a unital commutative Banach algebra over C. A maximal ideal I is closed and is a hyperplane: first, I is closed because the set of invertible elements is open, hence the closure of I is still a proper ideal, equal to I since I is maximal; second, A/I is a Banach field, therefore isomorphic to C, and I is thus a hyperplane. The linear functional χ such that ker χ = I, normalized by the condition χ(1_A) = 1, is called a character. A character χ is a non zero bounded linear functional on A which is also multiplicative. Indeed, if a ∈ A and χ(a) ≠ 0, let g(x) = χ(ax)/χ(a). Then g vanishes on I, so g is proportional to χ, but χ(1_A) = g(1_A) = 1, thus χ = g and χ(ax) = χ(a)χ(x) for every x ∈ A.
The set of characters on A is called the spectrum of A, and denoted by Sp(A); suppose that A is equipped with a Banach algebra norm; then Sp(A) is a subset of the unit sphere of the dual space A^*; to see this, observe that the sequence (a^n)_{n≥0} is bounded when ‖a‖ ≤ 1, so that (χ(a^n)) = (χ(a)^n) is bounded, therefore |χ(a)| ≤ 1, and ‖χ‖ = 1 since χ(1_A) = 1.
An element a ∈ A is invertible in A iff χ(a) ≠ 0 for every character χ on A. Indeed, it is clear that χ(a) ≠ 0 when a is invertible; if a is not invertible, aA is a proper ideal in A, thus contained in a maximal ideal I. If χ is the character such that I = ker χ then χ(a) = 0. It follows that for every a ∈ A,

σ(a) = {χ(a) : χ ∈ Sp(A)};

since χ(a − χ(a)1_A) = 0, we see that χ(a) ∈ σ(a) for every χ ∈ Sp(A). Conversely, if a − λ1_A is not invertible, there exists a character χ such that χ(a − λ1_A) = 0.
Furthermore Sp(A) is w^*-closed in the unit sphere; this allows us to map A to the space C(Sp(A)) of continuous functions on the compact space Sp(A); for every a ∈ A, let j(a)

be the continuous function on Sp(A) defined by j(a)(χ) = χ(a); this map j need not be injective in general.
When A is a unital (complex) commutative C*-algebra, this embedding is isometric; we first observe that when u is hermitian, a = e^{iu} is unitary, i.e. a^*a = aa^* = 1_A; then a^* = a^{-1} and ‖a‖ = ‖a^{-1}‖ = 1; this implies that the sequence (a^n)_{n∈Z} is bounded, thus (χ(a^n))_{n∈Z} = (χ(a)^n)_{n∈Z} is bounded and therefore 1 = |χ(a)| = |e^{iχ(u)}|, hence χ(u) is real for every hermitian u; this implies that for every a ∈ A, χ(a^*) is the complex conjugate of χ(a).
If u is hermitian, we have r(u) = ‖u‖ by Example 2.2, hence there exists χ ∈ Sp(A) such that |χ(u)| = ‖u‖. This implies that for every a ∈ A, there exists χ such that

|χ(a)|^2 = χ(a^*a) = ‖a^*a‖ = ‖a‖^2,

showing that the mapping j : A → C(Sp(A)) is isometric in this case. The image of A in C(Sp(A)) is now a subalgebra closed under complex conjugation and obviously separating points of Sp(A), therefore our embedding is onto by Stone-Weierstrass’ theorem.
Suppose that ϕ is a (unital) ∗-homomorphism between two unital commutative C*-algebras; then ‖ϕ‖ ≤ 1. By the preceding paragraph we may think that A = C(K) and B = C(L), where K and L are two compact topological spaces. Since ϕ is a ∗-homomorphism, it sends every function f ≥ 0 on K to ϕ(f) ≥ 0 (introduce the hermitian element g = √f). If ‖f‖ ≤ 1, then f^*f ≤ 1, so that ϕ(f^*f) ≤ 1 and ‖ϕ(f)‖ ≤ 1; furthermore if ϕ is injective then ϕ is isometric; this is because the adjoint map ϕ^* : (C(L))^* → (C(K))^* sends Sp(B) ≃ L into Sp(A) ≃ K, and must be onto by the preceding result (otherwise we may find a continuous function f ≠ 0 supported on K \ ϕ^*(Sp(B)), and then ϕ(f) = 0, contradicting the injectivity). Observe that we don’t need B to be commutative in the above argument, because the range ϕ(A) is commutative, so that its closure in B is a commutative unital C*-algebra.
Proposition 2.2. Let B be a not necessarily commutative unital C*-algebra. For every unital C*-algebra C, every unital ∗-homomorphism ϕ : B → C satisfies ‖ϕ‖ ≤ 1. If ϕ is injective, then ϕ is isometric.
Proof. Given an hermitian element u ∈ B, we may consider the unital subalgebra A of B generated by u. This is a commutative C*-algebra. We have seen that the spectrum of u in A is real, hence σ_A(u) = σ_B(u) by Lemma 2.2. Suppose that ϕ is a ∗-homomorphism from B to some C*-algebra; restricting ϕ to A, it follows from the preceding remark that ‖ϕ(u)‖ ≤ ‖u‖, and this is true for every hermitian u ∈ B. For a general b ∈ B, we write

‖ϕ(b)‖^2 = ‖(ϕ(b))^*ϕ(b)‖ = ‖ϕ(b^*b)‖ ≤ ‖b^*b‖ = ‖b‖^2.

Suppose further that ϕ is injective; then ϕ is isometric; indeed, the preceding remarks show that ‖ϕ(u)‖ = ‖u‖ when u is hermitian; for a general b ∈ B, we write

‖ϕ(b)‖^2 = ‖(ϕ(b))^*ϕ(b)‖ = ‖ϕ(b^*b)‖ = ‖b^*b‖ = ‖b‖^2.

Wiener’s algebra

The Wiener algebra W is the algebra of continuous (complex) functions on T with absolutely summable Fourier coefficients; the product is the pointwise product and the norm is the ℓ_1-norm of Fourier coefficients. This algebra W is clearly isometric to ℓ_1(Z) with its usual norm and the convolution as product.
A function f in W is invertible in W iff f does not vanish on T. This amounts to showing that the characters on W reduce to evaluation at points of T. Let χ be a character on W and let λ = χ(e^{iθ}). Since the family (e^{inθ})_{n∈Z} is bounded in W, it follows that the sequence (λ^n = χ(e^{inθ}))_{n∈Z} is bounded, hence |λ| = 1. For every function f(θ) = Σ_{n∈Z} a_n e^{inθ} in W,

χ(f) = Σ_{n∈Z} a_n λ^n = f(λ).

If f does not vanish on T, we see that χ(f) ≠ 0 for every character χ, hence f is invertible in W.

C*-algebra summary

(see first pages of Pedersen’s book [Pd]) Let A be a unital (complex) C*-algebra. For every hermitian element b ∈ A, we may consider the unital subalgebra B generated by b; it is a commutative C*-algebra. We know that j_B : B → C(Sp(B)) is an onto isomorphism. For example, when b = b^* and σ_A(b) ⊂ [0, ∞), then σ_B(b) = σ_A(b) by Lemma 2.2; the function j_B(b) is a non-negative continuous function on Sp(B), therefore there exists c ∈ B which is the “square root” of b, i.e. c = c^*, b = c^2 and σ(c) ⊂ [0, ∞). Consider

C = {a ∈ A : a = a^* and σ(a) ⊂ [0, ∞)}.

If b = b^* and a = b^2, then a ∈ C; this follows from the commutative theory, but also simply from the fact that σ(b) ⊂ R, which implies that for every t > 0, b^2 + t1_A = (b + i√t 1_A)(b − i√t 1_A) is invertible.
If a = a^* and ‖t1_A − a‖ ≤ t for some t ≥ 0, then a ∈ C. This is clear when t = 0; when t > 0, we have ‖1_A − a/t‖ ≤ 1 and we may define b = √(1_A − (1_A − a/t)) = b^* from its Taylor series. Then a = (√t b)^2 ∈ C. Conversely, if a ∈ C and t = ‖a‖, then ‖t1_A − a‖ ≤ t, so that we got a characterization of C; this last fact is because we know for every hermitian element a that r(a) = ‖a‖ = t, and

‖t1_A − a‖ = r(t1_A − a) = max{|t − λ| : λ ∈ σ(a) ⊂ [0, t]} ≤ t.

It follows immediately from this characterization that a_1, a_2 ∈ C implies a_1 + a_2 ∈ C, so that C is a closed convex cone in A, with C ∩ (−C) = {0}. The next essential step is to prove that a^*a ∈ C for every a ∈ A. This is easy once we know that we may embed A as a ∗-subalgebra of some L(H), but it is not obvious from the abstract definition, and it is a main ingredient for the proof of the representation of an abstract C*-algebra as a subalgebra of some L(H).

Observe first that for every b = r + is ∈ A, r, s hermitian, we have that b^*b + bb^* = 2(r^2 + s^2) ∈ C. Let a ∈ A. The commutative theory implies that we can write a^*a = u − v, with u, v ∈ C and uv = vu = 0. Let b = a√v; then b^*b = −v^2 ∈ −C, and bb^* ∈ C since b^*b + bb^* ∈ C; by Exercise 2.1, this implies that σ(b^*b) = {0}, hence r(b^*b) = ‖b^*b‖ = ‖b‖^2 = 0, thus v^2 = 0, and finally v = 0, a^*a = u ∈ C.
For every u ≠ 0, we know that −u^*u ∉ C. We may therefore find by Hahn-Banach a linear functional ξ on A such that −ξ(u^*u) ≤ inf ξ(C); this yields that ξ ≥ 0 on C. We define a scalar product on A by ⟨a, b⟩ = ξ(a^*b).

To every a ∈ A we associate the operator T_a(b) = ab. Let a ∈ A with ‖a‖ ≤ 1. We have

⟨T_a(b), T_a(b)⟩ = ξ(b^*a^*ab);

Since ‖a‖ ≤ 1, we know that 1_A − a^*a = c^2 ∈ C and b^*b − b^*a^*ab = b^*c^2b ∈ C, therefore ξ(b^*a^*ab) ≤ ξ(b^*b) = ⟨b, b⟩. This shows that

∀a ∈ A,   ‖T_a‖_{L(H)} ≤ ‖a‖,

where H is the Hilbert space obtained from the above scalar product. It is easy to check that (T_a)^* = T_{a^*}, so that we have a ∗-homomorphism from A to L(H). We may improve the argument and obtain ‖T_u‖ = ‖u‖ for the given u and then find an isometric embedding of A into some L(H) using a direct sum of such embeddings (if ‖u‖ = 1, observe that the closed convex cone u^*u + C is disjoint from the open unit ball, and use the separation theorem as before to obtain ξ such that ξ(u^*u) = ‖ξ‖ = 1).
Suppose now that I is a closed two-sided ideal in A, and let a ∈ I. Let |a| = √(a^*a); then |a| ∈ I; indeed, there exists a sequence (P_n) of polynomials with real coefficients such that P_n(0) = 0 and such that (P_n(t)) converges to √t uniformly on any compact interval [0, T], so that for every x ∈ C ∩ I, we have √x ∈ I. For every ε > 0, ε^2 1_A + a^*a is invertible, thus there exists u_ε ∈ A such that

a = √(ε^2 1_A + a^*a) u_ε.

We obtain from the commutative theory

‖u_ε‖^2 = ‖(ε^2 1_A + a^*a)^{-1/2} a^*a (ε^2 1_A + a^*a)^{-1/2}‖ ≤ 1.

When ε → 0 we obtain that a = lim_{ε→0} |a|u_ε, hence

a^* = lim_ε u_ε^* |a| ∈ I.
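Note that this computation proves the claim made earlier in this section: a closed two-sided ideal I of a C*-algebra is automatically self-adjoint, since a ∈ I implies |a| ∈ I, hence u_ε^*|a| ∈ I and, I being closed, a^* ∈ I.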

Finally, let us say some words about the real case. Let A be a real Banach algebra with involution and with a Banach algebra norm such that

∀a, b ∈ A,   ‖a‖^2 ≤ ‖a^*a + b^*b‖.

This implies as in the complex case that every hermitian element x ∈ A has a real spectrum in A_C (we don’t claim so far that A_C can be equipped with a C*-norm). Indeed, a = cos tx and b = sin tx are hermitian for every real t, and a^2 + b^2 = 1_A, therefore by our hypothesis ‖cos tx‖ ≤ 1 and ‖sin tx‖ ≤ 1 for every real t, which implies that e^{itx} is bounded in A_C, from which it follows that the spectrum of x is real. With this information, it is possible to reproduce the arguments from the beginning of this paragraph and to ∗-embed A in L(H), for some real Hilbert space H.
Exercise. Complete the details. Show first that if B is the (real) subalgebra generated by an hermitian element a ∈ A, and if x, y ∈ B,

‖x + iy‖ = ‖x^2 + y^2‖_A^{1/2}

is a C*-norm on B_C. If v ∈ A is anti-hermitian, i.e. v^* = −v, observe that e^{v^*} e^v = 1_A and ‖e^v‖ = 1. It follows that σ(v) ⊂ iR, and v^2 − t^2 1_A is invertible for t real, t ≠ 0, thus σ(−v^2) ⊂ [0, +∞).

3. Some operator theory: finitely singular operators

See for example [LT1], section 2.c.
Lemma 3.1. Let X and Y be real or complex Banach spaces.
1. Trivial principle: if T is an isomorphism from X into Y, a small norm perturbation T + S of T is still an into isomorphism.
2. Fundamental principle: if T is an isomorphism from X onto Y, a small norm perturbation T + S is still an isomorphism from X onto Y (this is Remark 2.1).
Lemma 3.2. Let T ∈ L(X,Y) be an into isomorphism and k ≥ 0 an integer.
1. Suppose that codim TX ≥ k. There exists c > 0 such that codim(T + S)X ≥ k whenever ‖S‖ < c.
2. Suppose that codim TX = k. There exists d > 0 such that codim(T + S)X = k whenever ‖S‖ < d.
Proof. If codim TX ≥ k we can find a subspace F ⊂ Y such that dim F = k and TX ∩ F = {0}. Let π_F denote the quotient map Y → Y/F. Then π_F ◦ T is an isomorphism from X into Y/F. If c > 0 is small enough and ‖S‖ < c, π_F ◦ (T + S) is also an into isomorphism by Lemma 3.1. This implies that F ∩ (T + S)X = {0}, hence codim(T + S)X ≥ k.
In the second case the proof is similar, but we can now select a subspace F such that Y = TX ⊕ F, dim F = k. Then π_F ◦ T is an onto isomorphism, hence there is d > 0 (we may choose d < c), such that π_F ◦ (T + S) is an onto isomorphism when ‖S‖ < d. By the above argument we already know that F ∩ (T + S)X = {0}; furthermore, for every y ∈ Y, there exists x ∈ X such that π_F(y) = π_F((T + S)x), so y − (T + S)x ∈ F, showing that Y = F + (T + S)X. Finally Y = F ⊕ (T + S)X and codim(T + S)X = k.
Proposition 3.1. Let T ∈ L(X,Y) be an into isomorphism. Then T + S is an into isomorphism and codim(T + S)X = codim TX (finite or +∞) for every S in a neighborhood of 0 in L(X,Y).
Proof. Let c > 0 be such that T + S is an into isomorphism whenever

S ∈ B_c = {U ∈ L(X,Y) : ‖U‖ < c}.

Observe that D_k = {U ∈ B_c : codim(T + U)X = k} is open and closed in B_c for every integer k ≥ 0. Indeed, the set {W ∈ B_c : codim(T + W)X ≥ k + 1} is open by Lemma 3.2, part 1, while each D_j, j = 0, 1, . . . , k is open by part 2 of the same Lemma. Since B_c is connected, each D_k is empty or equal to B_c. The result follows.

Boundary of spectrum lemma

Lemma 3.3. If U ∈ L(X) is an into isomorphism but is not invertible in L(X), then 0 belongs to the interior int(σ^K(U)) (relative to K) of σ^K(U).
Proof. Since U is an into isomorphism but not invertible, U is not onto and codim UX ≥ 1. This remains true under small perturbation by Lemma 3.2, part 1: there exists ε_0 > 0 such that U − εI_X is not onto, therefore not invertible, for every ε ∈ K such that |ε| < ε_0. This shows that B(0, ε_0) ∩ K ⊂ σ^K(U).
Corollary 3.1. Let T ∈ L(X). If λ ∈ ∂σ^K(T) (of course this boundary is relative to K), there exists a normalized sequence (x_n) in X such that (T − λI_X)x_n → 0 (this sequence is possibly constant).

Proof. What we want to prove is equivalent to saying that U = T − λI_X is not an into isomorphism. By our assumption, we have that U is not invertible, but 0 is not interior to σ^K(U), hence U is not an into isomorphism by Lemma 3.3.
Exercise. If X is real, T ∈ L(X) and if λ = r(cos θ + i sin θ), r sin θ ≠ 0, belongs to the boundary of σ(T_C), then there exist two sequences (x_n) and (y_n) in X such that ‖x_n‖ + ‖y_n‖ = 1, Tx_n − r(cos θ x_n − sin θ y_n) → 0 and Ty_n − r(sin θ x_n + cos θ y_n) → 0.
Remark 3.1. Let A be a unital Banach algebra. If λ belongs to the boundary of σ^K(a), there exists a normalized sequence (b_n) in A such that ab_n − λb_n tends to 0 in A. This follows from section 2, Example 2.1,3 (we could also get a normalized sequence (c_n) such that c_n a − λc_n goes to 0).

Definition 3.1. Let T ∈ L(X,Y ); we say that T is finitely singular if there exists c > 0 and a (closed) finite codimensional subspace X0 ⊂ X such that

‖Tx‖ ≥ c‖x‖

for every x ∈ X_0. In other words the restriction of T to X_0 is an into isomorphism. Let c(T) denote the supremum of all c > 0 for which the above property holds, and set c(T) = 0 if T is not finitely singular.
Our terminology is not classical and perhaps a little strange, since we call a non singular operator, for instance an onto isomorphism, “finitely singular”; it would be better to say “at most finitely singular”, but this is definitely too long.
Remark 3.2. Suppose that T ∈ L(X,Y) is finitely singular. It is clear that a small norm perturbation of T is still finitely singular (precisely, T + S is finitely singular if ‖S‖ < c(T); actually it is enough that ‖S_{|X_1}‖ < c(T) for some finite codimensional subspace X_1 of X). It is also clear that the restriction of T to any infinite dimensional subspace Z of X is finitely singular.

If T ∈ L(X,Y) and if UT is finitely singular for some U ∈ L(Y,Z), then T is finitely singular.
Exercise 3.1. Suppose that T ∈ L(X,Y) is finitely singular. Show that
1. ker T is finite dimensional.
2. For every (closed) subspace Z of X, T(Z) is a closed subspace of Y.
3. If (x_n) is a bounded sequence in X and if (Tx_n) converges in Y, then there exists

a norm-converging subsequence (x_{n_k}); in other words, the restriction of T to any bounded subset of X is a proper map.
4. One can choose X_0 in Definition 3.1 in such a way that X = ker T ⊕ X_0. Hence, if ker T = {0}, then T is an isomorphism from X into Y.
5. Let T ∈ L(X,Y). Show that T is finitely singular if and only if TX is closed and dim ker T < ∞.
6. If T is finitely singular from X to Y and K = R, the complexified operator T_C is finitely singular from X_C to Y_C.
Lemma 3.4. Let T ∈ L(X,Y). Then T^* is finitely singular from Y^* to X^* if and only if TX is closed and finite codimensional in Y.
Proof. Suppose that T^* is finitely singular. Since ker T^* = (TX)^⊥ is finite dimensional, we know that TX is finite codimensional. It is enough to show that for some c > 0 and for every y ∈ TX, there exists x ∈ X with ‖y − Tx‖ ≤ ‖y‖/2 and ‖x‖ ≤ ‖y‖/c (the end of the proof is by iteration: one constructs a convergent series x_0 = Σ x_n in X such that y = Tx_0). If the preceding claim is not true, we can find for every integer n ≥ 1 a vector y_n ∈ TX such that ‖y_n‖ = 1 and

y_n ∉ nT(B_X) + (1/2)B_Y.

By Hahn-Banach there exists y_n^* ∈ Y^* such that y_n^*(y_n) = 1 and ‖y_n^*‖ ≤ 2, ‖T^*(y_n^*)‖ ≤ 1/n. Since T^* is finitely singular, we know from Exercise 3.1,3 that there exists a subsequence (y_{n_k}^*) converging to some y^*; it follows then that T^*y^* = 0, thus y^* ∈ ker T^*, which implies that y^*(y_n) = 0 for every n, contradicting y_{n_k}^*(y_{n_k}) = 1 and y_{n_k}^* → y^*.
Conversely, assume TX closed and finite codimensional in Y. By the open mapping theorem, T induces an isomorphism from X/ker T onto TX. Let Y_0 = TX and let c > 0 be such that for every y_0 ∈ Y_0, there exists x ∈ X such that y_0 = Tx and ‖y_0‖ ≥ c‖x‖. Let Y = Y_0 ⊕ F and let Q be the projection from Y_0 ⊕ F onto Y_0. Given y^* ∈ F^⊥, ‖y^*‖ = 1, there exists y = y_0 + f ∈ Y_0 ⊕ F such that ‖y_0 + f‖ ≤ 1 and y^*(y_0 + f) = y^*(y_0) > 1/2. Then, since ‖y_0‖ = ‖Qy‖ ≤ ‖Q‖ there exists x ∈ X such that y_0 = Tx and ‖x‖ ≤ ‖Q‖/c, hence

(‖Q‖/c) ‖T^*y^*‖ ≥ T^*(y^*)(x) = y^*(y_0) > 1/2,

showing that ‖T^*y^*‖ ≥ c‖y^*‖/(2‖Q‖) for every y^* in the finite codimensional subspace F^⊥ of Y^*, hence T^* is finitely singular.
We say that T is infinitely singular if it is not finitely singular.
Proposition 3.2. Let T ∈ L(X,Y). Then T is infinitely singular if and only if for every ε > 0, there exists an infinite dimensional subspace Z ⊂ X such that ‖T_{|Z}‖ < ε.

Furthermore, we may assume that this subspace Z has a Schauder basis (z_n)_{n≥1} and that the norm of the restriction of T to [z_n, z_{n+1}, . . .] tends to 0 when n → ∞; in particular we may assume that T_{|Z} is compact.
Proof. Suppose that T is infinitely singular. We construct a normalized basic sequence (z_n) in X such that ‖Tz_n‖ < ε′ 2^{−n} for every n ≥ 1, where 0 < ε′ < ε/4. If z_1, . . . , z_n are already selected, let A_n be a finite subset of B_{X^*} which is 1/2-norming for the linear span [z_1, . . . , z_n], that is

∀x ∈ [z_1, . . . , z_n],   ‖x‖ ≤ 2 max_{x^* ∈ A_n} |x^*(x)|.

We may assume that A_n ⊃ A_{n−1}. Let X_0 = ∩_{x^* ∈ A_n} ker x^*; since X_0 is finite codimensional and T infinitely singular, we may find z_{n+1} ∈ X_0 such that ‖z_{n+1}‖ = 1 and ‖Tz_{n+1}‖ < ε′ 2^{−n−1}. We let Z be the closed linear span of the sequence (z_n)_{n=1}^∞; it is easy to show that this sequence is a Schauder basis with constant 2 for Z; indeed, if m < n, since z_{m+1}, . . . , z_n were chosen in the kernel of all x^* ∈ A_m, we obtain for all scalars (a_k)_{k=1}^n

‖ Σ_{k=1}^m a_k z_k ‖ ≤ 2 max_{x^* ∈ A_m} | x^*( Σ_{k=1}^m a_k z_k ) | = 2 max_{x^* ∈ A_m} | x^*( Σ_{k=1}^n a_k z_k ) | ≤ 2 ‖ Σ_{k=1}^n a_k z_k ‖.

For z = Σ_{k≥n} a_k z_k this implies that |a_k| ≤ 4‖z‖ for every k, hence

‖Tz‖ ≤ 4‖z‖ Σ_{k≥n} ε′ 2^{−k} ≤ 8 · 2^{−n} ε′ ‖z‖ < ε‖z‖.

We obtain that T_{|Z} is compact and that ‖T_{|Z}‖ < ε. The other direction is clear.
Exercise. T is finitely singular iff the image of every closed subspace Z ⊂ X is closed.
Remark. What we have shown is that when c(T) = 0, there exists a subspace Z ⊂ X such that ‖T_{|Z}‖ is small; it does not seem possible to obtain in general a quantitative result of the form ‖T_{|Z}‖ ≤ M c(T) + ε for some universal constant M when c(T) > 0. This is however true with M = 1 in a Hilbert space or in ℓ_p, 1 ≤ p < ∞ (or in c_0).
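A concrete example of an infinitely singular operator may be useful here (a standard illustration, not from the text): on ℓ_2, let D be the diagonal operator defined by De_n = (1/n)e_n, where (e_n) is the unit vector basis. The restriction of D to [e_n, e_{n+1}, . . .] has norm 1/n, and every finite codimensional subspace meets [e_n, e_{n+1}, . . .] non-trivially, so c(D) = 0 and D is infinitely singular, although D is injective with dense range.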

4. Basic Fredholm theory

Definition 4.1. We say that T ∈ L(X,Y) is a Fredholm operator from X to Y if there exists a (closed) finite codimensional subspace X_0 of X such that T_{|X_0} is an isomorphism from X_0 onto some finite codimensional subspace Y_0 = TX_0 of Y. In particular T is finitely singular. We know by Exercise 3.1, part 4, that one can choose X_0 such that X = ker T ⊕ X_0. In this case Y_0 = TX_0 = TX.
Exercise 4.1.
1. Let T be a Fredholm operator from X to Y, and let X_0, Y_0 be as above. Prove that

codim_X X_0 − codim_Y Y_0 = dim ker T − codim_Y TX.

This integer is called the index of T, and denoted ind(T).

Hint. Write TX = Y_0 ⊕ F, X = X_0 ⊕ T^{-1}F, T^{-1}F = ker T ⊕ E and count dimensions.
2. Show that T ∈ L(X,Y) is Fredholm if and only if ker T is finite dimensional, and TX closed and finite codimensional in Y (this is the usual definition).
3. If T is Fredholm on a real Banach space X, then T_C is Fredholm on X_C, with the same index (counting of course complex dimensions for T_C).
4. Direct sums: if T_1, T_2 are Fredholm from X_1 to Y_1 and from X_2 to Y_2, then T_1 ⊕ T_2 is Fredholm from X_1 ⊕ X_2 to Y_1 ⊕ Y_2. Check that ind(T_1 ⊕ T_2) = ind(T_1) + ind(T_2).
5. When U_1T and TU_2 are Fredholm, then T is Fredholm.
6. If Q : X → Y is a quotient map with finite dimensional kernel E, then Q is Fredholm and ind(Q) = dim E. If T : Z → X is such that QT is Fredholm, show that T is Fredholm.

Perturbation by a small norm operator or a finite rank operator

Proposition 4.1. Let T ∈ L(X,Y) be Fredholm. There exists d > 0 such that ‖S‖ < d implies that T + S is Fredholm and ind(T + S) = ind(T).

Proof. Let X_0, Y_0 be as in Definition 4.1. The result follows immediately from Lemma

3.2, part 2, applied to the operator T_0 = T_{|X_0}, considered as an operator from X_0 to Y. This operator is an into isomorphism, hence for d > 0 small and ‖S‖ < d we know that

(T + S)_{|X_0} is an into isomorphism and that codim(T + S)X_0 = codim TX_0.
Lemma 4.1. If T ∈ L(X,Y) is Fredholm and if S has finite rank, then T + S is Fredholm and ind(T + S) = ind(T).

Proof. This is because we may choose X_0 contained in the finite codimensional subspace ker S of X. Then T + S and T coincide on X_0.
Exercise 4.2.
1. If T ∈ L(X,Y) is Fredholm, there exists U ∈ L(Y,X) such that UT − I_X and TU − I_Y have finite rank.
2. If U_1T − I_X and TU_2 − I_Y have finite rank, then T is Fredholm. Hence T is Fredholm iff it is invertible modulo finite rank operators.
Proposition 4.2. Composition formula. If T : X → Y and U : Y → Z are Fredholm, then UT is Fredholm from X to Z and

ind(UT ) = ind(T ) + ind(U).

Proof. We can find X_0, Y_0, Y_1, Z_1 finite codimensional such that T defines an isomorphism from X_0 onto Y_0 and U an isomorphism from Y_1 onto Z_1. We simply replace these finite codimensional subspaces by smaller finite codimensional subspaces given by Y_2 = Y_0 ∩ Y_1, X_2 = X_0 ∩ T^{-1}Y_2 and Z_2 = UY_2; now T_{|X_2} is an isomorphism from X_2 onto Y_2 and U_{|Y_2} an isomorphism from Y_2 onto Z_2 and we compute

ind(T) + ind(U) = (codim X_2 − codim Y_2) + (codim Y_2 − codim Z_2) = ind(UT).
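The shift operators give a standard example on which these notions can be tested (not from the text above): on X = ℓ_p, let R be the right shift, Re_n = e_{n+1}, and L the left shift, Le_1 = 0, Le_n = e_{n−1} for n ≥ 2. Then R is an into isomorphism with codim RX = 1, so ind(R) = −1, while L is onto with one dimensional kernel, so ind(L) = +1; the relation LR = I_X agrees with ind(L) + ind(R) = ind(I_X) = 0, and the composition formula gives ind(R^k) = −k for every k ≥ 1.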

Duality

Lemma 4.2. An operator T ∈ L(X,Y) is Fredholm if and only if T and T^* are finitely singular.
Proof. If T is Fredholm, we know that T is finitely singular, and that TX is closed and finite codimensional, so that T^* is finitely singular by Lemma 3.4. Conversely, when T and T^* are finitely singular, we have dim ker T < ∞ by Exercise 3.1,1 and TX closed and finite codimensional by Lemma 3.4.
Lemma 4.3. Let T ∈ L(X,Y). Then T is Fredholm iff T^* is Fredholm, and in this case we have ind(T^*) = − ind(T).
Proof. If T is Fredholm, T^* is finitely singular by the preceding Lemma, hence dim ker T^* < ∞ and T^*Y^* is closed. Furthermore, since ker T = (T^*Y^*)^⊥, it follows that T^*Y^* is finite codimensional and T^* is Fredholm. Conversely, if T^* is Fredholm, then T^* is finitely singular, hence TX is closed and finite codimensional by Lemma 3.4, and dim ker T < ∞ for the same reason as before.

Let X_0 and Y_0 be finite codimensional subspaces of X and Y such that T_0 = T_{|X_0} is an isomorphism from X_0 onto Y_0. Let i : X_0 → X and j : Y_0 → Y be the natural inclusion maps. Now j^* : Y^* → Y_0^* and i^* : X^* → X_0^* are the natural projections. They are clearly Fredholm (quotient maps with finite dimensional kernel; ker i^* = X_0^⊥, ind(i^*) = codim X_0). By the composition formula

ind(T_0^*) + ind(j^*) = ind(i^*) + ind(T^*),

so that

ind(T^*) = ind(j^*) − ind(i^*) = codim Y_0 − codim X_0 = − ind(T).

When T is Fredholm, it follows that T^{**} is Fredholm and ind(T^{**}) = ind(T). We can deduce it from the duality statement but it is actually clear directly.
Lemma 4.4. The set of T ∈ L(X,Y) which are finitely singular and not Fredholm is open in L(X,Y), as well as the set of T ∈ L(X,Y) such that T^* is finitely singular and not Fredholm from Y^* to X^*.
Proof. Let T ∈ L(X,Y) be finitely singular and not Fredholm. We can find a finite codimensional subspace X_0 of X such that T is an isomorphism from X_0 into Y. If T is not Fredholm, it implies that codim TX_0 = ∞. By Proposition 3.1 this remains true in a neighborhood of T, hence codim(T + S)X_0 = +∞ and codim(T + S)X = +∞ also for T + S in this neighborhood. The second part is similar.

Semi-Fredholm operators

If T is finitely singular from X to Y , there exists a finite codimensional subspace X0 of X such that T is an isomorphism from X0 into Y . Either codim TX0 < ∞ and T is Fredholm, or codim TX0 = ∞ (and thus codim TX = +∞) and we define the generalized index by ind(T ) = −∞. More generally, an operator T : X → Y is called semi-Fredholm if TX is closed and if the kernel or the cokernel is finite dimensional; the generalized index is defined as before,

with the possible values +∞ and −∞. An operator T is semi-Fredholm iff T or T^* is finitely singular by Lemma 3.4 and Exercise 3.1, 5. Observe that a Fredholm operator is precisely a semi-Fredholm operator with finite index.

Lemma 4.5. The set of semi-Fredholm operators is open in L(X,Y ), and the generalized index is locally constant. Proof. By Lemma 4.4 and Proposition 4.1.

Lemma 4.6. If Tt is a continuous path in L(X,Y ), t ∈ [0, 1], such that Tt is semi-Fredholm for every t ∈ [0, 1], then ind(T1) = ind(T0). In particular, under the same assumptions, if T0 is Fredholm then T1 is Fredholm with same index. Proof. We know that the generalized index is locally constant.

Corollary 4.1. If T is finitely singular and ‖S‖ < c(T), then T + S is finitely singular and the generalized index satisfies ind(T + S) = ind(T). In particular, if T is Fredholm and ‖S‖ < c(T), then T + S is Fredholm and the index satisfies ind(T + S) = ind(T).
Proof. When ‖S‖ < c(T), we see that ‖tS‖ < c(T) for every t ∈ [0, 1], therefore T + tS is a continuous path consisting of finitely singular operators by Remark 3.2. The result follows from Lemma 4.6.

Corollary 4.2. If T is Fredholm and K compact from X to Y , then T + K is Fredholm and ind(T + K) = ind(T ). Proof. It is enough to show that T + K is finitely singular for every K (because then T + tK will be finitely singular for every t ∈ [0, 1]); this will be a consequence of Lemma 6.1, but we give here a different proof: for every ε > 0, there exists

a finite codimensional subspace X_0 such that ‖K_{|X_0}‖ < ε; indeed, there exists a finite set x_1^*, . . . , x_n^* in X^* such that

K^*(B_{Y^*}) ⊂ ∪_{j=1}^n (x_j^* + εB_{X^*}).

This implies that ‖Kx‖ ≤ ε‖x‖ when x ∈ ∩_{j=1}^n ker x_j^*. If we choose ε < c(T), we see that T + K is finitely singular (Remark 3.2).

Corollary 4.3. Let T ∈ L(X,Y ). Then T is Fredholm if and only if T is invertible modulo compact operators, i.e. iff there exists U ∈ L(Y,X) such that TU − IY and UT − IX are compact.

Proof. This condition is necessary by Exercise 4.2,1. Conversely, if TU − IY is compact, we know that TU is Fredholm by Corollary 4.2. In the same way, UT is Fredholm and we know then that T is Fredholm by Exercise 4.2,2.

5. More on operators

See [LT1], section 2.c; Kato, [Kt1], [Kt2].
We prove now another version of Lemma 3.2. Suppose that T ∈ L(X,Y) has closed range. Then T induces an isomorphism between X/ker T and TX, and there exists a constant c > 0 such that for every y ∈ TX, we can find x ∈ X such that y = Tx and ‖y‖ ≥ c‖x‖.

Lemma 5.1. Let T_1, T_2 ∈ L(X,Y). Suppose that for every y ∈ T_1X, there exists x ∈ X such that y = T_1x and ‖y‖ ≥ c‖x‖; if ‖T_1 − T_2‖ < c and if T_2X is closed, then codim T_2X ≤ codim T_1X, finite or infinite.
Remark. We can recover from the above Lemma the fact proved in Proposition 3.1 that codim T_1X remains constant in a neighborhood of an into isomorphism T_1. Indeed, if T_1 is an into isomorphism from X into Y, with ‖T_1x‖ ≥ c‖x‖ for every x ∈ X, and if we assume ‖T_2 − T_1‖ < c/2, we have ‖T_2x‖ ≥ (c/2)‖x‖ for every x ∈ X; this shows that T_2 is an into isomorphism, therefore T_2X is closed; we obtain codim T_2X ≤ codim T_1X by the above Lemma; since ‖T_2 − T_1‖ < c/2, we can exchange the roles of T_1 and T_2 and conclude that codim T_2X = codim T_1X, finite or infinite.

Proof of the Lemma. If codim T_2X > codim T_1X, there exists by the next sublemma, applied to Z = T_1X, Y = T_2X and ε = 1 − ‖T_1 − T_2‖/c > 0, a vector z ∈ T_1X, ‖z‖ = 1, such that dist(z, T_2X) > ‖T_1 − T_2‖/c. Then there exists x ∈ X such that z = T_1x and ‖x‖ ≤ 1/c; now ‖z − T_2x‖ ≤ ‖T_1 − T_2‖ ‖x‖ gives a contradiction.
Sublemma. Let Y and Z be two finite codimensional subspaces of X. If codim Y > codim Z, there exists for every ε > 0 a vector z ∈ Z such that ‖z‖ = 1 and dist(z, Y) > 1 − ε. This is also valid if Y is closed, codim Y = ∞ and codim Z < ∞.
Proof. Let ρ be a continuous lifting (not necessarily linear!) from the unit sphere S(X/Y) of X/Y to the ball of radius (1 − ε)^{-1} in X. We may assume that ρ(−x) = −ρ(x). Then π_Z ◦ ρ is an odd mapping from the sphere of X/Y to X/Z and dim X/Y > dim X/Z. By Borsuk’s antipodal mapping theorem, there exists x ∈ S(X/Y) such that π_Z(ρ(x)) = 0, i.e. ρ(x) ∈ Z. Take z_0 = ρ(x) and z = z_0/‖z_0‖. When codim Y = ∞, it is enough to select F ⊂ X/Y such that dim F > dim X/Z and to apply the same reasoning.

Corollary 5.1. Let T_1, T_2 ∈ L(X,Y). If ‖T_1x‖ ≥ c‖x‖ for every x ∈ X and if ‖T_1 − T_2‖ < c, then codim T_2X = codim T_1X, finite or infinite.

Proof. It follows from the hypothesis that every T on the segment [T_1, T_2] is an into isomorphism, and we know that codim TX is locally constant by Proposition 3.1 or the Remark following the above Lemma.
Lemma 5.2. Let T ∈ L(X); if T or T^* is finitely singular (in other words if T is semi-Fredholm) and if 0 ∈ ∂σ^K(T), then T is Fredholm with index 0.
Proof. Since 0 ∈ ∂σ^K(T), we can find invertible operators, in particular Fredholm operators with index 0, arbitrarily close to T, hence T is Fredholm with index 0 by the continuity of the index of semi-Fredholm operators (Lemma 4.5).
Remark. We obtain a slightly different proof for the “Boundary of spectrum lemma”: if T ∈ L(X) and λ ∈ ∂σ^K(T), then there exists a (possibly constant) normalized sequence

(x_n) in X such that (T − λI_X)x_n → 0. Indeed, let U = T − λI_X. If U is infinitely singular, we know the result by Proposition 3.2. If U is finitely singular and 0 ∈ ∂σ^K(U), then U is Fredholm with index 0 by Lemma 5.2. Since 0 ∈ σ^K(U), it follows that U is not invertible, thus ker U ≠ {0}.
Lemma 5.3. If T ∈ L(X) is finitely singular, then Y = ∩_{n=0}^∞ T^nX is closed and TY = Y. Furthermore T_{|Y} : Y → Y is Fredholm and the constant value of ind(T_{|Y} − λI_Y) for λ in a neighborhood of 0 (in K) is the (constant and finite) dimension of the kernel of T − λI_X for small λ ≠ 0. Precisely, there exists ε > 0 such that

∀λ ∈ K,   0 < |λ| < ε ⇒ dim ker(T − λI_X) = ind(T_{|Y}).

Proof. We prove that T^nX is closed by induction using Exercise 3.1, part 2, hence Y is closed. It is clear that TY ⊂ Y. We also know that N_1 = ker T is finite dimensional. It follows that there exists an integer k such that, setting Y_j = T^jX, we have N_1 ∩ Y_k = N_1 ∩ Y. Let y ∈ Y; for every integer j, there exists a vector z_j ∈ Y_j such that y = Tz_j. For j > k we get z_j − z_k ∈ N_1 ∩ Y_k = N_1 ∩ Y. In particular z_j − z_k ∈ Y ⊂ Y_j and thus z_k ∈ Y_j. Since this is true for every j > k, we obtain z_k ∈ Y, and finally y = Tz_k ∈ TY. We have that ker T_{|Y} ⊂ ker T is finite dimensional; since TY = Y, it follows that T_{|Y} is Fredholm from Y to Y.
Suppose that λ ≠ 0 and (T − λI_X)x = 0. We have x = T^n x / λ^n for every integer n ≥ 1, yielding x ∈ Y. Hence the kernel of T − λI_X coincides with the kernel of T_{|Y} − λI_Y. Furthermore, if we choose Y_0 finite codimensional in Y such that Y = ker T_{|Y} ⊕ Y_0, we see that T_{|Y_0} is an isomorphism from Y_0 onto Y. This remains true for T − λI_X for small λ, say |λ| < ε, hence T_{|Y} − λI_Y remains onto and for 0 < |λ| < ε we obtain

dim ker(T − λI_X) = dim ker(T_{|Y} − λI_Y) = ind(T_{|Y} − λI_Y) = ind(T_{|Y}).

Remark. Suppose that ker T = {0} and ker(T − λ_nI_X) ≠ {0} for a sequence (λ_n) tending to 0. Then T is infinitely singular. Indeed, there exists a normalized sequence (x_n) such that Tx_n = λ_nx_n tends to 0; if T is finitely singular, there exists by Exercise 3.1,3 a subsequence (x_{n_k}) converging to some x; then x ≠ 0 and Tx = 0, a contradiction. More generally, we see that when T is finitely singular, dim ker(T − λI_X) ≤ dim ker T when λ is small (we may apply Lemma 5.1 to T^* and T^* − λI_{X^*}).
Proposition 5.1. Let T ∈ L(X); if T or T^* is finitely singular and if 0 ∈ ∂σ^K(T), there exists an integer k ≥ 1 such that ker T^k = ker T^{k+1} and T^kX = T^{k+1}X. The space X is then the direct sum of two invariant subspaces for T, Y = T^kX and the finite dimensional subspace N = N_k = ker T^k ≠ {0}. The operator T_{|Y} is an isomorphism from Y onto Y. Furthermore 0 is isolated in σ^K(T).
Proof. If T or T^* is finitely singular and 0 ∈ ∂σ^K(T) then T is Fredholm with index 0 by Lemma 5.2; also ker T ≠ {0} since 0 ∈ σ^K(T). Furthermore by Lemma 5.3, there exists ε_0 > 0 such that dim ker(T − λI_X) is constant for all λ ∈ K such that 0 < |λ| < ε_0. Since 0 ∈ ∂σ^K(T), this constant dimension of ker(T − λI_X) must be 0; we may assume that ε_0

is so small that T − λI_X is still Fredholm with index 0 when 0 < |λ| < ε_0. These two facts imply that (T − λI_X)X = X for such λ, hence T − λI_X is invertible when 0 < |λ| < ε_0. Therefore 0 is isolated in the spectrum of T.
We also know by Lemma 5.3 that ind(T_{|Y}) = dim ker(T − λI_X) = 0 when 0 < |λ| < ε_0 and since TY = Y, it yields that T_{|Y} is an isomorphism and so ker T ∩ Y = {0}; there exists therefore an integer k such that {0} = ker T ∩ Y = ker T ∩ T^kX, and this yields that ker T^k = ker T^{k+1} (if T^{k+1}x = 0, then T^kx ∈ ker T ∩ T^kX, hence T^kx = 0); it follows that T^kX = T^{k+1}X = Y, because T^k and T^{k+1} are Fredholm with index 0 by Proposition 4.2. We obtain a decomposition of the space into two invariant subspaces, Y and N_k = ker T^k (this one is finite dimensional). Indeed, let V ∈ L(Y) be the inverse of T_{|Y} and set Q = V^kT^k, considered as a map from X to X. Then Q is a projection from X onto Y and ker Q = ker T^k = N.
Remark. In the complex case, the restriction of T to N decomposes into a finite number of Jordan cells with 0 on the diagonal. The spectral projection defined in section 2 has Y as kernel and N for range.

Exercise 5.1. If K ∈ L(X) is compact, we know that T = I_X − K is finitely singular by the proof of Corollary 4.2. Find a direct proof that N_k = ker T^k stabilizes. If 0 ∈ σ(T), show that 0 is isolated in σ(T).

Hint. If N_k does not stabilize, let x_k ∈ N_k be such that 1 = ‖x_k‖ = dist(x_k, N_{k−1}). If y_k ∈ N_{k−1} is arbitrary, observe that ‖(x_l − y_l) − (x_k − y_k)‖ ≥ 1 when l ≠ k. Apply to y_k = Tx_k to obtain a contradiction to the compactness of K = I_X − T.

Boundary of essential spectrum lemma

Recall that the Calkin algebra of a real or complex infinite dimensional Banach space X was defined by C(X) = L(X)/K(X). Let T ∈ L(X). It follows from Corollary 4.3 that λ ∈ ρ̂^K(T) iff T − λI_X is Fredholm. Let

σ_∞^K(T) = {λ ∈ K : T − λI_X infinitely singular},
σ_∞^{*K}(T) = {λ ∈ K : T^* − λI_{X^*} infinitely singular}.

We know that T − λI_X is semi-Fredholm if and only if λ ∉ σ_∞^K(T) ∩ σ_∞^{*K}(T); the two sets σ_∞^K(T) and σ_∞^{*K}(T) are compact, and non-empty in the complex case (see Lemma 5.4 below); we know that T − λI_X is Fredholm iff λ ∉ σ_b^K(T), and by Lemma 3.4, T − λI_X is Fredholm iff T − λI_X and (T − λI_X)^* are finitely singular, therefore

σ_b^K(T) = σ_∞^K(T) ∪ σ_∞^{*K}(T).

Example. Let R be the right shift on ℓ_2(N) (complex case). The spectrum of R is the closed unit disc; σ_∞(R) = σ_∞^*(R) = T.

Exercise. Suppose that K = C and let T be an isometry from X into X. Show that σ_∞(T) ∩ σ_∞^*(T) ⊂ T and that they are equal if T is not onto.

Lemma 5.4. Let λ ∈ ∂σ_b^K(T). Then T − λI_X and (T − λI_X)^* are infinitely singular,

∂σ_b^K(T) ⊂ σ_∞^K(T) ∩ σ_∞^{*K}(T).

Proof. We know that T − λI_X is not Fredholm since λ ∈ σ_b^K(T), but T − λI_X is arbitrarily close to Fredholm operators since λ ∈ ∂σ_b^K(T). It follows then from Lemma 4.5 that T − λI_X is not semi-Fredholm.

Corollary 5.2. If K = C and if dim X = ∞, then for every T ∈ L(X) there exists λ ∈ C such that T − λI_X is infinitely singular (and also (T − λI_X)^*).

Proof. Since dim X = ∞ the algebra L(X)/K(X) is not {0}, hence the spectrum of the image T̂ in C(X) is not empty, and we simply have to pick any boundary point λ of this spectrum.

Exercise. If X is a real Banach space and T ∈ L(X), then either there exists λ ∈ R such that T − λI_X is infinitely singular, or there exist p, q ∈ R with p² − 4q < 0 such that T² + pT + qI_X is infinitely singular.

Lemma 5.5. Let T ∈ L(X) and let K be a compact subset of K such that σ_∞^K(T) ∩ σ_∞^{*K}(T) ⊂ K. If Ω is a connected component of int σ^K(T) \ K, the boundary ∂Ω is contained in K.

Proof. Suppose that λ ∈ ∂Ω but λ ∉ K; then λ ∈ ∂σ^K(T). Let U = T − λI_X; since λ ∉ K, U or U^* is finitely singular, and 0 ∈ ∂σ^K(U). By Proposition 5.1, 0 is isolated in σ^K(U); this is a contradiction, since λ ∈ ∂Ω is a limit of points of the open set Ω ⊂ σ^K(T), so that 0 is not isolated in σ^K(U).

Corollary 5.3. Every λ ∈ σ^K(T) belonging to the unbounded connected component of ρ_b^K(T) is an isolated eigenvalue of T with finite multiplicity (by this we mean that X splits as X = E ⊕ Y, where E and Y are invariant under T, E is finite dimensional and σ(T|E) = {λ}, and T|Y is an isomorphism from Y onto Y).

Proof. Let K be the complement in K of the unbounded connected component ω of ρ_b^K(T); then K is compact and contains σ_∞^K(T) ∩ σ_∞^{*K}(T). We want to prove that int σ^K(T) \ K is empty. The open subset int σ^K(T) \ K of ω is bounded, hence different from ω; if it is non empty it is not closed in ω; there must therefore exist some µ ∈ ∂σ^K(T) ∩ ω, contradicting Lemma 5.5. Assume that λ ∈ σ^K(T) ∩ ω. By the preceding remark, λ does not belong to the interior of σ^K(T); also U = T − λI_X is finitely singular, and 0 belongs to the boundary of the spectrum of U. The result follows by Proposition 5.1.

Remark. We know by Proposition 5.1 that every non isolated point in ∂σ(T) belongs to σ_∞(T).

Corollary 5.4. (K = C) If σ_∞(T) ∩ σ_∞^*(T) cannot contain the boundary of any bounded non empty open subset of C, then every λ ∈ σ(T) \ (σ_∞(T) ∩ σ_∞^*(T)) is isolated and is an eigenvalue with finite multiplicity.

Proof. By Lemma 5.5 the spectrum has an empty interior. If λ ∈ σ(T) but λ ∉ σ_∞(T) ∩ σ_∞^*(T), the operator U = T − λI_X is semi-Fredholm and 0 belongs to the boundary of σ(U). The result follows by Proposition 5.1.
This Corollary applies for example when σ_b(T) is countable; also, if A is hermitian in L(H) (complex case) and if K is a compact operator, then the essential spectrum of A + K is real, hence does not contain the boundary of any bounded non empty open subset of

the complex plane; every non real λ in the spectrum of A + K is an isolated eigenvalue with finite multiplicity. If σ_b(T) does not contain the boundary of any bounded non empty open subset of C and if S is strictly singular, then...

6. Strictly singular operators. More on Fredholm operators

See Kato [Kt1], Pełczyński [Pe2]; also [LT1], section 2.c, for a concise presentation.

Definition 6.1. We say that S ∈ L(X,Y) is strictly singular if for every (infinite dimensional) subspace Z of X and every ε > 0, there exists z ∈ Z such that ‖Sz‖ < ε‖z‖. Let S(X,Y) denote the set of strictly singular operators from X to Y. When Y = X we simply write S(X).

Exercise 6.1.
1. Let S ∈ S(X,Y). For every (infinite dimensional) subspace Z ⊂ X and every ε > 0, there exists (an infinite dimensional) subspace Z′ ⊂ Z such that ‖S|Z′‖ < ε (compare to Proposition 3.2).
2. If S_1, S_2 ∈ S(X,Y), then S_1 + S_2 is strictly singular. Show that S(X,Y) is a closed vector subspace of L(X,Y).
3. Let S ∈ S(X,Y). For every T ∈ L(W,X) and U ∈ L(Y,Z), show that ST and US are strictly singular. When Y = X, S(X) is a closed two-sided ideal of L(X).
4. Show that K(X,Y) ⊂ S(X,Y), and that they coincide when X = Y = H is a Hilbert space, or when X = Y = ℓ_p. Give an example of S ∈ S non compact. Give an example where S ∈ S(X,Y) but the adjoint S^* is not strictly singular.
5. Matrix of strictly singular operators. Let T ∈ L(X^n, Y^m). Then T can be represented by an m × n matrix (T_{i,j}) of operators from X to Y. Show that T is strictly singular iff each T_{i,j} is strictly singular.
6. Complexification of a strictly singular operator. If S is a strictly singular operator between two real spaces X and Y, then S_C is strictly singular from X_C to Y_C.
7. Show that L(ℓ_1, ℓ_2) = S(ℓ_1, ℓ_2).

Remark. The essentially dual notion of strictly cosingular operators was defined by Pełczyński [Pe2] in the following way: an operator T : X → Y is strictly cosingular if, for every quotient map q from Y onto some (infinite dimensional) Banach space Z, the map q ◦ T is not onto.

Lemma 6.1. Assume that T ∈ L(X,Y) is finitely singular and that S ∈ S(X,Y). Then T + S is finitely singular.

Proof. There exists a finite codimensional subspace X_0 ⊂ X and c > 0 such that ‖Tx‖ ≥ c‖x‖ for every x ∈ X_0. Assuming T + S infinitely singular, there would exist by Proposition 3.2 a subspace Z ⊂ X (infinite dimensional) such that ‖(T + S)|Z‖ < c/2. Then Z′ = Z ∩ X_0 is infinite dimensional, hence there exists z ∈ Z′ ⊂ X_0 such that ‖Sz‖ < (c/2)‖z‖. But this implies ‖Tz‖ < c‖z‖, contradicting the choice of X_0 and c.

Remark 6.1. Let U ∈ L(X,Y). In order that the above proof works for T + U, it is enough that

s(U) = sup_{Z ⊂ X, dim Z = ∞} inf{‖Uz‖ : z ∈ Z, ‖z‖ = 1}

is strictly less than c(T). Note that U is strictly singular iff s(U) = 0.

Lemma 6.2. Let K = C. For every U ∈ L(X),

r_b(U) ≤ s(U).

Proof. There exists λ ∈ ∂σb(U) such that |λ| = rb(U) and U − λIX infinitely singular by Lemma 5.4, hence there exists an infinite dimensional subspace Z on which U ∼ λIX by Proposition 3.2, so that s(U) ≥ |λ|. Corollary 6.1. Let T,U ∈ L(X,Y ). If T is Fredholm and if s(U) < c(T ), then T + U is Fredholm, and ind(T + U) = ind T . Proof. For every t ∈ [0, 1], we have s(tU) < c(T ), hence T + tU is finitely singular for every t ∈ [0, 1] by Remark 6.1. The result follows by Lemma 4.6. Remark. The corresponding result holds also if T is finitely singular and not Fredholm. Corollary 6.2. Let S, T ∈ L(X,Y ). If T is Fredholm and S strictly singular, then T + S is Fredholm, and ind(T + S) = ind T . Corollary 6.3. T is Fredholm iff it is invertible modulo strictly singular operators. Corollary 6.4. T is Fredholm iff it is invertible modulo compact operators. Proposition 6.1. Let S ∈ L(X) be strictly singular, X complex. Then every λ 6= 0 in the spectrum of S is isolated in σ(S) and is an eigenvalue with finite multiplicity. It follows that the spectrum of S is finite or consists of a sequence converging to 0. (This is a particular case of a Riesz operator; for the proof below it is enough to know that σb(S) = {0}.) If X is real and S ∈ S(X) we obtain considering SC a complex spectrum invariant under complex conjugation and consisting of a sequence converging to 0; the real spectrum is at most a sequence converging to 0. Proof. If S is strictly singular it is clear that σb(S) = {0} by Lemma 6.2. It follows then from Corollary 5.3 that every λ 6= 0 in σ(S) is isolated and is an eigenvalue with finite multiplicity. Exercise. Let T ∈ L(X), S ∈ S(X). If λ belongs to the unbounded component of ρbK(T ) and if λ ∈ σK(T + S), then λ is isolated in σK(T + S) and is an eigenvalue with finite multiplicity for T + S.
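Example (added illustration). On ℓ_2, the diagonal operator S e_n = e_n/n is compact, hence strictly singular, and σ(S) = {0} ∪ {1/n : n ≥ 1}: every non zero point of the spectrum is an isolated eigenvalue with one dimensional eigenspace, and the eigenvalues converge to 0, as predicted by Proposition 6.1. For a strictly singular operator which is not compact, one may take the formal identity map from ℓ_1 into ℓ_2 (see Exercise 6.1, 7): it is strictly singular, but the images of the unit vectors form a normalized sequence with no convergent subsequence.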

7. Ultraproducts

Ultraproducts appeared in model theory (Łoś) and as models for non-standard analysis. The notion of “restricted ultraproduct”, also called ultraproduct of Banach spaces, is more suitable to classical analysis and was developed among others by Dacunha-Castelle and Krivine in [DK] (see also [Ja], approximately at the same period).

Let I be an infinite index set and let U be a non-trivial ultrafilter on I. Let (Xi)i∈I be a family of Banach spaces indexed by I. Let L = `∞(I, (Xi)) be the space of all bounded

families x̃ = (x_i)_{i∈I} such that x_i ∈ X_i for every i ∈ I and sup_i ‖x_i‖_{X_i} < ∞. The norm of x̃ = (x_i)_{i∈I} ∈ L is given by ‖x̃‖ = sup_{i∈I} ‖x_i‖_{X_i}. We define a seminorm on L by

p(x̃) = lim_U ‖x_i‖_{X_i}  (≤ ‖x̃‖)

and we let N = {x̃ : p(x̃) = 0}. Then N is a closed subspace of L and we define the ultraproduct X̃ = ∏ X_i / U to be the quotient Banach space L/N. When all spaces X_i are equal to the same space X, we call X̃ an ultrapower of X. We can embed X isometrically in the ultrapower X̃ by mapping each x ∈ X to the constant sequence (x_i)_i where x_i = x for every i ∈ I.

By a slight abuse, we shall consider (xi)i as representing an element in Xe, instead of the correct formulation which refers to the equivalence class modulo U (same tradition in measure theory when speaking about a “function” in L1, instead of a class modulo negligible functions).

Exercise. 1. Show that Xe = X when Xi = X is finite dimensional, and that Xe 6= X when X is infinite dimensional. In this case Xe is non separable. 2. Show that Xe is finite dimensional iff d = limU dim Xi is finite. In this case d is the dimension of Xe. The index set is usually the set N of integers, but more general sets are useful when dealing for example with ultrapowers Xe of spaces X with non-separable dual; in this case a natural index set is the set of finite subsets of the dual space X∗ (or finite subsets of a dense subset in X∗: this is why the case of separable dual reduces to the index set N); equivalently we can work with the set I of (closed) finite codimensional subspaces of X.

The weakly null part Xe0 of the ultrapower Xe of X consists of elements xe that have a representative (xi)i∈I such that w- limU xi = 0 and plays a role in several questions. Exercise. If X is reflexive, then

Xe = X ⊕ Xe0.
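A possible sketch for this exercise (assuming X reflexive): if x̃ ∈ X̃ is the class of a bounded family (x_i), weak compactness of the balls of X gives a weak limit x = w-lim_U x_i ∈ X; by weak lower semicontinuity of the norm, ‖x‖ ≤ lim_U ‖x_i‖ = ‖x̃‖, so x̃ ↦ x is a well defined norm one projection of X̃ onto X (it is the identity on constant families), and x̃ − x belongs by construction to the weakly null part X̃_0. Conversely X ∩ X̃_0 = {0}, since a constant family with weak limit 0 is 0. This gives X̃ = X ⊕ X̃_0.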

When H is a Hilbert space, the ultrapower He is also a Hilbert space: we may define a scalar product on He that extends the scalar product of H, and such that the corresponding norm is the norm of He. Indeed, let for xe = (xi) and ye = (yi)

⟨x̃, ỹ⟩ = lim_U ⟨x_i, y_i⟩.

Then ‖x̃‖² = ⟨x̃, x̃⟩.
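Note (an added check) that this limit does not depend on the chosen representatives: if p(x̃ − x̃′) = 0, then by the Cauchy–Schwarz inequality |⟨x_i, y_i⟩ − ⟨x_i′, y_i⟩| ≤ ‖x_i − x_i′‖ ‖y_i‖ → 0 along U, since the family (y_i) is bounded.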

It is also true, but more complicated, that for any given p ∈ [1, +∞), the class of Lp spaces is stable under ultraproduct (Dacunha-Castelle and Krivine [DK]).

26 Ultrapower of an operator We fix an index set I and an ultrafilter U throughout this paragraph; all ultrapowers of possibly different spaces are taken with respect to I and U. Given T ∈ L(X,Y ) we get in the obvious way a bounded linear operator Te from the ultrapower Xe to the corresponding ultrapower Ye: if xe = (xi)i we simply let Texe = (T xi)i. It is clear that kTek = kT k. If T ∈ L(X,Y ) and U ∈ L(Y,Z), then UTg = UeTe. Also IeX is the identity of Xe. It follows that Te is invertible when T is invertible. In the case of a Hilbert space, it is easy to see that the ultrapower of the hilbertian adjoint T ∗ of T is the adjoint of Te. When Y = X, we see that T → Te is a unital Banach algebra morphism from L(X) to L(Xe). Furthermore, it is a ∗-morphism when X = H is a Hilbert space.

Let Xe0 be the weakly null part of the ultrapower that was defined before. Then, for every T ∈ L(X,Y ), TeXe0 ⊂ Ye0. For a compact operator T , TeXe0 = {0}. Conversely, when the index set is rich enough for coding every weakly null net, Te = 0 implies that T is |Xe0 compact. Exercise. Fredholm and ultrapowers. 1. If T is finitely singular, then so is Te. 2. When T is Fredholm from X to Y , show that Te is Fredholm from Xe to Ye, with the same index. 3. When λ ∈ ∂σK(T ), there exists an eigenspace for Te corresponding to λ. What is the spectrum of Te?

Finite representability

Definition 7.1. We say that Y is finitely representable into X if for every finite dimensional subspace F of Y and every ε > 0, there exists a linear map A : F → X such that

∀y ∈ F,  (1 − ε)‖y‖ ≤ ‖Ay‖ ≤ (1 + ε)‖y‖.
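For instance (an added standard example), every Banach space Y is finitely representable in c_0: given a finite dimensional F ⊂ Y and ε > 0, choose a finite δ-net (y_j)_{j≤N} of the unit sphere of F with 2δ ≤ ε, and norm one functionals y_j^* with y_j^*(y_j) = ‖y_j‖; then Ay = (y_1^*(y), . . . , y_N^*(y), 0, 0, . . .) defines a linear map A : F → c_0 with ‖Ay‖ ≤ ‖y‖ and ‖Ay‖ ≥ (1 − 2δ)‖y‖ for every y ∈ F.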

Proposition 7.1. If every X_i is finitely representable in X then the ultraproduct X̃ of the family (X_i) is finitely representable in X.

Proof. Let F ⊂ X̃ be finite dimensional and let (ỹ^{(α)})_α be an algebraic basis for F; let (y_i^{(α)})_{i∈I} be a representative of ỹ^{(α)}. By assumption there exists for every i ∈ I a linear map A_i : F_i = [y_i^{(α)}]_α → X such that

∀y ∈ F_i,  (1 − ε)‖y‖ ≤ ‖A_i y‖ ≤ (1 + ε)‖y‖.

We obtain a linear map A : F → X by setting A ỹ^{(α)} = (A_i y_i^{(α)})_i. If x̃ = Σ_α a_α ỹ^{(α)} ∈ F, then x_i = Σ_α a_α y_i^{(α)} ∈ F_i and

(1 − ε)‖x̃‖ = (1 − ε) lim_U ‖x_i‖ ≤ lim_U ‖A_i x_i‖ = ‖Ax̃‖ ≤ (1 + ε)‖x̃‖.

27 Spreading models ([BS], [BL]) Let U be a non trivial ultrafilter on N; consider the successive ultrapowers of X defined in the following way. Let Xe1 be the usual ultrapower with index set N. Let Xe2 be the

vector space of classes of double sequences xe = (xn1,n2 ) with the norm

‖x̃‖ = lim_{n_1,U} lim_{n_2,U} ‖x_{n_1,n_2}‖.

Similarly Xe3 is defined from triple sequences, and so on... There is a natural isometric embedding from Xen into Xen+1, which allows to consider Xen as a subspace of Xen+1 and e S e then to define the completion X∞ of the union n Xn (this type of construction is very similar to the notion of ultralimit in model theory). Let us describe the embedding ik from e e e e Xk into Xk+1: to xe = (xn1,...,nk ) ∈ Xk we associate ik(xe) = (yn1,...,nk+1 ) ∈ Xk+1 defined by yn1,...,nk,nk+1 = xn1,...,nk for every (n1, . . . , nk+1). For every operator T ∈ L(X,Y ), there exists an operator Te∞ : Xe∞ → Ye∞ defined in the obvious way. This space Xe∞ is finitely representable into X. We can define on Xe∞ an isometric shift D in the following e manner: if xe = (xn1,...,nk ) belongs to Xk, let

D x̃ = (y_{n_1,...,n_{k+1}}) ∈ X̃_{k+1},  where y_{n_1,n_2,...,n_{k+1}} = x_{n_2,...,n_{k+1}}.

The spreading model operation corresponds then to an iterated action of this on an element xe ∈ Xe1. This point of view was popularized by Krivine (the original approach of Brunel and Sucheston to spreading models uses a precise extraction of subsequence, with the help of Ramsey’s theorem). The ultrapower point of view has the advantage of being “functorial”: if Y is a second Banach space, and if we construct the corresponding 0 space Ye∞, there exists a similar shift D on Ye∞, and for every T ∈ L(X,Y ) we have 0 D Te∞ = Te∞D.

Let (xn)n≥1 be a sequence in X with no Cauchy subsequence. We consider in the successive ultrapowers the vectors e1 = (xn), e2 = De1 (note that e2 is not the image of k−1 e1 under the canonical embedding of Xe1 into Xe2), and generally ek = D e1 for every k ≥ 1. The norm in Xe∞ of a linear combination of the vectors e1, . . . , ek is given by

‖Σ_{i=1}^{k} a_i e_i‖ = lim_{n_1,U} · · · lim_{n_k,U} ‖a_1 x_{n_1} + a_2 x_{n_2} + · · · + a_k x_{n_k}‖.

This norm on [en]n≥1 is invariant under spreading, which means that

‖Σ_{i=1}^{k} a_i e_i‖ = ‖Σ_{i=1}^{k} a_i e_{m_i}‖

for every k, all scalars (a_i)_{i=1}^{k} and all m_1 < m_2 < . . . < m_k. Note that ‖e_1 − e_2‖ > 0, otherwise (x_n) would have a Cauchy subsequence.
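For instance (an added example), if (x_n) is the unit vector basis of ℓ_p, 1 ≤ p < ∞, then ‖a_1 x_{n_1} + · · · + a_k x_{n_k}‖ = (Σ_i |a_i|^p)^{1/p} whenever the indices n_1, . . . , n_k are distinct, so the iterated limits give ‖Σ_i a_i e_i‖ = (Σ_i |a_i|^p)^{1/p}: the spreading model built on the unit vector basis of ℓ_p is again ℓ_p.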

Let f_1 = e_1 − e_2, f_2 = e_3 − e_4, . . . Then ‖f_n‖ = ‖f_1‖ > 0 and (f_n) is a monotone basic sequence (see below). The space generated by this sequence is contained in X̃_∞ and hence finitely representable in X. The norm is invariant under spreading. We call this space generated by (f_n) a monotone spreading model of X, generated by the sequence (x_n).

We prove that (f_n)_n is a monotone basic sequence. By the spreading invariance property, we obtain

‖Σ_{i=1}^{k} a_i f_i‖ = ‖Σ_{i=1}^{k−1} a_i f_i + a_k (e_l − e_{l+1})‖

for every l ≥ 2k − 1. Taking averages from l = 2k − 1 to l = 2k + n − 2 we obtain ‖Σ_{i=1}^{k} a_i f_i‖ ≥ ‖Σ_{i=1}^{k−1} a_i f_i + a_k (e_{2k−1} − e_{2k+n−1})/n‖, hence letting n → ∞

‖Σ_{i=1}^{k} a_i f_i‖ ≥ ‖Σ_{i=1}^{k−1} a_i f_i‖.

Block finite representability

Let (xn) be a sequence in a Banach space X, with no Cauchy subsequence. We say that a space Y with a basis (fn) is block finitely representable in the span of (xn) if for every finite sequence y1, . . . , yk of (successive) blocks in Y and every ε > 0 there exists a linear map A :[y1, . . . , yk] → X such that Ay1, . . . , Ayk are successive linear combinations of the (xn), and

∀y ∈ [y_1, . . . , y_k],  (1 − ε)‖y‖ ≤ ‖Ay‖ ≤ (1 + ε)‖y‖.

A (monotone) spreading model generated by a sequence (xn) is block finitely repre- sentable into this sequence. Ultrapowers of commuting or almost commuting operators When U, T ∈ L(X) commute then Ue and Te commute on Xe. In some situations U and T do not exactly commute but the restrictions of Te and Ue to the weakly null part Xe0 commute. For example, suppose that TU − UT is compact. Then TeUe − UeTe vanishes on the subspace Xe0 of Xe. We shall give an easy application to the existence of common approximate eigenvectors. Lemma 7.1. (Complex scalars) Let T,U ∈ L(X) with TU = UT . If T is not an into isomorphism, there exists λ ∈ C and a normalized sequence (xn) (possibly constant) such that T xn → 0, (U − λIX )xn → 0.

Proof. We only have to find λ ∈ C such that for every ε > 0, there exists a norm one vector x ∈ X with kT xk < ε and kUx − λxk < ε. Since T is not an into isomorphism, we can find a normalized sequence (yn) ⊂ X such that T yn → 0 (this sequence (yn) may be constant). If we consider ye = (yn) in the ultrapower Xe, we get Teye = 0. This shows that e e e e e Z = ker T 6= {0}. Since U and T commute, we know that UZ ⊂ Z. Let V = U|Z ∈ L(Z)

29 and let λ ∈ ∂σ(V ). There exists ze such that ze ∈ Z and V ze ∼ λze. Pulling back ze to X gives the desired vector x. Exercise. If X is a real Banach space and T,U are as in Lemma 7.1 we can find r > 0, θ ∈ R and two sequences (xn), (yn) in X with kxnk + kynk = 1, T xn → 0, T yn → 0, Uxn − r(cos θ xn − sin θ yn) → 0 and Uyn − r(sin θ xn + cos θ yn) → 0.
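For instance (an added observation), applying Lemma 7.1 with T = 0, which is never an into isomorphism, recovers the fact that every operator U ∈ L(X) on a complex Banach space has an approximate eigenvalue: there exist λ ∈ C and a normalized sequence (x_n) with (U − λI_X)x_n → 0.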

Corollary 7.1. (K = C) Let T1,...,Tn ∈ L(X) commute. There exist λ1, . . . , λn ∈ C such that for every ε > 0, there exists x ∈ X, kxk = 1 and

∀i = 1, . . . , n,  ‖T_i x − λ_i x‖ < ε.

Proof. The proof is by induction, using the argument of Lemma 7.1. When n = 2, we choose λ1 ∈ ∂σ(T1); then T = T1 − λ1IX is not an into isomorphism and we can find by Lemma 7.1 some λ2 for which there exist common approximate vectors. Passing to an ultrapower we see that the eigenspaces for Te1 and λ1, and for Te2 and λ2 intersect, and their intersection is stable under Te3 since the operators commute. It is therefore possible as before to find an approximate eigenvector for Te3 in this intersection. There is a variant of Lemma 7.1 where one assumes that T is infinitely singular and then the normalized sequence (xn) in the result can be chosen basic. In order to construct this basic sequence, we only need to show that the vector x in the above proof can be chosen in any given finite codimensional subspace of X. Let I be the set of finite codimensional subspaces of X and let U be a ultrafilter on I containing the set {Z ∈ I : Z ⊂ Y } for every Y ∈ I. Since T is infinitely singular, we may choose for every i = Y ∈ I a norm one vector −1 yi ∈ i such that kT yik < (codim i) . The corresponding net (yi)i belongs to the weakly null part Xe0 of the ultrapower, and Xe0 is stable under Te and Ue. It is easy then to adapt the above argument to the present case. The final ze is now a weakly null net, so it can be pulled back in any finite codimensional subspace.

Exercise. Assume that T_1, . . . , T_k commute and that there exists a normalized (basic) sequence (x_n) such that T_i x_n → 0 for every i = 1, . . . , k. Let U commute with each T_i. Then there exists λ ∈ C and a normalized (basic) sequence (y_n) such that T_i y_n → 0 for each i = 1, . . . , k and (U − λ)y_n → 0. The above variant of Lemma 7.1 remains true if T and U weakly commute.

Lemma 7.2. (Complex scalars) Let T, U ∈ L(X). Assume that T is infinitely singular and that TU − UT is compact. We can then find λ ∈ C and a normalized basic sequence (x_n) in X such that T x_n → 0 and (U − λI_X)x_n → 0.

Wiener’s algebra again

Let f = Σ_{n∈Z} a_n e^{inθ} ∈ W and T = Σ_{n∈Z} a_n R^n where R is the right shift on ℓ_1(Z). Then

‖f‖_W = ‖T‖_{L(ℓ_1(Z))}.

30 sequence (xn) such that T xn → 0 and Rxn − λxn → 0; on the other hand, T xn ∼ f(λ)xn, hence f(λ) = 0, contradiction. It follows that ∂σ(T ) ⊂ f(T); indeed, if λ∈ / f(T), we see that ϕT −λId = ϕT − λ does not vanish on T, hence T − λId is an into isomorphism and therefore λ∈ / ∂σ(T ) by the boundary of spectrum Lemma. Assuming that ϕT does not vanish on T, we can find a trigonometric polynomial g such that |gϕT − 1| < 1/2 on T. Let U be the finite linear combination of powers of R such that ϕU = g. Then, since ϕTU = ϕT ϕU = ϕT g, the boundary of the spectrum of TU is contained in the disc ∆(1, 1/2), hence the spectrum itself is contained in the same disc and it follows that TU is invertible; similarly UT is invertible, therefore T is invertible. −1 P n The inverse must commute to R, therefore T = n∈Z bnR , where the sequence (bn) −1 P P −1 is defined by T e0 = n∈Z bnen; finally n∈Z |bn| < ∞ since T e0 ∈ `1. If we set P inθ h = n∈Z bn e , we see that hf = 1 on T. Summing up, we found an alternate proof for the fact that a non-vanishing function in W is invertible in W (end of section 2). Cuntz algebras See Cuntz [C1]. ∗ Let P be the unital complex algebra generated by six elements (uj) and (uj ), j = 0, 1, 2, satisfying the relations

X2 ∗ ∗ ui uj = δi,j1P ; ujuj = 1P . j=0

∗ Notice that pj = ujuj is an idempotent, and pipj = 0 when i 6= j. We shall also consider ∗ the unital complex algebra Q generated by six elements (vj) and (vj ), j = 0, 1, 2, satisfying the relations ∗ vi vj = δi,j1Q.

∗ P2 Again qj = vjvj is an idempotent, qiqj = 0 when i 6= j, but q = 1Q − j=0 qj is a non zero idempotent in Q. Exercise. Let (q) denote the ideal generated by q in Q. Show that P'Q/(q). S∞ n Let T be the ternary tree n=0{0, 1, 2} . The root is the empty word, denoted ∅. If ∗ s, t ∈ T , let (s, t) stand for the concatenation of s and t. For t ∈ T , we define ut and ut in ∗ ∗ ∗ ∗ P (and in a similar way vt and vt in Q) inductively by u(t,i) = uiut and (u(t,i)) = ut ui . ∗ ∗ (We also let u∅ = u∅ = 1P .) Then because ui uj = 0 when i 6= j, every w ∈ P has a decomposition XN w = c u u∗ , l αl βl l=1 ∗ ∗ where αl and βl are words in T and cl ∈ C. Notice that uαuα = 1P and that uαuα is an idempotent pα for every word α ∈ T . Similarly every w ∈ Q has a decomposition

XN w = c v v∗ , l αl βl l=1

31 where αl and βl are words in T and cl ∈ C. We shall describe a model for Q and two models for P. Let Y00 be the vector space of finitely supported scalar sequences indexed by T . Denote the canonical basis for Y00 by (et)t∈T and denote the length of a word t ∈ T by |t|. We shall now describe some operators on Y00. Let Id denote the identity operator on the space of sequences. Let Vi, for i = 0, 1, 2 be defined by their action on the basis as follows:

Viet = e(t,i).

th Thus Vi can be thought of as the map taking each vertex of T to the i vertex immediately ∗ ∗ below it. The adjoints Vi act in the following way: Vi et = es if t is of the form t = (s, i), ∗ ∗ and Vi et = 0 otherwise. The following facts are easy to check: Vi Vj = δi,jId; if Q denotes P2 ∗ the natural rank one projection on the line Ce∅, then i=0 ViVi = Id−Q. This is a model ∗ ∗ for Q: we have a representation ρ : Q → L(Y00) defined by ρ(vi) = Vi, ρ(vi ) = Vi for i = 0, 1, 2. We shall see below that ρ is injective.

For constructing our first model for P, consider the subset T0 of T consisting of all words t ∈ T that do not start with 0 (including the empty sequence). Let L0 be the vector subspace of Y00 generated by the (et)t∈T0 . In order to define U0 on L0, we modify the definition of V0 slightly, by letting U0e∅ equal e∅ instead of e0. Operators U1 and U2 are ∗ defined exactly as V1 and V2 were. We still have that the UiUi are projections and that ∗ P2 ∗ Ui Uj = δi,jId, but this time i=0 UiUi = Id. We have a model for P, that we call P0. ∗ ∗ The mapping ρ0 from P to L(L0) which takes ui to Ui and ui to Ui is a representation of P into L(L0). It is injective.... There is a simpler way to present this model: define on c00(N) the operators 0 Ui en = e3n+i−2, i = 0, 1, 2.

This is the same model, up to isomorphism; indeed, to each s = (i1, . . . , in) ∈ T0 we n−1 can associate the integer ns = 3 i1 + ··· + 3in−1 + in + 1, (with n∅ = 1), and this 0 −1 defines a bijection between T0 and N such that Ui = ϕUiϕ for i = 0, 1, 2, where ϕ is the isomorphism from L(L0) to L(c00) deduced from that bijection.

Our second model for P uses the index set T∞ = Z × T . We consider the vector space L∞ of finitely supported complex functions on T∞, with its natural basis en,t, n ∈ Z, t ∈ T . Now define

U0(en,∅) = en+1,∅; U0(en,t) = en,(t,0) if t 6= ∅, and for j = 1, 2 Uj(en,t) = en,(t,j). ∗ ∗ Then we set U0 (en,∅) = en−1,∅ and for j = 1, 2, Uj (en,∅) = 0; when t = (s, k) is not the ∗ empty word (k = 0, 1, 2) we let Uj (en,t) = 0 when t does not end with j, that is j 6= k, ∗ ∗ and Uj removes that last j otherwise: Uj (en,(s,j)) = en,s. We write en instead of en,∅; one can think of en as a vector et with an infinite t, having infinitely many 0s at the left of the nth place. We have a second model P∞ for P, with a representation ρ∞ from P in L(L∞). This vector space L∞ admits a natural graduation by

L∞,n = span{em,t : m + |t| = n}, n ∈ Z.

32 ∗ It should be observed that each Uj sends L∞,n to L∞,n+1 while each Uj sends L∞,n to L∞,n−1. ∗ Consider in P the subset Bk generated by products uαuβ where |α| = |β| ≤ k. It is easy to see that Bk is a subalgebra of P. Furthermore, every element in Bk is a linear ∗ combination of words uαuβ with |α| = |β| = k. This follows from the relation

X2 ∗ ∗ ∗ uαuβ = uαujuj uβ j=0 which shows that products of length r can be expressed as sum of products of length r + 1. ∗ Let fα,β = uαuβ, for |α| = |β| = k. It is easy to check that they generate an algebra isomorphic to the matrix algebra M3k , thus Bk is isomorphic to M3k . Let B denote the subalgebra of P obtained as union of the increasing sequence (Bk). ∗ Lemma 7.3. Let L 6= {0} be a complex vector space and let (Ui), (Ui ), i = 0, 1, 2 be operators on L such that

X2 ∗ ∗ Ui Uj = δi,j1L(L); UjUj = 1L(L). j=0

∗ ∗ Then ρ(ui) = Ui, ρ(ui ) = Ui for i = 0, 1, 2 defines an injective representation of P into ∗ L(L). Similarly suppose (Vi), (Vi ), i = 0, 1, 2 are operators on L such that

∗ Vi Vj = δi,j1L(L).

Then ρ(v_i) = V_i, ρ(v_i^*) = V_i^* defines an injective representation of Q into L(L).

Proof. It is clear that ρ exists. Let L_i = U_i(L); it is easy to check that L is the direct sum of L_0, L_1 and L_2; for every t ∈ T let L_t = U_t(L); it is also clear that for every k, L is the direct sum of the L_t, |t| = k. This implies that the restriction of ρ to B_k is injective. ...

Using B, it is possible to give a useful representation for every w ∈ P. For every n ≥ 0, let 0^n denote the word consisting of n zeros and let r_n be the idempotent r_n = u_{0^n} u_{0^n}^* = u_0^n (u_0^*)^n. We shall also define the weight q(w) of an element w ∈ P by letting q(1_P) = 0, q(u_i) = 1 and q(u_i^*) = −1, and defining the weight of w as the sum of the weights of the factors (note that the fundamental relation between the generators is compatible with this definition).

Proposition 7.2. Every w ∈ P has a unique representation

w = Σ_{n<0} (u_0^*)^{−n} b_n + Σ_{n≥0} b_n u_0^n,

where b_n ∈ B satisfy

b_n = b_n r_n for n ≥ 0,   b_n = r_{−n} b_n for n < 0.

33 P Proof. It is possible to express w as w = n∈Z wn, where each wn is of the form Xk w = c u u∗ , n l αl βl l=1

with |αl| − |βl| = n for each l = 1, . . . , k. Then for n ≥ 0 we have ∗ n n ∗ n n n wn = (wn(u0) u0 (u0) )u0 = bnu0 ∗ −n and for n < 0 we may write wn = (u0) bn; it is easy to check that bn ∈ B and bn = bnrn for n ≥ 0, bn = r−nbn for n < 0. This shows the existence. For the uniqueness, assume that for some N ≥ 0, we have X X ∗ −n n 0 = (u0) bn + bnu0 = w n<0 n≤N

and that this is a representation with the above properties; we want to show that bN = 0 (the case where the largest index N in the summation is < 0 can be treated in a similar way and will be left to the reader). For proving bN = 0 we use the model P∞. Observe that for v ∈ B, V = ρ∞(v) leaves each subspace L∞,n invariant. We have, letting Bn = ρ∞(bn) X X ∗ −n n 0 = (U0 ) Bn + BnU0 = W = ρ∞(w). n<0 n≤N

N Let y ∈ L∞,0. Then PN W y = BN U0 y, where PN denotes the projection of L∞ onto L∞,N ; since bN ∈ B, it belongs to some BM and since BN = BN RN we get, setting Ft,t0 = ρ∞(ft,t0 ) X BN = at,t00 Ft,(t00,0N ), t,t00 00 N P where |t| = M, |t | = M − N. Choosing y = eN−M,t00 we get BN U0 y = t at,t00 eN−M,t = 0 hence all at,t00 are 0. Remark. We may obtain analogous results for Q.....

It follows that the projection πn in P on the set of elements of weight n is well defined. It also follows that for every λ ∈ T the transform defined by ∗ ∗ ϕλ(uj) = λuj; ϕλ(uj ) = λuj extends to a morphism of P. Indeed, if X 0 = c u u∗ = w l αl βl l n then πn(w) = 0, but ϕλ multiplies by λ the set of elements of weight n hence X 0 = c λ|αl|−|βl|u u∗ l αl βl l and ϕλ is well defined on P. In the same way the ∗-transform is well defined on P.

34 Hilbertian representations of P and Q Suppose that H is a Hilbert space. A Hilbertian representation of P is a representation ∗ ∗ ρ : P → L(H) such that Ui = ρ(ui ) is the Hilbertian adjoint of Ui = ρ(ui). This implies immediately that Ui is an (into) isometry on H. We obtain an orthogonal decomposition of H into three subspaces Hi, with Hi = UiH, i = 0, 1, 2, and each of these subspaces again decomposes into a sum of three... For every Hilbertian model of P, we obtain a C∗-algebra norm on P. We shall prove the result of Cuntz [C1] that the norm of ρ(w) does not depend from the representation, or in other words that there exists a unique C∗-norm on P. We first observe that this is the case for the subalgebra B: for every k, it is easy to ∗ check that ρ is injective on Bk; the image ρ(Bk) is a finite dimensional C -algebra, hence ∗ 3k has a unique C -norm (namely, the norm of operators on `2 ). Suppose that ρ is a ∗-representation of P in some L(H), H a Hilbert space. Then, considering an ultrapower He of H we may define ∀w ∈ P, ρe(w) = ρg(w); this is a ∗-representation of P in L(He). We define a Hilbertian model from P∞ in the obvi- ous way: we associate to the vector space L∞ a Hilbert space H∞ admitting (en,t)n∈Z,t∈T ∗ for hilbertian basis. Every Ui extends clearly to an isometry of H∞ and Ui is the Hilber- tian adjoint of Ui. We want to show that P∞ with this `2-norm is minimal among the hilbertian models of P. Lemma 7.4. For every ∗-representation ρ of P into some L(H), there exists an M ⊂ He for ρe and an onto isometry j : H∞ → M such that −1 ∀w ∈ P, ρeM (w) = jρ∞(w)j , where ρeM denotes the restriction of ρe to M. Hence

∀w ∈ P, kρ∞(w)k = kρeM (w)k ≤ kρe(w)k = kρ(w)k.

Proof. Let Uj = ρ(uj), j = 0, 1, 2. Let y ⊥ U0(H), kyk = 1; it is easy to check that the n e sequence (U0 y)n≥0 is orthonormal. We define in H the vectors k+n e fn,t = (UtU0 y)k≥0 ∈ H. It is not hard to show that these vectors are normalized and pairwise orthogonal, so that j(en,t) = fn,t defines an isometry from H∞ into He. We set M = j(H∞) and the conclusion follows easily.

In a similar way one can show that the model P0 is also minimal. Let 1 ¡nX−1 ¢ z = √ U ky , n n 0 k=0

where y is chosen as before; then zn is almost fixed under U0, and ze = (zn) is fixed under Ue0; let j(et) = Uetz,e t ∈ T0.

35 Proposition 7.3. Suppose that ρ is a representation of P in a Banach algebra A such that ∗ ∗ Uj = ρ(uj) and Uj = ρ(uj ) have norm one for j = 0, 1, 2 and such that ϕλ is isometric, i.e. kρ(ϕλ(w))k = kρ(w)k for every w ∈ P. Let Φλ(W ) = ρ(ϕλ(w)) where W = ρ(w), w ∈ P. Then λ → ϕλ(W ) is well defined and continuous on T for every W in the closure of ρ(P). We can write X+∞ n Φλ(W ) ∼ λ Wn, −∞

where Wn is uniquely defined by Z −n Wn = λ Φλ(W ) dµ(λ), T (µ is the invariant probability on T); we have for n ≥ 0

n Wn = BnU0 ,Bn ∈ ρ(B),Bn = BnRn,Rn = ρ(rn), and for n < 0 ∗ −n Wn = (U0 ) Bn,Bn ∈ ρ(B),Bn = R−nBn.

Proof. It is clear that λ → ρ(ϕλ(w)) is continuous from T to A for every w ∈ P, and since ϕλ is isometric λ → Φλ(W ) is well defined and is theP uniform limit of continuous functions on T for every W in the closure of ρ(P). Let w = n wn ∈ P, where each wn P n has weight n ∈ Z; then Φλ(ρ(w)) = n λ ρ(wn), and the integral formula above implies n that kρ(wn)k ≤ kρ(w)k. Observe that, writing wn = bnu0 as in Proposition 7.2???, we get kρ(bn)k ≤ kρ(wn)k. When ρ(w) converges in A to some W belonging to the closure of ρ(P), we see thus that the corresponding ρ(wn) converges in A to some Wn, and that ρ(bn) converges to some Bn ∈ ρ(B). The equations for Wn and Bn follow by continuity.

Proposition 7.4. Suppose that Y∞ is equipped with a norm such that (en,t) is 1-uncondi- n+|t| tional (this is in particular true for the Hilbert space H∞). Then Dλen,t = λ en,t defines an isometry on Y∞ and

∀w ∈ P,Dλρ∞(w)Dλ−1 = ρ∞(ϕλ(w)).

Proof. Immediate.

It follows that kρ∞(ϕλ(w))kL(H∞) = kρ∞(w)kL(H∞) for every w ∈ P and every λ ∈ T. In particular, the Hilbertian model P∞,2 satisfies the hypothesis of Proposition 7.3. Theorem 7.1. There exists a unique C∗-norm on P. ∗ Proof. We already know a C -norm on P, namely the norm given by ρ∞ : P → L(H∞). We also know by Lemma 7.4 that this norm is smaller than any other C∗-norm on P. Conversely, let ρ : P → L(H) be a Hilbertian representation of P. We already know that kρ(w)k ≥ kρ∞(w)k. Define

|w| = sup_{λ∈T} ‖ρ(ϕ_λ(w))‖ ≥ ‖ρ(w)‖.

36 This norm corresponds to the direct sum of the family of representations (ρ ◦ ϕλ)λ∈T and is therefore a C∗-norm on P. Let A denote the completion of P under this norm. It is a ∗ C -algebra and we may define a ∗-representation ψ from A to the closure A∞ of ρ∞(P) in L(H∞). All we have to show (by Proposition 2.2) is that ψ is injective. It will follow that for every w ∈ P, we have |w| = kρ∞(w)k, so that finally since

kρ∞(w)k ≤ kρ(w)k ≤ |w|, the three norms coincide. We first observe that ψ is isometric on B, since B has a unique C∗-algebra norm. n Next, if n ≥ 0 and Wn = BnU0 , with Bn ∈ B and Bn = BnRn, we get kψ(Bn)k = kBnk, ∗ n kRnk ≤ 1 since U0 is an isometry, hence kWnk ≤ kBnk and kBnk = kWn(U0 ) k ≤ kWnk. Finally

kWnk ≤ kBnk = kψ(Bn)k = kψ(BnRn)k ≤ kψ(Wn)k ≤ kWnk

∗ −n for every such Wn. A similar computation gives the case where Wn = (U0 ) Bn, for n < 0. Suppose now that W ∈ A and that ψ(W ) = 0. We have that ϕλ is isometric in A by construction and also isometric in A∞ (because the hypothesis of Proposition ?? is satisfied), so that we may apply Proposition 7.3 and obtain X n Φλ(W ) ∼ λ Wn; n∈Z taking the image under ψ gives X n Φλ(ψ(W )) ∼ λ ψ(Wn) = 0, n∈Z hence ψ(Wn) = 0 thus Wn = 0 for every n and W = 0.

∗ The unique C -algebra constructed from P is called O3. The above result says that ∗ ∗ any unital C -algebra generated by three elements Uj, j = 0, 1, 2 such that Uj Uj = 1 and P ∗ ∗ ∗ UjUj = 1 is isometric to O3 (in the C -case, it is easy to see that the property Uj Ui = 0 when i 6= j follows from the two properties above). Simplicity: let I be a proper two-sided closed ideal in O3; the quotient algebra O3/I is ∗ 0 0∗ 0 P 0 0∗ a C -algebra generated by three elements Uj = π(Uj), such that Uj Ui = δi,j1, j UjUj = 1, therefore this quotient map is isometric and I = 0. Thus O3 is simple. Extensions by compacts. Let E be a Hilbertian model for Q. We know ??? that we have a map from E to O3: it is a quotient map. ∗ We have described O3; all the proofs generalize easily to the C -algebra On generated Pn−1 by n partial isometries such that j=0 = 1.

Embedding Q in O4;

37 8. Krivine’s theorem

Theorem 8.1. ([K], see also [Le], [MiS]) Let X be a Banach space and let (xn) be a sequence in X with no Cauchy subsequence. There exists p ∈ [1, ∞] such that `p (or c0 if p = ∞) is block finitely representable in the span of the given sequence. In other words, there exists p ∈ [1, ∞] such that for every k and ε > 0, we can find successive blocks k y1, . . . , yk of the sequence (xn) such that for all scalars (ai)i=1

(1 − ε)(Σ_{i=1}^{k} |a_i|^p)^{1/p} ≤ ‖Σ_{i=1}^{k} a_i y_i‖ ≤ (1 + ε)(Σ_{i=1}^{k} |a_i|^p)^{1/p}.

We proceed by successive reductions of the problem, each time constructing a space with basis, block finitely representable in the preceding, thus block finitely representable in the given sequence. We assume that the scalars are real. The first reduction is to a space with a monotone basis and a norm invariant by spreading. This is given by any monotone spreading model generated by the sequence (xn), as explained before in section 7. Building unconditionality We shall use an operator trick with the right shift defined on our spreading invariant ∞ space. After the first reduction we have a Banach space Y with a monotone basis (fn)n=1 and a norm invariant under spreading. In particular the right shift R on Y defined by Rfn = fn+1 is an isometry on Y . It follows that the real spectrum of R is contained in [−1, +1]. Also, it is easy to check that R + IY is not onto (check that f1 is not in the range) hence −1 belongs to the boundary of the real spectrum of R. It follows by Lemma 3.3 that one can find for every ε > 0 a vector y ∈ Y such that kyk = 1 and ky + Ryk < ε. PN−1 N One can assume that y has finite support, y = i=1 aifi. Consider y0 = y, y1 = R y, 2N kN y2 = R y, . . . , yk = R y, and so on... It is easy to check that this sequence (yk) is invariant under spreading and that changing one sign in a linear combination gives X X X

‖Σ_{i≠i_0} a_i y_i − a_{i_0} y_{i_0}‖ ≤ ‖Σ_{i≠i_0} a_i y_i + a_{i_0} R y_{i_0}‖ + ε|a_{i_0}| = ‖Σ_i a_i y_i‖ + ε|a_{i_0}|.

For every given integer n > 0 and ε = 1/n we can find such a vector y^{(n)} with ‖y^{(n)} + Ry^{(n)}‖ < 1/n and we form the sequence y_1^{(n)}, . . . , y_k^{(n)}, . . . as above. This sequence is spreading invariant. Then in the ultrapower Ỹ we obtain for every k ≥ 1 a vector e_k = (y_k^{(n)})_n with the property that ‖R̃ e_k + e_k‖ = 0. This sequence (e_k) is invariant under spreading and also 1-unconditional because we obtain in the limit

‖Σ_{i≠i_0} a_i e_i − a_{i_0} e_{i_0}‖ ≤ ‖Σ_i a_i e_i‖

for every i0 and all scalars (ai)i. Exercise. Construct the unconditionality in the case of complex scalars.

38 Lemberg’s method: a space on Q+ ∞ At this point we have a space with a 1-unconditional basis (yn)n=1 invariant under spreading. We may define in a further ultrapower the vectors

f_q = (y_{1+[nq]})_n, for every non-negative rational q. Let Ξ be the closed subspace generated by (f_q)_{q≥0} and let Ξ_0 be the closed subspace of Ξ generated by (f_q)_{0≤q<1}. The family (f_q) is still invariant under spreading in the following sense: if q_1 < . . . < q_n and r_1 < . . . < r_n, then

‖Σ_i a_i f_{q_i}‖ = ‖Σ_i a_i f_{r_i}‖

for all scalars (a_i)_{i=1}^{n} (and it is equal to ‖Σ_i |a_i| f_{q_i}‖ because the family is 1-unconditional). We can consider elements of Ξ as (real) functions defined on Q_+ (for example, f_q is the function equal to 0 at every s ∈ Q_+, except f_q(q) = 1). We define operators D_n on Ξ_0 by

∀t ∈ Q+, (Dnf)(t) = f(nt mod 1).
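A quick added verification of the commutation used just below: for f ∈ Ξ_0 and m, n ≥ 1, (D_m D_n f)(t) = (D_n f)(mt mod 1) = f(n(mt mod 1) mod 1) = f(nmt mod 1) = (D_{mn} f)(t), since n(mt mod 1) and nmt differ by an integer; hence D_m D_n = D_{mn} = D_n D_m.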

The operators Dn commute. It is easy here to complexify the space Ξ by simply defining ΞC to be the space of complex functions f on Q+ such that |f| belongs to Ξ, and with the norm kfk = k |f| k. Common approximate eigenvectors

Since the operators (Dn) commute, it is possible by Corollary 7.1?? to find, for every integer N ≥ 2, scalars λ2, . . . , λN and a common approximate eigenvector g ∈ Ξ0 such that kgk = 1 and Dig ∼ λig for all Di, i = 2,...,N. We can replace each λi by |λi| and g by |g| because |Dig| = Di|g|; we assume therefore that λi ≥ 0, i = 2,...,N, and g ≥ 0 in what follows. One shows that ln λi/ ln i is constant; this is not totally obvious: let R be the right shift by 1 on Ξ (defined by (Rf)(t) = f(t − 1)); if 2m < 3n, we see that

Σ_{j=0}^{2^m − 1} R^j g ≤ Σ_{j=0}^{3^n − 1} R^j g

in the Banach lattice Ξ_C, thus

‖(D_2)^m g‖ = ‖Σ_{j=0}^{2^m − 1} R^j g‖ ≤ ‖Σ_{j=0}^{3^n − 1} R^j g‖ = ‖(D_3)^n g‖,

therefore λ_2^m ≤ λ_3^n, and this yields that ln λ_2 / ln λ_3 ≤ ln 2 / ln 3; the argument can be reversed to get ln λ_2 / ln 2 = ln λ_3 / ln 3, hence there exists p ∈ [1, +∞] (1/p = ln λ_i / ln i) such that λ_i = i^{1/p} for i = 1, . . . , N.

39 m Construction of `p Assume p < ∞. We choose N so large that the set of all vectors in Rm with coordinates of the form (k/N)1/p, k integer with 0 ≤ k ≤ N, gives a good approximation for the positive m part of the unit ball of `p (in more precise terms: an ε-net for some small ε > 0). Let i y1, . . . , ym be defined by yi = R y for i = 1, . . . , m, where y ∈ Ξ0 satisfies kyk = 1, 1/p Diy ∼ i y for i = 1, . . . , Nm. We obtain

‖(k_1/N)^{1/p} y_1 + · · · + (k_m/N)^{1/p} y_m‖ ≃ N^{−1/p} ‖D_{k_1} y + R D_{k_2} y + · · · + R^{m−1} D_{k_m} y‖ = N^{−1/p} ‖D_{k_1+k_2+···+k_m} y‖ ≃ ((k_1 + k_2 + · · · + k_m)/N)^{1/p}.

If p = ∞, we have D_2 y ∼ y and then D_{2^n} y ∼ y. If y_i = R^i y, i = 1, . . . , 2^n, we get that ‖Σ_i ±y_i‖ ≃ ‖y‖ = 1, hence (y_1, . . . , y_{2^n}) is well equivalent to the usual vector basis of ℓ_∞^{2^n}.

9. K-theory of Banach algebras See [Bl], [Ta], [WO], [C2], [Sk]. We work in this section with complex Banach spaces and complex Banach algebras. Let A be a unital Banach algebra over C. We denote by Mn(A) the unital algebra of n × n matrices with entries in A. It can be identified with Mn ⊗A, where Mn = Mn(C). It is easy to see that Mn(A) is a Banach algebra, but we will not insist on defining a Banach algebra norm on it. We denote by 1n and 0n respectively the unit matrix and the zero matrix in Mn, and by 1n,A = 1n ⊗ 1A and 0n,A = 0n ⊗ 0A the unit matrix and the zero matrix in Mn(A). Given a ∈ Mn(A) and b ∈ Mp(A), we denote by a ⊕ b the matrix in Mn+p(A) equal to µ ¶ a 0 . 0 b

When X is a Banach space and A = L(X), the algebra Mn(A) is naturally identified n to L(X ). Let GLn = GLn(C) denote the group of complex n × n invertible matrices, and GLn(A) the (topological) group of n × n invertible matrices with entries in A. By (0) GL(A) = GL1(A) we denote the group of invertible elements in A, and by GL (A) the connected component of the identity 1A in this group.

Exercise 9.1. Show that GLn(C) is connected (let M be an invertible matrix; since σ(M) is finite and does not contain 0, one can find µ 6= 0 such that the half-line R+µ does not −1 intersect σ(M) ∪ {1}; consider then Mt = (1 − tµ) (M − tµId), t varying from 0 to +∞). The following fact will be very useful to the discussion. Proposition 9.1. For every unital Banach algebra B, the set of all finite products

eb1 eb2 ... ebn ,

40 (0) for bi ∈ B and n ∈ N, is equal to the connected component GL (B) of 1B in GL(B). b Proof. Our first remark is that there is an obvious continuous path from e to 1B, namely t → etb, t varying from 1 to 0, hence all finite products of exponentials belong to GL(0)(B). For proving the converse, we may assume that the norm on B is a Banach algebra norm. We observe that the Taylor series of ln(1B + x) converges when kxk < 1. It follows that b each a ∈ B such that k1B − ak < 1 is an exponential a = e , where b = ln(1B + (a − 1B)). If u is invertible and if kv − uk < ku−1k−1, this implies that v = eb u for some b ∈ B. With these remarks it is easy to see that the set of finite products of exponentials is open and (0) closed in GL(B) and since it contains 1B, it is equal to GL (B).

Corollary 9.1. Let B be a unital Banach algebra and let J be a closed two-sided ideal in B. Then every invertible element in GL(0)(B/J) can be lifted to an invertible element in GL(0)(B). Proof. Simply write our invertible element in GL(0)(B/J) as product of exponentials b1 bn e ... e , with bi ∈ B/J, and lift each bi arbitrarily in B.
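For example (an added remark), for B = M_n(C), Exercise 9.1 gives GL^{(0)}(M_n(C)) = GL_n(C), so Proposition 9.1 says that every invertible complex matrix is a finite product of exponentials; in this case a single exponential already suffices, since every invertible complex matrix has a logarithm (for instance via the Jordan form).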

Similarity, equivalence and homotopy Let p and q be two idempotents in A, i.e. p2 = p, q2 = q. We say that p and q are equivalent in A if there exist x, y ∈ A such that p = yx, q = xy. We say that p and q are similar in A if there exists an invertible u in A such that q = upu−1. It is clear that similar implies equivalent. We say that p and q are homotopic in A if there exists a continuous 2 path t → pt from [0, 1] into A such that p0 = p, p1 = q and pt = pt for all t ∈ [0, 1]. We shall also say that two invertible elements a, b ∈ GL(A) are homotopic in GL(A) if there exists a continuous path in GL(A) joining a and b.

Let B be a unital Banach algebra. Consider the following path in GL_2(B):

r_{θ,B} = [ (cos θ)1_B   −(sin θ)1_B ; (sin θ)1_B   (cos θ)1_B ].

Using (rθ,B) for θ varying from θ = 0 to θ = π/2 we build a continuous path rθ,B (a ⊕ b) r−θ,B in M2(B) between a ⊕ b and b ⊕ a, for any a, b ∈ B. Observe that when a and b are invertible in B this path is contained in GL2(B). When a, b are idempotents, it is a path of idempotents in M2(B). We get in this way an homotopy in GL2(B) between −1 −1 a ⊕ 1B and 1B ⊕ a. Multiplying it by 1B ⊕ a , we get an homotopy between a ⊕ a and −1 (0) 12,B. In particular a ⊕ a ∈ GL2 (B). This is an important fact that will be used several times later. If we apply this to B = Mn(A), and if a ∈ GLn(A), we get an homotopy in −1 GL2n(A) between a ⊕ 1n,A and 1n,A ⊕ a, and an homotopy between a ⊕ a and 12n,A. In −1 (0) particular a ⊕ a ∈ GL2n (A). We have thus obtained

Lemma 9.1. If p and q are idempotents in Mn(A), then p ⊕ q and q ⊕ p are homotopic in −1 M2n(A). If a, b ∈ GLn(A), then a ⊕ b and b ⊕ a are homotopic in GL2n(A), and a ⊕ a (0) belongs to GL2n (A).
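As an added sanity check of the path used above: at θ = π/2 the conjugation indeed exchanges the two blocks, since r_{π/2,B}(a ⊕ b)r_{−π/2,B} = [0 −1; 1 0][a 0; 0 b][0 1; −1 0] = [0 −b; a 0][0 1; −1 0] = [b 0; 0 a] (all scalar entries understood as multiples of 1_B).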

41 Exercises and examples 9.2. 1. Show that equivalence is an equivalence relation (if q = vu and r = uv, consider uqx and yqv). 2. In Mn(C), two idempotents are equivalent iff they have the same rank. They are then similar and homotopic (because GLn(C) is arcwise connected). 3. Let X be a Banach space and let p and q be two projections in L(X). Show that p and q are equivalent in L(X) iff the ranges pX and qX are isomorphic Banach spaces. Consequently, any rank one projections p and q are equivalent (and also similar and homotopic) in L(X). For example, assume that X is a Banach space such that X ' X2, one has that 2 IX ⊕ IX and IX ⊕ 0 are equivalent in L(X ). Indeed, let U : X → X ⊕ X be an onto isomorphism and let V : X ⊕ X → X be its inverse. In M2(L(X)) we have the equations µ ¶ µ ¶ µ ¶ µ ¶ I 0 V I 0 V e = X = ( U 0 ) ; f = X = ( U 0 ) . 0 IX 0 0 0 0 2 This means that the two idempotents e and f in L(X ) = M2(L(X)) are equivalent.

4. If p is an idempotent in B, then there is an homotopy in M2(B) between p⊕(1B −p) and 1B ⊕ 0B: consider the path of idempotents in M2(B) µ 2 ¶ p + (1B − p) sin θ (1B − p) sin θ cos θ 2 , (1B − p) sin θ cos θ (1B − p) cos θ where θ varies from 0 to π/2. More generally, if p and q are two idempotents in B such that pq = qp = 0, then there exists an homotopy in M2(B) between p ⊕ q and (p + q) ⊕ 0B. Applying this to B = Mn(A), if p and q are two idempotents in Mn(A) such that pq = qp = 0, then there exists an homotopy in M2n(A) between p ⊕ q and (p + q) ⊕ 0n,A. We shall now investigate the relations between the three notions of equivalence, simi- larity and homotopy. Proposition 9.2. Let B be a unital complex Banach algebra. If two idempotents p and q are similar in B, they are equivalent in B; if p and q are homotopic in B, they are similar in B. Proof. The first assertion is obvious; for the second we need the following lemma Lemma 9.2. For two idempotents p and q in a unital Banach algebra B (with a Banach algebra norm), kp − qk < (kpk + kqk)−1 implies that p and q are similar.

Proof. Let u = qp + (1B − q)(1B − p). Then qu = qp = up, and u = 1B − p − q + 2qp = 1B − q(q − p) + (q − p)p is invertible when kq − pk(kpk + kqk) < 1.

Suppose that pt is a continuous path of idempotent elements in B, t ∈ [0, 1]. Then kptk is bounded by some M, and the condition kpt − psk < 1/(2M) implies that pt and ps are similar by Lemma 9.2. By uniform continuity we can find ε > 0 such that kpt − psk < 1/(2M) whenever |t − s| < ε. We can then pass from p0 to p1 by a finite number of similarities (actually, we may find a continuous path of invertible elements (ut) −1 such that pt = ut p0 ut for every t ∈ [0, 1]).

42 Proposition 9.3. Let B be a unital complex Banach algebra. If two idempotents p and q are equivalent in B, then p ⊕ 0 and q ⊕ 0 are similar in M2(B); if p and q are similar in B, they are homotopic in M2(B). Proof. Suppose first that p and q are equivalent, with p = xy, q = yx. We may assume that x = pxq and y = qyp. Indeed, we have p = xyxyxyxyxy = (xy)x(yx)(yx)y(xy), which gives that p = (pxq)(qyp) and similarly for q. We can write µ ¶ µ ¶ µ ¶ µ ¶ 0 0 1 − q y q 0 1 − q y = , 0 p x 1 − p 0 0 x 1 − p and we know that p ⊕ 0 and 0 ⊕ p are similar (actually homotopic). Suppose now that p and q are similar in B; we can write q = upu−1, with u invertible −1 in B. Let vt denote a path in GL2(B) from 1B ⊕ 1B to u ⊕ u (Lemma 9.1). We get an −1 homotopy between p ⊕ 0B and q ⊕ 0B in M2(B) given by rt = vt (p ⊕ 0B) vt . Examples 9.1.

1. Triangular matrices. Let T2(A) denote the algebra of 2×2 upper triangular matrices with entries in A. Every triangular idempotent is homotopic to its diagonal, using the path µ ¶ µ ¶ µ ¶ µ ¶ λ−1 0 a b λ 0 a b/λ = , 0 1 0 d 0 1 0 d λ varying from 1 to +∞. A similar reasoning applies to the algebra T (Y,X) of operators on a Banach space X that leave a complemented subspace Y ⊂ X invariant; let Z be the kernel of a projection from X onto Y . Then every idempotent in this algebra T (Y,X) is homotopic to a projection sending Y to Y and Z to Z. 2. Vector bundles. Let K be a compact topological space. Consider the Banach algebra C(K) of continuous complex functions on K. An idempotent in Mn(A) identifies with a continuous function p from K to Mn, such that p(x) is a projection for every x ∈ K. This data defines a (complex) vector bundle over K; for every x ∈ K, the fiber at x is the n n complex vector space Fx = p(x)(C ) ⊂ C ; the dimension of the fiber at x ∈ K is the rank of p(x). Assume that K = [0, 1]. Setting ps(t) = p(st), we get an homotopy between p and the constant function p(0) ∈ Mn. The same reasoning applies to any contractible compact space. Example of the Hopf fibration. The space P1(C) admits a canonical (complex) line 2 bundle. Recall that P1(C) is the set of complex lines in C . To every line t ∈ P1(C) we associate the one-dimensional vector space t ⊂ C2. Every line t can be parametrized by a non zero point ζ = (z1, z2) ∈ t. To t ∈ P1(C) containing (z1, z2) we associate the orthogonal projection on the line t, µ ¶ 1 z1z1 z1z2 p(t) = 2 ; |ζ| z2z1 z2z2 if we represent t by (z, 1) (this is possible except for the line passing trough (1, 0)) we get a continuous idempotent defined for z = ρ eiθ ∈ C by µ ¶ 1 ρ2 ρ eiθ p(z) = , ρ2 + 1 ρ e−iθ 1

43 and this converges when z tends to infinity to µ ¶ 1 0 p(∞) = 0 0 that corresponds to the line t passing trough (1, 0). It is clear that P1(C) is homeomorphic to the one-point compactification of C, or also to the sphere S2. Identifying S2 with the closed unit disc D in C, with all points on the unit circle identified to a single point, we get from the preceding p, setting r = tan(πρ/2), for z = r eiθ ∈ D µ ¶ 1 1 − cos πr eiθ sin πr q(z) = 2 e−iθ sin πr 1 + cos πr

(note that q(eiθ) = 1 ⊕ 0 for every θ, so that q is constant on the unit circle T and can therefore be considered as a function on S2). This idempotent q is not equivalent to a 2 constant function on S (more generally, for every integer k ≥ 0, q ⊕ 0k is not equivalent 2 in Mk+2(C(S )) to a constant idempotent; this is a relatively difficult exercise, where the decisive argument is Brouwer’s fixed point theorem or the notion of degree). ∗ 3. Cuntz algebras. For every n ≥ 2 let On be the C -algebra generated in L(`2) ∗ Pn ∗ by n into isometries U1,...,Un (so Ui Ui = I for every i) such that i=1 UiUi = I (it ∗ follows that Uj Ui = 0 when i 6= j). It is proved in [C1] that On does not depend upon the particular choice of (Ui) (see section 7 for O3). For every i = 1, . . . , n, we have a projection ∗ ∗ ∗ pi = UiUi equivalent to I since pi = UiUi and I = Ui Ui. The algebra En is generated in L(`2) by n into isometries V1,...,Vn such that Xn ∗ ViVi < I i=1

∗ ∗ (it also follows that Vj Vi = 0 when i 6= j). The projections qi = ViVi , i = 1, . . . , n are again equivalent to I. The semi-group of classes of idempotents

We introduce now the algebra M∞(A) equal to the union of the Mn(A), the embedding of Mn(A) into Mn+p(A) being given by a → a⊕0p,A; we say that two idempotents a and b are equivalent, similar or homotopic in M∞(A) iff there is some n such that a, b ∈ Mn(A) and a and b are equivalent, similar or homotopic in Mn(A); in M∞(A) the three notions of comparison for idempotents coincide. Let Pr(A) denote the set of equivalence classes of idempotents in M∞(A). Let {p} denote the equivalence class of an idempotent p of M∞(A) in Pr(A). Additive structure on Pr(A) If {p} and {q} are two classes in Pr(A), we define their sum by {p} + {q} = {p ⊕ q}. Exercise. Show that this operation is well defined, associative and that {0} is a neutral element. In other words, Pr(A) is a monoid. This addition is commutative; indeed, we know by Lemma 9.1 that p ⊕ q and q ⊕ p are homotopic.

44 Examples 9.2. 1. In Mn(C), we know that two idempotents are equivalent iff they have same rank. This shows that Pr(C) can be identified to Z+. Furthermore, the addition corresponds to the addition of ranks. This shows that Pr(C) is simply Z+ (as monoid), where 1 ∈ Z is identified to the class of rank one projections. If we consider Pr(Mn), it is clear that M∞(Mn) ' M∞(C) and hence Pr(Mn) ' Z+, where again 1 ∈ Z+ corresponds to the class of rank one projections. 2. {1n,A} = n {1A}.

3. If p is an idempotent in Mn(A), then {p} + {1n,A − p} = {1n,A}. More generally, if p and q are two idempotents in Mn(A) such that pq = qp = 0, then {p} + {q} = {p + q}. This follows directly from Example 9.2, 4 above. 4. Let A, B be two unital Banach algebras. It is clear that Pr(A × B) ' Pr(A) × Pr(B). Let T2(A) be the algebra of 2 × 2 upper triangular matrices with entries in A.A matrix in Mn(T2(A)) can be considered as an element of T2(Mn(A)), after some reindexing. Applying the deformation from Example 9.1, 1 with A replaced by Mn(A), we see that 2 Pr(T2(A)) ' (Pr(A)) . For the algebra T (Y,X) of operators on a Banach space X that leave a complemented subspace Y ⊂ X invariant, we see that Pr(T (Y,X)) ' Pr(L(Y )) × Pr(L(Z)), where Z is the kernel of a projection from X onto Y . 5. Vector bundles. We said that an idempotent in Mn(A), A = C(K) identifies to a continuous map from K to the space of idempotents in Mn. If K splits into two closed and open subsets K1 and K2, then Pr(C(K)) is isomorphic to Pr(C(K1)) × Pr(C(K2)). If K is connected, the rank of p(t) is constant when t varies in K, but this rank is not enough to characterize the class of p in Pr(C(K)). This rank only gives the dimension (or rank) of the associated vector bundle on K. When K is contractible, every idempotent p ∈ M∞(C(K)) is equivalent to a constant function p(x0) ∈ Mn, hence Pr(C(K)) ' Z+ in this case. In any case Z+ is always a submonoid of Pr(C(K)), corresponding to the classes of constant functions p (or of trivial bundles). 2 The example of the Hopf fibration yields an idempotent p ∈ M2(C(S )) which is not 2 2 2 equivalent to a constant function on S ; the monoid Pr(C(S )) contains at least Z+. 6. When A = L(X), the class {IX } in the monoid Pr(L(X)) is the class of comple- mented subspaces of Xn isomorphic to X. If X is a Banach space such that X ' X2, we have seen in Example 9.2, 3 that {IX ⊕ IX } = {IX } in M∞(L(X)). One has therefore 2{IX } = {IX } + {IX } = {IX }. Indeed, we know that IX ⊕ IX and IX ⊕ 0 are equivalent, and this implies by definition of the addition that {IX } + {IX } = {IX }. Several examples show that the monoid Pr(A) may fail the cancellation property (but 1 it is true in Pr(C) ' Z+): for A = L(L ) for example, we have

`1 ⊕ L1 ' L1 ⊕ L1 ' L1 ' 0 ⊕ L1, hence identifying projections and ranges (see Exercise 9.2,3),

{`1} + {L1} = {L1} + {L1} = {L1}. When the Banach space X is isomorphic to its hyperplanes, we have in A = L(X), {IX } = {IX − p} when p is a rank one projector, hence

{IX } + {p} = {IX }.

7. Cuntz algebras. In On we have n projections pi = Ui Ui^∗, each equivalent to I, and pi pj = 0 when i ≠ j. Then the relation Σ_{i=1}^n Ui Ui^∗ = I implies that n{1On } = {1On }, using case 3 above.
In En, Q = I − Σ_{i=1}^n Vi Vi^∗ is a non zero projection and we get n{1En } + {Q} = {1En }.

Group associated to an additive monoid M. Additive group K0(A)

On the set of couples (m, n) ∈ M^2, we define the equivalence relation (m, n) ∼ (m′, n′) if there exists r ∈ M such that m + n′ + r = m′ + n + r. The quotient of M by this relation is an additive group G. If φ denotes the map from M to G that sends m ∈ M to the class [(m, 0)] of (m, 0) in G, there exists for every monoid morphism f from M to a group H a unique group morphism f̄ : G → H such that f = f̄ ◦ φ. Every element α ∈ G can be written α = φ(m) − φ(n) for some m, n ∈ M.
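As a quick illustration of this symmetrisation (the first half is a standard remark, the second half uses the L(L1) example above): for the cancellative monoid M = Z+ the relation reduces to m + n′ = m′ + n, and the associated group is Z, with φ(m) − φ(n) corresponding to m − n. The auxiliary element r matters precisely when cancellation fails; for instance, with A = L(L1),
$$ \{\ell^1\} + \{L^1\} = \{L^1\} + \{L^1\} \quad\text{while}\quad \{\ell^1\} \ne \{L^1\}, $$
so φ({`1}) = φ({L1}) in the associated group although the two classes are distinct in Pr(A).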

Definition 9.1. If A is a unital Banach algebra, we denote by K0(A) the group associated to the monoid Pr(A). We denote by [p] the image φ({p}) by φ : Pr(A) → K0(A) of the class {p} in Pr(A) of an idempotent p ∈ M∞(A).
Every element of K0(A) can be written [p] − [q] for some idempotents p, q in M∞(A). Every α = [p] − [q] ∈ K0(A) can be written [p′] − [1n,A] for some n and some idempotent p′ in M2n(A). This follows from Example 9.3, 3 below. We have [p] = [q] if and only if there exists r such that {p} + {r} = {q} + {r}. Since r is an idempotent in some Mn(A), there exists s ∈ Mn(A) so that {r} + {s} = {1n,A} (we may simply choose s = 1n,A − r), and finally [p] = [q] if and only if there exists n such that {p} + {1n,A} = {q} + {1n,A}.
Examples 9.3.
1. For C, we obtain of course K0(C) ' Z. Recall that 1 ∈ Z corresponds to the class of rank one projections. We also have K0(Mn) ' Z for every integer n ≥ 1, where again 1 ∈ Z corresponds to the class of rank one projections.
2. [1n,A] = n [1A] (since we had {1n,A} = n {1A}).
3. If p is an idempotent in Mn(A), [p] + [1n,A − p] = [1n,A]. This is because {p} + {1n,A − p} = {1n,A}. More generally, if p and q are two idempotents in Mn(A) such that pq = qp = 0, then [p] + [q] = [p + q].
4. Let p be a finite rank projection in some Banach space X isomorphic to its hyperplanes. Then [p] = 0. Indeed, we saw in Example 9.2, 6 that {IX } + {q} = {IX } if q has rank one, thus [q] = 0 and when p has rank n, we get [p] = n[q] = 0.
5. Suppose A = L(X), where X is a Banach space such that X ' X^2; we have [IX ] + [IX ] = [12,A] = [IX ], thus [IX ] = 0. Since {IX } is the class in Pr(L(X)) of complemented subspaces of X^n isomorphic to X, the class [IX ] is 0 in K0(L(X)) if and only if there exists an integer n such that X^n ' X^{n+1}. It is relatively usual for many classical spaces that X ' X^2. I know of no example where X 6' X^2 but X^2 ' X^3. Suppose that X ' `p(X) for some 1 ≤ p < ∞. Then every complemented subspace Y of X containing a complemented copy of X is isomorphic to X. This is Pełczyński's decomposition method. If X = Y ⊕ Z and Y = X ⊕ U we have

X ⊕ Y ' `p(X) ⊕ X ⊕ U ' `p(X) ⊕ U ' Y

and X ⊕ Y ' `p(Y ⊕ Z) ⊕ Y ' `p(Y ⊕ Z) ' X. This implies that X is isomorphic to its square and isomorphic to its hyperplanes.
6. K0(L(X)) for a primary Banach space isomorphic to its square. The space X is said to be primary when for every decomposition X = Y ⊕ Z, one has Y ' X or Z ' X. We know for example after Enflo [E] that Lp[0, 1] is primary when 1 ≤ p < ∞ ([AEO], [M1]). The space `p is more than primary, it is prime; recall that a Banach space is said to be prime if it is isomorphic to every infinite-dimensional complemented subspace of itself. The only known examples before [GM2] were c0 and `p (1 ≤ p ≤ ∞). These were shown to be prime by Pełczyński [P], apart from `∞ which is due to Lindenstrauss [L1]. If we assume that X^2 is primary, it follows that X ' X^2, and X^n ' X is primary for every n.
Let A = L(X). For every idempotent p ∈ Mn(L(X)), either [p] = [1n,A] = [IX ] or [1n,A − p] = [IX ]. We also know that [IX ] = 0 because X ' X^2. In either case [p] = 0, hence K0(L(X)) = {0}. In particular for a Hilbert space H we obtain K0(L(H)) = {0}. This shows that in most classical cases, the K0-theory of L(X) is trivial (and thus mostly useless). It will not be so for the exotic spaces of sections 10, 11 and 12.
7. It is clear that K0(A × B) ' K0(A) × K0(B). Since Pr(T2(A)) ' (Pr(A))^2, it follows that K0(T2(A)) ' K0(A)^2. For the algebra T (Y,X) introduced previously, we see that K0(T (Y,X)) ' K0(L(Y )) × K0(L(Z)), where Z is the kernel of a projection from X onto Y . On the contrary, for M2(A) as well as for Mn(A), we obtain K0(Mn(A)) ' K0(A).
8. Let A = C(K), where K is compact and connected. We see that K0(C(K)) contains a canonical copy of Z, corresponding to idempotents in Mn(C(K)) given by constant functions from K to Mn. When K is contractible, we know that every idempotent in Mn(C(K)) is homotopic to a constant function, hence we get K0(C(K)) ' Z.
9. Simpler presentation of the case of a Banach space X such that X ⊕ X is isomorphic to a complemented subspace of X. In this case we can avoid the use of M∞(L(X)) and the symmetrisation from Pr to K0 in the following way: let X ' X ⊕ X ⊕ Y . To every T ∈ L(X) we associate, in block notation with respect to the decomposition X = X ⊕ X ⊕ Y,
$$ i_1(T) = \begin{pmatrix} T & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}; \qquad i_2(T) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & T & 0 \\ 0 & 0 & 0 \end{pmatrix}. $$

Then i1(T ) and i2(T ) are equivalent to T ; when T and U are equivalent, then i1(T ) and i2(U) are homotopic in L(X), and given T1, . . . , Tn we can find, using products of i1 and i2, operators U1, . . . , Un such that Ui ∼ Ti and such that the Ui appear as disjoint diagonal blocks in a larger decomposition of X. We may then define the sum {T1} + ··· + {Tn} as {U1 + ··· + Un}.

10. Cuntz algebras. In On the relation n{1On } = {1On } implies that (n−1)[1On ] = 0.

Cuntz proved in [C2] that K0(On) = Z/(n − 1)Z. In En we obtain (n − 1)[1En ] + [Q] = 0.

K0 functor

Let ϕ be a morphism of unital Banach algebras from A to B. Letting ϕ act on each entry of a matrix in Mn(A), we get a unital algebra morphism ϕ^(n) from Mn(A) to

Mn(B) for every n ≥ 1, sending idempotents in Mn(A) to idempotents in Mn(B). Clearly, homotopic idempotents have homotopic images. This gives a morphism of monoids from Pr(A) to Pr(B), then a group morphism ϕ∗ from K0(A) to K0(B). One can check that A → K0(A), ϕ → ϕ∗ defines a functor from the category of unital Banach algebras to the category of additive groups.

An example. For every n ≥ 0 let ϕn be the algebra morphism from M_{3^n} to M_{3^{n+1}} defined by ϕn(a) = a ⊕ a ⊕ a. For every n, we have that K0(M_{3^n}) = Z, and the map ϕn sends rank one projections to rank three projections, hence ϕn,∗(1) = 3. If we consider the Banach algebra A obtained as inductive limit of the sequence (M_{3^n}) with the (ϕn) as successive embeddings, we obtain a chain of maps

Z → Z → Z → · · ·

where each arrow is the map ϕ∗(k) = 3k; this implies (with some work) that

$$ K_0(A) \;\simeq\; \Bigl\{ \frac{k}{3^{n}} \in \mathbf{Q} : k \in \mathbf{Z},\ n \ge 0 \Bigr\}. $$

We may associate to 1 ∈ Q the class [1A]. This algebra A is closely related to the algebra B appearing in section 7 with the discussion of P; actually A is the completion of B under its (unique) C∗-norm.
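Here is a hedged sketch of this identification (a routine direct-limit computation; the only assumption is the standard fact, alluded to above as "some work", that K0 is compatible with such inductive limits):
$$ K_0(A) \;\simeq\; \varinjlim\Bigl( \mathbf{Z} \xrightarrow{\times 3} \mathbf{Z} \xrightarrow{\times 3} \mathbf{Z} \xrightarrow{\times 3} \cdots \Bigr) \;\simeq\; \Bigl\{ \tfrac{k}{3^{n}} : k \in \mathbf{Z},\ n \ge 0 \Bigr\}, \qquad k \text{ at stage } n \;\longmapsto\; \tfrac{k}{3^{n}}, $$
so the class of a rank one projection at stage n is sent to 3^{-n}, and [1A], represented by 3^n at stage n, is sent to 1, in accordance with the normalisation above.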

K0 for non unital algebras

Let A be a Banach algebra without unit. We consider the unital algebra A+ = A ⊕ C from section 2. Then A is a closed two-sided ideal of A+ and A+/A is canonically isomorphic to C. Let π be the projection from A+ onto C, and let i : C → A+ be given by i(λ) = λ1+. We have π ◦ i = IdC. By the functorial character we get π∗ : K0(A+) → K0(C), i∗ : K0(C) → K0(A+), and π∗ ◦ i∗ = Id. Hence K0(C) ' Z appears as a factor in K0(A+); we define K0(A) as the kernel of π∗:

K0(A) = ker π∗ ⊂ K0(A+).

Let α ∈ K0(A). We know that we can write α = [p] − [1n,A+] ∈ K0(A+), where p is an idempotent in M2n(A+); by definition of K0(A) we have π∗(α) = 0. This means that π(p) and 1n ⊕ 0n are equivalent idempotents in M2n(C). Hence there exists u ∈ GL2n(C) such that u π(p) u^{-1} = 1n ⊕ 0n. Then, setting ũ = u ⊗ 1A+, we see that r = ũ p ũ^{-1} is an idempotent in M2n(A+), equivalent to p, and r = 1n,A+ ⊕ 0n,A + a, with a ∈ M2n(A). Finally, replacing p by r we obtain:

Every α in K0(A) can be expressed as α = [r] − [1n,A+], where r is an idempotent in M2n(A+) of the form r = 1n,A+ ⊕ 0n,A + a, with a ∈ M2n(A).
Remark 9.1. The above construction is certainly necessary for Banach algebras without unit that have no idempotent except 0 (for example, the algebra A = C0(K) of continuous functions on a compact connected space K, vanishing at some point x0 ∈ K). Some algebras without unit, like K(X), already have idempotents (finite rank projections). If p

is an idempotent in A, its class in the above construction is given by [1A+] − [1A+ − p] or [1A+ ⊕ p] − [1A+]. One can check that when A is already unital, the above construction defines the same group K0(A). Indeed we have the exact sequence $A \xrightarrow{\ i\ } A^{+} \xrightarrow{\ \pi\ } \mathbf{C}$, giving

$$ K_0(A) \xrightarrow{\ i_*\ } K_0(A^{+}) \xrightarrow{\ \pi_*\ } K_0(\mathbf{C}), $$
where K0(A) here is defined according to our first definition for unital Banach algebras. By functoriality it is clear that i∗(K0(A)) is contained in the kernel of π∗. Conversely, let α ∈ K0(A+) belong to ker π∗. We write α = [p] − [1n,A+], where p = 1n,A+ ⊕ 0n,A + a is an idempotent in M2n(A+), with a ∈ M2n(A). We see that q = 1A+ − 1A is an idempotent in A+ such that qA = Aq = 0. We may write p as p = p′ + q′, where p′ = 1n,A ⊕ 0n,A + a is an idempotent in M2n(A), and q′ is an idempotent in M2n(A+) (q′ is the direct sum of n copies of q) such that q′p′ = p′q′ = 0, hence [p] = [p′] + [q′]; similarly [1n,A+] = [1n,A] + [q′] and finally [p] − [1n,A+] = [p′] − [1n,A] belongs to the image of K0(A) in K0(A+).

The K0 functor extends now to the category of Banach algebras, not necessarily unital.
Examples.

1. Let K be a compact connected topological space. Let C0(K) = C_{x_0}(K) denote the closed ideal of C(K) consisting of continuous functions on K vanishing at some point x0 ∈ K; we see that C(K) ' C0(K)+, hence

K0(C(K)) ' K0(C0(K)) ⊕ Z.

When A = C([0, 1]), we have seen that K0(C([0, 1])) ' Z (true for any contractible space). For a contractible compact space K we get K0(C0(K)) = {0}. For C0(T), we also have K0(C(T)) = Z, thus K0(C0(T)) = {0}.
2. We have seen that P1(C) ' S^2 admits a canonical (complex) line bundle (Hopf fibration). This bundle gives a non zero class in K0(C0(S^2)): if we identify C0(S^2) to the space of continuous functions on C vanishing at infinity, the idempotent described in Example 9.1, 2 gives a non zero element [p] − [1 ⊕ 0] in K0(C0(S^2)).

Example of K0(S)

Here X is a Banach space, and we study the ideal S(X) of strictly singular operators.

Lemma. Let X = U ⊕ V , dim U = dim V = +∞, and let πU be the projection of U ⊕ V onto U. Let A denote the unital subalgebra of L(U ⊕ V ) generated by πU and S(U ⊕ V ). Let p be a projection in L(X), with the form p = πU + S, where S ∈ S(U ⊕ V ). Then p is equivalent (in A), either to a projection on Y , a finite codimensional subspace of U, or to a projection on U ⊕ E, where E is a finite dimensional subspace of V . In other words, p is equivalent to a projection (πU − r1) ⊕ r2, where r1 and r2 are finite rank projections in L(U) and L(V ) respectively. The quantity rank(r2) − rank(r1) is an invariant of the similarity class of p in A.
Proof. Let q = πU p πU = IU + S′, S′ ∈ S(U), considered as an operator on U. By Proposition 6.1, we know that q has a finite codimensional invariant subspace Z on which q gives an isomorphism a ∈ L(Z). Considering a new decomposition Z ⊕ W of U ⊕ V (notice that

finite rank projections belong to A, so the projections on Z and W belong to A), we get in block notation in Z ⊕ W
$$ p = \begin{pmatrix} a & b \\ c & d \end{pmatrix} = p^2, $$
with a invertible, b, c, d strictly singular; then p is similar in A to
$$ \begin{pmatrix} a^{-1} & 0 \\ -ca^{-1} & I_W \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} a & 0 \\ c & I_W \end{pmatrix} = \begin{pmatrix} I_Z & b' \\ 0 & d' \end{pmatrix} $$

(we know that a^{-1} ∈ A by Lemma 2.2 and Proposition 6.1) with b′, d′ strictly singular. It follows from the fact that the above matrix is an idempotent that b′d′ = 0 and d′^2 = d′; next
$$ \begin{pmatrix} I_Z & b' \\ 0 & I_W \end{pmatrix} \begin{pmatrix} I_Z & b' \\ 0 & d' \end{pmatrix} \begin{pmatrix} I_Z & -b' \\ 0 & I_W \end{pmatrix} = \begin{pmatrix} I_Z & 0 \\ 0 & d' \end{pmatrix} $$

is an idempotent equivalent in A to an idempotent of the desired form; indeed, d′ is a strictly singular projection, thus has finite rank. It only remains to move the finite dimensional image F of d′ into the correct position; if k denotes the codimension of Z in U and if dim F ≤ k, we can move F to G inside U, in such a way that Z ⊕ G is a direct sum; if dim F > k, we will move a k-dimensional subspace of F to some G inside U in such a way that U = Z ⊕ G, and the remaining part of F into V .
Let us give now a partial proof for the claim in the last line of the Lemma. Let r be a finite rank non zero projection in L(V ), and let F = rV ≠ {0}; we will show that q = πU ⊕ r is not similar to πU in A. If it were similar, we could find an invertible element u in A such that πU u = uq. Then quq gives an isomorphism from U ⊕ F onto U, thus a Fredholm operator in L(U ⊕ F ) with non zero index. But quq has a block decomposition in U ⊕ F
$$ \begin{pmatrix} \lambda I_U + s & br \\ rc & rdr \end{pmatrix}, $$
with s strictly singular and λ ≠ 0 (because u is invertible). This operator is a strictly singular perturbation of λπU , and should therefore have zero index, a contradiction.

We arrive now at the computation of K0(S(X)). Here A = S(X) and we assume of course that dim X = +∞; we can then identify A+ with the subalgebra of B = L(X) consisting of all operators λId + S, S ∈ A and λ ∈ C. Let now α ∈ K0(S(X)). There exists a projection p on X^{2n} of the form 1n,B + S, with S ∈ S(X^{2n}), such that α = [p] − [1n,B]. By the Lemma, we know that p is equivalent either to a projection on Y , a finite codimensional subspace of X^n, or to a projection on X^n ⊕ F , dim F < ∞. Furthermore we said that the integer k, equal to − codim Y or to dim F , characterizes α, hence K0(S(X)) is isomorphic to Z. We may choose for generator of K0(S(X)) the class of rank one projections (see Remark 9.1).

The above proof also applies to K(X). We get that K0(K(X)) ' Z. In the case of the Hilbert space, a more natural approach uses the fact that K(`2) is the inductive limit of the algebras (Mn).

Short exact sequence in K-theory

Suppose given a short exact sequence

$$ 0 \to I \xrightarrow{\ i\ } A \xrightarrow{\ \pi\ } A/I \to 0, $$

where I is a closed two-sided ideal in A, i the inclusion map and π the canonical quotient map. We get

$$ K_0(I) \xrightarrow{\ i_*\ } K_0(A) \xrightarrow{\ \pi_*\ } K_0(A/I) $$

exact at K0(A) (we just remark that π∗ ◦ i∗ = 0 by functoriality; the proof that ker π∗ is precisely equal to the range of i∗ uses Corollary 9.1 and Lemma 9.1).

Exercise. Prove the above: let α = [p] − [1n,A], with p idempotent in M2n(A), such that π∗(α) = 0. We know that for some m, the idempotent π(p)⊕1m,A/I is similar to 1n+m,A/I in Mn+m(A/I); apply now the last part of Lemma 9.1 to M2n+2m(A/I) and next apply Corollary 9.1.

Suspension and the group K1(A)

Let A be a Banach algebra. Denote by SA the (non unital) algebra of continuous maps f : T → A such that f(1) = 0A, the product being the pointwise product in A.
We will set K1(A) = K0(SA), but some comments are necessary. For defining K0(SA) we consider idempotents in Mn((SA)+). Such an idempotent p is an n × n matrix with entries in (SA)+. An element of (SA)+ identifies with a continuous function f from T to A+ such that f(1) = λ1+. The matrix p identifies with a continuous function p(t) with values in idempotents of Mn(A+) such that p(1) is a “scalar” matrix, i.e. a matrix q ⊗ 1A+, q ∈ Mn(C). Up to equivalence we can assume that p(1) = 1k,A+ ⊕ 0n−k,A+. By definition we get a homotopy from p(1) to p(1) obtained by travelling around the circle. This gives as explained after Lemma 9.2 a similarity u p(1) = p(1) u. But the special form of p(1) implies that u ∈ GLn(A+) leaves both factors (A+)^k and (A+)^{n−k} invariant, so u = v ⊕ w, with v ∈ GLk(A+) and w ∈ GLn−k(A+). We have therefore associated to any idempotent p in M∞((SA)+) an invertible v ∈ GLk(A+), for some k.
Conversely, let v ∈ GLk(A+); we may associate to v some w ∈ GLn−k(A+) such that v ⊕ w ∈ GL^{(0)}_n(A+) (for example n = 2k, w = v^{-1}). We can find a continuous path ut in GLn(A+) such that u0 = v ⊕ w and u1 = 1n,A+, and the path of idempotents p(t) = ut (1k,A+ ⊕ 0n−k,A+) ut^{-1}; we check that p(1) = p(0) (because u0 and u1 commute with p(0), and p(0) is scalar), thus p identifies with an idempotent in Mn((SA)+). If v′ belongs to the same connected component of GLk(A+), we may find a continuous path (vt) from v to v′ in GLk(A+), that gives us an invertible ṽ in GLk(C([0, 1],A+)). Applying the above construction to the algebra C([0, 1],A+), we associate to ṽ an idempotent p̃ of Mn(C([0, 1],A+)), i.e. a continuous function from [0, 1] to the idempotents of Mn(A+). The values at 0 and 1 of that function p̃ are the idempotents p and p′ associated to v and v′ by the above reasoning, so p and p′ are homotopic. Finally, p and p′ are homotopic when v and v′ belong to the same component. This explains why K0(SA) is equivalent to the study of connected components of GLn(A+) (for varying n).

The usual definition of K1(A) uses the family of groups GLn(A). Let A be a unital Banach algebra. Let GL∞(A) denote the group equal to the union of the GLn(A), n ∈ N, where the injection from GLn(A) into GLn+p(A) is now of course given by u → u ⊕ 1p,A. The group K1(A) is the set of connected components of GL∞(A), that is the set of equivalence classes of u ∈ GL∞(A) for the homotopy relation in GL∞(A): we say that u, v ∈ GL∞(A) are homotopic in GL∞(A) if there exists an integer n such that u, v ∈ GLn(A) and u, v are homotopic in GLn(A). The product of [u] and [v] is the component of the product uv. This product is commutative because [uv] = [uv ⊕ 1] = [u ⊕ v] = [v ⊕ u] = [vu] by Lemma 9.1. We shall actually use the additive notation in K1(A).
For an algebra without unit we set K1(A) = K1(A+). We obtain a second functor from the category of Banach algebras to the category of groups, the K1 functor. Indeed, every homomorphism from A to B induces a map from GL∞(A) to GL∞(B) which gives a group homomorphism from K1(A) to K1(B).
Examples 9.4.

1. Since GLn(C) is connected we get K1(C) = {0}. Compare to K0(C0(T)) = K0(SC).

2. Products and triangular matrices; it is clear that K1(A × B) = K1(A) × K1(B). It is also clear that a triangular matrix is invertible iff its diagonal elements are invertible. We already explained that an element in Mn(T2(A)) can be seen as an element of T2(Mn(A)); if it is invertible we may deform it inside GLn(T2(A)) to the diagonal form as explained before, but also trivially by letting the non diagonal entries go to 0. It follows that K1(T2(A)) ' (K1(A))^2. For T (Y,X), we see that K1(T (Y,X)) ' K1(L(Y )) × K1(L(Z)), where Z denotes the kernel of the projection from X to Y .

3. Show that K1(S(X)) = {0}, K1(K(X)) = {0}.
Hint. Use Proposition 6.1 and a reasoning similar to the one used for proving the connectedness of GLn(C) in Exercise 9.1.
4. It is known that the linear group of a Hilbert space is connected [CL], hence K1(L(H)) = {0}. This proof uses the for isometries on a Hilbert space. We can give a more Banach space theoretic proof (which is essentially Kuiper's lemma from [Ku]), that also works for `p, when 1 ≤ p ≤ ∞ (and for c0). Let T be an invertible operator in L(`p). We shall prove that T ⊕ I`p is homotopic to I`p ⊕ I`p in L(`p ⊕ `p) = M2(L(`p)). Let us represent the second factor `p in the sum as X = `p(`p ⊕ `p). In each component `p ⊕ `p of X, we may find an homotopy from I`p ⊕ I`p to T ⊕ T^{-1}. This gives an homotopy from IX , written symbolically as `p(I`p ⊕ I`p), to `p(T ⊕ T^{-1}). Then T ⊕ IX is homotopic to T ⊕ `p(T ⊕ T^{-1}); using a different grouping of the T and T^{-1} we may deform this last operator back to I`p ⊕ IX . Finally K1(L(`p)) = {0}. The same proof works for any Banach space X such that X ' `p(X). Actually, more is true: Neubauer proved that GL(L(`p)) is contractible; see also Mityagin [Mt] for more examples. To the contrary, the linear group of `p ⊕ `q, 1 ≤ p < q < ∞, is not connected (Douady [Do]). We shall compute later K1(L(`p ⊕ `q)).
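In symbols, the regrouping used in this argument is the usual `p-swindle; the following is only a schematic sketch, with all direct sums understood as `p-sums as above and ≃ denoting homotopy through invertible operators:
$$ T \oplus I_X \;\simeq\; T \oplus (T^{-1} \oplus T) \oplus (T^{-1} \oplus T) \oplus \cdots \;=\; (T \oplus T^{-1}) \oplus (T \oplus T^{-1}) \oplus \cdots \;\simeq\; I_{\ell_p} \oplus I_X . $$
The two homotopies are performed coordinatewise in each block `p ⊕ `p, and the middle equality is just a different bracketing of the same operator on `p ⊕ X.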

5. Let α ∈ K1(C([0, 1])); it corresponds to an invertible v ∈ GLn(C([0, 1])), that is to say a continuous map from [0, 1] to GLn. Letting vs(t) = v(st) we define an homotopy

between v and the constant function v(0) ∈ GLn(C). Since GLn is connected we get K1(C([0, 1])) = {0}. This argument generalizes to any contractible compact space K.
The situation is different for C(S^1). In this case the determinant of v(t) can make a non trivial loop around 0 in C, therefore K1(C(S^1)) ≠ {0}; we get actually K1(C(S^1)) ' Z. This will correspond to a special case of Bott's theorem.
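For a concrete picture (a standard description, anticipating Example 9.5, 1 below): the class of an invertible v ∈ GLn(C(S^1)) is detected by the winding number of its determinant, and the single function z → z already realizes a generator. When det v is smooth this winding number is
$$ w(\det v) \;=\; \frac{1}{2i\pi} \int_{S^1} \frac{(\det v)'(z)}{\det v(z)}\, dz \;\in\; \mathbf{Z} $$
(in general one uses the topological degree); it is invariant under homotopy and additive under direct sums, which is compatible with the group structure of K1(C(S^1)) ' Z.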

The index map

Let I be a closed two sided proper ideal of A. We are going to define a map ∂ from K1(A/I) to K0(I) that plays the role of the connecting map in homological theories. Recall that for every unital Banach algebra B, every invertible element in GL^{(0)}(B/J) can be lifted to an invertible element in GL^{(0)}(B). In order to simplify the discussion of the index map let us assume that A is unital; let I be a proper closed two-sided ideal in A, let i : I → A be the inclusion map and π : A → A/I the quotient map; let us identify I+ with the subalgebra of A consisting of all elements λ1A + x, λ ∈ C and x ∈ I.
Let u be invertible in GLn(A/I). Notice that Mn(A/I) ' Mn(A)/Mn(I), so that we can apply the above remarks to B = Mn(A) and to the ideal J = Mn(I) in B. We have seen that u ⊕ u^{-1} belongs to GL^{(0)}_{2n}(A/I). We know that every element in GL^{(0)}_{2n}(A/I) can be lifted to GL^{(0)}_{2n}(A). Let v ∈ GL2n(A) be any lifting of u ⊕ u^{-1}. Consider p = v (1n,A ⊕ 0n,A) v^{-1}. This is an idempotent and π(p) = 1n,A/I ⊕ 0n,A/I . The matrix p is thus in M2n(I+). If we set

∂[u] = [p]I+ − [1n,A]I+

we get an element of K0(I+) such that q∗(∂[u]) = 0, where q : I+ → I+/I ' C is the canonical quotient map; therefore we get an element of K0(I) (of course one has to show that this class only depends upon [u]; the notation [p]I+ means that this is a class computed in K0(I+); notice that as an idempotent in M2n(A), p is similar to 1n,A ⊕ 0n,A, so that

[p]A − [1n,A]A = 0 ∈ K0(A); in other words i∗∂[u] = 0). It is easy to check that ∂ is a group morphism.

Example. The right shift on `2(N).

Let R be the right shift on `2(N), and let L denote the left shift; we see that LR = I = I`2 and RL = I − e1 ⊗ e1. The image R̂ of R in the Calkin algebra C = L(`2)/K(`2) is thus invertible, and the inverse is the image of L. Apply the preceding discussion to u = R̂ ∈ C. Let P be the rank one projection P = e1 ⊗ e1. We obtain an explicit lifting of u ⊕ u^{-1} given by
$$ v = \begin{pmatrix} R & P \\ 0 & L \end{pmatrix}, \qquad\text{and}\qquad v^{-1} = \begin{pmatrix} L & 0 \\ P & R \end{pmatrix} $$
(notice that LP = PR = 0). The idempotent p from the general discussion is now
$$ p = v\,(1 \oplus 0)\,v^{-1} = \begin{pmatrix} RL & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} I - P & 0 \\ 0 & 0 \end{pmatrix}. $$
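For comparison, the index of R can be computed directly; this is a routine verification in the notation above:
$$ \ker R = \{0\}, \qquad \operatorname{Im} R = (I - P)\,\ell_2 = \overline{\operatorname{span}}\{e_2, e_3, \dots\}, \qquad \operatorname{ind} R = \dim\ker R - \operatorname{codim}\operatorname{Im} R = 0 - 1 = -1. $$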

According to the discussion about K0(S), the element ∂R̂ = [p] − [I] is identified to the opposite of the codimension of the image of p in the first factor `2, which gives here ∂R̂ = −1. This value is equal to the index of R. This is not an accident:
Exercise. Generalize the above discussion to the quotient algebra L(X)/S(X) and to an arbitrary Fredholm operator T on any Banach space X.

Exactness

The construction of the index map extends to the non unital case. Suppose given a short exact sequence $0 \to I \xrightarrow{\ i\ } A \xrightarrow{\ \pi\ } A/I \to 0$. We deduce a sequence

$$ K_1(A) \xrightarrow{\ \pi_*\ } K_1(A/I) \xrightarrow{\ \partial\ } K_0(I) \xrightarrow{\ i_*\ } K_0(A) \xrightarrow{\ \pi_*\ } K_0(A/I) $$

exact at K1(A/I) and at K0(I). We shall only explain that i∗ ◦ ∂ and ∂ ◦ π∗ vanish. We already observed that i∗∂ = 0; if w ∈ GLn(A) and u = π(w), we may choose as lifting v ∈ GL2n(A) of u ⊕ u^{-1} simply v = w ⊕ w^{-1}, and with this choice it is clear that ∂[u] = 0; this reasoning shows that ∂ ◦ π∗ = 0.

Triviality of ∂: when for each integer n, every invertible u ∈ GLn(A/I) can be lifted to an invertible v ∈ GLn(A), we see by definition of ∂[u] that ∂[u] = 0. This is in particular true if the quotient map π : A → A/I is split by an algebra morphism σ such that π ◦ σ = Id. In this case ∂ = 0.

The above exact sequence is related to the discussion about K1(A) ' K0(SA). Suppose that A is unital, and represent SA as the algebra of continuous functions from [0, 1]

to A such that f(0) = f(1) = 0A. Let S̃A be the algebra of continuous functions from [0, 1] to A such that f(0) = 0A. It is clear that S̃A/SA ' A in a canonical way. The above exact sequence gives

$$ K_1(\widetilde{SA}) \to K_1(A) \xrightarrow{\ \partial\ } K_0(SA) \to K_0(\widetilde{SA}); $$

using the homotopy ft(s) = f(ts), which moves every element in Mn(S̃A) to the zero matrix when t decreases from 1 to 0, it is easy to see that K1 and K0 vanish for S̃A, so that ∂ gives our isomorphism between K1(A) and K0(SA).

K2(A) and Bott’s periodicity theorem

We set now K2(A) = K1(SA); then Bott's periodicity theorem states that K2(A) ' K0(A). We define K2(A) = K1(SA) = K1((SA)+); assume that A is unital for simplicity. We describe again SA as the algebra of continuous functions from the circle T to A such that f(1) = 0A, and (SA)+ as the algebra of continuous functions from T to A such that f(1) = λ1A for some λ ∈ C. We say that a ∈ Mn(A) is “scalar” if a = Λ ⊗ 1A, where Λ ∈ Mn. The group K1((SA)+) is defined using invertible elements in Mn((SA)+). But giving such an invertible is the same as giving a continuous path u from T into GLn(A), such

that u(1) is a “scalar” matrix. We can find an equivalent element v such that v(1) = 1n,A in the following way: let u(1) = Λ ⊗ 1A and let (µt) be a continuous path in GLn from 1n to Λ^{-1}; then ut = (µt ⊗ 1A)u is a path from u to v in GLn((SA)+), and v satisfies v(1) = 1n,A.
We define a map j : K0(A) → K2(A) which is easy to describe; the difficult part will be to show that j is onto; to each idempotent p in Mn(A) we associate the “loop” f(z) = zp + (1n,A − p) in GLn(A), z ∈ T. Then f(1) = 1n,A, f belongs to GLn((SA)+), and we set j(p) = [f] in K2(A). We can see easily that the image of p ⊕ q in K2(A) is the sum of the images, hence our map j is a monoid morphism from Pr(A) to K2(A), hence gives a group morphism still denoted j from K0(A) to K2(A). As we said, the difficult part in the proof of Bott's theorem is the fact that j is onto from K0(A) to K2(A); the fact that j is injective from K0(A) into K2(A) is not obvious but will follow from the fact that j is onto for every Banach algebra. Let us indicate the main steps of the proof that j is onto. We first observe that the path z^k 1n,A of invertibles defines the same class as z 1kn,A in K2(A). Multiplying a path of invertibles by z^k 1n,A amounts thus to adding the loop j(1kn,A) associated to the idempotent 1kn,A. Let α ∈ K2(A). We can find a continuous map w from T to GLn(A) such that [w] = α, and we may assume that w(1) = 1n,A. We want to find an element β in K0(A) such that j(β) = [w] = α.
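Before going through these steps, note the elementary verification that the loops used to define j do take invertible values (only the relation p^2 = p is used):
$$ \bigl(zp + (1_{n,A} - p)\bigr)\bigl(z^{-1}p + (1_{n,A} - p)\bigr) = p + (1_{n,A} - p) = 1_{n,A}, \qquad z \in \mathbf{T}, $$
together with the same computation in the other order; hence f(z) = zp + (1n,A − p) indeed belongs to GLn((SA)+) and f(1) = 1n,A.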

1. We begin by approximating our continuous map w from T to GLn(A) by a map z → w′(z) from T to GLn(A) which is a trigonometric polynomial w′(z) = Σ_{k=−N}^{N} vk z^k, with each vk ∈ Mn(A), and such that w′(1) = 1n,A; the standard way is to use convolution with a Fejér kernel; we have [w′] = [w] if the approximation is good enough; multiplying by z^N we get a polynomial u(z) = z^N w′(z) in z ∈ T with coefficients in Mn(A). With this multiplication we have added to [w] the class of the loop associated to 1nN,A, that we should subtract at the end. We are going to find an idempotent p in M∞(A) such that j({p}) = [u]. It will follow that j([p] − [1nN,A]) = [w] = α, thus proving that j is onto.
2. Passing to a larger dimension K we may find an equivalent path u1 that is linear in z, namely u1(z) = a + bz, a, b ∈ MK(A) and u1(1) = 1K,A. We may describe this (in a way that is not the cheapest on dimensions) in the following way: for every polynomial P(z) with coefficients in a unital Banach algebra B and such that deg P ≤ m, we may write P(z) = (z − 1)^2 Q(z) + R(z), where deg R ≤ 1 and deg Q ≤ m − 2. Let for every λ ∈ [0, 1]
$$ \varphi_\lambda(z) = \begin{pmatrix} 1_B & \lambda(z-1)1_B \\ 0 & 1_B \end{pmatrix} \begin{pmatrix} P(z) & 0 \\ 0 & 1_B \end{pmatrix} \begin{pmatrix} 1_B & 0 \\ -\lambda(z-1)Q(z) & 1_B \end{pmatrix} = \begin{pmatrix} P(z) - \lambda^2(z-1)^2 Q(z) & \lambda(z-1)1_B \\ -\lambda(z-1)Q(z) & 1_B \end{pmatrix}. $$

For every λ, we get that z → ϕλ(z) is a path of invertibles and ϕλ(1) = 12,B, ϕ0 = P(z) ⊕ 1B. When λ = 1, the result is a path of invertible elements with degree ≤ m − 1 in z,
$$ \varphi_1(z) = \begin{pmatrix} R(z) & (z-1)1_B \\ -(z-1)Q(z) & 1_B \end{pmatrix}. $$

When λ varies from 0 to 1 we get an homotopy in GL2((SB)+) between P(z) ⊕ 1B and this path ϕ1(z) of degree ≤ m − 1 in z.
3. Using spectral theory one finally shows that a linear path of invertible elements is equivalent to a loop z → zp + (1 − p) where p is an idempotent. We shall prove a crucial Lemma. We recall the notion of spectral projection. Let B be a unital Banach algebra. Assume that the spectrum of b ∈ B does not meet the imaginary axis in C. It is then possible to find a circle γ centered at some real point M, with M > 0 large, and with radius M − ε, ε > 0 small, such that every λ ∈ σ(b) with Re λ > 0 will be contained in the interior of γ. Let
$$ p = \frac{1}{2\pi i} \int_{\gamma} (z - b)^{-1}\, dz. $$
Then p is a spectral projection commuting with b.
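A minimal finite dimensional illustration (a standard example, not taken from the text): take B = M2(C) and b = diag(1, −1), whose spectrum {1, −1} avoids the imaginary axis; with γ as above, a circle enclosing 1 but not −1,
$$ p = \frac{1}{2\pi i}\int_{\gamma} (z - b)^{-1}\, dz = \frac{1}{2\pi i}\int_{\gamma} \begin{pmatrix} (z-1)^{-1} & 0 \\ 0 & (z+1)^{-1} \end{pmatrix} dz = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, $$
and b + sp = diag(1 + s, −1) is indeed invertible for every s ≥ 0, as Lemma 9.3 below asserts.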

Lemma 9.3. Let B be a unital Banach algebra. Assume that b ∈ B is such that b − it1B is invertible for every t ∈ R, and let p be the above spectral projection corresponding to the half complex plane C+ = {z ∈ C : Re z > 0}. Then b + sp is invertible for every real number s ≥ 0. In other words: if σ(b) does not meet the imaginary axis iR, the same is true for σ(b + sp) for every real s ≥ 0.

Proof. Let Bp = pBp. This is a Banach algebra with unit p. We know that bp = pb = pbp and the spectrum of bp = pbp in Bp is contained in C+. It follows that bp + sp is invertible in Bp for every real s ≥ 0, hence there exists u ∈ B such that

(pbp + sp)pup = pup(pbp + sp) = p.

Let q = 1B − p. The spectrum of qbq in Bq is contained in C−, hence bq is invertible in Bq and there exists v ∈ B such that

(qbq)qvq = qvq(qbq) = q.

Now (b + sp)(pup + qvq) = (b + sp)(p + q)(pup + qvq) =

(bp + sp)(pup) + bq(qvq) = (pbp + sp)(pup) + (qbq)(qvq) = p + q = 1B

and similarly for the other direction, (pup + qvq)(b + sp) = 1B.

For the sake of completeness, let us give a sketch of a proof that the spectrum of bp is contained in C+. First of all, it is clear that the spectrum of bp in Bp is contained in the spectrum of b in B (if (b − λ1B)u = u(b − λ1B) = 1B, then u commutes with p and (pbp − λp)(pup) = (pup)(pbp − λp) = p). If the spectrum of bp contains elements λ ∈ C with Re λ ≤ 0, then the boundary of the spectrum will also contain such λ. Then there exists by Remark ?? a norm one d = pcp ∈ Bp such that (bp − λp)d ∼ 0. But in Bp,
$$ p = p^2 = \frac{1}{2\pi i} \int_{\gamma} (zp - bp)^{-1}\, dz. $$

We have bp d ∼ λd, thus (zp − bp)d ∼ (z − λ)d and (zp − bp)^{-1} d ∼ (z − λ)^{-1} d, therefore
$$ d = pd \sim \Bigl( \frac{1}{2\pi i} \int_{\gamma} (z - \lambda)^{-1}\, dz \Bigr) d = 0, $$
the last integral being 0 because λ, with Re λ ≤ 0, lies outside the circle γ.

This contradicts kdk = 1.

Corollary. If b ∈ B is as in the Lemma, there exists an homotopy (bs) in GL(B) from b to p such that for every s, bs + it1B is invertible for every t ∈ R.
Proof. For s real varying from 0 to +∞ we set bs = (1 + s)^{-1}(b + sp).

Let us come back to the proof of the third step. We have a linear path u1(z) = a + zb in GLK(A), with u1(1) = 1K,A, thus a + b = 1K,A. We see that 1K,A + (z − 1)b is invertible for every z ∈ T; this yields that the spectrum of b in MK(A) does not meet the line Re λ = 1/2. Applying the preceding Corollary with B = MK(A) to b − (1/2)1K,A, we get an homotopy (bs) from b to an idempotent p, such that bs + λ1K,A is invertible for every λ ∈ C with Re λ = 1/2. It follows that for every s, the path 1K,A + (z − 1)bs, for z ∈ T, is a path of invertibles. We have thus found a deformation in GLK((SA)+) from the original path 1K,A + (z − 1)b to a loop 1K,A + (z − 1)p associated to an idempotent p, as was to be proved.
This proof of the third step is nicer with the full strength of the functional calculus. Let us find two circles γ1 and γ2 containing the spectrum of b, where γ1 is contained in {Re ζ < 1/2} and γ2 in {Re ζ > 1/2}. We have
$$ b = \frac{1}{2i\pi} \int_{\gamma_1} \frac{z}{z - b}\, dz + \frac{1}{2i\pi} \int_{\gamma_2} \frac{z}{z - b}\, dz. $$

What we do is to find a deformation ϕ1(z, t), t ∈ [0, 1], of the identity function z → z to z → 0 at the left of the line Re ζ = 1/2 and a deformation ϕ2(z, t) from z → z to z → 1 at the right of the same line, in such a way that ϕ1(z, t) and ϕ2(z, t) never meet the line Re ζ = 1/2. There is an easy choice: ϕ1(z, t) = (1 − t)z if z ∈ γ1 and ϕ2(z, t) = (1 − t)z + t for z ∈ γ2. We obtain a continuous path
$$ b_t = \frac{1}{2i\pi} \int_{\gamma_1} \frac{\varphi_1(z, t)}{z - b}\, dz + \frac{1}{2i\pi} \int_{\gamma_2} \frac{\varphi_2(z, t)}{z - b}\, dz $$

of elements such that 1K,A + (z − 1)bt is invertible for every z ∈ T, with b0 = b and b1 = p.

Finally, let us explain why j : K0(A) → K2(A) is injective; suppose that α = [p] − [q] and j(α) = 0; this means that the two loops zp + (1 − p) and zq + (1 − q), z ∈ T, are homotopic in GL∞((SA)+). We can therefore construct a continuous map ϕ(t, z), t ∈ [0, 1], such that ϕ(0, z) = zp + (1 − p) and ϕ(1, z) = zq + (1 − q); we may consider this as a map from T to the space of continuous maps from [0, 1] to GL(A). More precisely, the values at 0 and 1 belong respectively to the unital subalgebras Cp and Cq generated by p and q (thus Cp is the subalgebra of elements λ1A + µp, and similarly for Cq). Let B be the algebra of continuous functions from [0, 1] to A such that f(0) ∈ Cp and f(1) ∈ Cq; assume for simplicity that p and q are different from 1A; the two preceding subalgebras

are then isomorphic to C^2. Since the map j is onto for the algebra B, we may deform ϕ to a loop zP + (1 − P), where P is an idempotent in M∞(B), i.e. a continuous map from [0, 1] to M∞(A) such that zP(0) + (1 − P(0)) is homotopic to zp + (1 − p) and similarly for t = 1. Since the subalgebra is isomorphic to C^2, this implies that the ranges of P(0) and p have equal dimensions (use determinant??), and are therefore equivalent as projectors; similarly for P(1) and q; finally, P(t) is a continuous path of idempotents from [p] to [q], and therefore α = [p] − [q] = 0.
Examples 9.5.
1. K1(C(S^1)) corresponds to K2(C). Indeed, K2(C) is K1((SC)+), and (SC)+ identifies with C(S^1). It follows from Bott's theorem that K1(C(S^1)) = K2(C) ' K0(C) ' Z. We get a generator of K1(C(S^1)) by considering a continuous map v(t) from T to GLn such that the determinant of v(t) makes a loop around 0 in C with index 1.
2. C(S^2). The suspension SC identifies to the space of continuous functions on [0, 1], vanishing at 0 and 1. Then SSC identifies to continuous functions on the square [0, 1]^2, vanishing on the boundary. But this algebra can also be identified to continuous functions on S^2, vanishing at a given point (here, the point obtained by identifying all points on the boundary to a single point). Therefore, K0(C0(S^2)) ' K2(C) ' K0(C) ' Z. This is a fundamental example, and it is possible to deduce the general case from it by a tensor product technique.

With this Bott’s isomorphism we get a new connecting map ∂0 from K0(A/I) ' K2(A/I) = K1(S(A/I)) to K1(I) = K0(SI), (observe that S(A/I) ' SA/SI) and a new exact sequence. Call now ∂1 the connecting map that was defined earlier. Suppose given a short exact sequence 0 → I −→i A −→π A/I → 0. We obtain a cyclic exact sequence with period 6

$$ K_1(A/I) \xrightarrow{\ \partial_1\ } K_0(I) \xrightarrow{\ i_*\ } K_0(A) \xrightarrow{\ \pi_*\ } K_0(A/I) \xrightarrow{\ \partial_0\ } K_1(I) \xrightarrow{\ i_*\ } K_1(A) \xrightarrow{\ \pi_*\ } K_1(A/I) \xrightarrow{\ \partial_1\ } \cdots $$

Some examples with the cyclic exact sequence

1. Let A = L(`p) and I = K(`p), A/I = Cp = C(`p). We know that K1(I) = {0}, K0(I) ' Z, K1(A) = {0} (see Example 9.4, 4) and K0(A) = {0} because `p is prime. We get
$$ 0 \xrightarrow{\ \pi_*\ } K_1(A/I) \xrightarrow{\ \partial_1\ } \mathbf{Z} \xrightarrow{\ i_*\ } 0 \xrightarrow{\ \pi_*\ } K_0(A/I) \xrightarrow{\ \partial_0\ } 0, $$
thus K1(Cp) ' Z, K0(Cp) = {0}.
When X ' `p(X), we have K1(L(X)) = {0} by Example 9.4, 4, and we know that i∗(1) = 0 because X is isomorphic to its hyperplanes (Example 9.2, 6); this gives

K0(L(X)) ' K0(C(X)); K1(C(X)) ' Z.

2. Let now A = L(`p ⊕ `q). Assume 1 ≤ p < q < ∞. Then every operator from `q to `p is compact (Pitt’s Theorem, see [LT1], Proposition 2.c.3). This implies that the Calkin

algebra Cp,q = C(`p ⊕ `q) is triangular, hence K0(Cp,q) ' K0(Cp) × K0(Cq) = {0}. Also, K1(Cp,q) ' K1(Cp) × K1(Cq) = Z × Z. We get, using Z ' K0(Kp,q),

$$ 0 \xrightarrow{\ i_*\ } K_1(A) \xrightarrow{\ \pi_*\ } K_1(C_{p,q}) \xrightarrow{\ \partial_1\ } \mathbf{Z} \xrightarrow{\ i_*\ } K_0(A) \xrightarrow{\ \pi_*\ } K_0(C_{p,q}), $$
this gives
$$ 0 \xrightarrow{\ i_*\ } K_1(A) \xrightarrow{\ \pi_*\ } \mathbf{Z} \times \mathbf{Z} \xrightarrow{\ \partial_1\ } \mathbf{Z} \xrightarrow{\ i_*\ } K_0(A) \xrightarrow{\ \pi_*\ } 0. $$

The map ∂1 from Z × Z to Z is simply the sum ∂1(n, m) = n + m. Finally K1(A) is isomorphic to the kernel of this sum map, thus isomorphic to Z. Since this ∂1 is onto, we get that i∗ = 0, but i∗ is also onto and thus K0(A) = {0} (for this last statement one can use directly Edelstein-Wojtaszczyk [EW] who say that every projection on `p ⊕ `q is equivalent to a direct sum of projections in `p and `q).

3. Cuntz algebras. We want to show that [1On ] ≠ 0 for the Cuntz algebra On when n ≥ 3. To this end Cuntz introduces the auxiliary algebra En generated by n into isometries V1, . . . , Vn such that Σ_{i=1}^n Vi Vi^∗ < 1. The projection Q = 1 − Σ Vi Vi^∗ generates an ideal I isomorphic to K, with Q playing the role of a rank one projection in K, and On ' En/I by the uniqueness of On; we obtain

$$ K_1(\mathcal{O}_n) \xrightarrow{\ \partial_1\ } K_0(I) \simeq \mathbf{Z} \xrightarrow{\ i_*\ } K_0(\mathcal{E}_n) \xrightarrow{\ \pi_*\ } K_0(\mathcal{O}_n) \xrightarrow{\ \partial_0\ } K_1(I) = 0. $$

One can show that ∂1 = 0 (for example by the reasoning of Corollary 12.1). Since i∗(1) is the class of rank one projections in K we have i∗(1) = [Q], and since ∂1 = 0 we know that i∗ is injective, thus [Q] generates a subgroup of K0(En) isomorphic to Z. If [1On ] = 0, we must have [1En ] = m[Q] for some m ∈ Z by exactness. On the other hand (Example 9.3, 10),

n[1En ] + [Q] = [1En ], hence (m(n − 1) + 1)[Q] = 0, which is impossible since n − 1 ≥ 2 and since [Q] generates a group isomorphic to Z. Cuntz proved in [C2] that K0(On) = Z/(n − 1)Z. This is significantly more difficult than the above remark (see also Pimsner-Voiculescu).

10. Exotic Banach spaces

H.I. spaces

Definition 10.1. Let X be an infinite dimensional Banach space, real or complex. We say that X is Indecomposable if X cannot be written as the topological direct sum of two infinite dimensional closed subspaces Y1 and Y2. We say that X is Hereditarily Indecomposable (in short, H.I.) if every closed infinite dimensional subspace Y of X is indecomposable, that is if no subspace Y of X can be written as the topological direct sum of two infinite dimensional closed subspaces Y1 and Y2 of X.
Obviously, if X is H.I. then every (infinite dimensional) subspace Y ⊂ X is H.I.
Exercise. 1. A Banach space X is H.I. iff for all infinite dimensional subspaces Y and Z of X, we have inf{ky − zk : y ∈ Y, z ∈ Z, kyk = kzk = 1} = 0.

In other words, X is H.I. when the “angle” between any two (infinite dimensional) subspaces of X is equal to 0.
2. X is H.I. iff for every (infinite dimensional) subspace Y ⊂ X, the quotient map πY : X → X/Y is strictly singular.

Fact [GM1]. There exist H.I. Banach spaces. Actually the first example in [GM1] of a H.I. space was also the first example of an indecomposable space. The existence of an indecomposable space answered a question of Lindenstrauss [L2]. This example in [GM1] is a special case of the construction presented in section 11. When X is a H.I. Banach space, then X contains no (infinite) Unconditional Basic Sequence (UBS in short). This is clear, because a space with unconditional basis is easily decomposable, for example into the two subspaces generated by basis vectors with odd indices and even indices. Tim Gowers and the author of these Notes were actually looking for an example of a space without UBS. The example turned out to have the stronger property of being H.I. (this was observed by W.B. Johnson). This was not totally accidental (although not deliberate), as Gowers’ dichotomy theorem will explain (see Theorem 10.2 below). Since every Banach space contains a subspace with basis, it is formal from the existence of any H.I. space that there exist H.I. spaces with monotone basis; actually the example in [GM1] is also reflexive; much more difficult is another example due to Gowers [G2] of a H.I. space without any reflexive subspace. Being H.I. this last example does not contain c0 or `1, solving another longstanding conjecture in Banach space theory (see [LT, ???]). Ferenczi [F1] has constructed an example of uniformly convex H.I. space. The proof adds to the ideas of the construction of [GM1] the notion of complex interpolation for families of Banach spaces developed by Coifman, Cwikel, Rochberg, Sagher and Weiss in [CW]. Argyros-Delyanni [AD] constructed asymptotically `1 H.I. spaces, using a technique closer to the original Tsirelson example [T] instead of working with a modification of Schlumprecht’s example as it is done in [GM1] or [F1]. Kalton [K] has constructed an example of a quasi-Banach space X with the very strange property that there is a vector x 6= 0 such that every closed infinite dimensional subspace of X contains x. It follows that this quasi-Banach space does not contain any infinite basic sequence. This example is related to an example of Gowers [G1] of a space with unconditional basis not isomorphic to its hyperplanes; Kalton’s construction uses the technique of twisted sums together with the properties of the space in [G1].

“Germs” of H.I. spaces; in some sense all subspaces of a H.I. space intersect; we may define a net of subspaces that captures a good part of the structure of a H.I. space; the order of this net is not the inclusion, as it is not true that any two infinite dimensional subspaces have an infinite dimensional intersection, but almost... . We say that Y ≤ Z if there exists a compact operator K : Y → X such that (iY,X + K)(Y ) ⊂ Z. Given Z1, Z2 ⊂ X there exists Y such that Y ≤ Z1 and Y ≤ Z2. We could call “germ” of H.I. space an equivalence class of such nets, in a way to be made precise. An interesting class of examples is the family of spaces containing no UBS but having only a finite set of germs

of H.I. spaces. The example of Gowers [G2] of a non reflexive H.I. space is mainly an example of non reflexive germ. See also Remark 10.1 below.

Spectral theory and consequences

Theorem 10.1. Let X be a complex H.I. space. Then every T ∈ L(X) can be written as T = λIX + S, where λ ∈ C and S is strictly singular. Thus every T ∈ L(X) is either strictly singular or Fredholm with index 0. Furthermore, the spectrum of T is either finite, or consists of a sequence (λn) converging to λ. In this second case, each λn ≠ λ is an eigenvalue of T with finite multiplicity.
Proof. Let T ∈ L(X). We know by Lemma 5.4 or Corollary 5.2 that there exists λ ∈ C such that T − λIX is infinitely singular. Let U = T − λIX . For every ε > 0 there exists by

Proposition 3.2 an infinite dimensional subspace Yε ⊂ X such that kU|Yε k < ε. Now let Z be any infinite dimensional subspace of X. Since X is H.I., we can find a vector z ∈ Z such that kzk = 1 and dist(z, SYε ) < ε. It follows that kUzk < (1 + kUk)ε, showing that U = T − λIX is strictly singular. The rest is given by Proposition 6.1.
Remark 10.1. Ferenczi [F2] has shown that, given a complex H.I. space X and a bounded linear operator T from a subspace Y of X to X, one can write T = λiY,X + S, where λ ∈ C, S is strictly singular and iY,X denotes the inclusion map from Y to X. This property was shown to be true for the specific H.I. example in [GM1]. Conversely, it is easy to see that any Banach space X with the property that for every subspace Y , every T ∈ L(Y,X) can be written as λiY,X + S is a H.I. space, so that the above result is a characterization of complex H.I. spaces. Ferenczi's proof consists essentially in showing that the space of “germs” of operators on X is a Banach field, hence isomorphic to C. In the case of a real H.I. Banach space X, his proof shows that the quotient L(X)/S(X) is a division ring, hence isomorphic to R, C or H. A germ of operator is an equivalence class for the relation where T1 ∈ L(Z1,X) and T2 ∈ L(Z2,X) are equivalent if there exists Y ≤ Z1, Z2 such that T1 ◦ (iY,X + K1) − T2 ◦ (iY,X + K2) is compact on Y , and K1, K2 are the compact operators from the definition of the order.
Exercise 10.1. 1. Operators on real H.I. spaces. Let X be a real H.I. space and let T ∈ L(X). Either there exists λ ∈ R such that T − λIX is strictly singular, or there exists λ ∈ C \ R such that T^2 − 2(Re λ) T + λλ̄ IX is strictly singular. Check that T is either strictly singular or Fredholm with index 0. The spectrum of the complexified operator TC is invariant under complex conjugation, and the part of the spectrum contained in the upper half plane is finite or consists of a convergent sequence with its limit.
2. Operators on X^n. If X is a complex H.I. space and if T ∈ L(X^n), there exists a matrix Λ ∈ Mn(C) such that T = Λ ⊗ IX + S, with S strictly singular on X^n.
3. If X is H.I. then X^n 6' X^m when m ≠ n.

4. If X is a complex H.I. space, then K1(L(X)) = {0}. Also, K0(L(X)) ≠ {0}.

The hyperplane problem

Corollary 10.1. Let X be a H.I. space, real or complex. Then X is not isomorphic to any proper subspace. In particular, X is not isomorphic to its hyperplanes.

The first example of a space not isomorphic to its hyperplanes appeared in [G1].
Proof. Let T be an isomorphism from X into itself; then T is not strictly singular, hence it must be Fredholm with index 0 by Theorem 10.1 or Exercise 10.1 and thus TX = X.
Exercise. 1. If X is H.I. and Z ⊂ Y ⊂ X, Z ≠ Y , then Z and Y are not isomorphic.
2. Show that an H.I. space X is not isomorphic to any quotient X/Y .
Hint. Use the fact that the spectrum of every operator on X is countable. Let T : X/Y → X be an isomorphism, consider T ◦ πY , then its adjoint and the spectrum of the adjoint.
3. Homotopy of subspaces. Two subspaces Y and Z of a complex H.I. space X are isomorphic if and only if there exists an homotopy in L(Y,X) from iY,X to T consisting of into isomorphisms from Y to X, where iY,X denotes the injection from Y to X and T is an into isomorphism from Y to X such that TY = Z.
Ferenczi showed that the dual of the example in [GM1] is also H.I. and even that every quotient of this space is still H.I. This question is not at all clarified in general. What is clear is that the dual of a reflexive indecomposable space (not necessarily hereditarily indecomposable) is indecomposable; therefore, if every quotient of a reflexive space is H.I., then every subspace of a quotient is indecomposable and this property clearly passes to the dual. However Ferenczi gave an example of a H.I. space such that the dual is not H.I.

Gowers’ dichotomy theorem and homogeneous Banach spaces

Recall that a Banach space X is said to be homogeneous if X is isomorphic to all its infinite dimensional closed subspaces. What can we say about a homogeneous Banach space? Since every Banach space contains a subspace with a basis, X must have a basis. Furthermore, every subspace will also have a basis. It follows from the work of Enflo on the approximation property, extended by Szankowski, that every Banach space with this property must have type 2 − ε and cotype 2 + ε for every ε > 0 ([LT2], Theorem 1.g.6). These results have been obtained in the '70s; more recently, Komorowski and Tomczak proved in [KT] the following result:
Theorem. Let X be a Banach space with finite cotype and not containing any subspace isomorphic to `2. Then there exists a subspace Y of X without unconditional basis.
The proof of [KT] is rather difficult and complicated, and uses techniques different from those of these Notes.

Corollary. [KT] If X is a homogeneous Banach space not isomorphic to `2, then X does not contain any UBS. Proof. We said that X must have finite cotype if it is homogeneous. If X is not isomorphic to `2, we know by the Theorem above that X contains a subspace Y without unconditional basis. Since X is homogeneous, it follows that X has no subspace with unconditional basis. It was then very tempting to try to relate the fact that a space does not contain any UBS to the H.I. property. This was done by T. Gowers in a beautiful “dichotomy Theorem”; Gowers obtains more general combinatorial statements (in [G4] and [G5]) that we shall not give here, and which are somewhat analogous to the infinite versions of

Ramsey's Theorem; he then deduces the result about UBS and H.I. from them. We shall only prove the particular case which is needed here.
Theorem 10.2. (Gowers' dichotomy theorem, [G4], [G5]). Let X be an arbitrary (infinite dimensional) Banach space. Either X contains an infinite unconditional basic sequence, or X contains a H.I. subspace.
This result gives a very good reason for introducing this notion of H.I. spaces. If one is interested in knowing whether a general Banach space contains a subspace with unconditional basis, one has to encounter H.I. spaces some day.
Proof of Theorem 10.2. For the proof we need a more quantitative result; let ε > 0; we shall say that a Banach space X is HI(ε) when for all subspaces Y and Z of X, there exist two vectors y ∈ Y , z ∈ Z such that

ky − zk < εky + zk.

Setting λ = k(y + z)/2k > 0 and y′ = y/λ, z′ = z/λ we see that 1 − ε < ky′k, kz′k < 1 + ε and ky′ − z′k < 2ε. It is thus clear that X is H.I. if and only if it is HI(ε) for every ε > 0.
Lemma 10.1. Let X be a Banach space. For every ε > 0, either X contains an UBS with constant 2/ε, or X contains an HI(2ε) subspace Z.
Proof (from [M2]). The approach is combinatorial. We shall need to discretize the problem to make the situation countable, and even finite later on. We shall restrict now to the real case. Let us choose in X a normalized basic sequence (xn)n≥1 with constant 2 (say) and denote by X0 the Q-vector subspace generated by this sequence. This space X0 is countable and infinite dimensional (over Q); furthermore, for every infinite dimensional Q-vector subspace Y of X0, the closure of Y in X is an infinite dimensional Banach space over R or C. From now on, in the proof of the Lemma, the notation Y , Z, or U, V, W will be used for infinite dimensional Q-vector subspaces of X0. Let us consider the set

A = {(x, y) ∈ X0 × X0; kx − yk < εkx + yk}.

This set A is countable and symmetric. We introduce a convenient terminology, inspired by [GP]. Let (x, y) be a couple of vectors in X0 and let Z be an infinite dimensional subspace of X0. We say that (x, y) accepts Z if for all subspaces U, V of Z there exists (u, v) ∈ U × V such that (x + u, y + v) ∈ A. Since A is symmetric, acceptance is also symmetric. Observe also that if (x, y) ∈ A, then (x, y) accepts every subspace Z: just take u = 0 and v = 0. We say that (x, y) rejects Z if no subspace Z′ ⊂ Z is accepted by (x, y). Rejection is also symmetric, and saying that (x, y) rejects some subspace Z implies that (x, y) ∉ A. Observe that when (x, y) accepts or rejects a subspace Z, it remains true for every subspace Z′ of Z, and it is also true for “supspaces” of Z of the form Z + F , when F is finite dimensional; combining these two observations, we see that when (x, y) accepts or rejects Z, the same is true for every Z′ such that Z′ ⊂ Z + F , when F is finite dimensional. This simple remark is the basis for our first step:

Claim: there exists an infinite dimensional subspace Z0 of X0 such that for every couple (x, y) ∈ X0 × X0, either (x, y) accepts Z0 or (x, y) rejects Z0.

We use for this a very usual diagonal argument; since X0 is countable, we may form the list (xn, yn)n≥1 of all elements of X0 ×X0. We then construct a decreasing sequence (Xn)n of subspaces in the following way: if (xn+1, yn+1) rejects Xn, we simply let Xn+1 = Xn. Otherwise, there exists a subspace Xn+1 of Xn such that (xn+1, yn+1) accepts Xn+1. We consider then a diagonal subspace Z0 built by taking one vector in each Xn, in such a way that Z0 is infinite dimensional.

From now on the whole construction will be performed inside our “stabilizing” subspace Z0. For every couple (x, y) in Z0 × Z0, (x, y) accepts or rejects, and we don't need anymore to specify “accepts or rejects a subspace Z′ ⊂ Z0”.
There are two possibilities: either the couple (0, 0) accepts, or it rejects. If (0, 0) accepts, we see that the Banach space Z obtained as the closure of Z0 is HI(ε + ε′) for every ε′ > 0, in particular HI(2ε). Indeed, if U and V are two subspaces of Z, we may approximate them by two Q-subspaces U′ and V′ of Z0. Since (0, 0) accepts, there exist u′ and v′ in U′ and V′ such that (u′, v′) ∈ A, which gives by approximation two vectors u ∈ U and v ∈ V such that ku − vk < (ε + ε′)ku + vk.

Suppose now that (0, 0) rejects; we will find in Z0 an unconditional sequence (ek)k≥1 with constant 2/ε, namely such that

$$ \Bigl\| \sum_k b_k e_k \Bigr\| \;\le\; \frac{2}{\varepsilon}\, \Bigl\| \sum_k \eta_k b_k e_k \Bigr\| $$

for all scalars (bk)k and every choice of signs (ηk)k (signs appear usually on the other side of the inequality, but it is clearly equivalent to put them on the right). In order to deal with this in a combinatorial manner, we discretize our problem as follows: it is easy to see that we only need to make sure that 1 < kekk < 2 for every integer k ≥ 1 and that

$$ (*) \qquad \Bigl\| \sum_{k=1}^{K} a_k e_k \Bigr\| \;\le\; \frac{1}{\varepsilon}\, \Bigl\| \sum_{k=1}^{K} \eta_k a_k e_k \Bigr\| $$

for every integer K ≥ 1, all choices of signs (ηk)_{k=1}^K and all scalars (ak)_{k=1}^K taking the values ak = j/(N 2^k), j = −N 2^k, . . . , N 2^k, where N is an integer larger than 16/ε. We call reasonable such a choice of scalars (ak)_{k=1}^K, and reasonable combination (of length K) a linear combination of the form Σ_{k=1}^K ak ek. Relation (∗) means that kx − yk ≥ εkx + yk whenever x = Σ_{k∈I} ak ek and y = Σ_{k∈J} ak ek, where the coefficients are reasonable and (I, J) is the partition of {1, . . . , K} corresponding to the signs (ηk)_{k=1}^K. We call such a couple (x, y) a partition of a reasonable combination. In other words, we want to make sure that (x, y) ∉ A whenever (x, y) is a partition of a reasonable combination Σ_{k=1}^K ak ek with arbitrary length K. As we observed, it is enough to know that every partition of a reasonable combination rejects.

Sublemma: if (x, y) rejects, then for every infinite dimensional subspace W of Z0 there exists a further subspace W′ ⊂ W such that for every w′ ∈ W′, the couple (x + w′, y) rejects.

(Otherwise, for every subspace U ⊂ W , there would exist u0 ∈ U such that (x + u0, y) accepts; then, for every subspace V ⊂ W we could find a couple (u1, v) in U × V such that (x + u0 + u1, y + v) ∈ A, which implies that (x, y) accepts, contrary to the initial hypothesis.)

Let us finish the proof of Lemma 10.1. Assuming that (0, 0) rejects, we build by induction a sequence (ek)_{k=1}^∞, such that for every integer n ≥ 1, every partition (x, y) of a reasonable combination with length n rejects. If e1, . . . , en are already constructed, consider the finite list of all partitions (xi, yi) of reasonable combinations of length n. By our induction hypothesis every such couple (xi, yi) rejects; applying successively the sublemma to each (xi, yi) from the list, we obtain a subspace W such that for every w ∈ W and every i, (xi + w, yi) rejects; observe that (yi, xi) also belongs to the list, hence (yi + w, xi) rejects, and since A is symmetric (xi, yi + w) also rejects. We choose now a vector en+1 in W , such that 1 < ken+1k < 2. We check that the induction hypothesis is verified for n + 1. Indeed, every partition (x′, y′) of a reasonable combination of length (n + 1) is either of the form (x + a en+1, y) or of the form (x, y + a en+1), where (x, y) is a partition of a reasonable combination of length n. It follows from the choice of W and en+1 that (x′, y′) rejects.

We can now finish the proof of Theorem 10.2. Assume that X does not contain any UBS. Let Y be a Banach subspace of X. Since Y does not contain an UBS, it follows from Lemma 10.1 that for every ε > 0, Y contains a subspace Z which is HI(ε). Taking successively ε = 2^{-n}, we can construct a decreasing sequence (Zn) of subspaces corresponding to εn = 2^{-n}. Let Z be a subspace obtained by a diagonal procedure from the sequence (Zn), which means that for every n this space Z is contained in Zn up to finitely many dimensions. Let ε > 0, and let U and V be infinite dimensional subspaces of Z. Let n be such that 2^{-n} < ε. We can find infinite dimensional subspaces U′ of U and V′ of V such that U′ ⊂ Zn, V′ ⊂ Zn. By the construction of Zn there exists a couple (u, v) such that u ∈ U′, v ∈ V′ and ku − vk < εku + vk. Therefore Z is HI(ε) for every ε > 0, so Z is H.I.

Exercise. Finite field.

Theorem. Every homogeneous Banach space is isomorphic to `2.

Proof. Let X be a homogeneous Banach space, not isomorphic to `2. We know that X does not contain any UBS by the Corollary of [KT]. By Gowers’ result, X must contain a H.I. subspace, hence X itself is H.I. But an H.I. space is obviously not homogeneous, and in a very strong way, since we have seen that it is not isomorphic to any proper subspace by Corollary 10.1.

11. A class of examples of exotic spaces

The contents of this section come from [GM2].
Let c00 be the vector space of all scalar sequences of finite support. Let (en)_{n=1}^∞ be the standard basis of c00. Given a vector a = Σ_{n=1}^∞ an en, its support, denoted supp(a), is the set of n such that an ≠ 0. Given two subsets E, F ⊂ N, we say that E < F if max E < min F . If x, y ∈ c00, we say that x < y if supp(x) < supp(y). We also write n < x when n ∈ N and n < min supp(x). If x1 < . . . < xn, then we say that the vectors x1, . . . , xn are successive. An infinite sequence of successive non-zero vectors is also called a block basis and a subspace generated by a block basis is a block subspace. Given a subset E ⊂ N and a vector a as above, we write Ea for the vector Σ_{n∈E} an en. An interval of integers is a set of the form {n, n + 1, . . . , m} and the range of a vector x, written ran(x), is the smallest interval containing supp(x).

Let $\mathcal X$ stand for the set of Banach spaces obtained as the completion of $c_{00}$ for a norm $\|.\|$ such that the sequence $(e_n)_{n=1}^\infty$ is a normalized bimonotone basis. A first extremely important example in this class is the space $T$ constructed by Tsirelson [T] (see also [FJ]). Let $B_T^*$ be the smallest convex subset of $B_{c_0} \cap c_{00}$ containing $\pm e_n$ for each $n \ge 1$ and such that
$$(x_1^* + \cdots + x_n^*) \in 2B_T^*$$
whenever $x_1^*, \dots, x_n^*$ are successive in $B_T^*$ and $n < x_1^*$. The norm is then defined on $c_{00}$ by
$$\|x\|_T = \sup\{|x^*(x)| : x^* \in B_T^*\}.$$

A second extremely important example is the space $S$ constructed by Schlumprecht [S1], [S2], which is a very useful variation of the construction of $T$. Let $f(t) = \log_2(t + 1)$ for $t \ge 0$. The relevant properties of this function will be listed below. Let $B_S^*$ be the smallest convex subset of $B_{c_0} \cap c_{00}$ containing $\pm e_n$ for each $n$ and such that
$$(x_1^* + \cdots + x_n^*)/f(n) \in B_S^*$$
whenever $x_1^*, \dots, x_n^*$ are successive in $B_S^*$ and $n \ge 2$. The norm is then defined on $c_{00}$ by
$$\|x\|_S = \sup\{|x^*(x)| : x^* \in B_S^*\}.$$

The basic idea for the construction of our class of examples uses the technology of lower $f$-estimates introduced by Schlumprecht in [S1], [S2]. Given $X \in \mathcal X$, we shall say that $X$ satisfies a lower $f$-estimate if, given any vector $x \in X$ and any sequence of intervals $E_1 < \dots < E_n$, we have $\|x\| \ge f(n)^{-1}\sum_{i=1}^n \|E_ix\|$. In the dual formulation, this property means that whenever $x_1^*, \dots, x_n^*$ are successive functionals with norm $\le 1$ in $X^*$, we have
$$\|(x_1^* + \cdots + x_n^*)/f(n)\|_{X^*} \le 1.$$
The norm of $S$ then appears as the smallest norm for a space in $\mathcal X$ satisfying a lower $f$-estimate.
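Equivalently, $\|\cdot\|_S$ is the smallest norm on $c_{00}$ satisfying the implicit equation $\|x\| = \|x\|_{c_0} \vee \sup\{f(n)^{-1}\sum_{i=1}^n\|E_ix\| : n \ge 2,\ E_1 < \dots < E_n \text{ intervals}\}$. For vectors of very small support this fixed-point equation can be evaluated by brute force; the following naive Python sketch (exponential in the support size, added here only as an illustration and not part of the original construction) computes it by recursion on the range of the vector.

    import math
    from functools import lru_cache
    from itertools import combinations

    def f(n):
        return math.log2(n + 1)

    @lru_cache(maxsize=None)
    def s_norm(c):
        """Schlumprecht norm of the vector with coefficients c (a tuple) on
        consecutive coordinates; naive evaluation of the implicit equation."""
        if not any(c):
            return 0.0
        best = max(abs(v) for v in c)              # the c_0 (sup) norm term
        m = len(c)
        for n in range(2, m + 1):                  # split the range into n successive pieces
            for cuts in combinations(range(1, m), n - 1):
                b = (0,) + cuts + (m,)
                best = max(best, sum(s_norm(c[b[i]:b[i + 1]]) for i in range(n)) / f(n))
        return best

    # e_1 has norm 1, while e_1 + e_2 has norm 2/f(2) = 2/log2(3), roughly 1.26
    print(s_norm((1.0,)), s_norm((1.0, 1.0)))

Restricting to successive intervals that partition the range is enough here, since enlarging an interval or splitting it into two can only increase the corresponding sum; an analogous (admissibility-constrained) recursion evaluates the Tsirelson norm.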

Schlumprecht introduced the important notion of Rapidly Increasing Sequences, in short RIS. Let us mention first that every subspace $Y$ of $S$ generated by a block basis contains for every $n \ge 1$ a sequence $y_1 < \dots < y_n$ of normalized vectors which is almost isometrically equivalent to the unit vector basis of $\ell_1^n$ (see Lemma 11.1 below). Roughly speaking, a RIS is a normalized sequence $x_1 < \dots < x_k$ where each $x_i$ is the average of an $\ell_1^{n_i}$ sequence, with $n_1 < n_2 < \dots < n_k$ growing extremely rapidly (a precise definition will be given later). Schlumprecht proved that the norm of the sum of RIS sequences has an almost minimal behaviour; indeed, Schlumprecht's space satisfies a lower $f$-estimate, hence $\|\sum_{i=1}^n x_i\| \ge n/f(n)$ for every sequence of successive norm one vectors. Schlumprecht's Lemma states that for an RIS we almost get an equality, $\|\sum_{i=1}^n x_i\| \le (1 + \varepsilon)n/f(n)$. We obtain in this way one of the most important features of Schlumprecht's example: on one hand, we can find $\ell_1^n$ in every subspace; on the other hand, we can always combine very different $\ell_1^{n_i}$ in a RIS and get a behaviour arbitrarily far from the $\ell_1$ behaviour. But these new vectors can again be combined to give further $\ell_1^n$, and so on...

For our construction, we need to work with more than one function $f$. To this end we introduce the family $\mathcal F$ of functions $g : [1, \infty) \to [1, \infty)$ satisfying the following conditions:
(i) $g(1) = 1$ and $g(t) < t$ for every $t > 1$;
(ii) $g$ is strictly increasing and tends to infinity;
(iii) $\lim_{t\to\infty} t^{-q}g(t) = 0$ for every $q > 0$;
(iv) the function $t/g(t)$ is concave and non-decreasing;
(v) $g(st) \le g(s)g(t)$ for every $s, t \ge 1$.
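Before going on, here is a quick numerical sanity check of these conditions for $f(t) = \log_2(t+1)$ on sample points (only an illustration, with ad hoc tolerances; the next paragraph records the relevant facts).

    import math, random

    f = lambda t: math.log2(t + 1)
    h = lambda t: t / f(t)                              # the function t/g(t) of condition (iv)

    assert abs(f(1) - 1) < 1e-12                        # (i): f(1) = 1
    for _ in range(10_000):
        s, t = (random.uniform(1, 1e6) for _ in range(2))
        assert f(t) <= t + 1e-9                         # (i): f(t) < t for t > 1
        assert math.sqrt(t) >= f(t) or t < 20           # crude check related to (iii), q = 1/2
        assert f(s * t) <= f(s) * f(t) + 1e-9           # (v): submultiplicativity
        a, b = sorted((s, t))
        assert f(a) <= f(b) + 1e-12                     # (ii): f increasing
        assert h(a) <= h(b) + 1e-9                      # (iv): t/f(t) non-decreasing
        mid = (a + b) / 2
        assert (h(a) + h(b)) / 2 <= h(mid) + 1e-6 * h(mid)   # (iv): midpoint concavity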

It is easy to check that $f(t) = \log_2(t+1)$ satisfies these conditions, as does the function $\sqrt{f(t)}$. Note also that some of the conditions above are redundant. In particular, it follows from the other conditions that $g(x)$ and $x/g(x)$ are strictly increasing.

Let $X \in \mathcal X$ and $y \in X$. For every $n \ge 1$, let
$$\|y\|_{(n)} = \sup \sum_{i=1}^n \|E_iy\|$$
where the supremum is extended to all families $E_1 < \dots < E_n$ of successive intervals. This quantity is clearly increasing with $n$, and $\|y\| = \|y\|_{(1)}$ since $(e_n)$ is a bimonotone basis for the space $X$. Observe that the basis $(e_i)$ satisfies $\|e_i\|_{(n)} = 1$ for every $n \ge 1$.

Lemma 11.1. Let $X \in \mathcal X$ satisfy a lower $f$-estimate. Given $n \ge 1$ and $\varepsilon > 0$, there exists an integer $N(n, \varepsilon)$ such that for every sequence $x_1, \dots, x_N$ of successive norm one vectors with $N \ge N(n, \varepsilon)$, we may find $x$ of the form $x = \lambda \sum_{i\in A} x_i$, where $A$ is some subinterval of $\{1, \dots, N\}$, such that $\|x\| = 1$ and $\|x\|_{(n)} \le 1 + \varepsilon$.

The proof of this Lemma uses a variant of a well known blocking procedure for constructing $\ell_1^n$, originating in James [J3]; let us also mention Giesy [Gi], Pisier [P1], and much more elaborate results of Elton [E] (Pajor [Pa] in the complex case).

Corollary 11.1. Let $X \in \mathcal X$ satisfy a lower $f$-estimate. Then for every $n \in \mathbb N$ and $\varepsilon > 0$, every subspace $Y$ of $X$ contains a vector $y$ such that $\|y\| = 1$ and $\|y\|_{(n)} \le 1 + \varepsilon$.

Proof. By the standard gliding hump procedure, we may find for every $N$ a normalized sequence $y_1, \dots, y_N$ of vectors in $Y$ and successive vectors $x_1 < \dots < x_N$ in $X$ such that $\|y_i - x_i\| < \varepsilon/nN$. The result follows from Lemma 11.1 and an easy approximation argument.

Given a subspace $Y \subset X$, we will be interested in a seminorm $|||.|||$ defined on $L(Y, X)$ as follows. We say that a sequence $(x_n)$ is a sequence of almost successive vectors if there exists a sequence $(y_n)$ of successive vectors such that $\lim_n \|x_n - y_n\| = 0$. Let $\mathcal M_Y$ be the set of sequences $(x_n)_{n=1}^\infty$ of almost successive vectors in $Y$ such that $\limsup_n \|x_n\|_{(n)} \le 1$. Now, given $T \in L(Y, X)$, let

$$|||T||| = \sup\bigl\{\limsup_n \|Tx_n\| : (x_n) \in \mathcal M_Y\bigr\}.$$
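The auxiliary quantity $\|y\|_{(n)}$ introduced above can also be evaluated on finitely supported vectors by brute force. The following Python sketch (an illustration only; it assumes the s_norm routine from the earlier sketch is in scope and that the norm is bimonotone, as for all spaces in $\mathcal X$) does this by trying all splittings of the range into successive intervals.

    from itertools import combinations

    def seminorm_n(norm, c, n):
        """sup of sum_{i<=n} ||E_i y|| over successive intervals E_1 < ... < E_n,
        for a vector given by its coefficients c (a tuple) on consecutive coordinates."""
        m = len(c)
        k = min(n, m)               # at most m nonempty pieces can contribute
        best = 0.0
        for cuts in combinations(range(1, m), k - 1):
            b = (0,) + cuts + (m,)
            best = max(best, sum(norm(c[b[i]:b[i + 1]]) for i in range(k)))
        return best

    # for the Schlumprecht norm: ||e_1 + e_2||_(2) = 2, while ||e_1 + e_2|| is about 1.26
    print(seminorm_n(s_norm, (1.0, 1.0), 2))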

Suppose that $X$ is reflexive and that $(x_n)$ is a weakly null sequence in a subspace $Y$ such that $\limsup_n \|x_n\|_{(m)} \le 1$ for every integer $m$. Let $T \in L(Y, X)$ and $t = \limsup \|Tx_n\|$. We can find a subsequence $(x'_n)$ such that $t = \lim_n \|Tx'_n\|$. Since $(x'_n)$ is weakly null, we can extract a further subsequence $(x''_n)$ which is almost successive, and we may also arrange that $\|x''_n\|_{(n)} \le 1 + 2^{-n}$. Now $(x''_n)$ belongs to $\mathcal M_Y$, therefore $t = \lim \|Tx''_n\| \le |||T|||$.

Let us say the same thing in a slightly different way: let $P_m$ denote the projection on the interval $\{1, \dots, m\}$; for every $T \in L(Y, X)$ and for every $\varepsilon > 0$, there exist integers $m, n \ge 1$ such that, for every $y \in Y$, the condition $\|P_ny\| \le 1/n$ implies that $\|Ty\| \le (|||T||| + \varepsilon)\|y\|_{(m)}$.

Lemma 11.2. Suppose that $X \in \mathcal X$ satisfies a lower $f$-estimate; then for every subspace $Y$ of $X$ and every $T \in L(Y, X)$:
if $|||T||| = 0$, then $T$ is strictly singular;
if $X$ is reflexive and $T$ compact, then $|||T||| = 0$;
if for every $z$ in some infinite dimensional subspace $Z$ of $Y$ we have $\|Tz\| \ge \|z\|$, then $|||T||| \ge 1$.

Proof. By the preceding Lemma, every subspace $Y$ contains for every $\varepsilon > 0$ normalized sequences in $\mathcal M_Y$. Hence every (infinite dimensional) subspace of $Y$ contains a norm one vector $x$ such that $\|Tx\| \le (1 + \varepsilon)|||T|||$. As a consequence, for every $U \in L(Y, X)$, we see that $s(U) \le |||U|||$ (where $s(U)$ was defined in section 6); in particular, if $|||T||| = 0$, then $T$ is strictly singular. Suppose that $X$ is reflexive; then every normalized sequence of almost successive vectors is weakly null, therefore $\lim \|Tx_n\| = 0$ if $T$ is compact, hence $|||T||| = 0$. Lastly, suppose that $\|Tz\| \ge \|z\|$ for every $z$ in some infinite dimensional subspace $Z$ of $Y$. We know from Corollary 11.1 that $Z$ contains a normalized sequence $(z_n)$ of almost successive vectors with $\lim \|z_n\|_{(n)} = 1$. By definition,
$$|||T||| \ge \lim_n \|Tz_n\| \ge 1.$$

Remark. If $U \in L(X)$, then $rb(U) \le s(U) \le |||U|||$. If $T - \lambda I_X$ is infinitely singular, we see that $|\lambda| \le |||T|||$.

The second ingredient inspired by Schlumprecht is that of Rapidly Increasing Sequences, in short RIS. Let $X \in \mathcal X$. For $0 < \varepsilon \le 1$, we say that a sequence $x_1, \dots, x_N$ of successive vectors in $X$ satisfies the RIS($\varepsilon$) condition if there is a sequence
$$2^{N^2/\varepsilon^2} < n_1 < \cdots < n_N$$
of integers such that $\|x_i\|_{(n_i)} \le 1$ for each $i = 1, \dots, N$ and
$$\varepsilon\sqrt{f(n_i)} > \Bigl|\operatorname{ran}\Bigl(\sum_{j=1}^{i-1} x_j\Bigr)\Bigr|$$
for every $i = 2, \dots, N$.

Given $g \in \mathcal F$, $M \in \mathbb N$ and $X \in \mathcal X$, an $(M, g)$-form on $X$ is defined to be a functional $x^*$ of norm at most one which can be written as $\sum_{j=1}^M x_j^*$ for a sequence $x_1^* < \dots < x_M^*$ of successive functionals all of which have norm at most $g(M)^{-1}$. Observe that if $x^*$ is an $(M, g)$-form then $\|x^*\|_\infty \le 1/g(M)$ and $|x^*(x)| \le g(M)^{-1}\|x\|_{(M)}$ for any $x$.

Lemma 11.3. Let $X \in \mathcal X$. Suppose that $(x_1, \dots, x_N)$ satisfies RIS($\varepsilon$) in $X$. If $g \in \mathcal F$, $\sqrt f \le g$ and if $x^*$ is a $(k, g)$-form on $X$, we have
$$|x^*(x_1 + \cdots + x_N)| \le \max_{j=1,\dots,N}\|x_j\| + \varepsilon + \frac{N}{g(k)}.$$
In particular, $|x^*(x_1 + \cdots + x_N)| \le \max_j \|x_j\| + 2\varepsilon$ when $k \ge 2^{N^2/\varepsilon^2}$.

Proof. Let $n_1 < n_2 < \dots < n_N$ be the sequence of integers associated to the RIS property. Let $i \in \{1, \dots, N\}$ be such that $n_i < k \le n_{i+1}$. Observe that the RIS condition implies $\|x_j\|_\infty \le 1$ for each $j = 1, \dots, N$. The result follows from three easy inequalities,
$$\Bigl|x^*\Bigl(\sum_{j=1}^{i-1} x_j\Bigr)\Bigr| \le \|x^*\|_\infty\,\Bigl|\operatorname{ran}\Bigl(\sum_{j=1}^{i-1} x_j\Bigr)\Bigr| \le \frac{1}{g(k)}\,\varepsilon\sqrt{f(n_i)} \le \varepsilon,$$
$$|x^*(x_i)| \le \|x_i\| \le \max_{j=1,\dots,N}\|x_j\|,$$
and, for $j \ge i + 1$,
$$|x^*(x_j)| \le \frac{1}{g(k)}\|x_j\|_{(k)} \le \frac{1}{g(k)}\|x_j\|_{(n_j)} \le \frac{1}{g(k)}.$$

When $k \ge 2^{N^2/\varepsilon^2}$, we get $g(k) \ge \sqrt{f(k)} \ge N/\varepsilon$.

The next Lemma is a variation of a main Lemma due to Schlumprecht. We already mentioned that in the case of the space constructed by Schlumprecht, this Lemma says that the norm of the sum of RIS sequences has an almost minimal behaviour. Our situation is technically more complicated; the space we want to construct will satisfy a lower $f$-estimate, but in some parts of our space the behaviour of RIS will be larger than $n/f(n)$, namely it could sometimes be as big as $n/\sqrt{f(n)}$. We need a more general statement that allows us to play between the two possibilities. To this end we introduced the family $\mathcal F$ of functions. The next Lemma is similar to Lemma 3 from [GM2] or Lemma 7 from [GM1].

Lemma 11.4. Let $X \in \mathcal X$, $g \in \mathcal F$, $\sqrt f \le g$, and let $x_1 < \dots < x_n$ in $X$ satisfy $\|x_i\|_{(p^n)} \le 1$ for every $i = 1, \dots, n$ and some integer $p \ge 2$. Let $x = \sum_{i=1}^n x_i$ and suppose that
$$\|Ex\| \le 1 \vee \sup\bigl\{|x^*(Ex)| : x^* \text{ is a } (k, g)\text{-form},\ 2 \le k \le p\bigr\}$$
for every interval $E$. Then $\|x\| \le n\,g(n)^{-1}$.

Proof. Let $G(t) = t/g(t)$ when $t \ge 1$ and $G(t) = t$ when $0 \le t \le 1$. This function $G$ is concave and increasing on $[0, +\infty)$. For every interval $E$ and every integer $l \ge 0$, let
$$\sigma_l(E) = \sum_{i=1}^n \|Ex_i\|_{(p^l)}.$$
We shall prove by induction on $l$, $1 \le l \le n$, that
$$(*)\qquad \|Ex\| \le G(\sigma_{\kappa(E)}(E))\quad\text{whenever } \kappa(E) \le l,$$
where $\kappa(E)$ is the number of $i \in \{1, \dots, n\}$ such that $Ex_i \ne 0$ (if $\kappa(E) = 0$, then $Ex = 0$ and this case is obvious). Once this is done, we obtain the result for $l = n$, $E = \operatorname{ran}(x)$,
$$\|x\| \le G(\sigma_n(\operatorname{ran}(x))) = G\Bigl(\sum_{i=1}^n \|x_i\|_{(p^n)}\Bigr) \le G(n) = \frac{n}{g(n)}.$$

Observe first that when $\|Ex\| \le 1$, we have $\|Ex\| = G(\|Ex\|) \le G\bigl(\sum_{i=1}^n \|Ex_i\|\bigr) \le G(\sigma_l(E))$. This shows that $(*)$ is true when $\kappa(E) = 1$. Assume $(*)$ true when $\kappa(E) \le l < n$, and suppose there exists an interval $E$ such that $\kappa(E) = l + 1$ and $\|Ex\| > G(\sigma_{l+1}(E))$; since $(*)$ is not true for $E$ we know that $l \ge 1$ and $\|Ex\| > 1$. From our assumption there exists a $(k, g)$-form $x^* = \bigl(\sum_{j=1}^k A_jx_j^*\bigr)/g(k)$, with $2 \le k \le p$, $\|x_j^*\| \le 1$ and $A_1 < \dots < A_k$, such that
$$G(\sigma_{l+1}(E)) < |x^*(Ex)|.$$

Assume first that $\kappa(A_jE) \le l$ for every $j = 1, \dots, k$. We have $\|A_jEx\| \le G(\sigma_l(A_jE))$ by the induction hypothesis, and using the concavity of $G$ we obtain
$$|x^*(Ex)| \le \frac{k}{g(k)}\,\frac1k\sum_{j=1}^k G(\sigma_l(A_jE)) \le \frac{k}{g(k)}\,G\Bigl(\frac1k\sum_{j=1}^k \sigma_l(A_jE)\Bigr)$$
$$= \frac{k}{g(k)}\,G\Bigl(\frac1k\sum_{i=1}^n\sum_{j=1}^k \|A_jEx_i\|_{(p^l)}\Bigr) \le \frac{k}{g(k)}\,G\Bigl(\frac1k\sum_{i=1}^n \|Ex_i\|_{(p^{l+1})}\Bigr) = \frac{k}{g(k)}\,G\Bigl(\frac{\sigma_{l+1}(E)}{k}\Bigr).$$

If $\sigma_{l+1}(E) \le k$, this last expression is $\sigma_{l+1}(E)/g(k) \le \sigma_{l+1}(E)/g(\sigma_{l+1}(E)) = G(\sigma_{l+1}(E))$, otherwise it is equal to
$$\frac{\sigma_{l+1}(E)}{g(k)\,g(\sigma_{l+1}(E)/k)} \le \frac{\sigma_{l+1}(E)}{g(\sigma_{l+1}(E))} = G(\sigma_{l+1}(E)),$$
so that we have reached a contradiction.

In the remaining case there exists $j_0 \in \{1, \dots, k\}$ such that $A_{j_0}Ex_i \ne 0$ for every $i$ such that $Ex_i \ne 0$. Assume for example $j_0 < k$ (otherwise $1 < j_0$ deserves a similar treatment). Let $m$ be the last integer $i$ such that $Ex_i \ne 0$. Let $B_{j_0} = A_{j_0} \setminus \operatorname{ran}(Ex_m)$, $B'_{j_0+1} = A_{j_0} \cap \operatorname{ran}(Ex_m)$, $B''_{j_0+1} = A_{j_0+1}$, $B_{j_0+1} = B'_{j_0+1} \cup B''_{j_0+1}$ and $B_j = A_j$ otherwise. We see that
$$\|A_{j_0}Ex\| + \|A_{j_0+1}Ex\| \le \|B_{j_0}Ex\| + \|B'_{j_0+1}Ex_m\| + \|B''_{j_0+1}Ex_m\| \le \|B_{j_0}Ex\| + \|B_{j_0+1}Ex_m\|_{(2)}.$$

Every $B_j$ satisfies $\kappa(B_jE) \le l$, so that the induction hypothesis applies and since $p^l \ge 2$ we obtain
$$\sum_{j=1}^k \|A_jEx\| \le \sum_{j\ne j_0}\|B_jEx\| + \|B_{j_0+1}Ex_m\|_{(2)} \le \sum_{j\ne j_0} G(\sigma_l(B_jE)) + G(\sigma_l(B_{j_0+1}E)),$$

and the conclusion follows as before. The next simple Lemma is useful in conjunction with the preceding.

Lemma 11.5. Let $X \in \mathcal X$, and let $x_1 < \dots < x_l$ in $X$ be such that
$$\|x_i\| \le 1; \qquad \Bigl\|\sum_{i\in A} x_i\Bigr\| \le \frac{|A|}{f(|A|)}$$
for every interval $A \subset \{1, \dots, l\}$ such that $m \le |A| \le l$. Then for every integer $n \ge 1$
$$\frac{f(l)}{l}\,\Bigl\|\sum_{i=1}^l x_i\Bigr\|_{(n)} \le \frac{f(l)}{f(m)} + \frac{2nmf(l)}{l}.$$

Proof. Let $x = \sum_{i=1}^l x_i$ and let $(E_j)$ be a sequence of $n$ successive intervals. By adding at most $n$ cuts, we may assume that we have a family $(E_j)$ of at most $2n$ intervals and that for every $j = 1, \dots, 2n$ there exists an interval $A_j \subset \{1, \dots, l\}$ such that $E_jx = \sum_{i\in A_j} x_i$. Let $J = \{j : |A_j| \ge m\}$. We get
$$\sum_{j=1}^{2n}\|E_jx\| = \sum_{j\notin J}\|E_jx\| + \sum_{j\in J}\|E_jx\| \le 2nm + \sum_{j\in J}\frac{|A_j|}{f(|A_j|)},$$
and the result follows.

Proper semi-groups of spreads.

Given two infinite subsets $A = \{a_1, a_2, \dots\}$ and $B = \{b_1, b_2, \dots\}$ of $\mathbb N$, define the spread from $A$ to $B$ to be the map $S_{A,B}$ on $c_{00}$ that sends $e_n$ to zero if $n \notin A$, and sends $e_{a_k}$ to $e_{b_k}$ for every $k \in \mathbb N$. $S_{A,A}$ is just the projection onto $A$. Note that $S_{B,C}S_{A,B} = S_{A,C}$. Note also that $S_{B,A}$ is (formally) the adjoint of $S_{A,B}$ (see the sketch after Corollary 11.2 below). Given any set $S$ of spreads, we shall say that it is a proper set if it is closed under composition (note that this applies to all compositions and not just those of the form $S_{B,C}S_{A,B}$) and under taking adjoints; we also make in [GM2] a technical assumption which means roughly that our semi-group is rather small: for every $(i, j) \ne (k, l)$, there are only finitely many spreads $U \in S$ for which $e_i^*(Ue_j) \ne 0$ and $e_k^*(Ue_l) \ne 0$. Note that a proper semi-group is countable. A good example of such a set is the collection of all spreads $S_{A,B}$ where $A = \{m, m+1, m+2, \dots\}$ and $B = \{n, n+1, n+2, \dots\}$ for some $m, n \in \mathbb N$. This is the proper set generated by the shift operator.

The next theorem is the main result of [GM2].

Theorem 11.1. Given a proper set $S$ of spreads, there exists a Banach space $X = X(S) \in \mathcal X$ such that:
1. The space $X$ is reflexive and satisfies a lower $f$-estimate (and the natural basis $(e_n)$ of $c_{00}$ is a bimonotone basis for $X$ by definition of the class $\mathcal X$).
2. $\|U\| \le 1$ for every $U \in S$. It follows that $\|S_{A,B}x\| = \|x\|$ when $S_{A,B} \in S$ and $\operatorname{supp} x \subset A$.
3. $|||TU||| \le |||T|||\,|||U|||$ for all $T, U \in L(X)$.
4. Let $A$ be the algebra generated by $S$. For every subspace $Y$ of $X$, every $\varepsilon > 0$ and every $T \in L(Y, X)$, there exists $U \in A$ such that $|||T - U \circ i_{Y,X}||| < \varepsilon$, where $i_{Y,X}$ denotes the injection from $Y$ into $X$.

Recall some facts from the general discussion: property 1 implies that every subspace $Y \subset X$ contains normalized sequences in $\mathcal M$. Since $X$ is reflexive with a basis, normalized sequences of (almost) successive elements are weakly null, in particular sequences in $\mathcal M$ are weakly null. It follows that $|||K||| = 0$ when $K$ is compact from $Y \subset X$ to $X$. If $T \in L(Y, X)$ and $|||T||| = 0$, then $T$ is strictly singular. We have seen that $rb(U) \le s(U) \le |||U|||$.

Corollary 11.2. All spaces $X = X(S)$ from Theorem 11.1 have the property that they do not contain any UBS.

Proof. This is because property 4 immediately implies that for every subspace $Y$ of $X$, $L(Y)$ is separable for $|||.|||$; on the other hand, if $X$ contains an infinite dimensional subspace $Y$ with unconditional basis $(y_n)$, then we can find an uncountable set $\mathcal P$ of projections in $L(Y)$ such that $|||P - Q||| \ge 1$ when $P \ne Q$ in $\mathcal P$: for every infinite set $L \subset \mathbb N$, let $Y_L$ denote the span $[y_n]_{n\in L}$ of the corresponding subsequence, and let $P_L$ denote the corresponding projection from $Y$ onto $Y_L$. If $L$ and $M$ are two subsets of $\mathbb N$ with infinite difference $D$, we obtain that $\|(P_L - P_M)z\| \ge \|z\|$ for every $z$ in $Y_D$, hence $|||P_L - P_M||| \ge 1$ by Lemma 11.2. Finally, it is well known that we may find uncountably many infinite subsets of $\mathbb N$ with pairwise infinite differences.

These examples are not necessarily H.I. spaces however. They give a good illustration for the dichotomy result of Tim Gowers (Theorem 10.2).
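Returning for a moment to the elementary definition of spreads at the beginning of this subsection, the following Python sketch (an illustration only, not part of [GM2]) implements $S_{A,B}$ on finitely supported vectors and checks the composition and adjoint identities stated there.

    def spread(A, B, x):
        """S_{A,B} applied to a finitely supported vector x (dict {n: coeff}).
        A and B are increasing lists enumerating enough of the two infinite sets."""
        k_of = {a: k for k, a in enumerate(A)}            # a_k -> k
        y = {}
        for n, c in x.items():
            if n in k_of and k_of[n] < len(B):            # e_{a_k} -> e_{b_k}, others -> 0
                y[B[k_of[n]]] = y.get(B[k_of[n]], 0) + c
        return y

    def dot(x, y):
        return sum(c * y.get(n, 0) for n, c in x.items())

    A, B, C = [1, 3, 5, 7, 9], [2, 4, 6, 8, 10], [1, 2, 3, 4, 5]
    x, z = {3: 1.0, 5: -2.0, 6: 4.0}, {2: 1.0, 4: 7.0}
    assert spread(B, C, spread(A, B, x)) == spread(A, C, x)     # S_{B,C} S_{A,B} = S_{A,C}
    assert spread(A, A, x) == {3: 1.0, 5: -2.0}                 # S_{A,A} is the projection onto A
    assert dot(spread(A, B, x), z) == dot(x, spread(B, A, z))   # S_{B,A} is the adjoint of S_{A,B}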

It follows from Gowers' dichotomy theorem that every Banach space $X$ not containing an UBS has the property that every subspace $Y$ of $X$ contains an H.I. subspace. By the Corollary above this is the case for all the examples given by Theorem 11.1. It is possible actually to see this directly by a reasoning close to the proof of the Corollary. Suppose that $X = X(S)$ for some proper set of spreads, and that $Y \subset X$ contains no H.I. subspace.

We can build by a standard ordinal construction a family $(Y_\alpha)_{\alpha<\omega_1}$ of subspaces of $Y$ in the following way: since $Y_\alpha$ is not H.I. we can find a direct sum $U_\alpha \oplus V_\alpha$ in $Y_\alpha$. Let $T_\alpha$ be $\mathrm{Id}$ on $U_\alpha$ and $0$ on $V_\alpha$. We choose then $Y_{\alpha+1} = U_\alpha$; for a limit ordinal, we construct $Y_\beta$ by a diagonal procedure in such a way that $Y_\beta$ is almost contained in the preceding spaces, up to finite dimension. If $\alpha < \beta$, then $T_\beta - T_\alpha = \mathrm{Id}$ on a finite codimensional subspace of $V_\beta$, hence $|||T_\alpha - T_\beta||| \ge 1$ by ???. On the other hand there should exist by Theorem 11.1, for every ordinal $\alpha < \omega_1$, an $A_\alpha \in A$ such that $|||A_\alpha - T_\alpha||| \le 1/4$, and this contradicts the separability of $A$.

We start now the construction of the spaces $X(S)$ and the proof of Theorem 11.1. We introduce a lacunary subset $J$ of $\mathbb N$, which we split into two disjoint parts $K$ and $L$. Let $J \subset \mathbb N$ be a set such that, if $m < n$ and $m, n \in J$, then $\log\log\log n \ge 2m$. Let us write $J$ in increasing order as $\{j_1, j_2, \dots\}$. We also need $f(j_1) > 256$. Now let $K, L \subset J$ be the sets $\{j_1, j_3, j_5, \dots\}$ and $\{j_2, j_4, j_6, \dots\}$.

We mentioned before the statement of Theorem 11.1 two important ingredients of the construction. The third and last important ingredient is the notion of special sequence. Let us recall the definition from [GM1] of the special functionals on a space $X \in \mathcal X$. Let $Q \subset c_{00}$ be the (countable) set of sequences with rational coordinates and maximum at most $1$ in modulus. Let $\sigma$ be an injection from the collection of finite sequences of successive elements of $Q$ to the set $L$ introduced above. Given $X \in \mathcal X$ such that $X$ satisfies a lower $f$-estimate and given an integer $m \in \mathbb N$, let $A_m^*(X)$ be the set of $(m, f)$-forms on $X$, i.e. the set of all functionals $x^*$ of the form $x^* = f(m)^{-1}\sum_{i=1}^m x_i^*$, where $x_1^* < \dots < x_m^*$ and $\|x_i^*\| \le 1$ for each $i = 1, \dots, m$. Note that these functionals have norm at most $1$ by the lower $f$-estimate. If $k \in \mathbb N$, let $\Gamma_k^X$ be the set of sequences $y_1^* < \dots < y_k^*$ such that $y_i^* \in Q$ for each $i$, $y_1^* \in A_{j_{2k}}^*(X)$ and $y_{i+1}^* \in A_{\sigma(y_1^*,\dots,y_i^*)}^*(X)$ for each $1 \le i \le k - 1$. We call these special sequences. Let $B_k^*(X)$ be the set of all functionals $y^*$ of the form
$$y^* = \frac{1}{\sqrt{f(k)}}\sum_{j=1}^k g_j$$
such that $(g_1, \dots, g_k) \in \Gamma_k^X$ is a special sequence. These, when $k \in K$, are the special functionals (on $X$, of size $k$). Note that if $g \in \mathcal F$ and $g(k) = f(k)^{1/2}$, then a special functional $y^*$ of size $k$ is also a $(k, g)$-form, and the same is true for every $EUy^*$, for every interval $E$ and every $U \in S$. The idea behind this notion of special functionals is that their normalization is very different from the usual normalization of functionals obtained by the "Schlumprecht operation" $(x_1^* + \cdots + x_n^*)/f(n)$, so they produce "spikes" in the unit ball of $X^*$, but they are extremely rare and easily identified: a relatively weak piece of information about a part of a special functional $f(k)^{-1/2}\sum_{j=1}^k g_j$, namely knowing simply the integer $l \in L$ such that $g_j \in A_l^*$, allows us to trace back all the past of its construction, since there is at most one sequence $(g_1, \dots, g_{j-1})$ such that $l = \sigma(g_1, \dots, g_{j-1})$.

Now, given a proper set $S$ of spreads, we define the space $X(S)$ inductively. It is the completion of $c_{00}$ in the smallest norm satisfying the following equation:
$$\|x\| = \|x\|_{c_0} \vee \sup\Bigl\{f(n)^{-1}\sum_{i=1}^n \|E_ix\| : 2 \le n \in \mathbb N,\ E_1 < \dots < E_n \text{ intervals}\Bigr\}$$
$$\vee\ \sup\{|x^*(Ex)| : k \in K,\ x^* \in B_k^*(X),\ E \subset \mathbb N \text{ an interval}\}\ \vee\ \sup\{\|Ux\| : U \in S\}.$$

In the case $S = \{\mathrm{Id}_{c_{00}}\}$ the fourth term drops out and the definition reduces to that of the space constructed in [GM1]. The fourth term is there to force $X(S)$ to have property 2 claimed in Theorem 11.1. The second term ensures that $X$ satisfies a lower $f$-estimate. It is also not hard to show that $X(S)$ is reflexive. (A proof can be found in [GM1], end of section 3, which works in this more general context.)

It is also useful to understand the construction of $X$ in a way similar to what we said about $T$ and $S$: we construct a dense subset of the unit ball $B_{X^*}$ in a sequence of steps, producing an increasing sequence $(B_n)$ of convex subsets of $B_{c_0}$. We start with $B_0 = B_{\ell_1} \cap c_{00}$. If $B_n$ is defined, we add to it
— all $(m, f)$-forms $x^* = f(m)^{-1}\sum_{j=1}^m x_j^*$ using elements $x_j^*$ from $B_n$, for any integer $m \ge 2$;
— all functionals $\lambda EUx^*$ where $|\lambda| = 1$, $E$ is an interval, $U \in S$ and $x^*$ is any special functional $x^* = \bigl(\sum_{j=1}^k g_j\bigr)/\sqrt{f(k)}$ with $g_j \in B_n$ for $j = 1, \dots, k$;
we let finally $B_{n+1}$ be the convex hull of the union of $B_n$ and of the set of all these new functionals. We let $B = \bigcup_n B_n$ and we can see that the above defined norm is equal to $\|x\| = \sup\{|x^*(x)| : x^* \in B\}$.

If we want to compute the norm of $x \in X$, either $\|x\| = \|x\|_{c_0}$ or, given $\varepsilon > 0$ such that $\|x\|_{c_0} < \|x\| - \varepsilon$, there exists a first $n \ge 0$ such that $|x^*(x)| > \|x\| - \varepsilon$ for some $x^*$ that was adjoined to $B_n$ in the construction of $B_{n+1}$, namely either an $(m, f)$-form or some $EUy^*$, with $y^*$ a special functional and $U \in S$. It should be observed that if $g \in \mathcal F$ is such that $g = \sqrt f$ on $K$, then all these functionals of the form $EUy^*$ are $(k, g)$-forms for some $k \ge 2$ (observe that the images of successive functionals by a spread are still successive).

The next technical Lemma is taken from [GM1]. It is just a painful exercise using only elementary calculus.

Lemma 11.6. Let $K_0 \subset K$. There exists a function $g \in \mathcal F$ such that $f \ge g \ge \sqrt f$, $g(k) = \sqrt{f(k)}$ whenever $k \in K_0$ and $g(x) = f(x)$ whenever $N \in J \setminus K_0$ and $x$ is in the interval $[\log N, \exp N]$.

Lemma 11.7. Let $0 < \varepsilon \le 1$, $M \in L$ and let $N$ be such that $N \in [\log M, \exp M]$. Assume that $x_1, \dots, x_N$ satisfies the RIS($\varepsilon$) condition and let $x = x_1 + \cdots + x_N$. Then
$$\|(f(N)/N)x\| \le 1 + 2\varepsilon.$$
Assume further that $0 \le \delta < 1$, and let $n$ be such that $N/n \in [\log M, \exp M]$ and $f(N) \le (1 + \delta)f(N/n)$. Then $\|(f(N)/N)x\|_{(n)} \le (1 + \delta)(1 + 3\varepsilon)$.

Proof. Let $g$ be the function given by Lemma 11.6 in the case $K_0 = K$. It is clear that every vector $Ex$ such that $\|Ex\| > 1$ is normed by a $(k, g)$-form for some $k \ge 2$; furthermore, if $k \ge 2^{N^2/\varepsilon^2}$, then $|x^*(x)| \le 1 + 2\varepsilon$ by Lemma 11.3, so the conditions of Lemma 11.4 are satisfied for $x_i' = (1 + 2\varepsilon)^{-1}x_i$, and thus $\|\sum_{i=1}^N x_i'\| \le N/g(N)$. Since $g(N) = f(N)$ we obtain the first estimate. The second follows by Lemma 11.5. ?????

We start now the proof of assertion 3 in Theorem 11.1. Let $\widetilde X_0$ be the weakly null part of $\widetilde X$, consisting of all classes $\tilde x$ of weakly null sequences $(x_n)$ in $X$. Similarly we shall consider $\widetilde Y_0$ for every subspace $Y$ of $X$. Let $\Xi_0(Y)$ be the space of finitely supported sequences of elements of $\widetilde Y_0$. We shall use the following notation,
$$(\tilde y_1, \dots, \tilde y_k, 0, 0, \dots) = \sum_{j=1}^k \tilde y_j \otimes f_j \in \Xi_0(X),$$
where $(f_n)$ is the canonical basis for the space of sequences (to avoid confusion, we chose a notation different from the previous $(e_n)$). We define a norm on $Y \oplus \Xi_0(Y)$ by induction on $k$,
$$\Bigl\|y + \sum_{j=1}^k \tilde y_j \otimes f_j\Bigr\| = \lim_{n,\mathcal U}\Bigl\|y + y_{1,n} + \sum_{j=2}^k \tilde y_j \otimes f_j\Bigr\|,$$
where $\tilde y_1 = (y_{1,n})$. We can write this directly with an iterated limit,
$$\Bigl\|y + \sum_{j=1}^k \tilde y_j \otimes f_j\Bigr\| = \lim_{n_1,\mathcal U}\dots\lim_{n_k,\mathcal U}\|y + y_{1,n_1} + \cdots + y_{k,n_k}\|.$$

We may also observe that for every integer $k \ge 1$, there exists an ultrafilter $\mathcal U^{\otimes k}$ on $\mathbb N^k$ defined by
$$A \in \mathcal U^{\otimes k} \iff \lim_{n_1,\mathcal U}\dots\lim_{n_k,\mathcal U}\mathbf 1_A(n_1, \dots, n_k) = 1.$$

For every integer $m \ge 1$ we extend the norm $\|.\|_{(m)}$ to $Y \oplus \Xi_0(Y)$ in the following way:
$$\Bigl\|y + \sum_{j=1}^k \tilde y_j \otimes f_j\Bigr\|_{(m)} = \lim_{n_1,\mathcal U}\dots\lim_{n_k,\mathcal U}\|y + y_{1,n_1} + \cdots + y_{k,n_k}\|_{(m)}.$$

Letting $n = (n_1, \dots, n_k)$ and $y_n = y + \sum_{j=1}^k y_{j,n_j}$ we may write this iterated limit as
$$\lim_{n,\mathcal U^{\otimes k}}\|y_n\|_{(m)}.$$

Observe that since $(y_{j,n})_n$ is weakly null for each $j$, the vectors $y_{1,n_1}, \dots, y_{k,n_k}$ are almost successive when $n_1 < \dots < n_k$ is lacunary enough, which always happens when we consider the iterated limit. It follows from the lower $f$-estimate that
$$\Bigl\|\sum_{j=1}^k \tilde y_j \otimes f_j\Bigr\| \ge \Bigl(\sum_{j=1}^k \|\tilde y_j\|\Bigr)/f(k).$$

For $\xi = \sum_j \tilde x_j \otimes f_j \in \Xi_0$, we define its support as $\{j : \tilde x_j \ne 0\}$. It is possible to generalize the above in the following way:

Lemma. If $\xi_1, \dots, \xi_k$ are successive in $\Xi_0$, then
$$\|\xi_1 + \cdots + \xi_k\| \ge \Bigl(\sum_{j=1}^k \|\xi_j\|\Bigr)/f(k).$$

Let $\Xi_Y$ be the completion of $Y \oplus \Xi_0(Y)$. We do the same for $X$, writing simply $\Xi$. We will work now with the triple norm. Let $T \in L(Y, X)$. Recall that for every $\varepsilon > 0$, there exists $m \ge 1$ such that
$$(*)\qquad \forall y \in Y,\quad (\|P_my\| \le 1/m) \Rightarrow \|Ty\| \le (|||T||| + \varepsilon)\|y\|_{(m)}.$$
For $\xi \in \Xi$ we define $|||\xi|||$ to be the (increasing) limit of $\|\xi\|_{(m)}$ when $m$ tends to $+\infty$; this limit may be $+\infty$. For every $T$ from $Y$ to $X$, we know that $\widetilde T(\widetilde Y_0) \subset \widetilde X_0$, and we define an operator $T_\Xi : \Xi_Y \to \Xi$ by the formula
$$T_\Xi\Bigl(y + \sum_{j=1}^k \tilde y_j \otimes f_j\Bigr) = Ty + \sum_{j=1}^k \widetilde T(\tilde y_j) \otimes f_j \in \Xi.$$

It is clear from the iterated limit formula that $\|T_\Xi\| = \|T\|$.

Lemma. Let $T \in L(Y, X)$. For every $\varepsilon > 0$, there exists an integer $m \ge 1$ such that
$$\forall \xi \in \Xi_0(Y),\quad \|T_\Xi(\xi)\| \le (|||T||| + \varepsilon)\|\xi\|_{(m)}.$$
It follows that $\|T_\Xi(\xi)\| \le |||T|||\,|||\xi|||$.

Proof. Given $\varepsilon > 0$, we find $m \ge 1$ such that $(*)$ holds. Let $\xi = \sum_{j=1}^k \tilde y_j \otimes f_j$ and let $\beta > \|\xi\|_{(m)}$. Let $\xi_n = \sum_{j=1}^k y_{j,n_j}$. Since each $\tilde y_j$ is weakly null, we get

$$\lim_{n,\mathcal U^{\otimes k}}\|P_m(\xi_n)\| = 0.$$

It is therefore possible to find $A \in \mathcal U^{\otimes k}$ such that $\|P_m(\xi_n)\| \le \beta/m$ for every $n \in A$. There exists $B \in \mathcal U^{\otimes k}$ such that $\|\xi_n\|_{(m)} \le \beta$ for every $n \in B$. It follows by $(*)$ that $\|T(\xi_n)\| \le (|||T||| + \varepsilon)\beta$ for $n \in A \cap B$, hence $\|T_\Xi(\xi)\| \le (|||T||| + \varepsilon)\|\xi\|_{(m)}$.

Let $V(L) = \bigcup_{N\in L}[\log N, \exp N]$. Let $\widetilde N_Y$ be the part of $\widetilde Y_0$ consisting of all $\tilde x$ such that

$$\frac{f(l)}{l}\Bigl(\sum_{j=1}^l \tilde x \otimes f_j\Bigr)$$
is bounded when $l \in V(L)$. For every $\tilde x \in \widetilde N$ we consider the family
$$m(\tilde x) = \Bigl(\frac{f(l)}{l}\sum_{j=1}^l \tilde x \otimes f_j\Bigr)_{l\in L}$$
as representing a new vector in a further ultrapower $\widetilde Y_\omega$ of $\Xi_Y$, where the index set is $L$. By the lower $f$-estimate, we have $\|m(\tilde x)\| \ge \|\tilde x\|$. If $\tilde x \in \widetilde{\mathcal M}$, then $\|m(\tilde x)\| \le 1$ by ???, so that
$$1 \ge \|m(\tilde x)\| \ge \|\tilde x\|.$$
Given $T \in L(Y, X)$, we can extend $T_\Xi$ to an operator $T_\omega \in L(\widetilde Y_\omega, \widetilde X_\omega)$ in the usual way. Then $T_\omega(m(\tilde x)) = m(\widetilde T\tilde x)$.

Lemma. Let $T \in L(Y, X)$. Then
$$\forall \tilde y \in \widetilde Y_0,\quad |||m(\widetilde T\tilde y)||| \le |||T|||\,|||\tilde y|||.$$

Proof. Let $\varepsilon > 0$ and $m \ge 1$ satisfy $(*)$. For every $l \in V(L)$ let
$$\xi_l = \frac{f(l)}{l}\sum_{j=1}^l \tilde y \otimes f_j.$$

Suppose $|||\tilde y||| \le 1$. Let $N \in L$ and let $M_1 > 2^{N^2/\varepsilon^2}$. We can find $A_1 \in \mathcal U$ such that $\|P_my_{n_1}\| < 1/m$ and $\|y_{n_1}\|_{(M_1)} \le 1$ for every $n_1 \in A_1$. For every $n_1 \in A_1$, let $M_2(n_1)$ be such that $\varepsilon\sqrt{f(M_2(n_1))} > |\operatorname{ran}(y_{n_1})|$; we can find $A_2(n_1) \in \mathcal U$ such that $\|P_{M_2(n_1)}y_{n_2}\| < \varepsilon/N$ and $\|y_{n_2}\|_{(M_2(n_1))} \le 1$ for every $n_2 \in A_2(n_1)$; continuing in this way we construct
$$A = \{(n_1, \dots, n_N) : n_j \in A_j(n_1, \dots, n_{j-1}),\ j = 2, \dots, N\} \in \mathcal U^{\otimes N}$$
such that $y_{n_1}, \dots, y_{n_N}$ is a small perturbation of a RIS($\varepsilon$) sequence whenever $(n_1, \dots, n_N) \in A$. This implies that
$$\Bigl\|\frac{f(l)}{l}\sum_{j=1}^l y_{n_j}\Bigr\|_{(m)} \le 1 + \varepsilon$$
when $\log N \le l \le N$ and $m$ ????? by Lemma 11.7, therefore by $(*)$ we get for every $l \in [\log N, N]$
$$\Bigl\|\frac{f(l)}{l}\sum_{j=1}^l Ty_{n_j}\Bigr\| \le |||T||| + \varepsilon.$$
We obtain by Lemma 11.5 that
$$\Bigl\|\frac{f(l)}{l}\sum_{j=1}^l Ty_{n_j}\Bigr\|_{(p)} \le |||T||| + \varepsilon$$
when $p = $ ????? Since this holds for every $n \in A$ we obtain $\|T_\Xi\xi_l\|_{(p)} \le |||T||| + \varepsilon$, so that finally $\|T_\omega m(\tilde x)\|_{(p)} \le |||T|||\,|||\tilde x|||$ for every $p \ge 1$.

With these elements it is easy to prove property 3 of Theorem 11.1. Suppose that we pick $\tilde y$ such that $|||\tilde y||| = 1$ and $\|\widetilde{ST}\tilde y\| = |||ST|||$.

We obtain
$$\|\widetilde{ST}\tilde y\| \le \|m(\widetilde{ST}\tilde y)\| = \|S_\omega(m(\widetilde T\tilde y))\| \le |||S|||\,|||m(\widetilde T\tilde y)||| \le |||S|||\,|||T|||\,|||\tilde y|||.$$

The next Lemma is the main part of the analysis in [GM2], where the properties of special functionals are fully used, as well as the structure of $S$. Note first that a proper set $S$ of spreads must be countable, and if we write it as $\{U_1, U_2, \dots\}$ and set $S_m = \{U_1, \dots, U_m\}$ for every $m$, then for any $x \in X(S)$, $x^* \in X(S)^*$, we have $\lim_m \sup\{|x^*(Ux)| : U \in S \setminus S_m\} \le \|x\|_\infty\|x^*\|_\infty$.

Lemma 11.8. Let $S$ be a proper set of spreads, let $X = X(S)$, let $Y \subset X$ be an infinite-dimensional subspace and let $T$ be a continuous linear operator from $Y$ to $X$. Let $S = \bigcup_{m=1}^\infty S_m$ be a decomposition of $S$ satisfying the condition just mentioned. Then for every $\varepsilon > 0$ there exists $m$ such that, for every $x \in Y$ such that $\|x\|_{(m)} \le 1$ and $\|P_mx\| \le 1/m$,
$$d(Tx,\ m\,\mathrm{conv}\{\lambda Ux : U \in S_m,\ |\lambda| = 1\}) \le \varepsilon.$$

Proof. We may also assume that $\|T\| \le 1$. Suppose that the result is false. Then, for some $\varepsilon > 0$, we can find a sequence $(y_n)_{n=1}^\infty$ with $y_n \in Y$, $\|y_n\|_{(n)} \le 1$ and $\|P_n(y_n)\| \le 1/n$ such that, setting $C_n = n\,\mathrm{conv}\{\lambda Uy_n : U \in S_n,\ |\lambda| = 1\}$, we have $d(Ty_n, C_n) > \varepsilon$, and we can also find a sequence $(E_n)$ of successive intervals such that if $z_n$ is any one of $y_n$, $Ty_n$ or $Uy_n$ for some $U \in S_n$ and $z_{n+1}$ is any one of $y_{n+1}$, $Ty_{n+1}$ or $Vy_{n+1}$ for some $V \in S_{n+1}$, then $\|(\mathbb N \setminus E_n)z_n\| \le \varepsilon 2^{-n}$ and $\|E_nz_{n+1}\| \le \varepsilon 2^{-n}$.

By the Hahn–Banach theorem, for every $n$ there is a norm-one functional $y_n^*$ such that
$$\sup\{|y_n^*(x)| : x \in C_n + \varepsilon B(X)\} < y_n^*(Ty_n).$$

It follows that $y_n^*(Ty_n) > \varepsilon$ and $\sup|y_n^*(C_n)| \le 1$. Therefore $|y_n^*(Uy_n)| \le n^{-1}$ for every $U \in S_n$. We may also assume that the support of $y_n^*$ is contained in $E_n$ (up to $1/n$) ???. (The case of complex scalars requires a standard modification.)

Given $N \in L$ define an $N$-pair to be a pair $(x, x^*)$ constructed as follows. Let $y_{n_1}, y_{n_2}, \dots, y_{n_N}$ be a subsequence of $(y_n)_{n=1}^\infty$ satisfying the RIS(1) condition, which implies that $n_1 > N^2$. Let $x = N^{-1}f(N)(y_{n_1} + \cdots + y_{n_N})$ and let $x^* = f(N)^{-1}(y_{n_1}^* + \cdots + y_{n_N}^*)$, where the $y_{n_i}^*$ are as above. Lemma 11.7 implies that $\|x\| \le 4$ and $\|x\|_{(\sqrt N)} \le 8$.

If $(x, x^*)$ is such an $N$-pair, then $x^* \in A_N^*(X)$ and, by our earlier assumptions about supports,
$$x^*(Tx) = N^{-1}\sum_{i=1}^N y_{n_i}^*(Ty_{n_i}) > \frac{\varepsilon}{2}.$$
Similarly, $|x^*(Ux)| \le N^{-2}$ for every $U \in S_N$.

Let $k \in K$ be such that $(\varepsilon/24)f(k)^{1/2} > 1$. We now construct sequences $x_1, \dots, x_k$ and $x_1^*, \dots, x_k^*$ as follows. Let $N_1 = j_{2k}$ and let $(x_1, x_1^*)$ be an $N_1$-pair. Let $M_2$ be such that $|x_1^*(Ux_1)| \le \|x_1\|_\infty\|x_1^*\|_\infty$ if $U \in S \setminus S_{M_2}$. The functional $x_1^*$ can be perturbed so that it is in $Q$ and so that $\sigma(x_1^*) > \max\{M_2, f^{-1}(4)\}$, while $(x_1, x_1^*)$ is still an $N_1$-pair.

In general, after $x_1, \dots, x_{i-1}$ and $x_1^*, \dots, x_{i-1}^*$ have been constructed, let $(x_i, x_i^*)$ be an $N_i$-pair such that all of $x_i$, $Tx_i$ and $x_i^*$ are supported (up to ???) after all of $x_{i-1}$, $Tx_{i-1}$ and $x_{i-1}^*$, and then perturb $x_i^*$ in such a way that, setting $N_{i+1} = \sigma(x_1^*, \dots, x_i^*)$, we have $|x_i^*(Ux_i)| \le \|x_i\|_\infty\|x_i^*\|_\infty$ whenever $U \in S \setminus S_{N_{i+1}}$, and we also have $f(N_{i+1}) > 2^{i+1}$ and $\sqrt{f(N_{i+1})} > 2\,|\operatorname{ran}(\sum_{j=1}^i x_j)|$.

Now let $x = (x_1 + \cdots + x_k)$ and let $x^* = f(k)^{-1/2}(x_1^* + \cdots + x_k^*)$. Our construction guarantees that $x^*$ is a special functional, and therefore of norm at most $1$. We therefore have
$$\|Tx\| \ge x^*(Tx) > \varepsilon k f(k)^{-1/2}.$$

Our aim is now to get an upper bound for kxk and to deduce an arbitrarily large lower bound for kT k. For this purpose we use Lemma 11.4.

Let $g$ be the function given by Lemma 11.6 in the case $K_0 = K \setminus \{k\}$. It is clear that all vectors $Ex$ are either normed by $(M, g)$-forms or by spreads of special functionals of length $k$, or they have norm at most $1$. In order to apply Lemma 11.4 with this $g$, it is therefore enough to show that $|U^*z^*(Ex)| = |z^*(UEx)| \le 1$ for any special functional $z^*$ of length $k$ and any $U \in S$. Let $z^* = f(k)^{-1/2}(z_1^* + \dots + z_k^*)$ be such a functional with $z_j^* \in A_{m_j}^*$. Suppose that $U \in S_{M+1} \setminus S_M$, and let $j$ be such that $N_j \le M < N_{j+1}$. Let $t$ be the largest integer such that $m_t = N_t$. Then $z_i^* = x_i^*$ for all $i < t$, because $\sigma$ is injective. For such an $i$, $|z_i^*(Ux_i)| = |x_i^*(Ux_i)| < N_i^{-2}$ if $M < N_i$. If $M \ge N_{i+1}$, then $U \notin S_{N_{i+1}}$, so $|x_i^*(Ux_i)| \le \|x_i\|_\infty\|x_i^*\|_\infty \le 2^{-i}$. If $N_i \le M < N_{i+1}$, the only remaining case, then $i = j$ and at least we know that $|z_i^*(Ux_i)| \le \|x_i\| \le 4$.

If $l \ne i$ or $l = i > t$, then $z_l^*(Ux_i) = U^*z_l^*(x_i)$, and we have $U^*z_l^* \in A_{m_l}^*$ for some $m_l$. Moreover, because $\sigma$ is injective and by definition of $t$, in both cases $m_l \ne N_i$. If $m_l < N_i$, then, as we remarked above, $\|x_i\|_{(\sqrt{N_i})} \le 8$, so the lower bound of $j_{2k}$ for $m_l$ tells us that $|U^*z_l^*(x_i)| \le k^{-2}$. If $m_l > N_i$, the same conclusion follows from Lemma ?. There are at most two pairs $(i, l)$ for which $0 \ne z_l^*(UEx_i) \ne z_l^*(Ux_i)$ and for such a pair $|z_l^*(UEx_i)| \le 1$.

Putting all these facts together, we get that $|z^*(UEx)| \le 1$, as desired. We also know that $(1/8)(x_1, \dots, x_k)$ satisfies the RIS(1) condition. Hence, by Lemma 11.4, $\|x\| \le 24kg(k)^{-1} = 24kf(k)^{-1}$. It follows that $\|T\| \ge (\varepsilon/24)f(k)^{1/2} > 1$, a contradiction.

We can now explain how the main assertion 4 of Theorem 11.1 follows by a fixed point argument. Suppose that $m(\tilde x) = m_1(\tilde x), m_2(\tilde x), \dots$ are successive copies of $m(\tilde x)$. For example,
$$m_2(\tilde x) = \Bigl(\frac{f(l)}{l}\sum_{j=1}^l \tilde x \otimes f_{l+j}\Bigr)_{l\in L}.$$

For every $l$, the vector
$$\xi_{1,l} + \cdots + \xi_{k,l} = \sum_{i=1}^k \frac{f(l)}{l}\sum_{j=1}^l \tilde x \otimes f_{(i-1)l+j}$$

is the sum of $kl$ successive vectors $\tilde x \otimes f_i$ in $\Xi_0$, therefore
$$\Bigl\|\sum_{j=1}^k \xi_{j,l}\Bigr\| \ge \frac{kl}{f(kl)}\,\frac{f(l)}{l}\,\|\tilde x\|$$
and $f(kl)/f(l)$ tends to $1$ for fixed $k$ when $l \to \infty$, so that
$$\|m_1(\tilde x) + \cdots + m_k(\tilde x)\| \ge k\|\tilde x\|.$$
More generally, if $\tilde x_1, \dots, \tilde x_k \in \widetilde N$, we get
$$\|m_1(\tilde x_1) + \cdots + m_k(\tilde x_k)\| \ge \sum_{j=1}^k \|\tilde x_j\|.$$

Indeed,...

Lemma. Suppose that $T \in L(Y, X)$ and that $m$ and $\varepsilon$ are as in the above Lemma. Let
$$A_m = \mathrm{conv}\{\lambda U : U \in S_m,\ |\lambda| = 1\}.$$
Then there exists $U \in A_m$ such that $|||T - U \circ i_{Y,X}||| < 8\varepsilon$.

Proof. If the Lemma is false, then for every $U \in A_m$ there is a sequence $\tilde x_U \in \widetilde Y_0$ such that $|||\tilde x_U||| \le 1$ and $\|\widetilde{(T - U)}\tilde x_U\| > 17\varepsilon$. Our first aim is to show that these $\tilde x_U$ can be chosen continuously in $U$. Let $(\mathcal U_j)_{j=1}^k$ be a covering of $A_m$ by open sets of diameter less than $\varepsilon$ in the operator norm. For every $j = 1, \dots, k$, let $U_j \in \mathcal U_j$ and let $\tilde x_j$ be a sequence with the above property with $U = U_j$. By the condition on the diameter of $\mathcal U_j$, we have $\|\widetilde{(T - U)}\tilde x_j\| > 16\varepsilon$ for every $U \in \mathcal U_j$. Let $(\varphi_j)_{j=1}^k$ be a partition of unity on $A_m$ with $\varphi_j$ supported inside $\mathcal U_j$ for each $j$.

Now let us consider in $\widetilde Y_\omega$ the vector $y(U) = \sum_{j=1}^k \varphi_j(U)m_j(\tilde x_j)$. We shall show that $y(U)$ is a "bad" vector for $U$, by showing that $\|(T_\omega - U_\omega)y(U)\| > 8\varepsilon$. To do this, let $U \in A_m$ be fixed and let $J = \{j : \varphi_j(U) > 0\}$. Note that $\|\widetilde{(T - U)}\tilde x_j\| > 16\varepsilon$ for every $j \in J$, hence
$$\|(T_\omega - U_\omega)y(U)\| = \Bigl\|\sum_{j=1}^k \varphi_j(U)\,m_j(\widetilde{(T - U)}\tilde x_j)\Bigr\| \ge \sum_{j=1}^k \varphi_j(U)\,\|\widetilde{(T - U)}\tilde x_j\| > 16\varepsilon.$$

The function $U \mapsto y(U)$ is clearly continuous. We now apply a fixed-point theorem. For every $U \in A_m$, let $\Gamma(U)$ be the set of $V \in A_m$ such that $\|(T_\omega - V_\omega)y(U)\| \le 8\varepsilon$. Clearly $\Gamma(U)$ is a compact convex subset of $A_m$. By the previous lemma, $\Gamma(U)$ is non-empty for every $U$. The continuity of $U \mapsto y(U)$ gives that $\Gamma$ is upper semi-continuous, so there exists a point $U \in A_m$ such that $U \in \Gamma(U)$. But this is a contradiction.

12. Applications to some specific examples

In this section we present some specific examples which are special cases of Theorem 11.1.

Construction of a H.I. space

Let $S = \{\mathrm{Id}\}$, let $X = X(S)$, let $Y$ be any subspace of $X$ and let $i_{Y,X}$ be the inclusion map from $Y$ to $X$. Then given any operator $T$ from $Y$ to $X$, there exists by Theorem 11.1, for every $\varepsilon > 0$, some $\lambda$ such that $|||T - \lambda i_{Y,X}||| < \varepsilon$. Since $|\lambda| \le |||T||| + \varepsilon$, an easy compactness argument then shows that there exists $\lambda$ such that $|||T - \lambda i_{Y,X}||| = 0$ and thus that $T - \lambda i_{Y,X}$ is strictly singular, which is one of the main results of [GM1]. It implies easily that $X$ is hereditarily indecomposable. Recall (Exercise 10.1) that $X^n$ is isomorphic to $X^m$ if and only if $m = n$.

Shift space $X_s$

Let $S$ be the proper set generated by the right shift $R$ on $c_{00}$. This set $S$ consists of all maps of the form $S_{A,B}$ where $A = [m, \infty)$ and $B = [n, \infty)$. Let $X_s = X(S)$. We will write $L$ for the left shift, which is (formally) the adjoint of $R$, and $\mathrm{Id}$ for the identity on $c_{00}$. Then $LR = \mathrm{Id}$ and every operator in $S$ is of the form $R^mL^n$. Since $RL - \mathrm{Id}$ is of rank one, every operator $V$ in $A$ is a finite-rank perturbation of an operator of the form
$$U = \sum_{n=-N}^{-1} a_nL^{-n} + \sum_{n=0}^N a_nR^n,$$
so the difference $V - U$ is of $|||.|||$-norm zero, hence strictly singular.
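The algebraic relations used here are easy to check directly on finitely supported sequences; the following Python sketch (an illustration only) verifies that $LR = \mathrm{Id}$ while $RL$ differs from the identity by the rank-one projection killing the first coordinate.

    def R(x):                       # right shift: e_n -> e_{n+1}
        return {n + 1: c for n, c in x.items()}

    def L(x):                       # left shift: e_n -> e_{n-1}, e_1 -> 0
        return {n - 1: c for n, c in x.items() if n > 1}

    x = {1: 3.0, 2: -1.0, 7: 5.0}
    assert L(R(x)) == x                                       # LR = Id
    assert R(L(x)) == {n: c for n, c in x.items() if n > 1}   # RL = Id minus the e_1 coordinate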

For every such $U \in A$ we define a function $\varphi_U$ on the unit circle $\mathbb T$ by
$$\varphi_U(\lambda) = \sum_{i=-N}^N a_i\lambda^i.$$
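As a small numerical illustration (not part of [GM2], with an arbitrary toy choice of coefficients), one can evaluate $\varphi_U$ on the circle and observe the bound $\sup_{|\lambda|=1}|\varphi_U(\lambda)| \le \sum_i|a_i|$, which anticipates the Wiener-algebra estimate quoted at the end of this subsection.

    import cmath

    a = {-2: 0.5, -1: 1.0, 0: -2.0, 1: 0.25, 3: 1.5}      # coefficients a_i of U (toy example)

    def phi(lam):
        return sum(c * lam ** i for i, c in a.items())

    sup_on_circle = max(abs(phi(cmath.exp(2j * cmath.pi * t / 1000))) for t in range(1000))
    wiener_norm = sum(abs(c) for c in a.values())           # sum of |a_i|
    assert sup_on_circle <= wiener_norm + 1e-9
    print(sup_on_circle, wiener_norm)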

For every $\lambda \in \mathbb T$ it is easy to find a normalized sequence $(x_n^{(\lambda)})$ in $\mathcal M$ such that $Rx_n^{(\lambda)} - \lambda x_n^{(\lambda)} \to 0$ (such vectors are simply obtained by Lemma 11.1 by normalizing a sum like $\sum_{j=k}^{k+N} \lambda^{-j}e_j$, where $N$ is much larger than $k$). This implies for any such sequence $(x_n^{(\lambda)})$ that $Ux_n^{(\lambda)} \simeq \varphi_U(\lambda)x_n^{(\lambda)}$, and we know that $\limsup_n\|Ux_n^{(\lambda)}\| \le |||U|||$. It follows that $\|\varphi_U\|_\infty \le |||U|||$ (the uniform norm is taken on the unit circle $\mathbb T$).

For a general $V \in A$ we notice that $|||V - U||| = 0$ implies that $\lim_n V(x_n^{(\lambda)}) - \varphi_U(\lambda)x_n^{(\lambda)} = 0$. We may thus define $\varphi_V(\lambda)$ to be the only scalar $\mu$ such that $\lim_n V(x_n^{(\lambda)}) - \mu x_n^{(\lambda)} = 0$ for any sequence $(x_n^{(\lambda)}) \in \mathcal M$ such that $\lim (R - \lambda I_X)(x_n^{(\lambda)}) = 0$; then $|||V - U||| = 0$ implies that $\varphi_V = \varphi_U$; it follows easily that $\varphi_{V_1}\varphi_{V_2} = \varphi_{V_1V_2}$. By properties 4 and 5 we can therefore extend the map $\varphi$ to an algebra homomorphism $\varphi : U \mapsto \varphi_U$ from $L(X)$ to $C(\mathbb T)$.

Proposition 12.1. Let $T \in L(X_s)$; the operator $T$ is finitely singular iff $\varphi_T$ does not vanish on $\mathbb T$.

Proof. Suppose that $\lambda \in \mathbb T$ and $\varphi_T(\lambda) = 0$; choose $U \in A$ such that $|||T - U||| \simeq 0$. Then $\varphi_U(\lambda) \simeq 0$, which implies that $U(x_n^{(\lambda)}) \simeq 0$, therefore $T(x_n^{(\lambda)}) \simeq 0$ and $T$ is infinitely singular. In the other direction, assume $T$ infinitely singular. We can find a block subspace $Y$ such that $\|T_{|Y}\| < \varepsilon$ by Proposition 3.2, hence a vector $x \in Y$ such that $1 = \|x\| \le \|x\|_{(n)} \le 1 + \varepsilon$ (by Lemma 11.1) and $\|Tx\| < \varepsilon$. Next we get a normalized weakly null sequence $\tilde x = (x_n)$ in $\mathcal M(Y)$ such that $Tx_n \to 0$. We have $\|\tilde x\| = 1$, $\tilde x \in \widetilde X_0$ (the weakly null part of the ultrapower $\widetilde X$) and $\widetilde T\tilde x = 0$. Let $\widetilde T_0$ denote the restriction of $\widetilde T$ to $\widetilde X_0$. Let $U \in A$ be such that $|||T - U||| < \varepsilon$. Then $\widetilde R\widetilde U - \widetilde U\widetilde R = 0$ on $\widetilde X_0$ because $RU - UR$ has finite rank. Now $\|\widetilde U\tilde x\| \le \varepsilon$ because $\tilde x \in \mathcal M$. Since $R$ is an isometry preserving successiveness, we see that $\widetilde R\tilde x \in \mathcal M$, and $\widetilde R\widetilde U\tilde x = \widetilde U\widetilde R\tilde x$, so $\|\widetilde T\widetilde R\tilde x\| \le 2\varepsilon$. We get $\widetilde T\widetilde R\tilde x = 0$, and similarly for every $k \ge 1$ we have $\widetilde T\widetilde R^k\tilde x = 0$. It follows that we can find an invariant subspace for $\widetilde R$ on which $\widetilde T_0 = 0$. We can then find an approximate eigenvector $\tilde y$ for $\widetilde R$ such that $\widetilde T\tilde y = 0$, and the eigenvalue must be some $\lambda \in \mathbb T$. Then $\varphi_T(\lambda) = 0$.

Projections in $X_s$

Suppose that $P$ is a projection on $X_s$. Since $\varphi$ is an algebra homomorphism, it follows from $P^2 = P$ that $\varphi_P^2 = \varphi_P$, hence $0$ and $1$ are the only possible values for $\varphi_P(\lambda)$. By continuity we get either $\varphi_P = 0$ or $\varphi_P = 1$. In the second case $P$ is finitely singular by Proposition 12.1, hence has finite dimensional kernel. In the other case we get a finite dimensional range. We see that $X_s$ is indecomposable. However, $X_s$ is not H.I. For every $\lambda \in \mathbb T$, we can find an H.I. subspace $X_\lambda$ of $X_s$ by considering a subspace generated by a normalized basic subsequence $(x'_n)$ of the sequence $(x_n^{(\lambda)})$ such that $Rx'_n - \lambda x'_n \to 0$ rapidly. If $\lambda \ne \mu$, it is easy to see that $Y = X_\lambda + X_\mu$ is closed, which implies that $Y$ is decomposable and $X_s$ is not H.I.

Remark-Exercise. These spaces Xλ are pairwise non-isomorphic for λ ∈ T. We have an uncountable family of different germs indexed by the unit circle T.

This space $X_s$ is a new prime space. The only known examples before [GM2] were $c_0$ and $\ell_p$ ($1 \le p \le \infty$). The space $X_s$ is prime by virtue of having no non-trivial complemented subspaces and being isomorphic to its subspaces of finite codimension. Indeed, we know that every projection $P$ on $X_s$ is of finite rank or corank. Thus, if $PX_s$ is infinite-dimensional, then it has finite codimension. Since the shift on $X_s$ is an isometry, it follows that $X_s$ and $PX_s$ are isomorphic, which proves the following theorem (using the next Exercise).

Theorem 12.1. The space $X_s$ is prime.

Exercise. The hyperplanes of a given Banach space $X$ are mutually isomorphic. More generally, all subspaces of $X$ of a fixed finite codimension are isomorphic.

We note here that the argument in the above proof can be generalized to show that if $m$ and $n$ are integers with $m > n$, then $X_s^n$ does not contain a family $P_1, \dots, P_m$ of infinite-rank projections satisfying $P_iP_j = 0$ whenever $i \ne j$. Indeed, given any projection $P \in L(X^n)$, we can regard it as an element of $M_n(L(X))$. Acting on each entry with $\varphi$, we get a function $h \in M_n(C(\mathbb T))$. The map taking $P$ to $h$ is an algebra homomorphism, so $h$ is an idempotent. Regarding $h$ as a continuous function from $\mathbb T$ to $M_n(\mathbb C)$, we have that $h(t)$ is an idempotent in $M_n(\mathbb C)$ for every $t \in \mathbb T$. By the continuity of rank for idempotents, we have that if $h(t) = 0$ for some $t$, then $h$ is identically zero. But then $P$ is strictly singular and hence of finite rank. Applying this reasoning to the family $P_1, \dots, P_m$ above, we obtain $h_1, \dots, h_m$ such that, for every $t \in \mathbb T$, $h_1(t), \dots, h_m(t)$ is a set of non-zero idempotents in $M_n(\mathbb C)$ with $h_i(t)h_j(t) = 0$ when $i \ne j$. But this is impossible if $m > n$. It follows that $X_s^n$ and $X_s^m$ are isomorphic if and only if $n = m$.

Another simple consequence of properties 4 and 5 is that, up to strictly singular perturbations, any two operators on $X_s$ commute. Indeed, $U_1U_2 - U_2U_1$ has finite rank when $U_1, U_2 \in A$, hence $|||U_1U_2 - U_2U_1||| = 0$. By approximation we get $|||T_1T_2 - T_2T_1||| = 0$ for every pair of operators on $X_s$.

One can actually get better estimates relating $X_s$ to the Wiener algebra.

Lemma. (Lemma 11 of [GM2].) Let $U = \sum_{n=0}^N \lambda_nR^n + \sum_{n=1}^N \lambda_{-n}L^n$. Then
$$\|U\| = |||U||| = \sum_{n\in\mathbb Z}|\lambda_n|.$$

It follows that the homomorphism $\varphi$ takes values in the Wiener algebra $W$, and it is possible to improve in this case property 5 by saying that for every $T \in L(X_s)$, there exists $U = \sum_{n=1}^\infty a_{-n}L^n + \sum_{n=0}^\infty a_nR^n$ such that $\sum_{n\in\mathbb Z}|a_n| < \infty$ and $|||T - U||| = 0$. Using this and the properties of invertible elements in $W$ we get in [GM2] an easier approach to Proposition 12.1.

Exercise. Compute $K_0(L(X_s))$ and $K_1(L(X_s))$.

The results in this section can be compared to those of Mankiewicz [Mz]; we have here another example of a Banach space such that there exists an algebra homomorphism from $L(X)$ into a commutative Banach algebra. It follows that $X$ is not isomorphic to any power $Y^n$, for $n \ge 2$. Indeed, if $\varphi$ is a non-zero multiplicative functional from $L(X)$ to $\mathbb C$, and if $X = Y^n$, there is a natural homomorphism $i$ from $M_n$ to $L(X)$. But then $\varphi \circ i$ would be a non-zero multiplicative functional on $M_n$, which is not possible for $n \ge 2$.

Double shift space $X_d$

This example $X_d$ is isomorphic to its codimension 2 subspaces but not to its hyperplanes. Let $S$ be the proper set generated by the double shift $R^2$. That is, $S$ is as in the previous example but $m$ and $n$ are required to be even. We show that every Fredholm operator $T$ on $X_d = X(S)$ has even index. By property 4, and by the fact that every operator in $S$ differs by a finite-rank operator from some even shift, we can find, for any $\varepsilon > 0$, some linear combination $U$ of even shifts such that $|||T - U||| < \varepsilon$. Then $s(T - U) < \varepsilon$ and we know that $\mathrm{ind}(T) = \mathrm{ind}(U)$ when $\varepsilon$ is small by Corollary 6.1. Hence it is enough to show that every $U \in A$ has even index.

Lemma. Let $V$ be a Fredholm isometry on a Banach space $X$ with a left inverse $W$, and let $T : X \to X$ be a Fredholm operator which can be written in the form $P(V) + Q(W)$ for polynomials $P$ and $Q$. Then the index of $T$ is a multiple of the index of $V$.

Proof. Suppose first that the scalars are complex. It is clear, since $V$ is isometric, that $V - \lambda I_X$ is an into isomorphism when $|\lambda| \ne 1$ (and it is onto when $|\lambda| > 1$). By Lemma 4.6, we get $\mathrm{ind}(V - \lambda I_X) = \mathrm{ind}(V)$ when $|\lambda| < 1$ (because we can connect $V - \lambda I_X$ to $V$ by a path of semi-Fredholm operators) and $\mathrm{ind}(V - \lambda I_X) = 0$ when $|\lambda| > 1$. If $V - \lambda I_X$ is finitely singular for some $\lambda \in \mathbb T$, then $\mathrm{ind}(V - \lambda I_X) = 0$ for the same reason. In all cases, the only possible values for the index are $0$ and $\mathrm{ind}(V)$. Now suppose that $T$ is as in the statement of the lemma. For sufficiently large $N$, $TV^N$ can be written $F(V)$ for some polynomial $F$ and is still Fredholm. Writing $F(V) = c\prod_i(V - \lambda_iI_X)$, we must have $V - \lambda_iI_X$ finitely singular for $TV^N$ to be Fredholm, so $\mathrm{ind}(V - \lambda_iI_X)$ is either $0$ or $\mathrm{ind}(V)$. It follows from the composition formula, Proposition 4.2, that the index of $F(V)$, and hence that of $T$, is a multiple of the index of $V$ as stated. When the scalars are real we may complexify $V$ to an isometry $V_{\mathbb C}$ of $X_{\mathbb C}$, for example using the injective norm $\mathbb C \otimes_\varepsilon X$ on $X_{\mathbb C}$.

Remark. When $K = \mathbb C$ and $V$ is not invertible, the spectrum of $V$ contains $\mathbb T$.

Putting these facts together, we find that no continuous operator on $X_d$ can be Fredholm with odd index. We therefore have the following result.

Theorem 12.2. The space $X_d$ is isomorphic to its subspaces of even codimension while not being isomorphic to those of odd codimension. In particular, it is isomorphic to its subspaces of codimension two but not to its hyperplanes.

Remark. The proof of Theorem 12.1 gives for this space also that every complemented subspace has finite dimension or codimension. Combining this observation with Theorem 12.2, we see that the space $X_d$ has exactly two infinite-dimensional complemented subspaces, up to isomorphism. It is true for this space as well that it is isomorphic to no subspace of infinite codimension. Note that the methods of this section generalize easily to proper sets generated by larger powers of the shift.

Exercise. Compute K0(L(Xd)).

Ternary space

This application is more complicated than the previous ones. The aim is to construct a space $X_t$ which is isomorphic to $X_t \oplus X_t \oplus X_t$ but not to $X_t \oplus X_t$. This question is related to the Schröder–Bernstein problem for Banach spaces, first solved (in the negative) by T. Gowers [G3]. We have seen that in some cases, we can deduce that $X \simeq Y$ from the fact that $X$ and $Y$ embed complementably in each other. The Schröder–Bernstein problem for Banach spaces is the question whether this is true in general. Constructing a Banach space $X$ such that $X \simeq X^3$ but $X \not\simeq X^2$ gives a strong negative answer to the problem, because $X$ and $X^2$ are in this case complementably embeddable in each other.

There is a very natural choice of $S$ in this case, strongly related to the algebra $P$ from section 7 (or to the Cuntz algebra $\mathcal O_3$). For $i = 0, 1, 2$ let $A_i$ be the set of positive integers equal to $i + 1$ (mod 3), let $U_i'$ be the spread from $\mathbb N$ to $A_i$ and let $S'$ be the semigroup generated by $U_0'$, $U_1'$ and $U_2'$ and their adjoints. It is shown in [GM2] that this is a proper set. The space $X_t = X(S')$ is easily seen to be isomorphic to its cube, and this isomorphism is achieved in a "minimal" way. (The primes in this paragraph are to avoid confusion later.)

This is one of the models introduced in section 7 for the algebra $P$. We equipped it here with the norm of $L(X)$, where $X$ is an exotic Banach space given by Theorem 11.1. We shall indeed consider the space $X(S')$ defined above. However, we define it slightly less directly, which helps with the proof later that it is not isomorphic to its square. The algebra $A'$ arising from the above definition is, if completed in the $\ell_2$-norm, isometric to the Cuntz algebra $\mathcal O_3$ ([C1], see section 7). Our proof is inspired by Cuntz's paper [C2]. Recall some notation from section 7: $T$ is the ternary tree $\bigcup_{n=0}^\infty\{0, 1, 2\}^n$, $Y_{00}$ is the vector space of finitely supported scalar sequences indexed by $T$ and the canonical basis for $Y_{00}$ is denoted by $(e_t)_{t\in T}$; we write $e$ for $e_\emptyset$, denote the length of a word $t \in T$ by $|t|$ and $(s, t)$ stands for the concatenation of $s, t \in T$. Let $\mathrm{Id}$ denote the identity operator on the space of sequences. Let $V_i$ and $T_i$, for $i = 0, 1, 2$, be defined by their action on the basis as follows:

$$V_ie_t = e_{(t,i)},\qquad T_ie_t = e_{(i,t)}.$$

Thus $T_i$ takes the whole tree $T$ onto the $i$th branch. The adjoints $V_i^*$ and $T_i^*$ act in the following way: $V_i^*e_t = e_s$ if $t$ is of the form $t = (s, i)$, and $V_i^*e_t = 0$ otherwise, while $T_i^*e_t = e_s$ if $t = (i, s)$, and $T_i^*e_t = 0$ otherwise. The following facts are easy to check: $V_iT_j = T_jV_i$, $V_i^*V_j = T_i^*T_j = \delta_{i,j}\mathrm{Id}$; $V_iV_i^*$ and $T_iT_i^*$ are projections; if $Q$ denotes the natural rank one projection on the line $\mathbb Ce$, then $\sum_{i=0}^2 V_iV_i^* = \sum_{i=0}^2 T_iT_i^* = \mathrm{Id} - Q$. Let $S$ and $A$ be respectively the proper set generated by $V_0$, $V_1$ and $V_2$, and the algebra generated by this proper set. (Strictly speaking, $S$ is not a proper set, but it is easy to embed $T$ into $\mathbb N$ so that the maps $V_0$, $V_1$ and $V_2$ become spreads as defined earlier.) Note that $S$ is the semigroup generated by the $V_i$ and the $V_i^*$, that it contains $\mathrm{Id}$ and that $A$ contains $Q$, as we have just shown.
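A quick Python sketch (only an illustration, with words over $\{0,1,2\}$ encoded as tuples) confirms the relations just listed on basis vectors of the ternary tree.

    from itertools import product

    # vectors indexed by the ternary tree: dict {word (a tuple over {0,1,2}): coefficient}
    def V(i, x):        # V_i e_t = e_{(t,i)}
        return {t + (i,): c for t, c in x.items()}

    def Vstar(i, x):    # V_i* e_t = e_s if t = (s,i), 0 otherwise
        return {t[:-1]: c for t, c in x.items() if t and t[-1] == i}

    def T(i, x):        # T_i e_t = e_{(i,t)}
        return {(i,) + t: c for t, c in x.items()}

    words = [()] + [tuple(w) for n in (1, 2) for w in product((0, 1, 2), repeat=n)]
    for t in words:
        e_t = {t: 1.0}
        for i, j in product((0, 1, 2), repeat=2):
            assert Vstar(i, V(j, e_t)) == (e_t if i == j else {})   # V_i* V_j = delta_{ij} Id
        summed = {}
        for i in (0, 1, 2):
            summed.update(V(i, Vstar(i, e_t)))                      # sum_i V_i V_i*
        assert summed == ({} if t == () else e_t)                   # = Id - Q on basis vectors
        assert T(0, V(1, e_t)) == V(1, T(0, e_t))                   # V_i T_j = T_j V_i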

In order to obtain the space $X_t$, consider the subset $T_0$ of $T$ consisting of all words $t \in T$ that do not start with $0$ (including the empty sequence). We modify the definition of $V_0$ slightly when defining $U_0$, by letting $U_0e$ equal $e$ instead of $e_0$. Operators $U_1$ and $U_2$ are defined exactly as $V_1$ and $V_2$ were. We still have that the $U_iU_i^*$ are projections and that $U_i^*U_j = \delta_{i,j}\mathrm{Id}$, but this time $\sum_{i=0}^2 U_iU_i^* = \mathrm{Id}$. We noticed in section 7 that we can associate to a word $s = (i_1, \dots, i_n) \in T_0$ the integer $n_s = 3^{n-1}i_1 + \cdots + 3i_{n-1} + i_n + 1$ (with $n_\emptyset = 1$), and this defines a bijection between $T_0$ and $\mathbb N$. The operators $U_0$, $U_1$ and $U_2$ then coincide with the spreads on $c_{00}$ defined earlier, so we can define $S'$ to be the proper set they generate and obtain the space $X_t = X(S')$. Let $A'$ be the algebra generated by $S'$.

For $t \in T$, we defined $V_t$ inductively by $V_{(t,i)} = V_iV_t$. Let $V_t^*$ be the adjoint of $V_t$. We know from section 7 that every $W \in A$ has a decomposition
$$W = \sum_{l=1}^N c_lV_{\alpha_l}V_{\beta_l}^*,$$
where $\alpha_l$ and $\beta_l$ are words in $T$. Define $\beta(W)$ to be the smallest value of $\max_l|\beta_l|$ over all such representations of $W$. We make the obvious modifications to the above definitions for $A'$. The remarks are still valid, except that the actions of $V_t$ and $U_t$ on $e$ will be different if the word $t$ begins with $0$. The next lemma is similar to Lemma 11 in [GM2]. The notation $\|x\|_1$ is for the norm in $\ell_1$.

Lemma 12.1. (Lemma 20 of [GM2].) Let $U \in A'$. Then for $|t| > \beta(U)$, we have the inequality $\|Ue_t\|_1 \le |||U|||$.

° XN ° N XM °U( e )° ≥ |c |. uj ,t f(MN) k j=1 k=1

By Lemma 11.7 we know that for some infinite subset L ⊂ N and for every N ∈ L

XN N k e k ≤ . uj ,t f(N) j=1

PM Letting N → ∞, this gives k=1 |ck| ≤ kUk. For the inequality for |||U|||, see [GM2] or Lemma 11.7. ?????

We now consider the algebra $A$. Let $Y = \ell_1(T)$ be the completion of $Y_{00}$ equipped with the $\ell_1$ norm and let $E$ denote the norm closure of $A$ in $L(Y)$. Note that every $V_i$ or $T_i$ is an isometry on $Y$, and $\|V_i^*\| \le 1$, $\|T_i^*\| \le 1$.

Lemma 12.2. Every Fredholm operator in $E$ has index $0$. More generally, every Fredholm operator $T : Y^n \to Y^n$ given by a matrix in $M_n(E)$ has index $0$.

Proof. Since the Fredholm index is stable under small perturbations, it is enough to consider operators in $A$ (as operators on $Y$). For any such operator $W$ we associate the operator
$$W^\# = \sum_{i=0}^2 T_iWT_i^*.$$
We claim that $W^\#$ is a finite rank perturbation of $W$. It is enough to show that $V_j^\#$ is a rank-one perturbation of $V_j$ (and to observe that $(W^\#)^* = (W^*)^\#$). But
$$V_j^\# = \sum_{i=0}^2 T_iV_jT_i^* = V_j\sum_{i=0}^2 T_iT_i^* = V_j(\mathrm{Id} - Q) = V_j - V_jQ;$$
(instead of using approximation of elements in $E$ by elements in $A$, we could observe directly that for every $V \in E$, $V^\# - V$ is compact). Consider the projections $Q_i = T_iT_i^*$. Then $Q_iQ_j = 0$ for $i \ne j$ and
$$Y = \mathbb Ce \oplus Q_0Y \oplus Q_1Y \oplus Q_2Y.$$

Each $T_iWT_i^*$ represents an operator on $Q_iY$, equivalent (in the obvious sense) to $W$ on $Y$, so that $\mathrm{ind}(T_iWT_i^*) = \mathrm{ind}(W)$ (the first operator is obtained from $W$ by composition with onto isomorphisms), and $W^\#$ is $0$ on the component $\mathbb Ce$. It follows that $\mathrm{ind}(W^\#) = 3\,\mathrm{ind}(W)$. On the other hand $\mathrm{ind}(W^\#) = \mathrm{ind}(W)$ since it is a finite rank perturbation of $W$. It follows that $\mathrm{ind}(W) = 0$.

The proof is essentially the same for the more general statement. Given $T \in L(Y^n)$, represented by a matrix $A \in M_n(E)$, use the $\#$-operation on each entry. The resulting matrix is equivalent to three copies of $A$ plus the zero matrix in $M_n$. This zero matrix contributes $n$ to the dimension of the kernel and $n$ to the codimension of the image, from which we obtain the equation
$$\mathrm{ind}(T) = \mathrm{ind}(T^\#) = 3\,\mathrm{ind}(T) + n - n.$$

Remark. In the rectangular case, if $T : Y^m \to Y^n$ is Fredholm, then $2\,\mathrm{ind}(T) + m - n = 0$. This shows that there is no Fredholm operator from $Y^m$ to $Y^n$ when $m - n$ is odd.

Let $I$ be the closed two-sided ideal in $E$ generated by $Q$. This ideal contains all rank-one operators of the form $e_s \otimes e_t^*$ with $s, t \in T$. Hence, every finite rank operator on $Y^n$ which is $w^*$-continuous (considering $Y^n$ as the dual of $(c_0)^n$) belongs to $M_n(I)$. Indeed, the matrix of such an operator consists of entries which are finite sums of the form $\sum_k y_k \otimes x_k$, with $y_k \in c_0$. We can approximate $y_k$ and $x_k$ by finitely supported sequences $y_k'$ and $x_k'$, and $\sum_k y_k' \otimes x_k'$ certainly belongs to $I$. (In fact, $I$ consists exactly of the compact $w^*$-continuous operators on $\ell_1$.)

Lemma. If $V \in M_n(E)$ is Fredholm then there exists $W \in M_n(I)$ such that $V + W$ is invertible in $M_n(E)$.

Proof. By Lemma 12.2 the index of $V$ is zero. Let $x_1, \dots, x_N$ and $z_1, \dots, z_N$ be bases for the kernel and cokernel. We can construct a $w^*$-continuous projection $\sum_{k=1}^N y_k \otimes x_k$ onto the kernel. Then $W = \sum_{k=1}^N y_k \otimes z_k$ will do.

Let $\mathcal O$ denote the quotient algebra $E/I$. Since $I$ consists of compact operators, we know that every operator on $Y$ (or on $Y^n$) which is invertible modulo $I$ (or modulo $M_n(I)$) is Fredholm on $Y$ or on $Y^n$ by Corollary 4.3. Hence any lifting in $M_n(E)$ of an invertible element in $M_n(\mathcal O)$ is Fredholm on $Y^n$. As an immediate consequence of the preceding discussion we have the following statement.

Corollary 12.1. Every invertible element of $M_n(\mathcal O)$ can be lifted to an invertible element of $M_n(E)$.

It follows easily from Lemma 12.1 that $|||.|||$ is actually a norm on $A'$. Let $G$ be the Banach algebra obtained as the $|||.|||$-completion of $A'$. Recall that by properties 3 and 4 of Theorem 11.1 there is a unital algebra homomorphism $\varphi : L(X) \to G$.

Lemma. There is a norm-one algebra homomorphism $\theta$ from $G$ to $\mathcal O$.

Proof. Define a map $\theta_0 : A' \to \mathcal O$ as follows. Given $U \in A'$, write $U = \sum_{l=1}^N c_lU_{\alpha_l}U_{\beta_l}^*$ in some way, consider the corresponding sum $\sum_{l=1}^N c_lV_{\alpha_l}V_{\beta_l}^*$ as an element of $E$ and let $\theta_0(U)$ be the image of this operator under the quotient map from $E$ to $\mathcal O$. To see that

this map is well defined, observe that for any pair of words $\alpha$ and $\beta$ we have the equation $U_\alpha U_\beta^* = \sum_{i=0}^2 U_{(i,\alpha)}U_{(i,\beta)}^*$. If $n$ is sufficiently large, we can therefore write $U$ as above in such a way that all the $\alpha_l$ are words of length $n$. Let $W_n$ be the set of all words of length $n$. Then what we have said implies that $U$ can be written as a sum $\sum_{\alpha\in W_n} U_\alpha T_\alpha^*$, where each $T_\alpha^*$ is some linear combination of distinct operators of the form $U_\beta^*$. It is easy to see now that $U = 0$ if and only if $T_\alpha^* = 0$ for every $\alpha \in W_n$, and moreover that distinct $U_\beta^*$ are linearly independent. Therefore any $U \in A'$ has at most one representation in the above form. In $A$ we know that for any pair of words $\alpha$ and $\beta$ the images in $\mathcal O$ of the operators $V_\alpha V_\beta^*$ and $\sum_{i=0}^2 V_{(i,\alpha)}V_{(i,\beta)}^*$ are the same. It follows that $\theta_0$ is well defined. Similarly, one can show that it is a unital algebra homomorphism.

We may want to argue in this way: $E/I$ is an algebra generated by six elements, namely the classes $w_i$ of the $U_i$ and $w_i^*$ of the $U_i^*$, and these elements satisfy the defining properties of our algebra $P$ from section ???, because $U_i^*U_i = \mathrm{Id}$ and $\mathrm{Id} - \sum U_iU_i^* \in I$; therefore there exists an algebra homomorphism $\rho$ from $P$ to $E/I$ such that $\rho(u_i) = w_i$ and $\rho(u_i^*) = w_i^*$. ....

Let $P_n$ denote the projection onto the first $n$ levels of the tree $T$, so that $P_n \in I$ for every $n$. If $U \in A'$, then Lemma 12.1 implies that
$$\lim_n\|U(I - P_n)\|_{L(Y)} \le |||U|||.$$

It follows that we may extend $\theta_0$ to a norm-one homomorphism $\theta : G \to \mathcal O$, as claimed.

We work with complex scalars for the rest of this section. The proof given in [GM2] works also in the real case, but we want to apply here directly K-theoretic results that are proved in the complex case.

Theorem 12.3. The spaces $X_t$ and $X_t \oplus X_t$ are not isomorphic.

Proof. If $X_t$ and $X_t^2$ are isomorphic we know that $[I_{X_t}] = 0$ in $K_0(L(X_t))$ by ?????. Taking the image under $\theta \circ \varphi : L(X_t) \to \mathcal O$, this yields $[1_{\mathcal O}] = 0$ in $K_0(\mathcal O)$. All we have to show now is that $[1_{\mathcal O}] \ne 0$. For this we follow the proof given by Cuntz for Theorem 3.7 of [C2]. By the definition of equivalence for idempotents, $1_E = V_i^*V_i$ and $V_iV_i^*$ are equivalent. The relation $\mathrm{Id} - Q = \sum_{i=0}^2 V_iV_i^*$ implies in $K_0(E)$ that
$$[1_E] - [Q] = 3[1_E],$$

and therefore that $[Q] = -2[1_E]$. Now consider the short exact sequence
$$0 \to I \xrightarrow{\ j\ } E \xrightarrow{\ \pi\ } \mathcal O \to 0$$
and the corresponding exact sequence in K-theory
$$K_1(\mathcal O) \xrightarrow{\ \partial_1\ } K_0(I) \xrightarrow{\ j_*\ } K_0(E) \xrightarrow{\ \pi_*\ } K_0(\mathcal O) \xrightarrow{\ \partial_0\ } K_1(I).$$

It is easy to see that $K_1(I) = 0$ and $K_0(I) \simeq \mathbb Z$, as they are for the ideal of compact operators. Corollary 12.1 and the definition of $\partial_1$ (see section 9) immediately imply that $\partial_1 = 0$, so we get an exact sequence
$$0 \to K_0(I) \xrightarrow{\ j_*\ } K_0(E) \xrightarrow{\ \pi_*\ } K_0(\mathcal O) \to 0.$$

Now, we know that $r = [Q]$ generates $j_*(K_0(I)) = \ker\pi_* \simeq \mathbb Z$. If $0 = [1_{\mathcal O}] = \pi_*([1_E])$, it follows by exactness that $[1_E] = nr$ for some integer $n \in \mathbb Z$. But we know that $r = -2[1_E]$, so $(2n + 1)r = 0$, contradicting the fact that $r$ generates a group isomorphic to $\mathbb Z$.

The proof of Theorem 12.3 generalizes in a straightforward way to give, for every $k \in \mathbb N$, an example of a space $X$ such that $X^n$ is isomorphic to $X^m$ if and only if $m = n$ (mod $k$). It is likely that every Fredholm operator on the space $X$ of this section has zero index, so that $X$ is not isomorphic to its hyperplanes. Working with a dyadic tree may then give an example of a space $X$ isomorphic to $X^2$ but not isomorphic to its hyperplanes.

References.

[AEO] D. Alspach, P. Enflo, E. Odell, On the structure of separable $L_p$-spaces ($1 < p < \infty$), Studia Math. 60 (1977), 79–90.

[AD] S. Argyros, I. Delyanni, Examples of asymptotic $\ell_1$ Banach spaces, preprint.
[A] M.F. Atiyah, K-theory, Benjamin, 1967.
[B] S. Banach, Théorie des opérations linéaires, Warszawa, 1932.
[BL] B. Beauzamy and J.T. Lapresté, Modèles Étalés des Espaces de Banach, Hermann, 1984.
[BP] C. Bessaga and A. Pełczyński, A generalization of results of R.C. James concerning absolute bases in Banach spaces, Studia Math. 17 (1958), 165–174.
[Bl] B. Blackadar, K-theory for operator algebras, MSRI Publications 5, Springer Verlag, 1986.
[Bo] Bourbaki, Théories spectrales, Hermann.
[B] J. Bourgain, Real isomorphic Banach spaces need not be complex isomorphic, Proc. AMS 96 (1986), 221–226.

[BRS] J. Bourgain, H. Rosenthal, G. Schechtman, An ordinal $L_p$-index for Banach spaces, with application to complemented subspaces of $L_p$, Annals of Math. 114 (1981), 193–228.
[BS] A. Brunel and L. Sucheston, On B-convex Banach spaces, Math. Syst. Th. 7 (1974), 294–299.
[CS] P.G. Casazza, T.J. Shura, Tsirelson's space, Lecture Notes in Math. 1363 (1989).
[CW] R. Coifman, M. Cwikel, R. Rochberg, Y. Sagher, G. Weiss, The complex method for interpolation of operators acting on families of Banach spaces, Lecture Notes in Math. 779, Springer Verlag, 1980, 123–153.
[CL] H.O. Cordes and J.P. Labrousse, The invariance of the index in the metric space of closed operators, J. Math. Mech. 12 (1963), 693–719.
[C1] J. Cuntz, Simple C$^*$-algebras generated by isometries, Commun. Math. Phys. 57 (1977), 173–185.
[C2] J. Cuntz, K-theory for certain C$^*$-algebras, Ann. of Math. 113 (1981), 181–197.
[DK] D. Dacunha-Castelle, J.L. Krivine, Applications des ultraproduits à l'étude des espaces et des algèbres de Banach, Studia Math. 41 (1972), 315–334.
[D] M.M. Day, Normed linear spaces, Springer Verlag.

[Do] A. Douady, Un espace de Banach dont le groupe linéaire n'est pas connexe, Neder. Akad. W. Proc. (Indag. Math.) 27 (1965), 787–789.
[DS] N. Dunford, J.T. Schwartz, Linear Operators, Part II.
[El] E. Ellentuck, A new proof that analytic sets are Ramsey, J. Symbolic Logic 39 (1974), 163–165.
[EW] I. Edelstein, P. Wojtaszczyk, On projections and unconditional bases in direct sums of Banach spaces, Studia Math. 56 (1976), 263–276.
[E] J. Elton, Sign embeddings of $\ell_1^n$, Trans. AMS 279 (1983), 113–124.
[E] P. Enflo, Seminar lectures in 1973.
[F0] V. Ferenczi, Un espace uniformément convexe et héréditairement indécomposable, CRAS Paris.
[F1] V. Ferenczi, A uniformly convex H.I. Banach space, preprint.
[F2] V. Ferenczi, Operators on subspaces of H.I. Banach spaces, preprint.
[F3] V. Ferenczi, QHI Banach spaces, preprint.
[F] T. Figiel, An example of infinite dimensional reflexive space non isomorphic to its Cartesian square, Studia Math. 42 (1972), 295–306.

[FJ] T. Figiel and W.B. Johnson, A uniformly convex Banach space which contains no $\ell_p$, Compositio Math. 29 (1974), 179–190.
[GP] F. Galvin and K. Prikry, Borel sets and Ramsey's theorem, J. Symbolic Logic 38 (1973), 193–198.
[Gi] D. Giesy, On a convexity condition in normed linear spaces, Trans. AMS 125 (1966), 114–146.
[Gl1] E. Gluskin, Finite-dimensional analogues of spaces without a basis, Dokl. Akad. Nauk SSSR 261 (1981), 1046–1050; English translation: Soviet Math. Dokl. 24 (1981), no. 3, 641–644.
[Gl2] E. Gluskin, The diameter of the Minkowski compactum is roughly equal to n, Funktsional. Anal. i Prilozhen. 15 (1981), 72–73; English translation: Functional Anal. Appl. 15 (1981), no. 1, 57–58.
[G1] W.T. Gowers, A solution to Banach's hyperplane problem, Bull. London Math. Soc. 26 (1994), 523–530.

[G2] W.T. Gowers, A Banach space not containing $c_0$, $\ell_1$ or a reflexive subspace, Trans. AMS 344 (1994), 407–420.
[G3] W.T. Gowers, A solution to the Schroeder-Bernstein problem for Banach spaces, Bull. London Math. Soc., to appear.
[G4] W.T. Gowers, A new dichotomy for Banach spaces, preprint.
[G5] W.T. Gowers, Analytic sets and games in Banach spaces, preprint.
[G6] W.T. Gowers, Recent results in the theory of infinite dimensional Banach spaces, ICM 94.
[GM1] W.T. Gowers and B. Maurey, The unconditional basic sequence problem, Jour. AMS 6 (1993), 851–874.
[GM2] W.T. Gowers and B. Maurey, Banach spaces with small spaces of operators, IHES preprint M/94/44, to appear in Math. Annalen.

[H] P. Habala, A Banach space whose subspaces do not have the Gordon-Lewis property, preprint.
[J1] R.C. James, Bases and reflexivity of Banach spaces, Ann. of Math. 52 (1950), 518–527.
[J2] R.C. James, A separable somewhat reflexive Banach space with non-separable dual, Bull. AMS 80 (1974), 738–743.
[J3] R.C. James, Uniformly non-square Banach spaces, Ann. of Math. 80 (1964), 542–550.
[Ja] G. Janssen, Restricted ultraproducts of finite von Neumann algebras, in Contributions to Non-Standard Analysis (1972), 101–114, North Holland.
[Jo1] W.B. Johnson, Banach spaces all of whose subspaces have the approximation property, Special Topics of , Proceedings GMD, Bonn 1979, North-Holland, 1980, 15–26.
[Jo2] W.B. Johnson, Homogeneous Banach spaces, Geometric Aspects of Functional Analysis, Israel Seminar, 1986–87, Lecture Notes in Math. 1317, Springer Verlag, 1988, 201–203.
[Ka] N. Kalton, The basic sequence problem, Studia Math. 116 (1995), 167–187.

[KR] N. Kalton, J. Roberts, A rigid subspace of $L_0$, Trans. AMS 266 (1981), 645–654.
[Kt1] T. Kato, Perturbation theory for nullity, deficiency and other quantities of linear operators, J. d'Analyse Math. 6 (1958), 261–322.
[Kt2] T. Kato, Perturbation theory for linear operators, Springer, 1980.
[KT] R. Komorowski and N. Tomczak-Jaegermann, Banach spaces without local unconditional structure, preprint.
[K] J.L. Krivine, Sous-espaces de dimension finie des espaces de Banach réticulés, Ann. of Math. 104 (1976), 1–29.
[Ku] N.H. Kuiper, The homotopy type of the unitary group of Hilbert space, Topology 3 (1965), 19–30.
[Le] H. Lemberg, Nouvelle démonstration d'un théorème de J.L. Krivine sur la finie représentabilité de $\ell_p$ dans un espace de Banach, Israel J. Math. 39 (1981), 341–348.
[L1] J. Lindenstrauss, On complemented subspaces of m, Israel J. Math. 5 (1967), 153–156.
[L2] J. Lindenstrauss, Some aspects of the theory of Banach spaces, Adv. Math. 5 (1970), 159–180.
[LT] J. Lindenstrauss and L. Tzafriri, On the complemented subspaces problem, Israel J. Math. 9 (1971), 263–269.
[LT1] J. Lindenstrauss and L. Tzafriri, Classical Banach Spaces I: Sequence Spaces, Springer Verlag, Berlin, 1977.
[LT2] J. Lindenstrauss and L. Tzafriri, Classical Banach Spaces II: Function Spaces, Springer Verlag, 1979.
[Mz] P. Mankiewicz, A superreflexive Banach space X with L(X) admitting a homomorphism onto the Banach algebra $C(\beta\mathbb{N})$, Israel J. Math. 65 (1989), 1–16.

[M1] B. Maurey, Sous-espaces complémentés de $L_p$, d'après P. Enflo, Séminaire Maurey-Schwartz 1974–75, exposé 3, Ecole Polytechnique.
[M2] B. Maurey, Quelques progrès dans la compréhension de la dimension infinie, Journée Annuelle 1994, Société Mathématique de France.
[MR] B. Maurey and H.P. Rosenthal, Normalized weakly null sequences with no unconditional subsequence, Studia Math. 61 (1977), 77–98.
[Mi] V.D. Milman, The geometric theory of Banach spaces, part II: Geometry of the unit sphere, Uspekhi Mat. Nauk 26 (1971), 73–149; English translation: Russian Math. Surveys 26 (1971), 79–163.
[MiS] V. Milman and G. Schechtman, Asymptotic Theory of Finite Dimensional Normed Spaces, Lecture Notes in Math. 1200, Springer Verlag, 1986.

[MiT] V. Milman and N. Tomczak-Jaegermann, Asymptotic $\ell_p$ spaces and bounded distortions, in Banach Spaces, Contemp. Math. 144 (1993), 173–196.
[Mt] B.S. Mityagin, The homotopy structure of the linear group of a Banach space, Uspekhi Mat. Nauk 25, no. 5 (1970), 63–106; English translation: Russian Math. Surveys 25, no. 5 (1970), 59–103.

[N] G. Neubauer, Der Homotopietyp der Automorphismengruppen in den Räumen $\ell_p$ und $c_0$, Math. Annalen 174 (1967), 33–40.
[OS] E. Odell and T. Schlumprecht, The distortion problem, Acta Math. 173 (1994), 259–281.
[OS2] E. Odell and T. Schlumprecht, Examples.
[Pa1] A. Pajor, Plongement de $\ell_1^n$ dans les espaces de Banach complexes, CRAS 296 (1983), 741–743.
[Pd] G. Pedersen, C$^*$-algebras and their automorphism groups, Academic Press, 1979.
[Pe1] A. Pełczyński, Projections in certain Banach spaces, Studia Math. 19 (1960), 209–228.
[Pe2] A. Pełczyński, On strictly singular and strictly cosingular operators, Bull. Acad. Pol. 13 (1965), 31–41.
[PV] Pimsner and Voiculescu, , (19..), .
[P1] G. Pisier, Sur les espaces de Banach qui ne contiennent pas uniformément de $\ell_1^n$, CRAS Paris 277 (1973), 991–994.
[P2] G. Pisier, Volume of Convex Bodies and the Geometry of Banach Spaces, Cambridge University Press, 1990.
[R] C. Read, Different forms of the approximation property, Lecture at the Strobl Conference, 1989, and unpublished preprint.

[R] H.P. Rosenthal, A characterization of Banach spaces containing $\ell_1$, Proc. Nat. Acad. Sci. USA 71 (1974), 2411–2413.
[S1] T. Schlumprecht, An arbitrarily distortable Banach space, Israel J. Math. 76 (1991), 81–95.

[S2] T. Schlumprecht, A complementably minimal Banach space not containing $c_0$ or $\ell_p$, Seminar Notes in Functional Analysis and PDEs, LSU 1991–92, 169–181.
[Sh] S. Shelah, A Banach space with few operators, Israel J. Math. 30 (1978), 181–191.

[ShS] S. Shelah, J. Steprans, A Banach space on which there are few operators, Proc. AMS 104 (1988), 101–105.
[Sk] G. Skandalis, Kasparov's bivariant K-theory and applications, Exposit. Math. 9 (1991), 193–250.
[Sz1] S. Szarek, On the existence and uniqueness of complex structure and spaces with "few" operators, Trans. AMS 293 (1986), 339–353.
[Sz2] S. Szarek, A superreflexive Banach space which does not admit complex structure, Proc. AMS 97 (1986), 437–444.
[Ta] J.L. Taylor, Banach algebras and topology, in Algebras in Analysis, edited by J.H. Williamson, Academic Press, 1975.

[T] B.S. Tsirelson, Not every Banach space contains $\ell_p$ or $c_0$, Funct. Anal. Appl. 8 (1974), 139–141.
[WO] N.E. Wegge-Olsen, K-theory and C$^*$-algebras: a friendly approach, Oxford Univ. Press, 1993.

Equipe d'Analyse et Mathématiques Appliquées
Université de Marne la Vallée
2 rue de la Butte verte, 93166 Noisy Le Grand CEDEX
