
OPERATOR THEORY ON HILBERT SPACE

Class notes

John Petrovic

Contents

Chapter 1. Hilbert space

1.1. Definition and Properties

1.2. Orthogonality

1.3. Subspaces

1.4. Weak topology

Chapter 2. Operators on Hilbert Space

2.1. Definition and Examples

2.2. Adjoint

2.3. Operator topologies

2.4. Invariant and Reducing Subspaces

2.5. Finite rank operators

2.6. Compact Operators

2.7. Normal operators

Chapter 3. Spectrum

3.1. Invertibility

3.2. Spectrum

3.3. Parts of the spectrum

3.4. Spectrum of a

3.5. Spectrum of a

Chapter 4. Invariant subspaces

4.1. Compact operators

4.2. Line integrals

4.3. Invariant subspaces for compact operators

4.4. Normal operators

Chapter 5. algebras

5.1. Compact operators

CHAPTER 1

Hilbert space

1.1. Definition and Properties

In order to define Hilbert space H we need to specify several of its features. First, it is a complex vector space — the field of scalars is C (complex numbers). [See Royden, p. 217.] Second, it is an inner product space. This means that there is a complex valued function hx, yi defined on H × H with the properties that, for all x, y, z ∈ H and α, β ∈ C:

(a) hαx + βy, zi = αhx, zi + βhy, zi; it is linear in the first argument;

(b) hx, yi = hy, xi; it is Hermitian symmetric;

(c) hx, xi ≥ 0; it is non-negative;

(d) hx, xi = 0 iff x = 0; it is positive.

In every inner product space it is possible to define a norm as kxk = hx, xi^{1/2}.

Exercise 1.1.1. Prove that this is indeed a norm.

Finally, Hilbert space is complete in this norm (meaning: in the topology induced by this norm).

Example 1.1.1. $\mathbb{C}^n$ is an inner product space with $\langle x, y\rangle = \sum_{k=1}^{n} x_k \overline{y_k}$ and, consequently, the norm $\|x\| = \left(\sum_{k=1}^{n} |x_k|^2\right)^{1/2}$. Completeness: if $\{x^{(k)}\}_{k=1}^{\infty}$ is a Cauchy sequence in $\mathbb{C}^n$ (here $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, \dots, x_n^{(k)})$) then so is $\{x_m^{(k)}\}_{k=1}^{\infty}$ for any fixed $m$, $1 \le m \le n$, and $\mathbb{C}$ is complete.

Example 1.1.2. Let $H_0$ denote the collection of all complex sequences, i.e. functions $a : \mathbb{N} \to \mathbb{C}$, characterized by the fact that $a_n \neq 0$ for only a finite number of positive integers $n$. Define the inner product on $H_0$ by $\langle a, b\rangle = \sum_n a_n \overline{b_n}$. The space $H_0$ is not complete in the induced norm. Indeed, the sequence $\{a^{(k)}\}_{k\in\mathbb{N}}$, defined by $a_n^{(k)} = 1/2^n$ if $n \le k$ and $a_n^{(k)} = 0$ if $n > k$, is a Cauchy sequence, but not convergent.


Example 1.1.3. Let $\ell^2$ denote the collection of all complex sequences $a = \{a_n\}_{n=1}^{\infty}$ such that $\sum_{n=1}^{\infty} |a_n|^2$ converges. Define the inner product on $\ell^2$ by $\langle a, b\rangle = \sum_{n=1}^{\infty} a_n \overline{b_n}$. Suppose that $\{a^{(k)}\}_{k=1}^{\infty}$ is a Cauchy sequence in $\ell^2$. Then so is $\{a_n^{(k)}\}_{k=1}^{\infty}$ for each $n$, hence there exists $a_n = \lim_{k\to\infty} a_n^{(k)}$. First we show that $a \in \ell^2$. Indeed, choose $K$ so that for $k \ge K$ we have $\|a^{(k)} - a^{(K)}\| \le 1$. Then, using Minkowski's Inequality for sequences (see Royden, p. 122), for any $N \in \mathbb{N}$,
\[
\Big(\sum_{n=1}^{N} |a_n|^2\Big)^{1/2} \le \Big(\sum_{n=1}^{N} |a_n - a_n^{(K)}|^2\Big)^{1/2} + \Big(\sum_{n=1}^{N} |a_n^{(K)}|^2\Big)^{1/2} = \lim_{k\to\infty}\Big(\sum_{n=1}^{N} |a_n^{(k)} - a_n^{(K)}|^2\Big)^{1/2} + \Big(\sum_{n=1}^{N} |a_n^{(K)}|^2\Big)^{1/2}
\]
\[
\le \limsup_{k\to\infty} \|a^{(k)} - a^{(K)}\| + \|a^{(K)}\| \le 1 + \|a^{(K)}\|.
\]
Thus $a = \{a_n\} \in \ell^2$. Moreover, $\{a^{(k)}\}$ converges to $a$, i.e. $\lim_{k\to\infty} \|a - a^{(k)}\| = 0$. Let $\epsilon > 0$ and choose $M$ so that $k, j \ge M$ implies that $\|a^{(k)} - a^{(j)}\| < \epsilon$. For such $k \ge M$ and any $N$, we have
\[
\sum_{n=1}^{N} |a_n - a_n^{(k)}|^2 = \lim_{j\to\infty} \sum_{n=1}^{N} |a_n^{(j)} - a_n^{(k)}|^2 \le \limsup_{j\to\infty} \|a^{(j)} - a^{(k)}\|^2 \le \epsilon^2.
\]
Since $N$ is arbitrary, it follows that $\|a - a^{(k)}\| \le \epsilon$ and, therefore, $\ell^2$ is Hilbert space.

Example 1.1.4. The space $L^2$ of functions $f : X \to \mathbb{C}$ such that $\int_X |f|^2\, d\mu < \infty$ (where $X$ is usually $[0,1]$ and $\mu$ Lebesgue measure). The inner product is defined by $\langle f, g\rangle = \int_X f\overline{g}\, d\mu$, and $L^2$ is complete by the Riesz–Fischer Theorem (see Royden, p. 125).

Example 1.1.5. The space H2. Let X = T (the unit circle) and µ the normalized Lebesgue measure on T.

The space H2 consists of those functions in L2(T) such that hf, e^{int}i = 0 for n = −1, −2, . . . .

Some important facts.

Proposition 1.1.1 (Parallelogram Law). kx + yk^2 + kx − yk^2 = 2kxk^2 + 2kyk^2.

Proposition 1.1.2 (Polarization Identity). 4hx, yi = hx + y, x + yi − hx − y, x − yi + ihx + iy, x + iyi − ihx − iy, x − iyi.

Exercise 1.1.2. Prove Propositions 1.1.1 and 1.1.2.

Problem 1. Let k · k be a norm on X , and define hx, yi as in Polarization Identity. Assuming that the norm satisfies the Parallelogram Law, prove that hx, yi defines an inner product.
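As a quick numerical sanity check (a Python sketch with randomly chosen vectors in C^5, for illustration only, not a proof), both identities of Propositions 1.1.1 and 1.1.2 can be verified for the standard inner product on C^n:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

inner = lambda u, v: np.vdot(v, u)      # np.vdot conjugates its first argument,
                                        # so this is linear in u, conjugate linear in v
norm = lambda u: np.sqrt(inner(u, u).real)

# Parallelogram Law: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
assert np.isclose(norm(x + y)**2 + norm(x - y)**2,
                  2 * norm(x)**2 + 2 * norm(y)**2)

# Polarization Identity: 4<x,y> = ||x+y||^2 - ||x-y||^2 + i||x+iy||^2 - i||x-iy||^2
pol = (norm(x + y)**2 - norm(x - y)**2
       + 1j * norm(x + 1j * y)**2 - 1j * norm(x - 1j * y)**2)
assert np.isclose(pol, 4 * inner(x, y))
print("Parallelogram Law and Polarization Identity hold for this sample.")
```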

1.2. Orthogonality

In Linear Algebra a basis of a vector space is defined as a minimal spanning set. In Hilbert space such a definition is not very practical. It is hard to speak of minimality when a basis can be infinite. In fact, a basis can be uncountable, so if $\{e_i\}_{i\in I}$ is such a basis, what is the meaning of $\sum_{i\in I} x_i e_i$?

Definition 1.2.1. An orthonormal subset of Hilbert space H is a set E such that (a) kek = 1, for all e ∈ E;

(b) if e1, e2 ∈ E and e1 6= e2 then he1, e2i = 0. An orthonormal basis in H is a maximal orthonormal set. We use abbreviations o.n.s. and o.n.b. for orthonormal set and orthonormal basis, respectively.

Theorem 1.2.1. Every Hilbert space has an orthonormal basis.

Proof. Let e be a unit vector in H. Then E = {e} is an orthonormal set. Let M be the collection of all orthonormal sets in H that contain E. By the Hausdorff Maximal Principle (Royden, p.25) there exists a maximal chain C of such orthonormal sets, partially ordered by inclusion. Let N be the union of all elements of C. Then

N is a maximal orthonormal set, hence a basis of H. 

If the set {e} is replaced by any orthonormal set, the same proof yields a stronger result.

Theorem 1.2.2. Every orthonormal set in Hilbert space can be extended to an orthonormal basis.

Example 1.2.1. For $k \in \mathbb{N}$, let $e_k$ denote the sequence with only one non-zero entry, lying in the $k$th position and equal to 1. The set $\{e_k\}_{k\in\mathbb{N}}$ is an o.n.b. for $\ell^2$. (If a vector $x \in \ell^2$ is orthogonal to all $e_k$, then each of its components is zero, so $x = 0$.)

Example 1.2.2. The set $\{e_1, e_3, e_5, \dots\}$ is an orthonormal set in $\ell^2$ but not a basis.

Example 1.2.3. The set $\left\{ \frac{1}{\sqrt{2\pi}}, \frac{\cos t}{\sqrt{\pi}}, \frac{\sin t}{\sqrt{\pi}}, \frac{\cos 2t}{\sqrt{\pi}}, \frac{\sin 2t}{\sqrt{\pi}}, \dots \right\}$ is an o.n.b. in $L^2(-\pi, \pi)$.

Example 1.2.4. The set $\left\{ \frac{1}{\sqrt{2\pi}}\, e^{int} : n \in \mathbb{Z} \right\}$ is another o.n.b. in $L^2(-\pi, \pi)$.

In Linear Algebra, if $\{e_i\}_{i\in I}$ is an o.n.b. then every vector $x$ can be written as $\sum_{i\in I} \langle x, e_i\rangle e_i$. In Hilbert space our first task is to make sense of this sum since the index set $I$ need not be countable.

Theorem 1.2.3 (Bessel's Inequality). Let $\{e_i\}_{i=1}^{k}$ be an o.n.s. in $H$, and let $x \in H$. Then $\sum_{i=1}^{k} |\langle x, e_i\rangle|^2 \le \|x\|^2$.

Proof. If we write $x_i = \langle x, e_i\rangle$, then
\[
0 \le \Big\| x - \sum_{i=1}^{k} x_i e_i \Big\|^2 = \Big\langle x - \sum_{i=1}^{k} x_i e_i,\ x - \sum_{i=1}^{k} x_i e_i \Big\rangle = \|x\|^2 - 2\,\mathrm{Re}\Big\langle x, \sum_{i=1}^{k} x_i e_i \Big\rangle + \Big\langle \sum_{i=1}^{k} x_i e_i, \sum_{j=1}^{k} x_j e_j \Big\rangle
\]
\[
= \|x\|^2 - 2\,\mathrm{Re}\sum_{i=1}^{k} \overline{x_i}\langle x, e_i\rangle + \sum_{i=1}^{k}\sum_{j=1}^{k} x_i \overline{x_j}\langle e_i, e_j\rangle = \|x\|^2 - 2\,\mathrm{Re}\sum_{i=1}^{k} \overline{x_i} x_i + \sum_{i=1}^{k} x_i \overline{x_i} = \|x\|^2 - \sum_{i=1}^{k} |x_i|^2.
\]



Corollary 1.2.4. Let $E = \{e_i\}_{i\in I}$ be an o.n.s. in $H$, and let $x \in H$. Then $\langle x, e_i\rangle \neq 0$ for at most a countable number of $i \in I$.

Proof. Let $x \in H$ be fixed and let $E_n = \{e_i : |x_i| \ge 1/n\}$. If $e_{i_1}, e_{i_2}, \dots, e_{i_k} \in E_n$ then
\[
\|x\|^2 \ge \sum_{j=1}^{k} |x_{i_j}|^2 \ge k(1/n^2).
\]
So, for each $n \in \mathbb{N}$, $E_n$ is a finite set, and $\{e_i : x_i \neq 0\} = \cup_n E_n$.

In view of Corollary 1.2.4, expressions like $\sum \langle x, e_i\rangle e_i$ turn out to be the usual infinite series. Our next task is to establish their convergence. The following Lemma will be helpful in this direction.

Lemma 1.2.5. If $\{x_i\}_{i\in\mathbb{N}}$ is a sequence of complex numbers and $\{e_i\}_{i\in\mathbb{N}}$ is an o.n.s. in $H$, then the series $\sum_{i\in\mathbb{N}} x_i e_i$ and $\sum_{i\in\mathbb{N}} |x_i|^2$ are equiconvergent.

Proof. Let $s_m$ and $\sigma_m$ denote the partial sums of $\sum_{i\in\mathbb{N}} x_i e_i$ and $\sum_{i\in\mathbb{N}} |x_i|^2$, respectively. Then
\[
\|s_m - s_n\|^2 = \Big\| \sum_{i=n+1}^{m} x_i e_i \Big\|^2 = \Big\langle \sum_{i=n+1}^{m} x_i e_i, \sum_{j=n+1}^{m} x_j e_j \Big\rangle = \sum_{i=n+1}^{m} |x_i|^2 = |\sigma_m - \sigma_n|,
\]
so the series are equiconvergent.

Now we can establish the convergence of $\sum_{i\in I} \langle x, e_i\rangle e_i$. We will use the notation $x_i = \langle x, e_i\rangle$ for the Fourier coefficients of $x \in H$ relative to the fixed basis $\{e_i\}_{i\in I}$.

Corollary 1.2.6 (Parseval's Identity). Let $\{e_i\}_{i\in I}$ be an o.n.s. in $H$, and let $x \in H$. Then the series $\sum_{i\in I} x_i e_i$ and $\sum_{i\in I} |x_i|^2$ converge and $\big\| \sum_{i\in I} x_i e_i \big\|^2 = \sum_{i\in I} |x_i|^2$.

Proof. Since only a countable number of terms in each series is non-zero, we can rearrange them and consider the series $\sum_{i=1}^{\infty} x_i e_i$ and $\sum_{i=1}^{\infty} |x_i|^2$. The latter series converges by Bessel's Inequality, and Lemma 1.2.5 implies that the former series converges too. Moreover, their partial sums $s_m$ and $\sigma_m$ satisfy $\|s_m\|^2 = \sigma_m$, so the last assertion of the corollary follows by letting $m$ go to $\infty$.
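For a concrete illustration of Parseval's Identity, the following sketch (a Python computation using the exponential basis of Example 1.2.4 and the arbitrary illustrative choice f(t) = t, truncated to |n| ≤ 200) compares the norm of f with the partial sums of the coefficient series; the two numbers agree up to truncation and discretization error.

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 20001)
dt = t[1] - t[0]
f = t                                          # the function we expand (illustrative choice)
norm_sq = np.sum(np.abs(f)**2) * dt            # ~ ||f||^2 in L^2(-pi, pi), equals 2*pi^3/3

N = 200
coeff_sq_sum = 0.0
for n in range(-N, N + 1):
    e_n = np.exp(1j * n * t) / np.sqrt(2 * np.pi)
    c_n = np.sum(f * np.conj(e_n)) * dt        # ~ <f, e_n>
    coeff_sq_sum += abs(c_n)**2

print(norm_sq, coeff_sq_sum)                   # close, and equal in the limit N -> infinity
```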

Now we are in the position to show that, in Hilbert space, every o.n.b. indeed spans H. Of course, the minimality is a direct consequence of the definition.

Theorem 1.2.7. Let $E = \{e_i\}_{i\in I}$ be an o.n.b. in $H$. Then, for each $x \in H$, $x = \sum_{i\in I} x_i e_i$, where $x_i = \langle x, e_i\rangle$.

Proof. Let $x_i = \langle x, e_i\rangle$ and $y = x - \sum_{i\in I} x_i e_i$. (Well defined since the series converges.) Then $\langle y, e_k\rangle = \langle x, e_k\rangle - \langle \sum_{i\in I} x_i e_i, e_k\rangle = 0$, for each $k \in I$, so $y \perp E$. If $y \neq 0$, then $E \cup \{y/\|y\|\}$ is an o.n.s., contradicting the maximality of $E$, so $y = 0$.

The following is the analogue of a well known Linear Algebra fact. We use notation card I for the cardinal number of the set I.

Theorem 1.2.8. Any two orthonormal bases {ei}i∈I and {fj}j∈J in H have the same cardinal number.

Proof. We will assume that both cardinal numbers are infinite. If either of them is finite, one knows from Linear Algebra that the other one is finite and equal to the first. Let $j \in J$ be fixed and let $I_j = \{i \in I : \langle f_j, e_i\rangle \neq 0\}$. By Corollary 1.2.4, $I_j$ is at most countable. Further, $\cup_{j\in J} I_j = I$. Indeed, if $i_0 \in I \setminus \cup_{j\in J} I_j$ then $\langle f_j, e_{i_0}\rangle = 0$ for all $j \in J$, so it would follow that $e_{i_0} = 0$, contradicting $\|e_{i_0}\| = 1$. Since card $I_j \le \aleph_0$ we see that card $I \le$ card $J \cdot \aleph_0 =$ card $J$. Similarly, card $J \le$ card $I$. By the Cantor–Bernstein Theorem (see, e.g., "Proofs from the Book", p. 90), card $I =$ card $J$.

Definition 1.2.2. The dimension of Hilbert space H, denoted by dim H, is the cardinal number of a basis of H.

In this course we will assume that dim H ≤ ℵ0.

Exercise 1.2.1. If H is an infinite dimensional Hilbert space, then H is separable iff dim H = ℵ0. [Given a countable basis, use rational coefficients. Given a countable dense set, approximate each element of a basis close enough to exclude all other basis elements.]

Next, we want to address the question: when can we identify two Hilbert spaces? We need a vector space isomorphism (i.e., a linear bijection) that preserves the inner product.

Definition 1.2.3. If H and K are Hilbert spaces, an isomorphism is a linear surjection U : H → K such that, for all x, y ∈ H, hUx, Uyi = hx, yi. In this situation we say that H and K are isomorphic.

Exercise 1.2.2. Prove that hUx, Uyi = hx, yi for all x, y ∈ H iff kUxk = kxk for all x ∈ H. Conclude that a

Hilbert space isomorphism is injective.

Theorem 1.2.9. Every separable Hilbert space of infinite dimension is isomorphic to `2. Every Hilbert space of finite dimension n is isomorphic to Cn.

Proof. We will assume that $H$ is an infinite dimensional Hilbert space and leave the finite dimensional case as an exercise. Since $H$ is separable, there exists an o.n.b. $\{e_n\}_{n=1}^{\infty}$. For $x \in H$, let $x_i = \langle x, e_i\rangle$ and $U(x) = (x_1, x_2, x_3, \dots)$. By Parseval's Identity, the series $\sum_{i=1}^{\infty} |x_i|^2$ converges, so the sequence $(x_1, x_2, x_3, \dots)$ belongs to $\ell^2$. Thus $U$ is well-defined, linear (because the inner product is linear in the first argument), and isometric: $\|Ux\|^2 = \sum_{i=1}^{\infty} |x_i|^2 = \|x\|^2$. Finally, if $(y_1, y_2, y_3, \dots) \in \ell^2$ then $\sum_{i=1}^{\infty} |y_i|^2$ converges so, by Lemma 1.2.5, $\sum_{n=1}^{\infty} y_n e_n$ converges and $U(\sum_{n=1}^{\infty} y_n e_n) = (y_1, y_2, y_3, \dots)$. Thus, $U$ is surjective and the theorem is proved.

Exercise 1.2.3. Prove that every Hilbert space of finite dimension n is isomorphic to Cn.

Problem 2. Let H be a separable Hilbert space and M a subspace of H. Prove that M is a separable Hilbert space.

Problem 3. The Haar system $\{\varphi_{m,n}\}$, $m \in \mathbb{N}$, $1 \le n \le 2^m$, is defined as:
\[
\varphi_{m,n}(x) = \begin{cases} 2^{m/2}, & \text{if } \dfrac{n-1}{2^m} \le x \le \dfrac{n-1/2}{2^m}, \\[4pt] -2^{m/2}, & \text{if } \dfrac{n-1/2}{2^m} \le x \le \dfrac{n}{2^m}, \\[4pt] 0, & \text{if } x \notin \left[ \dfrac{n-1}{2^m}, \dfrac{n}{2^m} \right]. \end{cases}
\]
Prove that this system is an o.n.b. of $L^2[0,1]$.
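A numerical check of the orthonormality claim (not of completeness, which has to be argued separately) can be carried out on a fine grid; the Python sketch below takes m = 0, 1, 2, 3 purely for illustration and computes the Gram matrix of the corresponding Haar functions.

```python
import numpy as np

x = (np.arange(100000) + 0.5) / 100000      # midpoints of a uniform partition of [0,1]
dx = 1.0 / 100000

def haar(m, n):
    # phi_{m,n}: +2^(m/2) on the left half of the dyadic interval, -2^(m/2) on the right half
    lo, mid, hi = (n - 1) / 2**m, (n - 0.5) / 2**m, n / 2**m
    return 2**(m / 2) * (((x >= lo) & (x < mid)).astype(float)
                         - ((x >= mid) & (x < hi)).astype(float))

system = [(m, n) for m in range(4) for n in range(1, 2**m + 1)]
G = np.array([[np.sum(haar(*p) * haar(*q)) * dx for q in system] for p in system])
print(np.allclose(G, np.eye(len(system)), atol=1e-3))   # Gram matrix ~ identity
```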

1.3. Subspaces

Example 1.3.1. Let H = L2[0, 1] and let G be a measurable subset of [0, 1]. Denote by L2(G) the set of functions in L2 that vanish outside of G. Then L2(G) is a closed subspace of H. Further, if f ∈ L2(G) and g ∈ L2(Gc), then hf, gi = 0.

Definition 1.3.1. If M is a closed subspace of the Hilbert space H, then the orthogonal complement of M, denoted M⊥, is the set of vectors in H orthogonal to every vector in M.

Exercise 1.3.1. Prove that M⊥ is a closed subspace of H.

Theorem 1.3.1. Let M be a closed subspace of Hilbert space H, and let x ∈ H. Then there exist unique vectors y in M and z in M⊥ so that x = y + z.

Proof. Let $\{e_i\}_{i\in I}$ and $\{f_j\}_{j\in J}$ be orthonormal bases for $M$ and $M^\perp$, respectively. Their union is an o.n.b. of $H$, so $x = \sum_{i\in I} \langle x, e_i\rangle e_i + \sum_{j\in J} \langle x, f_j\rangle f_j$, and we define $y = \sum_{i\in I} \langle x, e_i\rangle e_i$, $z = \sum_{j\in J} \langle x, f_j\rangle f_j$. Then $y \in M$, $z \in M^\perp$, and $x = y + z$.

Suppose now that $x = y_1 + z_1 = y_2 + z_2$, where $y_1, y_2 \in M$ and $z_1, z_2 \in M^\perp$. Then $y_1 - y_2 = z_2 - z_1$ belongs to both $M$ and $M^\perp$, so $\langle y_1 - y_2, y_1 - y_2\rangle = 0$ and it follows that $y_1 = y_2$, and consequently $z_1 = z_2$.

Definition 1.3.2. In the situation described in Theorem 1.3.1 we say that H is the orthogonal direct sum of

M and M⊥, and we write H = M ⊕ M⊥. When z = x + y with x ∈ M and y ∈ M⊥ we often write z = x ⊕ y.

Theorem 1.3.1 allows us to define a map P : H → M by P x = y. It is called the orthogonal projection of H

onto M, and it is denoted by PM. Here are some of its properties.

Theorem 1.3.2. Let M be a closed subspace of Hilbert space H and let P be the orthogonal projection on

M. Then:

(a) P is a linear transformation;

(b) kP xk ≤ kxk, for all x ∈ H;

(c) P 2 = P ;

(d) Ker P = M⊥ and Ran P = M.

Proof. Let $\{e_i\}_{i\in I}$ and $\{f_j\}_{j\in J}$ be orthonormal bases for $M$ and $M^\perp$, respectively, and let $Q = I - P$ be the orthogonal projection on $M^\perp$. If $x', x'' \in H$ and $\alpha', \alpha'' \in \mathbb{C}$, then $P(\alpha' x' + \alpha'' x'') = \sum_{i\in I} \langle \alpha' x' + \alpha'' x'', e_i\rangle e_i = \alpha' P x' + \alpha'' P x''$, so (a) holds.

(b) If x ∈ H, then x = P x + Qx and P x ⊥ Qx. Therefore, kxk2 = kP xk2 + kQxk2 ≥ kP xk2.

(c) If y ∈ M then P y = y. Now, for any x ∈ H, P x ∈ M so P 2x = P (P x) = P x.

(d) If P x = 0 then x = Qx ∈ M⊥. If x ∈ M⊥ then Qx = x by (c), so P x = 0. The other assertion is

obvious. 

Problem 4. Prove that PMx is the unique point in M that is nearest to x, meaning that kx − PMxk =

inf{kx − hk : h ∈ M}.
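The following sketch illustrates Theorem 1.3.2 and Problem 4 in C^6, assuming (for illustration) that M is the column space of a randomly generated matrix; with Q having orthonormal columns, P = QQ* is the orthogonal projection onto M.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
A = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
Q, _ = np.linalg.qr(A)                  # orthonormal basis of M = Ran A
P = Q @ Q.conj().T                      # orthogonal projection onto M

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y, z = P @ x, x - P @ x                 # x = y + z with y in M, z in M-perp
assert np.allclose(P @ P, P)                                  # P^2 = P
assert np.linalg.norm(P @ x) <= np.linalg.norm(x) + 1e-12     # ||Px|| <= ||x||
assert np.isclose(np.vdot(y, z), 0)                           # y is orthogonal to z

# Problem 4: Px is the nearest point of M to x -- compare with random points of M.
dists = [np.linalg.norm(x - Q @ (rng.standard_normal(k) + 1j * rng.standard_normal(k)))
         for _ in range(1000)]
assert np.linalg.norm(x - y) <= min(dists) + 1e-12
```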

Problem 5. In L2[0, 1] find the orthogonal complement to the subspace consisting of:

(a) all polynomials in x;

(b) all polynomials in x2;

(c) all polynomials in x with the free term equal to 0;

(d) all polynomials in x with the sum of coefficients equal to 0.

Problem 6. If M and N are closed subspaces of Hilbert space that are orthogonal to each other, then the sum M + N = {x + y : x ∈ M, y ∈ N } is a closed subspace. Show that this statement is not true if M and N are either closed but not orthogonal, or orthogonal but not closed.

1.4. Weak topology

Read Royden, page 236–238.

Example 1.4.1. Consider the sequence of functions $\{\cos nt\}_{n\in\mathbb{N}}$ in $L^1[0, 2\pi]$. It is easy to see that this sequence is not convergent. However, for any function $f \in L^\infty$, $\int_0^{2\pi} f(t)\cos nt\, dt \to 0$ as $n \to \infty$. Since $L^\infty$ is the dual space of $L^1$, we say that $\cos nt \to 0$ weakly, and we write $w\text{-}\lim_n \cos nt = 0$.

Example 1.4.2. Consider the sequence of functions $\{\cos nt\}_{n\in\mathbb{N}}$ in $L^\infty[0, 2\pi]$. Notice that, while not a convergent sequence, if $f \in L^1$ then $\int_0^{2\pi} f(t)\cos nt\, dt \to 0$ as $n \to \infty$. Since $L^\infty$ is the dual space of $L^1$, we say that $\cos nt \to 0$ in the weak* topology.

In a Banach space X it is useful to consider three topologies: the norm topology, induced by the norm; weak topology — the smallest topology in which all bounded linear functionals on X are continuous; weak∗ topology

(meaningful when X is the dual space of Y so that Y ⊂ X ∗) — the smallest topology in which some bounded linear functionals on X are continuous (those that can be identified as elements of Y). In order to discuss these topologies (and understand their role), we need to find out what bounded linear functionals on Hilbert space H look like.

Theorem 1.4.1 (Riesz Representation Theorem). If L is a bounded linear functional on H, then there is a unique vector y ∈ H such that L(x) = hx, yi for every x ∈ H. Moreover, kLk = kyk.

Proof. Assuming that such a $y$ exists, we can write it as $y = \sum_{i\in\mathbb{N}} y_i e_i$ relative to a fixed o.n.b. $\{e_i\}_{i\in\mathbb{N}}$. Then $y_i = \langle y, e_i\rangle = \overline{\langle e_i, y\rangle} = \overline{L(e_i)}$. Therefore, we define $y = \sum_{i\in\mathbb{N}} \overline{L(e_i)}\, e_i$, and all that remains to prove is the convergence of the series. Let $s_n = \sum_{i=1}^{n} \overline{L(e_i)}\, e_i$. Then $L(s_n) = \sum_{i=1}^{n} \overline{L(e_i)}\, L(e_i) = \|s_n\|^2$, so $\|s_n\|^2 \le \|L\|\|s_n\|$, from which it follows that $\|s_n\| \le \|L\|$. Thus the series $\sum_{i=1}^{\infty} \overline{L(e_i)}\, e_i$ converges and the result follows from Lemma 1.2.5.

We see that if $L \in H^*$, the dual space of $H$, then $L = L_y$, where $L_y(x) = \langle x, y\rangle$. The mapping $\Phi : H \to H^*$ defined by $\Phi(y) = L_y$ is a norm preserving surjection. It is conjugate linear: $\Phi(\alpha_1 y_1 + \alpha_2 y_2) = \overline{\alpha_1}\,\Phi(y_1) + \overline{\alpha_2}\,\Phi(y_2)$. Nevertheless, we identify $H^*$ with $H$. Consequently, $H$ is reflexive (i.e., $H^{**} = H$), so the weak* and weak topologies on $H$ coincide. Therefore, we will work with two topologies: weak and norm induced. The absence of a qualifier will always mean that it is the latter.

Exercise 1.4.1. Prove that the weak topology is weaker than the norm topology, i.e., if G is a weakly open set then G is an open set.

Example 1.4.3. If {en}n∈N is an orthonormal sequence in H then w − lim en = 0 but the sequence is not convergent.
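A truncated numerical illustration of this example in ℓ² (keeping only the first 10000 coordinates, with the sample vector y_k = 1/k): ⟨e_n, y⟩ → 0 for the fixed y, while ‖e_n‖ = 1 for every n, so {e_n} tends weakly to 0 but not in norm.

```python
import numpy as np

N = 10000
y = 1.0 / np.arange(1, N + 1)            # a fixed vector of l^2 (truncated)
for n in [1, 10, 100, 1000]:
    e_n = np.zeros(N)
    e_n[n - 1] = 1.0
    # <e_n, y> shrinks like 1/n, while ||e_n|| stays equal to 1
    print(n, np.vdot(y, e_n), np.linalg.norm(e_n))
```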

Exercise 1.4.2. Prove that the Hilbert space norm is continuous but not weakly continuous.

The following result shows why weak topology is important. [See Royden, p. 237]

Theorem 1.4.2 (Banach-Alaoglu). The unit ball {x ∈ H : kxk ≤ 1} in Hilbert space H is weakly compact.

Remark 1.4.1. The unit ball B1 of H is NOT compact (assuming that H is infinite dimensional). Reason: if {en}n∈N is an o.n.b. then the set {e1, e2, e3,... } is closed but not totally bounded, hence not compact.

Exercise 1.4.3. Prove that if a bounded set in H is weakly closed then it is weakly compact.

In spite of the fact that the weak topology is weaker than the norm topology, some of the standard results remain true.

Theorem 1.4.3. A weakly convergent sequence is bounded.

Proof. Suppose that xn is a weakly convergent sequence. Then, for any y ∈ H, the sequence hxn, yi is a convergent sequence of complex numbers, which implies that it is bounded. In other words, for any y ∈ H there exists C = C(y) > 0 such that |hxn, yi| ≤ C. This means that, for each n ∈ N, xn can be viewed as a bounded linear functional on H. By the Uniform Boundedness Principle (Royden, p. 232), these functionals are uniformly bounded, i.e., there exists M > 0 such that, for all n ∈ N, kxnk ≤ M.

Although a weakly convergent sequence need not be convergent, there are situations when it is.

Theorem 1.4.4. If {xn}n∈N is a weakly convergent sequence in a compact set K then it is convergent.

Proof. Since $\{x_n\}_{n\in\mathbb{N}} \subset K$, it has an accumulation point $x'$ and a subsequence $\{x'_n\}$ converging to $x'$. If $\{x_n\}$ had another accumulation point $x''$, then there would be another subsequence $\{x''_n\}$ converging to $x''$. It would follow that $w\text{-}\lim x'_n = x'$ and $w\text{-}\lim x''_n = x''$. Since $\{x_n\}$ is weakly convergent this implies that $x' = x''$, so $\{x_n\}$ has only one accumulation point, namely the limit.

By definition, the weak topology $\mathcal{W}$ is the smallest one in which every bounded linear functional $L$ on $H$ is continuous. This means that, for any such $L$ and any open set $G$ in the complex plane, $L^{-1}(G) \in \mathcal{W}$. Since open disks form a base of the usual topology in $\mathbb{C}$, it suffices to require that $L^{-1}(G) \in \mathcal{W}$ for each open disk $G$. Notice that $x \in L^{-1}(G)$ iff $L(x) \in G$, so if $G = \{z : |z - z_0| < r\}$ and $z_0 = L(x_0)$ then $x \in L^{-1}(G)$ iff $|L(x - x_0)| < r$. Now the Riesz Representation Theorem implies that $L^{-1}(G) = \{x \in H : |\langle x - x_0, y\rangle| < r\}$ for some $y \in H$. We conclude that a subbase of $\mathcal{W}$ consists of the sets $W = W(x_0; y, r) = \{x \in H : |\langle x - x_0, y\rangle| < r\}$.

Exercise 1.4.4. Prove that a bounded linear functional L is continuous in a topology T iff L−1(G) ∈ T for every open disk G.

Problem 7. Prove that a subspace of Hilbert space is closed iff it is weakly closed.

Problem 8. Prove that Hilbert space is weakly complete.

Problem 9. Let {xn}n∈N be a sequence in Hilbert space with the property that kxnk = 1, for all n, and hxm, xni = c, if m 6= n. Prove that {xn}n∈N is weakly convergent.

Problem 10. Find the weak closure of the unit sphere in Hilbert space.

CHAPTER 2

Operators on Hilbert Space

“Nobody, except topologists, is interested in problems about Hilbert space; the people who work in Hilbert space are interested in problems about operators”.

Paul Halmos

2.1. Definition and Examples

Read Section 10.2 in Royden’s book. Operator always means linear and bounded. The algebra of all bounded linear operators on H is denoted by L(H).

Example 2.1.1. Let $H = \mathbb{C}^n$ and let $A = [a_{ij}]$ be an $n \times n$ matrix. The operator of multiplication by $A$ is linear and bounded. Indeed, for $x = (x_1, x_2, \dots, x_n)$ and $M = \big(\sum_{i=1}^{n}\sum_{j=1}^{n} |a_{ij}|^2\big)^{1/2}$,
\[
\|Ax\|^2 = \sum_{i=1}^{n} \Big| \sum_{j=1}^{n} a_{ij} x_j \Big|^2 \le \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} |a_{ij}|^2 \Big) \Big( \sum_{j=1}^{n} |x_j|^2 \Big) = M^2 \|x\|^2,
\]
so $\|A\| \le M$.

Example 2.1.2. Let $H = \ell^2$ and $A = [a_{ij}]_{i,j=1}^{\infty}$, where $a_{ij} = c_i$ if $i = j$ and $a_{ij} = 0$ if $i \neq j$. We call such a matrix diagonal and denote it by $\mathrm{diag}(c_1, c_2, \dots)$, or $\mathrm{diag}(c_n)$. The operator $A$ (or, more precisely, the operator of multiplication by $A$) is bounded iff $c = (c_1, c_2, \dots) \in \ell^\infty$ (i.e., when $c$ is a bounded sequence). Indeed, let $x = (x_1, x_2, \dots) \in \ell^2$, so $Ax = (c_1 x_1, c_2 x_2, \dots)$ and $\|Ax\|^2 = \sum_{i=1}^{\infty} |c_i x_i|^2$. If $|c_i| \le M$, $i \in \mathbb{N}$, then $\|Ax\|^2 \le M^2 \sum_{i=1}^{\infty} |x_i|^2 = M^2\|x\|^2$, so $A$ is bounded. On the other hand, if $c \notin \ell^\infty$, then for each $n$ there exists $i_n$ so that $|c_{i_n}| \ge n$. Then $\|A e_{i_n}\| = \|c_{i_n} e_{i_n}\| \ge n \to \infty$ and $A$ is unbounded.
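A finite-dimensional sketch of this example (the weights c_i are chosen at random, purely for illustration): the diagonal operator acts coordinatewise, and the bound ‖Ax‖ ≤ (sup |c_i|)‖x‖ is attained at the basis vector where |c_i| is largest.

```python
import numpy as np

rng = np.random.default_rng(2)
c = rng.standard_normal(50) + 1j * rng.standard_normal(50)   # the diagonal entries
x = rng.standard_normal(50) + 1j * rng.standard_normal(50)

Ax = c * x                                   # diag(c) acts coordinatewise
assert np.linalg.norm(Ax) <= np.max(np.abs(c)) * np.linalg.norm(x) + 1e-12

i = np.argmax(np.abs(c))
e_i = np.zeros(50, dtype=complex)
e_i[i] = 1.0
print(np.linalg.norm(c * e_i), np.max(np.abs(c)))   # ||A e_i|| equals max |c_i|
```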

Remark 2.1.1. It is extremely hard to decide, in general, whether an operator $A$ is bounded just by studying its matrix $[\langle A e_j, e_i\rangle]_{i,j=1}^{\infty}$.


Example 2.1.3. Let $H = \ell^2$ and let $S$ be the unilateral shift, defined by $S(x_1, x_2, \dots) = (0, x_1, x_2, \dots)$. Notice that $\|S(x_1, x_2, \dots)\|^2 = 0 + |x_1|^2 + |x_2|^2 + \cdots = \|x\|^2$, so $\|S\| = 1$. In fact, $S$ is an isometry, hence injective, but it is not surjective!

Example 2.1.4 (Multiplication on $L^2$). Let $h$ be a measurable function and define $M_h f$, for $f \in L^2$, by $(M_h f)(t) = h(t)f(t)$. If $h \in L^\infty$ (essentially bounded functions — see Royden, p. 118), then
\[
\|M_h f\|^2 = \int |hf|^2 \le \|h\|_\infty^2 \int |f|^2 = \|h\|_\infty^2 \|f\|^2,
\]
so $M_h$ is a bounded operator on $L^2$ and $\|M_h\| \le \|h\|_\infty$. On the other hand, for $\epsilon > 0$, there exists a set $C \subset [0,1]$ of positive measure so that $|h(t)| \ge \|h\|_\infty - \epsilon$ for $t \in C$. If $f = \chi_C$ then
\[
\|M_h f\|^2 = \int |hf|^2 = \int_C |h|^2 \ge (\|h\|_\infty - \epsilon)^2 \mu(C) = (\|h\|_\infty - \epsilon)^2 \|f\|^2,
\]
and it follows that $\|M_h\| \ge \|h\|_\infty - \epsilon$. We conclude that $\|M_h\| = \|h\|_\infty$ and $M_h$ is bounded iff $h \in L^\infty$.

Example 2.1.5 (Integral operators on $L^2$). Let $K : [0,1] \times [0,1] \to \mathbb{C}$ be measurable and square integrable with respect to planar Lebesgue measure. We define the operator $T_K$ by $(T_K f)(x) = \int_0^1 K(x,y)f(y)\, dy$. Now
\[
\|T_K f\|^2 = \int_0^1 |T_K f(x)|^2\, dx = \int_0^1 \Big| \int_0^1 K(x,y)f(y)\, dy \Big|^2 dx \le \int_0^1 \Big( \int_0^1 |K(x,y)f(y)|\, dy \Big)^2 dx
\]
\[
\le \int_0^1 \Big( \int_0^1 |K(x,y)|^2\, dy \Big)\Big( \int_0^1 |f(y)|^2\, dy \Big) dx = \|f\|^2 \int_0^1\!\!\int_0^1 |K(x,y)|^2\, dy\, dx.
\]
Therefore, $T_K$ is bounded and $\|T_K\| \le \left\{ \int_0^1\!\!\int_0^1 |K(x,y)|^2\, dy\, dx \right\}^{1/2}$.
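A discretized sketch of this estimate with the illustrative kernel K(x, y) = min(x, y) (an arbitrary choice, not taken from the notes): the midpoint rule turns T_K into a matrix, the factor h from the integral makes its spectral norm approximate ‖T_K‖, and the Riemann sum of |K|² approximates the square of the bound above.

```python
import numpy as np

N = 500
h = 1.0 / N
t = (np.arange(N) + 0.5) * h                 # midpoints of [0,1]
K = np.minimum.outer(t, t)                   # sampled kernel K(x_i, y_j) = min(x_i, y_j)

op_norm = h * np.linalg.norm(K, 2)           # ~ ||T_K||  (largest singular value, scaled by h)
l2_norm_of_K = h * np.linalg.norm(K, 'fro')  # ~ (double integral of |K|^2)^(1/2)
print(op_norm, l2_norm_of_K)                 # the first number does not exceed the second
```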

Example 2.1.6 (Weighted shifts). Let $H = \ell^2$ and let $\{c_n\}_{n\in\mathbb{N}}$ be a bounded sequence of complex numbers. A weighted shift $W$ on $\ell^2$ is defined by $W(x_1, x_2, \dots) = (0, c_1 x_1, c_2 x_2, \dots)$. It can be written as $W = S\,\mathrm{diag}(c_n)$, so it is a bounded operator and $\|W\| = \|\mathrm{diag}(c_n)\|$.

In some situations it is useful to have an alternate formula for the operator norm. In what follows we will use the notation B1 for the closed unit ball of H, i.e. B1 = {x ∈ H : kxk ≤ 1}.

Proposition 2.1.1. Let T be a linear operator on Hilbert space. Then kT k = sup{|hT x, yi| : x, y ∈ B1}.

Proof. Let $\alpha$ denote the supremum above, and let us assume that $T \neq 0$ (otherwise there is nothing to prove). Clearly, for $x, y \in B_1$, $|\langle Tx, y\rangle| \le \|T\|$, so $\alpha \le \|T\|$. In the other direction,
\[
\alpha \ge \sup\Big\{ |\langle Tx, y\rangle| : x, y \in B_1,\ Tx \neq 0,\ y = \tfrac{Tx}{\|Tx\|} \Big\} = \sup\Big\{ \Big|\Big\langle Tx, \tfrac{Tx}{\|Tx\|}\Big\rangle\Big| : x \in B_1,\ Tx \neq 0 \Big\} = \sup\{ \|Tx\| : x \in B_1,\ Tx \neq 0 \} = \|T\|,
\]
and the proof is complete.

2.2. Adjoint

In Linear Algebra we learn that the column space of a matrix $A = [a_{ij}]_{i,j=1}^{n}$ and the null space of its transpose $A^T$ are orthogonal complements in $\mathbb{R}^n$. In $\mathbb{C}^n$, $A^T$ needs to be replaced by $A^* = [\overline{a_{ji}}]_{i,j=1}^{n}$. In this situation,

(2.1) $\langle Ax, y\rangle = \langle x, A^*y\rangle$.

Exercise 2.2.1. Prove that, if A is an n × n matrix and x, y ∈ Cn, then hAx, yi = hx, A∗yi.
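As a numerical companion to Exercise 2.2.1 (a sketch with a random 4 × 4 matrix), the conjugate transpose plays the role of A* in the relation (2.1):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

inner = lambda u, v: np.vdot(v, u)       # linear in the first argument
# <Ax, y> = <x, A* y>, with A* the conjugate transpose
assert np.isclose(inner(A @ x, y), inner(x, A.conj().T @ y))
```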

Example 2.2.1. Let $h \in L^\infty$ and let $M_h$ be the operator of multiplication on $L^2$. Then $(M_h)^* = M_{\overline{h}}$.

The following result will show that a relation (2.1) is available for any operator.

Proposition 2.2.1. If T is an operator on H then there exists a unique operator S on H such that hT x, yi = hx, Syi, for all x, y ∈ H.

Proof. Let y ∈ H be fixed. Then ϕ(x) = hT x, yi is a bounded linear functional on H. By the Riesz Representation Theorem there exists a unique z ∈ H such that ϕ(x) = hx, zi, for all x ∈ H. Define Sy = z. Then

$\langle Tx, y\rangle = \langle x, Sy\rangle$. To show that $S$ is linear, let $Sy_1 = z_1$, $Sy_2 = z_2$, and let $x \in H$. Then
\[
\langle x, S(\alpha_1 y_1 + \alpha_2 y_2)\rangle = \langle Tx, \alpha_1 y_1 + \alpha_2 y_2\rangle = \overline{\alpha_1}\langle Tx, y_1\rangle + \overline{\alpha_2}\langle Tx, y_2\rangle = \overline{\alpha_1}\langle x, Sy_1\rangle + \overline{\alpha_2}\langle x, Sy_2\rangle = \langle x, \alpha_1 Sy_1 + \alpha_2 Sy_2\rangle.
\]

By the uniqueness part of the Riesz Representation Theorem, $S$ is linear. That $S$ is unique can be deduced as follows: if $\langle x, Sy\rangle = \langle x, S'y\rangle$ for all $x, y \in H$ then $\langle x, Sy - S'y\rangle = 0$ for all $x$, which implies that $Sy - S'y = 0$ for all $y$, hence $S = S'$. Finally, $S$ is bounded: $\|Sy\|^2 = \langle Sy, Sy\rangle = \langle TSy, y\rangle \le \|TSy\|\|y\| \le \|T\|\|Sy\|\|y\|$, so $\|Sy\| \le \|T\|\|y\|$ and $\|S\| \le \|T\|$.

Definition 2.2.1. If T ∈ L(H) then the adjoint of T , denoted T ∗, is the unique operator on H satisfying

hT x, yi = hx, T ∗yi, for all x, y ∈ H.

Here are some of the basic properties of the involution T 7→ T ∗.

Proposition 2.2.2.

(a) I∗ = I

(b) T ∗∗ = (T ∗)∗ = T ;

(c) kT ∗k = kT k;

(d) $(\alpha_1 T_1 + \alpha_2 T_2)^* = \overline{\alpha_1} T_1^* + \overline{\alpha_2} T_2^*$;

∗ ∗ ∗ (e) (T1T2) = T2 T1 ;

(f) if T is invertible then so is T ∗ and (T ∗)−1 = (T −1)∗;

(g) kT k^2 = kT ∗T k.

Proof. The assertion (a) is obvious and (b) follows from $\langle x, T^{**}y\rangle = \langle T^*x, y\rangle = \overline{\langle y, T^*x\rangle} = \overline{\langle Ty, x\rangle} = \langle x, Ty\rangle$. It was shown in the proof of Proposition 2.2.1 that $\|T^*\| \le \|T\|$, so $\|T^{**}\| \le \|T^*\| \le \|T\|$ and (c) follows from (b). We leave (d) as an exercise and notice that $\langle x, (T_1T_2)^*y\rangle = \langle T_1T_2x, y\rangle = \langle T_2x, T_1^*y\rangle = \langle x, T_2^*T_1^*y\rangle$ establishes (e). As a consequence of (a) and (e), $T^*(T^{-1})^* = (T^{-1}T)^* = I$ and $(T^{-1})^*T^* = (TT^{-1})^* = I$, which is (f). Finally, $\|T^*T\| \le \|T^*\|\|T\| = \|T\|^2$ and, to prove the opposite inequality, let $\epsilon > 0$ and let $x$ be a unit vector such that $\|Tx\| \ge \|T\| - \epsilon$. Then $\|T^*T\| \ge \|T^*Tx\| \ge \langle T^*Tx, x\rangle = \|Tx\|^2 > (\|T\| - \epsilon)^2$, and (g) is proved.

Example 2.2.2. Let $H = \ell^2$ and let $S$ be the unilateral shift (see Example 2.1.3). Then $S^*(x_1, x_2, \dots) = (x_2, x_3, \dots)$. The operator $S^*$ is called the backward shift.

Example 2.2.3. Let $T_K$ be the integral operator on $L^2$ (see Example 2.1.5). Then $(T_K)^* = T_{K^*}$, where $K^*(x,y) = \overline{K(y,x)}$.

We now give the Hilbert space formulation of the relation with which we have opened this section.

Theorem 2.2.3. If T is an operator on Hilbert space H then Ker T = (Ran T ∗)⊥.

Proof. Let x ∈ Ker T and let y ∈ Ran T ∗. Then there exists z ∈ H such that y = T ∗z. Therefore hx, yi = hx, T ∗zi = hT x, zi = 0 so x ∈ (Ran T ∗)⊥. In the other direction, if x ∈ (Ran T ∗)⊥ and z ∈ H, then

∗ hT x, zi = hx, T zi = 0. Taking z = T x we see that T x = 0, and the proof is complete. 

We notice that, for T ∈ L(H) and x, y ∈ H, the expression hT x, yi is a form that is linear in the first and conjugate linear in the second argument. It turns out that this is sufficient for a polarization identity.

Proposition 2.2.4 (Second Polarization Identity).

4hT x, yi = hT (x + y), x + yi − hT (x − y), x − yi + ihT (x + iy), x + iyi − ihT (x − iy), x − iyi.

Exercise 2.2.2. Prove Second Polarization Identity.

2.3. Operator topologies

In this section we take a look at the algebra L(H). It has three useful topologies which lead to 3 different types of convergence.

Definition 2.3.1. A sequence of operators Tn ∈ L(H) converges uniformly (or in norm) to an operator T if kTn −T k → 0, n → ∞. A sequence of operators Tn ∈ L(H) converges strongly to an operator T if kTnx−T xk → 0, n → ∞, for all x ∈ H. A sequence of operators Tn ∈ L(H) converges weakly to an operator T if hTnx−T x, yi → 0, n → ∞, for any x, y ∈ H.

It follows from the definition that the weak topology is the weakest of the three, while the norm topology (a.k.a. the uniform topology) is the strongest. Are they different?

Proposition 2.3.1. The operator norm is continuous with respect to the uniform topology but discontinuous with respect to the strong and weak topologies.

Proof. The first assertion is a consequence of the inequality $|\|A\| - \|B\|| \le \|A - B\|$. To prove the other two, let $\{e_n\}_{n\in\mathbb{N}}$ be an o.n.b. of $H$, $H_n = \vee_{k=n}^{\infty} e_k$, and $P_n = P_{H_n}$. Then $P_n \to 0$ strongly (hence weakly) since $\|P_n x\|^2 = \sum_{k=n}^{\infty} |x_k|^2 \to 0$. However, $\|P_n\| = 1$, which does not converge to 0.

Example 2.3.1. We say that an operator $T$ is a rank one operator if there exist $u, v \in H$ so that $Tx = \langle x, v\rangle u$. We use the notation $T = u \otimes v$. Let $T_n = e_n \otimes e_1$. Then $\langle T_n x, y\rangle = x_1\overline{y_n} \to 0$, while $T_n x = x_1 e_n$ is not a convergent sequence. Thus, the weak and strong topologies are different.

Example 2.3.2. The involution $T \mapsto T^*$ is continuous in the uniform topology ($\|T_n^* - T^*\| = \|T_n - T\|$). Also, it is continuous in the weak topology, because
\[
|\langle (T_n^* - T^*)x, y\rangle| = |\langle x, (T_n - T)y\rangle| = |\langle (T_n - T)y, x\rangle|.
\]
However, it is not continuous in the strong topology. Counterexample: let $S$ be the unilateral shift, and $T_n = (S^*)^n$. Then $T_n \to 0$ strongly but $\{T_n^*\}$ is not a strongly convergent sequence. Indeed, for any $x = (x_1, x_2, \dots) \in H$, $\|T_n x\|^2 = \|(x_{n+1}, x_{n+2}, \dots)\|^2 = \sum_{k=n+1}^{\infty} |x_k|^2 \to 0$, as $n \to \infty$. On the other hand, for $x = e_1$, $T_n^* x = S^n e_1 = e_{n+1}$, which is not a convergent sequence.
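The counterexample can be seen numerically by truncating ℓ² to its first N coordinates (an illustration, with the sample vector x_k = 1/k): ‖T_n x‖ → 0 for each fixed x, while ‖T_n* e_1‖ = ‖S^n e_1‖ stays equal to 1.

```python
import numpy as np

N = 2000
x = 1.0 / np.arange(1, N + 1)                    # a sample vector of l^2 (truncated)
for n in [1, 10, 100, 1000]:
    Tn_x = np.concatenate([x[n:], np.zeros(n)])  # (S*)^n x = (x_{n+1}, x_{n+2}, ...)
    Sn_e1 = np.zeros(N)
    Sn_e1[n] = 1.0                               # S^n e_1 = e_{n+1}
    print(n, np.linalg.norm(Tn_x), np.linalg.norm(Sn_e1))
```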

An operator T ∈ L(H) is a continuous mapping when H is given the strong topology. We will write, following

Halmos, (s→s). One may ask about the other types of continuity.

Theorem 2.3.2. The three types of continuity (s→s), (w→w), and (s→w) are all equivalent.

Proof. Suppose that $T$ is continuous, and let $W$ be a weakly open neighborhood of $Tx_0$ in $H$. We will show that $T^{-1}(W)$ is weakly open. It suffices to prove this assertion in the case when $W$ belongs to the subbase of the weak topology. To that end, let $W = W(Tx_0; y, r) = \{x \in H : |\langle x - Tx_0, y\rangle| < r\}$. Then $z \in T^{-1}(W) \Leftrightarrow Tz \in W \Leftrightarrow |\langle Tz - Tx_0, y\rangle| < r \Leftrightarrow |\langle z - x_0, T^*y\rangle| < r$. We see that $z \in T^{-1}(W)$ iff $z \in W(x_0; T^*y, r)$, so $T^{-1}(W) = W(x_0; T^*y, r)$, which is a weakly open set.

The implication (w→w)⇒(s→w) is trivial, so we concentrate on the implication (s→w)⇒(s→s). To that end, suppose that $T$ is not continuous. Then it is unbounded, so there exists a sequence $\{x_n\}_{n\in\mathbb{N}}$ of unit vectors such that $\|Tx_n\| \ge n^2$, $n \in \mathbb{N}$. Clearly, $x_n/n \to 0$ and the assumption (s→w) implies that $Tx_n/n$ converges weakly to 0. By Theorem 1.4.3 the sequence $\{Tx_n/n\}$ is bounded, which contradicts the fact that $\|Tx_n/n\| \ge n$.

The fact that every operator in L(H) is weakly continuous has an interesting consequence.

Corollary 2.3.3. If T is a linear operator on H then T (B1) is closed.

Proof. Banach-Alaoglu Theorem established that B1 is weakly compact so, by Theorem 2.3.2, T (B1) is weakly compact, hence weakly closed, hence norm closed. 

Exercise 2.3.1. Prove that if F is a closed and bounded set in H then T (F ) is closed.

At the end of this section we consider a situation that occurs quite frequently.

Theorem 2.3.4. Let M be a linear manifold that is dense in Hilbert space H. Every bounded linear transformation T : M → H can be uniquely extended to a bounded linear transformation Tˆ : H → H. In addition, the operator norm of T equals kTˆk.

Proof. Let x ∈ H. Then there exists a sequence {xn}n∈N ⊂ M converging to x. Since {xn}n∈N is also a

Cauchy sequence, for every ε > 0 there exists N ∈ N such that m, n ≥ N ⇒ kxm − xnk < ε/kT k. It follows that, for m, n ≥ N, kT xm − T xnk < ε, so {T xn}n∈N is a Cauchy sequence, hence convergent, and there exists y = limn→∞ T xn. We will define Tˆ x = y, i.e., Tˆ(lim xn) = lim T xn.

First we need to establish that the definition is independent of the sequence $\{x_n\}_{n\in\mathbb{N}}$. If $\{x'_n\}_{n\in\mathbb{N}}$ is another sequence converging to $x$, we form the sequence $(x_1, x'_1, x_2, x'_2, \dots)$, which also converges to $x$. By the previous argument, the sequence $(Tx_1, Tx'_1, Tx_2, Tx'_2, \dots)$ must converge, and therefore both of the subsequences $\{Tx_n\}_{n\in\mathbb{N}}$ and $\{Tx'_n\}_{n\in\mathbb{N}}$ must have the same limit.

Notice that, if xn → x, the continuity of the norm implies that kTˆ xk = k lim T xnk = lim kT xnk ≤ lim kT kkxnk = kT kkxk so kTˆk ≤ kT k. Since the other inequality is obvious we see that kTˆk = kT k. In particular,

Tˆ is a bounded operator. Also, Tˆ(αx + βy) = Tˆ(α lim xn + β lim yn) = Tˆ(lim(αxn + βyn)) = lim T (αxn + βyn) = lim(αT xn + βT yn) = α lim T xn + β lim T yn = αTˆ x + βTˆ y, so Tˆ is linear.

Finally, suppose that T1 and T2 are two continuous extensions of T , and let x ∈ H. If xn → x, the continuity implies that both T1xn → T1x and T2xn → T2x. If xn ∈ M then T1xn = T2xn, so T1x = T2x. Therefore, the extension is unique, and the proof is complete. 

Need an example

2.4. Invariant and Reducing Subspaces

When $M$ is a closed subspace of $H$, we can always write $H = M \oplus M^\perp$. Relative to this decomposition, any operator $T$ acting on $H$ can be written as a $2 \times 2$ matrix with operator entries

(2.2) $T = \begin{bmatrix} X & Y \\ Z & W \end{bmatrix}$.

It is sometimes convenient to consider only the initial space or the target space as a direct sum. In such a situation we will use a $1 \times 2$ or $2 \times 1$ matrix. Thus $[\, X \ \ Y \,]$ will describe an operator $T : M \oplus M^\perp \to H$; if $f \in M$ and $g \in M^\perp$ then $[\, X \ \ Y \,]\begin{bmatrix} f \\ g \end{bmatrix} = Xf + Yg$.

A subspace M is invariant for T if, for any x ∈ M, T x ∈ M. It is reducing for T if both M and M⊥ are invariant for T .

Example 2.4.1. The subspace (0), consisting of the zero vector only, is an invariant subspace for any operator T. Also, H is an invariant subspace for any operator T. Because they are invariant for every operator they are called trivial. A big open problem in operator theory is whether every operator has a non-trivial invariant subspace.

Example 2.4.2. If $M$ is a closed subspace of $H$ and $T_1$ is an operator on $M$ with values in $M$, then the operator $T = T_1 \oplus 0$, defined by $Tx = T_1x$ if $x \in M$ and $Tx = 0$ if $x \in M^\perp$, is an operator in $L(H)$. However, if $M$ is not invariant for $T_1$ (so that $T_1$ maps $M$ into $H$), the same definition ($Tx = T_1x$ for $x \in M$, $Tx = 0$ for $x \in M^\perp$) describes the operator $[\, T_1 \ \ 0 \,]$.

Proposition 2.4.1. If T is an operator on Hilbert space H, and P = PM is the projection onto the closed

subspace M, then the following are equivalent:

(a) M is invariant for T ;

(b) PTP = TP ;

(c) Z = 0 in (2.2).

Proof. It is not hard to see that the matrix for $P$ is $\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$, so $PTP - TP = \begin{bmatrix} 0 & 0 \\ -Z & 0 \end{bmatrix}$. This establishes (b) $\Leftrightarrow$ (c). Since $\begin{bmatrix} f \\ g \end{bmatrix} \in M$ iff $g = 0$, we see that $T\begin{bmatrix} f \\ 0 \end{bmatrix} = \begin{bmatrix} Xf \\ Zf \end{bmatrix} \in M$ for all $f \in M$ iff $Z = 0$, so (a) $\Leftrightarrow$ (c).

Example 2.4.3. Let S be the unilateral shift, n ∈ N, and M = ∨k≥nek. Then SM = ∨k≥n+1ek ⊂ M.

Proposition 2.4.2. If T is an operator on Hilbert space H, and P = PM then the following are equivalent:

(a) M is reducing for T ;

(b) PT = TP ;

(c) Y,Z = 0 in (2.2);

(d) M is invariant for T and T ∗.

Proof. Since $PT - TP = \begin{bmatrix} 0 & Y \\ -Z & 0 \end{bmatrix}$, we see that (b) $\Leftrightarrow$ (c). Further, the matrix for $T^*$ is $\begin{bmatrix} X^* & Z^* \\ Y^* & W^* \end{bmatrix}$ so, by Proposition 2.4.1, $M$ is invariant for $T$ and $T^*$ iff $Z = Y^* = 0$, and (c) $\Leftrightarrow$ (d). In order to prove that (a) $\Leftrightarrow$ (d) it suffices to show that $M$ is invariant for $T^*$ iff $M^\perp$ is invariant for $T$. By Proposition 2.4.1, $M$ is invariant for $T^*$ iff $Y^* = 0$ (iff $Y = 0$). On the other hand, $T\begin{bmatrix} 0 \\ g \end{bmatrix} = \begin{bmatrix} Yg \\ Wg \end{bmatrix} \in M^\perp$ iff $Yg = 0$ for all $g$.

Exercise 2.4.1. Prove that the matrix for $T^*$ is $\begin{bmatrix} X^* & Z^* \\ Y^* & W^* \end{bmatrix}$.

Example 2.4.4. Let $T = M_h$, let $E \subset [0,1]$ with $m(E) > 0$, and let $M = L^2(E)$. If $f \in M$ then $Tf = hf \in M$. Also, $T^* = M_{\overline{h}}$ and $T^*f = \overline{h}f \in M$, so $M$ is reducing for $T$.

Example 2.4.5. Let $S$ be the unilateral shift, $n \ge 2$, and $M = \vee_{k\ge n} e_k$. Then $M$ is invariant for $S$ but not reducing, since $e_n \in M$ but $S^*e_n = e_{n-1} \notin M$.

2.5. Finite rank operators

The closest relatives of finite matrices are the finite rank operators.

Definition 2.5.1. An operator T is a finite rank operator if its range is finite dimensional. We denote the set of finite rank operators by F.

Example 2.5.1. If T is a rank one operator u ⊗ v (see Example 2.3.1) then the range of u ⊗ v is the one dimensional subspace spanned by u, so u ⊗ v ∈ F.

The rank one operators turn out to be the building blocks out of which finite rank operators are made.

Proposition 2.5.1. If $T$ is a linear operator on $H$ then $T$ belongs to $\mathcal{F}$ iff there exist vectors $u_1, u_2, \dots, u_n$ and $v_1, v_2, \dots, v_n$ such that $Tx = \sum_{i=1}^{n} \langle x, v_i\rangle u_i$.

Proof. Suppose that $\mathrm{Ran}\,T$ is of finite dimension $n$, and let $e_1, e_2, \dots, e_n$ be an o.n.b. of $\mathrm{Ran}\,T$. Then $Tx = \sum_{i=1}^{n} \langle Tx, e_i\rangle e_i = \sum_{i=1}^{n} \langle x, T^*e_i\rangle e_i$. We leave the converse as an exercise.

Exercise 2.5.1. Prove that if there exist vectors $u_1, u_2, \dots, u_n$, $v_1, v_2, \dots, v_n$ such that $Tx = \sum_{i=1}^{n} \langle x, v_i\rangle u_i$, for all $x \in H$, then $\mathrm{Ran}\,T$ is of dimension at most $n$.

Exercise 2.5.2. Prove that if $T = \sum u_i \otimes v_i$ then $T^* = \sum v_i \otimes u_i$.
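In C^n the rank one operator u ⊗ v of Example 2.3.1 is the outer product matrix uv*, and the statements of Proposition 2.5.1 and Exercise 2.5.2 can be checked directly; the sketch below uses randomly generated vectors for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n = 7, 3
U = rng.standard_normal((dim, n)) + 1j * rng.standard_normal((dim, n))
V = rng.standard_normal((dim, n)) + 1j * rng.standard_normal((dim, n))

# T = sum_i u_i (x) v_i, where u (x) v acts as x |-> <x, v> u, i.e. the matrix u v^*
T = sum(np.outer(U[:, i], V[:, i].conj()) for i in range(n))
assert np.linalg.matrix_rank(T) <= n                    # finite rank, at most n
assert np.allclose(T.conj().T,
                   sum(np.outer(V[:, i], U[:, i].conj()) for i in range(n)))  # T^* = sum v_i (x) u_i
```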

The next theorem summarizes some very important properties of the class F.

Theorem 2.5.2. The set F is a minimal ∗-ideal in L(H).

Here the star means that F is closed under the operation of taking adjoints.

Proof. It is obvious that $\mathcal{F}$ is a subspace of $L(H)$. Furthermore, if $T \in \mathcal{F}$ and $A \in L(H)$, then $\mathrm{Ran}\,TA \subset \mathrm{Ran}\,T$, so $TA \in \mathcal{F}$. Also, if $T$ is of finite rank, then according to Proposition 2.5.1, $T = \sum_{i=1}^{n} u_i \otimes v_i$, so $T^* = \sum_{i=1}^{n} v_i \otimes u_i$. It follows that $T^* \in \mathcal{F}$ and the same is true of $T^*A^*$, for any $A \in L(H)$. Consequently, $AT$ is of finite rank, and $\mathcal{F}$ is a $*$-ideal. To see that it is minimal, it suffices to show that, if $J$ is a non-zero ideal, then $J$ contains all rank one operators. Let $T \in J$, $T \neq 0$. Then there exist vectors $x, y$ such that $\|y\| = 1$ and $y = Tx$. Let $u \otimes v$ be a rank one operator. Since $J$ is an ideal, it contains the product $(u \otimes y)T(x \otimes v)$, which equals $u \otimes v$.

A finite rank operator is a generalization of a finite matrix. What happens when we take the closure of F in some topology?

Exercise 2.5.3. Prove that the strong closure of F is L(H). [Hint: Prove that Pn → I strongly.] Conclude that the weak closure of F is also L(H).

2.6. Compact Operators

Exercise 2.5.3 established that the strong closure of F is L(H). Therefore, we consider the norm topology.

Definition 2.6.1. An operator T in L(H) is compact if it is the limit of a sequence of finite rank operators.

We denote the set of compact operators by K.

Example 2.6.1. Let T = diag(cn) as in Example 2.1.2, with limn→∞ cn = 0. Then T is compact. Reason: take Tn = diag(c1, c2, . . . , cn, 0, 0,... ). Then Tn ∈ F and kT − Tnk = sup{|ck| : k ≥ n + 1} → 0. It follows that T is compact.

Example 2.6.2. Let $T = T_K$ as in Example 2.1.5. If $K \in L^2([0,1] \times [0,1])$ then $T_K$ is compact. We will point out several different sequences in $\mathcal{F}$ that all converge to $T_K$.

We start with a function theoretic approach: simple functions are dense in $L^2$ (Royden, p. 128), and a similar proof establishes that simple functions are dense in $L^2([0,1] \times [0,1])$. Since a simple function is a linear combination of the characteristic functions of rectangles, $\chi_{[a,b]\times[c,d]}(x,y) = \chi_{[a,b]}(x)\chi_{[c,d]}(y)$, it follows that $K(x,y)$ is the $L^2$ limit of functions of the form $K_n(x,y) = \sum_{i=1}^{n} f_i(x)g_i(y)$, so $T_K$ is the norm limit of $T_{K_n}$, which are all finite rank operators.

Exercise 2.6.1. Verify that $T_{K_n} \in \mathcal{F}$, if $K_n(x,y) = \sum_{i=1}^{n} f_i(x)g_i(y)$.

Our second approach exploits the fact that $L^2$ is Hilbert space. If $\{e_j\}_{j\in\mathbb{N}}$ is an o.n.b. of $L^2$ we can, for a fixed $y$, write $K(x,y) = \sum_{j=1}^{\infty} k_j(y)e_j(x)$. Now define $K_N(x,y) = \sum_{j=1}^{N} k_j(y)e_j(x)$ and notice that $T_{K_N} \to T_K$ as $N \to \infty$.

Exercise 2.6.2. Verify that TKN ∈ F and that limN→∞ TKN = TK , if KN (x, y) is as above.

Our last method is based on the matrix for $T_K$. Let $k_{ij} = \langle T_K e_j, e_i\rangle$, with $\{e_n\}_{n\in\mathbb{N}}$ an o.n.b. of $L^2([0,1])$. First we notice that, for any $f \in L^2$, $\sum_k |\langle f, e_k\rangle|^2 = \|\sum_k \langle f, e_k\rangle e_k\|^2 = \|f\|^2$. Therefore,
\[
\sum_{j=1}^{\infty} |\langle T_K e_j, e_i\rangle|^2 = \sum_{j=1}^{\infty} |\langle e_j, T_K^*e_i\rangle|^2 = \sum_{j=1}^{\infty} |\langle T_K^*e_i, e_j\rangle|^2 = \|T_K^*e_i\|^2 = \int_0^1 \Big| \int_0^1 K^*(y,x)e_i(x)\, dx \Big|^2 dy = \int_0^1 \Big| \int_0^1 \overline{K(x,y)}\,e_i(x)\, dx \Big|^2 dy.
\]
It follows that, for any $n \in \mathbb{N}$,
\[
\sum_{i=1}^{n}\sum_{j=1}^{\infty} |k_{ij}|^2 = \sum_{i=1}^{n} \int_0^1 \Big| \int_0^1 \overline{K(x,y)}\,e_i(x)\, dx \Big|^2 dy \le \int_0^1 \sum_{i=1}^{\infty} \Big| \int_0^1 \overline{K(x,y)}\,e_i(x)\, dx \Big|^2 dy = \int_0^1\!\!\int_0^1 |K(x,y)|^2\, dx\, dy,
\]
so the series $\sum_{i=1}^{\infty}\sum_{j=1}^{\infty} |k_{ij}|^2$ converges. Operators whose matrices satisfy this condition are called Hilbert–Schmidt operators. The Hilbert–Schmidt norm is defined as $\|T_K\|_2 = \big\{ \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} |k_{ij}|^2 \big\}^{1/2}$, and it satisfies the inequality $\|A\| \le \|A\|_2$. Hilbert–Schmidt operators are compact because we can define $T_n$ to be the matrix consisting of the first $n$ rows of the matrix of $T_K$ and having the remaining entries 0. Then each $T_n \in \mathcal{F}$ and $\|T_n - T_K\| \to 0$. Indeed, $\mathrm{Ran}\,T_n \subset \vee\{e_1, e_2, \dots, e_n\}$, and $\|T_K - T_n\|^2 \le \|T_K - T_n\|_2^2 = \sum_{i=n+1}^{\infty}\sum_{j=1}^{\infty} |k_{ij}|^2 \to 0$, $n \to \infty$.

Exercise 2.6.3. Prove that the Hilbert–Schmidt norm is indeed a norm and, for any T ∈ L(H), kT k ≤ kT k_2.
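A finite matrix illustration of the inequality of Exercise 2.6.3 (a sketch with a random matrix): the operator (spectral) norm never exceeds the Hilbert–Schmidt (Frobenius) norm.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 30)) + 1j * rng.standard_normal((30, 30))

op_norm = np.linalg.norm(A, 2)        # largest singular value, i.e. ||A||
hs_norm = np.linalg.norm(A, 'fro')    # square root of the sum of |a_ij|^2, i.e. ||A||_2
print(op_norm, hs_norm)
assert op_norm <= hs_norm + 1e-12
```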

Next we consider some of the properties of compact operators. The first one follows directly from the definition.

Theorem 2.6.1. The set K is the smallest closed ∗-ideal in L(H).

The following result reveals the motivation for calling these operators compact.

Theorem 2.6.2. An operator T in L(H) is compact iff it maps the closed unit ball of H into a compact set.

Proof. Suppose that $K$ is compact and let $\{y_n\}_{n\in\mathbb{N}}$ be a sequence in $K(B_1)$. We will show that there exists a subsequence of $\{y_n\}$ that converges to an element of $K(B_1)$. Notice that, for every $n \in \mathbb{N}$, $y_n = Kx_n$, and $x_n$ belongs to the weakly compact set $B_1$. Thus, there exists a subsequence $\{x_{n_k}\}$ converging weakly to $x \in B_1$, and it suffices to show that $Kx_{n_k}$ converges to $Kx$. Let $\{K_n\}$ be a sequence in $\mathcal{F}$ that converges to $K$. For any $m \in \mathbb{N}$, $K_m(B_1)$ is a bounded and (by Corollary 2.3.3) closed set that is contained in a finite dimensional subspace of $H$, so it is compact. By Theorem 1.4.4, $\{K_m x_{n_k}\}_{k\in\mathbb{N}}$ converges to $K_m x$. Now, let $\epsilon > 0$. Then there exists $N \in \mathbb{N}$ such that $\|K - K_N\| < \epsilon/3$. Further, with $N$ fixed, there exists $k_0 \in \mathbb{N}$ so that, for $k \ge k_0$, $\|K_N x_{n_k} - K_N x\| < \epsilon/3$. Therefore, for $k \ge k_0$,
\[
\|Kx_{n_k} - Kx\| \le \|(K - K_N)x_{n_k}\| + \|K_N(x_{n_k} - x)\| + \|(K_N - K)x\| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon.
\]
Thus, $y_{n_k} = Kx_{n_k}$ is a subsequence converging to $Kx \in K(B_1)$, so $K(B_1)$ is a compact set.

Suppose now that $K(B_1)$ is compact and let $n \in \mathbb{N}$. Notice that $\cup_{y\in K(B_1)} B(y, 1/n)$ is an open covering of the compact set $K(B_1)$, so there exist vectors $x_1^{(n)}, x_2^{(n)}, \dots, x_k^{(n)} \in B_1$ so that $\cup_{i=1}^{k} B(Kx_i^{(n)}, 1/n)$ is a covering of $K(B_1)$. Let $H_n$ be the span of $Kx_1^{(n)}, Kx_2^{(n)}, \dots, Kx_k^{(n)}$ and $P_n$ the orthogonal projection on $H_n$. Finally, let $K_n = P_nK$. Clearly, $K_n \in \mathcal{F}$. Let $\epsilon > 0$, and choose $N > 1/\epsilon$. If $n \ge N$ and $\|x\| \le 1$, then $\|Kx - K_nx\| = \|Kx - P_nKx\|$. Since $P_nKx$ is the point in $H_n$ closest to $Kx$, it follows that $\|Kx - K_nx\| \le \inf_{1\le i\le k} \|Kx - Kx_i^{(n)}\| < 1/n < \epsilon$. Thus $K_n \to K$ and the proof is complete.

Remark 2.6.1. In many texts the characterization of compact operators, established in Theorem 2.6.2, is taken to be the definition of a compact operator.

Exercise 2.6.4. Prove that if F is a closed and bounded set in H and T is a compact operator in L(H) then

T (F ) is a compact set.

There is another characterization of compact operators:

Proposition 2.6.3. If T is a linear operator on H then T is compact iff it maps every weakly convergent sequence into a convergent sequence. In this situation, if w − lim xn = x then lim T xn = T x.

Proof. Suppose first that $T$ is compact and let $w\text{-}\lim x_n = x$. By Theorem 1.4.3, there exists $M > 0$ such that, for all $n \in \mathbb{N}$, $\|x_n\| \le M$. Therefore, $Tx_n/M \in T(B_1)$, which is compact by Theorem 2.6.2. Now Theorem 1.4.4 implies that $\lim Tx_n = Tx$.

In order to establish the converse, we will demonstrate that T (B1) is compact by showing that every sequence in T (B1) has a convergent subsequence. Let {yn}n∈N ⊂ T (B1). Then yn = T xn, for xn ∈ B1, so the Banach–

Alaoglu Theorem implies that {xn} has a weakly convergent subsequence {xnk } and, by assumption, {T xnk } is a (strongly) convergent subsequence of {T xn}. 

Example 2.6.3. We have seen in Example 2.6.1 that if T = diag(cn) and cn → 0, then T is compact. The converse is also true: if {en} is the o.n.b. which makes T diagonal, then T en → 0 (because w − lim en = 0 and T is compact), so kcnenk → 0.

It is useful to know that compactness is inherited by the parts of an operator.

Theorem 2.6.4. Suppose that $T$ is a compact operator on Hilbert space $H = M \oplus M^\perp$ and that, relative to this decomposition, $T = \begin{bmatrix} X & Y \\ Z & W \end{bmatrix}$. Then each of the operators $X, Y, Z, W$ is compact.

Proof. Let $\{T_n\}$ be a sequence of finite rank operators that converges to $T$. Write, for each $n \in \mathbb{N}$, $T_n = \begin{bmatrix} X_n & Y_n \\ Z_n & W_n \end{bmatrix}$. Then all the operators $X_n, Y_n, Z_n, W_n \in \mathcal{F}$ and they converge to $X, Y, Z, W$, respectively.

Exercise 2.6.5. Prove that $X_n, Y_n, Z_n, W_n \in \mathcal{F}$ and that they converge to $X, Y, Z, W$, respectively. [Consider the projections $P_1 = P_M$ and $P_2 = P_{M^\perp}$ and notice that, for example, $P_1TP_2 = \begin{bmatrix} 0 & Y \\ 0 & 0 \end{bmatrix}$, so $\|Y_n - Y\| \le \|T_n - T\|$ and $\mathrm{Ran}\,Y_n \subset \mathrm{Ran}\,P_1T_nP_2$, the latter being finite dimensional.]

2.7. Normal operators

Definition 2.7.1. If T is an operator on Hilbert space H then:

(a) T is normal if TT ∗ = T ∗T ;

(b) T is self-adjoint (or Hermitian) if T = T ∗;

(c) T is positive if hT x, xi ≥ 0 for all x ∈ H;

(d) T is unitary if TT ∗ = T ∗T = I.

Example 2.7.1. Let $T = \mathrm{diag}(c_n)$. Then $T^* = \mathrm{diag}(\overline{c_n})$, so $T$ is normal. Also, $T = T^*$ iff $c_n \in \mathbb{R}$, $n \in \mathbb{N}$, and $T$ is positive iff $c_n \ge 0$, $n \in \mathbb{N}$. Finally, $T^*T = \mathrm{diag}(|c_n|^2)$, so $T$ is unitary iff $|c_n| = 1$, $n \in \mathbb{N}$.

2 Exercise 2.7.1. Let T = Mh on L . Prove that T is normal and that it is: self-adjoint iff h(x) ∈ R, a.e.;

positive iff h(x) ≥ 0 a.e.; unitary iff |h(x)| = 1 a.e..

The relationship between T and T ∗ that defines each of these classes allows us to establish some of their

significant properties.

Proposition 2.7.1. An operator T on Hilbert space H is self-adjoint iff hT x, xi is real for any x ∈ H.

Proof. If $T = T^*$ then $\langle Tx, x\rangle = \langle x, T^*x\rangle = \langle x, Tx\rangle = \overline{\langle Tx, x\rangle}$, so $\langle Tx, x\rangle \in \mathbb{R}$. On the other hand, if $\langle Tx, x\rangle$ is real for any $x \in H$ then the Second Polarization Identity implies that $\langle Tx, y\rangle = \overline{\langle Ty, x\rangle} = \langle x, Ty\rangle$, so $T = T^*$.

Exercise 2.7.2. Prove that $\langle Tx, x\rangle \in \mathbb{R}$ for all $x$ implies that $\langle Tx, y\rangle = \overline{\langle Ty, x\rangle}$ for all $x, y$.

Corollary 2.7.2. If P is a positive operator on Hilbert space H then P is self-adjoint.

Example 2.7.2. If P is the orthogonal projection on a subspace M of Hilbert space H, then P is a positive operator. Indeed, if z ∈ H write z = x + y relative to H = M ⊕ M⊥. By Theorem 1.3.2, P z = x and P y = 0, so hP z, zi = hx, x + yi = kxk2 ≥ 0.

Combining Theorem 1.3.2 and Example 2.7.2 we see that every projection is a positive idempotent. In fact, the converse is also true.

Theorem 2.7.3. If T is an idempotent self-adjoint operator then T is a projection on M = {x ∈ H : T x = x}.

Proof. Let z ∈ H and write it as z = T z + (z − T z). Now T (T z) = T z so T z ∈ M. Also, z − T z ∈ M⊥.

Indeed, if x ∈ M, then hx, z − T zi = hx, zi − hx, T zi = hx, zi − hT x, zi = 0. 

By Proposition 2.1.1, the norm of every operator T in L(H) can be computed by considering the supremum of the values of its hT x, yi. The next result shows that, when T is self adjoint, it suffices to consider only some pairs of x, y ∈ B1.

Proposition 2.7.4. If T is a self-adjoint operator on Hilbert space H then kT k = sup{|hT x, xi| : kxk = 1}.

Proof. Clearly, $|\langle Tx, x\rangle| \le \|T\|\|x\|^2$, so if we denote by $\alpha$ the supremum above, we have that $\alpha \le \|T\|$. To prove that $\alpha = \|T\|$, we use the Second Polarization Identity and notice that, in view of the assumption $T = T^*$ and Proposition 2.7.1, $4\,\mathrm{Re}\langle Tx, y\rangle = \langle T(x+y), x+y\rangle - \langle T(x-y), x-y\rangle$. Moreover, using the Parallelogram Law, and assuming that $x$ and $y$ are unit vectors, we obtain that $4\,\mathrm{Re}\langle Tx, y\rangle \le \alpha\|x+y\|^2 + \alpha\|x-y\|^2 = \alpha(2\|x\|^2 + 2\|y\|^2) = 4\alpha$. When $x$ is selected so that $\|Tx\| \neq 0$ and $y = Tx/\|Tx\|$, we obtain $\mathrm{Re}\langle Tx, y\rangle = \|Tx\| \le \alpha$, so $\|T\| \le \alpha$.

Exercise 2.7.3. Prove that the product of two self-adjoint operators is self-adjoint iff the operators commute.

Remark 2.7.1. If we write A = (T + T ∗)/2 and B = (T − T ∗)/2i then the operators A, B are self-adjoint

and T = A + iB. We call them the real part and the imaginary part of T .

Proposition 2.7.5. If T is an operator on Hilbert space H then the following are equivalent.

(a) T is a normal operator;

(b) kT xk = kT ∗xk for all x ∈ H;

(c) the real and imaginary part of T commute.

Proof. Notice that $\|Tx\|^2 - \|T^*x\|^2 = \langle (T^*T - TT^*)x, x\rangle$. If $T$ is normal then the right side is 0, so (a) implies (b). If (b) is true, then the left side is 0 for all $x$. Since $T^*T - TT^*$ is self-adjoint, Proposition 2.7.4 implies that its norm is 0, so (b) implies (a). A calculation shows that, if $A$ and $B$ are the real and imaginary parts of $T$, respectively, then $AB - BA = (T^*T - TT^*)/2i$, so (a) is equivalent to (c).
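A matrix illustration of Remark 2.7.1 and of the equivalence just proved (the normal operator below is generated, purely for illustration, by conjugating a diagonal matrix by a random unitary):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
D = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
T = Q @ D @ Q.conj().T                        # a normal operator: T T^* = T^* T

A = (T + T.conj().T) / 2                      # real part
B = (T - T.conj().T) / (2j)                   # imaginary part
assert np.allclose(A, A.conj().T) and np.allclose(B, B.conj().T)   # both self-adjoint
assert np.allclose(T, A + 1j * B)             # T = A + iB
assert np.allclose(A @ B, B @ A)              # parts commute, since T is normal
```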

In Definition 1.2.3 we have introduced the concept of the Hilbert space isomorphism. Since it preserves the inner product ($\langle Ux, Uy\rangle = \langle x, y\rangle$), it preserves the norm, and hence both weak and strong topologies. Therefore, if $U : H \to K$, we do not distinguish between an operator $T \in L(H)$ and $UTU^{-1} \in L(K)$, and we say that they are unitarily equivalent. Since, by Definition 2.7.1, an operator $T$ is unitary iff $TT^* = T^*T = I$, we should check that $UU^* = U^*U = I$.

Exercise 2.7.4. Verify that $UU^* = I_K$ and $U^*U = I_H$.

Notice that both equalities need to be verified, because it is quite possible for one to hold but not the other.

Example: the unilateral shift S satisfies S∗S = I ≠ SS∗.

Exercise 2.7.5. Prove that T is an isometry iff T ∗T = I.

Exercise 2.7.3 asserts that the product of two self-adjoint operators is itself self-adjoint iff the operators commute. What if self-adjoint is replaced by normal? If $M, N$ are commuting normal operators, their product is normal iff $MN$ commutes with $N^*M^*$, and it looks like we need the additional assumption that $M$ commutes with $N^*$ (which also gives that $M^*$ commutes with $N$). When an operator $T$ commutes with both $N$ and $N^*$ we say that $T$ doubly commutes with $N$. When $N$ is normal we can establish an even stronger result.

Theorem 2.7.6 (Fuglede–Putnam Theorem). Suppose that M, N are normal operators and T ∈ L(H) intertwines M and N: MT = TN. Then M ∗T = TN ∗.

Proof. Let $\lambda$ be a complex number, and denote $A = \lambda M$, $B = \lambda N$. Notice that $AT = TB$, so $A^2T = A(AT) = A(TB) = (AT)B = (TB)B = TB^2$, and inductively $A^kT = TB^k$, for $k \in \mathbb{N}$. It follows that, if we denote the exponential function by $\exp(z)$, $\exp(A)T = T\exp(B)$. It is not hard to see that $\exp(-A)\exp(A) = I$, so
\[
T = \exp(-A)\,T\exp(B).
\]
If we denote $U_1 = \exp(A^* - A)$, $U_2 = \exp(B - B^*)$, then both $U_1, U_2$ are unitary operators. Indeed, $U_1^* = \big[\sum (A^* - A)^n/n!\big]^* = \sum (A - A^*)^n/n! = \exp(A - A^*) = U_1^{-1}$, and similarly for $U_2$. Since $M$ and $N$ are normal, $A$ commutes with $A^*$ and $B$ commutes with $B^*$, so $U_1TU_2 = \exp(A^*)\exp(-A)\,T\exp(B)\exp(-B^*) = \exp(A^*)\,T\exp(-B^*)$, and therefore $\|\exp(A^*)T\exp(-B^*)\| = \|U_1TU_2\| = \|T\|$. We conclude that
\[
\|\exp(\overline{\lambda}M^*)\,T\exp(-\overline{\lambda}N^*)\| = \|T\| \quad \text{for all } \lambda \in \mathbb{C}.
\]
Now $f(\lambda) = \exp(\lambda M^*)\,T\exp(-\lambda N^*)$ is an entire bounded function, hence a constant. Therefore, $f'(0) = 0$. On the other hand, $f'(\lambda) = M^*\exp(\lambda M^*)\,T\exp(-\lambda N^*) + \exp(\lambda M^*)\,T\exp(-\lambda N^*)(-N^*)$, so $f'(0) = M^*T - TN^*$, and the theorem is proved.

Exercise 2.7.6. Prove that exp(−T ) exp(T ) = I for any operator T ∈ L(H).

Corollary 2.7.7. The product of two normal operators is itself normal iff the operators commute.

Exercise 2.7.7. Prove Corollary 2.7.7.

CHAPTER 3

Spectrum

3.1. Invertibility

In Linear Algebra we learn that each of the properties of being invertible, injective, or surjective implies the other two. Things are very different in infinite dimensional Hilbert space.

Example 3.1.1. Let T = diag(1/n). It is easy to see that Ker T = (0) so T is injective. However, it is not surjective, because its range does not contain the sequence (1, 1/2, 1/3,... ) ∈ `2.

Exercise 3.1.1. Prove that T = diag(1/n) is injective but (1, 1/2, 1/3,... ) ∈/ Ran T .

Example 3.1.2. The backward shift $S^*$ (see Example 2.2.2) is surjective: given $(y_1, y_2, \dots) \in \ell^2$ we have that $S^*(0, y_1, y_2, \dots) = (y_1, y_2, \dots)$. On the other hand, $S^*e_1 = 0$, so $S^*$ is not injective. Also, $S^*S(x_1, x_2, x_3, \dots) = S^*(0, x_1, x_2, \dots) = (x_1, x_2, x_3, \dots)$, so $S^*S = I$. However, $SS^*(x_1, x_2, x_3, \dots) = S(x_2, x_3, \dots) = (0, x_2, x_3, \dots)$, so $SS^* \neq I$.

We say that an operator T is left invertible if there exists an operator L ∈ L(H) such that LT = I. It is right invertible if there exists an operator R such that TR = I. Therefore, the unilateral shift S is left invertible, while

S∗ is right invertible. Since S is injective, it is tempting to jump to the conclusion that an operator is injective iff it is left invertible.

Example 3.1.3. The Volterra integral operator $V$ is defined on $L^2$ by $Vf(x) = \int_0^x f(t)\, dt$. Since this is an integral operator $T_K$ with $K = \chi_E(x,y)$, where $E = \{(x,y) : y \le x\}$ and $\chi_E \in L^2$, $V$ is a compact operator, so it cannot be left invertible. Yet, $V$ is injective since $Vf = 0$ implies that $f = 0$.

Exercise 3.1.2. Prove that the Volterra integral operator V is injective.


Exercise 3.1.3. Prove that the range of the Volterra integral operator V is a dense linear manifold in H.

Instead of injectivity, another condition plays a major role in the questions about invertibility.

Definition 3.1.1. An operator T ∈ L(H) is bounded below if there exists α > 0 such that kT xk ≥ αkxk, for

all x ∈ H.

Example 3.1.4. Let T = diag(cn). Then T is bounded below iff |cn| ≥ α > 0, n ∈ N.

An immediate consequence of this property concerns the range of the operator.

Theorem 3.1.1. If an operator T on Hilbert space H is bounded below then its range is a closed subset of H.

Proof. Let yn be a sequence of vectors in Ran T converging to y. Then yn = T xn for some xn ∈ H, so

kyn − ymk = kT xn − T xmk ≥ αkxn − xmk. Since {yn} is a Cauchy sequence, the same is true of {xn}. Let

x = lim xn. Then T xn → T x, i.e. yn → T x. Thus y = T x ∈ Ran T , and Ran T is closed. 

Example 3.1.3 shows that the injectivity is not sufficient to guarantee the left invertibility. The next result

gives the correct necessary and sufficient conditions.

Theorem 3.1.2. Let T be an operator in L(H). The following are equivalent:

(a) T is left invertible;

(b) Ker T = (0) and Ran T is closed;

(c) T is bounded below.

Proof. If LT = I then kxk = kLT xk ≤ kLkkT xk, so T is bounded below with α = 1/kLk, and (a) ⇒ (c).

Clearly, if T is bounded below it must be injective, and the fact that its range is closed is Theorem 3.1.1, so (c)

implies (b). If (b) is true then, by the Open Mapping Theorem (Royden, p.230), there exists a bounded linear

operator $L_1 : \mathrm{Ran}\,T \to H$, such that $L_1T = I$. If we define $L = [\, L_1 \ \ 0 \,]$ relative to $H = \mathrm{Ran}\,T \oplus (\mathrm{Ran}\,T)^\perp$ (see Example 2.4.2), then $L \in L(H)$ and $LT = I$.

Exercise 3.1.4. Prove that $T = \begin{bmatrix} I & A \\ 0 & I \end{bmatrix}$ is bounded below for any operator $A$.

A similar characterization is available for surjectivity. The most efficient approach seems to be based on the

observation that T is right invertible iff T ∗ is left invertible. In order to continue in this direction we need the

following result, which is significant on its own.

Theorem 3.1.3. The operator T has closed range iff the range of T ∗ is closed.

Proof. Since $T^{**} = T$ it suffices to prove one of the two implications. To that end, let $\mathrm{Ran}\,T$ be closed, and let $\{x_n\}$ be a sequence of vectors such that $T^*x_n$ converges to $y$. We will show that $y \in \mathrm{Ran}\,T^*$. Since $\mathrm{Ran}\,T$ is closed we can write $H = \mathrm{Ran}\,T \oplus \mathrm{Ker}\,T^*$. If, relative to this decomposition, $x_n = x'_n \oplus x''_n$, then $T^*x_n = T^*x'_n$ so, without loss of generality, we may assume that the sequence $\{x_n\}$ belongs to $\mathrm{Ran}\,T$. The convergence of $T^*x_n$ implies the weak convergence so, for any $z \in H$, $\langle T^*x_n, z\rangle \to \langle y, z\rangle$. It follows that $\langle x_n, Tz\rangle \to \langle y, z\rangle$ and, moreover, that $\langle x_n, w\rangle$ converges for any $w \in H$. Indeed, if we write $w = w_1 \oplus w_2$, where $w_1 \in \mathrm{Ran}\,T$ (so $w_1 = Tz_1$) and $w_2 \in \mathrm{Ker}\,T^*$ (so $\langle x_n, w_2\rangle = 0$), we see that $\{x_n\}$ is a weakly convergent sequence. If $w\text{-}\lim x_n = x$ then $w\text{-}\lim T^*x_n = T^*x$. On the other hand, $T^*x_n$ converges to $y$, so $y = T^*x \in \mathrm{Ran}\,T^*$.

Now we can deliver the promised characterizations of surjectivity.

Theorem 3.1.4. Let T be an operator in L(H). The following are equivalent:

(a) T is right invertible;

(b) T ∗ is bounded below.

(c) T is surjective.

Proof. The equivalence of (a) and (b) follows from Theorem 3.1.2 applied to T ∗. Further, TR = I implies

that TR is surjective. Since Ran TR ⊂ Ran T , T is surjective and (a) implies (c). Finally, let T be surjective.

This implies that Ker T ∗ = (0) and also, via Theorem 3.1.3, that Ran T ∗ is closed. Applying Theorem 3.1.2 we

∗ see that T is left invertible and the result follows by taking adjoints. 

We close this section with a sufficient condition for invertibility that is of quite a different nature.

Theorem 3.1.5. If T is an operator on Hilbert space H and kI − T k < 1 then T is invertible.

Proof. Let α = 1 − kI − T k ∈ (0, 1]. If x ∈ H, then kT xk = kx − (I − T )xk ≥ kxk − kI − T kkxk = αkxk, so T is bounded below. Suppose now that the range of T is not dense in H. Then there exists y ∈ H such that d = inf{ky − xk : x ∈ Ran T } > 0. It follows that there exists x ∈ Ran T such that (1 − α)ky − xk < d. (Obvious if α = 1; otherwise β = 1/(1 − α) > 1, so there exists x such that ky − xk < βd.) Notice that x + T (y − x) ∈ Ran T , so d ≤ ky − x − T (y − x)k ≤ kI − T kky − xk < d, which is a contradiction, so T has dense range. Since T is bounded below, its range is also closed (Theorem 3.1.1), hence Ran T = H, and Theorem 3.1.2 shows that T is invertible.

Second proof: Since $\|I - T\| < 1$, the series $I + (I-T) + (I-T)^2 + (I-T)^3 + \cdots$ converges in the operator norm, and it is easy to verify that its sum is a two-sided inverse of $T = I - (I-T)$ (see Exercise 3.1.6, applied to $I-T$ in place of $T$).

Exercise 3.1.5. Prove that, if $\|T\| < 1$, the series $\sum_{n=0}^{\infty} T^n$ converges uniformly.

Exercise 3.1.6. Verify that, if $\|T\| < 1$, then $(I - T)^{-1} = \sum_{n=0}^{\infty} T^n$.
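For a concrete illustration of the Neumann series (the matrix is chosen only for illustration), take $T = \begin{bmatrix} 0 & \tfrac12 \\ 0 & 0 \end{bmatrix}$ on $\mathbb{C}^2$. Then $\|T\| = \tfrac12 < 1$ and $T^2 = 0$, so the series collapses to
$$(I - T)^{-1} = I + T = \begin{bmatrix} 1 & \tfrac12 \\ 0 & 1 \end{bmatrix},$$
and indeed $(I - T)(I + T) = I - T^2 = I$.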

3.2. Spectrum

A complex number λ belongs to the spectrum of an operator T (notation: λ ∈ σ(T )) if T − λI is not invertible. The complement of σ(T ) is called the resolvent set of T and is denoted by ρ(T ). The spectral radius of T is r(T ) = sup{|λ| : λ ∈ σ(T )}. While it is more pedantic to write λI, it is customary to omit the identity and write just λ for the operator λI. As usual, the interest in the spectrum of a linear operator T is motivated by the

finite dimensional case. In that situation, λ ∈ σ(T ) iff λ is an eigenvalue of T , and eigenvalues play an essential role in the structure theory via the Jordan form. As we will see, the situation is quite different in the infinite dimensional Hilbert space.

Example 3.2.1. Let T = diag(cn). If λ = cn for some n, then T − λ has non-trivial kernel (containing en) so the spectrum contains the whole diagonal. Is there more? If T = diag(1/n) then T is not invertible so 0 belongs to the spectrum of T , although it is not one of the diagonal entries and not an eigenvalue. What about the sequence {cn} = (1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 4/5, . . . )? The operator T = diag(cn) is not invertible, but neither is T − 1, so both 0 and 1 belong to the spectrum of T . Should we include limit points of the sequence as well?

The truth is, we cannot address the problem before we establish some essential properties of the spectrum.

Proposition 3.2.1. If T is an operator on Hilbert space H then r(T ) ≤ kT k.

Proof. If |λ| > kT k then kT /λk < 1. By Theorem 3.1.5, the operator I − T /λ is invertible; hence so is T − λ = −λ(I − T /λ), and λ ∉ σ(T ).

Example 3.2.2. Let S∗ be the backward shift on ℓ² (see Example 2.2.2). If |λ| < 1 then the sequence u = (1, λ, λ², λ³, . . . ) is in ℓ² and it is an eigenvector of S∗, i.e., S∗u = λu, so S∗ − λ has non-trivial kernel and is not invertible. Consequently, the spectrum of S∗ contains the open unit disk. On the other hand, kS∗k = kSk = 1 so, by Proposition 3.2.1, σ(S∗) is contained in the closed unit disk.
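The eigenvector equation is a one-line check:
$$S^*(1, \lambda, \lambda^2, \lambda^3, \ldots) = (\lambda, \lambda^2, \lambda^3, \ldots) = \lambda\,(1, \lambda, \lambda^2, \ldots), \qquad \|u\|^2 = \sum_{n=0}^{\infty} |\lambda|^{2n} = \frac{1}{1 - |\lambda|^2} < \infty,$$
the last sum being finite precisely because $|\lambda| < 1$.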

Example 3.2.2 raises once again the question whether the spectrum must contain its boundary points.

Theorem 3.2.2. If T is an operator on Hilbert space H then σ(T ) is a non-empty compact set.

Proof. Proposition 3.2.1 shows that the spectrum of T is bounded. To show that it is closed, we will show that ρ(T ) is open. Let λ0 ∈ ρ(T ) so that T − λ0 is invertible. Since

$$\|I - (T - \lambda_0)^{-1}(T - \lambda)\| = \|(T - \lambda_0)^{-1}[(T - \lambda_0) - (T - \lambda)]\| = \|(T - \lambda_0)^{-1}\|\,|\lambda - \lambda_0|,$$
we see that $\|I - (T - \lambda_0)^{-1}(T - \lambda)\| < 1$ if $|\lambda - \lambda_0|$ is sufficiently small. By Theorem 3.1.5, for such λ the operator $(T - \lambda_0)^{-1}(T - \lambda)$ is invertible, so the same is true of T − λ. Consequently ρ(T ) is open.

Our next goal is to show that the spectrum of a bounded operator cannot be empty. In order to do that, let x, y ∈ H, and consider the complex-valued function F (λ) = h(T − λ)−1x, yi defined for λ ∈ ρ(T ).

Proposition 3.2.3. The function F is analytic in ρ(T ) ∪ {∞}.

Proof. Let λ0 ∈ ρ(T ). Write

 −1  T − λ = (T − λ0) − (λ − λ0) = (T − λ0) 1 − (T − λ0) (λ − λ0) 36 3. SPECTRUM

−1 and notice that if |λ − λ0| is sufficiently small, then k(T − λ0) (λ − λ0)k < 1. By Exercise 3.1.6, we can write

∞ −1 −1 X −n n (T − λ) = (T − λ0) (T − λ0) (λ − λ0) . n=0

∞ P −n−1 n Therefore, the function F (λ) = h(T − λ0) x, yi(λ − λ0) is analytic in a neighborhood of λ0. As for n=0

λ0 = ∞, we consider the function

(3.1) G(λ) = F (1/λ) = h(T − 1/λ)−1x, yi at λ = 0. Since T − 1/λ = −(1 − λT )/λ, for λ 6= 0, Theorem 3.1.5 and Exercise 3.1.6 show that, for λ sufficiently ∞ small (but different from 0), the operator T − 1/λ is invertible and G(λ) = −λ P hT nx, yiλn is analytic at n=0 0. Furthermore, F (∞) = G(0) = 0. If the spectrum of T were empty, F would be an entire function that is bounded, hence by Liouville’s Theorem, a constant. Since F (∞) = 0 it would follow that F is a zero function for any x, y ∈ H, which is impossible. (Take x = (T − λI)y, y 6= 0.) Thus σ(T ) is non-empty. 

Now we can return to Example 3.2.2 and conclude that the spectrum of S∗ is the closed unit disk. What about σ(S)?

Exercise 3.2.1. A complex number λ belongs to σ(T ) iff $\bar\lambda$ belongs to σ(T ∗).

Exercise 3.2.2. Given a non-empty compact set F ⊂ C, show that there exists an operator T ∈ L(H) such that σ(T ) = F .

Example 3.2.3. The spectrum of the unilateral shift S is the closed unit disk. However, S has no eigenvalues.

Theorem 3.2.4 (Spectral mapping theorem). Let T ∈ L(H) and let p be a polynomial. Then σ(p(T )) = p(σ(T )).

Proof. Suppose that λ0 ∈ σ(T ), and write p(λ) − p(λ0) = (λ − λ0)q(λ). Then p(T ) − p(λ0) = (T − λ0)q(T ), and it is not hard to see that the operator A = p(T ) − p(λ0) cannot be invertible. Otherwise, T − λ0 would have both the left inverse $A^{-1}q(T)$ and the right inverse $q(T)A^{-1}$, and would therefore be invertible, a contradiction. Thus p(λ0) ∈ σ(p(T )), and we obtain that p(σ(T )) ⊂ σ(p(T )).

To prove the converse, let λ0 ∈ σ(p(T )), and let λ1, λ2, . . . , λn be the roots of p(λ) = λ0. Then p(T ) − λ0 =

α(T − λ1)(T − λ2) · · · (T − λn) for some non-zero complex number α. Since p(T ) − λ0 is not invertible there exists j, 1 ≤ j ≤ n, such that T − λj is not invertible. For this j, λj ∈ σ(T ) and p(λj) = λ0, so λ0 ∈ p(σ(T )).

Consequently, σ(p(T )) ⊂ p(σ(T )) and the theorem is proved. 
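For instance, combining the theorem with Example 3.2.3: if p(λ) = λ² and S is the unilateral shift, then
$$\sigma(S^2) = p(\sigma(S)) = \{\lambda^2 : |\lambda| \le 1\},$$
which is again the closed unit disk.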

Exercise 3.2.3. Let X and T be operators in L(H), and suppose that X is invertible. Then σ(X−1TX) =

σ(T ).

In many instances it is quite hard to determine the spectrum of an operator. However, it may be possible to determine its spectral radius, using the next result.

Theorem 3.2.5 (Spectral Radius Formula). Let T ∈ L(H). Then $r(T) = \lim_{n\to\infty} \|T^n\|^{1/n}$.

Proof. By the Spectral mapping theorem, $\sigma(T^n) = [\sigma(T)]^n$, so $[r(T)]^n = r(T^n) \le \|T^n\|$. Thus $r(T) \le \|T^n\|^{1/n}$ and $r(T) \le \liminf_{n\to\infty} \|T^n\|^{1/n}$. In order to prove the converse we consider the function G(λ) defined by (3.1) for λ ≠ 0 and 1/λ ∈ ρ(T ). For such λ, G is analytic by Proposition 3.2.3 and it can be represented by the convergent series $-\lambda \sum_{n=0}^{\infty} \lambda^n \langle T^n x, y\rangle$. Thus, the sequence $\lambda^n \langle T^n x, y\rangle$ must be bounded. That means that, for each y, the sequence of bounded linear functionals determined by $\{\lambda^n T^n x\}$ is bounded at y, i.e., there exists C(y) such that $|\langle \lambda^n T^n x, y\rangle| \le C(y)$. By the Uniform Boundedness Principle, the sequence $\{\lambda^n T^n x\}$ is bounded. This means that, for each x, there exists C(x) such that $\|\lambda^n T^n x\| \le C(x)$. Applying the Uniform Boundedness Principle once again, we obtain M > 0 such that $|\lambda|^n \|T^n\| \le M$, n ∈ N. It follows that $|\lambda|\,\|T^n\|^{1/n} \le M^{1/n}$ and $|\lambda| \limsup_{n\to\infty} \|T^n\|^{1/n} \le 1$. Since this is true for any λ such that 1/λ ∈ ρ(T ), it holds all the more whenever 1/|λ| > r(T ). It follows that $\limsup_{n\to\infty} \|T^n\|^{1/n} \le r(T)$, and the theorem is proved.
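As a quick sanity check on the formula (a finite dimensional example), let $T = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$. Then $\|T\| = 1$ but $T^n = 0$ for $n \ge 2$, so $\lim_n \|T^n\|^{1/n} = 0 = r(T)$, the only eigenvalue being 0. In particular the inequality $r(T) \le \|T\|$ of Proposition 3.2.1 can be strict.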

3.3. Parts of the spectrum

A combination of Theorems 3.1.2 and 3.1.4 established that an operator is invertible iff it is bounded below and surjective.

Definition 3.3.1. A complex number λ belongs to the approximate point spectrum σapp(T ) of a linear operator T if T − λ is not bounded below. It belongs to the compression spectrum σcomp(T ) of T if the closure of Ran (T − λ) is a proper subspace of H. Finally, it belongs to σp(T ) — the point spectrum of T , if it is an eigenvalue of T .

Remark 3.3.1. There is more than one classification of the parts of the spectrum. The residual spectrum is

σcomp(T ) − σp(T ), and the continuous spectrum is σ(T ) − (σcomp(T ) ∪ σp(T )). The left spectrum consists of those complex numbers λ such that T − λ is not left invertible, and similarly for the right spectrum.

Example 3.3.1. Let T = diag(cn). First we notice that T is invertible iff the sequence {cn} is invertible in ℓ∞. Indeed, if $c_n d_n = 1$ and $\{d_n\} \in \ell^\infty$, define $T^{-1} = \operatorname{diag}(d_n)$. Conversely, if T is invertible, then $T^{-1}e_n = e_n/c_n$, so $1/|c_n| = \|T^{-1}e_n\| \le \|T^{-1}\|$ shows that $\{1/c_n\} \in \ell^\infty$. Therefore, λ ∈ σ(T ) iff the sequence $\{c_n - \lambda\}$ is not invertible, which is true iff $\inf_n |c_n - \lambda| = 0$, i.e., iff there exists a subsequence $\{c_{n_k}\}$ such that $c_{n_k} - \lambda \to 0$. In other words, λ ∈ σ(T ) if and only if λ belongs to the closure of {cn}. Thus σ(T ) is the closure of the diagonal.

What are the parts of the spectrum of diag(cn)? Suppose that T − λ is bounded below, say $\|(T - \lambda)x\| \ge \alpha\|x\|$ with α > 0. Then $\sum_{n=1}^{\infty} |(c_n - \lambda)x_n|^2 \ge \alpha^2 \sum_{n=1}^{\infty} |x_n|^2$. By taking x = en we obtain that $|c_n - \lambda| \ge \alpha$ for all n ∈ N, which means that the sequence $\{c_n - \lambda\}$ is invertible and, hence, λ ∉ σ(T ). This shows that σ(T ) ⊂ σapp(T ) and therefore σ(T ) = σapp(T ).
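To make the parts explicit in a special case: for T = diag(1/n) we have $\sigma_p(T) = \{1/n : n \in \mathbb{N}\}$ and $\sigma(T) = \sigma_{app}(T) = \{0\} \cup \{1/n : n \in \mathbb{N}\}$. Indeed $\|Te_n\| = 1/n \to 0$ shows that $0 \in \sigma_{app}(T)$, while 0 is not an eigenvalue. Moreover $\sigma_{comp}(T) = \emptyset$, since the range of T contains every $e_n = T(n e_n)$ and is therefore dense. Thus 0 lies in the continuous spectrum of T, in the terminology of Remark 3.3.1.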

The previous example is a special case of a more general result.

Theorem 3.3.1. If T is a normal operator then σ(T ) = σapp(T ).

Proof. By Proposition 2.7.5, taking into account that T − λ is normal, for any x ∈ H, k(T − λ)xk =

∗ ∗ ∗ ∗ ∗ ∗ k(T −λ)xk so σp(T ) = σp(T ). Also, λ ∈ σp(T ) ⇔ Ker (T −λ) 6= (0) ⇔ Ran (T −λ) is not dense ⇔ Ran (T −λ) 3.3. PARTS OF THE SPECTRUM 39 is not dense ⇔ λ ∈ σcomp(T ). Conclusion: σcomp(T ) ⊂ σp(T ) ⊂ σapp(T ). Since σ(T ) = σapp(T ) ∪ σcomp(T ) the result follows. 

∗ Remark 3.3.2. The proof of Theorem 3.3.1 established that a complex number λ belongs to σp(T ) iff

λ ∈ σcomp(T ).

Exercise 3.3.1. If T ∈ L(H) then σ(T ) = σapp(T ) ∪ σcomp(T ).

Since the spectrum is the union of two parts, it is interesting that its boundary is always in the same one.

Theorem 3.3.2. The boundary of the spectrum is included in the approximate point spectrum.

Proof. Let λ ∈ ∂σ(T ). The spectrum of T is closed so λ ∈ σ(T ), which means that either λ ∈ σapp(T )

(in which case there is nothing to prove) or λ ∈ σcomp(T ). In the latter case there exists a non-zero vector f orthogonal to Ran (T − λ). Let {λn} ⊂ ρ(T ) be such that λn → λ. Since T − λn is invertible, we can define the unit vectors $f_n = (T - \lambda_n)^{-1}f / \|(T - \lambda_n)^{-1}f\|$. Now
$$\|(T - \lambda)f_n\|^2 \le \|(T - \lambda)f_n\|^2 + \|(T - \lambda_n)f_n\|^2 = \|(\lambda - \lambda_n)f_n\|^2 = |\lambda - \lambda_n|^2 \to 0,$$
where we have used the fact that $(T - \lambda_n)f_n$ is a multiple of f, hence orthogonal to $(T - \lambda)f_n \in \operatorname{Ran}(T-\lambda)$. Consequently,

λ ∈ σapp(T ). 

Example 3.3.2. We have seen in Example 3.2.3 that the spectrum of the unilateral shift S is D−. By

Exercise 3.2.1 the same is true of σ(S∗). Since S is an isometry, 0 cannot be an eigenvalue of S (Sx = 0 implies kxk = kSxk = 0). If λ ≠ 0 then S(x1, x2, . . . ) = λ(x1, x2, . . . ) leads to 0 = λx1 and xn = λxn+1, n ∈ N, and we see that x = 0. Therefore, σp(S) is empty and, by Exercise 3.3.2, so is σcomp(S∗).

The equation S∗x = λx leads to xn+1 = λxn, n ∈ N, and thus to x = x1(1, λ, λ², . . . ). Therefore, x is a non-zero vector in ℓ² iff x1 ≠ 0 and |λ| < 1. Consequently, σp(S∗) = σcomp(S) = D.

By Theorem 3.3.2, the approximate point spectra of S and S∗ include the unit circle T. For S that is all of it: if |λ| < 1 then kSx − λxk ≥ |kSxk − kλxk| = (1 − |λ|)kxk, so S − λ is bounded below; hence σapp(S) = T. On the other hand, the approximate point spectrum always includes the eigenvalues, so σapp(S∗) = D−.

Theorem 3.3.3. Suppose that M is a closed subspace of Hilbert space H, and that, relative to H = M ⊕ M⊥, $T = \begin{bmatrix} T_1 & 0 \\ 0 & T_2 \end{bmatrix}$. Then σ(T ) = σ(T1) ∪ σ(T2).

Proof. If T − λ is not invertible then T1 − λ and T2 − λ cannot both be invertible, so σ(T ) ⊂ σ(T1) ∪ σ(T2).

On the other hand, if either T1 or T2 is not bounded below, say kT1xnk → 0, then kT (xn ⊕ 0)k → 0, so

σapp(T1)∪σapp(T2) ⊂ σ(T ). The corresponding inclusion for the compression spectra can be obtained by switching to the adjoints and using Exercise 3.3.2. 

Problem 11. Suppose that H = M1 ⊕ M2 ⊕ · · · and that relative to this decomposition T = diag(Tn) is a diagonal operator with operator entries T1, T2, . . . . Is it true that $\sigma(T) = \big(\bigcup_n \sigma(T_n)\big)^{-}$?

3.4. Spectrum of a compact operator

In this section we take a more detailed look at compact operators and their spectra.

Theorem 3.4.1. Let T be a compact operator, let λ be a non-zero complex number, and suppose that T − λ is not bounded below. Then λ ∈ σp(T ).

Proof. Let {xn} be a sequence of unit vectors such that k(T − λ)xnk → 0, n → ∞. Since B1 is weakly

compact, {xn} has a weakly convergent subsequence {xnk }, so the compactness of T implies that {T xnk } is a

convergent sequence. Let x = limk T xnk . Notice that kxk ≥ kλxnk k − k(T − λ)xnk k → |λ| so x is a non-zero

vector. Moreover, k(T − λ)xk ≤ k(T − λ)(T xnk − x)k + k(T − λ)T xnk k → 0 so λ ∈ σp(T ). 

Theorem 3.4.1 established that the non-zero points in the approximate point spectrum are eigenvalues. Our goal is to prove a similar inclusion for the compression spectrum. We start with the following result.

Theorem 3.4.2. Let T be a compact operator and let λ be a non-zero complex number. Then Ran (T − λ) is closed.

Proof. First we show that, if Ran T is closed, it must be finite dimensional. Indeed, if we denote by T1 the

⊥ ⊥ restriction of T to its initial space (Ker T ) , then T1 is an injective linear transformation from (Ker T ) onto 3.4. SPECTRUM OF A COMPACT OPERATOR 41

−1 Ran T , hence invertible. Let B be the intersection of the closed ball of radius kT1 k and Ran T . Now, if y ∈ B

⊥ −1 −1 ⊥ then y = T1x, for some x ∈ (Ker T ) , so x = T1 y. Since kyk ≤ kT1 k it follows that x ∈ B1 ∩ (Ker T ) . We

⊥ conclude that B is contained in the compact set T (B1 ∩(Ker T ) ) so B must be compact, hence finite dimensional.

Next we observe that Ker (T − λ) must be finite dimensional. Reason: Ker (T − λ) is invariant for T and the restriction of T to Ker (T − λ) is a compact operator with range Ker (T − λ). (If x ∈ Ker (T − λ) write

1 x = λ [T x − (T − λ)x] = T (x/λ) ∈ T (Ker (T − λ)).)

Finally, we prove the theorem. Let S be the restriction of T − λ to Ker (T − λ)⊥. Notice that Ran S =

Ran (T − λ) so it suffices to show that Ran S is closed. By Theorem 3.1.2 we will accomplish this goal by establishing that S is bounded below. However, if S is not bounded below then Theorem 3.4.1 shows that

(T − λ)x = 0 for some nonzero vector x in Ker (T − λ)⊥. This is impossible, so Ran S is closed and the proof is complete. 

Before we can proceed we need this technical result.

Lemma 3.4.3. Let T be a compact operator and let {λn} be a sequence of complex numbers. Suppose that there exists a nested sequence of distinct subspaces M1 ( M2 ( M3 ( ... such that (T − λn)Mn+1 ⊂ Mn.

Then λn converges to 0.

Proof. Let {en} be an sequence of unit vectors such that e1 ∈ M1 and en+1 ∈ Mn+1 Mn. Clearly, this is an orthonormal system. Moreover, for n ≥ 2, h(T − λn)en, eni = 0 which implies that kT enk ≥ |hT en, eni| =

|h(T − λn)en, eni + hλnen, eni| = |λn|. Since T is compact and w − lim en = 0 it follows that limn T en = 0 so limn λn = 0. 

Theorem 3.4.1 shows that if λ ∈ σ(T ) then either λ = 0, or λ ∈ σp(T ), or T − λ is bounded below (hence injective) but not surjective. By Theorem 3.1.4, T − λ not being surjective is the same as (T − λ)∗ not being bounded below. Since T ∗ is also compact, another application of Theorem 3.4.1 allows us to conclude that

∗ λ ∈ σp(T ). The next result shows that there is even less variation in the spectrum of a compact operator. 42 3. SPECTRUM

Theorem 3.4.4. Let T be a compact operator and let λ be a non-zero complex number. Then λ ∈ σp(T ) iff $\bar\lambda \in \sigma_p(T^*)$.

Proof. Clearly, it suffices to prove either direction. Suppose that λ ∈ σp(T ). By Theorem 3.4.2, the range of T − λ is closed. We will show that it must be a proper subspace of H. Suppose to the contrary that T − λ is

n surjective, and denote Mn = Ker (T − λ) . Since λ is an eigenvalue of T we can inductively define a sequence

{xn} of nonzero vectors such that (T − λ)xn = xn−1, with x0 = 0. Clearly xn belongs to Mn but not to Mn−1, and (T − λ)Mn+1 ⊂ Mn, so Lemma 3.4.3 implies that the constant sequence λ, λ, λ, . . . converges to 0, which contradicts the assumption that λ 6= 0. Therefore, Ran (T − λ) (which coincides with Ker (T − λ)∗) is a proper

∗ subspace of H and λ ∈ σp(T ). 

To summarize, the spectrum of a compact operator consists of the point spectrum and, possibly, 0. On the infinite dimensional Hilbert space, 0 must be in the spectrum because if a compact operator T were invertible, then so would be the identity (a product of TT −1), contradicting the conclusions of Example 2.6.1. Thus we have a corollary.

Corollary 3.4.5. The spectrum of a compact operator consists of 0 and its eigenvalues.

It is reasonable to ask about the location of the eigenvalues.

Theorem 3.4.6. For any C > 0 there is a finite number of linearly independent eigenvectors of a compact operator corresponding to eigenvalues λ such that |λ| ≥ C.

Proof. Suppose to the contrary that there is an infinite sequence {xn} of linearly independent unit eigenvectors and a sequence of eigenvalues λn of T with |λn| ≥ C, so that T xn = λnxn. Let $M_n = \bigvee_{k=1}^{n} x_k$. If x ∈ Mn then $x = \sum_{k=1}^{n} c_k x_k$, so
$$(T - \lambda_n)x = \sum_{k=1}^{n} c_k (T - \lambda_n)x_k = \sum_{k=1}^{n} c_k(\lambda_k - \lambda_n)x_k \in M_{n-1}.$$
Applying Lemma 3.4.3 we obtain that λn → 0, which contradicts |λn| ≥ C.

Corollary 3.4.7. If λ is a non-zero eigenvalue of a compact operator T , then the nullspace of T − λ is a

finite dimensional subspace. 3.5. SPECTRUM OF A NORMAL OPERATOR 43

Corollary 3.4.8. The spectrum of a compact operator T is at most countable, and the only accumulation point of it can be zero.

Remark 3.4.1. If T = diag(cn) where c1 = 1 and cn = 0 for n ≥ 2, then T is compact, and σ(T ) = {0, 1} so it has no accumulation points.

Last remark raises a question: can a compact operator have a one-point spectrum? Since compact operators are never invertible, the single point is necessarily 0, so the question can be reformulated as: are there compact quasinilpotent operators? (An operator T is quasinilpotent if σ(T ) = {0}.) In finite dimensions, a quasinilpotent operator is nilpotent, i.e. there exists a positive integer N such that T N = 0. This need not be the case in infinite dimensional Hilbert space.

Example 3.4.1. Let W be a weighted shift (see Example 2.1.6) with weight sequence $\{1/n\}_{n\in\mathbb{N}}$; it is compact by Example 2.6.1. Since $We_n = \frac{1}{n}e_{n+1}$, it follows that
$$W^k e_n = \frac{1}{n(n+1)\cdots(n+k-1)}\, e_{n+k}.$$
This shows that $W^k$ is the product of $S^k$ and the diagonal operator $\operatorname{diag}\big(\tfrac{1}{n(n+1)\cdots(n+k-1)}\big)$. Since $S^k$ is an isometry, $\|W^k\| = \sup_n \tfrac{1}{n(n+1)\cdots(n+k-1)} = 1/k!$. Now $r(W) = \lim_k \|W^k\|^{1/k} = \lim_k (1/k!)^{1/k} = 0$. Therefore, W is a compact quasinilpotent operator.

3.5. Spectrum of a normal operator

On the first glance, normal operators appear to be too diverse to fit one description. Before we can correct this misconception, we will need to make a thorough study of this class, and some of its prominent subclasses.

Theorem 3.5.1. (a) If T is a unitary operator then σ(T ) is a subset of the unit circle. (b) If T is a self-adjoint operator then σ(T ) is a subset of the real axis. (c) If T is a positive operator then σ(T ) is a subset of the non-negative real axis. (d) If T is a non-trivial projection then σ(T ) = {0, 1}.

Proof. All operators listed are normal, so by Theorem 3.3.1, it suffices to prove assertions (a) – (d) with

σapp(T ) instead of σ(T ). To that end, we will prove that, if λ does not belong to the appropriate set, then T − λ is bounded below.

(a) If T is unitary and |λ| ≠ 1, then kT x − λxk ≥ |kT xk − kλxk| = |1 − |λ|| kxk, so T − λ is bounded below.

(b) Let λ = α + iβ. Then kT x − λxk2 = kT x − αxk2 − 2RehT x − αx, iβxi + kiβxk2. If α, β are real numbers and T = T ∗ we have that hT − αx, xi ∈ R by Proposition 2.7.1, and it follows that RehT x − αx, iβxi = 0.

Therefore, kT x − λxk2 ≥ |β|2kxk2, so β 6= 0 implies that T − λ is bounded below.

(c) If T ≥ 0 then T is self-adjoint, so σ(T ) ⊂ R. Notice that kT x − λxk² = kT xk² − 2RehT x, λxi + kλxk². If λ < 0 then RehT x, λxi = λhT x, xi ≤ 0 (by the definition of a positive operator), so kT x − λxk² ≥ |λ|²kxk² and T − λ is bounded below.

(d) If T is a non-trivial projection then neither T nor I − T (the projection on the orthogonal complement of the range of T ) can be invertible, so {0, 1} ⊂ σ(T ). If λ ∉ {0, 1}, a calculation shows that $\frac{1}{\lambda(1-\lambda)}T - \frac{1}{\lambda}$ is the inverse of T − λ.
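For instance, as a 2 × 2 check of the formula in (d), let $T = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and λ = 2. The formula gives
$$\frac{1}{2(1-2)}T - \frac{1}{2}I = \begin{bmatrix} -1 & 0 \\ 0 & -\tfrac12 \end{bmatrix}, \qquad (T - 2I)\begin{bmatrix} -1 & 0 \\ 0 & -\tfrac12 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 0 & -\tfrac12 \end{bmatrix} = I.$$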

Exercise 2.7.1 asserts that the operator of multiplication by an L∞ function is a normal operator. In addition, it showed that Mh belongs to one of the important subclasses iff its (essential) range belonged to a specific subset of the complex plane. On the other hand, Theorem 3.5.1 showed that for a general normal operator, a membership in each of the mentioned subclasses implies the analogous behavior of its spectrum. This is no coincidence. First we need a proposition.

2 Proposition 3.5.2. Let T = Mh on L . Then the following are equivalent:

(a) Ran T is dense;

(b) h(x) 6= 0 a.e.;

(c) T is injective;

(d) T ∗ is injective.

2 R Proof. Let A = {x : h(x) = 0}. Suppose that µ(A) 6= 0 and let f = χA. For any g ∈ L , hT g, fi = hgf = R A hg = 0 so f is a non-zero function that is orthogonal to Ran T . Thus (a) implies (b). Next, if T f = 0 then h(x)f(x) = 0 a.e., so assuming (b) we see that f = 0, and (c) follows. Notice that if T ∗f = 0 then h(x)f(x) = 0 so T f = 0 and (c) implies (d). Finally, the implication (d) ⇒ (a) is a direct consequence of Theorem 2.2.3.  3.5. SPECTRUM OF A NORMAL OPERATOR 45

Recall that the essential range of a function h ∈ L∞(X, µ) is the set of all complex numbers z such that the measure of $E_\varepsilon = \{x \in X : |h(x) - z| < \varepsilon\}$ is different from zero for every ε > 0.

2 Theorem 3.5.3. Let T = Mh on L . Then σ(T ) is the essential range of h.

Proof. Notice that Mh − λ is a multiplication by h − λ. Let us denote by A = {x : h(x) 6= λ}, B = {x : h(x) = λ}, and define a function g(x) = 1/(h(x) − λ) if x ∈ A and g(x) = 0 if x ∈ B.

Suppose first that λ ∈ ρ(T ). By Proposition 3.5.2, µ(B) = 0. Thus, g(x) = 1/(h(x) − λ) a.e. and MgMh−λ =

Mh−λMg = I. Since the assumption is that Mh−λ is invertible, the operator Mg is bounded, and by Example 2.1.4,

g ∈ L∞. The estimate |g(x)| ≤ M a.e. implies that |h(x) − λ| ≥ 1/M a.e., so $\mu(E_{1/M}) = 0$ and λ is not in the essential range of h.

Conversely, if λ is not in the essential range of h, then there exists ε0 > 0 such that $\mu(E_{\varepsilon_0}) = 0$. Consequently, |h(x) − λ| ≥ ε0 a.e., whence |g(x)| ≤ 1/ε0 a.e., and Mg is a bounded operator. This shows that Mh−λ is invertible and the proof is complete.
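For example, let X = [0, 1] with Lebesgue measure and h(x) = x. The essential range of h is [0, 1], so $\sigma(M_h) = [0, 1]$. Note that $M_h$ has no eigenvalues, since $(x - \lambda)f(x) = 0$ a.e. forces f = 0 a.e.; by Theorem 3.3.1 its spectrum is purely approximate point spectrum.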

Proposition 3.2.1 established that r(T ) ≤ kT k. For normal operators more can be said, and the following result paves the way to that goal.

Proposition 3.5.4. If T is a normal operator then kT nk = kT kn, n ∈ N.

Proof. First we notice that, in view of Proposition 2.7.5, for n ∈ N,

kT nxk2 = hT nx, T nxi = hT ∗T nx, T n−1xi ≤ kT ∗T nxkkT n−1xk = kT n+1xkkT n−1xk ≤ kT n+1kkT n−1kkxk2 so kT nk2 ≤ kT n+1kkT n−1k.

Now we prove the assertion of the proposition using induction. We will assume that kT k= 6 0, otherwise the theorem is trivially correct. It is easy to see that the statement is valid for n = 0 and n = 1. Suppose that it is true for n. Then

kT k2n = (kT kn)2 = kT nk2 ≤ kT n+1kkT n−1k ≤ kT n+1kkT kn−1 46 3. SPECTRUM and, dividing both sides by kT kn−1, it follows that kT kn+1 ≤ kT n+1k. Since the opposite inequality is obvious, the theorem is proved. 

Corollary 3.5.5. If T is a normal operator then r(T ) = kT k.

Proof. By Theorem 3.2.5 and Proposition 3.5.4, $r(T) = \lim_n \|T^n\|^{1/n} = \lim_n \big(\|T\|^n\big)^{1/n} = \|T\|$.

CHAPTER 4

Invariant subspaces

4.1. Compact operators

We have seen that the spectrum of a compact operator consists of the eigenvalues and 0 which may be but is not necessarily an eigenvalue. Furthermore, each of the eigenspaces E(λ) = Ker (T − λ), corresponding to λ 6= 0, is finite dimensional. The situation is especially pleasant when T is self-adjoint, in addition to being compact.

One of the benefits of this additional hypothesis concerns the eigenspaces.

Proposition 4.1.1. If T is a compact, self-adjoint operator on Hilbert space, and if λ, µ are two different eigenvalues of T , then the corresponding eigenspaces E(λ),E(µ) are mutually orthogonal.

Proof. If T x = λx and T y = µy, then λhx, yi = hT x, yi = hx, T yi = µhx, yi, since µ ∈ R. Given that λ 6= µ it follows that hx, yi = 0. 

⊥ Proposition 4.1.1 shows that H can be written as a direct sum M ⊕ M , where M = ⊕n∈NE(λn), the orthogonal direct sum of all eigenspaces. When T is self-adjoint, the subspace M⊥ is just a mirage.

Theorem 4.1.2. If T is a compact, self-adjoint operator on Hilbert space H, and σp(T ) = {λi}i∈I , then H = ⊕i∈I E(λi).

∗ ⊥ Proof. Let M = ⊕i∈I E(λi) and suppose that M= 6 H. Notice that M is invariant for T = T , so M is

⊥ also reducing for T . Let T1 be the restriction of T to M . Then σ(T1) ⊂ σ(T ) by Theorem 3.3.3. Since T1 is compact, if λ 6= 0 is in its spectrum it must be an eigenvalue. However, the corresponding eigenvectors would also be eigenvectors of T and, as such, would belong to M. It follows that T1 must be quasinilpotent. On the other hand T1 is normal which would necessitate that its norm and spectral radius are equal, so T1 = 0 which

means that M⊥ ⊂ E(0) ⊂ M. The obtained contradiction shows that H = ⊕i∈I E(λi).
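A concrete instance of Theorem 4.1.2: for T = diag(1/n), which is compact and self-adjoint, $E(1/n) = \operatorname{span}\{e_n\}$ for each n, and $H = \ell^2 = \bigoplus_n \operatorname{span}\{e_n\}$. Here E(0) = (0): the point 0 belongs to σ(T ) but contributes no eigenspace to the decomposition.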


Remark 4.1.1. Each eigenspace E(λ) is reducing for a self-adjoint operator so, relative to the decomposition

H = ⊕i∈I E(λi), T can be represented as diag(Ti), where Ti is an operator mapping E(λi) into itself, and σ(Ti) is a singleton {λi}. In addition, regardless of whether T is self-adjoint or not, each eigenspace is hyperinvariant for T . This means that it is invariant for any operator that commutes with T . Indeed, if A commutes with T , then T − λ annihilates Ax together with x.

When T is not self-adjoint, the situation is much more complicated. The eigenspaces need not be mutually orthogonal any more. The eigenvectors do not necessarily span H. In fact, there are compact operators without eigenvalues, (so they are necessarily quasinilpotent). Still, we can see some of the structure remaining. The eigenspaces are hyperinvariant (if there are any), although they need not be reducing. Since all operators on Cn are compact, it is instructive to look at finite matrices.

Example 4.1.1. Let $T = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ acting on $\mathbb{C}^2$. Then σ(T ) = {1} and E(1) = C ⊕ (0), which is not invariant for T ∗; nor is the span of the eigenvectors of T equal to $\mathbb{C}^2$.

Example 4.1.2. Let $T = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$ acting on $\mathbb{C}^2$. The eigenvalues of T are 2 and 3, with corresponding eigenvectors $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and they are not mutually orthogonal.

When T has eigenvalues, it must have a non-trivial invariant subspace. What about the case of a compact quasinilpotent operator?

Example 4.1.3. Let T be the Volterra-type integral operator with kernel K, i.e., $Tf(x) = \int_0^x K(x,y)f(y)\,dy$. It is compact (Example 2.6.2) and has no eigenvalues different from 0. Indeed, let λ ∈ σ(T ), λ ≠ 0, and let f ∈ L² be a corresponding eigenfunction. Define $g(x) = \int_0^x |f(y)|^2\,dy$. Clearly, g is a monotone differentiable function and $g'(x) = |f(x)|^2$ a.e. Let a = sup{x ∈ [0, 1] : g(x) = 0}. (Since g(0) = 0 such a number exists.) Now, for a.e. x,
$$|\lambda f(x)|^2 = |Tf(x)|^2 = \left| \int_0^x K(x,y)f(y)\,dy \right|^2 \le \int_0^x |K(x,y)|^2\,dy \int_0^x |f(y)|^2\,dy,$$
so $|\lambda|^2\, g'(x)/g(x) \le \int_0^x |K(x,y)|^2\,dy$ for a.e. x ∈ (a, 1). By integrating the last inequality over (a, 1) we obtain
$$|\lambda|^2\, \ln g(x)\Big|_a^1 \le \int_a^1 \int_0^x |K(x,y)|^2\,dy\,dx \le \|K\|_{L^2}^2,$$
which is a contradiction, since $\ln g(1) = \ln\|f\|^2$ and $\|K\|_{L^2}$ are finite, but ln g(a) is not (g(a) = 0).

This example shows that there are many compact quasinilpotent operators. For the Volterra-type integral operators we can exhibit some invariant subspaces.

Theorem 4.1.3. Let T be a Volterra-type integral operator with kernel K, let a ∈ [0, 1], and let $M_a = \{f \in L^2 : f(x) = 0 \text{ when } x \le a\}$. Then $M_a$ is a subspace of $L^2$ that is invariant for T .

Exercise 4.1.1. Prove Theorem 4.1.3.

A deep result in the theory of integral operators is that every compact quasinilpotent operator is unitarily equivalent to an operator of the form as in Example 4.1.3. Consequently every compact operator (quasinilpotent or not) has an invariant subspace. As we will demonstrate, there is a way to prove an even stronger theorem.

(See Theorem 4.3.2 below.)

4.2. Line integrals

In this section we make a brief detour, by considering line integrals of functions of a complex variable with values in L(H).

Example 4.2.1. Let T ∈ L(H) and consider the function ρ(λ) = (T − λ)−1 defined for λ ∈ ρ(T ). This function is known as the resolvent of T .

Let C be a curve in the complex plane. We will assume that it is parametrized by a continuous function γ : [0, 1] → C and that it is rectifiable, which means that γ is a function of bounded variation. Suppose that S is a function defined and continuous on C, with values in L(H). Let P be a partition of [0, 1]: 0 = t0 < t1 < t2 < · · · < tn = 1 and, for 1 ≤ k ≤ n, let $t_k^* \in [t_{k-1}, t_k]$. Then we have a partition of C with points $\gamma_i = \gamma(t_i)$ and intermediate points $\gamma_i^* = \gamma(t_i^*)$. Let us denote $\Delta\gamma_i = \gamma_i - \gamma_{i-1}$ and consider the sum
$$\sum_{k=1}^{n} S(\gamma_k^*)\, \Delta\gamma_k.$$
It can be shown that these sums converge (as the mesh of the partition tends to 0) to a unique operator, which we denote by $\int_C S(\gamma)\, d\gamma$. Moreover, if T is an operator that commutes with each S(γ), then T commutes with $\int_C S(\gamma)\, d\gamma$.

Example 4.2.2. Let T ∈ L(H), and let C be a curve in ρ(T ) defined by γ = γ(t). For every λ ∈ ρ(T ), the R function ρ(λ) is a continuous function (in the uniform topology), so we can consider C ρ(γ) dγ.

What happens when the curve C is replaced by a curve C0 that is not far from C?

Theorem 4.2.1. Let C0 be a rectifiable curve in the resolvent set of T , and let C1 be a curve homotopic to R R C0. Then ρ(γ) dγ = ρ(γ) dγ. C0 C1

Remark 4.2.1. All these facts can be established following the same procedures as in the case when the integrand is a complex-valued function. [See Conway.]

R Now we turn to operators. Example 4.2.2 showed that the operator C ρ(γ) dγ is well defined. It turns out that this operator has some interesting properties.

Theorem 4.2.2. Let C be a simple closed rectifiable curve in ρ(T ). Then the operator
$$(4.1)\qquad P = -\frac{1}{2\pi i} \int_C \rho(\lambda)\, d\lambda$$
is a projection (not necessarily orthogonal) that commutes with every operator that commutes with T . Consequently, the subspaces Ran P and Ker P are both invariant for T .

Proof. Let C0 be a simple closed rectifiable curve in ρ(T ) that lies inside C and is homotopic to C. Then

Z Z Z Z (2πi)2P 2 = ρ(γ) dγ ρ(λ) dλ = ρ(γ)ρ(λ) dγdλ. C C0 C C0 4.2. LINE INTEGRALS 51

A calculation shows that ρ(γ)ρ(λ) = [ρ(γ) − ρ(λ)](γ − λ)−1. Thus we have that

Z Z Z Z Z (2πi)2P 2 = ρ(γ) (γ − λ)−1 dλdγ − ρ(λ) (γ − λ)−1 dγdλ = −2πi ρ(γ) dγ − 0 = (2πi)2P. C0 C C C0 C0

So, P 2 = P , and it follows from the definition of the integral and ρ(λ), that if A commutes with T then A commutes with P .

Finally, if y ∈ Ran P , then T y = T P y = P T y so T y ∈ Ran P . Similarly, if x ∈ Ker P , then 0 = T P x = P T x so T x ∈ Ker P . 

Exercise 4.2.1. Verify that ρ(γ)ρ(λ) = [ρ(γ) − ρ(λ)](γ − λ)−1.

Theorem 4.2.2 required that the closed curve C lies in ρ(T ), but made no reference to the spectrum of

T . Consequently, we may have a part of the spectrum inside C and a part outside. In that case we obtain a decomposition of T .

Theorem 4.2.3. Let T be an operator in L(H), let C be a simple closed rectifiable curve in ρ(T ), let P be the projection defined in (4.1), and let T 0 and T 00 be the restrictions of T to Ran P and Ker P , respectively. Then

T = T 0 + T 00, the spectrum of T 0 is precisely the subset of σ(T ) inside C, and the spectrum of T 00 is precisely the subset of σ(T ) outside C.

Proof. Since ρ(λ) commutes with P , for any λ ∈ ρ(T ), the subspaces Ran P and Ker P are invariant for

ρ(λ). Let ρ0(λ) and ρ00(λ) denote the restrictions of ρ(λ) to these subspaces. If we denote by I0 and I00 the identity operators on these subspaces, then ρ0(λ)(λI0 −T 0) = I0 and ρ00(λ)(λI00 −T 00) = I00. Therefore, if λ ∈ ρ(T ) then λ must belong to both ρ(T 0) and ρ(T 00). In the other direction, if λ ∈ ρ(T 0) ∩ ρ(T 00) then there exist operators A0 and A00 such that A0(λI0 − T 0) = I0 and A00(λI00 − T 00) = I00. Now we can define, for any x ∈ H,

Ax = A′P x + A′′(I − P )x. It is not hard to see that the restrictions of A to Ran P and Ker P are precisely A′ and A′′, and that A(λI − T )x = x when x belongs to either Ran P or Ker P . It follows that A(λI − T )x = x holds for all x ∈ H, so λ ∈ ρ(T ). We conclude that λ ∈ σ(T ) iff λ ∈ σ(T ′) or λ ∈ σ(T ′′).

Suppose now that λ lies outside of C. We will show that λ ∈ ρ(T ′), which is true iff there exists an operator A′ acting on Ran P and satisfying A′(λI′ − T ′) = I′. Actually, we will show that there exists an operator A ∈ L(H) that commutes with T and A(λI − T ) = P . To that end, we notice that

(T − λI)ρ(γ) = (T − λI)(T − γI)−1 = (T − γI)(T − γI)−1 + (γ − λ)(T − γI)−1 = I + (γ − λ)(T − γI)−1.

Therefore,

1 Z 1 Z 1 Z (4.2) (T − λI) ρ(γ)(γ − λ)−1 dγ = (γ − λ)−1 dγ I + ρ(γ) dγ = 0 − P = −P. 2πi 2πi 2πi C C C

On the other hand, if λ lies inside of C, then the integral in (4.2) equals I − P , so the restriction to Ker P yields

00 00 00 I . Once again, this shows that λI − T is invertible. 

4.3. Invariant subspaces for compact operators

In Section 4.1 we have discovered that every compact operator on Hilbert space has an invariant subspace.

What more is there to say? For one thing, if λ is an eigenvalue of T , then E(λ) is hyperinvariant. Thus, it is natural to ask whether a compact quasinilpotent operator always has a hyperinvariant subspace.

Before we address this question, let us take a look at the set of all operators that commute with T . It is called the commutant of T , it is denoted by {T }0, and it is an algebra. The last statement means that {T }0 is closed under sums, products, and multiplication by scalars.

Exercise 4.3.1. Prove that {T }0 is an algebra.

Definition 4.3.1. A subalgebra of L(H) is transitive if it is weakly closed, unital (containing the identity operator), and has only the trivial invariant subspaces.

Example 4.3.1. The algebra L(H) is transitive. It is clearly weakly closed and unital. If L(H) had a non- trivial invariant subspace M, then we could pick non-zero vectors x ∈ M⊥ and y ∈ M, and consider the rank one operator T = x ⊗ y. This would lead to a contradiciton, since y ∈ M but T y = (x ⊗ y)y = hy, yix ∈ M⊥. 4.3. INVARIANT SUBSPACES FOR COMPACT OPERATORS 53

A big open problem in operator theory is whether L(H) is the only transitive algebra. This is true when H is finite dimensional.

Theorem 4.3.1 (Burnside’s Theorem). Let H be a finite dimensional vector space of dimension larger than

1. If A is a transitive algebra of linear transformations on H, then A = L(H).

Proof. We will show that A contains a rank one operator. Let T0 be an operator with minimal non-zero rank d. If d > 1, choose x1 and x2 so that vectors T0x1,T0x2 are linearly independent, and then choose A ∈ A so that AT0x1 = x2. (Such an operator A exists, otherwise {AT0x1 : A ∈ A} would be a subspace of H, invariant for

A.) Then T0AT0x1 (= T0x2) and T0x1 are linearly independent, and T0AT0 −λT0 is not a zero transformation for any λ ∈ C. On the other hand, there exists a complex number λ0 such that the restriction of T0A − λ0 to Ran T0 is not invertible. Therefore, T0AT0 − λ0T0 has rank less than d and greater than 0, contradicitng the minimality of d. Hence d = 1.

If T0 = x ⊗ y, we will show that A contains all rank one operators. Let u ⊗ v be a rank one operator. Once again, there must be an operator A1 ∈ A such that A1x = u. Notice that the algebra A∗ = {A∗ : A ∈ A} is also transitive. Therefore, there exists an operator A2 ∈ A such that $A_2^* y = v$. Then $A_1 T_0 A_2 = u \otimes v$, so A contains all rank one operators and, hence, all finite rank operators, i.e., A = L(H).
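The last equality is a quick computation, using the convention $(x \otimes y)z = \langle z, y\rangle x$ from Example 4.3.1: for any $z \in H$,
$$(A_1 T_0 A_2)z = A_1 (x \otimes y)(A_2 z) = \langle A_2 z, y\rangle A_1 x = \langle z, A_2^* y\rangle\, u = \langle z, v\rangle\, u = (u \otimes v)z.$$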

Exercise 4.3.2. Prove that if A is a subalgebra of L(H) and x ∈ H, then Ax = {Ax : A ∈ A} is a subspace of H, invariant for A.

Exercise 4.3.3. Prove that A is transitive iff A∗ is transitive.

Theorem 4.3.2 (Lomonosov's Theorem). Let A be a non-scalar operator on Hilbert space that commutes with a non-zero compact operator. Then A has a nontrivial hyperinvariant subspace.

The proof of this result uses a fixed point theorem.

Theorem 4.3.3. Let F be a compact and convex subset of Hilbert space H, and let T be a linear operator in

L(H) with the property that T (F ) ⊂ F . Then there exists p ∈ H such that T p = p. 54 4. INVARIANT SUBSPACES

2 n−1 Proof. For every n ∈ N, let Tn = (1 + T + T + ··· + T )/n. The set Tn(F ) is convex, (Exercise 4.3.4), and compact, as the image of a compact set under a continuous map. Also, Tn(F ) ⊂ F , because if x ∈ F then

k T x ∈ F , 0 ≤ k ≤ n − 1, and K is convex. Further, for any m, n ∈ N, Tm(F )Tn(F ) ⊂ Tm(F ) ∩ Tn(F ) which shows that the family {Tn(F )}n∈N has a finite intersection property. Since they are all subset of a compact set

F , they all have a non-empty intersection, i.e., there exists p ∈ ∩{Tn(F ): n ∈ N}. We will show that T p = p.

Suppose, to the contrary, that T p 6= p. Then there exists α > 0 such that kT p−pk ≥ α. Since F is a bounded set, there exists M > 0 such that kxk ≤ M, for x ∈ F . Let n be a positive integer satisfying n > 2M/α. Since p ∈ Tn(F ), there exists xn ∈ F such that p = Tnxn and, therefore,

1 + T + T 2 + ··· + T n−1 T n − 1 T p − p = (T − 1)T x = (T − 1) x = x . n n n n n n

n n Then α ≤ kT p − pk = k(T − 1)/nxnk ≤ (kT xnk + kxnk)/n ≤ 2M/n which contradicts the choice of n. 

Exercise 4.3.4. Prove that if C is a in Hilbert space H and T ∈ L(H), then T (C) is a convex set.

Now we can prove the result which is frequently referred to as the Lomonosov’s Lemma.

Theorem 4.3.4. If A is a transitive subalgebra of L(H) and if K is a non-zero compact operator in L(H), then there exists an operator A ∈ A and a non-zero vector x ∈ H such that AKx = x.

Proof. Without loss of generality we will assume that kKk = 1. As we have already noticed, it suffices to consider the case when K is quasinilpotent. Let x0 be a vector in H such that kKx0k > 1 and notice that this implies that kx0k > 1, so the closed ball B(x0, 1) does not contain 0. Let D be the image under K of the closed ball B(x0, 1). By Exercise 2.6.4, D is a compact set. In addition, it is convex, by Exercise 4.3.4 and it does not contain 0. Indeed, for any x ∈ B(x0, 1), kKxk ≥ kKx0k − kK(x − x0)k > 1 − kx − x0k ≥ 0.

−1 For an operator T ∈ A, consider the set UT = {y ∈ H : kT y − x0k < 1}. Notice that UT = T ({z : kz − x0k < 1} so it is an open set. Moreover, every non-zero vector y belongs to UT , for some T ∈ A. Indeed, A is transitive so the linear manifold {T y : T ∈ A} must be dense in H and, hence, there exists T ∈ A such that kT y − x0k < 1, which means that y ∈ UT . Thus, ∪T ∈AUT is a covering of H − {0}, and all the more of D. As 4.3. INVARIANT SUBSPACES FOR COMPACT OPERATORS 55

n established earlier, D is a compact set, so there exist operators T1,T2,...,Tn ∈ A such that D ⊂ ∪i=1UTi . This means that, for any y ∈ D there exists Ti, 1 ≤ i ≤ n, such that kTiy − x0, k < 1.

Now, for each j, 1 ≤ j ≤ n, and y ∈ D, we define αj(y) = max{0, 1 − kTjy − x0k}. Notice that each αj is

Pn continuous on D, 0 ≤ αj ≤ 1, and j=1 αj(y) > 0, for all y ∈ D. Furthermore, αj(y) 6= 0 iff kTjy − x0k < 1.

Define, for y ∈ D and 1 ≤ j ≤ n,

αj(y) βj(y) = n , P αi(y) i=1

Pn and notice that each βj is continuous on D, 0 ≤ βj ≤ 1, and j=1 βj(y) = 1, for all y ∈ D. Also, βj(y) 6= 0 iff Pn αj(y) 6= 0 iff kTjy − x0k < 1. Finally, let Ψ : D → H be defined by Ψ(y) = j=1 βj(y)Tjy. It is easy to see that

Ψ is continuous on D. We will show that Ψ(D) ⊂ B(x0, 1). Let y ∈ D. Then

n n n X X X kΨ(y) − x0k = k βj(y)Tjy − βj(y)x0k ≤ |βj(y)|kTjy − x0k ≤ 1 j=1 j=1 j=1

so Ψ(y) ∈ B(x0, 1) and Ψ(D) ⊂ B(x0, 1). If we define Φ : B(x0, 1) → H by Φ(y) = Ψ(Ky), then Φ is a continuous map of B(x0, 1) into itself. Since B(x0, 1) is a compact, convex set, Theorem 4.3.3 shows that Φ has a fixed n P point p ∈ B(x0, 1), hence non-zero. Now we define the operator A = βj(Kp)Tj which is in A. Finally, j=1 n P AKp = βj(Kp)TjKp = Ψ(Kp) = Φ(p) = p.  j=1

Now we can prove Lomonosov’s theorem.

Proof of Lomonosov's Theorem. Let A = {A}′ and suppose, to the contrary, that A is transitive. By Theorem 4.3.4, there exist an operator T ∈ {A}′ and a non-zero vector x such that T Kx = x. In other words, the compact operator T K has 1 as an eigenvalue. Let E(1) denote the corresponding eigenspace, which is finite dimensional by Corollary 3.4.7. Since A commutes with T K, the subspace E(1) is invariant for A. The restriction of A to E(1) must have an eigenvalue λ and, since E(1) is invariant for A, we see that λ is an eigenvalue of A (not just of the restriction). Let M denote the eigenspace of A corresponding to λ, i.e., M = {x ∈ H : Ax = λx}. Being an eigenspace, it is hyperinvariant for A. It is not (0), so it remains to notice that it is not H because A ≠ λ.

4.4. Normal operators

We have seen in Exercise 2.7.1 that a multiplication operator Mh on L² is a normal operator. In this section we will show that, in a sense, every normal operator is a multiplication by an essentially bounded function.

a 0 ∗ ∗ Example 4.4.1. Let T = [ 0 b ], with a, b ∈ C. Then TT = T T . Let X = {1, 2} and let µ be a counting

2 R 2 1/2 measure on X. Notice that L (X, µ) is the collection of all functions f : X → C with norm X |f| dµ =

2 21/2 2 2 |f(1)| + |f(2)| . Since this is the Euclidean norm, we see that L (X, µ) is just L(C ). Finally, let h be a

2 function on X, h(1) = a, h(2) = b. Then T can be identified with Mh on L(C ).

Remark 4.4.1. A similar construction can be made for the case when T is an n × n diagonal matrix,

T = diag(cn).

n Example 4.4.2. Let T = diag(cn), with cn ∈ C for all n ∈ N. Let X = N and µ({n}) = 1/2 . Then (X, µ) is a finite measure space. Further, let h : X → C be defined by h(n) = cn. Then T can be identified with the

2 operator Mh on L (X, µ).

The last example shows the danger of going through the motions. What does it mean “can be identified”?

While it is easy to see that T f = Mhf for any sequence f, their domains are not the same. Namely, T acts on

2 2 ` but Mh acts on L (X, µ), and these 2 spaces are not the same. For example, the sequence (1, 1, 1,... ) belongs to L2(X, µ) but not to `2. However, these two spaces are isomorphic. Let U : L2(X, µ) → `2 be defined by √ √ √ U(f) = (f(1)/ 2, f(2)/ 22, f(3)/ 23,... ). It is easy to verify that U is injective and surjective so, by the Open Mapping Principle, it is an isomorphism. Moreover, if f ∈ L2(X, µ), then

f(1) f(2) f(3) c f(1) c f(2) c f(3) U −1TU(f) = U −1T ( √ , √ , √ ,... ) = U −1( 1√ , 2√ , 3√ ,... ) 2 22 23 2 22 23 h(1)f(1) h(2)f(2) h(3)f(3) = U −1( √ , √ , √ ,... ) = hf, 2 22 23

so T is unitarily equivalent to Mh. 4.4. NORMAL OPERATORS 57

Exercise 4.4.1. Prove that the map U : L2(X, µ) → `2, constructed in Example 4.4.2, is an isometric isomorphism.

Notice that in Examples 4.4.1 and 4.4.2 the measure was defined on each of the pieces. What happens if pieces are not that obvious? How do we define a piece?

Definition 4.4.1. A vector ξ is cyclic for an operator T if the set {p(T )ξ : p is a polynomial} is dense in H.

An operator T is cyclic if it has a cyclic vector.

2 Example 4.4.3. Let T = S, the unilateral shift. The vector ξ = e1 is cyclic for S. If x ∈ ` , x = (x1, x2,... )

Pn Pn k then x can be approximated by truncated sequences (x1, x2, . . . , xn, 0, 0,... ) = k=1 xkek = k=1 T e1.

Example 4.4.4. Let {. . . , e−2, e−1, e0, e1, e2,... } be an o.n.b. of H, and let T be the bilateral shift: T en =

− ∞ ∗ en+1, n ∈ Z. Then ξ = e0 is not a cyclic vector for T , because {p(T )e0} = ∨k=0ek. However, T en = en−1,

∗ Pn i ∗j n ∈ Z, so we need to replace polynomials in T by polynomials in T and T , i.e., f(T ) = i,j=1 T T . If the set

∗ {f(T )ξ : f is a polynomial in T,T } is dense in H, we say that e0 is a star-cyclic vector for T .

Before we proceed, we revisit the Stone–Weierstrass Theorem [Bartle, p. 184]. Although it is proved under the assumption that K is a compact subset of Rp, the same proof is valid when K is a compact set in C. Also, we will rephrase it using the following terinology. We will say that an algebra A of functions separates points on

K if, for any two distinct points x, y ∈ K there is a function f ∈ A such that f(x) 6= f(y). If for each x ∈ K there is a function g ∈ A such that g(x) 6= 0, we say that A vanishes at no point of K.

Theorem 4.4.1 (Stone–Weierstrass Theorem). Let A be an algebra of continuous, real-valued functions on a compact set K in C. If A separates points on K and if A vanishes at no point of K, then the uniform closure of B of A consists of all real-valued continuous functions on K.

The Stone–Weierstrass Theorem deals only with real-valued functions of complex variable. Now we extend it to complex-valued functions. We will require that A be self-adjoint, meaning that if f ∈ A the f ∈ A. 58 4. INVARIANT SUBSPACES

Theorem 4.4.2. Let A be a self-adjoint algebra of continuous, complex functions on a compact set K in C.

If A separates points on K and if A vanishes at no point of K, then the uniform closure of B of A consists of all complex continuous functions on K.

Proof. Let f = u+iv be a continuous function on K, and let AR denote the set of all real-valued functions in

A. Since u, v are continuous real-valued continuous function on K, it suffice to show that every such function lies in the closure of AR. Since AR is clearly an algebra, the result will follow from the Stone–Weierstrass Theorem, once we show that AR separates points on K and vanishes at no point of K.

Suppose that z1, z2 are distinct points in K. By assumption, A separates points on K so it contains a function f such that f(z1) 6= f(z2). Also, A vanishes at no point of K, so it contains two functions g, h such that g(z1) 6= 0, h(z2) 6= 0. Then, the function f(z)g(z) − f(z )g(z) F (z) = 2 f(z1)g(z1) − f(z2)g(z1) belongs to A and has the property that F (z1) = 1, F (z2) = 0. Notice that, if F = u + iv ∈ A, then F ∈ A and u = (F + F )/2 ∈ AR. Clearly, u(z1) = 1, u(z2) = 0 so AR separates points on K.

Let z0 ∈ K. Then there exists a function G ∈ A such that G(z0) 6= 0. Let λ be a complex number such that

λG(z0) > 0 and notice that H = Re(λG) is a function in AR such that H(z0) > 0. Thus, AR vanishes at no point of K and the proof is complete. 

Now we are ready to establish a stronger connection between normal operators and operators of multiplication.

Theorem 4.4.3. Let T be a normal operator in L(H) with a star-cyclic vector ξ. Then there exist a finite measure µ on σ(T ), a bounded function h : σ(T ) → C, and an isomorphism U : L²(σ(T ), µ) → H such that $U^{-1}TUf(x) = h(x)f(x)$ for a.e. x ∈ σ(T ) and all f ∈ L²(σ(T ), µ).

Proof. Let A be the algebra of complex-valued polynomials in z, z. For f ∈ A we define L(f) = hf(T )ξ, ξi.

Clearly, L is a linear functional and it is bounded on A. Indeed, |L(f)| = |hf(T )ξ, ξi| ≤ kf(T )ξkkξk ≤ kf(T )kkξk2.

Further, T is normal, so f(T ) is also normal and, by Corollary 3.5.5, kf(T )k = r(f(T )) = sup{|λ| : λ ∈ σ(f(T ))}. 4.4. NORMAL OPERATORS 59

Finally, by the Spectral Mapping Theorem, λ ∈ σ(f(T )) iff λ = f(µ), for some µ ∈ σ(T ). Thus, kf(T )k =

2 sup{|f(µ)| : µ ∈ σ(T )} = kfk∞. We conclude that |L(f)| ≤ kfk∞kξk , so L is bounded on A. By Theorem 4.4.2,

A is dense in C(σ(T )) so we can extend L to a bounded linear functional on C(σ(T )). If f is a non-negative function √ in C(σ(T )), then so is f and it can be approximated by a sequence fn ∈ A. It follows that f can be approximated

2 by the sequence fnfn and, by the continuity of L, L(f)f = lim L(fnfn) = hfn(T )fn(T )ξ, ξi = kfn(T )ξk ≥ 0.

Thus, L is positive, and by Riesz Representation Theorem [Royden, p. 352] there exists a finite positive measure

µ on σ(T ) such that hf(T )ξ, ξi = R f dµ. Now define the operator U on A by U(f) = f(T )ξ. Since |f|2 = ff we have that R |f|2 dµ = hf(T )f(T )ξ, ξi = kf(T )ξk2 = kU(f)k2. That way, U is an isometry on A. Further,

A is dense in L2(µ) because it is dense in C(σ(T )), and the latter set is dense in L2 ([Rudin, Theorem 3.14]).

Therefore, by Theorem 2.3.4, U can be extended to an isometry U : L2(σ(T ), µ) → H. Since ξ is star-cyclic, the set {f(T )ξ : f ∈ A} is dense in H so the range of U is dense. Since U is bounded below its range is closed so U is surjective.

Finally, if we denote by f˜(z) the function zf(z), then U −1TU(f) = U −1T f(T )ξ = U −1f˜(T )ξ = f˜ so T can

2 be identified with Mz on L (σ(T ), µ). 

What if T does not have a star-cyclic vector?

Theorem 4.4.4. Let T be a normal operator in L(H). Then there exist a compact set X, a finite measure µ on X, a bounded function h : X → C, and an isomorphism U : L²(X, µ) → H such that $U^{-1}TUf(x) = h(x)f(x)$ for a.e. x ∈ X and all f ∈ L²(X, µ).

Proof. Let x1 be a non-zero vector and let M1 be the closed linear span of {f(T )x1 : f ∈ A}. If M1 = H

⊥ then x1 is a star-cyclic vector for T and Theorem 4.4.3 applies. If M1 6= H there exists a non-zero vector x2 ∈ M1 .

∗ ⊥ Notice that M1 is invariant (hence reducing) for T and T , so the same is true of M1 . Now, either the closed

⊥ linear span of {f(T )x2 : f ∈ A} equals M1 , in which case T = T1 ⊕ T2 and both T1 and T2 are star-cyclic, or we continue the process. Applying the Hausdorff Maximal Principle, we obtain a decomposition of H relative to which T = diag(Ti) and each of the operators on the diagonal is star-cyclic. By Theorem 4.4.3, for each i there 60 4. INVARIANT SUBSPACES

2 2 exists a finite measure space (Xi, µi), a function hi ∈ L (Xi, µi), and unitary operator Ui : L (Xi, µi) → Mi,

−1 such that Ui TiUi = Mhi . Next we define X to be the union of Xi and µ a measure on X so that µ = µi on

Xi. Finally, we define a function h so that h = hi on Xi and a unitary operator U = diag(Ui). Then T can be

2 −1 identified with Mh on L (X, µ), i.e., U TU = Mh. 

We will now introduce a very important concept.

Definition 4.4.2. If X is a set, Ω a σ-algebra of subsets of X, and H is Hilbert space, a spectral measure

for (X, Ω, H) is a function E :Ω → L(H) such that

(a) for each ∆ in Ω, E(∆) is a projection;

(b) E(∅) = 0 and E(X) = 1;

(c) E(∆1 ∩ ∆2) = E(∆1)E(∆2); P (d) if {∆i}i∈I are pairwise disjoint sets in Ω, then E(∪i∈I ∆i) = i∈I E(∆i).

Example 4.4.5. Let X = N, let Ω be the set of all subsets of N, and let {en}n∈N be an o.n.b. of H. For ∆ ⊂ N,

define E(∆) to be the projection onto the span ∨n∈∆en. Properties (a) and (b) of Definition 4.4.2 are obvious.

Since E(∆)ei is either ei or 0, depending on whether i belongs to ∆ or not, we see that E(∆1)E(∆2)ei = 0 unless

i ∈ ∆ ∩ ∆ , in which case it equals e . Thus, for x = P x e , E(∆ )E(∆ )x = P x e = E(∆ ∩ ∆ )x, 1 2 i i i 1 2 i∈∆1∩∆2 i i 1 2 P and (c) holds as well. Finally, if {∆i}i∈I are pairwise disjoint sets in Ω, and ∆ = ∪i∈I ∆i, writing x = xnen, n∈N we have that E(∆)x = P x e + P x e + ··· = E(∆ )x + E(∆ )x + ... . i∈∆1 i i i∈∆2 i i 1 2

Example 4.4.6. If X is a set, Ω a σ-algebra of subsets of X, and µ a measure on Ω, let H = L2(X, µ), and

2 define, for ∆ ∈ Ω and f ∈ L , E(∆)f = χ∆f. Then, E is a spectral measure.

Exercise 4.4.2. Verify that E in Example 4.4.6 is a spectral measure.

−1 We will now show that the equality U TU = Mh, established in Theorem 4.4.3, can be extended in the

−1 following manner. Suppose that F is a bounded function on σ(T ). Then we can define F (T ) = UMF ◦hU since,

−1 2 −1 2 −1 for x ∈ H, U x ∈ L and MF ◦hU x is also in L , so UMF ◦hU x is well defined. 4.4. NORMAL OPERATORS 61

Theorem 4.4.5. Let T be a bounded linear operator on Hilbert space H. The mapping F 7→ F (T ) is an algebra homomorphism from L∞(σ(T ), µ) to L(H).

Exercise 4.4.3. Prove Theorem 4.4.5.

Remark 4.4.2. The homomorphism F 7→ F (T ) is called a functional calculus for T .

Example 4.4.6 shows that a spectral measure can be defined using multiplication by characteristic functions.

We present a variation on this theme.

Theorem 4.4.6. If T is a normal operator on Hilbert space, ∆ is a measurable subset of σ(T ), and F = χ∆,

−1 then the mapping E defined by E(∆) = F (T ) = UMF ◦hU is a spectral measure.

Exercise 4.4.4. Prove Theorem 4.4.6.

Exercise 4.4.5. What is E when T = diag(cn)?

Let x, y ∈ H and denote by f = U −1x and g = U −1y. Since U is a surjective isometry, U −1 = U ∗ so f = U ∗x

∗ −1 and g = U y. If F = χ∆ then, by definition, hE(∆)x, yi = hF (T )x, yi = hUMF ◦hU x, yi = hMF ◦hf, gi =

R F ◦ h fg dµ. On the other hand, E is the spectral measure of T , so hE(∆)x, yi also defines a measure ν(∆). It is often called the scalar spectral measure of T .

Exercise 4.4.6. Verify that ν is a measure.

R R Now, hE(∆)x, yi is equal to χ∆ dν as well as to F ◦ hfg dµ, so we have the equality

Z Z (4.3) F ◦ hfg dµ = F dν whenever F is a characteristic function. Since every simple function is a linear combination of characteristic functions, it is not hard to see that (4.3) remains true when F is a simple function. Further, every bounded function can be approximated by simple functions so, by relying on Lebesgue Dominated Convergence Theorem, 62 4. INVARIANT SUBSPACES we obtain that (4.3) holds for any bounded function F . In particular, if F (λ) = λ, we obtain that hT x, yi =

R λ dν = R λ dhE(λ)x, yi. Since this is true for all x, y ∈ H, we can write T = R λ dE(λ) or

Z (4.4) T = λ dE.

More generally, since (4.3) holds for any bounded function F , it follows that, for any such function,

Z (4.5) F (T ) = F (λ) dE.

Theorem 4.4.6 established that to every normal operator there corresponds a spectral measure. The following result shows how essential this measure is for the operator.

Theorem 4.4.7. If T is a normal operator and E the associated spectral measure, then an operator A com- mutes with T iff A commutes with E(∆) for every Borel set ∆ ⊂ σ(T ).

Proof. Let x, y ∈ H, and let F be a bounded function on σ(T ). Then

Z hAF (T )x, yi = hF (T )x, A∗yi = F (λ) dhE(λ)x, A∗yi, and

Z hF (T )Ax, yi = F (λ) dhE(λ)Ax, yi.

If A and T commute, Fuglede–Putnam Theorem implies that A commutes with T ∗, hence with F (T ), for any

∗ bounded function F . In particular, by taking F = χ∆, we obtain that hE(∆)x, A yi = hE(∆)Ax, yi or, equiv- alently that hAE(∆)x, yi = hE(∆)Ax, yi. Since this holds for all x, y ∈ H it follows that A commutes with

E(∆).

Conversely, if A commutes with E(∆), then hE(∆)x, A∗yi = hAE(∆)x, yi = hE(∆)Ax, yi. Since hAT x, yi =

R λ dhE(λ)x, A∗yi and hT Ax, yi = R λ dhE(λ)Ax, yi, we obtain that hAT x, yi = hT Ax, yi for all x, y ∈ H. Thus

AT = TA and the proof is complete. 

Theorem 4.4.7 has an important consequence that concerns the existence of hyperinvariant subspaces. 4.4. NORMAL OPERATORS 63

Corollary 4.4.8. If T is a normal operator in L(H), and E is its spectral measure, then Ran E(∆) is a hyperinvariant subspace for T , for any Borel set ∆ ⊂ σ(T ). Consequently, if T is not a scalar multiple of the identity, then T has a non-trivial hyperinvariant subspace.

Exercise 4.4.7. Prove Corollary 4.4.8.

CHAPTER 5

Spectral radius algebras

5.1. Compact operators

In Section 4.3 we have shown that every compact operator is contained in an algebra, namely its commutant, that is not transitive. Are there other algebras that would contain a given operator and still have an invariant subspace? We will show that the answer is affirmative. Let us denote the class of quasinilpotent operators as Q.

The following is a direct consequence of Theorem 4.3.4.

Proposition 5.1.1. Let A be a unital subalgebra of L(H) and let K be a compact operator in L(H). If

AK ∈ Q for each A ∈ A, then A has a n. i. s.

Proof. If A is transitive, by Theorem 4.3.4 there exists A ∈ A such that 1 ∈ σ_p(AK), so AK ∉ Q. □

Our goal is to find an algebra A with the property stated in Proposition 5.1.1. Let A ∈ L(H). For m ∈ N, define

(5.1)  d_m = m/(1 + m r(A)),   and   R_m = ( Σ_{n=0}^∞ d_m^{2n} A^{*n} A^n )^{1/2}.
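Since d_m r(A) < 1, the series in (5.1) converges in norm, so R_m can be approximated by truncating it. The following is a minimal numerical sketch of our own, assuming numpy; the helper name R_m and the truncation tolerance are ours. It builds R_m for a small matrix and observes the bound ||R_m^{-1}|| ≤ 1.

import numpy as np

def R_m(A, m, tol=1e-14, max_terms=2000):
    """Truncate the series in (5.1) and return its positive square root."""
    r = max(abs(np.linalg.eigvals(A)))          # spectral radius r(A)
    d = m / (1.0 + m * r)
    k = A.shape[0]
    S = np.zeros((k, k), dtype=complex)
    P = np.eye(k, dtype=complex)                # P holds (d_m A)^n, so P* P = d_m^{2n} A^{*n} A^n
    for n in range(max_terms):
        term = P.conj().T @ P
        S += term
        if np.linalg.norm(term) < tol:
            break
        P = d * (A @ P)
    w, V = np.linalg.eigh(S)                    # S is self-adjoint and S >= I
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = 2.0 * A / max(abs(np.linalg.eigvals(A)))    # normalize so that r(A) = 2
for m in (1, 5, 25):
    print(m, np.linalg.norm(np.linalg.inv(R_m(A, m)), 2))   # always <= 1, since R_m^2 >= I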

Exercise 5.1.1. Prove that the series in (5.1) converges uniformly and, for each m ∈ N, R_m is invertible with ||R_m^{-1}|| ≤ 1.

If A is an operator in L(H) and R_m is as in (5.1), we associate with A the collection

B_A = { T ∈ L(H) : sup_m ||R_m T R_m^{-1}|| < ∞ }.

Exercise 5.1.2. Show that B_A is an algebra.

We will show that B_A contains all operators that commute with A. In fact, we can prove a stronger result.


Proposition 5.1.2. Suppose A is a nonzero operator, B is a power bounded operator commuting with A, and T is an operator for which AT = BTA. Then T ∈ B_A.

An operator T is power bounded if there exists C > 0 such that ||T^n|| ≤ C for all n ∈ N. For example, if ||T|| ≤ 1, then T is power bounded.

Proof. It is easy to verify that A^2 T = B^2 T A^2. Using induction one can prove that A^n T = B^n T A^n for every n ∈ N. The operator B is power bounded, so there is a constant C such that ||B^n|| ≤ C for each n ∈ N.

For any vector x ∈ H and any positive integer m, we have that

(5.2)  ||R_m x||^2 = ⟨R_m x, R_m x⟩ = ⟨R_m^2 x, x⟩ = Σ_{n=0}^∞ d_m^{2n} ⟨A^{*n}A^n x, x⟩ = Σ_{n=0}^∞ d_m^{2n} ⟨A^n x, A^n x⟩ = Σ_{n=0}^∞ d_m^{2n} ||A^n x||^2.

On the other hand, ||A^n T R_m^{-1} x|| = ||B^n T A^n R_m^{-1} x|| ≤ C ||T|| ||A^n R_m^{-1} x||, so we obtain that

||R_m T R_m^{-1} x||^2 = Σ_{n=0}^∞ d_m^{2n} ||A^n T R_m^{-1} x||^2
                      ≤ C^2 ||T||^2 Σ_{n=0}^∞ d_m^{2n} ||A^n R_m^{-1} x||^2
                      = C^2 ||T||^2 ||R_m R_m^{-1} x||^2
                      = C^2 ||T||^2 ||x||^2.

Thus T ∈ B_A. □
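The last chain of inequalities says that ||R_m T R_m^{-1}|| ≤ C ||T|| uniformly in m. A small numerical experiment, our own sketch assuming numpy, illustrates the uniform bound in the simplest case B = I (so C = 1 and T simply commutes with A); the helper R_m is the same truncation used in the sketch after (5.1).

import numpy as np

def R_m(A, m, tol=1e-16, max_terms=5000):
    """Positive square root of a truncation of the series in (5.1)."""
    r = max(abs(np.linalg.eigvals(A)))
    d = m / (1.0 + m * r)
    k = A.shape[0]
    S = np.zeros((k, k), dtype=complex)
    P = np.eye(k, dtype=complex)                # P holds (d_m A)^n
    for n in range(max_terms):
        term = P.conj().T @ P
        S += term
        if np.linalg.norm(term) < tol:
            break
        P = d * (A @ P)
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = 0.5 * A / max(abs(np.linalg.eigvals(A)))    # normalize so that r(A) = 1/2
T = np.eye(4) + 2 * A + A @ A @ A               # a polynomial in A, so AT = TA

for m in (1, 10, 100):
    R = R_m(A, m)
    q = np.linalg.norm(R @ T @ np.linalg.inv(R), 2)
    print(m, q, "<=", np.linalg.norm(T, 2))     # stays below ||T||, as in the proof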

From this we deduce an easy consequence.

Corollary 5.1.3. Let T be an operator such that AT = λTA for some complex number λ with |λ| ≤ 1. Then T ∈ B_A. In particular, B_A contains the commutant of A.

Example 5.1.1. If u and v are unit vectors then B_{u⊗v} = {T ∈ L(H) : v is an eigenvector for T^*}. Let A = u ⊗ v be a rank one operator, with u and v unit vectors. One knows that r(u ⊗ v) = |⟨u, v⟩|. A calculation shows that, for n ∈ N, A^n = ⟨u, v⟩^{n-1} u ⊗ v and A^{*n}A^n = r^{2n-2} v ⊗ v. Therefore,

R_m^2 = I + ( Σ_{n=1}^∞ d_m^{2n} r^{2n-2} ) v ⊗ v = I + ( d_m^2/(1 − d_m^2 r^2) ) v ⊗ v.

Let λ_m = ( 1 + d_m^2/(1 − d_m^2 r^2) )^{1/2} for every m ∈ N. Notice that λ_m → ∞ as m → ∞. Indeed, either d_m → 1/r or, if A is quasinilpotent, λ_m = (1 + m^2)^{1/2}. If we denote by M the one-dimensional space spanned by v then, relative to H = M ⊕ M^⊥, the matrix of R_m is R_m = [ λ_m 0 ; 0 1 ] and R_m^{-1} = [ 1/λ_m 0 ; 0 1 ]. If T is an arbitrary operator, say T = [ X Y ; Z W ], then

R_m T R_m^{-1} = [ λ_m 0 ; 0 1 ] [ X Y ; Z W ] [ 1/λ_m 0 ; 0 1 ] = [ X  λ_m Y ; Z/λ_m  W ],

and it is easy to see that sup_m ||R_m T R_m^{-1}|| < ∞ if and only if Y = 0. This means that M^⊥ is invariant for T or, equivalently, that M is invariant for T^*, and this is true iff v is an eigenvector for T^*.
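The description of B_{u⊗v} can be watched numerically: if v is an eigenvector of T^* the norms ||R_m T R_m^{-1}|| stay bounded, while for a generic T they grow like λ_m. The following small sketch is our own, assuming numpy; the helper name R_and_inverse is ours, and it uses the closed form of R_m obtained above.

import numpy as np

rng = np.random.default_rng(4)
n = 3
u = rng.standard_normal(n) + 1j * rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n); v /= np.linalg.norm(v)
r = abs(np.vdot(v, u))                     # r(u⊗v) = |<u, v>|
Pv = np.outer(v, v.conj())                 # orthogonal projection onto M = span{v}

def R_and_inverse(m):
    """Closed-form R_m and R_m^{-1} for A = u⊗v, as computed in Example 5.1.1."""
    d = m / (1 + m * r)
    lam = np.sqrt(1 + d**2 / (1 - d**2 * r**2))
    return np.eye(n) + (lam - 1) * Pv, np.eye(n) + (1 / lam - 1) * Pv

B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T_good = B - Pv @ B + 0.5 * Pv             # v* T_good = 0.5 v*, so v is an eigenvector of T_good*
T_bad = B                                  # generically v is not an eigenvector of B*

for T, label in ((T_good, "v eigenvector of T*"), (T_bad, "generic T")):
    norms = [np.linalg.norm(R @ T @ Rinv, 2) for R, Rinv in map(R_and_inverse, (1, 10, 100, 1000))]
    print(label, np.round(norms, 2))       # bounded in the first case, growing in the second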

Exercise 5.1.3. Prove that r(u ⊗ v) = |⟨u, v⟩|.

Now we define Q_A = {T ∈ L(H) : ||R_m T R_m^{-1}|| → 0}.

Theorem 5.1.4. Q_A is a two-sided ideal in B_A and every operator in Q_A is quasinilpotent. Furthermore, if A is quasinilpotent, then A ∈ Q_A.

Proof. Let T ∈ Q_A and let X ∈ B_A. Then ||R_m TX R_m^{-1}|| ≤ ||R_m T R_m^{-1}|| ||R_m X R_m^{-1}|| → 0, so Q_A is a right ideal. Since the same estimate holds for XT, we see that Q_A is a two-sided ideal in B_A. On the other hand, r(T) = r(R_m T R_m^{-1}) ≤ ||R_m T R_m^{-1}||, which shows that if T ∈ Q_A then it must be quasinilpotent. Finally, if A ∈ Q then r(A) = 0 and d_m = m. Using (5.2) we see that

||R_m A R_m^{-1} x||^2 = Σ_{n=0}^∞ m^{2n} ||A^{n+1} R_m^{-1} x||^2 = (1/m^2) Σ_{n=0}^∞ m^{2n+2} ||A^{n+1} R_m^{-1} x||^2
                      = (1/m^2) [ −||R_m^{-1} x||^2 + Σ_{n=0}^∞ m^{2n} ||A^n R_m^{-1} x||^2 ] = (1/m^2) ( ||x||^2 − ||R_m^{-1} x||^2 ) ≤ ||x||^2/m^2,

from which it follows that ||R_m A R_m^{-1}|| ≤ 1/m → 0 as m → ∞. □
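For a nilpotent matrix the series (5.1) is a finite sum, so the estimate ||R_m A R_m^{-1}|| ≤ 1/m can be observed exactly. A minimal sketch of our own, assuming numpy; the Jordan block stands in for a quasinilpotent operator and the helper name R_m is ours.

import numpy as np

N = 4
A = np.diag(np.ones(N - 1), k=1)           # nilpotent Jordan block: r(A) = 0, hence d_m = m

def R_m(m):
    S, P = np.zeros((N, N)), np.eye(N)
    for n in range(N):                      # A^N = 0, so the series in (5.1) terminates
        S += float(m) ** (2 * n) * (P.T @ P)
        P = A @ P
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

for m in (1, 10, 100, 1000):
    R = R_m(m)
    print(m, np.linalg.norm(R @ A @ np.linalg.inv(R), 2), "<= 1/m =", 1 / m)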

Remark 5.1.1. The ideal Q_A need not contain every quasinilpotent operator in B_A. Indeed, if A is the unilateral forward shift, a calculation shows that R_m^2 = (1 − d_m^2)^{-1} I, a scalar multiple of the identity. Since every operator commutes with a scalar multiple of the identity, it follows that B_A = L(H). On the other hand, ||R_m T R_m^{-1}|| = ||T|| for any T in L(H), so Q_A = (0).
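The unilateral shift lives on an infinite-dimensional space, but the mechanism of the remark, namely that A^{*n}A^n = I forces R_m to be a scalar multiple of the identity, can be seen with any isometry. The following small sketch is our own, assuming numpy; a finite-dimensional unitary is used as a stand-in, since it is also an isometry with r(A) = 1.

import numpy as np

n = 5
A = np.roll(np.eye(n), 1, axis=0)          # a cyclic permutation: unitary, so A^{*k} A^k = I for all k
m = 10
d = m / (1 + m * 1.0)                       # r(A) = 1, as for the unilateral shift

S = sum(d ** (2 * k) * (np.linalg.matrix_power(A, k).T @ np.linalg.matrix_power(A, k))
        for k in range(400))                # truncation of the series in (5.1)
print(np.allclose(S, S[0, 0] * np.eye(n)))  # True: R_m^2 is a scalar multiple of I

# Consequently R_m T R_m^{-1} = T for every T, so B_A = L(H) and Q_A = (0).
rng = np.random.default_rng(5)
T = rng.standard_normal((n, n))
R = np.sqrt(S[0, 0]) * np.eye(n)
print(np.isclose(np.linalg.norm(R @ T @ np.linalg.inv(R), 2), np.linalg.norm(T, 2)))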

The following result justifies our interest in Q_A.

Theorem 5.1.5. If Q_A ≠ (0) and there exists a nonzero compact operator in B_A, then B_A has a n. i. s.

Proof. Let K be a nonzero compact operator in B_A. Without loss of generality we may assume that QK = 0 for every Q ∈ Q_A. Indeed, if QK ≠ 0 for some Q ∈ Q_A, then QK is a compact quasinilpotent operator with the property that B_A (QK) ⊂ Q, and the result follows from Proposition 5.1.1.

Let Q be a fixed nonzero operator in Q_A and let T be an arbitrary operator in B_A. Then QT ∈ Q_A and, hence, QTK = 0. Since K ≠ 0, there is a nonzero vector z in the range of K. Clearly, QTz = 0, so Tz ∈ ker Q for all T ∈ B_A. Naturally, the closure of the subspace {Tz : T ∈ B_A} is an invariant subspace for B_A. It is nonzero since z ≠ 0 and the identity operator is in B_A. Finally, it is not H since it is contained in the kernel of the nonzero operator Q. □

From Theorem 5.1.5 we deduce some easy consequences.

Corollary 5.1.6. Suppose that A is a quasinilpotent operator, B is a power bounded operator commuting with A, and K is a nonzero compact operator satisfying AK = BKA. Then B_A has a n. i. s.

Proof. By Proposition 5.1.2, K is in B_A. Since A ∈ Q, Theorem 5.1.4 shows that A ∈ Q_A. The result then follows from Theorem 5.1.5. □

Corollary 5.1.7. Suppose that A is a quasinilpotent operator, λ is a complex number, and K is a nonzero compact operator satisfying AK = λKA. Then either B_A or B_{A^*} has a n. i. s. In either case, A has a proper hyperinvariant subspace.

Proof. If |λ| ≤ 1, Corollary 5.1.6 implies that B_A has a n. i. s. For |λ| > 1, we have A^*K^* = (1/λ̄) K^*A^*, so the same argument shows that B_{A^*} has a n. i. s. If M is such a subspace, then it is hyperinvariant for A^*. It follows that M^⊥ is a proper hyperinvariant subspace for A. □

Now we arrive at the main result of this section.

Theorem 5.1.8. Let K be a nonzero compact operator on the separable, infinite-dimensional Hilbert space H. Then B_K has a n. i. s.

Proof. We will show that Q_K ≠ (0). The result will then follow from Theorem 5.1.5. Of course, if K is quasinilpotent, Theorem 5.1.4 shows that K ∈ Q_K. Therefore, for the rest of the proof, we will assume that r(K) > 0.

Notice that x ⊗ y ∈ Q_K iff ||R_m (x ⊗ y) R_m^{-1}|| → 0. However, ||R_m (x ⊗ y) R_m^{-1}|| = ||R_m x|| ||R_m^{-1} y||, so it suffices to exhibit a rank one operator x ⊗ y with sup_m ||R_m x|| < ∞ and lim_m ||R_m^{-1} y|| = 0. A vector y with the desired property is supplied by the following lemma.

Lemma 5.1.9. Suppose that K is a compact operator and r(K) > 0. Then there exists a unit vector v such that R_m^{-1} v → 0 as m → ∞.

Proof. Let λ be a complex number in σ(K) such that |λ| = r(K). Then λ̄ ∈ σ(K^*), so there are unit vectors u and v for which Ku = λu and K^*v = λ̄v. An easy calculation shows that K(u ⊗ v) = (u ⊗ v)K, so that u ⊗ v ∈ {K}′ ⊂ B_K. It then follows that sup_m ||R_m u|| ||R_m^{-1} v|| < ∞. On the other hand, a straightforward calculation shows that ||R_m u|| → ∞ as m → ∞. Since sup_m ||R_m u|| ||R_m^{-1} v|| < ∞, it must follow that ||R_m^{-1} v|| → 0. □

Exercise 5.1.4. Prove that ||R_m u|| → ∞ as m → ∞.

So it remains to provide a nonzero vector x with the property that

(5.3)  sup_m ||R_m x|| < ∞.

To that end, it suffices for x to satisfy

(5.4)  lim sup_n ||K^n x||^{1/n} < r(K).

Indeed, (5.4) implies that the power series Σ_{n=0}^∞ ||K^n x||^2 z^n has radius of convergence bigger than 1/r^2 and, consequently, the series Σ_n ||K^n x||^2 / r^{2n} converges. Since

||R_m x||^2 = Σ_{n=0}^∞ ( m/(1 + mr) )^{2n} ||K^n x||^2

and {m/(1 + mr)} is an increasing sequence converging to 1/r, we see that (5.4) implies (5.3).
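A toy computation makes the dichotomy visible: for a vector satisfying (5.4) the norms ||R_m x|| stay bounded, while for an eigenvector belonging to an eigenvalue of maximal modulus they blow up. The following minimal sketch is our own, assuming numpy; a 2×2 diagonal matrix plays the role of K and the helper name Rm_norm_squared is ours.

import numpy as np

K = np.diag([2.0, 0.1])                     # r(K) = 2
r = 2.0
x = np.array([0.0, 1.0])                    # eigenvector for 0.1: limsup ||K^n x||^{1/n} = 0.1 < r(K)
y = np.array([1.0, 0.0])                    # eigenvector for 2 = r(K): (5.4) fails

def Rm_norm_squared(m, vec, terms=20000):
    """||R_m vec||^2 computed from (5.2), truncating the series."""
    d = m / (1 + m * r)
    total, w = 0.0, vec.copy()
    for n in range(terms):
        total += np.dot(w, w)
        w = d * (K @ w)                     # after n steps, w = d_m^n K^n vec
    return total

for m in (1, 10, 100, 1000):
    print(m, Rm_norm_squared(m, x), Rm_norm_squared(m, y))
# The values for x stay bounded as m grows, while the values for y grow without bound.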

It is not hard to see that, if K has an eigenvalue λ with the property that |λ| < r(K), then any eigenvector corresponding to λ satisfies (5.4). Thus we may assume that K has no such eigenvalue; since the nonzero points of σ(K) are eigenvalues of the compact operator K and can accumulate only at 0, it follows that 0 is an isolated point of σ(K). Let Γ be a positively oriented circle around the origin such that 0 is the only element of σ(K) inside the circle, and let

P = −(1/2πi) ∫_Γ (K − λI)^{-1} dλ.

By Theorem 4.2.2, P is a projection that commutes with K, and the restriction K_0 of K to the invariant subspace Ran P is quasinilpotent. It follows that, if x is a unit vector in Ran P, then ||K^n x||^{1/n} = ||K_0^n x||^{1/n} ≤ ||K_0^n||^{1/n} → 0. This completes the proof of the theorem. □
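The Riesz projection P used above can also be computed numerically by discretizing the contour integral. The following minimal sketch is our own, assuming numpy; a 3×3 matrix with σ(K) = {0, 2} plays the role of K, and the trapezoid rule approximates the integral over the unit circle Γ.

import numpy as np

K = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])             # sigma(K) = {0, 2}; 0 is isolated

npts = 2000                                  # trapezoid points on Gamma = unit circle
P = np.zeros((3, 3), dtype=complex)
for t in 2 * np.pi * np.arange(npts) / npts:
    lam = np.exp(1j * t)
    dlam = 1j * lam * (2 * np.pi / npts)     # d(lambda) along the parametrization
    P += -np.linalg.inv(K - lam * np.eye(3)) * dlam / (2j * np.pi)

assert np.allclose(P @ P, P, atol=1e-8)      # P is an idempotent (here it is diag(1, 1, 0))
assert np.allclose(P @ K, K @ P, atol=1e-8)  # P commutes with K
K0 = P @ K @ P                               # the compression of K to Ran P
print(np.abs(np.linalg.eigvals(K0)).max())   # numerically 0: K restricted to Ran P is nilpotent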

Exercise 5.1.5. Prove that if u ⊗ v is a rank one operator, then ||u ⊗ v|| = ||u|| ||v||.

As mentioned earlier, the presence of proper invariant subspaces for B_K (K compact) is an advance in invariant subspace theory only if B_K differs from {K}′. We do not know at the present time whether B_K can equal {K}′ for a nonzero compact operator K on an infinite-dimensional space. We do know that the answer is no if K has positive spectral radius.

Proposition 5.1.10. Let K be a compact operator on an infinite-dimensional Hilbert space such that r(K) > 0. Then B_K ≠ {K}′.

Proof. Notice that the vectors x and y obtained in the proof of Theorem 5.1.8 satisfy (5.3) and K^*y = λ̄y, with |λ| = r(K). Since it was established that x ⊗ y ∈ B_K, it suffices to prove that K(x ⊗ y) ≠ (x ⊗ y)K. This follows from the fact that Kx ≠ λx, which is a simple consequence of (5.3). □