Spectral Theory Notes (Spring 2020), for the discussion on 31.3.2020

The next Zoom session of the course will start on Tuesday March 31 at 14:15, in the same Zoom room as the previous one. (The link can be found in Moodle if you have lost the previous invitation.) Before the session, please study the following sections from the textbook Mathematical Methods in Quantum Mechanics With Applications to Schrödinger Operators by G. Teschl:

Section 1.4, half a page, up to and including the first Example: The main topic is the definition of the (external) direct sum of Hilbert spaces, ⊕_i H_i, also called the orthogonal sum of Hilbert spaces.

Section 2.2, 4 pages, until but excluding the Example (Differential operator): The main topic is the introduction of multiplication operators in the first Example. This also serves as an introduction to unbounded operators on a Hilbert space, as linear maps A : Dom(A) → H, and to the definition of the adjoint of a densely defined operator (definition of A* when Dom(A) is dense in H) and, consequently, the concept of self-adjoint unbounded operators (A* = A with Dom(A) dense in H). We also need the definitions of S + T and ST on their natural domains for unbounded operators S, T.

Section 2.4, two new items: The essential range of a measurable function and its relation to the corresponding multiplication operator (Example (multiplication operator)), and the spectrum of an inverse (Lemma 2.17).

Section 2.5, excluding statements about closed operators and closures: For the spectral decomposition theorem, we will need direct (orthogonal) sums of operators defined on a direct sum of Hilbert spaces as in Section 1.4 above.

Motivations for the selection above:

• The spectral decomposition theorem for self-adjoint operators A on H (bounded or unbounded) can be loosely summarized as follows: there exists a collection of Hilbert spaces H_i, i ∈ I, and a unitary map U : H → ⊕_{i∈I} H_i which transforms the operator A into a direct sum of multiplication operators A_i on H_i, i.e., UA = (⊕_{i∈I} A_i)U. This version gives the appropriate generalization of the matrix result that real symmetric matrices can be diagonalized by orthogonal matrices (and complex Hermitian matrices can be diagonalized by unitary matrices). Compare this also to the statement of Theorem 5.4.1 in the textbook by Davies.

• Why include unbounded operators but not their whole theory? There is very little difference in the proof of the spectral decomposition theorem between bounded and unbounded operators. In addition, most unbounded operators in applications arise in this manner from multiplication operators, via a suitable unitary map. This holds also for differential operators (using Fourier transforms and the theory they lead to), but these topics would need their own course to be done properly. Hence, we are skipping all examples related to closed operators, closures and differential operators in the textbook by Teschl.

Below is a summary of definitions for bounded and unbounded operators on a Hilbert space H. Let us first fix some related terminology for this course: there are variations on the related notations and terms, so please keep an open mind when reading other sources. In particular, below you can find a comparison between our notations and those used in Teschl's textbook.
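As a concrete finite-dimensional illustration of the spectral decomposition described in the first bullet above (a sketch; the Hermitian matrix here is randomly generated): numpy's `eigh` produces a unitary change of basis under which a Hermitian matrix becomes diagonal, i.e. a direct sum of multiplication operators on one-dimensional spaces.

```python
import numpy as np

# Finite-dimensional analogue: a Hermitian matrix A on H = C^4.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2          # Hermitian: A* = A

# eigh: A = U diag(eigvals) U*, with U unitary and eigvals real.
eigvals, U = np.linalg.eigh(A)

# U* A U is diagonal: a direct sum of multiplication operators
# (multiplication by eigvals[i] on the one-dimensional spaces C).
assert np.allclose(U.conj().T @ A @ U, np.diag(eigvals))
assert np.allclose(U.conj().T @ U, np.eye(4))   # U is unitary
assert np.allclose(np.imag(eigvals), 0)         # spectrum is real
```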

Scalar product: We use ⟨f, g⟩, which is conjugate linear in the second argument (in g). In Teschl's textbook, the scalar product is conjugate linear in the first argument (in f). Therefore, ⟨f, g⟩_Teschl = ⟨g, f⟩ = ⟨f, g⟩*.
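The two conventions can be compared numerically: numpy's `vdot` conjugates its first argument, so it computes precisely ⟨f, g⟩ in Teschl's convention (a small sketch with arbitrary vectors):

```python
import numpy as np

f = np.array([1 + 2j, 3 - 1j])
g = np.array([2 - 1j, 1j])

# Our convention: conjugate linear in the SECOND argument g.
inner_ours = np.sum(f * np.conj(g))

# Teschl's convention: conjugate linear in the FIRST argument f.
# np.vdot conjugates its first argument, so it computes exactly this.
inner_teschl = np.vdot(f, g)

# <f, g>_Teschl = <g, f> = <f, g>*  (the relation stated above)
assert np.isclose(inner_teschl, np.conj(inner_ours))
assert np.isclose(inner_teschl, np.sum(np.conj(f) * g))
```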

Subspace operator: A is a subspace operator on H if its domain Dom(A) and image space Image(A) are linear subspaces of H, and A : Dom(A) → Image(A) is a linear map. In Teschl's textbook, one denotes D(A) = Dom(A).

Range: The range of a subspace operator A is Ran(A) := {Af | f ∈ Dom(A)}, and it is a linear subspace of Image(A), hence also of H.

Operator: A subspace operator A is called an operator if Image(A) = H. (Sometimes operator is used as a shorthand for a densely defined operator, as defined below.) In Teschl one assumes that an operator is always densely defined (see below).

Boundedness: A subspace operator A is called bounded if

‖A‖ := sup {‖Af‖ | f ∈ Dom(A), ‖f‖ ≤ 1} < ∞.

A subspace operator is a continuous map (in the norm topology inherited from H) if and only if it is bounded.

Unbounded operator: Consistently with the terminology so far: A is an unbounded operator on H if it is a subspace operator for which Image(A) = H and ‖A‖ = ∞.

Densely defined: A subspace operator A is densely defined if the closure of Dom(A) (taken in the norm of H) equals H.

Bounded operator: A is a bounded operator on H if it is a subspace operator for which Dom(A) = H = Image(A) and ‖A‖ < ∞. (Note that for a general bounded subspace operator, Dom(A) and Image(A) could both be proper subspaces of H.) We denote the collection of all bounded operators by 𝓛(H); in Teschl's textbook this is denoted by L(H).

Extension of operators: Suppose A and B are both operators on H, i.e., both are subspace operators with Image(A) = H = Image(B). We say that A is extended by B, or equivalently that B extends A, if Dom(A) ⊂ Dom(B) and Bf = Af for all f ∈ Dom(A). This is denoted by A ⊂ B, and in Teschl's textbook by A ⊆ B.

Natural domain of a sum of operators: Suppose A and B are both operators on H. The natural domain of A + B is Dom(A + B) := Dom(A) ∩ Dom(B), since this is the largest collection of f ∈ H for which both Af and Bf are defined. Note that Dom(A + B) is a linear subspace, and setting (A + B)f = Af + Bf then defines an operator A + B.

Natural domain of a product of operators: Suppose A and B are both operators on H. The natural domain of AB is Dom(AB) := {f ∈ Dom(B) | Bf ∈ Dom(A)} ⊂ Dom(B), since this is the largest collection of f ∈ H for which both Bf and A(Bf) are defined. Note that Dom(AB) is a linear subspace, and setting (AB)f = A(Bf) defines an operator AB.

Note the following consequences of the definitions:

• An unbounded operator cannot have bounded operator extensions.

• A bounded subspace operator A always has a unique bounded extension to a subspace operator B with Dom(B) equal to the closure of Dom(A) and Image(B) = Image(A). However, if A is not densely defined, A might have operator extensions which are unbounded.

• If A, B ∈ 𝓛(H), then the above natural domain definitions yield the same bounded operators A + B, AB ∈ 𝓛(H) as we have used up to now.

The following properties are not needed (yet), but let me include them here in case you wish to read also some of the skipped parts of the textbook.
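The natural-domain rules for A + B and AB above can be sketched in code. This is a toy model, not notation from the course: an "operator" is modeled as a domain predicate plus an action, and the "vectors" are plain real numbers rather than Hilbert-space elements.

```python
import math

# Toy model: an "operator" is a pair (domain predicate, action).
class Op:
    def __init__(self, in_dom, apply):
        self.in_dom = in_dom   # f -> bool: is f in Dom(A)?
        self.apply = apply     # f -> Af, meaningful when in_dom(f)

def op_sum(A, B):
    # Dom(A + B) := Dom(A) ∩ Dom(B)
    return Op(lambda f: A.in_dom(f) and B.in_dom(f),
              lambda f: A.apply(f) + B.apply(f))

def op_prod(A, B):
    # Dom(AB) := {f in Dom(B) | Bf in Dom(A)}
    return Op(lambda f: B.in_dom(f) and A.in_dom(B.apply(f)),
              lambda f: A.apply(B.apply(f)))

A = Op(lambda f: f >= 0, math.sqrt)        # defined only on [0, oo)
B = Op(lambda f: True, lambda f: f - 2.0)  # defined everywhere

S, P = op_sum(A, B), op_prod(A, B)
assert S.in_dom(4.0) and not S.in_dom(-1.0)
assert S.apply(4.0) == 4.0                  # sqrt(4) + (4 - 2)
assert P.in_dom(6.0) and not P.in_dom(1.0)  # need Bf = f - 2 >= 0
assert P.apply(6.0) == 2.0                  # sqrt(6 - 2)
```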

Graph of an operator: If A is an operator, then its graph is the subset G(A) := {(f, Af) | f ∈ Dom(A)} of the Hilbert space H ⊕ H, which is equal to the product set H × H with the scalar product ⟨(f₁, g₁), (f₂, g₂)⟩_{H⊕H} = ⟨f₁, f₂⟩_H + ⟨g₁, g₂⟩_H. In particular, H ⊕ H is here endowed with the norm ‖(f, g)‖_{H⊕H} = √(‖f‖_H² + ‖g‖_H²). In Teschl's textbook the graph is denoted by Γ(A).

Closed operator: An operator A is closed if its graph G(A) is a closed subset of H ⊕ H.

Note the following consequences of the definitions:

• The graph of an operator is always a linear subspace of H ⊕ H.

• If A ∈ 𝓛(H), then A is closed. (This is a general topological property, using the fact that then A is continuous, H × H = Dom(A) × Image(A), and checking that the topology of H ⊕ H agrees with the product topology.)

• Conversely, by the closed graph theorem, if A is a closed subspace operator with Dom(A) = H = Image(A), then A is bounded, and thus A ∈ 𝓛(H).

Section 1.4

Explanation of the Example: Take J = ℕ and for each j ∈ ℕ set H_j = ℂ, which is a one-dimensional Hilbert space such that each g ∈ H_j has the norm ‖g‖_j := |g|. Define further H := ⊕_{j∈ℕ} H_j. Then if f ∈ H, we have f = (f_j)_{j∈ℕ}, where f_j ∈ ℂ for all j, and

‖f‖_H² = Σ_{j∈ℕ} ‖f_j‖_j² = Σ_{j∈ℕ} |f_j|² = ‖f‖_{ℓ²}².

Thus H := {f ∈ ℂ^ℕ | Σ_{j∈ℕ} |f_j|² < ∞} is isometrically isomorphic with ℓ²(ℕ, ℂ) (the scalar products agree by the polarization identity).
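The polarization identity behind the last remark reads, in our convention, ⟨f, g⟩ = (1/4) Σ_{k=0}^{3} i^k ‖f + i^k g‖², so agreement of the norms forces agreement of the scalar products. A quick numerical check (with arbitrary random vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(5) + 1j * rng.standard_normal(5)
g = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# Our convention: conjugate linear in the second argument.
inner = np.sum(f * np.conj(g))

# Polarization identity: <f, g> = (1/4) sum_k i^k ||f + i^k g||^2
polar = sum((1j ** k) * np.linalg.norm(f + (1j ** k) * g) ** 2
            for k in range(4)) / 4
assert np.isclose(inner, polar)
```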

Section 2.2

Some comments to connect the results to the textbook by Davies:

• The Axioms refer to an axiomatic version of quantum mechanics and can be ignored.

• The quadratic form of an operator A, q_A(f) := ⟨Af, f⟩, f ∈ Dom(A), appeared already in the Lemmata in Section 5.1 of Davies when A was a self-adjoint bounded operator.

• To define the adjoint A* of an unbounded operator A, it is crucial that A is densely defined (this is the main reason why Teschl includes the assumption in the definition of an operator); see the discussion below.

• It should be stressed that, in general, (A*)* is no longer automatically well defined: for this one needs to require that also Dom(A*) is dense. This property is not automatic; it is even possible that Dom(A*) = {0}.

• In the proof of the properties of multiplication operators, to get the approximating sequence f_n from the domain of the operator, the definition means, explicitly, that

f_n(x) = 1_{Ω_n}(x) f(x) = f(x) if x ∈ Ω_n, and f_n(x) = 0 otherwise.

The comment about dominated convergence refers to the integral

‖f − f_n‖² = ∫_{ℝⁿ} dx |f(x) − f_n(x)|² = ∫_{ℝⁿ} dx 1_{x∉Ω_n} |f(x)|².

• Note the hidden but important corollary in the example: if V is a real-valued measurable function, then it defines a (possibly unbounded) self-adjoint operator on L². In particular, however crazy singularities V has, it yields an operator whose domain is dense.
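The truncation argument can be illustrated numerically (a sketch; the interval, V, and f below are arbitrary choices): with Ω_n = {x : |V(x)| ≤ n}, the truncations f_n = 1_{Ω_n} f converge to f in L² even though V is singular.

```python
import numpy as np

# Discretized interval (0, 1]; V has a singularity at x = 0.
x = np.linspace(1e-6, 1.0, 200_000)
dx = x[1] - x[0]
V = 1.0 / x                      # real valued, unbounded near 0
f = np.sqrt(x)                   # some f in L^2(0, 1)

def truncation_error(n):
    # f_n = 1_{Omega_n} f with Omega_n = {x : |V(x)| <= n}
    fn = np.where(np.abs(V) <= n, f, 0.0)
    return np.sqrt(np.sum(np.abs(f - fn) ** 2) * dx)

errs = [truncation_error(n) for n in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]     # ||f - f_n|| decreases
assert errs[2] < 1e-3                  # and tends to 0
```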

Adjoints, symmetric and self-adjoint unbounded operators

Suppose A is a densely defined operator. As given in the textbook, it then has an adjoint, which is an operator A* with the domain

Dom(A*) := {f ∈ H | there is g ∈ H such that ⟨Ah, f⟩ = ⟨h, g⟩ for all h ∈ Dom(A)}.

If f ∈ Dom(A∗), we wish to set A∗f := g which implies that A∗f is a solution to the equation

⟨Ah, f⟩ = ⟨h, A*f⟩, for all h ∈ Dom(A),    (1)

generalizing the concept of adjoint from bounded operators. However, for this to make sense the solution g must be unique, and it turns out that this is true if and only if A is densely defined; hence the restriction above. Namely, suppose that g and g′ are solutions to (1). Subtracting, we then find that g − g′ is orthogonal to every h ∈ Dom(A). Conversely, if g₀ ∈ Dom(A)^⊥, it is straightforward to check that g + g₀ is also a solution. Therefore, the solution is unique if and only if Dom(A)^⊥ = {0}. Since H is the orthogonal direct sum of the closure of Dom(A) and Dom(A)^⊥, this occurs if and only if Dom(A) is dense in H.

Since bounded operators have Dom(A) = H, they are automatically densely defined, Dom(A*) = H, and the definition of A* coincides with the one used up to now.

Let us also point out that the Riesz representation theorem may be used here to find another, equivalent, definition for the domain of A*. If a g satisfying the condition can be found, then the linear map h ↦ ⟨Ah, f⟩ = ⟨h, g⟩ is bounded (since |⟨h, g⟩| ≤ ‖h‖‖g‖) and hence continuous. On the other hand, if h ↦ ⟨Ah, f⟩ is continuous, then since Dom(A) is dense, this map has a unique extension to an element of the dual space H*. Therefore, in this case, by the Riesz representation theorem there is g ∈ H such that ⟨Ah, f⟩ = ⟨h, g⟩ for all h ∈ Dom(A). Thus then f ∈ Dom(A*). We can conclude that

Dom(A*) = {f ∈ H | h ↦ ⟨Ah, f⟩ is continuous}
        = {f ∈ H | sup_{‖h‖≤1, h∈Dom(A)} |⟨Ah, f⟩| < ∞},

where the second identity arises from the characterization of continuous linear maps. In some cases one of these alternative representations for the domain is easier to use than the original one.

For densely defined operators we thus obtain a unique map f ↦ g = A*f from Dom(A*) to H. It is straightforward to check that this map is linear, and hence A* is an operator.

The implication (2.27) is a useful result which can be proven directly from the definitions as follows: Suppose A ⊂ B and take f ∈ Dom(B*). Then for all h ∈ Dom(A) ⊂ Dom(B), we have ⟨h, B*f⟩ = ⟨Bh, f⟩ = ⟨Ah, f⟩. Thus f ∈ Dom(A*) and A*f = B*f. This implies B* ⊂ A*.

A self-adjoint unbounded operator is defined as a densely defined operator for which A* = A. Note that this, in particular, requires that Dom(A*) = Dom(A). A symmetric operator A satisfies

⟨Ah, f⟩ = ⟨h, Af⟩, for all f, h ∈ Dom(A).

This implies that every f ∈ Dom(A) belongs to Dom(A*), and then A*f = Af; using the above shorthand notation, A ⊂ A*. Although every self-adjoint operator is clearly symmetric, symmetric operators need not be self-adjoint. In fact, a symmetric operator does not necessarily have any self-adjoint extensions (and when it does, the extension need not be unique: see Corollary 2.2 in the textbook). It should be stressed that the upcoming spectral decomposition result can only be used for self-adjoint operators, and not directly for symmetric ones.
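For a bounded operator (here a matrix) the defining relation ⟨Ah, f⟩ = ⟨h, A*f⟩ can be verified directly, with A* the conjugate transpose; a quick finite-dimensional sketch (symmetric-but-not-self-adjoint phenomena only appear in infinite dimensions, so none can show up here):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_star = A.conj().T             # the adjoint of a matrix

def inner(u, v):
    # our convention: conjugate linear in the second argument
    return np.sum(u * np.conj(v))

h = rng.standard_normal(3) + 1j * rng.standard_normal(3)
f = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# <Ah, f> = <h, A*f> for all h, f
assert np.isclose(inner(A @ h, f), inner(h, A_star @ f))
```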

Section 2.4

Spectrum of a multiplication operator: Note that the resolvent of a multiplication operator A := M_V is also a multiplication operator, R(z, A) = M_W, where W(x) = (V(x) − z)⁻¹.

The resolvent operator is thus bounded if and only if the essential supremum, i.e., the L∞-norm, of W is finite. In particular, sets of measure zero need to be ignored, leading to the spectrum of M_V being equal to the essential range of the function V. Note also that a multiplication operator may have eigenvalues.

Lemma 2.16: For bounded operators, this result was already proven as part of Davies: Lemma 1.2.13. Note that the sequence in case (iii) of this Lemma is called a Weyl sequence in Teschl.

Lemma 2.17: For bounded operators, one assumes that A is invertible. The rest of the algebra in the proof then remains unchanged.
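A discretized sketch of the relation between the resolvent norm and the essential range (with V(x) = x² on [0, 1] as an arbitrary example, so that the essential range is [0, 1]): ‖R(z, M_V)‖ = ‖(V − z)⁻¹‖_∞ stays bounded for z away from the essential range and blows up as z approaches a point of it.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
V = x ** 2               # essential range of V on [0, 1] is [0, 1]

def resolvent_norm(z):
    # ||R(z, M_V)|| = ||W||_inf with W = 1 / (V - z)
    return np.max(np.abs(1.0 / (V - z)))

# z = 2 is outside [0, 1]: the resolvent is bounded (norm 1 here,
# attained at V = 1, the point of [0, 1] closest to z).
assert resolvent_norm(2.0) <= 1.0 + 1e-12
# z approaching 0.5, a point of the essential range: norm blows up.
assert resolvent_norm(0.5 + 1e-2j) > resolvent_norm(0.5 + 1e-1j)
```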

Section 2.5

The results in Theorem 2.23 and Lemma 2.24 work also for unbounded densely defined operators, but their proofs require tools from the preceding sections in Teschl. Below are versions of these results for bounded operators, using tools already proven during the lectures.

Theorem 1 (Theorem 2.23 for bounded operators). Take a (countable) nonempty index set J. Suppose A_j are bounded self-adjoint operators on Hilbert spaces H_j, j ∈ J, such that sup_j ‖A_j‖ < ∞. Consider the direct sum Hilbert space H := ⊕_{j∈J} H_j and define A := ⊕_{j∈J} A_j as in Exercise 10.5. Then A is a bounded self-adjoint operator on H. If z ∈ ℂ has Im z ≠ 0, then z ∉ σ(A) and the corresponding resolvent operator is given by

R(z, A) = ⊕_{j∈J} R(z, A_j).

Proof. The fact that A is bounded and self-adjoint follows from the assumptions and Exercise 10.5. Suppose z ∈ ℂ has Im z ≠ 0. The proof in this case is the same as that in Teschl, except now we may use Davies: Lemma 5.1.1. In particular, combined with Exercise 10.5, we find that R(z, A) defined above is a bounded operator on H. Taking any f ∈ H, we have for arbitrary j ∈ J that (R(z, A)(A − zI)f)_j = R(z, A_j)(A_j f_j − z f_j) = f_j and ((A − zI)R(z, A)f)_j = (A_j − z I_j) R(z, A_j) f_j = f_j. Hence, R(z, A) = (A − zI)⁻¹.
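For two blocks, Theorem 1 reduces to the fact that a block-diagonal matrix shifted by z is inverted block by block; a small numerical check (block sizes 2 and 3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def hermitian(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.conj().T) / 2

A1, A2 = hermitian(2), hermitian(3)
# A = A1 (+) A2 on H = H1 (+) H2, as a block-diagonal matrix
A = np.zeros((5, 5), dtype=complex)
A[:2, :2], A[2:, 2:] = A1, A2

z = 0.3 + 1.0j   # Im z != 0, so z is not in sigma(A)
R = np.linalg.inv(A - z * np.eye(5))

# R(z, A) = R(z, A1) (+) R(z, A2)
assert np.allclose(R[:2, :2], np.linalg.inv(A1 - z * np.eye(2)))
assert np.allclose(R[2:, 2:], np.linalg.inv(A2 - z * np.eye(3)))
assert np.allclose(R[:2, 2:], 0) and np.allclose(R[2:, :2], 0)
```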

Comment about internal and external direct sums: Suppose M_j, j ∈ J, form a complete collection of mutually orthogonal closed subspaces of a Hilbert space H, and let P_j, j ∈ J, denote the corresponding orthogonal projections; completeness here means that any f ∈ H can be written as a sum f = Σ_{j∈J} P_j f. In this case, we write H = ⊕_{j∈J} M_j, generalizing the earlier notation where J = {1, 2}, M_1 = M and M_2 = M^⊥ for some closed subspace M. This can be called an internal direct sum since we start with subspaces of some given Hilbert space.

However, in this case each M_j is itself also a Hilbert space H_j with the scalar product inherited from H. Therefore, we also have the external direct sum H̃ := ⊕_{j∈J} H_j as defined in Section 1.4. Then H̃ can be naturally identified with H: the identification is provided by the map Φ : H → H̃ defined by Φ(f)_j := P_j f, which one can straightforwardly check to be unitary. Teschl does not make any distinction between H and H̃ in the notation; for instance, it is fine to have H̃ := H_1 ⊕ H_2, f_1 ∈ H_1, f_2 ∈ H_2, and write (f_1, f_2) ∈ H̃ as f_1 + f_2, after identifying f_1 with (f_1, 0) as an element of the closed subspace H_1 × {0} of H̃, and f_2 with (0, f_2) similarly.

Lemma 2 (Lemma 2.24 for bounded operators). Take a (countable) nonempty index set J. Suppose H_j, j ∈ J, are Hilbert spaces, and consider the direct sum Hilbert space H := ⊕_{j∈J} H_j. If A ∈ 𝓛(H) is such that H_j reduces A for every j ∈ J, then A = ⊕_{j∈J} A_j, where we set A_j f := Af ∈ H_j for all f ∈ H_j, yielding A_j ∈ 𝓛(H_j).

Proof. Since H_j reduces A, we have P_j A ⊂ A P_j, where P_j is the orthogonal projection onto H_j ⊂ H. Hence, if f ∈ H, we have P_j A f = A P_j f ∈ H_j, and since f_j := P_j f ∈ H_j, we conclude that P_j A f = A_j f_j. Thus A f = Σ_{j∈J} P_j A f = (A_j f_j)_{j∈J} ∈ H. Each A_j is linear

and, since ‖A_j f_j‖ ≤ ‖A‖ ‖f_j‖, it is also bounded on H_j. Since sup_j ‖A_j‖ ≤ ‖A‖, the operator Ã := ⊕_{j∈J} A_j is a bounded operator on H. Finally, we observe that (Ã f)_j = A_j f_j for all f ∈ H and any j ∈ J. Therefore, A = Ã.
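The mechanism of Lemma 2 in matrix form (a sketch where A is block diagonal by construction, so each coordinate subspace reduces it; the blocks are arbitrary):

```python
import numpy as np

# Build A block diagonal by construction, so each H_j reduces A.
A1 = np.array([[1.0, 2.0], [2.0, 5.0]])
A2 = np.array([[7.0]])
A = np.zeros((3, 3))
A[:2, :2], A[2:, 2:] = A1, A2

# Orthogonal projections onto H1 = span(e0, e1) and H2 = span(e2)
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])

# Reduction: P_j A = A P_j
assert np.allclose(P1 @ A, A @ P1)
assert np.allclose(P2 @ A, A @ P2)

# The blocks A_j = A restricted to H_j recover A: A = A1 (+) A2
A_tilde = np.zeros((3, 3))
A_tilde[:2, :2] = A[:2, :2]
A_tilde[2:, 2:] = A[2:, 2:]
assert np.allclose(A, A_tilde)
```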
