Weak Solutions for a Class of Quadratic Operator-Differential Equations

Luis Antonio Gómez Ardila

UNIVERSIDAD DE LOS ANDES FACULTAD DE CIENCIAS DEPARTAMENTO DE MATEMÁTICAS 2016

Weak Solutions for a Class of Quadratic Operator-Differential Equations

A thesis (Trabajo de Grado) submitted in partial fulfillment of the requirements for the degree of Magíster en Matemáticas

Luis Antonio Gómez Ardila

Advisor: Monika Winklmeier

UNIVERSIDAD DE LOS ANDES FACULTAD DE CIENCIAS DEPARTAMENTO DE MATEMÁTICAS 2016

Abstract

"Mathematics is the most beautiful and most powerful creation of the human spirit" (Stefan Banach)

This monograph presents existence and uniqueness results for generalized solutions of two classes of quadratic operator-differential equations with constant coefficients:

Au(t) + Bu′(t) − Du″(t) = 0    (1)

and

Au(t) + iBu′(t) + Du″(t) = 0,    (2)

where A, B and D are self-adjoint operators satisfying certain conditions under which equation (1) is called elliptic-hyperbolic and equation (2) is called hyperbolic. The main result is the existence and uniqueness of weak solutions, on the positive real axis or on the negative real axis, for the class of operator-differential equations called elliptic-hyperbolic. On the positive real axis these solutions decay exponentially to zero at infinity; a similar result is obtained on the negative real axis.

We also give existence results for solutions of (1) and (2) in the case in which A, B and D are bounded self-adjoint operators. These results, specifically for the elliptic-hyperbolic case, are related to a factorization of the quadratic pencil A + λB + λ²C (where C = −D) and of its associated pencil L(λ) = λ²A + λB + C. The factorization of the latter pencil is related to the existence of roots of the operator equation

AZ² + BZ + C = 0.    (3)

In order to prove that (3) has a solution Z, we define the so-called linearizer L of the pencil. Then we use the theory of Krein spaces to establish existence of certain L-invariant subspaces which finally lead to the existence of solutions Z of (3).

Additionally, some results on the completeness and basis properties of the system of eigenvectors associated to the pencil A + λB − λ²D are presented.

This work is based on the papers of Shkalikov [11] and Langer [4], and on the books of Bognár [1], Azizov [2], and Markus [6].

Dedication

To my mother, Alix Ardila.

Acknowledgements

My gratitude forever to...

Prof. Dr. Monika Winklmeier, my teacher and my adviser for this thesis, for giving me the opportunity to work with her, but especially for introducing me to that beautiful part of mathematics.

Prof. Jean Carlos Cortissoz, my graduate adviser, for his guidance during these two years.

The Department of Mathematics of Universidad de los Andes, for the financial support received through a graduate assistantship.

My friends and teachers at Universidad Industrial de Santander, Professors Ricardo Monturiol, Edilberto Reyes, Rafael Isaacs, Rafael Castro and Sonia Sabogal, for their constant encouragement and help in achieving this goal.

Contents

1 Introduction
  1.1 Preliminaries

2 Inner Product Spaces
  2.1 Vector Spaces
  2.2 Inner Products on Vector Spaces
  2.3 Orthogonality
  2.4 Isotropic Vectors
  2.5 Maximal Non-Degenerate Subspaces
  2.6 Maximal Semi-Definite Subspaces
  2.7 Projections of Vectors on Subspaces
  2.8 Ortho-Complemented Subspaces
  2.9 Fundamental Decompositions
  2.10 Orthonormal Systems

3 Linear Operators on Inner Product Spaces
  3.1 Linear Operators
  3.2 Isometric Operators
  3.3 Symmetric Operators
  3.4 Orthogonal and Fundamental Projectors
  3.5 Fundamental Symmetries
  3.6 Angular Operators

4 Krein Spaces
  4.1 Krein Spaces
  4.2 Subspaces in Krein Spaces
  4.3 The Gram Operator of a Closed Subspace
  4.4 Linear Operators in Krein Spaces

5 Riesz Basis
  5.1 Bessel Sequences
  5.2 Basis
  5.3 Riesz Basis

6 Quadratic Operator Pencils
  6.1 Quadratic Pencils of Bounded Self-adjoint Operators
  6.2 Factorization of Quadratic Pencils
  6.3 Strongly Damped Pencils
  6.4 Operators Associated to a Pencil
  6.5 Operator-Differential Equations Associated to a Quadratic Pencil
  6.6 A Special Case

7 Hyperbolic and Elliptic-hyperbolic Pencils
  7.1 Hyperbolic Pencils
  7.2 Operator-Differential Equation Associated to a Hyperbolic Pencil
  7.3 Elliptic-hyperbolic Pencils and their Associated Operator-Differential Equation

8 Pencils of Unbounded Self-adjoint Operators and their Associated Differential Equations
  8.1 Hyperbolic and Elliptic-hyperbolic Pencils of Unbounded Self-adjoint Operators
  8.2 Equation Associated with a Hyperbolic Pencil
  8.3 Equation Associated with an Elliptic-hyperbolic Pencil
  8.4 Completeness and Basis Property

A Some Results on Functional Analysis and Operator Theory
  A.1 Functional Analysis and Operator Theory
  A.2 Normal Points of a Bounded Operator
  A.3 Some Results about Definitizable Operators
  A.4 Distributions and Operators

Bibliography

Chapter 1

Introduction

"No one shall expel us from Paradise that Cantor has created for us." (David Hilbert)

This monograph is based on the papers of Shkalikov [11] and Langer [4], and on the books of Bognár [1], Azizov [2], and Markus [6]. The main result is the existence and uniqueness of weak solutions, on the positive real axis or on the negative real axis, for a class of operator-differential equations called elliptic-hyperbolic. On the positive real axis these solutions decay exponentially to zero at infinity; a similar result is obtained on the negative real axis.

Operator-differential equations of the form (1) and (2) can be considered as a generalization of ordinary linear differential equations to the infinite-dimensional case [21]. They arise, for example, in the study of strongly damped mechanical systems [20], electrical systems and signal processing [23], in the study of partial differential equations (the Wave and Heat Equations, for example) and in the study of abstract Cauchy problems [22]. The spectral analysis of the associated pencil L(λ) = λ²A + λB + C (where C = −D) generalizes the study of the quadratic eigenvalue problem, which is very important in the previously mentioned applications to physics [23].

For the case where A, B and C are bounded self-adjoint operators acting in a separable Hilbert space H, such that L(λ) is strongly damped, the spectral analysis and linearization of the pencil L(λ) can be carried out with geometric methods using Krein space theory [4]. Analogously to the finite-dimensional case, the problem of linearization of the pencil L(λ) is related to the problem of factorization via the operator equation (3), and the latter is related to the problem of existence of invariant maximal semi-definite subspaces for an associated linearizing operator L [4], [6].
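In the finite-dimensional case this circle of ideas can be illustrated numerically. The sketch below (an illustrative example with made-up 2×2 matrices, not taken from the cited works) forms the companion linearization of an overdamped pencil L(λ) = λ²I + λB + C and extracts a root Z of Z² + BZ + C = 0 from the invariant subspace spanned by the eigenvectors belonging to the n eigenvalues of largest real part:

```python
import numpy as np

# an overdamped quadratic pencil L(lam) = lam^2 I + lam B + C (here A = I)
B = np.array([[5.0, 1.0], [1.0, 6.0]])
C = np.array([[1.0, 0.2], [0.2, 2.0]])
n = 2

# companion linearization: the eigenvalues of M are those of the pencil
M = np.block([[np.zeros((n, n)), np.eye(n)], [-C, -B]])
lam, W = np.linalg.eig(M)

# take the n eigenvalues of largest real part; the top halves of the
# corresponding eigenvectors span an M-invariant "graph" subspace
idx = np.argsort(lam.real)[-n:]
V = W[:n, idx]
Z = (V @ np.diag(lam[idx]) @ np.linalg.inv(V)).real

# Z is a root of Z^2 + B Z + C = 0, so L factors through (lam I - Z)
residual = Z @ Z + B @ Z + C
assert np.linalg.norm(residual) < 1e-8
```

Here Z = VΛV⁻¹ is the root associated with the chosen invariant subspace; this is the finite-dimensional shadow of the invariant-subspace arguments developed later for the bounded operator case.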

1.1 Preliminaries

1. By a vector space we shall always mean a complex vector space. A subspace of a vector space X is a set U ⊆ X that is itself a vector space under the operations defined on X.

A norm on a vector space X is a function ‖·‖ : X → R such that, for every x, y ∈ X and α ∈ C:

i) ‖x‖ = 0 implies x = 0, ii) ‖αx‖ = |α| ‖x‖, iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖.

The pair (X, ‖·‖) is called a normed space. A subspace U of a normed space (X, ‖·‖) is called closed (open) if and only if it is a closed (open) subset in the topology induced on X by the norm ‖·‖. Note that the subspace U ⊆ X is closed if, and only if, for every sequence (un)n∈N ⊆ U and u0 ∈ X, un → u0 in the norm implies u0 ∈ U. A normed space X is called a Banach space if, and only if, every Cauchy sequence in X is convergent. A subspace U of a Banach space (X, ‖·‖) is closed if and only if (U, ‖·‖) is itself a Banach space.

A pre-Hilbert space is a pair (X, ⟨·, ·⟩) where X is a vector space and ⟨·, ·⟩ : X × X → C is a function such that for every x, y, z ∈ X and α, β ∈ C:

i) ⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩, ii) ⟨y, x⟩ is the complex conjugate of ⟨x, y⟩, iii) ⟨x, x⟩ > 0 if x ≠ 0.

The function ⟨·, ·⟩ is called a positive definite inner product (see section 2.2 for the general definition of an inner product). Every pre-Hilbert space (X, ⟨·, ·⟩) is a normed space with the norm induced by ⟨·, ·⟩; this norm is defined by the formula ‖x‖ = √⟨x, x⟩, x ∈ X. In this case, if (X, ‖·‖) is a Banach space then (X, ⟨·, ·⟩) is called a Hilbert space.

2. By a linear operator from a vector space X into a vector space Y we mean a function

T : D(T ) ⊆ X −→ Y , where D(T ) ⊆ X is a subspace of X (called the domain of T ), such that

T (αu + βv) = αT u + βT v for every u, v ∈ D(T ) and α, β ∈ C. In the case Y = X we say that T is a linear operator acting in X.

Let X be a vector space. The linear operator P : X → X is called a projector if and only if P² = P.

If T : D(T ) ⊆ X −→ Y is a linear operator then the subspaces ker(T ) ⊆ X and R(T ) ⊆ Y are defined by:

ker(T ) = {x ∈ D(T ) | T x = 0}, R(T ) = {T x | x ∈ D(T )}.

Let (X, ‖·‖X) and (Y, ‖·‖Y) be normed spaces and let T be a linear operator from X into Y. The operator T is called bounded if and only if there exists M ≥ 0 such that ‖Tx‖Y ≤ M‖x‖X for every x ∈ D(T). The bounded operator T is called compact if, and only if, for every bounded sequence (xn)n∈N ⊆ D(T), the sequence (Txn)n∈N has a convergent subsequence in Y. An operator T is called closed if, and only if, for every sequence (xn)n∈N ⊆ D(T) such that (xn)n∈N is convergent in X and (Txn)n∈N is convergent in Y:

lim_{n→∞} xn ∈ D(T)  and  lim_{n→∞} Txn = T( lim_{n→∞} xn ).

If X and Y are normed spaces then we set:

• B(X, Y) = {T : X → Y | T is a bounded linear operator}.

• B∞(X, Y) = {T : X → Y | T is a compact linear operator}.

• B(X) = B(X, X).

• B∞(X) = B∞(X, X).

3. Let T be a densely defined linear operator acting in a Hilbert space H. Define the adjoint T ∗ of T by:

• D(T*) = {y ∈ H | x ↦ ⟨Tx, y⟩ is a bounded linear functional on D(T)}.

• If y ∈ D(T*), then T*y is the unique u ∈ H such that ⟨Tx, y⟩ = ⟨x, u⟩ for every x ∈ D(T).

Clearly, since T is densely defined, T* is well-defined. Moreover, T* is always a closed operator (and it is densely defined precisely when T is closable).

If T ∈ B(H), then we have

H = ker(T) ⊕ cl(R(T*)) = ker(T*) ⊕ cl(R(T)),

where cl denotes the closure in H.
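In finite dimensions all ranges are closed and this decomposition can be verified directly. The following sketch (an illustrative check with an arbitrary rank-deficient real matrix) computes both subspaces from the singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))  # rank 2

# SVD: rows of Vh past the rank span ker(T); the first `rank` columns of
# Vh* span ran(T*) (everything is closed in finite dimensions)
Ul, s, Vh = np.linalg.svd(T)
rank = int(np.sum(s > 1e-10))
ker_T = Vh[rank:].conj().T
ran_Tstar = Vh[:rank].conj().T

assert np.allclose(T @ ker_T, 0)                   # ker_T really lies in ker(T)
assert np.allclose(ker_T.conj().T @ ran_Tstar, 0)  # the two pieces are orthogonal
assert ker_T.shape[1] + ran_Tstar.shape[1] == 4    # together they span the space
```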

4. The densely defined linear operator T acting in a Hilbert space H is called:

• symmetric if and only if T ⊆ T ∗.

• essentially self-adjoint if and only if its closure is self-adjoint; equivalently, cl(T) = T*.

• self-adjoint if and only if T = T ∗.

• unitary if, and only if, T ∈ B(H) and TT ∗ = T ∗T = I.

Chapter 2

Inner Product Spaces

2.1 Vector Spaces

Definition 2.1.1. Let X be a vector space and U1, …, Un subspaces of X. Then:

(i) U1 + ⋯ + Un := {x1 + ⋯ + xn | xj ∈ Uj, j = 1, …, n} is the vector sum of the subspaces U1, …, Un.

(ii) U1, …, Un are linearly independent subspaces if, and only if, x1 + ⋯ + xn = 0 with xj ∈ Uj (j = 1, …, n) implies x1 = ⋯ = xn = 0.

Remarks:

• Clearly, the vector sum of subspaces of X is again a subspace of X.

• In definition 2.1.1, (ii) is equivalent to: (U1 + ⋯ + Uj−1) ∩ Uj = {0} for every j = 2, …, n.

• The vector sum of linearly independent subspaces U1, …, Un ⊆ X is called a direct vector sum and is denoted by U1 ∔ ⋯ ∔ Un.

• By Zorn's lemma, if U ⊆ X is a subspace then there exists a subspace V ⊆ X such that U ∔ V = X (and therefore U ∩ V = {0}). V is called a complementary subspace for U with respect to X. Moreover, if W ⊆ X is a subspace with U ∩ W = {0} then there exists a subspace V ⊆ X with V ⊇ W such that U ∔ V = X.

Definition 2.1.2. Let X be a vector space, U ⊆ X a subspace and X/U the quotient space of X with respect to U. Then the dimension of X/U is called the codimension of U with respect to X and is denoted by codim(U); that is,

codim(U) := dim(X/U).

Proposition 2.1.3. Let X be a vector space and U ⊆ X a subspace. Then every complementary subspace for U with respect to X is isomorphic to X/U.

Proof. Let V ⊆ X be a complementary subspace for U with respect to X. Let T : V → X/U be such that, for every v ∈ V, Tv := [v] (here [v] denotes the equivalence class of v); clearly T is well-defined and it is a linear transformation. T is injective since ker(T) = {v ∈ V | [v] = 0} = {v ∈ V | v ∈ U} = U ∩ V = {0}. T is surjective because if w ∈ X/U then there exists x ∈ X with w = [x] and, writing x = u + v with u ∈ U and v ∈ V, we get w = [x] = [u + v] = [v] = Tv. So T is an isomorphism between V and X/U.

Corollary 2.1.4. The dimension of every complementary subspace for a subspace U is equal to codim(U).

2.2 Inner Products on Vector Spaces

Definition 2.2.1. An inner product on a vector space X is a function

[·, ·]: X × X −→ C such that:

(i) For every x, y in X, [y, x] is the complex conjugate of [x, y].

(ii) For every x, y, z in X and α, β in C, [αx + βy, z] = α[x, z] + β[y, z].

6 Remark. The pair (X, [·, ·]) where X is a vector space and [·, ·] is an inner product on X will be called an inner product space. If (X, [·, ·]) is an inner product space then (X, −[·, ·]) is called the anti-space of (X, [·, ·]).

Definition 2.2.2. Let (X, [·, ·]) and (X′, [·, ·]′) be inner product spaces. A linear operator T : X → X′ is called an isometry if, and only if, [Tx, Ty]′ = [x, y] for every x, y ∈ X. If, further, T is a linear isomorphism then X and X′ are said to be isometrically isomorphic. In this case, T is called an isometrical isomorphism between X and X′.

Proposition 2.2.3 (Polarization Formula). Let X be an inner product space. Then, for every x, y in X:

4[x, y] = [x + y, x + y] − [x − y, x − y] + i[x + iy, x + iy] − i[x − iy, x − iy].

The proof of the polarization formula is a straightforward calculation of the right hand side.
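The formula is easy to test numerically. The sketch below (illustrative only; the Gram matrix G is an arbitrary choice) checks it for the indefinite inner product [x, y] = y*Gx, which follows the convention of definition 2.2.1 (linear in the first argument), using random complex vectors:

```python
import numpy as np

# an indefinite inner product [x, y] = y* G x given by a Hermitian Gram matrix
G = np.diag([1.0, 1.0, -1.0])
ip = lambda x, y: y.conj() @ (G @ x)   # linear in the first slot

rng = np.random.default_rng(1)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# 4[x, y] = [x+y, x+y] - [x-y, x-y] + i[x+iy, x+iy] - i[x-iy, x-iy]
rhs = (ip(x + y, x + y) - ip(x - y, x - y)
       + 1j * ip(x + 1j * y, x + 1j * y) - 1j * ip(x - 1j * y, x - 1j * y))
assert np.isclose(4 * ip(x, y), rhs)
```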

Remark. If X is an inner product space then, for every x ∈ X, [x, x] ∈ R, since [x, x] equals its own complex conjugate.

Definition 2.2.4. Let X be an inner product space and x ∈ X. Then x is called:

(i) positive if [x, x] > 0.

(ii) negative if [x, x] < 0.

(iii) neutral if [x, x] = 0.

Remark. Clearly, 0 is neutral. Note that if X is an inner product space and x is a positive

(negative, neutral) vector then, for every α ∈ C with α 6= 0, the vector αx is positive (negative, neutral) too.

Definition 2.2.5. Let (X, [·, ·]) be an inner product space. [·, ·] is called indefinite if and only if there exist x, y in X such that [x, x] > 0 and [y, y] < 0.

Example 1. Let (H, ⟨·, ·⟩) be a Hilbert space and let U, V be nontrivial subspaces of H such that H = U ⊕̇ V (where ⊕̇ denotes the orthogonal direct sum of subspaces). Then, for every x ∈ H, there are unique u ∈ U and v ∈ V such that x = u + v. Now, if x, y are elements in H with x = u1 + v1 and y = u2 + v2, then define [x, y] := ⟨u1, u2⟩ − ⟨v1, v2⟩. Clearly, [·, ·] is an inner product on H. Note that, for every u ∈ U \ {0}, [u, u] > 0 and, for every v ∈ V \ {0}, [v, v] < 0. So, [·, ·] is an indefinite inner product on H.
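A minimal numerical sketch of this construction, assuming H = C³, U = Span(e1, e2) and V = Span(e3):

```python
import numpy as np

# [x, y] = <u1, u2> - <v1, v2> corresponds to the Gram matrix J below
J = np.diag([1.0, 1.0, -1.0])
ip = lambda x, y: y.conj() @ (J @ x)

u = np.array([2.0, 1.0, 0.0])   # u in U \ {0}
v = np.array([0.0, 0.0, 3.0])   # v in V \ {0}

assert ip(u, u) > 0             # [u, u] = |u1|^2 + |u2|^2 > 0
assert ip(v, v) < 0             # [v, v] = -|v3|^2 < 0, so [.,.] is indefinite
```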

Proposition 2.2.6. Let X be an indefinite inner product space. Then there exists a non-zero neutral vector in X.

Proof. Let x, y ∈ X be such that [x, x] > 0 and [y, y] < 0. Note that, for every λ ∈ R:

[x + λy, x + λy] = [y, y]λ² + 2 Re([x, y])λ + [x, x].

Since [x, x] > 0 and [y, y] < 0, we have Re([x, y])² − [x, x][y, y] > 0, so the discriminant of the quadratic equation [y, y]λ² + 2 Re([x, y])λ + [x, x] = 0 is positive and, therefore, there exists λ0 ∈ R such that [x + λ0y, x + λ0y] = 0; hence z0 := x + λ0y is a neutral vector. Moreover, z0 ≠ 0: if z0 = 0 then x = −λ0y and, therefore, [x, x] = λ0²[y, y] ≤ 0, a contradiction.
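The construction in the proof can be reproduced numerically. In this sketch (the vectors x and y are an arbitrary illustrative choice) λ0 is computed from the quadratic formula:

```python
import numpy as np

G = np.diag([1.0, -1.0])               # indefinite Gram matrix on C^2
ip = lambda x, y: y.conj() @ (G @ x)

x = np.array([1.0, 0.0])               # [x, x] = 1 > 0
y = np.array([0.3, 1.0])               # [y, y] = 0.09 - 1 < 0

# solve [y,y] t^2 + 2 Re[x,y] t + [x,x] = 0; the discriminant is positive
a, b, c = ip(y, y).real, 2 * ip(x, y).real, ip(x, x).real
t0 = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

z0 = x + t0 * y                        # the neutral vector of the proof
assert np.isclose(ip(z0, z0), 0.0)
assert np.linalg.norm(z0) > 0
```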

Definition 2.2.7. Let (X, [·, ·]) be an inner product space. If [·, ·] is not indefinite then it is called semi-definite and (X, [·, ·]) is called a semi-definite inner product space.

Remark. If [·, ·] is a semi-definite inner product on a vector space X then, for every x ∈ X, [x, x] ≥ 0 (and in this case [·, ·] is called positive semi-definite) or, for every x ∈ X, [x, x] ≤ 0 (and in this case [·, ·] is called negative semi-definite). If, for every x ∈ X, [x, x] = 0 then [·, ·] is called neutral (note that a neutral inner product is both a positive and a negative semi-definite inner product).

Proposition 2.2.8. Let X be a semi-definite inner product space and let x ∈ X be a neutral vector. Then, for every y ∈ X, [x, y] = 0.

Proof. Suppose x ≠ 0 (since the case x = 0 is clear) and suppose, without loss of generality, that the space is positive semi-definite (otherwise take −[·, ·]). Let y ∈ X be any vector.

If [y, y] = 0 then, for every λ ∈ R, 0 ≤ [x + λy, x + λy] = 2λ Re([x, y]) and 0 ≤ [x + iλy, x + iλy] = 2λ Im([x, y]); so Re([x, y]) = 0 and Im([x, y]) = 0. Now, if [y, y] > 0 then, for every λ ∈ R, q(λ) := [x + λy, x + λy] = λ²[y, y] + 2λ Re([x, y]) ≥ 0 and p(λ) := [x + iλy, x + iλy] = λ²[y, y] + 2λ Im([x, y]) ≥ 0; that is, q and p are parabolas with a double zero at λ = 0, so their discriminants 4 Re([x, y])² and 4 Im([x, y])² are zero; that is, Re([x, y]) = 0 and Im([x, y]) = 0. Therefore, [x, y] = 0.

Corollary 2.2.9. Let X be an inner product space and let x, y ∈ X such that [x, x] = 0 and [x, y] 6= 0. Then the subspace Span({x, y}) is indefinite.

Proof. Let U := Span({x, y}). If U is not indefinite then U is a semi-definite subspace and, by proposition 2.2.8, [x, y] = 0, a contradiction.

Corollary 2.2.10. If X is a neutral inner product space then, for every x, y ∈ X, [x, y] = 0.

Proposition 2.2.11 (Schwarz Inequality). Let X be a semi-definite inner product space. Then, for every x, y ∈ X: |[x, y]|² ≤ [x, x][y, y].

Proof. Suppose, without loss of generality, that the inner product is positive semi-definite. If x or y is a neutral vector then, by proposition 2.2.8, the inequality holds.

Suppose that x and y are not neutral vectors. Let u = ([y, x]/[x, x])x and v = y − ([y, x]/[x, x])x. Then [u, v] = [([y, x]/[x, x])x, y − ([y, x]/[x, x])x] = 0, y = u + v and, therefore, [y, y] = [u, u] + [v, v] ≥ [u, u] = |[x, y]|²/[x, x]. So, |[x, y]|² ≤ [x, x][y, y].
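A quick numerical sanity check of the inequality (illustrative only: G below is a randomly generated positive semi-definite Gram matrix, so the induced inner product is positive semi-definite and even degenerate):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((2, 4))
G = M.T @ M                        # positive semi-definite Gram matrix (rank <= 2)
ip = lambda x, y: y @ (G @ x)      # a real positive semi-definite inner product

# |[x, y]|^2 <= [x, x][y, y] for random vectors (up to rounding)
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert abs(ip(x, y)) ** 2 <= ip(x, x) * ip(y, y) + 1e-9
```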

Definition 2.2.12. An inner product on a vector space X is called definite if and only if, for every x ∈ X, [x, x] = 0 implies x = 0.

Remarks:

(i) If U is a definite subspace of an inner product space (X, [·, ·]), then U can be endowed with a norm, ‖·‖U, defined by the formula ‖x‖U = √|[x, x]|, x ∈ U. This norm is called the norm induced by the inner product on U.

(ii) By proposition 2.2.6, a definite inner product is not indefinite; that is, a definite inner product is always semi-definite. So, if the inner product is positive definite then, for every x ≠ 0, [x, x] > 0, and if the inner product is negative definite then, for every x ≠ 0, [x, x] < 0.

(iii) A positive definite inner product space is what is classically known as a pre-Hilbert space (see the Preliminaries).

(iv) Clearly, if [·, ·] is a positive definite inner product on a vector space X then −[·, ·] is a negative definite inner product on X.

Proposition 2.2.13. Let X be an inner product space. If X contains at least one positive vector, then every element of X is the sum of two positive vectors.

Proof. Let x0 ∈ X be a positive vector. Then, for every α ∈ R \ {0}, αx0 is positive. If x ∈ X then, clearly, there is α ∈ R \ {0} such that [x + αx0, x + αx0] = [x, x] + 2 Re([x, x0])α + [x0, x0]α² is positive. So there exists α ∈ R \ {0} such that −αx0 and x + αx0 are both positive, and x = (x + αx0) + (−αx0).

Remark. Proposition 2.2.13 still holds if positive is replaced by negative.

2.3 Orthogonality

Definition 2.3.1. Let X be an inner product space; x, y ∈ X and A, B ⊆ X. Then:

(i) x and y are orthogonal (x ⊥ y) if and only if [x, y] = 0.

(ii) A and B are orthogonal (A ⊥ B) if and only if, for every x ∈ A and y ∈ B, x ⊥ y.

Definition 2.3.2. Let X be an inner product space and A ⊆ X. The orthogonal complement of A, denoted A⊥, is defined by

A⊥ := {x ∈ X : x ⊥ A} = {x ∈ X : x ⊥ y for all y ∈ A}.

The following proposition is clear so we omit its proof.

Proposition 2.3.3. Let X be an inner product space. Then:

(i) For every A ⊆ X, A⊥ is a subspace of X.

(ii) For every A, B ⊆ X with A ⊆ B, B⊥ ⊆ A⊥.

(iii) For every U, V subspaces of X, (U + V)⊥ = U⊥ ∩ V⊥.

(iv) For every A ⊆ X, A ⊆ (A⊥)⊥ =: A⊥⊥.

(v) For every A ⊆ X, A⊥ = A⊥⊥⊥.

Example 2. Let X be a two-dimensional vector space with basis {e1, e2}. On X define an inner product by the relations [e1, e1] = 1, [e2, e2] = −1 and [e1, e2] = 0. Let U := Span(e1 + e2). Then U is a subspace of X such that U⊥ = U (because [αe1 + βe2, e1 + e2] = α − β = 0 ⇔ α = β).
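This example can be checked directly in coordinates, representing the inner product by the Gram matrix diag(1, −1):

```python
import numpy as np

G = np.diag([1.0, -1.0])           # [e1,e1] = 1, [e2,e2] = -1, [e1,e2] = 0
ip = lambda x, y: y.conj() @ (G @ x)

w = np.array([1.0, 1.0])           # w = e1 + e2 spans U
assert np.isclose(ip(w, w), 0.0)   # w is neutral, so U is orthogonal to itself

# [a e1 + b e2, w] = a - b, so a vector is orthogonal to w iff a = b
assert np.isclose(ip(np.array([2.0, 2.0]), w), 0.0)      # in U, hence in U-perp
assert not np.isclose(ip(np.array([2.0, 1.0]), w), 0.0)  # outside U: not orthogonal
```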

Definition 2.3.4. Let X be an inner product space and let U, V ⊆ X be subspaces. U and V are dual companions (or U and V form a dual pair) if and only if U ∩ V ⊥ = U ⊥ ∩ V = {0}.

Remarks:

• If X is a semi-definite inner product space and x ∈ X is a neutral vector then, by proposition 2.2.8, x ∈ A⊥ for every A ⊆ X.

• If x, y ∈ X are orthogonal then [x + y, x + y] = [x, x] + [y, y].

• If U is a subspace of X then it is possible that U ∩ U ⊥ 6= {0} (see example 2).

• By example 2, if U ⊆ X is a subspace and V ⊆ X is its orthogonal complement with respect to X then U and V are not necessarily linearly independent (nevertheless, if X is a definite inner product space and U ⊆ X is a subspace then its orthogonal complement is a complementary subspace for U).

• The vector sum (resp. direct vector sum) of pairwise orthogonal subspaces U1, …, Un ⊆ X is called an orthogonal sum (resp. orthogonal direct sum) and is denoted by U1 ⊕ ⋯ ⊕ Un (resp. U1 ⊕̇ ⋯ ⊕̇ Un).

• Let (Xj, [·, ·]j) be inner product spaces, j = 1, …, n. Let X := X1 × ⋯ × Xn and, for x = (x1, …, xn), y = (y1, …, yn) in X, define [x, y] := [x1, y1]1 + ⋯ + [xn, yn]n. Clearly (X, [·, ·]) is an inner product space. Now, for every k = 1, …, n, let X(k) := {(x1, …, xn) ∈ X : xj = 0, j ≠ k}. Then X(k) is a subspace of X which is isometrically isomorphic to Xk (k = 1, …, n) and X = X(1) ⊕̇ ⋯ ⊕̇ X(n).

2.4 Isotropic Vectors

Definition 2.4.1. Let X be an inner product space and U ⊆ X a subspace. The isotropic part of U is the subspace U⁰ := U ∩ U⊥.

Remarks:

• If x ∈ U⁰ then x is called an isotropic vector of U.

• If x is an isotropic vector for some subspace then x is a neutral vector, but not every neutral vector is isotropic (in example 2, e1 + e2 is neutral but is not isotropic for X).

• If U⁰ ≠ {0} then U is called degenerate.

• X is degenerate if, and only if, its isotropic part, X⁰ = X⊥, is not equal to {0}.

• In example 2, X is not degenerate (since X⊥ = {0}) but U is a degenerate subspace of X.

• If U is a definite subspace of an inner product space X, then U is non-degenerate.

• On X/X⁰ an inner product is defined by [x̃, ỹ]∼ := [x, y], where x̃, ỹ ∈ X/X⁰, x ∈ x̃ and y ∈ ỹ. Note that [·, ·]∼ is well-defined since if x, u ∈ x̃ and y, v ∈ ỹ then x − u, y − v ∈ X⁰ and, therefore, [x, y] − [u, v] = [x − u, y] + [u, y − v] = 0. That [·, ·]∼ is an inner product is clear since [·, ·] is so.

Theorem 2.4.2. Let (X, [·, ·]) be an inner product space and let U be a non-degenerate subspace of X such that U = U+ + U− with U+ a positive semi-definite subspace and U− a negative semi-definite subspace. Then U = U+ ∔ U−, U+ is maximal positive semi-definite in (U, [·, ·]) and U− is maximal negative semi-definite in (U, [·, ·]).

Proof. Suppose that there exists w ∈ U+ ∩ U− with w ≠ 0. Then w is neutral and, therefore, w ⊥ U+ and w ⊥ U− (see proposition 2.2.8); thus w ∈ U⁰ = {0}, a contradiction. Hence U+ ∩ U− = {0} and U = U+ ∔ U−.

Suppose, towards a contradiction, that U+ is not maximal positive semi-definite in U, and let V ⊆ U be a positive semi-definite subspace such that V ⊋ U+. Since U = U+ ∔ U−, there exists v ∈ V ∩ U− with v ≠ 0 (such a v is neutral, since [v, v] ≥ 0 and [v, v] ≤ 0). Thus, by proposition 2.2.8, v ⊥ V and v ⊥ U−; as U ⊆ V + U−, it follows that v ⊥ U, so v ∈ U⁰ = {0}, a contradiction. Hence U+ is maximal positive semi-definite and, analogously, U− is maximal negative semi-definite.

Proposition 2.4.3. Let X be an inner product space and U, U1, …, Un subspaces of X such that U = U1 ⊕̇ ⋯ ⊕̇ Un. Then U⁰ = U1⁰ ⊕̇ ⋯ ⊕̇ Un⁰.

Proof. Let i, j ∈ {1, …, n} with i ≠ j. If x ∈ Ui⁰ and y ∈ Uj⁰ then x ∈ Ui and y ∈ Uj, so [x, y] = 0 since Ui and Uj are orthogonal. Hence Ui⁰ and Uj⁰ are orthogonal.

If x1 + ⋯ + xn = 0 with xj ∈ Uj⁰ then x1 = ⋯ = xn = 0, since xj ∈ Uj and U1, …, Un are linearly independent. Hence U1⁰, …, Un⁰ are linearly independent.

So far, U1⁰, …, Un⁰ are mutually orthogonal and linearly independent subspaces of X.

Let x = x1 + ⋯ + xn ∈ U1⁰ ⊕̇ ⋯ ⊕̇ Un⁰. Then x ∈ U1 ⊕̇ ⋯ ⊕̇ Un = U and, for every uj ∈ Uj (j = 1, …, n), [u1 + ⋯ + un, x] = [u1, x] + ⋯ + [un, x] = [u1, x1] + ⋯ + [un, xn] = 0, since xj ∈ Uj⊥ (j = 1, …, n). Then x ∈ U⊥ and, therefore, x ∈ U⁰. So, U1⁰ ⊕̇ ⋯ ⊕̇ Un⁰ ⊆ U⁰.

Conversely, let x ∈ U⁰ = U ∩ U⊥; then there exist xj ∈ Uj (j = 1, …, n) such that x = x1 + ⋯ + xn. Fix j ∈ {1, …, n}; then for every u ∈ Uj ⊆ U we have [x, u] = 0. But, if u ∈ Uj then [x, u] = [xj, u]; so xj ∈ Uj⊥ and, therefore, xj ∈ Uj⁰. Then x ∈ U1⁰ ⊕̇ ⋯ ⊕̇ Un⁰. So, U⁰ ⊆ U1⁰ ⊕̇ ⋯ ⊕̇ Un⁰.

Therefore, U⁰ = U1⁰ ⊕̇ ⋯ ⊕̇ Un⁰.

Corollary 2.4.4. The orthogonal direct sum of non-degenerate subspaces is non-degenerate.

Proposition 2.4.5. Let X be a semi-definite inner product space. Then

X⁰ = {x ∈ X | [x, x] = 0}.

Proof. Clearly, X⁰ ⊆ {x ∈ X | [x, x] = 0} and, by proposition 2.2.8, {x ∈ X | [x, x] = 0} ⊆ X⊥ = X⁰. So, X⁰ = {x ∈ X | [x, x] = 0}.

Corollary 2.4.6. Let X be a neutral inner product space (that is, [x, x] = 0 for every x ∈ X). Then X⁰ = X.

Proof. This follows from proposition 2.4.5 because every neutral space is a semi-definite space.

Proposition 2.4.7. Let X be an inner product space and U ⊆ X a subspace. If U is neutral then U ⊥⊥ is neutral.

Proof. If U is neutral then, by proposition 2.2.8, U ⊆ U ⊥. Therefore, U ⊥⊥ ⊆ (U ⊥⊥)⊥, which shows that U ⊥⊥ is neutral.

Proposition 2.4.8. Let U be a degenerate subspace of an inner product space X. Then U ⊥⊥ is degenerate.

Proof. As U ⊆ U⊥⊥ and U⊥ = U⊥⊥⊥, it follows that {0} ≠ U ∩ U⊥ ⊆ U⊥⊥ ∩ U⊥⊥⊥. So U⊥⊥ is degenerate.

2.5 Maximal Non-Degenerate Subspaces

Proposition 2.5.1. Let X be an inner product space, X⁰ its isotropic part and V a complementary subspace for X⁰. Then V is a non-degenerate subspace and X = X⁰ ⊕̇ V.

Proof. Since X⁰ = X⊥, for every u ∈ X⁰ and v ∈ V we have [u, v] = 0. So X⁰ and V are orthogonal subspaces and, as V is a complementary subspace for X⁰, we have X = X⁰ ⊕̇ V. Suppose, towards a contradiction, that V is degenerate, and let x ≠ 0 be such that x ∈ V⁰. Then, for every u ∈ X⁰ and v ∈ V, [u + v, x] = [u, x] + [v, x] = 0. So x ∈ X⁰, but this is a contradiction since X⁰ and V are linearly independent.

Proposition 2.5.2. Let X be an inner product space, X⁰ its isotropic part and V a complementary subspace for X⁰. Then V is isometrically isomorphic to X/X⁰.

Proof. By proposition 2.1.3, V is isomorphic to X/X⁰ and an isomorphism is given by T : V → X/X⁰, Tx := x̃. Note that, for every x, y ∈ V, [Tx, Ty]∼ = [x̃, ỹ]∼ = [x, y] (see the remarks after definition 2.4.1). So T is an isometrical isomorphism.

Corollary 2.5.3. If X is an inner product space, then X/X0 is non-degenerate.

Theorem 2.5.4. Let X be an inner product space and V ⊆ X a subspace. Then V is a complementary subspace for X⁰ if and only if V is a maximal non-degenerate subspace of X.

Proof. (⇒) If V is a complementary subspace for X⁰ then, by proposition 2.5.1, V is a non-degenerate subspace of X and X = X⁰ ⊕̇ V. Suppose, towards a contradiction, that there exists a non-degenerate subspace W ⊆ X such that V ⊊ W. As X = X⁰ ∔ V and V ⊊ W, we have W ∩ X⁰ ≠ {0}. Let w ∈ W ∩ X⁰ with w ≠ 0; then, for every x ∈ W, [x, w] = 0. So w ∈ W⁰ and, therefore, W⁰ ≠ {0}, a contradiction.

(⇐) As V is a non-degenerate subspace of X it follows that V ∩ X⁰ = {0}, so V and X⁰ are linearly independent. If X⁰ ∔ V ≠ X then, by the last remark after definition 2.1.1, there exists a subspace W ⊆ X with V ⊊ W such that X = X⁰ ∔ W. So W ⊋ V and, by proposition 2.5.1, W is non-degenerate, contradicting the maximality of V.

Corollary 2.5.5. If X is an inner product space then X contains maximal non-degenerate sub- spaces and every non-degenerate subspace admits a maximal non-degenerate extension.

2.6 Maximal Semi-Definite Subspaces

Proposition 2.6.1. Let X be an inner product space and U, V ⊆ X be two maximal positive semi-definite subspaces (or two maximal negative semi-definite subspaces). Then (U + V)⁰ = {x ∈ U ∩ V | [x, x] = 0}.

Proof. Suppose, without loss of generality, that U and V are both maximal positive semi-definite subspaces. Let x ∈ U ∩ V be such that [x, x] = 0; clearly x ∈ U + V. As U and V are semi-definite subspaces then, by proposition 2.2.8, x ∈ U⁰ ∩ V⁰ and, therefore, for every u ∈ U and v ∈ V, [u + v, x] = [u, x] + [v, x] = 0. So x ∈ (U + V)⁰. Conversely, let x ∈ (U + V)⁰; then [x, x] = 0 and x ∈ U⊥ ∩ V⊥. Suppose, towards a contradiction, that x ∉ U. Since, for every u ∈ U and α ∈ C, [u + αx, u + αx] = [u, u] ≥ 0, the subspace U + Span({x}) ⊋ U is positive semi-definite, contradicting the maximality of U. So x ∈ U and, analogously, x ∈ V. Therefore, (U + V)⁰ = {x ∈ U ∩ V | [x, x] = 0}.

Proposition 2.6.2. Let X be an inner product space and let U, V ⊆ X be subspaces such that

X = U ∔ V. If U is positive definite and V is negative semi-definite, then U is a maximal positive definite subspace and V is a maximal negative semi-definite subspace. Analogously, if U is positive semi-definite and V is negative definite, then U is a maximal positive semi-definite subspace and V is a maximal negative definite subspace.

Proof. Suppose that U is positive definite and V is negative semi-definite, and suppose, towards a contradiction, that U is not a maximal positive definite subspace. Let W ⊆ X be a positive definite subspace such that U ⊊ W; as X = U ∔ V, there exists w ∈ W ∩ V with w ≠ 0 and, therefore, [w, w] > 0 and [w, w] ≤ 0, a contradiction. So U is maximal positive definite and, analogously, V is maximal negative semi-definite. The case where U is positive semi-definite and V is negative definite is similar.

Proposition 2.6.3. Let X be an inner product space and U ⊆ X. If U is a maximal positive definite subspace, then U ⊥ is negative semi-definite. Analogously, if U is a maximal negative definite subspace then U ⊥ is positive semi-definite.

Proof. Let U be a maximal positive definite subspace. Suppose, towards a contradiction, that there exists v ∈ U⊥ such that [v, v] > 0. Then U + Span({v}) is a positive definite subspace (since, for every u ∈ U and α ∈ C, [u + αv, u + αv] = [u, u] + |α|²[v, v]) and, as v ∉ U (since U⁰ = {0}), U + Span({v}) ⊋ U, contradicting the maximality of U.

The case where U is maximal negative definite is similar.

Corollary 2.6.4. Let X be an inner product space and U ⊆ X. If U is a positive definite and maximal positive semi-definite (resp., negative definite and maximal negative semi-definite) subspace, then U ⊥ is negative definite (resp., positive definite).

Proof. Suppose that U is a positive definite and maximal positive semi-definite subspace; then U ∩ U⊥ = {0}, U is maximal positive definite (by its maximality as a positive semi-definite subspace) and, by proposition 2.6.3, U⊥ is negative semi-definite. Let v ∈ U⊥ be such that [v, v] = 0 and suppose, towards a contradiction, that v ≠ 0; then U + Span({v}) is positive semi-definite and U + Span({v}) ⊋ U, a contradiction. So U⊥ is negative definite.

The case where U is negative definite and maximal negative semi-definite is analogous.

2.7 Projections of Vectors on Subspaces

Definition 2.7.1. Let X be an inner product space, U ⊆ X a subspace and x ∈ X. If there exist u ∈ U and v ∈ U⊥ such that x = u + v, then u is called a projection of x on U.

Remarks:

• Note that, given a subspace U of an inner product space X and x ∈ X, x has a projection on U if and only if x ∈ U + U⊥. So an element x ∈ X need not have a projection on U since, in general, it is possible that X ≠ U + U⊥ (see example 2).

• If u ∈ U is a projection of the vector x on the subspace U and v ∈ U⊥ is such that x = u + v then, for every w ∈ U⁰, u + w is a projection of x on U (since u + w ∈ U and v − w ∈ U⊥).

• By the previous remark, the projection of a vector on a subspace, if it exists, is not unique in general; this is the case when the subspace is degenerate. So, if there exists x ∈ X having exactly one projection on a subspace U ⊆ X then U is non-degenerate. Moreover, if the subspace U is non-degenerate then every vector x has at most one projection on U.

• Clearly, if u1, u2 ∈ U are projections of the vector x on the subspace U then u1 − u2 ∈ U⁰ (since there exist v1, v2 ∈ U⊥ such that x = u1 + v1 = u2 + v2 and so u1 − u2 = v2 − v1 ∈ U⊥; therefore u1 − u2 ∈ U ∩ U⊥ = U⁰).

Proposition 2.7.2. Let X be an inner product space; U ⊆ X a neutral subspace and x ∈ X. Then x admits a projection on U if and only if x ⊥ U. In this case, every element of U is a projection of x on U.

Proof. As U is neutral, by corollary 2.2.10, U ⊆ U⊥; hence U + U⊥ = U⊥ and, by the first remark above, x has a projection on U if and only if x ∈ U⊥, that is, if and only if x ⊥ U. In this case, for every u ∈ U we have x − u ∈ U⊥, so every element of U is a projection of x on U.

Theorem 2.7.3. Let X be an inner product space, U ⊆ X a positive definite subspace and x ∈ X. Then x admits a projection on U if and only if the function ϕ : U −→ R, ϕ(u) := [x − u, x − u], attains its minimum at a unique u0 ∈ U. This u0 is the unique projection of x on U.

Proof. As U is a definite subspace, U is non-degenerate.

⇒ Suppose that x admits a projection on U; as U is non-degenerate, the projection of x on U is unique. Let u0 and v0 be the unique vectors in U and U⊥, respectively, such that x = u0 + v0. Note that ϕ(u0) = [v0, v0] and, for every u ∈ U with u ≠ u0, as U is positive definite and u − u0 ≠ 0,

ϕ(u) = [(u0 + v0) − u, (u0 + v0) − u] = [v0 + (u0 − u), v0 + (u0 − u)] = [v0, v0] + [u0 − u, u0 − u] > [v0, v0] = ϕ(u0).

So u0 is the unique point of minimum for ϕ.

⇐ Suppose that ϕ has a unique point of minimum u0 ∈ U. Suppose, towards a contradiction, that x − u0 ∉ U⊥, and let u ∈ U be such that [x − u0, u] ≠ 0 (therefore u ≠ 0). Then, for every λ ∈ C, λ ≠ 0, [x − (u0 + λu), x − (u0 + λu)] > [x − u0, x − u0]; that is, for every λ ≠ 0,

|λ|²[u, u] − λ[u, x − u0] − λ̄[x − u0, u] > 0.

So, taking λ = α[x − u0, u] and dividing by |[x − u0, u]|² > 0, we obtain α²[u, u] − 2α > 0 for every α ∈ R with α ≠ 0, which is impossible (take α > 0 small). Therefore x − u0 ∈ U⊥ and so u0 is a projection of x on U. Clearly this projection is unique since U is non-degenerate.

Corollary 2.7.4. Let X be an inner product space, U ⊆ X be a negative definite subspace and x ∈ X. Then x admits a projection on U if and only if the function ϕ : U −→ R, ϕ(u) := [x − u, x − u], attains its maximum at a unique u0 ∈ U. This u0 is the unique projection of x on U.
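The variational characterization above can be checked numerically in a finite-dimensional example. The following sketch is hypothetical (the space R³ with [x, y] = x1y1 + x2y2 − x3y3, the subspace U and the vector x are choices made for illustration, not taken from the text):

```python
import numpy as np

# Hypothetical indefinite inner product on R^3: [x, y] = x1*y1 + x2*y2 - x3*y3.
G = np.diag([1.0, 1.0, -1.0])

def ip(x, y):
    """The indefinite inner product [x, y]."""
    return x @ G @ y

# U = Span({e1}) is a positive definite subspace; project x on U.
e1 = np.array([1.0, 0.0, 0.0])
x = np.array([2.0, 3.0, 4.0])

# The projection u0 = alpha*e1 is characterized by x - u0 lying in U-perp:
alpha = ip(x, e1) / ip(e1, e1)
u0 = alpha * e1

def phi(t):
    """phi evaluated along U, i.e. phi(t*e1) = [x - t*e1, x - t*e1]."""
    return ip(x - t * e1, x - t * e1)
```

Here phi attains its minimum value −7 exactly at t = alpha = 2, in accordance with theorem 2.7.3; note that the minimum value itself is negative, which cannot happen in a Hilbert space. On a negative definite subspace phi would attain a maximum instead (corollary 2.7.4).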

Theorem 2.7.5. Let X be an inner product space and U, U1, . . . , Un ⊆ X be subspaces such that U = U1⊕˙ · · · ⊕˙ Un. Let x ∈ X and u ∈ U; then u is a projection of x on U if and only if there exist projections ui of x on Ui (i = 1, . . . , n) such that u = u1 + · · · + un.

Proof. Note that, for every i = 1, . . . , n, U⊥ ⊆ Ui⊥ since Ui ⊆ U.

⇒ Suppose that u is a projection of x on U and let v ∈ U⊥ be such that x = u + v. As U = U1⊕˙ · · · ⊕˙ Un, there exist unique vectors u1, . . . , un with ui ∈ Ui such that u = u1 + · · · + un and, for i ≠ j, [ui, uj] = 0. So x = u1 + · · · + un + v where, for each i, every summand other than ui is orthogonal to Ui (since Uj ⊥ Ui for j ≠ i and v ∈ U⊥ ⊆ Ui⊥); hence x − ui ∈ Ui⊥. Therefore, for every i = 1, . . . , n, ui is a projection of x on Ui.

⇐ Suppose that there exist ui ∈ Ui, projections of x on Ui (i = 1, . . . , n), such that u = u1 + · · · + un. Note that for every i = 1, . . . , n, if wi ∈ Ui then [wi, x − u] = [wi, x − ui] = 0, since x − ui ∈ Ui⊥ and uj ⊥ Ui for j ≠ i; so x − u ⊥ Ui, i = 1, . . . , n, and therefore x − u ⊥ U. Hence u is a projection of x on U.

2.8 Ortho-Complemented Subspaces

Definition 2.8.1. Let X be an inner product space and U ⊆ X be a subspace. If U + U ⊥ = X then U is called an ortho-complemented subspace of X.

Remarks:

• Note that a subspace is ortho-complemented if and only if it admits an orthogonal complementary subspace (that is, a complementary subspace contained in U⊥).

• If U ⊆ X is a non-degenerate subspace then U is ortho-complemented if and only if X = U⊕˙ U ⊥.

• If U is a neutral subspace then, as U ⊆ U⊥, U is ortho-complemented if and only if U⊥ = X, if and only if U ⊆ X⁰.

• If U, U1, . . . , Un ⊆ X are subspaces such that U = U1⊕˙ · · · ⊕˙ Un, then U is ortho-complemented if and only if each Ui (i = 1, . . . , n) is ortho-complemented.

• Clearly, if U is an ortho-complemented subspace of an inner product space X then every x ∈ X has at least one projection on U.

• If U is ortho-complemented then, as U ⊆ U ⊥⊥, U ⊥ is ortho-complemented too.

Proposition 2.8.2. Let X be an inner product space and U ⊆ X be a subspace. If U is ortho-complemented then U⁰ ⊆ X⁰.

Proof. Suppose that U is ortho-complemented, that is, X = U + U⊥, and let u ∈ U⁰ = U ∩ U⊥. For every w ∈ U and z ∈ U⊥ we have [w + z, u] = [w, u] + [z, u] = 0 + 0 = 0; so u ∈ X⊥ = X⁰ and therefore U⁰ ⊆ X⁰.

Corollary 2.8.3. If X is a non-degenerate inner product space then every ortho-complemented subspace is non-degenerate.

Proposition 2.8.4. Let X be a non-degenerate inner product space and U ⊆ X an ortho- complemented subspace. Then U ⊥⊥ = U.

Proof. By corollary 2.8.3, as X is non-degenerate, U and U⊥ are non-degenerate subspaces such that X = U⊕˙ U⊥.

Clearly U ⊆ U⊥⊥. Suppose, towards a contradiction, that U ⊊ U⊥⊥ and let x ∈ U⊥⊥ \ U; writing x = u + v with u ∈ U and v ∈ U⊥, we have v = x − u ∈ U⊥⊥ and v ≠ 0, so v ∈ U⊥ ∩ U⊥⊥ = (U⊥)⁰ ≠ {0}, contradicting the non-degeneracy of U⊥.

Proposition 2.8.5. Every definite subspace of finite dimension is ortho-complemented.

Proof. Let U be a definite subspace of an inner product space X with n = dim(U) < ∞. Suppose, without loss of generality, that U is positive definite. Since U is finite-dimensional, it has an orthonormal basis (u1, . . . , un). Note that if x ∈ X then u = [x, u1]u1 + · · · + [x, un]un is a projection of x on U, since [x − u, uj] = [x, uj] − [x, uj][uj, uj] = 0 for every j = 1, . . . , n, so x − u ∈ U⊥. Hence, U is ortho-complemented.
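The explicit projection formula in this proof can be verified numerically. The sketch below is hypothetical (the space R³ with [x, y] = x1y1 + x2y2 − x3y3, the orthonormal basis and the vector x are choices for illustration):

```python
import numpy as np

# Hypothetical indefinite inner product [x, y] = x1*y1 + x2*y2 - x3*y3 on R^3.
G = np.diag([1.0, 1.0, -1.0])
ip = lambda x, y: x @ G @ y

# An orthonormal basis (u1, u2) of a positive definite subspace U:
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 2.0, 1.0]) / np.sqrt(3.0)   # [u2, u2] = (4 - 1)/3 = 1

x = np.array([1.0, 1.0, 1.0])

# u = [x, u1]*u1 + [x, u2]*u2 is a projection of x on U (proposition 2.8.5):
u = ip(x, u1) * u1 + ip(x, u2) * u2
```

The residual x − u is orthogonal to both basis vectors, so u is indeed a projection of x on U.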

2.9 Fundamental Decompositions

Definition 2.9.1. An inner product space X is decomposable if and only if there exist a positive definite subspace X+ ⊆ X and a negative definite subspace X− ⊆ X such that X = X⁰⊕˙ X+⊕˙ X−. A decomposition of this form is called a fundamental decomposition of X.

Remark. If X is a non-degenerate and decomposable inner product space, then every funda- mental decomposition of X is of the form X = X+⊕˙ X− where X+ is a positive definite subspace and X− is a negative definite subspace.

Lemma 2.9.2. Let X be an inner product space such that X = N⊕˙ X+⊕˙ X− where N is a neutral subspace, X+ is a positive definite subspace and X− is a negative definite subspace. Then N = X⁰ and therefore X is decomposable.

Proof. Since N is neutral, N⁰ = N (by corollary 2.2.10) and, as X+ and X− are definite subspaces, (X+)⁰ = (X−)⁰ = {0}. Then, by proposition 2.4.2, X⁰ = N⁰⊕˙ (X+)⁰⊕˙ (X−)⁰ = N. So X = X⁰⊕˙ X+⊕˙ X− and therefore X is decomposable.

Theorem 2.9.3. Let X be an inner product space and U ⊆ X be a positive definite (negative definite) subspace. Then X has a fundamental decomposition with X+ = U (X− = U) if and only if:

i) U is maximal positive definite (maximal negative definite);

ii) U is ortho-complemented.

Proof. Suppose, without loss of generality, that U is positive definite.

⇒ Suppose that X has a fundamental decomposition X = X⁰⊕˙ X+⊕˙ X− with X+ = U; clearly, U is ortho-complemented. Suppose, towards a contradiction, that U is not maximal positive definite and let W ⊆ X be a positive definite subspace such that W ⊋ U; then W ∩ (X⁰⊕˙ X−) ≠ {0} (indeed, if w ∈ W \ U is written as w = u + z with u ∈ X+ = U and z ∈ X⁰⊕˙ X−, then z = w − u ∈ W and z ≠ 0). Let w ≠ 0 be such that w ∈ W ∩ (X⁰⊕˙ X−); then [w, w] > 0 (since W is positive definite) and [w, w] ≤ 0 (since X⁰⊕˙ X− is negative semi-definite), a contradiction.

⇐ Suppose that U is a maximal positive definite and ortho-complemented subspace; then U⊥ is negative semi-definite (by proposition 2.6.3) and there exists a subspace V ⊆ U⊥ such that X = U⊕˙ V . Let N ⊆ V be a maximal neutral subspace of V; then there exists a subspace X− ⊆ V such that V = N⊕˙ X− (by proposition 2.2.8, N ⊥ V since V is semi-definite); clearly X− is negative definite. So, setting X+ = U, we have X = N⊕˙ X+⊕˙ X− where X+ is positive definite and X− is negative definite. By lemma 2.9.2, N = X⁰. Therefore X has a fundamental decomposition with X+ = U.

Remarks:

• By theorem 2.9.3, an inner product space X is decomposable if and only if it contains ortho-complemented maximal definite subspaces.

• If X is decomposable with fundamental decomposition X = X⁰⊕˙ X+⊕˙ X− then, by theorem 2.9.3, X+ is maximal positive definite and X− is maximal negative definite.

• Let X be a non-degenerate and decomposable inner product space and let X = X+⊕˙ X− be a fundamental decomposition of X. Let ‖·‖X+ and ‖·‖X− be the norms induced by the inner product on X+ and X−, respectively (see the remarks before definition 2.2.12). Then X is endowed with a norm given by the formula ‖x‖ = ‖x+‖X+ + ‖x−‖X−, where x ∈ X, x = x+ + x− with x+ ∈ X+ and x− ∈ X−.

Corollary 2.9.4. Every inner product space of finite dimension is decomposable.

Proof. Let X be an inner product space of finite dimension. Let U be a maximal positive definite subspace of X (such a subspace exists since X is finite-dimensional); then U is finite-dimensional and, by proposition 2.8.5, U is ortho-complemented. Hence, by theorem 2.9.3, X is decomposable.

Corollary 2.9.5. Every finite dimensional non-degenerate subspace of an inner product space is ortho-complemented.

Proof. Let X be an inner product space and let U be a non-degenerate and finite-dimensional subspace of X. Then, by corollary 2.9.4, U is decomposable; that is, U = U+⊕˙ U−. Since U+ is a finite-dimensional definite subspace of X, by proposition 2.8.5, U+ is ortho-complemented in X; so X = U+⊕˙ (U+)⊥. Note that U− ⊆ (U+)⊥ and, again by proposition 2.8.5, U− is ortho-complemented in (U+)⊥; that is, there exists V ⊆ (U+)⊥ such that (U+)⊥ = U−⊕˙ V . Therefore X = U+⊕˙ U−⊕˙ V = U⊕˙ V ; that is, U is ortho-complemented.

2.10 Orthonormal Systems

Definition 2.10.1. Let (X, [·, ·]) be an inner product space and let (eγ)γ∈Γ be an indexed family of vectors of X. The family (eγ)γ∈Γ is called an orthonormal system if and only if |[eγ, eγ′]| = δγ,γ′ for all γ, γ′ ∈ Γ.

Remark. Clearly, every orthonormal system in an inner product space is a linearly independent subset.

Lemma 2.10.2. Let X be a non-degenerate inner product space and let x ∈ X be a non-zero neutral vector. Then there exists an orthonormal system (u, v) ⊆ X such that x ∈ Span({u, v}).

Proof. Since X is non-degenerate, there exists y ∈ X such that [x, y] ≠ 0 and, by corollary 2.2.9, U = Span({x, y}) is an indefinite subspace. Clearly, dim(U) = 2.

Let u ∈ U be such that [u, u] = 1 (such a u exists since U is indefinite). Since Span({u}) is a positive definite subspace of U, it is ortho-complemented (by proposition 2.8.5) and, moreover, it is a maximal positive definite subspace of U. So, by proposition 2.6.3, Span({u})⊥ is a negative semi-definite 1-dimensional subspace of U and, since U is indefinite, it is negative definite. Let v ∈ Span({u})⊥ be such that [v, v] = −1. Then (u, v) ⊆ U ⊆ X is an orthonormal system such that U = Span({u, v}) (since {u, v} is a linearly independent subset) and, therefore, x ∈ Span({u, v}).

Theorem 2.10.3. Let X be a non-degenerate inner product space and (xn)n∈N ⊆ X. Then there exists a countable orthonormal system (ej)j such that, for every n ∈ N, xn ∈ Span((ej)j).

Proof. Suppose, without loss of generality, that xn 6= 0 for every n ∈ N.

• If x1 is not neutral, let e1 = |[x1, x1]|^{−1/2}x1. Otherwise, by lemma 2.10.2, there is an orthonormal system {e1, e2} such that x1 ∈ Span({e1, e2}).

• Suppose that for some n ∈ N there is a finite orthonormal system {e1, . . . , ek} such that its span Uk contains the vectors x1, . . . , xn. If xj ∈ Uk for every j ∈ N, then the assertion is clear. Otherwise, let jn be the first index with xjn ∉ Uk and let yn = xjn − Σ^k_{j=1} [ej, ej][xjn, ej]ej; then yn ≠ 0 and yn ⊥ Uk (since [ej, ej]² = 1 for every j). Since Uk = Span({e1})⊕˙ · · · ⊕˙ Span({ek}), by corollary 2.4.3, Uk is non-degenerate and, by corollary 2.9.5, Uk is ortho-complemented; therefore Uk⊥ is non-degenerate. If yn is not neutral, let ek+1 = |[yn, yn]|^{−1/2}yn. Otherwise, by lemma 2.10.2, there is an orthonormal system {ek+1, ek+2} ⊆ Uk⊥ such that yn ∈ Span({ek+1, ek+2}). In either case, a finite extension of the orthonormal system {e1, . . . , ek} is obtained whose span contains xn+1.
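The inductive construction in the proof above amounts to a Gram–Schmidt procedure in which the signs [ej, ej] = ±1 must be taken into account. The sketch below is a simplified, hypothetical version that assumes no neutral residual vector ever appears (so lemma 2.10.2 is never needed); the signature and the input vectors are choices for illustration:

```python
import numpy as np

# Hypothetical indefinite inner product on R^3 with signature (+, -, +).
G = np.diag([1.0, -1.0, 1.0])
ip = lambda x, y: x @ G @ y

def orthonormalize(vectors):
    """Indefinite Gram-Schmidt: e_{k+1} = y/|[y, y]|^(1/2) with
    y = x - sum_j [e_j, e_j][x, e_j] e_j, as in the proof of theorem 2.10.3."""
    es = []
    for x in vectors:
        y = x - sum(ip(e, e) * ip(x, e) * e for e in es)
        s = ip(y, y)
        if abs(s) > 1e-12:          # assume y is never neutral in this sketch
            es.append(y / np.sqrt(abs(s)))
    return es

es = orthonormalize([np.array([1.0, 0.5, 0.0]),
                     np.array([0.0, 2.0, 1.0]),
                     np.array([1.0, 1.0, 1.0])])
```

The result satisfies |[ei, ej]| = δij; note that [e, e] may equal −1 for some of the vectors, which is exactly what distinguishes an orthonormal system in an indefinite space from the Hilbert space case.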

Chapter 3

Linear Operators on Inner Product Spaces

3.1 Linear Operators

Definition 3.1.1. Let X be a vector space and let T be a linear operator acting in X. Then:

i) T is invertible if and only if T is injective [that is, ker(T ) = {0}].

ii) T is completely invertible if and only if D(T ) = R(T ) = X and T is invertible.

Definition 3.1.2. Let T be a linear operator acting in a vector space X and let λ ∈ C. Then:

i) The subspace Xλ := ⋃_{n∈N} ker (T − λI)^n is called the principal subspace (or root subspace) of T associated to λ, and its non-zero vectors are called the principal vectors (or root vectors) of T associated to λ. The dimension of Xλ is called the algebraic multiplicity of λ.

ii) The complex number λ is called an eigenvalue of T if and only if the linear operator T − λI is not invertible. In this case the subspace ker(T − λI) is called the eigenspace associated to λ, and its non-zero elements are called the eigenvectors of T associated to λ. The dimension of ker(T − λI) is called the geometric multiplicity of λ.

Remarks:

• Clearly ker(T − λI) ⊆ Xλ and, therefore, the geometric multiplicity is less than or equal to the algebraic multiplicity for every λ ∈ C.

• Clearly if ker(T − λI) ≠ {0} then Xλ ≠ {0}; conversely, if Xλ ≠ {0} then ker(T − λI) ≠ {0} [let x ≠ 0 be such that x ∈ Xλ and let p ∈ N be the minimum positive integer such that (T − λI)^p x = 0; if p = 1 then x ∈ ker(T − λI) and if p > 1 then y := (T − λI)^{p−1}x ≠ 0 and y ∈ ker(T − λI)].

• An eigenvalue λ of T is called semi-simple if, and only if, Xλ = ker(T − λI).

• Clearly, the principal subspaces of a linear operator associated to distinct eigenvalues are linearly independent.

• The set of all eigenvalues of the operator T is denoted by σp(T ).

Definition 3.1.3. Let T be a linear operator acting in a vector space X and let λ ∈ C be an eigenvalue of T. A Jordan chain of T associated to λ with length p ∈ N is a finite sequence of non-zero vectors in Xλ, (x1, . . . , xp), such that xk+1 = (T − λI)xk (k = 1, . . . , p − 1) and (T − λI)xp = 0.

Proposition 3.1.4. Let (x1, . . . , xp) be a Jordan chain of T associated to the eigenvalue λ ∈ C. Then x1, . . . , xp are linearly independent vectors.

Proof. Let α1, . . . , αp ∈ C be such that α1x1 + · · · + αpxp = 0; applying (T − λI)^{p−1} we obtain α1xp = 0 and, as xp ≠ 0, α1 = 0. Let k ≤ p and suppose that α1 = · · · = αk−1 = 0. Applying (T − λI)^{p−k} we obtain αkxp = 0 and thus αk = 0. Therefore x1, . . . , xp are linearly independent vectors.
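A minimal numerical illustration of a Jordan chain and of proposition 3.1.4 (the 3×3 matrix below, built from the nilpotent shift, is a hypothetical example chosen for this sketch):

```python
import numpy as np

# Hypothetical operator T = lam*I + N on R^3, with N the nilpotent shift.
lam = 2.0
N = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
T = lam * np.eye(3) + N

# A Jordan chain for lam (definition 3.1.3): x_{k+1} = (T - lam*I) x_k.
x1 = np.array([1.0, 0.0, 0.0])
x2 = (T - lam * np.eye(3)) @ x1
x3 = (T - lam * np.eye(3)) @ x2
chain = np.column_stack([x1, x2, x3])
```

The chain (x1, x2, x3) consists of principal vectors of T for λ = 2 with (T − λI)x3 = 0, and the rank computation confirms their linear independence.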

Definition 3.1.5. Let T be a linear operator acting in a vector space X and let U ⊆ X be a subspace. We say that the subspace U is invariant under T (or is T-invariant) if and only if T (U ∩ D(T )) ⊆ U.

Definition 3.1.6. Let T be a linear operator acting in X and U1, . . . , Un ⊆ X be subspaces such that X = U1 ∔ · · · ∔ Un. The direct decomposition X = U1 ∔ · · · ∔ Un reduces the operator T if and only if:

1) the subspaces U1, . . . , Un are T-invariant;

2) D(T ) = (D(T ) ∩ U1) ∔ · · · ∔ (D(T ) ∩ Un).

In this case T is called the direct sum of the linear operators TU1 , . . . , TUn (the restrictions of T to the subspaces Ui).

Remarks:

• Note that if Y, U1, . . . , Un ⊆ X are subspaces with X = U1 ∔ · · · ∔ Un then Y ⊇ (Y ∩ U1) ∔ · · · ∔ (Y ∩ Un), but it is not necessarily the case that Y = (Y ∩ U1) ∔ · · · ∔ (Y ∩ Un). For example, take X = C², U1 = {(u, 0) : u ∈ C}, U2 = {(0, v) : v ∈ C} and Y = {(u, u) : u ∈ C}.

• Let U1, . . . , Un be subspaces of X such that X = U1 ∔ · · · ∔ Un and let T be a linear operator acting in X with D(T ) = (D(T ) ∩ U1) ∔ · · · ∔ (D(T ) ∩ Un). Then, given u ∈ D(T ), there exist unique ui, vi ∈ Ui (i = 1, . . . , n) such that u = u1 + · · · + un and T (u) = v1 + · · · + vn. Let Tji : D(T ) ∩ Ui −→ Uj be the linear operator u ∈ Ui ↦ Tji(u) ∈ Uj, where Tji(u) is the j-th component in the direct decomposition of T (u) in U1 ∔ · · · ∔ Un. So, if u = u1 + · · · + un ∈ (D(T ) ∩ U1) ∔ · · · ∔ (D(T ) ∩ Un) = D(T ), then

T (u) = Σᵢ T (ui) = Σᵢ Σⱼ Tji(ui) = Σⱼ Σᵢ Tji(ui)    (sums over i, j = 1, . . . , n).

3.2 Isometric Operators

Recall 3.2.1. A linear operator T : D(T ) ⊆ X → X is an isometry if and only if [T x, T y] = [x, y] for every x, y ∈ D(T ).

Remarks:

• Clearly, an isometric operator acting in an inner product space maps positive (negative, neutral) vectors to positive (negative, neutral) vectors.

• Note that an isometric linear operator need not be invertible. For example, in a non-zero neutral inner product space the mapping 0 is isometric but it is not invertible.

• If T is an invertible isometric linear operator, then it is clear that T −1 is isometric.

Proposition 3.2.2. Let T be an isometric linear operator acting in X and let u, v be isotropic and non-isotropic vectors of D(T ), respectively. Then T u, T v are isotropic and non-isotropic vectors of R(T ), respectively.

26 Proof. For every x ∈ D(T ), as T is isometric and u is an isotropic vector of D(T ), [T x, T u] = [x, u] = 0; so T u is an isotropic vector of R(T ).

As v ∈ D(T ) is a non-isotropic vector of D(T ), let x0 ∈ D(T ) such that [x0, v] 6= 0; then, since T is isometric, [T x0, T v] = [x0, v] 6= 0. So T v is a non-isotropic vector of R(T ).

Corollary 3.2.3. If T is an isometric linear operator acting in an inner product space X and D(T ) is non-degenerate, then T is invertible.

Proof. Let x ∈ D(T ) with x 6= 0; as x is non-isotropic then, by proposition 3.2.2, T x is non- isotropic and therefore T x 6= 0.

Theorem 3.2.4. Let T be an isometric operator acting in an inner product space X and let λ, μ ∈ C be eigenvalues of T such that λμ̄ ≠ 1. Then Xλ ⊥ Xμ.

Proof. Given u ∈ Xλ and v ∈ Xμ, there are p, q ∈ N such that (T − λ)^p u = 0 and (T − μ)^q v = 0; we have to show that [u, v] = 0. By induction on n := p + q:

• If n = 2 then p = q = 1; so if u ∈ Xλ and v ∈ Xμ are such that (T − λ)u = 0 and (T − μ)v = 0, then [u, v] = [T u, T v] = [λu, μv] = λμ̄[u, v]. Thus, as λμ̄ ≠ 1, [u, v] = 0.

• If n ≥ 2, suppose that the assertion is true for all p + q ≤ n. Now assume that u ∈ Xλ, v ∈ Xμ are such that (T − λ)^p u = 0 and (T − μ)^q v = 0 with p + q = n + 1. So, with u′ = (T − λ)u and v′ = (T − μ)v, we have [u, v′] = [u′, v] = [u′, v′] = 0 and, therefore, [u, v] = [T u, T v] = [u′ + λu, v′ + μv] = λμ̄[u, v]. Thus [u, v] = 0 since λμ̄ ≠ 1.

Corollary 3.2.5. If T is an isometric operator acting in an inner product space X and λ ∈ C with |λ| ≠ 1 is an eigenvalue of T , then Xλ is neutral.
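Corollary 3.2.5 can be observed in a concrete two-dimensional example. Everything below is a hypothetical choice for illustration: the inner product [x, y] = x1y2 + x2y1 on R² and the operator T = diag(2, 1/2):

```python
import numpy as np

# Hypothetical indefinite inner product on R^2: [x, y] = x1*y2 + x2*y1.
G = np.array([[0.0, 1.0], [1.0, 0.0]])
ip = lambda x, y: x @ G @ y

T = np.diag([2.0, 0.5])        # eigenvalues 2 and 1/2, both of modulus != 1

x = np.array([1.0, 3.0])
y = np.array([-2.0, 5.0])

e1 = np.array([1.0, 0.0])      # eigenvector for the eigenvalue 2
e2 = np.array([0.0, 1.0])      # eigenvector for the eigenvalue 1/2
```

T is isometric ([Tx, Ty] = [x, y], as the first assertion checks), and both eigenspaces Span({e1}) and Span({e2}) are neutral, as corollary 3.2.5 predicts. Note also that [e1, e2] = 1, consistent with theorem 3.2.4: for λ = 2 and μ = 1/2 one has λμ̄ = 1, so these two eigenspaces need not be orthogonal to each other.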

Proposition 3.2.6. Let T be an isometric operator acting in an inner product space X and let U ⊆ X be a subspace such that U ⊆ T (U ∩ D(T )). Then U⊥ is T-invariant.

Proof. Let v ∈ U ⊥ ∩ D(T ). If u ∈ U then there exists w ∈ U ∩ D(T ) such that u = T w; so [u, T v] = [T w, T v] = [w, v] = 0. Thus, T v ∈ U ⊥.

Proposition 3.2.7. Let T be an isometric operator acting in an inner product space X and let λ ∈ C be an eigenvalue of T . If ker(T − λI) is a definite subspace then |λ| = 1 and λ is semi-simple.

Proof. Suppose that ker(T − λ) is a definite subspace; then, by corollary 3.2.5, |λ| = 1. Now suppose, towards a contradiction, that λ is not semi-simple. Let x ∈ Xλ \ ker(T − λ) and p ≥ 2 be such that (T − λ)^p x = 0 and (T − λ)^{p−1}x ≠ 0. Then, since T is isometric and |λ| = 1,

0 = −[(T − λ)^p x, λT (T − λ)^{p−2}x]
= −[T (T − λ)^{p−1}x, λT (T − λ)^{p−2}x] + λ[(T − λ)^{p−1}x, λT (T − λ)^{p−2}x]
= −[(T − λ)^{p−1}x, λ(T − λ)^{p−2}x] + [(T − λ)^{p−1}x, T (T − λ)^{p−2}x]
= [(T − λ)^{p−1}x, (T − λ)^{p−1}x] ≠ 0,

a contradiction, where the last inequality holds because (T − λ)^{p−1}x is a non-zero vector of the definite space ker(T − λ).

3.3 Symmetric Operators

Definition 3.3.1. Let T be a linear operator acting in an inner product space X. Then T is called symmetric if and only if [T x, y] = [x, T y] for every x, y ∈ D(T ).

Remarks:

• If T is a symmetric operator acting in a Hilbert space then σp(T ) ⊆ R, but in an arbitrary inner product space this can be false. For example, in a two-dimensional inner product space (X, [·, ·]) with basis {e1, e2} such that [e1, e1] = [e2, e2] = 0 and [e1, e2] = 1, the linear operator T which maps e1 to ie1 and e2 to −ie2 is symmetric (since [T (α1e1 + α2e2), β1e1 + β2e2] = [iα1e1 − iα2e2, β1e1 + β2e2] = iα1β̄2 − iα2β̄1 = [α1e1 + α2e2, iβ1e1 − iβ2e2] = [α1e1 + α2e2, T (β1e1 + β2e2)]), but its eigenvalues are ±i ∈ C \ R.

• If T is a symmetric operator acting in a Hilbert space H and λ, μ are eigenvalues of T with λ ≠ μ then Hλ ⊥ Hμ. The following proposition generalizes this to arbitrary inner product spaces.
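The example in the first remark can be checked numerically. The sketch below is hypothetical only in its concrete test vectors; the space C² with [e1, e1] = [e2, e2] = 0, [e1, e2] = 1 and the operator T e1 = ie1, T e2 = −ie2 are taken from the remark:

```python
import numpy as np

# The inner product of the remark on C^2: [x, y] = x1*conj(y2) + x2*conj(y1).
G = np.array([[0.0, 1.0], [1.0, 0.0]])
ip = lambda x, y: x @ G @ np.conj(y)

T = np.diag([1j, -1j])         # T e1 = i*e1, T e2 = -i*e2

e1 = np.array([1.0 + 0j, 0.0 + 0j])

# Hypothetical test vectors:
x = np.array([1.0 + 2.0j, 3.0 + 0j])
y = np.array([0.0 + 2.0j, 1.0 - 1.0j])
```

The assertions confirm that T is symmetric, [Tx, y] = [x, Ty], while its eigenvalue i is non-real and the eigenspace Span({e1}) is neutral, as corollary 3.3.3 below requires.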

Proposition 3.3.2. Let T be a symmetric operator acting in an inner product space X and let λ, μ ∈ C be eigenvalues of T such that λ ≠ μ̄. Then Xλ ⊥ Xμ.

Proof. Suppose, without loss of generality, that λ ≠ 0. Let u ∈ Xλ, v ∈ Xμ and p, q ∈ N be such that (T − λ)^p u = 0 and (T − μ)^q v = 0. We have to show that [u, v] = 0. By induction on n := p + q:

• If n = 2 then p = q = 1; so if u ∈ Xλ and v ∈ Xμ are such that (T − λ)u = 0 and (T − μ)v = 0, then [u, v] = (1/λ)[λu, v] = (1/λ)[T u, v] = (1/λ)[u, T v] = (1/λ)[u, μv] = (μ̄/λ)[u, v]. Thus, as μ̄/λ ≠ 1, [u, v] = 0.

• If n ≥ 2, suppose that the assertion is true for all p + q ≤ n. Now let u ∈ Xλ, v ∈ Xμ and assume (T − λ)^p u = 0 and (T − μ)^q v = 0 for integers p, q such that p + q = n + 1. So, with u′ = (T − λ)u and v′ = (T − μ)v, we have [u, v′] = [u′, v] = [u′, v′] = 0 and, therefore, [u, v] = (1/λ)[T u − u′, v] = (1/λ)[T u, v] = (1/λ)[u, T v] = (1/λ)[u, v′ + μv] = (1/λ)[u, μv] = (μ̄/λ)[u, v]. Thus, as μ̄/λ ≠ 1, [u, v] = 0.

Corollary 3.3.3. If λ is a non-real eigenvalue of a symmetric operator T acting in an inner product space X, then Xλ is neutral.

The following proposition is similar to the proposition 3.2.6.

Proposition 3.3.4. Let T be a symmetric operator acting in an inner product space X and let U ⊆ X be a T-invariant subspace such that U ⊆ D(T ). Then U⊥ is T-invariant.

Proof. Let v ∈ U ⊥ ∩ D(T ). For every u ∈ U, as U is T -invariant and U ⊆ D(T ), [u, T v] = [T u, v] = 0. Thus, U ⊥ is T -invariant.

Proposition 3.3.5. Let T be a symmetric operator acting in an inner product space X and let λ ∈ C be an eigenvalue of T . If ker(T − λI) is a definite subspace, then λ ∈ R and λ is semi-simple.

Proof. Suppose that ker(T − λ) is a definite subspace; then, by corollary 3.3.3, λ ∈ R.

Now suppose, towards a contradiction, that λ is not semi-simple. Let x ∈ Xλ \ ker(T − λ) and p ≥ 2 be such that (T − λ)^p x = 0 and (T − λ)^{p−1}x ≠ 0. So, since (T − λ)^{p−1}x is a non-zero vector of the definite space ker(T − λ) and λ is real,

0 ≠ [(T − λ)^{p−1}x, (T − λ)^{p−1}x] = [(T − λ)^{p−1}x, (T − λ)(T − λ)^{p−2}x]
= [(T − λ)^{p−1}x, T (T − λ)^{p−2}x] − λ[(T − λ)^{p−1}x, (T − λ)^{p−2}x]
= [T (T − λ)^{p−1}x, (T − λ)^{p−2}x] − λ[(T − λ)^{p−1}x, (T − λ)^{p−2}x]
= λ[(T − λ)^{p−1}x, (T − λ)^{p−2}x] − λ[(T − λ)^{p−1}x, (T − λ)^{p−2}x] = 0,

a contradiction.

Definition 3.3.6. Let (X, [·, ·]) be an inner product space and let G : X −→ X be a symmetric linear operator. In X we define an inner product [·, ·]G by [x, y]G := [Gx, y] for every x, y ∈ X. This inner product is called the G-inner product on X.

Remark. Every concept related to the G-inner product is prefixed by G and every symbol carries the subindex G. For example, the vectors x and y are G-orthogonal (written x ⊥G y) if and only if [x, y]G = 0.

Proposition 3.3.7. Let X be an inner product space; U, V ⊆ X be subspaces and G : X −→ X be a symmetric operator. If U ⊥ V and U is G-invariant, then U ⊥G V .

Proof. For every u ∈ U and v ∈ V we have [u, v]G = [Gu, v] = 0 since G(U) ⊆ U and U ⊥ V .

3.4 Orthogonal and Fundamental Projectors

Definition 3.4.1. Let X be an inner product space and let P : X −→ X be a linear operator. P is called an orthogonal projector in X if and only if P is symmetric and P 2 = P .

Theorem 3.4.2. Let P be an orthogonal projector in an inner product space X. Then R(P ) is ortho-complemented and, for every x ∈ X, P x is a projection of x on R(P ). Conversely, if X is a non-degenerate inner product space, U ⊆ X is an ortho-complemented subspace and PU is the mapping that carries each vector of X into its projection on U, then PU is an orthogonal projector in X with R(PU ) = U.

Proof. Let x ∈ X; then x = P x + (x − P x) and, for every z ∈ X, as P is an orthogonal projector, [P z, x − P x] = [z, P x − P ²x] = [z, P x − P x] = 0. So x = P x + (x − P x) with P x ∈ R(P ) and x − P x ∈ R(P )⊥. Thus R(P ) is ortho-complemented and, for every x ∈ X, P x is a projection of x on R(P ).

Conversely, suppose that X is a non-degenerate inner product space and U ⊆ X is an ortho-complemented subspace; then U is non-degenerate and X = U⊕˙ U⊥. So the mapping PU that carries each vector of X into its projection on U is well-defined; clearly it is linear, R(PU ) = U and PU² = PU. Now, if x = u + v ∈ U⊕˙ U⊥ and y = z + w ∈ U⊕˙ U⊥ then [PU x, y] = [u, z + w] = [u, z] and [x, PU y] = [u + v, z] = [u, z]; thus PU is symmetric. Therefore PU is an orthogonal projector in X.

Remark. Let X be a non-degenerate and decomposable inner product space and let X = X+⊕˙ X− be a fundamental decomposition of X. Then, by theorem 3.4.2, there exist orthogonal projectors P+, P− in X such that R(P+) = X+ and R(P−) = X−. Clearly, P+ + P− = I and P+P− = P−P+ = 0.

Definition 3.4.3. The orthogonal projectors, P + and P −, in the preceding remark are called the fundamental projectors associated with the fundamental decomposition X = X+⊕˙ X−.

Remark. Let X be a non-degenerate and decomposable inner product space and let P+ and P− be the fundamental projectors associated with the fundamental decomposition X = X+⊕˙ X−. Let ‖·‖, ‖·‖X+ and ‖·‖X− be the norms of the remarks after theorem 2.9.3; then, for every x ∈ X, ‖P+x‖ = ‖x+‖ = ‖x+‖X+ ≤ ‖x‖ and therefore P+ is a bounded linear operator on the normed space (X, ‖·‖). Analogously, so is P−.

Theorem 3.4.4. Let X be a non-degenerate and decomposable inner product space, let X = X+⊕˙ X− be a fundamental decomposition of X, and let P+, P− be the fundamental projectors associated with X = X+⊕˙ X−. If U, V ⊆ X are positive and negative semi-definite subspaces, respectively, then P+|U and P−|V are invertible. Further, dim(U ) ≤ dim(X+) and dim(V ) ≤ dim(X−).

Proof. Let U be a positive semi-definite subspace of X. Let u ∈ U, u = u+ + u− where u+ ∈ X+ and u− ∈ X−. If P+u = 0 then 0 = P+(u+ + u−) = u+; therefore u = u− and so u ∈ U ∩ X−. Hence u = 0 (since U is positive semi-definite and X− is negative definite); that is, P+|U is invertible. Analogously, if V is a negative semi-definite subspace of X then P−|V is invertible.

Now, clearly, dim(U ) = dim(P+(U )) ≤ dim(X+) since P+|U is injective and P+(U ) ⊆ X+. Analogously, dim(V ) ≤ dim(X−).

Remark. Clearly, theorem 3.4.4 holds if U and V are definite subspaces.

3.5 Fundamental Symmetries

Definition 3.5.1. Let X be a non-degenerate and decomposable inner product space, X = X+⊕˙ X− a fundamental decomposition of X and P +, P − the fundamental projectors associated with X = X+⊕˙ X−. The operator J := P + − P − is called the fundamental symmetry associated with X = X+⊕˙ X−.

Remarks:

• If J is the fundamental symmetry associated with the fundamental decomposition X = X+⊕˙ X−, then for every x = x+ +x− we have Jx = x+ −x−, where x+ ∈ X+ and x− ∈ X−.

• Clearly, if X+ ≠ {0} then 1 is an eigenvalue of J and its associated eigenspace is X+; similarly, if X− ≠ {0} then −1 is an eigenvalue of J and its associated eigenspace is X−. Further, σ(J) = σp(J) ⊆ {−1, 1}.

Thus, if the fundamental symmetry associated with a fundamental decomposition is known, then the fundamental decomposition can be determined, since x+ = ½(x + Jx) and x− = ½(x − Jx) for every x ∈ X.
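The notions of fundamental projectors and fundamental symmetry can be illustrated with a small numerical sketch. The decomposition below is hypothetical (R² with [x, y] = x1y1 − x2y2, X+ = Span({(2, 1)}) and X− = Span({(1, 2)}), which are indeed [·, ·]-orthogonal, positive definite and negative definite):

```python
import numpy as np

# Hypothetical indefinite inner product on R^2: [x, y] = x1*y1 - x2*y2.
G = np.diag([1.0, -1.0])
ip = lambda x, y: x @ G @ y

# Columns: a basis of X+ and a basis of X-  ([u, u] = 3, [v, v] = -3, [u, v] = 0).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The fundamental symmetry J acts as +1 on X+ and as -1 on X-:
J = M @ np.diag([1.0, -1.0]) @ np.linalg.inv(M)

x = np.array([1.0, 4.0])
y = np.array([3.0, -2.0])
```

The assertions check J⁻¹ = J and that J is isometric and symmetric (proposition 3.5.2), and that the J-inner product [x, x]J = [Jx, x] is positive (proposition 3.5.4). The fundamental projectors are recovered as P± = ½(I ± J).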

Proposition 3.5.2. Every fundamental symmetry J of a non-degenerate and decomposable inner product space X is completely invertible with J −1 = J. Moreover, J is symmetric and isometric.

Proof. Let X = X+⊕˙ X− be a fundamental decomposition of X and let J be its associated fundamental symmetry. Clearly, D(J) = R(J) = X and J² = I; therefore J is completely invertible with J⁻¹ = J. If x = x+ + x− and y = y+ + y−, with x+, y+ ∈ X+ and x−, y− ∈ X−, then

[Jx, Jy] = [x+ − x−, y+ − y−] = [x+, y+] + [x−, y−] = [x+ + x−, y+ + y−] = [x, y]

32 and

[Jx, y] = [x+ − x−, y+ + y−] = [x+, y+] − [x−, y−] = [x+ + x−, y+ − y−] = [x, Jy].

Therefore J is symmetric as well as isometric.

Remarks:

• Let X = X+⊕˙ X− be a fundamental decomposition of a non-degenerate and decomposable inner product space X and let J be its associated fundamental symmetry. Since J is symmetric, as is shown in the proof of proposition 3.5.2, the J-inner product on X is given by the formula [x, y]J = [x+, y+] − [x−, y−] for every x = x+ + x− and y = y+ + y−, where x+, y+ ∈ X+ and x−, y− ∈ X−.

• Note that, since J² = I, [Jx, y]J = [J²x, y] = [x, y] for every x, y ∈ X.

Proposition 3.5.3. Let X = X+⊕˙ X− be a fundamental decomposition of a non-degenerate and decomposable inner product space X and let J be its associated fundamental symmetry. Then X+ and X− are J-orthogonal.

Proof. Let x ∈ X+ and y ∈ X−; then x+ = x, x− = 0, y+ = 0 and y− = y. Thus, [x, y]J = [x+, y+] − [x−, y−] = 0.

The following proposition is very important because it will enable us later to build pre-Hilbert spaces from non-degenerate and decomposable inner product spaces.

Proposition 3.5.4. Let X = X+⊕˙ X− be a fundamental decomposition of a non-degenerate and decomposable inner product space X and let J be its associated fundamental symmetry. Then the J-inner product is positive definite.

Proof. Let x ∈ X be such that [x, x]J = 0 and let x+ ∈ X+ and x− ∈ X− be such that x = x+ + x−. Since [x, x]J = [x+, x+] − [x−, x−] = |[x+, x+]| + |[x−, x−]|, it follows that [x+, x+] = [x−, x−] = 0. Thus, since X+ and X− are definite subspaces, x+ = x− = 0 and, therefore, x = 0. Hence, the J-inner product is definite and, clearly, it is positive.

33 Remarks:

• Let X be a non-degenerate and decomposable inner product space. If X = X+⊕˙ X− is a fundamental decomposition of X and J is its associated fundamental symmetry then, since the J-inner product is positive definite, we can define a norm on X, called the J-norm and denoted by ‖·‖J, by the formula ‖x‖J = √[x, x]J, x ∈ X. Note that, for every x+ ∈ X+ and x− ∈ X−, we have ‖x+‖J² = [x+, x+]J = [x+, x+], ‖x−‖J² = [x−, x−]J = −[x−, x−] and, therefore, ‖x+ + x−‖J² = [x+ + x−, x+ + x−]J = [x+, x+] − [x−, x−] = ‖x+‖J² + ‖x−‖J².

• Let X = X+⊕˙ X− be a fundamental decomposition of a non-degenerate and decomposable inner product space (X, [·, ·]) and let J be its fundamental symmetry. Then, for every x, y ∈ X, we have [x, y] = [Jx, y]J = [x+ − x−, y+ + y−]J = [x+, y+]J − [x−, y−]J, where x = x+ + x− and y = y+ + y− with x+, y+ ∈ X+ and x−, y− ∈ X−. In particular, [x, x] = ‖x+‖J² − ‖x−‖J² and therefore x is positive (negative, neutral) if and only if ‖x+‖J > ‖x−‖J (respectively ‖x+‖J < ‖x−‖J, ‖x+‖J = ‖x−‖J).

• If P+ and P− are the fundamental projectors associated with the fundamental decomposition X = X+⊕˙ X−, and J is its fundamental symmetry, then ‖P+x‖J = ‖x+‖J ≤ ‖x‖J and ‖P−x‖J = ‖x−‖J ≤ ‖x‖J. That is, P+ and P− are bounded operators acting in the normed space (X, ‖·‖J). Therefore, J is also a bounded linear operator acting in the normed space (X, ‖·‖J).

• Let U ⊆ X be a positive semi-definite subspace. By theorem 3.4.4, P+|U is invertible and, by the previous remark, it is continuous. Let u ∈ U. Then u is non-negative, so ‖u+‖J ≥ ‖u−‖J; hence ‖P+u‖J² = ‖u+‖J² = ½(‖u+‖J² + ‖u+‖J²) ≥ ½(‖u+‖J² + ‖u−‖J²) = ½‖u‖J². Therefore, P+|U has a bounded inverse.

• Analogously to the previous remark, if V is a negative semi-definite subspace then P−|V is invertible and bounded with bounded inverse.

By the remarks above, the next proposition holds:

Proposition 3.5.5. Let (X, [·, ·]) be a non-degenerate and decomposable inner product space, X = X+⊕˙ X− a fundamental decomposition of X, P+, P− its associated fundamental projectors and J its associated fundamental symmetry. If U ⊆ X is a positive semi-definite subspace and V ⊆ X is a negative semi-definite subspace, then P+|U is a homeomorphism between U and P+(U ), and P−|V is a homeomorphism between V and P−(V ) (with respect to the J-norm).

Remark. Clearly, Proposition 3.5.5 also holds if U and V are definite subspaces.

Proposition 3.5.6. If (X, [·, ·]) is a non-degenerate and decomposable inner product space and J is the fundamental symmetry associated with the fundamental decomposition X = X+⊕˙ X−, then for every x, y ∈ X:

|[x, y]| ≤ kxkJ kykJ .

Proof. Since (X, [·, ·]J) is a semi-definite space, the Schwarz inequality gives, for every x, y ∈ X:

|[x, y]| = |[Jx, y]J | ≤ kJxkJ kykJ = kxkJ kykJ .

Theorem 3.5.7. Let (X, [·, ·]) be a non-degenerate and decomposable inner product space and let k·kj (j = 1, 2) be a norm on X such that (X, k·kj) is a Banach space and |[x, y]| ≤ αjkxkjkykj for every x, y ∈ X, where αj is a positive real constant. Then k·k1 and k·k2 are equivalent.

Proof. Let k·k = k·k1 + k·k2. If (xn)n∈N is a Cauchy sequence in X with respect to the norm k·k, then clearly it is a Cauchy sequence in X with respect to the norm k·kj, j = 1, 2. Let uj ∈ X be such that kxn − ujkj → 0 (n → ∞), j = 1, 2.

Note that, for every y ∈ X, |[u1 − u2, y]| ≤ |[u1 − xn, y]| + |[xn − u2, y]| ≤ α1kxn − u1k1kyk1 +

α2kxn − u2k2kyk2 → 0 (n → ∞). So, |[u1 − u2, y]| = 0 for every y ∈ X. Then, since X is non-degenerate, u1 = u2 and, therefore, xn → u1 in the norm k·k. That is, (X, k·k) is a Banach space.

For j = 1, 2, as k·kj ≤ k·k then, by theorem A.1.3, k·kj is equivalent to k·k. Hence, k·k1 and k·k2 are equivalent.

Theorem 3.5.8. Let X be a non-degenerate and decomposable inner product space and let U be a subspace. Then U has a dual companion which is isometrically isomorphic to U.

Proof. Let X = X+⊕˙ X− be a fundamental decomposition of X and let J be its associated fundamental symmetry; let V = J(U). Clearly V is isometrically isomorphic to U (since J is an isometric isomorphism).

If x ∈ U ∩ V⊥ then x ⊥ Jx and therefore [x, x]J = [Jx, x] = 0. Since the J-inner product is definite, this implies x = 0.

If x ∈ U⊥ ∩ V, there exists y ∈ U such that x = Jy and therefore y ⊥ Jy; so [y, y]J = [Jy, y] = 0 and, since the J-inner product is definite, it follows that y = 0 and thus x = 0. Therefore U ∩ V⊥ = U⊥ ∩ V = {0}; that is, U and V are dual companions.

Definition 3.5.9. Let (X, [·, ·]) be a non-degenerate and decomposable inner product space with a fundamental decomposition X = X+⊕˙ X− and let J be its associated fundamental symmetry. The subspace U ⊆ X is called uniformly positive (resp. uniformly negative) if and only if there

exists γ > 0 such that [x, x] ≥ γkxk2J (resp., [x, x] ≤ −γkxk2J) for every x ∈ U. A subspace which is uniformly positive or uniformly negative is called uniformly definite.

Proposition 3.5.10. Let (X, [·, ·]) be a non-degenerate and decomposable inner product space with a fundamental decomposition X = X+⊕˙ X− and let J be its associated fundamental symmetry. Then X+ and X− are uniformly definite subspaces of X.

Proof. Note that, for every x ∈ X+, [x, x] = [Jx, x] = [x, x]J = kxk2J. Hence, X+ is uniformly positive. Analogously, X− is uniformly negative.

Remarks:

• Clearly, if U ⊆ X is uniformly positive (uniformly negative) then every subspace of U is so.

• By Proposition 3.5.6, the map x ∈ X 7→ [x, x] ∈ R is continuous (since for every x, y ∈ X, |[x, x] − [y, y]| = |[x − y, x] + [y, x − y]| ≤ |[x − y, x]| + |[y, x − y]| ≤ kx − ykJ kxkJ + kykJ kx − ykJ). Therefore, if U ⊆ X is a uniformly positive (uniformly negative) subspace, then its closure is so.

The previous remark immediately implies the following proposition:

Proposition 3.5.11. Let (X, [·, ·]) be a non-degenerate and decomposable inner product space with fundamental decomposition X = X+⊕˙ X− and let J be its associated fundamental symmetry. Then a subspace U is uniformly definite if and only if its closure is uniformly definite.

3.6 Angular Operators

Remark. Let X be a non-degenerate and decomposable inner product space with fundamental decomposition X = X+⊕˙ X− and let P + and P − be its associated fundamental projectors. Let U, V be subspaces of X:

• If P+|U is invertible, then for every w ∈ P+(U) there exists a unique x ∈ U such that w = P+x. Therefore,

K+ : P+(U) −→ X, P+x 7−→ P−x

is a well-defined linear operator such that R(K+) = P−(U) and, for every x ∈ U, K+P+x = P−x. Note that:

U = {P+x + P−x | x ∈ U} = {P+x + K+P+x | x ∈ U} = {w + K+w | w ∈ P+(U)}.

Conversely, if there exist a subspace W ⊆ X+ and a linear operator K+ : W −→ X with R(K+) ⊆ X− such that U = {w + K+w | w ∈ W}, then P+|U is invertible (since, for every w ∈ W, P+(w + K+w) = 0 implies w = 0) and, clearly, W = P+(U). Moreover, note that K+(P+x) = P−x for every x ∈ U; so R(K+) = P−(U) and K+P+x = P−x for every x ∈ U.

• Analogously, if P−|V is invertible, then for every z ∈ P−(V) there exists a unique x ∈ V such that z = P−x. Therefore,

K− : P−(V) −→ X, P−x 7−→ P+x

is a well-defined linear operator such that R(K−) = P+(V) and, for every x ∈ V, K−P−x = P+x.

Conversely, if there exist a subspace Z ⊆ X− and a linear operator K− : Z −→ X with R(K−) ⊆ X+ such that V = {z + K−z | z ∈ Z}, then P−|V is invertible (since, for every z ∈ Z, P−(z + K−z) = 0 implies z = 0) and, clearly, Z = P−(V). Moreover, note that K−(P−x) = P+x for every x ∈ V; so R(K−) = P+(V) and K−P−x = P+x for every x ∈ V.

Definition 3.6.1. The operator K+ (resp. K−) in the previous remark is called the angular operator of the subspace U (resp. V ) with respect to X+ (resp. X−).

The next theorem summarizes the previous remarks:

Theorem 3.6.2. Let X be a non-degenerate and decomposable inner product space with fundamental decomposition X = X+⊕˙ X− and let P+, P− be its associated fundamental projectors. Let U, V be subspaces of X.

If P+|U and P−|V are invertible, then there exist W ⊆ X+ and Z ⊆ X− such that:

U = {w + K+w | w ∈ W} (3.1)

and

V = {z + K−z | z ∈ Z}, (3.2)

where K+ is the angular operator of U with respect to X+ and K− is the angular operator of V with respect to X−.

Conversely, if U can be written in the form (3.1) for some subspace W ⊆ X+ and some linear operator K+ : W −→ X with R(K+) ⊆ X−, then P+|U is invertible and K+ is the angular operator of U with respect to X+. Analogously, if V can be written in the form (3.2) for some subspace Z ⊆ X− and some linear operator K− : Z −→ X with R(K−) ⊆ X+, then P−|V is invertible and K− is the angular operator of V with respect to X−.
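The correspondence between subspaces and angular operators can be checked in a small finite-dimensional model (an illustration with assumed data, not part of the text): take X = R2 with [x, y] = x1y1 − x2y2, X+ = span{e1}, X− = span{e2}, and the line U = span{(1, k)}; then the angular operator of U with respect to X+ is multiplication by k.

```python
# Minimal sketch (assumed setup): X = R^2 with [x, y] = x1*y1 - x2*y2,
# X+ = span{e1}, X- = span{e2}.  For U_k = {(t, k*t)}, P+ restricted to
# U_k is invertible, and the angular operator K+ : P+(U_k) -> X- acts as
# (t, 0) |-> (0, k*t), so that U_k = {w + K+w | w in P+(U_k)}.

def P_plus(x):    # fundamental projector onto X+
    return (x[0], 0.0)

def P_minus(x):   # fundamental projector onto X-
    return (0.0, x[1])

def angular_operator(k):
    """Angular operator of U_k = span{(1, k)} with respect to X+ (hypothetical helper)."""
    def K(w):               # w = (t, 0) in P+(U_k)
        return (0.0, k * w[0])
    return K

k = 0.5
K = angular_operator(k)
u = (2.0, k * 2.0)                                   # a point of U_k
w = P_plus(u)
assert (w[0] + K(w)[0], w[1] + K(w)[1]) == u         # u = w + K+ w
assert K(w) == P_minus(u)                            # K+ P+ u = P- u
```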

Theorem 3.6.3. Let X be a non-degenerate and decomposable inner product space with fundamental decomposition X = X+⊕˙ X−, let P+ and P− be its associated fundamental projectors and let J be its associated fundamental symmetry. If U and V are subspaces of X, then:

i) U is positive semi-definite if, and only if, the angular operator K+ of U with respect to X+ exists and kK+x+kJ ≤ kx+kJ for every x+ ∈ P+(U).

ii) U is positive definite if, and only if, the angular operator K+ of U with respect to X+ exists and kK+x+kJ < kx+kJ for every x+ ∈ P+(U) \ {0}.

iii) U is uniformly positive if, and only if, the angular operator K+ of U with respect to X+ exists and kK+k < 1.

iv) U is neutral if, and only if, the angular operator K+ of U with respect to X+ exists and kK+x+kJ = kx+kJ for every x+ ∈ P+(U).

v) V is negative semi-definite if, and only if, the angular operator K− of V with respect to X− exists and kK−x−kJ ≤ kx−kJ for every x− ∈ P−(V).

vi) V is negative definite if, and only if, the angular operator K− of V with respect to X− exists and kK−x−kJ < kx−kJ for every x− ∈ P−(V) \ {0}.

vii) V is uniformly negative if, and only if, the angular operator K− of V with respect to X− exists and kK−k < 1.

Proof. Let U and V be subspaces of X.

i) Suppose that U is positive semi-definite. By Theorem 3.4.4, P+|U is invertible and therefore K+ exists. Let x+ ∈ P+(U) and let x ∈ U be such that x+ = P+x; then kK+x+kJ = kK+P+xkJ = kP−xkJ ≤ kP+xkJ = kx+kJ, since U is positive semi-definite (see the remark after Proposition 3.5.4).

Now, suppose that the angular operator K+ of U with respect to X+ exists and kK+x+kJ ≤ kx+kJ for every x+ ∈ P+(U). Let x ∈ U; then kP−xkJ ≤ kP+xkJ and, therefore, [x, x] ≥ 0. That is, U is positive semi-definite.

The proofs for ii), iv), v) and vi) are similar to the proof of i).

iii) ⇐ Let 0 ≤ δ < 1 be such that kK+k = δ; then kK+x+kJ ≤ δkx+kJ ≤ kx+kJ for every x+ ∈ P+(U). So, by i), U is positive semi-definite and, therefore, kP+xkJ ≥ kP−xkJ for every x ∈ U.

Let x ∈ U; then

[x, x] = [Jx, x]J = [P+x − P−x, P+x + P−x]J = [P+x, P+x]J − [P−x, P−x]J
= kP+xk2J − kP−xk2J = kP+xk2J − kK+P+xk2J ≥ (1 − δ2)kP+xk2J
= ((1 − δ2)/2) · 2kP+xk2J ≥ ((1 − δ2)/2)(kP+xk2J + kP−xk2J) = ((1 − δ2)/2) kxk2J.

Hence, U is uniformly positive.

⇒ Since U is uniformly positive, it is a positive definite subspace. Thus, by ii), K+ exists and kK+k ≤ 1.

Let γ > 0 be such that, for every x ∈ U, γkxk2J ≤ [x, x]. Suppose, towards a contradiction, that kK+k = 1. Then there exists a bounded sequence (xn)n∈N ⊆ U such that kP+xnkJ = 1 for every n ∈ N and kK+P+xnkJ → 1 (n → ∞). Note that xn ↛ 0 and, therefore, there are a subsequence (xnk) and ε > 0 such that kxnk k2J ≥ ε for every k ∈ N. Then, for every k ∈ N:

εγ ≤ γkxnk k2J ≤ [xnk, xnk] = kP+xnk k2J − kK+P+xnk k2J = 1 − kK+P+xnk k2J → 0 (k → ∞),

a contradiction. Hence, kK+k < 1.

The proof of vii) is similar to the proof of iii).
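The criteria above can be made concrete in the finite-dimensional model used earlier (an illustration under assumed data, not from the text): for U_k = span{(1, k)} in R2 with [x, y] = x1y1 − x2y2, the angular operator has norm |k|, and the sign of [u, u] on U_k matches parts iii) and iv).

```python
# For u = (t, k*t) in U_k one has [u, u] = (1 - k^2) * t^2, so U_k is
# uniformly positive iff |k| < 1 (with constant gamma = (1-k^2)/(1+k^2))
# and neutral iff |k| = 1, matching Theorem 3.6.3 iii) and iv).

def bracket(x, y):
    return x[0] * y[0] - x[1] * y[1]

def u(k, t):
    return (t, k * t)

k, t = 0.5, 3.0
norm_J_sq = u(k, t)[0] ** 2 + u(k, t)[1] ** 2        # ||u||_J^2
gamma = (1 - k**2) / (1 + k**2)
# uniform positivity: [u, u] = gamma * ||u||_J^2 exactly on this line
assert abs(bracket(u(k, t), u(k, t)) - gamma * norm_J_sq) < 1e-9

# |k| = 1 gives a neutral subspace
assert bracket(u(1.0, t), u(1.0, t)) == 0.0
```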

Chapter 4

Krein Spaces

4.1 Krein Spaces

Definition 4.1.1. Let (H, [·, ·]) be a non-degenerate and decomposable inner product space. Then H is called a Krein space if, and only if, it has a fundamental decomposition H = H+⊕˙ H− such that (H+, [·, ·]) and (H−, −[·, ·]) are Hilbert spaces. This decomposition is called a Krein decomposition for H.

Remarks:

• Clearly, by Proposition 3.5.10, if H = H+⊕˙ H− is a Krein decomposition of the Krein space H, then H+ and H− are uniformly definite subspaces.

• Clearly, if H is a Hilbert space or H is the anti-space of a Hilbert space, then it is a Krein space.

Theorem 4.1.2. Let H be a Krein space, let H = H+⊕˙ H− be a Krein decomposition for H and J its associated fundamental symmetry. Then (H, [·, ·]J) is a Hilbert space (in this case H is called a J-space).

Proof. By Proposition 3.5.4, (H, [·, ·]J) is a pre-Hilbert space. Note that, for every x ∈ H, if x = x+ + x−, where x+ ∈ H+ and x− ∈ H−, then

kxk2J = kx+k2H+ + kx−k2H−. (4.1)

Let (xn)n∈N be a Cauchy sequence in (H, [·, ·]J). Then there exist sequences (un)n∈N ⊆ H+ and (vn)n∈N ⊆ H− such that xn = un + vn for each n ∈ N. By (4.1), (un)n∈N and (vn)n∈N are Cauchy sequences in H+ and H−, respectively, and therefore they are convergent in H+ and H−, respectively. Thus, again by (4.1), (xn)n∈N is convergent in the J-norm. Therefore (H, [·, ·]J) is a Hilbert space.
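A minimal numerical check of formula (4.1) (an assumed finite-dimensional example, not from the text): in H = R2 with [x, y] = x1y1 − x2y2 and J = diag(1, −1), the J-inner product is the Euclidean one and the J-norm splits along H+ = span{e1} and H− = span{e2}.

```python
# [x, y]_J = [Jx, y] with J = diag(1, -1) recovers the Euclidean inner
# product, and ||x||_J^2 = ||x+||^2 + ||x-||^2 as in (4.1).

def bracket(x, y):                  # indefinite inner product
    return x[0] * y[0] - x[1] * y[1]

def bracket_J(x, y):                # [x, y]_J = [Jx, y]
    return bracket((x[0], -x[1]), y)

x = (3.0, 4.0)
assert bracket_J(x, x) == 25.0                        # Euclidean norm squared
assert bracket_J(x, x) == x[0] ** 2 + x[1] ** 2       # (4.1): ||x+||^2 + ||x-||^2
```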

Corollary 4.1.3. Let H be a Krein space with Krein decomposition H = H+⊕˙ H− and associated fundamental symmetry J. Then J is a self-adjoint and unitary linear operator acting in the Hilbert space (H, [·, ·]J).

Proposition 4.1.4. Let H be a Krein space with Krein decomposition H = H+⊕˙ H−, let P+ and P− be its associated fundamental projectors and J its associated fundamental symmetry. Then P+ and P− are self-adjoint and, therefore, they are orthogonal projections in the Hilbert space (H, [·, ·]J).

Proof. Let x, y ∈ H with x = x+ + x− and y = y+ + y−, where x+, y+ ∈ H+ and x−, y− ∈ H−. Then

[P+x, y]J = [x+, y]J = [Jx+, y] = [x+, y+ + y−] = [x+, y+]

and

[x, P+y]J = [Jx, y+] = [x+ − x−, y+] = [x+, y+].

Thus, P+ is self-adjoint in the Hilbert space (H, [·, ·]J). The proof for P− is analogous.

Remark. Given a Hilbert space, we can construct a Krein space as shown below. Let (H, ⟨·, ·⟩) be a Hilbert space and let J : H −→ H be a self-adjoint and unitary operator. Since J is a bounded operator, its spectrum σ(J) is non-empty and σ(J) = σp(J) ⊆ {−1, 1}. In H define an inner product [·, ·] by the formula [x, y] = ⟨Jx, y⟩ for every x, y ∈ H.

• Suppose σ(J) = {−1} or σ(J) = {1}. Then J = −I or J = I, respectively, and, clearly, (H, [·, ·]) is a Krein space with fundamental symmetry J.

• Suppose σ(J) = {−1, 1}. Let H+ = ker(I − J), H− = ker(I + J), P+ = (1/2)(I + J) and P− = (1/2)(I − J). Clearly H+, H− ≠ {0} and, by the spectral theorem applied to J, H = H+ ⊕ H− (orthogonal sum in the Hilbert space (H, ⟨·, ·⟩)). Note that P+P+ = P+, P−P− = P−, P+P− = P−P+ = 0, P+ + P− = I and P+ − P− = J.

Clearly, P+ and P− are mutually orthogonal projections onto H+ and H− (since they are the projectors in the spectral decomposition of J). The spaces H+ and H− are also [·, ·]-orthogonal because for x ∈ H+ and y ∈ H− we obtain [x, y] = ⟨Jx, y⟩ = ⟨x, y⟩ = 0.

Note that [x, y] = ⟨x, y⟩ for every x, y ∈ H+ and [x, y] = −⟨x, y⟩ for every x, y ∈ H−. Further, since H+ and H− are closed subspaces of (H, ⟨·, ·⟩), it follows that (H+, [·, ·]) = (H+, ⟨·, ·⟩) and (H−, −[·, ·]) = (H−, ⟨·, ·⟩) are Hilbert spaces. Therefore, H is a non-degenerate and decomposable inner product space and H = H+⊕˙ H− is a fundamental decomposition of H (with respect to the inner product [·, ·]). In particular, H is a Krein space.

Clearly, I and J are [·, ·]-symmetric; hence P± = (1/2)(I ± J) are [·, ·]-symmetric too and therefore they are orthogonal projectors. Further, P+ and P− are the fundamental projectors associated with the Krein decomposition H = H+⊕˙ H− and J is its associated fundamental symmetry.

Now we obtain immediately the following theorem:

Theorem 4.1.5. Let (H, ⟨·, ·⟩) be a Hilbert space and let J : H −→ H be a self-adjoint and unitary operator. If [·, ·] is the inner product on H defined by the formula [x, y] := ⟨Jx, y⟩ for every x, y ∈ H, then (H, [·, ·]) is a Krein space with Krein decomposition H = H+⊕˙ H− and associated fundamental symmetry J. The spaces H+ and H− are given by H+ = R(P+) and H− = R(P−), where P+ = (1/2)(I + J) and P− = (1/2)(I − J).
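The construction of Theorem 4.1.5 can be sketched in finite dimensions (an illustration with an assumed matrix J, not from the text): from a self-adjoint unitary J we form P± = (1/2)(I ± J) and verify the projector identities used above.

```python
# Small matrix sketch of P+ = (I + J)/2 and P- = (I - J)/2 for a
# self-adjoint unitary J (here J = diag(1, 1, -1), an assumed example).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B, sb=1):                 # A + sb*B, entrywise
    n = len(A)
    return [[A[i][j] + sb * B[i][j] for j in range(n)] for i in range(n)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

J = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]    # self-adjoint and unitary: J^2 = I
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

P_plus = scale(0.5, add(I3, J))
P_minus = scale(0.5, add(I3, J, sb=-1))

assert matmul(J, J) == I3                         # J^2 = I
assert matmul(P_plus, P_plus) == P_plus           # P+ is idempotent
assert matmul(P_plus, P_minus) == scale(0.0, I3)  # P+ P- = 0
assert add(P_plus, P_minus) == I3                 # P+ + P- = I
assert add(P_plus, P_minus, sb=-1) == J           # P+ - P- = J
```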

Proposition 4.1.6. Let H be a Krein space with Krein decomposition H = H+⊕˙ H− and associated fundamental symmetry J. Then H+ and H− are closed subspaces in the topology induced by k·kJ on H.

Proof. Let (un)n∈N ⊆ H+ be a Cauchy sequence with respect to the J-norm; then, by equation (4.1), (un)n∈N is a Cauchy sequence in (H+, k·kH+). Since (H+, k·kH+) is a Banach space, there exists u0 ∈ H+ such that un → u0 in the norm k·kH+ and, again by equation (4.1), un → u0 in the J-norm. Thus H+ is a closed subspace in the topology induced by the J-norm on H. Analogously, H− is so.

Remark. Let (H, [·, ·]) be a Krein space with Krein decomposition H = H+⊕˙ H− and J its associated fundamental symmetry. By Theorem 4.1.2 and Proposition 4.1.6, (H+, [·, ·]J) and (H−, [·, ·]J) are Hilbert spaces and, moreover, it is clear that (H+, [·, ·]J) = (H+, [·, ·]) and (H−, [·, ·]J) = (H−, −[·, ·]). Further, by Proposition 3.5.3, H+ and H− are mutually J-orthogonal; thus H = H+ ⊕J H−.

Remark. Let (H, ⟨·, ·⟩) be a Hilbert space, with k·k the norm induced by ⟨·, ·⟩, and let G ∈ B(H) be a self-adjoint operator with 0 ∈ ρ(G). Let [·, ·]G be the inner product on H defined by the formula [x, y]G = ⟨Gx, y⟩, x, y ∈ H. Since 0 ∈ ρ(G), there exists α > 0 such that (−α, α) ⊆ ρ(G).

• If G > 0 (resp., G < 0), then G ≥ αI (resp., G ≤ −αI) and [·, ·]G is positive definite (resp., negative definite); its induced norm k·kG is equivalent to k·k (because αkxk2 ≤ kxk2G = |⟨Gx, x⟩| ≤ kGk kxk2 for x ∈ H). Thus (H, [·, ·]G) (resp., (H, −[·, ·]G)) is a Hilbert space and, therefore, (H, [·, ·]G) is a Krein space. Clearly, in this case, P+ = I, P− = 0 (resp., P+ = 0, P− = I) and, therefore, J = I (resp., J = −I). Thus, k·kJ and k·k are equivalent.

• Suppose that there are x0, y0 ∈ H such that ⟨Gx0, x0⟩ > 0 and ⟨Gy0, y0⟩ < 0; clearly σ(G) ∩ R>0 ≠ ∅ and σ(G) ∩ R<0 ≠ ∅. Let P+ and P− be the Riesz projections on σ(G) ∩ R>0 and σ(G) ∩ R<0, respectively. Then, by Theorem A.1.2, P+ and P− are orthogonal projections which commute with G, P+P− = P−P+ = 0, P+ + P− = I and H = H+⊕˙ H−, where H+ = R(P+) and H− = R(P−) are closed G-invariant subspaces. Further, since σ(G|H+) = σ(G) ∩ R>0 and σ(G|H−) = σ(G) ∩ R<0, we have ⟨Gx, x⟩ > 0 for every x ∈ H+ with kxk = 1 and ⟨Gx, x⟩ < 0 for every x ∈ H− with kxk = 1.

Clearly, [·, ·]G is indefinite. Note that, since H+ and H− are G-invariant, H+ ⊥G H−; hence H = H+⊕˙G H− and (H+, [·, ·]G) and (H−, −[·, ·]G) are Hilbert spaces. That is, (H, [·, ·]G) is a Krein space.

Clearly P+ and P− are the fundamental projectors associated with the Krein decomposition H = H+⊕˙G H−; let J be its associated fundamental symmetry. Clearly J is a bounded operator in the topology induced by k·k and, therefore, for every x ∈ H, kxk2J = [x, x]J = [Jx, x]G = ⟨GJx, x⟩ = |⟨GJx, x⟩| ≤ kGJxk kxk ≤ kGJk kxk2.

As G is J-self-adjoint (since [Gx, y]J = [JGx, y]G = ⟨GJGx, y⟩ = ⟨JGx, Gy⟩ = ⟨GJx, Gy⟩ = [Jx, Gy]G = [x, Gy]J) and D(G) = H, the operators G and G−1 are bounded in the topology induced by k·kJ. Thus, kxk2 = ⟨x, x⟩ = ⟨GG−1x, x⟩ = [G−1x, x]G = [J(JG−1x), x]G = [JG−1x, x]J ≤ kJG−1xkJ kxkJ ≤ kJG−1k kxk2J.

Hence, k·k and k·kJ are equivalent. Note that the equivalence of k·k and k·kJ is also a consequence of Theorem 3.5.7, since |[x, y]G| ≤ kxkJ kykJ and |[x, y]G| ≤ kGk kxk kyk for every x, y ∈ H.
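A finite-dimensional sketch of this construction (assumed data, not from the text): with G = diag(2, −3) on R2, the Riesz projections give H+ = span{e1}, H− = span{e2} and J = diag(1, −1); the G-inner product is indefinite while the J-norm ⟨GJx, x⟩ = 2x1² + 3x2² is equivalent to the Euclidean norm.

```python
# [x, y]_G = <Gx, y> with G = diag(2, -3); [x, y]_J = [Jx, y]_G with
# J = diag(1, -1).  The J-norm squared is 2*x1^2 + 3*x2^2, squeezed
# between 2*||x||^2 and 3*||x||^2.

def ip_G(x, y):                 # [x, y]_G = <Gx, y>
    return 2 * x[0] * y[0] - 3 * x[1] * y[1]

def ip_J(x, y):                 # [x, y]_J = [Jx, y]_G
    return ip_G((x[0], -x[1]), y)

x = (1.0, 1.0)
assert ip_G(x, x) == -1.0                        # indefinite: negative here
assert ip_G((1.0, 0.0), (1.0, 0.0)) == 2.0       # positive on H+
assert ip_J(x, x) == 5.0                         # 2 + 3: positive definite
# norm equivalence: 2*||x||^2 <= [x, x]_J <= 3*||x||^2
assert 2 * (x[0]**2 + x[1]**2) <= ip_J(x, x) <= 3 * (x[0]**2 + x[1]**2)
```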

4.2 Subspaces in Krein Spaces

Remark. Let (H, [·, ·]) be a J-space. Note that for every subspace U ⊆ H:

• J(U ⊥) = [J(U)]⊥ since x ∈ J(U ⊥) ⇔ Jx ∈ U ⊥ ⇔ [Jx, y] = 0 for every y ∈ U ⇔ [x, Jy] = 0 for every y ∈ U ⇔ x ∈ [J(U)]⊥.

• U⊥ = [J(U)]⊥J since x ∈ U⊥ ⇔ [x, y] = 0 for every y ∈ U ⇔ [Jx, Jy] = 0 for every y ∈ U ⇔ [x, Jy]J = 0 for every y ∈ U ⇔ x ∈ [J(U)]⊥J.

• U⊥J = [J(U)]⊥ since, by the preceding item, U⊥J = (J2(U))⊥J = (J(U))⊥.

Proposition 4.2.1. If U is a closed subspace of a J-space, then U ⊥ is closed too.

Proof. Since U is closed, J(U) is closed and, therefore, U ⊥ = [J(U)]⊥J is so.

Proposition 4.2.2. Let (H, [·, ·]) be a J-space and U ⊆ H be a closed subspace (in the topology induced by k·kJ). Then U⊥⊥ = U.

Proof. U⊥⊥ = [J(U⊥)]⊥J = [(J(U))⊥]⊥J = [(J2(U))⊥J]⊥J = [U⊥J]⊥J = U, where the last equality holds because [U⊥J]⊥J is the closure of U in the Hilbert space (H, [·, ·]J) and U is closed.

Remark. From the proof of Proposition 4.2.2, U⊥⊥ equals the closure of U for every subspace U of a J-space.

Proposition 4.2.3. Let H be a J-space and U be a closed subspace of H. Then U + U⊥ is dense in H if, and only if, U is non-degenerate.

Proof. By the preceding remark, Proposition 2.3.3 and the closedness of U, the closure of U + U⊥ is (U + U⊥)⊥⊥ = (U⊥ ∩ U⊥⊥)⊥ = (U⊥ ∩ U)⊥ = (U0)⊥. Thus, U is non-degenerate if and only if the closure of U + U⊥ is H.

Proposition 4.2.4. Let H be a Krein space with Krein decomposition H = H+⊕˙ H− and let J be its associated fundamental symmetry. If U is a maximal semi-definite subspace, then U is a closed subspace in the topology induced on H by the norm k·kJ.

Proof. Let U be a maximal semi-definite subspace. Suppose, without loss of generality, that U is positive semi-definite.

Let u0 ∈ H be such that there is a sequence (un)n∈N ⊆ U with un → u0 in the J-norm. Then there exist (xn)n∈N ⊆ H+, (yn)n∈N ⊆ H−, x0 ∈ H+ and y0 ∈ H− such that un = xn + yn for every n ∈ N0. Since kun − u0k2J = kxn − x0k2J + kyn − y0k2J for every n ∈ N, we have xn → x0 and yn → y0 in the J-norm.

Since each un is non-negative, kxnkJ ≥ kynkJ for every n ∈ N and, therefore, kx0kJ ≥ ky0kJ. Thus, u0 is non-negative. Hence the closure of U is a positive semi-definite subspace of H and, since U is maximal positive semi-definite, U is closed.

Corollary 4.2.5 (of the proof). If U is a positive (negative) semi-definite subspace, then its clo- sure, in the topology induced on H by the J-norm, is a positive (negative) semi-definite subspace also.

Theorem 4.2.6. Let H be a Krein space with Krein decomposition H = H+⊕˙ H− and let P+, P− be the associated fundamental projectors. If U ⊆ H is a positive semi-definite subspace, then U is maximal positive semi-definite if, and only if, P+(U) = H+. Analogously, if V ⊆ H is a negative semi-definite subspace, then V is maximal negative semi-definite if, and only if, P−(V) = H−.

Proof. Let J be the fundamental symmetry associated with H = H+⊕˙ H− and let U ⊆ H be a positive semi-definite subspace. Clearly, P+(U) ⊆ H+.

Suppose that U is maximal positive semi-definite and suppose, towards a contradiction, that P+(U) ≠ H+. Since U is closed in (H, [·, ·]J) (by Proposition 4.2.4) and P+|U is a homeomorphism between U and P+(U) (Proposition 3.5.5), P+(U) is closed also. Thus, P+(U) is a proper closed subspace of the Hilbert space (H+, [·, ·]J). Then there exists x ∈ H+, x ≠ 0, such that x ⊥J P+(U) and, therefore, x ⊥J U (since if u ∈ U then [u, x]J = [u, P+x]J = [P+u, x]J = 0).

Hence U + Span(x) is a positive semi-definite subspace (since [u + αx, u + αx] = [J(u + αx), u + αx]J = [u+ − u− + αx, u+ + u− + αx]J = [u+ + αx, u+ + αx]J − [u−, u−]J = [u+, u+]J + |α|2[x, x]J − [u−, u−]J = |α|2kxk2J + ku+k2J − ku−k2J ≥ 0 for every u ∈ U and α ∈ C, because u is non-negative) and U + Span(x) ⊋ U, contradicting the maximality of U.

Now suppose that P+(U) = H+. By Theorem 3.4.4, P+|U : U → H+ is bijective. Suppose, towards a contradiction, that U is not maximal positive semi-definite and let W be a maximal positive semi-definite subspace such that W ⊋ U. Thus P+(W) = H+ and therefore P+|W : W → H+ is bijective too. Since U ⊊ W, there exists a subspace U′ ≠ {0} such that W = U ∔ U′. Let u′ ∈ U′ with u′ ≠ 0 and let u ∈ U be such that P+(u′) = P+(u). Then, since u ∈ W,

u′ = (P+|W)−1(P+(u′)) = (P+|W)−1(P+(u)) = u,

so u′ = u ∈ U ∩ U′ = {0}, a contradiction.

In the case where V is negative semi-definite the proof is analogous.

Corollary 4.2.7. Let H be a Krein space and let H = H+⊕H˙ − be a Krein decomposition for H. If U ⊆ H is a maximal positive semi-definite subspace, then dim(U) = dim(H+). If V ⊆ H is a maximal negative semi-definite subspace, then dim(V ) = dim(H−).

Theorem 4.2.8. Let H be a Krein space with Krein decomposition H = H+⊕˙ H−; let P+, P− be its associated fundamental projectors and J its associated fundamental symmetry. Let U ⊆ H be a positive definite (negative definite) subspace. Then:

i) If U is maximal positive definite (maximal negative definite), then P+(U) is dense in H+ (P−(U) is dense in H−).

ii) If P+(U) is dense in H+ (P−(U) is dense in H−), then U is maximal positive definite (maximal negative definite).

Proof. Suppose, without loss of generality, that U is a positive definite subspace. Since P+(U) ⊆ H+ and H+ is closed, the closure of P+(U) is contained in H+.

i) Suppose that U is maximal positive definite and suppose, towards a contradiction, that P+(U) is not dense in H+. Then there exists x ∈ H+ \ {0} such that x ⊥J P+(U) and, therefore, x ∉ U. Further, note that x ⊥J U (since for every u ∈ U, [x, u]J = [P+x, u]J = [x, P+u]J = 0). Therefore U + Span(x) is a positive definite subspace such that U + Span(x) ⊋ U, contradicting the maximality of U.

ii) Suppose that P+(U) is dense in H+. By Corollary 4.2.5, the closure of U is a positive semi-definite subspace; by Proposition 3.5.5, its image under P+ is closed and, since this image contains the dense set P+(U), it is all of H+. Hence, by Theorem 4.2.6, the closure of U is a maximal positive semi-definite subspace. Suppose, towards a contradiction, that U is not a maximal positive definite subspace; then there exists a positive definite subspace W ⊆ H, and therefore also a positive semi-definite subspace, such that U ⊊ W.

Corollary 4.2.9. If U is a maximal positive definite closed subspace of a Krein space, then U is a maximal positive semi-definite subspace.

Proposition 4.2.10. Let (H, [·, ·]) be a J-space and U ⊆ H be a closed and definite subspace. Then U is uniformly definite if, and only if, (U, k·kU) is a Banach space (where k·kU is the norm induced by [·, ·] on U).

Proof. Since U is closed, (U, k·kJ) is a Banach space.

⇒ Suppose that U is uniformly definite and let γ > 0 be such that kxk2U = |[x, x]| ≥ γkxk2J for every x ∈ U. Then, by Proposition 3.5.6, γkxk2J ≤ kxk2U ≤ kxk2J for every x ∈ U. That is, k·kJ and k·kU are equivalent on U and, therefore, (U, k·kU) is a Banach space.

⇐ Suppose that (U, k·kU) is a Banach space. Then, by Theorem 3.5.7, k·kU and k·kJ are equivalent and, therefore, there exists γ > 0 such that |[x, x]| = kxk2U ≥ γkxk2J for every x ∈ U. Hence, U is uniformly definite.

Corollary 4.2.11. If U is a closed and uniformly definite subspace of a J-space, then k·kU and k·kJ are equivalent norms on U.

Theorem 4.2.12. Let (H, [·, ·]) be a Krein space and U be a closed subspace of H. Then, (U, [·, ·]) is a Krein space if and only if U is decomposable with a fundamental decomposition U = U +⊕˙ U − such that U + is uniformly positive and U − is uniformly negative.

Proof. ⇒ This is clear. ⇐ Suppose that U is decomposable with a fundamental decomposition U = U+⊕˙ U− such that U+ is uniformly positive and U− is uniformly negative.

Suppose, towards a contradiction, that U+ is not closed. Since U is closed, the closure of U+ is contained in U = U+⊕˙ U− and contains U+ properly; hence there exists w ≠ 0 belonging both to the closure of U+ and to U−. By Proposition 3.5.11, the closure of U+ is uniformly positive and, therefore, [w, w] > 0 and [w, w] < 0, a contradiction.

Hence U+ is closed and, analogously, U− is closed. By Proposition 4.2.10, (U+, [·, ·]) and (U−, −[·, ·]) are Hilbert spaces and, therefore, U is a Krein space.

Theorem 4.2.13. Let (H, [·, ·]) be a J-space and U be a closed subspace. If (U, [·, ·]) is a Krein space, then U is ortho-complemented in H.

Proof. Suppose that (U, [·, ·]) is a Krein space; let U = U+⊕˙ U− be a Krein decomposition of U with associated fundamental symmetry J0. Clearly, since (U+, [·, ·]) and (U−, −[·, ·]) are Hilbert spaces, (U+, k·kU+) and (U−, k·kU−) are Banach spaces.

By Corollary 4.2.11, k·kU+ and k·kJ0 are equivalent norms on U+ (since U+ is a closed and uniformly positive subspace of U). Since k·kJ0 and k·kJ are equivalent norms on U (by Theorem 3.5.7), the norms k·kU+ and k·kJ are equivalent on U+. Let x be any vector in H and define the linear functional ϕx : U+ → C, ϕx(u) := [u, x], u ∈ U+. Since |ϕx(u)| = |[u, x]| ≤ kukJ kxkJ for every u ∈ U+, ϕx is bounded with respect to the J-norm and, therefore, with respect to k·kU+. Thus, by Riesz's Theorem, there exists u+ ∈ U+ such that ϕx(u) = [u, x] = [u, u+] for every u ∈ U+. That is, x − u+ ⊥ U+.

Hence, every vector of H has a projection on U+ and, analogously, every vector of H has a projection on U−. Therefore, every vector of H has a projection on U. That is, U is ortho-complemented in H.

Theorem 4.2.14. Let H be a J-space with Krein decomposition H = H+⊕˙ H− and let U ⊆ H be a maximal semi-definite subspace. Then U⊥ is a maximal semi-definite subspace too.

Proof. Suppose, without loss of generality, that U is maximal positive semi-definite; by Proposition 2.6.3, U⊥ is negative semi-definite.

Let K+ : H+ → H− be the angular operator of U with respect to H+; then U = {x + K+x | x ∈ H+}. Let w ∈ H, w = w+ + w− with w+ ∈ H+ and w− ∈ H−. Then for every x ∈ H+:

[x + K+x, w] = [x + K+x, w+ + w−] = [x − K+x, w+ + w−]J
= [x, w+]J − [K+x, w−]J = [x, w+]J − [x, (K+)∗w−]J
= [x, w+ − (K+)∗w−]J.

Thus, w ⊥ U if and only if (K+)∗w− = w+. Therefore, U⊥ = {(K+)∗w− + w− | w− ∈ H−}; that is, (K+)∗ : H− → H+ is the angular operator of U⊥ with respect to H−. By Theorem 4.2.6, since P−(U⊥) = H−, we have that U⊥ is maximal negative semi-definite.

4.3 The Gram Operator of a Closed Subspace

In this section, (H, [·, ·]) is a Krein space with Krein decomposition H = H+⊕˙ H−, associated fundamental projectors P+ and P−, and associated fundamental symmetry J. Also, U is a closed subspace of H. Clearly (U, [·, ·]J) is a Hilbert space because U is a closed subspace of (H, [·, ·]J).

Since (H, [·, ·]J) is a Hilbert space and U is closed, the J-orthogonal projection PU of H onto U exists.

Definition 4.3.1. The bounded linear operator GU : U → U defined by GU := PU J|U is called the Gram operator of U.

Remarks:

• Clearly, kGU k ≤ 1.

• For every x, y ∈ U we have [GU x, y]J = [PU Jx, y]J = [Jx, PU y]J = [Jx, y]J = [x, y].

• GU is self-adjoint in (U, [·, ·]J) since [GU x, y]J = [x, y] = [y, x] = [GU y, x]J = [x, GU y]J for every x, y ∈ U.

• By the Lax-Milgram Theorem (see [8]), GU is the unique operator with the properties above.

• Let (Eλ)λ∈R be the spectral resolution of GU. Then GU = ∫_{−1−}^{1} λ dEλ.

• Let PU− = ∫_{−1−}^{−0} dEλ = E−0, PU+ = ∫_{+0}^{1} dEλ = I − E0 and PU0 = E0 − E−0. Clearly, PU−, PU+ and PU0 are orthogonal projections and they are pairwise orthogonal. Moreover, PU− + PU+ + PU0 = IU (where IU is the identity operator on U).

• From the above we obtain:

U = U+ ⊕J U− ⊕J U0 [with U+ = PU+(U), U− = PU−(U) and U0 = PU0(U)]. (4.2)

Theorem 4.3.2. In the decomposition (4.2), U + and U − are positive definite and negative definite subspaces with U + ⊥ U −. Moreover, U 0 is the isotropic part of U.

Proof. (1) Note that U0 = (E0 − E−0)(U) = ker(GU). If x ∈ U, then x ⊥ U ⇔ [x, u] = 0 for every u ∈ U ⇔ [Jx, u]J = 0 for every u ∈ U ⇔ [Jx, PU u]J = 0 for every u ∈ U ⇔ [PU Jx, u]J = 0 for every u ∈ U ⇔ [GU x, u]J = 0 for every u ∈ U ⇔ GU x = 0 ⇔ x ∈ ker(GU) = U0. That is, U0 is the isotropic part of U.

(2) Let x ∈ U+ with x ≠ 0. Then [x, x] = [GU x, x]J = ∫_{−1−}^{1} λ d[Eλx, x]J = ∫_{−1−}^{1} λ d[Eλx, (I − E0)x]J = ∫_{−1−}^{1} λ d[(I − E0)Eλx, x]J = ∫_{−1−}^{1} λ d[(Eλ − E0Eλ)x, x]J.

Since [(Eλ − E0Eλ)x, x]J = 0 if λ < 0 and [(Eλ − E0Eλ)x, x]J ≥ 0 if λ ≥ 0, because t 7→ [Etx, x]J is an increasing function and EµEλ = EλEµ = Emin{µ,λ}, we get [x, x] ≥ 0. Therefore, U+ is positive semi-definite and, analogously, U− is negative semi-definite.

Suppose, towards a contradiction, that x is neutral. Then, clearly, x ∈ U0 (since U+, U− and U0 are linearly independent), contradicting x ∈ U+ \ {0}. Hence, U+ is positive definite and, analogously, U− is negative definite.

(3) Let x ∈ U+ and y ∈ U−. Then [x, y] = [GU x, y]J = ∫_{−1−}^{1} λ d[Eλx, y]J = ∫_{−1−}^{1} λ d[Eλ(I − E0)x, E−0y]J = ∫_{−1−}^{1} λ d[E−0Eλ(I − E0)x, y]J = 0 (since E−0Eλ(I − E0) = 0 for every λ ∈ R). Hence U+ ⊥ U−.

Corollary 4.3.3. U is non-degenerate if and only if ker(GU ) = {0}.

Corollary 4.3.4. U is decomposable with fundamental decomposition U = U +⊕˙ U −⊕˙ U 0.

Remark. Since [x, x] = [GU x, x]J for every x ∈ U, the subspace U is positive semi-definite (negative semi-definite) if and only if GU is a non-negative (non-positive) self-adjoint operator. Further, U is positive definite (negative definite) if and only if [GU x, x]J > 0 ([GU x, x]J < 0) for every x ∈ U with x ≠ 0.

Also, it is clear that U is uniformly positive (uniformly negative) if and only if GU ≫ 0 (GU ≪ 0).
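A finite-dimensional Gram operator can be computed explicitly (an assumed example, not from the text): in H = R2 with [x, y] = x1y1 − x2y2, J = diag(1, −1) and U = span{(1, k)}, the operator GU = PU J|U acts on U as the scalar (1 − k²)/(1 + k²), whose sign reproduces the classification in the remark above.

```python
# For u = (t, k*t): Ju = (t, -k*t), and the Euclidean projection of Ju
# back onto U is ((1 - k^2)/(1 + k^2)) * u.  So G_U > 0 iff |k| < 1
# (uniformly positive line), G_U = 0 iff |k| = 1 (neutral, degenerate),
# G_U < 0 iff |k| > 1 (uniformly negative line).

def gram_scalar(k):
    return (1 - k**2) / (1 + k**2)

def bracket(x, y):
    return x[0] * y[0] - x[1] * y[1]

def dot(x, y):
    return x[0] * y[0] + x[1] * y[1]

for k in (0.5, 1.0, 2.0):
    u = (3.0, k * 3.0)
    # [u, u] = [G_U u, u]_J = gram_scalar(k) * <u, u>
    assert abs(bracket(u, u) - gram_scalar(k) * dot(u, u)) < 1e-9

assert gram_scalar(1.0) == 0.0      # neutral line: G_U = 0
assert gram_scalar(0.5) > 0         # uniformly positive
assert gram_scalar(2.0) < 0         # uniformly negative
```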

Theorem 4.3.5. If 0 ∈ ρ(GU ), then (U, [·, ·]) is a Krein space.

Proof. If 0 ∈ ρ(GU) then ker(GU) = {0} and therefore U = U+⊕˙ U−. Moreover, there exists α > 0 such that (−α, α) ⊆ ρ(GU) and therefore U+ is uniformly positive and U− is uniformly negative. Thus, by Theorem 4.2.12, (U, [·, ·]) is a Krein space.

Now we show that the converse of Theorem 4.2.13 holds.

Theorem 4.3.6. Let (H, [·, ·]) be a J-space and U be a closed subspace. If U is ortho-complemented in H, then 0 ∈ ρ(GU ) and (U, [·, ·]) is a Krein space.

Proof. Suppose that H = U⊕˙ U⊥ and suppose, without loss of generality, that U ≠ {0}. Let GU be the Gram operator of U and let Q and I − Q be the fundamental projectors associated with the decomposition H = U⊕˙ U⊥.

Let x ∈ U with kxkJ = 1 and let y = Jx. Then 1 = kxk2J = [x, x]J = [x, y] = [x, Qy] = [GU x, Qy]J ≤ kGU xkJ kQykJ ≤ kGU xkJ kQk. Hence GU is bounded below, so it is injective with closed range; since GU is self-adjoint, its range is also dense in U, and thus R(GU) = U.

Therefore 0 ∈ ρ(GU) and, by Theorem 4.3.5, (U, [·, ·]) is a Krein space.

Theorem 4.3.7. Let H be a J-space and U be a closed and definite subspace. Then, U is ortho-complemented in H if and only if U is uniformly definite.

Proof. Suppose, without loss of generality, that U is positive definite.

Suppose that U is ortho-complemented in H. By Theorem 4.3.6, 0 ∈ ρ(GU). For every x ∈ U with kxkJ = 1 we have [GU x, x]J = [x, x] > 0. Further, since 0 ∈ ρ(GU), there exists α > 0 such that [GU x, x]J ≥ α for every x ∈ U with kxkJ = 1. Thus GU ≫ 0 and, therefore, U is uniformly definite.

Suppose that U is uniformly definite. By the proposition 4.2.10, (U, k·kU) is a Banach space and, therefore, (U, [·, ·]) is a Krein space. Thus, by the theorem 4.2.13, U is ortho-complemented in H.

Theorem 4.3.8. Let (H, [·, ·]) be a non-degenerate and decomposable inner product space. Then H is a Krein space if and only if, for every associated fundamental symmetry J, the J-inner product turns H into a Hilbert space.

Proof. ⇒ Suppose that H is a Krein space. Let H = H+ ⊕˙ H− be a fundamental decomposition of H with associated fundamental symmetry J. By the proposition 4.2.4, H+ and H− are closed subspaces. Therefore, by the theorem 4.3.6, (H+, [·, ·]) and (H−, −[·, ·]) are Hilbert spaces. Hence (H, [·, ·]J) is a Hilbert space.

⇐ This is an immediate consequence of the final remark in the section 4.1.

Remark. By the theorem 4.3.8 and the propositions 3.5.6 and 3.5.7, if (H, [·, ·]) is a Krein space then any two associated fundamental symmetries induce the same topology on H.

4.4 Linear Operators in Krein Spaces

Let (H, [·, ·]) be a J-space. If we refer to topological properties in this space, like e.g. density or closedness, we always use the topology on H which is induced by J.

Definition 4.4.1. Let (H, [·, ·]) be a J-space and let S be a closed and densely defined linear operator acting in H. Define the adjoint S? of S in the Krein space H by:

• D(S?) = {x ∈ H | (∃x′ ∈ H)(∀y ∈ D(S))([Sy, x] = [y, x′])},

• S?x = x′ for every x ∈ D(S?).

Remark. Clearly S? is well-defined since, as D(S) is dense, if [y, x′] = [y, x″] for every y ∈ D(S) then x′ = x″ (since the inner product is non-degenerate).

Proposition 4.4.2. Let H be a J-space and let S be a closed and densely defined linear operator acting in H. If S∗ is the adjoint of S in the Hilbert space (H, [·, ·]J), then S? = JS∗J.

Proof. Note that D(JS∗J) = J(D(S∗)). Let x ∈ J(D(S∗)); then (since J² = I) Jx ∈ D(S∗) and, for every y ∈ D(S), [Sy, x] = [JSy, Jx] = [Sy, Jx]J = [y, S∗Jx]J = [Jy, S∗Jx] = [y, JS∗Jx]; that is, x ∈ D(S?) and S?x = JS∗Jx.

Let x ∈ D(S?). Note that, for every y ∈ D(S), [Sy, Jx]J = [JSy, Jx] = [Sy, x] = [y, S?x] = [Jy, JS?x] = [y, JS?x]J = [y, JS?JJx]J; that is, Jx ∈ D(S∗) and, therefore, x ∈ J(D(S∗)). Hence, S? = JS∗J.
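In finite dimensions the identity S? = JS∗J can be checked directly. The following Python/NumPy sketch (all matrices and the choice J = diag(1, −1) are our own toy examples, not taken from the text) builds the Krein-space adjoint and verifies the defining relation [Sy, x] = [y, S?x]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fundamental symmetry J = diag(1, -1) on C^2 (an arbitrary illustrative choice).
J = np.diag([1.0, -1.0])

def krein(x, y):
    """Indefinite inner product [x, y] = <Jx, y>, linear in the first argument."""
    return np.vdot(y, J @ x)

# An arbitrary bounded operator S on C^2.
S = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Krein-space adjoint: S? = J S* J, where S* is the Hilbert-space adjoint.
S_star = J @ S.conj().T @ J

# Verify [Sy, x] = [y, S?x] on random vectors.
for _ in range(5):
    x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    assert np.isclose(krein(S @ y, x), krein(y, S_star @ x))
```

Since J² = I, the formula also shows directly that S? is similar to S∗, which is how the corollary below reads off its spectral properties.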

Corollary 4.4.3. Let (H, [·, ·]) be a J-space and S be a closed and densely defined linear operator acting in H. Then:

• S? is closed and densely defined.

• (αS)? = ᾱS? for every α ∈ C.

• ((S − λ)⁻¹)? = (S? − λ̄)⁻¹ for every λ ∈ ρ(S).

• σ(S?) = {λ̄ | λ ∈ σ(S)}.

Definition 4.4.4. Let (H, [·, ·]) be a J-space and let S be a closed and densely defined linear operator acting in H. Then S is called:

• symmetric in the Krein space if and only if S ⊆ S?.

• self-adjoint in the Krein space if and only if S = S?.

• unitary in the Krein space if and only if D(S) = H and SS? = S?S = I.

Remark. If the operator S is self-adjoint in the Krein space (H, [·, ·]) then, by the corollary 4.4.3, σ(S) is a symmetric subset of C with respect to the real axis.

Remark. If U : H −→ H is unitary in the Krein space H, then:

• [Ux, Uy] = [x, y] for every x, y ∈ H.

• JU = UJ since, for every x ∈ H, JUx = J(Ux+ + Ux−) = Ux+ − Ux− = U(x+ − x−) = UJx.

• U is a bounded operator in the topology induced by the J-norm, with kUk = 1, since kUxkJ² = [Ux, Ux]J = [JUx, Ux] = [UJx, Ux] = [Jx, x] = [x, x]J = kxkJ² for every x ∈ H.

Definition 4.4.5. Let S be a self-adjoint operator in the Krein space (H, [·, ·]) with associated fundamental symmetry J. The point λ ∈ σ(S) is called of positive type (resp., negative type) if there is a sequence (xn)n∈N, with kxnkJ = 1 and [xn, xn] > 0 (resp., [xn, xn] < 0) for every n ∈ N, such that (S − λI)xn → 0. The set of points of positive type (resp., negative type) is denoted by σ+(S) (resp., σ−(S)).

Definition 4.4.6. Let S be a self-adjoint operator in the Krein space (H, [·, ·]) and λ ∈ σp(S). Then, λ ∈ σp+(S) (resp., λ ∈ σp−(S)) if, and only if, all its associated eigenvectors are positive (resp., negative).

Chapter 5

Riesz Basis

5.1 Bessel sequences

Lemma 5.1.1. Let H be a separable Hilbert space and let (vj)j∈N be a sequence in H such that, for every (αj)j∈N ∈ l2(N), the series ∑∞j=1 αjvj is convergent. Then

T : l2(N) −→ H, T((αj)j∈N) := ∑∞j=1 αjvj

is a bounded linear operator; its adjoint operator is given by

T∗ : H −→ l2(N), T∗v = (hv, vji)j∈N

and, for every v ∈ H, ∑∞j=1 |hv, vji|² ≤ kT k²kvk².

Proof. Clearly T is well-defined and it is a linear operator. For each n ∈ N let Tn : l2(N) −→ H, Tn((αj)j∈N) := ∑nj=1 αjvj; then each Tn is a bounded linear operator and, for every α ∈ l2(N), Tnα → Tα (n → ∞). So (Tn)n∈N is a family of pointwise bounded operators and, by the Banach–Steinhaus theorem, this family is uniformly bounded; therefore T is bounded.

Let v ∈ H and let (yj)j∈N ∈ l2(N) be such that T∗v = (yj)j∈N; then for every α = (αj)j∈N ∈ l2(N) we have ∑∞j=1 αj ȳj = hα, T∗vi = hTα, vi = ∑∞j=1 αj hvj, vi. So T∗v = (hv, vji)j∈N.

As T∗ is bounded (since T is so) and kT∗k = kT k, then, for every v ∈ H, ∑∞j=1 |hv, vji|² = kT∗vk² ≤ kT k²kvk².

Definition 5.1.2. Let H be a Hilbert space. A sequence (vj)j∈N ⊆ H is called a Bessel sequence if and only if there exists a constant B > 0 such that, for every v ∈ H, ∑∞j=1 |hv, vji|² ≤ Bkvk².

In this case B is called a Bessel bound for (vj)j∈N.

Remarks:

• It is clear that if (vj)j∈N is a sequence in H, then a necessary and sufficient condition for (vj)j∈N to be a Bessel sequence is that there exist B > 0 and a dense subset U ⊆ H such that ∑∞j=1 |hu, vji|² ≤ Bkuk² for every u ∈ U.

• Note that, by lemma 5.1.1, if (vj)j∈N ⊆ H is a sequence such that ∑∞j=1 αjvj is convergent for every (αj)j∈N ∈ l2(N), then (vj)j∈N is a Bessel sequence.

• If (vj)j∈N ⊆ H is a Bessel sequence then, for every v ∈ H, the series ∑∞j=1 |hv, vji|² is absolutely convergent and therefore unconditionally convergent. So, for every permutation ϕ of N, (vϕ(j))j∈N is a Bessel sequence.
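For a finite family in Cⁿ these remarks can be illustrated numerically. The sketch below (Python with NumPy; the family V is an arbitrary choice of ours, not from the text) computes a Bessel bound as the squared spectral norm of the synthesis operator, in the spirit of lemma 5.1.1 and proposition 5.1.3:

```python
import numpy as np

# Columns of V are the vectors v_1, ..., v_m of a finite Bessel family in R^n.
rng = np.random.default_rng(1)
n, m = 3, 5
V = rng.standard_normal((n, m))

# The synthesis operator T: alpha -> sum_j alpha_j v_j is the product V @ alpha,
# so the optimal Bessel bound is B = ||T||^2 = sigma_max(V)^2.
B = np.linalg.norm(V, 2) ** 2

# Check sum_j |<v, v_j>|^2 <= B ||v||^2 on random vectors.
for _ in range(100):
    v = rng.standard_normal(n)
    assert np.sum((V.T @ v) ** 2) <= B * np.dot(v, v) + 1e-12
```

Here `V.T @ v` collects the inner products hv, vji, and the bound is exactly the operator norm of the analysis map v ↦ (hv, vji)j.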

Proposition 5.1.3. Let H be a Hilbert space; (vj)j∈N ⊆ H and B > 0. Then (vj)j∈N is a Bessel P∞ sequence with Bessel bound B if and only if T :(αj)j∈N 7→ j=1 αjvj defines a bounded linear √ operator from l2(N) into H and kT k ≤ B.

Proof. ⇒ Suppose that (vj)j∈N is a Bessel sequence with Bessel bound B and let S : H −→ l2(N), Sv := (hv, vji)j∈N. Clearly, as (vj)j∈N is a Bessel sequence, S is well-defined and it is a bounded linear operator with kSk ≤ √B.

Note that S is the adjoint of the operator defined in lemma 5.1.1 and therefore S∗ = T : (αj)j∈N 7→ ∑∞j=1 αjvj is a bounded linear operator from l2(N) into H with kT k ≤ √B.

⇐ Suppose that T : (αj)j∈N 7→ ∑∞j=1 αjvj defines a bounded linear operator from l2(N) into H with kT k ≤ √B; then, by lemma 5.1.1, for every v ∈ H, ∑∞j=1 |hv, vji|² ≤ kT k²kvk² ≤ Bkvk². So (vj)j∈N is a Bessel sequence with Bessel bound B.

Corollary 5.1.4. If H is a Hilbert space and (vj)j∈N ⊆ H is a Bessel sequence, then ∑∞j=1 αjvj converges unconditionally for every (αj)j∈N ∈ l2(N).

5.2 Basis

Definition 5.2.1. Let A be a subset of the Hilbert space H. Then A is called a total subset, or a complete system, of H if and only if Span(A) is dense in H.

Definition 5.2.2. Let H be a Hilbert space and let (uj)j∈N ⊆ H be a sequence.

i) The sequence (uj)j∈N is a (Schauder) basis for H if, and only if, for each v ∈ H there exist unique scalar coefficients (αj)j∈N such that

v = ∑∞j=1 αjuj.    (5.1)

ii) The sequence (uj)j∈N is an unconditional basis if, and only if, it is a basis and the series (5.1) converges unconditionally for each v ∈ H.

iii) The sequence (uj)j∈N is an orthonormal basis if, and only if, it is a basis and it is an orthonormal system.

Remark. If (uj)j∈N ⊆ H is an orthonormal system then (uj)j∈N is a Bessel sequence in H since, for every (αj)j∈N ∈ l2(N), the series ∑∞j=1 αjuj is convergent (if m, n ∈ N with m > n, then k∑mj=1 αjuj − ∑nj=1 αjujk² = k∑mj=n+1 αjujk² = ∑mj=n+1 |αj|² → 0 as n → ∞).

Theorem 5.2.3. Let H be a Hilbert space and let (uj)j∈N ⊆ H be an orthonormal system. The following are equivalent:

i) (uj)j∈N is an orthonormal basis.

ii) For every v ∈ H, v = ∑∞j=1 hv, ujiuj.

iii) For every v, w ∈ H, hv, wi = ∑∞j=1 hv, ujihuj, wi.

iv) For every v ∈ H, kvk² = ∑∞j=1 |hv, uji|².

v) (uj)j∈N is a total subset of H.

vi) If hv, uji = 0 for every j ∈ N, then v = 0.

Proof. i) ⇒ ii) Let v ∈ H. Since (uj)j∈N is a basis for H, there exists (αj)j∈N such that v = ∑∞j=1 αjuj. Fix n ∈ N. Since (uj)j∈N is an orthonormal system, it follows that hv, uni = h∑∞j=1 αjuj, uni = ∑∞j=1 αjhuj, uni = αn. So, v = ∑∞j=1 hv, ujiuj.

ii) ⇒ iii) Let v, w ∈ H. As v = ∑∞j=1 hv, ujiuj, then hv, wi = h∑∞j=1 hv, ujiuj, wi = ∑∞j=1 hv, ujihuj, wi.

iii) ⇒ iv) Is clear.

iv) ⇒ v) Let v ∈ H be such that v ⊥ Span((uj)j∈N); then, for every j ∈ N, hv, uji = 0. So kvk = 0 and therefore v = 0. Hence, Span((uj)j∈N) is dense in H.

v) ⇒ vi) Suppose, towards a contradiction, that there exists v ≠ 0 such that, for every j ∈ N, hv, uji = 0; then (Span((uj)j∈N))⊥ ≠ {0} and therefore Span((uj)j∈N) is not dense in H.

vi) ⇒ i) Let v ∈ H. Then ∑∞j=1 hv, ujiuj ∈ H because (uj)j∈N is a Bessel sequence (since it is an orthonormal system). Note that, for every n ∈ N, hv − ∑∞j=1 hv, ujiuj, uni = 0; therefore v = ∑∞j=1 hv, ujiuj. Clearly, if (αj)j∈N is such that v = ∑∞j=1 αjuj then αj = hv, uji for each j ∈ N since (uj)j∈N is an orthonormal system. Therefore (uj)j∈N is an orthonormal basis for H.

Remarks:

• If H is a Hilbert space and (uj)j∈N ⊆ H is an orthonormal basis (and therefore a Bessel sequence), then each v ∈ H has an unconditionally convergent expansion v = ∑∞j=1 hv, ujiuj.

• Clearly, every separable Hilbert space has an orthonormal basis. Further, if H is an infinite-dimensional separable Hilbert space, then H is isometrically isomorphic to l2(N).

Theorem 5.2.4. Let H be a Hilbert space and let (vj)j∈N be an orthonormal basis for H. Then every orthonormal basis for H is of the form (Uvj)j∈N, where U : H −→ H is a unitary operator.

Proof. Let U : H −→ H be a unitary operator on H; clearly (Uvj)j∈N is an orthonormal system.

Let w ∈ H be such that hw, Uvji = 0 for every j ∈ N and let v ∈ H be such that w = Uv; then 0 = hw, Uvji = hUv, Uvji = hv, vji for every j ∈ N; so v = 0 and w = 0. Therefore, by theorem 5.2.3, (Uvj)j∈N is an orthonormal basis for H.

Let (wj)j∈N be an orthonormal basis for H. Define U : H −→ H, Uv := ∑∞j=1 hv, vjiwj. As (hv, vji)j∈N ∈ l2(N), the operator U is well-defined; moreover, clearly U is a bijective linear operator on H such that, for every j ∈ N, wj = Uvj. Finally, U is unitary because (wj)j∈N is an orthonormal basis for H and therefore kUvk² = k∑∞j=1 hv, vjiwjk² = ∑∞j=1 |hv, vji|² = kvk² for every v ∈ H.

Proposition 5.2.5. Let H be a Hilbert space and let (uj)j∈N ⊆ H be a sequence of unit vectors such that kvk² = ∑∞j=1 |hv, uji|² for every v ∈ H. Then (uj)j∈N is an orthonormal basis for H.

Proof. By theorem 5.2.3 it is sufficient to show that (uj)j∈N is an orthogonal sequence. Fix n ∈ N; then 1 = kunk² = ∑∞j=1 |hun, uji|² = |hun, uni|² + ∑j≠n |hun, uji|² = 1 + ∑j≠n |hun, uji|². So ∑j≠n |hun, uji|² = 0 and therefore hun, uji = 0 for every j ≠ n. That is, (uj)j∈N is an orthogonal sequence.

5.3 Riesz Basis

Definition 5.3.1. Let H be a Hilbert space. A Riesz basis for H is a sequence of the form

(Uvj)j∈N where (vj)j∈N is an orthonormal basis for H and U : H → H is a bijective bounded linear operator.

Remark. The operator U in definition 5.3.1 has bounded inverse by the inverse mapping theorem.

Proposition 5.3.2. Every Riesz basis for a Hilbert space is a basis.

Proof. Let (wj)j∈N be a Riesz basis for H; then there exist an orthonormal basis (vj)j∈N for H and a bijective bounded linear operator U : H → H such that, for every j ∈ N, wj = Uvj.

Let w ∈ H and let v ∈ H be such that w = Uv; since (vj)j∈N is a basis, there exist unique coefficients (αj)j∈N such that v = ∑∞j=1 αjvj. So, w = Uv = U(∑∞j=1 αjvj) = ∑∞j=1 αjUvj = ∑∞j=1 αjwj. The coefficients (αj)j∈N are the unique ones such that w = ∑∞j=1 αjwj since, if (βj)j∈N is such that w = ∑∞j=1 βjwj, then, as v = U⁻¹w and U⁻¹ is a bounded linear operator, v = ∑∞j=1 βjvj and therefore (αj)j∈N = (βj)j∈N.

Proposition 5.3.3. Let (vj)j∈N be a Riesz basis for a Hilbert space H. Then there exist constants A, B > 0 such that, for every x ∈ H, Akxk² ≤ ∑∞j=1 |hx, vji|² ≤ Bkxk².

Proof. Choose an orthonormal basis (ej)j∈N and a bijective bounded linear operator U : H → H such that vj = Uej for every j ∈ N. Then, for every x ∈ H, ∑∞j=1 |hx, vji|² = ∑∞j=1 |hx, Ueji|² = ∑∞j=1 |hU∗x, eji|² = k∑∞j=1 hU∗x, ejiejk² = kU∗xk². So, with B = kUk², we have ∑∞j=1 |hx, vji|² = kU∗xk² ≤ Bkxk².

Now, for every x ∈ H, kxk² = k(U∗)⁻¹U∗xk² ≤ kU⁻¹k²kU∗xk². So, with A = 1/kU⁻¹k², we have Akxk² ≤ ∑∞j=1 |hx, vji|².
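In Cⁿ the constants of proposition 5.3.3 can be computed from the singular values of U: B = kUk² and A = 1/kU⁻¹k². A hedged Python/NumPy sketch (U is an arbitrary invertible matrix of ours):

```python
import numpy as np

# Riesz basis (v_j) = (U e_j) of R^n for an invertible U (toy example).
rng = np.random.default_rng(2)
n = 4
U = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
V = U @ np.eye(n)                                 # columns v_j = U e_j

s = np.linalg.svd(U, compute_uv=False)
B = s.max() ** 2          # B = ||U||^2
A = s.min() ** 2          # A = 1 / ||U^{-1}||^2 = sigma_min(U)^2

# Check A ||x||^2 <= sum_j |<x, v_j>|^2 <= B ||x||^2 on random vectors.
for _ in range(100):
    x = rng.standard_normal(n)
    q = np.sum((V.T @ x) ** 2)     # sum_j |<x, v_j>|^2 = ||U* x||^2
    assert A * np.dot(x, x) - 1e-9 <= q <= B * np.dot(x, x) + 1e-9
```

The two constants coincide with the extreme squared singular values of U, which matches the identity ∑j |hx, vji|² = kU∗xk² from the proof.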

Corollary 5.3.4. Every Riesz basis is a Bessel sequence.

Theorem 5.3.5. Let H be a Hilbert space and let (vj)j∈N be a Riesz basis for H. Then there exists a unique Riesz basis (wj)j∈N for H such that, for every x ∈ H,

x = ∑∞j=1 hx, wjivj.    (5.2)

The series (5.2) converges unconditionally for every x ∈ H.

Proof. Let (vj)j∈N be a Riesz basis. Choose an orthonormal basis (ej)j∈N for H and a bijective bounded linear operator U : H → H such that vj = Uej for every j ∈ N.

Let x ∈ H; as (vj)j∈N is a basis for H (by proposition 5.3.2), there exists a unique scalar sequence (βj)j∈N such that x = ∑∞j=1 βjvj. Then, as U⁻¹ is a bounded linear operator, U⁻¹x = U⁻¹(∑∞j=1 βjvj) = ∑∞j=1 βjU⁻¹vj = ∑∞j=1 βjej; so, βj = hU⁻¹x, eji = hx, (U⁻¹)∗eji for every j ∈ N. Therefore, with wj := (U⁻¹)∗ej for every j ∈ N, we have that (wj)j∈N is a Riesz basis for H such that (5.2) holds. Clearly, if (zj)j∈N is a Riesz basis for H such that x = ∑∞j=1 hx, zjivj for every x ∈ H, then hx, zji = hx, wji for every x ∈ H; so zj = wj for each j ∈ N.

Let x ∈ H. Since (wj)j∈N is a Bessel sequence, (hx, wji)j∈N ∈ l2(N); so, by corollary 5.1.4, the series (5.2) converges unconditionally.

Remark. The unique Riesz basis (wj)j∈N satisfying (5.2) in the theorem 5.3.5 is called the dual Riesz basis of (vj)j∈N, and the proof of the theorem shows that wj = (U⁻¹)∗ej for each j ∈ N. Now the dual Riesz basis of (wj)j∈N is given by (((U⁻¹)∗)⁻¹)∗ej = Uej = vj for each j ∈ N; that is, (vj)j∈N is the dual Riesz basis of (wj)j∈N. In this case, (vj)j∈N and (wj)j∈N are called a pair of dual Riesz bases and ∑∞j=1 hx, wjivj = ∑∞j=1 hx, vjiwj for every x ∈ H.

Corollary 5.3.6. Let H be a Hilbert space and let (vj)j∈N and (wj)j∈N be a pair of dual Riesz bases for H. Then (vj)j∈N and (wj)j∈N are biorthogonal sequences; that is, hvj, wki = δj,k for every j, k ∈ N.

Proof. Fix j ∈ N; then vj = ∑∞k=1 hvj, wkivk and, since (vj)j∈N is a basis for H, hvj, wji = 1 and hvj, wki = 0 for every k ≠ j.
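The dual-basis construction wj = (U⁻¹)∗ej and the biorthogonality of corollary 5.3.6 are easy to check in Cⁿ. A toy Python/NumPy sketch with matrices of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
U = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible (toy choice)

V = U                      # columns v_j = U e_j
W = np.linalg.inv(U).T     # columns w_j = (U^{-1})* e_j (real case: inverse transpose)

# Biorthogonality <v_j, w_k> = delta_{jk} (corollary 5.3.6).
assert np.allclose(W.T @ V, np.eye(n))

# Reconstruction x = sum_j <x, w_j> v_j (theorem 5.3.5).
x = rng.standard_normal(n)
coeffs = W.T @ x           # the coefficients (<x, w_j>)_j
assert np.allclose(V @ coeffs, x)
```

In matrix terms biorthogonality is just W∗V = I, i.e. W = (V⁻¹)∗, which is exactly the formula from the remark above.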

Lemma 5.3.7. Let H1, H2 be Hilbert spaces, (vj)j∈N ⊆ H1 and (wj)j∈N ⊆ H2. Suppose that (vj)j∈N is complete in H1, that (wj)j∈N is a Bessel sequence with Bessel bound B, and that there exists a constant A > 0 such that, for every finite scalar sequence (αj),

A ∑ |αj|² ≤ k∑ αjvjk².

Then the map U : ∑ αjvj 7→ ∑ αjwj defines a bounded linear operator from Span((vj)j∈N) into Span((wj)j∈N) and U has a unique extension to a bounded linear operator from H1 into H2; the norm of U, as well as of its extension, is at most √(B/A).

Proof. If (αj), (βj) are finite sequences (without loss of generality suppose that both have the same length) such that ∑ αjvj = ∑ βjvj, then A ∑ |αj − βj|² ≤ k∑ (αj − βj)vjk² = 0; so (αj) = (βj) and therefore U is well-defined. Clearly U is a linear operator from Span((vj)j∈N) into Span((wj)j∈N).

Let v ∈ Span((vj)j∈N) and let (αj) be the finite scalar sequence such that v = ∑ αjvj; then kUvk² = k∑ αjwjk² ≤ B ∑ |αj|² (by proposition 5.1.3, since every finite scalar sequence can be considered as a sequence in l2(N)). So kUvk² ≤ (B/A)k∑ αjvjk² = (B/A)kvk² and therefore U is bounded with kUk ≤ √(B/A).

Clearly, as (vj)j∈N is a complete system in H1, the unique continuous extension of U from H1 into H2 has norm at most √(B/A).

Theorem 5.3.8. Let H be a Hilbert space and (vj)j∈N ⊆ H. The following are equivalent:

i) (vj)j∈N is a Riesz basis for H.

ii) (vj)j∈N is complete in H, and there exist constants A, B > 0 such that, for every finite scalar sequence (αj),

A ∑ |αj|² ≤ k∑ αjvjk² ≤ B ∑ |αj|².    (5.3)

Proof. i) ⇒ ii) Suppose that (vj)j∈N is a Riesz basis for H, then it is a Schauder basis and therefore it is a complete system in H.

Let (ej)j∈N be an orthonormal basis and let U : H −→ H be the linear operator defined by

Uej = vj for all j ∈ N. Then U is a bijective bounded linear operator with bounded inverse; hence kU⁻¹k⁻²kxk² ≤ kUxk² ≤ kUk²kxk² for every x ∈ H. Therefore, for every finite scalar sequence (αj), the inequalities (5.3) hold if we set x = ∑ αjej.

ii) ⇒ i) Clearly, the left-hand inequality in (5.3) implies that (vj)j∈N is a linearly independent sequence.

If (αj)j∈N ∈ l2(N) then, for m > n, k∑mj=1 αjvj − ∑nj=1 αjvjk² = k∑mj=n+1 αjvjk² ≤ B ∑mj=n+1 |αj|² → 0 (n → ∞). Therefore ∑∞j=1 αjvj converges and (5.3) holds for every (αj)j∈N ∈ l2(N); by lemma 5.1.1 and proposition 5.1.3, (vj)j∈N is a Bessel sequence with bound B.

As (vj)j∈N is a complete system in H, the space H is separable; let (ej)j∈N be an orthonormal basis for H. Then, by lemma 5.3.7, the mappings U : ej 7→ vj and V : vj 7→ ej (j ∈ N) extend to bounded operators on H and UV = VU = I; so U is bijective and therefore (vj)j∈N is a Riesz basis for H.
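For a finite family, the optimal constants in (5.3) are the extreme eigenvalues of the Gram matrix, since k∑j αjvjk² is exactly the Gram quadratic form. A toy Python/NumPy sketch (the vectors are arbitrary choices of ours):

```python
import numpy as np

# Finite Riesz-sequence check: ||sum_j a_j v_j||^2 = a^T G a with Gram matrix
# G_{jk} = <v_k, v_j>, so the best constants in (5.3) are the extreme
# eigenvalues of G.
rng = np.random.default_rng(4)
n, m = 6, 4
V = rng.standard_normal((n, m))        # columns v_1, ..., v_m

G = V.T @ V                            # Gram matrix
eig = np.linalg.eigvalsh(G)
A, B = eig.min(), eig.max()
assert A > 0                           # the family is linearly independent

for _ in range(100):
    a = rng.standard_normal(m)
    s = np.dot(V @ a, V @ a)           # ||sum_j a_j v_j||^2
    assert A * np.dot(a, a) - 1e-9 <= s <= B * np.dot(a, a) + 1e-9
```

A strictly positive lower constant A is precisely what distinguishes a Riesz sequence from a mere Bessel sequence.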

Remarks:

• If (vj)j∈N ⊆ H satisfies (5.3) for all finite scalar sequences (αj) then it is called a Riesz sequence.

• By theorem 5.3.8, if (vj)j∈N ⊆ H is a Riesz sequence then (vj)j∈N is a Riesz basis for the closure of Span((vj)j∈N).

• Clearly, every sub-sequence of a Riesz sequence is a Riesz sequence.

Proposition 5.3.9. Let H be a Hilbert space and let the sequence (vj)j∈N be a total subset of H such that, for every finite scalar sequence (αj), k∑ αjvjk² = ∑ |αj|². Then (vj)j∈N is an orthonormal basis for H.

Proof. By theorem 5.3.8, (vj)j∈N is a Riesz basis for H and k∑∞j=1 αjvjk² = ∑∞j=1 |αj|² for every (αj)j∈N ∈ l2(N).

Choose an orthonormal basis (ej)j∈N for H and a bijective bounded linear operator U : H → H such that vj = Uej for every j ∈ N; then kU(∑∞j=1 αjej)k² = k∑∞j=1 αjvjk² = ∑∞j=1 |αj|² = k∑∞j=1 αjejk² for every (αj)j∈N ∈ l2(N). Therefore U is a unitary operator and thus (vj)j∈N is an orthonormal basis for H.

Theorem 5.3.10. Let Z ∈ B(H) be an operator similar to a self-adjoint operator. Then every root vector of Z is an eigenvector, and the set of all eigenvectors of Z contains a Riesz basis for its closed linear span.

Proof. Let S be a bounded self-adjoint operator such that Z is similar to S. Then there exists a linear homeomorphism T such that Z = T⁻¹ST.

(1) Let x be a root vector of Z. Then there exist λ ∈ C and n ∈ N such that

0 = (λ − Z)ⁿx = (λT⁻¹T − T⁻¹ST)ⁿx = [T⁻¹(λ − S)T]ⁿx = T⁻¹(λ − S)ⁿT x.

Since T⁻¹ is injective, it follows that (λ − S)ⁿT x = 0. Hence T x is a root vector of S and, since S is self-adjoint, T x is an eigenvector of S. Therefore, (λ − Z)x = (λ − T⁻¹ST)x = T⁻¹(λ − S)T x = 0; that is, x is an eigenvector of Z.

(2) Let U be the closed span of the eigenvectors of Z. Since U ⊆ H is closed, T(U) is a separable Hilbert space and contains an orthonormal basis (wn)n∈N consisting of eigenvectors of S.

Let un := T⁻¹wn for every n ∈ N. Then (un)n∈N is a Riesz basis for U consisting of eigenvectors of Z.
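Theorem 5.3.10 can be illustrated in finite dimension: if Z = T⁻¹ST with S self-adjoint, pulling an orthonormal eigenbasis of S back through T⁻¹ yields a Riesz basis of eigenvectors of Z. A Python/NumPy sketch with toy matrices of our own:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
S = rng.standard_normal((n, n)); S = S + S.T          # self-adjoint (symmetric)
T = rng.standard_normal((n, n)) + n * np.eye(n)       # linear homeomorphism
Z = np.linalg.inv(T) @ S @ T                          # Z similar to S

lam, W = np.linalg.eigh(S)        # columns of W: orthonormal eigenvectors of S
Ubasis = np.linalg.inv(T) @ W     # u_k = T^{-1} w_k: eigenvectors of Z

# Each u_k is an eigenvector of Z with the same eigenvalue as w_k for S.
for k in range(n):
    assert np.allclose(Z @ Ubasis[:, k], lam[k] * Ubasis[:, k])

# (u_k) is the image of an orthonormal basis under the bijection T^{-1},
# hence a Riesz basis; in particular it spans the whole space.
assert np.linalg.matrix_rank(Ubasis) == n
```

The computation Z u_k = T⁻¹S T T⁻¹ w_k = λ_k T⁻¹ w_k mirrors step (1) of the proof.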

Chapter 6

Quadratic Operator Pencils

In this chapter, (H, h·, ·i) denotes a separable Hilbert space. By convention, the sets C ∪ {∞} and R ∪ {∞} (the one-point compactification of C and R, resp.) are denoted by C∞ and R∞, respectively.

6.1 Quadratic Pencils of Bounded Self-adjoint Operators

Definition 6.1.1. A quadratic operator pencil (of bounded self-adjoint operators) in H is a function L : C∞ → B(H) for which there exist fixed self-adjoint operators A, B, C ∈ B(H) such that: L(∞) = A and L(λ) = λ²A + λB + C if λ ≠ ∞.

Remark. Note that, if L is a quadratic operator pencil, then L(λ) is a self-adjoint operator for every λ ∈ R∞.

Definition 6.1.2. Let L be a quadratic operator pencil in a Hilbert space H. Then:

• ρ(L) = {λ ∈ C∞ | L(λ) is bijective} is the resolvent set of L.

• σ(L) = C∞ \ ρ(L) is the spectrum of L.

• σp(L) = {λ ∈ C∞ | L(λ) is not injective} is the point spectrum of L.

• σc(L) = {λ ∈ C∞ | λ ∉ σp(L), R(L(λ)) ≠ H and R(L(λ)) is dense in H} is the continuous spectrum of L.

• σr(L) = {λ ∈ C∞ | λ ∉ σp(L) and R(L(λ)) is not dense in H} is the residual spectrum of L.

Remark. Clearly, if L is an operator pencil, then σ(L) = σp(L) ∪˙ σc(L) ∪˙ σr(L).

Definition 6.1.3. Let L be a quadratic operator pencil and let λ0 ∈ σp(L). Then λ0 is called an eigenvalue of the pencil L and each non-zero vector x ∈ ker(L(λ0)) is called an eigenvector of the pencil L associated to λ0. The subspace ker(L(λ0)) is called the eigenspace of L associated to λ0 and dim(ker(L(λ0))) is called the multiplicity of λ0.
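Numerically, the finite-dimensional spectrum of a quadratic pencil is usually found by linearization: λ ∈ σp(L) exactly when λ is an eigenvalue of a companion-type block matrix (here A is taken invertible, as in the hypotheses of part (i) of the theorem below). A Python/NumPy sketch with toy matrices of our own:

```python
import numpy as np

# Eigenvalues of the quadratic pencil L(l) = l^2 A + l B + C via the standard
# companion linearization (finite-dimensional toy example).
rng = np.random.default_rng(6)
n = 3
A = np.eye(n)                                   # 0 in rho(A), so sigma(L) is bounded
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

Ainv = np.linalg.inv(A)
M = np.block([[-Ainv @ B, -Ainv @ C],
              [np.eye(n), np.zeros((n, n))]])   # companion ("linearizer") matrix

lams, vecs = np.linalg.eig(M)                   # the 2n eigenvalues of the pencil

# For each eigenvalue l, the lower half x of the eigenvector [l x; x]
# is an eigenvector of the pencil: L(l) x = 0.
for l, w in zip(lams, vecs.T):
    x = w[n:]
    assert np.linalg.norm((l**2 * A + l * B + C) @ x) < 1e-6
```

This linearizer plays, in finite dimensions, the role of the operator L introduced for equation (3) in the abstract.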

Theorem 6.1.4. Let L(λ) = λ²A + λB + C be a quadratic pencil.

(i) If 0 ∈ ρ(A), then σ(L) is bounded.

(ii) If λ = 0 is an isolated point of σ(A) and it is an eigenvalue of A with finite geometric multiplicity, then either ∞ is an isolated point of σ(L) or there exists a neighborhood of ∞

contained in σp(L).

Proof. (i) Suppose that 0 ∈ ρ(A); since L(∞) = A and A is bijective, it follows that ∞ ∉ σ(L). Note that, for every λ ∈ C with λ ≠ 0, L(λ) = λ²A(I + (1/λ)A⁻¹B + (1/λ²)A⁻¹C) and, for |λ| sufficiently large, k(1/λ)A⁻¹B + (1/λ²)A⁻¹Ck < 1 (since A⁻¹, B and C are bounded operators); that is, there exists α > 0 such that k(1/λ)A⁻¹B + (1/λ²)A⁻¹Ck < 1 for every λ ∈ C with |λ| > α and, therefore, I + (1/λ)A⁻¹B + (1/λ²)A⁻¹C is bijective. Hence, σ(L) ⊆ B(0; α).

(ii) Suppose that 0 is an isolated point of σ(A) and that 0 is an eigenvalue of A with finite geometric multiplicity; so 0 is a normal point of A (see appendix A.2). Then the orthogonal projection P onto ker(A) is a finite-dimensional bounded operator and therefore it is a compact operator. Thus, by the theorem A.2.4, we have ρ̃(A) = ρ̃(A + P).

Since A is self-adjoint, H decomposes as the orthogonal sum of ker(A) and the closure of R(A). If u ∈ ker(A) and v in the closure of R(A) are such that (A + P)(u + v) = 0, then u + Av = 0 and, therefore, u = Av = 0; that is, u = 0 and v lies in the intersection of ker(A) with the closure of R(A). So, u = v = 0 and, therefore, A + P is injective. Thus, since 0 ∈ ρ̃(A + P) but 0 ∉ σp(A + P), A + P is bijective.

Clearly, since L(∞) = A and A is not injective, we have ∞ ∈ σp(L). Note that, since 0 ∈ ρ(A + P), there exists α > 0 such that, for every λ ∈ C with |λ| > α, A + P + (1/λ)B + (1/λ²)C is bijective and, therefore, for every λ ∈ C with |λ| > α:

L(λ) = λ²A + λB + C = λ²(A + (1/λ)B + (1/λ²)C) = λ²(A + P + (1/λ)B + (1/λ²)C − P)
= λ²[I − P(A + P + (1/λ)B + (1/λ²)C)⁻¹](A + P + (1/λ)B + (1/λ²)C).

Now, let A(µ) = P(A + P + µB + µ²C)⁻¹ and T(µ) = I − A(µ) for every µ ∈ C such that |µ| < 1/α. Since A(µ) is a holomorphic operator-valued function whose values are compact operators then, by the theorem A.1.5, there exists ε > 0, with ε < min(1/α, 1), such that, for every µ with 0 < |µ| < ε, the equation T(µ)x = 0 has the same number of linearly independent solutions.

Since B(0; ε) ⊆ ρ(I) ⊆ ρ̃(I) then, by the theorem A.2.4, B(0; ε) ⊆ ρ̃(T(µ)) for every µ ∈ C with |µ| < ε. So, if there exists µ0 ∈ B(0; ε) \ {0} such that T(µ0) is bijective then, by the conclusion in the previous paragraph, T(µ) is bijective for every µ ∈ B(0; ε) \ {0} and, therefore, in this case ∞ is an isolated point of σ(L). Otherwise, T(µ) is not injective for every µ ∈ B(0; ε) \ {0} and, therefore, there exists a neighborhood of ∞ contained in σp(L).

Remark. Let a, b, c and d be complex numbers such that ad − bc = ±1 and consider the Möbius transformation:

µ = µ(λ) = (aλ + b)/(cλ + d).    (6.1)

Clearly, µ(λ) is a bijective map from C∞ onto C∞ with inverse mapping:

λ = λ(µ) = (dµ − b)/(−cµ + a).

So, for λ ≠ ∞ we have cµ − a ≠ 0 and, therefore, (cµ − a)²L(λ) = µ²(d²A − dcB + c²C) + µ[−2bdA + (ad + bc)B − 2acC] + (b²A − abB + a²C). That is,

(cµ − a)²L(λ) = µ²Ã + µB̃ + C̃ = L̃(µ),    (6.2)

where Ã = d²A − dcB + c²C, B̃ = −2bdA + (ad + bc)B − 2acC and C̃ = b²A − abB + a²C.

Therefore, for every λ ∈ C:

(i) λ ∈ ρ(L) ⇔ µ(λ) ∈ ρ(L̃);  (ii) λ ∈ σp(L) ⇔ µ(λ) ∈ σp(L̃);
(iii) λ ∈ σc(L) ⇔ µ(λ) ∈ σc(L̃);  (iv) λ ∈ σr(L) ⇔ µ(λ) ∈ σr(L̃).

6.2 Factorization of Quadratic Pencils

In this section let L(λ) = λ²A + λB + C with 0 ∈ ρ(A). Thus, by the theorem 6.1.4, σ(L) is bounded.

Proposition 6.2.1. Let L be a quadratic pencil and suppose that X, Y , Z ∈ B(H) are such that L(λ) = (λX + Y )(λ − Z). Then:

(i) σp(Z) ⊆ σp(L).

(ii) σ(Z) ⊆ σ(L).

(iii) If σ(F) ∩ σ(Z) = ∅, then σ(L) = σ(F) ∪ σ(Z);

here Z and F are the linear pencils defined by Z(λ) := λ − Z and F(λ) := λX + Y.

Proof. (i) and (iii) are clear.

(ii) Let λ0 ∈ σ(Z). Since the operator λ0 − Z is not bijective, there exists (xn)n∈N ⊆ H, with kxnk = 1 for every n ∈ N, such that (λ0 − Z)xn → 0. Thus, there exists a sequence (xn)n∈N ⊆ H, with kxnk = 1 for every n ∈ N, such that L(λ0)xn → 0 and, therefore, L(λ0) is not bijective.

Hence, λ0 ∈ σ(L).

Proposition 6.2.2. Let L(λ) = λ²A + λB + C be a quadratic pencil and Z ∈ B(H). Then, L(λ) = (λX + Y)(λ − Z) for some X and Y ∈ B(H) if, and only if, AZ² + BZ + C = 0.

Proof. ⇒ If there exist X, Y ∈ B(H) such that L(λ) = (λX + Y)(λ − Z), then C = −YZ, B = Y − XZ and A = X. Thus, AZ = Y − B and, therefore, AZ² + BZ + C = (Y − B)Z + BZ + C = YZ + C = 0.

⇐ If AZ² + BZ + C = 0, then L(λ) = L(λ) − 0 = λ²A + λB + C − (AZ² + BZ + C) = A(λ² − Z²) + B(λ − Z) = [A(λ + Z) + B](λ − Z) = [λA + (AZ + B)](λ − Z).

Remark. An operator Z ∈ B(H) which satisfies the operator equation AZ² + BZ + C = 0 is called an operator root of the pencil L.
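Proposition 6.2.2 can be verified on matrices: choose A, B and Z arbitrarily and define C so that Z is an operator root by construction. A Python/NumPy sketch (all matrices are our own toy choices, not from the text):

```python
import numpy as np

# If A Z^2 + B Z + C = 0 then L(l) = (l A + (A Z + B))(l - Z) (prop. 6.2.2).
# We manufacture a root by setting C := -(A Z^2 + B Z).
rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
Z = rng.standard_normal((n, n))
C = -(A @ Z @ Z + B @ Z)                  # forces A Z^2 + B Z + C = 0

def L(l):
    return l**2 * A + l * B + C

def factored(l):
    I = np.eye(n)
    return (l * A + (A @ Z + B)) @ (l * I - Z)

# The factorization holds identically in l.
for l in [0.0, 1.0, -2.5, 3.7]:
    assert np.allclose(L(l), factored(l))
```

Expanding the factored form reproduces the algebra of the proof: λ²A + λ(AZ + B) − λAZ − (AZ² + BZ) = λ²A + λB + C.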

Theorem 6.2.3. Let L(λ) = λ²A + λB + C be a quadratic pencil and, for i = 1, 2, let Xi, Yi, Zi ∈ B(H) be such that L(λ) = (λXi + Yi)(λ − Zi) and σ(Fi) ∩ σ(Zi) = ∅, where Zi and Fi are the linear pencils defined by Zi(λ) := λ − Zi and Fi(λ) := λXi + Yi, i = 1, 2. If σ(Z1) = σ(Z2) =: σ, then Z1 = Z2.

Proof. By the proposition 6.2.1, we have σ(F1) = σ(F2) =: σ′. Note that σ is compact and σ′ is closed (since for every bijective operator T in B(H) there exists a neighborhood of it which contains only bijective operators); thus, σ and σ′ are separated. Hence, there exists an open subset U ⊆ C such that σ ⊆ U ⊆ ρ(F1) = ρ(F2). Since (λX1 + Y1)(λ − Z1) = (λX2 + Y2)(λ − Z2) for every λ ∈ C, for λ ∉ σ ∪ σ′ we have (λX2 + Y2)⁻¹(λX1 + Y1) = (λ − Z2)(λ − Z1)⁻¹.

Let f : C → B(H) be such that f(λ) = (λX2 + Y2)⁻¹(λX1 + Y1) for λ ∈ U, and f(λ) = (λ − Z2)(λ − Z1)⁻¹ for λ ∈ C \ σ. Clearly, f is well-defined and holomorphic (because C = U ∪ (C \ σ) and the restrictions of f to U and to C \ σ are holomorphic) and, moreover, f(λ) → I at infinity (because, for |λ| sufficiently large, f(λ) = (λ − Z2)(λ − Z1)⁻¹). Thus, by Liouville's Theorem, f(λ) = I for every λ ∈ C and, therefore, Z1 = Z2.

6.3 Strongly Damped Pencils

Definition 6.3.1. The operator pencil L(λ) = λ²A + λB + C is called strongly damped if and only if

hBx, xi² > 4hAx, xihCx, xi for every x ∈ H with x ≠ 0.

Proposition 6.3.2. Let L(λ) be a strongly damped pencil. Then the sign of hBx, xi is constant on the set U0 = {x ∈ H \ {0} | hAx, xi = 0}. If further C ≥ 0, then the sign of hBx, xi is constant on the set U+ = {x ∈ H \ {0} | hAx, xi ≥ 0}.

Proof. Clearly, since L(λ) is strongly damped, we have hBx, xi ≠ 0 for every x ∈ U0. Suppose, towards a contradiction, that there exist u, v ∈ U0 such that hBu, ui < 0 and hBv, vi > 0.

Note that, since hB(αu), αui < 0 for every α ∈ C with α ≠ 0, u can be chosen such that Re(hAu, vi) = 0. So, for every t ∈ [0, 1], w(t) = (1 − t)u + tv ≠ 0 and

hAw(t), w(t)i = (1 − t)²hAu, ui + t²hAv, vi + (1 − t)t hAu, vi + (1 − t)t hAv, ui = 2(1 − t)t Re(hAu, vi) = 0;

that is, for every t ∈ [0, 1], w(t) ∈ U0. Let f(t) = hBw(t), w(t)i, t ∈ [0, 1]. Then f is a continuous real-valued function defined on the interval [0, 1] with f(0) < 0 and f(1) > 0. So, by Bolzano's Intermediate Value Theorem, there exists t0 ∈ (0, 1) such that f(t0) = 0; that is, there exists w0 = w(t0) ∈ U0 such that hBw0, w0i = 0, a contradiction. Hence, the sign of hBx, xi is constant on U0.

Now suppose that C ≥ 0; then for every x ∈ U+ we have hAx, xihCx, xi ≥ 0 and, since L(λ) is strongly damped, hBx, xi ≠ 0. By proceeding as in the case above, we obtain that the sign of hBx, xi is constant on U+.

Definition 6.3.3. Let L(λ) be a quadratic operator pencil. For each x ∈ H \ {0} we define the function ϕx : R∞ −→ R, ϕx(λ) = hL(λ)x, xi.

Remark. If L(λ) is a strongly damped pencil and x ∈ H is such that x ≠ 0, then:

• ϕx(∞) = hAx, xi and, for every λ ∈ R, ϕx(λ) = hAx, xiλ² + hBx, xiλ + hCx, xi.

• If hAx, xi ≠ 0 then ϕx has two real zeros given by the formula:

(−hBx, xi ± d(x)) / (2hAx, xi), where d(x) = √(hBx, xi² − 4hAx, xihCx, xi).

• If hAx, xi = 0 (and therefore hBx, xi ≠ 0), then −hCx, xi/hBx, xi is a real zero of ϕx. Also, ∞ is a zero of ϕx. Note that, in this case, ∞ is a simple zero.

Proposition 6.3.4. The pencil L(λ) is strongly damped if, and only if, for each x ∈ H \ {0}, the equation ϕx(λ) = 0 has exactly two different zeros in R∞.

Proof. ⇒ Is clear by the preceding remark.

⇐ Suppose, towards a contradiction, that L(λ) is not strongly damped; then there exists x ∈ H with x ≠ 0 such that hBx, xi² ≤ 4hAx, xihCx, xi.

If hAx, xi = 0 then hBx, xi = 0 and, therefore, ϕx(λ) = hCx, xi. So, if hCx, xi = 0 then every real number is a zero of ϕx (a contradiction) and if hCx, xi ≠ 0 then ∞ is a double zero of ϕx (a contradiction). Thus, hAx, xi ≠ 0 and, therefore, the zeros of ϕx are both real and different. Thus, hBx, xi² > 4hAx, xihCx, xi, a contradiction.

Definition 6.3.5. Let L(λ) be a strongly damped pencil. Let p+, p− : H \ {0} −→ R∞ be the functions defined by the formulas:

p+(x) = (−hBx, xi + d(x)) / (2hAx, xi)  if hAx, xi ≠ 0,
p+(x) = −hCx, xi/hBx, xi                if hAx, xi = 0 and hBx, xi > 0,
p+(x) = ∞                               if hAx, xi = 0 and hBx, xi < 0,

and

p−(x) = (−hBx, xi − d(x)) / (2hAx, xi)  if hAx, xi ≠ 0,
p−(x) = ∞                               if hAx, xi = 0 and hBx, xi > 0,
p−(x) = −hCx, xi/hBx, xi                if hAx, xi = 0 and hBx, xi < 0.

The functions p+ and p− are called, respectively, the first and second type functionals associated to L.
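For matrices the functionals p± can be evaluated directly from their defining formulas. The sketch below (Python/NumPy) uses the toy pencil A = I, B = 4I, C = I, which is strongly damped since hBx, xi² − 4hAx, xihCx, xi = 12hx, xi² > 0, and checks that p±(x) are the zeros of ϕx:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3
A, B, C = np.eye(n), 4 * np.eye(n), np.eye(n)   # a strongly damped toy pencil

def p_plus_minus(x):
    a, b, c = x @ A @ x, x @ B @ x, x @ C @ x
    d = np.sqrt(b * b - 4 * a * c)              # the functional d(x)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def phi(x, l):
    # phi_x(l) = <L(l)x, x> = l^2 <Ax,x> + l <Bx,x> + <Cx,x>
    return l**2 * (x @ A @ x) + l * (x @ B @ x) + (x @ C @ x)

for _ in range(20):
    x = rng.standard_normal(n)
    pp, pm = p_plus_minus(x)
    assert abs(phi(x, pp)) < 1e-8 and abs(phi(x, pm)) < 1e-8
    assert pm < pp        # here <Ax,x> > 0, so p-(x) < p+(x)
```

For this particular pencil both functionals are constant on H \ {0} (the zeros of λ² + 4λ + 1), which already illustrates that the two spectral zones π+ and π− are disjoint.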

Remarks:

• Clearly, for each x ∈ H with x ≠ 0, p+(x) and p−(x) are the zeros of the function ϕx.

• Let x ∈ H \ {0} be such that hAx, xi ≠ 0. Then, for every λ ∈ R, (d/dλ)ϕx(λ) = 2hAx, xiλ + hBx, xi. Thus, (d/dλ)ϕx(p+(x)) = d(x) > 0 and (d/dλ)ϕx(p−(x)) = −d(x) < 0. That is, p+(x) is the zero of ϕx where ϕx is increasing and p−(x) is the zero of ϕx where ϕx is decreasing.

• Clearly, if x ∈ H \ {0} and α ∈ C \ {0}, then p+(αx) = p+(x) and p−(αx) = p−(x). Therefore, the values of p+ and p− are determined by their values on the unit sphere of H.

• If µ = µ(λ) as in the equation (6.1) then, by equation (6.2), for every x ∈ H \ {0} we have:

ϕ̃x(µ) = hL̃(µ)x, xi = (cµ − a)²hL(λ)x, xi = (cµ − a)²ϕx(λ).    (6.3)

So, if L is a strongly damped pencil then the equation ϕ̃x(µ) = 0 has two different zeros in R∞ for every x ∈ H \ {0} and, by the proposition 6.3.4, L̃ is also a strongly damped pencil. Note that, from (6.3), we have:

(d/dλ)ϕx(λ) = (1/(ad − bc)) [ (d/dµ)ϕ̃x(µ) − (2c/(cµ − a)) ϕ̃x(µ) ].    (6.4)

Let p̃+ and p̃− be the functionals of first and second type, respectively, associated to L̃. Then, clearly, if ad − bc = 1:

λ = p+(x) ⇔ µ(λ) = p̃+(x)  and  λ = p−(x) ⇔ µ(λ) = p̃−(x),

and if ad − bc = −1, then:

λ = p+(x) ⇔ µ(λ) = p̃−(x)  and  λ = p−(x) ⇔ µ(λ) = p̃+(x).

Definition 6.3.6. Let L(λ) be a strongly damped pencil, π+ = R(p+) and π− = R(p−). The sets π+ and π− are called the spectral zones of L(λ).

Proposition 6.3.7. Let L(λ) be a strongly damped pencil. Then p+ and p− are continuous.

Proof. Let U0 be as in the proposition 6.3.2. Suppose, without loss of generality, that hBx, xi > 0 for every x ∈ U0. Let x0 ∈ H \ {0} and let (xn)n∈N ⊆ H \ {0} be such that xn → x0 as n → ∞.

hCx0,x0i (1) Suppose that x0 ∈ U0; then p+(x0) = − and p−(x0) = ∞. Note that: hBx0,x0i

• If there is an n0 ∈ N such that xn ∈ U0 for every n ≥ n0 (and without restriction suppose

hCxn,xni hCx0,x0i that n0 = 1), then p+(xn) = − → − = p+(x0) when n → ∞, and p−(xn) = hBxn,xni hBx0,x0i

∞ = p−(x0) for every n ∈ N.

72 • If there is an n0 ∈ N such that xn 6∈ U0 for every n ≥ n0 (and without restriction suppose

−hBxn,xni+d(xn) −2hCxn,xni hCx0,x0i that n0 = 1), then p+(xn) = = → − = p+(x0) when 2hAxn,xni hBxn,xni+d(xn) hBx0,x0i n → ∞ [since d(x ) → hBx , x i if n → ∞], and p (x ) = −hBxn,xni−d(xn) → ∞ = p (x ) n 0 0 − n 2hAxn,xni − 0

when n → ∞ [since hBx0, x0i= 6 0].

• If (xn)n∈N can be split in two subsequences (ynk )k∈N ⊆ U0 and (znk )k∈N ⊆ H \ U0, then by

the two preceding items we have p+(xn) → p+(x0) and p−(xn) → p−(x0) if n → ∞.

−hBx0,x0i+d(x0) −hBx0,x0i−d(x0) (2) Suppose that x0 6∈ U0; then p+(x0) = and p−(x0) = . 2hAx0,x0i 2hAx0,x0i

Since hAx0, x0i= 6 0 and the function x 7→ hAx, xi is real-valued and continuous, there exists a neighborhood of x0, B(x0; ε) ⊆ H \ {0}, such that hAx, xi= 6 0 for every x ∈ B(x0; ε). Since xn → x0 (n → ∞), suppose, without loss of generality, that (xn)n∈N ⊆ B(x0; ε). Therefore,

−hBxn,xni+d(xn) −hBx0,x0i+d(x0) p+(xn) = → = p+(x0) if n → ∞ and, analogously, p−(xn) → p−(x0) 2hAxn,xni 2hAx0,x0i if n → ∞.

Hence, p+ and p− are continuous.

Corollary 6.3.8. Let L be a strongly damped pencil. Then π+ and π− are connected subsets of R∞.

Proof. The values of p+ and p− are determined by their restrictions to the unit sphere of H. Since the unit sphere in H is a connected set and p+, p− are continuous, π+ and π− are connected subsets of R∞.

Proposition 6.3.9. Let L be a strongly damped pencil. Then π+ ∩ π− = ∅.

Proof. Suppose, towards a contradiction, that π+ ∩ π− ≠ ∅. Then there exist u, v ∈ H \ {0} such that p+(u) = p−(v) =: λ0. Note that λ0 ≠ ∞ since, otherwise, u, v ∈ U0 with ⟨Bu, u⟩ < 0 and ⟨Bv, v⟩ > 0, which is impossible by Proposition 6.2.2. Also note that, as in the proof of Proposition 6.2.2, u and v can be chosen such that Re(⟨L(λ0)u, v⟩) = 0. Clearly, ϕ_u(λ0) = ϕ_v(λ0) = 0, ϕ′_u(λ0) > 0 and ϕ′_v(λ0) < 0.

Let w_t = (1 − t)u + tv for t ∈ [0, 1]. Then, for every t ∈ [0, 1], w_t ≠ 0 and

$$\varphi_{w_t}(\lambda_0) = \langle L(\lambda_0)w_t, w_t\rangle = (1-t)^2\langle L(\lambda_0)u, u\rangle + 2t(1-t)\operatorname{Re}(\langle L(\lambda_0)u, v\rangle) + t^2\langle L(\lambda_0)v, v\rangle = (1-t)^2\varphi_u(\lambda_0) + t^2\varphi_v(\lambda_0) = 0.$$

Moreover, ϕ′_{w_t}(λ0) is a continuous function of t, with ϕ′_{w_0}(λ0) = ϕ′_u(λ0) > 0 and ϕ′_{w_1}(λ0) = ϕ′_v(λ0) < 0. Thus, there exists t0 ∈ (0, 1) such that ϕ′_{w_{t0}}(λ0) = 0 = ϕ_{w_{t0}}(λ0).

Therefore, λ0 is a real solution of the system of equations

$$\lambda_0^2\langle Aw_{t_0}, w_{t_0}\rangle + \lambda_0\langle Bw_{t_0}, w_{t_0}\rangle + \langle Cw_{t_0}, w_{t_0}\rangle = 0, \qquad 2\lambda_0\langle Aw_{t_0}, w_{t_0}\rangle + \langle Bw_{t_0}, w_{t_0}\rangle = 0.$$

Now, if λ0 = 0 then ⟨Bw_{t0}, w_{t0}⟩ = ⟨Cw_{t0}, w_{t0}⟩ = 0, which contradicts the strong dampedness of L(λ). If λ0 ≠ 0, then ⟨Aw_{t0}, w_{t0}⟩ = −(1/(2λ0))⟨Bw_{t0}, w_{t0}⟩ and ⟨Cw_{t0}, w_{t0}⟩ = −(λ0/2)⟨Bw_{t0}, w_{t0}⟩ and, therefore, 4⟨Aw_{t0}, w_{t0}⟩⟨Cw_{t0}, w_{t0}⟩ = ⟨Bw_{t0}, w_{t0}⟩², again a contradiction. Hence π+ ∩ π− = ∅.
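The disjointness of the spectral zones can be illustrated concretely in finite dimensions by sampling the unit sphere and comparing the resulting values of p+ and p−. The sketch below reuses the hypothetical strongly damped data from before (an assumption, not the text's setting); every sampled value of p− falls strictly below every sampled value of p+:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical strongly damped data (assumption, not the text's operators).
A, B, C = np.eye(3), 6.0 * np.eye(3), np.diag([1.0, 2.0, 3.0])

def zeros_of_phi(x):
    """Return (p_+(x), p_-(x)) on the branch <Ax,x> != 0."""
    a, b, c = x @ A @ x, x @ B @ x, x @ C @ x
    d = np.sqrt(b * b - 4.0 * a * c)
    return (-b + d) / (2.0 * a), (-b - d) / (2.0 * a)

# p_+ and p_- are scale-invariant, so sampling the unit sphere probes the
# spectral zones pi_+ = R(p_+) and pi_- = R(p_-).
samples = rng.normal(size=(2000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
plus = np.array([zeros_of_phi(x)[0] for x in samples])
minus = np.array([zeros_of_phi(x)[1] for x in samples])

# Disjointness: all sampled p_- values lie strictly below all p_+ values.
assert minus.max() < plus.min()
print(f"sampled pi_- in [{minus.min():.3f}, {minus.max():.3f}], "
      f"sampled pi_+ in [{plus.min():.3f}, {plus.max():.3f}]")
```

This is only a numerical probe of the two zones, of course, not a proof; the proposition above covers the general infinite-dimensional case.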

Lemma 6.3.10. Let L(λ) be a strongly damped pencil and suppose that ∞ ∉ π+. Let β = sup(π+) and

$$\alpha = \begin{cases} \inf(\pi_+) & \text{if } \inf(\pi_+) > -\infty,\\ \infty & \text{if } \inf(\pi_+) = -\infty. \end{cases}$$

Then we have L(α) ≤ 0 and L(β) ≥ 0.

Proof. Since ∞ ∉ π+ then, by Corollary 6.3.8, π+ is a connected subset of R and, therefore, an interval of R.

(1) If inf(π+) > −∞, then α = inf(π+) ∈ R.

Suppose, towards a contradiction, that there exists x0 ∈ H such that ⟨L(α)x0, x0⟩ > 0; then ϕ_{x0}(α) > 0 and α < p+(x0).

• If ⟨Ax0, x0⟩ ≠ 0 then, clearly, ⟨Ax0, x0⟩ > 0 (since for ⟨Ax0, x0⟩ < 0 the parabola ϕ_{x0} is concave and, as ϕ_{x0}(p+(x0)) = 0, ϕ′_{x0}(p+(x0)) > 0 and α < p+(x0), we would have ϕ_{x0}(α) < 0). Thus the parabola ϕ_{x0} is a convex function and, therefore, p−(x0) ∈ (α, p+(x0)) ⊆ π+; but this contradicts Proposition 6.3.9.

• If ⟨Ax0, x0⟩ = 0 then, since ∞ ∉ π+, ⟨Bx0, x0⟩ > 0. So, for every λ ∈ R, ϕ_{x0}(λ) = λ⟨Bx0, x0⟩ + ⟨Cx0, x0⟩ and, since α < p+(x0) and ϕ_{x0}(p+(x0)) = 0, we get ϕ_{x0}(α) < 0, a contradiction.

Hence, L(α) ≤ 0.

If π+ is not bounded above, then (α, +∞) ⊆ π+ and, therefore, A ≥ 0 (otherwise, if there is an x0 ∈ H such that ⟨Ax0, x0⟩ < 0, then the parabola ϕ_{x0} is concave and its zeros satisfy p+(x0) < p−(x0), so that p−(x0) ∈ (α, +∞) ⊆ π+, contradicting Proposition 6.3.9). Thus, L(β) = L(∞) = A ≥ 0.

If π+ is bounded above, then β = sup(π+) ∈ R and, by an argument similar to the proof of L(α) ≤ 0, we have L(β) ≥ 0.

(2) Suppose that inf(π+) = −∞; so α = ∞. Note that A ≤ 0 (since if there is an x0 ∈ H such that ⟨Ax0, x0⟩ > 0 then the parabola ϕ_{x0} is a convex function and, therefore, p−(x0) and p+(x0) are real numbers with p−(x0) < p+(x0), which puts p−(x0) ∈ (−∞, p+(x0)) ⊆ π+, contradicting Proposition 6.3.9). Thus, L(α) = A ≤ 0.

The proof that L(β) ≥ 0 in this case is similar to case (1).

Proposition 6.3.11. Let L(λ) be a strongly damped pencil and suppose that there exist α, β ∈ R∞ such that L(α) ≤ 0 and L(β) ≥ 0. Then the points α and β can be chosen in R if any of the following conditions holds:

i) A is indefinite.

ii) 0 ∈ ρ(A).

iii) λ = 0 is an isolated point of σ(A) and, further, it is an eigenvalue of finite multiplicity.

Proof. i) Suppose that α cannot be chosen in R; then α = ∞ and A = L(∞) ≤ 0, so A is not indefinite. Analogously, if β cannot be chosen in R, then β = ∞ and A = L(∞) ≥ 0, so A is not indefinite. Hence, if A is indefinite then α and β can be chosen in R.

ii) Suppose that 0 ∈ ρ(A). Since A is a bounded self-adjoint operator, its spectrum σ(A) is a non-empty and bounded subset of R. Further, since 0 ∈ ρ(A) and ρ(A) is an open subset of C, there exists δ > 0 such that |λ| > δ for every λ ∈ σ(A).

• If λ > δ for every λ ∈ σ(A), then ⟨Ax, x⟩ ≥ δ‖x‖² > 0 for every x ∈ H \ {0}. Therefore, π+ is bounded below and ∞ ∉ π+.

Suppose, towards a contradiction, that π+ is not bounded, and let (xn)n∈N ⊆ H be a sequence such that p+(xn) → +∞ and ‖xn‖ = 1 for every n ∈ N. Since

$$p_+(x_n) = \frac{-\langle Bx_n, x_n\rangle + d(x_n)}{2\langle Ax_n, x_n\rangle}$$

and (−⟨Bxn, xn⟩ + d(xn))n∈N is a bounded sequence in R (since A, B and C are bounded self-adjoint operators), we get ⟨Axn, xn⟩ → 0, which contradicts ⟨Axn, xn⟩ ≥ δ.

Let α = inf(π+) and β = sup(π+). Clearly α and β are real numbers and, by Lemma 6.3.10, L(α) ≤ 0 and L(β) ≥ 0.

• If λ < −δ for every λ ∈ σ(A), then ⟨Ax, x⟩ ≤ −δ‖x‖² < 0 for every x ∈ H \ {0}. Therefore, π+ is bounded above and ∞ ∉ π+. Analogously to the preceding item, there exist α, β ∈ R such that L(α) ≤ 0 and L(β) ≥ 0.

• If there are λ1, λ2 ∈ σ(A) such that λ1 < −δ and λ2 > δ, then A is indefinite and, by i), α and β can be chosen in R.

iii) Suppose that 0 is an eigenvalue of A with finite multiplicity and, moreover, an isolated point of σ(A); that is, 0 ∈ σd(A). Suppose, without loss of generality, that A ≥ 0 [the case A ≤ 0 is analogous and the case where A is indefinite is treated in part i)] and that ⟨Bx, x⟩ > 0 for every x ∈ U0, where U0 = {x ∈ H \ {0} | ⟨Ax, x⟩ = 0}. Thus, ∞ ∉ π+.

Suppose, towards a contradiction, that π+ is not bounded above. Then there exists a sequence (xn)n∈N in H such that, for every n ∈ N, ‖xn‖ = 1 and |p+(xn)| > n. If (xn)n∈N contains a subsequence (xnk)k∈N such that xnk ∉ U0 for every k ∈ N, then ⟨Axnk, xnk⟩ → 0 as k → ∞. Since A = A* and A ≥ 0, we have Axnk → 0 as k → ∞ and, as 0 ∈ σd(A), (xnk)k∈N contains a convergent subsequence (without restriction, suppose that (xnk)k∈N itself is convergent). Let x0 ∈ H be such that xnk → x0; then ‖x0‖ = 1, ⟨Ax0, x0⟩ = 0 and, therefore, ⟨Bx0, x0⟩ > 0. Thus p+(xnk) → p+(x0) = −⟨Cx0, x0⟩/⟨Bx0, x0⟩ ∈ R while, for every k ∈ N, |p+(xnk)| > nk ≥ k, a contradiction. Thus there exists n0 ∈ N such that xn ∈ U0 for every n ≥ n0 (without restriction, suppose that n0 = 1).

Since ⟨Axn, xn⟩ = 0 for every n ∈ N then, clearly, ⟨Axn, xn⟩ → 0 and, therefore, Axn → 0. Thus, there exist a subsequence (xnk)k∈N of (xn)n∈N and x0 ∈ H, with ‖x0‖ = 1, such that xnk → x0, ⟨Ax0, x0⟩ = 0 and ⟨Bx0, x0⟩ > 0. Then, again, p+(xnk) → p+(x0) ∈ R while, for every k ∈ N, |p+(xnk)| > nk ≥ k, a contradiction.

Hence, π+ is bounded and, by Lemma 6.3.10, there exist α and β in R such that L(α) ≤ 0 and L(β) ≥ 0.

Definition 6.3.12. Let L be a strongly damped pencil and let λ0 ∈ C∞. The point λ0 is called a spectral point of first (resp., second) type for L if, and only if, there exists a sequence (xn)n∈N in H, with ‖xn‖ = 1 for every n ∈ N, such that L(λ0)xn → 0 and p+(xn) → λ0 (resp., p−(xn) → λ0) in R∞. The set of all spectral points of first (resp., second) type for L is denoted by σ+(L) (resp., σ−(L)) and is called the spectrum of first (resp., second) type of L.

Remarks:

• Clearly, by the definition of spectral point of first or second type, σ+(L) ⊆ σ(L) and σ−(L) ⊆ σ(L) [thus the name is justified].

• Clearly, if λ0 is a spectral point of first or second type, then λ0 ∈ R∞. Thus σ+(L), σ−(L) and their closures in the topology of C∞ are contained in R∞.

• By Definition 6.3.12, if λ0 ∈ σ+(L) then λ0 belongs to the closure of π+ in the topology of R∞. That is, σ+(L) is contained in the closure of π+.

• Analogously to the preceding item, σ−(L) is contained in the closure of π−.

Proposition 6.3.13. Let L be a strongly damped pencil. Then σ+(L) and σ−(L) are closed in R∞.

Proof. Let λ0 be in the closure of σ+(L) in R∞. Suppose first that λ0 ∈ R and let (λk)k∈N ⊆ σ+(L) ∩ R be such that λk → λ0. For each k ∈ N, let (x_{k,n})n∈N ⊆ H be a sequence with ‖x_{k,n}‖ = 1 for every n ∈ N, such that L(λk)x_{k,n} → 0 and p+(x_{k,n}) → λk as n → ∞. Note that

$$\|L(\lambda_k) - L(\lambda_0)\| = \|(\lambda_k^2 - \lambda_0^2)A + (\lambda_k - \lambda_0)B\| \to 0 \quad (k \to \infty).$$

Then, clearly, there exist strictly increasing sequences (k_m)m∈N ⊆ N and (n_{k_m})m∈N ⊆ N such that, for every m ∈ N,

$$|\lambda_0 - \lambda_{k_m}| < \frac{1}{2m}, \qquad \|L(\lambda_0) - L(\lambda_{k_m})\| < \frac{1}{2m}, \qquad \|L(\lambda_{k_m})x_{k_m,n_{k_m}}\| < \frac{1}{2m}, \qquad |\lambda_{k_m} - p_+(x_{k_m,n_{k_m}})| < \frac{1}{2m}.$$

Thus (x_{k_m,n_{k_m}})m∈N ⊆ H is a normalized sequence such that

$$\|L(\lambda_0)x_{k_m,n_{k_m}}\| \le \|L(\lambda_{k_m})x_{k_m,n_{k_m}}\| + \|(L(\lambda_{k_m}) - L(\lambda_0))x_{k_m,n_{k_m}}\| < \frac{1}{m} \to 0 \quad (m \to \infty)$$

and

$$|p_+(x_{k_m,n_{k_m}}) - \lambda_0| \le |p_+(x_{k_m,n_{k_m}}) - \lambda_{k_m}| + |\lambda_{k_m} - \lambda_0| < \frac{1}{m} \to 0 \quad (m \to \infty).$$

That is, λ0 ∈ σ+(L).

Suppose now that λ0 = ∞ is an accumulation point of σ+(L) in R∞. Then there exists (λk)k∈N ⊆ σ+(L) such that |λk| > k for every k ∈ N. For each k ∈ N, let (x_{k,n})n∈N ⊆ H be a sequence with ‖x_{k,n}‖ = 1 for every n ∈ N, such that L(λk)x_{k,n} → 0 and p+(x_{k,n}) → λk as n → ∞. Clearly, there exist strictly increasing sequences (k_m)m∈N ⊆ N and (n_{k_m})m∈N ⊆ N such that, for every m ∈ N,

$$|p_+(x_{k_m,n_{k_m}})| > m \qquad\text{and}\qquad \frac{\|L(\lambda_{k_m})x_{k_m,n_{k_m}}\|}{|\lambda_{k_m}|^2} + \frac{\|B\|}{|\lambda_{k_m}|} + \frac{\|C\|}{|\lambda_{k_m}|^2} < \frac{1}{m}.$$

Thus (x_{k_m,n_{k_m}})m∈N is a normalized sequence such that p+(x_{k_m,n_{k_m}}) → ∞ (in R∞) and, for every m ∈ N,

$$\begin{aligned} \|L(\lambda_0)x_{k_m,n_{k_m}}\| &= \|Ax_{k_m,n_{k_m}}\| = \frac{1}{|\lambda_{k_m}|^2}\,\|\lambda_{k_m}^2 Ax_{k_m,n_{k_m}}\| \\ &\le \frac{1}{|\lambda_{k_m}|^2}\,\|\lambda_{k_m}^2 Ax_{k_m,n_{k_m}} + \lambda_{k_m}Bx_{k_m,n_{k_m}} + Cx_{k_m,n_{k_m}}\| + \frac{\|Bx_{k_m,n_{k_m}}\|}{|\lambda_{k_m}|} + \frac{\|Cx_{k_m,n_{k_m}}\|}{|\lambda_{k_m}|^2} \\ &\le \frac{\|L(\lambda_{k_m})x_{k_m,n_{k_m}}\|}{|\lambda_{k_m}|^2} + \frac{\|B\|}{|\lambda_{k_m}|} + \frac{\|C\|}{|\lambda_{k_m}|^2} < \frac{1}{m}. \end{aligned}$$

That is, L(λ0)x_{k_m,n_{k_m}} → 0 as m → +∞ [here L(∞) = A]. Thus λ0 = ∞ ∈ σ+(L). Hence σ+(L) is a closed subset of R∞. Analogously, σ−(L) is a closed subset of R∞.

Remarks:

• Let L be a strongly damped pencil. If x0 ∈ H is an eigenvector associated with the eigenvalue λ0 ∈ σp(L), then ⟨L(λ0)x0, x0⟩ = 0 and, therefore, λ0 is a root of ϕ_{x0}. That is, either λ0 = p+(x0) or λ0 = p−(x0).

• By the preceding item, if L is a strongly damped pencil, then σp(L) ⊆ π+ ∪̇ π− ⊆ R∞.

• Clearly, if λ0 ∈ σp(L) and if there exists x0 ∈ ker(L(λ0)) such that λ0 = p+(x0) (resp., λ0 = p−(x0)) then, by Proposition 6.3.9, for every x ∈ ker(L(λ0)) we have λ0 = p+(x) (resp., λ0 = p−(x)).

These remarks justify the following definition:

Definition 6.3.14. Let L be a strongly damped pencil and x0 ∈ H be an eigenvector associated with the eigenvalue λ0 ∈ σp(L). If λ0 = p+(x0), then λ0 is called an eigenvalue of first type and x0 is called an eigenvector of first type. Similarly, if λ0 = p−(x0) then λ0 is called an eigenvalue of second type and x0 is called an eigenvector of second type.

The set of all eigenvalues of first (resp., second) type of L is denoted by σp+(L) (resp., σp−(L)).

Remarks:

• It is clear that σp+(L) ⊆ σ+(L) and σp−(L) ⊆ σ−(L).

• By Definition 6.3.14, we have σp(L) = σp+(L) ∪̇ σp−(L).

Proposition 6.3.15. Let L be a strongly damped pencil. Then each boundary point of π+ (resp., π−) in R∞ belongs to σ+(L) (resp., σ−(L)).

Proof. Let λ0 be a boundary point of π+ in R∞; then, by Lemma 6.3.10, L(λ0) is a self-adjoint semi-definite operator. Let (xn)n∈N ⊆ H be a normalized sequence such that λn := p+(xn) → λ0.

Suppose that λ0 ≠ ∞. Clearly, since ⟨L(p+(x))x, x⟩ = ϕ_x(p+(x)) = 0 for every x ∈ H \ {0}, we have ⟨L(λ0)xn, xn⟩ = ⟨L(λ0)xn, xn⟩ − ⟨L(p+(xn))xn, xn⟩ = ⟨(L(λ0) − L(p+(xn)))xn, xn⟩ → 0 as n → ∞. Since L(λ0) is a self-adjoint and semi-definite linear operator, it follows that L(λ0)xn → 0. Therefore, λ0 ∈ σ+(L).

If λ0 = ∞, suppose, without restriction, that λn ∈ R \ {0} for every n ∈ N. Then, using ϕ_{xn}(λn) = 0,

$$\langle L(\infty)x_n, x_n\rangle = \langle Ax_n, x_n\rangle = \frac{1}{\lambda_n^2}\langle \lambda_n^2 Ax_n, x_n\rangle = \frac{1}{\lambda_n^2}\langle L(\lambda_n)x_n - \lambda_n Bx_n - Cx_n, x_n\rangle = -\frac{1}{\lambda_n}\langle Bx_n, x_n\rangle - \frac{1}{\lambda_n^2}\langle Cx_n, x_n\rangle \to 0.$$

Thus L(λ0)xn → 0 and, therefore, λ0 ∈ σ+(L).

Proposition 6.3.16. Let L(λ) = λ²A + λB + C be a strongly damped pencil. If λ = 0 is an isolated point of σ(A) and if it is an eigenvalue of A with finite multiplicity, then ∞ is an isolated point of σ(L).

Proof. Clearly ∞ ∈ σ(L). By Theorem 6.1.4 and, specifically, its proof, it is sufficient to show that ∞ is an accumulation point of ρ(L) in C∞. Since L is a strongly damped pencil, we have σp(L) ⊆ R∞ and, therefore, every neighborhood of ∞ in C∞ contains points of ρ(L).

Definition 6.3.17. Let L be a quadratic operator pencil and x0 ∈ H be an eigenvector of L with eigenvalue λ0. The vectors x1, …, xm are called associated vectors to x0 if, and only if, for each j = 1, …, m,

$$\sum_{k=0}^{j} \frac{1}{k!}\,\frac{d^k}{d\lambda^k}L(\lambda_0)\, x_{j-k} = 0.$$

Proposition 6.3.18. Let L be a strongly damped pencil with 0 ∈ ρ(A). Then no eigenvector of L has associated vectors.

Proof. Let x0 be an eigenvector of L with eigenvalue λ0. Clearly λ0 ∈ R, since σp(L) ⊆ R∞ and ∞ ∈ ρ(L).

Suppose, towards a contradiction, that there exists x1 ∈ H such that L(λ0)x1 + (d/dλ)L(λ0)x0 = 0. Thus ⟨(d/dλ)L(λ0)x0, x0⟩ = ⟨−L(λ0)x1, x0⟩ = −⟨x1, L(λ0)x0⟩ = 0; that is, ⟨2λ0Ax0 + Bx0, x0⟩ = 0 and, therefore, ϕ′_{x0}(λ0) = 0, which is impossible because the roots p+(x0) and p−(x0) of ϕ_{x0} are simple.

6.4 Operators Associated to a Pencil

Definition 6.4.1. Let L(λ) = λ²A + λB + C be a quadratic operator pencil in H with 0 ∈ ρ(A). Let H² = H × H and define G, 𝓛 : H² → H² by the block matrices

$$G = \begin{pmatrix} 0 & A \\ A & B \end{pmatrix} \qquad\text{and}\qquad \mathcal{L} = \begin{pmatrix} -A^{-1}B & -A^{-1}C \\ I & 0 \end{pmatrix}.$$

The operator 𝓛 is called the linearizer of the pencil L.

Remarks:

• Clearly G and 𝓛 are bounded linear operators acting in H². Moreover,

$$G\mathcal{L} = \begin{pmatrix} A & 0 \\ 0 & -C \end{pmatrix} \qquad\text{and}\qquad G\mathcal{L}^2 = \begin{pmatrix} -B & -C \\ -C & 0 \end{pmatrix}.$$

• Since A is bijective, G is also bijective and it is easy to see that

$$G^{-1} = \begin{pmatrix} -A^{-1}BA^{-1} & A^{-1} \\ A^{-1} & 0 \end{pmatrix}.$$

Therefore, G⁻¹ is bounded.

• G is self-adjoint since, for every (x, y), (u, v) ∈ H²,

$$\begin{aligned} \langle G(x,y),(u,v)\rangle_{H^2} &= \langle (Ay, Ax + By),(u,v)\rangle_{H^2} = \langle Ay, u\rangle + \langle Ax + By, v\rangle \\ &= \langle y, Au\rangle + \langle x, Av\rangle + \langle y, Bv\rangle = \langle x, Av\rangle + \langle y, Au + Bv\rangle \\ &= \langle (x,y),(Av, Au + Bv)\rangle_{H^2} = \langle (x,y), G(u,v)\rangle_{H^2}. \end{aligned}$$

• The name linearizer for 𝓛 is due to the following relation between 𝓛 and L: for every λ ∈ C and every x ∈ H, (𝓛 − λ)(λx, x) = (−A⁻¹L(λ)x, 0).

• We also have the following relation: for every λ ∈ R and every x ∈ H, ⟨G(λx, x), (λx, x)⟩_{H²} = 2λ⟨Ax, x⟩ + ⟨Bx, x⟩ = ϕ′_x(λ).
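Both relations are easy to check numerically in finite dimensions. The sketch below uses hypothetical 2×2 data (A = I, B = 5I, C positive definite, an assumption chosen so that the pencil is strongly damped) and verifies the identity (𝓛 − λ)(λx, x) = (−A⁻¹L(λ)x, 0) as well as the fact that the eigenvalues of the linearizer are exactly the points where L(λ) is singular:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2x2 strongly damped data (assumption): A = I, B = 5I,
# C positive definite with 4*lambda_max(C) < 25.
A = np.eye(2)
B = 5.0 * np.eye(2)
C = np.array([[2.0, 0.5], [0.5, 1.0]])
Ainv = np.linalg.inv(A)

# Block-matrix linearizer on H^2 = H x H.
Lin = np.block([[-Ainv @ B, -Ainv @ C],
                [np.eye(2), np.zeros((2, 2))]])

def pencil(lam):
    """L(lam) = lam^2 A + lam B + C."""
    return lam ** 2 * A + lam * B + C

# Check (Lin - lam)(lam*x, x) = (-A^{-1} L(lam) x, 0) for arbitrary lam and x.
lam0, x = 0.7, rng.normal(size=2)
lhs = (Lin - lam0 * np.eye(4)) @ np.concatenate([lam0 * x, x])
rhs = np.concatenate([-Ainv @ pencil(lam0) @ x, np.zeros(2)])
assert np.allclose(lhs, rhs)

# Eigenvalues of the linearizer are exactly the zeros of det L(lam).
eigs = np.linalg.eigvals(Lin)
for lam in eigs:
    assert abs(np.linalg.det(pencil(lam))) < 1e-8
print("linearizer eigenvalues:", np.round(np.sort(eigs.real), 4))
```

This finite-dimensional check mirrors the spectral equivalence made precise in Proposition 6.4.2 below.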

Proposition 6.4.2. Let L(λ) = λ²A + λB + C be a quadratic operator pencil with 0 ∈ ρ(A). Then: (i) ρ(L) = ρ(𝓛), (ii) σp(L) = σp(𝓛), (iii) σc(L) = σc(𝓛) and (iv) σr(L) = σr(𝓛).

Proof. Let λ ∈ C. Note that for every (u, v) and (w, z) in H²:

$$(\mathcal{L} - \lambda)(u, v) = (w, z) \iff \begin{cases} L(\lambda)v = -Bz - Aw - \lambda Az, \\ u = z + \lambda v. \end{cases} \tag{6.5}$$

Thus, (i) and (ii) hold by (6.5).

Let λ ∈ σc(𝓛) and w ∈ H.

• If λ = 0, let ((un, vn))n∈N ⊆ H² be such that 𝓛(un, vn) → (−A⁻¹w, 0). Then un → 0 and, therefore, −A⁻¹Cvn → −A⁻¹w. Thus L(0)vn → w and, therefore, 0 ∈ σc(L).

• Suppose that λ ≠ 0 and let ((un, vn))n∈N ⊆ H² be such that (𝓛 − λ)(un, vn) → (−(1/λ)A⁻¹w, 0). Then λ²Aun + λBun + λCvn → w and Cun − λCvn → 0. Summing the two sequences we obtain L(λ)un → w and, therefore, λ ∈ σc(L).

Now, let λ ∈ σc(L) and (w, z) ∈ H².

• If λ = 0, let (vn)n∈N ⊆ H be such that Cvn = L(0)vn → −Bz − Aw. Then 𝓛(z, vn) = (−A⁻¹Bz − A⁻¹Cvn, z) → (−A⁻¹Bz − A⁻¹(−Bz − Aw), z) = (w, z) and, therefore, 0 ∈ σc(𝓛).

• Suppose that λ ≠ 0 and let (un)n∈N, (vn)n∈N be sequences in H such that L(λ)vn → −Bz − Aw − λAz and un − λvn → z. Then

$$\begin{aligned} (\mathcal{L} - \lambda)(u_n, v_n) &= (-A^{-1}Bu_n - \lambda u_n - A^{-1}Cv_n,\; u_n - \lambda v_n) \\ &= (-A^{-1}[Bu_n + \lambda Au_n + Cv_n],\; u_n - \lambda v_n) \\ &= (-A^{-1}[\lambda A(u_n - \lambda v_n) + B(u_n - \lambda v_n) + \lambda^2 Av_n + \lambda Bv_n + Cv_n],\; u_n - \lambda v_n) \\ &= (-A^{-1}[\lambda A(u_n - \lambda v_n) + B(u_n - \lambda v_n) + L(\lambda)v_n],\; u_n - \lambda v_n) \\ &\to (-A^{-1}[\lambda Az + Bz - Bz - Aw - \lambda Az],\; z) = (w, z). \end{aligned}$$

Therefore λ ∈ σc(𝓛).

Thus (iii) holds and, therefore, (iv) does too.

Remarks:

• Note that σp(L) is countable because σp(L) = σp(𝓛) by Proposition 6.4.2, and the point spectrum of a bounded operator in a separable Hilbert space is countable.

• Since G : H² → H² is a bounded self-adjoint operator, by Definition 3.3.6 it induces an inner product [·,·] on H², given by [𝔵, 𝔶] := ⟨G𝔵, 𝔶⟩_{H²} for 𝔵, 𝔶 ∈ H². By the final remark in Section 4.1, (H², [·,·]) is a Krein space since G is bijective.

• Note that 𝓛 is a G-self-adjoint operator because, for every (u, v), (x, y) ∈ H²:

$$\begin{aligned} [\mathcal{L}(u,v),(x,y)] &= \langle G\mathcal{L}(u,v),(x,y)\rangle_{H^2} = \langle (Au, -Cv),(x,y)\rangle_{H^2} = \langle Au, x\rangle + \langle -Cv, y\rangle \\ &= \langle u, Ax\rangle + \langle v, -Cy\rangle = \langle (u,v),(Ax, -Cy)\rangle_{H^2} = \langle (u,v), G\mathcal{L}(x,y)\rangle_{H^2} \\ &= \langle G(u,v), \mathcal{L}(x,y)\rangle_{H^2} = [(u,v), \mathcal{L}(x,y)]. \end{aligned}$$

Proposition 6.4.3. Let L be a strongly damped pencil in H such that 0 ∈ ρ(A). Then 𝓛 is a definitizable operator in (H², [·,·]) (see Appendix A.3) and 𝓛 has no G-neutral eigenvectors.

Proof. (1) By Theorem 6.1.4, σ(L) is bounded and, therefore, by Proposition 6.4.2, ρ(𝓛) ≠ ∅. Now, by Lemma 6.3.10, there exist α, β ∈ R∞ such that L(α) ≤ 0 and L(β) ≥ 0 and, by Proposition 6.3.11, α and β can be chosen in R. So, let α, β ∈ R be such that L(α) ≤ 0 and L(β) ≥ 0.

• If α ≠ β then:

$$G(\mathcal{L} - \alpha)(\mathcal{L} - \beta) = G\mathcal{L}^2 - (\alpha + \beta)G\mathcal{L} + \alpha\beta G = \begin{pmatrix} -B - (\alpha+\beta)A & -C + \alpha\beta A \\ -C + \alpha\beta A & (\alpha+\beta)C + \alpha\beta B \end{pmatrix}$$

$$= \frac{1}{\alpha - \beta}\left\{ \begin{pmatrix} L(\beta) & -\alpha L(\beta) \\ -\alpha L(\beta) & \alpha^2 L(\beta) \end{pmatrix} - \begin{pmatrix} L(\alpha) & -\beta L(\alpha) \\ -\beta L(\alpha) & \beta^2 L(\alpha) \end{pmatrix} \right\}.$$

Let T be the first matrix in the braces. For every 𝔵 = (x, y) ∈ H²,

$$\langle T\mathfrak{x}, \mathfrak{x}\rangle_{H^2} = \langle L(\beta)x - \alpha L(\beta)y, x\rangle - \alpha\langle L(\beta)x - \alpha L(\beta)y, y\rangle = \langle L(\beta)(x - \alpha y), x - \alpha y\rangle \ge 0.$$

Thus T is positive semi-definite. Analogously, the second matrix in the braces is negative semi-definite. Therefore, for every 𝔵 ∈ H²:

[(𝓛 − α)(𝓛 − β)𝔵, 𝔵] ≥ 0 if α − β > 0, and [(𝓛 − α)(𝓛 − β)𝔵, 𝔵] ≤ 0 if α − β < 0.

Thus, in this case, 𝓛 is definitizable with definitizing polynomial p(t) = (t − α)(t − β) if α − β > 0 or p(t) = −(t − α)(t − β) if α − β < 0.

• If α = β, then L(α) = 0 and, therefore, −C = α²A + αB. Thus, since L is strongly damped, for every x ∈ H \ {0}:

$$\langle Bx, x\rangle^2 > 4\langle Ax, x\rangle\langle Cx, x\rangle \iff \langle (2\alpha A + B)x, x\rangle^2 > 0.$$

Thus the operator D := 2αA + B is self-adjoint and definite, and the operator

$$G(\mathcal{L} - \alpha)^2 = \begin{pmatrix} -(2\alpha A + B) & \alpha(2\alpha A + B) \\ \alpha(2\alpha A + B) & -\alpha^2(2\alpha A + B) \end{pmatrix} = \begin{pmatrix} -D & \alpha D \\ \alpha D & -\alpha^2 D \end{pmatrix}$$

is such that, for every 𝔵 = (x, y) ∈ H²,

$$[(\mathcal{L} - \alpha)^2\mathfrak{x}, \mathfrak{x}] = \langle -Dx + \alpha Dy, x\rangle - \alpha\langle -Dx + \alpha Dy, y\rangle = \langle -Dx + \alpha Dy, x - \alpha y\rangle = \langle -D(x - \alpha y), x - \alpha y\rangle.$$

Thus, in this case, 𝓛 is definitizable with definitizing polynomial p(t) = −(t − α)² if D is positive definite and p(t) = (t − α)² if D is negative definite.

Hence, 𝓛 is definitizable.

(2) Suppose, towards a contradiction, that 𝔵0 = (u0, v0) ∈ H² \ {0} is a G-neutral eigenvector of 𝓛 with eigenvalue λ0. Then u0 = λ0v0 (thus v0 ≠ 0), L(λ0)v0 = 0 and 2λ0⟨Av0, v0⟩ + ⟨Bv0, v0⟩ = 0. That is, λ0 is a real root of ϕ_{v0} such that ϕ′_{v0}(λ0) = 0, which is impossible for a strongly damped pencil.

Proposition 6.4.4. Let L be a strongly damped pencil in H with 0 ∈ ρ(A). Then

$$\sigma^+(L) = \sigma^+(\mathcal{L}), \qquad \sigma^-(L) = \sigma^-(\mathcal{L}), \qquad \sigma_p^+(L) = \sigma_p^+(\mathcal{L}), \qquad \sigma_p^-(L) = \sigma_p^-(\mathcal{L}),$$

and every eigenvalue λ ∈ σp(L) = σp(𝓛) has the same multiplicity for L and 𝓛.

Proof. (1) Let λ0 ∈ R.

• Suppose that λ0 ∈ σ+(L) and let (xn)n∈N be a sequence in H with ‖xn‖ = 1 for every n ∈ N, such that L(λ0)xn → 0 and p+(xn) → λ0. For each n ∈ N, let

$$\mathfrak{x}_n = \frac{1}{\sqrt{1 + [p_+(x_n)]^2}}\,\bigl(p_+(x_n)x_n,\; x_n\bigr) \in H^2.$$

Note that, for every n ∈ N,

$$[\mathfrak{x}_n, \mathfrak{x}_n] = \langle G\mathfrak{x}_n, \mathfrak{x}_n\rangle_{H^2} = \frac{\langle G(p_+(x_n)x_n, x_n), (p_+(x_n)x_n, x_n)\rangle_{H^2}}{1 + [p_+(x_n)]^2} = \frac{\varphi'_{x_n}(p_+(x_n))}{1 + [p_+(x_n)]^2} > 0$$

and

$$(\mathcal{L} - \lambda_0)\mathfrak{x}_n = \frac{1}{\sqrt{1 + [p_+(x_n)]^2}}\,\bigl(-A^{-1}[p_+(x_n)\lambda_0 Ax_n + p_+(x_n)Bx_n + Cx_n],\; [p_+(x_n) - \lambda_0]x_n\bigr).$$

Therefore, for every n ∈ N, [𝔵n, 𝔵n] > 0 and (𝓛 − λ0)𝔵n → 0 as n → ∞. Moreover, ‖𝔵n‖_{H²} = 1 for every n ∈ N. The J-norm of the Krein space (H², [·,·]) is equivalent to ‖·‖_{H²}, so if we set 𝔶n = 𝔵n/‖𝔵n‖_J for every n ∈ N, we have ‖𝔶n‖_J = 1, [𝔶n, 𝔶n] > 0 and (𝓛 − λ0)𝔶n → 0 as n → ∞. That is, λ0 ∈ σ+(𝓛) (see Definition 4.4.5).

• Suppose that λ0 ∈ σ+(𝓛) and let (𝔵n)n∈N ⊆ H², with 𝔵n = (un, vn), be a sequence such that, for every n ∈ N, ‖𝔵n‖_J = 1 and [𝔵n, 𝔵n] > 0, and such that (𝓛 − λ0)𝔵n → 0 as n → ∞. Thus un − λ0vn → 0, λ0Aun + Bun + Cvn → 0 and, therefore,

$$L(\lambda_0)v_n = \lambda_0^2 Av_n + \lambda_0 Bv_n + Cv_n = (-\lambda_0 A - B)(u_n - \lambda_0 v_n) + [\lambda_0 Au_n + Bu_n + Cv_n] \to 0.$$

If λ0 is not a critical point of 𝓛 then, by Proposition A.3.7, the sequence (𝔵n)n∈N can be chosen such that [𝔵n, 𝔵n] ≥ δ for some δ > 0; that is, [𝔵n, 𝔵n] = 2 Re⟨Avn, un⟩ + ⟨Bvn, vn⟩ ≥ δ. Thus, since 2 Re⟨Avn, λ0vn − un⟩ → 0, there exists n0 ∈ N such that, for every n ≥ n0,

$$\varphi'_{v_n}(\lambda_0) = 2\operatorname{Re}\langle Av_n, u_n\rangle + \langle Bv_n, v_n\rangle + 2\operatorname{Re}\langle Av_n, \lambda_0 v_n - u_n\rangle \ge \frac{\delta}{2}.$$

If ⟨Avn, vn⟩ = 0 for every n ≥ n0, then ⟨Bvn, vn⟩ ≥ δ/2 > 0 and λ0⟨Bvn, vn⟩ + ⟨Cvn, vn⟩ → 0; thus p+(vn) = −⟨Cvn, vn⟩/⟨Bvn, vn⟩ → λ0. Now, if ⟨Avn, vn⟩ ≠ 0 for every n ≥ n0, then

$$\langle Av_n, v_n\rangle\bigl(\lambda_0 - p_-(v_n)\bigr) = \lambda_0\langle Av_n, v_n\rangle - \Bigl[-\tfrac{1}{2}\langle Bv_n, v_n\rangle - \tfrac{1}{2}d(v_n)\Bigr] = \tfrac{1}{2}\bigl[2\lambda_0\langle Av_n, v_n\rangle + \langle Bv_n, v_n\rangle\bigr] + \tfrac{1}{2}d(v_n) \ge \frac{\delta}{4}.$$

Thus, since ⟨L(λ0)vn, vn⟩ = ⟨Avn, vn⟩(λ0 − p−(vn))(λ0 − p+(vn)) → 0, we have p+(vn) → λ0. Therefore, λ0 ∈ σ+(L).

Suppose now that λ0 is a critical point of 𝓛; then λ0 ∈ {α, β}, where α and β are as in the proof of Proposition 6.4.3. Thus, by Proposition 6.3.15, λ0 ∈ σ+(L).

Hence σ+(L) = σ+(𝓛) and, analogously, σ−(L) = σ−(𝓛).

(2) Let λ0 ∈ R. By Proposition 6.4.2 (and its proof), σp(L) = σp(𝓛), and (u, v) is an eigenvector of 𝓛 with eigenvalue λ0 if, and only if, v is an eigenvector of L with eigenvalue λ0 and u = λ0v.

Note that [(λ0v, v), (λ0v, v)] = 2λ0⟨Av, v⟩ + ⟨Bv, v⟩ = ϕ′_v(λ0) and, therefore, λ0 ∈ σp+(L) ⇔ λ0 ∈ σp+(𝓛).

Hence σp+(L) = σp+(𝓛) and, analogously, σp−(L) = σp−(𝓛). Further, the correspondence between eigenvectors of L and eigenvectors of 𝓛 shows that every λ ∈ σp(L) = σp(𝓛) has the same multiplicity for L and 𝓛.

Remark. If 0 ∈ ρ(A), then 0 ∈ ρ(A²) and, therefore, A² is a positive self-adjoint operator with

$$\frac{1}{\|(A^{-1})^2\|} = \operatorname{dist}(0, \sigma(A^2)).$$

• Let γ = ‖A⁻¹‖²(1 + ‖B‖); then, for every x ∈ H with ‖x‖ = 1,

$$\langle (2\gamma A^2 + B)x, x\rangle = 2\gamma\langle A^2x, x\rangle + \langle Bx, x\rangle \ge 2\gamma\operatorname{dist}(0, \sigma(A^2)) - \|B\| = \frac{2\gamma}{\|(A^{-1})^2\|} - \|B\| \ge \frac{2\gamma}{\|A^{-1}\|^2} - \|B\| = 2 + \|B\| \ge 2.$$

That is, 2γA² + B is a uniformly positive self-adjoint operator and, therefore, a bijective operator.

• Thus, the function (x, y) ↦ ⟨(2γA² + B)x, y⟩, x, y ∈ H, defines a positive definite inner product on H.

• Let |||·||| be the norm induced by the inner product of the preceding item. Note that, for every x ∈ H, √2 ‖x‖ ≤ |||x||| ≤ ‖2γA² + B‖^{1/2} ‖x‖. Therefore |||·||| and ‖·‖ are equivalent; in particular, (H, |||·|||) is a Banach space.
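The lower bound in this remark is easy to test numerically. The sketch below uses hypothetical data (an assumption: A self-adjoint and invertible but indefinite, B self-adjoint), computes γ = ‖A⁻¹‖²(1 + ‖B‖), and confirms that the smallest eigenvalue of 2γA² + B is at least 2:

```python
import numpy as np

# Hypothetical bounded self-adjoint data with 0 in rho(A) (assumption):
# A invertible but indefinite, B an arbitrary symmetric matrix.
A = np.diag([1.0, -0.5, 2.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, -3.0, 0.5],
              [0.0, 0.5, 1.0]])

# gamma = ||A^{-1}||^2 (1 + ||B||), with spectral (operator) norms.
gamma = np.linalg.norm(np.linalg.inv(A), 2) ** 2 * (1 + np.linalg.norm(B, 2))
M = 2 * gamma * A @ A + B          # the operator 2*gamma*A^2 + B

# Uniform positivity: the smallest eigenvalue of the symmetric M is >= 2.
assert np.linalg.eigvalsh(M).min() >= 2
print("gamma =", round(gamma, 3),
      " min eig of 2*gamma*A^2+B =", round(np.linalg.eigvalsh(M).min(), 3))
```

The estimate above actually guarantees the stronger bound 2 + ‖B‖, which is why the assertion holds with a comfortable margin.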

In the rest of this section it is assumed that 0 ∈ ρ(A).

Definition 6.4.5. The norm |||·||| defined in the preceding remark is called the G-norm on H and a contraction with respect to this G-norm is called a G-contraction.

Remark. Let U0 be the subspace of (H², [·,·]) given by

$$U_0 = \left\{ \mathfrak{x} \in H^2 : \mathfrak{x} = \begin{pmatrix} \gamma Ax \\ x \end{pmatrix},\; x \in H \right\}. \tag{6.6}$$

Its orthogonal complement U0⊥ is given by

$$U_0^\perp = \left\{ \mathfrak{x} \in H^2 : \mathfrak{x} = \begin{pmatrix} -\gamma Ax - A^{-1}Bx \\ x \end{pmatrix},\; x \in H \right\} \tag{6.7}$$

since (u, v) ⊥ U0 ⇔ [(u, v), (γAx, x)] = 0 for every x ∈ H ⇔ ⟨G(u, v), (γAx, x)⟩_{H²} = 0 for every x ∈ H ⇔ ⟨(Av, Au + Bv), (γAx, x)⟩_{H²} = 0 for every x ∈ H ⇔ ⟨Av, γAx⟩ + ⟨Au, x⟩ + ⟨Bv, x⟩ = 0 for every x ∈ H ⇔ ⟨γA²v + Bv + Au, x⟩ = 0 for every x ∈ H ⇔ γA²v + Bv + Au = 0 ⇔ u = −γAv − A⁻¹Bv.

• If 𝔵 ∈ U0, then there exists x ∈ H such that 𝔵 = (γAx, x) and, therefore, [𝔵, 𝔵] = ⟨G𝔵, 𝔵⟩_{H²} = ⟨Ax, γAx⟩ + ⟨γA²x + Bx, x⟩ = ⟨(2γA² + B)x, x⟩ = |||x|||². Hence (U0, [·,·]) is a positive definite subspace of H² and, further, as a normed space (with the norm induced by [·,·] on U0) it is isometrically isomorphic to (H, |||·|||). Since (H, |||·|||) is a Banach space, (U0, [·,·]) is a Hilbert space.

• If 𝔵 ∈ U0⊥, then there exists x ∈ H such that 𝔵 = (−γAx − A⁻¹Bx, x) and, therefore, [𝔵, 𝔵] = ⟨G𝔵, 𝔵⟩_{H²} = ⟨Ax, −γAx − A⁻¹Bx⟩ + ⟨−γA²x, x⟩ = ⟨−(2γA² + B)x, x⟩ = −|||x|||². Hence (U0⊥, [·,·]) is a negative definite subspace. Thus, (U0⊥, −[·,·]) is a positive definite subspace which, as a normed space (with the norm induced by −[·,·] on U0⊥), is isometrically isomorphic to (H, |||·|||). Since (H, |||·|||) is a Banach space, (U0⊥, −[·,·]) is a Hilbert space.

• By the preceding items, U0 ∩ U0⊥ = {0}.

• Note that if u, v ∈ H, then the system of equations

$$u = \gamma Ax - \gamma Ay - A^{-1}By, \qquad v = x + y$$

has a unique solution (since 2γA² + B is bijective) given by

$$y = (2\gamma A^2 + B)^{-1}[-Au + \gamma A^2 v], \qquad x = v - y.$$

That is, H² = U0 + U0⊥.

Hence H² = U0 ∔ U0⊥ is a Krein decomposition of (H², [·,·]). Let P+ and P− be the fundamental projectors onto U0 and U0⊥, respectively.

Proposition 6.4.6. For every maximal positive semi-definite subspace U of (H², [·,·]) there exists a G-contraction K in H such that

$$U = \left\{ \mathfrak{x} \in H^2 : \mathfrak{x} = \begin{pmatrix} \gamma Ax - \gamma AKx - A^{-1}BKx \\ x + Kx \end{pmatrix},\; x \in H \right\}. \tag{6.8}$$

Proof. Let U be a maximal positive semi-definite subspace of (H², [·,·]) and let K⁺ be the angular operator of U with respect to U0. By Theorem 4.2.4, we have P+(U) = U0 and, therefore, U = {𝔵+ + K⁺𝔵+ | 𝔵+ ∈ U0}. Moreover, by Theorem 3.6.3, note that −[K⁺𝔵+, K⁺𝔵+] ≤ [𝔵+, 𝔵+] for every 𝔵+ ∈ U0. Note also that for each x ∈ H there exists a unique y ∈ H such that

$$K^+\begin{pmatrix} \gamma Ax \\ x \end{pmatrix} = \begin{pmatrix} -\gamma Ay - A^{-1}By \\ y \end{pmatrix}. \tag{6.9}$$

So, define K : H → H by Kx := the unique y ∈ H such that (6.9) holds, x ∈ H. Clearly, K is a linear operator.

Let x ∈ H, y = Kx and 𝔵+ = (γAx, x). Since

$$-[K^+\mathfrak{x}_+, K^+\mathfrak{x}_+] = -\langle GK^+\mathfrak{x}_+, K^+\mathfrak{x}_+\rangle_{H^2} = -\langle (Ay, -\gamma A^2y), (-\gamma Ay - A^{-1}By, y)\rangle_{H^2} = \langle (2\gamma A^2 + B)y, y\rangle = |||y|||^2 = |||Kx|||^2$$

and [𝔵+, 𝔵+] = ⟨(Ax, γA²x + Bx), (γAx, x)⟩_{H²} = ⟨(2γA² + B)x, x⟩ = |||x|||², we get |||Kx||| ≤ |||x|||. Hence, K is a G-contraction such that (6.8) holds.

Remark. By imitating the proof of Proposition 6.4.6, it is clear that if U is a closed and maximal positive definite subspace of (H², [·,·]), then (6.8) holds with a G-contraction K such that |||Kx||| < |||x||| for every x ∈ H with x ≠ 0. Thus K has no eigenvalues of norm 1 and, by Theorem A.1.6, the subspace (I + K)(H) is dense in H. Therefore the operator

$$Z = (\gamma A - \gamma AK - A^{-1}BK)(I + K)^{-1} \tag{6.10}$$

is a densely defined and closed linear operator acting in H.

Proposition 6.4.7. Let U be a closed and maximal positive definite subspace of (H², [·,·]). Then there exists a densely defined and closed linear operator Z acting in H such that

$$U = \left\{ \begin{pmatrix} Zx \\ x \end{pmatrix} : x \in \mathcal{D}(Z) \right\}. \tag{6.11}$$

Moreover,

$$U^\perp = \left\{ \begin{pmatrix} -A^{-1}Z^*Ax - A^{-1}Bx \\ x \end{pmatrix} : x \in \mathcal{D}(Z^*A) \right\}. \tag{6.12}$$

Proof. The existence of a Z that satisfies (6.11) is clear from the preceding remark and (6.8); it is given by (6.10).

Now, (u, v) ⊥ U ⇔ [(u, v), (Zx, x)] = 0 for every x ∈ D(Z) ⇔ ⟨(Av, Au + Bv), (Zx, x)⟩_{H²} = 0 for every x ∈ D(Z) ⇔ ⟨Av, Zx⟩ + ⟨Au + Bv, x⟩ = 0 for every x ∈ D(Z) ⇔ v ∈ D(Z*A) and ⟨Z*Av + Au + Bv, x⟩ = 0 for every x ∈ D(Z) ⇔ v ∈ D(Z*A) and Z*Av + Au + Bv = 0 (since D(Z) is dense in H) ⇔ v ∈ D(Z*A) and u = −A⁻¹Z*Av − A⁻¹Bv. Thus, (6.12) holds.

Theorem 6.4.8. Let Z be a bounded linear operator in H. Then the subspace

$$U = \left\{ \begin{pmatrix} Zx \\ x \end{pmatrix} : x \in H \right\} \tag{6.13}$$

is invariant under the operator 𝓛 if, and only if,

$$AZ^2 + BZ + C = 0.$$

In this case, we have that σp(Z) = σp(𝓛|_U), σc(Z) = σc(𝓛|_U) and σr(Z) = σr(𝓛|_U). Moreover, a vector x ∈ H is an eigenvector of Z with eigenvalue λ if, and only if, (λx, x) is an eigenvector of the restriction of 𝓛 to U.

Proof. (1) U is 𝓛-invariant ⇔ −A⁻¹BZx − A⁻¹Cx = Z(Zx) for every x ∈ H ⇔ AZ²x + BZx + Cx = 0 for every x ∈ H ⇔ AZ² + BZ + C = 0.

(2) Suppose that U is 𝓛-invariant. Then, for every x, y ∈ H, (𝓛 − λ)(Zx, x) = (Zy, y) ⇔ (Z − λ)x = y. Therefore, σp(Z) = σp(𝓛|_U), σc(Z) = σc(𝓛|_U) and σr(Z) = σr(𝓛|_U). Clearly, x ∈ H is an eigenvector of Z with eigenvalue λ if, and only if, (λx, x) is an eigenvector of 𝓛|_U.
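In finite dimensions, this theorem suggests a concrete way to compute a root of the quadratic operator equation: take an invariant subspace of the linearizer spanned by eigenvectors of the form (λx, x) and read Z off from it. The sketch below does this for the hypothetical 2×2 strongly damped data used earlier (A = I, B = 5I, C positive definite, an assumption); the two largest eigenvalues of the linearizer yield a bounded root Z1 of AZ² + BZ + C = 0:

```python
import numpy as np

# Hypothetical 2x2 strongly damped data (assumption): A = I, B = 5I, C > 0.
A = np.eye(2)
B = 5.0 * np.eye(2)
C = np.array([[2.0, 0.5], [0.5, 1.0]])

# Linearizer (A = I, so A^{-1} drops out of the first block row).
Lin = np.block([[-B, -C], [np.eye(2), np.zeros((2, 2))]])

# All four eigenvalues are real; the two largest form the first-type zone.
vals, vecs = np.linalg.eig(Lin)
order = np.argsort(vals.real)
V = vecs[:, order[2:]].real          # eigenvectors (lam*x, x) of the two largest
top, bottom = V[:2, :], V[2:, :]

# U = {(Zx, x)} forces Z1 = top @ bottom^{-1}; the theorem then gives a root.
Z1 = top @ np.linalg.inv(bottom)
residual = A @ Z1 @ Z1 + B @ Z1 + C
assert np.linalg.norm(residual) < 1e-8

# Z1 carries exactly the two chosen eigenvalues of the linearizer.
assert np.allclose(np.sort(np.linalg.eigvals(Z1).real), np.sort(vals.real)[2:])
print("root Z1 found; residual norm =", np.linalg.norm(residual))
```

The choice of the two largest eigenvalues here is specific to this example; in general the relevant invariant subspace is the maximal positive semi-definite one produced by Theorem 6.4.10 below.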

Remarks:

• Let U be a subspace of H² and Z : H → H be a linear operator. We say that U and Z correspond to each other if, and only if, equation (6.13) holds.

• Clearly, if Z : H → H is a bounded operator, then the subspace U given by formula (6.13) is homeomorphic to H.

Definition 6.4.9. The strongly damped pencil L satisfies property (S) if, and only if, for every sequence (xn)n∈N ⊆ H the following is true:

$$x_n \xrightarrow{\,w\,} 0,\;\; \langle Ax_n, x_n\rangle \to 0,\;\; \langle Bx_n, x_n\rangle \to 0 \;\text{ and }\; \langle Cx_n, x_n\rangle \to 0 \quad\text{implies}\quad x_n \to 0. \tag{S}$$

Theorem 6.4.10. Let L be a strongly damped pencil with property (S) and with 0 ∈ ρ(A). Then there exist bounded operators Z1 and Z2 which satisfy the equation

$$AZ^2 + BZ + C = 0, \tag{6.14}$$

and additionally

$$Z_1^*A + AZ_1 + B > 0 \qquad\text{and}\qquad Z_2^*A + AZ_2 + B < 0. \tag{6.15}$$

Moreover σ(Z1) = σ+(L) and σ(Z2) = σ−(L). The eigenvalues of Z1 (resp., Z2) are exactly the eigenvalues of first type (resp., second type) of the pencil L and the corresponding eigenvectors coincide. In particular, we can choose Z1 and Z2 such that they satisfy

$$Z_2 = -A^{-1}Z_1^*A - A^{-1}B. \tag{6.16}$$

In this case, λ²A + λB + C = (λ − Z2*)A(λ − Z1) = (λ − Z1*)A(λ − Z2).

Proof. (1) By Proposition 6.4.3 and Theorem A.3.9, the operator 𝓛 has an 𝓛-invariant, maximal positive semi-definite and positive definite subspace U ⊆ H² such that σ(𝓛|_U) = σ+(𝓛). By Proposition 6.4.7, there exists a densely defined and closed linear operator Z1 acting in H such that

$$U = \left\{ \begin{pmatrix} Z_1x \\ x \end{pmatrix} : x \in \mathcal{D}(Z_1) \right\}.$$

Suppose, towards a contradiction, that Z1 is not bounded; then there exists a sequence (xn)n∈N ⊆ D(Z1) such that xn → 0 and ‖Z1xn‖ = 1 for every n ∈ N. Without loss of generality, we may suppose that (Z1xn)n∈N is weakly convergent, since every bounded sequence in a Hilbert space contains a weakly convergent subsequence. Clearly Z1xn ⇀ 0.

Let 𝔵n = (Z1xn, xn) ∈ U (n ∈ N). Then [𝔵n, 𝔵n] = ⟨Axn, Z1xn⟩ + ⟨AZ1xn, xn⟩ + ⟨Bxn, xn⟩ → 0 and, by Proposition 2.2.11 and since U is an 𝓛-invariant semi-definite subspace, we have

$$|[\mathcal{L}\mathfrak{x}_n, \mathfrak{x}_n]|^2 \le [\mathcal{L}\mathfrak{x}_n, \mathcal{L}\mathfrak{x}_n]\,[\mathfrak{x}_n, \mathfrak{x}_n] = [\mathcal{L}^2\mathfrak{x}_n, \mathfrak{x}_n]\,[\mathfrak{x}_n, \mathfrak{x}_n] = \langle G\mathcal{L}^2\mathfrak{x}_n, \mathfrak{x}_n\rangle_{H^2}\,[\mathfrak{x}_n, \mathfrak{x}_n] = \bigl(\langle -BZ_1x_n - Cx_n, Z_1x_n\rangle - \langle CZ_1x_n, x_n\rangle\bigr)\,[\mathfrak{x}_n, \mathfrak{x}_n] \to 0.$$

Thus [𝓛𝔵n, 𝔵n] → 0 and, analogously, [𝓛𝔵n, 𝓛𝔵n] → 0. Therefore ⟨AZ1xn, Z1xn⟩ → 0 and ⟨BZ1xn, Z1xn⟩ → 0.

By the proof of Proposition 6.3.11, since 0 ∈ ρ(A), there exist α and β in R such that L(α) ≤ 0 and L(β) ≥ 0. Thus, for every n ∈ N,

$$\alpha^2\langle AZ_1x_n, Z_1x_n\rangle + \alpha\langle BZ_1x_n, Z_1x_n\rangle \le -\langle CZ_1x_n, Z_1x_n\rangle \le \beta^2\langle AZ_1x_n, Z_1x_n\rangle + \beta\langle BZ_1x_n, Z_1x_n\rangle.$$

Therefore ⟨CZ1xn, Z1xn⟩ → 0. Since L satisfies property (S), we get Z1xn → 0, which contradicts ‖Z1xn‖ = 1. Thus Z1 is a densely defined, closed and bounded operator; therefore D(Z1) = H.

By Theorem 6.4.8, we have AZ1² + BZ1 + C = 0 and σ(Z1) = σ(𝓛|_U).

Clearly, Z1*A + AZ1 + B is a self-adjoint operator. Note that Z1*A + AZ1 + B > 0 ⇔ ⟨(Z1*A + AZ1 + B)x, x⟩ > 0 for every x ∈ H \ {0} ⇔ ⟨Ax, Z1x⟩ + ⟨AZ1x, x⟩ + ⟨Bx, x⟩ > 0 for every x ∈ H \ {0} ⇔ [(Z1x, x), (Z1x, x)] > 0 for every x ∈ H \ {0} ⇔ [𝔵, 𝔵] > 0 for every 𝔵 ∈ U \ {0}. Thus, since U is positive definite, we have Z1*A + AZ1 + B > 0.

(2) Since U is maximal positive semi-definite, by Theorem 4.2.14, U⊥ is maximal negative semi-definite. Moreover, since U⊥ is 𝓛-invariant (because U is 𝓛-invariant and 𝓛 is G-self-adjoint), σ(𝓛|_{U⊥}) = σ−(𝓛).

Let Z2 = −A⁻¹Z1*A − A⁻¹B ∈ B(H). Note that Z2 is the unique operator in B(H) for which U⊥ has the form (6.13). By Proposition 6.4.7 and Theorem 6.4.8, AZ2² + BZ2 + C = 0 and σ(Z2) = σ(𝓛|_{U⊥}). Note that Z2*A + AZ2 + B = −(Z1*A + AZ1 + B) < 0. Further, (λ − Z2*)A(λ − Z1) = λ²A + λB + C = (λ − Z1*)A(λ − Z2).

− + − (3) By the proposition A.3.9, σ(L U ⊥ ) = σ (L); thus, σ(Z1) = σ (L) and σ(Z2) = σ (L). + + − − + Since σ (L) = σ (L) and σ (L) = σ (L) (by the proposition 6.3.4), then σ(Z1) = σ (L) and

− σ(Z2) = σ (L).

Clearly, since AZ1² + BZ1 + C = 0, Z1x = λx implies L(λ)x = 0; that is, σp(Z1) ⊆ σp+(L) and every eigenvector of Z1 is also an eigenvector of L. Analogously, since AZ2² + BZ2 + C = 0, Z2x = λx implies L(λ)x = 0; that is, σp(Z2) ⊆ σp−(L) and every eigenvector of Z2 is also an eigenvector of L.

Let λ ∈ σp+(L) and x ∈ H with x ≠ 0 such that L(λ)x = 0; note that λ ∉ σp(Z2). Since L(λ)x = (λ − Z2∗)A(λ − Z1)x, it follows that λ ∈ σp(Z1) and x is an eigenvector of Z1. Thus, σp(Z1) = σp+(L) and the corresponding eigenvectors coincide.

Analogously, σp(Z2) = σp−(L) and the corresponding eigenvectors coincide.

Corollary 6.4.11. Let L, Z1 and Z2 be as in Theorem 6.4.10. Then σp(L) = σp(Z1) ∪ σp(Z2), where Zi denotes the linear pencil defined by Zi(λ) = λ − Zi, i = 1, 2.

Proof. σp(L) = σp+(L) ∪ σp−(L) = σp(Z1) ∪ σp(Z2).

Remark. Let L be a strongly damped pencil with 0 ∈ ρ(A) and assume that it satisfies the property (S). Then:

• If Z ∈ B(H) satisfies the equation AZ² + BZ + C = 0 and Z∗A + AZ + B > 0, then the subspace given by the formula (6.13) is L-invariant and maximal positive semi-definite (the maximality is due to the fact that U is homeomorphic to H). Moreover, σ(Z) = σ+(L).

• If, further, L does not have singular critical points then, by Theorem A.3.9, U and, therefore, Z are uniquely determined and U is uniformly positive. Thus, Z is uniquely determined by the property Z∗A + AZ + B > 0. Further, from the proof of Theorem 6.4.9 and the uniform positivity of U, the operator T := Z∗A + AZ + B is self-adjoint, uniformly positive and, therefore, a homeomorphism from H onto itself.

• Note that TZ = Z∗AZ − C = Z∗T = (TZ)∗. Therefore, Z is similar to a self-adjoint operator, since Z = T^{−1/2}(T^{−1/2}TZT^{−1/2})T^{1/2} and T^{−1/2}TZT^{−1/2} is self-adjoint.

This remark proves the following theorem.

Theorem 6.4.12. Let L be a strongly damped pencil with 0 ∈ ρ(A) that satisfies the property (S). If the linearizer L does not have singular critical points, then the operators Z1 and Z2 in Theorem 6.4.10 are uniquely determined by the property (6.15). Further, they are similar to self-adjoint operators.
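In finite dimensions the objects of Theorems 6.4.10 and 6.4.12 can be computed directly. The following sketch (illustrative names and random data, not from the thesis) builds a strongly damped matrix pencil with A ≫ 0 and C < 0, forms the linearizer L = [[−A⁻¹B, −A⁻¹C], [I, 0]], and recovers the root Z1 with σ(Z1) = σ+(L) from the L-invariant subspace spanned by the eigenvectors of positive type:

```python
import numpy as np

# Illustrative sketch: every eigenvector of the linearizer has the form
# (mu*v, v) with L1(mu)v = 0, and for A > 0, C < 0 the eigenvalues split
# into a positive and a negative group; Z1 is read off the positive group.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # A uniformly positive
M = rng.standard_normal((n, n)); C = -(M @ M.T) - np.eye(n)    # C negative definite
M = rng.standard_normal((n, n)); B = (M + M.T) / 2             # strong damping is automatic here

Ainv = np.linalg.inv(A)
Lin = np.block([[-Ainv @ B, -Ainv @ C],
                [np.eye(n), np.zeros((n, n))]])
mu, w = np.linalg.eig(Lin)
pos = mu.real > 0                        # the n eigenvalues of positive type
V = w[n:, pos].real                      # second components: eigenvectors v of L1(mu)
Z1 = w[:n, pos].real @ np.linalg.inv(V)  # Z1 maps v to mu*v

residual = A @ Z1 @ Z1 + B @ Z1 + C      # operator-root equation A Z1^2 + B Z1 + C = 0
assert int(pos.sum()) == n
assert np.linalg.norm(residual) < 1e-8 * np.linalg.norm(C)
```

The companion root Z2 = −A⁻¹Z1∗A − A⁻¹B then collects the negative-type spectrum, matching the construction in the proof of Theorem 6.4.10.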

6.5 Operator-Differential Equations Associated to a Quadratic Pencil

Given a quadratic pencil L(λ) = λ²A + λB + C we can associate to it the following operator-differential equation:

Au″(t) + Bu′(t) + Cu(t) = 0,     (6.17)

where u is an H-valued function.

Remarks:

• If Z ∈ B(H), then the function T : R → B(H), t ↦ e^{tZ}, is infinitely differentiable with (dⁿ/dtⁿ)e^{tZ} = Zⁿe^{tZ} for every n ∈ N0 (see [14]).

• By the previous item, if Z ∈ B(H) and x0 ∈ H is a fixed vector, then the function u : R → H, u(t) = e^{tZ}x0, is infinitely differentiable with u⁽ⁿ⁾(t) = Zⁿu(t) for every n ∈ N0.

Proposition 6.5.1. Let Z ∈ B(H). The vector function u(t) = e^{tZ}x0 is a solution of (6.17) for any x0 ∈ H if, and only if, AZ² + BZ + C = 0.

Proof. (⇐) Let x0 ∈ H and suppose that AZ² + BZ + C = 0. If u(t) = e^{tZ}x0, then Au″(t) + Bu′(t) + Cu(t) = AZ²u(t) + BZu(t) + Cu(t) = (AZ² + BZ + C)u(t) = 0; that is, u is a solution of (6.17).

(⇒) Suppose that for any x0 ∈ H the function u(t) = e^{tZ}x0 is a solution of (6.17). Then, for every x ∈ H and every t ∈ R, (AZ² + BZ + C)e^{tZ}x = 0. Taking t = 0 we obtain (AZ² + BZ + C)x = 0 for every x ∈ H and, therefore, AZ² + BZ + C = 0.
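A quick numerical check of Proposition 6.5.1 on matrices (a sketch with illustrative data; C is constructed so that the root equation holds exactly):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
Z = 0.5 * rng.standard_normal((n, n))
C = -(A @ Z @ Z + B @ Z)          # forces A Z^2 + B Z + C = 0
x0 = rng.standard_normal(n)

u = lambda t: expm(t * Z) @ x0    # candidate solution u(t) = e^{tZ} x0

# central finite differences approximate u'(t) and u''(t)
t, h = 0.7, 1e-4
du = (u(t + h) - u(t - h)) / (2 * h)
ddu = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
assert np.linalg.norm(A @ ddu + B @ du + C @ u(t)) < 1e-4
```

Conversely, if the residual AZ² + BZ + C were nonzero, the same test would fail at t = 0 for generic x0, mirroring the "⇒" direction of the proof.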

Remark. Suppose that the pencil L satisfies the conditions of Theorem 6.4.10. Then there exist Z1, Z2 ∈ B(H) such that AZj² + BZj + C = 0 (j = 1, 2) and, therefore, for any x1, x2 ∈ H, the functions uj(t) = e^{tZj}xj (j = 1, 2) are solutions of (6.17). Further, for any x1, x2 ∈ H,

u(t) = e^{tZ1}x1 + e^{tZ2}x2     (6.18)

is a solution of (6.17).

Remark. Consider the Cauchy problem in H² given by

w′(t) − Lw(t) = 0,  w(0) = w0 = (u0, v0),     (6.19)

where L is the linearizer of the pencil L (see [14]). The unique solution of this problem is given by w(t) = e^{tL}w0. Note that if w(t) = (u(t), v(t)), then u′(t) + A⁻¹Bu(t) + A⁻¹Cv(t) = 0 and v′(t) − u(t) = 0. Therefore, v(t) satisfies Av″(t) + Bv′(t) + Cv(t) = 0 with v(0) = v0 and v′(0) = u0.

Conversely, if Av″(t) + Bv′(t) + Cv(t) = 0 with v(0) = v0 and v′(0) = u0, then setting u(t) = v′(t) we have that w(t) = (u(t), v(t)) satisfies the Cauchy problem (6.19).

Theorem 6.5.2. Suppose that the pencil L satisfies the conditions of Theorem 6.4.10 and let Z1 and Z2 be operator roots of L. Then the following are equivalent:

1) Any solution of (6.17) has the form (6.18).

2) R(W) = H², where W is the block operator matrix W = [[Z1, Z2], [I, I]] on H².

Proof. 1) ⇒ 2): Let (u0, v0) ∈ H² and let w(t) = (u(t), v(t)) be the unique solution of the Cauchy problem (6.19). Since v(t) satisfies (6.17), there exist x1, x2 ∈ H such that v(t) = e^{tZ1}x1 + e^{tZ2}x2 and, therefore, Z1x1 + Z2x2 = v′(0) = u0 and x1 + x2 = v0; that is, (u0, v0) ∈ R(W). Hence R(W) = H².

2) ⇒ 1): Let v(t) be a solution of (6.17). Since R(W) = H², there exist x1, x2 ∈ H such that Z1x1 + Z2x2 = v′(0) and x1 + x2 = v(0). Since the Cauchy problem (6.19), with u0 = v′(0) and v0 = v(0), has a unique solution, and (f′(t), f(t)) with f(t) = e^{tZ1}x1 + e^{tZ2}x2 is a solution of (6.19), we conclude that v(t) = f(t); that is, v(t) is of the form (6.18).
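On matrices, the representation (6.18) and the role of the block operator W can be checked directly. This sketch (illustrative data; A = I for simplicity) builds two roots Z1, Z2 of Z² + BZ + C = 0 with Z1 − Z2 invertible, and recovers x1, x2 from initial data by solving W(x1, x2) = (v′(0), v(0)):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 3
Z1 = rng.standard_normal((n, n))
Z2 = rng.standard_normal((n, n))
# choose B, C so that both Z1 and Z2 are roots of Z^2 + B Z + C = 0
B = -(Z1 @ Z1 - Z2 @ Z2) @ np.linalg.inv(Z1 - Z2)
C = -(Z1 @ Z1 + B @ Z1)
assert np.linalg.norm(Z2 @ Z2 + B @ Z2 + C) < 1e-10

u0 = rng.standard_normal(n)        # prescribed v'(0)
v0 = rng.standard_normal(n)        # prescribed v(0)
W = np.block([[Z1, Z2], [np.eye(n), np.eye(n)]])
x = np.linalg.solve(W, np.concatenate([u0, v0]))
x1, x2 = x[:n], x[n:]

# u(t) = e^{tZ1} x1 + e^{tZ2} x2 matches the data and solves u'' + B u' + C u = 0
t = 0.5
e1, e2 = expm(t * Z1), expm(t * Z2)
du = Z1 @ e1 @ x1 + Z2 @ e2 @ x2
ddu = Z1 @ Z1 @ e1 @ x1 + Z2 @ Z2 @ e2 @ x2
assert np.linalg.norm(x1 + x2 - v0) < 1e-10
assert np.linalg.norm(ddu + B @ du + C @ (e1 @ x1 + e2 @ x2)) < 1e-8
```

Block column reduction of W shows that R(W) = H² exactly when Z1 − Z2 is surjective and ker(W) = {0} exactly when Z1 − Z2 is injective, in line with the remark following Corollary 6.5.4.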

Theorem 6.5.3. Let L, Z1, Z2 and W be as in Theorem 6.5.2. Then the following are equivalent:

1) Every solution of (6.17) of the form (6.18) admits a unique such representation.

2) ker(W) = {0}.

Proof. It is clear.

Corollary 6.5.4. Under the conditions of Theorem 6.5.3, the following are equivalent:

1) Any solution of (6.17) has a unique representation of the form (6.18).

2) W is bijective.

Remark. Clearly, W is bijective if and only if the operator Z1 − Z2 is bijective. Moreover, by the proof of Theorem 6.4.10, Z1 − Z2 is bijective if and only if AZ1 + Z1∗A + B is bijective. Since AZ1 + Z1∗A + B > 0, the operator AZ1 + Z1∗A + B is bijective if and only if it is a uniformly definite operator.

6.6 A special case

In this section we assume that the operator A satisfies A ≫ 0; that is, there exists δ > 0 such that A ≥ δI. Moreover, the operator C satisfies C ≤ 0.

Remarks:

• For every x ∈ H with x ≠ 0, the values p+(x) and p−(x) are real and distinct.

• π+ and π− are bounded subsets of R. Further, π+ ⊆ [0, +∞) and π− ⊆ (−∞, 0].

• σ+(L) and σ−(L) are compact subsets of R.

• Clearly 0 ∈ ρ(A), and if (xn)n∈N is a sequence in H such that xn ⇀ 0 and ⟨Axn, xn⟩ → 0, then xn → 0. Thus, L satisfies the property (S) and, therefore, the conditions of Theorem 6.3.10.

• For every x ≠ 0, ⟨AA⁻¹x, A⁻¹x⟩ ≥ δ‖A⁻¹x‖² ≥ δ‖x‖²/‖A‖²; that is, ⟨A⁻¹x, x⟩ = ⟨x, A⁻¹x⟩ ≥ (δ/‖A‖²)‖x‖². Therefore, A⁻¹ ≫ 0.

• Note that x̂ = (u, v) ∈ ker(L) ⇔ u = 0 and v ∈ ker(C).

Proposition 6.6.1. Let λ0 ∈ R.

(i) If λ0 ∉ π+ ∪ π−, then L(λ0) > 0 or L(λ0) < 0.

(ii) If λ0 ∈ R \ [π+ ∪ π−], then L(λ0) ≥ 0 or L(λ0) ≤ 0.

(iii) If λ0 does not belong to the closure of π+ ∪ π−, then L(λ0) is uniformly definite.

Proof. Since λ0 ∈ R, the operator L(λ0) is clearly self-adjoint.

(i) Suppose, towards a contradiction, that there exists x ≠ 0 such that ⟨L(λ0)x, x⟩ = 0. Then λ0 is a root of ϕx and, therefore, λ0 ∈ π+ ∪ π−, a contradiction. Hence ⟨L(λ0)x, x⟩ ≠ 0 for every x ≠ 0 and, since the unit sphere of H is connected and the form is continuous, it has constant sign; that is, L(λ0) > 0 or L(λ0) < 0.

(ii) It is clear from (i).

(iii) If λ0 is not in the closure of π+ ∪ π− then, by (i), L(λ0) > 0 or L(λ0) < 0. Suppose, without loss of generality, that L(λ0) > 0.

Suppose, towards a contradiction, that L(λ0) is not uniformly definite. Then there exists (xn)n∈N ⊆ H, with ‖xn‖ = 1 for every n ∈ N, such that ⟨L(λ0)xn, xn⟩ → 0; that is, λ0²⟨Axn, xn⟩ + λ0⟨Bxn, xn⟩ + ⟨Cxn, xn⟩ → 0.

For each n ∈ N set qn(λ) := λ²⟨Axn, xn⟩ + λ⟨Bxn, xn⟩ + ⟨Cxn, xn⟩. Since A, B and C are bounded and self-adjoint, the coefficients of the qn are bounded, so there exist a subsequence (qnk) and a polynomial q0 such that qnk → q0 as k → ∞. Note that, since A ≫ 0, we have q0 ≠ 0 and, moreover, q0(λ0) = 0.

By the Hurwitz theorem, see [12], every neighborhood of λ0 contains at least one root of every polynomial qnk with a sufficiently large index k. Since the roots of the qnk lie in π+ ∪ π−, it follows that λ0 belongs to the closure of π+ ∪ π−, a contradiction.

Chapter 7

Hyperbolic and Elliptic-hyperbolic Pencils

In this chapter (H, ⟨·, ·⟩) is a separable Hilbert space, A, B, C ∈ B(H) are self-adjoint operators, and the quadratic operator pencil L has the form

L(λ) = A + λB + λ²C.     (7.1)

By R+, R−, R+∞ and R−∞ we denote, respectively, the subsets [0, +∞), (−∞, 0], [0, +∞) ∪ {∞} and (−∞, 0] ∪ {∞} of R∞.

7.1 Hyperbolic Pencils

Definition 7.1.1. The operator pencil L is called hyperbolic if and only if A ≫ 0, C ≤ 0, C ≢ 0 and ⟨Bx, x⟩ ≠ 0 for every x ∈ ker(C) with x ≠ 0.

Remarks:

• Clearly, every hyperbolic quadratic pencil is strongly damped.

• From Definitions 6.3.6 and 6.3.5 we have π+ ⊆ R−∞ and π− ⊆ R+∞.

• 0 ∉ σ(L).

• Clearly, if A ≫ 0 then 0 ∈ ρ(A) and, therefore, we can associate to the hyperbolic pencil the block operators on H²:

G = [[0, A], [A, B]]  and  L = [[−A⁻¹B, −A⁻¹C], [I, 0]].

• As in Chapter 6, [x, y] := ⟨Gx, y⟩_{H²} (x, y ∈ H²). Thus, (H², [·, ·]) is a Krein space.

• Let µ = 1/λ. Then L(λ) = λ²L1(µ), where L1(µ) = µ²A + µB + C.

The proof of the following proposition is essentially a rewriting of the proof of Proposition 6.3.2.

Proposition 7.1.2. If L is a hyperbolic pencil, then the sign of the form ⟨Bx, x⟩ is constant on ker(C) \ {0}.

Theorem 7.1.3 (Langer Factorization). If L is a hyperbolic operator pencil, then there exist operators Z1 and Z2 such that:

(1) L(λ) = (I − λZ2∗)A(I − λZ1) = (I − λZ1∗)A(I − λZ2).

(2) σ(Z1) ⊆ R+∞ and σ(Z2) ⊆ R−∞, where Zi is the linear pencil defined by Zi(λ) = I − λZi, i = 1, 2.

(3) The operator S = A(Z1 − Z2) is self-adjoint and positive.

(4) U+ = {(Z1x, x) | x ∈ H} and U− = {(Z2x, x) | x ∈ H} are maximal positive definite and maximal negative definite subspaces of (H², [·, ·]), respectively. Further, U+ and U− are L-invariant.

Proof. Consider the pencil L1(µ) = µ²A + µB + C (where µ = 1/λ). Clearly L1 is strongly damped and, since A ≫ 0, L1 satisfies the property (S) of Definition 6.4.5. By Theorem 6.4.10 (and its proof) there exist Z1, Z2 ∈ B(H), with Z2 = −A⁻¹Z1∗A − A⁻¹B, such that

L1(µ) = (µ − Z2∗)A(µ − Z1) = (µ − Z1∗)A(µ − Z2),

σ(F1) = σ+(L1), σ(F2) = σ−(L1), Z1∗A + AZ1 + B > 0 and Z2∗A + AZ2 + B < 0, where Fi is the linear pencil defined by Fi(µ) = µ − Zi, i = 1, 2. Further, the subspaces U+ = {(Z1x, x) | x ∈ H} and U− = {(Z2x, x) | x ∈ H} are, respectively, maximal positive definite and maximal negative definite; they are also L-invariant. Therefore,

L(λ) = (I − λZ2∗)A(I − λZ1) = (I − λZ1∗)A(I − λZ2)

and, thus, (1) and (4) hold.

Since σ(F1) = σ+(L1) ⊆ R+∞ (see Section 6.6) and λ = 1/µ, we get σ(Z1) ⊆ R+∞. Analogously, σ(Z2) ⊆ R−∞ and, thus, (2) holds.

From Z2 = −A⁻¹Z1∗A − A⁻¹B we obtain −AZ2 = Z1∗A + B. Therefore, S = A(Z1 − Z2) = AZ1 − AZ2 = AZ1 + Z1∗A + B > 0. Thus, (3) holds.

Corollary 7.1.4. Let L be a hyperbolic pencil and let Z1 and Z2 be the operators in its Langer factorization. Then ker(Z1) ⊆ ker(C) and ker(Z2) ⊆ ker(C). Moreover, if ⟨Bx, x⟩ > 0 (resp. < 0) for every x ∈ ker(C) \ {0}, then ker(Z2) = {0} (resp. ker(Z1) = {0}).

Proof. By (1) of Theorem 7.1.3, C = Z2∗AZ1 = Z1∗AZ2 and, therefore, ker(Z1) and ker(Z2) are subsets of ker(C).

Suppose that ⟨Bx, x⟩ > 0 for every x ∈ ker(C) \ {0}. Then for every x ∈ ker(Z2) \ {0} we would have 0 < ⟨Bx, x⟩ = ⟨(−Z1∗A − AZ2)x, x⟩ = ⟨x, −AZ1x⟩ = −⟨Sx, x⟩ < 0, a contradiction. Hence, ker(Z2) = {0}.

Analogously, if ⟨Bx, x⟩ < 0 for every x ∈ ker(C) \ {0}, then ker(Z1) = {0}.

Proposition 7.1.5. If Z1 and Z2 are the operators in the Langer factorization of the hyperbolic pencil L, then the operator S = A(Z1 − Z2) satisfies the inequality SA⁻¹S ≥ 4D, where D = −C ≥ 0.

Proof. Since S is self-adjoint and C = Z2∗AZ1 = Z1∗AZ2, we have

SA⁻¹S = S∗A⁻¹S = (Z1∗ − Z2∗)AA⁻¹A(Z1 − Z2) = (Z1∗ − Z2∗)A(Z1 − Z2)
= Z1∗AZ1 − Z1∗AZ2 − Z2∗AZ1 + Z2∗AZ2 = Z1∗AZ1 + Z2∗AZ2 − 2C.

Thus, for every x ∈ H,

⟨(SA⁻¹S + 4C)x, x⟩ = ⟨(Z1∗AZ1 + Z2∗AZ2 + 2C)x, x⟩
= ⟨Z1∗AZ1x, x⟩ + ⟨Z2∗AZ2x, x⟩ + 2⟨Cx, x⟩
= ⟨A^{1/2}Z1x, A^{1/2}Z1x⟩ + ⟨A^{1/2}Z2x, A^{1/2}Z2x⟩ + ⟨Z2∗AZ1x, x⟩ + ⟨Z1∗AZ2x, x⟩
= ‖A^{1/2}Z1x‖² + ‖A^{1/2}Z2x‖² + ⟨A^{1/2}Z1x, A^{1/2}Z2x⟩ + ⟨A^{1/2}Z2x, A^{1/2}Z1x⟩
= ⟨A^{1/2}Z1x + A^{1/2}Z2x, A^{1/2}Z1x + A^{1/2}Z2x⟩ ≥ 0.

Hence, SA⁻¹S ≥ 4D, where D = −C.
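Proposition 7.1.5, together with the positivity of S from Theorem 7.1.3, can be sanity-checked numerically. This sketch (illustrative data only) builds a hyperbolic situation with A ≫ 0 and C < 0, computes the root Z1 of AZ² + BZ + C = 0 with positive spectrum from the linearizer of L1(µ) = µ²A + µB + C, sets Z2 = −A⁻¹Z1∗A − A⁻¹B, and verifies S = A(Z1 − Z2) > 0 and SA⁻¹S ≥ 4D:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # A uniformly positive
M = rng.standard_normal((n, n)); C = -(M @ M.T) - np.eye(n)    # C < 0, so D = -C > 0
M = rng.standard_normal((n, n)); B = (M + M.T) / 2

# root Z1 with positive spectrum, via the linearizer of L1(mu)
Ainv = np.linalg.inv(A)
Lin = np.block([[-Ainv @ B, -Ainv @ C], [np.eye(n), np.zeros((n, n))]])
mu, w = np.linalg.eig(Lin)
pos = mu.real > 0
Z1 = w[:n, pos].real @ np.linalg.inv(w[n:, pos].real)
Z2 = -Ainv @ Z1.T @ A - Ainv @ B       # companion root

S = A @ (Z1 - Z2)                      # S = A Z1 + Z1^T A + B
D = -C
assert np.linalg.norm(S - S.T) < 1e-8 * np.linalg.norm(S)      # S self-adjoint
assert np.linalg.eigvalsh((S + S.T) / 2).min() > 0             # S > 0
gap = S @ Ainv @ S - 4 * D
assert np.linalg.eigvalsh((gap + gap.T) / 2).min() > -1e-7     # S A^{-1} S >= 4D
```

The gap SA⁻¹S − 4D is exactly the Gram term (A^{1/2}(Z1 + Z2))∗(A^{1/2}(Z1 + Z2)) appearing at the end of the proof.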

Definition 7.1.6. Let L be a hyperbolic pencil.

• σd(L) := {λ ∈ R∞ | λ is an isolated point of σ(L) and λ ∈ σp(L) has finite multiplicity} is called the discrete spectrum of L.

• E+ := {x ∈ H | x is an eigenvector associated with an eigenvalue λ ∈ σd(L), λ > 0}.

• E− := {x ∈ H | x is an eigenvector associated with an eigenvalue λ ∈ σd(L), λ < 0}.

Lemma 7.1.7. Let L be a hyperbolic pencil. Then there exists δ > 0 such that ‖L(λ)⁻¹‖ ≤ δ⁻¹|sin(θ)|⁻¹ for every λ = re^{iθ} ∈ C \ R.

Proof. Since A ≫ 0, δ := inf_{‖x‖=1}⟨Ax, x⟩ > 0. Let x ∈ H with ‖x‖ = 1. Suppose first that x ∈ ker(C) and, without restriction, that ⟨Bx, x⟩ > 0. Then

‖L(λ)x‖ ≥ |⟨L(λ)x, x⟩| = |⟨Ax + λBx, x⟩| = |⟨Ax, x⟩ + λ⟨Bx, x⟩| = r⟨Ax, x⟩|µ − p+(x)|,

where µ = 1/λ and p+ is the functional of first type associated to the pencil L1(µ) = µ²A + µB + C (see Definition 6.3.5). Thus,

‖L(λ)x‖ ≥ r⟨Ax, x⟩|µ||sin(−θ)| = ⟨Ax, x⟩|sin(θ)| ≥ δ|sin(θ)|.

If x ∉ ker(C) then, with µ = 1/λ and p+, p− the functionals of first and second type associated to the pencil L1(µ) = µ²A + µB + C, we have

‖L(λ)x‖ ≥ |⟨L(λ)x, x⟩| = |⟨Ax, x⟩ + λ⟨Bx, x⟩ + λ²⟨Cx, x⟩|
= r²⟨Ax, x⟩ |µ² + µ⟨Bx, x⟩/⟨Ax, x⟩ + ⟨Cx, x⟩/⟨Ax, x⟩|
= r²⟨Ax, x⟩ |µ − p+(x)||µ − p−(x)|
≥ r²⟨Ax, x⟩ |µ|²|sin(−θ)| = ⟨Ax, x⟩|sin(θ)| ≥ δ|sin(θ)|,

since p+(x) ≥ 0 and p−(x) ≤ 0 (one of the factors |µ − p±(x)| is at least |Im µ| = |µ||sin θ| and the other at least |µ|, according to the sign of Re µ).

So, for every y ∈ H, ‖y‖ ≤ δ⁻¹|sin(θ)|⁻¹‖L(λ)y‖ and, therefore, ‖L(λ)⁻¹y‖ ≤ δ⁻¹|sin(θ)|⁻¹‖y‖. Hence ‖L(λ)⁻¹‖ ≤ δ⁻¹|sin(θ)|⁻¹.
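The bound of Lemma 7.1.7 can be spot-checked numerically for a matrix hyperbolic pencil (illustrative data; with C negative definite the kernel condition on B is vacuous):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.standard_normal((n, n)); A = M @ M.T + np.eye(n)   # A uniformly positive
M = rng.standard_normal((n, n)); C = -(M @ M.T)            # C <= 0
M = rng.standard_normal((n, n)); B = (M + M.T) / 2
delta = np.linalg.eigvalsh(A).min()                        # delta = inf <Ax, x>

for r in (0.1, 1.0, 10.0):
    for theta in (0.3, 1.2, 2.5):
        lam = r * np.exp(1j * theta)                       # lam = r e^{i theta}, not real
        L = A + lam * B + lam**2 * C
        bound = 1.0 / (delta * abs(np.sin(theta)))
        assert np.linalg.norm(np.linalg.inv(L), 2) <= bound + 1e-9
```

Note that the bound is uniform in r = |λ|, which is exactly the observation exploited in the remark below the lemma.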

Remark. From Lemma 7.1.7 we see that if λ = re^{iθ} ∈ C \ R, then |⟨L(λ)⁻¹x, x⟩| ≤ δ⁻¹|sin(θ)|⁻¹‖x‖² for every x ∈ H (independently of |λ|). However, in [19] it is proved that ‖L(λ)⁻¹x‖ = o(1/|λ|) as λ → ∞ for each x ∈ H.

Proposition 7.1.8. If L is a hyperbolic pencil and σ(L) ∩ R+ ⊆ σd(L), then the system E+ is a total subset of the subspace R(Z1).

Analogously, if σ(L) ∩ R− ⊆ σd(L), then the system E− is a total subset of the subspace R(Z2).

Proof. Let x ∈ E+; then there exists λ > 0 such that λ ∈ σp(L) and x is an eigenvector associated to λ. By Theorem 7.1.3, (I − λZ1)x = 0. Thus Z1x = (1/λ)x and, since λ ≠ 0, x ∈ R(Z1); that is, E+ ⊆ R(Z1) and, therefore, Span(E+) ⊆ R(Z1).

Suppose, towards a contradiction, that E+ is not a total subset of R(Z1). Then there exists x ≠ 0 such that x ∈ R(Z1) and x ⊥ Span(E+).

Consider the function g(λ) = (I − λZ1∗)⁻¹x, λ ∈ C \ R. Clearly g is an analytic function. Further, for each λ = |λ|e^{iθ} ∈ C \ R we have g(λ) = A(I − λZ2)L(λ)⁻¹x and, by Lemma 7.1.7,

‖g(λ)‖ = ‖(I − λZ1∗)⁻¹x‖ = ‖A(I − λZ2)L(λ)⁻¹x‖ ≤ M(1 + |λ|)|sin(θ)|⁻¹,

where M is a positive constant. Therefore g(λ) is a polynomial of degree ≤ 1.

From the preceding remark, and since g(λ) = A(I − λZ2)L(λ)⁻¹x for every λ ∈ C \ R, we conclude that g is constant. Let w ∈ H be such that g(λ) = w for every λ in the domain of g. Then x = (I − λZ1∗)w for every such λ; hence Z1∗w = 0 and, letting λ → 0, x = w. Thus x ∈ ker(Z1∗) and, since x ∈ R(Z1) ∩ ker(Z1∗) = R(Z1) ∩ R(Z1)⊥, we get x = 0, a contradiction.

Theorem 7.1.9. If L is a hyperbolic pencil and σ(L) ∩ R+ ⊆ σd(L), then the system {D^{1/2}x | x ∈ E+}, where D = −C, is complete in the subspace R(C)‾.

Analogously, if σ(L) ∩ R− ⊆ σd(L), then the system {D^{1/2}x | x ∈ E−} is complete in the subspace R(C)‾.

Proof. Suppose that σ(L) ∩ R+ ⊆ σd(L). Let Z1 and Z2 be the operators in the Langer factorization and set S = A(Z1 − Z2).

• Recall that S > 0; hence S is injective. In particular, H = ker(S) ⊕ R(S)‾ = R(S)‾, so R(S) is dense in H. Set T = D^{1/2}S⁻¹. Note that D(T) = R(S) is a dense subspace of H.

For every x = Su, with u ∈ H, we have ‖Tx‖² = ‖D^{1/2}u‖² = ⟨D^{1/2}u, D^{1/2}u⟩ = ⟨Du, u⟩ ≤ (1/4)⟨SA⁻¹Su, u⟩ = (1/4)⟨A⁻¹Su, Su⟩ = (1/4)⟨A⁻¹x, x⟩ = (1/4)‖A^{−1/2}x‖²; thus ‖Tx‖ ≤ (1/2)‖A^{−1/2}‖‖x‖. So T is a bounded operator with dense domain in H and, therefore, T admits a bounded extension to H, which will also be called T. Clearly R(T)‾ = R(D^{1/2})‾ = R(D)‾ = R(C)‾.

• Let E0 be a basis for the subspace ker(C). Suppose, towards a contradiction, that S(E0) ∪ S(E+) is not total in H; then there exists w ∈ H, with w ≠ 0, such that ⟨Sx, w⟩ = 0 for every x ∈ E0 ∪ E+. Thus ⟨x, Sw⟩ = 0 for every x ∈ E0 ∪ E+.

Since E+ is a total subset of R(Z1), we get Sw ⊥ R(Z1) and, therefore, Sw ∈ ker(Z1∗); that is, 0 = Z1∗Sw = (Z1∗AZ1 − Z1∗AZ2)w = (Z1∗AZ1 − C)w. Now, if w ∉ ker(Z1) then ⟨Z1∗AZ1w, w⟩ > 0 and, thus, 0 = ⟨Z1∗Sw, w⟩ = ⟨(Z1∗AZ1 − C)w, w⟩ > 0 (since −C ≥ 0), which is impossible. Hence w ∈ ker(Z1) and, by Corollary 7.1.4, w ∈ ker(C).

Since Sw ⊥ E0 and w ∈ ker(C), we have ⟨Sw, w⟩ = 0 and, since S > 0, w = 0, a contradiction. Hence S(E0) ∪ S(E+) is total in H.

• Since S(E0) ∪ S(E+) is a total subset of H and T ∈ B(H), the system D^{1/2}(E+) = T(S(E0) ∪ S(E+)) is a total subset of R(T)‾ = R(C)‾.

Theorem 7.1.10. Let L be a hyperbolic pencil, let D = −C, and suppose that B admits a representation B = B+ − B−, where B+ ≥ 0 and B− satisfies, for every x, y ∈ H:

|⟨B−x, y⟩|² ≤ 4⟨Ax, x⟩⟨Dy, y⟩.     (7.2)

Then the operator −Z2 in the Langer factorization of L is similar to an accretive operator and R(Z2)‾ = H. In particular, if σ(L) ∩ R−∞ ⊆ σd(L), then the system E− is a total subset of H.

Proof. Set W1 = A^{1/2}Z2A^{−1/2}, S1 = A^{−1/2}SA^{−1/2} and D1 = A^{−1/2}DA^{−1/2}. Then, by Proposition 7.1.5, S1² = A^{−1/2}SA⁻¹SA^{−1/2} ≥ 4A^{−1/2}DA^{−1/2} = 4D1 ≥ 0 and, by the Löwner–Heinz inequality ([16], [17]), we have S1 ≥ 2D1^{1/2}.

Since S = A(Z1 − Z2) = −Z2∗A − AZ2 − B, we get S1 = −A^{1/2}Z2A^{−1/2} − A^{−1/2}Z2∗A^{1/2} − A^{−1/2}BA^{−1/2} = −W1 − W1∗ − A^{−1/2}BA^{−1/2} ≥ 2D1^{1/2}. Thus,

−W1 − W1∗ ≥ 2D1^{1/2} + A^{−1/2}BA^{−1/2} = 2D1^{1/2} + A^{−1/2}B+A^{−1/2} − A^{−1/2}B−A^{−1/2} ≥ 2D1^{1/2} − B1,

where B1 = A^{−1/2}B−A^{−1/2}.

By (7.2), for every x, y ∈ H we have |⟨B1x, y⟩|² = |⟨A^{−1/2}B−A^{−1/2}x, y⟩|² = |⟨B−A^{−1/2}x, A^{−1/2}y⟩|² ≤ 4⟨A(A^{−1/2}x), A^{−1/2}x⟩⟨DA^{−1/2}y, A^{−1/2}y⟩ = 4⟨x, x⟩⟨A^{−1/2}DA^{−1/2}y, y⟩ = 4⟨x, x⟩⟨D1y, y⟩. Thus, for fixed y ∈ H,

⟨B1²y, y⟩ = ‖B1y‖² = sup_{‖x‖=1}|⟨x, B1y⟩|² = sup_{‖x‖=1}|⟨B1x, y⟩|² ≤ 4⟨D1y, y⟩.

Then B1² ≤ 4D1 and, by the Löwner–Heinz inequality, B1 ≤ 2D1^{1/2}. Therefore, −W1 − W1∗ ≥ 2D1^{1/2} − B1 ≥ 0; that is, −W1 is an accretive operator (see [13]). Since −Z2 = A^{−1/2}(−W1)A^{1/2}, the operator −Z2 is similar to an accretive operator.

Note that, by (7.2), for every x ∈ ker(C) we have ⟨B−x, x⟩ = 0 and, therefore, ⟨Bx, x⟩ = ⟨B+x, x⟩ ≥ 0. Since L is hyperbolic, ⟨Bx, x⟩ > 0 on ker(C) \ {0} and, by Corollary 7.1.4, ker(Z2) = {0}. Thus ker(W1) = {0} and, since −W1 is accretive, ker(W1∗) = {0}. Therefore ker(Z2∗) = {0} and R(Z2)‾ = H.

Now, if σ(L) ∩ R−∞ ⊆ σd(L) then, by Proposition 7.1.8, E− is a total subset of H.
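The Löwner–Heinz step used twice in this proof (passing from X² ≤ Y to X ≤ Y^{1/2} for positive operators) rests on the operator monotonicity of the square root: 0 ≤ X ≤ Y implies X^{1/2} ≤ Y^{1/2}. A short random-matrix illustration (a sketch only; note that squaring, by contrast, is not operator monotone):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
n = 5
M = rng.standard_normal((n, n)); X = M @ M.T          # X >= 0
M = rng.standard_normal((n, n)); Y = X + M @ M.T      # Y >= X >= 0

# operator monotonicity of the square root: Y^{1/2} - X^{1/2} is positive semi-definite
gap = (sqrtm(Y) - sqrtm(X)).real
assert np.linalg.eigvalsh((gap + gap.T) / 2).min() > -1e-8
```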

7.2 Operator-Differential Equation Associated to a Hyperbolic Pencil

Given the hyperbolic pencil (7.1) we can associate to it the following operator-differential equation:

L(i d/dt)u(t) = Au(t) + iBu′(t) + Du″(t) = 0.     (7.3)

Definition 7.2.1. A continuously differentiable function u : [0, +∞) → H is called a generalized solution of the equation (7.3) if, and only if, iBu(t) + Du′(t) is continuously differentiable and (iBu(t) + Du′(t))′ = −Au(t).

Remark. Since A^{1/2} is a bounded self-adjoint operator with A^{1/2} ≫ 0, we can define a positive definite inner product ⟨·, ·⟩1 on H by ⟨x, y⟩1 = ⟨A^{1/2}x, A^{1/2}y⟩, x, y ∈ H. Clearly the norm ‖·‖1 induced by this inner product is equivalent to the norm ‖·‖. Thus, H1 = (H, ⟨·, ·⟩1) is a Hilbert space. Let ℋ = H1 × H′, where H′ = R(C)‾, and let V : ℋ → ℋ be the bounded operator defined by

V = [[A⁻¹B, A⁻¹D^{1/2}], [D^{1/2}, 0]].

The operator V is self-adjoint since, for every x1 = (x, y) and x2 = (z, w) in ℋ, we have

⟨Vx1, x2⟩ℋ = ⟨(A⁻¹Bx + A⁻¹D^{1/2}y, D^{1/2}x), (z, w)⟩ℋ
= ⟨A⁻¹Bx + A⁻¹D^{1/2}y, z⟩1 + ⟨D^{1/2}x, w⟩
= ⟨Bx + D^{1/2}y, z⟩ + ⟨D^{1/2}x, w⟩
= ⟨Bx, z⟩ + ⟨D^{1/2}x, w⟩ + ⟨D^{1/2}y, z⟩
= ⟨x, Bz⟩ + ⟨x, D^{1/2}w⟩ + ⟨y, D^{1/2}z⟩
= ⟨x, Bz + D^{1/2}w⟩ + ⟨y, D^{1/2}z⟩
= ⟨x, A⁻¹Bz + A⁻¹D^{1/2}w⟩1 + ⟨y, D^{1/2}z⟩
= ⟨(x, y), (A⁻¹Bz + A⁻¹D^{1/2}w, D^{1/2}z)⟩ℋ
= ⟨x1, Vx2⟩ℋ.

The operator V is injective: if (x, y) ∈ ker(V), then D^{1/2}x = 0 and A⁻¹Bx + A⁻¹D^{1/2}y = 0; therefore, 0 = ⟨A⁻¹Bx + A⁻¹D^{1/2}y, x⟩1 = ⟨Bx + D^{1/2}y, x⟩ = ⟨Bx, x⟩ + ⟨y, D^{1/2}x⟩ = ⟨Bx, x⟩. Thus x ∈ ker(D) = ker(C) and ⟨Bx, x⟩ = 0; since the pencil is hyperbolic, x = 0. Then y ∈ R(C)‾ and A⁻¹D^{1/2}y = 0; that is, y ∈ R(C)‾ and y ∈ ker(C). Therefore, y = 0.

Since (iV⁻¹)∗ = −iV⁻¹, by a theorem of Stone the operator iV⁻¹ generates a unitary C0-group (see [15]). Let U(t) = e^{itV⁻¹}, t ≥ 0.

Given x1 ∈ H1 and x0 ∈ H′, the function w(t) = (u(t), v(t)) = U(t)w0, with w0 = (x1, −ix0) ∈ ℋ, solves the Cauchy problem

(Vw(t))′ − iw(t) = 0,  lim_{t→0+} ‖w(t) − w0‖ℋ = 0.

Thus we have lim_{t→0+} ‖u(t) − x1‖1 = 0, lim_{t→0+} ‖v(t) + ix0‖ = 0, D^{1/2}u′(t) = iv(t) and (Bu(t) + D^{1/2}v(t))′ = iAu(t). Hence,

(iBu(t) + Du′(t))′ = −Au(t),  lim_{t→0+} ‖u(t) − x1‖ = 0  and  lim_{t→0+} ‖D^{1/2}u′(t) − x0‖ = 0.     (7.4)

Suppose that u(t) satisfies (7.4) with x1 = x0 = 0. Note that, for every s ∈ C with Re(s) > 0, the following integral exists:

∫0^{+∞} (iBu(t) + Du′(t))′ e^{−st} dt = −∫0^{+∞} Au(t) e^{−st} dt.

After integration by parts we have

s ∫0^{+∞} (iBu(t) + Du′(t)) e^{−st} dt = −∫0^{+∞} Au(t) e^{−st} dt.

Denoting by û(s) the Laplace transform of u(t), we obtain Aû(s) + isBû(s) + s²Dû(s) = 0. Thus L(is)û(s) = 0 and, since L(is) is bijective for Re(s) > 0 (by Lemma 7.1.7, as is ∉ R), we have û(s) ≡ 0. Hence u(t) ≡ 0. So we have shown the following theorem:

Theorem 7.2.2. For any vectors x1 ∈ H and x0 ∈ H′, there exists a unique generalized solution of (7.3) which satisfies (7.4).

7.3 Elliptic-hyperbolic Pencils and their Associated Operator-Differential Equation

Definition 7.3.1. The hyperbolic pencil L is called elliptic-hyperbolic if, and only if, the operator B admits a representation B = B+ − B−, where B+ ≥ 0 and B− is such that, for some ε ∈ (0, 2) and every x, y ∈ H,

|⟨B−x, y⟩|² ≤ (2 − ε)²⟨Ax, x⟩⟨Dy, y⟩,  D = −C ≥ 0.     (7.5)

Remark. From the proof of Theorem 7.1.10 we have that if L is an elliptic-hyperbolic pencil, then ker(Z2) = ker(Z2∗) = {0}.

Given the elliptic-hyperbolic pencil L(λ) = A + λB + λ²C we can consider the following associated operator-differential equation:

Au(t) + Bu′(t) − Du″(t) = 0,  D = −C.     (7.6)

Lemma 7.3.2. Suppose that u ∈ C0¹[0, +∞; H] (see Appendix A.4) is a solution of the equation (7.6). Then

‖A^{1/2}u(0)‖² + ‖D^{1/2}u′(0)‖² = 4 ∫0^{+∞} ∫α^{+∞} (‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖²) dt dα.

Analogously, if u ∈ C0¹[−∞, 0; H] is a solution of (7.6), then

‖A^{1/2}u(0)‖² + ‖D^{1/2}u′(0)‖² = 4 ∫_{−∞}^0 ∫_{−∞}^β (‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖²) dt dβ.

Proof. (1) Suppose that u ∈ C0¹[0, +∞; H] is a solution of (7.6). By Lemma A.4.1, we have:

0 = −2 Re ∫0^{+∞} ⟨Au(t) + Bu′(t) − Du″(t), u′(t)⟩ dt
= −2 Re ∫0^{+∞} ⟨Au(t), u′(t)⟩ dt − 2 Re ∫0^{+∞} ⟨Bu′(t), u′(t)⟩ dt + 2 Re ∫0^{+∞} ⟨Du″(t), u′(t)⟩ dt
= ⟨Au(0), u(0)⟩ − ⟨Du′(0), u′(0)⟩ − 2 ∫0^{+∞} ⟨Bu′(t), u′(t)⟩ dt
= ‖A^{1/2}u(0)‖² − ‖D^{1/2}u′(0)‖² − 2 ∫0^{+∞} ⟨Bu′(t), u′(t)⟩ dt.

Moreover, for every α ≥ 0 we have:

0 = −2 Re ∫α^{+∞} ⟨Au(t) + Bu′(t) − Du″(t), u″(t)⟩ dt
= −2 Re ∫α^{+∞} ⟨Au(t), u″(t)⟩ dt − 2 Re ∫α^{+∞} ⟨Bu′(t), u″(t)⟩ dt + 2 Re ∫α^{+∞} ⟨Du″(t), u″(t)⟩ dt
= −2 Re ∫α^{+∞} [ (d/dt)⟨Au(t), u′(t)⟩ − ⟨Au′(t), u′(t)⟩ ] dt + ⟨Bu′(α), u′(α)⟩ + 2 Re ∫α^{+∞} ⟨Du″(t), u″(t)⟩ dt
= 2 Re⟨Au(α), u′(α)⟩ + ⟨Bu′(α), u′(α)⟩ + 2 ∫α^{+∞} [ ‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖² ] dt.

Thus,

4 ∫0^{+∞} ∫α^{+∞} (‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖²) dt dα = 2 ∫0^{+∞} [ −⟨Bu′(α), u′(α)⟩ − 2 Re⟨Au(α), u′(α)⟩ ] dα
= 2⟨Au(0), u(0)⟩ − 2 ∫0^{+∞} ⟨Bu′(α), u′(α)⟩ dα
= 2‖A^{1/2}u(0)‖² − ‖A^{1/2}u(0)‖² + ‖D^{1/2}u′(0)‖²
= ‖A^{1/2}u(0)‖² + ‖D^{1/2}u′(0)‖².

(2) Suppose that u ∈ C0¹[−∞, 0; H] is a solution of (7.6). By Lemma A.4.1, we have:

0 = 2 Re ∫_{−∞}^0 ⟨Au(t) + Bu′(t) − Du″(t), u′(t)⟩ dt
= 2 Re ∫_{−∞}^0 ⟨Au(t), u′(t)⟩ dt + 2 Re ∫_{−∞}^0 ⟨Bu′(t), u′(t)⟩ dt − 2 Re ∫_{−∞}^0 ⟨Du″(t), u′(t)⟩ dt
= ⟨Au(0), u(0)⟩ − ⟨Du′(0), u′(0)⟩ + 2 ∫_{−∞}^0 ⟨Bu′(t), u′(t)⟩ dt
= ‖A^{1/2}u(0)‖² − ‖D^{1/2}u′(0)‖² + 2 ∫_{−∞}^0 ⟨Bu′(t), u′(t)⟩ dt.

Moreover, for every β ≤ 0 we have:

0 = 2 Re ∫_{−∞}^β ⟨Au(t) + Bu′(t) − Du″(t), u″(t)⟩ dt
= 2 Re ∫_{−∞}^β ⟨Au(t), u″(t)⟩ dt + 2 Re ∫_{−∞}^β ⟨Bu′(t), u″(t)⟩ dt − 2 Re ∫_{−∞}^β ⟨Du″(t), u″(t)⟩ dt
= 2 Re ∫_{−∞}^β [ (d/dt)⟨Au(t), u′(t)⟩ − ⟨Au′(t), u′(t)⟩ ] dt + ⟨Bu′(β), u′(β)⟩ − 2 Re ∫_{−∞}^β ⟨Du″(t), u″(t)⟩ dt
= 2 Re⟨Au(β), u′(β)⟩ + ⟨Bu′(β), u′(β)⟩ − 2 ∫_{−∞}^β [ ‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖² ] dt.

Thus,

4 ∫_{−∞}^0 ∫_{−∞}^β (‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖²) dt dβ = 2 ∫_{−∞}^0 [ ⟨Bu′(β), u′(β)⟩ + 2 Re⟨Au(β), u′(β)⟩ ] dβ
= 2⟨Au(0), u(0)⟩ + 2 ∫_{−∞}^0 ⟨Bu′(β), u′(β)⟩ dβ
= 2‖A^{1/2}u(0)‖² − ‖A^{1/2}u(0)‖² + ‖D^{1/2}u′(0)‖²
= ‖A^{1/2}u(0)‖² + ‖D^{1/2}u′(0)‖².

Proposition 7.3.3. Let L be an elliptic-hyperbolic pencil. Then there exists ε1 > 0 (depending only on the ε in (7.5)) such that every solution u ∈ C0¹[0, +∞; H] of the equation (7.6) satisfies, for every t ≥ 0:

ε1‖D^{1/2}u′(t)‖ ≤ ‖A^{1/2}u(t)‖.     (7.7)

Analogously, every solution u ∈ C0¹[−∞, 0; H] of (7.6) satisfies, for every t ≤ 0:

ε1‖A^{1/2}u(t)‖ ≤ ‖D^{1/2}u′(t)‖.     (7.8)

Proof. (1) Suppose that u ∈ C0¹[0, +∞; H] is a solution of (7.6). Since L is elliptic-hyperbolic, let ε ∈ (0, 2) be such that |⟨B−x, y⟩|² ≤ (2 − ε)²⟨Ax, x⟩⟨Dy, y⟩ for every x, y ∈ H. Then, since B+ ≥ 0, we obtain from Lemma A.4.1:

−⟨Bu′(α), u′(α)⟩ ≤ ⟨B−u′(α), u′(α)⟩ = −2 Re ∫α^{+∞} ⟨B−u′(t), u″(t)⟩ dt ≤ ∫α^{+∞} 2|⟨B−u′(t), u″(t)⟩| dt
≤ (2 − ε) ∫α^{+∞} 2⟨Au′(t), u′(t)⟩^{1/2}⟨Du″(t), u″(t)⟩^{1/2} dt
≤ (2 − ε) ∫α^{+∞} [ ⟨Au′(t), u′(t)⟩ + ⟨Du″(t), u″(t)⟩ ] dt
= (2 − ε) ∫α^{+∞} [ ‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖² ] dt.

Thus, −2 ∫0^{+∞} ⟨Bu′(α), u′(α)⟩ dα ≤ 2(2 − ε) ∫0^{+∞} ∫α^{+∞} [ ‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖² ] dt dα and, by Lemma 7.3.2 (and its proof),

‖D^{1/2}u′(0)‖² − ‖A^{1/2}u(0)‖² ≤ ((2 − ε)/2) [ ‖D^{1/2}u′(0)‖² + ‖A^{1/2}u(0)‖² ].

Therefore, with ε1 = (ε/(4 − ε))^{1/2}, (7.7) holds for t = 0. Since clearly u(t + t0) is also a solution of (7.6) for every t0 > 0, (7.7) holds for every t ≥ 0.

(2) If u ∈ C0¹[−∞, 0; H] is a solution of (7.6), then

⟨Bu′(β), u′(β)⟩ ≥ −⟨B−u′(β), u′(β)⟩ = −2 Re ∫_{−∞}^β ⟨B−u′(t), u″(t)⟩ dt ≥ −∫_{−∞}^β 2|⟨B−u′(t), u″(t)⟩| dt
≥ −(2 − ε) ∫_{−∞}^β 2⟨Au′(t), u′(t)⟩^{1/2}⟨Du″(t), u″(t)⟩^{1/2} dt
≥ −(2 − ε) ∫_{−∞}^β [ ⟨Au′(t), u′(t)⟩ + ⟨Du″(t), u″(t)⟩ ] dt
= −(2 − ε) ∫_{−∞}^β [ ‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖² ] dt.

Thus, 2 ∫_{−∞}^0 ⟨Bu′(β), u′(β)⟩ dβ ≥ −2(2 − ε) ∫_{−∞}^0 ∫_{−∞}^β [ ‖A^{1/2}u′(t)‖² + ‖D^{1/2}u″(t)‖² ] dt dβ and, by Lemma 7.3.2 (and its proof),

‖D^{1/2}u′(0)‖² − ‖A^{1/2}u(0)‖² ≥ −((2 − ε)/2) [ ‖D^{1/2}u′(0)‖² + ‖A^{1/2}u(0)‖² ].

Therefore, with ε1 = (ε/(4 − ε))^{1/2}, (7.8) holds for t = 0. Since clearly u(t + t0) is also a solution of (7.6) for every t0 < 0, (7.8) holds for every t ≤ 0.

Theorem 7.3.4. Let L be an elliptic-hyperbolic pencil. Then the operators Z1 and Z2 in the Langer factorization satisfy, for every x ∈ H,

ε1‖A^{1/2}Z1x‖ ≤ ‖D^{1/2}x‖ ≤ (1/ε1)‖A^{1/2}Z2x‖,     (7.9)

with ε1 = (ε/(4 − ε))^{1/2}. Further, the operators D^{1/2}Z2⁻¹ and Z1D^{−1/2} are bounded on dense subsets of the respective spaces H and H′ := R(C)‾ and admit bounded extensions to these spaces.

Proof. Let

𝒜 = [[A⁻¹, 0], [0, I]],  T = [[−B, D^{1/2}], [D^{1/2}, 0]]  and  F = 𝒜T = [[−A⁻¹B, A⁻¹D^{1/2}], [D^{1/2}, 0]].

Clearly 𝒜 is bijective and 𝒜⁻¹ = [[A, 0], [0, I]].

Note that 𝒜⁻¹ is a bounded self-adjoint operator acting in H² and 𝒜⁻¹ ≫ 0. Thus, the positive definite inner product ⟨·, ·⟩_{𝒜⁻¹} on H², defined by ⟨x, y⟩_{𝒜⁻¹} = ⟨𝒜⁻¹x, y⟩_{H²} for every x, y ∈ H², induces a topology which is equivalent to the one induced by ⟨·, ·⟩_{H²}.

(1) Set U = {(Z2x, D^{1/2}x) | x ∈ H} and let Ū be the closure of U with respect to the topology induced by ⟨·, ·⟩_{𝒜⁻¹}. Since AZ2² + BZ2 − D = 0, for every x ∈ H,

F(Z2x, D^{1/2}x) = (−A⁻¹BZ2x + A⁻¹Dx, D^{1/2}Z2x) = (A⁻¹[−BZ2x + Dx], D^{1/2}Z2x)
= (A⁻¹[AZ2²x], D^{1/2}Z2x) = (Z2(Z2x), D^{1/2}(Z2x)) ∈ U.

That is, U is an F-invariant subspace and, therefore, Ū is also F-invariant. Note that, since ker(Z2) ⊆ ker(C), Z2x = 0 implies x ∈ ker(C) and, therefore, D^{1/2}x = 0. Thus, points of the form (0, v), with v ≠ 0, are not in U. Then, since −D = Z1∗AZ2 and S = AZ1 + Z1∗A + B > 0, we have that for every x̂ = (Z2x, D^{1/2}x) ≠ 0,

⟨Fx̂, x̂⟩_{𝒜⁻¹} = ⟨𝒜⁻¹Fx̂, x̂⟩_{H²} = ⟨Tx̂, x̂⟩_{H²} = ⟨(−BZ2x + Dx, D^{1/2}Z2x), (Z2x, D^{1/2}x)⟩_{H²}
= ⟨−BZ2x + Dx, Z2x⟩ + ⟨D^{1/2}Z2x, D^{1/2}x⟩
= ⟨−BZ2x + Dx, Z2x⟩ + ⟨Z2x, Dx⟩
= ⟨−BZ2x − Z1∗AZ2x, Z2x⟩ + ⟨Z2x, −Z1∗AZ2x⟩
= −⟨(B + Z1∗A)y, y⟩ − ⟨y, Z1∗Ay⟩  (where y = Z2x)
= −⟨(B + Z1∗A)y, y⟩ − ⟨AZ1y, y⟩
= −⟨(AZ1 + Z1∗A + B)y, y⟩ = −⟨Sy, y⟩ < 0.

That is, F|U < 0 and, therefore, ker(F) ∩ U = {0}. Let F− = F|Ū : Ū → Ū and let x̂0 = (u, v) ∈ Ū \ U. If (xn)n∈N ⊆ H is such that (Z2xn, D^{1/2}xn) → (u, v) with respect to ‖·‖_{𝒜⁻¹}, the norm induced by ⟨·, ·⟩_{𝒜⁻¹}, then clearly Z2xn → u and D^{1/2}xn → v; therefore ⟨F−x̂0, x̂0⟩_{𝒜⁻¹} = −lim_{n→∞}⟨SZ2xn, Z2xn⟩ = −⟨Su, u⟩. Thus, if u ≠ 0 we have ⟨F−x̂0, x̂0⟩_{𝒜⁻¹} < 0 and, therefore, x̂0 ∉ ker(F−).

Since Z2 is a closed operator, we have U = {(Z2x, D^{1/2}x) | x ∈ H} = {(y, D^{1/2}Z2⁻¹y) | y ∈ R(Z2)}, the graph of the closed operator D^{1/2}Z2⁻¹. Thus, since ‖·‖_{H²} and ‖·‖_{𝒜⁻¹} are equivalent, we have that (0, v) ∉ Ū for every v ∈ H \ {0}. Hence F− < 0 and, therefore, F− is injective.

Further, since the operator F is self-adjoint in (H², ⟨·, ·⟩_{𝒜⁻¹}), the densely defined operator F−⁻¹ is self-adjoint (and in general unbounded) and negative in (Ū, ⟨·, ·⟩_{𝒜⁻¹}). Thus, F−⁻¹ generates a strongly stable C0-semigroup of contractions (see [14], [15]).

Let x̂0 = (Z2x, D^{1/2}x) be any vector in U and let w(t) = e^{tF−⁻¹}x̂0, t ≥ 0. Then w(t) is a solution of the operator-differential equation Fw′(t) − w(t) = 0 with w(t) → 0 as t → +∞. Thus, if w(t) = (u(t), v(t)), then u, v ∈ C¹[0, +∞; H] and

0 = −A⁻¹Bu′(t) + A⁻¹D^{1/2}v′(t) − u(t),
0 = D^{1/2}u′(t) − v(t).

That is, v(t) = D^{1/2}u′(t) and, therefore, Au(t) + Bu′(t) − Du″(t) = 0. Then, by Proposition 7.3.3, ε1‖D^{1/2}u′(0)‖ ≤ ‖A^{1/2}u(0)‖ with ε1 = (ε(4 − ε)⁻¹)^{1/2}. Note that u(0) = Z2x and D^{1/2}u′(0) = v(0) = D^{1/2}x; therefore, ε1‖D^{1/2}x‖ ≤ ‖A^{1/2}Z2x‖ (x ∈ H). Thus, for every y ∈ R(Z2), we have ε1‖D^{1/2}Z2⁻¹y‖ ≤ ‖A^{1/2}y‖ ≤ ‖A^{1/2}‖‖y‖. So the densely defined operator D^{1/2}Z2⁻¹ is bounded and, therefore, admits a bounded extension to R(Z2)‾ = H.

(2) Analogously to the preceding argument, with U = {(Z1x, D^{1/2}x) | x ∈ H}, replacing F by −F and defining w(t) for t ≤ 0, we obtain ε1‖A^{1/2}Z1x‖ ≤ ‖D^{1/2}x‖ for every x ∈ H. Further, by this inequality, the operator Z1D^{−1/2} : D^{1/2}x ↦ Z1x is well defined and bounded on D(Z1D^{−1/2}) = R(D^{1/2}). Thus, Z1D^{−1/2} admits a bounded extension to H′ = R(C)‾.
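The two inequalities in (7.9) can be spot-checked numerically. In this sketch (illustrative data) we take B ≥ 0, so that B− = 0 and the pencil is elliptic-hyperbolic for every ε < 2; letting ε → 2 gives ε1 → 1, and in the limit ‖A^{1/2}Z1x‖ ≤ ‖D^{1/2}x‖ ≤ ‖A^{1/2}Z2x‖:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(6)
n = 4
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # A uniformly positive
M = rng.standard_normal((n, n)); C = -(M @ M.T) - np.eye(n)    # C < 0
M = rng.standard_normal((n, n)); B = M @ M.T                   # B >= 0, hence B- = 0

# Langer roots via the linearizer of L1(mu) = mu^2 A + mu B + C
Ainv = np.linalg.inv(A)
Lin = np.block([[-Ainv @ B, -Ainv @ C], [np.eye(n), np.zeros((n, n))]])
mu, w = np.linalg.eig(Lin)
pos = mu.real > 0
Z1 = w[:n, pos].real @ np.linalg.inv(w[n:, pos].real)
Z2 = -Ainv @ Z1.T @ A - Ainv @ B

Ah = sqrtm(A).real
Dh = sqrtm(-C).real
for _ in range(50):
    x = rng.standard_normal(n)
    assert np.linalg.norm(Ah @ Z1 @ x) <= np.linalg.norm(Dh @ x) + 1e-7
    assert np.linalg.norm(Dh @ x) <= np.linalg.norm(Ah @ Z2 @ x) + 1e-7
```

In the scalar case the two inequalities reduce to a·z1² ≤ d ≤ a·z2² for the two real roots z1 > 0 > z2 of aµ² + bµ + c with b ≥ 0, which is immediate from the quadratic formula.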

Remark. From the proof of Theorem 7.3.4 we have that for every x1 ∈ H there exists a continuously differentiable function u(t), t > 0, which satisfies the equation (7.6) and is such that lim_{t→0+}‖u(t) − x1‖ = 0 and lim_{t→+∞}‖u(t)‖ = 0.

Theorem 7.3.5. Let L be an elliptic-hyperbolic pencil and let Z1, Z2 be the operators in the Langer factorization. Then Z2 is similar in H to a negative self-adjoint operator, and D^{1/2}Z1D^{−1/2} is similar in the subspace H′ = R(C)‾ ⊆ H to a positive self-adjoint operator.

Proof. Define the projections P, Q : H² → H² by P(u, v) = (u, 0) and Q(u, v) = (0, v).

(1) Let F, U, Ū and F− be as in the first part of the proof of Theorem 7.3.4; note that F− is a negative self-adjoint operator. Let K be the bounded extension of D^{1/2}Z2⁻¹ to H. Then Ū = {(x, Kx) | x ∈ H} and, since we can identify R(P) with H, the operator P|Ū : Ū → H is bijective. Thus, since Z2x = PF−P⁻¹x for every x ∈ H, we have Z2 = PF−P⁻¹. Hence, Z2 is similar to a negative self-adjoint operator.

(2) Let $F$ and $U$ be as in the second part of the proof of 7.2.4, and let $F_+ := F|_U: U \to U$. Thus, $F_+$ is a positive self-adjoint operator. Since we can identify $R(Q)$ with $H$, the operator $Q|_U: U \to H$ is bijective. Thus, for every $y \in R(D^{1/2})$, we have $QF_+Q^{-1}y = QF_+(Z_1D^{-1/2}y, y) = Q(Z_1(Z_1D^{-1/2}y), D^{1/2}(Z_1D^{-1/2}y)) = D^{1/2}Z_1D^{-1/2}y$. Then $D^{1/2}Z_1D^{-1/2} = QF_+Q^{-1}$ on $R(D^{1/2})$ and, by continuity, $D^{1/2}Z_1D^{-1/2} = QF_+Q^{-1}$ on $\overline{R(C)}$. Hence, $D^{1/2}Z_1D^{-1/2}$ is similar to a positive self-adjoint operator in $H^0 = \overline{R(C)}$.

Theorem 7.3.6. Let $L$ be an elliptic-hyperbolic pencil with $\sigma(L)\cap\mathbb{R}^- \subseteq \sigma_d(L)$ (resp. $\sigma(L)\cap\mathbb{R}^+ \subseteq \sigma_d(L)$). Then the system $E^-$ (resp., the system $\{D^{1/2}x \mid x \in E^+\}$) contains a Riesz basis for $H$ (resp., for the subspace $\overline{R(C)}$).

Proof. (1) If $\sigma(L)\cap\mathbb{R}^- \subseteq \sigma_d(L)$ then, since $Z_2$ is similar to a self-adjoint operator, by Proposition 7.1.8 and Theorem 5.3.10, $E^-$ contains a Riesz basis for $H$.

(2) Suppose that $\sigma(L)\cap\mathbb{R}^+ \subseteq \sigma_d(L)$. Since $D^{1/2}Z_1D^{-1/2}$ is similar to a positive self-adjoint operator and the system $\{D^{1/2}x \mid x \in E^+\}$ consists of eigenvectors of this operator, by Theorems 7.1.9 and 5.3.10, $\{D^{1/2}x \mid x \in E^+\}$ contains a Riesz basis for $\overline{R(C)}$.

Chapter 8

Pencils of Unbounded Self-adjoint Operators and their Associated Differential Equations

8.1 Hyperbolic and Elliptic-hyperbolic Pencils of Unbounded Self-adjoint Operators

Remark. Let $A \gg 0$ be a self-adjoint operator acting in a Hilbert space and let $\theta \in \mathbb{R}$.

• If $\theta \ge 0$, set $H_\theta = D(A^{\theta/2})$ and $\|x\|_\theta = \|A^{\theta/2}x\|$ for every $x \in H_\theta$. Clearly $\|\cdot\|_\theta$ defines a norm on $H_\theta$, and this norm is induced by the positive definite inner product $\langle x, y\rangle_\theta := \langle A^{\theta/2}x, A^{\theta/2}y\rangle$. Further, $(H_\theta, \|\cdot\|_\theta)$ is a Banach space [since if $(x_n)_{n\in\mathbb{N}} \subseteq H_\theta$ is a Cauchy sequence with respect to the norm $\|\cdot\|_\theta$, then the sequence $(A^{\theta/2}x_n)$ is Cauchy with respect to the norm $\|\cdot\|$ and, therefore, there exists $x_0 \in H$ such that $A^{\theta/2}x_n \to x_0$ in $\|\cdot\|$; thus $x_n \to A^{-\theta/2}x_0 \in H_\theta$ in $\|\cdot\|_\theta$] and, therefore, $H_\theta$ is a Hilbert space.

• If $\theta < 0$, then $\|x\|_\theta = \|A^{\theta/2}x\|$ ($x \in H$) defines a norm on $H$. Moreover, analogously to the previous case, $\|\cdot\|_\theta$ is generated by a positive definite inner product. Define $H_\theta$ as the completion of $H$ with respect to the norm $\|\cdot\|_\theta$.

Definition 8.1.1. If $A \gg 0$, then the family $\{H_\theta\}_{\theta\in\mathbb{R}}$, where $H_\theta$ is defined as in the remark above, is called the scale of Hilbert spaces generated by $A^{1/2}$.
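In finite dimensions the construction of the scale is easy to make concrete: $A^{\theta/2}$ can be computed from the spectral decomposition of a positive definite matrix, and $\|x\|_1^2 = \langle Ax, x\rangle$. The following numerical sketch is only an illustration (the matrix and vector are our own choices, not objects from the text):

```python
import numpy as np

# Finite-dimensional sketch of the scale {H_theta}: for a positive definite matrix A
# (a stand-in for A >> 0), ||x||_theta = ||A^{theta/2} x||, with fractional powers
# computed from the spectral decomposition of A.
def frac_power(A, p):
    """A^p for a symmetric positive definite A, via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** p) @ V.T

def norm_theta(A, x, theta):
    """||x||_theta = ||A^{theta/2} x|| in the scale generated by A^{1/2}."""
    return np.linalg.norm(frac_power(A, theta / 2) @ x)

A = np.array([[2.0, 0.5], [0.5, 3.0]])   # A >> 0
x = np.array([1.0, -1.0])

print(np.isclose(norm_theta(A, x, 0.0), np.linalg.norm(x)))      # theta = 0 gives ||x||
print(np.isclose(norm_theta(A, x, 1.0) ** 2, x @ (A @ x)))       # ||x||_1^2 = <Ax, x>
```

For $\theta < 0$ the same formula applies; in the matrix case no completion is needed, since all the norms are equivalent.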

Lemma 8.1.2. If the operators $A$ and $S$ are self-adjoint with $A \gg 0$ and $D(S) \supseteq D(A)$, then the operator $S_1 = A^{-1/2}SA^{-1/2}$ admits a bounded extension in $H$.

Proof. Clearly $S_1$ is a well-defined and densely defined symmetric operator. Since $S$ is closed and $D(S) \supseteq D(A)$, there exists a real number $M' > 0$ such that $\|Sx\| \le M'[\|Ax\| + \|x\|]$ for every $x \in D(A)$ (see [16]). Then, since $A \gg 0$, there exists $M > 0$ such that $\|Sx\| \le M\|Ax\|$ for every $x \in D(A)$. Thus, by the second Heinz inequality (see [16]), $|\langle Sx, x\rangle| \le M\|A^{1/2}x\|^2$ for every $x \in D(A)$. Thus, for every $y \in D(S_1)$ with $A^{-1/2}y \in D(A)$, $|\langle S_1y, y\rangle| = |\langle A^{-1/2}SA^{-1/2}y, y\rangle| = |\langle SA^{-1/2}y, A^{-1/2}y\rangle| \le M\|y\|^2$. Then, for every $y, z \in D(S_1)$ with $A^{-1/2}y, A^{-1/2}z \in D(A)$, $|\langle S_1y, z\rangle| \le M\|y\|\,\|z\|$ (see [17]) and, therefore, the form $\langle S_1y, z\rangle$ extends by continuity to the whole of $H$. Thus, $S_1$ has a bounded extension on $H$.

Remark. Under the conditions of Lemma 8.1.2, the form $\langle Sx, x\rangle$ can be extended by continuity to $D(A^{1/2})$ and, therefore, the condition in the second part of the following definition is unambiguous.

Definition 8.1.3. (1) A quadratic pencil $L$ (of unbounded self-adjoint operators) in a Hilbert space $H$ is an operator-valued function with $D(L) = \mathbb{C}_\infty$, for which there exist fixed self-adjoint operators $A$, $B$ and $D$, acting in $H$, such that $L(\infty) = -D$ and, for every $\lambda \in \mathbb{C}$, $L(\lambda) = A + \lambda B - \lambda^2 D$.

(2) The quadratic pencil $L(\lambda) = A + \lambda B - \lambda^2 D$ is called hyperbolic if, and only if, $A \gg 0$, $D \ge 0$, $D \not\equiv 0$, $D(D) \supseteq D(A)$, $D(B) \supseteq D(A)$ and $\langle Bx, x\rangle \ne 0$ for every $x \in \ker(D) \cap D(A^{1/2})$ with $x \ne 0$.

(3) The hyperbolic pencil $L(\lambda) = A + \lambda B - \lambda^2 D$ is called elliptic-hyperbolic if $B|_{D(A)}$ admits a representation $B|_{D(A)} = B_+ - B_-$, where $B_+ \ge 0$ and there exists $\varepsilon > 0$, with $\varepsilon < 2$, such that $|\langle B_-x, y\rangle|^2 \le (2-\varepsilon)^2\langle Ax, x\rangle\langle Dy, y\rangle$ for every $x, y \in D(A)$.

Remark. Given the hyperbolic pencil
\[ L(\lambda) = A + \lambda B - \lambda^2 D, \tag{8.1} \]
define the operators $B_1$, $D_1$ and $T$ by
\[ B_1 = A^{-1/2}BA^{-1/2}, \qquad D_1 = A^{-1/2}DA^{-1/2}, \qquad T = D^{1/2}A^{-1/2}. \tag{8.2} \]
Moreover, define $H^0 := \overline{R(T)}$.

• By Lemma 8.1.2, $B_1$ and $D_1$ are bounded operators defined on $H$. Moreover, $D_1 \ge 0$.

• Note that $T$ is a bounded operator defined on $H$, since for every $x \in H$,
\[ \|Tx\|^2 = \langle D^{1/2}A^{-1/2}x, D^{1/2}A^{-1/2}x\rangle = \langle A^{-1/2}DA^{-1/2}x, x\rangle = \langle D_1x, x\rangle \le \|D_1\|\,\|x\|^2. \]

• Since $T \in B(H)$, by the polar representation of $T$ there exists an operator $U$ which maps $H^0$ isometrically onto $\overline{R(D_1^{1/2})} = \overline{R(D_1)}$, such that $UT = (A^{-1/2}DA^{-1/2})^{1/2} = D_1^{1/2}$. Thus $D_1^{1/2} = UD^{1/2}A^{-1/2}$ and, therefore, $D^{1/2} = U^{-1}D_1^{1/2}A^{1/2}$.

• Note that $H^0 \subseteq \overline{R(D)}$.

• In the particular case in which $D$ is bounded, we have $H^0 = \overline{R(D)}$ [since if $D$ is bounded and self-adjoint, then $D(D) = H$, $D^{1/2} \in B(H)$, $\overline{R(T)} = \overline{R(D^{1/2})}$ and $\overline{R(D^{1/2})} = \overline{R(D)}$; thus, $H^0 = \overline{R(T)} = \overline{R(D^{1/2})} = \overline{R(D)}$].

• Note that $H^0 = \overline{R(D)}$ if and only if $D^{1/2}|_{D(A^{1/2})}$ is essentially self-adjoint. Further, if $D|_{D(A)}$ is essentially self-adjoint, then $H^0 = \overline{R(D)}$.
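As a finite-dimensional illustration of Definition 8.1.3 (not part of the original argument), the spectrum of a pencil $L(\lambda) = A + \lambda B - \lambda^2 D$ with $A \gg 0$ and $D > 0$ can be computed by the standard companion linearization of the quadratic eigenvalue problem (cf. [23]); the matrices below are illustrative stand-ins for the operators:

```python
import numpy as np

# Sketch of the spectrum of L(lam) = A + lam*B - lam^2*D via companion linearization.
# Hyperbolicity in the matrix case: A >> 0, D > 0, B = B^T forces all 2n eigenvalues
# to be real, with n positive and n negative (the quadratic a + lam*b - lam^2*d = 0,
# a = <Ax,x> > 0, d = <Dx,x> > 0, has real roots of opposite signs).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = A @ A.T + np.eye(n)   # A >> 0
D = rng.standard_normal((n, n)); D = D @ D.T + np.eye(n)   # D > 0
B = rng.standard_normal((n, n)); B = (B + B.T) / 2         # B = B^T

# L(lam) x = 0  <=>  M z = lam * N z with z = (x, lam*x):
Z, I = np.zeros((n, n)), np.eye(n)
M = np.block([[A, Z], [Z, I]])
N = np.block([[-B, D], [I, Z]])        # invertible, since det(N) = +/- det(D) != 0
lams = np.linalg.eigvals(np.linalg.solve(N, M))
print(np.sort(lams.real))              # n negative and n positive real eigenvalues
```

This is only a heuristic for the unbounded operator setting of this chapter, where $D$ need not be boundedly invertible.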

Lemma 8.1.4. Let $\{H_\theta\}_{\theta\in\mathbb{R}}$ be the scale of Hilbert spaces generated by $A^{1/2}$. Then the operators $D: H_1 \to H_{-1}$, $B: H_1 \to H_{-1}$ and $D^{1/2}: H \to H_{-1}$, understood as operators acting from one Hilbert space into another, are bounded.

Proof. (1) For every $x \in H_1$, $\|Dx\|_{-1} = \|A^{-1/2}Dx\| = \|A^{-1/2}DA^{-1/2}A^{1/2}x\| = \|D_1A^{1/2}x\| \le \|D_1\|\,\|A^{1/2}x\| = \|D_1\|\,\|x\|_1$. Therefore, $D \in B(H_1; H_{-1})$.

(2) It is analogous to (1).

(3) Since $T \in B(H)$, also $T^* \in B(H)$. Moreover, $T^* = A^{-1/2}D^{1/2}$. Thus, for every $x \in D(D^{1/2})$, $\|D^{1/2}x\|_{-1} = \|A^{-1/2}D^{1/2}x\| = \|T^*x\| \le \|T^*\|\,\|x\|$. Therefore, extension by continuity gives $D^{1/2} \in B(H; H_{-1})$.

8.2 Equation Associated with a Hyperbolic Pencil

This section (both its result and its proof) is analogous to Section 7.3. Given the hyperbolic pencil (8.1), we can associate with it the following operator-differential equation
\[ L\!\Big(i\frac{d}{dt}\Big)u(t) = Au(t) + iBu'(t) + Du''(t) = 0. \tag{8.3} \]
In this section $\{H_\theta\}_{\theta\in\mathbb{R}}$ denotes the scale of Hilbert spaces generated by $A^{1/2}$.

Definition 8.2.1. A continuous function $u: [0, +\infty) \to H_1$ is called a generalized solution of equation (8.3) if $Du(t)$ is continuously differentiable on $[0, +\infty)$ as a function with values in $H_{-1}$, the function $iBu(t) + Du'(t)$ has the same property, and
\[ [iBu(t) + Du'(t)]' = -Au(t), \tag{8.4} \]
understood as an equality of functions with values in $H_{-1}$.

Theorem 8.2.2. For any pair of vectors $x_1 \in H_1$ and $x_0 \in H^0$ there exists a unique generalized solution $u$ of (8.3), in the sense of Definition 8.2.1, such that $u$ is bounded in $H_1$, $D^{1/2}u'(t)$ is continuous and bounded in the space $H^0 \subseteq H$, and
\[ \lim_{t\to 0^+}\|u(t) - x_1\|_1 = 0, \qquad \lim_{t\to 0^+}\|D^{1/2}u'(t) - x_0\| = 0. \tag{8.5} \]

Proof. (1) In $\mathcal{H} = H_1 \times H^0$ consider the following equation
\[ (\mathcal{V}\mathbf{u}(t))' - i\mathbf{u}(t) = 0; \qquad \mathcal{V} = \begin{pmatrix} A^{-1}B & A^{-1}D^{1/2} \\ D^{1/2} & 0 \end{pmatrix}. \tag{8.6} \]
Since in the product space the Euclidean norm and the sum norm are equivalent, there exists $\alpha > 0$ such that, for every $\mathbf{x} = (x_0, y_0) \in \mathcal{H}$, by Lemma 8.1.4,
\[\begin{aligned}
\|\mathcal{V}\mathbf{x}\|_{\mathcal{H}} &= \|(A^{-1}Bx_0 + A^{-1}D^{1/2}y_0,\, D^{1/2}x_0)\|_{\mathcal{H}} \le \alpha\big[\|A^{-1}Bx_0 + A^{-1}D^{1/2}y_0\|_1 + \|D^{1/2}x_0\|\big] \\
&= \alpha\|A^{-1/2}Bx_0 + A^{-1/2}D^{1/2}y_0\| + \alpha\|D^{1/2}A^{-1/2}A^{1/2}x_0\| \\
&= \alpha\|Bx_0 + D^{1/2}y_0\|_{-1} + \alpha\|TA^{1/2}x_0\| \le \alpha\|Bx_0\|_{-1} + \alpha\|D^{1/2}y_0\|_{-1} + \alpha\|T\|\,\|A^{1/2}x_0\| \\
&\le \alpha\|B\|\,\|x_0\|_1 + \alpha\|D^{1/2}\|\,\|y_0\| + \alpha\|T\|\,\|x_0\|_1.
\end{aligned}\]

That is, $\mathcal{V}$ is a bounded operator on $\mathcal{H}$. For $(x, y), (w, z) \in \mathcal{H}$, we have
\[\begin{aligned}
\langle\mathcal{V}(x, y), (w, z)\rangle_{\mathcal{H}} &= \langle(A^{-1}Bx + A^{-1}D^{1/2}y,\, D^{1/2}x), (w, z)\rangle_{\mathcal{H}} \\
&= \langle A^{-1}Bx + A^{-1}D^{1/2}y, w\rangle_1 + \langle D^{1/2}x, z\rangle \\
&= \langle A^{-1}Bx, w\rangle_1 + \langle A^{-1}D^{1/2}y, w\rangle_1 + \langle D^{1/2}x, z\rangle \\
&= \langle x, A^{-1}Bw\rangle_1 + \langle A^{1/2}A^{-1}D^{1/2}y, A^{1/2}w\rangle + \langle x, D^{1/2}z\rangle \\
&= \langle x, A^{-1}Bw\rangle_1 + \langle y, D^{1/2}w\rangle + \langle x, A^{-1}D^{1/2}z\rangle_1 \\
&= \langle x, A^{-1}Bw + A^{-1}D^{1/2}z\rangle_1 + \langle y, D^{1/2}w\rangle \\
&= \langle(x, y), \mathcal{V}(w, z)\rangle_{\mathcal{H}}.
\end{aligned}\]
Therefore, $\mathcal{V}$ is self-adjoint.

1/2 −1 −1 1/2 −1 Let (x, y) ∈ ker(V). Then D x = 0, 0 = A Bx + A D y ∈ H1 and 0 = hA Bx +

−1 1/2 1/2 1/2 1/2 A D y, xi1 = hBx + D y, xi = hBx, xi + hD y, xi = hBx, xi + hy, D xi = hBx, xi. Thus

1/2 x ∈ H1 ∩ ker(D) = D(A ) ∩ ker(D) and hBx, xi = 0. Since the pencil is hyperbolic, we have

117 x = 0 and, therefore, A−1D1/2y = 0. Then A−1/2D1/2y = 0; that is T ∗y = 0. Since y ∈ ker(T ∗) and y ∈ R(T ), then y = 0. Hence, V is injective.
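In finite dimensions the self-adjointness of $\mathcal{V}$ is transparent: the inner product of $\mathcal{H} = H_1 \times H^0$ has Gram matrix $G = \operatorname{diag}(A, I)$ (since $\langle x, w\rangle_1 = \langle Ax, w\rangle$), and $\mathcal{V}$ is $G$-symmetric exactly when $G\mathcal{V}$ is a symmetric matrix. A small numpy sketch with illustrative stand-in matrices:

```python
import numpy as np

# Sketch: V = [[A^{-1}B, A^{-1}D^{1/2}], [D^{1/2}, 0]] is self-adjoint with respect to
# <(x,y),(w,z)> = <Ax,w> + <y,z>, whose Gram matrix is G = diag(A, I).  Equivalently,
# G @ V = [[B, D^{1/2}], [D^{1/2}, 0]] must be symmetric -- which it visibly is.
rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)); A = A @ A.T + np.eye(n)       # A >> 0
B = rng.standard_normal((n, n)); B = (B + B.T) / 2             # B = B^T
D = rng.standard_normal((n, n)); D = D @ D.T                   # D >= 0
w, Q = np.linalg.eigh(D)
Dh = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T           # D^{1/2}

Ai = np.linalg.inv(A)
V = np.block([[Ai @ B, Ai @ Dh], [Dh, np.zeros((n, n))]])
G = np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])

GV = G @ V
print(np.allclose(GV, GV.T))   # True: V is G-symmetric
```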

Since $(i\mathcal{V}^{-1})^* = -i\mathcal{V}^{-1}$, by a theorem of Stone $i\mathcal{V}^{-1}$ generates a unitary $C_0$-group (see [15]). Let $\mathcal{U}(t) = e^{it\mathcal{V}^{-1}}$ ($t \in \mathbb{R}$) be the unitary $C_0$-group generated by the operator $i\mathcal{V}^{-1}$.

(2) Existence. Suppose that $x_1 \in H_1$ and $x_0 \in H^0$. Let $\mathbf{x}_0 = (x_1, -ix_0)$ and $\mathbf{u}(t) = (u(t), v(t)) := \mathcal{U}(t)\mathbf{x}_0$ ($t \ge 0$). Then $\mathbf{u}(t)$ satisfies (8.6) and $\lim_{t\to 0^+}\|\mathbf{u}(t) - \mathbf{x}_0\|_{\mathcal{H}} = 0$. Thus, $\lim_{t\to 0^+}\|u(t) - x_1\|_1 = 0$, $\lim_{t\to 0^+}\|v(t) + ix_0\| = 0$ and, by (8.6), $(A^{-1}Bu(t) + A^{-1}D^{1/2}v(t))' = iu(t) \in H_1$ and $D^{1/2}u'(t) = iv(t) \in H^0$. Note that:

• Since $\mathcal{U}(t)$ is a $C_0$-group, $u(t)$ is continuously differentiable as a function with values in $H_1$.

• For $n \in \mathbb{N}_0$ and $t_0 \in [0, +\infty)$, $\|Du^{(n)}(t) - Du^{(n)}(t_0)\|_{-1} = \|A^{-1/2}D[u^{(n)}(t) - u^{(n)}(t_0)]\| = \|A^{-1/2}DA^{-1/2}A^{1/2}[u^{(n)}(t) - u^{(n)}(t_0)]\| = \|D_1A^{1/2}[u^{(n)}(t) - u^{(n)}(t_0)]\| \le \|D_1\|\,\|u^{(n)}(t) - u^{(n)}(t_0)\|_1 \to 0$ as $t \to t_0$ (by the preceding item). Thus, $Du(t)$ is continuously differentiable as a function with values in $H_{-1}$.

• For $n \in \mathbb{N}_0$ and $t_0 \in [0, +\infty)$, $\|iBu^{(n)}(t) + Du^{(n+1)}(t) - iBu^{(n)}(t_0) - Du^{(n+1)}(t_0)\|_{-1} \le \|Bu^{(n)}(t) - Bu^{(n)}(t_0)\|_{-1} + \|Du^{(n+1)}(t) - Du^{(n+1)}(t_0)\|_{-1}$. By the preceding item, $\|Du^{(n+1)}(t) - Du^{(n+1)}(t_0)\|_{-1} \to 0$ as $t \to t_0$. Moreover, $\|B[u^{(n)}(t) - u^{(n)}(t_0)]\|_{-1} = \|A^{-1/2}BA^{-1/2}A^{1/2}[u^{(n)}(t) - u^{(n)}(t_0)]\| = \|B_1A^{1/2}[u^{(n)}(t) - u^{(n)}(t_0)]\| \le \|B_1\|\,\|u^{(n)}(t) - u^{(n)}(t_0)\|_1 \to 0$ as $t \to t_0$ (by the first item). Thus, $iBu(t) + Du'(t)$ is continuously differentiable as a function with values in $H_{-1}$.

• Since $A^{-1}$ is bounded and $(A^{-1}Bu(t) + A^{-1}D^{1/2}v(t))' = iu(t)$, we get $(Bu(t) + D^{1/2}v(t))' = iAu(t)$. Moreover, since $D^{1/2}u'(t) = iv(t)$, we have $(Bu(t) + \tfrac{1}{i}Du'(t))' = iAu(t)$ and, therefore, $(iBu(t) + Du'(t))' = -Au(t)$.

• Since the group $\mathcal{U}(t)$ is unitary, we have $\|\mathbf{u}(t)\|_{\mathcal{H}} = \|\mathbf{x}_0\|_{\mathcal{H}}$ ($t \ge 0$) and, therefore, $\|u(t)\|_1 \le \|\mathbf{x}_0\|_{\mathcal{H}}$ for every $t \ge 0$. That is, $u(t)$ is bounded in $H_1$.

• Since $D^{1/2}u'(t) = iv(t)$, the function $D^{1/2}u'(t)$ is continuous as a function with values in $H^0$. Moreover, as in the preceding item, $D^{1/2}u'(t)$ is also bounded in $H^0$.

• $\lim_{t\to 0^+}\|u(t) - x_1\|_1 = 0$ and, since $\lim_{t\to 0^+}\|v(t) + ix_0\| = 0$, $\lim_{t\to 0^+}\|D^{1/2}u'(t) - x_0\| = \lim_{t\to 0^+}\|iv(t) - x_0\| = \lim_{t\to 0^+}\|v(t) + ix_0\| = 0$.

(3) Uniqueness. Suppose that $u(t)$ is a generalized solution of (8.3) which satisfies the conditions of the theorem with $x_1 = x_0 = 0$. Then, for every $s \in \mathbb{C}$ with $\operatorname{Re}(s) > 0$, the following integral exists in $H_{-1}$:
\[ \int_0^{+\infty} (iBu(t) + Du'(t))'e^{-st}\,dt = -\int_0^{+\infty} Au(t)e^{-st}\,dt. \tag{8.7} \]
Since $B: H_1 \to H_{-1}$ and $D^{1/2}: H \to H_{-1}$ are bounded (Lemma 8.1.4), $\lim_{t\to 0^+}\|u(t)\|_1 = 0$ and $\lim_{t\to 0^+}\|D^{1/2}u'(t)\| = 0$, equation (8.7), after integration by parts, becomes
\[ s\int_0^{+\infty} (iBu(t) + Du'(t))e^{-st}\,dt = -\int_0^{+\infty} Au(t)e^{-st}\,dt. \]
Thus, denoting by $\hat{u}(s)$ the Laplace transform of $u(t)$, we have $A\hat{u}(s) + isB\hat{u}(s) + s^2D\hat{u}(s) = 0$; that is, $L(is)\hat{u}(s) = 0$.

Consider the pencil $L_1(\lambda) = A^{-1/2}L(\lambda)A^{-1/2} = I + \lambda B_1 - \lambda^2 D_1$. Note that $L_1(\lambda)$ is a quadratic pencil of bounded self-adjoint operators. Moreover, $L_1$ is a hyperbolic pencil. Thus, for every $s \in \mathbb{C}$ with $\operatorname{Re}(s) > 0$, $L_1(is)$ is invertible and, therefore, $L(is)$ is invertible as well. Then $\hat{u} \equiv 0$ and, therefore, $u \equiv 0$.

8.3 Equation Associated with an Elliptic-hyperbolic Pencil

Given the elliptic-hyperbolic pencil
\[ L(\lambda) = A + \lambda B - \lambda^2 D, \tag{8.8} \]
we can associate with it the following operator-differential equation
\[ L\!\Big(\frac{d}{dt}\Big)u(t) = Au(t) + Bu'(t) - Du''(t) = 0. \tag{8.9} \]
In this section $\{H_\theta\}_{\theta\in\mathbb{R}}$ denotes the scale of Hilbert spaces generated by $A^{1/2}$.

Definition 8.3.1. A function $u(t)$ is called a generalized solution of (8.9) on the interval $(a, b)$ if $u(t)$ is continuously differentiable as a function with values in $H_1$, has a second derivative in the sense of distribution theory which is Bochner-integrable in $H_1$ on any compact subset of $(a, b)$, and (8.9) holds for almost all $t \in (a, b)$ in the space $H_{-1}$.

Theorem 8.3.2. For any $x_1 \in H_1$ there exists a unique generalized solution $u(t)$ of (8.9) on $(0, +\infty)$ (in the sense of Definition 8.3.1) such that $u(t)$ is a continuous function in $H_1$ and
\[ \lim_{t\to 0^+}\|u(t) - x_1\|_1 = 0, \qquad \lim_{t\to+\infty}\|u(t)\|_1 = 0. \tag{8.10} \]
Moreover, $u(t)$ is infinitely differentiable as a function with values in $H_1$ for $t > 0$ and decreases exponentially as $t \to +\infty$. If $D(B) \supseteq H_1$ and $D(D) \supseteq H_1$, then the solution $u(t)$ is a classical solution for $t > 0$.

Analogously, for any $x_0 \in H^0$ there exists a unique generalized solution $u(t)$ of (8.9) on $(-\infty, 0)$ (in the sense of Definition 8.3.1) such that $D^{1/2}u'(t)$ is a continuous function in $H^0$ and
\[ \lim_{t\to 0^-}\|D^{1/2}u'(t) - x_0\| = 0, \qquad \lim_{t\to-\infty}\|u(t)\|_1 = 0. \tag{8.11} \]
Moreover, $u(t)$ is infinitely differentiable as a function with values in $H_1$ for $t < 0$ and decreases exponentially as $t \to -\infty$. If $D(B) \supseteq H_1$ and $D(D) \supseteq H_1$, then the solution $u(t)$ is a classical solution for $t < 0$.

Proof. 1. Solution of (8.9), (8.11):

Let $B_+$, $B_-$ and $\varepsilon > 0$, with $\varepsilon < 2$, be such that $B|_{D(A)} = B_+ - B_-$, $B_+ \ge 0$ and $|\langle B_-x, y\rangle|^2 \le (2-\varepsilon)^2\langle Ax, x\rangle\langle Dy, y\rangle$ for every $x, y \in D(A)$.

Let $B_1^+ = A^{-1/2}B_+A^{-1/2}$ and $B_1^- = A^{-1/2}B_-A^{-1/2}$. Then, for every $x$ with $A^{-1/2}x \in D(A)$, $B_1x = A^{-1/2}B_+A^{-1/2}x - A^{-1/2}B_-A^{-1/2}x = B_1^+x - B_1^-x$. Thus, by Lemma 8.1.2 and extending by continuity, $B_1 = B_1^+ - B_1^-$.

Clearly $B_1^+ \ge 0$ and, for every $x, y \in H$ with $A^{-1/2}x, A^{-1/2}y \in H_2$,
\[\begin{aligned}
|\langle B_1^-x, y\rangle|^2 &= |\langle A^{-1/2}B_-A^{-1/2}x, y\rangle|^2 = |\langle B_-A^{-1/2}x, A^{-1/2}y\rangle|^2 \\
&\le (2-\varepsilon)^2\langle AA^{-1/2}x, A^{-1/2}x\rangle\langle DA^{-1/2}y, A^{-1/2}y\rangle = (2-\varepsilon)^2\langle x, x\rangle\langle A^{-1/2}DA^{-1/2}y, y\rangle \\
&= (2-\varepsilon)^2\langle x, x\rangle\langle D_1y, y\rangle.
\end{aligned}\]
Therefore, extending by continuity, we have $|\langle B_1^-x, y\rangle|^2 \le (2-\varepsilon)^2\langle x, x\rangle\langle D_1y, y\rangle$ for every $x, y \in H$. Thus, the hyperbolic pencil of bounded operators $L_1(\lambda) = I + \lambda B_1 - \lambda^2 D_1$ is elliptic-hyperbolic ($B_1$ and $D_1$ are defined as in (8.2)). Note that $L_1(\lambda) = A^{-1/2}L(\lambda)A^{-1/2}$.

Existence. Consider in $H^2$ the following operator-differential equation

\[ F\mathbf{u}'(t) - \mathbf{u}(t) = 0, \qquad F = \begin{pmatrix} -B_1 & D_1^{1/2} \\ D_1^{1/2} & 0 \end{pmatrix}. \tag{8.12} \]
Similarly to the proofs of Theorems 7.3.4 and 7.3.5, there exists a closed subspace $U_+ \subseteq H^2$ such that $F_+ := F|_{U_+}: U_+ \to U_+$ is a positive operator. Moreover, $U_+ = \{(Kx, x) \mid x \in \overline{R(D_1)}\}$, where $K$ is a bounded operator on the subspace $\overline{R(D_1)} \subseteq H$.

Let $U$ be the operator which maps $H^0$ isometrically onto $\overline{R(D_1)}$ (see the remark before Lemma 8.1.4). If $x_0 \in H^0$, then the function $\mathbf{u}(t) = (u_1(t), u_2(t)) = e^{tF_+^{-1}}\mathbf{x}_0$ ($t < 0$), where $\mathbf{x}_0 = (KUx_0, Ux_0)$, is a solution of (8.12). Further, $\mathbf{u}(t)$ is infinitely differentiable and
\[ \lim_{t\to 0^-}\|\mathbf{u}(t) - \mathbf{x}_0\|_{H^2} = 0, \qquad \lim_{t\to-\infty}\|\mathbf{u}(t)\|_{H^2} = 0. \tag{8.13} \]
From (8.12) we have $-B_1u_1'(t) + D_1^{1/2}u_2'(t) - u_1(t) = 0$ and $D_1^{1/2}u_1'(t) - u_2(t) = 0$ and, therefore, in $H$,
\[ L_1\!\Big(\frac{d}{dt}\Big)u_1(t) = u_1(t) + B_1u_1'(t) - D_1u_1''(t) = 0. \]

Let $u(t) = A^{-1/2}u_1(t)$. Then
\[ L\!\Big(\frac{d}{dt}\Big)u(t) = A^{1/2}L_1\!\Big(\frac{d}{dt}\Big)A^{1/2}A^{-1/2}u_1(t) = A^{1/2}L_1\!\Big(\frac{d}{dt}\Big)u_1(t) = 0. \]
That is, $u(t)$ satisfies (8.9), and it is a function with values in $H_1$. Clearly, $u(t)$ is infinitely differentiable for $t < 0$ as a function with values in $H_1$ and, therefore, is a generalized solution of (8.9) in the sense of Definition 8.3.1 (since, clearly, $H_1$ is dense in $H_{-1}$). Since $F_+$ is bounded and $F_+ > 0$, we have $F_+^{-1} \gg 0$ and, therefore, $u(t)$ is exponentially decreasing as $t \to -\infty$ (see [14], [15]).

By (8.13), since $u_2(t) = D_1^{1/2}u_1'(t)$ and $D_1^{1/2} = UD^{1/2}A^{-1/2}$, we obtain $0 = \lim_{t\to 0^-}\|D_1^{1/2}u_1'(t) - Ux_0\| = \lim_{t\to 0^-}\|UD^{1/2}A^{-1/2}u_1'(t) - Ux_0\| = \lim_{t\to 0^-}\|UD^{1/2}u'(t) - Ux_0\|$. Thus, since $U$ is invertible on $H^0$, $\lim_{t\to 0^-}\|D^{1/2}u'(t) - x_0\| = 0$. Moreover, $\lim_{t\to-\infty}\|u(t)\|_1 = \lim_{t\to-\infty}\|A^{1/2}u(t)\| = \lim_{t\to-\infty}\|u_1(t)\| = 0$.

Now, if $D(B) \supseteq H_1$ and $D(D) \supseteq H_1$, then $Bu'(t)$ and $Du''(t)$ are infinitely differentiable as functions with values in $H$ for $t < 0$. Therefore, $Au(t) = -Bu'(t) + Du''(t)$ has the same property. Thus, $u(t)$ takes values in $D(A)$ and, therefore, is a classical solution for $t < 0$.
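The decay mechanism invoked here ($F_+$ bounded and $F_+ > 0$ imply $F_+^{-1} \gg 0$, hence exponential decay of $e^{tF_+^{-1}}$ for $t \to -\infty$) can be seen numerically in finite dimensions; the matrix $F$ below is an illustrative stand-in for $F_+$, not an object from the text:

```python
import numpy as np

# If F > 0 is bounded, then F^{-1} >= ||F||^{-1} I, so u(t) = exp(t F^{-1}) x0
# satisfies ||u(t)|| <= exp(t / ||F||) ||x0|| for t <= 0:
# exponential decay as t -> -infinity.
rng = np.random.default_rng(1)
n = 5
F = rng.standard_normal((n, n)); F = F @ F.T + np.eye(n)   # stand-in for F_+ > 0
w, V = np.linalg.eigh(F)                                   # spectral decomposition

def u(t, x0):
    """u(t) = exp(t F^{-1}) x0, computed spectrally: exp(t/w_i) on each eigenspace."""
    return V @ (np.exp(t / w) * (V.T @ x0))

x0 = rng.standard_normal(n)
for t in (-1.0, -5.0, -20.0):
    bound = np.exp(t / w.max()) * np.linalg.norm(x0)
    print(t, np.linalg.norm(u(t, x0)) <= bound + 1e-12)    # True for each t
```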

Uniqueness. Suppose that $u(t)$ satisfies (8.9) and (8.11) with $x_0 = 0$. Then $u_1(t) = A^{1/2}u(t)$ satisfies $u_1(t) + B_1u_1'(t) - D_1u_1''(t) = 0$ in $H$ for $t < 0$, and $u_1(t) \in C_0^1[-\infty, -\delta]$ for any $\delta > 0$ (see Section A.4). By Proposition 7.3.3, for every $t < 0$, $\varepsilon_1\|u_1(t)\| \le \|D_1^{1/2}u_1'(t)\|$ and, since $D^{1/2} = U^{-1}D_1^{1/2}A^{1/2}$, we have $\varepsilon_1\|u(t)\|_1 \le \|D^{1/2}u'(t)\|$ for every $t < 0$. Thus, since $\lim_{t\to 0^-}\|D^{1/2}u'(t)\| = 0$, we have $\lim_{t\to 0^-}\|u(t)\|_1 = 0$. Then, by an argument analogous to the proof of uniqueness in Theorem 8.2.2, we have $u(t) \equiv 0$.

2. Solution of (8.9), (8.10):

Existence. Let $x_1 \in H_1$. By the proof of Theorem 7.3.4, there exists an infinitely differentiable function $u_1(t)$ which satisfies $u_1(t) + B_1u_1'(t) - D_1u_1''(t) = 0$ ($t > 0$), $\lim_{t\to+\infty}\|u_1(t)\| = 0$ and $\lim_{t\to 0^+}\|u_1(t) - A^{1/2}x_1\| = 0$.

Let $u(t) = A^{-1/2}u_1(t)$ ($t > 0$). Then $u(t)$ is an infinitely differentiable function with values in $H_1$. Clearly, $Au(t) + Bu'(t) - Du''(t) = 0$ ($t > 0$). Note that $\lim_{t\to 0^+}\|u(t) - x_1\|_1 = \lim_{t\to 0^+}\|A^{1/2}u(t) - A^{1/2}x_1\| = \lim_{t\to 0^+}\|u_1(t) - A^{1/2}x_1\| = 0$ and $\lim_{t\to+\infty}\|u(t)\|_1 = \lim_{t\to+\infty}\|A^{1/2}u(t)\| = \lim_{t\to+\infty}\|u_1(t)\| = 0$.

If $D(B) \supseteq H_1$ and $D(D) \supseteq H_1$ then, as in the previous case, $u(t)$ is a classical solution. Uniqueness. It is similar to the proof of uniqueness for (8.9), (8.11).

8.4 Completeness and Basis Property

Let $E^+$ (resp. $E^-$) denote the set of all eigenvectors of the hyperbolic pencil $L(\lambda) = A + \lambda B - \lambda^2 D$ corresponding to its positive (resp. negative) eigenvalues.

Theorem 8.4.1. If $\sigma(L)\cap\mathbb{R}^+ \subseteq \sigma_d(L)$ (resp. $\sigma(L)\cap\mathbb{R}^- \subseteq \sigma_d(L)$), then the system $\{D^{1/2}x\}_{x\in E^+}$ (resp. $\{D^{1/2}x\}_{x\in E^-}$) is complete in the subspace $H^0 \subseteq H$.

If $B|_{D(A)}$ admits the representation $B|_{D(A)} = B_+ - B_-$, where $B_+ \ge 0$ and the operator $B_-$ is such that
\[ |\langle B_-x, y\rangle|^2 \le 4\langle Ax, x\rangle\langle Dy, y\rangle \quad \text{for every } x, y \in D(A), \]
then the system $E^-$ is complete in $H_1$.

Proof. Let $B_1$ and $D_1$ be the operators defined in (8.2) and let $L_1(\lambda) = I + \lambda B_1 - \lambda^2 D_1$. Since $L_1(\lambda) = A^{-1/2}L(\lambda)A^{-1/2}$, the eigenvalues of $L(\lambda)$ and $L_1(\lambda)$ coincide. Moreover, $x \in H$ is an eigenvector of $L_1$ associated to the eigenvalue $\lambda$ if and only if $A^{-1/2}x$ is an eigenvector of $L$ with eigenvalue $\lambda$. Therefore, the assertion follows from Theorems 7.1.9 and 7.1.10.

From theorem 7.3.6 and the formulas (8.2) the next theorem follows.

Theorem 8.4.2. If $L(\lambda)$ is an elliptic-hyperbolic pencil and its spectrum on $\mathbb{R}^-$ (resp. $\mathbb{R}^+$) is discrete, then the system $E^-$ (resp. the system $\{D^{1/2}x\}_{x\in E^+}$) contains a Riesz basis in the space $H_1$ (resp. in the subspace $H^0 \subseteq H$).

Appendix A

Some Results on Functional Analysis and Operator Theory

A.1 Functional Analysis and Operator Theory

Theorem A.1.1. Let $T$ be a self-adjoint linear operator acting in a Hilbert space and let $\alpha, \beta \in \mathbb{R}$. Then:

i) $\langle Tx, x\rangle \le \beta$ for every $x \in D(T)$ with $\|x\| = 1$ if and only if $(\beta, \infty) \subseteq \rho(T)$.

ii) $\langle Tx, x\rangle \ge \alpha$ for every $x \in D(T)$ with $\|x\| = 1$ if and only if $(-\infty, \alpha) \subseteq \rho(T)$.

Theorem A.1.2 (Riesz Projections). Let $T$ be a bounded linear operator acting in a Hilbert space $H$. Suppose that $\sigma(T) = \sigma_1 \cup \sigma_2$, where $\sigma_1$ and $\sigma_2$ are disjoint non-empty sets such that $\sigma(T)\setminus\sigma_1$ and $\sigma(T)\setminus\sigma_2$ are open. Then there exist projections $P_1, P_2 \in B(H)$ such that:

i) $P_1P_2 = P_2P_1 = 0$ and $P_1 + P_2 = I$.

ii) $H = R(P_1)\dotplus R(P_2)$.

iii) $T$ commutes with $P_i$, $i = 1, 2$.

iv) $R(P_i)$ is $T$-invariant, $i = 1, 2$.

v) $\sigma(T|_{R(P_i)}) = \sigma_i$, $i = 1, 2$.
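The projections of Theorem A.1.2 are given by the contour integral $P_1 = \frac{1}{2\pi i}\oint_\Gamma (\lambda I - T)^{-1}\,d\lambda$ over a contour $\Gamma$ enclosing $\sigma_1$ only, which is easy to verify numerically; the matrix, contour and quadrature below are our own illustrative choices:

```python
import numpy as np

# Riesz projection P1 = (1/(2*pi*i)) * contour integral of the resolvent around
# sigma_1, approximated by the trapezoidal rule on a circle (spectrally accurate
# for analytic periodic integrands).
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.5, 1.0],
              [0.0, 0.0, 5.0]])            # sigma(T) = {1, 1.5, 5}
center, radius, m = 1.25, 1.0, 400          # circle enclosing sigma_1 = {1, 1.5}
P1 = np.zeros((3, 3), dtype=complex)
for k in range(m):
    z = np.exp(2j * np.pi * k / m)
    lam = center + radius * z               # point on the contour
    dlam = radius * z * 2j * np.pi / m      # d(lambda) for this node
    P1 += np.linalg.inv(lam * np.eye(3) - T) * dlam
P1 /= 2j * np.pi

# P1 is a (generally non-orthogonal) projection commuting with T, whose rank is
# the number of eigenvalues inside the contour:
print(round(np.trace(P1).real))             # 2
```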

Theorem A.1.3 (Banach). Let $X$ be a vector space and let $\|\cdot\|_1$, $\|\cdot\|_2$ be norms on $X$ such that $(X, \|\cdot\|_j)$ is a Banach space, $j = 1, 2$. If there is an $\alpha > 0$ such that $\|\cdot\|_1 \le \alpha\|\cdot\|_2$, then $\|\cdot\|_1$ and $\|\cdot\|_2$ are equivalent.

Theorem A.1.4. Let $H$ be a Hilbert space and let $T \in B(H)$. If $U \subseteq H$ is a closed $T$-invariant subspace, then $U^\perp$ is $T^*$-invariant.

Theorem A.1.5. Let $A(\mu)$ be an operator-valued function, holomorphic in a neighborhood $U_{\mu_0}$ of the point $\mu_0$, and suppose that all the values of $A(\mu)$ are compact operators. Then there exists $\varepsilon > 0$ such that, for all $\mu$ with $0 < |\mu - \mu_0| < \varepsilon$, the equation $(I - A(\mu))x = 0$ has the same number of linearly independent solutions.

Theorem A.1.6. Let $H$ be a Hilbert space and let $K: H \to H$ be a bounded linear operator such that $\|Kx\| < \|x\|$ for every $x \ne 0$. Then $(I + K)(H)$ is dense in $H$.

A.2 Normal Points of a Bounded Operator

Definition A.2.1. Let $H$ be a Hilbert space, $A \in B(H)$ and $\lambda \in \sigma_p(A)$. $\lambda$ is called a normal eigenvalue of $A$ if and only if:

(1) The algebraic multiplicity of $\lambda$ is finite, and

(2) $H = H_\lambda \dotplus Y_\lambda$, where $H_\lambda$ is the root space of $A$ associated to $\lambda$ and $Y_\lambda$ is an $A$-invariant subspace such that $(A - \lambda I)|_{Y_\lambda}$ is bijective.

Theorem A.2.2. Let $H$ be a Hilbert space, $A \in B(H)$ and $\lambda \in \sigma_p(A)$. Then $\lambda$ is a normal eigenvalue of $A$ if, and only if, it is an isolated point of $\sigma(A)$ and the Riesz projection $P_\lambda$ corresponding to $\lambda$ is of finite rank. In this case, $R(P_\lambda) = H_\lambda$.

Definition A.2.3. Let H be a Hilbert space, A ∈ B(H) and λ ∈ C. λ is called a normal point of A if and only if λ ∈ ρ(A) or it is a normal eigenvalue of A.

Remark. The set of all normal points of $A$ is denoted by $\tilde\rho(A)$. Clearly, $\tilde\rho(A)$ is open.

Theorem A.2.4. Let $H$ be a Hilbert space, let $A \in B(H)$ be a self-adjoint operator and let $P \in B_\infty(H)$. Then $\tilde\rho(A) = \tilde\rho(A + P)$.

A.3 Some Results about Definitizable Operators

Definition A.3.1. Let (H, [·, ·]) be a J-space and let T be a self-adjoint operator in the Krein space H. T is called definitizable if, and only if, ρ(T ) 6= ∅ and there exists a non-zero real polynomial p(λ) such that [p(T )x, x] ≥ 0 for every x ∈ D(p(T )). In this case p(λ) is called a definitizing polynomial for T .

Theorem A.3.2. Let T be a definitizable operator and S a self-adjoint operator in the Krein space (H, [·, ·]). If ρ(T ) ∩ ρ(S) 6= ∅ and if (T − λ)−1 − (S − λ)−1 has finite-dimensional range for some (and hence for all) λ ∈ ρ(T ) ∩ ρ(S), then S is definitizable.

Theorem A.3.3. Let $T$ be a definitizable self-adjoint operator in the Krein space $(H, [\cdot,\cdot])$. If there exists a definitizing polynomial $p$ such that all its zeros are real, then $\sigma(T) \subseteq \mathbb{R}$ and $\sigma_r(T) = \emptyset$. Further, in this case, there exists a function $t \in \mathbb{R}\setminus\{\text{zeros of } p\} \mapsto E_t$, called the spectral function of $T$, such that for each $t \in \mathbb{R}$ with $p(t) \ne 0$, $E_t$ is a self-adjoint operator in the Krein space; moreover, if $s, t \in \mathbb{R}$ are such that $p(t), p(s) \ne 0$, then:

• For every $x$: $E_tx \to 0$ ($t \to -\infty$), $E_tx \to x$ ($t \to \infty$) and $E_tx \to E_{t_0}x$ ($t \to t_0^+$).

• $E_tE_s = E_{\min(s,t)}$.

• If $x \in H$ is fixed, then the function $[E_tx, x]$ is monotone non-decreasing at $t = t_0$ if $p(t_0) > 0$ and monotone non-increasing at $t = t_0$ if $p(t_0) < 0$.

• $E_tT = TE_t$.

• $\sigma(T|_{E_t(H)}) \subseteq (-\infty, t]$.

Definition A.3.4. The set $c(T)$ of all critical points of $T$ is defined by
\[ c(T) = \Big[\bigcap_p N(p)\Big] \cap \sigma(T) \cap \mathbb{R}, \]
where $N(p)$ is the set of zeros of the polynomial $p$ and the intersection $\bigcap_p N(p)$ runs over all the definitizing polynomials $p$ of $T$.

Theorem A.3.5. Let $T$, $p(t)$ and $E_t$ be as in Theorem A.3.3, and let $\alpha \in \mathbb{R}$. Then $\alpha \in c(T)$ if, and only if, for all $\varepsilon > 0$ the subspace $(E_{\alpha+\varepsilon} - E_{\alpha-\varepsilon})(H)$ contains positive and negative elements.

In this case, if the strong limits $E_{\alpha+0}$ and $E_{\alpha-0}$ exist, then $\alpha$ is called a regular critical point of $T$; otherwise the critical point $\alpha$ is called singular.

Theorem A.3.6. Suppose that $T$ is a non-negative self-adjoint operator in the Krein space $H$. Then $T$ is similar to a self-adjoint operator in a Hilbert space if and only if zero is not a singular critical point of $T$ and $\ker(T^2) = \ker(T)$.

Proposition A.3.7. Let $T$ be a definitizable operator with definitizing polynomial $p$ and let $t_0 \in \sigma(T)$ be such that $p(t_0) \ne 0$. Then $t_0$ is of positive type (resp., negative type) if $p(t_0) > 0$ (resp., if $p(t_0) < 0$). In this case the sequence $(x_n)$ in Definition 4.3.5 can be chosen such that $[x_n, x_n] \ge \delta$ (resp., $[x_n, x_n] \le -\delta$) for some given $\delta > 0$.

Theorem A.3.8. Let $T$, $U_+$ and $U_-$ be as in Theorem A.3.9, and let $\alpha$ be a critical point. Then $\alpha$ is regular if, and only if, $T$ has no neutral eigenvectors and, for at least one of the invariant subspaces $U_+$ or $U_-$, call it $U$, $\alpha$ is an isolated point of $\sigma(T|_U)$ and $\alpha$ is an eigenvalue of $T|_U$ with finite multiplicity.

Theorem A.3.9. Let $T$ be a definitizable operator in the Krein space $(H, [\cdot,\cdot])$. Then there exists a $T$-invariant subspace $U_+$ (resp., $U_-$) of $H$ such that $U_+$ (resp., $U_-$) is maximal positive semi-definite (resp., maximal negative semi-definite) and, therefore, closed.

For each such subspace $U_+$ (resp., $U_-$) we have $\sigma(T|_{U_+}) = \sigma^+(T)$ (resp., $\sigma(T|_{U_-}) = \sigma^-(T)$), and $\sigma_r(T|_{U_+})$ (resp., $\sigma_r(T|_{U_-})$) contains only singular critical points.

Further, if the operator $T$ has no neutral eigenvectors, then $U_+$ is positive definite (resp., $U_-$ is negative definite). If, moreover, all its critical points are regular, then the subspace $U_+$ (resp., $U_-$) is uniquely determined and $U_+$ is uniformly positive (resp., $U_-$ is uniformly negative).

A.4 Distributions and Operators

If $-\infty \le a < b \le +\infty$, then $C_0^k[a, b; H]$ denotes the set of $H$-valued functions that have $k$ continuous derivatives on $[a, b]$ and a $(k+1)$-st derivative in the sense of distribution theory which is Bochner-integrable on any compact subset of the interior of $[a, b]$. Moreover, if $b = +\infty$ (resp., $a = -\infty$), then all $k$ derivatives tend strongly to zero as $t \to +\infty$ (resp., $t \to -\infty$).

Lemma A.4.1. If $T$ is a bounded self-adjoint operator on $H$ and $u \in C_0^0[\alpha, +\infty; H]$, then
\[ 2\operatorname{Re}\int_\alpha^{+\infty}\langle Tu'(t), u(t)\rangle\,dt = 2\operatorname{Re}\int_\alpha^{+\infty}\langle Tu(t), u'(t)\rangle\,dt = -\langle Tu(\alpha), u(\alpha)\rangle. \]
Analogously, if $u \in C_0^0[-\infty, \beta; H]$, then
\[ 2\operatorname{Re}\int_{-\infty}^\beta\langle Tu'(t), u(t)\rangle\,dt = 2\operatorname{Re}\int_{-\infty}^\beta\langle Tu(t), u'(t)\rangle\,dt = \langle Tu(\beta), u(\beta)\rangle. \]
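The identity is the fundamental theorem of calculus for $t \mapsto \langle Tu(t), u(t)\rangle$, since $2\operatorname{Re}\langle Tu'(t), u(t)\rangle = \frac{d}{dt}\langle Tu(t), u(t)\rangle$ and the boundary term at infinity vanishes. A quick numerical check with an illustrative exponentially decaying trajectory (the matrix $T$ and the function $u$ are our own choices):

```python
import numpy as np

# Check of Lemma A.4.1 for u(t) = exp(-t) * (cos t, sin t): the integral of
# 2<T u'(t), u(t)> over [alpha, +inf) telescopes to -<T u(alpha), u(alpha)>.
T = np.array([[2.0, 1.0], [1.0, 3.0]])                    # bounded self-adjoint
alpha, h, N = 0.5, 1e-4, 300_000                          # truncate at t ~ 30.5

def u(t):
    t = np.atleast_1d(t)
    return np.exp(-t)[:, None] * np.column_stack((np.cos(t), np.sin(t)))

def du(t):
    t = np.atleast_1d(t)
    return np.exp(-t)[:, None] * np.column_stack((-np.cos(t) - np.sin(t),
                                                   np.cos(t) - np.sin(t)))

ts = alpha + h * np.arange(N)
integrand = 2 * np.einsum('ij,jk,ik->i', du(ts), T, u(ts))  # 2<T u', u> (real case)
integral = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid
rhs = -(u(alpha)[0] @ (T @ u(alpha)[0]))
print(integral, rhs)                                        # agree to high accuracy
```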

Bibliography

[1] Bognár, J., Indefinite Inner Product Spaces, Springer-Verlag, 1974.

[2] Azizov, T. Ya. and Iokhvidov, I. S., Linear Operators in Spaces with an Indefinite Metric, John Wiley & Sons, 1989.

[3] Christensen, O., Frames and Bases: An Introductory Course, Birkhäuser, 2008.

[4] Langer, H., «Über stark gedämpfte Scharen im Hilbertraum», Journal of Mathematics and Mechanics, vol. 17, No. 7, pp. 685–705, 1968.

[5] Langer, H., «Spectral Functions of Definitizable Operators in Krein Spaces», Lecture Notes in Mathematics, vol. 948, pp. 1–46, 1982.

[6] Markus, A. S., Introduction to the Spectral Theory of Polynomial Operator Pencils, AMS, 1988.

[7] Gohberg, I. C. and Krein, M. G., Introduction to the Theory of Linear Non-Self-adjoint Operators, AMS, 1969.

[8] Winklmeier, M., Functional Analysis, Lecture Notes, 2015.

[9] Winklmeier, M., Operator Theory, Lecture Notes, 2015.

[10] Hislop, P. D. and Sigal, I. M., Introduction to spectral theory. With applications to Schrödinger operators, Springer-Verlag, 1996.

[11] Shkalikov, A. A., «Strongly damped pencils of operators and solvability of the corresponding operator-differential equations», Math. USSR Sbornik, vol. 63, No. 1, pp. 97–119, 1989.

[12] Conway, J. B., Functions of One Complex Variable, Springer-Verlag, 1978.

[13] Sz.-Nagy, B., Foias, C., Bercovici, H. and Kérchy, L., Harmonic Analysis of Operators on Hilbert Space, Springer, 2010.

[14] Engel, K., and Nagel, R., One-Parameter Semigroups for Linear Evolution Equations, Springer, 2000.

[15] Chueshov, I., Lectures on Operator Semigroups, Lecture Notes, 2015.

[16] Krasnoselskii, M. A., Zabreiko, P. P., Pustylnik, E. I. and Sobolevskii, P. E., Integral Operators in Spaces of Summable Functions, Noordhoff International Publishing, 1976.

[17] Kato, T., Perturbation Theory for Linear Operators, Springer, 1995.

[18] Schmüdgen, K., Unbounded Self-adjoint Operators, Springer, 2012.

[19] Kostyuchenko, A. G. and Shkalikov, A. A., «Self-adjoint Quadratic Operator Pencils and Elliptic Problems», Funktsional. Anal. i Prilozhen, vol. 17, no. 2, pp. 38–61, 1983.

[20] Krein, M.G. and Langer, H., «On some mathematical principles in the linear theory of damped oscillations of continua I», Integral Equations and Operator Theory, vol. 1/3, pp. 364–399, 1978.

[21] Kozlov, V. and Maz’ya, V., Differential Equations with Operator Coefficients, Springer, 1999.

[22] Zaidman, S. D., Abstract differential equations, Pitman, 1979.

[23] Tisseur, F. and Meerbergen, K., «The Quadratic Eigenvalue Problem», SIAM Review, vol. 43, no. 2, pp. 235–286, 2001.
