
The concept of monodromy for linear problems and its application

Misael Avendaño Camacho

Contents

Introduction

1 Monodromy for linear systems with boundary conditions
  1.1 Fundamental solutions
    1.1.1 Linear Systems on Matrix Lie Algebras
    1.1.2 Lax Equation
    1.1.3 Zero curvature equation
    1.1.4 Lyapunov Transformations
  1.2 Definition of monodromy for linear boundary problems
    1.2.1 Periodic case
    1.2.2 Quasiperiodic case
    1.2.3 Decreasing case
  1.3 Analytic Properties of the Fundamental Solution
    1.3.1 Case U = U_0 + λU_1
  1.4 The Time Evolution of the Monodromy Matrix

2 Linear problem in sl(2, C)
  2.1 The Lie algebra sl(2, C)
    2.1.1 Involution Relations
  2.2 Linear Problem in sl(2, C) with Involution Property
    2.2.1 Quasi-periodic case
    2.2.2 Decreasing case
    2.2.3 The Spectral Problem

3 The Inverse Problem: The Rapidly Decreasing Case. The Nonlinear Schrödinger Equation
  3.1 Formulation of results
  3.2 The Inverse Problem for the Zero Curvature Equation
  3.3 The Nonlinear Schrödinger Equation
  3.4 Zero curvature representation for the NLS equation
  3.5 Application of the inverse problem to the NLS equation
  3.6 Non-solitonic and solitonic solutions of the NLS equation

Introduction

The notion of a monodromy matrix (operator) naturally appears in the study of linear systems with periodic coefficients. This notion gives rise to the well-known result [3, 12] on the reducibility of linear periodic systems (Floquet's Theorem), which says that the monodromy matrix contains the complete information about a given system. The goal of the present work is to develop a unified concept of monodromy for linear systems on Lie algebras in the quasiperiodic and decreasing cases. The quasiperiodic case is a natural generalization of the periodic one. The decreasing case can be interpreted as the "limiting" periodic case (the period tends to infinity). Such classes of linear problems arise in the integrability theory of nonlinear partial differential equations, in the framework of the so-called inverse scattering method [7, 8]. The main idea of this method is to represent a nonlinear partial differential equation as the consistency condition for two linear problems, which leads to the study of the zero curvature equation [7] or the Lax equation [8]. An important feature of the inverse scattering method is that the linear problem involves a (spectral) parameter λ. The main idea is to study the analytic properties of the fundamental solution and the monodromy in λ. This gives us spectral (scattering) data determining the problem. The time evolution of the spectral data gives solutions to the original nonlinear evolution equation. In Chapters 1 and 2, we study the linear problem and the zero curvature equation in the quasiperiodic and decreasing cases (independently of the integrability theory for nonlinear equations), and then, in Chapter 3, we illustrate our results for the case of the nonlinear Schrödinger equation [7]. In Chapter 1, the goal is to give a definition of the monodromy for linear systems on general matrix Lie algebras in the quasiperiodic and rapidly decreasing cases.
We describe general properties of the fundamental solution and the monodromy matrix and study their dependence on the "spectral" parameter. In the next chapter we apply the general results obtained in Chapter 1 to the study of linear problems on the Lie algebra sl(2, C) possessing the involution property. Such a class of linear problems plays an important role in the integrability theory for nonlinear evolution equations. In Chapter 3, we formulate results on the inverse problem for linear systems on sl(2, C) in the rapidly decreasing case and on the zero curvature equation. That is, we shall show that it is possible to reconstruct the linear system in sl(2, C) (Chapter 2) from its spectral data (Definition (2.2.2)), and, assuming the linear time dynamics (2.2.91), (2.2.92) for the spectral data, we shall get a solution of the zero curvature equation. Finally, as an application of the inverse problem, we construct some solutions of the Nonlinear Schrödinger Equation (NLS equation). This equation arises in various physical contexts; for example, it describes the effects of self-focusing of the envelope of a monochromatic plane wave propagating in nonlinear media [2]. The NLS equation also appears in the theory of surface waves on shallow water [4]. Equation (3.3.1) may also be considered as the Hartree-Fock equation for a one-dimensional quantum Boson gas with point interaction. Physically, the constant κ in (3.3.1) plays the role of a coupling constant: the case κ > 0 corresponds to attractive interaction

and κ < 0 is the repulsive case. The two cases are essentially different in optical applications, describing self-focusing or defocusing of the light rays [2]. Mathematically, these two cases are also very different, because the first one corresponds to a self-adjoint linear problem while the second one is related to a non-self-adjoint linear problem. The nonlinear Schrödinger equation was first solved by the inverse scattering method by Zakharov and Shabat [13]. In our treatment we shall follow the approach of [7], using the results of Chapter 3. In the context of the integrability of the NLS equation, the key observation is that the NLS equation admits a zero curvature representation, or Lax pair.

Chapter 1

Monodromy for linear systems with boundary conditions

The goal of this chapter is to give a definition of the monodromy for linear systems on general matrix Lie algebras in the quasiperiodic and rapidly decreasing cases. We describe general properties of the fundamental solution and the monodromy matrix and study their dependence on the "spectral" parameter.

1.1 Fundamental solutions

Let V be a finite-dimensional vector space over ℝ or ℂ. Denote by gl(V) the Lie algebra of all linear transformations of V and by GL(V) the general linear group consisting of all invertible linear transformations. Given a C^∞ map ℝ ∋ x ↦ U(x) ∈ gl(V), consider the following linear system:

    df/dx = U(x)f,    f = f(x) ∈ V.    (1.1.1)

We shall assume that U is bounded on ℝ with respect to some norm on gl(V),

    sup_{x∈ℝ} ‖U(x)‖ < ∞.    (1.1.2)

Then, as is well known [10, 11], there exists the fundamental solution of (1.1.1), that is, a function ℝ² ∋ (x, y) ↦ F(x, y) satisfying the Cauchy problem

    (d/dx) F(x, y) = U(x)F(x, y),    (1.1.3)
    F(x, y)|_{x=y} = I,    (1.1.4)

for every y ∈ ℝ. For a fixed y ∈ ℝ, the solution of (1.1.1) with initial data f|_{x=y} = f^0 is given by

    f(x) = F(x, y)f^0.    (1.1.5)

Proposition 1.1.1. The fundamental solution F(x, y) is differentiable in x, y and has the following properties:

(i) Non-degeneracy,

    det F(x, y) ≠ 0,    (1.1.6)

and hence F(x, y) ∈ GL(V).


(ii) The transition property:

    F(x, z)F(z, y) = F(x, y),    (1.1.7)

for all x, y, z.

(iii) The inverse of the fundamental solution is given by

    F^{-1}(x, y) = F(y, x),

and satisfies

    (d/dy) F(x, y) = −F(x, y)U(y).

Proof. (i) Let s ∈ ℝ be a fixed point. For fixed y, the fundamental solution x ↦ F(x, y) is a continuous curve in gl(V) joining the identity element I with the point F(s, y), and it stays invertible by Liouville's formula, det F(x, y) = exp(∫_y^x tr U(z) dz) ≠ 0. Hence F(x, y) ∈ GL(V); moreover, for each x it lies in the connected component GL_0(V) of the identity element. In the real case, since the determinant is a continuous function,

    GL_0(V) = {X ∈ GL(V) | det X > 0}.

(ii) Fix the points z and y, and recall that F(x, y) is the fundamental solution of the linear problem (1.1.3), (1.1.4); if we impose the condition at x = z instead, then F|_{x=z} = F(z, y). On the other hand, the matrix function G(x, y) = F(x, z)F(z, y) also satisfies the linear problem with the same condition at x = z; therefore, by uniqueness, F(x, z)F(z, y) = F(x, y), as desired.

(iii) By (ii) we have F(x, y)F(y, x) = F(x, x) = I, so F^{-1}(x, y) = F(y, x).

Finally, by (ii) we know that F(x, y)F(y, x) = I. Differentiating with respect to y and using (d/dy) F(y, x) = U(y)F(y, x), we derive

    (d/dy) F(x, y) = −F(x, y) [(d/dy) F(y, x)] F(x, y)
                   = −F(x, y) U(y) F(y, x) F(x, y)
                   = −F(x, y) U(y).    □
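The properties of Proposition 1.1.1 can be checked numerically. The sketch below is illustrative only: the coefficient U(x) and the fourth-order Runge-Kutta integrator are hypothetical choices, not part of the text; any bounded smooth U would do.

```python
import numpy as np

def U(x):
    # a bounded smooth coefficient in gl(2, R); chosen only for illustration
    return np.array([[0.0, np.cos(x)], [-np.cos(x), 0.3 * np.sin(x)]])

def fundamental_solution(x, y, n_steps=2000):
    """Integrate dF/dx = U(x) F with F(y, y) = I, by classical RK4."""
    F = np.eye(2)
    h = (x - y) / n_steps
    t = y
    for _ in range(n_steps):
        k1 = U(t) @ F
        k2 = U(t + h / 2) @ (F + h / 2 * k1)
        k3 = U(t + h / 2) @ (F + h / 2 * k2)
        k4 = U(t + h) @ (F + h * k3)
        F = F + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return F

x, z, y = 2.0, 1.0, 0.0
Fxy = fundamental_solution(x, y)
Fxz = fundamental_solution(x, z)
Fzy = fundamental_solution(z, y)
Fyx = fundamental_solution(y, x)

print(abs(np.linalg.det(Fxy)) > 1e-8)                    # (i) non-degeneracy
print(np.allclose(Fxz @ Fzy, Fxy, atol=1e-6))            # (ii) transition property
print(np.allclose(np.linalg.inv(Fxy), Fyx, atol=1e-6))   # (iii) inverse
```

All three printed checks come out true, up to the discretization error of the integrator.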

1.1.1 Linear Systems on Matrix Lie Algebras.

Now, let us consider the linear system (1.1.1) in the case when V = F^n, where F is a field; we only consider F = ℝ or ℂ. Assume that the coefficient U(x) takes values in a subalgebra g of gl(n, F), the Lie algebra consisting of n×n matrices with entries in the field F. Denote by GL(n, F) the Lie group of n×n invertible matrices. For each n×n real or complex matrix X, the exponential of X, denoted by e^X or exp X, is defined by the power series

    e^X = Σ_{m=0}^{∞} X^m / m!.    (1.1.8)
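The series (1.1.8) can be sketched directly in code; the truncation order below is an arbitrary illustrative choice, and the rotation generator X is a hypothetical example.

```python
import numpy as np

def expm_series(X, terms=30):
    """e^X via the truncated power series of (1.1.8)."""
    S = np.eye(X.shape[0])
    T = np.eye(X.shape[0])
    for m in range(1, terms):
        T = T @ X / m        # T now holds X^m / m!
        S = S + T
    return S

# X generates rotations, so e^{tX} should be an orthogonal matrix for every t
X = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = expm_series(0.5 * X)
print(np.allclose(R @ R.T, np.eye(2), atol=1e-10))   # e^{tX} lies in SO(2)
print(np.isclose(np.linalg.det(R), 1.0))
```

This also illustrates the definition (1.1.9) below: exp(tX) staying in SO(2) for all t is exactly the statement that X belongs to the Lie algebra of SO(2).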

The map t 7→ exp tX is a smooth curve in gl(n, F). Let G ⊂ GL(n) be a matrix Lie subgroup. The Lie algebra of G, denoted by g, is the set of matrices X such that etX is in G for all real t,

    g := {X ∈ gl(n, F) | exp(tX) ∈ G for all t ∈ ℝ}.    (1.1.9)

In fact, g is a Lie subalgebra of gl(n, F). In general, if G is a Lie group, denote by G_0 the connected component of the identity element in G. According to [6], G_0 has the following properties:

• pathwise connected,

• an open and closed subset of G,

• a normal subgroup of G.

Proposition 1.1.2. Let g be a subalgebra of gl(n, F). Fix a smooth map U(x) ∈ g satisfying (1.1.2). If F(x, y) is the fundamental solution of the linear system (1.1.1) on g, then F(x, y) lies in the connected component of the identity element of GL(n, F).

Proof. The fundamental solution F(x, y) is a continuous curve which joins the identity matrix I with F(x, y). Since GL_0(n) is pathwise connected, it follows that F(x, y) must belong to the connected component of the identity.

Consider the linear system (1.1.1) on the Lie algebra g, with initial data f|_{x=y} = f^0. We define the map

    f(x) ↦ f̃(x) = B(x)f(x).    (1.1.10)

Here B(x) is a C^∞ matrix map with values in GL(n, F). Then f̃(x) is a solution of the linear system

    (d/dx) f̃(x) = Ũ(x) f̃(x),    (1.1.11)
    f̃(x)|_{x=y} = f̃^0 := B(y)f^0.    (1.1.12)

Here

    Ũ = (dB/dx) B^{-1} + B U B^{-1}.

Proposition 1.1.3. Let F(x, y) and F̃(x, y) be the fundamental solutions of the linear system (1.1.1) on g and of the transformed system (1.1.11)-(1.1.12), respectively. Then

    F̃(x, y) = B(x) F(x, y) B^{-1}(y).    (1.1.13)

Proof. Since F̃(x, y) is the fundamental solution of (1.1.11), and f̃^0 = B(y)f^0, we have

    F̃(x, y) f̃^0 = f̃(x) = B(x) f(x) = B(x) F(x, y) f^0 = B(x) F(x, y) B^{-1}(y) f̃^0.

Since f̃^0 is arbitrary, F̃(x, y) = B(x) F(x, y) B^{-1}(y).    □
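The transformation formula can be checked numerically in the simplest situation of a constant gauge B, where the derivative term in Ũ vanishes and Ũ = BUB^{-1} (so the arguments of B play no role). The matrices B and U below are hypothetical illustrative choices.

```python
import numpy as np

B = np.array([[2.0, 1.0], [1.0, 1.0]])   # constant invertible gauge
Binv = np.linalg.inv(B)

def U(x):
    # bounded coefficient, chosen only for illustration
    return np.array([[0.0, np.cos(x)], [0.2, 0.0]])

def U_tilde(x):
    # for a constant gauge B, the transformed coefficient is B U B^{-1}
    return B @ U(x) @ Binv

def fundamental(x, y, coeff, n_steps=3000):
    """RK4 for dF/dx = coeff(x) F, F(y, y) = I."""
    F, h, t = np.eye(2), (x - y) / n_steps, y
    for _ in range(n_steps):
        k1 = coeff(t) @ F
        k2 = coeff(t + h / 2) @ (F + h / 2 * k1)
        k3 = coeff(t + h / 2) @ (F + h / 2 * k2)
        k4 = coeff(t + h) @ (F + h * k3)
        F = F + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return F

x, y = 1.5, 0.0
lhs = fundamental(x, y, U_tilde)     # fundamental solution of the gauged system
rhs = B @ fundamental(x, y, U) @ Binv
print(np.allclose(lhs, rhs, atol=1e-6))
```

The printed check is true up to integration error; for an x-dependent gauge the extra term (dB/dx)B^{-1} would have to be included in U_tilde.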

1.1.2 Lax Equation.

We shall consider another special type of linear system, in the case when V = g and the coefficient U(x) takes values in the adjoint algebra of g. First we recall the definitions of the adjoint group and the adjoint algebra. Let G be a Lie subgroup of GL(n, F), and let g be the Lie algebra of G. The adjoint representation of G on g is the homomorphism

    Ad : G → GL(g),    (1.1.14)
    Ad_g(X) := (d/dt) (g exp(tX) g^{-1}) |_{t=0}.    (1.1.15)

In particular, if G is a matrix group, then

    Ad_X(Y) := X Y X^{-1},    X ∈ G, Y ∈ g.    (1.1.16)

The adjoint operator of g on g is the homomorphism given by the differential of the adjoint representation (1.1.15) at the identity,

    ad : g → End(g),    (1.1.17)
    ad := d(Ad)_e.    (1.1.18)

Thus, if g is a matrix algebra, the adjoint operator is given by the matrix commutator,

    ad_X(Y) := [X, Y] = XY − YX.    (1.1.19)

Let us define the adjoint group Ad(g) of the Lie algebra g as the subgroup of GL(g) generated by the elements e^{ad_X}, for X ∈ g. The adjoint algebra ad(g) is the Lie algebra of the adjoint group.

Proposition 1.1.4. Let g be a subalgebra of gl(n, F). Fix a smooth map U(x) ∈ ad(g) which is bounded on ℝ. Then the fundamental solution F(x, y) of the corresponding linear system (1.1.1) takes values in the adjoint group,

    F(x, y) ∈ Ad(g)    for all x, y.    (1.1.20)

Proof. By Proposition (1.1.2), F(x, y) belongs to the connected component of a Lie group whose Lie algebra is ad(g). This connected component is Ad(G_0), where G_0 is the connected component of G. According to [6], the adjoint group of g is the unique connected Lie subgroup of GL(g) with Lie algebra equal to ad(g), and hence

    Ad(G_0) = Ad(g).    (1.1.21)

Therefore, by (1.1.21), F(x, y) is in the adjoint group.    □

Under the hypothesis of Proposition (1.1.4), the data U and f in (1.1.1) can be represented as follows:

    U(x) = −ad_{A(x)},    f(x) = L(x),    (1.1.22)

where A(x) ∈ g and L(x) ∈ g. Therefore, system (1.1.1) takes the form

    dL/dx = [L, A].    (1.1.23)

This system is called the Lax equation [8]. In this case, condition (1.1.2) reads

    sup_{x∈ℝ} ‖A(x)‖ < ∞.    (1.1.24)

Proposition 1.1.5. Under condition (1.1.24), the solution L(x) of the Lax equation with initial data L(0) = L_0 is well defined and given by

    L(x) = Ad_{Φ(x)} L_0 = Φ(x) L_0 Φ^{-1}(x),    (1.1.25)

where Φ is the fundamental solution of the linear system associated with A,

    dΦ/dx = −AΦ,    (1.1.26)
    Φ(0) = I.    (1.1.27)

Proof. Since sup ‖A(x)‖ < ∞, the fundamental solution Φ(x) of (1.1.26)-(1.1.27) exists. Define

    L̃(x) = Φ(x) L_0 Φ^{-1}(x).    (1.1.28)

Differentiating this function gives

    dL̃/dx = (dΦ/dx) L_0 Φ^{-1} − Φ L_0 Φ^{-1} (dΦ/dx) Φ^{-1}    (1.1.29)
           = Φ L_0 Φ^{-1} A − A Φ L_0 Φ^{-1}    (1.1.30)
           = [L̃, A];    (1.1.31)

moreover, L̃(0) = L_0. Therefore L̃ satisfies the Lax equation (1.1.23); hence, by the uniqueness property, we have L(x) = L̃(x).    □

Corollary 1.1.6. The eigenvalues of L(x), tr L^k(x), and det L^k(x) do not depend on x; that is, they are first integrals of the Lax equation.

Proof. By Proposition (1.1.5), for all x, L(x) is conjugate to L_0. Hence the eigenvalues of L(x) are the same as the eigenvalues of L_0 [5]. The rest of the corollary follows from basic properties of the trace and determinant:

    tr(L(x)) = tr(Φ(x) L_0 Φ^{-1}(x)) = tr(Φ(x) Φ^{-1}(x) L_0) = tr(L_0),    (1.1.32)

and

    det(L(x)) = det(Φ(x) L_0 Φ^{-1}(x)) = det(Φ(x) Φ^{-1}(x) L_0) = det(L_0).    (1.1.33)

Thus, they do not depend on x, only on the initial condition.    □

Let us recall also the geometric property of the Lax equation. Denote by

    O_V := {Ad_g(V) | g ∈ G} = Ad(G)V    (1.1.34)

the adjoint orbit of V ∈ g.

Corollary 1.1.7. If L is a solution of the Lax equation (1.1.23), then L(x) ∈ O_{L_0} for all x.

Proof. It follows directly from Proposition (1.1.5) and the definition of O_{L_0} given above.    □
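The isospectral character of the Lax flow can be observed numerically: integrating (1.1.26)-(1.1.27) and conjugating L_0 as in (1.1.25), the trace, determinant, and eigenvalues stay constant along the flow. The coefficient A(x) and the initial matrix L_0 below are hypothetical illustrative choices.

```python
import numpy as np

def A(x):
    # bounded coefficient in gl(2, R), chosen only for illustration
    return np.array([[np.sin(x), 1.0], [0.5, -np.sin(x)]])

def Phi(x, n_steps=4000):
    """RK4 for dPhi/dx = -A(x) Phi, Phi(0) = I  (eqs. (1.1.26)-(1.1.27))."""
    P, h, t = np.eye(2), x / n_steps, 0.0
    for _ in range(n_steps):
        k1 = -A(t) @ P
        k2 = -A(t + h / 2) @ (P + h / 2 * k1)
        k3 = -A(t + h / 2) @ (P + h / 2 * k2)
        k4 = -A(t + h) @ (P + h * k3)
        P = P + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return P

L0 = np.array([[1.0, 2.0], [0.0, -1.0]])
checks = []
for x in (0.7, 1.9):
    P = Phi(x)
    L = P @ L0 @ np.linalg.inv(P)          # L(x) = Ad_{Phi(x)} L0, eq. (1.1.25)
    checks.append(np.isclose(np.trace(L), np.trace(L0)))
    checks.append(np.isclose(np.linalg.det(L), np.linalg.det(L0)))
    checks.append(np.allclose(np.sort(np.linalg.eigvals(L).real),
                              np.sort(np.linalg.eigvals(L0).real), atol=1e-6))
print(all(checks))   # eigenvalues, trace and det are first integrals
```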

1.1.3 Zero curvature equation

Let g be the Lie algebra of a Lie group G, and let U, V : ℝ²_{(x,t)} → g. The equation

    ∂U/∂t − ∂V/∂x + [U, V] = 0    (1.1.35)

is called the zero curvature equation. The name "zero curvature" comes from the following interpretation in terms of connections. Let α = U dx + V dt be a 1-form on ℝ² with values in g. The zero curvature equation (1.1.35) is equivalent to

    dα + (1/2)[α ∧ α] = 0.    (1.1.36)

This last equation is called the Maurer-Cartan equation. If we interpret α as a connection in the G-bundle ℝ² × G, then the Maurer-Cartan equation says that the curvature of this connection is zero (see [8]).

Proposition 1.1.8. Let G be a Lie group and let α = U dx + V dt be a 1-form on ℝ², where U, V : ℝ²_{(x,t)} → g. The following statements are equivalent:

1. U and V satisfy the zero curvature equation (1.1.35).

2. There exists a function F : ℝ² → G such that α = F^{-1} dF.

The proof of this result is beyond the scope of this text; the interested reader can find it in [8]. There is a close relationship between Lax equations and the zero curvature equation. Let U, V : ℝ²_{(x,t)} → g be smooth functions, and consider the following pair of Lax equations:

    dL/dx = [L, U],    (1.1.37)
    dL/dt = [L, V].    (1.1.38)

The compatibility condition for the existence of L(x, t) with L(0, 0) = L_0 leads to the following fact: U and V satisfy the zero curvature equation (1.1.35). Moreover, there is also a relationship between the solution L(x, t) with L(0, 0) = L_0 and the map F in Proposition (1.1.8), namely

    L(x, t) = Ad_{F^{-1}(x,t)} L_0.    (1.1.39)
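As a sanity check on (1.1.35), one can build U and V from an explicit map F(x, t) and verify the equation by finite differences; here we use the convention dF = (U dx + V dt)F, which matches the sign of the commutator in (1.1.35). The two generators and the difference step are hypothetical illustrative choices.

```python
import numpy as np

def expm_series(X, terms=40):
    """Truncated exponential series; adequate for the small matrices used here."""
    S, T = np.eye(X.shape[0]), np.eye(X.shape[0])
    for m in range(1, terms):
        T = T @ X / m
        S = S + T
    return S

# Hypothetical non-commuting generators; any such pair works for this check.
A_gen = np.array([[0.0, 1.0], [0.0, 0.0]])
B_gen = np.array([[0.0, 0.0], [1.0, 0.0]])

# With F(x, t) = e^{xA} e^{tB} and the convention dF = (U dx + V dt) F,
# one finds U = A (independent of t) and V(x) = e^{xA} B e^{-xA}.
def V(x):
    E = expm_series(x * A_gen)
    return E @ B_gen @ np.linalg.inv(E)

x, h = 0.4, 1e-5
dU_dt = np.zeros((2, 2))                      # U = A does not depend on t
dV_dx = (V(x + h) - V(x - h)) / (2 * h)       # central finite difference
Ucur, Vcur = A_gen, V(x)
residual = dU_dt - dV_dx + Ucur @ Vcur - Vcur @ Ucur   # left side of (1.1.35)
print(np.allclose(residual, 0.0, atol=1e-6))
```

Analytically dV/dx = [A, V], so the residual vanishes up to the finite-difference error.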

1.1.4 Lyapunov Transformations.

Given a change of coordinates x ↦ T(x), it is possible to transform the linear system (1.1.1) into another linear system. Here we shall state an equivalence relation between linear systems, under a particular change of coordinates known as a Lyapunov transformation.

Definition 1.1.1. The change of coordinates

    f̃(x) = T(x)f(x)    (1.1.40)

is called a Lyapunov transformation if the C∞ function x 7→ T(x) ∈ G0 satisfies the following conditions:

(i) T(x) and dT/dx are bounded,

    sup_{x∈ℝ} ‖T(x)‖ < ∞    and    sup_{x∈ℝ} ‖(dT/dx)(x)‖ < ∞;    (1.1.41)

(ii) there exists a real number m such that |det T(x)| ≥ m > 0 for all x ∈ ℝ.

Remark that the inverse of a Lyapunov transformation is also a Lyapunov transformation. Indeed, by (i) we have

    |det T(x)| ≤ M < ∞.    (1.1.42)

It follows that

    |det T^{-1}(x)| = 1 / |det T(x)| ≥ 1/M.    (1.1.43)

Using the adjugate formula

    T^{-1}(x) = (1 / det T(x)) [Δ_{ij}],    (1.1.44)

by (i) and (ii) we get that

    sup_{x∈ℝ} ‖T^{-1}(x)‖ < ∞    and    sup_{x∈ℝ} ‖(dT^{-1}/dx)(x)‖ < ∞.    (1.1.45)

Moreover, it is easy to see that the composition of two Lyapunov transformations is also a Lyapunov transformation.

Definition 1.1.2. Two linear systems

    df/dx = U(x)f,    U(x) ∈ g,  sup ‖U(x)‖ < ∞,    (1.1.46)

    df̃/dx = Ũ(x)f̃,    Ũ(x) ∈ g,  sup ‖Ũ(x)‖ < ∞,    (1.1.47)

are equivalent (in the sense of Lyapunov) if they are related by a Lyapunov transformation.

Thus, Lyapunov transformations give an equivalence relation on the set of linear systems of type (1.1.1).

Definition 1.1.3. A linear system

    df/dx = U(x)f,    U(x) ∈ g,    (1.1.48)

is called reducible if it is equivalent to a system with constant coefficients,

    df̃/dx = K f̃,    K ∈ g.    (1.1.49)

Lyapunov's Criterion. The linear system (1.1.1) is reducible if and only if its fundamental solution F(x, y) has a representation

    F(x, y) = T(x) e^{xK(y)},    (1.1.50)

where T is a Lyapunov transformation and K(y) ∈ g does not depend on x.

1.2 Definition of monodromy for linear boundary problems

Now we define the monodromy matrix and state its main properties. Let G be a subgroup of GL(n, F) and g its Lie algebra. Consider the linear system

    (d/dx) F(x, y) = U(x)F(x, y),    (1.2.1)
    F(x, y)|_{x=y} = I.    (1.2.2)

We shall supplement this initial value problem with one of the following boundary conditions: the periodic, quasi-periodic, or decreasing case.

1.2.1 Periodic case

We recall that the linear system (1.2.1) is periodic if there exists a real number L > 0 such that the matrix coefficient satisfies U(x + 2L) = U(x) for each x ∈ ℝ. We will define the monodromy matrix in terms of the fundamental solution of the system (1.2.1). Although the material exposed here is well known [11, 12], this case serves as a model for defining the monodromy matrix in the quasiperiodic and decreasing cases. The main result for periodic coefficients is the Floquet Theorem, which states that by means of a change of variables one can reduce equation (1.2.1) to an equation with constant coefficients.

Definition 1.2.1. Let F(x, y) be the fundamental solution of the periodic linear problem (1.2.1). The matrix

    M(y) := F(y + 2L, y)    (1.2.3)

is called the monodromy matrix.

By Proposition (1.1.1), we have that F(x, y) ∈ G_0 for each x ∈ ℝ, and hence M(y) ∈ G_0.

Proposition 1.2.1. Let F(x, y) be the fundamental solution of the periodic linear problem (1.2.1).

1. The function F(x + 2L, y) is a solution of equation (1.2.1).

2. The fundamental solution F(x, y) of (1.2.1) and the map F(x + 2L, y) are related by the equation

    F(x + 2L, y) = F(x, y)M(y),    (1.2.4)

where M(y) is the monodromy matrix.

Proof. 1. Since F(x, y) is the fundamental solution, by (1.2.1) we get

    (d/dx) F(x + 2L, y) = U(x + 2L)F(x + 2L, y).

Since U(x) is 2L-periodic,

    (d/dx) F(x + 2L, y) = U(x)F(x + 2L, y).

Hence F(x + 2L, y) is also a solution of (1.2.1).

2. By (1), the functions F(x, y) and F(x + 2L, y) are both matrix solutions of the same differential equation. From basic facts of the theory of ODEs [11], [12], it follows that they differ by a constant right factor; that is, there exists a constant matrix C such that

    F(x + 2L, y) = F(x, y)C.    (1.2.5)

Putting x = y, we obtain C = F(2L + y, y) = M(y).    □
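Relation (1.2.4) is easy to verify numerically. The 2π-periodic coefficient below (so 2L = 2π) and the RK4 integrator are hypothetical illustrative choices.

```python
import numpy as np

L_per = np.pi   # half-period: U below is 2L-periodic with 2L = 2*pi

def U(x):
    # 2L-periodic coefficient, chosen only for illustration
    return np.array([[0.1 * np.sin(x), 1.0], [-1.0, -0.1 * np.sin(x)]])

def F(x, y, n_steps=4000):
    """RK4 for dF/dx = U(x) F, F(y, y) = I."""
    G, h, t = np.eye(2), (x - y) / n_steps, y
    for _ in range(n_steps):
        k1 = U(t) @ G
        k2 = U(t + h / 2) @ (G + h / 2 * k1)
        k3 = U(t + h / 2) @ (G + h / 2 * k2)
        k4 = U(t + h) @ (G + h * k3)
        G = G + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return G

y, x = 0.3, 1.1
Mono = F(y + 2 * L_per, y)            # monodromy matrix M(y), eq. (1.2.3)
lhs = F(x + 2 * L_per, y)
rhs = F(x, y) @ Mono
print(np.allclose(lhs, rhs, atol=1e-5))   # eq. (1.2.4)
```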

Now we recall the Floquet theorem in the case F = ℂ. We need the following fact [6].

Proposition 1.2.2. Let G ⊂ GL(n, C) be a subgroup and g ⊂ gl(n, C) its Lie algebra. Then

exp(g) = G0. (1.2.6)

Since M(y) ∈ G_0, by Proposition (1.2.2) there exists a matrix K(y) ∈ g such that

    K(y) = (1/2L) ln M(y).    (1.2.7)

Theorem 1.2.3 (Floquet-Lyapunov). Let F(x, y) be the fundamental solution of the linear problem (1.2.1), (1.2.2) with periodic coefficient in g ⊂ gl(n, ℂ). Then the fundamental solution can be expressed in the form

    F(x, y) = Ψ(x, y) e^{xK(y)},    (1.2.8)

where K is given by (1.2.7) and Ψ is a matrix function with the following properties:

1. Ψ(x + 2L, y) = Ψ(x, y),

2. Ψ(x, y) ∈ G_0, for all x.

Proof. Let us define

    Ψ(x, y) := F(x, y) e^{−xK(y)}.    (1.2.9)

It is clear that Ψ(x, y) is in G_0, because it is the product of F(x, y) and e^{−xK(y)}, both of which belong to G_0. We have to check that Ψ is periodic:

    Ψ(x + 2L, y) = F(x + 2L, y) e^{−(x+2L)K(y)}
                 = F(x, y) M(y) e^{−2LK(y)} e^{−xK(y)}
                 = F(x, y) e^{−xK(y)}
                 = Ψ(x, y).    □

The classical Floquet-Lyapunov reducibility theorem provides a periodic change of variables which reduces equation (1.2.1) to a system with constant coefficients. As a consequence of Theorem (1.2.3), we get the following analogue of the Floquet-Lyapunov theorem, formulated in terms of the fundamental solution.

Theorem 1.2.4 (Lyapunov reducibility). Let K(y) be given by (1.2.7) and Ψ(x, y) by (1.2.8). Under the 2L-periodic change of variables

    F̃(x, y) = Ψ^{-1}(x, y) F(x, y) Ψ(y, y),    (1.2.10)

the system (1.2.1) reduces to the form

    (d/dx) F̃(x, y) = K(y) F̃(x, y),    (1.2.11)
    F̃(x, y)|_{x=y} = I.    (1.2.12)

Proof. We only need to differentiate (1.2.10):

    dF̃/dx = −Ψ^{-1}(x, y) (dΨ/dx)(x, y) Ψ^{-1}(x, y) F Ψ(y, y) + Ψ^{-1}(x, y) U(x) F Ψ(y, y).

On the other hand, differentiating equation (1.2.9) gives

    (dΨ/dx)(x, y) = U(x) Ψ(x, y) − Ψ(x, y) K(y).

Substituting the last equation into the one above, we get

    dF̃/dx = −Ψ^{-1}(x, y) U(x) Ψ(x, y) F̃ + K(y) F̃ + Ψ^{-1}(x, y) U(x) Ψ(x, y) F̃
           = K(y) F̃.

Now, putting x = y into (1.2.10) leads to

    F̃(y, y) = Ψ^{-1}(y, y) F(y, y) Ψ(y, y) = Ψ^{-1}(y, y) Ψ(y, y) = I.    (1.2.13)    □

If U(x) ∈ gl(n, ℝ), then F(x, y) belongs to GL(n, ℝ), but in general the matrices Ψ(x, y) and K in (1.2.8) take values in a complex Lie group and a complex Lie algebra, respectively. In this case, the Floquet-Lyapunov Theorem (1.2.3) remains true under the following assumption.

Corollary 1.2.5. If the matrix coefficient in Theorem (1.2.3) takes values in g ⊂ gl(n, ℝ), then the fundamental solution has the form (1.2.8), with real Ψ and K, only if

    M(y) ∈ exp(g).    (1.2.14)

Proof. Condition (1.2.14) guarantees that K ∈ g ⊂ gl(n, ℝ), which proves the statement.    □

1.2.2 Quasiperiodic case

Let G ⊂ GL(n, F) be a Lie subgroup and g its Lie algebra. The linear system (1.2.1) is called quasiperiodic if there exist a real number L > 0 and a constant matrix Q ∈ G_0 such that the matrix coefficient U satisfies the condition

    U(x + 2L) = Q^{-1} U(x) Q    (1.2.15)

for all x ∈ ℝ.

Definition 1.2.2. Let F(x, y) be the fundamental solution of the quasiperiodic linear problem (1.2.1). The matrix

    M(y) := F(y + 2L, y)    (1.2.16)

is called the monodromy matrix.

In the quasiperiodic case, the monodromy M(x) has some additional properties.

Proposition 1.2.6. Let F(x, y) be the fundamental solution of the quasiperiodic linear problem ( 1.2.1) and M(y) its monodromy matrix. We have the following relations:

a) F(x + 2L, y) = Q^{-1} F(x, y) Q M(y);    (1.2.17)

b) F(x + 2L, y + 2L) = Q^{-1} F(x, y) Q;    (1.2.18)

c) Q M(x) = F^{-1}(0, x) Q M(0) F(0, x);    (1.2.19)

d) the monodromy matrix satisfies the quasiperiodicity condition

    M(y + 2L) = Q^{-1} M(y) Q    for all y ∈ ℝ.    (1.2.20)

Proof. a) For fixed y, the matrix function G(x, y) = F(x, y) Q M(y) is a solution of (1.1.3), because F(x, y) is its fundamental solution; evaluating at x = y, we have G(y, y) = Q M(y). On the other hand, by the computation in Proposition (1.2.1) together with the quasiperiodicity (1.2.15), the function H(x, y) = Q F(x + 2L, y) is also a solution of (1.1.3), and it satisfies the same condition, H(y, y) = Q M(y) = G(y, y). Then H(x, y) = G(x, y), which is (1.2.17).

b) Using the transition property from Proposition (1.1.1), we have

    F(x + 2L, y + 2L) = F(x + 2L, y) F(y, y + 2L)
                      = F(x + 2L, y) M^{-1}(y).

It follows from (a) that

    F(x + 2L, y + 2L) = Q^{-1} F(x, y) Q M(y) M^{-1}(y) = Q^{-1} F(x, y) Q.

c) By definition, M(0) = F(2L, 0) and F^{-1}(0, x) = F(x, 0), so we have to prove the identity

    F(x, 0) Q F(2L, 0) F(0, x) = Q M(x).

By (b), F(x, 0) Q = Q F(x + 2L, 2L), and therefore

    F^{-1}(0, x) Q M(0) F(0, x) = Q F(x + 2L, 2L) F(2L, 0) F(0, x)
                                = Q F(x + 2L, x)
                                = Q M(x).

d) Directly from the definition of the monodromy matrix (1.2.16) we have

    M(y + 2L) = F(y + 4L, y + 2L).    (1.2.21)

By part (b),

    M(y + 2L) = Q^{-1} F(y + 2L, y) Q    (1.2.22)
              = Q^{-1} M(y) Q.    (1.2.23)    □
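The relations above can be tested on a concrete quasiperiodic coefficient. The construction below (a rotation Q = R(θ) and U(x) obtained by slowly rotating a 2L-periodic U_0) is a hypothetical illustrative choice; it satisfies (1.2.15) because rotations commute with each other.

```python
import numpy as np

L_q, theta = 1.0, 0.7

def R(a):
    # rotation by angle a
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

Q = R(theta)

def U(x):
    # quasiperiodic coefficient: U(x + 2L) = Q^{-1} U(x) Q  (eq. (1.2.15))
    U0 = np.array([[0.0, 1.0 + 0.2 * np.cos(np.pi * x / L_q)], [0.5, 0.0]])
    return R(-theta * x / (2 * L_q)) @ U0 @ R(theta * x / (2 * L_q))

def F(x, y, n_steps=4000):
    """RK4 for dF/dx = U(x) F, F(y, y) = I."""
    G, h, t = np.eye(2), (x - y) / n_steps, y
    for _ in range(n_steps):
        k1 = U(t) @ G
        k2 = U(t + h / 2) @ (G + h / 2 * k1)
        k3 = U(t + h / 2) @ (G + h / 2 * k2)
        k4 = U(t + h) @ (G + h * k3)
        G = G + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return G

Mono = lambda s: F(s + 2 * L_q, s)    # monodromy matrix M(s), eq. (1.2.16)
y, x = 0.2, 0.9
lhs_a = F(x + 2 * L_q, y)
rhs_a = np.linalg.inv(Q) @ F(x, y) @ Q @ Mono(y)
print(np.allclose(lhs_a, rhs_a, atol=1e-5))   # relation (1.2.17)
lhs_d = Mono(y + 2 * L_q)
rhs_d = np.linalg.inv(Q) @ Mono(y) @ Q
print(np.allclose(lhs_d, rhs_d, atol=1e-5))   # relation (1.2.20)
```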

We shall give an analogue of the Floquet-Lyapunov theorem (1.2.3). First we make some observations about the class of transformations which reduce a quasiperiodic linear problem (1.2.15) to a periodic one. Recall the vectorial linear system (1.1.1),

    df/dx = U(x)f,    f = f(x) ∈ F^n,    (1.2.24)

with initial data f(x, y) = f^0 when x = y. Assume that the matrix coefficient U(x) is quasiperiodic, and let T(x) ∈ G_0 be a smooth function such that

    T(x + 2L) = T(x) Q.    (1.2.25)

We define the change of coordinates

    f̃(x, y) = T(x) f(x, y).    (1.2.26)

Proposition 1.2.7. Let f(x, y) be the solution of the quasiperiodic system (1.2.24), and let T(x) ∈ G_0 be a matrix function satisfying (1.2.25). Then the function f̃(x, y) given by (1.2.26) is a solution of a periodic system.

Proof. Differentiating (1.2.26), we obtain that f̃(x, y) satisfies the system

    df̃/dx = Ũ(x) f̃,    (1.2.27)

where

    Ũ = (dT/dx) T^{-1} + T U T^{-1}.

We only need to check that Ũ(x) is periodic:

    Ũ(x + 2L) = (dT/dx)(x + 2L) T^{-1}(x + 2L) + T(x + 2L) U(x + 2L) T^{-1}(x + 2L)
              = (dT/dx)(x) Q Q^{-1} T^{-1}(x) + T(x) Q Q^{-1} U(x) Q Q^{-1} T^{-1}(x)
              = Ũ(x).    □

From now on, let F = ℂ. Since Q and M(y) belong to G_0, which is a subgroup of G ⊂ GL(n, ℂ), by Proposition (1.2.2) there exists a matrix K(y) ∈ g such that

    K(y) = (1/2L) ln(Q M(y)).    (1.2.28)

Proposition 1.2.8. Let F(x, y) be the fundamental solution of the linear problem (1.2.1), (1.2.2) with quasiperiodic coefficient U in g ⊂ gl(n, ℂ). Then the fundamental solution can be expressed in the form

    F(x, y) = Ψ(x, y) e^{xK(y)},    (1.2.29)

where K is given by (1.2.28) and Ψ is a matrix function with the following properties:

1. Ψ(x + 2L, y) = Q^{-1} Ψ(x, y),    (1.2.30)

2. Ψ(x, y) ∈ G_0,    (1.2.31)

for all x.

Proof. Define

    Ψ(x, y) := F(x, y) e^{−xK(y)}.    (1.2.32)

Ψ(x, y) is well defined and clearly lies in G_0, because it is the product of F(x, y) and e^{−xK(y)}, both of which belong to G_0. We have to check that Ψ satisfies property (1):

    Ψ(x + 2L, y) = F(x + 2L, y) e^{−(x+2L)K(y)}
                 = Q^{-1} F(x, y) Q M(y) e^{−2LK(y)} e^{−xK(y)}
                 = Q^{-1} F(x, y) e^{−xK(y)}
                 = Q^{-1} Ψ(x, y).    □

If we take T(x) = Ψ^{-1}(x, y), it follows from the proposition above that T satisfies condition (1.2.25). Hence, by Proposition (1.2.7), T produces a change of coordinates that reduces the quasiperiodic linear system (1.2.1) to a system with periodic coefficients. Moreover, it is reduced to a system with constant coefficients, as we establish in the next result.

Theorem 1.2.9. Let K(y) and Ψ(x, y) be the matrix-valued functions given by (1.2.28) and (1.2.29), respectively. Under the change of coordinates

    F̃(x, y) = Ψ^{-1}(x, y) F(x, y) Ψ(y, y),    (1.2.33)

the quasiperiodic system (1.2.1) is reduced to the system with constant coefficients

    (d/dx) F̃(x, y) = K(y) F̃(x, y),    (1.2.34)
    F̃(x, y)|_{x=y} = I.    (1.2.35)

Proof. We proceed in the same way as in the proof of the periodic reducibility Theorem (1.2.4).    □

Let us make some observations about the results obtained in Theorem (1.2.4) and Theorem (1.2.9) for the periodic and quasiperiodic cases, respectively, in the context of Lyapunov reducibility. According to the Lyapunov criterion, we arrive at the following conclusions:

• In the periodic case, Theorem (1.2.3) states that the fundamental solution F(x, y) has the form (1.1.50) and that Ψ(x, y) is periodic. This fact implies that Ψ is bounded,

    sup_{x∈ℝ} ‖Ψ(x, y)‖ = sup_{x∈[0,2L]} ‖Ψ(x, y)‖ < ∞,    (1.2.36)

and analogously its derivative is also bounded. Hence, every periodic linear system is reducible in the sense of Lyapunov, as was stated in Theorem (1.2.4).

• In the quasiperiodic case the situation is a little different, because an additional condition is needed for a quasiperiodic system to be reducible. By Proposition (1.2.8) we find that F(x, y) can also be expressed in the form (1.1.50), and Ψ satisfies

    Ψ(x + 2L, y) = Q^{-1} Ψ(x, y).    (1.2.37)

Therefore, a quasiperiodic system is reducible if

    ‖Q‖ < 1 or ‖Q^{-1}‖ < 1.    (1.2.38)

1.2.3 Decreasing case

We shall analyze the properties of the fundamental solution F(x, y) on the whole axis −∞ < x, y < ∞, under the assumption that the entries U_ij(x) vanish as |x| → ∞. More precisely, these functions will be supposed absolutely integrable on ℝ, i.e., U_ij(x) lies in L_1(−∞, ∞). In terms of the matrix coefficient U(x) of the system

    dF/dx = U(x)F,

this is equivalent to

    ∫_{−∞}^{∞} ‖U(x)‖ dx < ∞,    (1.2.39)

where ‖·‖ is some matrix norm. In what follows, the space of n×n matrix functions satisfying (1.2.39) will be denoted by L_1^{n×n}(−∞, ∞). Under this assumption we will show the next result.

Proposition 1.2.10. There exist the limits

    F_±(x) = lim_{y→±∞} F(x, y)    (1.2.40)

for each x ∈ ℝ. Moreover, these limits satisfy the integral representations

    F_−(x) = I + ∫_{−∞}^{x} U(z) F_−(z) dz,    (1.2.41)

    F_+(x) = I − ∫_{x}^{∞} U(z) F_+(z) dz.    (1.2.42)

Proof. Recall that F(x, y) satisfies the integral equation

    F(x, y) = I + ∫_{y}^{x} U(z) F(z, y) dz;

then

    ‖F(x, y)‖ ≤ 1 + ∫_{y}^{x} ‖U(z)‖ ‖F(z, y)‖ dz,

assuming, without loss of generality, ‖I‖ = 1. Using Gronwall's inequality, we have

    ‖F(x, y)‖ ≤ exp ∫_{y}^{x} ‖U(z)‖ dz;

taking the limit as y → −∞, it follows that

    ‖F_−(x)‖ ≤ exp ∫_{−∞}^{x} ‖U(z)‖ dz < ∞,

since U(x) lies in L_1^{n×n}(ℝ); therefore F_−(x) exists for each real x. Now we prove the integral representation for F_−(x). First we show that the map U(z)F_−(z), as a function of z, belongs to L_1(−∞, x) for each fixed x. From the integral equation for the fundamental solution,

    U(x) F(x, y) = U(x) + U(x) ∫_{y}^{x} U(z) F(z, y) dz;

then

    ‖U(x) F(x, y)‖ ≤ ‖U(x)‖ + ‖U(x)‖ ∫_{y}^{x} ‖U(z) F(z, y)‖ dz;

integrating over y < s < x, we obtain

    ∫_{y}^{x} ‖U(s) F(s, y)‖ ds ≤ ∫_{y}^{x} ‖U(s)‖ ds + ∫_{y}^{x} ‖U(s)‖ ∫_{y}^{s} ‖U(z) F(z, y)‖ dz ds.

By Gronwall's inequality, it follows that

    ∫_{y}^{x} ‖U(s) F(s, y)‖ ds ≤ (∫_{y}^{x} ‖U(s)‖ ds) exp(∫_{y}^{x} ‖U(s)‖ ds)
                               ≤ (∫_{−∞}^{x} ‖U(s)‖ ds) exp(∫_{−∞}^{x} ‖U(s)‖ ds) < ∞.

From the last estimate, we conclude that

    ∫_{−∞}^{x} ‖U(z) F_−(z)‖ dz < ∞,

which implies that U(z)F_−(z) lies in L_1(−∞, x). Moreover, we can take the limit as y → −∞ in the integral equation for F(x, y) and obtain

    F_−(x) = I + ∫_{−∞}^{x} U(z) F_−(z) dz.

The existence of the limit (1.2.40) as y → +∞ is established by similar arguments. Recalling that F^{-1}(x, y) = F(y, x), we see that F_+^{-1}(x) exists and has the integral representation

    F_+^{-1}(x) = I + ∫_{x}^{∞} F_+^{-1}(z) U(z) dz.

On the other hand,

    (d/dx) F_+(x) = −F_+(x) [(d/dx) F_+^{-1}(x)] F_+(x) = U(x) F_+(x),

and the last expression is equivalent to

    F_+(x) = I − ∫_{x}^{∞} U(z) F_+(z) dz.    □

Corollary 1.2.11. The maps F_±(x) satisfy the linear problem

    (d/dx) F_±(x) = U(x) F_±(x),    (1.2.43)

with the asymptotic conditions F_±(x) → I as x → ±∞.

Proof. Differentiating the integral representations for F_±(x) with respect to x and letting x → ±∞, one easily deduces the differential equation and the asymptotic conditions.    □

Now we are ready to introduce the concept of monodromy in this case.

Definition 1.2.3. The matrix

    M = F_+^{-1}(x) F_−(x)    (1.2.44)

is the monodromy matrix in the decreasing case.

One fact that is not evident from the above definition is that M is constant for all −∞ < x < ∞. Moreover, it enjoys some useful properties, as we shall establish in the next proposition.

Proposition 1.2.12. The monodromy matrix M has the following properties:

a) M is independent of x.

b) $F(x,y) = F_+(x)\,M\,F_-^{-1}(y)$.

Proof. a)

\[
\frac{d}{dx}M = \frac{d}{dx}\big(F_+^{-1}(x)\big)F_-(x) + F_+^{-1}(x)\frac{d}{dx}\big(F_-(x)\big) = -F_+^{-1}(x)U(x)F_-(x) + F_+^{-1}(x)U(x)F_-(x) = 0.
\]

b) Considering again the fundamental solution $F(x,y)$, we only need to verify that the map $F_+(x)\,M\,F_-^{-1}(y)$ also satisfies the linear problem with the same boundary condition:

\[
\frac{d}{dx}\big(F_+(x)\,M\,F_-^{-1}(y)\big) = U(x)F_+(x)\,M\,F_-^{-1}(y). \tag{1.2.45}
\]

Now, if we put $x = y$, then

\[
F_+(y)\,M\,F_-^{-1}(y) = F_+(y)F_+^{-1}(y)F_-(y)F_-^{-1}(y) = I.
\]

As a consequence of part b) of the proposition above, we can deduce that the monodromy matrix can be computed by taking certain limits, as we shall see in the next corollary.

Corollary 1.2.13. The monodromy matrix can be expressed as

\[
M = \lim_{x\to\infty}\lim_{y\to-\infty} F(x,y). \tag{1.2.46}
\]

Proof. By part b) of Proposition (1.2.12) and Corollary (1.2.11), it follows that

\[
\lim_{x\to\infty}\lim_{y\to-\infty} F(x,y) = \lim_{x\to\infty}\lim_{y\to-\infty} F_+(x)\,M\,F_-^{-1}(y) = \lim_{x\to\infty} F_+(x)\Big(\lim_{y\to-\infty} M\,F_-^{-1}(y)\Big) = \lim_{x\to\infty} F_+(x)\,M = M.
\]

The formula (1.2.46) can be simplified if we put $x = L$ and $y = -L$:

\[
M = \lim_{L\to\infty} F(L,-L). \tag{1.2.47}
\]
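The limit (1.2.47) is easy to observe numerically. The sketch below is our own illustration, not part of the text: the coefficient $U(x) = \operatorname{sech}(x)\,\sigma_1$ and the RK4 integrator are chosen for convenience. Since here the matrices $U(x)$ commute for different $x$ and $\int_{\mathbb{R}}\operatorname{sech}(x)\,dx = \pi$, the exact limit is $M = \exp(\pi\sigma_1)$.

```python
import numpy as np

def fundamental_solution(U, y, x, steps):
    """Integrate dF/ds = U(s) F, F(y) = I, with classical RK4."""
    h = (x - y) / steps
    F = np.eye(2, dtype=complex)
    s = y
    for _ in range(steps):
        k1 = U(s) @ F
        k2 = U(s + h / 2) @ (F + h / 2 * k1)
        k3 = U(s + h / 2) @ (F + h / 2 * k2)
        k4 = U(s + h) @ (F + h * k3)
        F = F + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return F

# Integrable coefficient: U(x) = sech(x) * sigma1, so ||U(x)|| lies in L1(R).
sigma1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
U = lambda s: sigma1 / np.cosh(s)

# U(x) at different points commute, so F(L,-L) = exp(sigma1 * int_{-L}^{L} sech),
# and the infinite-period limit is M = exp(pi * sigma1).
M10 = fundamental_solution(U, -10.0, 10.0, steps=4000)
M20 = fundamental_solution(U, -20.0, 20.0, steps=8000)
M_exact = np.array([[np.cosh(np.pi), np.sinh(np.pi)],
                    [np.sinh(np.pi), np.cosh(np.pi)]])
print(np.linalg.norm(M10 - M_exact), np.linalg.norm(M20 - M_exact))
```

The error of $F(L,-L)$ decays like the tail integral $\int_{|x|>L}\|U(x)\|\,dx$, in agreement with the decreasing condition.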

The function $F(L,-L)$ on the right-hand side of (1.2.47) coincides precisely with the monodromy matrix $M(L)$ of the periodic and quasiperiodic cases. Therefore $F(L,-L)$ can play the role of the monodromy matrix for the decreasing case, regarded as the infinite-period limit, as $L \to \infty$, of the periodic monodromy matrix with the oscillating factors removed.

1.3 Analytic Properties of the Fundamental Solution

Let $G \subset GL(n,\mathbb{C})$ be a Lie subgroup and $\mathfrak{g}$ its Lie algebra. Consider the linear system
\[
\frac{df}{dx} = \big(U_0(x) + \lambda U_1(x) + \lambda^2 U_2(x) + \dots\big) f, \tag{1.3.1}
\]
where the $U_i$ are smooth functions on a compact domain $D \subset \mathbb{R}$. We assume that for $|\lambda| < r_0$ the series
\[
\sum_{k=0}^\infty \lambda^k \int_D |U_k(x)|\,dx \tag{1.3.2}
\]
is convergent.

Proposition 1.3.1. Let $F(x,y,\lambda)$ be the fundamental solution of (1.3.1). Under the assumption (1.3.2), the sequence of functions

\[
F_0(x,y,\lambda) = I, \tag{1.3.3}
\]
\[
F_k(x,y,\lambda) = \int_y^x \Big[\sum_{l=0}^\infty \lambda^l U_l(\tau)\Big] F_{k-1}(\tau,y,\lambda)\,d\tau, \qquad k = 1,2,\dots \tag{1.3.4}
\]
is well defined and converges uniformly in $x,\lambda$ to $F(x,y,\lambda)$. Moreover, if the matrix coefficient $U_0(x) + \lambda U_1(x) + \lambda^2 U_2(x) + \dots$ is an entire function of $\lambda$, then $F(x,y,\lambda)$ is an entire function of $\lambda$ for fixed $x,y$. We do not present the proof here; it can be found in (????).

1.3.1 Case $U = U_0 + \lambda U_1$

Let us consider the case where the matrix coefficient of the system (1.3.1) takes the form

\[
U(x,\lambda) = U_0(x) + \lambda U_1, \tag{1.3.5}
\]
where $U_0(x) \in \mathfrak{g}$ is a smooth function and $U_1 \in \mathfrak{g}$ is an invertible constant matrix. When the matrix coefficient is of the form (1.3.5), the linear system (1.3.1) can be analyzed as a spectral problem

\[
\mathcal{L} f = \lambda f, \tag{1.3.6}
\]
with the differential operator
\[
\mathcal{L} = U_1^{-1}\frac{d}{dx} - U_1^{-1}U_0(x). \tag{1.3.7}
\]

Integral equation

Here we derive integral equations for the fundamental solution of (1.3.1). First, we note that the linear system (1.3.1) is equivalent to
\[
F(x,y,\lambda) = I + \int_y^x U(z,\lambda)F(z,y,\lambda)\,dz. \tag{1.3.8}
\]
Let $E(x-y,\lambda)$ be the fundamental solution of the linear system
\[
\frac{d}{dx}E = \lambda U_1 E, \tag{1.3.9}
\]
\[
E(0,\lambda) = I, \qquad \text{for each } \lambda \in \mathbb{R}. \tag{1.3.10}
\]

Now let $G(x,y,\lambda)$ and $H(x,y,\lambda)$ be, respectively, the fundamental solutions of the following differential equations:
\[
\frac{d}{dx}G(x,y,\lambda) = E(y-x,\lambda)U_0(x)E(x-y,\lambda)\,G(x,y,\lambda), \qquad G(x,y,\lambda)\big|_{x=y} = I, \tag{1.3.11}
\]
and
\[
\frac{d}{dy}H(x,y,\lambda) = -H(x,y,\lambda)\,E(x-y,\lambda)U_0(y)E(y-x,\lambda), \qquad H(x,y,\lambda)\big|_{x=y} = I. \tag{1.3.12}
\]

Lemma 1.3.2. The fundamental solution $F(x,y,\lambda)$ of the linear problem (1.3.1) can be expressed as
\[
F(x,y,\lambda) = E(x-y,\lambda)\,G(x,y,\lambda) \tag{1.3.13}
\]
and
\[
F(x,y,\lambda) = H(x,y,\lambda)\,E(x-y,\lambda). \tag{1.3.14}
\]

Proof. The proof is a straightforward calculation; we just have to check that the right-hand sides of (1.3.13)-(1.3.14) satisfy the same Cauchy problem as $F(x,y,\lambda)$:

\[
\frac{d}{dx}\big(E(x-y,\lambda)G(x,y,\lambda)\big) = \lambda U_1E(x-y,\lambda)G(x,y,\lambda) + U_0(x)E(x-y,\lambda)G(x,y,\lambda) = U(x,\lambda)E(x-y,\lambda)G(x,y,\lambda).
\]
On the other hand,
\[
\frac{d}{dy}\big(H(x,y,\lambda)E(x-y,\lambda)\big) = -H(x,y,\lambda)E(x-y,\lambda)U_0(y) - H(x,y,\lambda)E(x-y,\lambda)\lambda U_1 = -H(x,y,\lambda)E(x-y,\lambda)U(y,\lambda),
\]
and this equation implies

\[
\frac{d}{dx}\big(H(x,y,\lambda)E(x-y,\lambda)\big) = U(x,\lambda)\,H(x,y,\lambda)E(x-y,\lambda), \tag{1.3.15}
\]
so (1.3.13)-(1.3.14) hold.
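The factorization (1.3.13) can be checked numerically. The sketch below is our own example (not from the text): we take $U_1 = \sigma_3/2i$, a Gaussian off-diagonal $U_0$, integrate the full problem and the gauge-transformed problem (1.3.11) with RK4, and compare $F$ with $E\,G$.

```python
import numpy as np

def solve(A, y, x, steps=2000):
    """RK4 for dF/ds = A(s) F with F(y) = I."""
    h = (x - y) / steps
    F = np.eye(2, dtype=complex)
    s = y
    for _ in range(steps):
        k1 = A(s) @ F
        k2 = A(s + h / 2) @ (F + h / 2 * k1)
        k3 = A(s + h / 2) @ (F + h / 2 * k2)
        k4 = A(s + h) @ (F + h * k3)
        F = F + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return F

lam = 2.0                                  # spectral parameter
U1 = np.diag([-0.5j, 0.5j])                # sigma3 / (2i)
# E(t, lam) = exp(lam * t * U1), diagonal and known in closed form:
E = lambda t: np.diag([np.exp(-1j * lam * t / 2), np.exp(1j * lam * t / 2)])
# a smooth sample off-diagonal coefficient U0(x):
U0 = lambda s: np.exp(-s**2) * np.array([[0, 1], [1, 0]], dtype=complex)

y, x = -1.0, 1.5
F = solve(lambda s: U0(s) + lam * U1, y, x)              # full problem (1.3.1)
G = solve(lambda s: E(y - s) @ U0(s) @ E(s - y), y, x)   # gauge-transformed problem (1.3.11)
print(np.linalg.norm(F - E(x - y) @ G))                  # should be ~ 0
```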

In the next result we establish a couple of integral representations for $F(x,y,\lambda)$ that will be useful in the subsequent sections.

Proposition 1.3.3. The fundamental solution $F(x,y,\lambda)$ of the linear problem (1.3.21)-(1.3.22) has the following integral representations:

\[
F(x,y,\lambda) = E(x-y,\lambda) + \int_y^x F(x,z,\lambda)U_0(z)E(z-y,\lambda)\,dz, \tag{1.3.16}
\]

\[
F(x,y,\lambda) = E(x-y,\lambda) + \int_y^x E(x-z,\lambda)U_0(z)F(z,y,\lambda)\,dz. \tag{1.3.17}
\]

Proof. First we note that the linear systems (1.3.11) and (1.3.12) are equivalent to the integral equations

\[
G(x,y,\lambda) = I + \int_y^x E(y-z,\lambda)U_0(z)E(z-y,\lambda)G(z,y,\lambda)\,dz, \tag{1.3.18}
\]
\[
H(x,y,\lambda) = I + \int_y^x H(x,z,\lambda)E(x-z,\lambda)U_0(z)E(z-x,\lambda)\,dz. \tag{1.3.19}
\]
It only remains to multiply (1.3.18) by $E(x-y,\lambda)$ on the left and (1.3.19) by $E(x-y,\lambda)$ on the right, and then use the identities (1.3.13), (1.3.14).

Recall that we are assuming that the matrix coefficient $U(x,\lambda)$ is defined for $x$ in a compact domain. Proposition (1.3.1) establishes that $F(x,y,\lambda)$ is analytic in $\lambda$ for each $x$. Using the integral representations (1.3.16), (1.3.17) and integration by parts, we can derive the asymptotic expansion for $F(x,y,\lambda)$:

\[
F(x,y,\lambda) = E(x-y,\lambda) + \sum_{n=1}^\infty \frac{F_n(x,y)}{\lambda^n}\,E(x-y,\lambda) + \sum_{n=1}^\infty \frac{\widetilde{F}_n(x,y)}{\lambda^n}\,E(y-x,\lambda) + O(|\lambda|^{-\infty}). \tag{1.3.20}
\]
Without loss of generality, we may assume that the results presented here remain valid under the periodic and quasiperiodic boundary conditions, because in both cases the analysis can be reduced to a fundamental domain whose size is given by the period. Therefore the monodromy matrix has, in these cases, the same analytic properties as the fundamental solution.

Decreasing case

A special treatment is needed under the decreasing condition in order to derive the analytic properties of the fundamental solution and the monodromy. We now consider the linear system
\[
\frac{dF}{dx} = U(x,\lambda)F, \tag{1.3.21}
\]
\[
F(x,y,\lambda)\big|_{x=y} = I, \tag{1.3.22}
\]
where the matrix function $U(x,\lambda)$ takes the form

\[
U(x,\lambda) = U_0(x) + \lambda U_1, \tag{1.3.23}
\]

where $\lambda$ is a real parameter, $U_0(x) \in L_1^{n\times n}(-\infty,\infty)$, and $U_1$ is a constant matrix in $\mathfrak{g}$. Before continuing, we have to check that the integral equations (1.3.16), (1.3.17), valid on a compact domain, extend to the whole real line.

Lemma 1.3.4. Let $U_0(x)$ be a function in $L_1^{n\times n}(-\infty,\infty)$. Then the map $E(y-x,\lambda)U_0(x)E(x-y,\lambda)$ also belongs to $L_1^{n\times n}(-\infty,\infty)$.

Proof. Recall that $E(x-y,\lambda)$ is the fundamental solution of the linear system (1.3.9)-(1.3.10), so it has the integral representation
\[
E(x-y,\lambda) = I + \int_y^x \lambda U_1 E(s-y,\lambda)\,ds. \tag{1.3.24}
\]

Then, by Gronwall’s inequality [11], we have

\[
\|E(x-y,\lambda)\| \le \exp\Big\{\lambda\int_y^x \|U_1\|\,ds\Big\}. \tag{1.3.25}
\]
Thus,

\[
\|E(y-x,\lambda)U_0(x)E(x-y,\lambda)\| \le \exp\Big\{\lambda\int_x^y \|U_1\|\,ds\Big\}\,\|U_0(x)\|\,\exp\Big\{\lambda\int_y^x \|U_1\|\,ds\Big\} = \exp\Big\{-\lambda\int_y^x \|U_1\|\,ds\Big\}\exp\Big\{\lambda\int_y^x \|U_1\|\,ds\Big\}\,\|U_0(x)\| \le \|U_0(x)\|;
\]
hence the map $E(y-x,\lambda)U_0(x)E(x-y,\lambda)$ is absolutely integrable on $\mathbb{R}$. Therefore the integral equations (1.3.16), (1.3.17) are also valid under the rapidly decreasing condition.

Proposition 1.3.5. Let F(x, y, λ) be the fundamental solution of the system ( 1.3.21)-( 1.3.22).

(i) The maps
\[
F_\pm(x,\lambda) = \lim_{y\to\pm\infty} F(x,y,\lambda)E(y,\lambda) \tag{1.3.26}
\]
are well defined for each $x$ and $\lambda$ in $\mathbb{R}$.

(ii) $F_\pm(x,\lambda)$ satisfy the differential equation
\[
\frac{d}{dx}F_\pm = U(x,\lambda)F_\pm, \tag{1.3.27}
\]
with the asymptotic conditions

F±(x, λ) → E(x, λ) as x → ±∞. (1.3.28)

Proof. (i) By Lemma (1.3.2), the fundamental solution $F(x,y,\lambda)$ can be written as

F(x, y, λ) = H(x, y, λ)E(x − y, λ).

Thus,
\[
F(x,y,\lambda)E(y,\lambda) = H(x,y,\lambda)E(x,\lambda).
\]
Since the function $H(x,y,\lambda)$ is the solution of a system, (1.3.12), whose coefficients are in $L_1^{n\times n}$, it follows from Proposition (1.2.10) that the limits

\[
H_\pm(x,\lambda) = \lim_{y\to\pm\infty} H(x,y,\lambda)
\]

exist. Therefore

\[
F_\pm(x,\lambda) = \lim_{y\to\pm\infty} H(x,y,\lambda)E(x,\lambda) = H_\pm(x,\lambda)E(x,\lambda)
\]

is well defined.

(ii) The differential equation (1.3.27) and the asymptotic conditions (1.3.28) follow by the same arguments as in Corollary (1.2.11), using the integral representations of $F_\pm(x,\lambda)$.

Definition 1.3.1. The monodromy matrix for the decreasing case with spectral parameter $\lambda$ is defined by
\[
M(\lambda) = F_+^{-1}(x,\lambda)F_-(x,\lambda). \tag{1.3.29}
\]
Proposition 1.3.6. The monodromy matrix $M(\lambda)$ has the properties:

a) M(λ) is independent of x.

b) $F(x,y,\lambda) = F_+(x,\lambda)\,M(\lambda)\,F_-^{-1}(y,\lambda)$.

1.4 The Time Evolution of the Monodromy Matrix.

Let $\mathfrak{g} \subset \mathfrak{gl}(n,\mathbb{C})$ be a Lie subalgebra. We shall assume that the matrix functions
\[
(x,t) \mapsto U(x,t) \in \mathfrak{g}, \qquad (x,t) \mapsto V(x,t) \in \mathfrak{g},
\]

are given and satisfy the zero curvature equation
\[
\frac{\partial U}{\partial t} - \frac{\partial V}{\partial x} + [U, V] = 0. \tag{1.4.1}
\]
For every fixed $t$, let $F(x,y,t)$ be the fundamental solution of the linear problem associated with $U(x,t)$:
\[
\frac{d}{dx}F(x,y,t) = U(x,t)F(x,y,t), \tag{1.4.2}
\]
\[
F(x,y,t)\big|_{x=y} = I. \tag{1.4.3}
\]

We have the following basic lemma.

Lemma 1.4.1. If $F(x,y,t)$ is the solution of the system (1.4.2), (1.4.3), then
\[
\frac{\partial}{\partial t}F(x,y,t) = V(x,t)F(x,y,t) - F(x,y,t)V(y,t). \tag{1.4.4}
\]
Proof. By a straightforward computation we derive

\[
\frac{\partial^2 F}{\partial x\,\partial t} = \frac{\partial U}{\partial t}F + U\frac{\partial F}{\partial t} = \frac{\partial V}{\partial x}F + VUF - UVF + U\frac{\partial F}{\partial t},
\]
where we used that $U$ and $V$ satisfy the zero curvature equation. Next,

\[
\frac{\partial^2 F}{\partial x\,\partial t} = \frac{\partial}{\partial x}(VF) + U\Big(\frac{\partial F}{\partial t} - VF\Big),
\]
and hence
\[
\frac{\partial}{\partial x}\Big(\frac{\partial F}{\partial t} - VF\Big) = U\Big(\frac{\partial F}{\partial t} - VF\Big).
\]

So $\frac{\partial F}{\partial t} - VF$ is also a solution of equation (1.4.2). This implies the existence of a matrix $C$, independent of $x$, such that
\[
\frac{\partial F}{\partial t} - VF = FC;
\]
putting $x = y$, we obtain $C(y,t) = -V(y,t)$.

Now, let us think of the variable t as time and study the evolution of the monodromy matrix.

Proposition 1.4.2. Assume that the matrix maps $U(x,t)$ and $V(x,t)$ satisfy the zero curvature equation.

i) Let U(x, t) and V(x, t) be quasi-periodic, i.e.,

U(x + 2L, t) = Q−1U(x, t)Q, V(x + 2L, t) = Q−1V(x, t)Q,

then the monodromy matrix $M(y,t)$ satisfies the Lax equation
\[
\frac{d}{dt}(MQ) = [V(L,t), MQ]. \tag{1.4.5}
\]

ii) If for every fixed $t$ the map $U(x,t)$ lies in $L_1^{n\times n}(-\infty,\infty)$ and

\[
\lim_{x\to\pm\infty} V(x,t) = V_0,
\]

where $V_0$ is a constant matrix, then
\[
\frac{dM}{dt} = [V_0, M]. \tag{1.4.6}
\]

Proof. From the basic Lemma (1.4.1), putting $x = L$ and $y = -L$,
\[
\frac{\partial}{\partial t}F(L,-L,t) = V(L,t)F(L,-L,t) - F(L,-L,t)V(-L,t). \tag{1.4.7}
\]
Now we prove each item.

i) Recall that in the quasiperiodic case the monodromy matrix is
\[
M_L(t) = F(L,-L,t);
\]

multiplying equation (1.4.7) on the right by $Q$ and using the identity $QV(L,t) = V(-L,t)Q$, we obtain the desired result.

ii) In this case, as we proved before,
\[
M(t) = \lim_{L\to\infty} F(L,-L,t);
\]
we only have to take the limit as $L \to \infty$ in equation (1.4.7).

As a direct consequence of the proposition above, we obtain a useful characterization of the monodromy matrix. First, we recall a basic and important result, known as Liouville's formula.

Lemma 1.4.3. Let $A(t)$ be a smooth matrix function with values in $GL(n,\mathbb{C})$. The derivative of the determinant $\det A(t)$ is given by the formula

\[
\frac{1}{\det A(t)}\frac{d}{dt}\det A(t) = \operatorname{tr}\Big(A^{-1}(t)\frac{d}{dt}A(t)\Big). \tag{1.4.8}
\]
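Liouville's formula is easy to test numerically with a finite difference; the curve $A(t)$ below is our own arbitrary example, not from the text.

```python
import numpy as np

def A(t):
    # an arbitrary smooth curve of invertible matrices (illustrative choice)
    return np.array([[np.cos(t), t], [-1.0, np.exp(t)]])

def dA(t):
    # its derivative, computed by hand
    return np.array([[-np.sin(t), 1.0], [0.0, np.exp(t)]])

t, h = 0.3, 1e-6
# left-hand side of (1.4.8) via a central difference of det A(t)
lhs = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h) / np.linalg.det(A(t))
# right-hand side: tr(A^{-1} A')
rhs = np.trace(np.linalg.inv(A(t)) @ dA(t))
print(lhs, rhs)
```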

Theorem 1.4.4. Let F(x, y) be the fundamental solution of ( 1.1.1), and let M(t) be the monodromy matrix given by:

• $M(y) = F(2L+y, y)$ in the quasiperiodic case, and

• $M(\lambda) = F_+^{-1}(x,\lambda)F_-(x,\lambda)$ in the decreasing case.

In both cases tr(M(t)) and det(M(t)) are independent of the variable t.

Proof. An important fact, essential in this proof, is that the trace and the derivative commute as linear operators; i.e., for any smooth matrix function $A(t)$ we have
\[
\operatorname{tr}\Big(\frac{d}{dt}A(t)\Big) = \frac{d}{dt}\big(\operatorname{tr}A(t)\big).
\]

In the decreasing case, Proposition 1.4.2 states that the monodromy matrix $M(t)$ satisfies the Lax equation (1.4.6); computing the trace on both sides of that equation, we obtain
\[
\frac{d}{dt}\big(\operatorname{tr}M(t)\big) = \operatorname{tr}\Big(\frac{d}{dt}M(t)\Big) = \operatorname{tr}[V_0, M] = 0.
\]

The Lax equation (1.4.5) for the quasiperiodic case implies
\[
\frac{d}{dt}M = VM - MQVQ^{-1}. \tag{1.4.9}
\]
Then
\[
\frac{d}{dt}\big(\operatorname{tr}M(t)\big) = \operatorname{tr}\big(VM - MQVQ^{-1}\big) = \operatorname{tr}(VM) - \operatorname{tr}\big(MQVQ^{-1}\big) = \operatorname{tr}(VM) - \operatorname{tr}(VM) = 0.
\]

Since in both cases the derivative of the trace equals zero, it follows that the trace of the monodromy matrix is time independent.

Now we arrive at the same conclusion for the determinant of the monodromy matrix. From equation (1.4.9), for the quasiperiodic case, and the Liouville formula (1.4.8), we have
\[
\frac{1}{\det M}\frac{d}{dt}(\det M) = \operatorname{tr}\Big(M^{-1}\frac{d}{dt}M(t)\Big) = \operatorname{tr}\big(M^{-1}VM - QVQ^{-1}\big) = \operatorname{tr}\big(M^{-1}VM\big) - \operatorname{tr}\big(QVQ^{-1}\big) = 0.
\]

On the other hand, in the decreasing case, it follows from the Lax equation (1.4.6) and again the Liouville formula (1.4.8) that
\[
\frac{1}{\det M}\frac{d}{dt}(\det M) = \operatorname{tr}\Big(M^{-1}\frac{d}{dt}M(t)\Big) = \operatorname{tr}\big(M^{-1}V_0M - V_0\big) = \operatorname{tr}\big(M^{-1}V_0M\big) - \operatorname{tr}(V_0) = 0.
\]

As we can see, in both cases we conclude that
\[
\frac{d}{dt}(\det M) = 0;
\]
therefore $\det M$ does not depend on the variable $t$.

Chapter 2

Linear problem in sl(2, C)

In this Chapter we apply the general results obtained in Chapter 1 to the study of linear problems on the Lie algebra sl(2, C) possessing the involution property. Such a class of linear problems plays an important role in the integrability theory for nonlinear evolution equations.

2.1 The Lie algebra sl(2, C)

Let us consider the Lie algebra sl(2, C) ⊂ gl(2, C) consisting of all 2 × 2 matrices with zero trace,

sl(2, C) = {A ∈ gl(2, C)| tr A = 0} . (2.1.1)

Each element of sl(2, C) can be written as a linear combination in some basis of sl(2, C) [5]. The Pauli matrices defined by

\[
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \tag{2.1.2}
\]
form a basis of $sl(2,\mathbb{C})$. Consider also the matrices

\[
\sigma_+ = \frac{\sigma_1 + i\sigma_2}{2} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad \sigma_- = \frac{\sigma_1 - i\sigma_2}{2} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}. \tag{2.1.3}
\]

It is clear that $\{\sigma_+, \sigma_-, \sigma_3\}$ is also a basis of $sl(2,\mathbb{C})$. We need the following properties of the Pauli matrices:

(a) Involutive property:

σi · σi = I, for all i = 1, 2, 3. (2.1.4)

(b) Commutation relations:

[σ1, σ2] = 2iσ3,

[σ2, σ3] = 2iσ1, (2.1.5)

[σ3, σ1] = 2iσ2.

(c) Anticommutation relations:

\[
[\sigma_i, \sigma_j]_+ := \sigma_i\sigma_j + \sigma_j\sigma_i = 0, \quad \text{for all } i \ne j,\ i,j = 1,2,3. \tag{2.1.6}
\]


(d) Hermitian property:
\[
\sigma_i^* = \sigma_i, \quad \text{for } i = 1,2,3. \tag{2.1.7}
\]
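All four properties are immediate to verify by direct computation; the following numpy sketch (our own illustration, not part of the text) checks (2.1.4)-(2.1.7).

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigmas = [s1, s2, s3]

# (a) involutive property: sigma_i^2 = I
assert all(np.allclose(s @ s, I2) for s in sigmas)
# (b) commutation relations [s1,s2] = 2i s3 and cyclic permutations
assert np.allclose(s1 @ s2 - s2 @ s1, 2j * s3)
assert np.allclose(s2 @ s3 - s3 @ s2, 2j * s1)
assert np.allclose(s3 @ s1 - s1 @ s3, 2j * s2)
# (c) anticommutators vanish for i != j
for i in range(3):
    for j in range(3):
        if i != j:
            assert np.allclose(sigmas[i] @ sigmas[j] + sigmas[j] @ sigmas[i], 0 * I2)
# (d) hermiticity: sigma_i^* = sigma_i
assert all(np.allclose(s.conj().T, s) for s in sigmas)
print("Pauli matrix properties (2.1.4)-(2.1.7) verified")
```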

In terms of the basis $\{\sigma_+, \sigma_-, \sigma_3\}$ these properties are written as

• Commutation relations:

[σ+, σ−] = σ3,

[σ−, σ3] = 2σ−, (2.1.8)

[σ3, σ+] = 2σ+.

• Anticommutation relations:

[σ+, σ−]+ = I,

[σ−, σ3]+ = [σ3, σ+]+ = 0, (2.1.9)

• $\sigma_+$ and $\sigma_-$ are Hermitian conjugates of each other,
\[
\sigma_+^* = \sigma_-. \tag{2.1.10}
\]
Moreover, we have the following algebraic identities:

σ1 · σ3 · σ1 = −σ3,

σ1 · σ+ · σ1 = σ−, (2.1.11)

σ1 · σ− · σ1 = σ+, and

σ2 · σ3 · σ2 = −σ3,

σ2 · σ+ · σ2 = −σ−, (2.1.12)

σ2 · σ− · σ2 = −σ+.

As a consequence of these identities, we also deduce that for every $X \in sl(2,\mathbb{C})$,

\[
\sigma_i\overline{X}\sigma_i \in sl(2,\mathbb{C}), \quad \text{for } i = 1,2,3. \tag{2.1.13}
\]
Consider now the Lie group

\[
GL(2,\mathbb{C}) = \{X \in M(2,\mathbb{C}) \mid \det X \ne 0\}. \tag{2.1.14}
\]

Denote by $SL(2,\mathbb{C})$ the special linear group consisting of all matrices with determinant equal to 1,
\[
SL(2,\mathbb{C}) = \{A \in GL(2,\mathbb{C}) \mid \det A = 1\}; \tag{2.1.15}
\]
$SL(2,\mathbb{C})$ is a connected Lie subgroup of $GL(2,\mathbb{C})$ [6]. Next we prove that the Lie algebra of $SL(2,\mathbb{C})$ is precisely $sl(2,\mathbb{C})$.

Lemma 2.1.1. For any $X \in \mathfrak{gl}(2,\mathbb{C})$, we have
\[
\det e^X = e^{\operatorname{tr}(X)}. \tag{2.1.16}
\]

Proof. For diagonalizable $X = P\,\mathrm{diag}(\lambda_1,\lambda_2)\,P^{-1}$ we have $\det e^X = e^{\lambda_1}e^{\lambda_2} = e^{\operatorname{tr}X}$; since both sides are continuous in $X$ and the diagonalizable matrices are dense, the identity holds in general.

Proposition 2.1.2. Consider the Lie subgroup $SL(2,\mathbb{C})$. If $\mathfrak{g} \subset \mathfrak{gl}(2,\mathbb{C})$ is its Lie algebra, then
\[
\mathfrak{g} = sl(2,\mathbb{C}). \tag{2.1.17}
\]
Proof. $\mathfrak{g}$ is the matrix Lie algebra of a Lie group $G \subset GL(n,\mathbb{F})$ if for each $X \in \mathfrak{g}$ we have
\[
\exp(X) \in G. \tag{2.1.18}
\]
Let $X \in sl(2,\mathbb{C})$; then by Lemma (2.1.1) we have
\[
\det\exp(X) = \exp(\operatorname{tr}X) = 1, \tag{2.1.19}
\]
hence $\exp(X) \in SL(2,\mathbb{C})$. Since $sl(2,\mathbb{C})$ is a complex Lie algebra, by Proposition (1.2.2) we get
\[
\exp(sl(2,\mathbb{C})) = G^0. \tag{2.1.20}
\]
Here $G^0$ is the maximal connected Lie subgroup whose Lie algebra is $sl(2,\mathbb{C})$; thus it follows that $G^0 \subset SL(2,\mathbb{C})$.
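Lemma (2.1.1) can also be illustrated numerically. In the sketch below (our own, with the matrix exponential computed by a truncated Taylor series, which is adequate for matrices of small norm), a traceless $X$ gives $\det e^X = 1$, while a matrix with trace $0.7$ gives $\det e^Y = e^{0.7}$.

```python
import numpy as np

def expm_series(X, terms=40):
    """Matrix exponential via its Taylor series (fine for small ||X||)."""
    out = np.eye(X.shape[0], dtype=complex)
    term = np.eye(X.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

X = np.array([[0.3, -0.2 + 0.1j], [0.5j, -0.3]])       # traceless: det e^X should be 1
Y = np.array([[0.2, 0.1], [0.0, 0.5]], dtype=complex)  # tr Y = 0.7
print(np.linalg.det(expm_series(X)), np.linalg.det(expm_series(Y)))
```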

2.1.1 Involution Relations

Here, following [7], we introduce the involution relation for elements of $sl(2,\mathbb{C})$. Given $\xi, \eta, \lambda \in \mathbb{C}$, consider the matrix function $\mathbb{C} \ni \lambda \mapsto H(\lambda) \in sl(2,\mathbb{C})$ defined by
\[
H(\lambda) \stackrel{\mathrm{def}}{=} \xi\sigma_+ + \eta\sigma_- + \frac{\lambda}{2i}\sigma_3. \tag{2.1.21}
\]
Definition 2.1.1. We say that $H(\lambda)$ satisfies the involution property with respect to $\sigma = \sigma_i$ ($i = 1, 2$) if

\[
\sigma\,\overline{H(\bar\lambda)}\,\sigma = H(\lambda), \quad \forall \lambda \in \mathbb{C}. \tag{2.1.22}
\]

Observe that by property (2.1.13), we have $\sigma_i\,\overline{H(\bar\lambda)}\,\sigma_i \in sl(2,\mathbb{C})$ for $i = 1, 2$.

Proposition 2.1.3. Condition (2.1.22) holds if and only if $\xi = \varepsilon\bar\eta$, that is,
\[
H(\lambda) = \begin{pmatrix} \frac{\lambda}{2i} & \varepsilon\bar\eta \\ \eta & -\frac{\lambda}{2i} \end{pmatrix}, \tag{2.1.23}
\]
where $\lambda$ and $\eta$ are arbitrary complex numbers and
\[
\varepsilon = \begin{cases} 1 & \text{if } \sigma = \sigma_1 \\ -1 & \text{if } \sigma = \sigma_2 \end{cases} \tag{2.1.24}
\]
Proof. By (2.1.21) we get
\[
\overline{H(\bar\lambda)} = \bar\xi\sigma_+ + \bar\eta\sigma_- - \frac{\lambda}{2i}\sigma_3. \tag{2.1.25}
\]
It follows from here that, letting $\sigma = \sigma_1$, condition (2.1.22) is written as follows:
\[
\sigma_1\,\overline{H(\bar\lambda)}\,\sigma_1 = \bar\eta\sigma_+ + \bar\xi\sigma_- + \frac{\lambda}{2i}\sigma_3 = H(\lambda). \tag{2.1.26}
\]

Thus we get $\xi = \bar\eta$. Now, for $\sigma = \sigma_2$, condition (2.1.22) reduces to
\[
\sigma_2\,\overline{H(\bar\lambda)}\,\sigma_2 = -\bar\eta\sigma_+ - \bar\xi\sigma_- + \frac{\lambda}{2i}\sigma_3 = H(\lambda), \tag{2.1.27}
\]
and then we obtain $\xi = -\bar\eta$.

The involution property can be extended to elements of the Lie group $SL(2,\mathbb{C})$. Let $H(\lambda)$ be the function defined by (2.1.21). Define

\[
A(\lambda) \stackrel{\mathrm{def}}{=} \exp H(\lambda) \in SL(2,\mathbb{C}). \tag{2.1.28}
\]

Proposition 2.1.4. If $H(\lambda)$ satisfies (2.1.22), then we get
\[
\sigma\,\overline{A(\bar\lambda)}\,\sigma = A(\lambda), \quad \forall \lambda \in \mathbb{C}. \tag{2.1.29}
\]

Proof.

\[
\sigma\,\overline{A(\bar\lambda)}\,\sigma = \sigma\,\exp\big(\overline{H(\bar\lambda)}\big)\,\sigma \tag{2.1.30}
\]
\[
= \exp\big(\sigma\,\overline{H(\bar\lambda)}\,\sigma\big) \tag{2.1.31}
\]
\[
= \exp\big(H(\lambda)\big) = A(\lambda). \tag{2.1.32}
\]

We can formulate a result analogous to Proposition (2.1.3) for matrices in the group $SL(2,\mathbb{C})$ with the involution relation.

Proposition 2.1.5. Let $\lambda \mapsto A(\lambda) \in SL(2,\mathbb{C})$ be a function of $\lambda$ satisfying the involution property (2.1.29). Then
\[
A(\lambda) = \begin{pmatrix} a(\lambda) & \varepsilon\,\overline{b(\bar\lambda)} \\ b(\lambda) & \overline{a(\bar\lambda)} \end{pmatrix}, \tag{2.1.33}
\]
where $\varepsilon$ is defined by (2.1.24).

Proof. Let
\[
A(\lambda) = \begin{pmatrix} a(\lambda) & c(\lambda) \\ b(\lambda) & d(\lambda) \end{pmatrix};
\]
then
\[
\overline{A(\bar\lambda)} = \begin{pmatrix} \overline{a(\bar\lambda)} & \overline{c(\bar\lambda)} \\ \overline{b(\bar\lambda)} & \overline{d(\bar\lambda)} \end{pmatrix}. \tag{2.1.34}
\]

We compute
\[
\sigma_1\,\overline{A(\bar\lambda)}\,\sigma_1 = \begin{pmatrix} \overline{d(\bar\lambda)} & \overline{b(\bar\lambda)} \\ \overline{c(\bar\lambda)} & \overline{a(\bar\lambda)} \end{pmatrix} \tag{2.1.35}
\]
and
\[
\sigma_2\,\overline{A(\bar\lambda)}\,\sigma_2 = \begin{pmatrix} \overline{d(\bar\lambda)} & -\overline{b(\bar\lambda)} \\ -\overline{c(\bar\lambda)} & \overline{a(\bar\lambda)} \end{pmatrix}. \tag{2.1.36}
\]
Comparing the involution condition $\sigma\,\overline{A(\bar\lambda)}\,\sigma = A(\lambda)$ entrywise with (2.1.35) and (2.1.36), we obtain

\[
d(\lambda) = \overline{a(\bar\lambda)}, \qquad c(\lambda) = \varepsilon\,\overline{b(\bar\lambda)}.
\]

2.2 Linear Problem in sl(2, C) with Involution Property Let us consider the following linear problem,

\[
\frac{df}{dx} = U(x,\lambda)f, \tag{2.2.1}
\]
where $U(x,\lambda)$ is a matrix function of the form

\[
U(x,\lambda) = \sqrt{\kappa}\,\overline{\psi(x)}\,\sigma_+ + \sqrt{\kappa}\,\psi(x)\,\sigma_- + \frac{\lambda}{2i}\sigma_3, \tag{2.2.2}
\]
where $\psi$ is a function of $x \in \mathbb{R}$, $\lambda$ is a complex parameter, and $\kappa \in \mathbb{R}$. We have $\operatorname{tr}U(x,\lambda) = 0$ and hence $U(x,\lambda) \in sl(2,\mathbb{C})$. It is clear that $U(x,\lambda)$ has the representation (2.1.23), with $\eta = \sqrt{\kappa}\,\psi$ and $\varepsilon = \operatorname{sign}(\kappa)$. Then by Proposition (2.1.3) the matrix function $U$ satisfies the involution property
\[
\sigma\,\overline{U(x,\bar\lambda)}\,\sigma = U(x,\lambda), \tag{2.2.3}
\]
where $\sigma = \sigma_1$ if $\kappa > 0$ and $\sigma = \sigma_2$ if $\kappa < 0$. We shall assume that $\psi$ is a $C^\infty$ function of $x$, bounded on $\mathbb{R}$. Then the fundamental solution $T(x,y,\lambda)$ of (2.2.1),

\[
\frac{d}{dx}T = U(x,\lambda)T, \tag{2.2.4}
\]
\[
T\big|_{x=y} = I, \tag{2.2.5}
\]
is well defined for all $(x,y) \in \mathbb{R}^2$. Proposition (1.1.1) implies that $T(x,y,\lambda)$ lies in $SL(2,\mathbb{C})$.

Proposition 2.2.1. The fundamental solution $T(x,y,\lambda)$ of the linear problem (2.2.4) satisfies the involution property
\[
\sigma\,\overline{T(x,y,\bar\lambda)}\,\sigma = T(x,y,\lambda). \tag{2.2.6}
\]

Proof. Condition (2.2.6) can be written as follows:
\[
T(x,y,\lambda) = \sigma\,\overline{T(x,y,\bar\lambda)}\,\sigma. \tag{2.2.7}
\]

We only need to prove that the expression on the right-hand side of (2.2.7) is also the fundamental solution. Using (2.1.4), we derive

\[
\frac{d}{dx}\Big(\sigma\,\overline{T(x,y,\bar\lambda)}\,\sigma\Big) = \sigma\,\overline{\frac{d}{dx}T(x,y,\bar\lambda)}\,\sigma = \sigma\,\overline{U(x,\bar\lambda)T(x,y,\bar\lambda)}\,\sigma = \sigma\,\overline{U(x,\bar\lambda)}\,\sigma\cdot\sigma\,\overline{T(x,y,\bar\lambda)}\,\sigma = U(x,\lambda)\,\sigma\,\overline{T(x,y,\bar\lambda)}\,\sigma.
\]

Taking into account that $\sigma\,\overline{T(x,x,\bar\lambda)}\,\sigma = \sigma I\sigma = I$, by definition of the fundamental solution we get (2.2.7).

2.2.1 Quasi-periodic case.

Assume that $U$ is quasi-periodic, i.e., there exist a real number $L > 0$ and a matrix $Q \in SL(2,\mathbb{C})$ such that $U(x+2L,\lambda) = Q^{-1}U(x,\lambda)Q$. Since the monodromy matrix is defined by

M(λ) = T(y + 2L, y, λ), (2.2.8)

it follows immediately from Proposition (2.2.1) that the monodromy matrix satisfies the involution property.

Proposition 2.2.2. The monodromy matrix takes the form

\[
M(\lambda) = \begin{pmatrix} a(\lambda) & \varepsilon\,\overline{b(\bar\lambda)} \\ b(\lambda) & \overline{a(\bar\lambda)} \end{pmatrix}, \tag{2.2.9}
\]

where
\[
\varepsilon = \begin{cases} 1 & \text{if } \kappa > 0 \\ -1 & \text{if } \kappa < 0 \end{cases} \tag{2.2.10}
\]

Proof. This result follows immediately from Proposition (2.1.5), because the monodromy matrix lies in the group $SL(2,\mathbb{C})$ for each $\lambda \in \mathbb{C}$ and satisfies the involution property.

The complex functions $a(\lambda)$ and $b(\lambda)$ are called the transition coefficients [7]. Since $M(\lambda) \in SL(2,\mathbb{C})$, for each real $\lambda$ the transition coefficients satisfy the normalization condition

\[
|a(\lambda)|^2 - \varepsilon|b(\lambda)|^2 = 1. \tag{2.2.11}
\]
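The form (2.2.9) and the normalization (2.2.11) can be observed numerically. In the sketch below (our own example, not from the text: $\kappa = 1$, a real $2L$-periodic $\psi$ so that $\theta = 0$ and $Q = I$, real $\lambda$, and an RK4 integrator), the monodromy matrix $M(\lambda) = T(L,-L,\lambda)$ indeed has the involutive form with $|a|^2 - |b|^2 = 1$.

```python
import numpy as np

kappa, lam, L = 1.0, 1.5, 1.0
psi = lambda x: 0.3 * np.cos(np.pi * x)   # real, 2L-periodic sample potential

def U(x):
    # U(x, lam) for kappa > 0: [[-i lam/2, sqrt(k) conj(psi)], [sqrt(k) psi, i lam/2]]
    return np.array([[-0.5j * lam, np.sqrt(kappa) * np.conj(psi(x))],
                     [np.sqrt(kappa) * psi(x), 0.5j * lam]])

# monodromy M(lam) = T(y + 2L, y, lam) with y = -L, computed by RK4
steps = 4000
h, F, s = 2 * L / steps, np.eye(2, dtype=complex), -L
for _ in range(steps):
    k1 = U(s) @ F
    k2 = U(s + h / 2) @ (F + h / 2 * k1)
    k3 = U(s + h / 2) @ (F + h / 2 * k2)
    k4 = U(s + h) @ (F + h * k3)
    F = F + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    s += h

a, b = F[0, 0], F[1, 0]
print(abs(a) ** 2 - abs(b) ** 2)   # normalization; here kappa > 0, so eps = +1
```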

Riccati Equations and Asymptotic Series

Now we consider the linear problem (2.2.4)-(2.2.5). Observe that $U(x,\lambda)$ in (2.2.2) can be represented as follows:
\[
U(x,\lambda) = U_0(x) + \frac{\lambda}{2i}\sigma_3,
\]
with
\[
U_0(x) = \sqrt{\kappa}\begin{pmatrix} 0 & \overline{\psi(x)} \\ \psi(x) & 0 \end{pmatrix}.
\]
We assume that $U$ satisfies the quasiperiodicity condition (2.2.8) with

\[
Q = \begin{pmatrix} e^{\frac{i\theta}{2}} & 0 \\ 0 & e^{-\frac{i\theta}{2}} \end{pmatrix} \in SL(2,\mathbb{C}), \tag{2.2.12}
\]
where $\theta \in \mathbb{R}$ is a constant.

Proposition 2.2.3. The quasiperiodic condition (2.2.8) for the matrix coefficient $U(x,\lambda)$, with $Q$ as in (2.2.12), is equivalent to the following condition on $\psi$:

\[
\psi(x+2L) = \psi(x)e^{i\theta}. \tag{2.2.13}
\]

Proof. Since $\sigma_3$ and $Q$ commute,
\[
Q^{-1}U(x+2L,\lambda)Q = Q^{-1}\Big(U_0(x+2L) + \frac{\lambda}{2i}\sigma_3\Big)Q = Q^{-1}U_0(x+2L)Q + \frac{\lambda}{2i}\sigma_3,
\]
and
\[
Q^{-1}U_0(x+2L)Q = \sqrt{\kappa}\begin{pmatrix} e^{-\frac{i\theta}{2}} & 0 \\ 0 & e^{\frac{i\theta}{2}} \end{pmatrix}\begin{pmatrix} 0 & \overline{\psi(x+2L)} \\ \psi(x+2L) & 0 \end{pmatrix}\begin{pmatrix} e^{\frac{i\theta}{2}} & 0 \\ 0 & e^{-\frac{i\theta}{2}} \end{pmatrix} = \sqrt{\kappa}\begin{pmatrix} 0 & \overline{\psi(x+2L)}\,e^{-i\theta} \\ \psi(x+2L)\,e^{i\theta} & 0 \end{pmatrix}.
\]

Theorem 2.2.4. The fundamental solution $T(x,y,\lambda)$ of the system (2.2.4)-(2.2.5) has the following representation:

\[
T(x,y,\lambda) = (I + W(x,\lambda))\cdot\exp(Z(x,y,\lambda))\cdot(I + W(y,\lambda))^{-1}. \tag{2.2.14}
\]
Here:

• $W$ is an anti-diagonal matrix which satisfies the Riccati equation
\[
\frac{dW}{dx} + i\lambda\sigma_3W + WU_0W - U_0 = 0. \tag{2.2.15}
\]

• $Z$ is a diagonal matrix such that
\[
Z(x,y,\lambda) = \frac{(x-y)\lambda}{2i}\sigma_3 + \int_y^x U_0(z)W(z,\lambda)\,dz, \tag{2.2.16}
\]
\[
Z(x,x,\lambda) = 0. \tag{2.2.17}
\]

• $W$ and $Z$ have the following asymptotic representations as $|\lambda| \to \infty$:

\[
W(x,\lambda) = \sum_{n=1}^\infty \frac{W_n(x)}{\lambda^n} + O(|\lambda|^{-\infty}), \tag{2.2.18}
\]

\[
Z(x,y,\lambda) = \frac{(x-y)\lambda}{2i}\sigma_3 + \sum_{n=1}^\infty \frac{Z_n(x,y)}{\lambda^n} + O(|\lambda|^{-\infty}). \tag{2.2.19}
\]
Proof. Substituting the expression (2.2.14) into (2.2.4), we get
\[
\frac{dT}{dx} = \Big(\frac{dW}{dx}\exp(Z) + (I+W)\exp(Z)\frac{\partial Z}{\partial x}\Big)(I + W(y,\lambda))^{-1}. \tag{2.2.20}
\]
On the other hand, we have

\[
UT = (U_0 + \lambda U_1)(I + W(x,\lambda))\cdot\exp(Z(x,y,\lambda))\cdot(I + W(y,\lambda))^{-1}; \tag{2.2.21}
\]
comparing (2.2.20) with (2.2.21), the matrix $(I + W(y,\lambda))^{-1}$, which is $x$-independent, cancels, and it is possible to split the result into diagonal and anti-diagonal parts. Thus we obtain
\[
\frac{dW}{dx} + W\frac{\partial Z}{\partial x} = U_0 + \lambda U_1W, \tag{2.2.22}
\]
\[
\frac{\partial Z}{\partial x} = U_0W + \lambda U_1. \tag{2.2.23}
\]

Substituting (2.2.23) into (2.2.22) and using the fact that $U_1$ anticommutes with $W$, we obtain the Riccati-type equation (2.2.15). Now note that the boundary condition (2.2.5) implies $Z(x,x,\lambda) = 0$, and therefore equation (2.2.23) can be easily solved, giving (2.2.16). Now let us suppose that
\[
W(x,\lambda) = \sum_{n=1}^\infty \frac{W_n(x)}{\lambda^n}
\]
satisfies the Riccati equation (2.2.15). Then we have

\[
\sum_{n=1}^\infty \frac{1}{\lambda^n}\frac{dW_n}{dx} + i\sigma_3\sum_{k=1}^\infty \frac{W_k}{\lambda^{k-1}} + \Big(\sum_{m=1}^\infty \frac{W_m(x)}{\lambda^m}\Big)U_0\Big(\sum_{s=1}^\infty \frac{W_s(x)}{\lambda^s}\Big) - U_0 = 0. \tag{2.2.24}
\]

Note that we can simplify the third term on the left-hand side of (2.2.24):

\[
\Big(\sum_{m=1}^\infty \frac{W_m(x)}{\lambda^m}\Big)U_0\Big(\sum_{s=1}^\infty \frac{W_s(x)}{\lambda^s}\Big) = \sum_{m=1}^\infty\sum_{s=1}^\infty \frac{W_mU_0W_s}{\lambda^{m+s}} = \sum_{r=1}^\infty \frac{1}{\lambda^{r+1}}\sum_{s=1}^{r} W_{r+1-s}U_0W_s,
\]
where $r = m+s-1$. Thus we can rewrite equation (2.2.24) as follows:

\[
\big(i\sigma_3W_1 - U_0\big) + \frac{1}{\lambda}\Big(\frac{dW_1}{dx} + i\sigma_3W_2\Big) + \sum_{n=2}^\infty \frac{1}{\lambda^n}\Big(\frac{dW_n}{dx} + i\sigma_3W_{n+1} + \sum_{k=1}^{n-1} W_kU_0W_{n-k}\Big) = 0.
\]

Since this equation holds for all $\lambda \in \mathbb{R}\setminus\{0\}$, each coefficient must vanish, and we obtain the following set of recursion relations:

\[
W_1(x) = -i\sigma_3U_0(x),
\]
\[
W_2(x) = i\sigma_3\frac{dW_1(x)}{dx},
\]
\[
W_{n+1}(x) = i\sigma_3\Big(\frac{dW_n(x)}{dx} + \sum_{k=1}^{n-1} W_k(x)U_0(x)W_{n-k}(x)\Big).
\]

Therefore the coefficients of the asymptotic series for $W(x,\lambda)$ can be expressed, locally, in terms of $U_0$ and its derivatives. Finally, since the map $Z(x,y,\lambda)$ satisfies equation (2.2.16), which depends on $W(x,\lambda)$, the existence of its asymptotic expansion follows.

Proposition 2.2.5. The anti-diagonal matrix map $W(x,\lambda)$, which appears as a factor in the fundamental solution $T(x,y,\lambda)$, has the following properties:

i) Involution property: $W(x,\lambda) = \sigma\,\overline{W(x,\bar\lambda)}\,\sigma$.

ii) Quasi-periodicity, W(x + 2L, λ) = Q−1W(x, λ)Q.

Proof. i) Using the relations (2.1.11)-(2.1.12), it is enough to prove the involution property for the coefficients of the asymptotic expansion of $W(x,\lambda)$:

\[
\sigma\,\overline{W_1(x)}\,\sigma = \sigma\big(i\sigma_3\overline{U_0(x)}\big)\sigma = i(\sigma\sigma_3\sigma)\big(\sigma\overline{U_0(x)}\sigma\big) = -i\sigma_3U_0(x) = W_1(x),
\]

\[
\sigma\,\overline{W_2(x)}\,\sigma = \sigma\Big(-i\sigma_3\frac{d\overline{W_1(x)}}{dx}\Big)\sigma = -i(\sigma\sigma_3\sigma)\,\sigma\frac{d\overline{W_1(x)}}{dx}\sigma = i\sigma_3\frac{dW_1(x)}{dx} = W_2(x).
\]

Now, assuming that $W_k$ has the involution property for every positive integer $k \le n$, one deduces immediately by induction that the matrix

\[
W_{n+1}(x) = i\sigma_3\Big(\frac{dW_n(x)}{dx} + \sum_{k=1}^{n-1} W_k(x)U_0(x)W_{n-k}(x)\Big)
\]

also possesses this property.

ii) In order to prove that $W$ is quasi-periodic, first note that both $\sigma_3$ and $Q = \exp\{\frac{i\theta}{2}\sigma_3\}$ are diagonal matrices, and hence they commute. Since the matrix map $U(x,\lambda)$ is quasiperiodic, $U_0(x)$ also has this property. Now, using the recursion relations for the coefficients of the asymptotic expansion of $W(x,\lambda)$, we get

\[
W_1(x+2L) = -i\sigma_3U_0(x+2L) = Q^{-1}(-i\sigma_3U_0(x))Q = Q^{-1}W_1(x)Q,
\]
\[
W_2(x+2L) = i\sigma_3\frac{dW_1(x+2L)}{dx} = Q^{-1}\Big(i\sigma_3\frac{dW_1(x)}{dx}\Big)Q = Q^{-1}W_2(x)Q.
\]

We then proceed by induction: supposing that the coefficients $W_k(x)$ of the asymptotic series are quasi-periodic for every positive integer $k \le n$, we conclude that the coefficient $W_{n+1}(x)$ is also quasi-periodic. As a consequence, $W(x,\lambda)$ has the quasi-periodicity property.

Taking into account the properties of the anti-diagonal matrix map $W(x,\lambda)$ (which were established in Theorem (2.2.4) and Proposition (2.2.5)), we derive the following result.

Definition 2.2.1. We say that a complex-valued function $u(x)$ of the real variable $x$ satisfies the quasi-periodicity condition if there exist a real number $L > 0$ and $0 \le \theta < 2\pi$ such that $u(x+2L) = e^{i\theta}u(x)$.

Corollary 2.2.6. The matrix map $W(x,\lambda)$ can be expressed by the formula
\[
W(x,\lambda) = i\sqrt{\kappa}\big(w(x,\lambda)\sigma_- - \overline{w(x,\bar\lambda)}\,\sigma_+\big), \tag{2.2.25}
\]
where $w(x,\lambda)$ is a complex-valued function, differentiable in $x$ and analytic in $\lambda$, with the following properties:

i) Asymptotic expansion
\[
w(x,\lambda) = \sum_{n=1}^\infty \frac{w_n(x)}{\lambda^n}, \tag{2.2.26}
\]
with
\[
w_1(x) = \psi(x), \qquad w_2(x) = -i\frac{d\psi(x)}{dx}, \qquad w_{n+1}(x) = -i\frac{dw_n(x)}{dx} + \kappa\,\overline{\psi(x)}\sum_{k=1}^{n-1} w_kw_{n-k}.
\]

ii) $w(x,\lambda)$ satisfies the quasi-periodicity property
\[
w(x+2L,\lambda) = e^{i\theta}w(x,\lambda). \tag{2.2.27}
\]

Proof. Since the matrix map $W(x,\lambda)$ belongs to $sl(2,\mathbb{C})$, is anti-diagonal, and satisfies the involution property, the existence of the complex-valued function $w(x,\lambda)$ satisfying equation (2.2.25) is guaranteed by the algebraic Proposition (2.1.5). Properties i) and ii) are direct consequences of Theorem (2.2.4) and Proposition (2.2.5).

Corollary 2.2.7. The monodromy matrix $M(\lambda)$ has the following representation:

\[
M(\lambda) = (I + W(L,\lambda))\,\exp(Z_L(\lambda))\,(I + W(-L,\lambda))^{-1},
\]
where
\[
Z_L(\lambda) = -i\lambda L\sigma_3 + \int_{-L}^{L} U_0W(x,\lambda)\,dx, \tag{2.2.28}
\]
and $W(x,\lambda)$ is defined as in Theorem (2.2.4).

Time evolution and Integrals of motion

In Section 1.4 of Chapter 1 we concluded that the determinant and the trace of the monodromy matrix are time independent; hence they are integrals of motion of the zero curvature equation. Our purpose here is to write these time-invariants in terms of the coefficient $\psi$ which defines the matrix function $U(x,t,\lambda)$ in (2.2.2).

Theorem 2.2.8. The trace of the monodromy matrix $M(\lambda)$ takes the form
\[
F_L(\lambda) = \operatorname{tr}\big(M(\lambda)Q(\theta)\big) = 2\cos\Big(\varphi_L(\lambda) + \frac{\theta}{2} - \lambda L\Big), \tag{2.2.29}
\]
where $\varphi_L(\lambda)$ is given by
\[
\varphi_L(\lambda) = \kappa\int_{-L}^{L} \overline{\psi(x)}\,w(x,\lambda)\,dx, \tag{2.2.30}
\]
and has the following properties:

1. $\overline{\varphi_L(\bar\lambda)} = \varphi_L(\lambda)$.

2. $\varphi_L(\lambda) = \kappa\sum_{n=1}^\infty \frac{I_n}{\lambda^n} + O(|\lambda|^{-\infty})$, with
\[
I_n = \int_{-L}^{L} \overline{\psi(x)}\,w_n(x)\,dx. \tag{2.2.31}
\]

Proof. Before giving the complete proof of the theorem, we state some preliminary facts. Since the matrix map $W(x,\lambda)$ is quasi-periodic, the matrix $I + W(x,\lambda)$ and its inverse are also quasi-periodic:

I + W(x + 2L, λ) = I + Q−1W(x, λ)Q = Q−1(I + W(x, λ))Q.

Using this fact and Corollary (2.2.7), we find

\[
M(\lambda)Q(\theta) = (I + W(L,\lambda))\,\exp(Z_L(\lambda))\,Q(\theta)\,(I + W(L,\lambda))^{-1}. \tag{2.2.32}
\]

The commutativity of $\sigma_3$ and $Z_L(\lambda)$ gives
\[
F_L(\lambda) = \operatorname{tr}\big(M(\lambda)Q(\theta)\big) = \operatorname{tr}\exp\Big(Z_L(\lambda) + \frac{i\theta}{2}\sigma_3\Big). \tag{2.2.33}
\]
Remark that $M(\lambda)Q(\theta)$ has determinant equal to 1. Then

\[
\operatorname{tr}Z_L(\lambda) = O(|\lambda|^{-\infty}). \tag{2.2.34}
\]
Now we notice the following fact: if $u$ and $v$ are two quasi-periodic complex functions, then $\bar{u}v$ is a periodic function; indeed,
\[
\overline{u(x+2L)}\,v(x+2L) = \overline{e^{i\theta}u(x)}\,e^{i\theta}v(x) = \bar{u}(x)v(x).
\]
After this observation we can see that the diagonal matrix $U_0W$ involved in the integral formula (2.2.28),

\[
U_0(x)W(x,\lambda) = i\kappa\begin{pmatrix} \overline{\psi(x)}\,w(x,\lambda) & 0 \\ 0 & -\psi(x)\,\overline{w(x,\bar\lambda)} \end{pmatrix}, \tag{2.2.35}
\]

is periodic. Therefore the integral in (2.2.28) is independent of the choice of the fundamental domain. We define the function
\[
\varphi_L(\lambda) := \kappa\int_{-L}^{L} \overline{\psi(x)}\,w(x,\lambda)\,dx. \tag{2.2.36}
\]
Taking into account that the function $w(x,\lambda)$ has the asymptotic series (2.2.26), we obtain

\[
\varphi_L(\lambda) = \kappa\sum_{n=1}^\infty \frac{I_n}{\lambda^n} + O(|\lambda|^{-\infty}), \tag{2.2.37}
\]
with
\[
I_n(\psi,\bar\psi) = \int_{-L}^{L} \overline{\psi(x)}\,w_n(x)\,dx. \tag{2.2.38}
\]

Moreover, it follows from here that the series (2.2.37) has real coefficients, and hence
\[
\overline{\varphi_L(\bar\lambda)} = \varphi_L(\lambda); \tag{2.2.39}
\]
indeed, by (2.2.35) the trace of $Z_L(\lambda)$ equals $i\big(\varphi_L(\lambda) - \overline{\varphi_L(\bar\lambda)}\big)$, which vanishes up to terms of order $O(|\lambda|^{-\infty})$ by (2.2.34). Thus $Z_L(\lambda)$ can be expressed as

\[
Z_L(\lambda) = i\sigma_3\big(\varphi_L(\lambda) - \lambda L\big), \tag{2.2.40}
\]
and therefore
\[
F_L(\lambda) = 2\cos\Big(\varphi_L(\lambda) + \frac{\theta}{2} - \lambda L\Big). \tag{2.2.41}
\]

For each $n = 1, 2, \dots$, the functionals $I_n$ in (2.2.38) are the local integrals of motion for the quasi-periodic case. Using the formulas for the coefficients of the asymptotic expansion (2.2.26) of $w(x,\lambda)$, we can express the first four integrals of motion in terms of $\psi(x)$:

\[
I_1 = \int_{-L}^{L} |\psi(x)|^2\,dx,
\]
\[
I_2 = \int_{-L}^{L} \Big(-i\,\overline{\psi}(x)\frac{d\psi(x)}{dx}\Big)\,dx,
\]
\[
I_3 = \int_{-L}^{L} \Big(-\overline{\psi}(x)\frac{d^2\psi(x)}{dx^2} + \kappa|\psi(x)|^4\Big)\,dx,
\]
\[
I_4 = \int_{-L}^{L} i\Big(\overline{\psi}(x)\frac{d^3\psi(x)}{dx^3} - \kappa|\psi(x)|^2\Big(\psi(x)\frac{d\overline{\psi}(x)}{dx} + 4\overline{\psi}(x)\frac{d\psi(x)}{dx}\Big)\Big)\,dx.
\]
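The recursion of Corollary (2.2.6) can be automated symbolically. The sketch below (our own illustration, assuming sympy, and treating $\psi$ and $\bar\psi$ as independent functions `psi`, `psibar`) reproduces $w_2 = -i\psi'$ and $w_3 = -\psi'' + \kappa\bar\psi\psi^2$, whose density $\bar\psi\,w_3 = -\bar\psi\psi'' + \kappa|\psi|^4$ is exactly the integrand of $I_3$ above.

```python
import sympy as sp

x, kappa = sp.symbols('x kappa', real=True)
p = sp.Function('psi')(x)       # psi(x)
q = sp.Function('psibar')(x)    # conj(psi)(x), treated as an independent function

def next_w(ws):
    """w_{n+1} = -i w_n' + kappa * psibar * sum_{k=1}^{n-1} w_k w_{n-k}."""
    n = len(ws)
    conv = sum(ws[k - 1] * ws[n - k - 1] for k in range(1, n))
    return sp.expand(-sp.I * sp.diff(ws[-1], x) + kappa * q * conv)

w = [p]                          # w_1 = psi
for _ in range(3):
    w.append(next_w(w))

print(w[1])   # w_2 = -i psi'
print(w[2])   # w_3 = -psi'' + kappa * psibar * psi^2
```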

2.2.2 Decreasing case.

Recall that the matrix coefficient of the linear problem (2.2.4) has the form $U(x,\lambda) = U_0(x) + \frac{\lambda}{2i}\sigma_3$. Let us assume that $U_0(x) \in L_1^{2\times2}(\mathbb{R}) \cap C^\infty(\mathbb{R})$ and that $\lambda$ is a real parameter. The integrability assumption on $U_0(x)$ implies that $\psi \in L_1(\mathbb{R})$; moreover, it is a sufficient condition for the existence of the fundamental solution $T(x,y,\lambda)$ of the linear problem (2.2.4)-(2.2.5). It follows from Proposition (1.3.5) that the limits

\[
T_\pm(x,\lambda) = \lim_{y\to\pm\infty} T(x,y,\lambda)E(y,\lambda) \tag{2.2.42}
\]
exist for each $x$ and real $\lambda$. Here
\[
E(x,\lambda) = \exp\Big(\frac{\lambda x}{2i}\sigma_3\Big) \tag{2.2.43}
\]
is the fundamental solution of the linear problem
\[
\frac{d}{dx}E(x,\lambda) = \frac{\lambda}{2i}\sigma_3E(x,\lambda), \qquad E(0,\lambda) = I,
\]

for each real $\lambda$. By Proposition (1.2.10), the integral representations for $T_\pm(x,\lambda)$ are given by
\[
T_-(x,\lambda) = E(x,\lambda) + \int_{-\infty}^x E(x-z,\lambda)U_0(z)T_-(z,\lambda)\,dz, \tag{2.2.44}
\]
\[
T_+(x,\lambda) = E(x,\lambda) - \int_x^\infty E(x-z,\lambda)U_0(z)T_+(z,\lambda)\,dz. \tag{2.2.45}
\]

Moreover, by Proposition (1.3.5) the matrix functions $T_\pm(x,\lambda)$ satisfy the linear problem
\[
\frac{d}{dx}T_\pm = U(x,\lambda)T_\pm, \tag{2.2.46}
\]
with the asymptotic conditions

\[
T_\pm(x,\lambda) \to E(x,\lambda) \quad \text{as } x \to \pm\infty. \tag{2.2.47}
\]

Following [7], the functions $T_\pm(x,\lambda)$ will be called Jost solutions; they play a significant role in later developments. We shall use the column notation for the matrices $T_\pm(x,\lambda)$:

\[
T_\pm(x,\lambda) = \big(T_\pm^{(1)}(x,\lambda),\, T_\pm^{(2)}(x,\lambda)\big). \tag{2.2.48}
\]

Let us examine the analytic properties of the entries of $T_\pm(x,\lambda)$, considered as functions of $\lambda$ for fixed $x$. Recall that if the matrix $U(x,\lambda)$ is analytic with respect to the parameter $\lambda$, then $T(x,y,\lambda)$ is an entire function. However, the $T_\pm(x,\lambda)$ are not analytic in general, because their definition involves a limit. Nevertheless, the columns of $T_\pm(x,\lambda)$ can be analytically extended into distinct halves of the complex plane.

Proposition 2.2.9. Let T±(x, λ) be the functions defined by ( 2.2.42). Then,

a) The column vectors $T_-^{(1)}(x,\lambda)$ and $T_+^{(2)}(x,\lambda)$ can be extended analytically to the upper half plane.

b) The column vectors $T_+^{(1)}(x,\lambda)$ and $T_-^{(2)}(x,\lambda)$ can be extended analytically to the lower half plane.

Proof. This result follows from the integral representations (2.2.44)-(2.2.45) and the absolute integrability of $E(x-z,\lambda)U_0(z)$.

The monodromy matrix in the decreasing case is defined by

\[
M(\lambda) = T_+^{-1}(x,\lambda)\,T_-(x,\lambda). \tag{2.2.49}
\]

Proposition 2.2.10. The monodromy matrix M(λ) satisfies the involution property

\[
M(\lambda) = \sigma\,\overline{M(\bar\lambda)}\,\sigma.
\]

Moreover, there exist complex-valued functions $a(\lambda)$, $b(\lambda)$ such that

 a(λ) b(λ)  M(λ) = , (2.2.50) b(λ) a(λ) where  1 if κ > 0 ε = (2.2.51) −1 if κ < 0 and |a(λ)|2 − |b(λ)|2 = 1. (2.2.52)

Proof. We only have to check that T±(x, λ) also satisfy the involution property. Since T(x, y, λ)E(y, λ) converges uniformly to T±(x, λ), as y → ±∞, it only remains to verify that this sequence also satisfy the involution property.

σT(x, y, λ)E(y, λ)σ = σT(x, y, λ)σσE(y, λ)σ λ = T(x, y, λ) exp{ σσ σ} 2i 3 −λ = T(x, y, λ) exp{ σ } 2i 3 = (T(x, y, λ)E(y, λ)). 42 CHAPTER 2. LINEAR PROBLEM IN SL(2, C)

Therefore, the involution property also holds for T±(x, λ). On the other hand,

−1 M(λ) = (T+ (x, λ)T−(x, λ)) −1 = (T+(x, λ)) T−(x, λ), −1 = (σT+ (x, λ)σ)σT−(x, λ)σ, = σM(λ)σ.

The existence of the complex valued functions a(λ), b(λ) is guaranteed by Lemma (2.1.5), because M(λ) has the involution property. These functions satisfy the normalized relation (2.2.52) because of unimodularity of the monodromy.
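The algebra behind the normalization (2.2.52) can be checked directly: for a matrix of the involutive form above, unimodularity is, term by term, the normalization relation. A minimal sketch with sample values (the numbers are illustrative assumptions):

```python
import numpy as np

def M_matrix(a, b, eps):
    """Monodromy matrix of the involutive form [[a, eps*conj(b)], [b, conj(a)]]."""
    return np.array([[a, eps * np.conj(b)], [b, np.conj(a)]])

eps = -1                                           # case kappa < 0
b = 0.6 * np.exp(0.7j)                             # sample transition coefficient
a = np.sqrt(1 + eps * abs(b)**2) * np.exp(0.3j)    # enforce |a|^2 - eps|b|^2 = 1
M = M_matrix(a, b, eps)
# det M = a*conj(a) - eps*b*conj(b) = |a|^2 - eps*|b|^2 = 1
assert abs(np.linalg.det(M) - 1) < 1e-12
```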

Trace formulas for monodromy

Let us now discuss the local integrals of motion. We assume that ψ(x) and \bar\psi(x) are of Schwartz type. In order to exploit the results obtained earlier, suppose that ψ(x), \bar\psi(x) are the limits, as L → ∞, of 2L-periodic functions ψ_L(x), \bar\psi_L(x). In this case the densities P_n(x) of the local integrals of motion defined in the quasi-periodic case have limits as L → ∞ which are also of Schwartz type. Therefore, we can take the limit as L → ∞ and obtain

I_n = \int_{-\infty}^{\infty} P_n(x)\,dx.

Here, each P_n(x) is constructed from ψ(x), \bar\psi(x) according to the formulae of the quasi-periodic case. Let us now consider the limit, as L → ∞, of the generating function P_L(λ),

P_L(\lambda) = \arccos\Bigl(\tfrac{1}{2}\,\mathrm{tr}\,M_L(\lambda)\Bigr).

The definition of the monodromy matrix in the quasiperiodic case implies that for real λ

\mathrm{tr}\,M_L(\lambda) = e^{-i\lambda L}a(\lambda) + e^{i\lambda L}\overline{a(\lambda)} + o(1) = 2|a(\lambda)|\cos\bigl(\arg a(\lambda) - \lambda L\bigr) + o(1) \quad\text{as } L\to\infty.

Since b(λ) is of Schwartz type, the normalization relation yields

|a(\lambda)| = 1 + O(|\lambda|^{-\infty}).

Thus, in the limit L → ∞ the generating function for the conservation laws coincides with log a(λ):

\lim_{L\to\infty}\bigl(P_L(\lambda) + \lambda L\bigr) = \frac{1}{i}\log a(\lambda),

up to terms of order O(|λ|^{-∞}). We deduce that log a(λ) is the generating function of the local integrals of motion,

\log a(\lambda) = i\kappa\sum_{n=1}^{\infty}\frac{I_n}{\lambda^n} + O(|\lambda|^{-\infty}).

The coefficients I_n can be determined from the representation of a(λ) through b(λ) and its zeros. Since \log(1+\varepsilon|b(\mu)|^2) in the expression for a(λ) is of Schwartz type, expanding \frac{1}{\mu-\lambda} in a geometric progression leads to the asymptotic expansion

\log a(\lambda) = i\kappa\sum_{n=1}^{\infty}\frac{c_n}{\lambda^n} + O(|\lambda|^{-\infty}),

where

c_n = \frac{1}{2\pi\kappa}\int_{-\infty}^{\infty}\log\bigl(1+\varepsilon|b(\lambda)|^2\bigr)\lambda^{n-1}\,d\lambda + \frac{1}{2n\kappa}\sum_{j=1}^{m}\bigl(\bar\lambda_j^{\,n} - \lambda_j^{\,n}\bigr), \qquad n = 1, 2, \ldots

Here, if ε = 1, the sum over the zeros on the right hand side disappears; therefore

c_n = I_n = \int_{-\infty}^{\infty} P_n(x)\,dx,

and we obtain formulae relating ψ(x), \bar\psi(x) with functionals of b(λ), \bar b(λ) and λ_j, \bar\lambda_j. In spectral theory such formulae are called trace identities [7].

2.2.3 The Spectral Problem.

Consider the linear problem

\frac{df}{dx} = U(x,\lambda)f, \qquad f(x) = \bigl(f_1(x), f_2(x)\bigr)^{t} \in \mathbb{C}^2.   (2.2.53)

Let X = \bigl(L_2(\mathbb{R})\otimes\mathbb{C}^2, \langle\,,\rangle\bigr) be the Hilbert space with inner product

\langle f, g\rangle = \int_{-\infty}^{\infty} f(x)\cdot\overline{g(x)}\,dx.   (2.2.54)

Multiplying both sides of (2.2.53) by iσ_3, it is easy to see that for λ ∈ ℂ the linear problem (2.2.53) is equivalent to the eigenvalue problem in X

\mathcal{L}f = \frac{\lambda}{2}f,   (2.2.55)

for the first order matrix differential operator

\mathcal{L} = i\sigma_3\frac{d}{dx} + i\sqrt{\kappa}\bigl(\psi(x)\sigma_- - \overline{\psi(x)}\sigma_+\bigr).   (2.2.56)

So λ is interpreted as a spectral parameter for 𝓛. Let 𝓛* denote the adjoint operator of 𝓛,

\langle \mathcal{L}f, g\rangle = \langle f, \mathcal{L}^{*}g\rangle.   (2.2.57)

Using the properties (2.1.10) of σ_±, we get

\mathcal{L}^{*} = i\sigma_3\frac{d}{dx} + i\,\mathrm{sign}(\kappa)\sqrt{\kappa}\bigl(\psi(x)\sigma_- - \overline{\psi(x)}\sigma_+\bigr).   (2.2.58)

It follows from here that:

Case κ ≥ 0. 𝓛 is formally self-adjoint, 𝓛 = 𝓛*, and the eigenvalues λ are real. This implies that the analytic extension of the function a(λ) has no zeros (see Proposition 2.2.15).

Case κ < 0. 𝓛 is not self-adjoint and the function a(λ) may have zeros, which correspond to the discrete spectrum of (2.2.55).
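The self-adjointness claim for κ > 0 can be probed with a crude finite-difference discretization of 𝓛 on a periodic grid; the grid size, the sample ψ, and the value of κ below are illustrative assumptions.

```python
import numpy as np

# Discretize L = i*sigma3 d/dx + i*sqrt(kappa)*(conj(psi)*sigma_- - psi*sigma_+)
# on a periodic grid and check that the resulting matrix is Hermitian (kappa > 0).
N, h, kappa = 64, 0.25, 2.0
x = h * np.arange(N)
psi = 0.5 * np.exp(-0.1 * (x - N * h / 2)**2) * np.exp(0.3j * x)

s3 = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], complex)   # sigma_+
sm = np.array([[0, 0], [1, 0]], complex)   # sigma_-

# periodic central difference: D is real antisymmetric, so i*D (x) sigma3 is Hermitian
D = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * h)
D[0, -1], D[-1, 0] = -1 / (2 * h), 1 / (2 * h)

L = 1j * np.kron(D, s3)
for k in range(N):
    P = np.zeros((N, N)); P[k, k] = 1.0
    # potential term at grid point k; Hermitian because sqrt(kappa) is real
    V = 1j * np.sqrt(kappa) * (np.conj(psi[k]) * sm - psi[k] * sp)
    L += np.kron(P, V)

assert np.allclose(L, L.conj().T)   # formally self-adjoint for kappa > 0
```

For κ < 0 the factor √κ becomes imaginary and the same construction produces a non-Hermitian matrix, mirroring the dichotomy in the text.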

Analytic behavior of the transition coefficients

Now we shall deduce some analytic properties of the transition coefficients and obtain integral representations for them, which will be useful in the study of the time evolution. Recall that

T_\pm(x,\lambda) = \bigl(T_\pm^{(1)}(x,\lambda),\, T_\pm^{(2)}(x,\lambda)\bigr).   (2.2.59)

Observe that the columns of the matrix functions T_\pm are eigenfunctions of the eigenvalue problem (2.2.55). In order to study the analytic behavior of the transition coefficients, we need the following result.

Lemma 2.2.11. The transition coefficients have the following expressions:

a(\lambda) = \det\bigl(T_-^{(1)}(x,\lambda),\, T_+^{(2)}(x,\lambda)\bigr),   (2.2.60)

b(\lambda) = \det\bigl(T_+^{(1)}(x,\lambda),\, T_-^{(1)}(x,\lambda)\bigr).   (2.2.61)

Proof. Let us denote

T_\pm^{(1)} = \begin{pmatrix} T_\pm^{(1)1} \\ T_\pm^{(1)2} \end{pmatrix} \quad\text{and}\quad T_\pm^{(2)} = \begin{pmatrix} T_\pm^{(2)1} \\ T_\pm^{(2)2} \end{pmatrix}.

With this notation, we can express the monodromy matrix (2.2.49) as

M(\lambda) = \begin{pmatrix} T_+^{(2)2} & -T_+^{(2)1} \\ -T_+^{(1)2} & T_+^{(1)1} \end{pmatrix}\begin{pmatrix} T_-^{(1)1} & T_-^{(2)1} \\ T_-^{(1)2} & T_-^{(2)2} \end{pmatrix}.

By Proposition 2.2.10, we have

a(\lambda) = T_-^{(1)1}T_+^{(2)2} - T_-^{(1)2}T_+^{(2)1} = \det\bigl(T_-^{(1)}(x,\lambda),\, T_+^{(2)}(x,\lambda)\bigr),

b(\lambda) = T_+^{(1)1}T_-^{(1)2} - T_+^{(1)2}T_-^{(1)1} = \det\bigl(T_+^{(1)}(x,\lambda),\, T_-^{(1)}(x,\lambda)\bigr).

Proposition 2.2.12. The transition coefficient a(λ) has an analytic extension into the upper half plane Im λ ≥ 0, with the asymptotic behavior

a(\lambda) \to 1 \quad\text{as } |\lambda|\to\infty.

Proof. By Lemma 2.2.11,

a(\lambda) = \det\bigl(T_-^{(1)}(x,\lambda),\, T_+^{(2)}(x,\lambda)\bigr).

Hence a(λ) has an analytic extension, since by Proposition 2.2.9 the column vectors T_-^{(1)}(x,\lambda) and T_+^{(2)}(x,\lambda) can be analytically extended into the upper half plane.

Corollary 2.2.13. The complex valued function a^*(\lambda) := \overline{a(\bar\lambda)} has an analytic extension into the lower half plane Im λ ≤ 0.

Proof. Write a(λ) = u(λ) + iv(λ), λ = x + iy. Since a(λ) is analytic in the upper half plane, the Cauchy-Riemann equations

\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}

hold for y ≥ 0. Let a^*(\lambda) = \hat u(\lambda) + i\hat v(\lambda). Then

\hat u(x+iy) = u(x-iy), \qquad \hat v(x+iy) = -v(x-iy),

and

\frac{\partial \hat u}{\partial x}(x+iy) = \frac{\partial u}{\partial x}(x-iy) = \frac{\partial v}{\partial y}(x-iy) = \frac{\partial \hat v}{\partial y}(x+iy)

for y ≤ 0. Therefore, a^*(\lambda) = \overline{a(\bar\lambda)} has an analytic extension into the lower half plane.

Remark 1. The analytic properties of T_\pm^{(1)}(x,\lambda) together with (2.2.61) imply that, in general, b(λ) has no analytic continuation off the real line.

Now, let us analyze the zeros of the function a(λ) on its domain of analyticity.

Proposition 2.2.14. The transition coefficient a(λ) has no zeros on the real line.

Proof. Suppose that a(λ_0) = 0 for some real λ_0. Then, by virtue of the normalization relation

|a(\lambda)|^2 - |b(\lambda)|^2 = 1,

for λ = λ_0 we would have

|b(\lambda_0)|^2 = -1.

This is clearly a contradiction.

For each real λ the spectral problem (2.2.55) has solutions of multiplicity two. This follows from the fact that for each λ the two linearly independent columns forming the matrix T_+(x, λ) satisfy (2.2.4), (2.2.5), which is equivalent to the spectral problem (2.2.55).

Proposition 2.2.15. If κ is positive, then the coefficient a(λ) has no zeros on its domain of analyticity.

Proof. Suppose that a(λ_0) = 0 with Im λ_0 > 0. By Lemma 2.2.11,

a(\lambda) = \det\bigl(T_-^{(1)}(x,\lambda),\, T_+^{(2)}(x,\lambda)\bigr).   (2.2.62)

It follows that the column vectors T_-^{(1)}(x,\lambda_0) and T_+^{(2)}(x,\lambda_0) are linearly dependent. Thus, for λ = λ_0, the linear problem (2.2.4) has a column vector solution decaying exponentially as |x| → ∞. However, that problem is equivalent to the spectral problem (2.2.55), which would then have a non-real eigenvalue λ_0. But since the operator 𝓛 is self-adjoint, it only has real eigenvalues. This contradiction shows that a(λ) has no complex zeros.

The situation is very different when κ < 0. Since 𝓛 is not self-adjoint, a(λ) may have zeros. The analyticity and the asymptotic behavior of the coefficient a(λ) (Proposition 2.2.12) imply that the zeros are located in a bounded region of the half plane Im λ ≥ 0 and may only accumulate towards the real line. In order to simplify our analysis, we make the following assumptions:

1. No zeros occur on the real axis;

2. All the zeros are simple.

By compactness, under the above hypotheses, it follows that a(λ) has only a finite number of zeros and that the strict inequality |b(λ)| < 1 holds.

Remark 2. The set of functions ψ(x), \bar\psi(x) satisfying conditions 1 and 2 is, in a natural sense, open and dense in the space of decreasing functions.

Let λ_1, λ_2, ..., λ_n be the complete list of zeros of a(λ), with Im λ_j > 0, j = 1, 2, ..., n. From Lemma 2.2.11 it follows that at λ = λ_j the column T_-^{(1)}(x,\lambda_j) is proportional to T_+^{(2)}(x,\lambda_j). Let γ_j be the proportionality coefficient,

T_-^{(1)}(x,\lambda_j) = \gamma_j\,T_+^{(2)}(x,\lambda_j), \qquad j = 1, 2, \ldots, n.   (2.2.63)

Note that

\gamma_j \neq 0,   (2.2.64)

since otherwise the monodromy matrix would have determinant equal to zero. Using the involution property, we find the conjugate relation

\overline{T_-^{(1)}(x,\lambda_j)} = \bar\gamma_j\,\overline{T_+^{(2)}(x,\lambda_j)}.

The set of complex numbers γ_j is one of the characteristics of the auxiliary linear problem and will play an important role in what follows. Indeed, the numbers λ_j, \bar\lambda_j constitute the discrete part of the spectrum of the differential operator 𝓛 for κ < 0. Furthermore, for any κ, 𝓛 has continuous spectrum of multiplicity two on the whole real line, in accordance with the existence of two linearly independent solutions of the spectral problem, which is equivalent to the existence of the fundamental solution of the linear problem. According to the observations above, we shall call a(λ) and b(λ) the transition coefficients for the continuous spectrum, and γ_j, \bar\gamma_j, j = 1, 2, ..., n, will be called the transition coefficients for the discrete spectrum.

Definition 2.2.2. The set {b(λ), λ_j, γ_j} is called the spectral data of the linear problem (2.2.4) if the conditions

• Im λ_j > 0,

• the λ_j are simple zeros

hold.

Now, let us show how the analyticity of a(λ) and the normalization relation are used to express a(λ) through its zeros and b(λ). First we recall some facts from the theory of analytic functions.

Lemma 2.2.16. Suppose we are given a complex-valued function g(λ) which is

• analytic for Im λ > 0,

• continuous for Im λ ≥ 0,

• vanishing at infinity:

\lim_{|\lambda|\to\infty} g(\lambda) = 0.   (2.2.65)

Then the real part Re g(λ) and the imaginary part Im g(λ) are related by the formula

\mathrm{Im}\,g(\lambda) = -\frac{1}{\pi}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{\mathrm{Re}\,g(\mu)}{\mu-\lambda}\,d\mu, \qquad \lambda\in\mathbb{R}.   (2.2.66)

Proof. By the Cauchy formula we have

g(\lambda) = \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{g(\mu)}{\mu-\lambda}\,d\mu, \qquad \mathrm{Im}\,\lambda > 0.   (2.2.67)

The Sokhotskii-Plemelj formula implies that

g(z+i0) \overset{\mathrm{def}}{=} \lim_{\lambda\to z}\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{g(\mu)}{\mu-\lambda}\,d\mu = \frac{g(z)}{2} + \frac{1}{2\pi i}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{g(\mu)}{\mu-z}\,d\mu   (2.2.68)

for every z ∈ ℝ. Here

\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{g(\mu)}{\mu-z}\,d\mu = \lim_{\epsilon\to 0}\Bigl(\int_{-\infty}^{z-\epsilon}\frac{g(\mu)}{\mu-z}\,d\mu + \int_{z+\epsilon}^{\infty}\frac{g(\mu)}{\mu-z}\,d\mu\Bigr)   (2.2.69)

is the principal value integral. On the other hand, since g(λ) is continuous, we have

g(z) = g(z+i0) = \frac{g(z)}{2} + \frac{1}{2\pi i}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{g(\mu)}{\mu-z}\,d\mu.   (2.2.70)

This implies

g(z) = \frac{1}{\pi i}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{g(\mu)}{\mu-z}\,d\mu.   (2.2.71)

Taking the imaginary part of this identity yields (2.2.66).
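Formula (2.2.66) can be tested numerically on the sample function g(λ) = i/(λ + i), which is analytic in the upper half plane and vanishes at infinity; on the real line Re g = 1/(1+λ²) and Im g = λ/(1+λ²). The symmetric midpoint grid below realizes the principal value; all grid parameters are illustrative assumptions.

```python
import numpy as np

lam = 0.5
h = 0.01
# symmetric midpoint grid around lam: the singular odd part cancels pairwise,
# so the plain Riemann sum converges to the principal value integral
k = np.arange(-200000, 200000)
mu = lam + (k + 0.5) * h
pv = np.sum((1.0 / (1.0 + mu**2)) / (mu - lam)) * h   # p.v. of Re g/(mu - lam)
lhs = lam / (1.0 + lam**2)                            # Im g(lam)
assert abs(lhs - (-pv / np.pi)) < 5e-3                # check (2.2.66)
```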

As we already know, the transition coefficients a(λ), b(λ) satisfy the normalization condition

|a(\lambda)|^2 - \varepsilon|b(\lambda)|^2 = 1, \qquad \varepsilon = \mathrm{sign}\,\kappa.   (2.2.72)

Case κ < 0. The function a(λ) is analytic in the upper half plane, continuous for Im λ ≥ 0, and has the asymptotic behavior

a(\lambda) = 1 + o(1) \quad\text{as } |\lambda|\to\infty.   (2.2.73)

Moreover, a(λ) has zeros λ_1, ..., λ_n in the upper half plane. Let us introduce the function

\tilde a(\lambda) = \prod_{i=1}^{n}\frac{\lambda-\bar\lambda_i}{\lambda-\lambda_i}\cdot a(\lambda).   (2.2.74)

The function ã(λ) is also analytic and ã(λ) ≠ 0 for Im λ > 0. It is clear that

|\tilde a(\lambda)|^2 = |a(\lambda)|^2 = 1 - |b(\lambda)|^2, \qquad \lambda\in\mathbb{R}.   (2.2.75)

The function

g(\lambda) \overset{\mathrm{def}}{=} \ln\tilde a(\lambda) = \ln|\tilde a(\lambda)| + i\arg\tilde a(\lambda)   (2.2.76)

satisfies the hypotheses of Lemma 2.2.16. Taking into account that

\mathrm{Re}\,g(\lambda) = \ln|\tilde a(\lambda)| = \frac{1}{2}\ln\bigl(1-|b(\lambda)|^2\bigr),   (2.2.77)

we deduce from (2.2.66) the representation

\mathrm{Im}\,g(\lambda) = \arg\tilde a(\lambda) = -\frac{1}{2\pi}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{\ln(1-|b(\mu)|^2)}{\mu-\lambda}\,d\mu   (2.2.78)

for λ ∈ ℝ. We have proved the following result.

Proposition 2.2.17. For every λ ∈ ℝ, we have

a(\lambda) = \prod_{i=1}^{n}\frac{\lambda-\lambda_i}{\lambda-\bar\lambda_i}\cdot\bigl(1-|b(\lambda)|^2\bigr)^{1/2}\cdot\exp\Bigl(\frac{1}{2\pi i}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{\ln(1-|b(\mu)|^2)}{\mu-\lambda}\,d\mu\Bigr).   (2.2.79)

One can rewrite (2.2.79) using a Sokhotski type formula:

\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{f(\mu)}{\mu-\lambda}\,d\mu = -\pi i f(\lambda) + \int_{-\infty}^{\infty}\frac{f(\mu)}{\mu-\lambda-i0}\,d\mu.   (2.2.80)

Here f(μ) → 0 as |μ| → ∞ and

\int_{-\infty}^{\infty}\frac{f(\mu)}{\mu-\lambda-i0}\,d\mu = \lim_{\epsilon\to 0}\int_{-\infty}^{\infty}\frac{f(\mu)}{\mu-\lambda-i\epsilon}\,d\mu.   (2.2.81)

Applying (2.2.80) with f(μ) = ln(1 − |b(μ)|²), we get

a(\lambda) = \prod_{i=1}^{n}\frac{\lambda-\lambda_i}{\lambda-\bar\lambda_i}\cdot\exp\Bigl(\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\ln(1-|b(\mu)|^2)}{\mu-\lambda-i0}\,d\mu\Bigr).   (2.2.82)

One can show that this formula is valid for Im λ ≥ 0.

Case κ > 0. In this case a(λ) is analytic in the upper half plane and has no zeros. Applying the same arguments as in the previous case to g(λ) = ln a(λ), we get

a(\lambda) = \exp\Bigl(\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\ln(1+|b(\mu)|^2)}{\mu-\lambda}\,d\mu\Bigr).   (2.2.83)
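A numerical sketch of the representation (2.2.83) for a sample b ∈ S(ℝ): regularizing the real-axis evaluation with a small imaginary shift (an assumption standing in for the i0 prescription) should reproduce the normalization |a(λ)|² − |b(λ)|² = 1.

```python
import numpy as np

def b_fn(mu):
    """Sample Schwartz-type transition coefficient (illustrative)."""
    return 0.5 * np.exp(-mu**2)

lam0, eps, h = 0.3, 1e-2, 1e-3     # eps regularizes the boundary value
mu = np.arange(-40.0, 40.0, h)
integrand = np.log(1 + np.abs(b_fn(mu))**2) / (mu - lam0 - 1j * eps)
a_val = np.exp(np.sum(integrand) * h / (2j * np.pi))
# on the real axis: |a|^2 = 1 + |b|^2, i.e. |a|^2 - |b|^2 = 1 (kappa > 0)
assert abs(abs(a_val)**2 - abs(b_fn(lam0))**2 - 1) < 5e-2
```

The delta contribution of the i0 prescription supplies exactly the factor (1 + |b(λ)|²)^{1/2} in |a(λ)|, which is what the assertion verifies.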

Time evolution of spectral data. Recall that the evolution equation for the transition matrix (see Section 1.4) is given by

\frac{\partial}{\partial t}T(x,y,\lambda) = V(x,\lambda)T(x,y,\lambda) - T(x,y,\lambda)V(y,\lambda).

Let us consider a matrix V(x, λ) of the form

V(x,\lambda) = V_0(x,\lambda) + \frac{i\lambda^2}{2}\sigma_3,   (2.2.84)

where V_0(x,\lambda) \in L_1^{2\times 2}(\mathbb{R}) and λ is a real parameter. Then

V(x,\lambda) \to V(\lambda) = \frac{i\lambda^2}{2}\sigma_3 \quad\text{as } |x|\to\infty.

Taking the limit as y → ±∞ after multiplying by E(y, λ) on the right, we get

\frac{\partial}{\partial t}T_\pm(x,\lambda) = V(x,\lambda)T_\pm(x,\lambda) - T_\pm(x,\lambda)\frac{i\lambda^2}{2}\sigma_3.

Performing the same operation with respect to x, we obtain the equation for the monodromy matrix

\frac{\partial}{\partial t}M(\lambda,t) = \frac{i\lambda^2}{2}\bigl[\sigma_3, M(\lambda,t)\bigr].

This equation possesses a remarkable property: the dependence on ψ(x), \bar\psi(x) is completely eliminated. In terms of the transition coefficients for the continuous spectrum, the last equation is equivalent to

\frac{\partial a}{\partial t}(\lambda,t) = 0,   (2.2.85)

\frac{\partial b}{\partial t}(\lambda,t) = -i\lambda^2\,b(\lambda,t).   (2.2.86)

In particular, we deduce that a(λ) is time independent for real λ:

a(\lambda,t) = a(\lambda,0).
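Equations (2.2.85), (2.2.86) integrate in closed form, so |b(λ, t)| is conserved for real λ; a short check with sample values (the numbers are illustrative):

```python
import numpy as np

lam, t = 1.7, 3.4
b0 = 0.5 * np.exp(-lam**2) * np.exp(0.2j)    # sample b(lam, 0)
b_t = np.exp(-1j * lam**2 * t) * b0          # solves db/dt = -i lam^2 b
assert abs(abs(b_t) - abs(b0)) < 1e-12       # |b| is conserved for real lam
assert abs(abs(np.exp(-1j * lam**2 * t)) - 1) < 1e-12   # pure phase factor
```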

By the analyticity property, the same holds for Im λ > 0, so that the zeros λ_j are time independent as well. Thus, in the decreasing case, the generating function for the conservation laws is just a(λ). Let us now determine the evolution of the transition coefficients for the discrete spectrum. For the column vectors T_-^{(1)}(x,\lambda) and T_+^{(2)}(x,\lambda) we have

\frac{\partial}{\partial t}T_-^{(1)}(x,\lambda) = V(x,\lambda)T_-^{(1)}(x,\lambda) - \frac{i\lambda^2}{2}T_-^{(1)}(x,\lambda),   (2.2.87)

\frac{\partial}{\partial t}T_+^{(2)}(x,\lambda) = V(x,\lambda)T_+^{(2)}(x,\lambda) + \frac{i\lambda^2}{2}T_+^{(2)}(x,\lambda).   (2.2.88)

These relations also hold for Im λ > 0. They are compatible with

T_-^{(1)}(x,\lambda_j) = \gamma_j\,T_+^{(2)}(x,\lambda_j), \qquad j = 1, 2, \ldots, n,   (2.2.89)

only if

\frac{d\gamma_j}{dt}(t) = -i\lambda_j^2\,\gamma_j(t).   (2.2.90)

The last equation and the differential equations for a(λ) and b(λ) are easily solved, so that the time dependence of the transition coefficients is given by the simple formulae

b(\lambda,t) = e^{-i\lambda^2 t}\,b(\lambda,0),   (2.2.91)

\gamma_j(t) = e^{-i\lambda_j^2 t}\,\gamma_j(0), \qquad j = 1, 2, \ldots, n.   (2.2.92)

Chapter 3

The Inverse Problem: The Rapidly Decreasing Case. The Nonlinear Schrödinger Equation

In this chapter, we formulate results on the inverse problem for linear systems on sl(2, C) in the rapidly decreasing case and for the zero curvature equation. That is, we shall show that it is possible to reconstruct the linear system in sl(2, C) (Chapter 2) from its spectral data (Definition 2.2.2), and, assuming the linear time dynamics (2.2.91), (2.2.92) for the spectral data, we shall obtain solutions of the zero curvature equation.

3.1 Formulation of results

Recall that the Schwartz space S(ℝ) consists of all functions f ∈ C^∞(ℝ) such that for each integer n ≥ 0

|x|^m\,\Bigl|\frac{d^n f}{dx^n}(x)\Bigr| \to 0 \quad\text{as } |x|\to\infty,   (3.1.1)

for any m > 0. It is clear that for every f ∈ S(ℝ) we have \frac{d^n f}{dx^n} \in L_1(\mathbb{R}). Given ψ ∈ S(ℝ), we consider the following linear problem:

\frac{d}{dx}T(x,y,\lambda) = \Bigl(U_0(x) + \frac{\lambda}{2i}\sigma_3\Bigr)T(x,y,\lambda),   (3.1.2)

T(x,x,\lambda) = I,   (3.1.3)

where

U_0(x) = \sqrt{\kappa}\bigl(\overline{\psi(x)}\,\sigma_+ + \psi(x)\,\sigma_-\bigr).   (3.1.4)

We shall refer to the Cauchy problem (3.1.2), (3.1.3) as the linear problem corresponding to ψ in the rapidly decreasing case. The form of the coefficient U(x, λ) in (3.1.2) implies that the involution property holds (Proposition 2.1.3), and hence its monodromy matrix M(λ) (2.2.49) can be written as

M(\lambda) = \begin{pmatrix} a(\lambda) & \varepsilon\,\overline{b(\lambda)} \\ b(\lambda) & \overline{a(\lambda)} \end{pmatrix},

where ε = sign(κ) (Proposition 2.1.5). In Subsection 2.2.3 we showed that in the decreasing case the linear system (3.1.2) can be interpreted as the spectral problem

\mathcal{L}f = \frac{\lambda}{2}f,

where 𝓛 is the first order differential operator given by (2.2.56). This gives rise to the notion of spectral data for the linear problem (3.1.2), (3.1.3), which consists of the function b(λ) for κ > 0, and of the set {b(λ), \bar b(λ), λ_j, \bar\lambda_j, γ_j, \bar\gamma_j} for κ < 0. Following [7], by the inverse problem we mean the reconstruction of the matrix coefficient U(x, λ) in (3.1.2) from the spectral data. We have two main results for the inverse problem, which depend on the sign of the constant κ in the matrix coefficient U(x, λ) (3.1.4).

Theorem 3.1.1 (Case κ > 0). Let b(λ) ∈ S(ℝ) be a complex valued function satisfying

|b(\lambda)| < 1.

Define

a(\lambda) = \exp\Bigl(\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log(1+|b(\mu)|^2)}{\mu-\lambda-i0}\,d\mu\Bigr).   (3.1.5)

Then there exists a unique ψ(x) ∈ S(ℝ) such that

M(\lambda) = \begin{pmatrix} a(\lambda) & \overline{b(\lambda)} \\ b(\lambda) & \overline{a(\lambda)} \end{pmatrix}

is the monodromy matrix of the linear problem (3.1.2) corresponding to ψ(x), and

|a(\lambda)|^2 - |b(\lambda)|^2 = 1 \quad\text{for all real } \lambda.   (3.1.6)

Theorem 3.1.2 (Case κ < 0). Let {b(λ), \bar b(λ), λ_j, \bar\lambda_j, γ_j, \bar\gamma_j} be a set consisting of complex numbers λ_j, γ_j (j = 1, 2, ..., n) and of a function b(λ) ∈ S(ℝ) which satisfy the following conditions:

• Im λ_j > 0, λ_i ≠ λ_j if i ≠ j,

• γ_j ≠ 0,

• |b(λ)| < 1.

Define

a(\lambda) = \prod_{j=1}^{n}\frac{\lambda-\lambda_j}{\lambda-\bar\lambda_j}\,\exp\Bigl(\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log(1-|b(\mu)|^2)}{\mu-\lambda-i0}\,d\mu\Bigr).   (3.1.7)

Then there exists a unique ψ(x) ∈ S(ℝ) such that

M(\lambda) = \begin{pmatrix} a(\lambda) & -\overline{b(\lambda)} \\ b(\lambda) & \overline{a(\lambda)} \end{pmatrix}

is the monodromy matrix of the linear problem (3.1.2) corresponding to ψ(x), satisfying:

1. |a(\lambda)|^2 + |b(\lambda)|^2 = 1 for all real λ;

2. the columns T_-^{(1)}(x,\lambda), T_+^{(2)}(x,\lambda) of the Jost solutions T_\pm(x,\lambda) in (2.2.46)-(2.2.47) satisfy the conditions

T_-^{(1)}(x,\lambda_j) = \gamma_j\,T_+^{(2)}(x,\lambda_j), \qquad j = 1, 2, \ldots, n.   (3.1.8)

The basic tool for solving the inverse problem formulated in the preceding theorems is the Riemann-Hilbert problem, or analytic factorization problem [7]. The theorems above can be interpreted in terms of differential operators and spectral data: for a given set {b(λ), \bar b(λ), λ_j, \bar\lambda_j, γ_j, \bar\gamma_j} satisfying the conditions of Theorem 3.1.2, there exists a unique complex valued function ψ ∈ S(ℝ) such that the corresponding linear problem has this set as its spectral data.

Sketch of the proof of Theorems 3.1.1 and 3.1.2. The theorems formulated above establish the existence of the linear problem for the given spectral data, but they do not say how that linear problem can be obtained. Our motivation for analyzing the proof is, first, to give a complete study of this result and, second, that the proof shows essentially how to derive the linear system in question. The proofs of the two theorems coincide up to the application of the Riemann problem.

We give the scheme of the proof in steps.

Step one. Assume that the complex valued function b(λ) satisfies the conditions of Theorem 3.1.1 or 3.1.2. We define the matrix valued function (x, λ) ↦ G(x, λ),

G(x,\lambda) \overset{\mathrm{def}}{=} E(x,\lambda)\,G(\lambda)\,E^{-1}(x,\lambda) = \begin{pmatrix} 1 & \overline{b(\lambda)}e^{-i\lambda x} \\ -b(\lambda)e^{i\lambda x} & 1 \end{pmatrix},   (3.1.9)

with

E(x,\lambda) = \exp\Bigl(\frac{x\lambda}{2i}\sigma_3\Bigr), \qquad G(\lambda) = \begin{pmatrix} 1 & \overline{b(\lambda)} \\ -b(\lambda) & 1 \end{pmatrix}.   (3.1.10)

The matrix G(x, λ) has the following properties:

1. det G(x, λ) = 1 + |b(λ)|² > 0.

2. Since b(λ) ∈ S(ℝ_λ), we have G(x, λ) = I + o(1) as |λ| → ∞.

3. Integral representation:

G(x,\lambda) = I + \int_{-\infty}^{\infty}\Phi(x+s)\,e^{i\lambda s}\,ds,   (3.1.11)

where

\Phi(s) = \begin{pmatrix} 0 & \overline{\beta(-s)} \\ -\beta(s) & 0 \end{pmatrix},   (3.1.12)

and β(s) is given by

\beta(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty}b(\lambda)\,e^{-i\lambda s}\,d\lambda.   (3.1.13)

By the Riemann problem (see Appendix) there exist unique matrices G_\pm(x,\lambda) such that

• G_+(x, λ) is analytic in the upper half plane Im λ ≥ 0;

• G_−(x, λ) is analytic in the lower half plane Im λ ≤ 0;

• for each λ ∈ (−∞, ∞),

G(x,\lambda) = G_+(x,\lambda)\,G_-(x,\lambda).   (3.1.14)

Step two. Let us define the matrices

T_+(x,\lambda) = G_+^{-1}(x,\lambda)\,E(x,\lambda)   (3.1.15)

and

T_-(x,\lambda) = G_-(x,\lambda)\,E(x,\lambda).   (3.1.16)

We claim that T_\pm(x,\lambda) satisfy the linear problem

\frac{dT_\pm}{dx} = \Bigl(\frac{\lambda}{2i}\sigma_3 + U_0(x)\Bigr)T_\pm,   (3.1.17)

where U_0(x) belongs to L_1^{2\times 2}(\mathbb{R}_x). The definitions (3.1.15), (3.1.16) have the following consequences:

• T_+ is analytic and non-degenerate in the upper half plane, except, possibly, for simple poles at λ = λ_j, j = 1, 2, ..., n.

• T_− is analytic and non-degenerate in the lower half plane, except, possibly, for simple poles at λ = \bar\lambda_j, j = 1, 2, ..., n.

• T_-(x,\lambda) = T_+(x,\lambda)\,G(\lambda).   (3.1.18)

Differentiating (3.1.18) with respect to x, we get

\frac{dT_+}{dx}(x,\lambda)\,T_+^{-1}(x,\lambda) = \frac{dT_-}{dx}(x,\lambda)\,T_-^{-1}(x,\lambda).   (3.1.19)

We claim, without proof [7], that both sides of (3.1.19) define an entire function of λ, whose asymptotic behavior as |λ| → ∞ we now analyze. Since G_−(x, λ) is analytic in the lower half plane, the function T_− has the representation

T_-(x,\lambda) = \Bigl(I + \int_{0}^{\infty}\Phi_-(x,s)\,e^{-i\lambda s}\,ds\Bigr)E(x,\lambda),   (3.1.20)

where Φ_−(x, s) is absolutely continuous in x, and \frac{\partial\Phi_-}{\partial x}, \frac{\partial\Phi_-}{\partial s}, \frac{\partial^2\Phi_-}{\partial x\,\partial s}, as functions of s, belong to L_1^{2\times 2}(0,\infty) [7]. Then, for Im λ ≤ 0, T_− has the asymptotic behavior

T_-(x,\lambda) = \Bigl(I + \frac{\Phi_-(x,0)}{i\lambda} + o\bigl(\tfrac{1}{|\lambda|}\bigr)\Bigr)E(x,\lambda) \quad\text{as } |\lambda|\to\infty.   (3.1.21)

It follows that

\frac{dT_-}{dx}(x,\lambda)\,T_-^{-1}(x,\lambda) = \frac{\lambda\sigma_3}{2i} + \frac{1}{2}\bigl[\sigma_3, \Phi_-(x,0)\bigr] + o(1) \quad\text{as } |\lambda|\to\infty.   (3.1.22)

Similarly, we obtain from

T_+(x,\lambda) = \Bigl(I + \int_{0}^{\infty}\Phi_+(x,s)\,e^{i\lambda s}\,ds\Bigr)E(x,\lambda)   (3.1.23)

that, for Im λ ≥ 0,

\frac{dT_+}{dx}(x,\lambda)\,T_+^{-1}(x,\lambda) = \frac{\lambda\sigma_3}{2i} + \frac{1}{2}\bigl[\sigma_3, \Phi_+(x,0)\bigr] + o(1) \quad\text{as } |\lambda|\to\infty.

Hence, by the Liouville theorem we get

\frac{dT_+}{dx}\,T_+^{-1} = \frac{dT_-}{dx}\,T_-^{-1} = \frac{\lambda\sigma_3}{2i} + U_0(x),   (3.1.24)

where

U_0(x) = \frac{1}{2}\bigl[\sigma_3, \Phi_+(x,0)\bigr] = \frac{1}{2}\bigl[\sigma_3, \Phi_-(x,0)\bigr].   (3.1.25)

To conclude, we note that U_0 is anti-diagonal and satisfies the involution property; therefore it can be written as

U_0 = \sqrt{\kappa}\begin{pmatrix} 0 & \overline{\psi(x)} \\ \psi(x) & 0 \end{pmatrix}.   (3.1.26)

Step three. It remains to verify that

M(\lambda) = \begin{pmatrix} a(\lambda) & \varepsilon\,\overline{b(\lambda)} \\ b(\lambda) & \overline{a(\lambda)} \end{pmatrix}

is the monodromy matrix for the linear system corresponding to ψ(x). For this purpose, following [7], one analyzes the asymptotic behavior of G_\pm(x,\lambda) and observes that they are composed of the Jost solutions of the linear problem for ψ, which define the monodromy matrix (Chapter 2).

Formulae for ψ. Case κ > 0. Let

\beta(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty}b(\lambda)\,e^{-i\lambda s}\,d\lambda   (3.1.27)

be the Fourier transform of b ∈ S(ℝ); then β ∈ S(ℝ). Let C_+(x, s) be the solution of the Wiener-Hopf integral equation

C_+(x,s) = \beta(s-x) - \int_{0}^{\infty}\Bigl(\int_{x}^{\infty}\beta(s-t)\,\overline{\beta(s'-t)}\,dt\Bigr)C_+(x,s')\,ds'   (3.1.28)

for s ≥ 0. Then

\psi(x) = \frac{1}{\sqrt{\kappa}}\,C_+(x,0).   (3.1.29)

Case κ < 0. In this case ψ(x) consists of two parts, ψ(x) = ψ^c(x) + ψ^d(x). The first part ψ^c is determined by b(λ),

\psi^c(x) = \frac{1}{\sqrt{\kappa}}\,C_-(x,0),   (3.1.30)

where C_−(x, s) is the solution of the Wiener-Hopf equation

C_-(x,s) = \beta(s-x) - \int_{0}^{\infty}\Bigl(\int_{x}^{\infty}\beta(s-t)\,\overline{\beta(s'-t)}\,dt\Bigr)C_-(x,s')\,ds'.   (3.1.31)

The second part ψ^d is determined by the data (λ_i, γ_i), b(λ) in the context of the Riemann problem. In general, when b ≠ 0, the problem reduces to computing the Blaschke factors and is quite complicated. Now we consider the particular case when

b(\lambda) \equiv 0.   (3.1.32)

In this case the function a(λ) takes the form

a(\lambda) = \prod_{j=1}^{n}\frac{\lambda-\lambda_j}{\lambda-\bar\lambda_j},   (3.1.33)

and the inverse problem can be solved in closed form. If κ > 0, then β ≡ 0 and hence ψ ≡ 0.

Now let us assume κ < 0. Let {b(λ) ≡ 0, λ_j, \bar\lambda_j, γ_j, \bar\gamma_j} be the spectral data. Then the function ψ ∈ S(ℝ) is given by

\psi(x) = \frac{i}{\sqrt{\kappa}}\sum_{j=1}^{n}p_j(x),   (3.1.34)

where the coefficients p_j(x) satisfy the linear system of equations

\sum_{k=1}^{n}\frac{1+\overline{\gamma_j(x)}\,\gamma_k(x)}{\bar\lambda_j-\lambda_k}\,p_k(x) = \overline{\gamma_j(x)}, \qquad j = 1, 2, \ldots, n,   (3.1.35)

with

\gamma_j(x) = \gamma_j\,e^{i\lambda_j x}.   (3.1.36)

In order to illustrate formula (3.1.34), we work out the cases n = 1 and n = 2. Taking n = 1 in (3.1.35), we get

\frac{1+|\gamma_1(x)|^2}{-2i\,\mathrm{Im}\,\lambda_1}\,p_1(x) = \overline{\gamma_1(x)}.   (3.1.37)

Solving the last equation for p_1 and substituting into (3.1.34), we obtain

\psi(x) = \frac{2\,\mathrm{Im}\,\lambda_1}{\sqrt{\kappa}}\cdot\frac{\overline{\gamma_1(x)}}{1+|\gamma_1(x)|^2},   (3.1.38)

where

\gamma_1(x) = \gamma_1\,e^{i\lambda_1 x}.   (3.1.39)

For n = 2 we get the following algebraic system:

\frac{1+\overline{\gamma_1(x)}\gamma_1(x)}{\bar\lambda_1-\lambda_1}\,p_1(x) + \frac{1+\overline{\gamma_1(x)}\gamma_2(x)}{\bar\lambda_1-\lambda_2}\,p_2(x) = \overline{\gamma_1(x)},

\frac{1+\overline{\gamma_2(x)}\gamma_1(x)}{\bar\lambda_2-\lambda_1}\,p_1(x) + \frac{1+\overline{\gamma_2(x)}\gamma_2(x)}{\bar\lambda_2-\lambda_2}\,p_2(x) = \overline{\gamma_2(x)}.
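The one-soliton formula (3.1.38) can be checked numerically: for λ₁ = iη and γ₁ = 1 (illustrative values; |κ| is used below as an assumption about the branch of √κ for κ < 0), the profile collapses to a sech.

```python
import numpy as np

eta, kap = 1.3, -1.0
x = np.linspace(-10, 10, 2001)
g = np.exp(-eta * x)                     # gamma_1(x) = e^{i lam_1 x} for lam_1 = i*eta
# one-soliton modulus from (3.1.38): 2*eta*e^{-eta x}/(1 + e^{-2 eta x}) = eta*sech(eta x)
psi = (2 * eta / np.sqrt(abs(kap))) * g / (1 + np.abs(g)**2)
assert np.allclose(np.abs(psi), eta / np.sqrt(abs(kap)) / np.cosh(eta * x))
```

The amplitude η/√|κ| and the width 1/η are thus both set by the single discrete eigenvalue λ₁, which is the standard soliton picture.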

Solving for the unknown functions p_1(x) and p_2(x) by Cramer's rule,

p_1(x) = \frac{\overline{\gamma_1}\,\dfrac{1+\overline{\gamma_2}\gamma_2}{\bar\lambda_2-\lambda_2} - \overline{\gamma_2}\,\dfrac{1+\overline{\gamma_1}\gamma_2}{\bar\lambda_1-\lambda_2}}{\dfrac{1+\overline{\gamma_1}\gamma_1}{\bar\lambda_1-\lambda_1}\,\dfrac{1+\overline{\gamma_2}\gamma_2}{\bar\lambda_2-\lambda_2} - \dfrac{1+\overline{\gamma_2}\gamma_1}{\bar\lambda_2-\lambda_1}\,\dfrac{1+\overline{\gamma_1}\gamma_2}{\bar\lambda_1-\lambda_2}},   (3.1.40)

p_2(x) = \frac{\overline{\gamma_2}\,\dfrac{1+\overline{\gamma_1}\gamma_1}{\bar\lambda_1-\lambda_1} - \overline{\gamma_1}\,\dfrac{1+\overline{\gamma_2}\gamma_1}{\bar\lambda_2-\lambda_1}}{\dfrac{1+\overline{\gamma_1}\gamma_1}{\bar\lambda_1-\lambda_1}\,\dfrac{1+\overline{\gamma_2}\gamma_2}{\bar\lambda_2-\lambda_2} - \dfrac{1+\overline{\gamma_2}\gamma_1}{\bar\lambda_2-\lambda_1}\,\dfrac{1+\overline{\gamma_1}\gamma_2}{\bar\lambda_1-\lambda_2}},   (3.1.41)

where γ_k = γ_k(x), and

\psi(x) = i\,\frac{p_1(x)+p_2(x)}{\sqrt{\kappa}}.   (3.1.42)

3.2 The inverse Problem for Zero curvature Equation

We shall formulate the result on the reconstruction of solutions of the zero curvature equation on sl(2, C),

\frac{\partial U}{\partial t} - \frac{\partial V}{\partial x} + [U, V] = 0,   (3.2.1)

from the spectral data appearing in Theorems 3.1.1 and 3.1.2. The following results are a direct consequence of those theorems.

Proposition 3.2.1 (Case κ > 0). Let b(λ) ∈ S(ℝ_λ) be a complex valued function satisfying

|b(\lambda)| < 1.

Let us define

b(\lambda,t) = e^{-i\lambda^2 t}\,b(\lambda),   (3.2.2)

a(\lambda,t) = \exp\Bigl(\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log(1+|b(\mu,t)|^2)}{\mu-\lambda-i0}\,d\mu\Bigr),   (3.2.3)

for t ∈ ℝ. Then, for each t ∈ ℝ,

|a(\lambda,t)|^2 - |b(\lambda,t)|^2 = 1 \quad\text{for all real }\lambda,   (3.2.4)

and there exists a unique ψ(x, t) ∈ S(ℝ_x) such that

M(\lambda,t) = \begin{pmatrix} a(\lambda,t) & \overline{b(\lambda,t)} \\ b(\lambda,t) & \overline{a(\lambda,t)} \end{pmatrix}

is the monodromy matrix of the linear problem (2.2.4) corresponding to ψ(x, t).

Proof. For each fixed t ∈ ℝ we have

|b(\lambda,t)|^2 = b(\lambda,t)\,\overline{b(\lambda,t)} = e^{-i\lambda^2 t}b(\lambda)\,e^{i\lambda^2 t}\overline{b(\lambda)} = b(\lambda)\overline{b(\lambda)} = |b(\lambda)|^2 < 1.

Moreover, b(λ, t) belongs to S(ℝ_λ), because b(λ) does and e^{-i\lambda^2 t} is smooth with all its derivatives polynomially bounded. By Theorem 3.1.1, there exists a function ψ(x, t) which defines a linear problem of the type (2.2.4) for each t ∈ ℝ, and the rest of the proposition follows.

Proposition 3.2.2 (Case κ < 0). Let {b(λ), \bar b(λ), λ_j, \bar\lambda_j, γ_j, \bar\gamma_j} be a set consisting of complex numbers λ_j, γ_j (j = 1, 2, ..., n) and of a function b(λ) ∈ S(ℝ_λ) which satisfy the following conditions:

• Im λ_j > 0, λ_i ≠ λ_j if i ≠ j,

• γ_j ≠ 0,

• |b(λ)| < 1.

Let us define

b(\lambda,t) = e^{-i\lambda^2 t}\,b(\lambda),   (3.2.5)

a(\lambda,t) = \prod_{j=1}^{n}\frac{\lambda-\lambda_j}{\lambda-\bar\lambda_j}\,\exp\Bigl(\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log(1-|b(\mu,t)|^2)}{\mu-\lambda-i0}\,d\mu\Bigr),   (3.2.6)

for t ∈ ℝ. Then, for each t ∈ ℝ,

|a(\lambda,t)|^2 + |b(\lambda,t)|^2 = 1 \quad\text{for all real }\lambda,   (3.2.7)

and there exists a unique ψ(x, t) ∈ S(ℝ_x) such that

M(\lambda,t) = \begin{pmatrix} a(\lambda,t) & -\overline{b(\lambda,t)} \\ b(\lambda,t) & \overline{a(\lambda,t)} \end{pmatrix}

is the monodromy matrix of the linear problem (2.2.4) corresponding to ψ(x, t), and the columns T_-^{(1)}(x,t,\lambda), T_+^{(2)}(x,t,\lambda) of the Jost solutions T_\pm(x,t,\lambda) in (2.2.46)-(2.2.47) satisfy the conditions

T_-^{(1)}(x,t,\lambda_j) = \gamma_j(t)\,T_+^{(2)}(x,t,\lambda_j), \qquad j = 1, 2, \ldots, n,   (3.2.8)

where

\gamma_j(t) = e^{-i\lambda_j^2 t}\,\gamma_j.   (3.2.9)

The proof of this statement follows the same arguments as that of Proposition 3.2.1. Now we formulate the main result.

Theorem 3.2.3. Let ψ(x, t) be the function defined in Proposition 3.2.1 or in Proposition 3.2.2. The sl(2, C)-valued functions U(x, t, λ) and V(x, t, λ) defined by

U(x,t,\lambda) = U_0(x,t) + \lambda U_1,   (3.2.10)

V(x,t,\lambda) = V_0(x,t) + \lambda V_1(x,t) + \lambda^2 V_2,   (3.2.11)

where

U_0(x,t) = \sqrt{\kappa}\begin{pmatrix} 0 & \overline{\psi(x,t)} \\ \psi(x,t) & 0 \end{pmatrix}, \qquad U_1 = \frac{1}{2i}\sigma_3,

V_1(x,t) = -U_0(x,t), \qquad V_2 = -U_1,

and

V_0(x,t) = i\sigma_3 U_0^2(x,t) + i\,\frac{\partial U_0}{\partial x}(x,t)\,\sigma_3,

satisfy the zero curvature equation (3.2.1) for any λ.

In the case b ≡ 0 one can derive explicit solutions of the zero curvature equation. We give the derivation of the corresponding formulae in the following sections, where we consider the nonlinear Schrödinger equation.

Sketch of the proof of Theorem 3.2.3. Using the function ψ(x, t) given in Proposition 3.2.1 or Proposition 3.2.2, we can construct the matrix U(x, t, λ) (3.2.10). We only have to prove that the matrix V(x, t, λ) of the form (3.2.11), together with U, satisfies the zero curvature equation. Let us define the matrix valued function

\mathcal{G}(x,t,\lambda) = E^{-1}(t,\lambda^2)\,G(x,t,\lambda)\,E(t,\lambda^2),   (3.2.12)

where E(t,\lambda^2) = \exp\bigl(\frac{\lambda^2 t}{2i}\sigma_3\bigr) and G(x, t, λ) is given by (3.1.9) with b(λ) replaced by b(λ, t). Consider the Riemann problem

\mathcal{G}(x,t,\lambda) = G_+(x,t,\lambda)\,G_-(x,t,\lambda),   (3.2.13)

which is uniquely solvable in the Schwartz space for each x and t. Let T_\pm(x,t,\lambda) be the functions of the form

T_+(x,t,\lambda) = G_+^{-1}(x,t,\lambda)\,E(x,\lambda)\,E^{-1}(t,\lambda^2),

T_-(x,t,\lambda) = G_-(x,t,\lambda)\,E(x,\lambda)\,E^{-1}(t,\lambda^2).

For each fixed t the functions T_\pm(x,t,\lambda) satisfy the linear problem corresponding to ψ(x, t) (see the sketch of the proof of Theorems 3.1.1 and 3.1.2). We shall show that they also satisfy a differential equation with respect to t for each fixed x. Since G_\pm(x,t,\lambda) satisfy the Riemann problem (3.2.13), we get

T_-(x,t,\lambda) = T_+(x,t,\lambda)\,G(\lambda),   (3.2.14)

and from this relation we derive

\frac{\partial T_-}{\partial t}(x,t,\lambda)\,T_-^{-1}(x,t,\lambda) = \frac{\partial T_+}{\partial t}(x,t,\lambda)\,T_+^{-1}(x,t,\lambda).   (3.2.15)

We remark that the functions \frac{\partial T_\pm}{\partial t}\,T_\pm^{-1} are non-singular in their respective domains of analyticity and hence, by (3.2.15), give rise to an entire function of λ. Proceeding in the same manner as in the sketch of the proof of Theorems 3.1.1 and 3.1.2, we use the integral representation

T_-(x,t,\lambda) = \Bigl(I + \int_{0}^{\infty}\Phi_-(x,t,s)\,e^{-i\lambda s}\,ds\Bigr)E(x,\lambda)\,E^{-1}(t,\lambda^2),   (3.2.16)

and derive the asymptotic expansion

T_-(x,t,\lambda) = \Bigl(I + \frac{\Phi_-(x,t,0)}{i\lambda} - \frac{1}{\lambda^2}\frac{\partial\Phi_-}{\partial s}(x,t,0) + O\bigl(\tfrac{1}{|\lambda|^3}\bigr)\Bigr)E(x,\lambda)\,E^{-1}(t,\lambda^2)   (3.2.17)

as |λ| → ∞. Differentiating with respect to t, we obtain

\frac{\partial T_-}{\partial t}(x,t,\lambda)\,T_-^{-1}(x,t,\lambda) = V(x,t,\lambda) + O\bigl(\tfrac{1}{|\lambda|}\bigr),   (3.2.18)

where

V(x,t,\lambda) = \lambda^2 V_2 + \lambda V_1 + V_0,   (3.2.19)

with

V_2 = \frac{i\sigma_3}{2}, \qquad V_1(x,t) = \frac{1}{2}\bigl[\Phi_-(x,t,0), \sigma_3\bigr] = -U_0(x,t),   (3.2.20)

and

V_0(x,t) = \frac{i}{2}\Bigl[\sigma_3, \frac{\partial\Phi_-}{\partial s}(x,t,0)\Bigr] + \frac{i}{2}\bigl[\Phi_-(x,t,0), \sigma_3\bigr]\Phi_-(x,t,0).   (3.2.21)

On the other hand, successively integrating by parts in (3.2.16) and differentiating with respect to x, we find

\frac{\partial T_-}{\partial x}(x,t,\lambda)\,T_-^{-1}(x,t,\lambda) = \frac{\lambda\sigma_3}{2i} + U_0(x,t) + \sum_{n=1}^{\infty}\frac{T_n(x,t)}{(i\lambda)^n} + O(|\lambda|^{-\infty})   (3.2.22)

as |λ| → ∞, Im λ < 0. In particular, we have

T_1(x,t) = \frac{1}{2}\Bigl(\Bigl[\sigma_3, \frac{\partial\Phi_-}{\partial s}(x,t,0)\Bigr] + \bigl[\Phi_-(x,t,0), \sigma_3\bigr]\Phi_-(x,t,0) + 2\frac{\partial\Phi_-}{\partial x}(x,t,0)\Bigr) = -iV_0(x,t) + \frac{\partial\Phi_-}{\partial x}(x,t,0).

However, T_− satisfies the linear problem corresponding to ψ(x, t), for which the left hand side of (3.2.22) equals \frac{\lambda\sigma_3}{2i} + U_0(x,t) exactly; it follows that

T_n(x,t) = 0, \qquad n = 1, 2, \ldots

This implies that

V_0(x,t) = -i\,\frac{\partial\Phi_-}{\partial x}(x,t,0).   (3.2.23)

From equation (3.2.20) we deduce that the anti-diagonal part of \frac{\partial\Phi_-}{\partial x}(x,t,0) is equal to -\frac{\partial U_0}{\partial x}(x,t)\,\sigma_3. To find the diagonal part, we use again that T_1(x,t) = 0 together with (3.2.20) and (3.2.23), which gives -\sigma_3 U_0^2(x,t). Finally we obtain

V_0(x,t) = i\sigma_3 U_0^2(x,t) + i\,\frac{\partial U_0}{\partial x}(x,t)\,\sigma_3.

Similarly, we obtain

\frac{\partial T_+}{\partial t}(x,t,\lambda)\,T_+^{-1}(x,t,\lambda) = V(x,t,\lambda) + O\bigl(\tfrac{1}{|\lambda|}\bigr).   (3.2.24)

Therefore T_\pm(x,t,\lambda) satisfy the linear system

\frac{\partial T_\pm}{\partial t}(x,t,\lambda) = V(x,t,\lambda)\,T_\pm(x,t,\lambda);

recalling that they are also solutions of

\frac{\partial T_\pm}{\partial x}(x,t,\lambda) = U(x,t,\lambda)\,T_\pm(x,t,\lambda),

it follows that the two linear systems are compatible, and this implies the zero curvature equation

\frac{\partial U}{\partial t} - \frac{\partial V}{\partial x} + [U, V] = 0.

3.3 The Nonlinear Schrödinger Equation

The notion of an integrable ODE is related to the existence of a sufficient number of first integrals. In particular, the integrability property is clear for finite dimensional Hamiltonian systems [3]. In the infinite dimensional case the situation is more complicated. One possible definition is the following [9]: a system of nonlinear differential equations is integrable if it can be represented as the consistency condition of an overdetermined linear system, which is equivalent to a zero curvature equation with spectral parameter. An example of an integrable system is the nonlinear Schrödinger equation (NLS equation), the dynamical system generated by the equation

i\frac{\partial\psi}{\partial t} = -\frac{\partial^2\psi}{\partial x^2} + 2\kappa|\psi|^2\psi \qquad (\kappa\in\mathbb{R})   (3.3.1)

with the initial condition

\psi(x,t)\big|_{t=0} = \psi(x).   (3.3.2)

As an application of the inverse problem discussed above, we construct some solutions of the NLS equation. This equation arises in various physical contexts; for example, it describes the effects of self-focusing of the envelope of a monochromatic plane wave propagating in a nonlinear medium [2]. The NLS equation also appears in the theory of surface waves on shallow water [4]. Equation (3.3.1) may also be considered as the Hartree-Fock equation for a one dimensional quantum Bose gas with point interaction. Physically, the constant κ in (3.3.1) plays the role of a coupling constant: the case κ < 0 corresponds to attractive interaction and κ > 0 to the repulsive case. The two cases are essentially different in optical applications, describing self-focusing or defocusing of the light rays [2]. Mathematically, these two cases are also very different, because the attractive case corresponds to a non-self-adjoint linear problem while the repulsive case is related to a self-adjoint linear problem. The nonlinear Schrödinger equation was first solved by the inverse scattering method by Zakharov and Shabat [13]. In our treatment we shall follow the approach of [7], using the results of the preceding sections.
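Equation (3.3.1) can be spot-checked symbolically with the spatially constant plane wave ψ = A e^{−2iκA²t} (A real, an illustrative exact solution, not taken from the text):

```python
import sympy as sp

x, t, A, kappa = sp.symbols('x t A kappa', real=True)
# plane-wave ansatz: |psi|^2 = A^2, so the nonlinearity reduces to a phase rotation
psi = A * sp.exp(-2 * sp.I * kappa * A**2 * t)
# residual of i psi_t + psi_xx - 2 kappa |psi|^2 psi, which should vanish
residual = sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) - 2 * kappa * A**2 * psi
assert sp.simplify(residual) == 0
```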
In the context of the integrability of the NLS equation, the key observation is that the NLS equation admits a zero curvature representation, or Lax pair.
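Before turning to the zero curvature representation, it is instructive to integrate (3.3.1) directly. The following sketch (a minimal split-step Fourier scheme; the Gaussian initial profile and the grid parameters are purely illustrative) evolves the NLS equation numerically and checks that the L² norm, one of the integrals of motion, is conserved:

```python
import numpy as np

# Split-step Fourier integration of (3.3.1): alternate the nonlinear phase
# rotation with the exact linear evolution in Fourier space.
kappa = 1.0
n, Lbox = 256, 40.0
x = np.linspace(-Lbox/2, Lbox/2, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=Lbox/n)

psi0 = np.exp(-x**2)                      # illustrative initial condition ψ(x)
psi = psi0.astype(complex)
dt, nsteps = 1e-3, 500
for _ in range(nsteps):
    psi *= np.exp(-2j*kappa*np.abs(psi)**2*dt/2)            # half nonlinear step
    psi = np.fft.ifft(np.exp(-1j*k**2*dt)*np.fft.fft(psi))  # exact linear step
    psi *= np.exp(-2j*kappa*np.abs(psi)**2*dt/2)            # half nonlinear step

dx_ = Lbox/n
norm0 = np.sum(np.abs(psi0)**2)*dx_
norm = np.sum(np.abs(psi)**2)*dx_
assert abs(norm - norm0) < 1e-8           # the L² norm is an integral of motion
```

Such a direct scheme, however, exposes none of the integrable structure; that is what the zero curvature representation below provides.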

3.4 Zero curvature representation for NLS equation

Consider the zero curvature equation on sl(2, C):

\frac{\partial U}{\partial t} - \frac{\partial V}{\partial x} + [U, V] = 0. \tag{3.4.1}

Suppose that U(x, t, λ) ∈ sl(2, C) is of the form

U(x, t, \lambda) = U_0(x, t) + \frac{\lambda}{2i}\sigma_3, \qquad U_0(x, t) = \sqrt{\kappa}\begin{pmatrix} 0 & \bar{\Phi} \\ \Phi & 0 \end{pmatrix}, \tag{3.4.2}

where Φ = Φ(x, t) is a differentiable complex valued function, λ ∈ C is a parameter and κ ∈ R is a constant. Recall that the representation (3.4.2) comes from the class of λ-parameter curves in sl(2, C) satisfying the involution property (see Chapter 2). Moreover, let us choose V(x, t, λ) ∈ sl(2, C) as follows:

V = V_0 - \lambda U_0 - \frac{\lambda^2}{2i}\sigma_3, \tag{3.4.3}

V_0 = \begin{pmatrix} \alpha & \beta \\ \gamma & -\alpha \end{pmatrix}, \tag{3.4.4}

where α, β and γ are some smooth functions of x and t. It is easy to see that U and V in (3.4.2) and (3.4.3) satisfy (3.4.1) for all λ if and only if

\frac{\partial U_0}{\partial t} - \frac{\partial V_0}{\partial x} + [U_0, V_0] = 0, \tag{3.4.5}

2i\frac{\partial U_0}{\partial x} + [\sigma_3, V_0] = 0. \tag{3.4.6}

The following observation clarifies our choice of the representation (3.4.3) for V.

Proposition 3.4.1. The differential operators

L = i\frac{\partial}{\partial x} - iU_0 + \frac{\lambda}{2}\sigma_3, \tag{3.4.7}

M = \frac{\partial}{\partial t} - \lambda\frac{\partial}{\partial x} - V_0, \tag{3.4.8}

commute for all λ if and only if U_0 and V_0 satisfy (3.4.5), (3.4.6).

The proof is a straightforward computation. Thus, the zero curvature equation (3.4.1) in the class of matrix functions of the form (3.4.2), (3.4.3) is reduced to the commutativity condition

[L, M] = 0.

Now, we proceed to the study of equations (3.4.5), (3.4.6). Putting (3.4.2), (3.4.3) into (3.4.5), (3.4.6), we get the following set of equations for the functions Φ, α, β, γ:

-\frac{\partial \alpha}{\partial x} + \sqrt{\kappa}\,(\bar{\Phi}\gamma - \Phi\beta) = 0, \tag{3.4.9}

\sqrt{\kappa}\,\frac{\partial \bar{\Phi}}{\partial t} - \frac{\partial \beta}{\partial x} - 2\sqrt{\kappa}\,\bar{\Phi}\alpha = 0, \tag{3.4.10}

\sqrt{\kappa}\,\frac{\partial \Phi}{\partial t} - \frac{\partial \gamma}{\partial x} + 2\sqrt{\kappa}\,\Phi\alpha = 0, \tag{3.4.11}

i\sqrt{\kappa}\,\frac{\partial \bar{\Phi}}{\partial x} + \beta = 0, \tag{3.4.12}

i\sqrt{\kappa}\,\frac{\partial \Phi}{\partial x} - \gamma = 0. \tag{3.4.13}

The last relations (3.4.12), (3.4.13) give

\beta = -i\sqrt{\kappa}\,\frac{\partial \bar{\Phi}}{\partial x}, \qquad \gamma = i\sqrt{\kappa}\,\frac{\partial \Phi}{\partial x}. \tag{3.4.14}

Then, from (3.4.9) it follows that

\frac{\partial \alpha}{\partial x} = i\kappa\,\frac{\partial}{\partial x}\big(\bar{\Phi}\Phi\big) \tag{3.4.15}

and hence

\alpha = i\kappa\,\bar{\Phi}\Phi + \frac{ic}{2}, \tag{3.4.16}

where c = c(t) is an arbitrary function of t. Putting (3.4.14), (3.4.16) into (3.4.10), (3.4.11), we observe that the compatibility condition for (3.4.10) and (3.4.11) implies that c(t) ∈ R,

and then (3.4.10) is equivalent to the following equation for Φ:

i\frac{\partial \Phi}{\partial t} = -\frac{\partial^2 \Phi}{\partial x^2} + 2\kappa|\Phi|^2\Phi + c(t)\Phi. \tag{3.4.17}

Under the transformation

\Phi \mapsto \psi = \Phi\,\exp\Big(i\int c(t)\,dt\Big),

this equation is reduced to the NLS equation. We arrive at the following result.

Theorem 3.4.2. If the matrix valued functions U and V of the form (3.4.2) and (3.4.3) satisfy the zero curvature equation (3.4.1), then

U_0 = \sqrt{\kappa}\begin{pmatrix} 0 & e^{i\theta}\bar{\psi} \\ e^{-i\theta}\psi & 0 \end{pmatrix}, \tag{3.4.18}

V_0 = \begin{pmatrix} i\big(\kappa|\psi|^2 + \frac{\theta'}{2}\big) & -i\sqrt{\kappa}\,e^{i\theta}\frac{\partial \bar{\psi}}{\partial x} \\ i\sqrt{\kappa}\,e^{-i\theta}\frac{\partial \psi}{\partial x} & -i\big(\kappa|\psi|^2 + \frac{\theta'}{2}\big) \end{pmatrix}, \tag{3.4.19}

where θ = θ(t) is a real function of t and ψ = ψ(x, t) is a solution of the NLS equation

i\frac{\partial \psi}{\partial t} = -\frac{\partial^2 \psi}{\partial x^2} + 2\kappa|\psi|^2\psi. \tag{3.4.20}

Conversely, an arbitrary real function θ(t) and a solution ψ(x, t) of the NLS equation define a solution (U, V) of the zero curvature equation (3.4.1) by the formulae (3.4.2), (3.4.3) and (3.4.18), (3.4.19).

Therefore, this theorem says that the solutions of the zero curvature equation (3.4.1) in the class of λ-parametric matrix functions (3.4.2), (3.4.3) are parametrized by one real function and a solution of the NLS equation.
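Theorem 3.4.2 can be illustrated numerically. The sketch below takes the plane-wave solution ψ = A e^{i(kx−ωt)} with ω = k² + 2κA² (which solves (3.4.20)), builds U and V by (3.4.2), (3.4.3) with θ = 0, and checks that the zero curvature residual vanishes up to finite-difference error; all parameter values are illustrative:

```python
import numpy as np

kappa, A, kw = 0.5, 1.0, 1.2
omega = kw**2 + 2*kappa*A**2               # dispersion relation of the plane wave
s3 = np.diag([1.0, -1.0]).astype(complex)

def psi(x, t):
    return A*np.exp(1j*(kw*x - omega*t))

def U(x, t, lam):                          # U = U0 + (λ/2i)σ3, (3.4.2)
    p = psi(x, t)
    U0 = np.sqrt(kappa)*np.array([[0, np.conj(p)], [p, 0]])
    return U0 + lam/2j*s3

def V(x, t, lam, h=1e-6):                  # V = V0 - λU0 - (λ²/2i)σ3, (3.4.3)
    p = psi(x, t)
    px = (psi(x + h, t) - psi(x - h, t))/(2*h)      # ∂ψ/∂x
    V0 = np.array([[1j*kappa*abs(p)**2, -1j*np.sqrt(kappa)*np.conj(px)],
                   [1j*np.sqrt(kappa)*px, -1j*kappa*abs(p)**2]])
    return V0 - lam*U(x, t, lam)

def residual(x, t, lam, h=1e-4):           # left-hand side of (3.4.1)
    Ut = (U(x, t + h, lam) - U(x, t - h, lam))/(2*h)
    Vx = (V(x + h, t, lam) - V(x - h, t, lam))/(2*h)
    Um, Vm = U(x, t, lam), V(x, t, lam)
    return Ut - Vx + Um @ Vm - Vm @ Um

for lam in (0.0, 0.7, 0.4 + 0.9j):
    assert np.max(np.abs(residual(0.3, 0.1, lam))) < 1e-5
```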

Corollary 3.4.3. The nonlinear Schrödinger equation (3.5.1) for ψ is equivalent to the zero curvature equation for the λ-parameter matrix functions

U = \begin{pmatrix} \frac{\lambda}{2i} & \sqrt{\kappa}\,\bar{\psi} \\ \sqrt{\kappa}\,\psi & -\frac{\lambda}{2i} \end{pmatrix}, \tag{3.4.21}

V = \begin{pmatrix} i\kappa|\psi|^2 & -i\sqrt{\kappa}\,\frac{\partial \bar{\psi}}{\partial x} \\ i\sqrt{\kappa}\,\frac{\partial \psi}{\partial x} & -i\kappa|\psi|^2 \end{pmatrix} - \lambda\begin{pmatrix} \frac{\lambda}{2i} & \sqrt{\kappa}\,\bar{\psi} \\ \sqrt{\kappa}\,\psi & -\frac{\lambda}{2i} \end{pmatrix}. \tag{3.4.22}

Remark 3. It follows from Proposition 3.4.1 and Corollary 3.4.3 that the NLS equation is equivalent to the commutativity of the linear operators L and M in (3.4.7), (3.4.8), which define the Lax pair for the NLS equation [7].
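The commutativity statement of Remark 3 can also be probed directly: applying [L, M] to an arbitrary smooth test vector, with U0, V0 built from a plane-wave NLS solution, should give zero up to discretization error. The test vector and all parameter values below are hypothetical; derivatives are central differences:

```python
import numpy as np

kappa, A, kw, lam = 0.5, 1.0, 1.2, 0.6
omega = kw**2 + 2*kappa*A**2               # plane wave solves (3.4.20)
s3 = np.diag([1.0, -1.0]).astype(complex)

def psi(x, t):
    return A*np.exp(1j*(kw*x - omega*t))

def U0(x, t):
    p = psi(x, t)
    return np.sqrt(kappa)*np.array([[0, np.conj(p)], [p, 0]])

def V0(x, t, h=1e-5):
    p = psi(x, t)
    px = (psi(x + h, t) - psi(x - h, t))/(2*h)
    return np.array([[1j*kappa*abs(p)**2, -1j*np.sqrt(kappa)*np.conj(px)],
                     [1j*np.sqrt(kappa)*px, -1j*kappa*abs(p)**2]])

def f(x, t):                               # arbitrary smooth test vector
    return np.array([np.exp(1j*x - 0.3*t**2), np.cos(x + t) + 0j])

def d_x(g, x, t, h=1e-3):
    return (g(x + h, t) - g(x - h, t))/(2*h)

def d_t(g, x, t, h=1e-3):
    return (g(x, t + h) - g(x, t - h))/(2*h)

def L(g):                                  # L = i∂/∂x − iU0 + (λ/2)σ3, (3.4.7)
    return lambda x, t: 1j*d_x(g, x, t) - 1j*U0(x, t) @ g(x, t) + lam/2*s3 @ g(x, t)

def M(g):                                  # M = ∂/∂t − λ∂/∂x − V0, (3.4.8)
    return lambda x, t: d_t(g, x, t) - lam*d_x(g, x, t) - V0(x, t) @ g(x, t)

commutator = lambda x, t: L(M(f))(x, t) - M(L(f))(x, t)
assert np.max(np.abs(commutator(0.4, 0.2))) < 1e-4
```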

3.5 Application of the inverse problem to the NLS equation

Let us consider the dynamical system generated by the nonlinear Schrödinger equation

i\frac{\partial \psi}{\partial t} = -\frac{\partial^2 \psi}{\partial x^2} + 2\kappa|\psi|^2\psi, \qquad \kappa \in \mathbb{R}. \tag{3.5.1}

Here ψ(x) is a given complex valued function belonging to the Schwartz space S(R). We shall apply the inverse problem, discussed in Chapter 3, to reconstruct the solution ψ(x, t) of the NLS equation from the initial data ψ. First, we formulate the linear problem corresponding to ψ:

\frac{dT}{dx} = U(x, \lambda)\,T, \qquad T(x, y, \lambda)\big|_{x=y} = I, \tag{3.5.2}

where

U(x, \lambda) = U_0(x) + \frac{\lambda}{2i}\sigma_3 = \sqrt{\kappa}\begin{pmatrix} 0 & \bar{\psi}(x) \\ \psi(x) & 0 \end{pmatrix} + \begin{pmatrix} \frac{\lambda}{2i} & 0 \\ 0 & -\frac{\lambda}{2i} \end{pmatrix}.

Since ψ(x) ∈ S(R), we have U_0 ∈ L_1^{n×n}, so the system (3.5.2) belongs to the decreasing case treated in Chapter 2. As we established in Section (??), the monodromy matrix M(λ) has the form

M(\lambda) = \begin{pmatrix} a(\lambda) & \varepsilon\bar{b}(\lambda) \\ b(\lambda) & \bar{a}(\lambda) \end{pmatrix}, \tag{3.5.3}

where a(λ), b(λ) ∈ S(R_λ) and |a(λ)|² − ε|b(λ)|² = 1, with ε as in (3.5.6). From the linear problem we derive the spectral data, proceeding in a similar manner as in Section (2.2.3). As in Chapter 2, we have to consider two cases: 1) if κ ≥ 0, the spectral data consist only of the function b(λ); 2) if κ < 0, the spectral data are the set {b(λ), λ_j, γ_j : j = 1, 2, ..., n}, where λ_j are the zeroes of a(λ) in its extended analytic domain and γ_j are related to the values of the Jost solutions at the λ_j. Then we define the time dependent function

b(t, \lambda) = e^{-i\lambda^2 t}\,b(\lambda), \tag{3.5.4}

if κ ≥ 0, and we add the functions

\gamma_j(t) = e^{-i\lambda_j^2 t}\,\gamma_j \tag{3.5.5}

and the complex numbers λ_j if κ < 0. For each fixed t, the functions b(t, λ) and γ_j(t) satisfy the conditions of Theorem 3.1.1 or Theorem 3.1.2. Thus, there exists a function ψ(x, t) such that

M(t, \lambda) = \begin{pmatrix} a(t, \lambda) & \varepsilon\bar{b}(t, \lambda) \\ b(t, \lambda) & \bar{a}(t, \lambda) \end{pmatrix}, \tag{3.5.6}

where

a(\lambda) = \begin{cases} \displaystyle\prod_{j=1}^{n} \frac{\lambda - \lambda_j}{\lambda - \bar{\lambda}_j}\, \exp\Big(\frac{1}{2\pi i}\,\mathrm{p.v.}\int_{-\infty}^{\infty} \frac{\ln\big(1 - |b(\mu)|^2\big)}{\mu - \lambda}\,d\mu\Big) & \text{if } \varepsilon = 1, \\[2mm] \exp\Big(\dfrac{1}{2\pi i}\,\mathrm{p.v.}\displaystyle\int_{-\infty}^{\infty} \frac{\ln\big(1 - |b(\mu)|^2\big)}{\mu - \lambda}\,d\mu\Big) & \text{if } \varepsilon = -1, \end{cases} \tag{3.5.7}

is the monodromy matrix corresponding to ψ(x, t), with matrix coefficient

U(x, t, \lambda) = \begin{pmatrix} \frac{\lambda}{2i} & \sqrt{\kappa}\,\bar{\psi}(x, t) \\ \sqrt{\kappa}\,\psi(x, t) & -\frac{\lambda}{2i} \end{pmatrix}. \tag{3.5.8}

By Theorem 3.2.3, if we construct the matrix

V(x, t, \lambda) = \begin{pmatrix} i\kappa|\psi(x, t)|^2 & -i\sqrt{\kappa}\,\frac{\partial \bar{\psi}(x,t)}{\partial x} \\ i\sqrt{\kappa}\,\frac{\partial \psi(x,t)}{\partial x} & -i\kappa|\psi(x, t)|^2 \end{pmatrix} - \lambda\begin{pmatrix} \frac{\lambda}{2i} & \sqrt{\kappa}\,\bar{\psi}(x, t) \\ \sqrt{\kappa}\,\psi(x, t) & -\frac{\lambda}{2i} \end{pmatrix}, \tag{3.5.9}

then U and V satisfy the zero curvature equation (3.4.1) for all λ. It now follows from Corollary 3.4.3 that ψ(x, t) is a solution of the NLS equation with ψ(x, 0) = ψ(x).

Summarizing, to reconstruct the solution ψ(x, t) of the NLS equation satisfying the initial condition ψ(x), we begin with the linear problem corresponding to ψ and derive the spectral data, depending on the sign of κ. Using these data, we define the time dependent functions (3.5.4) and (3.5.5). For each t we apply the inverse problem analyzed in Chapter 3, and we obtain the complex valued function ψ(x, t) that is the solution of the NLS equation with the initial condition ψ(x). We present the method for solving the initial value problem for the NLS equation in the following commutative diagram:

\begin{array}{ccc}
\psi(x) & \xrightarrow{\ \text{linear problem}\ } & \left\{ \begin{array}{ll} b(\lambda) & \text{if } \kappa \ge 0 \\ \{b(\lambda), \lambda_j, \gamma_j : j = 1, \dots, n\} & \text{if } \kappa < 0 \end{array} \right. \\
\Big\downarrow & & \Big\downarrow \\
\psi(x, t) & \xleftarrow{\ \text{inverse problem}\ } & \left\{ \begin{array}{ll} b(t, \lambda) & \text{if } \kappa \ge 0 \\ \{b(t, \lambda), \lambda_j, \gamma_j(t) : j = 1, \dots, n\} & \text{if } \kappa < 0 \end{array} \right.
\end{array}

To complete our analysis, we give some observations about the case κ = 0. Firstly, the NLS equation reduces to the linear Schrödinger equation

i\frac{\partial \psi}{\partial t} = -\frac{\partial^2 \psi}{\partial x^2}. \tag{3.5.10}

Let us consider the asymptotic behavior of the transition coefficients a(λ) and b(λ) as κ → 0. From the integral representation (2.2.44) for the Jost solution T_-(x, λ) we obtain

T_-(x, \lambda) = E(x, \lambda) + \int_{-\infty}^{x} E(x - z, \lambda)\,U_0(z)\,E(z, \lambda)\,dz + O(|\kappa|). \tag{3.5.11}

Letting x tend to infinity in this formula, we get

a(\lambda) = 1 + O(|\kappa|), \qquad b(\lambda) = \sqrt{\kappa}\int_{-\infty}^{\infty} \psi(x)\,e^{-i\lambda x}\,dx + O(|\kappa|). \tag{3.5.12}

Thus the discrete spectrum disappears, and the linear problem (3.5.2) can be interpreted as the Fourier transform. Moreover, the time dynamics of b(t, λ) is given by the Fourier transform of the solution ψ(x, t) of (3.5.10) with initial condition ψ(x). Therefore, in the general case κ ≠ 0, the inverse problem can be interpreted as a nonlinear analogue of the Fourier method.
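The Fourier interpretation (3.5.12) can be checked numerically: integrate the linear problem (3.5.2) for a small κ, extract the reduced monodromy, and compare b(λ) with √κ times the Fourier transform of ψ. The sketch below assumes a Gaussian profile (whose Fourier transform is known in closed form) and illustrative values of κ and λ:

```python
import numpy as np

kappa, lam = 1e-4, 0.8                     # small coupling: Fourier regime

def psi(x):
    return np.exp(-x**2)                   # illustrative Schwartz-class profile

def Umat(x):
    U0 = np.sqrt(kappa)*np.array([[0, np.conj(psi(x))], [psi(x), 0]])
    return U0 + lam/2j*np.diag([1.0, -1.0]).astype(complex)

# RK4 integration of dT/dx = U(x, λ) T from -L to L with T(-L) = I
L, n = 8.0, 4000
h = 2*L/n
T = np.eye(2, dtype=complex)
x = -L
for _ in range(n):
    k1 = Umat(x) @ T
    k2 = Umat(x + h/2) @ (T + h/2*k1)
    k3 = Umat(x + h/2) @ (T + h/2*k2)
    k4 = Umat(x + h) @ (T + h*k3)
    T = T + h/6*(k1 + 2*k2 + 2*k3 + k4)
    x += h

assert abs(np.linalg.det(T) - 1) < 1e-6    # tr U = 0, so det T ≡ 1

# reduced monodromy M = E(L)^{-1} T E(-L) with E(s) = exp(λ s σ3 / 2i)
E = lambda s: np.diag([np.exp(lam*s/2j), np.exp(-lam*s/2j)])
Mred = np.linalg.inv(E(L)) @ T @ E(-L)
a, b = Mred[0, 0], Mred[1, 0]

# prediction (3.5.12): b ≈ √κ ψ̂(λ); for the Gaussian, ψ̂(λ) = √π e^{-λ²/4}
b_fourier = np.sqrt(kappa)*np.sqrt(np.pi)*np.exp(-lam**2/4)
assert abs(a - 1) < 1e-2
assert abs(b - b_fourier) < 0.02*abs(b_fourier)
```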

3.6 Non solitonic and solitonic solutions of NLS equation

In this part we shall derive formulae for the soliton solutions of the NLS equation. Following [7], a soliton is a wave solution with the following properties:

1. Propagation does not change its shape.

2. It has finite energy, and all the integrals of motion are finite.
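The soliton formulae derived below can be tested directly: the profile built from a single zero λ1 with Im λ1 > 0 and a coefficient γ1 (see (3.6.3)) should satisfy the NLS equation with κ < 0. The following sketch, with hypothetical values of λ1 and γ1, checks the residual by finite differences:

```python
import numpy as np

kappa = -1.0
lam1 = 0.3 + 0.8j                          # hypothetical discrete eigenvalue, Im λ1 > 0
gam1 = 1.0 + 0.5j                          # hypothetical coefficient γ1

def psi(x, t):                             # one-soliton formula as in (3.6.3)
    g = gam1*np.exp(-1j*lam1*x - 1j*lam1**2*t)      # γ1(x, t)
    return (2*lam1.imag/np.sqrt(abs(kappa)))*g/(1 + abs(g)**2)

def nls_residual(x, t, h=1e-3):
    pt = (psi(x, t + h) - psi(x, t - h))/(2*h)
    pxx = (psi(x + h, t) - 2*psi(x, t) + psi(x - h, t))/h**2
    p = psi(x, t)
    return 1j*pt + pxx - 2*kappa*abs(p)**2*p        # i ψt + ψxx − 2κ|ψ|²ψ

for (x, t) in [(0.0, 0.0), (1.5, 0.7), (-2.0, 1.3)]:
    assert abs(nls_residual(x, t)) < 1e-4
```

Writing γ1(x, t) in polar form shows that |ψ| is a sech profile travelling with constant velocity, which is exactly the shape-preserving property 1 above.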

In the physics literature the term soliton sometimes refers to a particle-like solution. One important meaning of the transition coefficients is that the ratio b(λ)/a(λ) is the reflection coefficient. If

b \equiv 0, \tag{3.6.1}

the linear problem is called reflectionless. Now we assume (3.6.1) and find soliton solutions of the NLS equation. We consider only κ < 0, because κ > 0 produces trivial solutions. We begin with a single zero λ_1, Im λ_1 > 0, and one complex coefficient γ_1 ≠ 0. From Chapter 3, under these conditions, we have that

\psi(x) = \frac{2\,\mathrm{Im}\,\lambda_1}{\sqrt{|\kappa|}}\,\frac{\gamma_1(x)}{1 + |\gamma_1(x)|^2}, \qquad \text{with } \gamma_1(x) = e^{-i\lambda_1 x}\gamma_1. \tag{3.6.2}

Applying the inverse problem, we get the solution of the NLS equation

\psi(x, t) = \frac{2\,\mathrm{Im}\,\lambda_1}{\sqrt{|\kappa|}}\,\frac{\gamma_1(x, t)}{1 + |\gamma_1(x, t)|^2}, \qquad \text{where } \gamma_1(x, t) = e^{-i\lambda_1 x - i\lambda_1^2 t}\gamma_1. \tag{3.6.3}

Bibliography

[1] Abraham R., Marsden J. E., Ratiu T., 1988, Manifolds, Tensor Analysis, and Applications, Springer, New York.

[2] Akhmanov S. A., Khokhlov R. V., Sukhorukov A. P., 1982, in Laser Handbook, F. T. Arecchi, E. O. Schulz-Dubois (eds.), Amsterdam, Holland.

[3] Arnol'd V. I., 1984, Ordinary Differential Equations, Springer-Verlag, New York.

[4] Benjamin T.B., Feir J. E., 1966 Fluid Mech.

[5] Dávila R. G., Flores E. R., Vorobjev Y., 2006, Álgebra Lineal, Teoría y problemas, Colección de Textos Académicos, Editorial Unison, Hermosillo, Sonora, México.

[6] Duistermaat J. J., Kolk J. A. C., 2000, Lie Groups, Springer-Verlag, Berlin Heidelberg.

[7] Faddeev L. D., Takhtajan L. A., 1987, Hamiltonian Methods in the Theory of Solitons, Classics in Mathematics, Springer-Verlag, Berlin Heidelberg.

[8] Guest M. A., 1997, Harmonic Maps, Loop Groups and Integrable Systems, Cambridge University Press, United Kingdom.

[9] Its A. R., December 2003, The Riemann-Hilbert Problem and Integrable Systems, Notices of the AMS.

[10] Roxin E. O., 1968, Ecuaciones diferenciales ordinarias y teoría de control, Editorial Universitaria de Buenos Aires, Argentina.

[11] Verhulst F., 1985, Nonlinear Differential Equations and Dynamical Systems, Springer-Verlag, Berlin Heidelberg.

[12]

[13] Zakharov V. E., Shabat A. B. 1971 Zh. Eksp. Teor. Fiz.
