<<

Validated spectral stability via conjugate points

Margaret Beck∗ Jonathan Jaquette†

May 25, 2021

Abstract

Classical results from Sturm-Liouville theory state that the number of unstable eigenvalues of a scalar, second-order linear operator is equal to the number of associated conjugate points. Recent work has extended these results to a much more general setting, thus allowing for spectral stability of nonlinear waves in a variety of contexts to be determined by counting conjugate points. However, in practice, it is not yet clear whether it is easier to compute conjugate points than to just directly count unstable eigenvalues. We address this issue by developing a framework for the computation of conjugate points using validated numerics. Moreover, we apply our method to a parameter-dependent system of bistable equations and show that there exist both stable and unstable standing fronts. This application can be seen as complimentary to the classical result via Sturm-Louiville theory that in scalar reaction-diffusion equations pulses are unstable whereas fronts are stable, and to the more recent result of “Instability of pulses in gradient reaction-diffusion systems: a symplectic approach,” by Beck et. al., that symmetric pulses in reaction-diffusion systems with gradient nonlinearity are also necessarily unstable.

1 Introduction

arXiv:2105.06895v2 [math.DS] 23 May 2021 Understanding the stability of solutions is important for predicting the long-time behavior of partial differential equation models of physical systems. A key step is analyzing spectral stability, which means determining whether the linearization L of the PDE about a given solution of interest has any spectral elements with positive real part. In many cases, it is useful to divide the spectrum of L into two disjoint subsets, the essential and point spectrum:

σ(L) = σess(L) ∪ σpt(L). The essential spectrum is often relatively easy to compute, so determining spectral stability amounts to looking for unstable eigenvalues [KP13].

∗Department of and , Boston University; [email protected]. †Department of Mathematics and Statistics, Boston University; [email protected].

1 Scalar, second-order eigenvalue problems can be understood by classical Sturm-Liouville theory. A simple such problem is of the form

Lu := uxx + q(x)u = λu, x ∈ [a, b], u(a) = u(b) = 0, λ ∈ R, (1.1) with q(x) smooth, but the following discussion is applicable to a much larger class of equa- tions and boundary conditions [BR89]. Using Pr¨ufer coordinates, which for this equation are just the polar coordinates

u(x; λ) = r(x; λ) sin[θ(x; λ)], ux(x; λ) = r(x; λ) cos[θ(x; λ)], one obtains a system for (r, θ) given by

r0 = r(1 + λ − q(x)) cos θ sin θ, θ0 = cos2 θ + (q(x) − λ) sin2 θ. (1.2)

Eigenvalues are values of λ for which a solution of (1.1) exists. In (1.2), {r = 0} is invariant, the θ equation decouples, and so the existence of an eigenvalue can be reformulated as follows. Consider the initial condition θ(a; λ) = 0 and flow forward to x = b; λ is an eigenvalue if and only if θ(b; λ) ∈ {jπ}j∈Z. The form of (1.2) implies that, if λ is negative and sufficiently large, then θ0 > 0 and θ will be forced to oscillate. Thus, there exists at least one eigenvalue, which we label λk, such that θ(b; λk) = (k + 1)π. Increasing λ reduces the oscillation in λ, and since θ(b; λ) varies continuously in λ, we obtain an increasing sequence ∞ of eigenvalues {λj}j=0 such that θ(b; λj) = (j + 1)π. For λ > λ0, θ no longer has enough “time” to complete one (half) oscillation, and so no more eigenvalues exist. One obtains a result regarding the sequence of eigenvalues λ0 > λ1 > . . . and corresponding eigenfunctions u0, u1,... that says that uk has exactly k zeros. From a stability perspective, this result is very powerful, as can be seen by the following ex- ample. Suppose L is the linearization of a scalar reaction-diffusion equation, vt = vxx +f(v), about a stationary solution ϕ. Then Lϕ0 = 0. In other words, u = ϕ0 is the eigenfunc- tion associated with eigenvalue zero. If ϕ is a pulse, then ϕ0 has exactly one zero. Hence, 0 ϕ = u1, λ1 = 0, λ0 > 0 and an unstable eigenvalue exists. On the other hand, if ϕ is a monotonic front, then λ0 = 0 and no unstable eigenvalues exist. It is worth noting that this (in)stability result is forced by the topology, and it does not depend on the details of the function f or the underlying solution ϕ. This problem can be reformulated in terms of conjugate points. Instead of (1.1), consider the same problem but on a variable domain with λ = λ∗ fixed,

uxx + q(x)u = λ∗u, x ∈ [a, s], u(a) = u(s) = 0, a ≤ s ≤ b. (1.3)

A conjugate point is a value of s such that the above equation has a solution. Equation (1.2) is still relevant, with the value of s controlling how much “time” θ has to oscillate.

2 If we choose λ∗ = λk, we find sk := b is a conjugate point. Decreasing s produces a k sequence of conjugate points {sj}j=0, with θ(sj; λk) = (j + 1)π. This leads to a relationship between conjugate points and eigenvalues: the number of conjugate points that exist for some fixed λ∗ is equal to the number of eigenvalues λ such that λ > λ∗. Choosing λ∗ = 0 allows one to count unstable eigenvalues by counting conjugate points. In the scalar case, counting conjugate points is equivalent to counting the number of zeros of the eigenfunction associated with the zero eigenvalue. Because the above perspective relies on the reformulation of the eigenvalue problem in terms of polar coordinates, it is not immediately clear how to generalize this to systems of n equations, ie an eigenvalue problem Lu = λu with u ∈ R and n > 1. It turns out that this generalization is provided by a topological invariant called the Maslov index [Arn67, Arn85, 2n Bot56]. Given a symplectic form ω on R , ie a nondegenerate, bilinear, skew-symmetric 2n form, the set of Lagrangian subspaces ` in R with respect to ω is defined by

2n Λ(n) = {` ⊂ R : dim(`) = n, ω(U, W ) = 0 ∀ U, W ∈ `}. (1.4)

It turns out that the fundamental group of Λ(n) is the integers, π1(Λ(n)) = Z. This allows one to define a phase in Λ(n), similar to the angle θ above, and the Maslov index counts the winding of this phase. There are many recent results that use the Maslov index to prove that one can count unstable eigenvalues by counting conjugate points. See, for example, [BCC+20, BCJ+18, CJLS16, CJ20, CJM15, DJ11, How20, HLS18, JLM13]. The exact statements of the results and their proofs depend on the details of the situation being considered. Notably, some of the results d are valid on spatial domains Ω ⊂ R with d ≥ 1 [CJLS16, CJM15, DJ11]. The vast majority of techniques concerning the stability of nonlinear waves are valid primarily in one spatial domain, and so results in higher space dimensions are of particular importance. In fact, it was the multidimensional result [DJ11] that inspired the recent extensive activity in using the Maslov index to study stability. Although these results are very interesting in their own right, if it is just as difficult to count conjugate points as it is to count unstable eigenvalues, then from a stability perspective not much has been gained. This work addresses exactly this issue. The primary tool for studying spectral stability of nonlinear waves is arguably the Evans function [San02]. It has been used extensively in one-dimensional domains, and in multidi- d−1 mensional domains with a distinguished direction, such as a channel R×Ω ⊂ R×R with Ω compact [OS10]. Although there are some results in more general multidimensional do- mains [DN08], it is not clear how useful they are in practice. This is one key reason why the possibility of using the Maslov index to study stability, in particular for multidimensional problems, is so interesting.

3 There are very few results where the Maslov index has been used to analyze stability in a context where the Evans function appears to not be similarly applicable, and all are in one spatial dimension. For example, [CH14] applies to a specific FitzHugh-Nagumo model, and it relies heavily on the activator-inhibitor structure present there. Another, [BCJ+18], applies to systems of reaction-diffusion equations with gradient nonlinearity. This latter result is used to show that symmetric pulse solutions of such systems are necessarily unstable, which is the generalization of the above-mentioned result from Sturm-Liouville theory to the system case. The main goal of this paper is to provide a framework for rigorously counting conjugate points using validated numerics in the setting of [BCJ+18]. This will allow for the Maslov index to be used to determine stability in more applications. Furthermore, we argue that, when applicable, this a more efficient method for determining stability than the Evans function. This is because counting unstable eigenvalues using the Evans function requires computing its winding number as λ varies in a contour in the complex plane, whereas count- ing conjugate points requires computation only for one fixed value λ = 0. For comparison, there appear to be only two instances of validated numerics for the Evans function, and in those works stability calculations are reported to take 425 hours [AK15] and 4 hours [BZ16], respectively. Our method for rigorously counting conjugate points currently takes less than half a minute, a significant improvement. It may be possible to speed up Evans function computations, eg using [BNS+18], but this has not yet been done. For further reading on the validated computation of stability in PDEs we refer to the book [NPW19] and references cited therein. The general setting in which our method will apply is that of [BCJ+18], which we now present. Consider an eigenvalue problem

n λu = Duxx + M(x)u =: Lu, u ∈ R , x ∈ R (1.5) where D is a diagonal matrix with positive entries and M satisfies Hypothesis 1.1.

Hypothesis 1.1. The matrix M(x) satisfies

n×n (H1) M ∈ C(R, R ) and is for each x a symmetric matrix with real entries.

(H2) The limits limx→±∞ M(x) =: M± exist, are negative definite, and have distinct eigenvalues.

−η |x| (H3) There exist constants CM and ηM such that |M(x) − M±| ≤ CM e M for ± x ∈ R , respectively.

Moreover, we require that the operator L has a simple eigenvalue at zero.

4 2 2 Hypothesis 1.2. The operator L, defined on X = L (R) with domain Y = H (R), satisfies (H4) λ = 0 is an eigenvalue of geometric and algebraic multiplicity one.

2 2 We will study the spectrum of L on X = L (R) with domain Y = H (R). Hypothesis (H1) implies that L is self adjoint, and so we can take λ ∈ R. This is necessary for the symplectic framework used below. If, for example, L is the linearization of a system of reaction-diffusion equations with gradient nonlinearity,

2 n vt = Dvxx + ∇G(v),G ∈ C (R , R), (1.6) about a stationary solution ϕ, then (H1) holds. The assumption in (H2) that M± are negative definite is necessary and sufficient for ensuring that the essential spectrum of L lies in the open left half plane and is bounded away from the imaginary axis. This is a natural assumption; if the essential spectrum is unstable, then stability has already been determined. If the essential spectrum is only marginally stable, meaning it lies in the left half plane but touches the imaginary axis, this can also be handled using the modification in [BCJ+18, Remark 1.3], but we do not consider this further here. Assumption (H3) appears at first glance to be stronger than the corresponding hypothesis stated in [BCJ+18], where 1 ± it is only required that the functions x → M(x) − M± be in L (R ), which implies that the associated first order eigenvalue problem has exponential dichotomies on the half lines. 2 However, for the primary setting of interest here, (1.6), the fact that M± = ∇ G(ϕ±) < 0 implies that the underlying solution ϕ has constant limits ϕ± = limx→±∞ ϕ(x) that are approached exponentially fast, which in turn implies (H3). It should be possible to extend 1 our method to the L case by adjusting some of the estimates for the constants L± that appear below. Hypothesis (H4) did not appear in [BCJ+18]. If ϕ is a stationary solution to (1.6), then ϕ0 will be an eigenfunction with eigenvalue zero. Hence, part of (H4) is natural. The additional assumption that zero is a simple eigenvalue plays a crucial role below, particularly in the determination of L+ in §2.2. It is likely possible to adapt our method to cases with higher multiplicity, but we have not yet explored that further. In particular, at the moment it is not clear the extent to which our method could be used to study bifurcations. The eigenvalue problem (1.5) can be written as a first order eigenvalue problem of the form ! 2n 0 −In Ux = JB(x; λ)U, U ∈ R ,J = , λ ∈ R, (1.7) In 0 where U = (u, Dux) and ! λ − M(x) 0 B(x; λ) = (1.8) 0 −D−1

5 2n is symmetric. Consider the canonical symplectic form on R defined by

2n 2n ω : R × R → R, ω(U, W ) = hU, JW i, (1.9)

2n where h·, ·i is the usual inner product in R . We now connect the evolution of (1.7) with the set of Lagrangian planes defined in (1.4).

Assumption (H2) implies that the limits limx→±∞ JB(x; λ) = JB±(λ) exist and are hyper- bolic. In order for U to correspond with an eigenfunction, we therefore need |U(x; λ)| → 0 u as x → ±∞. Let E−(x; λ) denote the subspace of solutions that is asymptotic, as x → −∞, to the unstable subspace of JB−(λ). This is exactly the subspace of solutions that decay u as x → −∞. It turns out that this subspace is Lagrangian: E−(x; λ) ∈ Λ(n) for each 2 u (x, λ) ∈ R . To see this, notice that if U, W ∈ E−(x; λ), then d ω(U(x),W (x)) = hU 0,JW i + hU, JW 0i = hJBU, JW i + hU, J 2BW i = 0, dx since J −1 = J ∗ = −J and B∗ = B. Since U(x),W (x) → 0 as x → −∞, we see u ω(U(x),W (x)) = 0 for all x. The fact that dim(E−) = n follows from assumptions (H1) and (H2); see the proof of Lemma 2.1 below. The subspace known as the Dirichlet subspace is also Lagrangian, 2n D = {(u, w) ∈ R : u = 0}, D ∈ Λ(n). (1.10) For fixed λ = 0, conjugate points are defined to be values of s such that there is a nontrivial u + intersection between these two Lagrangian planes: E−(s; 0) ∩ D 6= {0}. See [BCJ 18, Def 2.7].

Theorem 1. [BCJ+18, Thm 3.1, Thm 4.1]. The number of positive eigenvalues of L, defined in (1.5), is equal to the number of conjugate points, ie the number of values s ∈ R u ∗ such that E−(s; 0) ∩ D= 6 {0}. Moreover, there exists an L+ sufficiently large such that, ∗ for any L+ > L+, the number of conjugate points in (−∞, ∞) is equal to the number of conjugate points in (−∞,L+).

This result is proven in [BCJ+18] using the Maslov index. However, we will not need to use the Maslov index here. Therefore, we refer to [BCJ+18] for a definition of the Maslov index, as well as for the proof of the above Theorem. The goal of this work is to provide a method for rigorously computing the number of conjugate points on (−∞,L+). To do so, there are several issues that must be overcome. First, since computations will be done numerically, they can only be done on a finite interval,

(−L−,L+). The existence of the upper limit L+ is given by Theorem 1, but we do not know how big sufficiently large is. We must therefore show that is possible to choose a finite value

6 L−, and then prescribe a way for choosing explicit values L± so that we capture all of the conjugate points. In other words, we need to know when to start and stop the computation. Once this has been accomplished, we need an for detecting conjugate points on u (−L−,L+). This will be accomplished via the fame matrix associated with E−(x; 0). More specifically, let ! A1(x) n×n ,A1(x),A2(x) ∈ R A2(x) u be a matrix whose columns form a basis for E−(x; 0). We have the following Lemma, which is well known.

u Lemma 1.3. E−(x; 0) ∩ D= 6 {0} if and only if detA1(x) = 0.

u n Proof. If E−(x; 0) ∩ D= 6 {0} then there exists a nonzero vector u ∈ R such that ! ! A (x) 0 1 u = . A2(x) u˜

This implies that A1(x)u = 0, and so detA1(x) = 0. On the other hand, if detA1(x) = 0, n u then there exists a nonzero u ∈ R such that A1(x)u = 0. As E−(x; 0) is n dimensional its frame matrix is of full rank, having a trivial kernel. That is to sayu ˜ 6= 0 in the above u equation, whereby E−(x; 0) ∩ D= 6 {0}.

u Thus, once we have the frame matrix for E−(x; 0), we simply need to locate values of s ∈ [−L−,L+] such that detA1(s) = 0. In other words, we will have reduced the determination of stability to the computation of zeros of the scalar valued function detA1(x) on the finite + interval [−L−,L+]. We note that any zeros of this function will be isolated; see [BCJ 18, Remark 3.5]. We now summarize our numerical methodology for computing the spectral stability of stationary solutions. First, one must compute a stationary solution ϕ to the PDE (1.6), which is then used to define the eigenvalue problem in (1.7). After fixing L−,L+ ≥ 0, one u next computes a numerically approximate frame matrix for E−(x; 0) for x ∈ [−L−,L+]. To u do so, first we define an approximate frame matrix for E−(−L−; 0) by taking as an initial value a matrix A(−L−) whose columns are the n-unstable eigenvectors of JB−(0). Then  A1(x)  for −L− ≤ x ≤ L+, the frame matrix A(x) = is defined by numerically integrating A2(x) each of the columns forward in time according to the differential equation in (1.7). Lastly, one counts the number of times the scalar valued function detA1(x) equals zero on the finite interval x ∈ [−L−,L+], which by Lemma 1.3 is equal to the number of conjugate points, and hence by Theorem 1 also equals the number of positive eigenvalues of L.

7 To produce a computer assisted proof several things need to be made rigorous. At the most basic level, it is generally the case that even the stationary solution ϕ cannot be computed exactly. Beyond that, one must quantify the initial approximation error of the frame matrix, how this error grows when numerically integrating (1.7), and how large −L− and L+ must be to ensure our count of conjugate points is complete. To that end we use validated numerics to rigorously compute a posteriori error bounds. Thus, we are able to quantify the error produced in our numerical ranging from the imprecision in solving initial value problems, down to the rounding error from finite precision arithmetic. Early landmark results of validated numerics include the computer assisted proofs of the universality of the Feigenbaum constants [LI82] and Smale’s 14th problem on the nature of the Lorenz attractor [Tuc02]. Validated numerics, also referred to as rigorous numerics, have found great success in providing computer assisted proofs of non-perturbative results in dynamics, both finite and infinite dimensional, and we refer the interested reader to [vdBL15, GS19, NPW19] for further details. As mentioned earlier, the foundation of our numerical method begins with computing the stationary solution ϕ, a homoclinic/heteroclinic solution to the spatial association with stationary solutions of (1.6). The computation of connecting orbits has been, and continues to be, an active area of research within the validated numerics com- munity, see for example [Ois98, WZ03, vdBDLMJ15, AK15, CZ17, vdBBLM18, vdBS20] and the references cited therein. In Section 3 we describe the approach taken in computing and proving the existence of ϕ and, in addition, the validated computation of its conjugate points and computer assisted proof of its spectral (in)stability. We note that, although our method is developed for equations of the form (1.5), we expect that it could be applied more broadly. In particular, there are equations that are not systems of reaction-diffusion equations with gradient structure whose associated eigenvalue problems possess symplectic structure similar to that of (1.7). See [CDB09, CDB11, How20] for a variety of examples, including connections between those eigenvalue problems and the Maslov index. Moreover, as mentioned above, the relationship between unstable eigenvalues and conju- gate points has also been proven in several contexts with many space dimensions [CJLS16, CJM15, DJ11]. Recent work has shown that the corresponding path of Lagrangian sub- u spaces, analogous to E−(x; 0) here, can be understood via an ill-posed dynamical system [BCJ+20, BCJ+21]. Validated numerics have been applied in other settings involving ill- posed dynamical systems [CGL18, AK20], and so it is possible that the methods of this paper could be utilized to study multidimensional stability in the future.

The remainder of the paper is organized as follows. In §2 we explicitly characterize the

8 values of L±. In §3 we describe the associated rigorous computations for determining stability by first computing the underlying wave itself and then computing the associated conjugate points. In §4 we apply our results to an example problem, which shows that fronts in reaction-diffusion systems of the form (1.6) need not be stable, as they are in the scalar case. Although not surprising, this can be viewed as complementary to the result in [BCJ+18, §5] that shows that symmetric pulses in such systems must be unstable, as in the scalar case. Moreover, it provides a proof-of-concept for this work. Finally, in §5 we discuss future directions for our work.

2 Determination of L±

We begin with a first order eigenvalue problem of the form (1.7) that results from a reaction- diffusion system of the form (1.6). Due to Theorem 1, to count unstable eigenvalues we only need to compute conjugate points for λ = 0. Therefore, we rewrite (1.7) for λ = 0 as ! 0 D−1 Ux = A˜(x)U, A˜(x) = . (2.1) −M(x) 0

We need to determine values L± such that all conjugate points are contained in the interval

[−L−,L+]. Recall that conjugate points will be determined by the evolution of the unstable u u subspace, E−(x; λ = 0) =: E−(x). The main idea used to determine L± is that, for |x| u sufficiently large, the evolution of E−(x) is essentially governed by the dynamics determined by the asymptotic matrices limx→±∞ A˜(x). We begin with L− in §2.1. The determination 0 of L+, which is somewhat more involved due to the presence of the eigenfunction ϕ (x), is presented in §2.2.

2.1 Determination of L−

To determine L−, using (H3) we rewrite (2.1) as ! ! 0 D−1 0 0 Ux = [A−∞ + A−(x)]U, A−∞ = , A−(x) = , −M− 0 −M(x) + M− 0 (2.2) where

−ηM |x| kA−(x)k ≤ CM e , x ≤ 0. (2.3) The goal of this section is to prove Proposition 2.5, which gives an explicit way to choose

L− so there are no conjugate points for x ≤ −L−. We begin with a preliminary result.

9 Lemma 2.1. The matrix A−∞ is hyperbolic, its eigenvalues are real and distinct, and its 2n u eigenvectors form a basis for R . Furthermore, if E−∞ is the subspace spanned by its u eigenvectors corresponding to positive eigenvalues, then E−∞ ∩ D = {0}.

Proof. First, we relate the eigenvalues and eigenvectors of A−∞ to M−. In particular, if

(u, w) is an eigenvector of A−∞ with eigenvalue ν, then ! ! ! ! u u 0 D−1 u ν = A−∞ = . w w −M− 0 w

This implies that −1 2 w = νDu, D M−u = −ν u.

2 By Hypotheses (H1)-(H2), M− is symmetric and negative. If we define γ = −ν and 1/2 −1/2 −1/2 −1/2 −1/2 z = D u, then D M−D z = γz. The matrix D M−D is symmetric because

M− is symmetric and D is diagonal, and it is also negative:

−1/2 −1/2 −1/2 −1/2 hD M−D u, ui = hM−D u, D ui < 0,

2 because M− is negative. Thus, γ = −ν < 0. Moreover, we can choose the n eigenvectors −1/2 −1/2 − n n of D M−D , which we denote by {zj }j=1, to be an orthonormal basis for R . We − n denote the corresponding n (negative) eigenvalues via {γj }j=1. Hence, the eigenvalues of A−∞ with positive real part, which are in fact just positive, and their corresponding eigenvectors are given by ! D−1/2z− q V −,u = j , ν− = + −γ−, j = 1, . . . , n. (2.4) j − 1/2 − j j νj D zj Similarly, the negative eigenvalues and their corresponding eigenvectors are given by ! D−1/2z− V −,s = j , j = 1, . . . , n. j − 1/2 − −νj D zj

− − n By hypothesis (H2) the 2n eigenvalues {νj , −νj }j=1 of A−∞ are all distinct. Furthermore, 1/2 − n n −1/2 − n the fact that det(D ) 6= 0 and {zj }j=1 is a basis for R implies that {D zj }j=1 is also n −1/2 − n a basis for R , although it need not be orthonormal. The fact that {D zj }j=1 is a basis n u for R implies E−∞ ∩ D = {0}.

˜ −,u n −,u n We now construct the solutions {Vj (x)}j=1 of (2.2) that are asymptotic to {Vj }j=1. Given a solution V (x) of (2.2), define W (x) via V (x) = eνxW (x), where ν is an eigenvalue − of A−∞ with eigenvector V . We then have

Wx = [A−∞ − ν]W + A−(x)W. (2.5)

10 To construct solutions to this equation, we will utilize the exponential dichotomy determined by the constant matrix A−∞ −νI. To that end, we now review some well-known facts about dichotomies associated with constant matrices. N×N N Given any constant matrix A ∈ R with eigenvalues {νj}j=1 ordered so that νj > νj+1, N×N define the diagonal matrix Λ = diag{ν1, . . . , νN } and define Q ∈ R to be a matrix whose columns are the eigenvectors of A. Hence A = QΛQ−1 and eAx = QeΛxQ−1. For fixed m, define N × N matrices Λu = diag{ν1, . . . νm−1, 0,... 0} and Λcs = diag{0,... 0, νm, . . . νN }. cs u If P is the projection onto the eigenspace with eigenvalues ν ≤ νm and P is the comple- mentary projection onto the eigenspace with eigenvalues ν > νm, then

e(A−νm)xP cs = Qe(Λcs−νm)xQ−1P cs e(A−νm)xP u = Qe(Λu−νm)xQ−1P u.

As kP uk, kP csk = 1 with respect to the Euclidean norm, if we define

−1 K− = kQk · kQ k (2.6) then we obtain the bound

(A−νm)x cs (A−νm)x u (νm−1−νm)x ke P k ≤ K− for x ≥ 0, ke P k ≤ K−e for x ≤ 0.

Note that if the matrix of eigenvectors Q can be chosen to be orthonormal, then K− = 1.

Remark 2.2. It is not necessary that the eigenvalues satisfy the strict inequality νj > νj+1. If there are repeated eigenvalues with independent eigenvectors, the above construction is still valid. This could be relevant if, for example, (H2) is relaxed to allow for repeated eigenvalues. If there are Jordan blocks, then some modification is necessary to obtain an essentially equivalent result, but this case is not relevant as long as M± are symmetric.

cs Returning to equation (2.5), let P be the projection onto the eigenspace of A−∞ − ν corresponding to eigenvalues with real part less than or equal to zero, and let P u be the projection onto the eigenspace of A−∞ − ν corresponding to eigenvalues with real part greater than zero. Consider the map Z x − (A−∞−ν)(x−y) cs W (x) = V + e P A−(y)W (y)dy −∞ Z −L (A−∞−ν)(x−y) u − e P A−(y)W (y)dy x =: T−(W )(x), (2.7)

− where L− is any fixed positive constant. Note that, because (A−∞ − ν)V = 0, any fixed point of this map must be a solution of (2.5). Let K− and η− be positive constants such that

(A−∞−ν)(x−y) cs ke P k ≤ K−, y ≤ x ≤ 0,

(A−∞−ν)(x−y) u −η−(y−x) ke P k ≤ K−e , x ≤ y ≤ 0. (2.8)

11 Proposition 2.3. Fix L− > 0, define K− such that (2.8) holds, and define K C − M −ηM L− λT− := e , ηM where CM and ηM are the constants appearing in Hypothesis (H3). If λT− < 1, then the ∞ 2n map T− defined in (2.7) is a contraction on X = L ((−∞, −L−], R ) with contraction constant λT− . Its unique fixed point W (x) satisfies

− λT− − kW (·) − V kX ≤ |V |. (2.9) 1 − λT−

Proof. We will use properties of the dictotomy defined by the projection operators P cs and P u. We note that similar calculations have appeared elsewhere; see, eg, [BSZ10, §4] and [HLS18, §2]. Equation (2.8) together with (2.3) implies Z x −ηM |y| |T−(W1)(x) − T−(W2)(x)| ≤ K−CM e dykW1 − W2kX −∞ Z −L− −η−(y−x) −ηM |y| + K−e CM e dykW1 − W2kX x Z −L− ηM y ≤ K−CM e dykW1 − W2kX −∞ K C − M −ηM L− ≤ e kW1 − W2kX . ηM

As a result, for L− sufficiently large, this map is a contraction with contraction constant K C − M −ηM L− λT− := e < 1. (2.10) ηM We denote its unique fixed point by W (x), and we wish to estimate |W (x) − V −|. To do − this, first apply T− to the constant vector V and note that by essentially the same estimate as above, − − − kT−(V )(·) − V kX ≤ λT− |V |. Next, due to the telescoping , we have

n−1 ∞ n − − X j+1 − j − − X j+1 − j − T− (V )−V = [T− (V )−T−(V )] ⇒ W (x)−V = [T− (V )−T−(V )]. j=0 j=0

As a result, ∞ − − X j − − λT− |V | kW (·) − V kX ≤ λ kT−(V ) − V kX = . (2.11) T− 1 − λ j=0 T−

12 Remark 2.4. Only the constant K− in (2.8) appears in the final constant λT− , given in (2.10). The value of η− is not needed. We included it in the statement of (2.8) for completeness.

− We can apply Proposition 2.3 for ν = νj , j = 1, . . . , n, to construct the desired solutions − − ˜ −,u νj x −,u νj x −,u n {Vj (x) = e Wj (x)} of (2.2) that are asymptotic, as x → −∞, to {e Vj }j=1. −,u (We used tildes here to distinguish between the constant vectors Vj and the x-dependent ˜ −,u solutions Vj (x).) Notice that

u ˜ −,u ˜ −,u −,u −,u E−(x) = span{V1 (x),..., Vn (x)} = span{W1 (x),...,Wn (x)}.

−,u −,u Since Wj (x) is asymptotically close to Vj in the sense of (2.9), this implies that for u −,u −,u u L− large E−(−L−) should be close to span{V1 ,...,Vn } = E−∞. Therefore, since u u E−∞ ∩ D = {0} (see Lemma 2.1), this should imply that E−(−L−) ∩ D = {0}, as well.

Proposition 2.5. Recall that D = diag(d1, . . . , dn) > 0. Let dmax = maxj{dj} and dmin = minj{dj}. If L− is chosen so that

1/2 λT− dmin < √ , (2.12) 1 − λ 1/2 1/2 T− dmax(1 + kM−kdmax) n

u where λT− is defined in (2.10), then E−(x) ∩ D = {0} for all x ≤ −L−.

Proof. Recall that λ |V −,u| u −,u −,u −,u −,u T− j E−(x) = span{W1 (x),...,Wn (x)}, kWj (·) − Vj kX ≤ . 1 − λT− For each j we can therefore write

! −,u −,u uj(x) − λT− |Vj | Wj (x) = , uj(x) = uj +u ˜j(x), ku˜j(·)kX ≤ , (2.13) wj(x) 1 − λT−

− −1/2 − where uj = D zj as given in (2.4). Suppose that x0 ∈ (−∞, −L−] is a conjugate point, u n ie E−(x0) ∩ D= 6 {0}. Then there exists {cj}j=1 so that

n n n n X X − X − X 0 = cjuj(x0) = cj[uj +u ˜j(x0)] ⇒ cjuj = − cju˜j(x0). (2.14) j=1 j=1 j=1 j=1

− −1/2 − − n n Since uj = D zj and {zj }j=1 is an orthonormal basis for R (see Lemma 2.1),

n n n q X − −1/2 X − 1 X − 1 2 2 cju = D cjz ≥ cjz = c + . . . c . j j 1/2 j 1/2 1 n j=1 j=1 dmax j=1 dmax

13 On the other hand, by (2.13),

n n X λT− X −,u − cju˜j(x0) ≤ |cj||V |. 1 − λ j j=1 T− j=1

Moreover, again using (2.4) and the proof of Lemma 2.1,

−,u 2 − 2 − 2 −1/2 − 2 − 1/2 − 2 1 − 2 |Vj | = |uj | + |wj | = |D zj | + |νj D zj | ≤ + max |νj | dmax dmin j 1 ≤ (1 + kM−kdmax), dmin where we have used the fact that

− 2 −1/2 −1/2 − − −1/2 − −1/2 − −(νj ) = hD M−D zj , zj i = hM−D zj ,D zj i, which implies − 2 1 |(νj ) | ≤ kM−k . dmin Combining these results, we find (2.14) cannot hold if

n 1 q λT− 1 X c2 + . . . c2 > (1 + kM kd )1/2 |c |. 1/2 1 n 1/2 − max j 1 − λT dmax − dmin j=1

Using the fact that 1/2  n  n X 1 X c2 ≥ √ |c |  j  n j j=1 j=1 we obtain the result.

2.2 Determination of L+

To determine L+, using (H3) we rewrite (2.1) as ! ! 0 D−1 0 0 Ux = [A+∞ + A+(x)]U, A+∞ = , A+(x) = , −M+ 0 −M(x) + M+ 0 (2.15) where

−ηM |x| kA+(x)k ≤ CM e , x ≥ 0. (2.16) The goal of this section is to prove the following:

Theorem 2. For any x ≥ L+, where L+ is sufficiently large so that the right hand side of u (2.35) is strictly less than one, E−(x) ∩ D = {0}.

14 u s 0 00 s By hypothesis (H4), we know the that E−(x) ∩ E+(x) = span{(ϕ , Dϕ )}, where E+(x) = s E+(x; λ = 0) is the subspace of solutions to (2.1) that decay to zero as x → +∞. Denote 0 00 Uϕ = (ϕ , Dϕ ) and write

u u E−(x) = span{Uϕ(x),U1(x),...,Un−1(x)}, E−(x) ∈ Λ(n) ∀x, (2.17)

s where the solutions Uk(x) satisfy Uk(x) ∈/ E+(x) for all k. In order to determine L+, we u would like to be use the fact that, for large x, E−(x) will look like the asymptotic unstable eigenvectors at +∞, with the exception of the asymptotic behavior of Uϕ(x).

Figure 1: A cartoon of Λ(2), which is double covered by S1 × S2. The upper left sphere represents the set of planes which nontrivially intersect D (in fact homotopic to a Klein bot- tle with one loop collapsed), and D is represented by the upper left square. The trajectory u E−(x) is represented by the green line.

u One way to think of the evolution of E−(x) is as follows. In Figure 1, we have depicted a u u cartoon of the path E−(x) traces out in Λ(n). The beginning of this trajectory E−(−∞) u is the unique point E−∞ ∈ Λ(n), depicted as solid square in the upper right sphere. By u counting the number of conjugate points of E−(x) (ie the number of nontrivial intersections u with D with orientation) one is able compute the Maslov index of the path E−(x), which for a loop γ : S1 → Λ(n) is given by its equivalence class in the fundamental group [γ] ∈ ∼ π1(Λ(n)) = Z.

It follows from Hypothesis 1.1 and 1.2 that limx→+∞ Uϕ(x)/|Uϕ(x)| must limit to one of the u n stable eigendirections of A+. Furthermore, E−(x) must limit as x → +∞ to a Lagrangian plane in Λ(n) spanned by limx→+∞ Uϕ(x)/|Uϕ(x)| and n − 1 unstable eigenvectors of A+.

15 u Consequently E−(+∞) must be one of n possible points in Λ(n), depicted as the two hollow squares in the bottom sphere of Figure 1. Note that each of these n possible limit points u in Λ(n) is unstable under the flow defined by (2.15). Hence showing that E−(x) limits to a particular one of these points can be difficult. However such a calculation is not necessary; u to compute L+, we do not need to show that E−(L+) is close to its limit point, but rather u it suffices to show that E−(x) is bounded away from D for x ≥ L+. To begin our analysis, we wish to obtain a good asymptotic description of the solutions that are asymptotic to the eigendirections at +∞, just as we did in §2.1 at −∞. In particular, we can construct solutions to (2.15) of the form

+ + ˜ +,s −νj x +,s ˜ +,s ˜ +,u νj x +,u ˜ +,u Vj (x) = e (Vj + Wj (x)), Vj (x) = e (Vj + Wj (x)), j = 1, . . . n, where −1/2 + ! −1/2 + ! +,s 1 D zj +,u 1 D zj Vj = ,Vj = . (2.18) q + −ν+D1/2z+ q + ν+D1/2z+ 2νj j j 2νj j j +,s/u +,s/u ˜ +,s/u Using notation analogous to the previous section, we denote Wj (x) = Vj +Wj (x). + n −1/2 −1/2 The vectors {zj }j=1 are eigenvectors of the negative symmetric matrix D M+D n + + 2 and form an orthonormal basis for R . The corresponding eigenvalues are γj = −(νj ) . We will assume the eigenvalues, which are distinct due to (H2), are ordered so that

+ + + ν1 > ν2 > ··· > νn > 0.

+,s/u Note that we have normalized the eigenvectors {Vj } so that they form a symplectic basis. (This normalization was not necessary in §2.1.) In particular,

+,s/u +,s/u +,u +,s ω(Vj ,Vk ) = 0 if k 6= j, ω(Vj ,Vj ) = 1. (2.19)

Analogous to Proposition 2.3 we have the following result, which will lead to bounds on the ˜ +,s/u error terms Wj .

Proposition 2.6. Let K C + M −ηM L+ λT+ := e , (2.20) ηM η x where the constants CM and ηM are such that kA+(x)k ≤ CM e M for x ≥ 0 as guaranteed by (H3), and the constant K+ is defined similarly to (2.8) but using A+∞. For any L+ > 0 +,s/u such that λT+ < 1, the functions Wj (x) satisfy

+,s/u +,s/u λT+ +,s/u kW (·) − V k ∞ 2n ≤ |V |. (2.21) j j L ([L+,∞),R ) j 1 − λT+

16 Define λT+ +,s/u 0 := max |Vj |, (2.22) 1 − λT+ j,s,u which we can make small by choosing L+ large. The Proposition implies

˜ +,s/u |Wj (x)| ≤ 0, ∀ x ≥ L+, ∀j, s, u. (2.23)

˜ +,s n s The functions {Vj (x)}j=1 form a basis for E+(x) which is unique up to multiplication ˜ +,u n u by an element of GL(n). The functions {Vj (x)}j=1 form one basis for E+(x). If we let ˜ +,u ˜ +,s ˜ +,s/u V (x) and V (x) denote the matrices whose columns are Vj (x) then, in general, a u ˜ +,u ˜ +,s basis for E+(x) may be given by the columns of V (x) · M1 + V (x) · M2 for arbitrary n×n matrices M1 ∈ GL(n) and M2 ∈ R . ˜ +,s/u n Recall (2.17). Using the functions {Vj (x)}j=1 as a basis for arbitrary solutions on x ≥ 0, we can write n + X −νj x +,s ˜ +,s Uϕ(x) = α˜je (Vj + Wj (x)), (2.24) j=1 n + + X k νj x +,u ˜ +,u ˜k −νj x +,s ˜ +,s Uk(x) = [˜γj e (Vj + Wj (x)) + βj e (Vj + Wj (x))], j=1

˜k k for all x ≥ 0, whereα ˜j, βj , γ˜j ∈ R are constants that are uniquely determined. For nota- tional convenience below, we will for fixed x write this as

n n X +,s ˜ +,s X k +,u ˜ +,u k +,s ˜ +,s Uϕ(x) = αj(Vj +Wj (x)),Uk(x) = [γj (Vj +Wj (x))+βj (Vj +Wj (x))], j=1 j=1 (2.25) + + + −νj x k ˜k −νj x k k νj x where αj =α ˜je , βj = βj e , and γj =γ ˜j e . We also note that these solutions can be written as

s s s s −1 Uϕ(x) = (V + W (x))α = (V + W (x))Λ α˜ (2.26) s s k u u k s s −1 ˜k u u k Uk(x) = (V + W (x))β + (V + W (x))γ = (V + W (x))Λ β + (V + W (x))Λ˜γ , where

s,u +,s/u +,s/u 2n×n s,u ˜ +,s/u ˜ +,s/u 2n×n V = [V1 | ... |Vn ] ∈ R , W (x) = [W1 (x)| ... |Wn (x)] ∈ R ,

ν+x ν+x n×n Λ(x) = diag(e 1 , . . . e n ) ∈ R , and for each x ≥ L+ and k we have

n k k k n k k k n α = (α1, . . . , αn) ∈ R , γ = (γ1 , . . . , γn) ∈ R , β = (β1 , . . . , βn) ∈ R ,

17 ˜k k s with corresponding definitions forα ˜, β , andγ ˜ . Note that, because Uk(x) ∈/ E+(x), it must be true that γk 6= 0 for all k. Moreover, the γk’s must be independent. Otherwise, if k j s γ = cγ , then Uk − cUj ∈ E+(x), which would contradict Hypothesis (H4), that the zero eigenvalue is simple.

Remark 2.7. The vectors α,˜ β˜k, γ˜k can be computed numerically with known error bounds.

The Lagrangian condition says

s s u u k 0 = ω(Uϕ(x),Uk(x)) = ω((V + W (x))α, (V + W (x))γ ) ∀k,

s s s s s k where we have used the fact that E+(x) ∈ Λ(n) implies ω((V +W (x))α, (V +W (x))β ) = 0. Let’s write ! ! Vs + Ws(x) Vu + Wu(x) Vs + Ws(x) = 1 1 , Vu + Wu(x) = 1 1 . s s u u V2 + W2 (x) V2 + W2 (x)

+,s/u Using the explicit formulas for V1,2 given by (2.18), we find   s u −1/2 s u 1 1/2 −1 1 V1 = V1 = D ZN , −V2 = V2 = D ZN , N = diag   , (2.27) 2 q + 2νj and + + T Z = [z1 | ... |zn ], Z Z = I. Using this notation, we can write the above Lagrangian condition as

0 = ω((Vs + Ws(x))α, (Vu + Wu(x))γk) s s u u k s s u u k = −h(V1 + W1 (x))α, (V2 + W2 (x))γ i + h(V2 + W2 (x))α, (V1 + W1 (x))γ i s s T u u s s T u u k = −hα, [(V1 + W1 (x)) (V2 + W2 (x)) − (V2 + W2 (x)) (V1 + W1 (x))]γ i s T u s T u s T u u s T u u k = −hα, [1 − (V2 ) W1 (x) + (V1 ) W2 (x) − (W2 (x)) (V1 + W1 (x)) + (W1 (x)) (V2 + W2 (x))]γ i.

Therefore, we have found

k α ⊥ Γ1, Γ1 = span{Γ1 : k = 1, . . . , n − 1} (2.28) k k s T u s T u s T u u s T u u k Γ1 = γ − [(V2 ) W1 (x) − (V1 ) W2 (x) + (W2 (x)) (V1 + W1 (x)) − (W1 (x)) (V2 + W2 (x))]γ .

n This means that a frame matrix for the n − 1 dimensional subspace Γ1 ⊂ R is given by

1 n−1 Γ1 : Γ + QΓ, Γ = [γ | ... |γ ] (2.29) s T u s T u s T u u s T u u Q = (V1 ) W2 (x) − (V2 ) W1 (x) + (W1 (x)) (V2 + W2 (x)) − (W2 (x)) (V1 + W1 (x)).

18 u Suppose now that there is an intersection of E−(x) with the Dirichlet subspace. Since this 2n involves the first n components of vectors in R , we introduce the projection operator 2n 2n π1 : R → R , defined by 2n π1(u1, . . . , u2n) = (u1, . . . un, 0,... 0) ∈ R . (2.30) u With this notation, an intersection of E−(x) with the Dirichlet subspace means

0 = det[π1Uϕ0 (x), π1U1(x), . . . , π1Un−1(x)] s s u u 1 s s 1 u u n−1 s s n−1 0 = det[(V1 + W1 (x))α, (V1 + W1 (x))γ + (V1 + W1 (x))β ,..., (V1 + W1 (x))γ + (V1 + W1 (x))β ] s s = det(V1 + W1 (x)) 1 s s −1 u u 1 n−1 s s −1 u u n−1 × det[α, β + (V1 + W1 (x)) (V1 + W1 (x))γ , . . . , β + (V1 + W1 (x)) (V1 + W1 (x))γ ]. Moreover,

s s −1 u u −1/2 s −1 −1/2 u (V1 + W1 (x)) (V1 + W1 (x)) = (D ZN + W1 (x)) (D ZN + W1 (x)) −1/2 −1 s −1 −1/2 −1 u = [I + (D ZN ) W1 (x)] [I + (D ZN ) W1 (x)]. Thus,

k k α ∈ Γ2, Γ2 = span{β + Γ2 : k = 1, . . . , n − 1} (2.31) k −1 T 1/2 s −1 −1 T 1/2 u k Γ2 := [I + N Z D W1 (x)] [I + N Z D W1 (x)]γ .

This means a frame matrix for Γ2 is given by

1 n−1 1 n−1 Γ2 : Γ + P Γ + B, Γ = [γ | ... |γ ],B = [β | · · · |β ] (2.32) ∞ ! −1 T 1/2 u X k −1 T 1/2 s k −1 T 1/2 u P = N Z D W1 (x) + (−1) (N Z D W1 (x)) [I + N Z D W1 (x)], k=1 where, via (2.23), we are assuming that L+ is sufficiently large so that

−1 T 1/2 s kN Z D W1 (x)k < 1 (2.33)

q + for all x ≥ L+. We note this will follow from the condition that 0 2nν1 dmax < 1, which n×n appears in the statement of Proposition 2.9. Let πΓ1 , πΓ2 ∈ R be the projections onto Γ1,2, respectively.

Proposition 2.8. There exists an L+ sufficiently large so that, for any x ≥ L+, kπΓ1 −

πΓ2 k < 1.

We are interested in being able to verify for a specific L+ whether it is sufficiently large so that kπΓ1 − πΓ2 k < 1 for all x ≥ L+. Hence we find it necessary to explicitly quantify Proposition 2.8 in full detail with the following proposition. Below we will prove Proposition 2.9, which implies also Proposition 2.8. Define

B˜ = [β˜1| ... |β˜n−1], Γ˜ = [˜γ1| ... |γ˜n−1]. (2.34)

19 Proposition 2.9. Fix L+ and 0(L+) > 0. For D := diag(d1, . . . , dn) define the following constants:

1 +  := √ e−2νn L+ kB˜k µ∗ := min{µ : µ ∈ σ(Γ˜T Γ)˜ } b µ∗

dmin := min{dj : j = 1, . . . , n}, dmax := max{dj : j = 1, . . . , n} q + s 2 2nν dmax 1 2n q + CP := q CQ := + + 2nν1 dmax + 20n + νn dmin 1 − 0 2nν1 dmax 2 CM1 := 0CQ(2 + 0CQ) CM2 := 0CP (2 + 0CP ) + 2b(1 + 0CP ) + b .

q + If CM1 ,CM2 , 0 2nν1 dmax < 1, then for all x ≥ L+ we have:

2 2 2 kπΓ1 − πΓ2 k < 20(CQ + CP ) + 0(CQ + CP ) 2 + b + 2b(1 + 0CP ) (2.35)

2 CM1 2 CM2 + (1 + 0CQ) + (1 + 0CP + b) . 1 − CM1 1 − CM2

Note that CM1 and CM2 contain factors of 0 and b, and therefore every term on the right hand side of (2.35) contains at least one factor of either 0 or b, both of which go to 0 as

L+ → +∞.

Remark 2.10. A key issue in estimating kπΓ1 − πΓ2 k will be controlling the exponentially ν+x growing factors present in any terms involving Γ. One might naively think that kΓk ∼ e 1 , T −1 −2ν+x and so k(Γ Γ) k ∼ e 1 . However, this is not necessarily the case. To see this, consider the example for n = 3 given by

 ν+x ν+x e 1 e 1 + ν+x  ν2 x 1 Γ =  0 e  ∼ e . 0 0

One can explicitly compute

2ν+x 2ν+x 2ν+x ! 1 e 1 + e 2 −e 1 + T −1 −2ν2 x (Γ Γ) = + + + ∼ e 2ν+x+2ν+x 2ν x 2ν x 2ν x e 1 2 −e 1 e 1 + e 2

T −1 T ν+x −2ν+x ν+x Therefore, a naive estimate would predict Γ(Γ Γ) Γ ∼ e 1 e 2 e 1 , which grows exponentially in x and hence is insufficient. However, a direct calculation shows that

1 0 0 T −1 T   Γ(Γ Γ) Γ = 0 1 0 0 0 0

20 T −1 T This makes sense, since πΓ = Γ(Γ Γ) Γ is a projection matrix, so its norm should not grow (or decay) exponentially in x. In fact, kπΓk = 1, as is the case for any (nontrivial) projection. This means, however, that in estimating kπΓ1 −πΓ2 k, we must not lose too much information about the structure of Γ1,2.

u Proof. (of Theorem 2) Suppose that there exists an x ≥ L+ such that E−(x) ∩ D= 6 {0}. Then (2.28), (2.31), and the above propositions imply

|α| = |πΓ2 α| = |πΓ2 α − πΓ1 α| < |α|, which is a contradiction. Therefore, we obtain Theorem 2 as an immediate corollary.

Proof. (of Propositions 2.8 and 2.9) Recall that, if V is a frame matrix for a given n k-dimensional subspace of R , then

T −1 T πV = V (V V ) V is the corresponding projection matrix. The fact that rank V = k ensures that V T V is invertible.

The idea in bounding kπΓ1 − πΓ2 k will be to combine as many Γ’s as possible into terms of the form Γ(ΓT Γ)−1ΓT , which is a projection and hence has unit norm. We begin by collecting some bounds that will be used in estimating each term. Using this formula and equations (2.29) and (2.32), a direct calculation shows that

T −1 T −1 T πΓ1 = Γ(Γ Γ) Γ + QΓ(Γ Γ) Γ1 ∞ ! T −1 T X k k T −1 T + Γ(Γ Γ) (QΓ) + Γ1 (−1) (M1) (Γ Γ) Γ1 k=1 T −1 T −1 T πΓ2 = Γ(Γ Γ) Γ + (P Γ + B)(Γ Γ) Γ2 ∞ ! T −1 T X k k T −1 T + Γ(Γ Γ) (P Γ + B) + Γ2 (−1) (M2) (Γ Γ) Γ2 k=1 where we have defined

T −1 T T T M1 = (Γ Γ) [Γ QΓ + (QΓ) Γ + (QΓ) (QΓ)] (2.36) T −1 T T T M2 = (Γ Γ) [Γ (P Γ + B) + (P Γ + B) Γ + (P Γ + B) (P Γ + B)] so that

T T T T Γ1 Γ1 = (Γ Γ)(I + M1), Γ2 Γ2 = (Γ Γ)(I + M2).

21 By taking a difference and gathering terms, we obtain the following:

T T T πΓ1 − πΓ2 = (Q − P )πΓ + QπΓQ + P πΓP + πΓ(Q − P ) | {z } I − B(ΓT Γ)−1BT − B(ΓT Γ)−1ΓT (I + P )T − (I + P )Γ(ΓT Γ)−1BT | {z } II ∞ ! ∞ ! X k k T −1 T X k k T −1 T + Γ1 (−1) (M1) (Γ Γ) Γ1 − Γ2 (−1) (M2) (Γ Γ) Γ2 k=1 k=1 | {z } III (2.37)

We bound the norm of the pieces I, II, and III of (2.37) in turn.

Remark 2.11. The γ’s and β’s are not really unique; any scalar multiple of them also works. Our estimate should take this into account. In other words, consider the basis k k solution U˜k = ckUk, which would be equivalent to instead taking ckγ and ckβ . If we define

(n−1)×(n−1) C = diag(c1, . . . , cn−1) ∈ R , this would be equivalent to replacing Γ and B with ΓC and BC, respectively. Inspecting (2.36) and (2.37), each term involves (ΓT Γ)−1 and either two factors of Γ or one factor of Γ and one factor of B. Thus, it seems plausible the constant matrix C would not have an overall effect on the estimate. This will be confirmed below.

Bounds for I: q + We first show that if 0 2nν1 dmax < 1 then:

kQk ≤ 0CQ, kP k ≤ 0CP . (2.38)

Given any matrix M, one has

kMk = max p|λ| (2.39) λ∈σ(M T M)

To estimate P and Q, first note that (2.23) implies that 0 is an upper bound on every s,u T s,u component of the n × n matrices kW1,2 (x) W1,2 (x)k, and thereby

s,u √ kW1,2 (x)k ≤ 0 n. (2.40)

In addition, (2.27) and the fact that kZk = 1 leads to s 1 ν+d kVs,uk ≤ , kVs,uk ≤ 1 max . (2.41) 1 p + 2 2νn dmin 2

22 Using the formula for Q in (2.29), as well as (2.40), and (2.41), we find s ! 2n q + kQk ≤ 0 + + 2nν1 dmax + 20n = 0CQ. (2.42) νn dmin

Using the formula for P in (2.32), and again (2.40), and (2.41), we find

q + 20 2nν1 dmax kP k ≤ = 0CP , (2.43) q + 1 − 0 2nν1 dmax

q + where in the above we have assumed that 0 is sufficiently small so that 0 2nν1 dmax < 1. Hence we obtain:

T T T 2 2 2 k(Q − P )πΓ + QπΓQ + P πΓP + πΓ(Q − P ) k ≤ 20(CQ + CP ) + 0(CQ + CP ) (2.44)

Bounds for II:

We first show that for b defined as in our proposition, we have:

T −1 T T −1 T 2 kΓ(Γ Γ) B k ≤ b, kB(Γ Γ) B k ≤ b .

Consider the term Γ(ΓT Γ)−1BT . Notice that

[Γ(ΓT Γ)−1BT ]T Γ(ΓT Γ)−1BT = B(ΓT Γ)−1BT . (2.45)

Furthermore, using the fact that Γ = ΛΓ˜ and B = Λ−1B˜, we therefore need to understand the norm of B(ΓT Γ)−1BT = Λ−1B˜(Γ˜T Λ2Γ)˜ −1B˜T Λ−1. (2.46)

T 2 We first focus on the matrix Γ˜ Λ Γ,˜ which is real and symmetric. Hence, if {λj} are its ˜T 2 ˜ −1 −1 eigenvalues, then k(Γ Λ Γ) k ≤ λmin. If u is a unit eigenvector with eigenvalue λ, then

+ + λ = hu, λui = hu, Γ˜T Λ2Γ˜ui = hΓ˜u, Λ2Γ˜ui ≥ e2νn xhΓ˜u, Γ˜ui = e2νn xhu, Γ˜T Γ˜ui.

The matrix Γ˜T Γ˜ is again real and symmetric (and of full rank). Moreover, its eigenvalues are positive, because if µ is an eigenvalue with unit eigenvector q, then µ = hq, Γ˜T Γ˜qi = |Γ˜q|2. Hence, + 2νn x T λmin ≥ e min{µ : µ ∈ σ(Γ˜ Γ)˜ }. Denoting µ∗ = min{µ : µ ∈ σ(Γ˜T Γ)˜ }, we find 1 + k(Γ˜T Λ2Γ)˜ −1k ≤ e−2νn x. µ∗

23 + Since kΛ−1k ≤ e−νn x, we therefore find

1 + kB(ΓT Γ)−1BT k ≤ e−4νn xkB˜k2 = 2. µ∗ b Since B(ΓT Γ)−1BT is real and symmetric, this bound also provides a bound on its eigen- values. As a result, (2.39) and (2.45) imply

1 + kΓ(ΓT Γ)−1BT k ≤ √ e−2νn L+ kB˜k =  , µ∗ = min{µ : µ ∈ σ(Γ˜T Γ)˜ }. (2.47) µ∗ b

Hence we can obtain the estimate:

T −1 T T −1 T T T −1 T 2 kB(Γ Γ) B + B(Γ Γ) Γ (I + P ) + (I + P )Γ(Γ Γ) B k ≤ b + 2b(1 + 0CP ) (2.48)

Remark 2.12. Recall Remark 2.11. If in (2.46) we replace Γ˜ and B˜ by Γ˜C and BC˜ , respectively, for some invertible diagonal matrix C, then we find

Λ−1BC˜ (CΓ˜T Λ2Γ˜C)−1CB˜ T Λ−1 = Λ−1B˜(Γ˜T Λ2Γ)˜ −1B˜T Λ−1, and so the estimate does not depend on the matrix C.

Remark 2.13. The quantity µ∗ = min{µ : µ ∈ σ(Γ˜T Γ)˜ } appearing in (2.47) will in practice be computed numerically. Although we know for theoretical reasons that µ∗ > 0, one must be careful that any (controllable) numerical error does not cause a violation of this condition.

Bounds for III:

Keeping Remark 2.10 in mind, we simplify the expressions for M1,2 and III as follows. First, (2.36) can be written as:

T −1 T T −1 T M1 = (Γ Γ) Γ M˜ 1Γ M2 = (Γ Γ) Γ M˜ 2Γ (2.49)

T T where we define M˜ 1 := Q + Q + Q Q and

T T M˜ 2 := P + P + P P + (1 + P T )B(ΓT Γ)−1ΓT + Γ(ΓT Γ)−1BT (1 + P ) + Γ(ΓT Γ)−1BT B(ΓT Γ)−1ΓT .

˜ ˜ A straightforward calculation shows that CM1 ≥ kM1k and CM2 ≥ kM2k. Using (2.49), notice that k T −1 T T −1 T ˜ k M1 (Γ Γ) Γ = (Γ Γ) Γ [M1πΓ] . This can be shown, for example, via an induction argument. As a result,

∞ ! ∞ X k k T −1 T X k ˜ k Γ (−1) M1 (Γ Γ) Γ = πΓ (−1) [M1πΓ] , k=1 k=1

24 and so ∞ ! X k k T −1 T CM1 Γ (−1) M1 (Γ Γ) Γ ≤ . 1 − CM k=1 1 Similarly, we obtain the following estimate

∞ ! X k k T −1 T CM2 Γ (−1) M2 (Γ Γ) Γ ≤ . 1 − CM k=1 2

By factoring out Γ1 = (I + Q)Γ, we can go about bounding our infinite sums as follows:

∞ ! X k k T −1 T 2 CM1 Γ1 (−1) (M1) (Γ Γ) Γ1 = (1 + 0CQ) . (2.50) 1 − CM k=1 1

T −1 T −1 T T −1 Similarly, by writing Γ2(Γ Γ) = (I + P + B(Γ Γ) Γ )Γ(Γ Γ) we obtain:

∞ ! X k k T −1 T 2 CM2 Γ2 (−1) (M2) (Γ Γ) Γ2 = (1 + 0CP + b) . (2.51) 1 − CM k=1 2

Thus, by plugging into (2.37) the estimates from (2.44), (2.48), (2.50) and (2.51), we obtain + −2νn x (2.35). By choosing L+ large we can control the size of 0 and e , and hence also CM2 , which means we can choose L+ so the right hand side of (2.37) is strictly less than 1. This completes the proof.

3 Methodology for Computing Conjugate Points

Here we demonstrate how to rigorously count conjugate points in practice. Again, we restrict our interest to PDEs of the form in (1.6), repeated below:

2 n vt = Dvxx + ∇G(v),G ∈ C (R , R). (3.1)

Moreover, we are interested in computing the spectral stability of an equilibrium solution n ϕ : R → R such that its linearization (1.5) satisfies Hypotheses 1.1 and 1.2. To do so, we perform the following steps:

n (i) For two points v0, v1 ∈ R such that 0 = ∇G(v0) = ∇G(v1) and G(v0) = G(v1), n compute (and prove the existence of) a standing wave ϕ : R → R such that limx→−∞ ϕ(x) = v0 and limx→+∞ ϕ(x) = v1.

u (ii) Fix L− ≥ 0 and prove that E−(x) ∩ D = {0} for all x ≤ −L−.

2n×n (iii) Fix L+ ≥ 0 and calculate a frame matrix U :[−L−,L+] → R whose columns u span E−(x) for all x ∈ [−L−,L+].

25 u (iv) Prove that E−(x) ∩ D = {0} for all x ≥ L+.

(v) Count the conjugate points on [−L−,L+].

By Theorem 1 the number of unstable eigenvalues of L in (1.5) is equal to the number of u conjugate points x ∈ R where E−(x) ∩ D 6= {0}. By Lemma 1.3, x ∈ R is a conjugate point if and only if det A1(x) = 0. Using this approach, we reduce this search for conjugate points down to a finite interval [−L−,L+]. We note that in practice we will fix L+ = 0 and leave L− free to be chosen. We note also that our computational methodology may also be adapted for use with standard/non-validated numerics. For this case, in short, one may skip steps (ii) and (iv).

Being a numerical algorithm several computational parameters, such as L−, must be fixed. To obtain a more theoretical flavor, one could rephrase our validation theorems with phases such as “if L− > 0 is sufficiently large, then ... ”. However as necessitated in constructing a computer assisted proof, we will fix all of our computational parameters, and then after checking explicitly verifiable conditions, determine whether the hypothesis of each validation theorem is satisfied. If any one of the validation criteria prove false, then the entire computer assisted proof is unsuccessful. In this case, we may try to prove the theorem again with a different selection of computational parameters. By performing this calculation using validated numerics, we are able to obtain a mathe- matical proof of the existence of ϕ and calculate its spectral stability. This methodology is based on , a fundamental tool for producing computer assisted proofs in analysis. To briefly overview (see also [Rok01, GS19]), let us define the set of real intervals IR as IR := {[a, b] ⊆ R : −∞ < a ≤ b < ∞}. √ In this manner, a rigorous enclosure of 2, for example, can be given in terms of an upper and lower bound. One may define arithmetic operation ∗ ∈ {+, −, ·,/} on intervals A, B ∈ IR by A ∗ B = {α ∗ β : α ∈ A, β ∈ B}. These operations are explicitly computable n m with the exception, of course, of division when 0 ∈ B. More generally, if f : R → R n m is a continuous function, then we say that F : IR → IR is an interval extension of f if n f(x) ∈ F (X) for all X ∈ IR and all x ∈ X. While interval arithmetic provides a computational methodology for performing set arith- metic, some amount of precision is lost. It should be noted that subtraction (division) is not the inverse operation of addition (multiplication); [a, b] − [a, b] = [a − b, b − a] 6= [0, 0] unless a = b. Furthermore the interval extension of a function need not be unique nor sharp. For example the function defined as A 7→ [−1, 1] for A ∈ IR is an interval extension of the sine function. Nevertheless, the forward propagation of error behaves predictably and the final

26 result is guaranteed to contain the correct answer. More precisely, if A, A,˜ B, B˜ ∈ IR with A ⊆ A˜ and B ⊆ B˜, then A ∗ B ⊆ A˜ ∗ B˜ for ∗ ∈ {+, −, ·,/}. This property is sometimes called the inclusion isotony principle. On a computer using floating point numbers, one is only able to precisely represent the rational numbers with a finite binary expansion. Furthermore, when the sum of two float- ing point numbers is computed, the sum is necessarily rounded to an adjacent floating point number. To account for this, when interval arithmetic is implemented on a computer, outward rounding is used in order to ensure that whatever interval arithmetic operation performed will contain the true solution. By controlling a posteriori error estimates, in- terval enclosures may be computed for elementary functions, matrix operations, and even implicitly defined functions, such as the solution to differential equations.

3.1 Computing a standing wave

To calculate a standing wave to (3.1), we solve the equivalent problem of calculating a heteroclinic orbit in the spatial dynamical system, a Hamiltonian ODE. We define the n generalized momentum variable w = Dvx ∈ R and consider the following Hamiltonian system:

1 −1 2 H(v, w) = 2 kD wk + G(v). (3.2) (v, w)0 = −J∇H(v, w) (3.3) where J denotes the symplectic matrix defined in (1.7). Hence, a standing wave ϕ to (3.1) corresponds to a heteroclinic orbit φ = (ϕ, Dϕx) to (3.3). 2n Assume that p0, p1 ∈ R are equilibria to (3.3) with equal energy H(p0) = H(p1) and without loss of generality, assume that H(p0) = 0. From our assumption (H2) we may assume that the linearizations about p0 and p1 are hyperbolic, both having n-stable and n- unstable eigenvalues, all distinct. A heteroclinic orbit between these points is an intersection u s between W (p0), the n-dimensional unstable manifold of p0, and W (p1), the n-dimensional 2n stable manifold of p1. Generically, two n dimensional manifolds embedded in R will have a trivial intersection. However the assumption H(p0) = H(p1) forces both manifolds to live within the same codimension-1 energy surface, whereby they will generically intersect along a 1-dimensional submanifold.

u We compute the heteroclinic orbit by solving a boundary value problem between W (p0) s and W (p1), using validated numerics to compute the (un)stable manifolds and the solution map of the flow. This subject has been well studied in the validated numerics literature, see for example the works [Ois98, WZ03, vdBDLMJ15, AK15, vdBBLM18, vdBS20] and the the references cited therein. We develop bounds on the (un)stable manifold of a hyperbolic

27 fixed point using the approach taken in [CZ17, CWZ17, CTZ18]. As the estimates for the

(un)stable manifolds are used in (3.13) to compute the constants ηM and CM from (H3), we present the unstable manifold theorem from [CTZ18] in full detail. Since this ODE is time reversible, the same estimates follow for bounding the stable manifold of p1. To begin our analysis, we perform a linear change of variables to align our coordinate axes with the (un)stable eigenvectors of the linearized flow about p0. In the context of computer assisted proofs, where floating point arithmetic is not perfect, we do not require 2n×2n our change of variables to be perfect. To that end, fix an invertible matrix A0 ∈ R whose columns are (approximate) eigenvectors for the Jacobian of − J∇H(p0). We choose the first n columns of A0 to correspond to the unstable eigenvectors, and the second n columns of A0 to correspond to the stable eigenvectors. We define a change of coordinates T ψ(z) := p0 + A0z, where ψ(z) = (v, w) , and correspondingly define a conjugate dynamical system by:

−1 f(z) = −A0 J∇H(ψ(z)) (3.4) z0 = f(z). (3.5)

If the columns of A0 are chosen to be the exact eigenvectors, then the unstable and stable eigenvectors of the linearization of (3.5) at 0 are given by the basis vectors {e_i}_{i=1}^{n} and {e_i}_{i=n+1}^{2n}, respectively. Locally, the (un)stable manifolds of the origin can be well approximated as graphs over the (un)stable eigenspaces.

Here we follow the approach taken in [CZ17] to approximate these invariant manifolds, identify a neighborhood of validity, and provide rigorous error estimates using validated numerics. As precisely detailed in Theorem 3, if certain a posteriori conditions are satisfied (depending on fixed constants ru, rs > 0), then the unstable manifold to (3.5) is contained within the set S = Bu × Bs, where

Bu = { (x, 0) ∈ R^{2n} : |x| ≤ ru },   Bs = { (0, y) ∈ R^{2n} : |y| ≤ rs }.   (3.6)

These a posteriori conditions are satisfied if (i) the flow exits S through the set ∂Bu × Bs; (ii) the flow enters S through the set Bu × ∂Bs; and (iii) the flow stays inside a cone of angle ϑ = rs/ru. See Figure 2 for a pictorial representation of these conditions. This last condition is checked using the logarithmic norm (3.7) and the logarithmic minimum (3.8), which, for the Euclidean norm on R^N, are defined for square matrices A ∈ R^{N×N} as follows:

l(A) := max{ λ : λ ∈ spectrum of (A + A^T)/2 },   (3.7)
ml(A) := min{ λ : λ ∈ spectrum of (A + A^T)/2 }.   (3.8)


Figure 2: Conditions to check when using a computer to prove an unstable manifold theorem. The top and bottom sides of the box correspond to Bu × ∂Bs, and the left and right sides correspond to ∂Bu × Bs.

The logarithmic norm l(A) and logarithmic minimum ml(A) may be positive or negative, and they provide upper and lower bounds on the linear flow defined by a matrix A; that is, e^{ml(A)t} ≤ ‖e^{At}‖ ≤ e^{l(A)t}. For further details, we refer the reader to [HNW87, CZ17] and the references therein.
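As an illustration, for a 2 × 2 matrix these quantities reduce to the closed-form eigenvalues of the symmetric part; the sketch below (plain floating point, with names of our own choosing) computes l(A) and ml(A) in that case. In the validated computation the analogous quantities are evaluated over interval matrices such as those appearing in (3.9)–(3.10).

```cpp
#include <cmath>
#include <cstdio>

// Logarithmic norm l(A) and logarithmic minimum ml(A) of a 2x2 matrix,
// computed from the closed-form eigenvalues of the symmetric part (A+A^T)/2.
// (Illustrative floating-point sketch only.)
struct LogBounds { double l, ml; };

LogBounds logNorms2x2(const double A[2][2]) {
    double s11 = A[0][0];
    double s22 = A[1][1];
    double s12 = 0.5 * (A[0][1] + A[1][0]);   // off-diagonal of (A+A^T)/2
    double mean = 0.5 * (s11 + s22);
    double rad  = std::sqrt(0.25 * (s11 - s22) * (s11 - s22) + s12 * s12);
    return { mean + rad, mean - rad };        // l(A) = lambda_max, ml(A) = lambda_min
}

int main() {
    double A[2][2] = { { -1.0, 4.0 }, { 0.0, -2.0 } };
    LogBounds b = logNorms2x2(A);
    // These give the bounds e^{ml(A)t} <= ||e^{At}|| <= e^{l(A)t}.
    std::printf("l(A) = %g, ml(A) = %g\n", b.l, b.ml);
    return 0;
}
```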

For a set S ⊆ R^{2n} we define interval matrices [π_x ∂f/∂x (S)], [π_x ∂f/∂y (S)] ∈ IR^{n×n} entrywise by

[π_x ∂f/∂x (S)]_{i,j} := [ inf_{z∈S} π_{x_i} ∂f/∂x_j (z), sup_{z∈S} π_{x_i} ∂f/∂x_j (z) ],
[π_x ∂f/∂y (S)]_{i,j} := [ inf_{z∈S} π_{x_i} ∂f/∂y_j (z), sup_{z∈S} π_{x_i} ∂f/∂y_j (z) ].

Here, for a point z = (x, y) ∈ R^{2n}, the maps z ↦ π_{x_i}(z) and z ↦ π_{y_i}(z) denote the projections onto the i-th components of x and y, respectively. The sets [π_x ∂f/∂x (S)] and [π_x ∂f/∂y (S)] are essentially rectangular, convex closures of the π_x ∂f/∂x and π_x ∂f/∂y images of a set S. We now present the unstable manifold theorem from [CTZ18].

Theorem 3 ([CTZ18, Theorem 12]). Suppose p0 ∈ R^{2n} is an equilibrium of (3.3). Fix ru, ϑ > 0 and rs = ϑru. Define the sets Bu and Bs as in (3.6) and S = Bu × Bs ⊆ R^{2n}. Writing (x, y) ∈ Bu × Bs, we define rate constants µ, ξ ∈ R as follows:

µ = sup_{z∈S} { l( π_y ∂f/∂y (z) ) + (1/ϑ) ‖ π_y ∂f/∂x (z) ‖ },   (3.9)
ξ = inf_{A ∈ [π_x ∂f/∂x (S)]} ml(A) − sup_{A′ ∈ [π_x ∂f/∂y (S)]} ϑ ‖A′‖.   (3.10)

Suppose the following conditions are satisfied:

(i) For any q ∈ ∂Bu × Bs we have hπxf(q), πxqi > 0;

(ii) For any q ∈ Bu × ∂Bs we have hπyf(q), πyqi < 0;

(iii) µ < 0 < ξ.

Then there exists a unique C¹ function ω^u : Bu → Bs such that ‖∂_x ω^u‖ ≤ L, and

W^u_{loc}(0) = { (x, ω^u(x)) : x ∈ Bu }

is the local unstable manifold of 0 for the differential equation (3.5). Moreover, the map P : Bu → R^{2n} defined by P(x) := ψ(x, ω^u(x)) is a chart for W^u_{loc}(p0), the local unstable manifold of p0 for the differential equation (3.3).

The intuition for the three conditions above is that we require: (i) the flow exits through the "unstable faces" ∂Bu × Bs of the product Bu × Bs; (ii) the flow enters S through the "stable faces" Bu × ∂Bs; and (iii) the flow stays inside a cone of angle ϑ = rs/ru, analogous to the rate conditions of Fenichel theory. The hypotheses of Theorem 3 can be explicitly verified using interval arithmetic. Of particular interest is the constant ξ, which we use in (3.13) to define the constant ηM from (H3). Note that, by reversing time, Theorem 3 can be used to provide explicit bounds on the stable manifold W^s_{loc}(p1).

To compute a heteroclinic orbit to (3.3), we solve a boundary value problem between W^u(p0) and W^s(p1). That is, we seek points q0 ∈ W^u(p0) and q1 ∈ W^s(p1) such that if we integrate q0 forward along the flow and q1 backward along the flow, they meet in the middle; that is, Φ_T(q0) = Φ_{−T}(q1) for some T > 0, where Φ_T denotes the time-T solution map of (3.3). To that end, we define in (3.11) below a function h : R^{2n} → R^{2n} so that its zeros are (generically) isolated and correspond to a heteroclinic orbit φ : R → R^{2n} connecting p0 and p1.

Recall that W^u(p0) and W^s(p1) will generically have a transverse intersection only within the 0-energy surface H^0 = { (v, w) ∈ R^{2n} : H(v, w) = 0 }. Away from singular points, H^0 can be locally written as a graph over its first 2n−1 coordinates; if (v, w) ∈ H^0 and w_n > 0, then w_n = d_n ( −Σ_{k=1}^{n−1} |d_k^{−1} w_k|² − 2G(v) )^{1/2}. Hence, in order to check that Φ_T(q0) = Φ_{−T}(q1), it suffices to check that the two points agree in their first 2n−1 coordinates and that their w_n coordinates have the same sign.

To explicitly define our boundary value problem operator h, suppose that the hypotheses of Theorem 3 are satisfied for some ru > 0 and B^0_u ⊆ R^n, whereby P : B^0_u → R^{2n} is a chart for the (local) unstable manifold of p0. Similarly, for rs > 0 and B^1_s ⊆ R^n, suppose Q : B^1_s → R^{2n} is a chart for the (local) stable manifold of p1. A heteroclinic orbit between p0 and p1 can be identified by finding some q0 ∈ B^0_u, q1 ∈ B^1_s and T > 0 such that Φ_T(P(q0)) = Φ_{−T}(Q(q1)).

While the heteroclinic orbit may be unique, the same may not be said of q0 and q1. To impose a phase condition, we fix T > 0 and ru > 0, and require that ‖q0‖ = ru. Thus, we define our boundary value operator h : B^0_u × B^1_s → R^{2n} as follows:

[h(q0, q1)]_i := [Φ_T(P(q0)) − Φ_{−T}(Q(q1))]_i   if 1 ≤ i ≤ 2n − 1,
                 ‖q0‖² − ru²                        if i = 2n.   (3.11)

Since p0 and p1 have equal energy, if h(q0, q1) = 0 and the 2n-th components of both Φ_T(P(q0)) and Φ_{−T}(Q(q1)) have the same sign, then these component values are equal, and moreover

Φ_T(P(q0)) = Φ_{−T}(Q(q1)) =: φ(0).

It then follows that φ(x) := Φ_x(φ(0)) is a heteroclinic orbit with limits lim_{x→−∞} φ(x) = p0 and lim_{x→+∞} φ(x) = p1.

Thus, proving the existence of a locally unique heteroclinic orbit between the equilibria p0 and p1 is essentially reduced to proving the existence of a locally unique solution to 0 = h(q0, q1), with h as defined in (3.11). For a given domain U ⊆ IR^{2n}, we use the CAPD library [KMWZ20] to calculate rigorous enclosures of Φ_T(U) and DΦ_T(U). Lastly, we are able to prove that an explicit neighborhood U ⊆ IR^{2n} contains a unique zero of h by using the interval-Krawczyk method [HW03].
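For the reader's convenience, we recall the standard form of this test (the precise variant used in practice follows [HW03] and may differ in details). Given an interval vector U with midpoint ū, an approximate inverse C ≈ Dh(ū)^{−1} computed in floating point, and an interval enclosure Dh(U) of the Jacobian over U, one evaluates the Krawczyk operator

K(U) := ū − C h(ū) + (I − C Dh(U)) (U − ū)

with interval arithmetic, using the rigorous enclosures of Φ_T and DΦ_T to enclose h(ū) and Dh(U). If K(U) is contained in the interior of U, then h has a unique zero in U.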

3.2 Determining L−

As described in Section 2.1, we wish to find L− > 0 such that E^u_−(x; 0) ∩ D = {0} for all x ≤ −L−. Conditions for this are established in Proposition 2.5, namely if (2.12) is satisfied. This inequality depends on a nontrivial constant λ_{T−}.

In Proposition 2.5 we define λ_{T−} := (K− CM / ηM) e^{−ηM L−} to bound the error when computing an eigenfunction, cf. (2.9). The constants CM and ηM, as we recall from Hypothesis (H3), are chosen such that |M(x) − M±| ≤ CM e^{−ηM |x|}. Thus, by choosing L− large we may ensure |M(x) − M±| ≪ 1.

In the present application, we define M(x) = ∇²G(ϕ(x)) relative to a heteroclinic orbit φ = (ϕ, Dϕ_x) to (3.3). We note, however, that heteroclinic orbits are only unique up to translation; if φ : R → R^{2n} is a heteroclinic orbit, then so too is φ_σ(x) := φ(x + σ) for any σ ∈ R. In this sense the choice of L− is somewhat arbitrary, or, said differently, highly dependent on the choice of translation σ.

As a result, when bounding λ_{T−} in Theorem 4 below, it is not the particular value of L− but instead the properties of φ(−L−) which have a direct effect on our estimates. In our hypothesis we fix φ(−L−) ∈ W^u_{loc}(p0) ⊆ Bu × Bs, where Bu has a radius of ru and Bs has a radius of rs = ϑru. By taking ru small, we are able to control the size of λ_{T−}.

Theorem 4. Assume the hypotheses of Theorem 3, having fixed appropriate constants ru, ϑ, ξ > 0, and define K− as in (2.6). Suppose that φ is a heteroclinic orbit to (3.3) such that φ(−L−) ∈ W^u_{loc}(p0). Then

λ_{T−} ≤ ξ^{−1} K− CG ru ‖A0‖ √(1 + ϑ²),

where* the constant CG satisfies CG ≥ sup_{z∈S} ‖D³G(π1 ∘ ψ(z))‖.

Proof. We first calculate constants CM , ηM defined in (2.3) for which

‖A−(x)‖ ≤ CM e^{−ηM |x|},   x ≤ 0.

Note that this bound is only needed for x ≤ −L−. We compute a bound on A−(x) below:

‖A−(x)‖ = ‖M− − M(x)‖ = ‖D²G(ϕ−) − D²G(ϕ(x))‖,

where ϕ− = π1 p0. Recall φ = (ϕ, Dϕ_x), where D is the diffusion matrix. By Theorem 3 we have ψ^{−1} ∘ φ(x) ∈ S = Bu × Bs for all x ≤ −L−. From the definition of CG and the mean value theorem it follows that

‖A−(x)‖ ≤ CG ‖ϕ(x) − ϕ−‖.

To bound ‖ϕ(x) − ϕ−‖, we note from [CTZ18, Theorem 14] that

‖ψ^{−1} ∘ φ(−(x + L−))‖ ≤ ru √(1 + ϑ²) e^{−ξ|x|},   x ≥ 0.   (3.12)

Hence for x ≤ −L− it follows that

‖φ(x) − p0‖ ≤ ru ‖A0‖ √(1 + ϑ²) e^{−ξ|x + L−|},   x ≤ −L−.

Thereby, if we define:

ηM = ξ,   CM = CG ru ‖A0‖ √(1 + ϑ²) e^{ξ L−},   (3.13)

then (2.3) is satisfied for x ≤ −L−. From Proposition 2.3 we have

λ_{T−} ≤ (K− CM / ηM) e^{−ηM L−} = ξ^{−1} K− CG ru ‖A0‖ √(1 + ϑ²).

* Note that D³G(π1 ∘ ψ(w)) is a rank-3 tensor. We may estimate the spectral norm of a rank-3 tensor A = (a_{ijk}) by defining matrices A_k as (A_k)_{ij} = a_{ijk} and estimating ‖A‖ ≤ (Σ_k ‖A_k‖²)^{1/2}.

3.3 Calculating E^u_−(x)

Recall our definition of E^u_−(x) as the subspace of solutions to (2.1) which are asymptotic to E^u_{−∞}. That is, if U_i(x) : R → R^{2n} are linearly independent solutions to (2.1) for 1 ≤ i ≤ n such that lim_{x→−∞} U_i(x) = 0, then

E^u_−(x) = span{ U1(x), . . . , Un(x) },   ∀ x ∈ R.

Recall also our results from Proposition 2.3 and Proposition 2.5, that E^u_−(x) may be represented by a frame matrix W^{−,u}(x) for x ≤ −L− with columns satisfying

‖W^{−,u}_j(·) − V^{−,u}_j‖_X ≤ (λ_{T−} / (1 − λ_{T−})) |V^{−,u}_j| =: ε_j,   x ≤ −L−,

where X = L^∞((−∞, −L−], R^{2n}). To compute a frame matrix of E^u_−(x) for x ≥ −L−, one has to solve (2.1) taking as initial values the columns W^{−,u}_j(−L−). A difficulty here is that we only have a basis of E^u_−(−L−) accurate up to a certain error, and this error needs to be propagated forward when solving the IVP (2.1).

To keep track of the error, for V^{−,u} = [V^{−,u}_1 | · · · | V^{−,u}_n] fix a numerical approximation of the eigenvectors V̄^{−,u} = [V̄^{−,u}_1 | · · · | V̄^{−,u}_n] and fix ε̄_i such that |V̄^{−,u}_i − V^{−,u}_i| ≤ ε̄_i. Define E(x) : (−∞, −L−] → R^{2n×n} such that W^{−,u}(x) = V̄^{−,u} + E(x). It follows that the columns satisfy

|E_i(x)| ≤ |W^{−,u}_i(x) − V^{−,u}_i| + |V^{−,u}_i − V̄^{−,u}_i| ≤ ε_i + ε̄_i

for 1 ≤ i ≤ n and x ≤ −L−. To bound E(x) computationally we construct an interval matrix E^− ∈ IR^{2n×n}, defined entrywise by

(E^−)_{i,j} := [ −ε_j − ε̄_j, ε_j + ε̄_j ],   1 ≤ i ≤ 2n,  1 ≤ j ≤ n.   (3.14)

Note then that E(x) ∈ E^− and therefore W^{−,u}(x) ∈ V̄^{−,u} + E^− for x ≤ −L−. To bound a frame matrix of E^u_−(x) for x ≥ −L−, it suffices to bound solutions of (2.1) with the initial values W̃^{−,u}_i(−L−) = V̄^{−,u}_i + Ẽ_i for each 1 ≤ i ≤ n and all possible initial data Ẽ_i ∈ E^−_i. This may be accomplished with standard techniques in validated numerics, e.g. [KMWZ20]. Note first that any error terms of the initial condition in the unstable eigendirections will grow exponentially. To mollify this problem we perform a change of variables; for x = −L− there exist matrices E_u(−L−), E_s(−L−) ∈ R^{n×n} such that

E(−L−) = V̄^{−,u} · E_u(−L−) + V̄^{−,s} · E_s(−L−).

From its definition in (3.14) it follows that E^− ∋ V̄^{−,u} · E_u(−L−) + V̄^{−,s} · E_s(−L−). Hence interval enclosures E_u, E_s ∈ IR^{n×n} containing E_u(−L−) and E_s(−L−) may be computed as follows:

[ E_u ; E_s ] := [ V̄^{−,u} | V̄^{−,s} ]^{−1} E^−,

where [ E_u ; E_s ] denotes the 2n × n matrix obtained by stacking E_u on top of E_s.

If I + E_u(−L−) is invertible, where I is the n × n identity matrix, then a new frame matrix U(−L−) ∈ R^{2n×n} for E^u_−(−L−) may be defined as follows:

U(−L−) := W^{−,u}(−L−) · (I + E_u(−L−))^{−1}   (3.15)
        = (V̄^{−,u} + E(−L−)) · (I + E_u(−L−))^{−1}
        = V̄^{−,u} + V̄^{−,s} · E_s(−L−) (I + E_u(−L−))^{−1}.

Hence, if I + Ẽ_u is invertible for all Ẽ_u ∈ E_u, then we may define an interval enclosure U(−L−) ∈ IR^{2n×n} of the frame matrix U(−L−) as follows:

U(−L−) := V̄^{−,u} + V̄^{−,s} · E_s (I + E_u)^{−1}.
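As a plain floating-point sanity check of the algebra in (3.15) — not the interval computation itself — the following sketch uses the Eigen library (an assumption on our part; the actual proof code works with CAPD interval matrices) to rotate a midpoint error matrix into the stable directions:

```cpp
#include <Eigen/Dense>
#include <iostream>

// Non-rigorous sketch of (3.15): given approximate unstable/stable
// eigenvector blocks Vu, Vs (2n x n each) and a midpoint error matrix E
// (2n x n), return the renormalized frame whose error lies essentially in
// the stable directions.
Eigen::MatrixXd renormalizedFrame(const Eigen::MatrixXd& Vu,
                                  const Eigen::MatrixXd& Vs,
                                  const Eigen::MatrixXd& E) {
    const int n = Vu.cols();
    Eigen::MatrixXd V(Vu.rows(), 2 * n);
    V << Vu, Vs;                                               // [ Vu | Vs ]
    Eigen::MatrixXd EuEs = V.colPivHouseholderQr().solve(E);   // [ Eu ; Es ]
    Eigen::MatrixXd Eu = EuEs.topRows(n);
    Eigen::MatrixXd Es = EuEs.bottomRows(n);
    Eigen::MatrixXd I = Eigen::MatrixXd::Identity(n, n);
    return Vu + Vs * Es * (I + Eu).inverse();                  // (3.15)
}

int main() {
    const int n = 3;
    Eigen::MatrixXd Vu = Eigen::MatrixXd::Random(2 * n, n);
    Eigen::MatrixXd Vs = Eigen::MatrixXd::Random(2 * n, n);
    Eigen::MatrixXd E  = 1e-3 * Eigen::MatrixXd::Random(2 * n, n);
    std::cout << renormalizedFrame(Vu, Vs, E) << std::endl;
    return 0;
}
```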

Note that all of the error terms in U(−L−) are essentially in the stable eigendirections. Thus, for any L+ ≥ 0, we may define a frame matrix U : [−L−, L+] → R^{2n×n} for E^u_−(x) by defining the i-th column of U(x) as the solution to the initial value problem (2.1) taking as initial condition at x = −L− the i-th column of U(−L−).

Using rigorous numerics, we are able to similarly extend U(−L−) to a function U : [−L−, L+] → IR^{2n×n} and obtain an enclosure of the frame matrix U(x) for x ∈ [−L−, L+]. In particular, we define the i-th column of U(x) as the solution to the initial value problem (2.1) taking as initial condition at x = −L− the i-th column of U(−L−).

3.4 Determining L+

To prove that E^u_−(x; 0) ∩ D = {0} for all x ≥ L+, we aim to show that ‖π_{Γ1} − π_{Γ2}‖ < 1 using the estimate in (2.35). Analogous to Theorem 4, we may explicitly obtain bounds on λ_{T+} and define ε_0 as in (2.22). To obtain all of the necessary constants for Proposition 2.9, we need to compute bounds on

b := (1/√(µ*)) e^{−2ν^+_n L+} ‖B̃‖,   µ* := min{ µ : µ ∈ σ(Γ̃^T Γ̃) }.   (3.16)

The matrices B̃, Γ̃ are defined in terms of how a frame matrix U(0) of E^u_−(0) may be written in terms of basis vectors spanning E^s_+(0) and E^u_+(0), cf. (2.25) and (2.32). By increasing L+ one is able to control the size of b; however, the precise value of L+ should be taken with a grain of salt, as heteroclinic orbits are translation invariant. As Theorem 4 was stated without explicit reference to the value of L−, we will without loss of generality choose L+ = 0.

Before further discussing how to compute the matrices B̃, Γ̃, recall that throughout Section 2.2 we assume the particular form of the frame matrix U(x) of E^u_−(x) given in (2.17). Namely, we assume that U(x) = [U_ϕ | U_1 | · · · | U_{n−1}], where U_ϕ = (ϕ′, Dϕ″) and U_1, . . . , U_{n−1} are some other vectors for which the columns of U(x) span E^u_−(x). However, this choice of frame matrix is generically at odds with how we defined the frame matrix U(x) in (3.15), where for 1 ≤ i ≤ n we simply took each column U_i(−L−) to be an approximate unstable eigenvector of A_{−∞}. To define a frame matrix of the form required in (2.17), first fix some 1 ≤ i ≤ n, and define a matrix Û_i(x) ∈ R^{2n×(n−1)} by

Û_i(x) := [ U_1(x) · · · U_{i−1}(x)  U_{i+1}(x) · · · U_n(x) ],   (3.17)

where each U_k is a column of the frame matrix U defined in (3.15). So long as U_ϕ is not in the span of the columns of Û_i(x), then Ũ(x) := [U_ϕ(x) | Û_i(x)] is a frame matrix for E^u_−(x). For this particular choice of frame matrix, we may now compute the matrices B, Γ ∈ R^{n×(n−1)}. Recall from (2.25) that β^k_j = β̃^k_j e^{−ν^+_j x}, γ^k_j = γ̃^k_j e^{ν^+_j x}, and

U_k(x) = Σ_{j=1}^{n} γ̃^k_j Ṽ^{+,u}_j + β̃^k_j Ṽ^{+,s}_j
       = Σ_{j=1}^{n} γ^k_j (V^{+,u}_j + W̃^{+,u}_j(x)) + β^k_j (V^{+,s}_j + W̃^{+,s}_j(x)).

Note from (2.23) that |W̃^{+,s/u}_j(x)| ≤ ε_0 for all x ≥ L+ and all j. Thus, having assumed L+ = 0, fixed 1 ≤ i ≤ n, and writing B̃ = [β̃¹ | · · · | β̃^{n−1}] and Γ̃ = [γ̃¹ | · · · | γ̃^{n−1}] as in (2.34), we obtain

Û_i(L+) = ( [ V^{+,u}  V^{+,s} ] + [ W^{+,u}  W^{+,s} ] ) [ Γ̃ ; B̃ ],

where [ Γ̃ ; B̃ ] denotes the 2n × (n−1) matrix obtained by stacking Γ̃ on top of B̃. Let V̄^{+,u}, V̄^{+,s} denote numerical approximations of V^{+,u}, V^{+,s} and define an interval matrix E_{L+} ∈ IR^{2n×2n} such that

E_{L+} ∋ [ W^{+,u}  W^{+,s} ] + [ V^{+,u}  V^{+,s} ] − [ V̄^{+,u}  V̄^{+,s} ],

from which it follows that

[ Γ̃ ; B̃ ] ∈ ( I + [ V̄^{+,u}  V̄^{+,s} ]^{−1} E_{L+} )^{−1} [ V̄^{+,u}  V̄^{+,s} ]^{−1} Û_i(L+).   (3.18)

Using validated numerics, we are able to solve (3.18) to obtain enclosures Γ̃ ∈ Γ̃_I ∈ IR^{n×(n−1)} and B̃ ∈ B̃_I ∈ IR^{n×(n−1)}.

From here it is straightforward to bound ‖B̃‖ in terms of the interval enclosure B̃_I, as needed in the estimate for b. However, obtaining a lower bound on µ*, cf. (3.16), requires slightly more finesse. First, let us write Γ̃_I = Γ̃_c + Γ̃_∆, where Γ̃_c ∈ IR^{n×(n−1)} is the midpoint of Γ̃_I with zero width and Γ̃_∆ = Γ̃_I − Γ̃_c. We obtain a bound on µ* by way of the Rayleigh quotient.

µ* = min{ µ : µ ∈ σ(Γ̃^T Γ̃) }
   ≥ inf_{Γ̃_δ ∈ Γ̃_∆} inf_{x ∈ R^{n−1}, |x|=1} ( x^T (Γ̃_c + Γ̃_δ)^T (Γ̃_c + Γ̃_δ) x ) / (x^T x)
   = inf_{Γ̃_δ ∈ Γ̃_∆} inf_{x ∈ R^{n−1}, |x|=1} ( x^T ( Γ̃_c^T Γ̃_c + 2 Γ̃_δ^T Γ̃_c + Γ̃_δ^T Γ̃_δ ) x ) / (x^T x)
   ≥ min{ µ : µ ∈ σ(Γ̃_c^T Γ̃_c) } − 2 ‖Γ̃_∆^T Γ̃_c‖ − ‖Γ̃_∆^T Γ̃_∆‖.

As Γ̃_c is an interval matrix with zero width, calculating min{ µ : µ ∈ σ(Γ̃_c^T Γ̃_c) } can be done with little error. Thus, in the manners described above, we are able to estimate b and µ*. Furthermore, recall that the matrices B̃ and Γ̃ were defined relative to Û_i for a choice of 1 ≤ i ≤ n. Assuming that U_ϕ ∉ span{U_k} for all 1 ≤ k ≤ n, there are n different ways to define a frame matrix [U_ϕ | Û_i] and compute corresponding matrices B̃ and Γ̃. In practice, we optimize this choice of i so that b = ‖B̃‖ / √(µ*) is minimized.
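In the application of Section 4 we have n = 3, so Γ̃ is 3 × 2 and the Gram matrix Γ̃^T Γ̃ is 2 × 2. The following floating-point sketch (with made-up midpoint and radius matrices) evaluates the final lower bound above, over-estimating the spectral norms by Frobenius norms, which keeps the inequality valid:

```cpp
#include <cmath>
#include <cstdio>

// Lower bound mu* >= lambda_min(Gc^T Gc) - 2||Gd^T Gc|| - ||Gd^T Gd||,
// where Gc is a midpoint matrix and Gd an entrywise radius bound.
// Spectral norms are over-estimated by Frobenius norms.
double frobNorm3x2(const double A[3][2]) {
    double s = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 2; ++j) s += A[i][j] * A[i][j];
    return std::sqrt(s);
}

double lambdaMinOfGram(const double G[3][2]) {
    // 2x2 Gram matrix M = G^T G, smallest eigenvalue in closed form.
    double m11 = 0, m12 = 0, m22 = 0;
    for (int i = 0; i < 3; ++i) {
        m11 += G[i][0] * G[i][0];
        m12 += G[i][0] * G[i][1];
        m22 += G[i][1] * G[i][1];
    }
    double mean = 0.5 * (m11 + m22);
    double rad  = std::sqrt(0.25 * (m11 - m22) * (m11 - m22) + m12 * m12);
    return mean - rad;
}

int main() {
    double Gc[3][2] = { {1.0, 0.1}, {0.2, 0.9}, {0.0, 0.3} };       // made-up midpoint
    double Gd[3][2] = { {1e-3, 1e-3}, {1e-3, 1e-3}, {1e-3, 1e-3} }; // made-up radii
    double fc = frobNorm3x2(Gc), fd = frobNorm3x2(Gd);
    double lower = lambdaMinOfGram(Gc) - 2.0 * fd * fc - fd * fd;
    std::printf("mu* >= %g\n", lower);
    return 0;
}
```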

3.5 Counting Conjugate Points

By Lemma 1.3, the problem of computing conjugate points is equivalent to counting the number of zeros of the function

F(x) := det[ π1 ∘ U(x) ].

In Section 3.3 we discussed how to compute an interval enclosure of the frame matrix U(x) for E^u_−(x) and x ∈ [−L−, L+]. In a straightforward manner, we are able to obtain interval bounds on F, and on F′ as well, by applying the Jacobi formula

d/dx det U_1(x) = tr( adj(U_1(x)) U_1′(x) ).
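In our setting the matrix U_1(x) appearing in the Jacobi formula, which we read as the 3 × 3 matrix π1 ∘ U(x) for n = 3, is small enough that the adjugate can be formed directly from cofactors. The sketch below (plain floating point; the proof applies the same formulas to interval enclosures of U_1 and U_1′) evaluates F and F′ at a single point:

```cpp
#include <cstdio>

// F = det(U) and F' = tr(adj(U) U') for 3x3 matrices.
double det3(const double A[3][3]) {
    return A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
         - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
         + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]);
}

void adj3(const double A[3][3], double B[3][3]) {
    // Adjugate via 2x2 cofactors, so that A * B = det(A) * I.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            int r0 = (j + 1) % 3, r1 = (j + 2) % 3;   // rows indexed by j (transpose)
            int c0 = (i + 1) % 3, c1 = (i + 2) % 3;   // columns indexed by i
            B[i][j] = A[r0][c0] * A[r1][c1] - A[r0][c1] * A[r1][c0];
        }
}

double jacobiDerivative(const double U[3][3], const double Up[3][3]) {
    double B[3][3];
    adj3(U, B);
    double tr = 0.0;                                  // tr(adj(U) * U')
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) tr += B[i][k] * Up[k][i];
    return tr;
}

int main() {
    double U[3][3]  = { {2, 0, 0}, {0, 3, 0}, {0, 0, 4} };
    double Up[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
    std::printf("F = %g, F' = %g\n", det3(U), jacobiDerivative(U, Up));  // 24 and 26
    return 0;
}
```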

Remark 3.1. For x ≤ −L− we have bounds for a frame matrix of E^u_−(x) given by U(x) ∈ V̄^{−,u} + E^−, with E^− given in (3.14). An alternative proof of Proposition 2.5, showing E^u_−(x) ∩ D = {0}, may be obtained by directly computing det( π1 ∘ (V̄^{−,u} + E^−) ) using interval arithmetic and showing that this quantity is bounded away from 0.

If L− and L+ are chosen so that Propositions 2.5 and 2.8 are satisfied, then all of the zeros of F lie in the interval [−L−, L+]. To rigorously enumerate the zeros of F, we take an approach based on the bisection method. That is, we subdivide [−L−, L+] into a cover {[x_i, x_{i+1}]} of subintervals with non-overlapping interiors. For each subinterval [x_i, x_{i+1}], we attempt to prove either that F is bounded away from 0, or that F has a unique zero in (x_i, x_{i+1}). We do so by checking the hypotheses of the two statements below:

(i) If F(x_i) · F(x_{i+1}) > 0 (i.e. the endpoints have the same sign), and any of the following hold:

    (a) 0 ∉ F([x_i, x_{i+1}]), i.e. direct computation;
    (b) 0 ∉ F′([x_i, x_{i+1}]);
    (c) 0 ∉ ( F(x_i) + [0, (x_{i+1} − x_i)/2] · F′([x_i, x_{i+1}]) ) ∪ ( F(x_{i+1}) − [0, (x_{i+1} − x_i)/2] · F′([x_i, x_{i+1}]) );

then there are no zeros of F in [x_i, x_{i+1}]. Note that condition (c) is essentially the mean value theorem.

(ii) If F(x_i) · F(x_{i+1}) < 0 (i.e. the endpoints have different signs) and 0 ∉ F′([x_i, x_{i+1}]), then there is a unique zero of F in (x_i, x_{i+1}).

If either statement (i) or (ii) holds on every subinterval in the cover {[x_i, x_{i+1}]}, then F has a number of zeros in [−L−, L+] equal to the number of subintervals on which statement (ii) holds. If on any subinterval the computer program is unable to verify either statement (i) or (ii), then the computer assisted proof for enumerating the zeros of F on [−L−, L+] is not successful. In such a case, the computation may be attempted again using a cover of [−L−, L+] with a finer mesh. This will generically improve the interval enclosures of estimates such as F([x_i, x_{i+1}]) or F′([x_i, x_{i+1}]). Further improvements may be had by reducing L+ + L−, because – by the nature of rigorously solving an initial value problem – as x increases, the accuracy with which we are able to compute F decreases. However, the value of L+ + L− can only be reduced to the extent that we are still able to, as detailed in Section 3.4, prove that E^u_−(x) ∩ D = {0} for all x ≥ L+.
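The following self-contained C++ sketch implements the subinterval tests (i) and (ii) above for a toy function, using a simple endpoint-pair interval type without outward rounding; it is illustrative only, whereas the actual proof evaluates F and F′ through the validated enclosures described in Sections 3.3–3.5.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>

// Toy interval type WITHOUT outward rounding: illustrative only.  F and Fp
// are assumed to return enclosures of F and F' over [a, b].
struct I { double lo, hi; };
bool containsZero(I a) { return a.lo <= 0.0 && 0.0 <= a.hi; }
I add(I a, I b) { return { a.lo + b.lo, a.hi + b.hi }; }
I sub(I a, I b) { return { a.lo - b.lo, a.hi - b.hi }; }
I mul(I a, I b) {
    double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    return { *std::min_element(p, p + 4), *std::max_element(p, p + 4) };
}

// Classify [x0, x1]: +1 = no zero (statement (i)),
// 0 = unique zero in (x0, x1) (statement (ii)), -1 = inconclusive.
int classify(const std::function<I(double, double)>& F,
             const std::function<I(double, double)>& Fp, double x0, double x1) {
    I Fa = F(x0, x0), Fb = F(x1, x1);        // thin intervals at the endpoints
    I Fab = F(x0, x1), Fpab = Fp(x0, x1);    // enclosures over the subinterval
    bool sameSign = (Fa.lo > 0 && Fb.lo > 0) || (Fa.hi < 0 && Fb.hi < 0);
    bool oppSign  = (Fa.hi < 0 && Fb.lo > 0) || (Fa.lo > 0 && Fb.hi < 0);
    if (sameSign) {
        if (!containsZero(Fab))  return +1;                       // test (a)
        if (!containsZero(Fpab)) return +1;                       // test (b)
        I half = { 0.0, 0.5 * (x1 - x0) };                        // test (c): mean value thm
        if (!containsZero(add(Fa, mul(half, Fpab))) &&
            !containsZero(sub(Fb, mul(half, Fpab)))) return +1;
    }
    if (oppSign && !containsZero(Fpab)) return 0;
    return -1;
}

int main() {
    // Toy example: F(x) = x^2 - 1 on [-2, 2] has exactly two zeros.
    auto F  = [](double a, double b) { I x{ a, b }; return sub(mul(x, x), I{ 1.0, 1.0 }); };
    auto Fp = [](double a, double b) { return I{ 2.0 * a, 2.0 * b }; };
    int zeros = 0;
    const int N = 50;
    for (int k = 0; k < N; ++k) {
        double x0 = -2.0 + 4.0 * k / N, x1 = -2.0 + 4.0 * (k + 1) / N;
        int c = classify(F, Fp, x0, x1);
        if (c == 0) ++zeros;
        if (c < 0) { std::printf("inconclusive on [%g, %g]\n", x0, x1); return 1; }
    }
    std::printf("unique zeros counted: %d\n", zeros);
    return 0;
}
```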

4 (Un)stable Fronts in a Reaction-Diffusion System

In this section we construct example PDEs of the form (1.6), use validated numerics to construct front solutions, and use the conjugate point method outlined above to show that fronts can be both stable and unstable. As noted in the introduction, this complements previous results. Classical Sturm-Liouville theory can be used to show that any front in a scalar reaction-diffusion equation must be stable (if the essential spectrum is stable), while any pulse in a scalar reaction-diffusion equation must be unstable. In [BCJ+18] it was shown that, for equations of the form (1.6), a symmetric pulse must also be unstable, thus in some sense generalizing the pulse instability result from the scalar to the system case. Here we show that the front stability result from the scalar case does not extend to the system case; fronts can be either stable or unstable. Although it is not surprising that both stable and unstable fronts can exist for systems of reaction-diffusion equations, we are not aware of this being discussed anywhere. We will build our example out of the scalar bistable equation

u_t = u_xx + b f(u),   f(u) = u (u − 1/2)(1 − u),   (4.1)

which has an explicit front solution given by

ϕ0(x; b) = 1 / (1 + e^{−x √(b/2)})   (4.2)

that is spectrally stable by the classical results of Sturm-Liouville theory. We note that the essential spectrum of the linearized operator is given by (−∞, −b/2], which is stable if b > 0. We will couple three copies of this equation together in such a way that the resulting system of equations has the gradient structure of (1.6), so that all of the preceding results apply.
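For completeness, one can verify (4.2) directly: writing k = √(b/2), the function ϕ0(x) = 1/(1 + e^{−kx}) satisfies

ϕ0′ = k ϕ0 (1 − ϕ0),   ϕ0″ = k² ϕ0 (1 − ϕ0)(1 − 2ϕ0) = −2k² f(ϕ0),

so that ϕ0″ + b f(ϕ0) = (b − 2k²) f(ϕ0), which vanishes precisely when k = √(b/2).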

To that end, consider the system

∂_t u1 = ∂_x² u1 + b1 f(u1) + c12 g(u1, u2),
∂_t u2 = ∂_x² u2 + b2 f(u2) + c12 g(u2, u1) + c23 g(u2, u3),   (4.3)
∂_t u3 = ∂_x² u3 + b3 f(u3) + c23 g(u3, u2),

where f, g are given by

f(u) = u (u − 1/2)(1 − u),   g(u, w) = w (1 − w)(u − 1/2).

The coefficients cij are the coupling coefficients; the positive parameters bi control the extent to which the uncoupled equations are the same. If the cij are all zero, then there is a stationary front solution

Φ0(x) = (ϕ0(x; b1), ϕ0(x; b2), ϕ0(x; b3)).

Since each of the three components can be translated independently of the others, in this case the front is stable with a triple eigenvalue at the origin. The corresponding three independent eigenfunctions are given by

(ϕ0′(x; b1), 0, 0),   (0, ϕ0′(x; b2), 0),   (0, 0, ϕ0′(x; b3)).

This system is of the form (1.6) with G : R³ → R given by

G(u1, u2, u3) = −(1/4) Σ_{1≤i≤3} b_i u_i² (1 − u_i)² − (1/2) Σ_{1≤i≤2} c_{i,i+1} u_i (1 − u_i) u_{i+1} (1 − u_{i+1}).   (4.4)

We will prove the following.

Theorem 5. Consider the PDE in (4.3) with the parameter b = (b1, b2, b3) = (1, .98, .96) fixed. At each of the four parameter combinations c_{±,±} = (±c12, ±c23) = (±.04, ±.02), there exists a standing wave solution ϕ_{±,±} to (4.3) such that

lim_{x→−∞} ϕ_{±,±}(x) = (0, 0, 0),   lim_{x→+∞} ϕ_{±,±}(x) = (1, 1, 1).

Furthermore,

• There are exactly 0 positive eigenvalues in the point spectrum of ϕ−,−.

• There is exactly 1 positive eigenvalue in the point spectrum of both ϕ+,− and ϕ−,+.

• There are exactly 2 positive eigenvalues in the point spectrum of ϕ+,+.

We note that the specific parameters used in Theorem 5 are somewhat arbitrary, and the program used to produce the computer assisted proof [BJ21] is capable of performing the analogous stability calculation at other (individual) parameters. The parameters in Theorem 5 were selected so that the standing fronts ϕ_{±,±} to the coupled equation could be computed using validated numerics, in particular when using the standing front to the uncoupled equation (4.1) as an initial approximation. Parameter continuation could be employed to investigate the existence of standing waves to (4.3) and their spectral stability at parameters farther away from the uncoupled case; however, this was not explored in this work. We also note that, even though we do not include it here, we did consider the corresponding example in the case n = 2, which produced similar results. Moreover, this example can be generalized in a relatively straightforward way to n equations for n > 3.

Proof. The C++ code for the computer assisted proof of this theorem may be found in [BJ21]. We summarize our approach here. To calculate and prove the existence of these standing waves, it is equivalent to compute a connecting orbit of the ODE in (3.3), which for our specific example is given by:

u1′ = v1,   v1′ = −b1 f(u1) − c12 g(u1, u2),
u2′ = v2,   v2′ = −b2 f(u2) − c12 g(u2, u1) − c23 g(u2, u3),
u3′ = v3,   v3′ = −b3 f(u3) − c23 g(u3, u2).
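For concreteness, the right-hand side of this six-dimensional vector field can be coded directly; the following plain double-precision sketch (our own variable layout, not the CAPD-based implementation in [BJ21]) evaluates it at the parameters of Theorem 5 with c_{+,+}:

```cpp
#include <cstdio>

// Right-hand side of the six-dimensional spatial ODE above, with
// f(u) = u(u - 1/2)(1 - u) and g(u, w) = w(1 - w)(u - 1/2).
struct Params { double b1, b2, b3, c12, c23; };

double f(double u)           { return u * (u - 0.5) * (1.0 - u); }
double g(double u, double w) { return w * (1.0 - w) * (u - 0.5); }

void spatialField(const double z[6], const Params& p, double dz[6]) {
    const double u1 = z[0], u2 = z[1], u3 = z[2];
    const double v1 = z[3], v2 = z[4], v3 = z[5];
    dz[0] = v1;
    dz[1] = v2;
    dz[2] = v3;
    dz[3] = -p.b1 * f(u1) - p.c12 * g(u1, u2);
    dz[4] = -p.b2 * f(u2) - p.c12 * g(u2, u1) - p.c23 * g(u2, u3);
    dz[5] = -p.b3 * f(u3) - p.c23 * g(u3, u2);
}

int main() {
    Params p = { 1.0, 0.98, 0.96, 0.04, 0.02 };   // parameters of Theorem 5, c_{+,+}
    double z[6] = { 0.5, 0.5, 0.5, 0.1, 0.1, 0.1 }, dz[6];
    spatialField(z, p, dz);
    for (double d : dz) std::printf("%g ", d);
    std::printf("\n");
    return 0;
}
```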

Define p0 = 0 ∈ R^6 and p1 = (1, 1, 1, 0, 0, 0) ∈ R^6. Then p0, p1 are fixed points of (3.3) with equal energy H(p0) = H(p1) = 0. To calculate and prove the existence of these standing waves, we use the methodology described in Section 3.1.

For a fixed L− > 0, we use the methodology described in Section 3.2 to calculate a rigorous enclosure of the three vectors which span E^u_−(−L−). Furthermore, we show that E^u_−(x) ∩ D = {0} for all x ≤ −L−.

As described in Section 3.3, we may then solve the initial value problem (2.1), taking as initial conditions the three vectors which span E^u_−(−L−), and thus obtain a validated representation of E^u_−(x) for x ∈ [−L−, L+]. More precisely, (2.1) is given by U_x = Ã U for

Ã(x) = [ 0       D^{−1} ]
       [ −M(x)   0      ]

as in (2.1), where for our example we take D to be the identity matrix, and the matrix M(x) = ∇²G(ϕ(x)) is defined for the standing wave ϕ = (u1, u2, u3) as

M(x) := [ b1 f̃(u1) + c12 g̃(u2)    c12 h̃(u1, u2)                         0                      ]
        [ c12 h̃(u1, u2)            b2 f̃(u2) + c12 g̃(u1) + c23 g̃(u3)    c23 h̃(u2, u3)          ]
        [ 0                         c23 h̃(u2, u3)                         b3 f̃(u3) + c23 g̃(u2)  ]

and the functions f̃, g̃, h̃ are defined as

f̃(u) = ∂_u f(u) = 3u(1 − u) − 1/2,   g̃(u) = u(1 − u),   h̃(u, w) = ∂_w g(u, w) = −2(u − 1/2)(w − 1/2).
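A corresponding sketch for the matrix M(x), assembled entrywise from f̃, g̃ and h̃ exactly as displayed above (again plain floating point rather than the interval evaluation used in the proof):

```cpp
#include <cstdio>

// The 3x3 matrix M(x) = Hess G(phi(x)) above, assembled from ft, gt, ht.
double ft(double u)           { return 3.0 * u * (1.0 - u) - 0.5; }
double gt(double u)           { return u * (1.0 - u); }
double ht(double u, double w) { return -2.0 * (u - 0.5) * (w - 0.5); }

void hessianM(double u1, double u2, double u3,
              double b1, double b2, double b3, double c12, double c23,
              double M[3][3]) {
    M[0][0] = b1 * ft(u1) + c12 * gt(u2);
    M[0][1] = c12 * ht(u1, u2);          M[0][2] = 0.0;
    M[1][0] = M[0][1];
    M[1][1] = b2 * ft(u2) + c12 * gt(u1) + c23 * gt(u3);
    M[1][2] = c23 * ht(u2, u3);
    M[2][0] = 0.0;                       M[2][1] = M[1][2];
    M[2][2] = b3 * ft(u3) + c23 * gt(u2);
}

int main() {
    double M[3][3];
    hessianM(0.5, 0.5, 0.5, 1.0, 0.98, 0.96, 0.04, 0.02, M);
    for (int i = 0; i < 3; ++i)
        std::printf("%g %g %g\n", M[i][0], M[i][1], M[i][2]);
    return 0;
}
```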

We then use the methodology described in Section 3.4 to verify the hypotheses of Proposition 2.9, and thus prove that E^u_−(x) ∩ D = {0} for all x ≥ L+. Lastly, we use the techniques described in Section 3.5 to count all of the conjugate points in the interval [−L−, L+]. As we have also proved that there are no conjugate points when x ≤ −L− nor when x ≥ L+, we have thus obtained a precise count of all conjugate points for x ∈ R. By Theorem 1, the number of conjugate points is equal to the number of positive eigenvalues in the point spectrum of ϕ_{±,±}.

For all values of c = (c12, c23) we consider, the front exists within an intersection of stable and unstable manifolds in the spatial dynamics. Each of these manifolds is 3-dimensional, and the zero energy surface is codimension-1. Therefore, we generically expect a one-dimensional intersection. However, for c = 0 the intersection is 3-dimensional. This corresponds to the fact that, when c = 0, the equations for the ui are uncoupled, and as noted above each of the three component fronts can be translated independently of the other; this leads to the eigenvalue at λ = 0 of geometric and algebraic multiplicity three. For c 6= 0, one of the zero eigenvalues must remain at zero, but the others should move. One could compute the location of these other two eigenvalues by perturbing from the c = 0 case.

However, such a perturbative result would only hold for |c| sufficiently small, without a precise characterization of what "sufficiently small" means. One benefit of using validated numerics to obtain this (in)stability result is that we are not required to have c near zero.

5 Future Directions

One natural direction to explore next would be the extension of such results to a more general class of PDEs than (1.6). To do so, one would first need a theoretical result showing that the number of unstable eigenvalues can indeed be computed by instead counting conjugate points. For example, such a result for a class of fourth-order PDEs can be found in [How20]. Next, one would need to develop a method using validated numerics, similar to what we have done above, to actually count those conjugate points. This is the subject of current work. A much more difficult, but in some sense more interesting, extension would be to extend these ideas to systems in multiple spatial dimensions by building upon the work in [BCJ+20, BCJ+21, CGL18, CJLS16, CJM15, DJ11]. In addition, as mentioned above, the pulse instability result in [BCJ+18] applies only to symmetric pulses in systems of the form (1.6). It would be interesting to construct an example of an asymmetric pulse in such a system that is stable, thus implying, together with the example of §4, that the stability/instability dichotomy for fronts/pulses in scalar reaction-diffusion equations is really specific to the scalar case. This is also the subject of current work.

6 Acknowledgments

The authors wish to thank the Mathematical Sciences Research Institute in Berkeley, CA; this work was begun while the authors were in residence there during Fall 2018. M.B. would like to acknowledge the support of NSF grant DMS-1907923 and an AMS Birman Fellowship. The authors would like to thank Maciej Capiński for showing us the ropes of the CAPD library while at MSRI.

References

[AK15] Gianni Arioli and Hans Koch. Existence and stability of traveling pulse solutions of the FitzHugh-Nagumo equation. Nonlinear Anal., 113:51–70, 2015.

[AK20] Gianni Arioli and Hans Koch. Traveling wave solutions for the FPU chain: a constructive approach. Nonlinearity, 33(4):1705, 2020.

[Arn67] V. I. Arnol'd. On a characteristic class entering into conditions of quantization. Funkcional. Anal. i Priložen., 1:1–14, 1967.

[Arn85] V. I. Arnol’d. The Sturm theorems and symplectic geometry. Functional Anal. Appl., 19(4):251–259, 1985.

[BCC+20] T.J. Baird, P. Cornwall, G. Cox, C.K.R.T. Jones, and R. Marangell. A Maslov index for non-Hamiltonian systems. Preprint, 2020.

[BCJ+18] M. Beck, G. Cox, C. Jones, Y. Latushkin, K. McQuighan, and A. Sukhtayev. Instability of pulses in gradient reaction-diffusion systems: a symplectic ap- proach. Philos. Trans. Roy. Soc. A, 376(2117):20170187, 20, 2018.

[BCJ+20] M. Beck, G. Cox, C. K. R. T. Jones, Y. Latushkin, and A. Sukhtayev. Exponential dichotomies for elliptic pde on radial domains. Mathematics of Wave Phenomena, 2020.

[BCJ+21] Margaret Beck, Graham Cox, Christopher Jones, Yuri Latushkin, and Alim Sukhtayev. A dynamical approach to semilinear elliptic equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 38(2):421–450, 2021.

[BJ21] Margaret Beck and Jonathan Jaquette. Codes of "Validated spectral stability via conjugate points". https://github.com/JCJaquette/Computing-Conjugate-Points, 2021.

[BNS+18] Blake Barker, Rose Nguyen, Björn Sandstede, Nathaniel Ventura, and Colin Wahl. Computing Evans functions numerically via boundary-value problems. Phys. D, 367:1–10, 2018.

[Bot56] Raoul Bott. On the iteration of closed geodesics and the Sturm intersection theory. Comm. Pure Appl. Math., 9:171–206, 1956.

[BR89] Garrett Birkhoff and Gian-Carlo Rota. Ordinary differential equations. John Wiley & Sons, Inc., New York, fourth edition, 1989.

[BSZ10] Margaret Beck, Björn Sandstede, and Kevin Zumbrun. Nonlinear stability of time-periodic viscous shocks. Arch. Ration. Mech. Anal., 196(3):1011–1076, 2010.

[BZ16] Blake Barker and Kevin Zumbrun. Numerical proof of stability of viscous shock profiles. Math. Models Methods Appl. Sci., 26(13):2451–2469, 2016.

[CDB09] Frédéric Chardard, Frédéric Dias, and Thomas J. Bridges. Computing the Maslov index of solitary waves. I. Hamiltonian systems on a four-dimensional phase space. Phys. D, 238(18):1841–1867, 2009.

[CDB11] Frédéric Chardard, Frédéric Dias, and Thomas J. Bridges. Computing the Maslov index of solitary waves, Part 2: Phase space with dimension greater than four. Phys. D, 240(17):1334–1344, 2011.

[CGL18] Roberto Castelli, Marcio Gameiro, and Jean-Philippe Lessard. Rigorous numerics for ill-posed PDEs: periodic orbits in the Boussinesq equation. Arch. Ration. Mech. Anal., 228(1):129–157, 2018.

[CH14] Chao-Nien Chen and Xijun Hu. Stability analysis for standing pulse solutions to FitzHugh–Nagumo equations. Calculus of Variations and Partial Differential Equations, 49(1-2):827–845, 2014.

[CJ20] P. Cornwall and C.K.R.T. Jones. Bifurcation to instability through the lens of the Maslov index. Preprint, 2020.

[CJLS16] Graham Cox, Christopher K. R. T. Jones, Yuri Latushkin, and Alim Sukhtayev. The Morse and Maslov indices for multidimensional Schrödinger operators with matrix-valued potentials. Trans. Amer. Math. Soc., 368(11):8145–8207, 2016.

[CJM15] Graham Cox, Christopher K. R. T. Jones, and Jeremy L. Marzuola. A Morse index theorem for elliptic operators on bounded domains. Comm. Partial Differential Equations, 40(8):1467–1497, 2015.

[CTZ18] Maciej J. Capiński, Dmitry Turaev, and Piotr Zgliczyński. Computer assisted proof of the existence of the Lorenz attractor in the Shimizu–Morioka system. Nonlinearity, 31(12):5410, 2018.

[CWZ17] Maciej J. Capiński and Anna Wasieczko-Zając. Computer-assisted proof of Shil'nikov homoclinics: with application to the Lorenz-84 model. SIAM J. Appl. Dyn. Syst., 16(3):1453–1473, 2017.

[CZ17] Maciej J. Capiński and Piotr Zgliczyński. Beyond the Melnikov method: a computer assisted approach. Journal of Differential Equations, 262(1):365–417, 2017.

[DJ11] Jian Deng and Christopher Jones. Multi-dimensional Morse index theorems and a symplectic view of elliptic boundary value problems. Trans. Amer. Math. Soc., 363(3):1487–1508, 2011.

[DN08] J. Deng and S. Nii. An infinite-dimensional Evans function theory for elliptic boundary value problems. J. Differential Equations, 244(4):753–765, 2008.

[GS19] Javier Gómez-Serrano. Computer-assisted proofs in PDE: a survey. SeMA Journal, 76(3):459–484, 2019.

[HLS18] Peter Howard, Yuri Latushkin, and Alim Sukhtayev. The Maslov and Morse indices for system Schrödinger operators on R. Indiana Univ. Math. J., 67(5):1765–1815, 2018.

[HNW87] Ernst Hairer, Syvert Paul Nørsett, and Gerhard Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer-Verlag, 1987.

[How20] P. Howard. The Maslov index and spectral counts for linear Hamiltonian systems on R. Preprint, 2020.

[HW03] Eldon Hansen and G. William Walster. Global optimization using interval analysis: revised and expanded. CRC Press, 2003.

[JLM13] Christopher K. R. T. Jones, Yuri Latushkin, and Robert Marangell. The Morse and Maslov indices for matrix Hill's equations. In Spectral analysis, differential equations and mathematical physics: a festschrift in honor of Fritz Gesztesy's 60th birthday, volume 87 of Proc. Sympos. Pure Math., pages 205–233. Amer. Math. Soc., Providence, RI, 2013.

[KMWZ20] Tomasz Kapela, Marian Mrozek, Daniel Wilczak, and Piotr Zgliczyński. CAPD::DynSys: a flexible C++ toolbox for rigorous numerical analysis of dynamical systems. Communications in Nonlinear Science and Numerical Simulation, page 105578, 2020.

[KP13] Todd Kapitula and Keith Promislow. Spectral and dynamical stability of nonlinear waves, volume 185 of Applied Mathematical Sciences. Springer, New York, 2013. With a foreword by Christopher K. R. T. Jones.

[LI82] Oscar E. Lanford III. A computer-assisted proof of the Feigenbaum conjectures. Bulletin of the American Mathematical Society, 6(3):427–434, 1982.

[NPW19] Mitsuhiro T. Nakao, Michael Plum, and Yoshitaka Watanabe. Numerical verification methods and computer-assisted proofs for partial differential equations. Springer, 2019.

[Ois98] Shin’ichi Oishi. Numerical verification method of existence of connecting orbits for continuous dynamical systems. Journal of Universal Computer Science, 4(2):193–201, 1998.

[OS10] M. Oh and B. Sandstede. Evans functions for periodic waves on infinite cylindrical domains. J. Differential Equations, 248(3):544–555, 2010.

[Rok01] Jon G Rokne. Interval arithmetic and interval analysis: an introduction. In Granular computing, pages 1–22. Springer, 2001.

[San02] Björn Sandstede. Stability of travelling waves. In Handbook of dynamical systems, Vol. 2, pages 983–1055. North-Holland, Amsterdam, 2002.

[Tuc02] Warwick Tucker. A rigorous ODE solver and Smale's 14th problem. Foundations of Computational Mathematics, 2(1):53–117, 2002.

[vdBBLM18] Jan Bouwe van den Berg, Maxime Breden, Jean-Philippe Lessard, and Maxime Murray. Continuation of homoclinic orbits in the suspension bridge equation: a computer-assisted proof. Journal of Differential Equations, 264(5):3086–3130, 2018.

[vdBDLMJ15] Jan Bouwe van den Berg, Andréa Deschênes, Jean-Philippe Lessard, and Jason D. Mireles James. Stationary coexistence of hexagons and rolls via rigorous computations. SIAM Journal on Applied Dynamical Systems, 14(2):942–979, 2015.

[vdBL15] Jan Bouwe van den Berg and Jean-Philippe Lessard. Rigorous numerics in dynamics. Notices Amer. Math. Soc, 62(9):1057–1061, 2015.

[vdBS20] Jan Bouwe van den Berg and Ray Sheombarsing. Validated computations for connecting orbits in polynomial vector fields. Indagationes Mathematicae, 31(2):310–373, 2020.

[WZ03] Daniel Wilczak and Piotr Zgliczyński. Heteroclinic connections between periodic orbits in planar restricted circular three-body problem – a computer assisted proof. Communications in Mathematical Physics, 234(1):37–75, 2003.
