CLASSICAL AND NONCLASSICAL LIE SYMMETRIES OF THE K(m, n) DISPERSION EQUATION

Caylah N. Retz

A Thesis Submitted to the University of North Carolina Wilmington in Partial Fulfillment of the Requirements for the Degree of Master of Science

Department of Mathematics and Statistics

University of North Carolina Wilmington

2012

Approved by

Advisory Committee

Gabriel Lugo Michael Freeze

Russell Herman Chair

Accepted by

Dean, Graduate School

This thesis has been prepared in the style and format consistent with the journal American Mathematical Monthly.

TABLE OF CONTENTS

ABSTRACT
DEDICATION
ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
1 INTRODUCTION
2 ELEMENTARY DIFFERENTIAL GEOMETRY
   2.1 Topology
   2.2 Manifolds
   2.3 Differentiable Maps and Rank
   2.4 Submanifolds
   2.5 Vector Fields
      2.5.1 Flows
      2.5.2 Lie Brackets and Lie Algebras
3 LIE GROUPS
   3.1 r-Parameter Lie Groups
   3.2 Lie Subgroups
      3.2.1 Local Lie Groups
      3.2.2 Transformation Groups
   3.3 One-Parameter Groups of Transformations
      3.3.1 Lie Algebras
      3.3.2 The Lie Series
   3.4 Infinitesimal Transformations
      3.4.1 Fundamental Theorem of Lie
4 SYMMETRY GROUPS AND INVARIANCE

   4.1 Algebraic Systems
      4.1.1 Constructing Invariants
   4.2 Prolongation
   4.3 Prolongation of Differential Equations
      4.3.1 Total Derivatives
      4.3.2 The General Prolongation Formula
   4.4 Burgers’ Equation
   4.5 Nonclassical Symmetries
      4.5.1 The Nonclassical Method
5 THE K(m, n) DISPERSION EQUATION
   5.1 K(2, 2) Equation
   5.2 The K(m, n) Equation
      5.2.1 Calculations
      5.2.2 Generators and Transformation Groups
      5.2.3 Invariants and Reductions
   5.3 Nonclassical Symmetries of K(m, n)
6 CONCLUSION
REFERENCES
APPENDIX
   A The Coefficient Functions φJ
   B Maple Code for K(2, 2)
   C Maple Code for K(m, n)
   D Complete Maple Output
   E Maple Code for Nonclassical K(m, n)

ABSTRACT

The purpose of this thesis is to present applications of Lie groups to solve the K(m, n) dispersion equation. Focus is first placed on discussing the theory behind Lie groups and how they may be applied as a solution technique of a system. Topics of discussion include topology, manifolds, groups, Lie groups, groups of transformations, invariants, and prolongation. We differentiate between what we call classical and nonclassical symmetries and establish methods for calculating each. A simple example of using Lie symmetry methods is thoroughly presented using Burgers’ equation to demonstrate the inner calculations behind this technique. Focus is then changed to the K(m, n) equation, where emphasis is placed on finding the symmetries and summarizing the types of solutions that are produced under both the classical and nonclassical methods.

DEDICATION

This thesis is dedicated to my parents, Jeffrey and Tammy Shunk, for their continued support and encouragement in my academic endeavors.

ACKNOWLEDGMENTS

I would first like to express my infinite gratitude and debt to my thesis advisor, Dr. Russell Herman. His guidance and encouragement were key to grasping essential concepts and were the backbone of accomplishing this work. He always instructed me by providing reference materials, extensive examples, and lectures well in advance so that I could better comprehend the content as I approached it. His consistent effort to be that prepared and spend so much of his time working with me cannot be thanked enough. This work would have been impossible without him, and I could not have had a better teacher or advisor. I would also like to thank my committee members, Dr. Gabriel Lugo and Dr. Michael Freeze, for their time and feedback regarding this thesis. Both are excellent professors and my appreciation for the talents of each greatly influenced my decision to have them on my committee. All three of the professors mentioned here have greatly impacted my education in their own unique fashion, and they each mean very much to me as a result. Thanks are due to my family for their continued support in my academic adventures. I would not have even considered a graduate program without their suggestions, so it is certain that I would not be at this point without them. I am especially indebted to my husband, who has helped me through hard times and sacrificed so much for me to get where I am today.

LIST OF TABLES

1 Complete list of coefficient equations
2 Reduced coefficient list
3 Reduced K(2, 2) coefficient equations
4 Groups, Solutions, and Invariants of K(2, 2) equation
5 Reduced K(m, n) coefficient equations
6 Further reduced K(m, n) coefficient equations
7 K(m, n) generator results
8 Generators, Groups, and Solutions of K(m, n)
9 Invariants of K(m, n) equation
10 Nonclassical generators of K(m, n) equation

LIST OF FIGURES

1 Charts on a manifold M
2 Tangent Vector

3 Path, or orbit, from (x, y) to (x1, y1)
4 Composition Ψ(ε, Ψ(δ, x)) = Ψ(ε + δ, x)
5 Orbit: the path translated level curves take
6 Tangent housing x∗ = X(x; ε)
7 It’s all connected!
8 Invariant Surface
9 Solution process

1 INTRODUCTION

By definition, a differential equation is an equation relating one or more derivatives of an unknown function. Solutions of simple differential equations may be found using one, or more, of many techniques developed in an elementary differential equations course. Most differential equations, however, require more rigorous solution techniques because of their non-linearity and higher orders. As a result, thought must be placed on which solution techniques would be the most fruitful for solving the equation(s) of interest. In this thesis, emphasis is placed on demonstrating one class of solution methods in particular, known as Lie symmetry methods. One of the earliest solution techniques of differential equations we learn is separation of variables. The question that we now wish to present is whether a PDE that is not separable can be made so by a change of variables. The idea is that once the appropriate change of variables, or transformation, is found, the system of differential equations reduces to a system of ordinary differential equations that have known solutions. We will spend a considerable portion of this thesis describing the theory behind Lie symmetry methods with this idea in mind. We will then turn focus to applying these methods to a differential equation of particular interest: the K(m, n) dispersion equation, which models the dispersion patterns of liquid drops [11]. Finding a change of variables that makes a system of differential equations separable is not always a simple task. Obviously, the more challenging the system, the more difficult it would be to have insight into what change of variables would work. The key to this proposed method, then, is being able to calculate what change of variables would simplify any given system. Fortunately, we have a brilliant process developed by Sophus Lie, called Lie symmetry methods, that we will use to put

this idea in motion [7]. The pivotal piece of the theory that Lie developed is the discovery of transformation groups, called Lie groups, that continually map curves into other curves. One basic principle in the theory behind Lie symmetry methods is that solutions of differential equations can be represented by specific functions that we will call invariants. The level curves associated with these functions are known as solution curves. When looking at the level curves, there is a direction that allows each to be “slid” into a nearby level curve. This is known as mapping a solution curve into another. If we can identify a “direction” that allows the level curves to be mapped into each other, we can use it as a criterion that any set of new coordinates has to uphold. As long as the solution curves “slide” into each other, we know the new coordinates have not altered the solutions in any way. When we find these mapping coordinates, we say that we have found a symmetry of that function under that mapping. So far, we have recognized that a change of variables may make an equation solvable and that as long as the new variables do not change the solution curves, then the change is valid. Thought must now be placed on how we are to distinguish what mappings will work. First, we define an orbit as the path that the mapped solution curves will take. Once this path is found, we have a set condition that all of our mappings must satisfy. If the invariant functions which describe the solution curves follow the path of the orbit, then we know that the solution curves mapped along its path will slide into each other. To use this, we must first describe the environment where everything lives and how it all behaves, as we will do in Chapter 2. In the beginning of Chapter 2, we venture into the discussion of topology and manifolds. Before we can find the invariants or orbits of a system, we must describe the region where the solution curves live.
We would like to have a space that makes

it easier to find and use these variables and symmetries. Manifolds are advantageous to our cause, as they have features that make objects described in them “coordinate free.” We will begin by describing concepts in topology and using those to define manifolds. Next, we must consider the possibility that a differential equation may have more than one set of coordinates that reduce it. In fact, it is quite common to find the existence of several changes of variables that will make our system solvable. This is useful because one invariance condition may be easier to utilize than the next. So, the new dilemma is being able to find all of the invariance conditions. It is with this in mind that we appeal to group theory in Chapter 3. Group theory was created as a means to study the solutions, or roots, of algebraic equations. The idea that the solutions of equations could be generated from a group, called a symmetry group, was developed by Lagrange and further studied by Galois. Part of group structure is a binary operation that maps elements of the group to other elements of the group. Symmetry groups are simple examples of this phenomenon, where the operation is composition of functions and each element describes a set of mappings from one element to the next. Groups in this setting are called discrete groups. For our purposes, continuous groups are beneficial since we need to continually map curves under a given parameter. So now we need to describe invariant functions of continuous groups of transformations. Sophus Lie extended existing symmetry techniques of solving systems of algebraic equations to solving systems of differential equations. The extension involved describing a new kind of group, now called a Lie group, that carries within it the underlying properties of a manifold. We begin in Chapter 3 by describing Lie groups and further expanding the list of properties that our groups must have.
We start by redefining the original notion of a group to make the properties behave more like transformations, where we think of

the solution curves as being transformed, or mapped, into each other. We call these “groups of transformations” and, in shaping groups to have these characteristics, we now have a means to describe the directions in which solution curves need to be mapped. So, if we can find a group that is connected to our system of equations and that houses all invariant mappings of that system, then we have an efficient means of extracting them. Finally, a process of using group theory presents itself: if we can find the group of transformations for a system of differential equations, we can almost immediately pull the solutions of the system out of the transformation group by using generators of that group. We establish this at the end of Chapter 3 by using what we will call infinitesimal generators, which relate Lie groups to Lie algebras, by a mapping that we call the exponential map. This connection proves pivotal for establishing a means to actually calculate the infinitesimal generators, which we show in Chapter 4 lead to reductions and solutions. Having presented most of the classical theory of Lie methods, we introduce in Chapter 4 the process by which we may make the theory useful. The step-by-step process is called prolongation, which involves expanding the infinitesimal generators so they act on a bigger space, called a jet space. We demonstrate this method by solving Burgers’ equation using Lie symmetries. At the end of Chapter 4, we introduce a slightly altered version of the step-by-step process of classical symmetries, and add a condition to this process and its solutions to obtain so-called nonclassical symmetries. In the next chapter, Chapter 5, we finally discuss the topic at the epicenter of this work: the K(m, n) dispersion equation. Emphasis is placed on using Lie symmetry methods to find and categorize its solutions in both the classical and nonclassical setting.
The K(m, n) equation is a generalized version of the well-known KdV equation, which produces compacton solutions [14]. Compacton solutions are also

known as solitary waves with compact support, so that the solutions vanish outside a finite region. Among the different solutions we may find for K(m, n), we suspect we will find traveling wave solutions.

2 ELEMENTARY DIFFERENTIAL GEOMETRY

The subject of solving differential equations with Lie symmetry groups involves the intersection of two considerably different topics of mathematics. Concepts in both differential geometry and Lie algebra provide vital preparation for and insight into the foundation of this solution technique. Before describing the solutions of these equations, or even the differential equations themselves, it is important to establish the background. This is the role that differential geometry will play as we describe its relevance by way of manifolds.

2.1 Topology

A manifold provides an environment for the objects it contains that is essentially “coordinate-free.” This proves very useful for the objects that we wish to describe and solve. Defining a manifold requires some discussion of basic ideas in topology.

Definition 2.1.1. A topological space is a non-empty set E together with a family

I = (Ui | i ∈ I)

of subsets of E satisfying the following axioms:

E ∈ I, ∅ ∈ I,

J finite, J ⊂ I ⇒ ∩i∈J Ui ∈ I,

J ⊂ I ⇒ ∪i∈J Ui ∈ I.

The elements of I are called open relative to I, or, if only one topology is used, simply open. The pair (E, I) is called a topological space.

Definition 2.1.2. Let (E, I) be a topological space. It is a Hausdorff space if and only if it satisfies the following additional axiom:

For every pair of distinct points x1, x2 ∈ E there are disjoint neighborhoods Ui(xi), i = 1, 2:

(∀x1, x2 ∈ E, x1 ≠ x2)(∃U1(x1), U2(x2)) : U1(x1) ∩ U2(x2) = ∅ [13].

Definition 2.1.3. A topological space is connected if it cannot be written as the disjoint union of two open sets.
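For a finite set E, the axioms of Definition 2.1.1 can be checked exhaustively. The following sketch (Python; an illustration of ours, not part of the thesis, whose computations use Maple) tests whether a given family of subsets forms a topology on E:

```python
from itertools import combinations

def is_topology(E, opens):
    """Check the topology axioms of Definition 2.1.1 for a finite family of subsets."""
    opens = {frozenset(s) for s in opens}
    E = frozenset(E)
    # Axiom: E and the empty set must belong to the family.
    if E not in opens or frozenset() not in opens:
        return False
    # Axiom: finite intersections of open sets are open (pairwise checks suffice).
    for A, B in combinations(opens, 2):
        if A & B not in opens:
            return False
    # Axiom: arbitrary (here: all possible) unions of open sets are open.
    for r in range(1, len(opens) + 1):
        for sub in combinations(opens, r):
            if frozenset().union(*sub) not in opens:
                return False
    return True

E = {1, 2, 3}
print(is_topology(E, [set(), {1}, {1, 2}, E]))  # True
print(is_topology(E, [set(), {1}, {2}, E]))     # False: {1} ∪ {2} = {1, 2} is not open
```

The second family fails only the union axiom, which is exactly the kind of defect the definition is designed to rule out.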

2.2 Manifolds

As we progress through this section, we will define manifolds and how mappings act within their inherent structure. We can think of a manifold as a space that is locally Euclidean. In other words, if we “cut” out a part of a manifold, it will take the form of well-known objects in Euclidean geometry. In this way, we describe manifolds as being stitched together from Euclidean patches, called charts, and we want there to be as much overlapping as possible. This overlapping, as we will see, creates a nice differentiable structure upon which we may describe mappings.

Definition 2.2.1. An m-dimensional manifold is a set M, together with a count- able collection of subsets Uα ⊂ M, called coordinate charts, and one-to-one functions

χα : Uα → Vα onto connected open subsets Vα ⊂ Rm, called local coordinate maps, which satisfy the following properties [7]:

(a) The coordinate charts cover M:

[ Uα = M. α

(b) On the overlap of any pair of coordinate charts Uα ∩ Uβ the composite map

χβ ◦ χα⁻¹ : χα(Uα ∩ Uβ) → χβ(Uα ∩ Uβ)

is a smooth (infinitely differentiable) function.

(c) If x ∈ Uα, x̃ ∈ Uβ are distinct points of M, then there exist open subsets W ⊂ Vα, W̃ ⊂ Vβ, with χα(x) ∈ W, χβ(x̃) ∈ W̃, satisfying

χα⁻¹(W) ∩ χβ⁻¹(W̃) = ∅.

The interactions of the manifold properties are illustrated in Figure 1.


Figure 1: Charts on a manifold M

Example 2.2.1. The Euclidean space Rm is a manifold. It has one coordinate chart U = Rm and the identity map χ : Rm → Rm. In addition, any open subset U ⊂ Rm is an m-dimensional manifold.

2.3 Differentiable Maps and Rank

Quite often, we are interested in mappings on the differentiable structure of a manifold. We want these mappings to be smooth, so first we restrict the mappings to differentiable maps. We would also like to draw attention to the characteristics of differentiable maps. For theorems that we will discuss in Chapter 4, we must describe what it means for differentiable maps to be of maximal rank.

Definition 2.3.1. A map f : U → Rm, defined on an open subset U ⊂ Rn, is differentiable at x ∈ U if there is a linear map df(x) : Rn → Rm, called the differential of f at x, such that

f(x + h) = f(x) + df(x)h + o(|h|) as h → 0.

In particular, the differential df(x) is a linear map, so the operations of vector addition and scalar multiplication are preserved by it.

The main properties of differentiable maps are [13]:

(1) A constant map is differentiable at any point of its domain of definition.

(2) A linear map B : Rn → Rm is differentiable at any point x ∈ Rn and B′(x) = B.

(3) Differentiation formulae: Let f, g : U → Rm be differentiable maps at x ∈ U ⊂ Rn; then f + g, f · g, and λf, λ ∈ R, are differentiable at x, and we have

d(f + g)(x) = df(x) + dg(x),

d(f · g)(x) = f(x) · dg(x) + g(x) · df(x),

d(λf)(x) = λdf(x).

For the rest of this thesis, it will be convention to denote a multiplication f · g as simply fg.

Definition 2.3.2. A differentiable mapping f : U → Rm of an open subset U of Rn into Rm is said to be continuously differentiable or, of class C1 (written f ∈ C1(U, Rm)) if

df : U −→ L(Rn, Rm) is a continuous map, i.e. df ∈ C0(U, L(Rn, Rm)) [13].

Proposition 2.3.1. Let f : U → Rm be a differentiable map on an open subset U of Rn. The matrix of the differential df(x) is given with respect to the canonical bases of Rn and Rm, respectively, by [13]

(A) = ( ∂f j/∂xi (x) ), 1 ≤ i ≤ n; 1 ≤ j ≤ m,

i.e., the m × n array whose j-th row is ( ∂f j/∂x1 (x), . . . , ∂f j/∂xn (x) ).

This m × n matrix is referred to as the Jacobian matrix of f at the point x ∈ U.

Definition 2.3.3. The rank of the mapping f at the point x ∈ U is defined to be the rank of the Jacobian matrix at x [13].
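In practice the rank of Definition 2.3.3 can be computed numerically. The sketch below (Python; our own illustration, not from the thesis, and the map f(x, y) = (x², xy) is our own example) approximates the Jacobian matrix by central differences and finds its rank by Gaussian elimination:

```python
def jacobian(f, x, h=1e-6):
    """Central-difference approximation to the Jacobian matrix of f at x."""
    fx = f(x)
    m, n = len(fx), len(x)
    J = [[0.0] * n for _ in range(m)]
    for i in range(n):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        fp, fm = f(xp), f(xm)
        for j in range(m):
            J[j][i] = (fp[j] - fm[j]) / (2 * h)
    return J

def rank(A, tol=1e-8):
    """Rank of a matrix via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    for col in range(n):
        pivot = max(range(r, m), key=lambda i: abs(A[i][col]), default=None)
        if pivot is None or abs(A[pivot][col]) < tol:
            continue  # no usable pivot in this column
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, m):
            factor = A[i][col] / A[r][col]
            for jj in range(col, n):
                A[i][jj] -= factor * A[r][jj]
        r += 1
    return r

f = lambda p: [p[0] ** 2, p[0] * p[1]]   # f(x, y) = (x^2, xy); Jacobian [[2x, 0], [y, x]]
print(rank(jacobian(f, [1.0, 1.0])))  # 2: maximal rank away from x = 0
print(rank(jacobian(f, [0.0, 0.0])))  # 0: the Jacobian vanishes at the origin
```

The drop in rank at the origin is exactly the failure of the maximal rank condition that Theorem 2.3.1 requires.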

Theorem 2.3.1. Let F : M → N be of maximal rank at x0 ∈ M. Then there are local coordinates x = (x1, . . . , xm) near x0, and y = (y1, . . . , yn) near y0 = F(x0), such that in these coordinates F has the simple form [7]

y = (x1, . . . , xm, 0, . . . , 0), if n > m, or y = (x1, . . . , xn), if n ≤ m.

Definition 2.3.4. The map f : U → V is a C1-diffeomorphism if [13]

(1) f ∈ C1(U, Rn);

(2) f is bijective;

(3) f −1 ∈ C1(V, Rn).

In order to extend these properties to functions of several variables, we have to introduce higher-order differentials.

Definition 2.3.5. Let f : U → Rm be a map which is assumed to be differentiable in U ⊂ Rn. Hence the derivative

df = f′ : U −→ L(Rn, Rm)

exists and is also differentiable. The map f : U → Rm is said to be differentiable of order k on an open subset U of Rn, if

dkf = d(dk−1f) : U ⊂ Rn −→ Lk(Rn, Rm) ≅ L(Rn, L(Rn, . . . , L(Rn, Rm) . . .)); d0f ≡ f

exists. If dkf is continuous, f is said to be of class Ck [13]. We define f to be C∞ if it is Ck for all k ≥ 0.

Definition 2.3.6. The map f : U → V, where U and V are open subsets of Rn, is a Ck-diffeomorphism, 0 ≤ k ≤ ∞, if [13]

(1) f ∈ Ck(U, Rn);

(2) f is bijective;

(3) f −1 ∈ Ck(V, Rn).

The mappings that hold particular interest for us are ones that are one-to-one and onto. We are defining an environment where all actions are mappings, and the mappings are smooth and continuous. In order for the maps to be differentiable, we must have differentiable structure in the manifolds. We note that the degree of smoothness of a manifold M is determined by the degree of differentiability of the overlap functions χβ ◦ χα⁻¹ [12]. We are interested in smooth manifolds, so we require that the overlap functions be C∞ diffeomorphisms.

From here forward, we will add the restriction that manifolds must be of class C∞, making them smooth as a result. We will also say that manifolds are of constant dimension and call them differentiable manifolds.

Definition 2.3.7. Let F : M → N be a smooth mapping from an m-dimensional manifold M to an n-dimensional manifold N. The rank of F at a point x ∈ M is the rank of the n × m Jacobian matrix (∂F i/∂xj) at x, where y = F (x) is expressed in any convenient local coordinates near x. The mapping F is of maximal rank on a subset S ⊂ M if for each x ∈ S the rank of F is as large as possible (i.e., the minimum of m and n) [7].

2.4 Submanifolds

When defining objects acting on a manifold, often we are only interested in a certain section, or subset, of that manifold. When examining a subset, we must be sure that it carries with it the intrinsic properties of the greater manifold. We call these subsets submanifolds.

Definition 2.4.1. Let M be a smooth manifold. A submanifold of M is a subset N ⊂ M, together with a smooth, one-to-one map φ : N˜ → N ⊂ M satisfying the maximal rank condition everywhere, where the parameter space N˜ is some other manifold and N = φ(N˜) is the of φ. In particular, the dimension of N is the same as that of N˜, and does not exceed the dimension of M [7].

The map φ is often called an immersion, so a submanifold of this type is called an immersed submanifold; when φ is additionally a homeomorphism onto its image, the submanifold is called a regular submanifold.

2.5 Vector Fields

We are interested in tangent vectors to solution curves and work towards defining the “infinitesimal transformation”, where the solution curves to our system are appropriately mapped. Suppose C is a smooth curve on a manifold M, parametrized by

Φ : I → M,

where I is a subinterval of R. In local coordinates x = (x1, . . . , xm), C is given by m smooth functions φ(ε) = (φ1(ε), . . . , φm(ε)) of the real variable ε.

Definition 2.5.1. At each point x = φ(ε) of C the curve has a tangent vector, namely the derivative φ̇(ε) = dφ/dε = (φ̇1(ε), . . . , φ̇m(ε)). In order to distinguish between tangent vectors and local coordinate expressions for points on the manifold, we adopt the notation

v|x = φ̇1(ε) ∂/∂x1 + φ̇2(ε) ∂/∂x2 + · · · + φ̇m(ε) ∂/∂xm

for each tangent vector to C at x = φ(ε).

Two curves C = {φ(ε)} and C̃ = {φ̃(θ)} passing through the same point

x = φ(ε∗) = φ̃(θ∗), for some ε∗, θ∗, have the same tangent vector if and only if their derivatives agree at the point:

dφ/dε (ε∗) = dφ̃/dθ (θ∗).

Figure 2: Tangent Vector

Definition 2.5.2. The collection of all tangent vectors to all possible curves passing through a given point x in M is called the tangent space to M at x, and is denoted by TM|x.

Definition 2.5.3. The collection of all tangent spaces corresponding to all points x in M is called the tangent bundle of M, denoted by

TM = ∪x∈M TM|x.

Definition 2.5.4. A vector field v on M assigns a tangent vector v|x ∈ TM|x to each point x ∈ M, with v|x varying smoothly from point to point. In local coordinates (x1, . . . , xm), a vector field has the form

v|x = ξ1(x) ∂/∂x1 + ξ2(x) ∂/∂x2 + · · · + ξm(x) ∂/∂xm,

where each ξi(x) is a smooth function of x.

At this point, we would like to begin connecting vectors to an ability to map curves. To do this, we must define curves and flows. Flows, in the end, are the keys to solidifying the differential geometry aspect of Lie symmetries.

Definition 2.5.5. An integral curve of a vector field v starting at x0 is a smooth parametrized curve x = φ(ε) whose tangent vector at any point coincides with the value of v at that same point, and which starts at x0 = φ(0):

φ̇(ε) = v|φ(ε) for all ε.

In local coordinates, x = φ(ε) = (φ1(ε), . . . , φm(ε)) must be a solution to the autonomous system of ordinary differential equations

In local coordinates, x = φ(ε) = (φ1(ε), . . . , φm(ε)) must be a solution to the autonomous system of ordinary differential equations

dxi/dε = ξi(x), i = 1, . . . , m, (1)

where the ξi(x) are the components of v at x. For ξi(x) smooth, the standard existence and uniqueness theorems for systems of ordinary differential equations [7] guarantee that there is a unique solution to (1) for each set of initial data

φ(0) = x0.

This in turn implies the existence of a unique maximal integral curve passing through a given point, where “maximal” means that it is not contained in any longer integral curve.
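When (1) cannot be solved in closed form, the integral curve can be approximated numerically. The sketch below (Python; an illustration of ours, and the vector field ξ(x) = x is our own choice) integrates (1) with the classical fourth-order Runge–Kutta scheme and compares against the exact integral curve x0·e^ε:

```python
import math

def integral_curve(xi, x0, eps, steps=1000):
    """Approximate phi(eps) solving dx/d(eps) = xi(x), phi(0) = x0,
    by the classical fourth-order Runge-Kutta method."""
    h = eps / steps
    x = x0
    for _ in range(steps):
        k1 = xi(x)
        k2 = xi(x + h * k1 / 2)
        k3 = xi(x + h * k2 / 2)
        k4 = xi(x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

# For v = x d/dx, i.e. xi(x) = x, the exact integral curve is phi(eps) = x0 * exp(eps).
approx = integral_curve(lambda x: x, 2.0, 1.0)
print(abs(approx - 2.0 * math.exp(1.0)) < 1e-9)  # True
```

The agreement with the exact flow is a numerical reflection of the uniqueness statement above.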

2.5.1 Flows

If v is a vector field, the parametrized maximal integral curve passing through x in M is given by Ψ(ε, x) and is called the flow generated by v.

Definition 2.5.6. The flow of a vector field has the basic properties:

Ψ(0, x) = x, (2)

(d/dε) Ψ(ε, x) = v|Ψ(ε,x) (3)

for all ε where defined, and

Ψ(δ, Ψ(ε, x)) = Ψ(δ + ε, x), x ∈ M (4) for all δ, ε ∈ R such that both sides of the equation are defined.

Here, Property (3) states that v is tangent to the curve Ψ(ε, x) for fixed x; together with Property (2), this says that the flow starts at x when ε = 0. This normalization at ε = 0 carries over into all other aspects of Lie symmetry methods, as flows prove essential to the development. Property (4) says that if two flows are composed, the resulting flow is obtained by simply adding the parameters of the two composed flows. In other words, flows are additive in their parameter.
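For a concrete flow, take v = x ∂/∂x on M = R, whose flow is Ψ(ε, x) = x·e^ε. The sketch below (Python; the vector field is our own illustrative choice, not one from the thesis) checks properties (2) and (4) directly:

```python
import math

def flow(eps, x):
    """Flow generated by the vector field v = x d/dx: Psi(eps, x) = x * exp(eps)."""
    return x * math.exp(eps)

x = 1.5
# Property (2): the flow starts at x when eps = 0.
print(flow(0.0, x) == x)  # True
# Property (4): composing two flows adds their parameters.
lhs = flow(0.3, flow(0.7, x))
rhs = flow(0.3 + 0.7, x)
print(abs(lhs - rhs) < 1e-12)  # True
```

Additivity here is just the exponent law e^δ · e^ε = e^(δ+ε), a first hint of the exponential map discussed in Chapter 3.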

2.5.2 Lie Brackets and Lie Algebras

Definition 2.5.7. Given two n×n matrices A and B, the bracket (or commutator) of A and B, denoted [A, B], is defined to be [3]

[A, B] = AB − BA.

Definition 2.5.8. A finite-dimensional real or complex Lie algebra is a finite- dimensional real or complex vector space g, together with a map [·, ·] from g × g into g, with the following properties [3]:

1. [·, ·] is bilinear.

2. [X, Y] = −[Y, X] for all X, Y ∈ g.

3. [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0 for all X, Y, Z ∈ g.
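These axioms can be verified directly for the matrix commutator of Definition 2.5.7. The sketch below (Python; the 2×2 matrices X, Y, Z are an arbitrary illustrative choice of ours) checks antisymmetry and the Jacobi identity:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def neg(A):
    return [[-a for a in row] for row in A]

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA (Definition 2.5.7)."""
    return sub(matmul(A, B), matmul(B, A))

X = [[0, 1], [0, 0]]
Y = [[0, 0], [1, 0]]
Z = [[1, 0], [0, -1]]

# Antisymmetry: [X, Y] = -[Y, X].
print(bracket(X, Y) == neg(bracket(Y, X)))  # True
# Jacobi identity: [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0.
jac = add(add(bracket(X, bracket(Y, Z)), bracket(Y, bracket(Z, X))), bracket(Z, bracket(X, Y)))
print(jac == [[0, 0], [0, 0]])  # True
```

For these matrices the commutator also reproduces a familiar structure: [X, Y] = Z.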

Lie algebras are related to Lie groups, as we will discuss in the next chapter.

3 LIE GROUPS

In the previous chapter, we defined vector fields and how they act on functions. We also very briefly discussed simple characteristics of Lie algebras. In this chapter, we will expand on the basic concept of groups and describe Lie groups. From Lie groups evolves the necessity of Lie groups of transformations, which is the backbone of mapping solution curves into each other. We then borrow concepts from group theory and differential geometry to define one-parameter groups of transformations. One-parameter transformation groups contain all the invariant solutions of our system of equations.

3.1 r-Parameter Lie Groups

Our objective is to define Lie groups of transformations. So, let us start with the definition of a group and show how transformations have group structure.


Definition 3.1.2. A group is a set G equipped with a binary operation ∗ such that [9]

(i) the associative law holds: for every x, y, z ∈ G,

x ∗ (y ∗ z) = (x ∗ y) ∗ z;

(ii) there is an element e ∈ G, called the identity, with e ∗ x = x = x ∗ e for all x ∈ G;

(iii) every x ∈ G has an inverse: there is x−1 ∈ G with x ∗ x−1 = e = x−1 ∗ x.

17 We would now like to structure a group that also has the properties of a manifold. This ensures that mappings are continuously differentiable.

Definition 3.1.3. An r-parameter Lie group is a group G which also carries the structure of an r-dimensional smooth manifold in such a way that both the group operation

m : G × G → G, m(g, h) = g ∗ h, g, h ∈ G, and the inversion

i : G → G, i(g) = g−1, g ∈ G, are smooth maps between manifolds [7].

3.2 Lie Subgroups

Often, we are only interested in how objects map locally on manifolds. Just the same, we are interested in Lie groups that act locally. In this event, we must see that the local regions can be “closed off” from the rest of the group on the manifold and studied independently. This is accomplished by ensuring that “local” Lie groups maintain the properties of the bigger group.

Definition 3.2.1. A Lie subgroup H of a Lie group G is given by a submanifold φ : H˜ → G, where H˜ itself is a Lie group, H = φ(H˜ ) is the image of φ, and φ is a Lie group homomorphism.

Theorem 3.2.1. Suppose G is a Lie group. If H is a closed subset of G, then H is a regular submanifold of G and hence a Lie group in its own right. Conversely, any regular Lie subgroup of G is a closed subgroup [7].

18 3.2.1 Local Lie Groups

Since we want to concentrate on areas of the Lie groups that are close to the identity, we will describe them using local coordinates and call them local Lie groups.

Definition 3.2.2. An r-parameter local Lie group consists of connected open subsets V0 ⊂ V ⊂ Rr containing the origin 0, and smooth maps

m : V × V → Rr, defining the group operation, and

i : V0 → V , defining the group inversion, with the following properties [7].

(a) Associativity. If x, y, z ∈ V , and also m(x, y) and m(y, z) are in V , then

m(x, m(y, z)) = m(m(x, y), z).

(b) Identity Element. For all x in V , m(0, x) = x = m(x, 0).

(c) Inverses. For each x in V0, m(x, i(x)) = 0 = m(i(x), x).

3.2.2 Transformation Groups

Now that we have infused the intrinsic properties of a manifold into the concept of a group, we can further adjust the group axioms to reflect transformation actions. Furthermore, we are now permitted to think in the local setting; hence we set out to define local transformation groups.

Example 3.2.1. One-parameter Translation Group

Consider the one-parameter family of translations which takes an arbitrary point (x, y) to another point (x1, y1) by a motion parallel to the y-axis, as shown in Figure 3. The points are moved along by the parameter ε by the following transformation

x1 = x

y1 = y + ε.


Figure 3: Path, or orbit, from (x, y) to (x1, y1).

The transformation can be repeated with a shift δ to produce a new point (x2, y2) [1] by the transformation

x2 = x1

y2 = y1 + δ.

It is apparent that the point (x2, y2) can be reached from the original point (x, y) by another transformation of the same family [1] since

x2 = x1 = x

y2 = y1 + δ = y + (ε + δ).

We see that a single transformation can accomplish the work of multiple transformations. We can also note that the identity of these transformations is ε = 0 and the inverse is the additive inverse (−ε). Note that the transformations adhere to the group axioms.
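The group properties of Example 3.2.1 can be checked mechanically. A small sketch (Python; not part of the thesis, whose own computations use Maple) verifies that the translations compose additively, with identity ε = 0 and inverse −ε:

```python
def translate(eps, point):
    """One-parameter translation parallel to the y-axis: (x, y) -> (x, y + eps)."""
    x, y = point
    return (x, y + eps)

p = (2.0, 3.0)
# Composing two translations equals a single translation by the summed parameter.
print(translate(0.5, translate(1.5, p)) == translate(2.0, p))  # True
# eps = 0 is the identity, and -eps is the inverse.
print(translate(0.0, p) == p)                                  # True
print(translate(-1.5, translate(1.5, p)) == p)                 # True
```

This is the additivity ε · δ = ε + δ that reappears in Definition 3.2.3 below.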

From Example 3.2.1, we have an immediate interpretation of the transformations as a group. All groups that we will use will have this exact type of transformation within them. We would like to describe this family of transformations as a group whose operation, composition of transformations, amounts to addition of the parameters. We then alter Definition 3.2.2 to fit these qualities as shown below.

Definition 3.2.3. Let M be a smooth manifold. A local group of transforma- tions acting on M is given by a (local) Lie group G, an open subset U of G × M, with [7]

{e} × M ⊂ U ⊂ G × M, which is the domain of definition of the group action, and a smooth map Ψ : U → M with the following properties:

(a) If (δ, x) ∈ U, (ε, Ψ(δ, x)) ∈ U, then

Ψ(ε, Ψ(δ, x)) = Ψ(ε · δ, x).

(b) For all x ∈ M,

Ψ(e, x) = x.

(c) If (ε, x) ∈ U, then (ε−1, Ψ(ε, x)) ∈ U and

Ψ(ε−1, Ψ(ε, x)) = x.

Figure 4: Composition Ψ(ε, Ψ(δ, x)) = Ψ(ε + δ, x).

Notice that the above conditions translate into typical group axioms when we write x · y for m(x, y). The only difference is that the conditions, as written, are not necessarily defined everywhere. We would also like to draw attention to Figure 4 and point out that the same compositions described in Definition 3.2.3 are represented in the figure, where the group operation is shown to be addition. So, for the groups of transformations of interest, ε · δ in Definition 3.2.3 is replaced with ε + δ. Recall that it is our purpose to map solution curves of the system into neighboring curves without altering their physical appearance. One way to keep track of how the solution curves will be mapped is to focus on single points on a curve and trace out the path that each point will take as it is being mapped. We call the path that “transformed” points trace under the mapping Ψ : U → M an orbit. We can use this to our advantage by first calculating the fixed orbit that the solution curves have to take and using it as a condition that the solution curves must follow as they are mapped through a parameter. If we already know the orbit, we may theoretically use it to find the transformations that map the curves along that orbit. The formal definition of an orbit is given below.

Definition 3.2.4. O ⊂ M is an orbit provided it satisfies the conditions

(a) If x ∈ O, g ∈ G and g · x is defined, then g · x ∈ O.

(b) If Õ ⊂ O and Õ satisfies part (a), then either Õ = O or Õ is empty.

Since we are interested in mapping level curves of differential equations, we actually apply the transformation to every point of a curve, so as to map the whole curve. So realistically there are infinitely many orbits of the points on solution curves. Figure 5 illustrates the mapping of one point, the origin, along the orbit given by y = x. Along this orbit, the level curves y + x = c are mapped into each other and no change in appearance occurs.


Figure 5: The orbit given by y = x of the level curves y + x = c.

3.3 One-Parameter Groups of Transformations

Instead of considering Lie transformation groups over an r-dimensional manifold, we will restrict our consideration to a manifold of dimension one, corresponding to the single parameter involved in groups of transformations. In order to further establish the connection of one-parameter groups of transformations to other topics, it is important to present a more in-depth description of Lie algebras. In revisiting Lie algebras, we hope to relate one-parameter groups of transformations to the vector fields that we discussed in the last chapter. This will give us a practical means of calculating the groups of transformations.

3.3.1 Lie Algebras

Let X be an n × n real or complex matrix. We wish to define the exponential of X, denoted e^X or exp(X), by the usual power series

$$e^X = \sum_{m=0}^{\infty} \frac{X^m}{m!}.$$

Example 3.3.1. Let

$$\exp\begin{pmatrix} 0 & -\varepsilon \\ \varepsilon & 0 \end{pmatrix}\vec{x}, \quad\text{where}\quad \vec{x} = \begin{pmatrix} x \\ y \end{pmatrix}.$$

We can expand this in a power series and calculate

$$\exp\begin{pmatrix} 0 & -\varepsilon \\ \varepsilon & 0 \end{pmatrix}\vec{x} = \left[\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & -\varepsilon \\ \varepsilon & 0 \end{pmatrix} + \frac{1}{2!}\begin{pmatrix} -\varepsilon^2 & 0 \\ 0 & -\varepsilon^2 \end{pmatrix} + \frac{1}{3!}\begin{pmatrix} 0 & \varepsilon^3 \\ -\varepsilon^3 & 0 \end{pmatrix} + \frac{1}{4!}\begin{pmatrix} \varepsilon^4 & 0 \\ 0 & \varepsilon^4 \end{pmatrix} + \cdots\right]\vec{x}$$

$$= \begin{pmatrix} 1 - \frac{1}{2!}\varepsilon^2 + \frac{1}{4!}\varepsilon^4 - \cdots & -\varepsilon + \frac{1}{3!}\varepsilon^3 - \cdots \\[4pt] \varepsilon - \frac{1}{3!}\varepsilon^3 + \cdots & 1 - \frac{1}{2!}\varepsilon^2 + \frac{1}{4!}\varepsilon^4 - \cdots \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x\cos(\varepsilon) - y\sin(\varepsilon) \\ y\cos(\varepsilon) + x\sin(\varepsilon) \end{pmatrix}.$$

So we have that

$$\exp\begin{pmatrix} 0 & -\varepsilon \\ \varepsilon & 0 \end{pmatrix}\vec{x} = \begin{pmatrix} x\cos(\varepsilon) - y\sin(\varepsilon) \\ y\cos(\varepsilon) + x\sin(\varepsilon) \end{pmatrix}.$$
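As an illustrative cross-check of this example (outside the thesis's Maple appendices; the use of Python's sympy here is my own assumption), the matrix exponential can be computed symbolically and compared against the rotation matrix:

```python
import sympy as sp

eps, x, y = sp.symbols('epsilon x y', real=True)

# Generator matrix from Example 3.3.1
A = sp.Matrix([[0, -eps], [eps, 0]])

# Closed-form matrix exponential (sympy computes it from the eigenstructure)
expA = sp.simplify(A.exp())

# Applying it to (x, y) gives the rotation transformations
result = sp.simplify(expA * sp.Matrix([x, y]))
```

Simplifying `expA` against the rotation matrix with entries cos ε and ±sin ε confirms the series computation above.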

At this point, it is beneficial, for the sake of argument, to re-define Lie algebras in terms of the matrix exponential. This makes proofs simpler and helps connect some later ideas.

Definition 3.3.1. A matrix Lie group is any subgroup G of GL(n; C), the group of all n × n invertible matrices with complex entries, with the following property:

If A_m is any sequence of matrices in G, and A_m converges to some matrix A, then either A ∈ G or A is not invertible [3].

Definition 3.3.2. Let G be a matrix Lie group. The Lie algebra of G, denoted g, is the set of all matrices X such that etX is in G for all real t [3].

Proposition 3.3.1. Let G be a matrix Lie group, and X an element of its Lie algebra. Then eX is an element of the identity component of G [3].

Theorem 3.3.1 (Lie Product Formula). Let X and Y be n × n complex matrices. Then [3]

$$e^{X+Y} = \lim_{m\to\infty}\left(e^{X/m}\, e^{Y/m}\right)^m.$$
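The Lie Product Formula can be checked numerically for a pair of non-commuting matrices. The sketch below is a hedged illustration rather than part of the thesis: it approximates each exponential by a truncated power series and watches the error shrink as m grows.

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential by truncated power series (adequate for small norms)."""
    result, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# Two non-commuting matrices, so e^{X+Y} differs from e^X e^Y
X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])

lhs = expm(X + Y)
errors = [np.linalg.norm(
              np.linalg.matrix_power(expm(X / m) @ expm(Y / m), m) - lhs)
          for m in (1, 10, 100, 1000)]
```

The error decays roughly like 1/m, consistent with the limit in the theorem.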

Theorem 3.3.2. Let G be a matrix Lie group, g its Lie algebra, and X and Y elements of g. Then [3]

(a) sX ∈ g for all real numbers s,

(b) X + Y ∈ g,

(c) XY − YX = [X,Y ] ∈ g.

Proof. (a) We have e^{t(sX)} = e^{(ts)X}, which is in G whenever X is in g.

(b) If X and Y commute, then e^{t(X+Y)} = e^{tX} e^{tY}.

If X and Y do not commute, the Lie product formula states that

$$e^{t(X+Y)} = \lim_{m\to\infty}\left(e^{tX/m}\, e^{tY/m}\right)^m.$$

Because X and Y are in the Lie algebra, e^{tX/m} and e^{tY/m} are in G, as is (e^{tX/m} e^{tY/m})^m, since G is a group. However, because G is a matrix Lie group, the limit of elements in G must again be in G, provided that the limit is invertible. Since e^{t(X+Y)} is automatically invertible, we conclude that it must be in G. So we have that X + Y is in g.

(c) Recall that $\frac{d}{dt}e^{tX}\big|_{t=0} = X$. It follows that $\frac{d}{dt}(e^{tX}Y)\big|_{t=0} = XY$, and by the product rule,

$$\left.\frac{d}{dt}\left(e^{tX} Y e^{-tX}\right)\right|_{t=0} = (XY)e^{0} + (e^{0}Y)(-X) = XY - YX.$$

We know e^{tX} Y e^{-tX} is in g for all t. Furthermore, by parts (a) and (b), g is a real subspace of M_n(ℂ), and hence a topologically closed subset of M_n(ℂ). It follows that

$$XY - YX = \lim_{h\to 0}\frac{e^{hX}\, Y\, e^{-hX} - Y}{h}$$

belongs to g.
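Part (c) rests on the derivative of t ↦ e^{tX} Y e^{-tX} at t = 0 equaling the commutator. A quick numerical sanity check, using my own choice of matrices purely for illustration, approximates that derivative by a central difference:

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential by truncated power series."""
    result, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])

# Central difference of  t -> e^{tX} Y e^{-tX}  at t = 0
h = 1e-5
conj = lambda t: expm(t * X) @ Y @ expm(-t * X)
derivative = (conj(h) - conj(-h)) / (2 * h)

bracket = X @ Y - Y @ X   # the commutator [X, Y]
```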

Proposition 3.3.2. Let V ⊂ ℝʳ be a local Lie group with multiplication m(x, y), x, y ∈ V. Then the Lie algebra g of right-invariant vector fields on V is spanned by the vector fields [7]

$$\mathbf{v}_k = \sum_{i=1}^{r} \xi_k^i(x)\frac{\partial}{\partial x^i}, \quad k = 1, \ldots, r,$$

where

$$\xi_k^i(x) = \frac{\partial m^i}{\partial x^k}(0, x).$$

Theorem 3.3.3. Let G and H be matrix Lie groups with Lie algebras g and h, respectively. Suppose that Φ : G → H is a Lie group homomorphism. Then there exists a unique real linear map φ : g → h such that [3]

$$\Phi(e^X) = e^{\phi(X)}$$

for all X ∈ g. The map φ has the following additional properties:

(a) φ(AXA⁻¹) = Φ(A)φ(X)Φ(A)⁻¹, for all X ∈ g, A ∈ G.

(b) φ([X, Y]) = [φ(X), φ(Y)], for all X, Y ∈ g.

(c) $\phi(X) = \frac{d}{dt}\Phi(e^{tX})\big|_{t=0}$, for all X ∈ g.

Proof. This is similar to proving Theorem 3.3.1.

Definition 3.3.3. If G is a matrix Lie group with Lie algebra g, then the exponential mapping for G is the map [3]

exp : g → G.

3.3.2 The Lie Series

Now we will define the connection between a Lie group and a Lie algebra. We have defined a Lie algebra as being spanned by certain vector fields. To reiterate, we wish to connect these vector fields to Lie groups of transformations by some means. If one looks closely at the properties of the flow of a vector field given in Definition 2.5.6 and the properties of a group of transformations in Definition 3.2.3, the actions of both take the same form. We will show that the one-parameter groups of transformations are given by an exponential series and that the actions of the flow generated by a vector field behave the same as a one-parameter group of transformations. The next theorem details one method of obtaining the one-parameter group of transformations from a vector field using the exponential map. This is known as calculating the Lie series of the group of transformations.

Theorem 3.3.4. The one-parameter Lie group of transformations is equivalent to [2]

$$x^* = e^{\varepsilon\mathbf{v}} x = x + \varepsilon\mathbf{v}x + \frac{\varepsilon^2}{2}\mathbf{v}^2 x + \cdots = \left[1 + \varepsilon\mathbf{v} + \frac{\varepsilon^2}{2}\mathbf{v}^2 + \cdots\right]x = \sum_{k=0}^{\infty}\frac{\varepsilon^k}{k!}\mathbf{v}^k x. \tag{5}$$

Proof. Let
$$\mathbf{v} = \sum_{i=1}^{n}\xi_i(x)\frac{\partial}{\partial x_i} \tag{6}$$
and
$$\mathbf{v}(x^*) = \sum_{i=1}^{n}\xi_i(x^*)\frac{\partial}{\partial x_i^*}, \tag{7}$$
where x^* = X(x; ε) is the Lie group of transformations. From Taylor's theorem, expanding x^* about ε = 0, we have
$$x^* = \sum_{k=0}^{\infty}\frac{\varepsilon^k}{k!}\left[\frac{\partial^k X(x;\varepsilon)}{\partial\varepsilon^k}\right]_{\varepsilon=0} = \sum_{k=0}^{\infty}\frac{\varepsilon^k}{k!}\left[\frac{d^k x^*}{d\varepsilon^k}\right]_{\varepsilon=0}. \tag{8}$$
For any differentiable function F(x),
$$\frac{d}{d\varepsilon}F(x^*) = \sum_{i=1}^{n}\frac{\partial F(x^*)}{\partial x_i^*}\frac{dx_i^*}{d\varepsilon} = \sum_{i=1}^{n}\xi_i(x^*)\frac{\partial F(x^*)}{\partial x_i^*} = \mathbf{v}(x^*)F(x^*). \tag{9}$$
Hence it follows that
$$\frac{dx^*}{d\varepsilon} = \mathbf{v}(x^*)x^*,$$
$$\frac{d^2x^*}{d\varepsilon^2} = \frac{d}{d\varepsilon}\left(\frac{dx^*}{d\varepsilon}\right) = \mathbf{v}(x^*)\mathbf{v}(x^*)x^* = \mathbf{v}^2(x^*)x^*, \tag{10}$$
and in general
$$\frac{d^k x^*}{d\varepsilon^k} = \mathbf{v}^k(x^*)x^*, \quad k = 1, 2, \ldots. \tag{11}$$
Consequently,
$$\left.\frac{d^k x^*}{d\varepsilon^k}\right|_{\varepsilon=0} = \mathbf{v}^k(x)x = \mathbf{v}^k x, \quad k = 1, 2, \ldots, \tag{12}$$
which leads to Equation (5) using Equation (8) [2].

Example 3.3.2. Consider a vector field

v = −y∂x + x∂y.

If we use the Lie series to find the group of transformations, we have that

$$x^* = e^{\varepsilon\mathbf{v}}x, \quad\text{or}\quad (x^*, y^*) = e^{\varepsilon\mathbf{v}}(x, y).$$

First, we apply the Lie series to x and recall that series rearrangement is permissible because these series converge absolutely. So, we have that

$$e^{\varepsilon(-y\partial_x + x\partial_y)}(x) = \Big[I + \varepsilon(-y\partial_x + x\partial_y) + \frac{1}{2!}\varepsilon^2(-y\partial_x + x\partial_y)^2 + \frac{1}{3!}\varepsilon^3(-y\partial_x + x\partial_y)^3 + \frac{1}{4!}\varepsilon^4(-y\partial_x + x\partial_y)^4 + \frac{1}{5!}\varepsilon^5(-y\partial_x + x\partial_y)^5 + \cdots\Big]x$$
$$= x - \varepsilon y - \frac{1}{2!}\varepsilon^2 x + \frac{1}{3!}\varepsilon^3 y + \frac{1}{4!}\varepsilon^4 x - \frac{1}{5!}\varepsilon^5 y + \cdots$$
$$= x\left(1 - \frac{1}{2!}\varepsilon^2 + \frac{1}{4!}\varepsilon^4 - \cdots\right) - y\left(\varepsilon - \frac{1}{3!}\varepsilon^3 + \frac{1}{5!}\varepsilon^5 - \cdots\right)$$
$$x^* = x\cos(\varepsilon) - y\sin(\varepsilon).$$

We now apply the Lie series to y:

$$e^{\varepsilon(-y\partial_x + x\partial_y)}(y) = \Big[I + \varepsilon(-y\partial_x + x\partial_y) + \frac{1}{2!}\varepsilon^2(-y\partial_x + x\partial_y)^2 + \frac{1}{3!}\varepsilon^3(-y\partial_x + x\partial_y)^3 + \frac{1}{4!}\varepsilon^4(-y\partial_x + x\partial_y)^4 + \frac{1}{5!}\varepsilon^5(-y\partial_x + x\partial_y)^5 + \cdots\Big]y$$
$$= y + \varepsilon x - \frac{1}{2!}\varepsilon^2 y - \frac{1}{3!}\varepsilon^3 x + \frac{1}{4!}\varepsilon^4 y + \frac{1}{5!}\varepsilon^5 x + \cdots$$
$$= y\left(1 - \frac{1}{2!}\varepsilon^2 + \frac{1}{4!}\varepsilon^4 - \cdots\right) + x\left(\varepsilon - \frac{1}{3!}\varepsilon^3 + \frac{1}{5!}\varepsilon^5 - \cdots\right)$$
$$y^* = y\cos(\varepsilon) + x\sin(\varepsilon).$$

We now have the transformations
$$x^* = x\cos(\varepsilon) - y\sin(\varepsilon), \qquad y^* = y\cos(\varepsilon) + x\sin(\varepsilon)$$
that were described in Example 3.3.1.
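The two Lie series computations above can be automated: repeatedly applying the vector field and summing the truncated series reproduces the Taylor polynomials of the rotation transformations. A small sympy sketch, illustrative only since the thesis itself uses Maple:

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

def v(f):
    """Apply v = -y d/dx + x d/dy to the expression f."""
    return -y * sp.diff(f, x) + x * sp.diff(f, y)

def lie_series(f, order):
    """Truncated Lie series  sum_{k < order} eps^k/k! v^k(f)."""
    total, term = f, f
    for k in range(1, order):
        term = v(term)
        total += eps**k / sp.factorial(k) * term
    return sp.expand(total)

N = 8
sx = lie_series(x, N)   # matches x cos(eps) - y sin(eps) up to order eps^7
sy = lie_series(y, N)   # matches y cos(eps) + x sin(eps) up to order eps^7
```

Comparing each truncated series against the Taylor expansion of the closed-form rotation confirms the example term by term.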

Definition 3.3.4. Let G be a Lie group. For any group element g ∈ G, the right multiplication map R_g : G → G defined by

$$R_g(h) = h \cdot g$$

is a diffeomorphism, with inverse

$$R_{g^{-1}} = (R_g)^{-1}.$$

A vector field v on G is called right-invariant if

$$dR_g(\mathbf{v}|_h) = \mathbf{v}|_{R_g(h)} = \mathbf{v}|_{hg}$$

for all g and h in G [7].

Proposition 3.3.3. Let v ≠ 0 be a right-invariant vector field on a Lie group G. Then the flow generated by v through the identity, namely

$$g_\varepsilon = \exp(\varepsilon\mathbf{v})e,$$

is defined for all ε ∈ ℝ and forms a one-parameter subgroup of G, with

$$g_{\varepsilon+\delta} = g_\varepsilon \cdot g_\delta, \qquad g_0 = e, \qquad g_\varepsilon^{-1} = g_{-\varepsilon}, \tag{13}$$

isomorphic to either ℝ or the circle group SO(2), also known as the unit circle in the complex plane. Conversely, any connected one-dimensional subgroup of G is generated by such a right-invariant vector field in the above manner [7].

Proof. For ε, δ sufficiently small, (13) follows from the right-invariance of v [7] and the identity F(exp(εv)x) = exp(ε · dF(v))F(x):

$$\begin{aligned}
g_\delta \cdot g_\varepsilon = R_{g_\varepsilon}(g_\delta) &= R_{g_\varepsilon}[\exp(\delta\mathbf{v})e]\\
&= \exp[\delta \cdot dR_{g_\varepsilon}(\mathbf{v})]\, R_{g_\varepsilon}(e)\\
&= \exp(\delta\mathbf{v})\, g_\varepsilon\\
&= \exp(\delta\mathbf{v})\exp(\varepsilon\mathbf{v})e\\
&= \exp[(\delta+\varepsilon)\mathbf{v}]e = g_{\delta+\varepsilon}.
\end{aligned}$$

Thus, g_ε is at least a local one-parameter subgroup. In particular, g_0 = e and g_{-ε} = g_ε^{-1} for ε small. Furthermore, g_ε is defined at least for −½ε₀ ≤ ε ≤ ½ε₀ for some ε₀ > 0, so we can inductively define

$$g_{m\varepsilon_0 + \varepsilon} = g_{m\varepsilon_0} \cdot g_\varepsilon, \qquad -\tfrac{1}{2}\varepsilon_0 \le \varepsilon \le \tfrac{1}{2}\varepsilon_0,$$

for m an integer. The above calculation shows that g_ε is a smooth curve in G satisfying (13) for all ε, δ, proving that the flow is globally defined and forms a subgroup. If g_ε = g_δ for some ε ≠ δ, then it is not hard to show that g_{ε₀} = e for some least positive ε₀ > 0 and that g_ε is periodic with period ε₀; in this case {g_ε} is isomorphic to SO(2). Otherwise g_ε ≠ g_δ for all ε ≠ δ, and {g_ε} is isomorphic to ℝ.

Conversely, if H ⊂ G is a one-dimensional subgroup, we let v|_e be any nonzero tangent vector to H at the identity. Using the appropriate isomorphism, we extend v to a right-invariant vector field on all of G. Since H is a subgroup, it follows that v|_h is tangent to H at any h ∈ H, and therefore H is the integral curve of v passing through e. This proves the converse.

The significance of Proposition 3.3.3 is apparent, as it connects the flows generated by a vector field to the one-parameter groups of transformations that are given by the Lie series.

3.4 Infinitesimal Transformations

Now that a one-to-one relation exists between one-parameter groups of transformations and one-dimensional subspaces of g, we have a means to a solution technique. We know that the Lie algebra g is spanned by vector fields. Therefore, since we want to find the group of transformations that map solution curves into other solution curves, we can use the vector fields, called the infinitesimal generators, to find the one-parameter groups of transformations. We must now focus on how we can explicitly calculate the groups of transformations from an infinitesimal generator. Consider a one-parameter (ε) Lie group of transformations [2]

$$x^* = X(x;\varepsilon) \tag{14}$$

with identity ε = 0 and law of composition φ. Expanding (14) about ε = 0, we get

$$x^* = x + \varepsilon\left.\frac{\partial X}{\partial\varepsilon}(x;\varepsilon)\right|_{\varepsilon=0} + \frac{\varepsilon^2}{2}\left.\frac{\partial^2 X}{\partial\varepsilon^2}(x;\varepsilon)\right|_{\varepsilon=0} + \cdots = x + \varepsilon\left.\frac{\partial X}{\partial\varepsilon}(x;\varepsilon)\right|_{\varepsilon=0} + O(\varepsilon^2). \tag{15}$$

Ignoring the higher order terms of the series, we let

$$\xi(x) = \left.\frac{\partial X}{\partial\varepsilon}(x;\varepsilon)\right|_{\varepsilon=0}. \tag{16}$$


Figure 6: Tangent plane housing x∗ = X(x; ε).

Definition 3.4.1. The transformation x + εξ(x) is called the infinitesimal transformation of the Lie group of transformations (14); the components of ξ(x) are called the infinitesimals of (14) [2].

Once we expand the Lie group into a Taylor series and ignore all higher order terms, we are creating a tangent plane to a surface. One way to think about the one-parameter Lie group is to say that it lives on the tangent plane infinitesimally close to the tangent point, as shown in Figure 6.

Definition 3.4.2. The infinitesimal generator of the one-parameter Lie group of transformations is the operator

$$\mathbf{v} = \xi(x)\cdot\nabla = \sum_{i=1}^{n}\xi_i(x)\frac{\partial}{\partial x_i}, \tag{17}$$

where ∇ is the gradient operator,

$$\nabla = \left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \cdots, \frac{\partial}{\partial x_n}\right). \tag{18}$$

For any differentiable function F(x) = F(x₁, x₂, …, x_n) [2],

$$\mathbf{v}F(x) = \xi(x)\cdot\nabla F(x) = \sum_{i=1}^{n}\xi_i(x)\frac{\partial F(x)}{\partial x_i}. \tag{19}$$

Notice that the infinitesimal generator is the same as the vector fields that span the Lie algebra.

Example 3.4.1. Consider the rotation group, which is given by the transformations

x∗ = x cos(ε) − y sin(ε)

y∗ = y cos(ε) + x sin(ε).

If we differentiate these transformations with respect to ε, we have

$$\frac{dx^*}{d\varepsilon} = -x\sin(\varepsilon) - y\cos(\varepsilon), \qquad \frac{dy^*}{d\varepsilon} = -y\sin(\varepsilon) + x\cos(\varepsilon).$$

If we let ε = 0, then

$$\frac{dx^*}{d\varepsilon} = -y, \qquad \frac{dy^*}{d\varepsilon} = x,$$

so that we have that the infinitesimal generator is

$$\mathbf{v} = -y\frac{\partial}{\partial x} + x\frac{\partial}{\partial y}.$$

Suppose G is a local group of transformations acting on a manifold M via g · x = Ψ(g, x) for (g, x) ∈ U ⊂ G × M. There is then a corresponding "infinitesimal action" of the Lie algebra g of G on M. Namely, if v ∈ g, we define ψ(v) to be the vector field on M whose flow coincides with the action of the one-parameter subgroup exp(εv) of G on M. This means that for x ∈ M,

$$\psi(\mathbf{v})\big|_x = \left.\frac{d}{d\varepsilon}\Psi(\exp(\varepsilon\mathbf{v}), x)\right|_{\varepsilon=0} = d\Psi_x(\mathbf{v}|_e),$$

where Ψ_x(g) ≡ Ψ(g, x) [7].

3.4.1 Fundamental Theorem of Lie

Calculating the Lie series is an effective way to solve for the one-parameter group of transformations. However, once the group becomes more complicated, calculations using the Lie series become cumbersome and inefficient. We can use an alternate method to calculate the transformation group, as detailed in the following theorem. We must first prove a lemma.

Lemma 3.4.1. Let x be a vector, X be a transformation operator, φ be the law of composition of the group of transformations, and ε be a varying parameter. We have that [2]

$$X(x;\varepsilon+\Delta\varepsilon) = X\big(X(x;\varepsilon);\, \phi(\varepsilon^{-1}, \varepsilon+\Delta\varepsilon)\big). \tag{20}$$

Proof.

$$\begin{aligned}
X\big(X(x;\varepsilon);\, \phi(\varepsilon^{-1}, \varepsilon+\Delta\varepsilon)\big) &= X\big(x;\, \phi(\varepsilon, \phi(\varepsilon^{-1}, \varepsilon+\Delta\varepsilon))\big)\\
&= X\big(x;\, \phi(\phi(\varepsilon, \varepsilon^{-1}), \varepsilon+\Delta\varepsilon)\big)\\
&= X\big(x;\, \phi(0, \varepsilon+\Delta\varepsilon)\big)\\
&= X(x;\, \varepsilon+\Delta\varepsilon).
\end{aligned}$$

Theorem 3.4.1 (First Fundamental Theorem of Lie). There exists a parametrization τ(ε) such that the Lie group of transformations x^* = X(x; ε) is equivalent to the solution of the initial value problem for the system of first order differential equations [2]

$$\frac{dx^*}{d\tau} = \xi(x^*), \tag{21}$$

with

$$x^* = x \quad\text{when}\quad \tau = 0. \tag{22}$$

In particular,

$$\tau(\varepsilon) = \int_0^\varepsilon \Gamma(\varepsilon')\, d\varepsilon', \tag{23}$$

where

$$\Gamma(\varepsilon) = \left.\frac{\partial\phi(a,b)}{\partial b}\right|_{(a,b)=(\varepsilon^{-1},\,\varepsilon)} \tag{24}$$

and Γ(0) = 1.

Proof. First we show that x^* = X(x; ε) leads to Equations (21)-(24). Expand the left-hand side of Equation (20) in a power series in ∆ε about ∆ε = 0 so that

$$X(x;\varepsilon+\Delta\varepsilon) = x^* + \frac{\partial X(x;\varepsilon)}{\partial\varepsilon}\Delta\varepsilon + O((\Delta\varepsilon)^2), \tag{25}$$

where x^* is given by x^* = X(x; ε). Then expanding φ(ε⁻¹, ε + ∆ε) in a power series in ∆ε about ∆ε = 0, we have

$$\phi(\varepsilon^{-1}, \varepsilon+\Delta\varepsilon) = \phi(\varepsilon^{-1}, \varepsilon) + \Gamma(\varepsilon)\Delta\varepsilon + O((\Delta\varepsilon)^2) \tag{26}$$
$$= \Gamma(\varepsilon)\Delta\varepsilon + O((\Delta\varepsilon)^2), \tag{27}$$

where Γ(ε) is defined by Equation (24). Consequently, after expanding the right-hand side of Equation (20) in a power series in ∆ε about ∆ε = 0, we obtain

$$\begin{aligned}
X(x;\varepsilon+\Delta\varepsilon) &= X\big(x^*;\, \phi(\varepsilon^{-1}, \varepsilon+\Delta\varepsilon)\big)\\
&= X\big(x^*;\, \Gamma(\varepsilon)\Delta\varepsilon + O((\Delta\varepsilon)^2)\big)\\
&= X(x^*; 0) + \Delta\varepsilon\,\Gamma(\varepsilon)\left.\frac{\partial X}{\partial\delta}(x^*;\delta)\right|_{\delta=0} + O((\Delta\varepsilon)^2)\\
&= x^* + \Gamma(\varepsilon)\xi(x^*)\Delta\varepsilon + O((\Delta\varepsilon)^2). \tag{28}
\end{aligned}$$

Equating Equations (25) and (28), we see that x^* = X(x; ε) satisfies the initial value problem for the system of differential equations

$$\frac{dx^*}{d\varepsilon} = \Gamma(\varepsilon)\xi(x^*) \tag{29}$$

with

$$x^* = x \quad\text{at}\quad \varepsilon = 0. \tag{30}$$

From the expansion of x^* in a series, it follows that Γ(0) = 1. The parametrization τ(ε) = ∫₀^ε Γ(ε′) dε′ leads to Equations (21) and (22). Since ∂ξ/∂x_i(x), i = 1, 2, …, n, is continuous, it follows from the existence and uniqueness theorem for an initial value problem for a system of first order differential equations that the solution exists and is unique. This solution must be x^* = X(x; ε), completing the proof of Lie's First Fundamental Theorem [2].

Example 3.4.2. Consider the infinitesimal generator of the rotation group

v = −y∂x + x∂y.

We already know that

x∗ = x cos(ε) − y sin(ε)

y∗ = y cos(ε) + x sin(ε) are the transformations of the rotation group. We will use the First Fundamental Theorem of Lie to derive these transformations from the infinitesimal generator v. Since the transformation operation is known to be addition, φ(a, b) = a + b. We would like to clarify that we have used the notation Ψ(a + b, x) to describe the same transformations that we have been describing with X(x; a + b). So, we have that

$$\frac{\partial\phi(a,b)}{\partial b} = 1.$$

Therefore,

$$\Gamma(\varepsilon) = \left.\frac{\partial\phi(a,b)}{\partial b}\right|_{(a,b)=(\varepsilon^{-1},\,\varepsilon)} = 1\Big|_{(a,b)=(\varepsilon^{-1},\,\varepsilon)} = 1.$$

Next, we calculate τ(ε), where

$$\tau(\varepsilon) = \int_0^\varepsilon 1\, d\varepsilon' = \varepsilon'\Big|_0^\varepsilon = \varepsilon.$$

So, we have that the Lie group of transformations is equivalent to the solution of

$$\frac{dx^*}{d\varepsilon} = \xi(x^*)$$

with x^* = x when ε = 0. For this particular case, we have that

$$\frac{dx^*}{d\varepsilon} = -y^* \quad\text{and}\quad \frac{dy^*}{d\varepsilon} = x^*.$$

We now introduce the complex variables z^* = x^* + iy^* and z = x + iy [4]. From z^* = x^* + iy^*, we see that

$$\frac{dz^*}{d\varepsilon} = \frac{dx^*}{d\varepsilon} + i\frac{dy^*}{d\varepsilon} = -y^* + ix^* = iz^*. \tag{31}$$

Next, we solve (31) as an ordinary differential equation with z^*(0) = z. We then have that

$$\begin{aligned}
z^* &= z e^{i\varepsilon}\\
&= z(\cos(\varepsilon) + i\sin(\varepsilon))\\
&= (x+iy)(\cos(\varepsilon) + i\sin(\varepsilon)),\\
x^* + iy^* &= x\cos(\varepsilon) + ix\sin(\varepsilon) + iy\cos(\varepsilon) - y\sin(\varepsilon),
\end{aligned}$$

which implies

$$x^* = x\cos(\varepsilon) - y\sin(\varepsilon), \qquad y^* = x\sin(\varepsilon) + y\cos(\varepsilon).$$

We now have the appropriate transformations of x^* and y^* that give the rotation group. An alternate, quicker way to find these transformations is to take the characteristic system

$$\frac{dx^*}{d\varepsilon} = -y^*, \qquad \frac{dy^*}{d\varepsilon} = x^*$$

and differentiate with respect to ε to get

$$\frac{d^2x^*}{d\varepsilon^2} = -\frac{dy^*}{d\varepsilon} = -x^*, \qquad \frac{d^2y^*}{d\varepsilon^2} = \frac{dx^*}{d\varepsilon} = -y^*.$$

These are simple ordinary differential equations with solutions

$$x^*(\varepsilon) = a\cos(\varepsilon) + b\sin(\varepsilon), \qquad y^*(\varepsilon) = a\sin(\varepsilon) - b\cos(\varepsilon).$$

After we use the initial conditions, we find that a = x and b = −y.
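The characteristic system of this example can also be handed directly to a symbolic ODE solver. A hedged sketch; sympy's `dsolve` is my choice of tool here, not the thesis's:

```python
import sympy as sp

eps = sp.symbols('epsilon')
x, y = sp.symbols('x y')                 # initial values at eps = 0
X, Y = sp.Function('X'), sp.Function('Y')

# dx*/deps = -y*,  dy*/deps = x*,  with x*(0) = x and y*(0) = y
system = [sp.Eq(X(eps).diff(eps), -Y(eps)),
          sp.Eq(Y(eps).diff(eps), X(eps))]

sol = sp.dsolve(system, [X(eps), Y(eps)], ics={X(0): x, Y(0): y})
```

The solver returns exactly the rotation group transformations x* = x cos ε − y sin ε and y* = x sin ε + y cos ε.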

As we can see, the Fundamental Theorem of Lie revolutionizes the way we may solve for the Lie group of transformations. Since the groups of interest have addition as their operation, it becomes unnecessary to do all of the steps in the Fundamental Theorem of Lie. For example, as long as addition is the operation (which it always is for Lie symmetries), we have that dτ = dε, and we may skip right to solving the characteristic system of ordinary differential equations. Henceforth, we will leave out the calculations for τ and go straight into solving the characteristic system of ODEs.

To summarize the finer points of this chapter, we have shown that the flow generated by a vector field is the same as a local group action of the Lie group on the manifold M, often called a one-parameter group of transformations. The vector field v is called the infinitesimal generator of the action, since by Taylor's theorem in local coordinates

$$\Psi(\varepsilon, x) = x + \varepsilon\xi(x) + O(\varepsilon^2),$$

where ξ = (ξ¹, …, ξᵐ) are the coefficients of v. Recall that the flow generated by a vector field is also called the maximal integral curve, where every possible tangent to this curve equals the vector field v. The computation of the flow or one-parameter group generated by a given vector field v is often referred to as exponentiation of the vector field. The suggestive notation

$$\exp(\varepsilon\mathbf{v})x \equiv \Psi(\varepsilon, x)$$

will be adopted for the exponentiation. Since we have established that the flow is the same as a one-parameter group of transformations, we can draw attention to the fact that the orbits of the one-parameter group action are actually the same as the flows. This in turn means that the orbits generate the vector field v as well. The implications of the one-parameter groups and flows being interchangeable are vast and extremely helpful. One implication in particular is that we now have a means to find the groups of transformations. We had already established that we can write the infinitesimal generators v that span the Lie algebra as vector fields, but now we also know that those vector fields are the same as the underlying structure of the one-parameter groups of transformations. More formally, if Ψ(ε, x) is any one-parameter group of transformations acting on M, then its infinitesimal generator is

$$\mathbf{v}|_x = \left.\frac{d}{d\varepsilon}\Psi(\varepsilon, x)\right|_{\varepsilon=0}.$$

So, if we can infuse the properties of our system into these vector fields, we can find the groups and therefore the types of transformations that map solution curves. We may also explicitly solve for the canonical coordinates that may be substituted into our system to reduce it. Figure 7 illustrates the connections between the theory that has been presented so far. The bold arrows represent the calculations that we actually perform when doing Lie symmetry methods. Chapter 4 focuses on creating an applied method to jump between these arrows while keeping all this theory in the background.

Figure 7: The web relating Lie Algebras and Lie Groups.

4 SYMMETRY GROUPS AND INVARIANCE

A symmetry group can be described as a local group of transformations acting on the independent and dependent variables of the system of equations, with the requirement that it must transform solutions of the system into other solutions of the system [7]. The "direction" in which symmetry groups map the solutions is found from a condition that we must develop. Invariant functions describe the solution curves so that the symmetry group may effectively map them to each other. This chapter focuses on developing how the infinitesimal generators of the groups of transformations interact with the invariant functions. Once we set up this interaction, we then develop a means to calculate the infinitesimal generators while infusing the characteristics of the system of differential equations within them. We wish to take advantage of the idea developed in the last chapter that the flow is the same as the one-parameter groups of transformations. Since the infinitesimal generators are found from the flow and have the added bonus of being vectors, calculating them is doable. Once we have the infinitesimal generators, we can find the one-parameter groups of transformations. The focus of this chapter is developing a calculable series of steps to explicitly extract the infinitesimal generators of the groups of transformations from the waves of theory that surround them. Once the infinitesimal generators are known, we use the First Fundamental Theorem of Lie to find the groups of transformations.

4.1 Algebraic Systems

At this point, we would like to describe groups of transformations in terms of how they act on the solution curves of a system of algebraic equations. This will hopefully help us find a condition by which transformation groups must act on a system in order to "properly" map solution curves. Describing groups of transformations in this way means that we must define what we will call symmetry groups.

Definition 4.1.1. Let S be a system of differential equations. A symmetry group of the system S is a local group of transformations G acting on an open subset M of the space of independent and dependent variables for the system with the property that whenever u = f(x) is a solution of S, and whenever g · f is defined for g ∈ G, then u = g · f(x) is also a solution of the system [7].

Definition 4.1.2. Let G be a local group of transformations acting on a manifold M. A subset S ⊂ M is called G-invariant and G is called a symmetry group of S if whenever x ∈ S, and g ∈ G, then g · x ∈ S [7].

Definition 4.1.3. The set S will be the set of solutions or subvariety determined by the common zeros of a collection of smooth functions F = (F1,...,Fl),

S = S_F = {x : F_ν(x) = 0, ν = 1, …, l}.

If S1 and S2 are G-invariant sets, so are S1 ∪ S2 and S1 ∩ S2 [7].

Let us consider a function F(x) that defines the solutions to a system of algebraic equations. If we find a "direction" in which F(x) is invariant, we have effectively found the invariance of the whole system of equations. We must then be sure to set up conditions on F(x) such that the solution curves are mapped into other solution curves.

Definition 4.1.4. Let G be a local group of transformations acting on a manifold M. A function F : M → N, where N is another manifold, is called a G-invariant function if for all x ∈ M and all g ∈ G such that g · x is defined [7],

F (g · x) = F (x).

Proposition 4.1.1. Let N ⊂ M be a submanifold of M. Then N is locally G-invariant if and only if for each x ∈ N, g|_x ⊂ TN|_x. In other words, N is locally G-invariant if and only if the infinitesimal generators v of G are everywhere tangent to N [7].

The following proposition develops this same idea of not altering the solution curves of a system. It forges a relationship between invariant functions and the infinitesimal generators of the Lie group of transformations. Since knowing the infinitesimal generators is useful, developing invariance conditions for them will lead us to solutions of the system.

Proposition 4.1.2. Let G be a connected group of transformations acting on the manifold M. A smooth real-valued function ζ : M → R is an invariant function for G if and only if

v(ζ) = 0 for all x ∈ M, and every infinitesimal generator v of G [7].

Proof. By the chain rule, if x ∈ M,

$$\frac{d}{d\varepsilon}\zeta(\exp(\varepsilon\mathbf{v})x) = \mathbf{v}(\zeta)[\exp(\varepsilon\mathbf{v})x]$$

whenever exp(εv)x is defined. Setting ε = 0 proves the necessity of v(ζ) = 0. Conversely, if v(ζ) = 0 holds everywhere, then

$$\frac{d}{d\varepsilon}\zeta(\exp(\varepsilon\mathbf{v})x) = 0$$

where defined; hence ζ(exp(εv)x) is constant on the connected, local one-parameter subgroup exp(εv) of G_x = {g ∈ G : g · x is defined}. But every element of G_x can be written as a finite product of exponentials of infinitesimal generators v_i of G. Hence, ζ(g · x) = ζ(x) for all g ∈ G_x.

At this point, a theorem linking Lie groups of transformations to symmetry groups is feasible through the invariance conditions on the infinitesimal generators. The following theorem sets the groundwork for a formal process to find the invariance conditions of a system of algebraic equations. From here, the next step will be to discuss the invariance of a system of differential equations as a system of algebraic equations in a higher-dimensional space. Consider a system of algebraic equations

F_ν(x) = 0, ν = 1, …, l,

where F₁(x), …, F_l(x) are smooth functions defined for x in some manifold M.

Theorem 4.1.1. Let G be a connected local Lie group of transformations acting on the m-dimensional manifold M. Let F_ν : M → ℝ, ν = 1, …, l, with l ≤ m, define a system of algebraic equations

F_ν(x) = 0, ν = 1, …, l,

and assume that the system is of maximal rank. Then G is a symmetry group of the system if and only if

$$\mathbf{v}[F_\nu(x)] = 0, \quad \nu = 1, \ldots, l, \quad\text{whenever}\quad F_\nu(x) = 0, \tag{32}$$

for every infinitesimal generator v of G [7].

Proof. The necessity of Equation (32) follows from differentiating the identity F(exp(εv)x) = 0, in which x is a solution and v is an infinitesimal generator of G, with respect to ε at ε = 0.

To prove the sufficiency, let x₀ be a solution to the system. Using the maximal rank condition, we can choose local coordinates y = (y¹, …, yᵐ) such that x₀ = 0 and F has the simple form F(y) = (y¹, …, yˡ); see Theorem 2.3.1. Let

$$\mathbf{v} = \xi^1(y)\frac{\partial}{\partial y^1} + \cdots + \xi^m(y)\frac{\partial}{\partial y^m}$$

be any infinitesimal generator of G, expressed in the new coordinates. Condition (32) means that

$$\mathbf{v}(y^\nu) = \xi^\nu(y) = 0, \quad \nu = 1, \ldots, l, \tag{33}$$

whenever y¹ = y² = ⋯ = yˡ = 0. Now the flow φ(ε) = exp(εv) · x₀ of v through x₀ = 0 satisfies the system of ordinary differential equations

$$\frac{d\phi^i}{d\varepsilon} = \xi^i(\phi(\varepsilon)), \qquad \phi^i(0) = 0, \quad i = 1, \ldots, m.$$

By Equation (33) and the uniqueness of solutions to this initial-value problem, we conclude that φ^ν(ε) = 0 for ν = 1, …, l, and ε sufficiently small. We have thus shown that if x₀ is a solution to F(x) = 0, v is an infinitesimal generator of G, and ε is sufficiently small, then exp(εv)x₀ is again a solution to the system. Since the solution set S_F = {x : F(x) = 0} is closed, we may draw the same conclusion for all g = exp(εv) in the connected one-parameter subgroup of G_{x₀} generated by v. Another application of writing every element of G_x as a finite product of exponentials of infinitesimal generators completes the proof of the theorem in general.

4.1.1 Constructing Invariants

All of this time, we have been setting up the theory behind invariants. Now we must discuss a process to actually find the invariants of the group action. Recall that if G is a one-parameter group of transformations acting on M, the infinitesimal generator is

$$\mathbf{v} = \xi^1(x)\frac{\partial}{\partial x^1} + \cdots + \xi^m(x)\frac{\partial}{\partial x^m}, \tag{34}$$

expressed in some given local coordinates. A local invariant ζ(x) of G is a solution of the linear, homogeneous, first order partial differential equation

$$\mathbf{v}(\zeta) = \xi^1(x)\frac{\partial\zeta}{\partial x^1} + \cdots + \xi^m(x)\frac{\partial\zeta}{\partial x^m} = 0. \tag{35}$$

The general solution of Equation (35) can be found by integrating the corresponding characteristic system of ordinary differential equations:

$$\frac{dx^1}{\xi^1(x)} = \frac{dx^2}{\xi^2(x)} = \cdots = \frac{dx^m}{\xi^m(x)}.$$

Example 4.1.1. Consider the infinitesimal generator

v = x∂x + y∂y.

The first order PDE is

$$x\zeta_x + y\zeta_y = 0.$$

Then the invariant functions are found by solving the characteristic system

$$\frac{dx}{x} = \frac{dy}{y} = \frac{d\zeta}{0}.$$

First, we solve the left side of this system, so that

$$\frac{dx}{x} = \frac{dy}{y} \;\implies\; \ln|x| = \ln|y| + c \;\implies\; \ln\left|\frac{x}{y}\right| = c \;\implies\; \frac{x}{y} = C,$$

and C = x/y is the first arbitrary constant. We now focus on the second half of the characteristic system. The right side of the characteristic system, dζ/0 = dx/x, is interpreted as dζ = 0, which implies that ζ = w is constant, giving the second invariant function. Now w can be thought of as an arbitrary function of C, so that ζ = w(x/y) is a solution of the PDE.
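The invariance criterion v(ζ) = 0 of Proposition 4.1.2 is easy to verify for the invariant just constructed. A brief illustrative check, with the tool choice (sympy) my own assumption:

```python
import sympy as sp

x, y = sp.symbols('x y')

def v(f):
    """The generator v = x d/dx + y d/dy from Example 4.1.1."""
    return x * sp.diff(f, x) + y * sp.diff(f, y)

zeta = x / y                          # candidate invariant
invariance = sp.simplify(v(zeta))     # 0, so zeta is invariant

# A non-invariant function for contrast: v(x + y) = x + y, not identically 0
non_invariant = sp.simplify(v(x + y))
```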

One distinguishing characteristic of invariants is that they describe the solution curves of a differential equation. One of the invariant functions will always describe some transformation in relation to the original function. One may then differentiate the invariant function as the system requires, and this leads to a reduction of the system to an ordinary differential equation. Solving this reduced equation is a challenge in and of itself, but it is often tractable. Another term coined for invariant functions is canonical coordinates, because they serve as a new set of coordinates that can be substituted into the system of differential equations to simplify it.

4.2 Prolongation

Before we go further into investigating the invariance of differential equations, we must adapt the theory we have presented for algebraic equations to differential equations. The action of prolonging a system is accomplished by letting the dependent variables act as if they were independent. This effectively places the system into a space of higher dimension, making the solution curves act as a hypersurface in the bigger space. We then seek the invariance conditions on the hypersurface. Figure 8 shows solution curves that are now acting as a surface, or a plane in this case. Instead of mapping two-dimensional solution curves, we will be mapping surfaces. Extending the system into the bigger space makes the system of differential equations act as algebraic equations in that space. Therefore, the previous theory developed for algebraic equations may be adapted to differential equations, as long as the system is extended by prolongation.


Figure 8: Invariant Surface.

Example 4.2.1. Given a system of algebraic equations

F_ν(x, y) = 0, we may extend this system into the (x, y, y′) space by letting y′ act as independent, i.e.,

F_ν(x, y, y′) = 0.

If we now consider partial differential equations, we may let all the partial derivatives of the dependent variable act as independent variables, as in the following example.

Example 4.2.2. Consider Burgers' equation, which is a common example in fluid mechanics [7],

$$u_t = u_{xx} + u_x^2,$$

which has two independent variables x and t and dependent variable u. The original v looks like

$$\mathbf{v} = \xi(x,t,u)\frac{\partial}{\partial x} + \tau(x,t,u)\frac{\partial}{\partial t} + \phi(x,t,u)\frac{\partial}{\partial u}.$$

The possible combinations of second order partials of u are

ut, ux, utt, uxt, uxx.

Let ∆ represent the extended Burgers equation. An extension of u_t = u_{xx} + u_x² is

$$\Delta(x, t, u, u_t, u_x, u_{tt}, u_{xt}, u_{xx}) = 0,$$

and the extension of v is

$$\mathrm{pr}^{(2)}\mathbf{v} = \mathbf{v} + \phi^x\frac{\partial}{\partial u_x} + \phi^t\frac{\partial}{\partial u_t} + \phi^{xx}\frac{\partial}{\partial u_{xx}} + \phi^{xt}\frac{\partial}{\partial u_{xt}} + \phi^{tt}\frac{\partial}{\partial u_{tt}},$$

where the φ^J's are some unknown functions of x, t, and u.

We will now turn to developing the theory behind prolonging the basic space into the larger hyper-space, called a jet space. The whole process of prolongation is best explained by what is presented in Olver's book, so the following passage is taken from his work [7]. Given a smooth real-valued function f(x) = f(x^1, . . . , x^p) of p independent variables, there are

p_k ≡ \binom{p + k − 1}{k}

different k-th order partial derivatives of f. We employ the multi-index notation [7]

∂_J f(x) = ∂^k f(x) / (∂x^{j_1} ∂x^{j_2} · · · ∂x^{j_k})

for these derivatives. In this notation, J = (j_1, . . . , j_k) is an unordered k-tuple of integers, with entries 1 ≤ j_k ≤ p indicating which derivatives are being taken. The order of such a multi-index, which we denote by #J ≡ k, indicates how many derivatives are being taken. More generally, if f : X → U is a smooth function

from X ≃ R^p to U ≃ R^q, so u = f(x) = (f^1(x), . . . , f^q(x)), there are q · p_k numbers u^α_J = ∂_J f^α(x) needed to represent all the different k-th order derivatives of the components of f at a point x. We let U_k ≡ R^{q · p_k} be the Euclidean space of this dimension, endowed with coordinates u^α_J corresponding to α = 1, . . . , q and all of the multi-indices J = (j_1, . . . , j_k) of order k, designed so as to represent the above derivatives. Furthermore, we set U^{(n)} = U × U_1 × · · · × U_n as the Cartesian product space, whose coordinates represent all the derivatives of functions u = f(x) of all orders from 0 to n. Note that U^{(n)} is a Euclidean space of dimension

q + qp_1 + · · · + qp_n = q \binom{p + n}{n} ≡ q p^{(n)}.

A typical point in U^{(n)} will be denoted by u^{(n)}, so u^{(n)} has q · p^{(n)} different components u^α_J, where α = 1, . . . , q, and J runs over all unordered multi-indices J = (j_1, . . . , j_k) with 1 ≤ j_k ≤ p and 0 ≤ k ≤ n. Given a smooth function u = f(x), so f : X → U, there is an induced function u^{(n)} = pr^{(n)}f(x), called the n-th prolongation of f, which is defined by the equations

u^α_J = ∂_J f^α(x).

Thus, pr^{(n)}f is a function from X to the space U^{(n)}, and for each x in X, pr^{(n)}f(x) is a vector whose q · p^{(n)} entries represent the values of f and all its derivatives up to order n at the point x [7].

The space X × U^{(n)}, whose coordinates are composed of the independent and dependent variables and their associated partial derivatives, is what is called the jet space.

4.3 Prolongation of Differential Equations

Thus far, we have established that Equation (34) is the infinitesimal generator of the group of transformations and that the coefficients of v can be calculated by solving a characteristic system of differential equations. The following theorem ensures that the transformation group is indeed the symmetry group of the system of equations. In other words, it links the solutions of the system to the group of transformations.

Theorem 4.3.1. Let M be an open subset of X × U and suppose ∆(x, u^{(n)}) = 0 is an n-th order system of differential equations defined over M, with corresponding subvariety S_∆ ⊂ M^{(n)}. Suppose G is a local group of transformations acting on M whose prolongation leaves S_∆ invariant, meaning that whenever (x, u^{(n)}) ∈ S_∆, we have pr^{(n)}g · (x, u^{(n)}) ∈ S_∆ for all g ∈ G such that this is defined. Then G is a symmetry group of the system of differential equations [7].

Proof. Suppose u = f(x) is a local solution to ∆(x, u^{(n)}) = 0. This means that the graph

Γ_f^{(n)} = {(x, pr^{(n)}f(x))}

of the prolongation pr^{(n)}f lies entirely within S_∆. If g ∈ G is such that the transformed function g · f is well defined, the graph of its prolongation, namely Γ^{(n)}_{g·f}, is the same as the transform of the graph of pr^{(n)}f by the prolonged group transformation pr^{(n)}g:

Γ^{(n)}_{g·f} = pr^{(n)}g(Γ_f^{(n)}).

Now since S_∆ is invariant under pr^{(n)}g, the graph of pr^{(n)}(g · f) again lies entirely in S_∆. But this is just another way of saying that the transformed function g · f is a solution to the system ∆.

Finally, we reach the exciting conclusion, where we can redefine all of these operations in terms of a system of differential equations instead of simple algebraic ones. We now extend Theorem 4.1.1 to also apply to differential equations.

Theorem 4.3.2. Suppose

∆_ν(x, u^{(n)}) = 0, ν = 1, . . . , l,

is a system of differential equations of maximal rank defined over M ⊂ X × U. If G is a local group of transformations acting on M, and

pr^{(n)}v[∆_ν(x, u^{(n)})] = 0, ν = 1, . . . , l, whenever ∆_ν(x, u^{(n)}) = 0,

for every infinitesimal generator v of G, then G is a symmetry group of the system [7].

Proof. The proof is immediate from Theorems 4.1.1 and 4.3.1.

Now that the theory is firmly established for using Lie symmetry methods, we may shift attention towards developing an applicable series of steps that uses the theory. To summarize, we have shown that v is the infinitesimal generator of the group of transformations, which is directly related to the symmetries of the system of differential equations. We have also established that the coefficients of v can be found by solving a characteristic system of ordinary differential equations. As a result, we have a clear sequence of steps that connects all of the concepts encountered here: the infinitesimal generator gives the symmetries of our system; the symmetries give the groups of transformations; and the transformation groups give the solutions to the system. The base step of the process, then, is finding the coefficients of the infinitesimal generator. Our endeavors are therefore focused on a method for finding these coefficients. We will develop this process in the general prolongation formula, but, as part of the formula, total derivatives are necessary for the calculations. A brief introduction to total derivatives is necessary before we can continue to the general prolongation formula.

4.3.1 Total Derivatives

Proposition 4.3.1. Given P(x, u^{(n)}), the i-th total derivative of P has the general form

D_i P = ∂P/∂x^i + Σ_{α=1}^q Σ_J u^α_{J,i} ∂P/∂u^α_J,

where, for J = (j_1, . . . , j_k),

u^α_{J,i} = ∂u^α_J/∂x^i = ∂^{k+1}u^α / (∂x^i ∂x^{j_1} · · · ∂x^{j_k}).

Proof. The proof is a straightforward application of the chain rule. For example, in the case X = R², with coordinates (x, y), and U = R, there are two total derivatives, D_x and D_y, with

D_x P = ∂P/∂x + u_x ∂P/∂u + u_xx ∂P/∂u_x + u_xy ∂P/∂u_y + u_xxx ∂P/∂u_xx + · · · ,
D_y P = ∂P/∂y + u_y ∂P/∂u + u_xy ∂P/∂u_x + u_yy ∂P/∂u_y + u_xxy ∂P/∂u_xx + · · · .

Thus, if P = x u u_xy, then

D_x P = u u_xy + x u_x u_xy + x u u_xxy,    D_y P = x u_y u_xy + x u u_xyy.
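This total-derivative computation can be checked mechanically in a computer algebra system: once u is declared a function of its arguments, ordinary differentiation performs exactly the chain rule above. A minimal sympy sketch (sympy is used here purely for illustration; the thesis calculations themselves are done in Maple):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)

# P = x * u * u_xy, the example above
P = x * u * u.diff(x, y)

# Since u is declared as a function of (x, y), plain differentiation of P
# with respect to x is exactly the total derivative D_x P.
DxP = sp.expand(P.diff(x))
DyP = sp.expand(P.diff(y))

expected_Dx = sp.expand(u*u.diff(x, y) + x*u.diff(x)*u.diff(x, y) + x*u*u.diff(x, x, y))
print(sp.simplify(DxP - expected_Dx) == 0)  # True
```

The same mechanism gives D_y P, matching the second expression above.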

Higher order total derivatives are defined in analogy with our notation for higher order partial derivatives. Explicitly, if J = (j_1, . . . , j_k) is a k-th order multi-index, with 1 ≤ j_κ ≤ p for each κ, then we denote

D_J = D_{j_1} D_{j_2} · · · D_{j_k}

as the J-th total derivative.

4.3.2 The General Prolongation Formula

Theorem 4.3.3 (The General Prolongation Formula). Let

v = Σ_{i=1}^p ξ^i(x, u) ∂/∂x^i + Σ_{α=1}^q φ_α(x, u) ∂/∂u^α    (36)

be a vector field defined on an open subset M ⊂ X × U. The n-th prolongation of v is the vector field [7]

pr^{(n)}v = v + Σ_{α=1}^q Σ_J φ^J_α(x, u^{(n)}) ∂/∂u^α_J    (37)

defined on the corresponding jet space M^{(n)} ⊂ X × U^{(n)}. The second summation is over all unordered multi-indices J = (j_1, . . . , j_k), with 1 ≤ j_k ≤ p, 1 ≤ k ≤ n. The coefficient functions φ^J_α of pr^{(n)}v are given by [7]

φ^J_α(x, u^{(n)}) = D_J( φ_α − Σ_{i=1}^p ξ^i u^α_i ) + Σ_{i=1}^p ξ^i u^α_{J,i},    (38)

where u^α_i = ∂u^α/∂x^i and u^α_{J,i} = ∂u^α_J/∂x^i.

Proof. The proof is left to the reader.

Given a system of differential equations, we now have a formula for extending the system into a corresponding jet space, making it possible to find the coefficients of the infinitesimal generator. We know that the infinitesimal generator gives the symmetry group of the system of equations. Once we have the symmetry group, we may find the local Lie group of transformations that simplifies the system of differential equations. Now that we have established the theory behind using symmetry methods to solve systems of differential equations, we must translate it into a workable set of steps to follow:

1. Calculate the correct prolongation based on the relevant values of p, q, and n and the appropriate form of the infinitesimal generator v.

2. Apply the prolongation to the system of differential equations.

3. Calculate the coefficients of the remaining equation and substitute the system into the resulting formula as necessary.

4. Equate these coefficients in the manner found in step 2 and pick off the coefficients of each partial derivative of the dependent variable(s).

5. Solve the resulting system to get final values for the components of v.

6. Find the linearly independent set of infinitesimal generators for the system.

7. Construct invariants using the characteristic system and find values that must be substituted into the system to make it reduce to an ODE.

Figure 9 shows what these steps accomplish as we perform them. All of the theory that we have put together so far is remarkable in how many topics it brings together. We can relate this figure to Figure 7 as giving the bold arrows that were previously mentioned. We will now thoroughly demonstrate the use of these steps through an investigation of the potential form of Burgers' equation.

[Figure 9 is a flowchart of the solution process:

System of Equations: ∆_ν = 0, ν = 1, . . . , l
  | Prolong
Invariance Condition: pr v[∆_ν] |_{∆_ν = 0} = 0, ν = 1, . . . , l
  | Separate
Determining Equations: ξ_u = 0, . . .
  | Solve
Infinitesimal Generator: v = Σ_{i=1}^p ξ^i(x, u) ∂_{x^i} + Σ_{α=1}^q φ_α(x, u) ∂_{u^α}
  | Lie series / ODE system
Group of Transformations
  |
Applications: Solutions, Reductions, Similarity Transformations]

Figure 9: Solution process.

4.4 Burgers' Equation

Let us put this process to use on Burgers' equation, which is of the form

u_t = u_xx + u_x².    (39)

We see that it has two independent variables, x and t, and a dependent variable, u = u(x, t). This means that each infinitesimal generator will have three components, having the form

v = ξ(x, t, u) ∂/∂x + τ(x, t, u) ∂/∂t + φ(x, t, u) ∂/∂u.

Burgers' equation is a second-order differential equation, so we must apply the second prolongation

pr^{(2)}v[∆] = 0, where ∆ = u_t − u_xx − u_x²,

when u_t = u_xx + u_x², with

pr^{(2)}v = v + φ^x ∂/∂u_x + φ^t ∂/∂u_t + φ^xx ∂/∂u_xx + φ^xt ∂/∂u_xt + φ^tt ∂/∂u_tt.

After applying the prolongation to Burgers' equation, we get the symmetry condition

φ^t = φ^xx + 2u_x φ^x,

where Theorem 4.3.3 says we must now substitute u_xx + u_x² for u_t. Using formula (38) for calculating the φ^J's, we see that

φ^x = φ_x + (φ_u − ξ_x)u_x − τ_x u_t − ξ_u u_x² − τ_u u_x u_t    (40)

φ^t = φ_t − ξ_t u_x + (φ_u − τ_t)u_t − ξ_u u_x u_t − τ_u u_t²    (41)

φ^xx = φ_xx − ξ_xx u_x − τ_xx u_t + 2φ_xu u_x − 2ξ_xu u_x² − 2τ_xu u_x u_t − 2ξ_x u_xx − 2τ_x u_xt
       + φ_uu u_x² − ξ_uu u_x³ − τ_uu u_x² u_t − 2τ_u u_x u_xt + φ_u u_xx − τ_u u_xx u_t − 3ξ_u u_x u_xx.    (42)
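These coefficient functions come from formula (38); for instance, φ^x = D_x(φ − ξu_x − τu_t) + ξu_xx + τu_xt. The expansion (40) can be confirmed for any concrete choice of ξ, τ, and φ; the sketch below picks arbitrary smooth functions purely to exercise the formula (sympy, for illustration only; the thesis's own computations use Maple):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
ux, ut = u.diff(x), u.diff(t)

# Arbitrary (hypothetical) coefficient functions of x, t, u; any smooth choices work.
xi  = x*t*u**2
tau = x + sp.sin(u)
phi = t*sp.exp(u)

# Formula (38) with J = (x): phi^x = D_x(phi - xi*u_x - tau*u_t) + xi*u_xx + tau*u_xt.
# Since u is a sympy Function of (x, t), .diff(x) acts as the total derivative D_x.
phi_x = ((phi - xi*ux - tau*ut).diff(x) + xi*u.diff(x, 2) + tau*u.diff(x, t)).doit()

# The expanded form, Equation (40):
# phi^x = phi_x + (phi_u - xi_x) u_x - tau_x u_t - xi_u u_x^2 - tau_u u_x u_t
U = sp.Dummy('U')
d = lambda f, v: f.subs(u, U).diff(v).subs(U, u)   # partial derivative holding u fixed
expected = (d(phi, x) + (d(phi, U) - d(xi, x))*ux - d(tau, x)*ut
            - d(xi, U)*ux**2 - d(tau, U)*ux*ut)
print(sp.simplify(sp.expand(phi_x - expected)))  # 0
```

The same pattern verifies (41) and (42) by taking J = (t) and J = (x, x).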

Now we substitute u_t = u_xx + u_x² into φ^x, φ^t, and φ^xx and set

φ^t = φ^xx + 2u_x φ^x.

We then collect terms involving partial derivatives of u and equate their coefficients. The resulting system of equations is listed in Table 1.

Term          Coefficients

a.  1           φ_t = φ_xx
b.  u_x         −ξ_t = 2φ_xu − ξ_xx + 2φ_x
c.  u_xx        −τ_t = −2ξ_x − τ_xx
d.  u_x²        −τ_t = −τ_xx + φ_uu − 2ξ_xu + φ_u − 2ξ_x
e.  u_x² u_xx   0 = −τ_uu − τ_u
f.  u_x³        0 = −ξ_uu − 2τ_x − ξ_u
g.  u_x⁴        0 = −τ_uu − τ_u
h.  u_x u_xx    0 = −2τ_xu − 2ξ_u − 2τ_x
i.  u_xt        −2τ_x = 0
j.  u_x u_xt    −τ_u = 0

Table 1: Complete list of coefficient equations.

Almost immediately, we can see from Equations (i), (j), and then (h) that τ = f(t) and ξ = g(x, t). Once we substitute these results into the other coefficient equations, we obtain the simplified versions in Table 2.

Term     Coefficients

a.  1      φ_t = φ_xx
b.  u_x    −ξ_t = 2φ_xu − ξ_xx + 2φ_x
c.  u_xx   −τ_t = −2ξ_x
d.  u_x²   −τ_t = φ_uu + φ_u − 2ξ_x

Table 2: Reduced coefficient list.

Since there are no more simple results, the next step is solving these equations for ξ, τ, and φ, so we will discuss the calculations in detail. From equation (c) in Table 2, we have that

−τ_t = −2ξ_x
ξ_x = (1/2)τ_t
ξ = (1/2)τ_t x + a(t).

Now we use ξ_x = (1/2)τ_t, substitute this into Equation (d) in Table 2, and solve for φ:

−τ_t = φ_uu + φ_u − 2ξ_x
−τ_t = φ_uu + φ_u − 2((1/2)τ_t)
φ_u = −φ_uu,

or φ = c(x, t)e^{−u} + b(x, t).

Next, we use what we know about ξ and φ and substitute them into Equation (b). From this, we can determine b(x, t):

−ξ_t = 2φ_xu − ξ_xx + 2φ_x
−((1/2)τ_tt x + a_t) = 2φ_xu + 2φ_x
−(1/2)τ_tt x − a_t = 2(−c_x e^{−u}) + 2(c_x e^{−u} + b_x)
−(1/2)τ_tt x − a_t = 2b_x
b = −(1/8)τ_tt x² − (1/2)a_t x + d(t).

We have yet to find τ(t), so we will use the last coefficient equation, Equation (a) in Table 2. Substituting in for φ_t and φ_xx, we see

φ_t = φ_xx
c_t e^{−u} − (1/8)τ_ttt x² − (1/2)a_tt x + d_t = c_xx e^{−u} − (1/4)τ_tt.    (43)

From here, we match the coefficients of Equation (43) on either side and find that

c_t = c_xx,    −(1/8)τ_ttt = 0,    −(1/2)a_tt = 0,    and    d_t = −(1/4)τ_tt.

We can use each of these to find τ(t), a(t), and d(t); the function c(x, t) remains arbitrary, subject only to c_t = c_xx. After minor calculations, we see

τ = (1/2)c_1t² + c_2t + c_3,

a = c_4t + c_5,

d = −(1/4)c_1t + k.

Substituting these results into the existing expressions for ξ, τ, and φ, we have:

ξ = (1/2)c_1 xt + (1/2)c_2 x + c_4 t + c_5    (44)

τ = (1/2)c_1 t² + c_2 t + c_3    (45)

φ = c(x, t)e^{−u} − (1/8)c_1 x² − (1/2)c_4 x − (1/4)c_1 t + c_6.    (46)

Thus, we have that the infinitesimal generator is given by

v = ξ ∂/∂x + τ ∂/∂t + φ ∂/∂u    (47)
  = ((1/2)c_1 xt + (1/2)c_2 x + c_4 t + c_5) ∂/∂x + ((1/2)c_1 t² + c_2 t + c_3) ∂/∂t
    + (c(x, t)e^{−u} − (1/8)c_1 x² − (1/2)c_4 x − (1/4)c_1 t + c_6) ∂/∂u.    (48)
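The result (44)-(46) can be spot-checked by substituting ξ, τ, and φ back into the four reduced determining equations of Table 2, with c(x, t) constrained by the heat equation c_t = c_xx. A sympy sketch of that check (for illustration; the derivation above is the actual argument):

```python
import sympy as sp

x, t, u = sp.symbols('x t u')
c1, c2, c3, c4, c5, c6 = sp.symbols('c1:7')
cf = sp.Function('c')(x, t)   # arbitrary, subject to the heat equation c_t = c_xx

xi  = c1*x*t/2 + c2*x/2 + c4*t + c5
tau = c1*t**2/2 + c2*t + c3
phi = cf*sp.exp(-u) - c1*x**2/8 - c4*x/2 - c1*t/4 + c6

heat = {cf.diff(t): cf.diff(x, 2)}   # impose c_t = c_xx

# The four reduced determining equations of Table 2, each moved to one side:
eqs = [
    (phi.diff(t) - phi.diff(x, 2)).subs(heat),                      # (a) phi_t = phi_xx
    xi.diff(t) + 2*phi.diff(x, u) - xi.diff(x, 2) + 2*phi.diff(x),  # (b)
    tau.diff(t) - 2*xi.diff(x),                                     # (c)
    tau.diff(t) + phi.diff(u, 2) + phi.diff(u) - 2*xi.diff(x),      # (d)
]
print([sp.simplify(e) for e in eqs])  # [0, 0, 0, 0]
```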

The next step is to pick values for the c_i's to find a spanning set of vector fields that generate the different transformation groups that map the solution curves of Burgers' equation. Choosing c_1 = 1 and c_2 = · · · = c_6 = 0 gives one generator; choosing c_2 = 1 with all other c_i = 0 gives a different infinitesimal generator; et cetera. Choosing values of c_i in this way, we find that a linearly independent set of vector fields is:

v1 = ∂x (49)

v2 = ∂t (50)

v3 = ∂u (51)

v4 = x∂x + 2t∂t (52)

v5 = 2t∂x − x∂u (53)

v_6 = 4tx ∂_x + 4t² ∂_t − (x² + 2t) ∂_u    (54)

v_α = α(x, t) e^{−u} ∂_u,    (55)

where α_t = α_xx. The next step is to find the group of transformations associated with each individual vector field. We can do this either by applying the Lie series or by solving the characteristic system of ordinary differential equations. It is easier to solve the characteristic system, so that is what we will use here.

Example 4.4.1. Consider v4 = x∂x + 2t∂t. We solve the initial value problem

dx∗ = x∗, x∗(0) = x, dε dt∗ = 2t∗, t∗(0) = t, dε du∗ = 0, u∗(0) = u. dε

So x* = e^ε x, t* = e^{2ε} t, and u* = u. The corresponding group of transformations is

G_4: (e^ε x, e^{2ε} t, u).

Furthermore, if u = f(x, t) is a solution of Burgers' equation, then so is

u^(4) = f(e^{−ε}x, e^{−2ε}t),

since x* = e^ε x implies x = e^{−ε}x*, and so forth.
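This can be tested on an explicit solution. For this form of Burgers' equation, the substitution u = ln v turns the equation into the heat equation v_t = v_xx (the Cole-Hopf substitution), so v = 1 + e^{x+t} supplies a concrete solution; the sketch below (sympy, illustration only, with a solution chosen purely for convenience) verifies both f and its G_4 transform:

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon', real=True)

# An explicit solution of u_t = u_xx + u_x^2: u = ln v with v = 1 + e^{x+t},
# where v solves the heat equation v_t = v_xx.
f = sp.log(1 + sp.exp(x + t))
assert sp.simplify(f.diff(t) - f.diff(x, 2) - f.diff(x)**2) == 0

# The transformed function u4 = f(e^{-eps} x, e^{-2 eps} t) from G4
u4 = f.subs([(x, sp.exp(-eps)*x), (t, sp.exp(-2*eps)*t)], simultaneous=True)
residual = sp.simplify(u4.diff(t) - u4.diff(x, 2) - u4.diff(x)**2)
print(residual)  # 0
```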

Using similar calculations for the other infinitesimal generators v, we get a com- plete list of the symmetry groups:

G_1: (x + ε, t, u)

G_2: (x, t + ε, u)

G_3: (x, t, u + ε)

G_4: (e^ε x, e^{2ε} t, u)

G_5: (x + 2εt, t, u − εx − ε²t)

G_6: ( x/(1 − 4εt), t/(1 − 4εt), u − εx²/(1 − 4εt) + (1/2) ln(1 − 4εt) )

G_α: ( x, t, ln(e^u + εα(x, t)) ).

We then have that if u = f(x, t) is a solution to Burgers' equation, so are the functions:

u(1) = f(x∗ − ε, t∗)

u(2) = f(x∗, t∗ − ε)

u(3) = f(x∗, t∗) + ε

u(4) = f(e−εx∗, e−2εt∗)

u^(5) = f(x* − 2εt*, t*) + ε²t* − εx*

u^(6) = f( x*/(1 + 4εt*), t*/(1 + 4εt*) ) − εx*²/(1 + 4εt*) − (1/2) ln(1 + 4εt*)

u^(α) = ln( e^{f(x*, t*)} + εα(x*, t*) ).

The idea is that since these are the invariance conditions for each individual group of transformations, we may substitute these values into our equation and then reduce it to a form that is solvable.

Example 4.4.2. Consider the infinitesimal generator

v_5 = 2t ∂_x − x ∂_u, for which the transformed solution of Burgers' equation has the form

u^(5) = f(x − 2εt, t) + ε²t − εx.

We want to show that u^(5) satisfies the differential equation u_t = u_xx + u_x², where

u^(5)_t = −2εf_x + f_t + ε²,

u^(5)_x = f_x − ε,

u^(5)_xx = f_xx,

(u^(5)_x)² = f_x² − 2εf_x + ε².

Since u = f(x, t) is a solution, u_t = u_xx + u_x² implies that f_t = f_xx + f_x². We want to show that u^(5)(x, t) is also a solution of Burgers' equation. Substituting these partial derivatives, we have

u_t − u_xx − u_x² = −2εf_x + f_t + ε² − (f_xx + f_x² − 2εf_x + ε²) = f_t − f_xx − f_x² = 0.

Therefore, we have that if f is a solution, then so is u^(5).

We now focus our attention on calculating the invariants of Burgers' equation. Recall that if the invariants are known, we may employ a relevant technique to solve the original system of differential equations. We won't calculate the invariants of every infinitesimal generator for Burgers' equation here; however, we will work through a couple of examples for select infinitesimal generators.

Example 4.4.3. Consider the infinitesimal generator

v_5 = 2t ∂_x − x ∂_u of Burgers' equation. To find the invariants of Burgers' equation related to this infinitesimal generator, we must solve the characteristic system

dx/(2t) = dt/0 = du/(−x).

From the middle expression, we have that t = c. We will then call this our first invariant, y = t.

We now look at what remains of the characteristic system and solve it:

dx/(2t) = du/(−x)
−x dx = 2t du
−x²/2 = 2tu + k,

so k = −(x²/2 + 2tu). Since any function of an invariant is again an invariant, we take w = x²/2 + 2tu. The end result is that y = t and w = x²/2 + 2tu are our two invariants.
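A quick symbolic check confirms the sign: the combination annihilated by v_5 is w = x²/2 + 2tu, and the same w is unchanged under the finite boost generated by v_5 (whose flow, from φ = −x, moves u to u − εx − ε²t). A sympy sketch, for illustration:

```python
import sympy as sp

x, t, u, eps = sp.symbols('x t u epsilon')

w = x**2/2 + 2*t*u

# Infinitesimal check: v5(w) = 2t * w_x - x * w_u = 0
assert sp.simplify(2*t*w.diff(x) - x*w.diff(u)) == 0

# Finite check under the boost: (x, t, u) -> (x + 2*eps*t, t, u - eps*x - eps**2*t)
w_star = w.subs([(x, x + 2*eps*t), (u, u - eps*x - eps**2*t)], simultaneous=True)
print(sp.expand(w_star - w))  # 0
```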

Example 4.4.4. Now consider a different infinitesimal generator of Burgers' equation,

v_6 = 4tx ∂_x + 4t² ∂_t − (x² + 2t) ∂_u.

The resulting characteristic system is given by

dx/(4tx) = dt/(4t²) = du/(−(x² + 2t)).

From the left pair, we see that

dx/(4tx) = dt/(4t²)
dx/x = dt/t
ln |x| = ln |t| + k
ln |x/t| = k
y = x/t,

where we pick k = y. For the right-hand pair, we must first eliminate x using the invariant just found, x = yt. Then

dt/(4t²) = du/(−(x² + 2t))
du = −(y²t² + 2t)/(4t²) dt = (−y²/4 − 1/(2t)) dt
u = −(y²/4)t − (1/2) ln |t| + k,

so that

w = u + x²/(4t) + (1/2) ln |t|.

One may verify directly that v_6(w) = 0, so y and w are the two invariants.
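As in the previous example, the invariants can be verified by checking that v_6 annihilates them; note the sign combination w = x²/(4t) + (1/2) ln t + u. A sympy sketch, for illustration:

```python
import sympy as sp

x, t, u = sp.symbols('x t u', positive=True)

w = x**2/(4*t) + sp.log(t)/2 + u

# v6 = 4tx d/dx + 4t^2 d/dt - (x^2 + 2t) d/du applied to w:
v6w = 4*t*x*w.diff(x) + 4*t**2*w.diff(t) - (x**2 + 2*t)*w.diff(u)
print(sp.simplify(v6w))  # 0

# The other invariant, y = x/t, is annihilated as well:
y = x/t
assert sp.simplify(4*t*x*y.diff(x) + 4*t**2*y.diff(t)) == 0
```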

4.5 Nonclassical Symmetries

For the entirety of this chapter, we have described symmetry groups, invariance, and the prolongation formula, all for a particular type of symmetry that we will now call classical symmetries. Classical symmetries satisfy the condition

pr^{(n)}v(∆) = 0 when ∆ = 0.    (56)

In other words, the n-th prolongation acting on a system of order n equals zero whenever the system of partial differential equations is satisfied. At this point, we would like to focus on different symmetries, called nonclassical symmetries, that fall under tighter restrictions. Nonclassical symmetries, as we will see, offer opportunities to find solutions to a system of differential equations that may not be visible under the traditional set of requirements. For nonclassical symmetries, we simply add the condition that the prolongation acting on a system equals zero not only when the system is satisfied, but also when the invariant surface condition that we will call Q is satisfied. The following passage and proof of the nonclassical symmetry system are taken from the book by Hydon [5], where Equation (56) will be modified to include the invariant surface condition Q.

All solutions of the system of differential equations that are invariant under the group with characteristic Q satisfy the PDE and the invariant surface condition and are solutions of the augmented system [5]

∆ = 0 (57)

Q = 0. (58)

For some PDEs, finding the generators of these symmetries leads to new invariant solutions that cannot be found from the "classical" point symmetries. The linearized symmetry condition for systems states that the generator v corresponding to Q generates symmetries of the n-th order PDE if

pr^{(n)}v[∆] = 0, pr^{(1)}v[Q] = 0, when (57), (58) hold.    (59)

This condition is simplified by the identity [5]

pr^{(1)}v[Q] = Q Q_u.    (60)

From (60), it follows that

pr^{(1)}v[Q] = 0 when Q = 0.

Therefore, v generates a symmetry of (57) and (58) if

pr^{(n)}v[∆] = 0 when ∆ = 0 and Q = 0.    (61)

This new criterion for finding invariants illustrates how there may be generators that satisfy (61) that may not satisfy the relaxed criterion of (56).

4.5.1 The Nonclassical Method

Now that we have established the criterion that nonclassical symmetries fall under, we may establish the actual process by which the generators v can be calculated. Some of the following theory is repetition, but some new material must be presented along the way. Consider a k-th order system E of differential equations

∆_ν(x, u, u^{(k)}) = 0, ν = 1, . . . , l,

in n independent variables x = (x_1, . . . , x_n) and q dependent variables u = (u^1, . . . , u^q). Suppose that v is a vector field on the space of independent and dependent variables:

v = Σ_{i=1}^n ξ^i(x, u) ∂/∂x_i + Σ_{α=1}^q φ^α(x, u) ∂/∂u^α.    (62)

The graph of a solution

u^α = f^α(x_1, . . . , x_n), α = 1, . . . , q,    (63)

to the system defines an n-dimensional submanifold of the space of independent and dependent variables. By applying the well-known criterion of invariance of a submanifold under a vector field, we get that a solution is invariant under v if and only if f satisfies the first-order system of partial differential equations [6]:

Q^α(x, u, u^{(1)}) = φ^α(x, u) − Σ_{i=1}^n ξ^i(x, u) ∂u^α/∂x_i = 0, α = 1, . . . , q,    (64)

known as the invariant surface conditions. If the equations (61) are satisfied, then the vector field v generates a nonclassical infinitesimal symmetry of the system.

Example 4.5.1 (Nonclassical Method for Burgers' Equation). From Theorem 4.3.2 and Equation (64), we may find the nonclassical symmetries of Burgers' equation

u_t = u_xx + u_x²

with the side condition that

Q = φ − ξu_x − τu_t = 0.

Since pr^{(1)}v[Q] = 0 when Q = 0, we need only solve the system

pr^{(2)}v[u_t − u_xx − u_x²] = 0

when

u_t = u_xx + u_x²

and

Q = φ − ξu_x − τu_t = 0.

Convention has it that we let τ = 1, without loss of generality, so that calculations are easier. With this adjustment, we have that

Q = φ − ξux − ut = 0.

After applying the prolongation, we once again have the symmetry condition

φ^t = φ^xx + 2u_x φ^x    (65)

and we must again substitute u_t = u_xx + u_x² into the φ^J's. In addition to this standard procedure, in the nonclassical method we also substitute u_t = u_xx + u_x² into Q. So, we have that

Q = φ − ξu_x − u_xx − u_x² = 0.

The next step in the nonclassical method is to solve for another derivative of u(x, t) and substitute that into the φ^J's as well. In this case, we will substitute

u_xx = φ − ξu_x − u_x²

into Equation (65) after we substitute u_t = u_xx + u_x². From this point, the procedure is precisely the same as for the classical symmetries: we find the determining system of coefficient equations and solve for φ and ξ (since we have set τ = 1). However, the resulting system of determining equations is nonlinear in ξ and φ.

After much theory and whispered agendas, we have now reached the point where we may actively pursue the primary goal of this thesis. Now that the theory has been thoroughly presented and we have seen a thorough example in Burgers’ equation, it is time to apply what we know to analyze the K(m, n) dispersion equation.

5 THE K(m, n) DISPERSION EQUATION

We have now thoroughly developed the background and process behind using Lie symmetry methods to solve differential equations. At this point, we will focus on applying these techniques to the equation of this thesis, the K(m, n) dispersion equation:

K(m, n): u_t + (u^m)_x + (u^n)_xxx = 0.    (66)

In order to find all the symmetries of the K(m, n) equation, we will consider general m and n and any specific cases as they arise. However, we will first investigate the K(2, 2) equation. We will use Maple to perform most of the gruelling calculations, which happens to be all of them. Note that a benefit of calculating a particular case by hand presents itself. To be sure that the program is written correctly, we will compare the solutions obtained from Maple and those from our hand calculations. So, all of the calculations for the K(2, 2) case were tediously hand-calculated and matched with Maple output. We can then adapt the code to the K(m, n) case and be confident that the results are accurate.

5.1 K(2, 2) Equation

To begin our analysis of K(m, n), let us first take a thorough look at the case when m = n = 2. For this case, we have the following partial differential equation for K(2, 2):

u_t + (u²)_x + (u²)_xxx = 0.    (67)

We perform the same step-by-step process as we used with Burgers' equation. We need to apply an appropriate prolongation to K(2, 2). Noting that K(2, 2) has two independent variables and one dependent variable, and that K(2, 2) is a third-order differential equation, we have p = 2, q = 1. We will use the third prolongation

pr^{(3)}v = v + φ^x ∂/∂u_x + φ^t ∂/∂u_t + φ^xx ∂/∂u_xx + φ^xt ∂/∂u_xt + φ^tt ∂/∂u_tt
          + φ^xxx ∂/∂u_xxx + φ^xxt ∂/∂u_xxt + φ^xtt ∂/∂u_xtt + φ^ttt ∂/∂u_ttt    (68)

to act on the K(2, 2) equation. Before we do this, we must expand the derivatives in Equation (67) as

u_t + (u²)_x + (u²)_xxx = u_t + 2uu_x + 2uu_xxx + 6u_xu_xx = 0.    (69)
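The expansion (69) is easy to confirm symbolically (a sympy sketch, for illustration; the thesis itself uses Maple for such checks):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

# Left side of K(2,2) and its expanded form from Equation (69)
lhs = u.diff(t) + (u**2).diff(x) + (u**2).diff(x, 3)
rhs = u.diff(t) + 2*u*u.diff(x) + 2*u*u.diff(x, 3) + 6*u.diff(x)*u.diff(x, 2)
print(sp.expand(lhs - rhs))  # 0
```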

Applying prolongation (68) to (69), we find that

φ^t + φ(2u_x + 2u_xxx) + φ^x(2u + 6u_xx) + 6u_x φ^xx + 2u φ^xxx = 0.

For this equation, we need to know φ^x, φ^xx, and φ^xxx. We have already calculated φ^x and φ^xx for the Burgers' equation example, where p and q were the same, so we may reuse them. The only coefficient function we still need to calculate is φ^xxx. After some lengthy calculations, we find that

φ^xxx = φ_xxx + (3φ_xxu − ξ_xxx)u_x − τ_xxx u_t + (3φ_xuu − 3ξ_xxu)u_x² − 3τ_xxu u_x u_t

      + (3φ_xu − 3ξ_xx)u_xx − 3τ_xx u_xt + (φ_uuu − 3ξ_xuu)u_x³ − 3τ_xuu u_x² u_t

      + (3φ_uu − 9ξ_xu)u_x u_xx − 6τ_xu u_x u_xt − 3τ_xu u_t u_xx + (φ_u − 3ξ_x)u_xxx

      − 3τ_x u_xxt − ξ_uuu u_x⁴ − τ_uuu u_x³ u_t − 6ξ_uu u_x² u_xx − 3τ_uu u_x² u_xt − 3τ_uu u_x u_t u_xx

      − 4ξ_u u_x u_xxx − 3τ_u u_x u_xxt − 3ξ_u u_xx² − 3τ_u u_xx u_xt − τ_u u_t u_xxx.    (70)

Since the general prolongation formula states that pr^{(3)}v(∆) = 0 when ∆ = 0, we take this into account by substituting −u_t = (u²)_x + (u²)_xxx = 2uu_x + 2uu_xxx + 6u_xu_xx into the φ^J's. This is one of the several calculation-intensive parts of this problem. After substantial algebra, we get new values for φ^x, φ^xx, and φ^xxx. The next step is to set

−φ^t = φ(2u_x + 2u_xxx) + φ^x(2u + 6u_xx) + 6u_x φ^xx + 2u φ^xxx    (71)

from our prolongation and then collect like coefficients of the partial derivatives on either side of Equation (71). From the resulting 87 coefficient equations, we easily find that τ = a(t) and ξ = b(x, t). Considering these conditions in the remaining coefficient equations, the system reduces to the seven equations listed in Table 3.

Term         Coefficients

a.  u_xxx     2a_t u − 6ub_x + 2φ = 0
b.  u_x u_xx  6φ_u − 18b_x + 6a_t + 6uφ_uu = 0
c.  u_xx      6uφ_ux − 6ub_xx + 6φ_x = 0
d.  u_x       −b_t + 6uφ_uxx + 2a_t u + 6φ_xx − 2ub_x − 2ub_xxx + 2φ = 0
e.  u_x³      2uφ_uuu + 6φ_uu = 0
f.  u_x²      12φ_ux + 6uφ_uux − 6b_xx = 0
g.  1         2uφ_xxx + φ_t + 2uφ_x = 0

Table 3: Reduced K(2, 2) coefficient equations.

As we will see, finding ξ, τ, and φ from these equations is not nearly as difficult as calculating the φ^J's, so we will go through the steps thoroughly here.

From Equation (a), φ = u(3b_x − a_t). Considering this when looking at the rest of our equations, we can see that Equations (b) and (e) give no further information. We then move on to Equation (c) and substitute the appropriate derivatives of φ into it. We have

15b_xx u = 0 =⇒ b_xx = 0 =⇒ b(x, t) = p(t)x + q(t).

Since b_xx = 0 and φ = u(3b_x − a_t), Equation (f) gives no further information. We are then left with Equations (d) and (g) to finish solving our system. Using what we know about φ(x, t, u) and b(x, t), Equation (g) reduces to

u(3b_xt − a_tt) = 0 =⇒ 3b_xt − a_tt = 0 =⇒ a_tt = 3b_xt.

We now move on to the last equation, Equation (d), to find that

−b_t + 2a_t u − 2ub_x + 2(u(3b_x − a_t)) = 0 =⇒ −b_t + 4ub_x = 0 =⇒ b_t = 0, b_x = 0.

From this, we see that b(x, t) = c_3 (a constant). Now that we finally have a concrete value for b(x, t), we go back and find values for a(t) and φ. Since b = c_3, the relation from Equation (g) gives

a_tt = 0 =⇒ a(t) = c_1t + c_2.

We input the values of b and a(t) into φ:

φ = u(0 − c_1) = −c_1u.

After these relatively simple calculations, we now have definite solutions for ξ, τ, and φ:

ξ = c_3    (72)

τ = c_1t + c_2    (73)

φ = −c_1u.    (74)

Thus, the infinitesimal generator is given by

v = ξ ∂/∂x + τ ∂/∂t + φ ∂/∂u = c_3 ∂/∂x + (c_1t + c_2) ∂/∂t − c_1u ∂/∂u.

The next step is to pick particular values for the constants so as to find a linearly independent set of vector fields that generate the symmetry groups of transformations. Thus, we have

v1 = ∂x (75)

v2 = ∂t (76)

v3 = t∂t − u∂u. (77)

After some relatively simple calculations, we found the groups, solutions, and invariants resulting from these three infinitesimal generators. Given the symmetry groups and any solution u = f(x, t), the functions listed in Table 4 are also solutions to K(2, 2) under the given transformations. The first solution, u^(1), reflects translations in x, and the second, u^(2), translations in t. The last one involves scaling the solution and scaling in t.

Group                       Solution                           Invariants

G_1: (x + ε, t, u)          u^(1) = f(x* − ε, t*)              y = x − ε, w = u
G_2: (x, t + ε, u)          u^(2) = f(x*, t* − ε)              y = t − ε, w = u
G_3: (x, e^ε t, e^{−ε} u)   u^(3) = e^{−ε} f(x*, e^{−ε} t*)    y = x, w = ut

Table 4: Groups, solutions, and invariants of the K(2, 2) equation.
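These transformations can be exercised against a concrete solution. K(2, 2) is known to admit the Rosenau-Hyman compacton u = (4c/3) cos²((x − ct)/4) on its support |x − ct| ≤ 2π; taking that solution as given (it is quoted here, not derived), the sketch below verifies it and its G_3 transform symbolically (sympy, for illustration):

```python
import sympy as sp

x, t, c, eps = sp.symbols('x t c epsilon', positive=True)

def K22(u):
    """Left-hand side of K(2,2): u_t + (u^2)_x + (u^2)_xxx."""
    return u.diff(t) + (u**2).diff(x) + (u**2).diff(x, 3)

# Rosenau-Hyman compacton (quoted, not derived here); valid on |x - ct| <= 2*pi
f = sp.Rational(4, 3)*c*sp.cos((x - c*t)/4)**2
assert sp.simplify(K22(f)) == 0

# G3: if f solves K(2,2), so does e^{-eps} f(x, e^{-eps} t)
u3 = sp.exp(-eps)*f.subs(t, sp.exp(-eps)*t)
print(sp.simplify(K22(u3)))  # 0
```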

5.2 The K(m, n) Equation

At this point, we may note that the vector fields we found in the K(2, 2) case may not have been as complicated or impressive as the ones we found for Burgers' equation. This may, or may not, be indicative of what we will find in the general case of K(m, n). However, for whatever coefficients we do find for the K(m, n) equation, we should be able to substitute m = n = 2 into them and recover the vector fields listed above. For K(m, n), we perform all calculations in Maple due to their arduous nature. We will attempt to keep the solutions as general as possible until we must consider specific cases in order to continue. The Maple code used may be found in the appendix. In the Maple worksheet (see Appendix C), the differential equation is substituted into the appropriate prolongation formula automatically. The prolongation formula we used is the same as the one for the K(2, 2) case. We go on to equate the φ^J's and collect a total of 213 coefficient equations. At this point, we looked at specific terms and worked out details of the coefficients. For example, from the u_xxxxx term, we have that τ = a(t). Similarly, we find that ξ = b(x, t). Each time we discover something new about ξ, τ, and φ, we substitute the result into the coefficient equations to simplify them. We then glean through them again to see if we can discover something further and repeat the simplification process.

5.2.1 Calculations

Once τ = a(t) and ξ = b(x, t) are considered, the coefficients reduce rather nicely. We consider the u_xxx term and solve the coefficient equation for φ. From this,

φ = u(3b_x − a_t)/(n − 1).

Note that this is only the value of φ provided n ≠ 1. We then substitute this value of φ into the coefficient equations and reduce them once more. From these three assumptions, we reduce our 213 coefficient equations down to four, which are listed in Table 5.

Term     Coefficients

a.  u_xx   3n(2n + 1)u^{n+1} b_xx = 0
b.  u_x²   3n(2n + 1)(n − 1)u^n b_xx = 0
c.  u_x    m²u^m a_t − 3m²u^m b_x + 2mu^m b_x + mnu^m b_x − 8nu^n b_xxx − mnu^m a_t − nu^n b_xxx + nb_t u − b_t u = 0
d.  1      (3b_xt − a_tt)u + 3nu^n b_xxxx + 3mu^m b_xx = 0

Table 5: Reduced K(m, n) coefficient equations.

Equations (a) and (b) tell us that bxx = 0, which implies that b = b1(t)x + b2(t). This reduces Table 5 to Table 6.

Term     Coefficients

c.  u_x    m²u^m a_t − 3m²u^m b_x + 2mu^m b_x + mnu^m b_x − mnu^m a_t + nb_t u − b_t u = 0
d.  1      (3b_xt − a_tt)u = 0

Table 6: Further reduced K(m, n) coefficient equations.

Equation (d) gives us a relation between ξ and τ that we can use to simplify Equation (c) We now continue our calculations as much as possible while maintain- ing its general form. Grouping the terms in Equation (c) serves to give us more information. We have that

(m−1) 2 2 u [m at − 3m b1(t) + 2mb1(t) + mnb1(t) − mnat] + b1t xn − b1t x + b2t n − b2t = 0.

This implies that

b (t)(2 + n − 3m) a = 1 and b (n − 1) = 0 t n − m t if m 6= 0 and n 6= 1. In turn, this means that

b = c_1 x + c_2  and  a = c_1(2 + n − 3m)t/(n − m) + c_3.

We may now solve for φ to find that

φ = c_1(2 − 2n)u/((m − n)(n − 1)).

Finally, we have the following values for ξ, τ, and φ:

ξ = c_1 x + c_2,                              (78)
τ = c_1(2 + n − 3m)t/(n − m) + c_3,           (79)
φ = 2c_1 u/(n − m),                           (80)

for n ≠ 1, m ≠ n, m ≠ 0.
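As a quick independent check on (78)-(80), note that with c_2 = c_3 = 0 the generator produces the scaling x → e^ε x, t → e^{Bε}t, u → e^{Aε}u, under which the three terms of K(m, n) must pick up equal exponential factors. The sketch below verifies this exponent bookkeeping symbolically; sympy is used here as a stand-in for the thesis's Maple computations.

```python
import sympy as sp

m, n = sp.symbols('m n')

# Scaling weights read off from (78)-(80) with c1 = 1, c2 = c3 = 0:
# x -> exp(eps)*x, t -> exp(B*eps)*t, u -> exp(A*eps)*u.
A = 2/(n - m)                  # u-weight, from phi = 2*c1*u/(n - m)
B = (2 + n - 3*m)/(n - m)      # t-weight, from tau = c1*(2 + n - 3*m)*t/(n - m)

# Under this scaling the terms u_t, (u^m)_x, (u^n)_xxx pick up the factors
# exp((A-B)*eps), exp((m*A-1)*eps), exp((n*A-3)*eps); invariance of K(m,n)
# requires the three exponents to agree.
w1 = A - B        # weight of u_t
w2 = m*A - 1      # weight of (u^m)_x
w3 = n*A - 3      # weight of (u^n)_xxx

assert sp.simplify(w1 - w2) == 0
assert sp.simplify(w2 - w3) == 0
```

The two assertions reproduce exactly the coefficients in (79) and (80), so the scaling part of the generator is internally consistent.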

5.2.2 Generators and Transformation Groups

At this point, it becomes necessary to start considering particular cases of m and n to understand how the equations act outside of the general setting. Recall that up to this point, we have been operating under the assumption that n ≠ 1. We also have to assume that m ≠ n and m ≠ 0 for the above equations to hold. We now must investigate what each of the coefficients is when we consider these special values of m and n. For the case when m = n, we can determine that Equation (c) simplifies to

2m u^m b_x(1 − m) + b_t u(m − 1) = 0.

Provided that m ≠ 0, 1, b_x = 0 and b_t = 0. This means that ξ is a constant, ξ = c_1. Using this result in Equation (d), a_tt = 0. So, we have that a = c_2 t + c_3. Then, we substitute a back into φ = u(3b_x − a_t)/(n − 1) and find

φ = −c_2 u/(m − 1).

Once we consider the case when n = 1, or when m = 1, we must start the calculation process over from nearly the beginning. A thorough investigation in Maple of the special cases can be found in Appendix D. Each infinitesimal generator

has the form v = ξ∂_x + τ∂_t + φ∂_u, where ξ, τ, and φ and their conditions are given in Table 7.

Conditions                ξ                          τ                                φ

m ≠ n, n ≠ 1, m ≠ 0, 1    c_1 x + c_2                c_1(2 + n − 3m)t/(n − m) + c_3   2c_1 u/(n − m)
m = n                     c_1                        c_2 t + c_3                      −c_2 u/(m − 1)
n = 1, m ≠ 0, 1           c_1 x + c_3                3c_1 t + c_2                     −2c_1 u/(m − 1)
n = 1, m = 2              −(c_1/2)x + 2c_2 t + c_3   −(3c_1/2)t + c_4                 c_1 u + c_2
n = 1, m = 1              c_1 x + 2c_1 t + c_3       3c_1 t + c_2                     c_4 u + c(x, t), with c_t + c_x + c_xxx = 0
n = 1, m = 0              c_1 x + c_3                3c_1 t + c_2                     c_4 u + g(x, t), with g_t + g_xxx = 0
m = 0, n ≠ 0, 1           c_1 x + c_2                c_3 t + c_4                      (3c_1 − c_3)u/(n − 1)

Table 7: K(m, n) generator results.

For each case, we need to list what the infinitesimal generators of the groups of transformations are. Recall this is done by choosing the values of the constants c_i such that each case has a linearly independent set of infinitesimal generators. From these linearly independent sets of vectors, we can find the transformation groups, as detailed in the Burgers' equation example. Each case that yields a different set of infinitesimal generators also leads to a different group of transformations. We then rewrite the transformations into a form that describes exactly what must be substituted into the original differential equation to reduce it to an ordinary differential equation. All of this information is listed, case by case, in Table 8. We now have a complete list of solutions for the K(m, n) dispersion equation. An interesting observation about these solutions is that despite the complexity the equation can reach, the solutions remain relatively simple in nature. In almost every

Generators, Groups, and Solutions

m ≠ n, n ≠ 1, m ≠ 0, 1:
  v_1 = x∂_x + ((2 + n − 3m)/(n − m))∂_t + (2u/(n − m))∂_u
        G_1: (e^ε x, t + ((2 + n − 3m)/(n − m))ε, e^{2ε/(n−m)} u);  u^(1) = e^{2ε/(n−m)} f(e^{−ε}x*, t* − ((2 + n − 3m)/(n − m))ε)
  v_2 = ∂_x;   G_2: (x + ε, t, u);   u^(2) = f(x* − ε, t*)
  v_3 = ∂_t;   G_3: (x, t + ε, u);   u^(3) = f(x*, t* − ε)

m = n, n ≠ 1, m ≠ 0, 1:
  v_1 = ∂_x;   G_1: (x + ε, t, u);   u^(1) = f(x* − ε, t*)
  v_2 = t∂_t − (u/(m − 1))∂_u;   G_2: (x, e^ε t, e^{−ε/(m−1)} u);   u^(2) = e^{−ε/(m−1)} f(x*, e^{−ε} t*)
  v_3 = ∂_t;   G_3: (x, t + ε, u);   u^(3) = f(x*, t* − ε)

n = 1, m ≠ 0, 1, 2:
  v_1 = x∂_x + 3t∂_t − (2u/(m − 1))∂_u;   G_1: (e^ε x, e^{3ε} t, e^{−2ε/(m−1)} u);   u^(1) = e^{−2ε/(m−1)} f(e^{−ε} x*, e^{−3ε} t*)
  v_2 = ∂_t;   G_2: (x, t + ε, u);   u^(2) = f(x*, t* − ε)
  v_3 = ∂_x;   G_3: (x + ε, t, u);   u^(3) = f(x* − ε, t*)

n = 1, m = 2:
  v_1 = −(1/2)x∂_x − (3/2)t∂_t + u∂_u;   G_1: (e^{−ε/2} x, e^{−3ε/2} t, e^ε u);   u^(1) = e^ε f(e^{ε/2} x*, e^{3ε/2} t*)
  v_2 = 2t∂_x + ∂_u;   G_2: (x + 2tε, t, u + ε);   u^(2) = f(x* − 2t*ε, t*) + ε
  v_3 = ∂_x;   G_3: (x + ε, t, u);   u^(3) = f(x* − ε, t*)
  v_4 = ∂_t;   G_4: (x, t + ε, u);   u^(4) = f(x*, t* − ε)

n = 1, m = 1:
  v_1 = (x + 2t)∂_x + 3t∂_t;   G_1: (e^ε x + (e^{3ε} − e^ε)t, e^{3ε} t, u);   u^(1) = f(e^{−ε} x* − (e^{−ε} − e^{−3ε})t*, e^{−3ε} t*)
  v_2 = ∂_t;   G_2: (x, t + ε, u);   u^(2) = f(x*, t* − ε)
  v_3 = ∂_x;   G_3: (x + ε, t, u);   u^(3) = f(x* − ε, t*)
  v_4 = u∂_u;   G_4: (x, t, e^ε u);   u^(4) = e^ε f(x*, t*)
  v_5 = c(x, t)∂_u;   G_5: (x, t, u + εc(x, t));   u^(5) = f(x*, t*) + εc(x*, t*)

n = 1, m = 0:
  v_1 = x∂_x + 3t∂_t;   G_1: (e^ε x, e^{3ε} t, u);   u^(1) = f(e^{−ε} x*, e^{−3ε} t*)
  v_2 = ∂_t;   G_2: (x, t + ε, u);   u^(2) = f(x*, t* − ε)
  v_3 = ∂_x;   G_3: (x + ε, t, u);   u^(3) = f(x* − ε, t*)
  v_4 = u∂_u;   G_4: (x, t, e^ε u);   u^(4) = e^ε f(x*, t*)
  v_5 = g(x, t)∂_u;   G_5: (x, t, u + εg(x, t));   u^(5) = f(x*, t*) + εg(x*, t*)

m = 0, n ≠ 0, 1:
  v_1 = x∂_x + (3u/(n − 1))∂_u;   G_1: (e^ε x, t, e^{3ε/(n−1)} u);   u^(1) = e^{3ε/(n−1)} f(e^{−ε} x*, t*)
  v_2 = ∂_x;   G_2: (x + ε, t, u);   u^(2) = f(x* − ε, t*)
  v_3 = t∂_t − (u/(n − 1))∂_u;   G_3: (x, e^ε t, e^{−ε/(n−1)} u);   u^(3) = e^{−ε/(n−1)} f(x*, e^{−ε} t*)
  v_4 = ∂_t;   G_4: (x, t + ε, u);   u^(4) = f(x*, t* − ε)

Table 8: Generators, Groups, and Solutions of K(m, n).

case, the only solutions that are found are translations and scaling transformations. Now, let us show by example that a couple of the cases are correct.

Example 5.2.1. Consider the case when m = 1, n = 1, where

u^(1) = f(e^{−ε}x* − (e^{−ε} − e^{−3ε})t*, e^{−3ε}t*).

We must check that this will satisfy the K(1, 1) differential equation

K(1, 1) = u_t + u_x + u_xxx = 0.

We have that

u_t = (e^{−3ε} − e^{−ε})f_x + e^{−3ε} f_t

u_x = e^{−ε} f_x

u_xxx = e^{−3ε} f_xxx,

where

u_t + u_x + u_xxx = (e^{−3ε} − e^{−ε})f_x + e^{−3ε} f_t + e^{−ε} f_x + e^{−3ε} f_xxx

= e^{−3ε}(f_x + f_t + f_xxx) = 0.

This last equation implies that if f_t + f_x + f_xxx = 0, then u^(1) satisfies the differential equation for K(1, 1).
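This verification can also be carried out symbolically. Since any f with f_t + f_x + f_xxx = 0 will do, the sketch below uses a plane-wave solution of that linear equation; sympy is an assumption here (the thesis's own computations are in Maple).

```python
import sympy as sp

x, t, eps, k = sp.symbols('x t epsilon k', real=True)

# Transformed arguments appearing in u(1) for the case n = 1, m = 1.
X = sp.exp(-eps)*x - (sp.exp(-eps) - sp.exp(-3*eps))*t
T = sp.exp(-3*eps)*t

# A plane wave cos(k*X - w*T) satisfies f_t + f_x + f_xxx = 0 provided
# w = k - k**3, the dispersion relation of K(1,1).
u = sp.cos(k*X - (k - k**3)*T)

residual = sp.diff(u, t) + sp.diff(u, x) + sp.diff(u, x, 3)
assert sp.simplify(residual) == 0
```

The residual vanishes identically in ε, confirming that the transformed function remains a solution for every group parameter.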

Example 5.2.2. Consider the case where m = 2, n = 2, where

u^(2) = e^{−ε} f(x*, e^{−ε} t*),

and

u_t + (u^2)_x + (u^2)_xxx = u_t + 2u u_x + 2u u_xxx + 6u_x u_xx = 0.

Check that u^(2) satisfies K(2, 2).

u_t = e^{−ε} f_t e^{−ε} = e^{−2ε} f_t

u_x = e^{−ε} f_x

u_xx = e^{−ε} f_xx

u_xxx = e^{−ε} f_xxx

Substituting these into K(2, 2), we have

e^{−2ε} f_t + 2(e^{−ε} f(x*, e^{−ε} t*))(e^{−ε} f_x) + 2(e^{−ε} f(x*, e^{−ε} t*))(e^{−ε} f_xxx) + 6(e^{−ε} f_x)(e^{−ε} f_xx)

= e^{−2ε} f_t + 2e^{−2ε} f(x*, e^{−ε} t*) f_x + 2e^{−2ε} f(x*, e^{−ε} t*) f_xxx + 6e^{−2ε} f_x f_xx

= e^{−2ε}(f_t + 2f f_x + 2f f_xxx + 6f_x f_xx) = 0

⟹ f_t + 2f f_x + 2f f_xxx + 6f_x f_xx = 0.

We see that u^(2) satisfies the differential equation for K(2, 2). As this example suggests, once m and n get larger, the calculations become increasingly strenuous. As a result, we will not show an example of the more complex cases.
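The same check can be done symbolically with a concrete solution in place of the arbitrary f. The sketch below uses the trigonometric traveling-wave profile of speed c (the formula behind the compacton solutions of K(2, 2)); sympy stands in for Maple here, and the specific profile is our choice, not the thesis's.

```python
import sympy as sp

x, t, eps, c = sp.symbols('x t epsilon c', real=True)

def k22(w):
    # Left-hand side of K(2,2): w_t + (w^2)_x + (w^2)_xxx.
    return sp.diff(w, t) + sp.diff(w**2, x) + sp.diff(w**2, x, 3)

def is_zero(e):
    # Rewrite trig factors as exponentials so cancellation is purely algebraic.
    return sp.simplify(sp.expand(e.rewrite(sp.exp))) == 0

# The trigonometric traveling-wave profile of speed c solves K(2,2) identically.
f = sp.Rational(4, 3)*c*sp.cos((x - c*t)/4)**2
assert is_zero(k22(f))

# The transformed function u(2) = exp(-eps)*f(x, exp(-eps)*t) of Example 5.2.2
# is again a solution: it is the same profile with speed c*exp(-eps).
u = sp.exp(-eps)*f.subs(t, sp.exp(-eps)*t)
assert is_zero(k22(u))
```

Note that the scaling group simply maps the speed-c wave to the speed-ce^{−ε} wave, which illustrates how these symmetries act on whole families of solutions.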

5.2.3 Invariants and Reductions

The next step is to go into relative detail discussing invariants of the K(m, n) infinitesimal generators. Ideally, once found, these can be used to reduce the K(m, n) equation to an ordinary differential equation. Solving the resulting ODE takes some ingenuity, but we will show a few examples of calculating invariants and performing these reductions.

Example 5.2.3. Consider the most general infinitesimal generator of K(m, n),

v = x∂_x + ((2 + n − 3m)/(n − m))∂_t + (2u/(n − m))∂_u,   (81)

where m ≠ n, n ≠ 1, m ≠ 0, 1. We would like to calculate the invariant functions by solving the characteristic system

dx/x = (n − m)dt/(2 + n − 3m) = (n − m)du/(2u).

First, we solve the left side of this system as follows:

dx/x = (n − m)dt/(2 + n − 3m)

ln |x| = (n − m)t/(2 + n − 3m) + C.

We now choose the invariant to be y = C = ln |x| − (n − m)t/(2 + n − 3m). We solve the right side of the characteristic system similarly:

(n − m)dt/(2 + n − 3m) = (n − m)du/(2u)

(n − m)t/(2 + n − 3m) = ((n − m)/2) ln |u| + k.

We choose the other invariant w = k = (n − m)t/(2 + n − 3m) − ((n − m)/2) ln |u|.
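Both invariants can be confirmed by applying the generator (81) directly: v(y) = 0 and v(w) = 0. A short sympy sketch of this check (sympy is our substitute for Maple; x, u > 0 is assumed so the absolute values drop):

```python
import sympy as sp

x, t, u, m, n = sp.symbols('x t u m n', positive=True)

# Coefficients of the generator (81): v = xi*d_x + tau*d_t + phi*d_u.
xi = x
tau = (2 + n - 3*m)/(n - m)
phi = 2*u/(n - m)

def v(F):
    # Apply the infinitesimal generator to a function of (x, t, u).
    return xi*sp.diff(F, x) + tau*sp.diff(F, t) + phi*sp.diff(F, u)

# The invariants found above (x, u > 0 assumed, so |x| = x and |u| = u).
y = sp.log(x) - (n - m)*t/(2 + n - 3*m)
w = (n - m)*t/(2 + n - 3*m) - sp.Rational(1, 2)*(n - m)*sp.log(u)

assert sp.simplify(v(y)) == 0
assert sp.simplify(v(w)) == 0
```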

Example 5.2.4. Consider another infinitesimal generator of K(m, n)

v = (x + 2t)∂x + 3t∂t.

Since ∂u is not included in this generator, we can already say that one invariant is

w = u. To find the other, we must solve the characteristic system

dx/(x + 2t) = dt/(3t),

which requires an integrating factor of t^{−1/3} to continue. We have that

dx/dt − x/(3t) = 2/3

t^{−1/3}(dx/dt − x/(3t)) = (2/3)t^{−1/3}

t^{−1/3} dx/dt − (1/3)t^{−4/3} x = (2/3)t^{−1/3}

(t^{−1/3} x)′ = (2/3)t^{−1/3}

t^{−1/3} x = t^{2/3} + C.

We choose y = C = t^{−1/3}x − t^{2/3} as the other invariant.
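As with the previous example, we can confirm that y is annihilated by the generator; a brief sympy sketch (our assumption, in place of the thesis's Maple):

```python
import sympy as sp

x, t, u = sp.symbols('x t u', positive=True)

def v(F):
    # The generator v = (x + 2t) d_x + 3t d_t of Example 5.2.4.
    return (x + 2*t)*sp.diff(F, x) + 3*t*sp.diff(F, t)

y = t**sp.Rational(-1, 3)*x - t**sp.Rational(2, 3)

# y is annihilated by v, so it is an invariant; u trivially is as well.
assert sp.simplify(v(y)) == 0
assert sp.simplify(v(u)) == 0
```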

We can continue this process for every infinitesimal generator listed in Table 8; all the infinitesimal generators and their corresponding invariants are listed in Table 9.

Conditions and invariants

m ≠ n, n ≠ 1, m ≠ 0, 1:
  v_1 = x∂_x + ((2 + n − 3m)/(n − m))∂_t + (2u/(n − m))∂_u;   y = ln|x| − (n − m)t/(2 + n − 3m),  w = (n − m)t/(2 + n − 3m) − ((n − m)/2)ln|u|
  v_2 = ∂_x;   y = x − ε, w = u

m = n, n ≠ 1, m ≠ 0, 1:
  v_1 = ∂_x;   y = x − ε, w = u
  v_2 = t∂_t − (u/(m − 1))∂_u;   y = x, w = t^{1/(m−1)}u
  v_3 = ∂_t;   y = t − ε, w = u

n = 1, m ≠ 0, 1:
  v_1 = x∂_x + 3t∂_t − (2u/(m − 1))∂_u;   y = t^{−1/3}x, w = t^{2/(3(m−1))}u
  v_2 = ∂_t;   y = t − ε, w = u
  v_3 = ∂_x;   y = x − ε, w = u

n = 1, m = 1:
  v_1 = (x + 2t)∂_x + 3t∂_t;   y = t^{−1/3}x − t^{2/3}, w = u
  v_2 = ∂_t;   y = t − ε, w = u
  v_3 = ∂_x;   y = x − ε, w = u
  v_4 = u∂_u;   y = x, w = e^{−ε}u
  v_5 = ∂_u;   y = x, w = u − ε

n = 1, m = 0:
  v_1 = x∂_x + 3t∂_t;   y = t^{−1/3}x, w = u
  v_2 = ∂_t;   y = t − ε, w = u
  v_3 = ∂_x;   y = x − ε, w = u
  v_4 = u∂_u;   y = x, w = e^{−ε}u

m = 0, n ≠ 0, 1:
  v_1 = x∂_x + t∂_t + (4u/(n − 1))∂_u;   y = x/t, w = t^{−4/(n−1)}u
  v_2 = ∂_t;   y = t − ε, w = u
  v_3 = ∂_x;   y = x − ε, w = u

Table 9: Invariants of the K(m, n) equation.

We will now work through a couple of examples of how invariants lead to reductions of K(m, n) to an ordinary differential equation.

Example 5.2.5. Reduction Consider the invariants

y = t^{−1/3}x,   w = u = v(t^{−1/3}x),

v = x∂x + 3t∂t when n = 1, m = 0. We need to use these invariants to rewrite K(m, n) in terms of the new variable that we will call v, where v is a function of the other invariant. We have that

u_t + u_xxx = 0

−(1/3)t^{−4/3} x v_y + t^{−1} v_yyy = 0

−(1/3)y v_y + v_yyy = 0.

If we let w = v_y, it becomes

−(1/3)y w + w_yy = 0,

which is, after a rescaling of y, the well-known Airy equation with known solutions. Thus, solutions to −(1/3)y v_y + v_yyy = 0 are obtained by integrating solutions of Airy's equation.
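The rescaling can be made explicit: with w(y) = Ai(3^{−1/3}y), where Ai is the Airy function, we have w_yy = (1/3)y w. A sympy sketch of this check (sympy being our assumption, not the thesis's Maple):

```python
import sympy as sp

y = sp.symbols('y')

# After the rescaling Y = y/3**(1/3), the reduced equation w_yy = (1/3) y w
# is Airy's equation w'' = Y w; so w = Ai(y/3**(1/3)) must satisfy it.
c = 3**sp.Rational(-1, 3)
w = sp.airyai(c*y)

assert sp.simplify(sp.diff(w, y, 2) - sp.Rational(1, 3)*y*w) == 0
```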

Example 5.2.6. Traveling Wave Solutions Consider the case when m = 2 and n = 2, so that K(2, 2) is given by

u_t + (u^2)_x + (u^2)_xxx = 0.

In Table 8, if we take v = v_1 + v_3 = ∂_x + ∂_t, we may solve the characteristic system

dx/1 = dt/1 = du/0

to get the invariants y = x − t and u = v(y). Substituting these into K(2, 2), we have that

u_t + (u^2)_x + (u^2)_xxx = −v_y + (v^2)_y + (v^2)_yyy,   (82)

which is an ordinary differential equation in the variable y. Integrating (82) once with respect to y and then using the integrating factor vv_y, we get

−v + v^2 + (v^2)_yy = const. = 0

vv_y(−v + v^2 + 2vv_yy + 2v_y^2) = 0

−v^2 v_y + v^3 v_y + 2v^2 v_y v_yy + 2v v_y^3 = 0

−v^3/3 + v^4/4 + v^2 v_y^2 = 0.

The next step is to solve for v so that we may substitute u back into the solution.

So, we must solve for v_y and integrate, a process which requires a trigonometric substitution, as follows:

v^2 v_y^2 = v^3/3 − v^4/4

dv/dy = √(v/3 − v^2/4)

dy = √3 dv/(√v √(1 − 3v/4)).

Let 3v/4 = sin^2(θ) and we have

dy = 4 dθ

y = 4θ + c

y = 4 sin^{−1}(√(3v/4)) − 4θ_0

v = (4/3) sin^2(y/4 + θ_0)

u(x, t) = (4/3) sin^2((x − t)/4 + θ_0).   (83)

We have that Equation (83) is a solution of the K(2, 2) differential equation in the form of a traveling wave at unit speed.
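Equation (83) can be checked directly against K(2, 2); the sympy sketch below (our stand-in for Maple) rewrites the trigonometric terms as exponentials so the cancellation becomes purely algebraic.

```python
import sympy as sp

x, t, th0 = sp.symbols('x t theta_0', real=True)

# Candidate solution (83): a traveling wave of unit speed.
u = sp.Rational(4, 3)*sp.sin((x - t)/4 + th0)**2

# Left-hand side of K(2,2): u_t + (u^2)_x + (u^2)_xxx.
lhs = sp.diff(u, t) + sp.diff(u**2, x) + sp.diff(u**2, x, 3)

# Rewriting the trig terms as exponentials reduces the check to algebra.
assert sp.simplify(sp.expand(lhs.rewrite(sp.exp))) == 0
```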

5.3 Nonclassical Symmetries of K(m, n)

We would now like to turn our attention to finding nonclassical symmetries of K(m, n). The process is almost exactly the same as finding the classical symmetries; the only difference is that we have one more condition to substitute into our calculations. Recall the K(m, n) equation

u_t + (u^m)_x + (u^n)_xxx = 0.

When we expand the derivatives, we have that

u_t + m u^{m−1} u_x + n u^{n−1} u_xxx + 3n(n − 1)u^{n−2} u_x u_xx + n(n − 1)(n − 2)u^{n−3} u_x^3 = 0.
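This expansion is routine but error-prone, so it is worth confirming symbolically; a sympy sketch (our assumption, in place of the thesis's Maple):

```python
import sympy as sp

x, t, m, n = sp.symbols('x t m n')
u = sp.Function('u')(x, t)

# Expand (u^m)_x + (u^n)_xxx and compare with the displayed expansion.
lhs = sp.diff(u**m, x) + sp.diff(u**n, x, 3)
rhs = (m*u**(m - 1)*sp.diff(u, x)
       + n*u**(n - 1)*sp.diff(u, x, 3)
       + 3*n*(n - 1)*u**(n - 2)*sp.diff(u, x)*sp.diff(u, x, 2)
       + n*(n - 1)*(n - 2)*u**(n - 3)*sp.diff(u, x)**3)

assert sp.simplify(lhs - rhs) == 0
```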

From (61), we have that

pr(n)v[∆] = 0 when ∆ = 0 and Q = 0,

91 where

Q = φ − ξux − ut.

We then insert u_t = −m u^{m−1} u_x − n u^{n−1} u_xxx − 3n(n − 1)u^{n−2} u_x u_xx − n(n − 1)(n − 2)u^{n−3} u_x^3 into Q. This expression will need to be substituted into the φ^J's along with the K(m, n) equation. This quickly becomes cumbersome, so we will use Maple once again for finding the nonclassical symmetries. The Maple code for this process can be found in Appendix E. Once we substitute the system and the invariant surface condition into the invariance condition pr^(3)v[∆] = 0, we look at the coefficient equations that surface.

Immediately we see that ξ_u = 0, so that ξ = b(x, t). We substitute this back into the determining equations and subsequently reduce them. One reduced coefficient equation in particular,

n u φ_u − nφ + u^2 φ_uu − u φ_u + φ = 0,

is a Cauchy-Euler equation that may be solved for φ. If we let φ = u^r,

φ(1 − n) + u(n − 1)φ_u + u^2 φ_uu

= u^r(1 − n) + r u^{r−1} u(n − 1) + r(r − 1)u^{r−2} u^2

= u^r(1 − n) + r(n − 1)u^r + r(r − 1)u^r

= u^r(1 − n + rn − r + r^2 − r)

= u^r(r − 1)(r + n − 1).
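The factorization, and the resulting general solution φ = a(x, t)u + c(x, t)u^{1−n}, can be confirmed symbolically; a brief sympy sketch (sympy being our assumption, standing in for Maple):

```python
import sympy as sp

u, n, r = sp.symbols('u n r', positive=True)

# The Cauchy-Euler determining equation applied to phi = u^r.
phi = u**r
expr = (u**2*sp.diff(phi, u, 2)
        + (n - 1)*u*sp.diff(phi, u)
        - (n - 1)*phi)

# The indicial polynomial factors as (r - 1)(r + n - 1) ...
assert sp.simplify(expr - u**r*(r - 1)*(r + n - 1)) == 0

# ... so the roots r = 1 and r = 1 - n both annihilate the equation,
# giving the general solution phi = a*u + c*u**(1 - n).
for root in (1, 1 - n):
    assert sp.simplify(expr.subs(r, root)) == 0
```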

We now have that r = 1 or r = 1 − n, where we must restrict n ≠ 0 so that we are guaranteed linearly independent solutions. Therefore, φ = a(x, t)u + c(x, t)u^{1−n}, and we substitute this into the coefficient equations and further reduce them. After

this, there are three nonlinear determining equations that emerge:

eq1 = −u^3 (n u^{2n} b_xxx − n a b u^{n+1} − b c u^n − m^2 a u^{m+n} − m^2 c u^m + mn a u^{m+n} + mn c u^m + b_t u^{n+1} + a b u^{n+1} + b c u − 2m b_x u^{m+n} − 3n^2 a_xx u^{2n} + 3b b_x u^{n+1})/u^n = 0

eq2 = 3u^{n+2} n(n − 1)(a_x n − b_xx) = 0

eq3 = u^4 (−n a^2 u^{2n+1} − 2n a c u^{n+1} − n c^2 u + n a_xxx u^{3n} + n c_xxx u^{2n} + 3a b_x u^{2n+1} + 3c b_x u^{n+1} + a^2 n u^{2n+1} + 2a c u^{n+1} + c^2 u + a_t u^{2n+1} + c_t u^{n+1} + m a_x u^{2n+m} + m c_x u^{m+n})/u^{2n} = 0,

where a = a(x, t), b = b(x, t), and c = c(x, t). From these three equations, we can begin investigating particular cases depending on the values of m and n. For example, from eq2, we see that there are three possible cases: n = 0, n = 1, and a_x n − b_xx = 0. Each of these cases then has its own subcases, and the process becomes tedious almost immediately. We will not discuss every case here, but we will elaborate on a few interesting examples.

Up to this point, we have already established that

τ = 1,

ξ = b(x, t),

φ = a(x, t)u + c(x, t)u^{1−n}.

When looking at particular cases of m and n, we can substitute those values into the system of three equations, and Maple can solve for ξ and φ almost immediately. The more general we try to keep m and n, however, the harder it is for Maple to actually compute. As a result, we do not have a complete set of solutions, but we would like to present here the unique results of a few cases.

Example 5.3.1. Case n = 1, m = 2 If we let n = 1 and m = 2, using Maple’s pdsolve command, we see that

ξ = ((3t − 2c_1)c_2 + x + 3c_3)/(3t − 2c_1)

τ = 1

φ = 2u/(−3t + 2c_1) + 3c_2/(−6t + 4c_1).

After multiplying v by 3t − 2c_1 and choosing values for the c_i's such that we get a linearly independent set, we get the generators

v_1 = x∂_x + (3t − 2)∂_t − 2u∂_u

v_2 = (6t + 2x)∂_x + 6t∂_t − (4u + 3)∂_u

v_3 = (x + 3)∂_x + 3t∂_t − 2u∂_u.

This particular case is unique because there are not very many results where c(x, t) is not zero. If we compare these generators to the ones we got under the classical

symmetry conditions, we see that the generators are different, but not altogether different in form.

Example 5.3.2. Case n = 1, m > 2 If we let n = 1 and restrict m > 2, we have that

ξ = (−xm + x + c_2)/((3 − 3m)t + 2c_1)

τ = 1

φ = −2u/((3m − 3)t − 2c_1),

where the infinitesimal generators take on the form

v_1 = ((1 − m)x/((3 − 3m)t + 2))∂_x + ∂_t − (2/((3m − 3)t − 2))u∂_u

v_2 = (((1 − m)x + 1)/((3 − 3m)t))∂_x + ∂_t − (2/((3m − 3)t))u∂_u.

These two generators are almost identical; they are only different from each other by constants.

Example 5.3.3. Case m = n Consider the case when m = n. We then find the values

ξ = c_2/(−tn + t + c_1)

τ = 1

φ = u/(−tn + t + c_1),

which leads to the infinitesimal generators

v_1 = ∂_t + (1/(t − tn + 1))u∂_u

v_2 = (1/(t − tn))∂_x + ∂_t + (1/(t − tn))u∂_u.

If we compare these to the generators found for m = n under classical symmetry conditions, we find that these are slightly different.

For simplification, we multiply the generators by the denominators of ξ and φ, as we have done in Table 10. Some of the generators listed are similar to ones that we saw under the classical symmetry conditions. For example, if we look at the case where m = n, and neglect the constants, v1 in Table 10 is of the same form as v2 in

Table 8. Also, v2 in Table 10 for m = 2 is of the same form as v1 + v2 in Table 8.

If we look at the case when n = 1, m > 2 in Table 10 and set c_1 = c_2 = 0, we get v_1 in Table 8. Each one of these cases leads to its own invariants, and those invariants can be used to reduce the adapted version of K(m, n) to an ordinary differential equation. We have seen a few cases where generators found under nonclassical symmetry conditions differ from the ones previously found; these will in turn lead to different invariants and different ODEs. There are plenty more cases that may be considered for m and n, but we leave the Maple code for those in Appendix E. As we can see, there is much more information to be desired from the nonclassical symmetry setting for solving K(m, n). This is certainly an area worth further exploration.

Conditions and generators

m = n + 1:
  v_1 = x∂_x + ((1 + 2n)t − 2)∂_t − 2u∂_u
  v_2 = (x − 1)∂_x + (1 + 2n)t∂_t − 2u∂_u

m = 2n:
  v_1 = −nx∂_x + ((2 − 5n)t + 2)∂_t + 2u∂_u
  v_2 = (1 − nx)∂_x + (2t − 5tn)∂_t + 2u∂_u

m = n:
  v_1 = (t − tn + 1)∂_t + u∂_u
  v_2 = ∂_x + (t − tn)∂_t + u∂_u

m = 1, n > 1:
  v_1 = ((1 + n)t + x + (1 − n)x)∂_x + 3t∂_t − 3u∂_u
  v_2 = (n − 1)(t − x)∂_x + 3∂_t − 3u∂_u
  v_3 = ((n − 1)(t − x) + 3)∂_x − 3u∂_u

m = 0, n > 1:
  v_1 = (2 − n)x∂_x + 3t∂_t − 3u∂_u
  v_2 = (1 − n)x∂_x + 3∂_t − 3u∂_u
  v_3 = ((1 − n)x + 3)∂_x − 3u∂_u

n = 1, m > 2:
  v_1 = (x − xm)∂_x + (3t − 3tm + 2)∂_t + 2u∂_u
  v_2 = (x − xm + 1)∂_x + (3t − 3tm)∂_t + 2u∂_u

n = 1, m = 2:
  v_1 = x∂_x + (3t − 2)∂_t − 2u∂_u
  v_2 = (3t + x)∂_x + 3t∂_t − (2u + 3/2)∂_u
  v_3 = (x + 3)∂_x + 3t∂_t − 2u∂_u

Table 10: Nonclassical generators of the K(m, n) equation.

6 CONCLUSION

There is an abundance of techniques one may use to solve a system of differential equations. This thesis focused on applying one particular technique and finding solutions with it. We used Lie symmetry solution techniques to find both classical and nonclassical symmetries of the K(m, n) dispersion equation. We also demonstrated a few of the solutions that were found, and we summarized the transformation groups and invariants. The work for Burgers' equation and the K(2, 2) case was calculated entirely by hand and compared to results we found in Maple. Satisfied with the agreement between the two, we went on to calculate the K(m, n) infinitesimal generators in Maple for both the classical and nonclassical methods. We began discussing the theory behind Lie symmetry methods in Chapter 2 with topology and wove through discussions of manifolds and vector fields. We also defined integral curves and flows, which become important later. We defined the Lie bracket and touched on what a Lie algebra is. In Chapter 3, we introduced the idea of a Lie group. Defining manifolds in Chapter 2 was important for this since a Lie group holds within it the underlying structure of a manifold. This property is useful since we can now describe actions within the Lie group using vectors. We then moved on to present the idea of transformation groups, which are Lie groups whose group operation is composition of transformations. This fits with our criterion that solution curves need to "map" into each other; they need to be "slid" into each other by increments of a certain parameter. This paves the way for the existence of one-parameter groups of transformations. We went further into Chapter 3 and discussed certain properties of one-parameter groups of transformations. In particular, we expressed a Lie group element as an exponential and defined infinitesimal generators as the vector fields that span the Lie algebra. We then used these properties to show that the one-parameter groups of transformations

are the same as the flow, or maximal integral curve, generated by the vector fields. Since the maximal integral curve can be generated by the infinitesimal generators, we established a link between one-parameter groups of transformations and the infinitesimal generators of the Lie algebra. This is advantageous because infinitesimal generators are vectors and carry the promise of solvability with them. Once the infinitesimal generators are known, we may find the groups of transformations that map solution curves of the system of equations into one another. The last important issue discussed in Chapter 3 was how we may go about solving for the infinitesimal generators, since that is the first step toward finding the groups of transformations. We developed the theory that infinitesimal generators may be found by evaluating a Lie series about the parameter at its initial value of zero. This is simple enough in theory, but the calculations are very gruelling and require some insight to draw conclusions. We are then grateful to have the First Fundamental Theorem of Lie to show that we may find the infinitesimal generators by solving a characteristic system of ordinary differential equations. As we showed, this process is a lot quicker and often extremely simple. With this, we began Chapter 4, which focused on building the applied theory and a series of steps to bring the calculation of infinitesimal generators to fruition. In Chapter 3, we established a connection between Lie algebras and Lie groups, but how are we to infuse our system of differential equations into this theory and create a calculable means to extract the one-parameter groups of transformations? We did this in Chapter 4 by first developing symmetry groups. Symmetry groups are simply a criterion for solution curves to adhere to while they are being mapped. They are a slightly altered version of Lie groups of transformations that is compatible with the way we need to develop the theory.
We continued Chapter 4 by defining invariant functions, which parametrize the solution curves of a system of differential equations. We also showed that infinitesimal generators acting on the invariant functions equal zero. This creates an excellent criterion by which we make sure that the further theory follows. We went on to describe a process called prolongation, where the system of differential equations and corresponding vector fields are lifted into a larger space. This makes the system behave as an algebraic system in the jet space, which is what we need to relate the previous theory to differential equations. Once the differential equations are factored into these prolonged infinitesimal generators, the solution curves become hyper-surfaces in the jet space instead of curves in the plane. We went on to introduce the General Prolongation Formula, the key by which calculating the one-parameter groups of transformations through their infinitesimal generators becomes possible. The General Prolongation Formula gives us a means to find the infinitesimal generators. The process is extremely tedious and cumbersome, but it amounts to finding the coefficients of the generators from the determining equations. These are simply the coefficient equations in front of the different partial derivatives of the solution. From these, we can solve for the individual coefficient functions of the infinitesimal generator. Eventually, we find all the coefficients of the infinitesimal generators and can find the spanning set of generators of the Lie algebra. From this spanning set, we can easily calculate the one-parameter groups of transformations corresponding to each infinitesimal generator. Through alterations of the characteristic system of ODEs, we may also find the invariants of the system of differential equations. These may lead to reductions that make the system solvable. At the end of Chapter 4, we introduced a new kind of symmetry, called nonclassical symmetries, which may lead to groups of transformations that cannot be found under the general prolongation formula for classical symmetries.
We added the requirement that the invariant surface condition be satisfied as well as the system of differential equations. This amounts to inputting the invariant surface condition into the prolongation formula, and the calculation process from

there is generally unaltered. The difficulty level is increased, however, as the added constraint makes the determining equations nonlinear. With these tools at our disposal, we moved into Chapter 5 and began the much anticipated analysis of the K(m, n) dispersion equation. We first investigated the classical symmetries and found simple solutions, such as the traveling wave solution in Example 5.2.6. For classical Lie symmetries, the groups of transformations were all listed, as were the invariants of each, in Tables 8 and 9. A couple of examples were shown as to how invariants lead to reductions of K(m, n) to an ODE in Examples 5.2.5 and 5.2.6. If one desired, the other listed invariants in Table 9 could be used to reduce K(m, n) to other ODEs that have different solutions. We then moved on to finding the nonclassical symmetries of the dispersion equation, which proved much more gruelling, but also rewarding for the unusual equations that we found for the coefficients of the infinitesimal generators. Some of the nonclassical infinitesimal generators are listed in Table 10, and they can be used to find invariants by solving the characteristic system. Note that the generators of a general case of m and n were not calculated, as the computations proved too challenging for Maple and time was too short to develop methods to circumvent the issue. Also note that the nonclassical approach revealed some new generators, but none that were very notably different from the ones found under the classical approach. Because of the difficulty presented by the nonclassical symmetries, we have not calculated all of the cases that emerged for the coefficients of the generators. Perhaps future effort may be placed on finding methods, packages, or programs in Maple with which these could be easily solved. We also did not describe solutions of the K(m, n) equation for every case that emerged here, so for completeness, this would be worthwhile.
In addition, further effort may be placed on performing Lie symmetry techniques

on modified versions of the K(m, n) equation. For example, the modified equation [8]

u_t + a(u^m)_x + (u^n)_xxx = µ(u^k)_xx,   a, µ = consts.   (84)

adds a dissipation term, so that Equation (84) models the interaction between dispersion and dissipation. For a long while, dispersion and dissipation were studied independently because the methods for studying each were too different. In reality, there are rarely times when they are completely independent of each other, so Equation (84) is a more realistic model. However, this interaction between dispersion and dissipation means that finding solutions is much more difficult.

REFERENCES

[1] G. W. Bluman, J. D. Cole, 1974: Similarity Methods for Differential Equations. Springer-Verlag, New York, 332 pp.

[2] G. W. Bluman and S. Kumei, 1989: Symmetries and Differential Equations, Springer-Verlag, New York, 412 pp.

[3] Brian C. Hall, 2003: Lie Groups, Lie Algebras, and Representations: An Ele- mentary Introduction. Springer Science + Business Media, LLC, 351 pp.

[4] James M. Hill, 1992: Differential Equations and Group Methods for Scientists and Engineers. CRC Press, Inc., 201 pp.

[5] Peter E. Hydon, 2000: Symmetry Methods for Differential Equations: A Begin- ner’s Guide. Cambridge University Press, 228 pp.

[6] N. H. Ibragimov, ed., 1995: “Nonclassical and Conditional Symmetries”, CRC Handbook of Lie Group Analysis of Differential Equations Volume Three. CRC Press, 448 pp.

[7] Peter J. Olver, 1993: Applications of Lie Groups to Differential Equations Sec- ond Ed. Springer-Verlag, New York, N.Y., 513 pp.

[8] Philip Rosenau, 1998: “On a class of nonlinear dispersive-dissipative interac- tions”. Physica D, Vol. 123, 525-546 pp.

[9] Joseph J. Rotman, 2010: Advanced Modern Algebra, Second Ed.. American Mathematical Society, 1008 pp.

[10] John Starrett, 2007: “Solving Differential Equations by Symmetry Groups.” American Mathematical Monthly, Vol. 114, 778-792 pp.

[11] Houria Triki and Abdul-Majid Wazwaz, 2009: “Bright and dark soliton solutions for a K(m, n) equation with t-dependent coefficients”. Physics Letters A, 373, Elsevier B.V., 2162-2165 pp.

[12] Srinath Vadlamani, 2001: “Study of Lie Symmetries of the Vaidya Equations”, M.S. Thesis, UNCW, 99 pp.

[13] C. Von Westenholz, 1981: Differential Forms in Mathematical Physics. North- Holland, New York, N.Y., 563 pp.

[14] Wikipedia, June 2012: Compacton http://en.wikipedia.org/wiki/Compacton.

APPENDIX

A The Coefficient Functions φJ

The following are values for the φJ ’s when p = 2 and q = 1. We used these in both Burgers’ equation and in the K(2, 2) example.

φ^x = φ_x + (φ_u − ξ_x)u_x − τ_x u_t − ξ_u u_x^2 − τ_u u_x u_t

φ^t = φ_t − ξ_t u_x + (φ_u − τ_t)u_t − ξ_u u_x u_t − τ_u u_t^2

φ^xx = φ_xx + (2φ_xu − ξ_xx)u_x − τ_xx u_t + (φ_uu − 2ξ_xu)u_x^2 − 2τ_xu u_x u_t − ξ_uu u_x^3 − τ_uu u_x^2 u_t + (φ_u − 2ξ_x)u_xx − 2τ_x u_xt − 3ξ_u u_x u_xx − τ_u u_t u_xx − 2τ_u u_x u_xt

φ^xxx = φ_xxx + (3φ_xxu − ξ_xxx)u_x − τ_xxx u_t + (3φ_xuu − 3ξ_xxu)u_x^2 − 3τ_xxu u_x u_t + (φ_uuu − 3ξ_xuu)u_x^3 − 3τ_xuu u_x^2 u_t − ξ_uuu u_x^4 − τ_uuu u_x^3 u_t + (3φ_xu − 3ξ_xx)u_xx − 3τ_xx u_xt + (3φ_uu − 9ξ_xu)u_x u_xx − 6τ_xu u_x u_xt − 3τ_xu u_t u_xx − 6ξ_uu u_x^2 u_xx − 3τ_uu u_x^2 u_xt − 3τ_uu u_x u_t u_xx + (φ_u − 3ξ_x)u_xxx − 3τ_x u_xxt − 4ξ_u u_x u_xxx − 3τ_u u_x u_xxt − 3ξ_u u_xx^2 − 3τ_u u_xx u_xt − τ_u u_t u_xxx

B Maple Code for K(2, 2)

This Appendix includes the Maple code used for the case when m = n = 2. This is a short piece of code, and the end output is exactly the solutions we obtained by calculating the coefficients by hand.

> restart: > with(PDEtools, DeterminingPDE, declare, diff_table, Infinitesimals, SymmetryTransformation): > declare(u(x,t)): > PDE1:=diff(u(x,t),t)+diff(u(x,t)^m,x)+diff(u(x,t)^n,x$3)=0; > PDE2:= U[t]+((U[]^m*m*U[x])/U[])+((U[]^n*n^3*U[x]^3)/U[]^3) +((3*U[]^n*n^2*U[x]*U[x,x])/U[]^2)-((3*U[]^n*n^2*U[x]^3)/U[]^3) +((U[]^n*n*U[x,x,x])/U[])-((3*U[]^n*n*U[x,x]*U[x])/U[]^2) +((2*U[]^n*n*U[x]^3)/U[]^3)=0; > DetSys:=DeterminingPDE(PDE1,integrabilityconditions=false); > n:=2: m:=2: > sol:=pdsolve(DetSys);

C Maple Code for K(m, n)

This Maple code was used for the general K(m, n) equation. Note that there are steps in the middle of the code where we input information about the coefficients so that we can reduce the system far enough to get explicit values.

> restart: > with(PDEtools,declare,InfinitesimalGenerator,FromJet,ToJet, DeterminingPDE,Infinitesimals,SymmetryTransformation,diff_table): > DepVars:=[u](x,t): declare((xi,tau,phi)(x,t,u)): declare(u(x,t)): > S:= [xi(x,t,u),tau(x,t,u),phi(x,t,u)]; > G:=InfinitesimalGenerator(S,DepVars,prolongation=3,expanded): > PDE1:=diff(u(x,t),t)+diff(u(x,t)^m,x)+diff(u(x,t)^n,x$3); > PDE2:=simplify(ToJet(%,DepVars)); > EQ1:=FromJet(G(PDE2),DepVars): > EQ2:=expand(FromJet(subs({diff(u(x,t),t)=diff(u(x,t),t)-PDE1} ,EQ1),DepVars)): > nops(%); > EQ3:=subs({u[]=u,u[1]=u_x,u[1,1]=u_xx,u[1,1,1]=u_xxx, u[1,1,1,1]=u_xxxx, u[1,1,1,1,1]=u_xxxxx,u[1,1,1,1,1,1]=u_xxxxxx}, ToJet(EQ2,DepVars)): > nops(%); > C1:=coeffs(EQ3,u_xxxxx,’X1’): X1; C1[2]; > C2:=coeffs(EQ3,u_xxxx,’X2’): X2; C2[2]; > C3:=coeffs(EQ3,u_xxx,’X3’): X3; C3[2]; > tmp1:=X1[2]*C1[2]+X2[2]*C2[2]+X3[2]*C3[2]: > EQ4:=expand(EQ3-tmp1):

107 > C4:=coeffs(EQ4,u_xx,’X4’): X4; C4[4]; C4[3]; C4[2]; > tmp2:=X4[2]*C4[2]+X4[3]*C4[3]+X4[4]*C4[4]: > EQ5:=expand(EQ4-tmp2): > C5:=coeffs(EQ5,u_x,’X5’): X5; C5[5]; C5[4]; C5[3]; C5[2]; > tmp3:=X5[2]*C5[2]+X5[3]*C5[3]+X5[4]*C5[4]+X5[5]*C5[5]: > tmp4:=expand(EQ5-tmp3): > TMP:=tmp1+tmp2+tmp3+tmp4; > expand(TMP-EQ3); > new:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t)},TMP)); > new1:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t), phi(x,t,u)=u*(-diff(a(t),t) +3*diff(b(x,t),x))/(n-1)},TMP)); > c[1]:=(coeff(new1,u_xxx)):c[2]:=(coeff(new1,u_xx)): c[3]:=(coeff(new1,u_x^3)): c[4]:=(coeff(new1,u_x^2)): c[5]:=coeff(expand(new1-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3 +c[4]*u_x^2)),u_x): c[6]:=expand(new1-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2 +c[5]*u_x)): > for i from 1 to 6 do factor(u^2*(n-1)*c[i]); od; > new2:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=(1/3)*(diff(a(t),t) +c1)*x+d(t),phi(x,t,u)=u*(-diff(a(t),t)+3*diff(b(x,t),x))/(n-1)}, TMP)); > new3:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b1(t)*x+b2(t), phi(x,t,u)=u*(-diff(a(t),t)+3*b1(t))/(n-1)},TMP)); > c[1]:=(coeff(new3,u_xxx)):c[2]:=(coeff(new3,u_xx)): c[3]:=(coeff(new3,u_x^3)): c[4]:=(coeff(new3,u_x^2)):

  c[5]:=coeff(expand(new3-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
  c[6]:=expand(new3-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(n-1)*c[i]); od;
> start:=2*u^m*m*b1(t)+u^m*m^2*a[t]-3*u^m*m^2*b1(t)+u^m*m*b1(t)*n
  +b1[t]*x*u*n-b1[t]*x*u+b2[t]*u*n-b2[t]*u-u^m*m*a[t]*n;
> new4:=eval(subs({tau(x,t,u)=3*c2*t+c1,xi(x,t,u)=c2*x+p(t),n=1},TMP));
> c[1]:=(coeff(new4,u_xxx)): c[2]:=(coeff(new4,u_xx)):
  c[3]:=(coeff(new4,u_x^3)): c[4]:=(coeff(new4,u_x^2)):
  c[5]:=coeff(expand(new4-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
  c[6]:=expand(new4-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*c[i]); od;

D Complete Maple Output

This Appendix repeats the Maple code for K(m, n), this time with the output included and all of the cases shown and worked out.
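Before walking through the session, the equation itself can be sanity-checked independently of Maple. The following SymPy snippet (an illustrative check, not part of the thesis worksheet) verifies that the well-known Rosenau-Hyman compacton profile u = (4c/3) cos^2((x - ct)/4) satisfies K(2,2) on its support:

```python
import sympy as sp

# Hedged sanity check: the Rosenau-Hyman compacton profile for K(2,2),
# u = (4c/3) cos^2((x - c t)/4), should satisfy u_t + (u^2)_x + (u^2)_xxx = 0
# on its support |x - ct| <= 2*pi.
x, t, c = sp.symbols('x t c')
u = sp.Rational(4, 3) * c * sp.cos((x - c*t) / 4)**2

residual = sp.diff(u, t) + sp.diff(u**2, x) + sp.diff(u**2, x, 3)
assert sp.simplify(residual) == 0  # identically zero on the support
```

The residual vanishes identically, so any symmetry reductions found below are consistent with at least this exact solution.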

> restart: with(PDEtools,declare,InfinitesimalGenerator,FromJet,ToJet,
  DeterminingPDE,Infinitesimals,SymmetryTransformation,diff_table):
> DepVars:=[u](x,t): declare((xi,tau,phi)(x,t,u)): declare(u(x,t)):

xi(x, t, u) will now be displayed as xi

tau(x, t, u) will now be displayed as tau

phi(x, t, u) will now be displayed as phi

u(x, t) will now be displayed as u

> S:= [xi(x,t,u),tau(x,t,u),phi(x,t,u)];

S := [xi, tau, phi]

> G:=InfinitesimalGenerator(S,DepVars,prolongation=3,expanded):
> PDE1:=diff(u(x,t),t)+diff(u(x,t)^m,x)+diff(u(x,t)^n,x$3);

PDE1 := u[t] + u^m*m*u[x]/u + u^n*n^3*u[x]^3/u^3 + 3*u^n*n^2*u[x]*u[x,x]/u^2
        - 3*u^n*n^2*u[x]^3/u^3 + u^n*n*u[x,x,x]/u - 3*u^n*n*u[x,x]*u[x]/u^2
        + 2*u^n*n*u[x]^3/u^3
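The expansion displayed for PDE1 can be checked independently. Collecting the n^3, n^2, and n terms above gives (u^n)_xxx = n(n-1)(n-2)u^(n-3)u_x^3 + 3n(n-1)u^(n-2)u_x*u_xx + n u^(n-1)u_xxx; the SymPy sketch below (not part of the Maple session) confirms this for several concrete exponents:

```python
import sympy as sp

# Check the chain-rule expansion behind PDE1:
# (u^n)_xxx = n(n-1)(n-2) u^(n-3) u_x^3 + 3n(n-1) u^(n-2) u_x u_xx + n u^(n-1) u_xxx
# Verified here for a few concrete exponents n.
x = sp.symbols('x')
u = sp.Function('u')(x)
ux, uxx, uxxx = sp.diff(u, x), sp.diff(u, x, 2), sp.diff(u, x, 3)

for n in [2, 3, 5, 7]:
    lhs = sp.diff(u**n, x, 3)
    rhs = (n*(n-1)*(n-2)*u**(n-3)*ux**3
           + 3*n*(n-1)*u**(n-2)*ux*uxx
           + n*u**(n-1)*uxxx)
    assert sp.expand(lhs - rhs) == 0
```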

110 > PDE2:=simplify(ToJet(%,DepVars));

PDE2 := u[2] + u[]^(m-1)*m*u[1] + u[]^(n-3)*n^3*u[1]^3
        + 3*u[]^(n-2)*n^2*u[1]*u[1,1] - 3*u[]^(n-3)*n^2*u[1]^3
        + u[]^(n-1)*n*u[1,1,1] - 3*u[]^(n-2)*n*u[1,1]*u[1]
        + 2*u[]^(n-3)*n*u[1]^3

> EQ1:=FromJet(G(PDE2),DepVars):
> EQ2:=expand(FromJet(subs({diff(u(x,t),t)=diff(u(x,t),t)-PDE1},EQ1),DepVars)):
> nops(%);

213

> EQ3:=subs({u[]=u,u[1]=u_x,u[1,1]=u_xx,u[1,1,1]=u_xxx,u[1,1,1,1]=u_xxxx,
  u[1,1,1,1,1]=u_xxxxx,u[1,1,1,1,1,1]=u_xxxxxx},ToJet(EQ2,DepVars)):
> nops(%);

213

> C1:=coeffs(EQ3,u_xxxxx,’X1’): X1; C1[2];

1, u_xxxxx

3*(u^n)^2*n^2*u_x*τ[u]/u^2 + 3*(u^n)^2*n^2*τ[x]/u^2

Setting this coefficient to zero gives tau_u = tau_x = 0.

> C2:=coeffs(EQ3,u_xxxx,’X2’): X2: C2[2]:

> C3:=coeffs(EQ3,u_xxx,'X3'): X3: C3[2]:
> tmp1:=X1[2]*C1[2]+X2[2]*C2[2]+X3[2]*C3[2]:
> EQ4:=expand(EQ3-tmp1):
> C4:=coeffs(EQ4,u_xx,'X4'): X4: C4[4]: C4[3]: C4[2]:
> tmp2:=X4[2]*C4[2]+X4[3]*C4[3]+X4[4]*C4[4]:
> EQ5:=expand(EQ4-tmp2):
> C5:=coeffs(EQ5,u_x,'X5'): X5: C5[5]: C5[4]: C5[3]: C5[2]:
> tmp3:=X5[2]*C5[2]+X5[3]*C5[3]+X5[4]*C5[4]+X5[5]*C5[5]:
> tmp4:=expand(EQ5-tmp3):
> TMP:=tmp1+tmp2+tmp3+tmp4:
> expand(TMP-EQ3);

0

> new:=eval(subs({tau(x,t,u)=a(t)},TMP)):

From the u_x*u_xxx term, we see that xi_u = 0.

> new:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t)},TMP)):

From the u_xxx term, we know that phi = u*(-a_t + 3*b_x)/(n-1). Note that this requires n != 1.

> new1:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),
  phi(x,t,u)=u*(-diff(a(t),t)+3*diff(b(x,t),x))/(n-1)},TMP)):
> c[1]:=(coeff(new1,u_xxx)): c[2]:=(coeff(new1,u_xx)):
  c[3]:=(coeff(new1,u_x^3)): c[4]:=(coeff(new1,u_x^2)):
  c[5]:=coeff(expand(new1-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new1-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(n-1)*c[i]); od;

0
3*u^n*n*b[x,x]*u*(2*n + 1)
0
3*u^n*n*b[x,x]*(2*n + 1)*(n - 1)
-(u^m*m^2*a[t] - 3*u^m*m^2*b[x] + 2*u^m*m*b[x] + u^m*m*b[x]*n - 8*b[x,x,x]*u^n*n^2 - u^m*m*a[t]*n - u^n*n*b[x,x,x] + b[t]*u*n - b[t]*u)*u
(-u*a[t,t] + 3*u*b[t,x] + 3*u^n*n*b[x,x,x,x] + 3*b[x,x]*u^m*m)*u^2

From the last of the equations listed, which we recall is equal to zero, we get that xi = (1/3)*(diff(a(t),t)+c1)*x + d(t). We also know, from the top two equations, that b_xx = 0.

Since using b_xx = 0 simplifies things the most, we will use that. Collecting terms again, we are left with two equations, where the last one gives the same relation between xi and tau that we had already established.

> new3:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b1(t)*x+b2(t),
  phi(x,t,u)=u*(-diff(a(t),t)+3*b1(t))/(n-1)},TMP)):
> c[1]:=(coeff(new3,u_xxx)): c[2]:=(coeff(new3,u_xx)):

  c[3]:=(coeff(new3,u_x^3)): c[4]:=(coeff(new3,u_x^2)):
  c[5]:=coeff(expand(new3-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new3-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(n-1)*c[i]); od;

0
0
0
0
-(u^m*m^2*a[t] - 3*u^m*m^2*b1(t) + 2*u^m*m*b1(t) + u^m*m*b1(t)*n - u^m*m*a[t]*n + b1[t]*x*u*n - b1[t]*x*u + b2[t]*u*n - b2[t]*u)*u
(-a[t,t] + 3*b1[t])*u^3

From these two equations, we can go on to find the general form of xi, tau, and phi.
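For m != 0, 1, n != 1, and m != n, these two equations force b1 and b2 constant with a_t = b1*(3m - n - 2)/(m - n), so that phi = -2*u*b1/(m - n). As a consistency check (an illustrative SymPy sketch, not thesis code), the corresponding scaling x -> L*x, t -> L^s*t, u -> L^p*u rescales all three terms of K(m,n) by the same power of L:

```python
import sympy as sp

# Consistency check (sketch): under x -> L*x, t -> L**s * t, u -> L**p * u with
#   s = (3m - n - 2)/(m - n),  p = -2/(m - n),
# the three terms of K(m,n), namely u_t, (u^m)_x, and (u^n)_xxx, all pick up
# the same power of L, so the scaling maps solutions to solutions.
m, n = sp.symbols('m n')
s = (3*m - n - 2) / (m - n)
p = -2 / (m - n)

e_time = p - s        # exponent picked up by u_t
e_conv = p*m - 1      # exponent picked up by (u^m)_x
e_disp = p*n - 3      # exponent picked up by (u^n)_xxx

assert sp.simplify(e_time - e_conv) == 0
assert sp.simplify(e_conv - e_disp) == 0
```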

Next, we must focus on specific cases of m and n. For the general case above, we have the restrictions n != 1 and m != 0, 1. We also have to make sure that m != n. Our next step is to consider each of these cases individually and find the results.

WHEN n=1:

> new4:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),n=1},TMP)):
> c[1]:=(coeff(new4,u_xxx)): c[2]:=(coeff(new4,u_xx)):
  c[3]:=(coeff(new4,u_x^3)): c[4]:=(coeff(new4,u_x^2)):

  c[5]:=coeff(expand(new4-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new4-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(n-1)*c[i]); od;

u^2*(n-1)*(a[t] - 3*b[x])
-3*u^2*(n-1)*(-φ[u,x] + b[x,x] - u_x*φ[u,u])
u^2*(n-1)*φ[u,u,u]
3*u^2*(n-1)*φ[u,u,x]
(n-1)*(φ*u^m*m^2 - φ*u^m*m - u^m*m*b[x]*u + 3*φ[u,x,x]*u^2 + u^m*m*a[t]*u - b[x,x,x]*u^2 - b[t]*u^2)
u*(n-1)*(φ[t]*u + φ[x,x,x]*u + φ[x]*u^m*m)

From equation one, we see that a_t = 3b_x. This means that b_x = c1 => b = c1*x + p(t), and a_t = 3c1 => a = 3c1*t + c2. We substitute this into the equations and run them again.

> new5:=eval(subs({tau(x,t,u)=3*c1*t+c2,xi(x,t,u)=c1*x+p(t),n=1},TMP)):
> c[1]:=(coeff(new5,u_xxx)): c[2]:=(coeff(new5,u_xx)):
  c[3]:=(coeff(new5,u_x^3)): c[4]:=(coeff(new5,u_x^2)):
  c[5]:=coeff(expand(new5-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new5-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(n-1)*c[i]); od;

0

3*u^2*(n-1)*(φ[u,x] + u_x*φ[u,u])
u^2*(n-1)*φ[u,u,u]
3*u^2*(n-1)*φ[u,u,x]
(n-1)*(φ*u^m*m^2 - φ*u^m*m + 2*c1*u^m*m*u + 3*φ[u,x,x]*u^2 - p[t]*u^2)
u*(n-1)*(φ[t]*u + φ[x,x,x]*u + φ[x]*u^m*m)

From these equations, we can easily find phi. Note that this is for the case n = 1, m != 0, 1.

We now consider the case when n=1, m=1.

> new6:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),n=1,m=1},TMP)):
> c[1]:=(coeff(new6,u_xxx)): c[2]:=(coeff(new6,u_xx)):
  c[3]:=(coeff(new6,u_x^3)): c[4]:=(coeff(new6,u_x^2)):
  c[5]:=coeff(expand(new6-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new6-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(c[i]); od;

a[t] − 3b[x]

3*φ[u,x] - 3*b[x,x] + 3*u_x*φ[u,u]
φ[u,u,u]
3*φ[u,u,x]
-b[x] + 3*φ[u,x,x] + a[t] - b[x,x,x] - b[t]
φ[t] + φ[x] + φ[x,x,x]

From the first equation, a_t = 3b_x => a = 3c1*t + c2, b = c1*x + p(t).

> new7:=eval(subs({tau(x,t,u)=3*c1*t+c2,xi(x,t,u)=c1*x+p(t),n=1,m=1},TMP)):
> c[1]:=(coeff(new7,u_xxx)): c[2]:=(coeff(new7,u_xx)):
  c[3]:=(coeff(new7,u_x^3)): c[4]:=(coeff(new7,u_x^2)):
  c[5]:=coeff(expand(new7-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new7-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(c[i]); od;

0

3*φ[u,x] + 3*u_x*φ[u,u]
φ[u,u,u]
3*φ[u,u,x]
2*c1 + 3*φ[u,x,x] - p[t]
φ[t] + φ[x] + φ[x,x,x]

Once again, from here it is easy to find phi and p(t).
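For n = m = 1 the equation reduces to the linear PDE u_t + u_x + u_xxx = 0, so the symmetry results for this case can be cross-checked against elementary solutions. As a hedged side check (not part of the Maple session), plane waves cos(kx - wt) solve this case exactly when w = k - k^3:

```python
import sympy as sp

# Side check for the linear case K(1,1): u_t + u_x + u_xxx = 0.
# A plane wave cos(k*x - w*t) is a solution iff w satisfies the
# dispersion relation w = k - k**3.
x, t, k = sp.symbols('x t k')
w = k - k**3
u = sp.cos(k*x - w*t)

residual = sp.diff(u, t) + sp.diff(u, x) + sp.diff(u, x, 3)
assert sp.simplify(residual) == 0
```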

For the case when n=1, m=0:

> new8:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),n=1,m=0},TMP)):
> c[1]:=(coeff(new8,u_xxx)): c[2]:=(coeff(new8,u_xx)):
  c[3]:=(coeff(new8,u_x^3)): c[4]:=(coeff(new8,u_x^2)):
  c[5]:=coeff(expand(new8-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new8-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(c[i]); od;

a[t] − 3b[x]

3*φ[u,x] - 3*b[x,x] + 3*u_x*φ[u,u]
φ[u,u,u]
3*φ[u,u,x]
3*φ[u,x,x] - b[x,x,x] - b[t]
φ[t] + φ[x,x,x]

For the case when m=0:

> new9:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),m=0},TMP)):
> c[1]:=(coeff(new9,u_xxx)): c[2]:=(coeff(new9,u_xx)):
  c[3]:=(coeff(new9,u_x^3)): c[4]:=(coeff(new9,u_x^2)):
  c[5]:=coeff(expand(new9-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new9-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^4*c[i]); od;

• u^2*u^n*n*(a[t]*u - 3*b[x]*u + φ*n - φ)

• 3*u*u^n*n*(φ*n^2*u_x - 3*φ*n*u_x + 2*φ*u_x + u_x*φ[u]*n*u - u_x*φ[u]*u + u_x*φ[u,u]*u^2 - b[x,x]*u^2 - φ[x]*u + φ[u,x]*u^2 + φ[x]*n*u - 3*u_x*b[x]*n*u + 3*u_x*b[x]*u + a[t]*n*u_x*u - a[t]*u_x*u)

• u^n*n*(-3*b[x]*n^2*u + 9*b[x]*n*u - 6*b[x]*u + a[t]*n^2*u - 3*a[t]*n*u + 2*a[t]*u + φ*n^3 - 6*φ*n^2 + 11*φ*n - 6*φ + 2*φ[u]*n^2*u - 6*φ[u]*n*u + 4*φ[u]*u + 3*φ[u,u]*n*u^2 - 3*φ[u,u]*u^2 + φ[u,u,u]*u^3)

• -3*u*u^n*n*(2*φ[u,x]*u - 2*φ[u,x]*n*u - φ[x]*n^2 + 3*φ[x]*n - 2*φ[x] - φ[u,u,x]*u^2 - b[x,x]*u + b[x,x]*n*u)

• -u^2*(-3*u^n*n*φ[u,x,x]*u - 3*φ[x,x]*u^n*n^2 + 3*φ[x,x]*u^n*n + u^n*n*b[x,x,x]*u + b[t]*u^2)

• u^3*(u^n*n*φ[x,x,x] + φ[t]*u)

> new11:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),
  phi(x,t,u)=u*(-diff(a(t),t)+3*diff(b(x,t),x))/(n-1)},TMP)):
> c[1]:=(coeff(new11,u_xxx)): c[2]:=(coeff(new11,u_xx)):
  c[3]:=(coeff(new11,u_x^3)): c[4]:=(coeff(new11,u_x^2)):
  c[5]:=coeff(expand(new11-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new11-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(n-1)*c[i]); od;

0
3*u^n*n*b[x,x]*u*(2*n + 1)
0
3*u^n*n*b[x,x]*(2*n + 1)*(n - 1)
-(u^m*m^2*a[t] - 3*u^m*m^2*b[x] + 2*u^m*m*b[x] + u^m*m*b[x]*n - 8*b[x,x,x]*u^n*n^2 - u^m*m*a[t]*n - u^n*n*b[x,x,x] + b[t]*u*n - b[t]*u)*u
(-u*a[t,t] + 3*u*b[t,x] + 3*u^n*n*b[x,x,x,x] + 3*b[x,x]*u^m*m)*u^2

Equation 2 => b_xx = 0.

> new12:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b1(t)*x+b2(t),
  phi(x,t,u)=u*(-diff(a(t),t)+3*b1(t))/(n-1)},TMP)):
> c[1]:=(coeff(new12,u_xxx)): c[2]:=(coeff(new12,u_xx)):
  c[3]:=(coeff(new12,u_x^3)): c[4]:=(coeff(new12,u_x^2)):
  c[5]:=coeff(expand(new12-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new12-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u*(n-1)*c[i]); od;

0
0
0
0
-u^m*m^2*a[t] + 3*u^m*m^2*b1(t) - 2*u^m*m*b1(t) - u^m*m*b1(t)*n + u^m*m*a[t]*n - b1[t]*x*u*n + b1[t]*x*u - b2[t]*u*n + b2[t]*u
(-a[t,t] + 3*b1[t])*u^2

From here, finding xi, tau, and phi is easy work (this is the case where m = 0 and n != 0, 1).

We have already investigated the m=0, n=1 case, so now we will try the m=0, n=0 case.

> new13:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),m=0,n=0},TMP));

new13 := -u_x b[t] + phi[t]

From this, we get very basic information for xi, tau, and phi.

Next, we want to investigate the case when m=n. This is also a special case, and the final one that needs to be addressed.

> new14:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),n=m},TMP)):
> c[1]:=(coeff(new14,u_xxx)): c[2]:=(coeff(new14,u_xx)):
  c[3]:=(coeff(new14,u_x^3)): c[4]:=(coeff(new14,u_x^2)):
  c[5]:=coeff(expand(new14-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new14-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^4*c[i]); od;

• u^2*u^m*m*(-3*b[x]*u + a[t]*u + φ*m - φ)

• 3*u*u^m*m*(φ*m^2*u_x - 3*φ*m*u_x + 2*φ*u_x + u_x*φ[u]*m*u - u_x*φ[u]*u - 3*u_x*b[x]*m*u + 3*u_x*b[x]*u + a[t]*m*u_x*u - a[t]*u_x*u + u_x*φ[u,u]*u^2 - φ[x]*u - b[x,x]*u^2 + φ[u,x]*u^2 + φ[x]*m*u)

• u^m*m*(φ*m^3 - 6*φ*m^2 + 11*φ*m - 6*φ + 2*φ[u]*m^2*u - 6*φ[u]*m*u + 4*φ[u]*u - 3*b[x]*m^2*u + 9*m*b[x]*u - 6*b[x]*u + a[t]*m^2*u - 3*m*a[t]*u + 2*a[t]*u + 3*φ[u,u]*m*u^2 - 3*φ[u,u]*u^2 + φ[u,u,u]*u^3)

• -3*u*u^m*m*(2*φ[u,x]*u - 2*φ[u,x]*m*u - φ[x]*m^2 + 3*φ[x]*m - 2*φ[x] - φ[u,u,x]*u^2 - b[x,x]*u + b[x,x]*m*u)

• u^2*(φ*u^m*m^2 - φ*u^m*m - u^m*m*b[x]*u + 3*u^m*m*φ[u,x,x]*u + 3*φ[x,x]*u^m*m^2 + u^m*m*a[t]*u - 3*φ[x,x]*u^m*m - u^m*m*b[x,x,x]*u - b[t]*u^2)

• u^3*(φ[t]*u + φ[x]*u^m*m + u^m*m*φ[x,x,x])

> new15:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b(x,t),
  phi(x,t,u)=(u*(3*diff(b(x,t),x)-diff(a(t),t)))/(m-1),n=m},TMP)):
> c[1]:=(coeff(new15,u_xxx)): c[2]:=(coeff(new15,u_xx)):
  c[3]:=(coeff(new15,u_x^3)): c[4]:=(coeff(new15,u_x^2)):
  c[5]:=coeff(expand(new15-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new15-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(m-1)*c[i]); od;

0
3*u^m*m*b[x,x]*u*(2*m + 1)
0
3*b[x,x]*u^m*m*(2*m + 1)*(m - 1)
(2*u^m*m^2*b[x] - 2*u^m*m*b[x] + 8*b[x,x,x]*u^m*m^2 + u^m*m*b[x,x,x] - b[t]*u*m + b[t]*u)*u
(3*u^m*m*b[x,x,x,x] + 3*b[x,x]*u^m*m - u*a[t,t] + 3*u*b[t,x])*u^2

> new16:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b1(t)*x+b2(t),
  phi(x,t,u)=(u*(3*b1(t)-diff(a(t),t)))/(m-1),n=m},TMP)):
> c[1]:=(coeff(new16,u_xxx)): c[2]:=(coeff(new16,u_xx)):
  c[3]:=(coeff(new16,u_x^3)): c[4]:=(coeff(new16,u_x^2)):
  c[5]:=coeff(expand(new16-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new16-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u^2*(m-1)*c[i]); od;

0
0
0
0
(m - 1)*(2*u^m*m*b1(t) - b2[t]*u - b1[t]*x*u)*u
(-a[t,t] + 3*b1[t])*u^3

From here, we can see that a_tt = 3b1_t and that m != 1. From equation 5, we can see that b_t = 0. Solving this is now trivial.
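The outcome of this case can also be sanity-checked by a scaling argument. One consistent solution of the two equations above is b1 = 0, b2 constant, and a = a1*t + a0, giving phi = -a1*u/(m-1); this corresponds to t -> L*t, u -> L^p*u with x fixed and p = -1/(m-1). The sketch below (illustrative, not thesis code) confirms that every term of K(m,m) then scales identically:

```python
import sympy as sp

# Sketch: for K(m,m), the scaling t -> L*t, u -> L**p * u with x fixed
# and p = -1/(m-1) rescales u_t, (u^m)_x, and (u^m)_xxx by the same power
# of L, so it maps solutions to solutions.
m = sp.symbols('m')
p = -1 / (m - 1)

e_time = p - 1    # exponent picked up by u_t
e_umx  = p*m      # exponent picked up by (u^m)_x   (x is not scaled)
e_umx3 = p*m      # exponent picked up by (u^m)_xxx (x is not scaled)

assert sp.simplify(e_time - e_umx) == 0
assert e_umx == e_umx3
```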

Finally, we consider the case when m=n.

> new17:=eval(subs({tau(x,t,u)=a(t),xi(x,t,u)=b1(t)*x+b2(t),
  phi(x,t,u)=u*(-diff(a(t),t)+3*b1(t))/(n-1),m=n},TMP)):
> c[1]:=(coeff(new17,u_xxx)): c[2]:=(coeff(new17,u_xx)):
  c[3]:=(coeff(new4,u_x^3)): c[4]:=(coeff(new17,u_x^2)):
  c[5]:=coeff(expand(new17-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2)),u_x):
> c[6]:=expand(new17-(c[1]*u_xxx+c[2]*u_xx+c[3]*u_x^3+c[4]*u_x^2+c[5]*u_x)):
> for i from 1 to 6 do factor(u*(n-1)*c[i]); od;

0
0
u*(n - 1)*φ[u,u,u]
0
(n - 1)*(2*u^n*n*b1(t) - b1[t]*x*u - b2[t]*u)
(-u*a[t,t] - u_x^3*φ[u,u,u]*n + u_x^3*φ[u,u,u] + 3*u*b1[t])*u

E Maple Code for Nonclassical K(m, n)

This Appendix contains the Maple code we used when solving for the nonclassical symmetries. The code is slightly altered from what we used in the classical setting. Here, the Maple commands are very explicit, in that we must physically insert K(m, n) and the invariant surface condition into the φJ's. However, the process of solving for the coefficients of the infinitesimal generator is the same.
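The key bookkeeping step in the nonclassical method below is this: with τ = 1 the invariant surface condition Q = φ - ξ u_x - u_t = 0 is used to eliminate u_t, and K(m,n) is then solved for u_xxx. The SymPy sketch below (symbolic placeholders standing in for u and its derivatives, not the thesis code itself) illustrates that substitution:

```python
import sympy as sp

# Nonclassical-method bookkeeping (sketch): with tau = 1 the invariant
# surface condition gives u_t = phi - xi*u_x. Substituting this into
# K(m,n), written out by the chain rule, lets us solve for u_xxx; this is
# the role of the expression substituted into EQ3 in the Maple code below.
m, n = sp.symbols('m n')
u, ux, uxx, uxxx, phi, xi = sp.symbols('u u_x u_xx u_xxx phi xi')

ut = phi - xi*ux  # invariant surface condition with tau = 1
kmn = (ut + m*u**(m-1)*ux
       + n*(n-1)*(n-2)*u**(n-3)*ux**3
       + 3*n*(n-1)*u**(n-2)*ux*uxx
       + n*u**(n-1)*uxxx)

uxxx_expr = sp.solve(kmn, uxxx)[0]
# Substituting back must annihilate K(m,n) identically.
assert sp.simplify(kmn.subs(uxxx, uxxx_expr)) == 0
```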

> restart:

Load libraries:

> with(PDEtools,declare,InfinitesimalGenerator,FromJet,ToJet):

Define variables and dependencies so that coefficient functions are displayed compactly:

> DepVars:=[u](x,t): declare((xi,tau,phi)(x,t,u)): declare(u(x,t)):

xi(x, t, u) will now be displayed as xi

tau(x, t, u) will now be displayed as tau

phi(x, t, u) will now be displayed as phi

u(x, t) will now be displayed as u

Define the generator coefficient functions xi(x,t,u), tau(x,t,u), and phi(x,t,u):

> S := [xi(x, t, u), tau(x, t, u), phi(x, t, u)]:

Procedure for infinitesimal generators - need to specify the order of prolongation (3 in this case):

> G:=InfinitesimalGenerator(S,DepVars,prolongation=3,expanded):
> PDE:=diff(u(x,t),t)+diff(u(x,t)^m,x)+diff(u(x,t)^n,x$3):
> PDE2:=simplify(ToJet(%,DepVars)):
> EQ1:=FromJet(G(PDE2),DepVars):

Substitute K(m,n):

> EQ2:=expand(FromJet(subs({diff(u(x,t),t)=diff(u(x,t),t)-PDE},EQ1),DepVars)):
> ToJet(EQ2,DepVars):
> EQ30:=eval(subs({u[]=u,u[1]=ux,u[1,1]=uxx,u[1,1,1]=uxxx,u[1,1,1,1]=uxxxx,
  u[1,1,1,1,1]=uxxxxx,tau=1},ToJet(EQ2,DepVars))):
> Q:=subs({diff(u(x,t),t)=diff(u(x,t),t)-PDE},phi(x,t,u)-xi(x,t,u)*ux
  -diff(u(x,t),t)=0):

Substitute the invariant surface condition Q:

> EQ3:=expand(subs(uxxx=(phi(x,t,u)*u^3-xi(x,t,u)*u^3*ux+u^m*u^2*m*ux
  +u^n*n^3*ux^3+3*u^n*u*n^2*ux*uxx-3*u^n*n^2*ux^3-3*u^n*u*n*uxx*ux
  +2*u^n*n*ux^3)/(-u^2*u^n*n),u^4*EQ30)):

> C2:=coeffs(EQ3,uxx,'X2'): X2: C2[2]: C2[3]:
> C21:=coeffs(C2[2],ux,'X21'): X21: C21[2]: C21[3]:
> tmp:=X2[2]*C2[2]+X2[3]*C2[3]: EQ4:=expand(EQ3-tmp):
> C3:=coeffs(EQ4,ux,'X3'): X3: C3[2]: C3[3]: C3[4]: C3[5]:
> tmp:=X3[2]*C3[2]+X3[3]*C3[3]+X3[4]*C3[4]+X3[5]*C3[5]: EQ5:=expand(EQ4-tmp):

Re-write the system:

> sys:=[-3*u^3*u^n*n*diff(xi(x,t,u),u),
  -3*u*phi(x,t,u)*u^n*n^2+3*u*phi(x,t,u)*u^n*n
  +3*u^2*diff(phi(x,t,u),u)*u^n*n^2-3*u^2*diff(phi(x,t,u),u)*u^n*n
  +3*u^3*u^n*n*diff(diff(phi(x,t,u),u),u)
  -9*u^3*u^n*n*diff(diff(xi(x,t,u),u),x),
  -6*u^3*u^n*n*diff(xi(x,t,u),u$2),
  -2*phi(x,t,u)*u^n*n^3-4*phi(x,t,u)*u^n*n
  -3*u^2*diff(phi(x,t,u),u$2)*u^n*n+4*u*diff(phi(x,t,u),u)*u^n*n
  +6*phi(x,t,u)*u^n*n^2+2*u*diff(phi(x,t,u),u)*u^n*n^3
  -6*u*diff(phi(x,t,u),u)*u^n*n^2
  -3*u^3*u^n*n*diff(diff(xi(x,t,u),u$2),x)
  +6*u^2*diff(diff(xi(x,t,u),u),x)*u^n*n
  +3*u^2*diff(phi(x,t,u),u$2)*u^n*n^2
  -6*u^2*diff(diff(xi(x,t,u),u),x)*u^n*n^2
  +u^3*u^n*n*diff(phi(x,t,u),u$3),
  -u^3*phi(x,t,u)*xi(x,t,u)-3*u^2*diff(phi(x,t,u),x$2)*u^n*n
  -3*u^4*diff(xi(x,t,u),x)*xi(x,t,u)
  +3*u^3*u^n*n*diff(diff(phi(x,t,u),u),x$2)
  -u^3*u^n*n*diff(xi(x,t,u),x$3)

  -u^4*diff(xi(x,t,u),t)+3*u^2*diff(phi(x,t,u),x$2)*u^n*n^2
  +u^3*phi(x,t,u)*n*xi(x,t,u)
  +3*u^4*diff(xi(x,t,u),u)*phi(x,t,u)
  +2*u^3*diff(xi(x,t,u),x)*u^m*m-u^2*phi(x,t,u)*n*u^m*m
  +u^2*phi(x,t,u)*u^m*m^2,
  -9*u*diff(phi(x,t,u),x)*u^n*n^2
  +3*u*diff(phi(x,t,u),x)*u^n*n^3
  +3*u^2*diff(xi(x,t,u),x$2)*u^n*n
  +6*u^2*diff(diff(phi(x,t,u),u),x)*u^n*n^2
  +3*u^3*u^n*n*diff(diff(phi(x,t,u),u$2),x)
  +6*u*diff(phi(x,t,u),x)*u^n*n
  -3*u^2*diff(xi(x,t,u),x$2)*u^n*n^2
  +3*u^3*diff(xi(x,t,u),u)*u^m*m
  -6*u^2*diff(diff(phi(x,t,u),u),x)*u^n*n
  -3*u^4*diff(xi(x,t,u),u)*xi(x,t,u)
  -3*u^3*u^n*n*diff(diff(xi(x,t,u),u),x$2),
  -3*u^2*diff(xi(x,t,u),u$2)*u^n*n^2
  +u*diff(xi(x,t,u),u)*u^n*n^3+2*u*diff(xi(x,t,u),u)*u^n*n
  +3*u^2*diff(xi(x,t,u),u$2)*u^n*n
  -u^3*u^n*n*diff(xi(x,t,u),u$3)-3*u*diff(xi(x,t,u),u)*u^n*n^2,
  3*u^4*diff(xi(x,t,u),x)*phi(x,t,u)
  -u^3*phi(x,t,u)^2*n+u^3*u^n*n*diff(phi(x,t,u),x$3)
  +u^3*phi(x,t,u)^2+u^4*diff(phi(x,t,u),t)
  +u^3*diff(phi(x,t,u),x)*u^m*m]:
> for i from 1 to 8 do sys[i]=0: od:
> for i from 1 to 8 do expand(eval(subs(xi(x,t,u)=b(x,t),sys[i])))=0: od:
> for i from 1 to 8 do eq[i]:=factor(eval(subs

  ({xi(x,t,u)=b(x,t),phi(x,t,u)=a(x,t)*u+c(x,t)*u^(-n+1)},sys[i]))): od:

n = 1, m > 2

> sys1:=[coeffs(lhs(eval(subs({n=1,u^m=U},eq[1]/u^2))),u)]:
> sys2:=[coeffs(lhs(expand(subs({n=1,u^m=U},eq[3]/u^3))),u)]:
> SYS:={seq(coeffs(sys1[i],U),i=1..nops(sys1)),
  seq(coeffs(sys2[i],U),i=1..nops(sys2))}:
> `Number of Equations ` = nops(%):
> pdsolve(SYS,{a(x,t),b(x,t),c(x,t)}):
> expand(subs({c(x,t)=0,a(x,t)=a0(t),b(x,t)=b1(t)*x+b0(t)},SYS)):
> dsolve({3*b1(t)*a0(t)+diff(a0(t),t),2*b1(t)*m-m*a0(t)+m^2*a0(t),
  -diff(b0(t),t)-3*b1(t)*b0(t),-diff(b1(t),t)-3*b1(t)^2},
  {a0(t),b1(t),b0(t)}):

n = 0, m > 1

> sys1:=[coeffs(lhs(eval(subs({n=0,u^m=U},eq[1]/u^3))),u)]:
> sys2:=[coeffs(lhs(expand(subs({n=0,u^m=U},eq[3]/u^3))),u)]:
> SYS:={seq(coeffs(sys1[i],U),i=1..nops(sys1)),
  seq(coeffs(sys2[i],U),i=1..nops(sys2))}:
> `Number of Equations ` = nops(%):
> pdsolve(SYS,{a(x,t),b(x,t),c(x,t)}):
> expand(subs({c(x,t)=d(x,t)-a(x,t)},SYS)):
> tmp:=expand(subs({c(x,t)=d(t)-a(x,t),b(x,t)=-m/2*d(t)*x+b0(t)},SYS)):

> ans:=dsolve(tmp[2]): simplify(eval(subs({ans},tmp))): dsolve(%[2]):

n = 0, m = 1

> SYS:={seq(coeffs(lhs(expand(subs({n=0,m=1,c(x,t)=d(x,t)-a(x,t)},
  eq[i]))),u),i=1..3,2)}:
> `Number of Equations ` = nops(%):
> ans:=pdsolve(SYS,{b(x,t),d(x,t)}): ans[1]:
> simplify(subs({_C4=0,_C5=0,_C6=0,_C7=1,_C8=0},ans[3])):

n = 0, m = 0

> SYS:={seq(coeffs(lhs(expand(subs({n=0,m=0,c(x,t)=d(x,t)-a(x,t)},
  eq[i]))),u),i=1..3,2)}:
> `Number of Equations ` = nops(%):
> ans:=pdsolve(SYS,{b(x,t),d(x,t)}): ans: simplify(%[2]):

n = 1, m = 1

> SYS:={seq(coeffs(lhs(expand(subs({n=1,m=1},eq[i]))),u),i=1..3,2)}:
> `Number of Equations ` = nops(%):
> ans:=pdsolve(SYS,{a(x,t),b(x,t),d(x,t)}): ans:
> expand(subs({a(x,t)=0},SYS)): pdsolve(%[2]):

n = 1, m = 2

> SYS:={seq(coeffs(lhs(expand(subs({n=1,m=2},eq[i]))),u),i=1..3,2)}:
> `Number of Equations ` = nops(%):
> pdsolve(SYS,{a(x,t),b(x,t),c(x,t)}):
> expand(subs({a(x,t)=-2*b1(t),b(x,t)=b1(t)*x+b0(t),
  c(x,t)=c1(t)*x+c0(t)},SYS)):
> tmp:={-2*diff(b1(t),t)-6*b1(t)^2+2*diff(c1(t),t),
  diff(c1(t),t)+3*b1(t)*c1(t),diff(c0(t),t)+3*b1(t)*c0(t),
  2*c1(t)-diff(b1(t),t)-3*b1(t)^2,
  2*c0(t)-diff(b0(t),t)-3*b1(t)*b0(t)}:
> dsolve(%):

m = 1, n > 1

> sys1:=[coeffs(lhs(expand(subs({m=1,u^n=U},eq[1]/u^2*U))),u)]:
> sys2:=[coeffs(lhs(expand(subs({m=1,u^n=U},eq[2]))),u)]:
> sys3:=[coeffs(lhs(expand(subs({m=1,u^n=U},eq[3]*U^2))),u)]:
> SYS:={seq(coeffs(sys1[i],U),i=1..nops(sys1)),
  seq(coeffs(sys2[i],U),i=1..nops(sys2)),
  seq(coeffs(sys3[i],U),i=1..nops(sys3))};
> `Number of Equations ` = nops(%);
> pdsolve(SYS,{a(x,t),b(x,t),c(x,t)});

m = f(n): Let

> U = u^n; and > V = u^m;

Note: c(x,t) = 0 for all of these cases.

> for i from 1 to 3 do S[i]:=lhs(expand(subs({u^n=U,u^m=V},eq[i]*U))): od:

m = n: V = U, n,m > 1

> SYS:={seq(coeffs(subs({m=n,V=U},S[i]),[u,U]),i=1..3)}:
> `Number of Equations ` = nops(%):
> pdsolve(SYS,{a(x,t),b(x,t),c(x,t)}):

m = 2n: V = U^2, n,m > 1

> SYS:={seq(coeffs(subs({m=2*n,V=U^2},S[i]),[u,U]),i=1..3)}:
> `Number of Equations ` = nops(%):
> pdsolve(SYS,{a(x,t),b(x,t),c(x,t)}):

m = n+1: V = uU, n,m > 1

> SYS:={seq(coeffs(subs({m=n+1,V=u*U},S[i]),[u,U]),i=1..3)}:
> `Number of Equations ` = nops(%):
> pdsolve(SYS,{a(x,t),b(x,t),c(x,t)}):

BIOGRAPHICAL SKETCH

The author graduated from College of the Albemarle with an Associate Degree in Science in July of 2007. She enrolled at the University of North Carolina Wilmington in the Fall of 2008 and graduated in December of 2010 with her Bachelor of Science in Applied Mathematics. The author then began graduate studies at UNCW in January 2011 and, upon satisfactory completion of her thesis work, will be awarded her Master of Science degree in Mathematics. The author intends to pursue her Doctor of Philosophy in Mathematics at the University of North Carolina at Charlotte starting in August of 2012.
