
U.U.D.M. Project Report 2020:33

Liouville’s equation on simply connected domains

Patrik Deigård

Examensarbete i matematik, 15 hp Handledare: Wanmin Liu Examinator: Martin Herschend Juni 2020

Department of Mathematics Uppsala University


Abstract. We go through the stereographic projection and some basic differential geometry. Using this we derive a local solution to Liouville's equation, a fully non-linear elliptic partial differential equation, for any constant K > 0. We also show symmetric solutions to two partial differential equations, more or less related to Liouville's equation, by turning them into ordinary differential equations.

Key words. Liouville’s equation, Differential geometry.

Contents

1 Introduction
2 Preliminaries
3 Simply connected domains and metric spaces
4 Complex analysis and Wirtinger calculus
5 Differential Geometry and some Terminology
  5.1 Curves
  5.2 Surfaces
  5.3 The first and second fundamental forms
  5.4 Gaussian curvature
  5.5 The stereographic projection
6 Derivation of Liouville's equation
7 Laplace's equation
  7.1 Solution only dependent on the distance from the origin in R^2
  7.2 A slight modification of Laplace's equation
8 Liouville's equation
  8.1 Liouville's equation on the unit sphere
  8.2 Liouville's equation for constant K > 0
9 Further research
  9.1 Global solution of Liouville's equation
  9.2 Non-linear wave equation
10 Conclusion
11 Appendix

1 Introduction

Historically, many people believed the earth to be flat. Nowadays we know that this is not the case, and it can easily be disproven. However, can you blame them? Wherever you are on the surface of the earth, it probably looks flat to you, and without further evidence one could easily think that the earth is flat. Astronauts out in space will view this differently and clearly see that the earth is not flat. However, we do not need people in space to tell us this. In this paper you will see that we do not even need any kind of "space" to show that the earth is not flat (Gauss' Theorema Egregium). This is the general intuition behind what we call the curvature of a surface. Since we know that the earth is not flat, we know that it "bends", and our intuition may tell us, whether or not we know the actual definition of curvature, that it has a curvature different from that of a flat surface. This idea would definitely be correct. The main result and focus of this paper is to find all solutions to Liouville's equation (see Theorem 8.5)

$$\Delta W = -2Ke^{W} \quad\text{for some constant } K > 0,\quad W = W(u, v),$$
using an approach based on the theory of differential geometry. As a partial differential equation this is fully non-linear, which is generally very hard to solve. The equation relates the curvature of the surface to any conformal diffeomorphism onto the surface. We will see that it is easy to describe the solutions of the equation. The solution to this differential equation is given by
$$W(z, \bar z) = \ln\left(\frac{4}{K}\,\frac{|f'(z)|^{2}}{\left(1 + |f(z)|^{2}\right)^{2}}\right)$$
for z = u + iv, with f'(z) ≠ 0 at all points in the domain and f satisfying ∂f/∂z̄ = 0. Generally this solution depends on the curvature K of the surface that we are working on, and in this case we are working with surfaces of constant positive curvature K > 0. Liouville's original paper can be found in the reference list [4].

Acknowledgements. I would like to thank my supervisor Wanmin Liu for all the guidance I have received. The helpful discussions and encouragement helped me finish this thesis.

2 Preliminaries

Unless otherwise stated:

• The standard basis of R^n is given by B = {e_1, ..., e_n} where
$$e_i = (0, \dots, 1, \dots, 0)$$
with the 1 placed at position i.

• The inner products used in this paper are the standard dot product, that is,
$$\langle u, v\rangle = \sum_{i=1}^{n} u_i v_i = \cos(\alpha)\,\|u\|\,\|v\|,$$
where u, v ∈ R^n and α ∈ [0, π] is the angle between u and v.

• A vector x ∈ R^n has components x = (x_1, ..., x_n).

• We define R_+ as R_+ = {x ∈ R | x > 0}.

• We use the notation v_1 × v_2 for the cross product of two vectors v_1, v_2 ∈ R^3 (it is only defined for vectors in R^3). As we already know, the cross product gives us a vector perpendicular to the two vectors.

• We use the notation $\begin{vmatrix} a & b \\ c & d \end{vmatrix}$ for the determinant of a matrix, so we have for example
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$$
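These conventions can be spot-checked numerically. The following sketch (assuming NumPy is available; the sample vectors and matrix entries are our illustrative choices, not from the text) verifies that the cross product is perpendicular to both factors, that the angle formula recovers an angle in [0, π], and the 2×2 determinant formula:

```python
import numpy as np

# Two arbitrary vectors in R^3 (illustrative choice, not from the text).
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 0.0, -1.0])

# The cross product is perpendicular to both of its factors:
n = np.cross(v1, v2)
print(np.dot(n, v1), np.dot(n, v2))  # both 0 (up to rounding)

# The formula <u,v> = cos(alpha)||u|| ||v|| recovers an angle in [0, pi]:
alpha = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(0 <= alpha <= np.pi)  # True

# 2x2 determinant: |a b; c d| = ad - bc
a, b, c, d = 1.0, 2.0, 3.0, 4.0
det = np.linalg.det(np.array([[a, b], [c, d]]))
print(round(det, 10))  # -2.0, which equals a*d - b*c
```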

3 Simply connected domains and metric spaces

When we work in, for example, analysis or differential geometry, or anything else in math, we usually specify some kind of "where" we are working. Functions are defined between sets, and these sets usually have certain properties depending on what we want. When we start learning math we usually deal with functions f: U ⊂ R → R without really thinking about it. Later on we study something like calculus and have to pay more attention not only to the rule defining the function but also to its domain and range. Then we study even more and get to linear algebra, where we consider mappings like A: R^n → R^m or B: C^n → C^m, and we study vector spaces and more. Then in analysis we start to generalize these domains, requiring them to have certain properties that we probably have not thought about much previously. For example, we usually start off by studying metric spaces in analysis, and then in functional analysis we consider more types of spaces, such as normed spaces, Hilbert spaces and Banach spaces. One might wonder why there are so many, and what the point is in having all these different "-spaces". The thing is that these "-spaces" have different types of properties, and this leads to a better understanding of the domain and lets us ensure certain properties of mappings between such spaces.

For example, in metric spaces we can always talk about some kind of "distance" between points. We cannot always do this in a general topological space. Things that we used to take for granted (convergence to a unique point, continuous functions, what is "open" and what is "closed", etc.) really just depend on these kinds of "-spaces". This is why it is important to study these kinds of structures.

Since we will be working with simply connected domains in this paper (and the domain will either be real or complex), it is good to make sure we are talking about the same thing.

Definition 3.1. A metric space is a pair (X, d) where X is a set and d is a function d: X × X → R_{≥0} satisfying the following three requirements:

(M1) It is symmetric. That is, d(x, y) = d(y, x).

(M2) It is non-negative. That is, d(x, y) ≥ 0, and d(x, y) = 0 ⟺ x = y.

(M3) It satisfies the triangle inequality. That is, d(x, z) ≤ d(x, y) + d(y, z).

The probably most common example of a metric space is (X, d) for X = R and d(x, y) = |x − y|. As previously stated, we can measure some kind of distance in metric spaces (even if it may not behave like we are used to when the distance function is more exotic). An example of this is the metric space (X, d') with X = R and
$$d'(x, y) = \begin{cases} 1 & x \neq y \\ 0 & x = y \end{cases}$$
as the distance function, commonly referred to as the discrete metric. It clearly satisfies (M1)–(M3), but it does not "work like we are used to". Intuitively it does not really make much sense, but since it satisfies all the requirements for being a distance function, we have that (R, d') is a metric space.

Another important metric space is (R^n, d) where d is given by
$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^{2}}.$$

You should check that this satisfies (M1) - (M3) and so that this really is a metric space.
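Such a check can also be sketched numerically (a finite spot-check on sample points is evidence, not a proof; the points below are our own choices). The same loop covers the discrete metric from the previous example:

```python
import itertools
import math

def euclidean(x, y):
    # d(x, y) = sqrt(sum (x_i - y_i)^2), the metric on R^n above
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def discrete(x, y):
    # the discrete metric d'(x, y)
    return 0.0 if x == y else 1.0

points = [(0.0, 0.0, 0.0), (1.0, 2.0, 2.0), (-1.0, 0.5, 3.0)]

for d in (euclidean, discrete):
    for x, y, z in itertools.product(points, repeat=3):
        assert d(x, y) == d(y, x)                    # (M1) symmetry
        assert d(x, y) >= 0                          # (M2) non-negativity
        assert (d(x, y) == 0) == (x == y)            # (M2) d = 0 iff x = y
        assert d(x, z) <= d(x, y) + d(y, z) + 1e-12  # (M3) triangle inequality

print("all checks passed")
```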

Now we move on to connectedness. I think most people have a feeling for what we mean when we talk about connectedness, even if they have not seen the strict mathematical definition. Connectedness is important from many different perspectives. For example, the intermediate value theorem does not generally hold if the domain is not connected.

Definition 3.2. Let (X, d) be a metric space. Two sets A, B ⊂ X are said to be separated if A ∩ B̄ = ∅ and Ā ∩ B = ∅, where Ā = A ∪ {limit points of A} denotes the closure of A. A set E ⊂ X is said to be connected if E is not a union of two non-empty separated sets.

In the following picture we can see that the set E ⊂ R^2 defined as E = D_1 ∪ D_2 is not connected.

Figure 1: Picture showing that the set E = D_1 ∪ D_2 is not connected in R^2.

For example, let A = [0, 1] and B = (1, 2). Then A and B are not separated, since 1 ∈ A ∩ B̄ = [0, 1] ∩ [1, 2] = {1}. Actually, a set E ⊂ R is connected if and only if E is an interval.

We would like to introduce and define what it means for a domain to be simply connected. Intuitively it means that the domain has no holes and is not "drilled through". A domain is simply connected if any closed path between two points can be deformed into a single point without "breaking" anything. A torus (think doughnut) is for example not simply connected, since it has a hole in it. A sheet of paper is simply connected as long as it does not have any holes in it (otherwise, how are we supposed to deform a circle containing the hole?). A sphere is simply connected as long as it is not completely drilled through at any place. It is OK for the sphere to lose a couple of points, as long as the hole does not go completely through, since then we can go around the missing points.

Definition 3.3. A path in the plane (think C) from A to B is a continuous function γ(t) on some parameter interval a ≤ t ≤ b such that γ(a) = A and γ(b) = B. The path is simple if γ(s) ≠ γ(t) when s ≠ t. The path is closed if it starts and ends at the same point, that is, γ(a) = γ(b). A simple closed path is a closed path γ such that γ(s) ≠ γ(t) for a ≤ s < t < b. Examples:

Figure 2: Picture showing three different paths. From left to right we have that the first one is a simple path, the second one is a closed simple path and the third one is a path that is not simple because of the self-intersections.

Now consider C, since we are mostly going to work with R and C.

Definition 3.4. Let γ(t), a ≤ t ≤ b, be a closed path in a domain D. We say that γ is deformable to a point if there are closed paths γ_s(t), a ≤ t ≤ b, 0 ≤ s ≤ 1, in D such that γ_s(t) depends continuously on both s and t, γ_0 = γ, and γ_1(t) ≡ z_1 is the constant path at some point z_1 ∈ D. The domain D is simply connected if every closed path in D can be deformed to a point.

Examples follow below.

Figure 3: Picture showing two different sets, where the right one is simply connected and the left one is not. Notice that the blue curve in the left set cannot be continuously deformed to a point because of the hole inside it.

4 Complex analysis and Wirtinger calculus

Throughout this section and unless otherwise stated, we view the complex number z = x + iy as a number z ∈ C with x, y ∈ R and the usual imaginary unit i = √−1. Its conjugate is defined as z̄ = x − iy. Functions also follow a similar convention: a function f(z) = u + iv is a function f: C → C for two smooth functions u, v ∈ C^∞(R^2) [5].

Definition 4.1. The Wirtinger derivatives are the following symbolic differential operators:
$$\frac{\partial}{\partial z} := \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right)$$
and
$$\frac{\partial}{\partial \bar z} := \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right).$$
We will usually omit the "Wirtinger" and just say "derivatives". For functions we define them in the natural way: for f: C → C we define
$$\frac{\partial f}{\partial z} := \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right) \quad\text{and}\quad \frac{\partial f}{\partial \bar z} := \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right).$$
It is also easy to check the following proposition:

Proposition 4.2.
$$\frac{\partial}{\partial z}(\bar z) = 0$$
and
$$\frac{\partial}{\partial \bar z}(z) = 0.$$

Proof. Let z = x + iy. Then we have that
$$\frac{\partial}{\partial z}(\bar z) = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right)(x - iy) = \frac{1}{2}\left(\frac{\partial}{\partial x}(x - iy) - i\,\frac{\partial}{\partial y}(x - iy)\right) = \frac{1}{2}\bigl(1 - i(-i)\bigr) = \frac{1}{2} \cdot 0 = 0.$$
The proof of $\frac{\partial}{\partial \bar z}(z) = 0$ is similar.

Definition 4.3. The complex form of the Cauchy-Riemann equations is given by
$$\frac{\partial f}{\partial \bar z} = 0.$$
Any function f that satisfies this equation is called holomorphic.

It could also be good to express the mixed second order derivative in terms of x and y, and this can be done in the following way.

Proposition 4.4. Let f: C → C be a smooth function. Then
$$\frac{\partial^{2} f}{\partial z \partial \bar z} = \frac{1}{4}\left(\frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}}\right).$$

Proof. As before, let $\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\frac{\partial}{\partial y}\right)$ and $\frac{\partial}{\partial \bar z} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\frac{\partial}{\partial y}\right)$. Then
$$\frac{\partial^{2}}{\partial z \partial \bar z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right)\frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right) = \frac{1}{4}\left(\frac{\partial^{2}}{\partial x^{2}} + i\,\frac{\partial^{2}}{\partial x \partial y} - i\,\frac{\partial^{2}}{\partial y \partial x} - i^{2}\,\frac{\partial^{2}}{\partial y^{2}}\right) = \frac{1}{4}\left(\frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}}\right).$$
So then
$$\frac{\partial^{2} f}{\partial z \partial \bar z} = \frac{1}{4}\left(\frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}}\right).$$
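Proposition 4.4 can be verified symbolically on a sample function (a sketch assuming SymPy is available; the helper names and the choice of f are ours, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Wirtinger derivatives as in Definition 4.1.
def d_dz(f):
    return sp.Rational(1, 2) * (sp.diff(f, x) - sp.I * sp.diff(f, y))

def d_dzbar(f):
    return sp.Rational(1, 2) * (sp.diff(f, x) + sp.I * sp.diff(f, y))

# A sample smooth function f: C -> C (arbitrary illustrative choice).
z = x + sp.I * y
f = z**3 * sp.exp(x) + sp.conjugate(z)**2

lhs = d_dz(d_dzbar(f))                                           # d^2 f / dz dzbar
rhs = sp.Rational(1, 4) * (sp.diff(f, x, 2) + sp.diff(f, y, 2))  # (f_xx + f_yy)/4
print(sp.simplify(lhs - rhs))  # 0
```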

Definition 4.5. Define the Laplacian ∆ (with variables x_1 and x_2) as
$$\Delta = \frac{\partial^{2}}{\partial x_1^{2}} + \frac{\partial^{2}}{\partial x_2^{2}}.$$
A function W: D ⊂ C → C, W = W(x, y), is called harmonic if its second order partial derivatives exist, are continuous and satisfy Laplace's equation
$$\Delta W = \frac{\partial^{2} W}{\partial x^{2}} + \frac{\partial^{2} W}{\partial y^{2}} = 0.$$
This is just a way of categorizing functions which have certain properties. Harmonic functions are generally interesting since they (obviously) solve Laplace's equation, which is one of the most important partial differential equations (PDEs).

Theorem 4.6. Let f(z) be a continuously differentiable function on a domain D. Then f(z) is analytic if and only if f(z) satisfies the complex version of the Cauchy-Riemann equations. If f(z) is analytic, then the derivative of f(z) is given by
$$f'(z) = \frac{\partial f}{\partial z}.$$
Proof. See [2].

These operators ∂/∂z and ∂/∂z̄ work like we are used to. For example, they satisfy

• ∂/∂z (af + bg) = a ∂f/∂z + b ∂g/∂z

• ∂/∂z (fg) = f ∂g/∂z + g ∂f/∂z

and more [2]. They are related by
$$\overline{\left(\frac{\partial f}{\partial \bar z}\right)} = \frac{\partial \bar f}{\partial z} \quad\text{and}\quad \overline{\left(\frac{\partial f}{\partial z}\right)} = \frac{\partial \bar f}{\partial \bar z}.$$
Now we have come to the interesting part. We define the angle between two curves to be the angle between their tangent vectors at the point of intersection. A function f is called conformal if it preserves angles, and the next theorem gives us a very easy way of checking whether a function is conformal or not.

Theorem 4.7. If f(z) is analytic at a point z_0 in its domain and f'(z_0) ≠ 0, then f is conformal at z_0.

Proof. See [2] (p. 59).

Since we also have Theorem 4.6, we can check whether a function f is conformal at a point z_0 just by making sure that ∂f/∂z̄ = 0 and f'(z_0) ≠ 0.

Example 4.8. Is the function f: C → C given by f(z) = (z − i)^3 (z + 3) conformal at the point z_0 = 3 + 7i? We check the conditions. First of all, ∂f/∂z̄ = 0 since there is no dependency on z̄ in f. Next we compute the derivative f'(z) = 3(z − i)^2 (z + 3) + (z − i)^3, which at the point z_0 has the value
$$f'(z_0) = 3(3 + 6i)^{2}(6 + 7i) + (3 + 6i)^{3} = (3 + 6i)^{2}(21 + 27i) = \dots = 27i - 1539 \neq 0,$$
so f is conformal at the point z_0.
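The arithmetic in Example 4.8 is easy to confirm symbolically (a sketch assuming SymPy is available):

```python
import sympy as sp

z = sp.symbols('z')
z0 = 3 + 7 * sp.I

# f from Example 4.8; it is a polynomial in z alone, so df/dzbar = 0.
f = (z - sp.I)**3 * (z + 3)
fprime = sp.diff(f, z)

value = sp.expand(fprime.subs(z, z0))
print(value)  # -1539 + 27*I, nonzero, so f is conformal at z0
```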

5 Differential Geometry and some Terminology

Here we will start off by covering some basic differential geometry as well as most of the terminology that will be used throughout the paper. Since we are mainly working with surfaces in this paper, we will not go through a lot of theory of curves, but just touch upon a few definitions at most. First off, we will introduce the notion of a regular curve.

5.1 Curves

Definition 5.1. A parametrized curve in R^n is a smooth function γ: I → R^n where I ⊂ R is an interval.

Note that we require γ to be a smooth function. We also need the interval I to be defined. One can therefore formally define a parametrized curve as a pair (γ, I), where we specify a smooth function γ and its domain I. The unit circle in R^2 with the interval I = (−∞, ∞) is an example of the trace of a parametrized curve, which we can interpret as the pair (γ, (−∞, ∞)) for γ(t) = (cos t, sin t). We will often omit "parametrized" and simply understand a curve as a parametrized curve. Another important notion is that of regular curves.

Definition 5.2. A regular curve is a (parametrized) curve γ: I → R^n whose speed, |γ'(t)|, is always non-zero. That is, γ'(t) ≠ 0 for all t ∈ I.

An example of a non-regular curve is given by the pair (γ, (−1, 1)) for γ(t) = (t^2, −cos t). Its velocity is γ'(t) = (2t, sin t), which at the point t = 0 ∈ I has the speed
$$|\gamma'(0)| = \sqrt{(2 \cdot 0)^{2} + (\sin 0)^{2}} = 0.$$
It is therefore not a regular curve. However, the curve given by (γ, (0, 1)) for γ(t) = (t^2, −cos t) is a regular curve. Its speed is
$$|\gamma'(t)| = \sqrt{(2t)^{2} + (\sin t)^{2}} \neq 0$$
since (2t)^2 > 0 and (sin t)^2 ≥ 0 for all t ∈ (0, 1).
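The speed computation in the example above can be sampled numerically (a sketch assuming NumPy is available; the sample grid is our choice):

```python
import numpy as np

# Speed of gamma(t) = (t^2, -cos t): |gamma'(t)| = sqrt((2t)^2 + sin(t)^2).
def speed(t):
    return np.sqrt((2 * t) ** 2 + np.sin(t) ** 2)

# On (-1, 1) the curve is not regular: the speed vanishes at t = 0.
print(speed(0.0))  # 0.0

# On (0, 1) the speed stays positive at every sample point.
ts = np.linspace(0.01, 0.99, 50)
print(np.all(speed(ts) > 0))  # True
```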

5.2 Surfaces

Definition 5.3. Let f: U ⊂ R^m → R^n, p ∈ U and v ∈ R^m. The directional derivative of f in the direction v at the point p is (assuming it exists)
$$df_p(v) = \lim_{t \to 0} \frac{f(p + tv) - f(p)}{t} = (f \circ \gamma)'(0),$$
where γ(t) = p + tv. If f is smooth (infinitely many times continuously differentiable), then df_p = L_A, where L_A is multiplication from the left with the matrix A given by
$$A = \begin{pmatrix} \frac{\partial f_1}{\partial x_1}(p) & \cdots & \frac{\partial f_1}{\partial x_m}(p) \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1}(p) & \cdots & \frac{\partial f_n}{\partial x_m}(p) \end{pmatrix}.$$
This matrix A is called the Jacobian matrix of f at p.

Remark: We often simply write dfp = A, and then it is understood that we mean multiplication from the left by A.

Definition 5.4. Let X ⊂ R^{m_1} and Y ⊂ R^{m_2}. We call X and Y diffeomorphic if there exists a smooth bijective function f: X → Y whose inverse is also smooth. In this case we call f a diffeomorphism.

Diffeomorphisms are important and a natural extension of something like an isomorphism between structures. A diffeomorphism intuitively not only tells us how two structures are similar, but also shows that they look essentially the same up to deformation. The following is a good example: the plane R^2 is diffeomorphic to the paraboloid. This can be thought of as taking the plane R^2 ⊂ R^3 and bending it upwards, thus getting the paraboloid
$$P_3 = \{(x, y, z) \in \mathbb{R}^3 \mid z = x^2 + y^2\}$$
through the diffeomorphism σ: R^2 → P_3 given by
$$\sigma(x, y) = (x, y, x^2 + y^2).$$
Both σ and its inverse σ^{−1}: P_3 → R^2, given by σ^{−1}(x, y, z) = (x, y), are smooth functions, and the functions are clearly bijective. Thus we have a diffeomorphism, and R^2 is diffeomorphic to P_3.
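The round trips of the paraboloid example can be checked symbolically (a sketch assuming SymPy is available; the function names are ours):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# sigma and sigma_inv from the paraboloid example above.
def sigma(X, Y):
    return (X, Y, X**2 + Y**2)

def sigma_inv(X, Y, Z):
    return (X, Y)

# Composing in both orders returns the starting point,
# confirming that sigma and sigma_inv are mutually inverse.
round_trip_1 = sigma_inv(*sigma(x, y))                 # point in the plane
round_trip_2 = sigma(*sigma_inv(x, y, x**2 + y**2))    # point on P_3
print(round_trip_1)  # (x, y)
print(round_trip_2)  # (x, y, x**2 + y**2)
```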

We will continue to discuss surfaces throughout the paper, but we need to be more careful. We need to be precise about what we mean by a "surface", and one way to categorize surfaces is by introducing the notion of a regular surface. Without being precise and exact, one could say that a regular surface is more or less a surface without any sharp turns or corners or anything like that. One could say that a regular surface, which we view as a subset of R^3, locally looks like the plane R^2. Imagine for example an air-filled balloon. If you were sufficiently small and living on the balloon, you would think that locally everything around you is flat, like how we view the earth. Your desk at home would probably not be a regular surface, since it most likely has corners, and if you were standing right at a corner, locally it would not look flat. That is the intuition behind a regular surface.

We will now state the definition of a regular surface, but first we need some more terminology:

Definition 5.5. Let S ⊂ R^n be a subset. A set V ⊂ S is called open in S if V is the intersection with S of an open set in R^n. If p ∈ S, then a neighbourhood of p in S means a subset of S that is open in S and contains p.

Definition 5.6. A set S ⊂ R^3 is called a regular surface if each of its points has a neighbourhood in S that is diffeomorphic to an open set in R^2. Written out, it means that for every p ∈ S there exist a neighbourhood V of p in S, an open set U ⊂ R^2, and a diffeomorphism σ: U → V. Such a diffeomorphism σ is called a surface patch. A collection of surface patches that together cover all of the regular surface S is called an atlas for S.

An example of a surface that is not regular is the cone
$$K = \{(x, y, z) \in \mathbb{R}^3 \mid z = \sqrt{x^2 + y^2}\}.$$
Intuitively this is because of the pointy bottom, where the cone is not differentiable. We will however not work a lot with the definition of a regular surface directly, but we will use regular surfaces quite often, as they have properties that we look for (or rather, do not have properties that we do not look for).

Now we will have a look at tangent planes. We are all probably pretty familiar with tangent lines to some function f: R → R, and maybe even with finding equations of tangent planes in R^3 for some function f: R^2 → R^3. However, the tangent planes themselves are interesting and have quite some theory behind them. First of all, what do we more exactly mean by a tangent plane?

Definition 5.7. Let S be a regular surface. A regular curve in S means a regular curve in R^3 whose trace is contained in S. The tangent plane to S at a point p ∈ S is the set of all initial velocity vectors of regular curves in S with initial position p. Stated differently, the tangent plane is
$$T_pS = \{\gamma'(0) \mid \gamma \text{ is a regular curve in } S \text{ with } \gamma(0) = p\} \cup \{0\}.$$
Note that we need 0 ∈ T_pS to ensure that T_pS is indeed a vector space. A picture of the situation would look something like:

Figure 4: Picture showing the tangent plane TpS at the point p ∈ S. It shows a sample of the velocity vectors through p in S.

An interesting lemma that follows is the following:

Lemma 5.8. Let σ be a surface patch for the surface S, σ: U ⊂ R^2 → V ⊂ S with p ∈ V. If we set q = σ^{−1}(p) and let u, v be the variables of σ, i.e., {u, v} are the coordinate variables of U, then
$$T_pS = \operatorname{span}\{\sigma_u(q), \sigma_v(q)\}.$$
In particular, T_pS is a two-dimensional subspace of R^3.

Proof. See [5] p. 141.

Figure 5: Picture showing a unit normal N (defined below in Definition 5.10) to the tangent plane T_pS, as well as the two vectors σ_u and σ_v that span the tangent plane. σ is a diffeomorphism.

This can be thought of as follows: at any point on the surface S, the tangent plane at that point has a basis consisting of the partial derivatives of the surface patch σ. A simple but very important example is the graph of a function. Let f: U ⊂ R^2 → R be a smooth function. We define the graph of the function as
$$G_f = \{(x, y, f(x, y)) \in \mathbb{R}^3 \mid (x, y) \in U\}.$$
Now the question is: what is the tangent plane to the graph at any point in the domain of f? Well, clearly a surface patch for the graph of f is given by σ: U ⊂ R^2 → R^3, σ(x, y) = (x, y, f(x, y)). The partial derivatives of the surface patch are
$$\sigma_x = (1, 0, f_x(x, y)) \quad\text{and}\quad \sigma_y = (0, 1, f_y(x, y)).$$
So the tangent plane at any point p of the graph is the plane spanned by the vectors σ_x(q) and σ_y(q), where σ^{−1}(p) = q. The tangent plane is therefore
$$T_pS = \operatorname{span}\{\sigma_x(q), \sigma_y(q)\} = \operatorname{span}\{(1, 0, f_x(x, y)),\ (0, 1, f_y(x, y))\}.$$
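For a concrete function, the spanning vectors of the tangent plane can be computed symbolically (a sketch assuming SymPy is available; the choice of f is ours, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Graph patch sigma(x, y) = (x, y, f(x, y)) for a sample f.
f = x**2 + 3 * x * y
sigma = sp.Matrix([x, y, f])

# The partial derivatives span the tangent plane (Lemma 5.8).
sigma_x = sp.diff(sigma, x)  # (1, 0, f_x)
sigma_y = sp.diff(sigma, y)  # (0, 1, f_y)
print(sigma_x.T, sigma_y.T)
```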

We will come back to the example of the graph later in the paper. Now we have covered some theory of curves and surfaces, and we have talked a little about tangent planes. So far we have mostly talked about intrinsic measurements of surfaces, such as tangent planes. But we almost always view the surface as embedded into R^3. It would make sense to continue to explore the surface and its properties, and what could be more natural than to start looking at its extrinsic measurements?

The next step is therefore to take a 90-degree turn and have a look at normals. A normal can be defined in a couple of different ways. Usually, when we take a first course in something like linear algebra, we learn that two vectors u and v are perpendicular, or orthogonal, to each other if their inner product ⟨u, v⟩ is 0. In this case u would be a normal vector to v and, vice versa, v would be a normal vector to u.

Inner products are good in that way, since they extend the notion of angles and lengths to settings where they are not always natural. For example, angles as we intuitively know them are not what one might expect when working with something like vector spaces of polynomials, or vector spaces of continuous functions equipped with some inner product other than the standard one. We lose our intuition and have to go by definitions, and in that way we can still talk about angles and vectors perpendicular to each other (be it polynomials, functions or something completely different).

We will try to capture the concept of normals and orientations in the following definitions:

Definition 5.9. An orientation for a 2-dimensional subspace V ⊂ R^3 means a choice of a unit-length normal vector N to V. With respect to a given orientation N for V, an ordered basis {v_1, v_2} of V is called positively oriented if
$$\frac{v_1 \times v_2}{|v_1 \times v_2|} = N.$$
The only other possibility is that $\frac{v_1 \times v_2}{|v_1 \times v_2|} = -N$, in which case the basis is called negatively oriented.

Definition 5.10. A normal vector to a surface S at a point p means a vector N ∈ R^3 that is orthogonal to T_pS. That is, the vector N ∈ R^3 is called a normal vector to S at the point p if ⟨N, v⟩ = 0 for all v ∈ T_pS. Furthermore, a unit normal vector to S at p is a unit-length normal vector, which is equivalent to an orientation for T_pS.

If we go back to our previous example with the graph, we see that a unit normal vector of the graph at any point (x, y) ∈ U is given by
$$N = \frac{(1, 0, f_x(x, y)) \times (0, 1, f_y(x, y))}{\left|(1, 0, f_x(x, y)) \times (0, 1, f_y(x, y))\right|} = \frac{(-f_x(x, y), -f_y(x, y), 1)}{\sqrt{1 + f_x(x, y)^2 + f_y(x, y)^2}}.$$
Notice that we could just as well have chosen
$$N = \frac{(f_x(x, y), f_y(x, y), -1)}{\sqrt{1 + f_x(x, y)^2 + f_y(x, y)^2}}.$$
The only difference would be the orientation of the surface.
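The cross product computation above can be confirmed symbolically for a generic smooth f (a sketch assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)           # a generic smooth function f(x, y)
fx, fy = sp.diff(f, x), sp.diff(f, y)

# Tangent vectors of the graph patch sigma(x, y) = (x, y, f(x, y)):
sigma_x = sp.Matrix([1, 0, fx])
sigma_y = sp.Matrix([0, 1, fy])

# Their cross product is (-f_x, -f_y, 1), orthogonal to both tangent vectors.
n = sigma_x.cross(sigma_y)
print(n.T)
print(n.dot(sigma_x), n.dot(sigma_y))  # 0 0
```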

Now we have gone through both intrinsic and extrinsic measurements of surfaces, as well as some theory of the surfaces themselves. We have quite an OK understanding of surfaces and can visualize how we think of normals, tangent planes, regular and non-regular surfaces, and more. Now we will go through some properties that functions might have and what they mean for the surface and the image of the surface under the function.

First of all, imagine a hand fan (the kind of handheld fan that you see in old movies, used to cool yourself when it is hot outside) that is completely stretched out. Think of that as our surface. Now imagine a function f that takes this fan and folds it, not necessarily completely. If you were to draw two straight lines from the handle outwards to the end of the fan, then the angle between those two lines would be different depending on whether the hand fan is completely stretched out or folded. The idea behind the notion of conformal mappings is that they preserve the angle between two vectors through the mapping.

Definition 5.11. Let S_1 and S_2 be two regular surfaces. A diffeomorphism f: S_1 → S_2 is called conformal if df preserves angles, that is, if
$$\angle(x, y) = \angle\bigl(df_p(x), df_p(y)\bigr)$$
holds for all p ∈ S_1 and all x, y ∈ T_pS_1.

The following proposition will come in handy later on in the paper when we actually try to solve Liouville's equation.

Proposition 5.12. Let S_1, ..., S_n be regular surfaces and σ_k: S_k → S_{k+1}, k = 1, ..., n − 1, be conformal diffeomorphisms. Then the composition
$$\sigma_{n-1} \circ \sigma_{n-2} \circ \cdots \circ \sigma_1$$
is also conformal.

Proof. We will only show the case n = 3 (renaming the functions σ_i to f and g), but the general idea is similar. We prove this using the definition of a conformal mapping and the chain rule. Since both f and g are conformal, it is true that
$$\angle(x, y) = \angle\bigl(df_p(x), df_p(y)\bigr).$$
Using the chain rule (and g(q) = p, p ∈ S_2, q ∈ S_1) we have that
$$\angle\bigl(d(f \circ g)_q(x), d(f \circ g)_q(y)\bigr) = \angle\bigl(df_p \circ dg_q(x),\ df_p \circ dg_q(y)\bigr).$$
Now letting dg_q(x) = v and dg_q(y) = w, we have that
$$\angle\bigl(df_p \circ dg_q(x),\ df_p \circ dg_q(y)\bigr) = \angle\bigl(df_p(v), df_p(w)\bigr) = \angle(v, w)$$
since f is conformal. Therefore we have
$$\angle(v, w) = \angle\bigl(dg_q(x), dg_q(y)\bigr) = \angle(x, y)$$
since g also is conformal. This proves that
$$\angle\bigl(d(f \circ g)_q(x), d(f \circ g)_q(y)\bigr) = \angle(x, y),$$
so f ∘ g is conformal.

In the same way we think of conformal mappings and what they do to angles, we can also think of the area distortion that might happen due to the mapping.
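Before moving on, Proposition 5.12 can be spot-checked numerically in the plane, where the differential of an analytic map at z is multiplication by its derivative, so the chain rule gives d(f ∘ g) = f'(g(z)) · g'(z) (a sketch using the standard library; the sample functions, point and directions are our choices):

```python
import cmath

def angle(a, b):
    # unsigned angle between complex "vectors" a and b
    return abs(cmath.phase(b / a))

z = 0.3 + 0.4j
x, y = 1 + 0j, 0.2 + 0.9j        # two tangent directions at z

g = lambda w: w**2 + 1           # analytic, g'(z) = 2z
f = lambda w: cmath.exp(w)       # analytic, f'(w) = e^w
dg = 2 * z
df = cmath.exp(g(z))
d_comp = df * dg                 # chain rule: d(f o g) at z

# The composed differential preserves the angle between x and y:
print(round(angle(x, y), 10) == round(angle(d_comp * x, d_comp * y), 10))
```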

Definition 5.13. Let S_1 and S_2 be two regular surfaces. A diffeomorphism f: S_1 → S_2 is called equiareal if for all p ∈ S_1, the area distortion ‖df_p‖ = 1.

These are all important and interesting concepts, since it is good to keep track of what we actually preserve when we do things to surfaces. This is captured even more by what is called an isometry. But before we precisely introduce the notion of isometry, we must first introduce what we call the first fundamental form.

Definition 5.14. The first fundamental form of S assigns to each p ∈ S the restriction to T_pS of the squared norm function in R^3, that is, the function from T_pS to R defined as x ↦ |x|_p^2.

This is basically a fancy name for the composition of the norm of a vector in the tangent plane with the function that squares its argument. It is however very important, and we will see later just how important it is for this very paper. For now, we will just use it to define what we mean by an isometry.

Definition 5.15. Let S_1 and S_2 be two regular surfaces. A diffeomorphism f: S_1 → S_2 is called an isometry if df preserves their first fundamental forms. This is equivalent to saying that df preserves their inner products:
$$\langle x, y\rangle_p = \langle df_p(x), df_p(y)\rangle_{f(p)}$$
for all p ∈ S_1 and all x, y ∈ T_pS_1. Furthermore, two regular surfaces are called isometric if there exists an isometry between them. We call a property or measurement on a surface intrinsic if it is preserved by isometries.

There have been a lot of definitions on top of each other here, but they are certainly needed to proceed with what we want to do. They are also very important, and since we have been talking mainly about surfaces, we see that we have come a long way in studying surfaces and, if we so wish, categorizing them. We have also introduced properties of functions and how they act on and change surfaces.

Proposition 5.16. A diffeomorphism f: S_1 → S_2 is equiareal if and only if it is area preserving in the following sense: if R ⊂ S_1 is any polygonal region, then
$$\operatorname{Area}(R) = \operatorname{Area}\bigl(f(R)\bigr).$$
Furthermore, for regular surfaces S_1 and S_2,
$$\operatorname{Area}\bigl(f(R)\bigr) = \iint_R \|df\|\, dA.$$
Proof. See [5] p. 164.

We will use this later on to prove a theorem that allows us to check whether a diffeomorphism is equiareal by just studying its first fundamental form. Now we can move on to the first fundamental form in local coordinates. This is a convenient notation which we will use from now on.

5.3 The first and second fundamental forms

Definition 5.17. Let S be a regular surface and let σ: U ⊂ R^2 → V ⊂ S be a surface patch. Define the functions E, F, G: U → R such that for all q ∈ U,
$$E(q) = |\sigma_u(q)|^2,$$
$$F(q) = \langle \sigma_u(q), \sigma_v(q)\rangle,$$
$$G(q) = |\sigma_v(q)|^2.$$
We call E, F and G the coefficients of the first fundamental form.

We will however most often just write E = |σ_u|^2, F = ⟨σ_u, σ_v⟩ and G = |σ_v|^2 for a denser notation.

Now let us take an example, since we have just dumped a lot of definitions on you, and once again go back to the graph of a function. What are the coefficients of the first fundamental form for the graph of a function? Well, we know the surface patch σ(x, y) = (x, y, f(x, y)), and we have already calculated the partial derivatives to be
$$\sigma_x = (1, 0, f_x) \quad\text{and}\quad \sigma_y = (0, 1, f_y).$$
Now we can easily calculate the coefficients of the first fundamental form:
$$E = |\sigma_x|^2 = 1^2 + 0^2 + (f_x)^2 = 1 + (f_x)^2,$$
$$F = \langle \sigma_x, \sigma_y\rangle = f_x f_y,$$
$$G = |\sigma_y|^2 = 0^2 + 1^2 + (f_y)^2 = 1 + (f_y)^2.$$
Of course we could evaluate these at any point p on the graph to get actual numbers if we wanted to, but these are the general coefficients.

Remark: The coefficients E and G are both positive, since they are defined as the squared lengths of the vectors σ_u and σ_v respectively.

Definition 5.18. The first fundamental form in the local coordinates {u, v}, which we usually just call "the first fundamental form of σ", is the expression
$$\mathcal{F}_1 = E\,du^2 + 2F\,du\,dv + G\,dv^2.$$
Again we return to the graph for an example. The first fundamental form of the graph of a function is then
$$\mathcal{F}_1(G_f) = \left(1 + (f_x)^2\right)dx^2 + 2f_x f_y\,dx\,dy + \left(1 + (f_y)^2\right)dy^2.$$
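The coefficients E, F, G of the graph can be computed symbolically and compared with the closed forms above (a sketch assuming SymPy is available; the choice of f is ours, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Graph patch sigma(x, y) = (x, y, f(x, y)) for a sample f.
f = sp.sin(x) * y
sigma = sp.Matrix([x, y, f])
sigma_x, sigma_y = sp.diff(sigma, x), sp.diff(sigma, y)

# First fundamental form coefficients from Definition 5.17.
E = sigma_x.dot(sigma_x)
F = sigma_x.dot(sigma_y)
G = sigma_y.dot(sigma_y)

# Compare with E = 1 + f_x^2, F = f_x f_y, G = 1 + f_y^2.
fx, fy = sp.diff(f, x), sp.diff(f, y)
print(sp.simplify(E - (1 + fx**2)),
      sp.simplify(F - fx * fy),
      sp.simplify(G - (1 + fy**2)))  # 0 0 0
```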

The following theorem will be one of the core building blocks in proving Theorem 8.1.

Theorem 5.19. Let S be a regular surface and f: U ⊂ R² → V ⊂ S be a surface patch. Let f have first fundamental form $\mathcal{F}_1 = E\,du^2 + 2F\,du\,dv + G\,dv^2$. Then we have the following:

1. f is conformal ⟺ E = G and F = 0.

2. f is equiareal ⟺ $\sqrt{EG - F^2} = 1$.

3. f is an isometry ⟺ E = G = 1 and F = 0.

Proof. 1. ⟸: Let $f(u, v) = \big(f_1(u, v), f_2(u, v), f_3(u, v)\big)$. Then we have that
$$df_p = \begin{pmatrix} f_{1u} & f_{1v} \\ f_{2u} & f_{2v} \\ f_{3u} & f_{3v} \end{pmatrix}$$

with all partial derivatives evaluated at p. However, we will omit that for denser notation and take it for granted. Then clearly
$$df_p(x) = \begin{pmatrix} f_{1u} & f_{1v} \\ f_{2u} & f_{2v} \\ f_{3u} & f_{3v} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} f_{1u}x_1 + f_{1v}x_2 \\ f_{2u}x_1 + f_{2v}x_2 \\ f_{3u}x_1 + f_{3v}x_2 \end{pmatrix}.$$

Note that

$$E = f_{1u}^2 + f_{2u}^2 + f_{3u}^2, \qquad G = f_{1v}^2 + f_{2v}^2 + f_{3v}^2, \qquad F = f_{1u}f_{1v} + f_{2u}f_{2v} + f_{3u}f_{3v}.$$
Now from the definition we know that
$$\angle\big(df_p(x), df_p(y)\big) = \arccos\left(\frac{\langle df_p(x), df_p(y)\rangle}{|df_p(x)||df_p(y)|}\right).$$

Moving on to calculate the numerator and denominator we get:

$$\langle df(x), df(y)\rangle = (f_{1u}x_1 + f_{1v}x_2)(f_{1u}y_1 + f_{1v}y_2) + (f_{2u}x_1 + f_{2v}x_2)(f_{2u}y_1 + f_{2v}y_2) + (f_{3u}x_1 + f_{3v}x_2)(f_{3u}y_1 + f_{3v}y_2)$$
$$= \big(f_{1u}^2 + f_{2u}^2 + f_{3u}^2\big)x_1y_1 + (x_1y_2 + x_2y_1)\big(f_{1u}f_{1v} + f_{2u}f_{2v} + f_{3u}f_{3v}\big) + \big(f_{1v}^2 + f_{2v}^2 + f_{3v}^2\big)x_2y_2$$
$$= E\,x_1y_1 + F(x_1y_2 + x_2y_1) + G\,x_2y_2 = (x_1y_1 + x_2y_2)E = \langle x, y\rangle E,$$
where the last two equalities use the hypotheses F = 0 and E = G. Similarly, we get that the denominator is
$$E\sqrt{x_1^2 + x_2^2}\sqrt{y_1^2 + y_2^2} = E|x||y|.$$

So we have that
$$\angle\big(df_p(x), df_p(y)\big) = \arccos\left(\frac{\langle df_p(x), df_p(y)\rangle}{|df_p(x)||df_p(y)|}\right) = \arccos\left(\frac{E\langle x, y\rangle}{E|x||y|}\right) = \arccos\left(\frac{\langle x, y\rangle}{|x||y|}\right) = \angle(x, y).$$

Now the other direction, ⟹: Let x = (1, 0) and y = (0, 1). Then
$$\angle(x, y) = \arccos\left(\frac{\langle x, y\rangle}{|x||y|}\right) = \arccos 0.$$

However, we also know that
$$\langle df(x), df(y)\rangle = E\,x_1y_1 + F(x_1y_2 + x_2y_1) + G\,x_2y_2.$$
For this choice of x and y this gives $\langle df(x), df(y)\rangle = F$, and since f is conformal the angle $\arccos 0$ is preserved, so F = 0. Likewise, if we let x = (1, −1) and y = (1, 1) we get
$$\angle(x, y) = \arccos\left(\frac{\langle x, y\rangle}{|x||y|}\right) = \arccos 0.$$
We also have that
$$\angle\big(df_p(x), df_p(y)\big) = \arccos\left(\frac{\langle df_p(x), df_p(y)\rangle}{|df_p(x)||df_p(y)|}\right) = \arccos\left(\frac{E\,x_1y_1 + F(x_1y_2 + x_2y_1) + G\,x_2y_2}{|df_p(x)||df_p(y)|}\right) = \arccos\left(\frac{E\,x_1y_1 + G\,x_2y_2}{|df_p(x)||df_p(y)|}\right).$$
Also note that $|df_p(x)| = \sqrt{Ex_1^2 + Gx_2^2}$ now that F = 0, so
$$\arccos\left(\frac{Ex_1y_1 + Gx_2y_2}{|df_p(x)||df_p(y)|}\right) = \arccos\left(\frac{Ex_1y_1 + Gx_2y_2}{\sqrt{Ex_1^2 + Gx_2^2}\sqrt{Ey_1^2 + Gy_2^2}}\right) = \arccos\left(\frac{E - G}{\sqrt{E + G}\sqrt{E + G}}\right) = \arccos\left(\frac{E - G}{E + G}\right) = \arccos 0$$
$$\iff E - G = 0 \iff E = G.$$

Proof. 2. ⟸ and ⟹: We use Proposition 5.16 and the fact that $\|df\| = \sqrt{EG - F^2}$. For every region R,
$$\operatorname{Area}\big(f(R)\big) = \iint_R \|df\|\,dA = \iint_R \sqrt{EG - F^2}\,dA = \iint_R dA = \operatorname{Area}(R) \iff \sqrt{EG - F^2} = 1.$$

Proof. 3. ⟸ and ⟹: As in the proof of 1. we computed that

$$\langle df(x), df(y)\rangle = E\,x_1y_1 + F(x_1y_2 + x_2y_1) + G\,x_2y_2.$$

So if we want f to be an isometry we need that

$$\langle x, y\rangle = \langle df(x), df(y)\rangle.$$

Now $\langle x, y\rangle = x_1y_1 + x_2y_2$ and
$$\langle df(x), df(y)\rangle = E\,x_1y_1 + F(x_1y_2 + x_2y_1) + G\,x_2y_2,$$
so we need the equality
$$x_1y_1 + x_2y_2 = E\,x_1y_1 + F(x_1y_2 + x_2y_1) + G\,x_2y_2$$
to hold for all x, y if we want f to be an isometry. Clearly then we need
$$E = 1, \qquad G = 1, \qquad F = 0.$$

Proposition 5.20. A diffeomorphism f: S₁ → S₂ is an isometry iff it is equiareal and conformal.

Proof. Follows from Theorem 5.19, since:
⟹: We have E = 1, G = 1, F = 0, so clearly E = G, F = 0 and $\sqrt{EG - F^2} = 1$.
⟸: We have E = G, F = 0 and $\sqrt{EG - F^2} = 1$, so $\sqrt{EG - F^2} = \sqrt{E^2 - 0} = |E| = E = 1$.

Just like the first fundamental form in local coordinates, we can define the second fundamental form in local coordinates in a similar way. Namely, we introduce coefficients e, f and g and take the expression
$$e\,du^2 + 2f\,du\,dv + g\,dv^2$$
as the second fundamental form in local coordinates. More precisely:

Definition 5.21. The second fundamental form in the local coordinates {u, v} is the expression
$$\mathcal{F}_2 = e\,du^2 + 2f\,du\,dv + g\,dv^2,$$
where
$$e = \langle\sigma_{uu}, N\rangle, \qquad f = \langle\sigma_{uv}, N\rangle, \qquad g = \langle\sigma_{vv}, N\rangle,$$
such that N is a unit normal vector to the surface.

Once again, we take the example of the graph of a function. We compute the second-order derivatives to be
$$\sigma_{xx} = (0, 0, f_{xx}), \qquad \sigma_{xy} = (0, 0, f_{xy}), \qquad \sigma_{yy} = (0, 0, f_{yy}).$$
A unit normal to the graph is given by
$$N = \frac{\sigma_x \times \sigma_y}{|\sigma_x \times \sigma_y|} = \frac{(-f_x, -f_y, 1)}{\sqrt{f_x^2 + f_y^2 + 1}}.$$
We further compute the coefficient e to be
$$e = \langle\sigma_{xx}, N\rangle = \langle(0, 0, f_{xx}), N\rangle = \frac{f_{xx}}{\sqrt{f_x^2 + f_y^2 + 1}}.$$
In a similar way we compute f and g to be
$$f = \frac{f_{xy}}{\sqrt{f_x^2 + f_y^2 + 1}}, \qquad g = \frac{f_{yy}}{\sqrt{f_x^2 + f_y^2 + 1}}.$$
Notice that the coefficient f of the second fundamental form is not the same f as the function f(x, y) whose graph G_f we are studying.

Now why do we even want to study these seemingly arbitrary inner products? Well, the first fundamental form, for example, gives us more information about the surface than one might think; we can compute surface areas using only its coefficients. But the main focus, and what we will actually use them for in this paper, is to define the Gaussian curvature of a surface.
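These second-fundamental-form coefficients can likewise be checked with SymPy. A sketch (the sample function h is an arbitrary choice, not from the thesis):

```python
# Illustrative sketch: check e, f, g for a graph patch against the
# closed-form expressions derived above.
import sympy as sp

x, y = sp.symbols('x y', real=True)
h = sp.sin(x)*y                       # arbitrary sample function
sigma = sp.Matrix([x, y, h])
sx, sy = sigma.diff(x), sigma.diff(y)

cr = sx.cross(sy)                     # (-h_x, -h_y, 1)
N = cr / sp.sqrt(cr.dot(cr))          # unit normal

e = sigma.diff(x, 2).dot(N)
f2 = sigma.diff(x).diff(y).dot(N)     # "f" of the second form, not h itself
g = sigma.diff(y, 2).dot(N)

w = sp.sqrt(h.diff(x)**2 + h.diff(y)**2 + 1)
assert sp.simplify(e - h.diff(x, 2)/w) == 0
assert sp.simplify(f2 - h.diff(x, y)/w) == 0
assert sp.simplify(g - h.diff(y, 2)/w) == 0
```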

5.4 Gaussian curvature

Definition 5.22. The Gaussian curvature K of a regular surface S is given by
$$K = \frac{eg - f^2}{EG - F^2},$$
where e, f, g are the coefficients of the second fundamental form and E, F, G are the coefficients of the first fundamental form of the surface.

We will often omit "Gaussian" and let curvature mean Gaussian curvature unless stated otherwise. The curvature of a surface also gives us information about the surface. Textbooks usually define the curvature of a surface via the Gauss map, and this in turn gives us even more theory and information about surfaces. However, since this paper is not focused on the theory of differential geometry but rather on using its tools, we won't go much deeper into this theory. We heavily recommend and encourage the reader to read [5] for a more in-depth understanding of the subject.

We go back to the graph of a function for an example. Since we've already computed all the coefficients of the first and second fundamental forms of this surface, the (Gaussian) curvature is given by:
$$K = \frac{eg - f^2}{EG - F^2} = \frac{\;\dfrac{f_{xx}f_{yy} - (f_{xy})^2}{1 + (f_x)^2 + (f_y)^2}\;}{\big(1 + (f_x)^2\big)\big(1 + (f_y)^2\big) - (f_xf_y)^2} = \frac{f_{xx}f_{yy} - (f_{xy})^2}{\big(1 + (f_x)^2 + (f_y)^2\big)^2}.$$
We see that the denominator is always positive, so the numerator decides the sign of the curvature. We know from multivariable calculus that the expression
$$f_{xx}f_{yy} - (f_{xy})^2$$
is exactly the determinant of the Hessian matrix (assuming our function satisfies at least f ∈ C²(U))
$$\begin{pmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{pmatrix}.$$
Hence we get a little more information about the surface just by calculating its curvature: if this expression, whose sign equals the sign of K, is greater than 0 at a critical point, that point is a local extremum, and if it is less than 0 there, we have a saddle point. So to summarize what we've seen so far: the first and second fundamental forms give us a lot of information about the surface, and we can do a lot using only them. To give some feeling for the curvature of a surface, we state the following proposition, with examples afterwards:
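The graph-curvature formula can be confirmed symbolically as well. A SymPy sketch (the sample function is again an arbitrary choice):

```python
# Illustrative sketch: verify K = (h_xx h_yy - h_xy^2)/(1 + h_x^2 + h_y^2)^2
# for a graph patch, starting from Definition 5.22.
import sympy as sp

x, y = sp.symbols('x y', real=True)
h = x**3 + x*y**2                     # arbitrary sample function
sigma = sp.Matrix([x, y, h])
sx, sy = sigma.diff(x), sigma.diff(y)

E, F, G = sx.dot(sx), sx.dot(sy), sy.dot(sy)
cr = sx.cross(sy)
N = cr / sp.sqrt(cr.dot(cr))
e = sigma.diff(x, 2).dot(N)
f2 = sigma.diff(x).diff(y).dot(N)
g = sigma.diff(y, 2).dot(N)

K = (e*g - f2**2) / (E*G - F**2)
hx, hy = h.diff(x), h.diff(y)
K_graph = (h.diff(x, 2)*h.diff(y, 2) - h.diff(x, y)**2) / (1 + hx**2 + hy**2)**2
assert sp.simplify(K - K_graph) == 0
```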

Proposition 5.23. Let S be a (not necessarily oriented) regular surface, and let p ∈ S. If K(p) > 0, then a sufficiently small neighbourhood of p in S lies entirely on one side of the plane p + T_pS = {p + v | v ∈ T_pS} (except for the point p itself, since p ∈ p + T_pS). If K(p) < 0, then every neighbourhood of p in S intersects both sides of p + T_pS.

Proof. After applying a rigid motion¹ we can assume without loss of generality that T_pS = span{e₁, e₂} and that a neighbourhood of p in S is equal to the graph of a smooth function f. As we've already computed in this paper, the sign of the curvature of this graph at any point is the sign of $f_{xx}f_{yy} - (f_{xy})^2$. The proposition now follows from the previously mentioned multivariable-calculus fact that a critical point is a local extremum if $f_{xx}f_{yy} - (f_{xy})^2 > 0$ and a saddle point if $f_{xx}f_{yy} - (f_{xy})^2 < 0$.

¹ A rigid motion is a function f that preserves distances, that is, d(f(x), f(y)) = d(x, y). In particular, we can view a rigid motion L_A as multiplication from the left by an orthogonal matrix A, i.e. AᵀA = I. See [5] p. 48 for more information.

Example:

Figure 6: Picture showing how proposition 5.23 can be applied to two different surfaces. Imagine the second drawing as the inside of an invisible torus.

Theorem 5.24. The Gaussian curvature K of any regular surface S can be expressed using only the coefficients of the first fundamental form as

$$K = \frac{\begin{vmatrix} -\tfrac12 E_{vv} + F_{uv} - \tfrac12 G_{uu} & \tfrac12 E_u & F_u - \tfrac12 E_v \\ F_v - \tfrac12 G_u & E & F \\ \tfrac12 G_v & F & G \end{vmatrix} - \begin{vmatrix} 0 & \tfrac12 E_v & \tfrac12 G_u \\ \tfrac12 E_v & E & F \\ \tfrac12 G_u & F & G \end{vmatrix}}{(EG - F^2)^2}.$$

Proof. The idea behind the proof is to use the so-called Christoffel symbols and make some substitutions. See [5] p. 289-293 for precise definitions and proof.

What this really tells us is that the Gaussian curvature of a surface depends only on its first fundamental form, which is an intrinsic measurement! We almost always view a (regular) surface as embedded into, for example, R³, and in such cases we can usually define a normal, giving the surface an extra direction. Viewed this way, we see curvature in a way that a sufficiently small creature living on the surface could not: to the creature its surroundings would look flat, and it cannot see how (or if) the surface bends in its ambient space the way we can. This is the point of why the result is interesting and mostly surprising: one could think that a normal of the surface would be needed (as in the calculations of the coefficients of the second fundamental form), but it is actually not. This is a really cool result that you should try to think more about yourself. Just think about the earth. We are pretty small compared to the earth, and to us standing on the ground it seems pretty flat. However, as we know, the earth isn't flat, but more of a sphere. What the above formula tells us is that we would not need any astronauts to go to space to see that our planet is actually not flat. It is enough to know what happens here near the ground to calculate the curvature of the earth (and to show that it is in fact not flat!).
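Although the proof is omitted here, Theorem 5.24 can be spot-checked against Definition 5.22 for a concrete patch. A SymPy sketch (the saddle graph is an arbitrary choice, not from the thesis):

```python
# Illustrative sketch: compare the Brioschi-type formula of Theorem 5.24
# with the definition K = (eg - f^2)/(EG - F^2) for a sample patch.
import sympy as sp

u, v = sp.symbols('u v', real=True)
sigma = sp.Matrix([u, v, u**2 - v**2])    # arbitrary saddle graph
su, sv = sigma.diff(u), sigma.diff(v)
E, F, G = su.dot(su), su.dot(sv), sv.dot(sv)

cr = su.cross(sv)
N = cr / sp.sqrt(cr.dot(cr))
e = sigma.diff(u, 2).dot(N)
f = sigma.diff(u).diff(v).dot(N)
g = sigma.diff(v, 2).dot(N)
K_def = (e*g - f**2) / (E*G - F**2)

half = sp.Rational(1, 2)
A = sp.Matrix([
    [-half*E.diff(v, 2) + F.diff(u).diff(v) - half*G.diff(u, 2),
     half*E.diff(u), F.diff(u) - half*E.diff(v)],
    [F.diff(v) - half*G.diff(u), E, F],
    [half*G.diff(v), F, G]])
B = sp.Matrix([
    [0, half*E.diff(v), half*G.diff(u)],
    [half*E.diff(v), E, F],
    [half*G.diff(u), F, G]])
K_intrinsic = (A.det() - B.det()) / (E*G - F**2)**2
assert sp.simplify(K_def - K_intrinsic) == 0
```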

5.5 The stereographic projection

The unit sphere S² is important and used in a variety of different branches of mathematics, and there are several theorems connecting seemingly arbitrary things to it. One important construction that uses the unit sphere is the stereographic projection σ_st. The stereographic projection assigns to each point p ∈ S² \ {(0, 0, 1)} a point p′ ∈ R². We will show two different parametrizations, or formulas, for the stereographic projection. The first one is obtained by doing the following:

• Take the unit sphere and view the plane R² as embedded in R³ as the tangent plane to the sphere at the south pole P_SP = (0, 0, −1).

• Let the point P_NP = (0, 0, 1) be the north pole of the sphere.

• Take any point p ∈ S² and the straight line through the points P_NP and p. This line will somewhere intersect the tangent plane at P_SP.

• This point of intersection on the plane will be the point q such that σ_st(p) = q.

It would look something like this:

Figure 7: Picture showing an example of the stereographic projection.

Remark 5.25. The point P_NP is the reason why we either need to remove it from the domain of the stereographic projection or include a point ∞ in the range: a line that goes through P_NP without intersecting the unit sphere anywhere else is parallel to the plane and does not intersect it, which is why we define a point ∞. The stereographic projection thus gives, for example, a diffeomorphism between S² \ {(0, 0, 1)} and the plane R² or C. Alternatively, it gives a diffeomorphism with the extended complex plane Ĉ = C ∪ {∞} by mapping σ_st(0, 0, 1) = ∞ ∈ Ĉ (see the remark above). These are the ideas behind the stereographic projection, and now we will give an explicit formula for it.

First off, how can we describe any point p ∈ S²? Well, we break it down into two steps. First, consider the projection of the sphere S² onto a plane P embedded in R³ with z-coordinate 0 (or −1; the choice is arbitrary). Clearly the image of the projection π: S² → P is limited to a subset D(p₀, r₀) = {z ∈ P | |z − p₀| ≤ r₀}, here with centre p₀ = 0 and radius r₀ = 1. We also see that π: S² → D(0, 1) is a surjection (but π is not injective; why?).

Figure 8: Picture showing the projection of a sphere onto the uv-plane.

Any point p ∈ S² will therefore have some (x, y)-coordinates given by (x, y) = (r cos θ, r sin θ) for some 0 < r ≤ 1, 0 ≤ θ < 2π.

Now any point p ∈ S² gives us a way to draw a (unique) right-angled triangle with hypotenuse 1, height |z| and base $|(r\cos\theta, r\sin\theta)| = r$. The Pythagorean theorem then gives us that $1 = z^2 + r^2 \implies z = \pm\sqrt{1 - r^2}$. Now we need to think about the sign of the z-coordinate. Writing R for the distance from the origin of the projected point in the tangent plane at the south pole, we easily see that for R > 2 we have a "+" sign, and for R < 2 we have a "−" sign. For R = 2 we clearly have the z-coordinate z = 0.

Figure 9: Picture showing the sphere and how we can view a point p ∈ S2 \ {(0, 0, 1)}.

The stereographic projection is then the map
$$\sigma_{st}\big(r\cos\theta,\ r\sin\theta,\ \pm\sqrt{1 - r^2}\big) = (R\cos\theta,\ R\sin\theta)$$
for some R. How can we relate R and r? Well, looking at the picture we have similar triangles:
$$\frac{R}{r} = \frac{2}{1 - z} = \frac{2}{1 \mp \sqrt{1 - r^2}}.$$
Solving for r gives $r = \frac{4R}{R^2 + 4}$ [5].

So in the end we have the stereographic projection

$$\sigma_{st}: S^2 \setminus \{(0, 0, 1)\} \to \mathbb{R}^2 \cong \mathbb{C}$$
given by
$$\sigma_{st}\left(\frac{4R}{R^2 + 4}\cos\theta,\ \frac{4R}{R^2 + 4}\sin\theta,\ \pm\sqrt{1 - \frac{16R^2}{(R^2 + 4)^2}}\right) = (R\cos\theta,\ R\sin\theta).$$

The other formula is a little different, but both ideas or techniques work. We will first show the formula for the stereographic projection, and then its inverse. We do the following:

• Once again consider S², the north pole P_NP, and this time the plane z = 0.

• Consider the unique line through P_NP and P = (x, y, z) ∈ S².

• Wherever this line intersects the plane z = 0 is the point P′ such that σ_st(P) = P′.

Figure 10: Picture showing an example of the stereographic projection.

A line through two distinct points of R³ is unique. The line L can be parametrized as
$$L: (1 - t)P_{NP} + tP.$$
Since we want to know where this line intersects the plane z = 0, we set the z-coordinate to 0 and solve for t:
$$(1 - t)P_{NP} + tP = (0, 0, 1 - t) + (tx, ty, tz) = \big(tx,\ ty,\ 1 + t(z - 1)\big).$$
So we want to find t such that 1 + t(z − 1) = 0:
$$1 + t(z - 1) = 0 \iff t = \frac{1}{1 - z}.$$
Note that the case z = 1 corresponds to the north pole P_NP, which we removed from our domain. Inserting this t into the parametrized line gives us the point where the line intersects the plane z = 0, i.e. the image under our stereographic projection:
$$\sigma_{st}(x, y, z) = \left(\frac{x}{1 - z},\ \frac{y}{1 - z},\ 0\right).$$

Next we want to find $\sigma_{st}^{-1}$. We do the same thing, except we start from a point (ξ, η, 0) in the plane z = 0 and consider the line through it and the north pole:
$$(1 - t)(\xi, \eta, 0) + t(0, 0, 1) = \big((1 - t)\xi,\ (1 - t)\eta,\ t\big).$$
Now, however, we want to find t such that the sum of the squares of the components is 1, which means that the point lies on S²:
$$\big\|\big((1 - t)\xi,\ (1 - t)\eta,\ t\big)\big\| = 1 \iff (1 - t)^2\big(\xi^2 + \eta^2\big) + t^2 = 1$$
$$\iff \xi^2 + \eta^2 = \frac{1 + t}{1 - t} \iff t = \frac{\xi^2 + \eta^2 - 1}{1 + \xi^2 + \eta^2}.$$
Inserting this t gives us:
$$\sigma_{st}^{-1}(\xi, \eta) = \left(\frac{2\xi}{1 + \xi^2 + \eta^2},\ \frac{2\eta}{1 + \xi^2 + \eta^2},\ \frac{-1 + \xi^2 + \eta^2}{1 + \xi^2 + \eta^2}\right).$$
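A quick symbolic check (illustrative, not from the thesis) that this formula lands on the unit sphere and really inverts σ_st:

```python
# Illustrative sketch: verify sigma_st(sigma_st^{-1}(xi, eta)) = (xi, eta)
# and that the inverse takes values in S^2.
import sympy as sp

xi, eta = sp.symbols('xi eta', real=True)
c = 1 + xi**2 + eta**2
inv = sp.Matrix([2*xi/c, 2*eta/c, (-1 + xi**2 + eta**2)/c])

assert sp.simplify(inv.dot(inv) - 1) == 0       # the point lies on S^2
x, y, z = inv
assert sp.simplify(x/(1 - z) - xi) == 0         # sigma_st recovers xi
assert sp.simplify(y/(1 - z) - eta) == 0        # ... and eta
```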

We will need the partial derivatives of the inverse of the stereographic projection later on, so we might as well compute them now. First of all, let c denote 1 + ξ² + η². Then:
$$\big(\sigma_{st}^{-1}\big)_\xi = \left(\frac{2c - 2\xi(2\xi)}{c^2},\ \frac{-4\xi\eta}{c^2},\ \frac{2\xi c - (-1 + \xi^2 + \eta^2)(2\xi)}{c^2}\right)$$
$$= \left(\frac{2(c - 2\xi^2)}{c^2},\ \frac{-4\xi\eta}{c^2},\ \frac{2\xi(c + 1 - \xi^2 - \eta^2)}{c^2}\right) = \left(\frac{2(1 + \eta^2 - \xi^2)}{c^2},\ \frac{-4\xi\eta}{c^2},\ \frac{4\xi}{c^2}\right).$$
Similarly we can compute $\big(\sigma_{st}^{-1}\big)_\eta$:
$$\big(\sigma_{st}^{-1}\big)_\eta = \left(\frac{-4\xi\eta}{c^2},\ \frac{2(1 + \xi^2 - \eta^2)}{c^2},\ \frac{4\eta}{c^2}\right).$$
Now one can check that
$$\left\langle\big(\sigma_{st}^{-1}\big)_\xi,\ \big(\sigma_{st}^{-1}\big)_\eta\right\rangle = 0$$
and
$$\left\langle\big(\sigma_{st}^{-1}\big)_\xi,\ \big(\sigma_{st}^{-1}\big)_\xi\right\rangle = \left\langle\big(\sigma_{st}^{-1}\big)_\eta,\ \big(\sigma_{st}^{-1}\big)_\eta\right\rangle = \frac{4}{(1 + \xi^2 + \eta^2)^2}.$$

Proposition 5.26. The stereographic projection σ_st: S² \ {P_NP} → R² is conformal.

Proof. We use Theorem 5.19. Above we actually computed the coefficients of the first fundamental form of $\sigma_{st}^{-1}$: we have F = 0 and E = G, so by Theorem 5.19 $\sigma_{st}^{-1}$ is conformal. This is equivalent to saying that σ_st is conformal.
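The orthogonality and equal-norm facts cited in this proof can be verified directly with SymPy (an illustrative check, not part of the thesis):

```python
# Illustrative sketch: verify F = 0 and E = G = 4/(1 + xi^2 + eta^2)^2
# for the inverse stereographic projection.
import sympy as sp

xi, eta = sp.symbols('xi eta', real=True)
c = 1 + xi**2 + eta**2
inv = sp.Matrix([2*xi/c, 2*eta/c, (-1 + xi**2 + eta**2)/c])
s_xi, s_eta = inv.diff(xi), inv.diff(eta)

assert sp.simplify(s_xi.dot(s_eta)) == 0                 # F = 0
assert sp.simplify(s_xi.dot(s_xi) - 4/c**2) == 0         # E = 4/c^2
assert sp.simplify(s_eta.dot(s_eta) - 4/c**2) == 0       # G = E
```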

Proposition 5.27. The stereographic projection σ_st: S² \ {P_NP} → R² is a diffeomorphism.

Proof. Injectivity: follows immediately from the fact that a line through two points in R³ is unique. Surjectivity: follows since any point P ∈ R² has a preimage under σ_st, which lies in S². Smoothness: since all the components of the stereographic projection are smooth functions, the stereographic projection is a smooth function; the same applies to its inverse $\sigma_{st}^{-1}$.

Proposition 5.28. The Gaussian curvature K of a sphere with radius R is given by
$$K = \frac{1}{R^2}.$$
Proof. We constructed a formula for the stereographic projection of the unit sphere above, and in a similar way we can do so for any sphere of radius R. Let σ_st(R) denote
$$\sigma_{st(R)}: \{(x, y, z) \in \mathbb{R}^3 \mid x^2 + y^2 + z^2 = R^2\} \setminus \{(0, 0, R)\} \to \mathbb{R}^2.$$
Then its inverse, a surface patch of the sphere of radius R, is given by (verify this by following the same argument as in the above derivation of the stereographic projection)
$$\sigma_{st(R)}^{-1}(u, v) = \left(\frac{4R^2u}{4R^2 + u^2 + v^2},\ \frac{4R^2v}{4R^2 + u^2 + v^2},\ \frac{R(-4R^2 + u^2 + v^2)}{4R^2 + u^2 + v^2}\right).$$

This does not cover the entire sphere (the north pole (0, 0, R) is missing), so what we do is add another surface patch such that together they cover the entire sphere. We add the patch
$$\tilde\sigma_{st(R)}: \{(x, y, z) \in \mathbb{R}^3 \mid x^2 + y^2 + z^2 = R^2\} \setminus \{(0, 0, -R)\} \to \mathbb{R}^2,$$
given by the analogous formula obtained by projecting from the south pole instead, and the two patches clearly agree on their intersection
$$\{(x, y, z) \in \mathbb{R}^3 \mid x^2 + y^2 + z^2 = R^2\} \setminus \big(\{(0, 0, R)\} \cup \{(0, 0, -R)\}\big).$$
Now we simply compute K, which is given by $K = \frac{eg - f^2}{EG - F^2}$. For this we use some software such as MATLAB just to speed up the calculations, and we get that $K = \frac{1}{R^2}$. The code that we used can be found in the Appendix at page 56.
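The thesis used MATLAB for this computation; an equivalent SymPy sketch is given below. It avoids square roots by using the identity |σ_u × σ_v|² = EG − F², so that K = ((σ_uu·n)(σ_vv·n) − (σ_uv·n)²)/(EG − F²)² with n the unnormalized normal:

```python
# Illustrative SymPy analogue of the MATLAB computation: K = 1/R^2 for
# the radius-R patch derived above.
import sympy as sp

u, v, R = sp.symbols('u v R', positive=True)
d = 4*R**2 + u**2 + v**2
sigma = sp.Matrix([4*R**2*u/d, 4*R**2*v/d, R*(-4*R**2 + u**2 + v**2)/d])

su, sv = sigma.diff(u), sigma.diff(v)
E, F, G = su.dot(su), su.dot(sv), sv.dot(sv)

n = su.cross(sv)          # unnormalized normal, |n|^2 = EG - F^2
num = (sigma.diff(u, 2).dot(n)*sigma.diff(v, 2).dot(n)
       - su.diff(v).dot(n)**2)        # (eg - f^2)(EG - F^2)
K = sp.simplify(num / (E*G - F**2)**2)
assert sp.simplify(K - 1/R**2) == 0
```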

6 Derivation of Liouville's equation

Consider the previous formula for the curvature K, namely Theorem 5.24. Now if we let F = 0 and E = G, we see that the formula simplifies to:

$$K = \frac{\begin{vmatrix} -\tfrac12 G_{vv} + 0 - \tfrac12 G_{uu} & \tfrac12 G_u & 0 - \tfrac12 G_v \\ 0 - \tfrac12 G_u & G & 0 \\ \tfrac12 G_v & 0 & G \end{vmatrix} - \begin{vmatrix} 0 & \tfrac12 G_v & \tfrac12 G_u \\ \tfrac12 G_v & G & 0 \\ \tfrac12 G_u & 0 & G \end{vmatrix}}{(GG - 0^2)^2}$$
$$= \frac{\begin{vmatrix} -\tfrac12(G_{vv} + G_{uu}) & \tfrac12 G_u & -\tfrac12 G_v \\ -\tfrac12 G_u & G & 0 \\ \tfrac12 G_v & 0 & G \end{vmatrix} - \begin{vmatrix} 0 & \tfrac12 G_v & \tfrac12 G_u \\ \tfrac12 G_v & G & 0 \\ \tfrac12 G_u & 0 & G \end{vmatrix}}{(GG)^2},$$
where we also used that E = G implies E_u = G_u, E_v = G_v and E_vv = G_vv. So
$$G^4K = \begin{vmatrix} -\tfrac12(G_{vv} + G_{uu}) & \tfrac12 G_u & -\tfrac12 G_v \\ -\tfrac12 G_u & G & 0 \\ \tfrac12 G_v & 0 & G \end{vmatrix} - \begin{vmatrix} 0 & \tfrac12 G_v & \tfrac12 G_u \\ \tfrac12 G_v & G & 0 \\ \tfrac12 G_u & 0 & G \end{vmatrix}.$$
Expanding both determinants along their first rows, we have
$$G^4K = \left(-\tfrac12(G_{vv} + G_{uu})G^2 + \tfrac14(G_u)^2G + \tfrac14(G_v)^2G\right) - \left(-\tfrac14(G_v)^2G - \tfrac14(G_u)^2G\right)$$
$$= -\tfrac12 G^2(G_{vv} + G_{uu}) + \tfrac12 G\big((G_u)^2 + (G_v)^2\big).$$
Then, dividing by G and multiplying by −2,
$$-2G^3K = G(G_{vv} + G_{uu}) - (G_u)^2 - (G_v)^2.$$

Since G > 0, we have
$$K = \frac{-G_{vv} - G_{uu}}{2G^2} + \frac{(G_u)^2 + (G_v)^2}{2G^3} = \frac{-G(G_{vv} + G_{uu}) + (G_u)^2 + (G_v)^2}{2G^3}$$
$$= -\frac{1}{2G}\cdot\frac{G(G_{vv} + G_{uu}) - (G_u)^2 - (G_v)^2}{G^2} = -\frac{1}{2G}\left(\frac{GG_{uu} - (G_u)^2}{G^2} + \frac{GG_{vv} - (G_v)^2}{G^2}\right)$$
$$= -\frac{1}{2G}\left(\left(\frac{G_u}{G}\right)_u + \left(\frac{G_v}{G}\right)_v\right) = -\frac{1}{2G}\big((\ln G)_{uu} + (\ln G)_{vv}\big).$$
We can rewrite this as

$$(\ln G)_{uu} + (\ln G)_{vv} = -2KG,$$

and shorten it by using that $\Delta = \frac{\partial^2}{\partial u^2} + \frac{\partial^2}{\partial v^2}$. We end up with:
$$\Delta(\ln G) = -2KG.$$

Recall that G = G(u, v) > 0. Now the substitution $G(u, v) = e^{W(u, v)} \iff W(u, v) = \ln G(u, v)$ leads us to the equation

$$\Delta W(u, v) = -2Ke^{W(u, v)}. \qquad (1)$$

Or equivalently using Wirtinger calculus (which follows from Proposition 4.4):

$$\frac{\partial^2 W}{\partial z\,\partial\bar z} = -\frac{1}{2}Ke^W. \qquad (2)$$
The equations (1) and (2) are called Liouville's equation, and those equations are exactly the ones we want to study further in this paper. We will refer to (1) as the real version of Liouville's equation and (2) as the complex version of Liouville's equation.
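As a sanity check of the derivation, the metric of the unit sphere in stereographic coordinates, G(u, v) = 4/(1 + u² + v²)² with K = 1 (computed in Section 5.5), should satisfy ∆(ln G) = −2KG. A SymPy sketch (illustrative, not from the thesis):

```python
# Illustrative sketch: check Delta(ln G) = -2KG for the unit-sphere
# metric in stereographic coordinates (K = 1).
import sympy as sp

u, v = sp.symbols('u v', real=True)
G = 4 / (1 + u**2 + v**2)**2
lap_lnG = sp.log(G).diff(u, 2) + sp.log(G).diff(v, 2)
assert sp.simplify(lap_lnG + 2*G) == 0
```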

7 Laplace's equation

First of all we will have a look at a much simpler problem, namely Laplace's equation. Laplace's equation is
$$\Delta U = 0 \qquad (3)$$
for a function U = U(x, y). By definition, the solutions to Laplace's equation are the harmonic functions. We will start off by finding a solution of Laplace's equation that depends only on the distance from the origin.

7.1 Solution only dependent on the distance from the origin in R²

Theorem 7.1. The equation
$$\Delta V(r) = 0$$
for $r = \sqrt{x^2 + y^2}$, V: R⁺ → R, has the solution
$$V(r) = A\ln r + B$$
for some A, B ∈ R.

Proof. We will solve equation (3) by introducing the function
$$V(r) \equiv U(x, y), \qquad r = \sqrt{x^2 + y^2}.$$
Here V: R⁺ → R, so r is clearly positive. Since
$$\Delta U = \frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2},$$
we simply compute this for $V(r) = V\big(\sqrt{x^2 + y^2}\big)$, and this gives us
$$\Delta V\big(\sqrt{x^2 + y^2}\big) = \frac{\partial}{\partial x}\left(V'\big(\sqrt{x^2 + y^2}\big)\frac{x}{\sqrt{x^2 + y^2}}\right) + \frac{\partial}{\partial y}\left(V'\big(\sqrt{x^2 + y^2}\big)\frac{y}{\sqrt{x^2 + y^2}}\right),$$
which can be written in denser notation as
$$\Delta V(r) = \frac{\partial}{\partial x}\left(V'(r)\cdot\frac{x}{r}\right) + \frac{\partial}{\partial y}\left(V'(r)\cdot\frac{y}{r}\right).$$

Continuing, we use the chain and quotient rules to compute
$$\frac{\partial}{\partial x}\left(V'(r)\cdot\frac{x}{r}\right) = V''(r)\cdot\frac{x}{r}\cdot\frac{x}{r} + V'(r)\cdot\frac{1\cdot r - x\cdot\frac{x}{r}}{r^2},$$
which can be simplified to
$$\frac{\partial^2}{\partial x^2}V(r) = V''(r)\cdot\frac{x^2}{r^2} + V'(r)\left(\frac{1}{r} - \frac{x^2}{r^3}\right).$$
In a similar manner, or by symmetry, one can compute $\frac{\partial^2}{\partial y^2}V(r)$ to be
$$\frac{\partial^2}{\partial y^2}V(r) = V''(r)\cdot\frac{y^2}{r^2} + V'(r)\left(\frac{1}{r} - \frac{y^2}{r^3}\right).$$

Now we will see that ∆V(r) has a very nice expression:
$$\Delta V(r) = \frac{\partial^2}{\partial x^2}V(r) + \frac{\partial^2}{\partial y^2}V(r) = V''(r)\cdot\frac{x^2 + y^2}{r^2} + V'(r)\left(\frac{2}{r} - \frac{x^2 + y^2}{r^3}\right).$$
This simplifies to
$$\Delta V(r) = V''(r)\cdot\frac{r^2}{r^2} + V'(r)\cdot\frac{2}{r} - V'(r)\cdot\frac{r^2}{r^3},$$
and simplifying further we get
$$\Delta V(r) = V''(r) + V'(r)\frac{1}{r}.$$

We have now managed to transform our PDE into an ODE, which we understand better. So now what remains is to actually solve our ODE, namely
$$V''(r) + V'(r)\frac{1}{r} = 0. \qquad (4)$$
Rewriting this, we see that it is a separable differential equation:
$$\frac{V''(r)}{V'(r)} = -\frac{1}{r}.$$
Integrating both sides with respect to r:
$$\int\frac{V''(r)}{V'(r)}\,dr = -\int\frac{1}{r}\,dr,$$
$$\ln|V'(r)| = -\ln|r| + C = \ln\frac{1}{r} + C,$$
where C is an arbitrary constant, and the last equality follows from the fact that r is positive. Exponentiating both sides gives us
$$|V'(r)| = e^{\ln(1/r) + C} = e^C\cdot\frac{1}{r},$$
so
$$V'(r) = \pm e^C\cdot\frac{1}{r}.$$
Since e^C is an arbitrary positive constant, we might as well absorb the "±" into a single constant A, so we have
$$V'(r) = A\cdot\frac{1}{r}.$$
Now we can just integrate both sides once again with respect to r to proceed:
$$\int V'(r)\,dr = A\int\frac{1}{r}\,dr,$$
$$V(r) = A\ln|r| + B = A\ln r + B$$
for some constants A and B; we drop the absolute value since r is positive. So one family of solutions of Laplace's equation (3) is given by
$$U(x, y) = V\big(\sqrt{x^2 + y^2}\big) = A\ln\sqrt{x^2 + y^2} + B$$
for (x, y) ∈ R² \ {(0, 0)} and A, B ∈ R.
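This family is easy to verify symbolically (an illustrative check, not part of the proof):

```python
# Illustrative sketch: confirm that U = A ln(sqrt(x^2 + y^2)) + B is
# harmonic away from the origin.
import sympy as sp

x, y, A, B = sp.symbols('x y A B', real=True)
U = A*sp.log(sp.sqrt(x**2 + y**2)) + B
assert sp.simplify(U.diff(x, 2) + U.diff(y, 2)) == 0
```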

7.2 A slight modification of Laplace's equation

Now instead of Laplace's equation, consider the following non-homogeneous partial differential equation:
$$\Delta U = e^U \qquad (5)$$
for U = U(x, y).

Theorem 7.2. The equation
$$\Delta V(r) = e^{V(r)}$$
for $r = \sqrt{x^2 + y^2}$, V: R⁺ → R, has the solution
$$V(r) = \ln\left(\frac{2A^2Br^{A-2}}{(Br^A - 1)^2}\right).$$

Proof. In the same manner as in Section 7.1, consider the following substitution:
$$U(x, y) = V\big(\sqrt{x^2 + y^2}\big) = V(r).$$
This gives us the non-homogeneous ordinary differential equation ∆V(r) = e^{V(r)}. In Section 7.1 we calculated $\Delta V\big(\sqrt{x^2 + y^2}\big)$, and inserting that into (5) we end up at
$$V''(r) + \frac{1}{r}V'(r) = e^{V(r)}. \qquad (6)$$
Now this differential equation is not at all trivial to solve. If we look at it, we notice that no immediate use of a standard method (Laplace transform, separation, integrating factor, etc.) will work. Instead, we have to make a couple of tricky substitutions to get anywhere. First of all, we do the substitution V(r) = ln u(r).

Remark: This is not the same u as our original U(x, y). This is easily seen since u: R⁺ → R⁺, but U: R² → R.

Making the above substitution we get
$$\big(\ln u(r)\big)'' + \frac{1}{r}\big(\ln u(r)\big)' = u(r). \qquad (7)$$
Calculating and simplifying (7) we get:
$$\left(\frac{u'(r)}{u(r)}\right)' + \frac{1}{r}\,\frac{u'(r)}{u(r)} = u(r)$$
$$\iff \frac{u''(r)}{u(r)} - \frac{u'(r)^2}{u(r)^2} + \frac{1}{r}\,\frac{u'(r)}{u(r)} = u(r)$$
$$\iff u''(r) - \frac{u'(r)^2}{u(r)} + \frac{u'(r)}{r} = u(r)^2. \qquad (8)$$
This is yet another non-homogeneous differential equation without any simple direct method of solving. We now make the substitution
$$u(r) = \frac{1}{f(r)r^2}$$
for f: R⁺ → R⁺. Now we find u′(r) and u″(r) in terms of f by using the chain and quotient rules:
$$u'(r) = \frac{-1}{\big(f(r)r^2\big)^2}\cdot\big(f'(r)r^2 + 2rf(r)\big) = -\frac{f'(r)r + 2f(r)}{f(r)^2r^3}.$$

39 1    u00 (r) = − f 00 (r) r + f 0 (r) + 2f 0 (r) · f (r)2 r3  2 f (r)2 r3    − 2f (r) f 0 (r) r3 + 3r2f (r)2 · f 0 (r) r + 2f (r)

Inserting u(r), u0(r) and u00(r) into (8) and simplifying, we eventually end up at 2 r2f (r) f 00 (r) − r2 f 0 (r) + rf (r) f 0 (r) + f (r) = 0 (9) We now make yet another substitution, t = ln r ⇐⇒ r = et. Remember that 0 < r, so this substitution is allowed. We compute the derivatives of f in terms of with the help of the chain rule as follows: df df dt df 1 df df = = ⇐⇒ = e−t. dr dt dr dt r dr dt

$$\frac{d^2f}{dr^2} = \frac{d}{dr}\left(\frac{df}{dt}e^{-t}\right) = \frac{dt}{dr}\,\frac{d}{dt}\left(\frac{df}{dt}e^{-t}\right) = \frac{1}{r}\left(\frac{d^2f}{dt^2}e^{-t} - \frac{df}{dt}e^{-t}\right) = \frac{1}{r^2}\left(\frac{d^2f}{dt^2} - \frac{df}{dt}\right).$$

Now if we plug this into (9) we get
$$r^2f\cdot\frac{1}{r^2}\left(\frac{d^2f}{dt^2} - \frac{df}{dt}\right) - r^2\left(\frac{1}{r}\,\frac{df}{dt}\right)^2 + rf\cdot\frac{1}{r}\,\frac{df}{dt} + f = 0,$$
which simplifies quite nicely to
$$f\,\frac{d^2f}{dt^2} - f\,\frac{df}{dt} - \left(\frac{df}{dt}\right)^2 + f\,\frac{df}{dt} + f = 0,$$
giving us the following differential equation:
$$f\,\frac{d^2f}{dt^2} - \left(\frac{df}{dt}\right)^2 + f = 0. \qquad (10)$$

Remark: One might think at first glance that df/dt = 0 since we write f = f(r). It is however important to remember that r = r(t) is itself a function of t, since we made the substitution r = e^t.

This is where we've actually gotten somewhere. Notice that this is an autonomous equation: it can be written as
$$F\left(f, \frac{df}{dt}, \frac{d^2f}{dt^2}\right) = 0 \qquad\text{for}\qquad F\left(f, \frac{df}{dt}, \frac{d^2f}{dt^2}\right) = f\,\frac{d^2f}{dt^2} - \left(\frac{df}{dt}\right)^2 + f,$$
with no explicit dependence on t. If we now let $\xi(t) = \frac{df}{dt}$, we can treat ξ as a function of the independent variable f, thus getting:
$$\frac{d^2f}{dt^2} = \frac{d\xi}{dt} = \frac{d\xi}{df}\,\frac{df}{dt} = \xi\,\frac{d\xi}{df}.$$
Plugging this into (10), and omitting the arguments of ξ and f for denser notation, we get
$$f\xi\,\frac{d\xi}{df} - \xi^2 + f = 0.$$
Multiplying through by df yields:
$$(f\xi)\,d\xi + \big(f - \xi^2\big)\,df = 0.$$
This equation looks promising in terms of solving it using the method of exact equations. Sadly however, we notice that
$$\frac{\partial}{\partial f}(f\xi) = \xi \neq -2\xi = \frac{\partial}{\partial\xi}\big(f - \xi^2\big).$$
However, we can construct an exact differential equation by multiplying by an unknown integrating factor μ(f). Doing this, we want to find μ(f) such that the following equality holds:
$$\frac{\partial}{\partial f}\big(\mu(f)\cdot f\xi\big) = \frac{\partial}{\partial\xi}\big(\mu(f)\big(f - \xi^2\big)\big)$$
$$\iff \mu'(f)f\xi + \mu(f)\xi = -2\xi\mu(f) \iff \mu'(f)f = -3\mu(f)$$
$$\iff \frac{\mu'(f)}{\mu(f)} = \frac{-3}{f} \iff \int\frac{1}{\mu}\,d\mu = -3\int\frac{1}{f}\,df \iff \ln\mu(f) = -3\ln f = \ln f^{-3} \iff \mu(f) = \frac{1}{f^3}.$$
Now we multiply through by this integrating factor and rearrange to end up at:
$$\frac{\xi}{f^2}\,d\xi + \left(\frac{1}{f^2} - \frac{\xi^2}{f^3}\right)df = 0. \qquad (11)$$

This differential equation is now exact (you can check this yourself), so there exists a function Ψ(f, ξ) such that
$$\frac{\partial\Psi}{\partial f} = \frac{1}{f^2} - \frac{\xi^2}{f^3} \qquad\text{and}\qquad \frac{\partial\Psi}{\partial\xi} = \frac{\xi}{f^2}.$$
Integrating the first equation above with respect to f gives:
$$\Psi(f, \xi) = -\frac{1}{f} + \frac{\xi^2}{2f^2} + \varphi(\xi).$$
Now differentiating the above equation with respect to ξ gives
$$\frac{\partial}{\partial\xi}\left(-\frac{1}{f} + \frac{\xi^2}{2f^2} + \varphi(\xi)\right) = \frac{\xi}{f^2} + \varphi'(\xi).$$
Comparing this to $\frac{\partial\Psi}{\partial\xi}$, we see that we need φ′(ξ) = 0, so φ(ξ) = k must be constant, k ∈ R. The solutions of (11) are then the level sets Ψ(f, ξ) = const., i.e.
$$\frac{\xi^2}{2f^2} - \frac{1}{f} + k = c_1'.$$
But since we have two arbitrary constants, we can just combine them into one and call it c₁, so the solution now reads:
$$\frac{\xi^2}{2f^2} - \frac{1}{f} = c_1.$$

Here's where we finally got to something. If we look at the expression and remember that $\xi = \frac{df}{dt}$, we can do the following:
$$\frac{\xi^2}{2f^2} - \frac{1}{f} = c_1 \iff \xi^2 = 2c_1f^2 + 2f \iff \xi = \pm\sqrt{2c_1f^2 + 2f}.$$
Using the fact that $\xi = \frac{df}{dt}$ and introducing the new constant c₂ = 2c₁, we arrive at the differential equation:
$$\frac{df}{dt} = \pm\sqrt{c_2f^2 + 2f}. \qquad (12)$$
And this differential equation is luckily separable. We rewrite it as:
$$\frac{1}{\sqrt{c_2f^2 + 2f}}\,\frac{df}{dt} = \pm 1.$$
We can now integrate both sides:
$$\int\frac{1}{\sqrt{c_2f^2 + 2f}}\,df = \int\pm 1\,dt.$$
The right-hand side is trivial; the left-hand side, however, is rather complicated. To proceed with the integration, we first factor out c₂ in the denominator and then complete the square as follows:
$$\int\frac{1}{\sqrt{c_2f^2 + 2f}}\,df = \frac{1}{\sqrt{c_2}}\int\frac{1}{\sqrt{f^2 + \frac{2}{c_2}f}}\,df = \frac{1}{\sqrt{c_2}}\int\frac{1}{\sqrt{\left(f + \frac{1}{c_2}\right)^2 - \frac{1}{c_2^2}}}\,df.$$
Now we want to use a hyperbolic substitution, one that takes advantage of the identity cosh²x − 1 = sinh²x. To do this we want a substitution such that:
$$\left(f + \frac{1}{c_2}\right)^2 = \frac{1}{c_2^2}\cosh^2(\eta) \iff f + \frac{1}{c_2} = \frac{1}{c_2}\cosh(\eta) \iff f = \frac{1}{c_2}\cosh(\eta) - \frac{1}{c_2}$$
$$\implies df = \frac{1}{c_2}\sinh(\eta)\,d\eta.$$

Using this substitution leads us to the following integral:
$$\frac{1}{\sqrt{c_2}}\int\frac{1}{\sqrt{\left(f + \frac{1}{c_2}\right)^2 - \frac{1}{c_2^2}}}\,df = \frac{1}{\sqrt{c_2}}\int\frac{\frac{1}{c_2}\sinh(\eta)}{\sqrt{\frac{1}{c_2^2}\cosh^2(\eta) - \frac{1}{c_2^2}}}\,d\eta.$$

Now we can make use of the identity cosh²x − 1 = sinh²x to rewrite our integral as:
$$\frac{1}{\sqrt{c_2}}\int\frac{\frac{1}{c_2}\sinh(\eta)}{\frac{1}{c_2}\sqrt{\cosh^2(\eta) - 1}}\,d\eta = \frac{1}{\sqrt{c_2}}\int\frac{\sinh(\eta)}{\sqrt{\sinh^2(\eta)}}\,d\eta = \frac{1}{\sqrt{c_2}}\int 1\,d\eta = \frac{\eta}{\sqrt{c_2}} + C$$
for some constant C ∈ R. This was just the left-hand side, though; the right-hand side is just
$$\int\pm 1\,dt = \pm t + c_3$$
for the constant c₃ ∈ R, where we "baked in" the constant C from the left-hand side into c₃. So in the end we arrive at:
$$\frac{\eta}{\sqrt{c_2}} = \pm t + c_3.$$

−1 Now we can go back and solve for η to get η = cosh (c2f + 1) and using this we end up with the equation:

$$\frac{1}{\sqrt{c_2}}\cosh^{-1}(c_2f + 1) = \pm t + c_3$$
$$\iff c_2f + 1 = \cosh\big(\pm\sqrt{c_2}\,t + \sqrt{c_2}\,c_3\big) \iff c_5^2f + 1 = \cosh(c_5t + c_4),$$
where we let $c_4 = \sqrt{c_2}\,c_3$ and $c_5 = \pm\sqrt{c_2}$. Notice that we require c₂ > 0, and that $c_5^2 = c_2$ regardless of the sign of c₅. Now we solve for f and get:
$$f = \frac{1}{c_5^2}\big(\cosh(c_5t + c_4) - 1\big).$$
Now we can use the definition of the hyperbolic cosine function,
$$\cosh x = \frac{e^x + e^{-x}}{2},$$
and our previous substitution t = ln r to get:
$$f = \frac{1}{c_5^2}\left(\frac{e^{c_5t + c_4} + e^{-c_5t - c_4}}{2} - 1\right) = \frac{1}{2c_5^2}\left(e^{c_5t}c_6 - 2 + \frac{1}{e^{c_5t}c_6}\right)$$
$$= \frac{1}{2c_5^2}\left(c_6r^{c_5} - 2 + \frac{1}{c_6r^{c_5}}\right) = \frac{1}{2c_5^2c_6r^{c_5}}\left(c_6^2r^{2c_5} - 2c_6r^{c_5} + 1\right) = \frac{(c_6r^{c_5} - 1)^2}{2c_5^2c_6r^{c_5}},$$

where we let $c_6 = e^{c_4}$. Now we can proceed by using our previously made substitution
$$u(r) = \frac{1}{f(r)r^2} \iff f(r) = \frac{1}{u(r)r^2},$$
which gives us:

$$u(r) = \left(\frac{(c_6r^{c_5} - 1)^2}{2c_5^2c_6r^{c_5}}\right)^{-1}\cdot\frac{1}{r^2} = \frac{2c_5^2c_6r^{c_5 - 2}}{(c_6r^{c_5} - 1)^2}.$$

We can make this look a little better by just introducing A = c₅ and B = c₆, which gives the function u(r) as:
$$u(r) = \frac{2A^2Br^{A-2}}{(Br^A - 1)^2}.$$

The final step is now to go back to the beginning, where we made the substitution V(r) = ln u(r), and compute V(r). Since V(r) = ln u(r), our final solution V(r) is given by:
$$V(r) = \ln\left(\frac{2A^2Br^{A-2}}{(Br^A - 1)^2}\right).$$
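Before moving on, the claimed solution can be checked against the ODE (6) symbolically (an illustrative SymPy sketch; the positivity assumptions on r, A, B are only for symbolic convenience):

```python
# Illustrative sketch: verify that V(r) = ln(2 A^2 B r^(A-2)/(B r^A - 1)^2)
# satisfies V'' + V'/r = e^V.
import sympy as sp

r, A, B = sp.symbols('r A B', positive=True)
V = sp.log(2*A**2*B*r**(A - 2) / (B*r**A - 1)**2)
lhs = V.diff(r, 2) + V.diff(r)/r                 # radial Laplacian of V
rhs = 2*A**2*B*r**(A - 2) / (B*r**A - 1)**2      # e^V
assert sp.simplify(lhs - rhs) == 0
```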

Corollary 7.3. The equation
$$\Delta V(r) = V''(r) + \frac{1}{r}V'(r) = -2Ke^{V(r)}$$
has the solution
$$V(r) = \ln\left(-\frac{A^2Br^{A-2}}{K(Br^A - 1)^2}\right)$$
for constants A, B ∈ R, K ≠ 0 and $r = \sqrt{x^2 + y^2}$.

Proof. Follows from how we constructed the solution in Theorem 7.2: if Ṽ solves ∆Ṽ = e^Ṽ, then V = Ṽ + ln(−1/(2K)) satisfies ∆V = ∆Ṽ = e^Ṽ = −2Ke^V.

8 Liouville's equation

8.1 Liouville's equation on the unit sphere

We are now ready to approach Liouville's equation. We are going to solve Liouville's equation locally for K = 1, i.e. the partial differential equation
$$\Delta W = -2e^W \qquad (13)$$
for W = W(u, v) with (u, v) ∈ U ⊂ Ω for some open domain U and simply connected domain Ω. We state our main result and theorem.

Theorem 8.1. Let K = 1 in Liouville's equation. Then the solution to the partial differential equation ∆W = −2e^W for W = W(z, z̄) is given by
$$W(z, \bar z) = \ln\left(\frac{4|f'(z)|^2}{\big(1 + |f(z)|^2\big)^2}\right),$$
where f is a complex function f: U → C for an open domain U ⊂ Ω and f satisfies the following conditions:

• f is analytic, that is, $\frac{\partial f}{\partial\bar z} = 0$ on U;

• f′(z) ≠ 0 for all z ∈ U.

Proof. Our approach will be the following: we know that the solutions to Liouville's equation are the coefficients E = G of the first fundamental form of the surface. In the derivation of Liouville's equation we assumed E = G and F = 0, which is equivalent to assuming that our surface patch or diffeomorphism is conformal (see Theorem 5.19). Therefore, if we can find all conformal diffeomorphisms from some open subset U ⊂ R² to the unit sphere minus the north pole, S² \ {P_NP}, we solve Liouville's equation for K = 1. To do this we use the inverse of the stereographic projection, since we know that it is conformal (shown in Proposition 5.26), and the formula which we showed in Section 5.5 is given by
$$\sigma_{st}^{-1}(u, v) = \left(\frac{2u}{1 + u^2 + v^2},\ \frac{2v}{1 + u^2 + v^2},\ \frac{-1 + u^2 + v^2}{1 + u^2 + v^2}\right).$$

We compose the inverse of the stereographic projection with a conformal map f: Ω ⊂ ℂ → ℂ, where Ω is any simply connected domain in ℂ (the composition is then also conformal by Proposition 5.12). For a complex-valued function f we have that f is conformal if and only if f′(z) ≠ 0 for all z ∈ Ω and f is analytic, that is, it satisfies the complex version of the Cauchy-Riemann equations, ∂f/∂z̄ = 0.

The composition of σ_st^{-1} with f, σ_st^{-1} ∘ f: Ω → S² \ P_NP, is given by (make sure that you understand this)
\[
\left( \sigma_{st}^{-1} \circ f \right)(z) = \left( \frac{2\,\mathrm{Re}\, f(z)}{1+|f(z)|^2},\; \frac{2\,\mathrm{Im}\, f(z)}{1+|f(z)|^2},\; \frac{-1+|f(z)|^2}{1+|f(z)|^2} \right).
\]

Denote this composition by ϕ. Then ϕ: Ω → S² \ P_NP is a conformal diffeomorphism. Furthermore, let f(z) = f(u + iv) = ξ + iη. Then ξ and η are functions of u and v, ξ = ξ(u, v) and η = η(u, v), and we may write
\[
\varphi = \left( \frac{2\xi}{1+|f(z)|^2},\; \frac{2\eta}{1+|f(z)|^2},\; \frac{-1+|f(z)|^2}{1+|f(z)|^2} \right).
\]
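One quick way to see that the formula for ϕ = σ_st^{-1} ∘ f is sensible: whatever value f(z) takes, the resulting point has norm 1, i.e. it lies on the unit sphere. A small Python check (not in the thesis; f(z) = e^z is just a sample analytic function):

```python
import cmath

def phi(z, f):
    """Inverse stereographic projection composed with f; lands on the unit sphere."""
    w = f(z)
    d = 1 + abs(w)**2
    return (2 * w.real / d, 2 * w.imag / d, (abs(w)**2 - 1) / d)

# Any analytic f works; take f(z) = exp(z) as an example.
for z in [0.3 + 0.4j, -1 + 2j, 2 - 0.5j]:
    x, y, zz = phi(z, cmath.exp)
    assert abs(x**2 + y**2 + zz**2 - 1) < 1e-12
print("phi(z) lies on the unit sphere for all sample points")
```

This reflects the algebraic identity (2Re w)² + (2Im w)² + (|w|² − 1)² = (1 + |w|²)².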

To compute the coefficient E in the first fundamental form for ϕ we need the partial derivatives; we covered the partial derivatives of σ_st^{-1} in Section 5.5. Using the chain rule we get:

\[
\varphi_u = \left( \sigma_{st}^{-1} \circ f \right)_u = \left( \sigma_{st}^{-1}(\xi, \eta) \right)_u = \left( \sigma_{st}^{-1} \right)_\xi \xi_u + \left( \sigma_{st}^{-1} \right)_\eta \eta_u
\]
and
\[
\varphi_v = \left( \sigma_{st}^{-1} \circ f \right)_v = \left( \sigma_{st}^{-1}(\xi, \eta) \right)_v = \left( \sigma_{st}^{-1} \right)_\xi \xi_v + \left( \sigma_{st}^{-1} \right)_\eta \eta_v.
\]
Computing the coefficient E:
\[
\begin{aligned}
E = \langle \varphi_u, \varphi_u \rangle
&= \left\langle \left( \sigma_{st}^{-1} \right)_\xi \xi_u + \left( \sigma_{st}^{-1} \right)_\eta \eta_u,\; \left( \sigma_{st}^{-1} \right)_\xi \xi_u + \left( \sigma_{st}^{-1} \right)_\eta \eta_u \right\rangle \\
&= \left| \left( \sigma_{st}^{-1} \right)_\xi \right|^2 |\xi_u|^2 + \left| \left( \sigma_{st}^{-1} \right)_\eta \right|^2 |\eta_u|^2 + 2 \left( \left( \sigma_{st}^{-1} \right)_\xi \cdot \left( \sigma_{st}^{-1} \right)_\eta \right) \xi_u \eta_u.
\end{aligned}
\]
Now note that from Section 5.5 we know that

\[
\left( \sigma_{st}^{-1} \right)_\xi \cdot \left( \sigma_{st}^{-1} \right)_\eta = 0.
\]
Clearly then the entire expression

\[
\left( \sigma_{st}^{-1} \right)_\xi \cdot \left( \sigma_{st}^{-1} \right)_\eta \, \xi_u \eta_u = 0.
\]
We are therefore left with
\[
E = \left| \left( \sigma_{st}^{-1} \right)_\xi \right|^2 |\xi_u|^2 + \left| \left( \sigma_{st}^{-1} \right)_\eta \right|^2 |\eta_u|^2.
\]

Now recall that from Section 5.5 we know
\[
\left\langle \left( \sigma_{st}^{-1} \right)_\xi, \left( \sigma_{st}^{-1} \right)_\xi \right\rangle = \left\langle \left( \sigma_{st}^{-1} \right)_\eta, \left( \sigma_{st}^{-1} \right)_\eta \right\rangle
\iff
\left| \left( \sigma_{st}^{-1} \right)_\xi \right|^2 = \left| \left( \sigma_{st}^{-1} \right)_\eta \right|^2,
\]
that is, E = G in the first fundamental form for the unit sphere using the stereographic projection. We can therefore factorize E as
\[
E = \left| \left( \sigma_{st}^{-1} \right)_\xi \right|^2 \left( |\xi_u|^2 + |\eta_u|^2 \right).
\]
Using what we know about $\left| \left( \sigma_{st}^{-1} \right)_\xi \right|^2$ from Section 5.5 we see that we can write this product as
\[
E = \frac{4}{(1+\xi^2+\eta^2)^2} \left( |\xi_u|^2 + |\eta_u|^2 \right) = \frac{4\left( |\xi_u|^2 + |\eta_u|^2 \right)}{(1+\xi^2+\eta^2)^2}.
\]
Using that f(z) = f(u + iv) = ξ + iη we see that

\[
E = \frac{4\left( |\xi_u|^2 + |\eta_u|^2 \right)}{(1+\xi^2+\eta^2)^2} = \frac{4\left( |\xi_u|^2 + |\eta_u|^2 \right)}{\left( 1 + |f(z)|^2 \right)^2}.
\]

Now the question is: how do we deal with |ξ_u|² + |η_u|²? We do the following. Remember that f(z) = f(u + iv) = ξ + iη, with ξ = ξ(u, v) and η = η(u, v). Also z = u + iv, so z̄ = u − iv, which gives u = (z + z̄)/2 and v = (z − z̄)/(2i). Essentially then we have the function
\[
f(z) = \xi\big( u(z, \bar z), v(z, \bar z) \big) + i\,\eta\big( u(z, \bar z), v(z, \bar z) \big).
\]

Now we compute f′(z) = df/dz:

\[
\begin{aligned}
f'(z) = (\xi + i\eta)_z
&= \xi_u \frac{\partial u}{\partial z} + \xi_v \frac{\partial v}{\partial z} + i\left( \eta_u \frac{\partial u}{\partial z} + \eta_v \frac{\partial v}{\partial z} \right) \\
&= \frac{1}{2}\xi_u + \frac{1}{2i}\xi_v + i\left( \frac{1}{2}\eta_u + \frac{1}{2i}\eta_v \right)
= \frac{1}{2}(\xi_u + \eta_v) + \frac{1}{2i}(\xi_v - \eta_u) \\
&= \frac{1}{2}\big( (\xi_u + \eta_v) - i(\xi_v - \eta_u) \big).
\end{aligned} \tag{14}
\]
Similarly we compute ∂f/∂z̄:
\[
\begin{aligned}
\frac{\partial f}{\partial \bar z} = (\xi + i\eta)_{\bar z}
&= \xi_u \frac{\partial u}{\partial \bar z} + \xi_v \frac{\partial v}{\partial \bar z} + i\left( \eta_u \frac{\partial u}{\partial \bar z} + \eta_v \frac{\partial v}{\partial \bar z} \right) \\
&= \frac{1}{2}\xi_u - \frac{1}{2i}\xi_v + i\left( \frac{1}{2}\eta_u - \frac{1}{2i}\eta_v \right)
= \frac{1}{2}(\xi_u - \eta_v) - \frac{1}{2i}(\xi_v + \eta_u) = 0,
\end{aligned}
\]

by the assumed analyticity of f. So we have that
\[
\eta_v = i(\eta_u + \xi_v) + \xi_u. \tag{15}
\]
Substituting (15) into (14) we get

\[
\frac{1}{2}\Big( \big( \xi_u + i(\eta_u + \xi_v) + \xi_u \big) - i(\xi_v - \eta_u) \Big) = \frac{1}{2}\left( 2\xi_u + 2i\eta_u \right) = \xi_u + i\eta_u.
\]

So
\[
f'(z) = \xi_u + i\eta_u.
\]
This makes it possible for us to describe the solution to our problem even better, since
\[
f'(z) = \xi_u + i\eta_u \implies |f'(z)|^2 = |\xi_u|^2 + |\eta_u|^2.
\]
We therefore have that
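The identity f′(z) = ξ_u + iη_u for analytic f can be spot-checked numerically: approximate ξ_u and η_u by finite differences of the real and imaginary parts of f in the u-direction and compare with the known complex derivative. A Python sketch (not in the thesis; f = sin is a sample analytic function with f′ = cos):

```python
import cmath

f = cmath.sin            # sample analytic function
fprime = cmath.cos       # its known derivative

h = 1e-6
for z in [0.2 + 0.7j, 1 - 0.3j]:
    u, v = z.real, z.imag
    # xi_u and eta_u: partial derivatives of Re f and Im f in the u-direction
    xi_u  = (f(complex(u + h, v)).real - f(complex(u - h, v)).real) / (2 * h)
    eta_u = (f(complex(u + h, v)).imag - f(complex(u - h, v)).imag) / (2 * h)
    assert abs(complex(xi_u, eta_u) - fprime(z)) < 1e-8
print("f'(z) = xi_u + i eta_u confirmed at the sample points")
```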

\[
E = \frac{4\left( |\xi_u|^2 + |\eta_u|^2 \right)}{\left( 1 + |f(z)|^2 \right)^2} = \frac{4|f'(z)|^2}{\left( 1 + |f(z)|^2 \right)^2}.
\]

Remember that we want to find W = ln E, so our solution is then
\[
W(z, \bar z) = \ln\left( \frac{4|f'(z)|^2}{\left( 1 + |f(z)|^2 \right)^2} \right).
\]

This completes the proof.
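The conclusion of Theorem 8.1 can also be spot-checked numerically: take an analytic f with f′ ≠ 0, form W = ln(4|f′|²/(1 + |f|²)²), and compare a finite-difference Laplacian of W against −2e^W. The Python sketch below (not part of the thesis) uses the sample choice f(z) = e^z, which is analytic with f′(z) = e^z ≠ 0 everywhere:

```python
import cmath, math

f, fprime = cmath.exp, cmath.exp   # f(z) = e^z: analytic, f'(z) = e^z != 0

def W(u, v):
    # Theorem 8.1: W = ln( 4 |f'(z)|^2 / (1 + |f(z)|^2)^2 ), z = u + iv
    z = complex(u, v)
    return math.log(4 * abs(fprime(z))**2 / (1 + abs(f(z))**2)**2)

h = 1e-4
for (u, v) in [(0.1, 0.2), (-0.5, 0.3), (1.0, -1.0)]:
    # Five-point finite-difference Laplacian of W
    lap = (W(u + h, v) + W(u - h, v) + W(u, v + h) + W(u, v - h) - 4 * W(u, v)) / h**2
    assert abs(lap + 2 * math.exp(W(u, v))) < 1e-4
print("Delta W = -2 e^W holds at the sample points")
```

For this particular f the identity can even be checked by hand: W depends only on u, and W_uu = −8e^{2u}/(1 + e^{2u})² = −2e^W.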

Example 8.2. Let f(z) = z and U = ℂ. Then |f′(z)|² = 1 and |f(z)|² = ξ² + η². This then gives that
\[
W(z, \bar z) = \ln\left( \frac{4}{(1+\xi^2+\eta^2)^2} \right)
\]
is a solution to Liouville's equation on the unit sphere. Notice the expression 4/(1+ξ²+η²)², which has also appeared a couple of times previously in the paper.

Example 8.3. Let us consider yet another example. This time consider the function f: U ⊂ ℂ → ℂ, U = ℂ \ ℝ≥0, given by f(z) = z^k/c for z = x + iy. Then
\[
f'(z) = \frac{df}{dz} = \frac{k}{c} z^{k-1}, \qquad |f'(z)|^2 = \frac{k^2}{c^2} |z|^{2(k-1)}.
\]
Let $r = \sqrt{x^2 + y^2}$; then r = |z| since $|z| = \sqrt{x^2 + y^2}$. Our solution is then given by (using Theorem 8.1)
\[
W(z, \bar z) = \ln\left( \frac{4 \cdot \frac{k^2}{c^2} |z|^{2(k-1)}}{\left( 1 + \frac{|z|^{2k}}{c^2} \right)^2} \right) = \ln\left( \frac{4 \frac{k^2}{c^2} r^{2(k-1)}}{\left( 1 + \frac{r^{2k}}{c^2} \right)^2} \right).
\]

Now let k = A/2 and 1/c² = −B. Then
\[
\ln\left( \frac{4 \frac{k^2}{c^2} r^{2(k-1)}}{\left( 1 + \frac{r^{2k}}{c^2} \right)^2} \right)
= \ln\left( \frac{4 \cdot \frac{A^2}{4}(-B)\, r^{A-2}}{\left( 1 - Br^A \right)^2} \right)
= \ln\left( \frac{-A^2 B r^{A-2}}{\left( 1 - Br^A \right)^2} \right)
\]

\[
= \ln\left( \frac{-A^2 B r^{A-2}}{\left( Br^A - 1 \right)^2} \right).
\]

Notice that this is the same result as if we let K = 1 in Corollary 7.3. This tells us that the symmetric solution we came up with corresponds to the choice f(z) = z^k/c with k = A/2 and 1/c² = −B. What is really nice here is that we put a lot of effort into proving Theorem 7.2, and it essentially only gave us this one particular solution. Meanwhile, our geometric approach was much easier and provided us with many more solutions to Liouville's equation. Yet another way of showing and appreciating the tools of differential geometry.
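The claimed correspondence is easy to confirm numerically: with k = A/2 and 1/c² = −B, the geometric solution of Example 8.3 and the symmetric solution of Corollary 7.3 (with K = 1) agree pointwise. A Python sketch (not in the thesis; A = 3, B = −1 are sample constants, with B < 0 so both logarithm arguments are positive):

```python
import math

A, B = 3.0, -1.0                 # sample constants; B < 0 keeps the logs defined
k = A / 2
inv_c2 = -B                      # the correspondence 1/c^2 = -B

def W_geometric(r):
    # Example 8.3: f(z) = z^k / c, so |f'|^2 = (k^2/c^2) r^(2(k-1))
    return math.log(4 * k**2 * inv_c2 * r**(2 * (k - 1))
                    / (1 + inv_c2 * r**(2 * k))**2)

def V_symmetric(r):
    # Corollary 7.3 with K = 1
    return math.log(-A**2 * B * r**(A - 2) / (B * r**A - 1)**2)

for r in [0.5, 1.0, 2.0, 3.7]:
    assert abs(W_geometric(r) - V_symmetric(r)) < 1e-12
print("Example 8.3 with k = A/2, 1/c^2 = -B matches Corollary 7.3 at K = 1")
```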

Example 8.4. Let f: U ⊂ ℂ → ℂ for some open domain U be such that the function f(z) = sin(z), z = u + iv, satisfies the conditions in Theorem 8.1. We have that

\[
f'(z) = \cos(z), \qquad |f(z)|^2 = |\sin(z)|^2, \qquad |f'(z)|^2 = |\cos(z)|^2.
\]

Then a solution to Liouville's equation is given by
\[
W(z, \bar z) = \ln\left( \frac{4\,|\cos(z)|^2}{\left( 1 + |\sin(z)|^2 \right)^2} \right).
\]

In particular, this solution is not a symmetric solution, so we cannot find it using Corollary 7.3.

8.2 Liouville's equation for constant K > 0

So far we have focused on letting K = 1 in Liouville's equation, and with it we have found a solution. However, for any K > 0 we have the following from the point of view of differential equations:

Let K > 0 and let E be a solution to Liouville's equation for K = 1, that is, E satisfies the non-linear partial differential equation

∆ ln E = −2E.

Define Ẽ = E/K. Note that
\[
\Delta \ln \tilde E = \Delta \ln E
\]

because of the linearity of ∆ and the logarithm rules (ln Ẽ = ln E − ln K, and ∆ applied to the constant ln K is zero). Then

\[
\Delta \ln E = -2E \iff \Delta \ln \tilde E = -2K\tilde E.
\]

So if E is a solution to Liouville's equation for K = 1, then Ẽ = E/K is a solution to Liouville's equation ∆ ln Ẽ = −2KẼ for the constant K > 0.

Recall that the Gaussian curvature of a sphere with radius R is constant and given by K = 1/R², which we showed in Proposition 5.28. Intuitively this tells us that the larger the sphere is, the "flatter" it is (since lim_{R→+∞} 1/R² = 0).² As in Proposition 5.28 we have the stereographic projection for an arbitrary sphere of radius R. We can use this since it also gives us a conformal diffeomorphism

\[
\sigma_{st}(R) : \{ (x, y, z) \in \mathbb{R}^3 \mid x^2 + y^2 + z^2 = R^2 \} \setminus \{ (0, 0, R) \} \to \mathbb{R}^2.
\]
This way we may also obtain a solution of Liouville's equation for K > 0 from the point of view of differential geometry.

Theorem 8.5. Let K > 0 in Liouville's equation. Then the solution to the partial differential equation
\[
\Delta W = -2Ke^W
\]
for W = W(z, z̄) is given by
\[
W(z, \bar z) = \ln\left( \frac{4}{K} \cdot \frac{|f'(z)|^2}{\left( 1 + |f(z)|^2 \right)^2} \right),
\]
where f is a complex function f: U → ℂ for an open domain U ⊂ Ω and f satisfies the following conditions:
• f is analytic, that is, ∂f/∂z̄ = 0 on U,
• f′(z) ≠ 0 for all z ∈ U.

Proof. Follows from Theorem 8.1 and the arguments in Section 8.2.

Remark 8.6. There are numerous ways one can formulate the solutions to Liouville's equation. Some formulate it in the following way:
\[
W(z, \bar z) = \ln\left( \frac{4\,|df/dz|^2}{\left( 1 + K|f(z)|^2 \right)^2} \right).
\]

²The interested reader may read up on Minding's theorem, which states that surfaces with the same constant Gaussian curvature are locally isometric.

Notice however that if we denote f′(z) = df/dz and let f̂(z) := f(z)/√K, then:
\[
\ln\left( \frac{4}{K} \cdot \frac{|f'(z)|^2}{\left( 1 + |f(z)|^2 \right)^2} \right)
= \ln\left( \frac{4}{K} \cdot \frac{\left| \sqrt{K}\,\hat f'(z) \right|^2}{\left( 1 + \left| \sqrt{K}\,\hat f(z) \right|^2 \right)^2} \right)
= \ln\left( \frac{4\,|\hat f'(z)|^2}{\left( 1 + K|\hat f(z)|^2 \right)^2} \right).
\]

There are several different ways to write the solutions; however, they are equivalent to each other via some transformation, like the one shown above.
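Both Theorem 8.5 and the equivalence in Remark 8.6 can be spot-checked numerically. The Python sketch below (not part of the thesis) takes the sample choices K = 2 and f(z) = e^z, verifies ΔW = −2Ke^W by finite differences, and checks that the two ways of writing the solution agree once f is rescaled to f̂ = f/√K:

```python
import cmath, math

K = 2.0
f, fprime = cmath.exp, cmath.exp    # sample analytic f with f' != 0

def W(u, v):
    # Theorem 8.5: W = ln( (4/K) |f'|^2 / (1 + |f|^2)^2 )
    z = complex(u, v)
    return math.log(4 / K * abs(fprime(z))**2 / (1 + abs(f(z))**2)**2)

def W_alt(u, v):
    # Remark 8.6 with fhat = f / sqrt(K): W = ln( 4 |fhat'|^2 / (1 + K |fhat|^2)^2 )
    z = complex(u, v)
    fh = f(z) / math.sqrt(K)
    fhp = fprime(z) / math.sqrt(K)
    return math.log(4 * abs(fhp)**2 / (1 + K * abs(fh)**2)**2)

h = 1e-4
for (u, v) in [(0.1, 0.2), (-0.4, 0.6)]:
    lap = (W(u + h, v) + W(u - h, v) + W(u, v + h) + W(u, v - h) - 4 * W(u, v)) / h**2
    assert abs(lap + 2 * K * math.exp(W(u, v))) < 1e-4   # Liouville's equation, K = 2
    assert abs(W(u, v) - W_alt(u, v)) < 1e-12            # the two forms agree
print("Theorem 8.5 and Remark 8.6 check out at the sample points")
```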

9 Further research

9.1 Global solution of Liouville's equation

There is a stronger theorem regarding Liouville's equation, proved in [3] (p. 294). The theorem states (global) solutions to Liouville's equation for both positive and negative constants K. It also shows that the function f in the solution does not have to be analytic; it is in fact enough for f to be meromorphic. This leads to a condition about simple poles (f may have at most simple poles), which becomes important when we allow meromorphic functions and not only analytic functions in our solution. Notice also that in our proof of Theorem 8.1 we actually used only one surface patch to cover our surface, which in general is not always possible. So if we want a global solution we may need multiple surface patches that agree on the intersections of their domains.

9.2 Non-linear wave equation

If we define the wave operator □ as
\[
\Box = \frac{\partial^2}{\partial x_1^2} - \frac{\partial^2}{\partial x_2^2},
\]
then as a follow-up on this paper one could also try to prove the following conjecture.

Conjecture 9.1. The equation

\[
\Box W = -2e^W
\]
for W = W(z, z̄) has the solution
\[
W(z, \bar z) = \ln\left( \frac{4|f'(z)|^2}{\left( 1 + |f(z)|^2 \right)^2} \right).
\]

What conditions and restrictions on f would we need in the conjecture if it were true?

10 Conclusion

As we have seen in this paper, differential geometry is a powerful tool that can sometimes be used to solve otherwise difficult problems relatively easily. The partial differential equation that we solved is, as mentioned before, a fully non-linear partial differential equation, which is very complicated to solve using standard "differential equation methods". We went through the stereographic projection thoroughly and showed different parametrizations. We then used it to solve Liouville's equation locally for any constant K > 0.

11 Appendix

MATLAB code used for calculating the Gaussian curvature of a sphere of radius R.

syms u v R

assume(u, 'real')
assume(v, 'real')
assume(R, 'real')

c = 4*R^2 + u^2 + v^2;

sigma = [4*R^2*u/c, 4*R^2*v/c, (-4*R^2 + u^2 + v^2)*R/c];
sigmau = diff(sigma, u);
sigmauu = diff(sigmau, u);
sigmav = diff(sigma, v);
sigmavv = diff(sigmav, v);
sigmauv = diff(sigmau, v);

kro = cross(sigmav, sigmau);
nor = norm(kro);
deno = kro/nor;

e = dot(sigmauu, deno);
g = dot(sigmavv, deno);
f = dot(sigmauv, deno);

E = dot(sigmau, sigmau);
G = dot(sigmav, sigmav);
F = dot(sigmau, sigmav);

K = (e*g - f^2)/(E*G - F^2);
answer = simplify(K)
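For readers without MATLAB, the same computation can be reproduced numerically (rather than symbolically) in plain Python with finite differences. This is a sketch, not part of the thesis: it uses the same stereographic parametrization of the sphere of radius R and checks the end result K = 1/R² at a sample parameter point instead of simplifying symbolically.

```python
import math

R = 2.0                      # sample radius; the expected result is K = 1/R^2

def sigma(u, v):
    # Stereographic parametrization of the sphere of radius R (as in the MATLAB code)
    c = 4 * R**2 + u**2 + v**2
    return (4 * R**2 * u / c, 4 * R**2 * v / c, (-4 * R**2 + u**2 + v**2) * R / c)

# Small vector helpers for 3-tuples
def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

h = 1e-3
u0, v0 = 0.3, -0.7           # sample parameter point

# First and second partial derivatives of sigma by central differences
su  = scale(sub(sigma(u0 + h, v0), sigma(u0 - h, v0)), 1 / (2 * h))
sv  = scale(sub(sigma(u0, v0 + h), sigma(u0, v0 - h)), 1 / (2 * h))
suu = scale(add(sub(sigma(u0 + h, v0), scale(sigma(u0, v0), 2)), sigma(u0 - h, v0)), 1 / h**2)
svv = scale(add(sub(sigma(u0, v0 + h), scale(sigma(u0, v0), 2)), sigma(u0, v0 - h)), 1 / h**2)
suv = scale(sub(sub(sigma(u0 + h, v0 + h), sigma(u0 + h, v0 - h)),
                sub(sigma(u0 - h, v0 + h), sigma(u0 - h, v0 - h))), 1 / (4 * h**2))

# Unit normal and the coefficients of the two fundamental forms
n = cross(su, sv)
n = scale(n, 1 / math.sqrt(dot(n, n)))
E, F, G = dot(su, su), dot(su, sv), dot(sv, sv)
e, f, g = dot(suu, n), dot(suv, n), dot(svv, n)

K = (e * g - f**2) / (E * G - F**2)
assert abs(K - 1 / R**2) < 1e-4
print("Gaussian curvature K =", K, " (expected 1/R^2 =", 1 / R**2, ")")
```

Note that the sign of the unit normal does not affect K, since flipping n flips e, f, and g simultaneously and eg − f² is unchanged; the orientation difference from the MATLAB code's cross(sigmav, sigmau) is therefore harmless.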

References

[1] B.A. Dubrovin, A.T. Fomenko, S.P. Novikov, Modern Geometry - Methods and Applications, Part 1: The Geometry of Surfaces, Transformation Groups, and Fields, Graduate Texts in Mathematics 93, Springer-Verlag, 1984.

[2] T.W. Gamelin, Complex Analysis, Undergraduate Texts in Mathematics, Springer, 2001.

[3] P. Henrici, Applied and Computational Complex Analysis, Volume 3, Wiley Classics Library, 1993.

[4] J. Liouville, Sur l'équation aux différences partielles $\frac{d^2 \log \lambda}{du\,dv} \pm \frac{\lambda}{2a^2} = 0$, Journal de mathématiques pures et appliquées, 1re série, tome 18 (1853), p. 71-72. Original paper.

[5] K. Tapp, Differential Geometry of Curves and Surfaces, Springer, 2016.