U.U.D.M. Project Report 2020:33
Liouville’s equation on simply connected domains
Patrik Deigård
Degree project in mathematics (Examensarbete i matematik), 15 hp
Supervisor (Handledare): Wanmin Liu
Examiner (Examinator): Martin Herschend
June 2020
Department of Mathematics Uppsala University
Patrik Deigård, Spring 2020
Abstract
We review the stereographic projection together with some basic differential geometry and complex analysis. Using this, we derive a local solution to Liouville's equation, a fully non-linear elliptic partial differential equation, for any constant K > 0. We also obtain symmetric solutions to two partial differential equations, more or less related to Liouville's equation, by reducing them to ordinary differential equations.
Key words. Liouville’s equation, Differential geometry.
Contents

1 Introduction
2 Preliminaries
3 Simply connected domains and metric spaces
4 Complex analysis and Wirtinger calculus
5 Differential Geometry and some Terminology
   5.1 Curves
   5.2 Surfaces
   5.3 The first and second fundamental forms
   5.4 Gaussian curvature
   5.5 The stereographic projection
6 Derivation of Liouville's equation
7 Laplace's equation
   7.1 Solution only dependent on the distance from the origin in R^2
   7.2 A slight modification of Laplace's equation
8 Liouville's equation
   8.1 Liouville's equation on the unit sphere
   8.2 Liouville's equation for constant K > 0
9 Further research
   9.1 Global solution of Liouville's equation
   9.2 Non-linear wave equation
10 Conclusion
11 Appendix
1 Introduction
Historically, many people believed the earth to be flat. Nowadays we know that this is not the case, and it can easily be disproven. However, can you blame them? Wherever you are on the surface of the earth, it probably looks flat to you, and without further evidence one could easily think that the earth is flat. Astronauts out in space can of course clearly see that the earth is not flat. However, we do not need people in space to tell us this. In this paper you will see that we do not even need any kind of "space" at all to show that the earth is not flat (Gauss's theorema egregium). This is the general intuition behind what we call the curvature of a surface. Since we know that the earth is not flat, we know that it "bends", and our intuition tells us that, whether or not we know the actual definition of curvature, it has a curvature different from that of a flat surface. This idea is indeed correct. The main result and focus of this paper is to find all solutions to Liouville's equation (see Theorem 8.5)
\[ \Delta W = -2Ke^{W} \]
for some constant K > 0, where W = W(u, v), using an approach based on the theory of differential geometry. As a partial differential equation this is fully non-linear, which generally makes it very hard to solve. The equation relates the curvature of a surface to any conformal diffeomorphism onto the surface. We will see that it is easy to describe the solutions of the equation. The solution to this differential equation is given by
\[ W(z, \bar z) = \ln\!\left( \frac{4}{K} \cdot \frac{|f'(z)|^{2}}{\left(1 + |f(z)|^{2}\right)^{2}} \right) \]
for z = u + iv, where f'(z) ≠ 0 at all points of the domain and f satisfies ∂f/∂z̄ = 0. In general the solution depends on the curvature K of the surface we are working on; here we work with surfaces of constant positive curvature K > 0. Liouville's original paper can be found in the reference list [4].
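As an independent sanity check (not part of the original derivation), one can verify symbolically that this formula solves the equation for a sample choice of holomorphic f and constant K. The specific choices f = z² + 1 and K = 2 below are our own, arbitrary ones:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

K = sp.Integer(2)        # sample constant curvature K > 0 (our choice)
f = z**2 + 1             # sample holomorphic f, so df/dzbar = 0 (our choice)
fp = 2 * z               # f'(z)

# |w|^2 written as w * conj(w), which is real for real x, y
abs_f2 = sp.expand(f * sp.conjugate(f))
abs_fp2 = sp.expand(fp * sp.conjugate(fp))

# W(z, zbar) = ln( (4/K) |f'(z)|^2 / (1 + |f(z)|^2)^2 )
W = sp.log(sp.Integer(4) / K * abs_fp2 / (1 + abs_f2)**2)

# Liouville's equation: Delta W + 2 K e^W should vanish (away from zeros of f')
residual = sp.simplify(sp.diff(W, x, 2) + sp.diff(W, y, 2) + 2 * K * sp.exp(W))
print(residual)  # 0
```

The check works for any other holomorphic f with f'(z) ≠ 0 on the domain; only the simplification time changes.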
Acknowledgements
I would like to thank my supervisor Wanmin Liu for all the guidance I have received. The helpful discussions and encouragement helped me finish this thesis.
2 Preliminaries
Unless otherwise stated:
• The standard basis of R^n is B = {e_1, ..., e_n}, where e_i = (0, ..., 1, ..., 0) with the 1 placed at position i.

• The inner product used in this paper is the standard dot product, that is,
\[ \langle u, v \rangle = \sum_{i=1}^{n} u_i v_i = \cos(\alpha)\,\|u\|\,\|v\|, \]
where u, v ∈ R^n and α ∈ [0, π] is the angle between u and v.

• A vector x ∈ R^n has components x = (x_1, ..., x_n).

• We define R_+ as R_+ = {x ∈ R | x > 0}.

• We use the notation v_1 × v_2 for the cross product of two vectors v_1, v_2 ∈ R^3. As is well known, the cross product of two vectors in R^3 (it is only defined for vectors in R^3) gives us a vector perpendicular to the two vectors.

• We write
\[ \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \]
for the determinant of a 2 × 2 matrix.
3 Simply connected domains and metric spaces
When we work in, for example, analysis or differential geometry, we usually specify some kind of "where" we are working. Functions are defined between sets, and these sets usually have certain properties depending on what we want. When we start learning math we usually deal with functions f : U ⊂ R → R without really thinking about it. Later on we study something like calculus, where we have to pay more attention not only to the "function rule" but also to its domain and range. Then we get to linear algebra, where we consider mappings like A: R^n → R^m or B : C^n → C^m, and we study vector spaces and more. Then, in analysis, we start to generalize these domains, requiring them to have certain properties that we probably have not thought much about previously. For example, we usually start off by studying metric spaces in analysis, and then in functional analysis we consider more types of spaces, such as normed spaces, Hilbert spaces and Banach spaces. One might wonder why there are so many and what the point is in having all these different "-spaces". The thing is that these "-spaces" have different types of properties, and this leads to a better understanding of the domain and lets us ensure certain properties for mappings between such spaces.

For example, in metric spaces we can always talk about some kind of "distance" between points in the metric space. We cannot always do this in a general topological space. Things that we used to take for granted (convergence to a unique point, continuous functions, what is "open" and what is "closed", etc.) really depend on these kinds of "-spaces". This is why it is important to study such structures.
Since we will be working with simply connected domains in this paper (and the domain will either be real or complex), it is good to make sure we are talking about the same thing.

Definition 3.1. A metric space is a pair (X, d), where X is a set and d is a function d: X × X → R_{≥0} satisfying the following 3 requirements:

(M1) It is symmetric: d(x, y) = d(y, x).

(M2) It is non-negative: d(x, y) ≥ 0, and d(x, y) = 0 ⟺ x = y.

(M3) It satisfies the triangle inequality: d(x, z) ≤ d(x, y) + d(y, z).
The most common example of a metric space is probably (X, d) for X = R and d(x, y) = |x − y|. As previously stated, we can measure some kind of distance in metric spaces (even if it may not behave like we are used to when the distance function is more exotic). An example of this is the metric space (R, d′) with
\[ d'(x, y) = \begin{cases} 1 & x \neq y, \\ 0 & x = y, \end{cases} \]
as the distance function, commonly referred to as the discrete metric. It clearly satisfies (M1)–(M3), but it does not "work like we are used to". Intuitively it does not make much sense, but since it satisfies all the requirements for being a distance function, (R, d′) is a metric space.
Another important metric space is (R^n, d), where d is given by
\[ d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}. \]
You should check that this satisfies (M1)–(M3), so that it really is a metric space.
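The axioms (M1)–(M3) can also be checked numerically on sample points. The following sketch (the helper names are our own, not from the text) tests both the Euclidean metric and the discrete metric on a finite sample:

```python
import itertools
import math

def euclidean(x, y):
    # d(x, y) = sqrt(sum_i (x_i - y_i)^2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def discrete(x, y):
    # d'(x, y) = 1 if x != y, 0 otherwise
    return 0 if x == y else 1

def is_metric(d, points, tol=1e-12):
    """Check (M1)-(M3) on a finite sample of points."""
    for x, y in itertools.product(points, repeat=2):
        if abs(d(x, y) - d(y, x)) > tol:       # (M1) symmetry
            return False
        if d(x, y) < 0:                        # (M2) non-negativity
            return False
        if (d(x, y) < tol) != (x == y):        # (M2) d(x, y) = 0 iff x = y
            return False
    for x, y, z in itertools.product(points, repeat=3):
        if d(x, z) > d(x, y) + d(y, z) + tol:  # (M3) triangle inequality
            return False
    return True

pts = [(0.0, 0.0), (1.0, 2.0), (-3.0, 4.0), (1.5, -0.5)]
print(is_metric(euclidean, pts))  # True
print(is_metric(discrete, pts))   # True
```

Of course, passing on a finite sample does not prove the axioms; the proof remains the exercise above.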
Now we move on to connectedness. Most people have a feeling for what we mean when we talk about connectedness, even if they have not seen the strict mathematical definition. Connectedness is important from many different perspectives. For example, the intermediate value theorem does not generally hold if the domain is not connected.

Definition 3.2. Let (X, d) be a metric space. Two sets A, B ⊂ X are said to be separated if A ∩ B̄ = ∅ and Ā ∩ B = ∅, where Ā = A ∪ {limit points of A}. A set E ⊂ X is said to be connected if E is not a union of two non-empty separated sets.
In the following picture we can see that the set E ⊂ R2 defined as E = D1 ∪ D2 is not connected.
Figure 1: Picture showing that the set E = D_1 ∪ D_2 is not connected in R^2.
For example, let A = [0, 1] and B = (1, 2). Then A and B are not separated, since 1 ∈ A ∩ B̄ = [0, 1] ∩ [1, 2] = {1}. In fact, a subset E ⊂ R is connected if and only if E is an interval.
We would now like to introduce and define what it means for a domain to be simply connected. Intuitively it means that the domain has no holes and is not "drilled through" anywhere. A domain is simply connected if any closed path can be deformed to a single point without "breaking" anything. A torus (think doughnut) is, for example, not simply connected, since it has a hole in it. A sheet of paper is simply connected as long as it does not have any holes in it (otherwise, how are we supposed to deform a circle containing the hole?). A sphere is simply connected as long as it is not drilled completely through anywhere. It is OK for the sphere to lose a couple of points, as long as no hole goes completely through it, since then we can go around the missing points.
Definition 3.3. A path in the plane (think C) from A to B is a continuous function γ(t) on some parameter interval a ≤ t ≤ b such that γ(a) = A and γ(b) = B. The path is simple if γ(s) ≠ γ(t) when s ≠ t. The path is closed if it starts and ends at the same point, that is, γ(a) = γ(b). A simple closed path is a closed path γ such that γ(s) ≠ γ(t) for a ≤ s < t < b. Examples:
Figure 2: Picture showing three different paths. From left to right we have that the first one is a simple path, the second one is a closed simple path and the third one is a path that is not simple because of the self-intersections.
From now on we consider C, since we are mostly going to work with R and C.

Definition 3.4. Let γ(t), a ≤ t ≤ b, be a closed path in a domain D. We say that γ is deformable to a point if there are closed paths γ_s(t), a ≤ t ≤ b, 0 ≤ s ≤ 1, in D such that γ_s(t) depends continuously on both s and t, γ_0 = γ, and γ_1(t) ≡ z_1 is the constant path at some point z_1 ∈ D. The domain D is simply connected if every closed path in D can be deformed to a point.
Examples follow below.
Figure 3: Picture showing two different sets, where the right one is simply connected and the left one is not simply connected. Notice that the blue curve in the left set cannot be continuously deformed to a point because of the hole inside it.

4 Complex analysis and Wirtinger calculus
Throughout this section, and unless otherwise stated, we view a complex number z = x + iy as a number z ∈ C with x, y ∈ R and the usual imaginary unit i = √−1. Its conjugate is defined as z̄ = x − iy. Functions follow a similar convention: a function f(z) = u + iv is a function f : C → C for two smooth functions u, v ∈ C^∞(R^2) [5].

Definition 4.1. The Wirtinger derivatives are the following symbolic differential operators:
\[ \frac{\partial}{\partial z} := \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right) \quad \text{and} \quad \frac{\partial}{\partial \bar z} := \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right). \]
We will usually omit the "Wirtinger" and just say "derivative". For functions we define it in the natural way: if f : C → C, then
\[ \frac{\partial f}{\partial z} := \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right) \quad \text{and} \quad \frac{\partial f}{\partial \bar z} := \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right). \]
It is also easy to check the following proposition:

Proposition 4.2.
\[ \frac{\partial \bar z}{\partial z} = 0 \quad \text{and} \quad \frac{\partial z}{\partial \bar z} = 0. \]

Proof. Let z = x + iy, so that z̄ = x − iy. Then
\[ \frac{\partial \bar z}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right)(x - iy) = \frac{1}{2}\big(1 - i(-i)\big) = \frac{1}{2} \cdot 0 = 0. \]
The proof of ∂z/∂z̄ = 0 is similar.

Definition 4.3. The complex form of the Cauchy–Riemann equations is given by
\[ \frac{\partial f}{\partial \bar z} = 0. \]
Any function f that satisfies this equation is called holomorphic.

It is also useful to express the mixed second-order derivative, which can be done in the following way.
Proposition 4.4. Let f : C → C be a smooth function. Then
\[ \frac{\partial^2 f}{\partial z\,\partial \bar z} = \frac{1}{4}\left(\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}\right). \]
Proof. As before, let ∂/∂z = (1/2)(∂/∂x − i ∂/∂y) and ∂/∂z̄ = (1/2)(∂/∂x + i ∂/∂y). Then
\[ \frac{\partial^2}{\partial z\,\partial \bar z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right)\frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right) = \frac{1}{4}\left(\frac{\partial^2}{\partial x^2} + i\,\frac{\partial^2}{\partial x\,\partial y} - i\,\frac{\partial^2}{\partial y\,\partial x} - i^2\,\frac{\partial^2}{\partial y^2}\right) = \frac{1}{4}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right). \]
So then
\[ \frac{\partial^2 f}{\partial z\,\partial \bar z} = \frac{1}{4}\left(\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}\right). \]
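Propositions 4.2 and 4.4 can be verified symbolically. This is an illustrative sketch (not part of the original text) with the Wirtinger operators written out in terms of ∂/∂x and ∂/∂y; the sample function f is an arbitrary choice of ours:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
zbar = x - sp.I * y

def d_dz(f):
    # Wirtinger derivative: (1/2)(df/dx - i df/dy)
    return sp.Rational(1, 2) * (sp.diff(f, x) - sp.I * sp.diff(f, y))

def d_dzbar(f):
    # conjugate Wirtinger derivative: (1/2)(df/dx + i df/dy)
    return sp.Rational(1, 2) * (sp.diff(f, x) + sp.I * sp.diff(f, y))

# Proposition 4.2: d(zbar)/dz = 0 and dz/dzbar = 0
print(sp.simplify(d_dz(zbar)))    # 0
print(sp.simplify(d_dzbar(z)))    # 0

# Proposition 4.4 on a sample smooth function (our choice):
f = sp.exp(x) * sp.cos(y) + x**2 * y
lhs = d_dz(d_dzbar(f))
rhs = sp.Rational(1, 4) * (sp.diff(f, x, 2) + sp.diff(f, y, 2))
print(sp.simplify(lhs - rhs))     # 0
```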
Definition 4.5. Define the Laplacian ∆ (with variables x_1 and x_2) as
\[ \Delta = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2}. \]
A function W : D ⊂ C → C, W = W(x, y), is called harmonic if its second-order partial derivatives exist, are continuous, and satisfy Laplace's equation
\[ \Delta W = \frac{\partial^2 W}{\partial x^2} + \frac{\partial^2 W}{\partial y^2} = 0. \]
This is just a way of categorizing functions which have certain properties. Harmonic functions are generally interesting since they (by definition) solve Laplace's equation, one of the most important partial differential equations (PDEs).
Theorem 4.6. Let f(z) be a continuously differentiable function on a domain D. Then f(z) is analytic if and only if f(z) satisfies the complex version of the Cauchy–Riemann equations. If f(z) is analytic, then the derivative of f(z) is given by
\[ f'(z) = \frac{\partial f}{\partial z}. \]
Proof. See [2].
The operators ∂/∂z and ∂/∂z̄ work like we are used to. For example, they satisfy

• ∂/∂z (af + bg) = a ∂f/∂z + b ∂g/∂z,

• ∂/∂z (fg) = f ∂g/∂z + g ∂f/∂z,

and more [2]. They are related by
\[ \overline{\left(\frac{\partial f}{\partial \bar z}\right)} = \frac{\partial \bar f}{\partial z} \quad \text{and} \quad \overline{\left(\frac{\partial f}{\partial z}\right)} = \frac{\partial \bar f}{\partial \bar z}. \]
Now we have come to the interesting part. We define the angle between two curves to be the angle between their tangent vectors at the point of intersection. A function f is called conformal if it preserves angles. The next theorem gives us a very easy way of checking whether a function is conformal or not.
Theorem 4.7. If f(z) is analytic at a point z_0 in its domain and f'(z_0) ≠ 0, then f is conformal at z_0.

Proof. See [2], p. 59.

Since we also have Theorem 4.6, we can check whether a function f is conformal at a point z_0 just by making sure that ∂f/∂z̄ = 0 and f'(z_0) ≠ 0.

Example 4.8. Is the function f : C → C given by f(z) = (z − i)^3 (z + 3) conformal at the point z_0 = 3 + 7i? We check the conditions. First of all, ∂f/∂z̄ = 0, since f has no dependency on z̄. Next we compute the derivative
\[ f'(z) = 3(z - i)^2(z + 3) + (z - i)^3, \]
which at the point z_0 has the value
\[ f'(z_0) = 3(3 + 6i)^2(6 + 7i) + (3 + 6i)^3 = (3 + 6i)^2(21 + 27i) = -1539 + 27i \neq 0, \]
so f is conformal at the point z_0.
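Example 4.8 can be double-checked with ordinary complex arithmetic; this small sketch (not part of the original text) confirms the value −1539 + 27i:

```python
def fprime(z):
    # f(z) = (z - i)^3 (z + 3)  =>  f'(z) = 3(z - i)^2 (z + 3) + (z - i)^3
    return 3 * (z - 1j) ** 2 * (z + 3) + (z - 1j) ** 3

z0 = 3 + 7j
val = fprime(z0)
print(val)        # (-1539+27j)
print(val != 0)   # True, so f is conformal at z0
```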
5 Differential Geometry and some Terminology
Here we will start off by covering some basic differential geometry as well as most of the terminology that will be used throughout the paper. Since we mainly work with surfaces in this paper, we will not go through much theory of curves, only a few definitions. First we introduce the notion of a regular curve.
5.1 Curves
Definition 5.1. A parametrized curve in R^n is a smooth function γ : I → R^n, where I ⊂ R is an interval.

Note that we require γ to be a smooth function. We also need the interval I to be specified; one can therefore formally define a parametrized curve as a pair (γ, I) consisting of a smooth function γ and its domain I. The unit circle in R^2 with the interval I = (−∞, ∞) is an example of the trace of a parametrized curve, which we can interpret as the pair (γ, (−∞, ∞)) for γ(t) = (cos t, sin t). We will often omit "parametrized" and simply understand a curve to be a parametrized curve.

Another important notion about curves is that of a regular curve.

Definition 5.2. A regular curve is a (parametrized) curve γ : I → R^n whose speed, |γ′(t)|, is always non-zero. That is, |γ′(t)| ≠ 0 for all t ∈ I.

An example of a non-regular curve is given by the pair (γ, (−1, 1)) for γ(t) = (t², −cos t). Its velocity is γ′(t) = (2t, sin t), which at the point t = 0 ∈ I has the speed
\[ |\gamma'(0)| = \sqrt{(2 \cdot 0)^2 + \sin^2 0} = 0. \]
It is therefore not a regular curve. However, the curve given by (γ, (0, 1)) for γ(t) = (t², −cos t) is a regular curve. Its speed is given by
\[ |\gamma'(t)| = \sqrt{(2t)^2 + \sin^2 t} \neq 0, \]
since 2t > 0 and sin² t ≥ 0 for all t ∈ (0, 1).
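The speed computation for γ(t) = (t², −cos t) can be reproduced symbolically; a small sketch, not from the text itself:

```python
import sympy as sp

t = sp.symbols('t', real=True)
gamma = sp.Matrix([t**2, -sp.cos(t)])    # the curve from the text
velocity = gamma.diff(t)                 # gamma'(t) = (2t, sin t)
speed = sp.sqrt(velocity.dot(velocity))  # |gamma'(t)| = sqrt(4t^2 + sin^2 t)

print(speed.subs(t, 0))                  # 0, so gamma is not regular on (-1, 1)
print(float(speed.subs(t, sp.Rational(1, 2))) > 0)  # True: positive speed on (0, 1)
```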
5.2 Surfaces
Definition 5.3. Let f : U ⊂ R^m → R^n, p ∈ U and v ∈ R^m. The directional derivative of f in the direction v at the point p is (assuming it exists)
\[ df_p(v) = \lim_{t \to 0} \frac{f(p + tv) - f(p)}{t} = (f \circ \gamma)'(0), \]
where γ(t) = p + tv. If f is smooth (infinitely many times continuously differentiable), then df_p = L_A, where L_A is multiplication from the left with the matrix A given by
\[ A = \begin{pmatrix} \frac{\partial f_1}{\partial x_1}(p) & \cdots & \frac{\partial f_1}{\partial x_m}(p) \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1}(p) & \cdots & \frac{\partial f_n}{\partial x_m}(p) \end{pmatrix}. \]
This matrix A is called the Jacobian matrix of f at p.
Remark: We often simply write dfp = A, and then it is understood that we mean multiplication from the left by A.
Definition 5.4. Let X ⊂ R^{m_1} and Y ⊂ R^{m_2}. We call X and Y diffeomorphic if there exists a smooth bijective function f : X → Y whose inverse is also smooth. In this case we call f a diffeomorphism.

Diffeomorphisms are important and are a natural analogue of an isomorphism between structures. Intuitively, a diffeomorphism not only tells us that two structures are similar, but also that they look essentially the same up to smooth deformation. The following is a good example: the plane R^2 is diffeomorphic to the paraboloid. This can be thought of as taking the plane R^2 ⊂ R^3 and bending it upwards, thus obtaining the paraboloid
\[ P_3 = \{(x, y, z) \in \mathbb{R}^3 \mid z = x^2 + y^2\} \]
through the diffeomorphism σ : R^2 → P_3 given by σ(x, y) = (x, y, x² + y²). Both σ and its inverse σ⁻¹ : P_3 → R^2, given by σ⁻¹(x, y, z) = (x, y), are smooth functions, and the functions are clearly bijective. Thus we have a diffeomorphism, and R^2 is diffeomorphic to P_3.
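The two maps can be checked symbolically; this quick sketch (not part of the original text; the names sigma and sigma_inv are ours) verifies that they are mutually inverse and that σ lands on the paraboloid:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

sigma = lambda x, y: (x, y, x**2 + y**2)   # sigma: R^2 -> P3
sigma_inv = lambda x, y, z: (x, y)          # projection back to R^2

# sigma_inv(sigma(x, y)) = (x, y): the maps are mutually inverse on P3
print(sigma_inv(*sigma(x, y)) == (x, y))    # True

# a point in the image of sigma satisfies z = x^2 + y^2
px, py, pz = sigma(x, y)
print(sp.simplify(pz - (px**2 + py**2)))    # 0
```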
We will continue to discuss surfaces throughout the paper, but we need to be more careful and precise about what we mean by a "surface". One way to categorize surfaces is by introducing the notion of a regular surface. Without being precise and exact, one could say that a regular surface is more or less a surface without any sharp turns or corners. One could also say that a regular surface, which we view as a subset of R^3, locally looks like the plane R^2. Imagine, for example, an air-filled balloon. If you were sufficiently small and living on the balloon, you would think that locally everything around you is flat, like how we view the earth. Your desk at home would probably not be a regular surface, since it most likely has corners, and if you were standing right at such a corner, locally it would not look flat. That is the intuition behind a regular surface.
We will now state the definition of a regular surface, but first we need some more terminology:
Definition 5.5. Let S ⊂ R^n be a subset. A set V ⊂ S is called open in S if V is the intersection with S of an open set in R^n. If p ∈ S, then a neighbourhood of p in S means a subset of S that is open in S and that contains p.
Definition 5.6. A set S ⊂ R^3 is called a regular surface if each of its points has a neighbourhood in S that is diffeomorphic to an open set in R^2. Written out, it means that for every p ∈ S there exist a neighbourhood V of p in S, an open set U ⊂ R^2, and a diffeomorphism σ : U → V. Such a diffeomorphism σ is called a surface patch. A collection of surface patches that together cover all of the regular surface S is called an atlas for S.

An example of a surface that is not regular is the cone
\[ K = \{(x, y, z) \in \mathbb{R}^3 \mid z = \sqrt{x^2 + y^2}\}. \]
Intuitively this is because of the pointy tip it has at the origin, where it is not differentiable. We will not work a lot with the definition of a regular surface directly, but we will use regular surfaces quite often, as they have the properties we look for (or rather, lack the properties we do not want).

Now we will have a look at tangent planes. We are all probably pretty familiar with tangent lines to some function f : R → R, and maybe even with finding an equation for a tangent plane in R^3 to the graph of a function f : R^2 → R. However, the tangent planes themselves are interesting and have quite some theory behind them. First of all, what exactly do we mean by a tangent plane?

Definition 5.7. Let S be a regular surface. A regular curve in S means a regular curve in R^3 whose trace is contained in S. The tangent plane to S at a point p ∈ S is the set of all initial velocity vectors of regular curves in S with initial position p. Stated differently, the tangent plane is
\[ T_pS = \{\gamma'(0) \mid \gamma \text{ is a regular curve in } S \text{ with } \gamma(0) = p\} \cup \{0\}. \]
Note that we need 0 ∈ TpS to ensure that TpS is indeed a vector space. A picture of the situation would look something like:
Figure 4: Picture showing the tangent plane TpS at the point p ∈ S. It shows a sample of the velocity vectors through p in S.
An interesting lemma that follows is the following:

Lemma 5.8. Let σ be a surface patch for the surface S, σ : U ⊂ R^2 → V ⊂ S, with p ∈ V. If we set q = σ⁻¹(p) and let {u, v} be the coordinate variables of U, then
\[ T_pS = \operatorname{span}\{\sigma_u(q), \sigma_v(q)\}. \]
In particular, T_pS is a two-dimensional subspace of R^3.

Proof. See [5], p. 141.
Figure 5: Picture showing a unit normal N (defined below) to the tangent plane TpS, as well as the two vectors σu and σv that span the tangent plane. Here σ is a diffeomorphism.
This can be thought of as follows: at any point of the surface S, the tangent plane at that point has a basis consisting of the partial derivatives of the surface patch σ.

A simple yet very important example is the graph of a function. Let f : U ⊂ R^2 → R be a smooth function. We define the graph of the function as
\[ G_f = \{(x, y, f(x, y)) \in \mathbb{R}^3 \mid (x, y) \in U\}. \]
Now the question is: what is the tangent plane to the graph at any point in the domain of f? Clearly, a surface patch for the graph of f is given by σ : U ⊂ R^2 → R^3, σ(x, y) = (x, y, f(x, y)). The partial derivatives of the surface patch are
\[ \sigma_x = (1, 0, f_x(x, y)), \qquad \sigma_y = (0, 1, f_y(x, y)). \]
So the tangent plane at any point p of the graph is the plane spanned by the vectors σ_x(q) and σ_y(q), where σ⁻¹(p) = q. The tangent plane is therefore
\[ T_pS = \operatorname{span}\{\sigma_x(q), \sigma_y(q)\} = \operatorname{span}\{(1, 0, f_x(x, y)), (0, 1, f_y(x, y))\}. \]
We will come back to the example of the graph later on. Now we have covered some theory of curves and surfaces, and we have talked a little about tangent planes. So far we have mostly talked about intrinsic measurements of surfaces, such as tangent planes. But we almost always view the surface as embedded into R^3. It makes sense to continue exploring the surface and its properties, and what could be more natural than to start looking at its extrinsic measurements?
The next step is therefore to take a 90-degree turn and have a look at normals. A normal can be defined in a couple of different ways. Usually, in a first course in linear algebra, we learn that two vectors u and v are perpendicular, or orthogonal to each other, if their inner product ⟨u, v⟩ is 0. In this case u is a normal vector to v and, vice versa, v is a normal vector to u.
Inner products are useful in that they extend the notion of angles and lengths to settings where these are not always natural. For example, angles as we intuitively know them are not what one might expect when working with vector spaces of polynomials, or vector spaces of continuous functions with some non-standard inner product. We lose our intuition and have to go by the definitions, but in that way we can still talk about angles and perpendicular vectors (be they polynomials, functions or something completely different).
We will try to capture the concept of normals and orientations in the following definitions:
Definition 5.9. An orientation for a 2-dimensional subspace V ⊂ R^3 means a choice of a unit-length normal vector N to V. With respect to a given orientation N for V, an ordered basis {v_1, v_2} of V is called positively oriented if
\[ \frac{v_1 \times v_2}{|v_1 \times v_2|} = N. \]
The only other possibility is that (v_1 × v_2)/|v_1 × v_2| = −N, in which case the basis is called negatively oriented.

Definition 5.10. A normal vector to a surface S at a point p means a vector N ∈ R^3 that is orthogonal to T_pS. That is, the vector N ∈ R^3 is called a normal vector to S at the point p if ⟨N, v⟩ = 0 for all v ∈ T_pS. Furthermore, a unit normal vector to S at p is a unit-length normal vector, which is equivalent to an orientation for T_pS.

If we go back to our previous example with the graph, we can see that a unit normal vector of the graph at any point (x, y) ∈ U is given by
\[ N = \frac{(1, 0, f_x(x, y)) \times (0, 1, f_y(x, y))}{\left|(1, 0, f_x(x, y)) \times (0, 1, f_y(x, y))\right|} = \frac{(-f_x(x, y), -f_y(x, y), 1)}{\sqrt{1 + f_x(x, y)^2 + f_y(x, y)^2}}. \]
Notice that we could just as well have chosen
\[ N = \frac{(f_x(x, y), f_y(x, y), -1)}{\sqrt{1 + f_x(x, y)^2 + f_y(x, y)^2}}. \]
The only difference would be the orientation of the surface.
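The normal-vector formula for a graph can be verified symbolically; in this sketch (not part of the original text) the sample graph f(x, y) = x² + y² is our own choice:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2                      # a sample graph (our choice of f)

sigma = sp.Matrix([x, y, f])         # surface patch sigma(x, y) = (x, y, f(x, y))
sigma_x = sigma.diff(x)              # (1, 0, f_x)
sigma_y = sigma.diff(y)              # (0, 1, f_y)

n = sigma_x.cross(sigma_y)           # normal direction sigma_x x sigma_y
N = n / sp.sqrt(n.dot(n))            # unit normal

# N is orthogonal to the tangent plane span{sigma_x, sigma_y}
print(sp.simplify(N.dot(sigma_x)))   # 0
print(sp.simplify(N.dot(sigma_y)))   # 0

# and it agrees with (-f_x, -f_y, 1)/sqrt(1 + f_x^2 + f_y^2)
fx, fy = f.diff(x), f.diff(y)
expected = sp.Matrix([-fx, -fy, 1]) / sp.sqrt(1 + fx**2 + fy**2)
print(sp.simplify(N - expected))     # zero vector
```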
Now we have gone through both intrinsic and extrinsic measurements of surfaces, as well as some theory of the surfaces themselves. We have a decent understanding of surfaces and can visualize normals, tangent planes, regular and non-regular surfaces, and more. Next we go through some properties that functions might have and what they mean for the surface and for the image of the surface under the function.
First of all, imagine a hand fan (the kind of handheld fan you see in old movies, used for cooling yourself when it is hot outside) that is completely unfolded. Think of that as our surface. Now imagine a function f that takes this fan and folds it, not necessarily completely. If you were to draw two straight lines from the handle outwards to the edge of the fan, then the angle between those two lines would differ depending on whether the fan is completely unfolded or partially folded. The idea behind the notion of conformal mappings is that they preserve the angle between two vectors under the mapping.
Definition 5.11. Let S_1 and S_2 be two regular surfaces. A diffeomorphism f : S_1 → S_2 is called conformal if df preserves angles, that is, if
\[ \angle(x, y) = \angle\big(df_p(x), df_p(y)\big) \]
for all p ∈ S_1 and all x, y ∈ T_pS_1.

The following proposition will come in handy later on in the paper, when we actually try to solve Liouville's equation.
Proposition 5.12. Let S_1, ..., S_n be regular surfaces and let σ_k : S_k → S_{k+1} be conformal diffeomorphisms. Then the composition
\[ \sigma_{n-1} \circ \sigma_{n-2} \circ \cdots \circ \sigma_1 \]
is also conformal.
Proof. We will only show the case n = 3 (renaming the functions σ_i to f and g), but the general idea is the same. We prove this by using the definition of a conformal mapping and the chain rule. Since both f and g are conformal, it is true that
\[ \angle(x, y) = \angle\big(df_p(x), df_p(y)\big). \]
Using the chain rule (with g(q) = p, p ∈ S_2, q ∈ S_1) we have
\[ \angle\big(d(f \circ g)_q(x),\, d(f \circ g)_q(y)\big) = \angle\big(df_p \circ dg_q(x),\, df_p \circ dg_q(y)\big). \]
Now, letting dg_q(x) = v and dg_q(y) = w, we have
\[ \angle\big(df_p \circ dg_q(x),\, df_p \circ dg_q(y)\big) = \angle\big(df_p(v), df_p(w)\big) = \angle(v, w), \]
since f is conformal. Furthermore,
\[ \angle(v, w) = \angle\big(dg_q(x), dg_q(y)\big) = \angle(x, y), \]
since g is also conformal. This proves that
\[ \angle\big(d(f \circ g)_q(x),\, d(f \circ g)_q(y)\big) = \angle(x, y), \]
so f ∘ g is conformal.

In the same way that we think of conformal mappings and what they do to angles, we can also think of the area distortion that may happen due to a mapping.
Definition 5.13. Let S_1 and S_2 be two regular surfaces. A diffeomorphism f : S_1 → S_2 is called equiareal if the area distortion satisfies ‖df_p‖ = 1 for all p ∈ S_1.

These are all important and interesting concepts, since it is good to keep track of what we actually preserve when we manipulate surfaces. This is captured even more by what is called an isometry. But before we precisely introduce the notion of isometry, we must first introduce what we call the first fundamental form.

Definition 5.14. The first fundamental form of S assigns to each p ∈ S the restriction to T_pS of the squared norm function in R^3, that is, the map from T_pS to R defined as x ↦ |x|²_p.

This is basically a fancy name for the composition of the norm of a vector in the tangent plane with the function that squares its argument. It is, however, very important, and we will see just how important it is for this very paper later on. For now, we will just use it to define what we mean by an isometry.
Definition 5.15. Let S_1 and S_2 be two regular surfaces. A diffeomorphism f : S_1 → S_2 is called an isometry if df preserves their first fundamental forms. This is equivalent to saying that df preserves their inner products:
\[ \langle x, y \rangle_p = \langle df_p(x), df_p(y) \rangle_{f(p)} \]
for all p ∈ S_1 and all x, y ∈ T_pS_1. Furthermore, two regular surfaces are called isometric if there exists an isometry between them. We call a property or measurement on a surface intrinsic if it is preserved by isometries.

There have been a lot of definitions on top of each other here, but they are certainly needed to proceed with what we want to do. They are also very important: since we have been talking mainly about surfaces, we see that we have come a long way in studying and categorizing surfaces. We have also introduced properties of functions and how they act on and change surfaces.
Proposition 5.16. A diffeomorphism f : S_1 → S_2 is equiareal iff it is area-preserving in the following sense: if R ⊂ S_1 is any polygonal region, then Area(R) = Area(f(R)). Furthermore, for regular surfaces S_1 and S_2,
\[ \operatorname{Area}\big(f(R)\big) = \iint_R \|df\|\, dA. \]
Proof. See [5], p. 164.

We will use this later on to prove a theorem that allows us to check whether a diffeomorphism is equiareal by just studying its first fundamental form. Now we can move on to the first fundamental form in local coordinates. This is a convenient notation which we will use from now on.
5.3 The first and second fundamental forms
Definition 5.17. Let S be a regular surface and let σ : U ⊂ R^2 → V ⊂ S be a surface patch. Define the functions E, F, G : U → R such that for all q ∈ U,
\[ E(q) = |\sigma_u(q)|_p^2, \qquad F(q) = \langle \sigma_u(q), \sigma_v(q) \rangle, \qquad G(q) = |\sigma_v(q)|_p^2. \]
We call E, F and G the coefficients of the first fundamental form.
We will, however, most often just write E = |σ_u|², F = ⟨σ_u, σ_v⟩ and G = |σ_v|² for a denser notation.

Now let us take an example, since we have just introduced a lot of definitions, and once again go back to the graph of a function. What are the coefficients of the first fundamental form for the graph of a function? We know the surface patch σ(x, y) = (x, y, f(x, y)), and we have already calculated the partial derivatives
\[ \sigma_x = (1, 0, f_x), \qquad \sigma_y = (0, 1, f_y). \]
Now we can easily calculate the coefficients of the first fundamental form:
\[ E = |\sigma_x|^2 = 1 + 0 + (f_x)^2 = 1 + (f_x)^2, \]
\[ F = \langle \sigma_x, \sigma_y \rangle = f_x f_y, \]
\[ G = |\sigma_y|^2 = 0 + 1 + (f_y)^2 = 1 + (f_y)^2. \]
Of course, we could evaluate these at any point p on the graph to get actual numbers if we wanted to, but these are the general coefficients.
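The coefficients just computed can be confirmed symbolically for a generic smooth f; a quick sketch, not part of the original text:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)            # generic smooth function f(x, y)

sigma = sp.Matrix([x, y, f])          # patch of the graph of f
sigma_x, sigma_y = sigma.diff(x), sigma.diff(y)

E = sigma_x.dot(sigma_x)              # E = |sigma_x|^2
F = sigma_x.dot(sigma_y)              # F = <sigma_x, sigma_y>
G = sigma_y.dot(sigma_y)              # G = |sigma_y|^2

fx, fy = f.diff(x), f.diff(y)
print(sp.simplify(E - (1 + fx**2)))   # 0
print(sp.simplify(F - fx * fy))       # 0
print(sp.simplify(G - (1 + fy**2)))   # 0
```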
Remark: The coefficients E and G are both positive, since they are defined as the squared lengths of the (non-zero) vectors σ_u and σ_v respectively.

Definition 5.18. The first fundamental form in the local coordinates {u, v}, which we usually just call "the first fundamental form of σ", is the expression
\[ F_1 = E\,du^2 + 2F\,du\,dv + G\,dv^2. \]
Again we return to the graph for an example. The first fundamental form of the graph of a function would then be: