
2 Regular Surfaces

In this half of the course we approach surfaces in $\mathbb{E}^3$ in a similar way to that in which we considered curves. A parameterized surface will be a function¹ $x : U \to \mathbb{E}^3$ where $U$ is some open subset of the plane $\mathbb{R}^2$. Our purpose is twofold:

1. To be able to measure quantities such as length (of curves), angle (between curves on a surface) and area using the parameterization space $U$. This requires us to create some method of taking vectors to the surface and 'pulling back' to $U$, where we will perform our calculations.²

2. To find ways of defining and measuring the curvature of a surface.

Before starting, we recall some of the important background terms and concepts from other classes.

Notation Surfaces being functions $x : U \subseteq \mathbb{R}^2 \to \mathbb{E}^3$, we will preserve some of the notational differences between $\mathbb{R}^2$ and $\mathbb{E}^3$. Thus:

• Co-ordinate points in the parameterization space $U \subset \mathbb{R}^2$ will be written as lower-case letters or, more commonly, row vectors: for example $p = (u, v) \in \mathbb{R}^2$.

• Points in $\mathbb{E}^3$ will be written using capital letters and row vectors: for example $P = (3, 4, 8) \in \mathbb{E}^3$.

• Vectors in $\mathbb{E}^3$ will be written bold-face or as column vectors: for example $\mathbf{v} = \begin{pmatrix} 2 \\ -1 \\ \pi^2 \end{pmatrix}$.

Sometimes it will be convenient to abuse notation and add a vector $\mathbf{v}$ to a point $P$: the result is a new point $P + \mathbf{v}$.

Open Sets in $\mathbb{R}^n$ The domains of our parameterized functions will always be open sets in $\mathbb{R}^2$. These are a little harder to describe than open intervals in $\mathbb{R}$: the definitions are here for reference.

Definition 2.1. Let $p \in \mathbb{R}^n$. The open ball of radius $r$ around $p$ is the set of all points whose distance to $p$ is less than $r$: i.e.
\[ B(p, r) = \{ x \in \mathbb{R}^n : |x - p| < r \} \]
A set $U \subseteq \mathbb{R}^n$ is open if for all points $p \in U$ there is some radius $r_p > 0$ such that $B(p, r_p) \subseteq U$. Otherwise said, around any point of an open set there exists an open ball which is contained entirely in that set.

¹For example $x(u, v) = (u, v, u^2 + v^2)^T$ describes a paraboloid.

²One of the great difficulties of differential geometry is that it is hard to see at this stage why this process is beneficial: why not measure all these things directly within $\mathbb{E}^3$? Part of the reason is that we want to be able to generalize: we want to be able to deal with surfaces (and eventually, though not in this class, with higher-dimensional objects, or manifolds) without reference to some larger space in which they lie. As an example, suppose that a physicist says, 'space-time is curved.' They are not imagining an observer measuring its bendiness from the outside. On the contrary, the curvature is detected within space-time: in general relativity, gravity is precisely the curvature of space-time.

Examples 1. In $\mathbb{R}$, $B(p, r)$ is just the open interval $(p - r, p + r)$.

2. $(0, 1) \subseteq \mathbb{R}$ is open: let $p \in (0, 1)$ and choose any $r_p$ with $0 < r_p \le \min(p, 1 - p)$; then $B(p, r_p) \subseteq (0, 1)$.

3. $[0, 1) \subseteq \mathbb{R}$ is not open: for $p = 0$ it is impossible to find a radius $r > 0$ such that $B(p, r) = (-r, r)$ is contained entirely in $[0, 1)$.

4. In $\mathbb{R}^2$, $B(p, r)$ is the interior (without edge) of the circle of radius $r$ centered at $p$. In $\mathbb{R}^3$ it is the interior of the sphere of radius $r$ centered at $p$.

We use the concept of an open set in $\mathbb{R}^n$ similarly to how we used open intervals when describing curves: a curve was usually defined as a function $x : I \to \mathbb{E}^n$ where $I$ was some open interval; surfaces will be defined on open subsets $U \subset \mathbb{R}^2$ similarly, e.g. $x : U \to \mathbb{E}^3$. Openness is important because edges of sets cause difficulties for differentiation. Unless otherwise said, you should assume that the domain of any parameterized surface is open.

Partial Differentiation and the Gradient Vector

A multivariable scalar function is an assignment $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$; e.g. $f(x_1, x_2, x_3) = x_1^2 - x_3 \cos x_2$ is a function $f : \mathbb{R}^3 \to \mathbb{R}$. Note that we could write $f(\mathbf{x}) = x_1^2 - x_3 \cos x_2$, where $\mathbf{x} = (x_1, x_2, x_3)$. Supposing it exists, the partial derivative of $f$ with respect to $x_i$ is the function

\[ \frac{\partial f}{\partial x_i} = \lim_{h \to 0} \frac{f(x_1, \ldots, x_{i-1}, x_i + h, x_{i+1}, \ldots, x_n) - f(x_1, \ldots, x_n)}{h} \]
The function $f$ is differentiable if $\frac{\partial f}{\partial x_i}$ exists for all $i = 1, \ldots, n$. It is smooth if partial derivatives of all orders and all combinations exist: for example, if $f : \mathbb{R}^4 \to \mathbb{R}$ is smooth then $\frac{\partial^3 f}{\partial x_1 \partial x_2 \partial x_3}$ is defined and continuous. Essentially all our surfaces will be smooth. Some notations for functions:

• $C(U, V)$ is the set of continuous functions $f : U \to V$.

• $C^1(U, V)$ is the set of continuously differentiable functions $f : U \to V$ (differentiable functions with continuous derivatives).

• $C^\infty(U, V)$ is the set of smooth functions.

It is common to say, for instance, that '$f$ is $C^1$' rather than writing $f \in C^1(U, V)$. Sets $C^2$, $C^3$, etc., are defined similarly.

Example If $f(\mathbf{x}) = x_1^2 - x_3 \cos x_2$, then it is a smooth function $f \in C^\infty(\mathbb{R}^3, \mathbb{R})$. Moreover,
\[ \frac{\partial f}{\partial x_1} = 2x_1, \qquad \frac{\partial f}{\partial x_2} = x_3 \sin x_2, \qquad \frac{\partial f}{\partial x_3} = -\cos x_2 \]
Definition 2.2. The gradient of a differentiable function $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$ is the function $\nabla f : U \to \mathbb{R}^n$ defined by
\[ \nabla f(x_1, \ldots, x_n) = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right) \]

If $f$ is as above, we have $\nabla f = (2x_1, x_3 \sin x_2, -\cos x_2)$.
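Computations like this are easy to check by machine. Here is a small sympy sketch (our own check, not part of the notes) verifying the gradient of the running example.

```python
# Check the gradient of f(x1, x2, x3) = x1^2 - x3*cos(x2) with sympy.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 - x3*sp.cos(x2)

# The gradient is the tuple of partial derivatives (Definition 2.2).
grad_f = [sp.diff(f, xi) for xi in (x1, x2, x3)]
# grad_f is [2*x1, x3*sin(x2), -cos(x2)]
```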

2.1 Vector Fields and 1-forms We are almost ready to make the critical definition where we turn differential operators into vectors. The first step involves thinking about the gradient of a scalar function $f : \mathbb{E}^n \to \mathbb{R}$: in such a situation we would write $\nabla f$ as a column vector. Moreover, as Theorem 2.4 below makes precise, the gradient encodes both the direction in which the value of $f$ changes maximally, and the magnitude of that rate of change.

Definition 2.3. If $\mathbf{v} = (v_1, \ldots, v_n)^T$ is a vector based at a point $P \in U \subseteq \mathbb{E}^n$ and $f : U \to \mathbb{R}$ is a function, then the directional derivative of $f$ at $P$ in the direction $\mathbf{v}$ is the scalar

\[ D_{\mathbf{v}} f(P) := \mathbf{v} \cdot \nabla f(P) = \sum_{k=1}^n v_k \left. \frac{\partial f}{\partial x_k} \right|_P \]

The directional derivative allows one to compute the rate of change of f as one travels in any direc- tion. Indeed:

Theorem 2.4. 1. If $h$ is a small number, then $f(P + h\mathbf{v}) \approx f(P) + D_{\mathbf{v}} f(P)\, h$.

2. If $x(t)$ is a curve passing through $P$ at $t = 0$ and with tangent vector $x'(0) = \mathbf{v}$, then

\[ D_{\mathbf{v}} f(x(0)) = \left. \frac{d}{dt} \right|_{t=0} f(x(t)) \]
is the rate of change of $f$ at $P$ as you travel along the curve.

3. If v is a unit vector, then Dv f (P) is maximal when v points in the same direction as the gradient vector. 4. ∇ f points in the direction of greatest increase of f at P. If v is the unit vector in that direction, then |∇ f (P)| = Dv f (P).

Example In $\mathbb{E}^3$, let $P = (x_1, x_2, x_3)$ and suppose that $f(P) = x_1^2 + x_2 x_3 + x_3 - x_1$. Then
\[ \nabla f = \begin{pmatrix} 2x_1 - 1 \\ x_3 \\ x_2 + 1 \end{pmatrix} \]

Now let $\mathbf{v} = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}$. Then the directional derivative in the direction of $\mathbf{v}$ is

\[ D_{\mathbf{v}} f = \mathbf{v} \cdot \nabla f = \sum_{k=1}^3 v_k \frac{\partial f}{\partial x_k} = 2x_1 - 1 + 2x_3 - (x_2 + 1) \]

Be careful with notation! $D_{\mathbf{v}} f$ is a continuous function $\mathbb{E}^3 \to \mathbb{R}$. If we evaluate at a specific point $P = (0, 1, 2)$, say, then $D_{\mathbf{v}} f(P) = -1 + 4 - 2 = 1$ is a number.
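The function-versus-number distinction is easy to see in a quick sympy check (ours, not part of the notes) of the example above.

```python
# Directional derivative D_v f = v . grad(f) for f = x1^2 + x2*x3 + x3 - x1.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 + x2*x3 + x3 - x1
v = (1, 2, -1)

# D_v f is a function of (x1, x2, x3)...
Dvf = sum(vk * sp.diff(f, xk) for vk, xk in zip(v, (x1, x2, x3)))

# ...while its value at P = (0, 1, 2) is a number (here, 1).
value = Dvf.subs({x1: 0, x2: 1, x3: 2})
```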

The Directional Derivative Operator One of the reasons for putting the function $f$ at the end of the directional derivative expression is that it tempts us to write the directional derivative at a point $P$ as an operator:

\[ D_{\mathbf{v}}|_P = \sum_{k=1}^n v_k \left. \frac{\partial}{\partial x_k} \right|_P \]

This is a map (function) from the space of continuously differentiable functions U ⊆ En → R to the real numbers. It is similarly tempting to drop any reference to the point P and define a more general operator

\[ D_{\mathbf{v}} = \sum_{k=1}^n v_k \frac{\partial}{\partial x_k} \]

This is a map from the space of differentiable functions $U \subseteq \mathbb{E}^n \to \mathbb{R}$ to the space of continuous functions $U \to \mathbb{R}$: feed a $C^1$ function $f$ to $D_{\mathbf{v}}$ and the result is a continuous function $D_{\mathbf{v}} f$. It is worth noting that $\mathbf{v}$ determines the operator $D_{\mathbf{v}}$ and vice versa.³

One of the principal ideas of vector fields in differential geometry is to view them as directional derivative operators or, as we've seen above, linear first-order partial differential operators. This sounds artificially complicated, but the rationale is simple: we really only care about vectors in terms of how functions/surfaces change in said direction.

We now come to one of the fundamental definitions of differential geometry.

Definition 2.5. Let $U \subseteq \mathbb{R}^n$ be open and let $p \in U$. The tangent space to $U$ at $p$, denoted $T_pU$, is the set of all directional derivative operators $D_{\mathbf{v}}|_p$ at $p$. A vector field $\mathbf{v}$ on $U$ is a (smooth) choice, for each $p \in U$, of an element of $T_pU$.

By smooth, we mean that the entries of $\mathbf{v}$ are infinitely differentiable functions. Note that we currently use a lower-case $\mathbf{v} \in \mathbb{R}^n$.⁴ We will often write $\mathbf{v}|_p$ for the particular tangent vector which is the value of the vector field at $p$.

Example $\mathbf{v} = 3x_1 \frac{\partial}{\partial x_1} + 2x_1 x_3 \frac{\partial}{\partial x_2} - x_1 \frac{\partial}{\partial x_3}$ is a vector field on $\mathbb{R}^3$. If $\mathbb{R}^3$ is $\mathbb{E}^3$, then this corresponds to the vector-valued function
\[ \mathbf{v}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3x_1 \\ 2x_1 x_3 \\ -x_1 \end{pmatrix} \]
$\mathbf{v}$ is an infinitely differentiable function $\mathbb{E}^3 \to \mathbb{E}^3$. It should be clear that the operators $\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_3}$ form a basis of the tangent space at each point.

³Given a linear partial differential operator $D_{\mathbf{v}}$, observe that the co-ordinate $x_k : \mathbb{E}^n \to \mathbb{R}$ is a $C^1$ function. Thus $v_k = D_{\mathbf{v}} x_k$, which recovers $\mathbf{v}$, entry by entry.

⁴The prefix tangent helps us discriminate between these and position vectors. To reiterate: in differential geometry almost every vector is a tangent vector: a directional derivative operator.

Definition 2.6. If $\mathbf{v}$ is a vector field on $U \subseteq \mathbb{R}^n$, and $f : U \to \mathbb{R}$ is a function, then we write $\mathbf{v}[f]$ for the result of applying the vector field $\mathbf{v}$ to the function $f$. I.e.,
\[ \mathbf{v} = \sum_{k=1}^n v_k \frac{\partial}{\partial x_k} \implies \mathbf{v}[f] = \sum_{k=1}^n v_k \frac{\partial f}{\partial x_k} \]
If $f$ is also smooth, then $\mathbf{v}[f]$ is itself a smooth function $U \to \mathbb{R}$.

Example Let $f(\mathbf{x}) = x_1^2 x_2$ be a function on $\mathbb{R}^3$, and $\mathbf{v}$ be the vector field in the previous example. Then

\[ \mathbf{v}[f] = 3x_1 \frac{\partial f}{\partial x_1} + 2x_1 x_3 \frac{\partial f}{\partial x_2} - x_1 \frac{\partial f}{\partial x_3} = 6x_1^2 x_2 + 2x_1^3 x_3 \]
Theorem 2.7. Let $\mathbf{v}, \mathbf{w}$ be vector fields on $U \subseteq \mathbb{R}^n$ and let $f, g : U \to \mathbb{R}$ be smooth. Then

1. Each tangent space is a vector space: indeed $f\mathbf{v}$ and $\mathbf{v} + \mathbf{w}$ are vector fields, where
\[ (f\mathbf{v})[g] = f\,\mathbf{v}[g], \qquad (\mathbf{v} + \mathbf{w})[f] = \mathbf{v}[f] + \mathbf{w}[f] \]

2. Vector fields act linearly on smooth functions: if a, b ∈ R are constant, then v[a f + bg] = av[ f ] + bv[g]

3. Vector fields satisfy a product rule known as the Leibniz rule: v[ f g] = f v[g] + gv[ f ]

All these results are straightforward from the definition of a vector field as a linear first order partial differential operator.

Notation warning! Much of the difficulty of working with vector fields (and the associated 1-forms which are to come) is of becoming confused with notation. Remember that $\mathbf{v}[f]$ is a function $\mathbb{R}^n \to \mathbb{R}$, whose value at $p \in \mathbb{R}^n$ is $\mathbf{v}[f](p) = \mathbf{v}|_p[f]$. The previous Theorem, written at the point $p$, produces a total mess! In practice, extra brackets are very helpful. . .
\[ (f\mathbf{v})[g](p) = f(p)\,\mathbf{v}[g](p), \qquad (\mathbf{v} + \mathbf{w})[f](p) = \mathbf{v}[f](p) + \mathbf{w}[f](p), \]
\[ \mathbf{v}[af + bg](p) = a\mathbf{v}[f](p) + b\mathbf{v}[g](p), \qquad \mathbf{v}[fg](p) = f(p)\,\mathbf{v}[g](p) + g(p)\,\mathbf{v}[f](p) \]
Note in particular that $f\mathbf{v}$ denotes the vector field obtained by multiplying each vector in the vector field by the value of the function $f$ at the corresponding point. It does not mean apply the function $f$ to the vector $\mathbf{v}$, which makes no sense.

For example, if $\mathbf{v} = x_2^2 \frac{\partial}{\partial x_1} - x_1 \frac{\partial}{\partial x_2}$ is a vector field on $\mathbb{R}^2$, $f(x_1, x_2) = x_1^2 x_2$ and $p = (2, -1)$, then

$f\mathbf{v} = x_1^2 x_2^3 \frac{\partial}{\partial x_1} - x_1^3 x_2 \frac{\partial}{\partial x_2}$ is a vector field.

$\mathbf{v}[f] = x_2^2 \frac{\partial}{\partial x_1}(x_1^2 x_2) - x_1 \frac{\partial}{\partial x_2}(x_1^2 x_2) = 2x_1 x_2^3 - x_1^3$ is a function.

$(f\mathbf{v})(p) = f(p)\,\mathbf{v}|_p = -4 \frac{\partial}{\partial x_1} + 8 \frac{\partial}{\partial x_2}$ is a tangent vector.

$(\mathbf{v}[f])(p) = -4 - 8 = -12$ is a number.
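The distinction between $f\mathbf{v}$ and $\mathbf{v}[f]$ can also be checked mechanically. A sympy sketch of the worked example (representations are our own choice; a vector field is just its list of components):

```python
# f*v vs v[f] for v = x2^2 d/dx1 - x1 d/dx2 and f(x1, x2) = x1^2 x2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 * x2
v = (x2**2, -x1)                 # components of the vector field

# v[f]: apply the operator to f -- the result is a function.
vf = v[0]*sp.diff(f, x1) + v[1]*sp.diff(f, x2)

# f*v: scale the field by f -- the result is a vector field.
fv = (f*v[0], f*v[1])

p = {x1: 2, x2: -1}
vf_at_p = vf.subs(p)                           # a number: -12
fv_at_p = (fv[0].subs(p), fv[1].subs(p))       # a tangent vector: (-4, 8)
```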

1-forms Make sure you understand what a tangent vector is before moving on to this section, as 1-forms are defined in terms of tangent vectors.

Definition 2.8. A 1-form at p ∈ U ⊆ Rn (sometimes called a covector), is a linear map

\[ \alpha|_p : T_pU \to \mathbb{R} \]
The set of 1-forms at $p$ is the cotangent space $T_p^*U$. If $\mathbf{v}|_p$ is a tangent vector at $p$, then we write $\alpha|_p(\mathbf{v}|_p)$ for the value of this function.

Example Let $p = (1, 0, -2) \in \mathbb{R}^3$. Then $\alpha|_p$ defined by
\[ \alpha|_p \left( a \left. \frac{\partial}{\partial x_1} \right|_p + b \left. \frac{\partial}{\partial x_2} \right|_p + c \left. \frac{\partial}{\partial x_3} \right|_p \right) = 7a + 3b + c \]
is a 1-form. Indeed
\[ \alpha|_p \left( -2 \left. \frac{\partial}{\partial x_1} \right|_p + 3 \left. \frac{\partial}{\partial x_2} \right|_p - 4 \left. \frac{\partial}{\partial x_3} \right|_p \right) = -2 \cdot 7 + 3 \cdot 3 - 4 \cdot 1 = -9 \]

Definition 2.9. Write $dx_k$ for the 1-form at $p$ defined by
\[ dx_k \left( \left. \frac{\partial}{\partial x_j} \right|_p \right) = \delta_{jk} = \begin{cases} 1 & j = k \\ 0 & j \neq k \end{cases} \tag{†} \]

The 1-forms $dx_k$ (where $k = 1, \ldots, n$) form a basis of the cotangent space $T_p^*U$,⁵ hence the general 1-form at $p$ has the form

\[ \alpha|_p = \sum_{k=1}^n a_k\, dx_k \]
for unique constants $a_k$.

Definition 2.10. A 1-form on $U \subseteq \mathbb{R}^n$ is an assignment

\[ \alpha = \sum_{k=1}^n a_k\, dx_k \]
where each $a_k : U \to \mathbb{R}$ is a smooth function. We also write $dx_k$ for the 1-form on $U$ whose restriction to a point $p$ is $dx_k$. If $\mathbf{v}$ is a vector field on $U$, we write $\alpha(\mathbf{v})$ for the function $U \to \mathbb{R}$ obtained by mapping $p \mapsto \alpha|_p(\mathbf{v}|_p)$.

⁵The tangent space and the cotangent space at $p$ both have the same dimension $n$. The $*$ denotes 'dual space' in linear algebra: if $V$ is a vector space over a field $F$, then $V^*$ is the vector space of linear maps $V \to F$. Equation (†) says that the vectors $\left. \frac{\partial}{\partial x_k} \right|_p$ and the covectors $dx_k$ are dual bases to each other.

Example In $\mathbb{R}^2$, let $\alpha = 2x_2\, dx_1 - 3\, dx_2$ and $\mathbf{v} = 3x_1^2 \frac{\partial}{\partial x_1} + \frac{\partial}{\partial x_2}$. Then $\alpha(\mathbf{v}) = 6x_1^2 x_2 - 3$ is a function $\mathbb{R}^2 \to \mathbb{R}$.

Theorem 2.11. Let $f : U \to \mathbb{R}$ be smooth. Then there exists a unique 1-form, denoted $df$, with the property that $df(\mathbf{v}) = \mathbf{v}[f]$ for all vector fields $\mathbf{v}$ on $U$.

Definition 2.12. $df$ is the exterior derivative of $f$. If a 1-form $\alpha = dg$ is the exterior derivative of a function $g$, we say that it is exact.

Proof. A linear map is defined uniquely by what it does to a basis; thus, at each $p \in U$, it is enough to define $df|_p$ by
\[ df|_p \left( \left. \frac{\partial}{\partial x_i} \right|_p \right) = \left. \frac{\partial f}{\partial x_i} \right|_p \]
$df$ is then the assignment $p \mapsto df|_p$. Otherwise said,
\[ df = \frac{\partial f}{\partial x_1}\, dx_1 + \cdots + \frac{\partial f}{\partial x_n}\, dx_n \tag{∗} \]
This is smooth since all derivatives of $f$ are smooth.

Example If $f(x_1, x_2) = 2x_1^2 x_2 - 1$, then $df = 4x_1 x_2\, dx_1 + 2x_1^2\, dx_2$.

Proposition 2.13. If $f, g$ are smooth functions, then

1. $d(f + g) = df + dg$

2. $d(fg) = f\, dg + g\, df$

3. $df = 0 \iff f$ is a constant function

Proof. 1. and 2. follow immediately from the definition of $df$ and the relevant parts of Theorem 2.7. For 3.,
\[ df = 0 \iff df(\mathbf{v}) = 0 \text{ for all vector fields } \mathbf{v} \iff df\left( \frac{\partial}{\partial x_i} \right) = 0 \text{ for all } i = 1, \ldots, n \iff \frac{\partial f}{\partial x_i} = 0 \text{ for all } i \iff f \text{ is constant} \]
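In co-ordinates, (∗) makes the exterior derivative a one-line computation. A hypothetical sympy sketch (representing a 1-form on $\mathbb{R}^2$ by its coefficient list in the basis $dx_1, dx_2$, a convention of ours):

```python
# Exterior derivative via (*): df = (df/dx1) dx1 + (df/dx2) dx2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def d(f):
    """Return the coefficients [a1, a2] of df in the basis (dx1, dx2)."""
    return [sp.diff(f, x1), sp.diff(f, x2)]

f = 2*x1**2*x2 - 1
df = d(f)                        # [4*x1*x2, 2*x1**2]

# Check df(v) = v[f] for an arbitrary test field v (our choice).
v = (sp.sin(x2), x1**3)
df_of_v = df[0]*v[0] + df[1]*v[1]
v_of_f = v[0]*sp.diff(f, x1) + v[1]*sp.diff(f, x2)
```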

Example: Polar Co-ordinates

Co-ordinate functions $x_i : U \to \mathbb{R}$ have played a large role in our discussion of vector fields and 1-forms: at each point $p \in U$ the tangent and cotangent spaces have bases generated by these functions:
\[ T_pU = \operatorname{Span}\left\{ \left. \frac{\partial}{\partial x_1} \right|_p, \ldots, \left. \frac{\partial}{\partial x_n} \right|_p \right\} \qquad T_p^*U = \operatorname{Span}\{ dx_1, \ldots, dx_n \} \]

As an example of this, we consider the change of co-ordinates in $U = \mathbb{R}^2 \setminus \{(0, 0)\}$ from rectangular to polar. Recall:
\[ \begin{cases} x = r\cos\theta \\ y = r\sin\theta \end{cases} \qquad \begin{cases} r = \sqrt{x^2 + y^2} \\ \tan\theta = \frac{y}{x} \end{cases} \]
The chain rule tells us:
\[ \frac{\partial}{\partial x} = \frac{\partial r}{\partial x}\frac{\partial}{\partial r} + \frac{\partial \theta}{\partial x}\frac{\partial}{\partial \theta} = \frac{x}{\sqrt{x^2 + y^2}}\frac{\partial}{\partial r} - \frac{y}{x^2 + y^2}\frac{\partial}{\partial \theta} = \cos\theta\,\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta} \]

Similarly

\[ \frac{\partial}{\partial y} = \frac{\partial r}{\partial y}\frac{\partial}{\partial r} + \frac{\partial \theta}{\partial y}\frac{\partial}{\partial \theta} = \frac{y}{\sqrt{x^2 + y^2}}\frac{\partial}{\partial r} + \frac{x}{x^2 + y^2}\frac{\partial}{\partial \theta} = \sin\theta\,\frac{\partial}{\partial r} + \frac{\cos\theta}{r}\frac{\partial}{\partial \theta} \]

We can also compute the 1-forms: by (∗),

dx = d(r cos θ) = cos θ dr − r sin θ dθ, dy = sin θ dr + r cos θ dθ

We can check that the dual basis relations hold:
\[ dx\left( \frac{\partial}{\partial x} \right) = (\cos\theta\, dr - r\sin\theta\, d\theta)\left( \cos\theta\,\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta} \right) \]
\[ = \cos^2\theta\, dr\left( \frac{\partial}{\partial r} \right) - \frac{\cos\theta\sin\theta}{r}\, dr\left( \frac{\partial}{\partial \theta} \right) - r\sin\theta\cos\theta\, d\theta\left( \frac{\partial}{\partial r} \right) + \sin^2\theta\, d\theta\left( \frac{\partial}{\partial \theta} \right) = 1 \]

Similarly

\[ dy\left( \frac{\partial}{\partial y} \right) = 1 \qquad dx\left( \frac{\partial}{\partial y} \right) = dy\left( \frac{\partial}{\partial x} \right) = 0 \]

One can also compute $\frac{\partial}{\partial r}, \frac{\partial}{\partial \theta}, dr, d\theta$ in terms of $\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, dx, dy$, either by pure linear algebra from the above, or directly using the chain rule.
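The chain-rule coefficients above can be verified symbolically. A sympy check of ours (restricted to the first quadrant via positive symbols, so that the substitutions are unambiguous):

```python
# Verify that d/dx = cos(theta) d/dr - (sin(theta)/r) d/dtheta, i.e. that
# dr/dx = cos(theta) and dtheta/dx = -sin(theta)/r.
import sympy as sp

x, y, r, th = sp.symbols('x y r theta', positive=True)

drdx = sp.diff(sp.sqrt(x**2 + y**2), x)      # dr/dx
dthdx = sp.diff(sp.atan2(y, x), x)           # dtheta/dx

# Substitute x = r cos(theta), y = r sin(theta) and simplify.
sub = {x: r*sp.cos(th), y: r*sp.sin(th)}
coeff_r = sp.simplify(drdx.subs(sub))        # expect cos(theta)
coeff_th = sp.simplify(dthdx.subs(sub))      # expect -sin(theta)/r
```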

Relation to elementary calculus When $n = 1$ we are used to writing $\frac{df}{dx}$ for the derivative of a function $f : \mathbb{R} \to \mathbb{R}$. Contrary to what you might expect, $\frac{df}{dx}$ is not the quotient of two 1-forms; rather it is the result of the application of the vector field $\frac{d}{dx}$ to the function $f$, or equivalently the application of the 1-form $df$ to the vector field $\frac{d}{dx}$:
\[ \frac{df}{dx} = \frac{d}{dx}[f] = df\left( \frac{d}{dx} \right) \]

Vector fields in R are written with a straight d rather than partial ∂ because there is only one direction in which to differentiate. The notion of tangent vector is at best useless and at worst misleading in R!

Line integrals Where you have seen 1-forms before is in integration: $\int_0^1 g(x)\, dx$ is the integral of the 1-form $g\, dx$ over the interval $[0, 1]$. Indeed we integrate 1-forms over oriented curves.

Definition 2.14. If x : [a, b] → U ⊆ Rn is a curve, and α is a 1-form on U, we define

\[ \int_x \alpha := \int_a^b \alpha(x'(t))\, dt \]
In order to feed the 1-form a tangent vector, we are obliged to think of the derivative of the curve $x(t)$ as a family of tangent vectors along the curve,⁶ i.e.
\[ x(t) = \begin{pmatrix} f_1(t) \\ f_2(t) \end{pmatrix} \implies x'(t) = f_1'(t)\,\frac{\partial}{\partial x_1} + f_2'(t)\,\frac{\partial}{\partial x_2} \]

Examples

1. Integrate $\alpha = x_1\, dx_2$ over the unit circle counter-clockwise. Here $x(t) = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix}$, so that $x'(t) = -\sin t\,\frac{\partial}{\partial x_1} + \cos t\,\frac{\partial}{\partial x_2}$. Then $\alpha(x'(t)) = x_1 \cos t$ which, restricted to the curve $x(t)$, is $\cos^2 t$. Around the unit circle is $t = 0$ to $2\pi$, thus

\[ \int_x \alpha = \int_0^{2\pi} \cos^2 t\, dt = \frac{1}{2}\int_0^{2\pi} 1 + \cos 2t\, dt = \pi \]

2. Integrate $\alpha = x_2^2\, dx_1 - x_1^2\, dx_2$ over the curve $x(t) = (t, t^2)$ between $(0, 0)$ and $(1, 1)$.
\[ \int_x \alpha = \int_0^1 \alpha(x'(t))\, dt = \int_0^1 (t^2)^2 - 2t^3\, dt = \frac{1}{5} - \frac{1}{2} = -\frac{3}{10} \]
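Both integrals are mechanical once the 1-form is pulled back to the parameter $t$; a sympy check of ours:

```python
# Verify both line-integral examples.
import sympy as sp

t = sp.symbols('t')

# 1) alpha = x1 dx2 over the unit circle: x(t) = (cos t, sin t).
x1t, x2t = sp.cos(t), sp.sin(t)
integrand1 = x1t * sp.diff(x2t, t)               # x1 * x2'
I1 = sp.integrate(integrand1, (t, 0, 2*sp.pi))   # pi

# 2) alpha = x2^2 dx1 - x1^2 dx2 over x(t) = (t, t^2), 0 <= t <= 1.
x1t, x2t = t, t**2
integrand2 = x2t**2 * sp.diff(x1t, t) - x1t**2 * sp.diff(x2t, t)
I2 = sp.integrate(integrand2, (t, 0, 1))         # -3/10
```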

Theorem 2.15. The integral of a 1-form along a curve is independent of the choice of (orientation-preserving) parameterization.

Proof. Suppose we are integrating $\alpha$ over a curve parameterized by $x(t)$, where $t$ runs from $a$ to $b$. Now suppose that $t$ is a function of $s$ (with $t(p) = a$, $t(q) = b$) where $t'(s) > 0$ (orientation-preserving), and integrate using $s$ as a parameter. The critical observation is that $\frac{d}{ds} x(t(s)) = x'(t(s))\,\frac{dt}{ds}$ by the chain rule.
\[ \int_x \alpha = \int_p^q \alpha\left( \frac{d}{ds} x(t(s)) \right) ds = \int_p^q \alpha\left( x'(t(s))\,\frac{dt}{ds} \right) ds = \int_p^q \frac{dt}{ds}\, \alpha(x'(t(s)))\, ds = \int_a^b \alpha(x'(t))\, dt = \int_x \alpha \quad \text{(integrating with respect to } t\text{)} \]
⁶This should make sense: the velocity $x'$ indicates magnitude and direction of travel, not position, so it should be thought of as a tangent vector rather than a position vector.

If we change the orientation of the curve then the order of the limits is reversed and $\int_x \alpha$ becomes $-\int_x \alpha$.

As the next theorem shows, integration of 1-forms that are the exterior derivative of a function is independent of the path you choose. This is essentially the fundamental theorem of calculus for curves.

Theorem 2.16 (Fundamental Theorem of Line Integrals). Let $x(t)$ be a curve for $a \le t \le b$. If $f$ is a function on $\mathbb{R}^n$, then
\[ \int_x df = f(x(b)) - f(x(a)) \]
The integral of $df$ over any curve between two points depends only on the values of $f$ at those points.

Proof.

\[ \int_x df = \int_a^b df(x')\, dt = \int_a^b x'[f]\, dt \]
However,
\[ x'[f] = x_1'\,\frac{\partial f}{\partial x_1} + \cdots + x_n'\,\frac{\partial f}{\partial x_n} = \frac{dx_1}{dt}\frac{\partial f}{\partial x_1} + \cdots + \frac{dx_n}{dt}\frac{\partial f}{\partial x_n} = \frac{d}{dt}\left( f(x(t)) \right) \]
Hence
\[ \int_x df = \int_a^b \frac{d}{dt}\left( f(x(t)) \right) dt = f(x(b)) - f(x(a)) \]

Example If $\alpha = \cos(x_1 x_2)(x_2\, dx_1 + x_1\, dx_2)$, find the integral of $\alpha$ over any curve between the points $\left(\pi, \frac{1}{3}\right)$ and $\left(\frac{1}{2}, \pi\right)$.

Observe that $\alpha = d\sin(x_1 x_2)$. Thus if $\gamma$ is any curve between $\left(\pi, \frac{1}{3}\right)$ and $\left(\frac{1}{2}, \pi\right)$, we have
\[ \int_\gamma \alpha = \sin(x_1 x_2)\Big|_{(\pi, \frac{1}{3})}^{(\frac{1}{2}, \pi)} = \sin\frac{\pi}{2} - \sin\frac{\pi}{3} = 1 - \frac{\sqrt{3}}{2} \]

Summary There should be two take-aways from this discussion:

• Vector fields and tangent vectors allow one to express the notion that vectors encode a direction in which something can change (i.e. a derivative).

• Vector fields and 1-forms essentially break standard derivatives into two pieces: the result is an alternative, and ultimately more flexible, language for describing familiar results from multi- variable calculus.

The real pay-off is hard to see until we apply our new language to surfaces. Thus. . .

2.2 Surfaces A surface can be thought of in two ways:

1. A two-dimensional subset of E3 with certain properties.

2. The image of a function x : U → E3 defined on an open set U ⊆ R2.

The second notion is easier to calculate with since we can describe points on the surface in terms of co-ordinates on U. It is, however, less general and arguably less geometric. In practice, a sur- face is typically defined as being the union of the images of several functions xi with suitable rules governing what happens on overlaps.7

Definition 2.17. 1. A (local) parameterized surface S is the image of a map x : U → E3 where U is an open subset of R2. I.e. S = x(U). In what follows, we will often abuse language and refer to the parameterization x as the surface.

2. Let u, v be co-ordinates on U. The co-ordinate tangent vector fields to x are

\[ x_u = \frac{\partial x}{\partial u} \qquad x_v = \frac{\partial x}{\partial v} \]

3. The surface x (with co-ordinates u, v) is regular at a point p ∈ U if xu(p), xv(p) are linearly independent.

4. If x is regular at p ∈ U, then the tangent plane to the surface at x(p) is Span{xu(p), xv(p)}. 5. A surface is regular if it is regular at all points p ∈ U.

6. If $x$ is regular, then the unit normal vector field to the surface is
\[ U = \frac{x_u \times x_v}{|x_u \times x_v|} \]

A choice of one of the two unit normal directions is called choosing an orientation of the surface.

Throughout we will assume that a surface is smooth and regular: i.e. all derivatives of x exist at all points of U and the tangent plane is well defined at each point of x(U).

We now make use of some of the language of the previous section and obtain an important result.

Definition 2.18. Suppose that x : U → E3 is a smooth surface. The differential of x is the vector-valued 1-form dx.

Suppose that $u, v$ are co-ordinates on $U \subseteq \mathbb{R}^2$ and that $x$ is a smooth surface. Then the differential of $x$ can be written
\[ dx = \frac{\partial x}{\partial u}\, du + \frac{\partial x}{\partial v}\, dv = x_u\, du + x_v\, dv \]
where $du, dv$ are the co-ordinate 1-forms on $U$.

7See the aside below. . .

Theorem 2.19. If $x : U \to \mathbb{E}^3$ is a surface and $y \in T_pU$ is a tangent vector to $U$ at $p \in U$, then $dx(y)$ is tangent to the surface at $x(p)$. Indeed at each point $p \in U$ we have a linear map

\[ dx|_p : T_pU \to T_{x(p)}S \]

Proof. Write $y = a \left. \frac{\partial}{\partial u} \right|_p + b \left. \frac{\partial}{\partial v} \right|_p$. Then $dx(y) = a x_u + b x_v$ is clearly tangent to the surface at $x(p)$.

We say that dx is a 1-form with values in E3: it maps tangent vectors to U to genuine tangent vectors to the surface. This indeed is one reason for calling linear 1st-order differential operators at p ∈ U tangent vectors.

Examples 1. Sphere of radius $a$. Use spherical polar co-ordinates:
\[ x(\phi, \theta) = a \begin{pmatrix} \sin\phi\cos\theta \\ \sin\phi\sin\theta \\ \cos\phi \end{pmatrix} \]

Here $x : U \to \mathbb{E}^3$, where $U = (0, \pi) \times (0, 2\pi)$. Note that the image of $x$ is the sphere minus the (dashed) semicircle defined by $\theta = 0$, so that our domain is open. We could argue for extending $\theta$ modulo $2\pi$ so that co-ordinates continue round the sphere. In contrast, however, the co-ordinates cannot be extended to the north or south poles without losing regularity: in this parameterization the sphere is not regular at the poles. This explains the term local: we usually need more than one parameterization to cover a complicated surface.

[Figure: the domain $U = (0, \pi) \times (0, 2\pi)$ with co-ordinates $(\phi, \theta)$, mapped by $x$ onto the sphere; the co-ordinate vectors $\frac{\partial}{\partial \phi}$ and $\frac{\partial}{\partial \theta}$ are shown on both sides.]

Observe in the picture how the tangent vectors $\frac{\partial}{\partial \phi}, \frac{\partial}{\partial \theta}$ to $U$ are mapped by the differential $dx$ to the tangent vectors $dx\left(\frac{\partial}{\partial \phi}\right) = x_\phi$ and $dx\left(\frac{\partial}{\partial \theta}\right) = x_\theta$.

2. Graph of a smooth function $z = f(x, y)$. Parameterize by $x(u, v) = \begin{pmatrix} u \\ v \\ f(u, v) \end{pmatrix}$. This has

\[ dx = \begin{pmatrix} 1 \\ 0 \\ f_u \end{pmatrix} du + \begin{pmatrix} 0 \\ 1 \\ f_v \end{pmatrix} dv \qquad U = \frac{1}{\sqrt{1 + f_u^2 + f_v^2}} \begin{pmatrix} -f_u \\ -f_v \\ 1 \end{pmatrix} \]

where $f_u = \frac{\partial f}{\partial u}$, etc. This is clearly regular at all points, regardless of the function $f$. For example, the paraboloid $z = x^2 + y^2$ may be parameterized $x(u, v) = \begin{pmatrix} u \\ v \\ u^2 + v^2 \end{pmatrix}$.
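The cross product behind the normal formula is easy to verify for the paraboloid; a sympy sketch of ours:

```python
# Graph parameterization of the paraboloid f(u, v) = u^2 + v^2:
# compute x_u, x_v and the (unnormalized) normal x_u x x_v.
import sympy as sp

u, v = sp.symbols('u v')
f = u**2 + v**2
X = sp.Matrix([u, v, f])

Xu, Xv = X.diff(u), X.diff(v)
n = Xu.cross(Xv)          # expect (-f_u, -f_v, 1) = (-2u, -2v, 1)
U = n / n.norm()          # unit normal, defined for every (u, v)
```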

3. Surfaces of revolution. For instance, a smooth curve $z = f(x)$ can be rotated around the $x$-axis to obtain
\[ x(\phi, u) = \begin{pmatrix} u \\ f(u)\cos\phi \\ f(u)\sin\phi \end{pmatrix} \qquad (\phi, u) \in (0, 2\pi) \times \operatorname{dom}(f) \]

This has differential and unit normal field
\[ dx = \begin{pmatrix} 0 \\ -f(u)\sin\phi \\ f(u)\cos\phi \end{pmatrix} d\phi + \begin{pmatrix} 1 \\ f'(u)\cos\phi \\ f'(u)\sin\phi \end{pmatrix} du \]

\[ U = \frac{1}{\sqrt{1 + f'(u)^2}} \begin{pmatrix} -f'(u) \\ \cos\phi \\ \sin\phi \end{pmatrix} \]

The surface is regular everywhere. Just as with the sphere, we should ignore the curve $\phi = 0$, but it is not crucial. The picture is the surface of revolution defined by the curve $z = f(x) = 2 + \cos x$ for $x \in (0, 2\pi)$. The co-ordinate $\phi$ essentially measures the angle around the axis of rotation.

4. Ruled surfaces. Given two curves $y(u), z(u)$, define

x(u, v) = y(u) + vz(u)

In general, the surface is ruled by the lines parameterized by $v$: that is, through each point on the surface there is a line (a $v$ co-ordinate curve) contained entirely within the surface. The particular case where $z(u) = y'(u)$ is called the tangent developable of $y(u)$. Ruled surfaces are very useful in engineering applications as they are very simple to construct: for example, corrugated iron is built on a curve which is approximately a sine wave. Two famous examples of ruled surfaces are shown below:

• The helicoid is built from a helix by attaching to each point a horizontal line pointing to the $z$-axis. A particular example is given by $x(u, v) = \begin{pmatrix} v\cos u \\ v\sin u \\ u \end{pmatrix}$ for $v > 0$.

• The hyperboloid of one sheet is in fact a doubly ruled surface: through each point there are two lines lying on the surface. It may be parameterized as a ruled surface by
\[ x(u, v) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + u \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} + v \begin{pmatrix} 2u \\ u^2 - 1 \\ u^2 + 1 \end{pmatrix} \]

though convincing yourself of the double ruling might take a little more work. . .
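One can at least confirm algebraically that this ruled parameterization lands on the hyperboloid $x^2 + y^2 - z^2 = 1$; a sympy check of ours:

```python
# Expand x^2 + y^2 - z^2 for the ruled parameterization of the
# hyperboloid of one sheet; it should be identically 1.
import sympy as sp

u, v = sp.symbols('u v')
x = 1 + 2*u*v
y = u + v*(u**2 - 1)
z = u + v*(u**2 + 1)

assert sp.expand(x**2 + y**2 - z**2) == 1
```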

5. Tube of radius a > 0 about a curve y(u). Parameterize as

x(u, φ) = y(u) + a cos φN(u) + a sin φB(u)

where N, B are the normal and binormal vectors of the Frenet frame of y. A tube around a helix is drawn.

Implicitly Defined Surfaces You are probably more used to surfaces being defined implicitly, as the level sets of some function, rather than being parameterized. This method of defining surfaces is extremely useful. Not having an explicit parameterization means that calculations are harder; however, you gain by having a more global view of a surface.

Definition 2.20. An implicitly defined surface is the zero set of a smooth function f : E3 → R. We say that it is regular if ∇ f (equivalently d f ) does not vanish on the surface.

Examples

1. The sphere of radius $a$ is implicitly defined by $f(x, y, z) = x^2 + y^2 + z^2 - a^2$. Here

\[ df = 2(x\, dx + y\, dy + z\, dz) \]

is never zero since at least one of x, y, z is non-zero at all points on the surface f = 0. The sphere is thus regular. Note the contrast with our earlier example of the parameterized sphere which we could not make regular at the north and south pole. The lack of regularity in this case is a function of the parameterization, not the surface itself.

2. The function $f(x, y, z) = x^2 + y^2 - z^2 - a$ has

\[ df = 2(x\, dx + y\, dy - z\, dz), \]

which is non-zero away from $(x, y, z) = (0, 0, 0)$. Depending on the sign of $a$, the set of points with $f = 0$ is a hyperboloid or a cone.

$a > 0$ Hyperboloid of 1-sheet: $x^2 + y^2 = z^2 + a > 0$ for all $z$
$a = 0$ Cone: $(0, 0, 0)$ is not a regular point of the cone
$a < 0$ Hyperboloid of 2-sheets: $x^2 + y^2 = z^2 + a \ge 0$ only when $|z| \ge \sqrt{-a}$

The next theorem ties together the currently distinct notions of regular. More importantly, it says that we can always assume the existence of local co-ordinates.

Theorem 2.21. A regular implicitly defined surface is (locally) the image of a regular local surface.

We omit the proof, as it is essentially the famous (and difficult!) Implicit Function Theorem⁸ from multi-variable calculus in disguise. In order to calculate with implicitly defined surfaces, you need some of the ingredients of parameterized surfaces. In particular: if $f(x, y, z) = 0$ describes a regular surface $S$ and $s \in S$, then $\nabla f(s)$ is orthogonal to the surface at $s$, and so the tangent plane to $S$ at $s$ is

\[ T_s S = \{ x \in \mathbb{E}^3 : (x - s) \cdot \nabla f(s) = 0 \} \]

Example The hyperboloid of one sheet defined implicitly by the equation $x^2 + y^2 - z^2 = 12$ has unit normal vector
\[ U(x, y, z) = \frac{1}{\sqrt{x^2 + y^2 + z^2}} \begin{pmatrix} x \\ y \\ -z \end{pmatrix} = \frac{1}{\sqrt{12 + 2z^2}} \begin{pmatrix} x \\ y \\ -z \end{pmatrix} \]
whenever $(x, y, z)$ is a point on the hyperboloid. Thus at the point $(3, 2, 1)$, the unit normal is $\frac{1}{\sqrt{14}}(3, 2, -1)$, and the tangent plane has equation
\[ 3x + 2y - z = 12 \]
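The gradient recipe for the tangent plane can be checked directly; a sympy sketch of ours at the point $(3, 2, 1)$:

```python
# Tangent plane to x^2 + y^2 - z^2 = 12 at s = (3, 2, 1) via the gradient.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 - z**2 - 12
s = {x: 3, y: 2, z: 1}

grad = sp.Matrix([f.diff(x), f.diff(y), f.diff(z)]).subs(s)   # (6, 4, -2)
U = grad / grad.norm()          # unit normal, proportional to (3, 2, -1)

# Plane equation (X - s) . grad = 0 reduces to 3x + 2y - z = 12:
# the constant term is s . grad / 2 = (18 + 8 - 2)/2 = 12.
constant = (3*3 + 2*2 - 1*1)
```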

Of course, the hyperboloid could have been parameterized: it is a surface of revolution (around the $z$-axis), so we could choose
\[ x(u, \theta) = \begin{pmatrix} \sqrt{12 + u^2}\cos\theta \\ \sqrt{12 + u^2}\sin\theta \\ u \end{pmatrix} \]

Then the differential is
\[ dx = \begin{pmatrix} u(12 + u^2)^{-1/2}\cos\theta \\ u(12 + u^2)^{-1/2}\sin\theta \\ 1 \end{pmatrix} du + \begin{pmatrix} -\sqrt{12 + u^2}\sin\theta \\ \sqrt{12 + u^2}\cos\theta \\ 0 \end{pmatrix} d\theta \]
with resulting normal field
\[ U = \frac{x_u \times x_\theta}{|x_u \times x_\theta|} = \frac{1}{\sqrt{12 + 2u^2}} \begin{pmatrix} -\sqrt{12 + u^2}\cos\theta \\ -\sqrt{12 + u^2}\sin\theta \\ u \end{pmatrix} \]

Notice that this is precisely the negative of what we obtained earlier, but no matter. Yet another expression could be obtained using our parameterization of the hyperboloid as a ruled surface above.

⁸If $f(x, y, z) = 0$ is the implicit surface and $\nabla f \neq 0$, then, at any given $P = (x_0, y_0, z_0)$, at least one of the partial derivatives of $f$ is non-zero: suppose WLOG that $\left. \frac{\partial f}{\partial z} \right|_P \neq 0$. The implicit function theorem says that there exists an open set $U \subseteq \mathbb{R}^2$ and a unique function $g : U \to \mathbb{R}$ such that $g(x_0, y_0) = z_0$ and $f(x, y, g(x, y)) = 0$. The surface may then (locally) be parameterized as a graph:

\[ x : U \to \mathbb{E}^3 : (u, v) \mapsto \begin{pmatrix} u \\ v \\ g(u, v) \end{pmatrix} \]

Aside: Global surfaces We have only rigorously defined a local parameterized surface. In general a surface $S$ is any subset of $\mathbb{E}^3$ which locally 'looks like' a parameterized surface. That is, for any point $s \in S$, there exists a subset $V \subseteq S$ containing $s$, an open subset $U$ of $\mathbb{R}^2$ and a smooth local surface $x : U \to V$. The upshot is that we may require more than one co-ordinate chart (subsets $x(U) \subseteq S$) to cover an entire surface. For example, the sphere described using spherical polar co-ordinates covers the whole sphere in a regular fashion, except for a semi-circle (or, pushing things with periodicity in $\theta$, everywhere except two points). To cover the entire sphere with co-ordinate charts we need to choose another point to be the center of a new chart. Indeed a famous topological result (the Hairy Ball Theorem) says that it is impossible to find global co-ordinates on the sphere: the best you can do is to have the entire sphere except for one point on a single chart (stereographic projection does the job, if you know what this is. . . ).

When we start thinking about surfaces globally (such as the entire sphere at once), we see that it is possible to construct non-orientable surfaces such as the Möbius strip. A Möbius strip is certainly a surface: near any point it looks exactly like a bit of $\mathbb{R}^2$, and so we can find local co-ordinates. However, if you try to look at the surface globally you find you cannot orient it. Consider walking around the strip once. An orientation at a point is a choice of unit normal vector, a choice of which way is 'up'. On circuiting the strip you find that 'up' is now 'down'. It is thus impossible to choose a global regular parameterization.

Change of co-ordinates Suppose that $y(s, t) = x(F(s, t))$ where $F(s, t) = (u, v)$ is a change of co-ordinates on the parameterization space $U$. The differential takes care of everything. By the chain rule, the differentials $dx$ and $dy$ may be thought of as related by a matrix multiplication:

\[ \begin{pmatrix} \frac{\partial y}{\partial s} \\[2pt] \frac{\partial y}{\partial t} \end{pmatrix} = \begin{pmatrix} \frac{\partial u}{\partial s} & \frac{\partial v}{\partial s} \\[2pt] \frac{\partial u}{\partial t} & \frac{\partial v}{\partial t} \end{pmatrix} \begin{pmatrix} \frac{\partial x}{\partial u} \\[2pt] \frac{\partial x}{\partial v} \end{pmatrix} \qquad (du\ \ dv) = (ds\ \ dt) \begin{pmatrix} \frac{\partial u}{\partial s} & \frac{\partial v}{\partial s} \\[2pt] \frac{\partial u}{\partial t} & \frac{\partial v}{\partial t} \end{pmatrix} \implies (ds\ \ dt) = (du\ \ dv) \begin{pmatrix} \frac{\partial u}{\partial s} & \frac{\partial v}{\partial s} \\[2pt] \frac{\partial u}{\partial t} & \frac{\partial v}{\partial t} \end{pmatrix}^{-1} \]
from which
\[ dy = (ds\ \ dt) \begin{pmatrix} \frac{\partial y}{\partial s} \\[2pt] \frac{\partial y}{\partial t} \end{pmatrix} = (du\ \ dv) \begin{pmatrix} \frac{\partial u}{\partial s} & \frac{\partial v}{\partial s} \\[2pt] \frac{\partial u}{\partial t} & \frac{\partial v}{\partial t} \end{pmatrix}^{-1} \begin{pmatrix} \frac{\partial u}{\partial s} & \frac{\partial v}{\partial s} \\[2pt] \frac{\partial u}{\partial t} & \frac{\partial v}{\partial t} \end{pmatrix} \begin{pmatrix} \frac{\partial x}{\partial u} \\[2pt] \frac{\partial x}{\partial v} \end{pmatrix} = (du\ \ dv) \begin{pmatrix} \frac{\partial x}{\partial u} \\[2pt] \frac{\partial x}{\partial v} \end{pmatrix} = dx \]
The matrix of partial derivatives is the Jacobian of the co-ordinate change, as you should have met in multi-variable calculus. Strictly speaking, $dx$ and $dy$ are not identical as they feed on tangent vectors with respect to different co-ordinates. We should properly write $dy = dx \circ dF$ as a composition of linear functions, where $dF$ maps tangent vectors in $\operatorname{Span}\{\frac{\partial}{\partial s}, \frac{\partial}{\partial t}\}$ to those in $\operatorname{Span}\{\frac{\partial}{\partial u}, \frac{\partial}{\partial v}\}$.

2.3 The Fundamental Forms Now that we have differentials, we can start using them to describe surfaces. The required objects are based on a little more algebra.

Definition 2.22. Given 1-forms $\alpha, \beta$ at $p \in \mathbb{R}^n$, define a symmetric bilinear form $\alpha\beta$ of two vectors $X, Y \in T_p\mathbb{R}^n$ by

\[ \alpha\beta(X, Y) = \frac{1}{2}\left( \alpha(X)\beta(Y) + \alpha(Y)\beta(X) \right) \]

Lemma 2.23. $\alpha\beta$ is indeed a symmetric bilinear form on each tangent space $T_p\mathbb{R}^n$.

Proof. $\alpha\beta(X, Y) = \alpha\beta(Y, X)$ is clear from the formula, hence the product is symmetric. Moreover, fixing $X$, we see that $\alpha\beta(X, -) : Y \mapsto \alpha\beta(X, Y)$ is linear since both $\alpha, \beta$ are linear on $T_p\mathbb{R}^n$. By symmetry we have bilinearity.

Not only is $\alpha\beta$ a symmetric bilinear form on each tangent space, it is symmetric itself: $\alpha\beta = \beta\alpha$. We write $\alpha^2 = \alpha\alpha$. Clearly

\[ \alpha^2(X, Y) = \alpha(X)\alpha(Y) \]

Examples

1. In $\mathbb{R}^2$, let $\alpha = x_1\, dx_1 - dx_2$, $\beta = x_1 x_2\, dx_2$, $X = \frac{\partial}{\partial x_1} - x_2 \frac{\partial}{\partial x_2}$, $Y = \frac{\partial}{\partial x_2}$. Then

\[ \alpha\beta(X, Y) = \frac{1}{2}\left( (x_1 + x_2)x_1 x_2 + x_1 x_2^2 \right) = \frac{1}{2}\left( x_1^2 x_2 + 2x_1 x_2^2 \right) \]

2. The dot product of tangent vectors in T_pRⁿ is given by⁹

\[ (ds)^2 = (dx_1)^2 + \cdots + (dx_n)^2 \]

I.e. if X = 3 ∂/∂x₁ − ∂/∂x₂ and Y = 2 ∂/∂x₁ + 5 ∂/∂x₂, then

\[ (ds)^2(X, Y) = dx_1(X)\,dx_1(Y) + dx_2(X)\,dx_2(Y) = 6 - 5 = 1 \]

That is, \(\begin{pmatrix} 3 \\ -1 \end{pmatrix} \cdot \begin{pmatrix} 2 \\ 5 \end{pmatrix} = 1\).
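Both symmetric-product computations above are easy to check with a computer algebra system. The following sketch uses sympy; the tuple encoding of 1-forms and tangent vectors is our own bookkeeping for this check, not notation from the notes.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Encode a 1-form by its coefficients against (dx1, dx2), and a tangent
# vector by its coefficients against (d/dx1, d/dx2).
def pair(form, vec):
    """Evaluate a 1-form on a tangent vector."""
    return form[0]*vec[0] + form[1]*vec[1]

def sym_prod(a, b, X, Y):
    """(ab)(X, Y) = (a(X) b(Y) + a(Y) b(X)) / 2 -- Definition 2.22."""
    return sp.Rational(1, 2)*(pair(a, X)*pair(b, Y) + pair(a, Y)*pair(b, X))

# Example 1: alpha = x1 dx1 - dx2, beta = x1 x2 dx2,
#            X = d/dx1 - x2 d/dx2, Y = d/dx2
alpha, beta = (x1, -1), (0, x1*x2)
X, Y = (1, -x2), (0, 1)
ab = sp.expand(sym_prod(alpha, beta, X, Y))
assert ab == sp.expand((x1**2*x2 + 2*x1*x2**2)/2)

# Example 2: (ds)^2 = (dx1)^2 + (dx2)^2 recovers the dot product
X, Y = (3, -1), (2, 5)
ds2 = sym_prod((1, 0), (1, 0), X, Y) + sym_prod((0, 1), (0, 1), X, Y)
assert ds2 == 1   # (3, -1) . (2, 5) = 6 - 5 = 1
print("symmetric products agree with the worked examples")
```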

If 1-forms α, β are vector-valued (e.g. α(X) ∈ E³) then we can still define symmetric products, but we must specify how the vectors are to be multiplied. For example

\[ (\alpha \cdot \beta)(X, Y) = \tfrac{1}{2}\big(\alpha(X)\cdot\beta(Y) + \alpha(Y)\cdot\beta(X)\big) \]

requires multiplication of vectors by the dot product, as in the following definitions.

⁹\(s = \sqrt{x_1^2 + \cdots + x_n^2}\) is the distance of a point from the origin.

Definition 2.24. Given a local parameterized surface x : U → E³, we define the first and second fundamental forms I and II by

\[ I = dx \cdot dx, \qquad II = -dx \cdot dU \]

Here dU is the differential of the unit normal field. The fundamental forms are symmetric bilinear forms on each tangent space T_pU.

By the discussion in the previous aside, if y(s, t) = x(u(s, t), v(s, t)), then dy · dy = dx · dx so that the first fundamental form is independent of co-ordinates. Similarly the second fundamental form is independent of co-ordinates, up to a change of sign if the orientation is reversed.

Before worrying about the meanings of the fundamental forms, we compute a few.

Examples

1. The sphere of radius a:

\[ dx = a\begin{pmatrix} \cos\phi\cos\theta \\ \cos\phi\sin\theta \\ -\sin\phi \end{pmatrix} d\phi + a\begin{pmatrix} -\sin\phi\sin\theta \\ \sin\phi\cos\theta \\ 0 \end{pmatrix} d\theta \]

In this case I = a² dφ² + a² sin²φ dθ². Computing II is very easy here, because the unit normal is simply the scaled position vector:

\[ U = \frac{1}{a}x \implies dU = \frac{1}{a}dx \implies II = -\frac{1}{a}I = -a\,d\phi^2 - a\sin^2\phi\,d\theta^2 \]

2. If x is a surface of revolution around the z-axis, recall that

\[ dx = \begin{pmatrix} f'(u)\cos\phi \\ f'(u)\sin\phi \\ 1 \end{pmatrix} du + f(u)\begin{pmatrix} -\sin\phi \\ \cos\phi \\ 0 \end{pmatrix} d\phi \]

Thus I = (1 + f′(u)²) du² + f(u)² dφ². The unit normal vector is

\[ U = \frac{x_u \times x_\phi}{|x_u \times x_\phi|} = \frac{1}{\sqrt{f'(u)^2 + 1}}\begin{pmatrix} -\cos\phi \\ -\sin\phi \\ f'(u) \end{pmatrix} \]

and so

\[ II = -dx \cdot dU = -\left[\begin{pmatrix} f'(u)\cos\phi \\ f'(u)\sin\phi \\ 1 \end{pmatrix} du + f(u)\begin{pmatrix} -\sin\phi \\ \cos\phi \\ 0 \end{pmatrix} d\phi\right] \cdot \frac{1}{\sqrt{f'(u)^2 + 1}}\left[\begin{pmatrix} 0 \\ 0 \\ f''(u) \end{pmatrix} du + \begin{pmatrix} \sin\phi \\ -\cos\phi \\ 0 \end{pmatrix} d\phi\right] \]

(the component of dU parallel to U, arising from differentiating the normalizing factor, pairs to zero against the tangential dx and has been dropped), whence

\[ II = \frac{1}{\sqrt{f'(u)^2 + 1}}\big(-f''(u)\,du^2 + f(u)\,d\phi^2\big) \]
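The fundamental forms of a surface of revolution can be double-checked symbolically. The sketch below uses sympy with a concrete profile curve f(u) = 2 + sin u (our choice, so that f > 0), and compares the coefficients of I and II against the closed forms just derived; the comparison is done numerically at a sample point to sidestep square-root simplification.

```python
import sympy as sp

u, phi = sp.symbols('u phi', real=True)
f = 2 + sp.sin(u)                      # concrete positive profile, f(u) > 0

# Surface of revolution x(u, phi) = (f(u) cos phi, f(u) sin phi, u)
x = sp.Matrix([f*sp.cos(phi), f*sp.sin(phi), u])
xu, xphi = x.diff(u), x.diff(phi)

# Unit normal from the cross product
N = xu.cross(xphi)
U = N / sp.sqrt(N.dot(N))

# I = E du^2 + 2F du dphi + G dphi^2
E = xu.dot(xu)
F = xu.dot(xphi)
G = xphi.dot(xphi)

# II = l du^2 + 2m du dphi + n dphi^2, via l = x_uu . U etc.
l = x.diff(u, 2).dot(U)
m = x.diff(u).diff(phi).dot(U)
n = x.diff(phi, 2).dot(U)

fp, fpp = f.diff(u), f.diff(u, 2)
pt = {u: 0.7, phi: 1.2}
num = lambda e: float(sp.N(e.subs(pt)))

# Compare against the closed forms in the text
assert abs(num(E) - num(1 + fp**2)) < 1e-9
assert abs(num(F)) < 1e-9
assert abs(num(G) - num(f**2)) < 1e-9
assert abs(num(l) - num(-fpp / sp.sqrt(1 + fp**2))) < 1e-9
assert abs(num(m)) < 1e-9
assert abs(num(n) - num(f / sp.sqrt(1 + fp**2))) < 1e-9
print("I and II match the formulas in the notes")
```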

The first fundamental form I transfers the dot product from E³ to U: if dx(z₁) = x₁ and dx(z₂) = x₂, then I(z₁, z₂) = x₁ · x₂. This is in the spirit of transferring calculations from E³ back to U. The first fundamental form encodes 'infinitesimal distances' on a surface. For example, the distance s measured between two very close points on the surface of the sphere of radius a is given by

\[ s^2 \approx a^2(\Delta\phi)^2 + a^2\sin^2\phi\,(\Delta\theta)^2 \]

where Δφ, Δθ are the small changes in the co-ordinates. If all you want to do on a surface is to measure distances, then all you need to know is the first fundamental form: you don't even need to know the surface! The second fundamental form measures how the unit normal vector of a surface is changing in comparison to the surface itself. Since U has unit length, dU|_p(X) is orthogonal to U for any X ∈ T_pU and is thus a vector in the tangent space to the surface at x(p). The second fundamental form II = −dx · dU thus measures how the unit normal changes relative to the surface.

Curves in surfaces One of the best ways of understanding what the fundamental forms I, II are measuring is to consider curves lying in a surface. Let x : U → E³ be a local surface. If z(t) = (u(t), v(t)) is a curve in U then the composition t ↦ x(z(t)) describes a curve in E³ lying in the surface. It is often useful to think of z(t) as being a curve in the surface, even though it is only a curve in U. We will abuse notation by thinking of x′ = d/dt x(z(t)) as the velocity vector of the curve in E³. In a similar way, define U′ = d/dt U(z(t)).

Example Consider the curve z(t) = (φ(t), θ(t)) = (2t, 5t) on the surface of the unit sphere. In this case,

\[ x(t) = \begin{pmatrix} \sin 2t\cos 5t \\ \sin 2t\sin 5t \\ \cos 2t \end{pmatrix} \]

Lemma 2.25. Considering the fact that z′(t) = u′(t) ∂/∂u + v′(t) ∂/∂v, we have

\[ x' = dx(z'), \qquad U' = dU(z') \]

Proof. \(\dfrac{d}{dt}x(z(t)) = x_u u' + x_v v' = dx\Big(u'(t)\dfrac{\partial}{\partial u} + v'(t)\dfrac{\partial}{\partial v}\Big) = dx(z')\)

If x(z(t)) is a curve in the surface x, then the speed of the curve is

\[ v(t) = \sqrt{x' \cdot x'} = \sqrt{dx(z') \cdot dx(z')} = \sqrt{I(z', z')} \]

We’ve therefore proved the following:

Theorem 2.26. Let x(z(t)) be a curve in a surface x : U → E³ with first fundamental form I. Then the arc-length of x(z(t)) between x(z(a)) and x(z(b)) is given by

\[ \int_a^b \sqrt{I(z', z')}\,dt \]
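As a quick numerical sanity check of Theorem 2.26, we can compare √I(z′, z′) against the directly computed speed |x′(t)| for the spherical curve z(t) = (2t, 5t) above. Note that since φ = 2t, the integrand works out to √(4 + 25 sin² 2t). A minimal sketch using only the standard library:

```python
import math

# Curve z(t) = (phi, theta) = (2t, 5t) on the unit sphere:
# x(t) = (sin 2t cos 5t, sin 2t sin 5t, cos 2t)
def x(s):
    return (math.sin(2*s)*math.cos(5*s),
            math.sin(2*s)*math.sin(5*s),
            math.cos(2*s))

def speed_direct(t, h=1e-6):
    # central-difference velocity of the curve in E^3
    a, b = x(t - h), x(t + h)
    xdot = [(q - p) / (2*h) for p, q in zip(a, b)]
    return math.sqrt(sum(c*c for c in xdot))

def speed_from_I(t):
    # I(z', z') = (phi')^2 + sin^2(phi) (theta')^2 = 4 + 25 sin^2(2t)
    return math.sqrt(4 + 25*math.sin(2*t)**2)

for t in (0.3, 0.9, 1.7):
    assert abs(speed_direct(t) - speed_from_I(t)) < 1e-5
print("speed from I matches |x'(t)|")
```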

Examples

1. Consider the unit sphere with the curve given above, z(t) = (2t, 5t). Here z′(t) = 2 ∂/∂φ + 5 ∂/∂θ. Recalling that the first fundamental form of the unit sphere is I = dφ² + sin²φ dθ², we have

\[ I(z', z') = 4 + 25\sin^2 2t \]

The arc-length of the curve over a ≤ t ≤ b is therefore given by¹⁰

\[ \int_a^b \sqrt{4 + 25\sin^2 2t}\,dt \]

Generally, for a curve z(t) = (φ(t), θ(t)) on the unit sphere, the arc-length between t = a and t = b is given by

\[ \int_a^b \sqrt{(\phi')^2 + \sin^2\phi\,(\theta')^2}\,dt \]

2. Suppose that a surface satisfying y > 0 has first fundamental form I = (dx² + dy²)/y². Find the arc-length over the circular path z(t) = (cos t, sin t) between t₀ and t₁.

Here z′(t) = −sin t ∂/∂x + cos t ∂/∂y so that I(z′, z′) = 1/sin²t. Thus the arc-length is

\[ \int_{t_0}^{t_1} \frac{1}{\sin t}\,dt = \left[-\ln\frac{1 + \cos t}{\sin t}\right]_{t_0}^{t_1} = \ln\frac{(1 + \cos t_0)\sin t_1}{\sin t_0\,(1 + \cos t_1)} \]

Notice that as t₀ → 0⁺ or as t₁ → π⁻ the arc-length becomes infinite.¹¹

The examples show the advantages of our approach:

• If you know I, then you can change the curve lying in a given surface and still compute the arc-length with minimal effort. Before this, you would have had to go back to the start: compute d/dt x(z(t)) again, then its length, and so on. Since applications often involve studying all curves simultaneously, this approach is essential for being able to quickly compare the outcomes.

• You don’t need to know the surface explicitly, all you need is I. Example 2 is a description of the Poincare half-plane model of hyperbolic space: it is simply a domain U (here the upper half-plane) with an abstract first fundamental form. There is no surface x : U → E3 and so no I to worry about.

Interpreting II Suppose you were skateboarding in a bowl: how much effort are your legs going to have to make in order to keep you standing up? The second fundamental form along a curve measures exactly this.

Theorem 2.27. For a curve x(z(t)) lying in a surface x with unit normal U and second fundamental form II,

\[ U \cdot \frac{d^2}{dt^2}x(z(t)) = II(z', z') \]

Proof. Write x′ = d/dt x(z(t)) as before. Since the tangent vector to the curve is always perpendicular to the unit normal, we have

\[ 0 = \frac{d}{dt}(x' \cdot U) = x'' \cdot U + x' \cdot U' = x'' \cdot U + dx(z') \cdot dU(z') = x'' \cdot U - II(z', z') \]

¹⁰Don't try to compute this elliptic integral explicitly!
¹¹You may see this example again as hyperbolic space.

Theorem 2.27 says that the second fundamental form along a curve measures the acceleration in the normal direction to the surface that is required in order to keep the curve lying in the surface.

I, II in co-ordinates Suppose that u, v are co-ordinates on U ⊆ R². We can write the fundamental forms of a surface x in terms of easily calculated derivatives.

Proposition 2.28. Define functions E, F, G and l, m, n on x : U → E³ by

\[ E = x_u \cdot x_u, \quad F = x_u \cdot x_v, \quad G = x_v \cdot x_v \]
\[ l = x_{uu} \cdot U = -x_u \cdot U_u, \quad m = x_{uv} \cdot U = -x_u \cdot U_v, \quad n = x_{vv} \cdot U = -x_v \cdot U_v \]

Then

\[ I = E\,du^2 + 2F\,du\,dv + G\,dv^2 \qquad\text{and}\qquad II = l\,du^2 + 2m\,du\,dv + n\,dv^2 \]

These should be clear: I is just the formula dx · dx, while the expressions for II come from differentiating x_u · U = 0 = x_v · U.

Examples

1. Consider the graph of a function z = f(x, y). Parameterizing in the usual way gives us

\[ x_u = \begin{pmatrix} 1 \\ 0 \\ f_u \end{pmatrix}, \quad x_v = \begin{pmatrix} 0 \\ 1 \\ f_v \end{pmatrix}, \quad U = \frac{1}{\sqrt{1 + f_u^2 + f_v^2}}\begin{pmatrix} -f_u \\ -f_v \\ 1 \end{pmatrix}, \]
\[ E = 1 + f_u^2, \quad F = f_uf_v, \quad G = 1 + f_v^2, \]
\[ l = \frac{f_{uu}}{\sqrt{1 + f_u^2 + f_v^2}}, \quad m = \frac{f_{uv}}{\sqrt{1 + f_u^2 + f_v^2}}, \quad n = \frac{f_{vv}}{\sqrt{1 + f_u^2 + f_v^2}} \]

Combining these gives us

\[ I = (1 + f_u^2)\,du^2 + 2f_uf_v\,du\,dv + (1 + f_v^2)\,dv^2, \]
\[ II = \frac{1}{\sqrt{1 + f_u^2 + f_v^2}}\big\{f_{uu}\,du^2 + 2f_{uv}\,du\,dv + f_{vv}\,dv^2\big\} \]

2. A skater follows a path parameterized by z(t) = (r(t), θ(t)) = (1 − t, 4t²) in the paraboloidal bowl z = ½r². Compute:

(a) The fundamental forms of the surface.

(b) The speed of the skater at time t.

(c) The normal acceleration experienced by the skater at time t.

The surface can be parameterized

\[ x(r, \theta) = \begin{pmatrix} r\cos\theta \\ r\sin\theta \\ \frac{1}{2}r^2 \end{pmatrix} \]

which yields

\[ x_r = \begin{pmatrix} \cos\theta \\ \sin\theta \\ r \end{pmatrix}, \quad x_\theta = \begin{pmatrix} -r\sin\theta \\ r\cos\theta \\ 0 \end{pmatrix}, \quad U = \frac{1}{\sqrt{1 + r^2}}\begin{pmatrix} -r\cos\theta \\ -r\sin\theta \\ 1 \end{pmatrix}, \]
\[ E = 1 + r^2, \quad F = 0, \quad G = r^2, \qquad l = \frac{1}{\sqrt{1 + r^2}}, \quad m = 0, \quad n = \frac{r^2}{\sqrt{1 + r^2}}, \]
\[ I = (1 + r^2)\,dr^2 + r^2\,d\theta^2, \qquad II = \frac{1}{\sqrt{1 + r^2}}\big(dr^2 + r^2\,d\theta^2\big) \]

For the skater's path, we have z′(t) = −∂/∂r + 8t ∂/∂θ. Therefore

\[ I(z', z') = \big(1 + (1 - t)^2\big) + 64t^2(1 - t)^2 \]

The skater's speed is therefore

\[ v(t) = \sqrt{I(z', z')} = \sqrt{1 + (64t^2 + 1)(1 - t)^2} \]

The second fundamental form along the path is

\[ II(z', z') = \frac{1}{\sqrt{1 + (1 - t)^2}}\big(1 + 64t^2(1 - t)^2\big) \]

Theorem 2.27 says that this is the normal acceleration of the skater and thus proportional (via Newton’s second law F = ma) to the force experienced by the skater.
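The skater computation is a good one to automate. The following sympy sketch recomputes E, F, G, l, m, n for the bowl and checks the speed along the path; squares are compared where square roots would otherwise need careful simplification.

```python
import sympy as sp

r, th, t = sp.symbols('r theta t', positive=True)

# Paraboloidal bowl x(r, theta) = (r cos theta, r sin theta, r^2/2)
x = sp.Matrix([r*sp.cos(th), r*sp.sin(th), r**2/2])
xr, xth = x.diff(r), x.diff(th)

E = sp.simplify(xr.dot(xr))
F = sp.simplify(xr.dot(xth))
G = sp.simplify(xth.dot(xth))
assert E == 1 + r**2 and F == 0 and G == r**2

N = xr.cross(xth)                 # normal direction x_r x x_theta
U = N / sp.sqrt(N.dot(N))         # unit normal
l = x.diff(r, 2).dot(U)
m = x.diff(r).diff(th).dot(U)
n = x.diff(th, 2).dot(U)
# compare squares to sidestep sqrt simplification
assert sp.simplify(l**2 * (1 + r**2) - 1) == 0
assert sp.simplify(m) == 0
assert sp.simplify(n**2 * (1 + r**2) - r**4) == 0

# Skater's path z(t) = (1 - t, 4 t^2): speed^2 = I(z', z')
rp, thp = sp.diff(1 - t, t), sp.diff(4*t**2, t)
speed2 = ((1 + r**2)*rp**2 + r**2*thp**2).subs(r, 1 - t)
assert sp.expand(speed2 - (1 + (64*t**2 + 1)*(1 - t)**2)) == 0
print("fundamental forms and speed agree with the worked example")
```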

2.4 Principal Curvatures

The idea of the next section is to think about finding directions on a surface with special properties: this amounts to finding co-ordinates with respect to which the fundamental forms look particularly simple. The most obvious way to make the forms simple is to insist that there be no du dv term.

Definition 2.29. Co-ordinates u, v on U are orthogonal for a surface x : U → E³ if the first fundamental form is diagonal: that is,

\[ I = E\,du^2 + G\,dv^2 \]

where there is no du dv term. Otherwise said, x_u · x_v = 0, so the tangent vectors generated by the co-ordinate vector fields ∂/∂u, ∂/∂v are always orthogonal. In addition, u, v are curvature-line co-ordinates if the second fundamental form is also diagonal:

\[ II = l\,du^2 + n\,dv^2 \]

While the meaning of orthogonal is clear, to understand why we use the term curvature-line will need a little work.

Examples

1. Recall the sphere in spherical polar co-ordinates: since both I and II are diagonal, these are curvature-line co-ordinates.¹²

2. The graph of a function z = f (x, y) has its usual parameterization (u, v) = (x, y) in orthogonal co-ordinates if and only if f (x, y) is a function only of one of the variables ( fx = 0 or fy = 0). In this case the co-ordinates are also curvature-line. The surface is necessarily ruled, with a straight line in the surface parallel to either the x or the y axis.

3. A surface of revolution, parameterized using (u, φ) where u is the distance along the axis of rotation and φ the angle around the axis, has

\[ I = (1 + f'(u)^2)\,du^2 + f(u)^2\,d\phi^2, \qquad II = \frac{1}{\sqrt{1 + f'(u)^2}}\big(-f''(u)\,du^2 + f(u)\,d\phi^2\big) \]

These are curvature-line co-ordinates.

Finding Curvature Directions: A Little Linear Algebra Finding curvature-line co-ordinates is difficult in general. It starts with finding curvature directions at a point, which amounts to finding directions with respect to which both I and II are diagonal. First we need some background linear algebra to extend the idea of eigenvalues and eigenvectors.

Definition 2.30. Let A, B be two n × n matrices. The eigenvalues of B with respect to A are the roots of the polynomial equation

\[ \det(B - \lambda A) = 0 \]

A vector v is an eigenvector of B with respect to A, with eigenvalue λ, if (B − λA)v = 0.

Theorem 2.31. Suppose that A is an n × n symmetric, positive definite¹³ matrix and that B is n × n symmetric. Then the eigenvalues of B with respect to A are real, and there exists a basis v₁, ..., vₙ of Rⁿ such that each vₖ is an eigenvector of B with respect to A.

You should skip the proof if you haven't studied the spectral theorem in linear algebra.

Proof. The spectral theorem states that there exists an orthogonal basis x₁, ..., xₙ of Rⁿ of eigenvectors of A. In particular, since A is positive definite, all of the eigenvalues are positive and we may choose x₁, ..., xₙ to be orthonormal with respect to A: i.e.

\[ x_i^TAx_j = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases} \]

With respect to the basis x₁, ..., xₙ the matrix of A is the identity matrix. Now let X = (x₁, ..., xₙ) be the matrix with column vectors x₁, ..., xₙ. Then XᵀAX = I, whence

\[ A = (X^T)^{-1}X^{-1} \]

12This is something of a cheat: every pair of orthogonal co-ordinates on the sphere is curvature-line (exercise). 13vT Av ≥ 0 for all v, with equality if and only if v = 0. This says that (v, w) := vT Aw is an inner product.

It follows that

\[ \det(B - \lambda A) = \det\big((X^T)^{-1}(X^TBX - \lambda I)X^{-1}\big) = 0 \iff \det(X^TBX - \lambda I) = 0 \]

XᵀBX is a symmetric matrix and so has real eigenvalues λ₁, ..., λₙ and n orthogonal eigenvectors w₁, ..., wₙ. Define vₖ = Xwₖ: these form a basis of Rⁿ such that vₖ is an eigenvector of B with respect to A with eigenvalue λₖ.

Example Let \(A = \begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix}\) and \(B = \begin{pmatrix} 0 & 1 \\ 1 & 3 \end{pmatrix}\). The eigenvalues of A are \(\frac{1}{2}(7 \pm \sqrt{45})\), both positive, so that A is positive definite. Now

\[ \det(B - \lambda A) = \begin{vmatrix} -2\lambda & 1 - 3\lambda \\ 1 - 3\lambda & 3 - 5\lambda \end{vmatrix} = \lambda^2 - 1 = 0 \iff \lambda = \pm 1 \]

The eigenvalues of B with respect to A are thus ±1. When λ = 1 we have

\[ (B - A)v_1 = \begin{pmatrix} -2 & -2 \\ -2 & -2 \end{pmatrix}v_1 = 0 \iff v_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \]

When λ = −1 we obtain \(v_2 = \begin{pmatrix} 2 \\ -1 \end{pmatrix}\).

The eigenvectors of B with respect to A are \(\begin{pmatrix} 1 \\ -1 \end{pmatrix}\) and \(\begin{pmatrix} 2 \\ -1 \end{pmatrix}\), with eigenvalues 1 and −1 respectively.
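Generalized eigenvalue computations like this are easy to check numerically. Since det(B − λA) = det(A) det(A⁻¹B − λI), the eigenvalues of B with respect to A are the ordinary eigenvalues of A⁻¹B; a minimal numpy sketch:

```python
import numpy as np

A = np.array([[2., 3.], [3., 5.]])   # symmetric, positive definite
B = np.array([[0., 1.], [1., 3.]])   # symmetric

# A is positive definite: its eigenvalues (7 +/- sqrt(45))/2 are positive
assert np.all(np.linalg.eigvalsh(A) > 0)

# Eigenvalues of B with respect to A = ordinary eigenvalues of A^{-1} B
lams = np.sort(np.linalg.eigvals(np.linalg.solve(A, B)).real)
assert np.allclose(lams, [-1.0, 1.0])

# Check the eigenvectors found in the text: (B - lam A) v = 0
v1, v2 = np.array([1., -1.]), np.array([2., -1.])
assert np.allclose((B - 1*A) @ v1, 0)
assert np.allclose((B + 1*A) @ v2, 0)
print("generalized eigenvalues/eigenvectors confirmed")
```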

Diagonalizing the Fundamental Forms Suppose we have co-ordinates u, v on U. Then for each p ∈ U we have a basis \(\frac{\partial}{\partial u}\big|_p, \frac{\partial}{\partial v}\big|_p\), with respect to which I, II are symmetric bilinear forms with matrices

\[ \begin{pmatrix} E & F \\ F & G \end{pmatrix}, \qquad \begin{pmatrix} l & m \\ m & n \end{pmatrix} \]

respectively. Think about what this means: if \(Y = y_1\frac{\partial}{\partial u}\big|_p + y_2\frac{\partial}{\partial v}\big|_p\) and \(Z = z_1\frac{\partial}{\partial u}\big|_p + z_2\frac{\partial}{\partial v}\big|_p\) are tangent vectors at p, then

\[ I(Y, Z) = (y_1\ y_2)\begin{pmatrix} E & F \\ F & G \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}, \qquad II(Y, Z) = (y_1\ y_2)\begin{pmatrix} l & m \\ m & n \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \]

Indeed the matrix of I is positive definite, since

\[ I(Y, Y) = dx(Y) \cdot dx(Y) = |dx(Y)|^2 \geq 0 \]

with equality if and only if Y = 0, by the regularity of x.

According to Theorem 2.31, there is a basis

\[ X_1|_p = X_1^{(1)}\frac{\partial}{\partial u}\Big|_p + X_1^{(2)}\frac{\partial}{\partial v}\Big|_p, \qquad X_2|_p = X_2^{(1)}\frac{\partial}{\partial u}\Big|_p + X_2^{(2)}\frac{\partial}{\partial v}\Big|_p \]

of T_pU and scalars k₁, k₂ such that Xᵢ|_p is an eigenvector of II with respect to I with eigenvalue kᵢ. Otherwise said,

\[ \left\{\begin{pmatrix} l & m \\ m & n \end{pmatrix} - k_i\begin{pmatrix} E & F \\ F & G \end{pmatrix}\right\}\begin{pmatrix} X_i^{(1)} \\ X_i^{(2)} \end{pmatrix} = 0, \qquad i = 1, 2 \]

Since the fundamental forms depend smoothly on the position p ∈ U, there is nothing stopping us from considering k₁, k₂ as smooth functions U → R and X₁, X₂ as vector fields on U.

Definition 2.32. If x : U → E³ is a surface, the functions k₁, k₂ : U → R are the principal curvatures of x. The vector fields X₁, X₂ are the principal curvature directions. A point x(p) is umbilic if k₁(p) = k₂(p). At an umbilic point, all tangent directions are principal curvature directions.

Corollary 2.33. Suppose that X₁, X₂ are the principal curvature directions on x. Then, at any non-umbilic point, I(X₁, X₂) = 0 = II(X₁, X₂).

Proof. Suppose that x(p) is non-umbilic and fix a basis of T_pU so that the matrices of I, II are the symmetric matrices A, B of Theorem 2.31. Viewing \(X_1 = \begin{pmatrix} X_1^{(1)} \\ X_1^{(2)} \end{pmatrix}\) and \(X_2 = \begin{pmatrix} X_2^{(1)} \\ X_2^{(2)} \end{pmatrix}\) as column vectors with respect to this basis, we see that BXᵢ = kᵢAXᵢ for each i. It is then straightforward to compute:

\[ II(X_1, X_2) = X_1^TBX_2 = X_1^T(k_2AX_2) = k_2\,I(X_1, X_2) \]
\[ II(X_2, X_1) = X_2^TBX_1 = X_2^T(k_1AX_1) = k_1\,I(X_1, X_2) \]

Since k₁ ≠ k₂ it follows that I(X₁, X₂) = 0 = II(X₁, X₂).

Examples

1. The sphere of radius a has fundamental forms related by II = −(1/a)I. It follows that k₁(p) = k₂(p) = −1/a at all points: every point on the sphere is umbilic!

2. Suppose that a surface¹⁴ has fundamental forms

\[ I = u^2\,du^2 + v^2\,dv^2, \qquad II = u^2\,du^2 + 2uv\,du\,dv + v^2\,dv^2 \]

Then the principal curvatures are the solutions λ of the equation

\[ \det\left\{\begin{pmatrix} u^2 & uv \\ uv & v^2 \end{pmatrix} - \lambda\begin{pmatrix} u^2 & 0 \\ 0 & v^2 \end{pmatrix}\right\} = 0 \]

Multiplying this out, we obtain

\[ u^2v^2(1 - \lambda)^2 - u^2v^2 = 0 \iff (1 - \lambda)^2 = 1 \iff \lambda = 0, 2 \]

whence the principal curvatures are k1 = 0, k2 = 2. We can also compute the eigenvectors:

¹⁴For example \(x(u, v) = \frac{1}{2}\begin{pmatrix} \cos\frac{u^2 + v^2}{\sqrt{2}} \\[3pt] \sin\frac{u^2 + v^2}{\sqrt{2}} \\[3pt] \frac{u^2 - v^2}{\sqrt{2}} \end{pmatrix}\), though it is unimportant.

\[ k_1 = 0: \quad \begin{pmatrix} u^2 & uv \\ uv & v^2 \end{pmatrix}\begin{pmatrix} X_1^{(1)} \\ X_1^{(2)} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies \begin{pmatrix} X_1^{(1)} \\ X_1^{(2)} \end{pmatrix} = \begin{pmatrix} v \\ -u \end{pmatrix} \]
\[ k_2 = 2: \quad \begin{pmatrix} -u^2 & uv \\ uv & -v^2 \end{pmatrix}\begin{pmatrix} X_2^{(1)} \\ X_2^{(2)} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies \begin{pmatrix} X_2^{(1)} \\ X_2^{(2)} \end{pmatrix} = \begin{pmatrix} v \\ u \end{pmatrix} \]

We therefore have curvature directions

\[ X_1 = v\frac{\partial}{\partial u} - u\frac{\partial}{\partial v}, \qquad X_2 = v\frac{\partial}{\partial u} + u\frac{\partial}{\partial v} \]

You can easily check that I(X₁, X₁) = 2u²v², etc., and that the matrices of I and II with respect to this basis are

\[ [I] = 2u^2v^2\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad [II] = 2u^2v^2\begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix} \]

in which the principal curvatures k₁ = 0 and k₂ = 2 are clearly visible. In fact we can get more than just curvature directions: we can always locally assume the existence of curvature-line co-ordinates.

Theorem 2.34 (Frobenius: easy version). There exist curvature-line co-ordinates on a neighborhood of any non-umbilic point.

A more thorough discussion is given below in the aside. Two things are important for us:

• We can only guarantee the existence of curvature-line co-ordinates locally.

• If s₁, s₂ are curvature-line co-ordinates for x, and X₁, X₂ are curvature directions, then X₁ = f₁ ∂/∂s₁ and X₂ = f₂ ∂/∂s₂ for some functions f₁, f₂. That is, the vector fields ∂/∂sᵢ and Xᵢ are parallel but not necessarily equal!

Example, mark II In the above example, s = u² − v² and t = u² + v² are curvature-line co-ordinates (constant rescalings of s, t are harmless and work equally well). You should check the following.

• The co-ordinate vector fields are

\[ \frac{\partial}{\partial s} = \frac{1}{4u}\frac{\partial}{\partial u} - \frac{1}{4v}\frac{\partial}{\partial v} = \frac{1}{4uv}X_1 \qquad\text{and}\qquad \frac{\partial}{\partial t} = \frac{1}{4u}\frac{\partial}{\partial u} + \frac{1}{4v}\frac{\partial}{\partial v} = \frac{1}{4uv}X_2 \]

which are parallel but not equal to X₁, X₂.

• The fundamental forms really are diagonalized by s, t:

\[ I = \tfrac{1}{8}\big(ds^2 + dt^2\big), \qquad II = \tfrac{1}{4}dt^2 \]
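The linear algebra in this running example can be verified symbolically; a sympy sketch of the eigenvalue and eigenvector computations:

```python
import sympy as sp

u, v, lam = sp.symbols('u v lambda')

# Matrices of I and II from the example
A = sp.Matrix([[u**2, 0], [0, v**2]])        # I  = u^2 du^2 + v^2 dv^2
B = sp.Matrix([[u**2, u*v], [u*v, v**2]])    # II = u^2 du^2 + 2uv du dv + v^2 dv^2

# Principal curvatures: roots of det(B - lam A) = 0
poly = sp.factor((B - lam*A).det())
roots = sp.solve(sp.Eq(poly, 0), lam)
assert sorted(roots) == [0, 2]

# Curvature directions X1 = (v, -u), X2 = (v, u) satisfy (B - k A) X = 0
X1, X2 = sp.Matrix([v, -u]), sp.Matrix([v, u])
assert (B * X1).expand() == sp.zeros(2, 1)              # k1 = 0
assert ((B - 2*A) * X2).expand() == sp.zeros(2, 1)      # k2 = 2

# The forms are simultaneously diagonalized in the basis X1, X2
assert sp.expand(X1.dot(A*X2)) == 0 and sp.expand(X1.dot(B*X2)) == 0
print("principal curvatures 0, 2 and directions confirmed")
```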

We finish by considering totally umbilic surfaces: the proof is left to the homework.

Theorem 2.35. Suppose that all points of x : U → E3 are umbilic. Then x is part of a plane or a sphere.

27 Aside: The Frobenius Theorem and Curvature-line co-ordinates

As seen above, we can find curvature directions X₁, X₂ using linear algebra. What we'd really like are curvature co-ordinates: functions s, t : U → R such that ∂/∂s = X₁ and ∂/∂t = X₂. Unfortunately this is usually too much to ask! A necessary condition is immediate: since mixed partial derivatives should be equal, we'd need

\[ X_1X_2 = \frac{\partial^2}{\partial s\,\partial t} = \frac{\partial^2}{\partial t\,\partial s} = X_2X_1 \]

That this condition is locally sufficient, and the consequent existence of curvature co-ordinates, is given by the following result, which provides a deep link between differential geometry and the theory of partial differential equations.

Theorem 2.36 (Frobenius: hard version).

1. Suppose that X, Y are linearly independent vector fields on an open set U containing p.

(a) If the Lie bracket [X, Y] = XY − YX of the vector fields is zero, then there exists a neighborhood V ⊆ U of p and functions s, t : V → R such that

\[ X = \frac{\partial}{\partial s}, \qquad Y = \frac{\partial}{\partial t} \]

(b) There exists a neighborhood W ⊆ U of p and functions f, g : W → R such that [fX, gY] = 0 on W. Part (a) then says there exist V ⊆ W and co-ordinates s, t : V → R such that ∂/∂s = fX and ∂/∂t = gY are parallel to X, Y.

2. If x(p) is a non-umbilic point of a surface x : U → E³, then there exist curvature-line co-ordinates for x on some neighborhood of p. Indeed if X, Y are curvature directions such that [X, Y] = 0, let α, β be the dual 1-forms to X, Y; then (locally) α = ds and β = dt.

The proof is a little too technical for this course: at its heart is the usual existence and uniqueness theorem for differential equations and a bit more technology on forms than we've yet developed. It is moreover completely generalizable to arbitrary dimensions.

Example, mark III Let’s revisit our example again! Frobenius’ Theorem actually tells us how to find curvature co-ordinates. We give a sketch, though in practice this involves invoking the existence of solutions to certain differential equations!

1. Hunt for functions f, g such that the vector fields X̃₁ = fX₁, X̃₂ = gX₂ satisfy the Lie bracket condition

\[ [\widetilde{X}_1, \widetilde{X}_2] = 0 \]

Multiply this out (it's messy) to see that it is equivalent to being able to solve

\[ \begin{cases} v\dfrac{\partial f}{\partial u} + u\dfrac{\partial f}{\partial v} = -f\,\dfrac{u^2 + v^2}{uv} \\[8pt] v\dfrac{\partial g}{\partial u} - u\dfrac{\partial g}{\partial v} = g\,\dfrac{u^2 - v^2}{uv} \end{cases} \tag{$*$} \]

That this system can be solved is essentially the theorem. Indeed you can check that f = g = 1/(uv) does the job!

2. The co-ordinate direction vector fields

\[ \widetilde{X}_1 = \frac{1}{u}\frac{\partial}{\partial u} - \frac{1}{v}\frac{\partial}{\partial v} \qquad\text{and}\qquad \widetilde{X}_2 = \frac{1}{u}\frac{\partial}{\partial u} + \frac{1}{v}\frac{\partial}{\partial v} \]

satisfy the Lie bracket condition. The dual basis 1-forms to these fields are easy to find using linear algebra:

\[ \alpha = \frac{1}{2}(u\,du - v\,dv) \qquad\text{and}\qquad \beta = \frac{1}{2}(u\,du + v\,dv) \]

3. Observe that α = ¼ d(u² − v²) and β = ¼ d(u² + v²), so that s = u² − v² and t = u² + v² (rescaled by the harmless constant factor 4) give curvature-line co-ordinates s and t.
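The bracket computations in steps 1 and 2 can be verified symbolically. The following sympy sketch encodes a vector field on U by its pair of coefficient functions and checks both that [X₁, X₂] = aX₁ + bX₂ with the coefficients stated below, and that the rescaling f = g = 1/(uv) kills the bracket:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)

def bracket(X, Y):
    """Lie bracket of vector fields X = (X^u, X^v), Y = (Y^u, Y^v) on U."""
    d = lambda F, W: W[0]*sp.diff(F, u) + W[1]*sp.diff(F, v)  # W acting on F
    return (d(Y[0], X) - d(X[0], Y), d(Y[1], X) - d(X[1], Y))

X1 = (v, -u)          # X1 = v d/du - u d/dv
X2 = (v,  u)          # X2 = v d/du + u d/dv

# [X1, X2] = a X1 + b X2 with a = -(u^2+v^2)/(uv), b = (v^2-u^2)/(uv)
br = bracket(X1, X2)
a, b = -(u**2 + v**2)/(u*v), (v**2 - u**2)/(u*v)
assert sp.simplify(br[0] - (a*X1[0] + b*X2[0])) == 0
assert sp.simplify(br[1] - (a*X1[1] + b*X2[1])) == 0

# Rescaling by f = g = 1/(uv) makes the bracket vanish
f = g = 1/(u*v)
Y1 = (f*X1[0], f*X1[1])       # = (1/u, -1/v)
Y2 = (g*X2[0], g*X2[1])       # = (1/u,  1/v)
assert all(sp.simplify(c) == 0 for c in bracket(Y1, Y2))
print("[f X1, g X2] = 0 with f = g = 1/(uv)")
```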

There are sometimes short-cuts to this procedure if you are feeling lucky, but in the abstract this is what is needed. For instance, we could have gone straight to α, β by noting that we must have α(X₂) = 0 = β(X₁): by luck we would observe that α, β are the exterior derivatives of some functions and we're done! In practice, all this trick does is transform the problem to an equivalent system of PDEs. In general, given independent vector fields X₁, X₂ on U ⊆ R², we know that there exist functions a, b such that

[X1, X2] = aX1 + bX2

Hunting for functions f, g such that [fX₁, gX₂] = 0 is equivalent to f, g satisfying the PDEs

\[ X_2[f] = af, \qquad X_1[g] = -bg \]

This is the system (∗) in our example, and the problem of finding curvature-line co-ordinates essen- tially boils down to solving this. The theorem assures us that solutions exist, so we can always assume the existence of local curvature co-ordinates. Finding them explicitly may be essentially impossible!

2.5 Gauss and mean curvature The principal curvatures of a surface are usually rearranged in two combinations which have a deeper geometric meaning.

Definition 2.37. Suppose that x : U → E³ has principal curvatures k₁, k₂. The mean curvature H and Gauss curvature K of x are defined respectively by

\[ H = \tfrac{1}{2}(k_1 + k_2), \qquad K = k_1k_2 \]

Example The sphere of radius a (with outward-pointing normal) has k₁ = k₂ = −1/a. Thus H = −1/a and K = 1/a².

Proposition 2.38. In co-ordinates, the mean and Gauss curvatures are given by

\[ H = \frac{lG + nE - 2mF}{2(EG - F^2)}, \qquad K = \frac{ln - m^2}{EG - F^2} \]

Proof. In the basis ∂/∂u, ∂/∂v, the forms I and II have matrices \(\begin{pmatrix} E & F \\ F & G \end{pmatrix}\) and \(\begin{pmatrix} l & m \\ m & n \end{pmatrix}\) respectively. Now expand the determinant

\[ \det\left\{\begin{pmatrix} l & m \\ m & n \end{pmatrix} - \lambda\begin{pmatrix} E & F \\ F & G \end{pmatrix}\right\} = 0 \]

to obtain

\[ (EG - F^2)\lambda^2 - (lG + nE - 2mF)\lambda + (ln - m^2) = 0 \]

K is the product and H half the sum of the roots of this equation.
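Proposition 2.38 is easy to sanity-check on the sphere of radius a, whose E, F, G, l, m, n we already know; a short sympy sketch:

```python
import sympy as sp

a, phi = sp.symbols('a phi', positive=True)

# Sphere of radius a in spherical polars:
#   I  = a^2 dphi^2 + a^2 sin^2 phi dtheta^2
#   II = -a dphi^2 - a sin^2 phi dtheta^2
E, F, G = a**2, 0, a**2*sp.sin(phi)**2
l, m, n = -a, 0, -a*sp.sin(phi)**2

# Proposition 2.38
H = (l*G + n*E - 2*m*F) / (2*(E*G - F**2))
K = (l*n - m**2) / (E*G - F**2)

assert sp.simplify(H + 1/a) == 0       # H = -1/a
assert sp.simplify(K - 1/a**2) == 0    # K = 1/a^2
print("sphere: H = -1/a, K = 1/a^2 as expected")
```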

Examples

1. A surface of revolution, found by rotating the curve x = f(z) around the z-axis and parameterized in the usual way, has principal curvatures

\[ k_1 = -\frac{f''(u)}{(1 + f'(u)^2)^{3/2}}, \qquad k_2 = \frac{1}{f(u)(1 + f'(u)^2)^{1/2}} \]

whence

\[ H = \frac{1 + f'(u)^2 - f(u)f''(u)}{2f(u)(1 + f'(u)^2)^{3/2}}, \qquad K = -\frac{f''(u)}{f(u)(1 + f'(u)^2)^2} \]

The simplest example would be a cylinder, where the radius f(u) is constant: we immediately see that a cylinder has zero Gauss curvature, and mean curvature half the reciprocal of the radius.

2. If z = f(x, y) is the graph of a function then, parameterizing in the usual way, we have

\[ H = \frac{f_{vv}(1 + f_u^2) + f_{uu}(1 + f_v^2) - 2f_uf_vf_{uv}}{2(1 + f_u^2 + f_v^2)^{3/2}}, \qquad K = \frac{f_{uu}f_{vv} - f_{uv}^2}{(1 + f_u^2 + f_v^2)^2} \]

The Gauss and mean curvatures are extremely important objects, the Gauss curvature especially so. In particular, note that these are invariants of a surface:

• The Gauss curvature is independent of parameterization, while the mean curvature is invariant under orientation-preserving reparameterizations.

Many of the applications of these curvatures will have to wait until another course: here are a few to whet your appetite.

Definition 2.39. Constant mean curvature (CMC) surfaces are surfaces for which H is constant. A minimal surface is a surface for which H is identically zero.

CMC surfaces have many useful applications and appear throughout mathematics. Minimal surfaces are so-called because they minimize area. For instance, suppose that y is a closed curve in E³. Imagine filling in the curve so that you have some sort of warped disc surface whose edge is y. There are infinitely many choices, but the one with minimal area will have H = 0. The same holds for surfaces joining a pair of closed curves. As such, minimal surfaces are often the solutions to many practical questions: for example, the shape assumed by a soap film is typically a minimal surface, as this will be the surface which minimizes the total tension in the film.

It can be shown that constant Gauss curvature surfaces are spheres if K > 0, and planes, cones or cylinders when K = 0.

Definition 2.40. A pseudosphere is a surface with constant negative Gauss curvature.

It is an exercise to show that the surface of revolution defined by a tractrix¹⁵ defines a pseudosphere with K = −1, and that the surface obtained by rotating a catenary f(u) = a cosh(a⁻¹u + c) is a minimal surface.

Perhaps the most important fact regarding Gauss curvature is that it depends only on the first fundamental form. This is Gauss' famous Theorema Egregium, whose proof will also have to wait for a little more technology. The Gauss curvature is therefore entirely detectable to any inhabitant of a surface who can measure distance and angle but is incapable of perceiving the normal direction. This concept is important when extended to higher dimensions and the theory of relativity and spacetime: we cannot travel 'normal' to the Universe in order to measure curvature! Indeed the Gauss curvature is the 2-dimensional version of the more general Riemann curvature tensor which rules the discussion of relativity.

2.6 Power series expansion and Euler’s theorem We can analyze the properties of a surface at a fixed point in terms of the curvatures described in the previous sections. Essentially we view a surface as a graph of a function nearby a given point. The analysis will tell us what happens at that point and approximately nearby, but not globally.

Theorem 2.41. Let s be a point on a regular surface S. Choose axes such that s is the origin and the (x₁, x₂)-plane is tangent to the surface. The surface is then locally the graph of a function x₃ = f(x₁, x₂). Parameterizing in the usual way, we have that at the origin,

\[ I_{(0,0)} = du^2 + dv^2 \]
\[ II_{(0,0)} = f_{uu}(0, 0)\,du^2 + 2f_{uv}(0, 0)\,du\,dv + f_{vv}(0, 0)\,dv^2 \]

Proof. That the surface is locally a graph is essentially Theorem 2.21, after rotation and translation of the surface to the origin in such a way that U|ₛ becomes the vertical basis vector (0, 0, 1)ᵀ. Parameterizing in the usual way, we take (u, v) to be the standard orthonormal co-ordinates on the tangent plane TₛS such that (0, 0) corresponds to s, and obtain a parameterization x(u, v) = (u, v, f(u, v))ᵀ. Section 2.3 shows the fundamental forms for such a surface:

\[ I = (1 + f_u^2)\,du^2 + 2f_uf_v\,du\,dv + (1 + f_v^2)\,dv^2 \]
\[ II = \frac{1}{\sqrt{1 + f_u^2 + f_v^2}}\big\{f_{uu}\,du^2 + 2f_{uv}\,du\,dv + f_{vv}\,dv^2\big\} \]

from which evaluation at (0, 0) is immediate.

Note that we’re only computing the fundamental forms at the origin: in particular, although the co- ordinates u, v will extend at least nearby on the surface, the first fundamental form need not be diagonal anywhere except at the origin.

15Recall the definition from our discussion of curves.

Definition 2.42. The matrix of II(0, 0) with respect to the co-ordinates (u, v) is the Hessian of f at (0, 0):

\[ \operatorname{Hess} f(0, 0) = \begin{pmatrix} f_{uu} & f_{uv} \\ f_{uv} & f_{vv} \end{pmatrix}\bigg|_{(0,0)} \]

It is immediate that the Gauss and mean curvatures satisfy

\[ K(s) = \det\operatorname{Hess} f(0, 0) \qquad\text{and}\qquad H(s) = \tfrac{1}{2}\operatorname{tr}\operatorname{Hess} f(0, 0) \]

Suppose now that we rotate the (x₁, x₂)-plane so that the axes point in the principal directions at (0, 0). Then II(0, 0) is also diagonal (f_{uv}(0, 0) = 0). We have the following result.

Theorem 2.43. Suppose that x(u, v) = (u, v, f(u, v))ᵀ is a local surface defined as a graph z = f(x, y), such that the (x, y)-plane is tangent at x(0, 0) and both I(0, 0) and II(0, 0) are diagonal. Then the principal curvatures at (0, 0) are given by

\[ k_1 = f_{uu}(0, 0) \qquad\text{and}\qquad k_2 = f_{vv}(0, 0) \]

Furthermore, the Taylor expansion of x₃ = f(x₁, x₂) about the origin is

\[ x_3 = \frac{k_1}{2}x_1^2 + \frac{k_2}{2}x_2^2 + \text{higher order terms} \]

Proof. The principal curvatures are clear since both fundamental forms are diagonal. For the latter, recall the Taylor series for multivariable functions:

\[ x_3 = f(0, 0) + (x_1\ x_2)\,\nabla f\big|_{(0,0)} + \frac{1}{2}(x_1\ x_2)\operatorname{Hess} f\big|_{(0,0)}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \text{higher order terms} \]
\[ = 0 + (x_1\ x_2)\begin{pmatrix} 0 \\ 0 \end{pmatrix} + \frac{1}{2}(x_1\ x_2)\begin{pmatrix} k_1 & 0 \\ 0 & k_2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \text{higher order terms} \]
\[ = \frac{k_1}{2}x_1^2 + \frac{k_2}{2}x_2^2 + \text{higher order terms} \]

Again note that we are not claiming that u, v are curvature co-ordinates on the surface: we only have diagonal fundamental forms at the origin. Nearby there is no requirement that u, v be curvature-line.

The Theorem is the surface analogue of a result we saw earlier regarding curves: a regular curve in E² passing through the origin horizontally at t = 0 has its graph given locally by

\[ y = \frac{\kappa(0)}{2}x^2 + \text{higher order terms} \tag{$*$} \]

We can put this correspondence to work by considering the curvature of certain curves passing through a point on a surface.

Definition 2.44. Let v be a tangent vector at a point s to a surface S in E³. Let Πₛ(v) be the plane through s spanned by v and U|ₛ. Its intersection Πₛ(v) ∩ S with the surface contains a connected component which describes a curve lying in the surface passing through s. The normal curvature of S in the direction v is the curvature at s of this curve.

The normal curvature is surprisingly easy to compute.

Theorem 2.45 (Euler). Suppose that v is a tangent vector to S making angle θ with the first principal curvature direction. Then the normal curvature of S in the direction v is

\[ \kappa = k_1\cos^2\theta + k_2\sin^2\theta \]

Proof. Choose axes such that the principal curvature directions at s are (1, 0, 0)ᵀ and (0, 1, 0)ᵀ, and the unit normal is U|ₛ = (0, 0, 1)ᵀ. Consider the position vector

\[ x = \begin{pmatrix} t\cos\theta \\ t\sin\theta \\ z \end{pmatrix} \]

We can think about this in two ways:

1. Let θ be the angle made by v and the principal direction (1, 0, 0)ᵀ. Then x(t, z) parameterizes the plane Πₛ(v). By observation (∗), the plane curve in the intersection Πₛ(v) ∩ S has the form

\[ y(t) = \begin{pmatrix} t\cos\theta \\ t\sin\theta \\ \frac{1}{2}\kappa t^2 + \text{higher order terms} \end{pmatrix} \]

where κ is the normal curvature.

2. By Theorem 2.41, the surface is locally a graph z = f(t, θ), where t, θ are polar co-ordinates in the horizontal plane. Theorem 2.43 says that

\[ z = \tfrac{1}{2}k_1(t\cos\theta)^2 + \tfrac{1}{2}k_2(t\sin\theta)^2 + \text{higher order terms} = \tfrac{1}{2}\big(k_1\cos^2\theta + k_2\sin^2\theta\big)t^2 + \text{higher order terms} \]

Comparing the two expressions for z yields the result.

A straightforward consequence is that the normal curvature always lies between the two principal values k₁ and k₂. The principal curvatures are therefore the extremes of the normal curvature at any given point.
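Euler's formula can be verified symbolically on a model graph. In the sympy sketch below the principal curvatures k₁ = 2, k₂ = −3 are assumed sample values (our choice, not from the notes); the normal curvature in direction (cos θ, sin θ) is computed as the ratio II(w, w)/I(w, w) at the origin.

```python
import sympy as sp

u, v, th = sp.symbols('u v theta', real=True)

# Graph z = f(u,v) tangent to the (u,v)-plane at the origin, with the
# principal directions along the axes; k1 = 2, k2 = -3 are assumed values.
k1, k2 = 2, -3
f = sp.Rational(1, 2)*(k1*u**2 + k2*v**2)

x = sp.Matrix([u, v, f])
xu, xv = x.diff(u), x.diff(v)
N = xu.cross(xv)
U = N / sp.sqrt(N.dot(N))

# Coefficients of I and II
E, F, G = xu.dot(xu), xu.dot(xv), xv.dot(xv)
l = x.diff(u, 2).dot(U)
m = x.diff(u).diff(v).dot(U)
n = x.diff(v, 2).dot(U)

# Normal curvature at the origin in direction w = (cos th, sin th):
# kappa = II(w, w) / I(w, w)
w1, w2 = sp.cos(th), sp.sin(th)
II = (l*w1**2 + 2*m*w1*w2 + n*w2**2).subs({u: 0, v: 0})
I = (E*w1**2 + 2*F*w1*w2 + G*w2**2).subs({u: 0, v: 0})
kappa = sp.simplify(II / I)

# Euler's theorem: kappa = k1 cos^2 th + k2 sin^2 th
assert sp.simplify(kappa - (k1*w1**2 + k2*w2**2)) == 0
print("Euler's formula verified at the origin")
```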

2.7 Elliptic and hyperbolic points: the Dupin Indicatrix

Theorem 2.43 provides a description of a surface near a point s.

Definition 2.46. Suppose that k1, k2, K, H are the principal, Gauss and mean curvatures of a surface S at a given point s. We say that s is:

1. Elliptic ⟺ K > 0 ⟺ k₁, k₂ are non-zero and have the same sign. In this case, H has the same sign as the principal curvatures: it is positive if and only if the surface bends towards the unit normal.

2. Hyperbolic ⇐⇒ K < 0 ⇐⇒ k1, k2 are non-zero and have opposite signs

3. Parabolic ⟺ K = 0 and H ≠ 0 ⟺ exactly one of k₁, k₂ is zero

4. Planar ⟺ K = H = 0 ⟺ k₁ = k₂ = 0

Recalling Theorem 2.43, we see that a surface may be written locally as

\[ z \approx \tfrac{1}{2}k_1x^2 + \tfrac{1}{2}k_2y^2 \]

Near an elliptic point, level curves (z = constant) of the surface are approximately ellipses. Near a hyperbolic point they are approximately hyperbolæ. The correspondence doesn't extend to the other types of point. Near a parabolic point a surface looks approximately like a parabolic cylinder, while near a planar point a surface looks approximately like part of a plane.

Examples

1. The (elliptic) paraboloid x(u, v) = (u, v, u² + v²)ᵀ has

\[ I = (1 + 4u^2)\,du^2 + 8uv\,du\,dv + (1 + 4v^2)\,dv^2, \qquad II = \frac{2}{\sqrt{1 + 4u^2 + 4v^2}}\big(du^2 + dv^2\big) \]

from which we can calculate¹⁶

\[ K = \frac{4}{(1 + 4u^2 + 4v^2)^2} \qquad\text{and}\qquad H = \frac{2(1 + 2u^2 + 2v^2)}{(1 + 4u^2 + 4v^2)^{3/2}} \]

Both are positive everywhere, and so all points are elliptic with the surface always bending towards the unit normal.

2. The (hyperbolic) paraboloid x(u, v) = (u, v, u² − v²)ᵀ has

\[ K = -\frac{4}{(1 + 4u^2 + 4v^2)^2} \qquad\text{and}\qquad H = \frac{4(v^2 - u^2)}{(1 + 4u^2 + 4v^2)^{3/2}} \]

so that all points are hyperbolic.

16Revisit Proposition 2.38 and the examples following to do this quickly.

34  u  3. The surface x(u, v) = v has u2+v4 24v2 6v2(1 + 4u2) + 1 + 16v6 K = and H = (1 + 4u2 + 16v6)2 (1 + 4u2 + 16v6)3/2 Away from v = 0, x is entirely elliptic and curves towards the unit normal vector. The points x(u, 0) are parabolic. This doesn’t mean that the surface contains a straight line, indeed all points x(u, v) where (u, v) 6= (u0, 0) lie on the same side of the tangent plane at (u0, 0). To see this, note that −2u  1 0 U(u0, 0) =  0  (1 + 4 2)1/2 u0 1

and so

    (x(u, v) − x(u0, 0)) · U(u0, 0) = (1 + 4u0²)^(−1/2) (u − u0, v, u² + v⁴ − u0²) · (−2u0, 0, 1)
                                    = (1 + 4u0²)^(−1/2) ((u − u0)² + v⁴) > 0

for all (u, v) ≠ (u0, 0). As this example shows, a parabolic point can still be convex (or indeed a saddle).

Definition 2.47. Recall from Euler's theorem that the normal curvature at each point takes values between the two principal values. A tangent vector v at a point on a surface is asymptotic if the normal curvature defined by v is zero.
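The inequality above can be verified numerically. The sketch below (an illustration under the given parameterization) evaluates the signed distance from sample points of the surface x(u, v) = (u, v, u² + v⁴) to the tangent plane at the parabolic point (u0, 0):

```python
import math

def side(u, v, u0):
    """Signed distance of x(u,v) from the tangent plane at (u0, 0),
    measured along the unit normal U(u0, 0) = (-2u0, 0, 1)/sqrt(1+4u0^2)."""
    mag = math.sqrt(1 + 4*u0*u0)
    return (-2*u0*(u - u0) + (u*u + v**4 - u0*u0)) / mag

u0 = 1.5
vals = [side(u0 + du, dv, u0) for du in (-1, -0.1, 0.3, 2)
                             for dv in (-1, 0.01, 0.5)]
print(min(vals))  # strictly positive: every sample lies on the normal side
```

Each value equals ((u − u0)² + v⁴)/√(1 + 4u0²), which is positive away from (u0, 0), matching the computation above.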

Examples

1. The elliptic paraboloid has no asymptotic directions at any point: the principal curvatures are both positive at all points, whence the normal curvature at any point is k1 cos²θ + k2 sin²θ > 0.

2. The hyperbolic paraboloid has asymptotic directions at every point. The principal curvatures have opposite signs, whence there are two angles θ ∈ (−π/2, π/2) such that k1 cos²θ + k2 sin²θ = 0: i.e.

    tan²θ = −k1/k2

Computing this more specifically is awkward, as the principal curvatures themselves are ugly. At the origin, however, things are easier: k1 = 2 = −k2, so the asymptotic directions satisfy tan²θ = 1 ⇐⇒ θ = ±π/4. In this case the asymptotic directions point half way between the principal curvature directions.
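The origin computation for the hyperbolic paraboloid is easy to check directly; the sketch below (illustrative only) evaluates Euler's formula with k1 = 2, k2 = −2:

```python
import math

# Euler's formula: normal curvature k(t) = k1*cos(t)^2 + k2*sin(t)^2.
# At the origin of the hyperbolic paraboloid, k1 = 2 and k2 = -2.
k1, k2 = 2.0, -2.0
k = lambda t: k1*math.cos(t)**2 + k2*math.sin(t)**2

print(k(math.pi/4), k(-math.pi/4))      # both ~0: asymptotic directions
print(math.tan(math.pi/4)**2, -k1/k2)   # tan^2(theta) = -k1/k2 = 1
```

The normal curvature vanishes exactly at θ = ±π/4, half way between the principal directions.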

3. In the third example, there are asymptotic directions only at the parabolic points where v = 0. These point parallel to the y-axis.

In keeping with our theme of transferring calculations back to the parameterization space U, we prove a theorem.

Theorem 2.48. Suppose that x : U → E3 is a local surface and let v ∈ TpU be a non-zero tangent vector. Then dx(v) is asymptotic if and only if II(v, v) = 0.

Because of the theorem we tend to call the tangent vector v asymptotic in its own right.

Proof. Let v ∈ TpU be a non-zero tangent vector and let Π be the plane at x(p) spanned by U(p) and dx(v). Let y be the curve defined by the intersection of the surface and Π, parameterized such that y(0) = x(p). Then the curvature of y at x(p) is precisely the normal curvature of the surface in the direction dx(v). Now recall Theorem 2.27, from which we see that

    y″(0) · U = II(v, v)

But y″(0) · U is zero iff the curvature of y vanishes at t = 0, which happens iff the normal curvature in the direction dx(v) vanishes.

Definition 2.49. Let x : U → E3 be a parameterized surface. The Dupin indicatrix at p ∈ U is the set of tangent vectors v ∈ TpU such that II(v, v) = ±1.

We can easily derive an equation for the Dupin indicatrix. Suppose that

v = x1X1 + x2X2

where X1, X2 are unit-length principal curvature vectors.17 Then

    II(v, v) = x1² II(X1, X1) + 2x1x2 II(X1, X2) + x2² II(X2, X2) = k1x1² + k2x2² = ±1

defines a curve in the tangent space TpU. This curve depends on the signs of k1, k2.

1. If k1, k2 have the same sign, then the Dupin indicatrix is an ellipse (with respect to the basis X1, X2). This motivates the name elliptic point. In the special case of an umbilic point k1 = k2 the indicatrix is a circle (with respect to X1, X2).

2. If k1, k2 are non-zero with opposite signs, then the indicatrix is a pair of hyperbolae, hence hyperbolic point.

3. If one of k1, k2 is zero (a parabolic point) the indicatrix is a pair of lines.

4. If both curvatures are zero, the indicatrix is empty.

When we say that the indicatrix is an ellipse or a circle, etc., with respect to the basis X1, X2, what we really mean is with respect to the first fundamental form. Remember that TpU, although a vector space, has no inherent notion of length or angle (it is not a Euclidean space). It is the first fundamental form that defines what is meant by length and angle in TpU.

The Dupin indicatrix (at least for elliptic and hyperbolic points) gives a visualization of what the surface looks like near a point. Near an elliptic point, planar slices through the surface orthogonal to the unit normal are approximately ellipses. Near a hyperbolic point planar slices through the surface

17 I.e. I(X1, X1) = 1 = I(X2, X2) and I(X1, X2) = 0.

orthogonal to the unit normal are approximately hyperbolae. Typical indicatrices for an elliptic and a hyperbolic point are shown below, with the asymptotic directions indicated by dashed arrows for the hyperbolic point. The first is based on an umbilic point k1 = k2 = 0.9 (this is a circle with respect to the chosen principal curvature directions X1, X2), while the second has k1 = −k2 = 0.3.

[Figure: two Dupin indicatrices plotted against the principal directions X1, X2 — a circle for the umbilic point, a pair of hyperbolae with dashed asymptotic directions for the hyperbolic point.]

Corollary 2.50. A point s on a surface has exactly

    0 asymptotic directions if s is elliptic,
    1 asymptotic direction if s is parabolic,
    2 asymptotic directions if s is hyperbolic,
    ∞ asymptotic directions if s is planar.

Proof. The equation for asymptotic directions v is II(v, v) = 0. This requires solving the singular conic

    II(v, v) = k1x1² + k2x2² = 0

The options are:

• Elliptic point: if k1, k2 have the same sign, the only solution is (x1, x2) = (0, 0), so we obtain no (non-zero) asymptotic directions v = x1X1 + x2X2.

• Hyperbolic point: if k1, k2 have opposite signs, the conic factors as two distinct lines x2 = ±√(−k1/k2) x1 and we obtain two distinct asymptotic directions.

• Parabolic point: if k1 = 0, then the conic is the x1-axis (x2 = 0). We have a single asymptotic direction along this axis. (k2 = 0 is similar.)

• Planar point: if k1 = k2 = 0 then all vectors v satisfy II(v, v) = 0, whence any direction is asymptotic.

There are thus at most two asymptotic directions at each non-planar point on a surface.
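The case analysis in the proof can be summarized as a small sketch (illustrative only; the function name is ours) that counts the solutions of k1x1² + k2x2² = 0 up to scaling:

```python
import math

def asymptotic_count(k1, k2, tol=1e-12):
    """Number of asymptotic directions, from the signs of the
    principal curvatures, mirroring the four cases of Corollary 2.50."""
    if abs(k1) < tol and abs(k2) < tol:
        return math.inf          # planar: every direction is asymptotic
    if abs(k1) < tol or abs(k2) < tol:
        return 1                 # parabolic: one axis of the conic
    if k1 * k2 < 0:
        return 2                 # hyperbolic: x2 = +-sqrt(-k1/k2) x1
    return 0                     # elliptic: only the zero vector

print([asymptotic_count(*ks) for ks in [(2, 3), (0, 2), (2, -2), (0, 0)]])
# -> [0, 1, 2, inf]
```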
