
Differential Geometry of Surfaces

Jordan Smith and Carlo Séquin∗
CS Division, UC Berkeley

1 Introduction

These are notes on the differential geometry of surfaces, based on reading [Greiner et al. n.d.].

2 Differential Geometry of Surfaces

Differential geometry of a 2D manifold or surface embedded in 3D is the study of the intrinsic properties of the surface as well as the effects of a given parameterization on the surface. For the discussion below, we will consider the local behavior of a surface S at a single point p with a given local parameterization (u, v), see Figure 1. We would like to calculate the principal curvature values (κ1, κ2), the maximum and minimum curvature values at p, as well as the local orthogonal arc length parameterization (s, t) which is aligned with the directions of maximum and minimum curvature.

Figure 1: Tangent plane of S

3 The Tangent Plane

With the given parameterization, we can compute a pair of tangent vectors

    Su = ∂S/∂u,    Sv = ∂S/∂v,

assuming the u and v directions are distinct. These two vectors locally parameterize the tangent plane to S at p. A general tangent vector T can be constructed as a linear combination of Su and Sv scaled by infinitesimal coefficients U = [∂u; ∂v]:

    T = ∂u Su + ∂v Sv = [Su Sv] [∂u; ∂v]                                    (1)

Consider rotating the direction of the tangent T around p by setting [∂u ∂v] = [cos θ  sin θ]. In general, the magnitude and direction of T will vary based on the skew and scale of the basis vectors Su and Sv, see Figure 3(a). The distortion of the local parameterization is described by the metric, or the first fundamental form IS. This measures the length of a vector on a given parametric basis by examining the quantity T · T:

    T · T = (Su · Su) ∂u² + 2 (Su · Sv) ∂u∂v + (Sv · Sv) ∂v²                (2)
          = E ∂u² + 2F ∂u∂v + G ∂v²                                         (3)
          = [∂u ∂v] [Su; Sv] · [Su Sv] [∂u; ∂v]                             (4)
          = [∂u ∂v] [Su·Su  Su·Sv; Su·Sv  Sv·Sv] [∂u; ∂v]                   (5)
          = [∂u ∂v] [E F; F G] [∂u; ∂v]                                     (6)
          = Uᵗ IS U                                                         (7)

Equation 7 defines IS:

    IS = gij = [Su; Sv] · [Su Sv]                                           (8)
             = [Su·Su  Su·Sv; Su·Sv  Sv·Sv]                                 (9)
             = [E F; F G]                                                   (10)

If an orthonormal arc length parameterization (s, t) is chosen, then there will be no local metric distortion and IS will reduce to the identity matrix.

4 The Normal

The unit normal N of a surface S at p is the vector perpendicular to S, i.e. to the tangent plane of S, at p. N can be calculated given a general nondegenerate parameterization (u, v):

    N = (Su × Sv) / ‖Su × Sv‖                                               (11)

The curvature of a surface S at a point p is defined by the rate that N rotates in response to a unit tangential displacement, as in Figure 2. A normal section at p is constructed by intersecting S with a plane normal to it, i.e. a plane that contains N and a tangent direction T. The curvature of this curve is the curvature of S in the direction T. The curvature κ of a curve is the reciprocal of the radius ρ of the best fitting osculating circle, i.e. κ = 1/ρ. The directional derivative NT of N in the direction T is a linear combination of the vectors Nu = ∂N/∂u and Nv = ∂N/∂v and is parallel to the tangent plane.

∗e-mail: (jordans|sequin)@cs.berkeley.edu
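As a concrete numerical check of the quantities defined so far, the sketch below (not part of the original notes; the paraboloid S(u, v) = (u, v, u² + v²) is an assumed example surface) evaluates Su, Sv, the first fundamental form coefficients E, F, G of Equation 10, and the unit normal of Equation 11 using central differences.

```python
import math

def S(u, v):
    # Assumed example surface: an elliptic paraboloid.
    return (u, v, u * u + v * v)

def partial(f, u, v, du=0.0, dv=0.0, h=1e-5):
    # Central-difference approximation of a partial derivative of f
    # in the parametric direction (du, dv).
    p = f(u + h * du, v + h * dv)
    m = f(u - h * du, v - h * dv)
    return tuple((a - b) / (2 * h) for a, b in zip(p, m))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def first_fundamental_form(u, v):
    Su = partial(S, u, v, du=1.0)
    Sv = partial(S, u, v, dv=1.0)
    # Coefficients of I_S, Equation 10.
    E, F, G = dot(Su, Su), dot(Su, Sv), dot(Sv, Sv)
    # Unit normal, Equation 11.
    n = cross(Su, Sv)
    norm = math.sqrt(dot(n, n))
    N = tuple(c / norm for c in n)
    return (E, F, G), N

(E, F, G), N = first_fundamental_form(0.0, 0.0)
```

At the origin this parameterization is orthonormal to first order, so E = G = 1, F = 0 and N = (0, 0, 1), i.e. IS reduces to the identity matrix as noted above.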
Differentiating the unit normal of Equation 11 with respect to u and v gives

    Nu = ((Suu × Sv) + (Su × Svu)) / ‖Su × Sv‖ + (Su × Sv) ∂‖Su × Sv‖⁻¹/∂u      (12)
    Nv = ((Suv × Sv) + (Su × Svv)) / ‖Su × Sv‖ + (Su × Sv) ∂‖Su × Sv‖⁻¹/∂v      (13)
    NT = ∂u Nu + ∂v Nv = [Nu Nv] [∂u; ∂v]                                       (14)

Figure 2: Normal section of S

As with the tangent vector T, NT is subject to scaling by the metric of the parameter space. In an arc length parameter direction s, −Ss · Ns is the curvature of the normal section. Given a general parameterization (u, v), we can compute −T · NT. This quantity defines the second fundamental form IIS, which describes curvature information distorted by the local metric.

    −T · NT = −(Su · Nu) ∂u² − (Su · Nv + Sv · Nu) ∂u∂v − (Sv · Nv) ∂v²         (15)
            = [∂u ∂v] [Su; Sv] · (−[Nu Nv]) [∂u; ∂v]                            (16)
            = [∂u ∂v] [−Su·Nu  −Su·Nv; −Sv·Nu  −Sv·Nv] [∂u; ∂v]                 (17)
            = Uᵗ IIS U                                                          (18)

Equation 18 defines IIS. IIS can be simplified to a set of quantities which are easier to compute by substituting the expressions in Equations [12,13] for Nu and Nv respectively. Two of the terms in each entry of the matrix in Equation 21 vanish due to the dot product of orthogonal vectors. Further simplification using properties of the box product of vectors yields Equation 22. Equation 23 is derived by substituting N from Equation 11 and shows that IIS can be computed simply by the dot product of the second partial derivatives of S with N. Equation 23 also demonstrates that IIS is a symmetric matrix.

    IIS = hij = −[Su; Sv] · [Nu Nv]                                             (19)
        = [−Su·Nu  −Su·Nv; −Sv·Nu  −Sv·Nv]                                      (20)
        = [−Su·(Suu×Sv)/‖Su×Sv‖   −Su·(Suv×Sv)/‖Su×Sv‖;
           −Sv·(Su×Svu)/‖Su×Sv‖   −Sv·(Su×Svv)/‖Su×Sv‖]                         (21)
        = [Suu·(Su×Sv)/‖Su×Sv‖    Suv·(Su×Sv)/‖Su×Sv‖;
           Svu·(Su×Sv)/‖Su×Sv‖    Svv·(Su×Sv)/‖Su×Sv‖]                          (22)
        = [Suu·N  Suv·N; Suv·N  Svv·N]                                          (23)
        = [L M; M N]                                                            (24)

Equations [23,24] can be substituted for IIS to yield alternate formulas for −T · NT.

    −T · NT = (Suu · N) ∂u² + 2 (Suv · N) ∂u∂v + (Svv · N) ∂v²                  (25)
            = L ∂u² + 2M ∂u∂v + N ∂v²                                           (26)
            = [∂u ∂v] [Suu·N  Suv·N; Svu·N  Svv·N] [∂u; ∂v]                     (27)
            = [∂u ∂v] [L M; M N] [∂u; ∂v]                                       (28)

IIS contains information about the curvature of S, but it will be distorted by the metric if the given parameterization is not an arc length parameterization. It is still necessary to counteract this distortion in order to compute the curvature of the surface S.

5 Coordinate Transformations

Our purpose in studying differential geometry and the fundamental forms has been to compute the principal curvature values (κ1, κ2) and directions (Sκ1, Sκ2) of a surface S at a point p. Thus far we have derived IIS, which contains curvature information but can be distorted by the metric of the parameterization. IS describes this metric information. We must combine IIS and IS to find the arc length curvature values.

The given parameterization (u, v) defines a set of basis tangent vectors Su and Sv. We can construct an orthonormal basis Ss, St due to an arc length parameterization (s, t) where Ss and Su are aligned, see Figure 3(a). A point T can be expressed in either coordinate system, see Figure 3(b).

    T = [∂u ∂v] [Su; Sv] = [∂s ∂t] [Ss; St]                                     (29)

The coordinates [∂s ∂t] measure the length of T, as opposed to the coordinates [∂u ∂v]:

    T · T = ∂s² + ∂t² ≠ ∂u² + ∂v²                                               (30)

We would like to work in the Ss, St basis, so we must find the transformations to and from Su, Sv. The quantities a, b, and θ in Figure 3(a) are defined by Su and Sv.

Figure 3: Effects of the metric. (a) Constructing an orthonormal basis on top of the given parametric basis. (b) A point T in (u, v) and (s, t) coordinates.

    [Su; Sv] = [a  0; b cos θ  b sin θ] [Ss; St]                                (31)
             = A [Ss; St]                                                       (32)
    [Ss; St] = 1/(ab sin θ) [b sin θ  0; −b cos θ  a] [Su; Sv]                  (33)
             = A⁻¹ [Su; Sv]                                                     (34)

The first fundamental form IS can be represented by these coordinate transformations.

    IS = [Su; Sv] · [Su Sv]                                                     (35)
       = A [Ss; St] · [Ss St] Aᵗ                                                (36)
       = A [1 0; 0 1] Aᵗ                                                        (37)
       = A Aᵗ                                                                   (38)
       = [a²  ab cos θ; ab cos θ  b²]                                           (39)

We can also derive transformations between coordinates of the different sets of basis vectors by substituting Equations [32,34] respectively into Equation 29.

    [∂s ∂t] = [∂u ∂v] A                                                         (40)
    [∂u ∂v] = [∂s ∂t] A⁻¹                                                       (41)

To verify that this makes sense, consider T · T again and transform the coordinates using Equations [40,41].

    T · T = [∂u ∂v] [Su; Sv] · [Su Sv] [∂u; ∂v]                                 (42)
          = [∂u ∂v] IS [∂u; ∂v]                                                 (43)
          = [∂s ∂t] A⁻¹ IS (A⁻¹)ᵗ [∂s; ∂t]                                      (44)
          = [∂s ∂t] A⁻¹ AAᵗ (A⁻¹)ᵗ [∂s; ∂t]                                     (45)
          = [∂s ∂t] [∂s; ∂t]                                                    (46)
          = ∂s² + ∂t²                                                           (47)

It is now possible to transform our given parameters to coordinates on an orthonormal basis where we can measure lengths. We can also transform coordinates on the orthonormal basis back to coordinates on our given parameterization.

6 Curvature

The curvature at a point p on the surface S in any tangential direction T is −T · NT.

    −T · NT = [∂u ∂v] [Su; Sv] · (−[Nu Nv]) [∂u; ∂v]                            (48)
            = [∂u ∂v] IIS [∂u; ∂v]                                              (49)

This function is parameterized by the given parameterization. We would like to study the function that sweeps a unit length tangent around p, so it is more convenient to use arc length coordinates. We must transform coordinates so that setting the arc length coordinates [∂s ∂t] = [cos φ  sin φ] sweeps out the curvature values.

    −T · NT = [∂s ∂t] A⁻¹ IIS (A⁻¹)ᵗ [∂s; ∂t]                                   (50)
            = [∂s ∂t] IIŜ [∂s; ∂t]                                              (51)

Figure 4: The relationship between the principal parameter directions (E1, E2), the given general parameter directions (Su, Sv), and the constructed orthonormal basis (Ss, St) in the tangent plane to S at p

The curvature function will have maximum and minimum values that occur at directions that are orthogonal to each other. The eigenvalues of IIŜ are the principal curvature values (κ1, κ2). The eigenvectors of IIŜ are the coordinates of the principal curvature directions in the arc length coordinates. These coordinates will not be directly useful because we only know the Su, Sv vectors, so we would actually prefer the eigenvectors in coordinates on this basis.

    IIŜ = A⁻¹ IIS (A⁻¹)ᵗ                                                        (52)
        = 1/(a²b² sin²θ) [b sin θ  0; −b cos θ  a] [L M; M N]
                         [b sin θ  −b cos θ; 0  a]                              (53)
        = 1/(a²b² sin²θ) [b² sin²θ L                   b sin θ (−b cos θ L + aM);
                          b sin θ (−b cos θ L + aM)    b² cos²θ L − 2ab cos θ M + a²N]   (54)

Solving for the eigenvalues, we find the characteristic equation. The principal curvature values are the eigenvalues.

    0 = (a²b² − (ab cos θ)²) λ² − (b²L − 2ab cos θ M + a²N) λ + LN − M²         (55)
    0 = (EG − F²) λ² − (GL − 2FM + EN) λ + LN − M²                              (56)

7 The Weingarten Operator

An alternate way of computing the principal curvature values and directions is by using the Weingarten Operator W, also known as the shape operator. W is the inverse of the first fundamental form multiplied by the second fundamental form.

    W = IS⁻¹ IIS                                                                (57)

IS⁻¹ is the inverse of the first fundamental form.

    IS⁻¹ = (AAᵗ)⁻¹ = (A⁻¹)ᵗ A⁻¹                                                 (58)
         = 1/(EG − F²) [G  −F; −F  E]                                           (59)

We will prove that W has the same eigenvalues as IIŜ, i.e. the same principal curvature values. We will also prove that the eigenvectors of W are transformed versions of the eigenvectors of IIŜ to coordinates on the given basis Su, Sv.

    IIŜ = V Λ V⁻¹                                                               (60)
    IIS = A IIŜ Aᵗ                                                              (61)
    W = IS⁻¹ IIS                                                                (62)
      = (A⁻¹)ᵗ A⁻¹ IIS                                                          (63)
      = (A⁻¹)ᵗ A⁻¹ A IIŜ Aᵗ                                                     (64)
      = (A⁻¹)ᵗ IIŜ Aᵗ                                                           (65)
    W = (A⁻¹)ᵗ V Λ V⁻¹ Aᵗ                                                       (66)

Equation 66 is the eigendecomposition of W. The diagonal matrix Λ has the eigenvalues of W on its diagonal. It is the same eigenvalue matrix as for IIŜ, so the eigenvalues of W are the principal curvature values. ((A⁻¹)ᵗ V)⁻¹ = V⁻¹ Aᵗ, so the columns of (A⁻¹)ᵗ V are the eigenvectors of W. This is a transformed version of V, the eigenvectors of IIŜ over the arc length basis. The transformation (A⁻¹)ᵗ transforms the arc length coordinates to coordinates over the given basis, see Equation 41.

    V = [s1 s2; t1 t2]                                                          (67)
    (A⁻¹)ᵗ V = (A⁻¹)ᵗ [s1 s2; t1 t2]                                            (68)
             = [u1 u2; v1 v2]                                                   (69)

To solve for the principal curvature values and directions using the given parameterization, we substitute Equations [24,59] into Equation 57 and simplify to derive W written in terms of the coefficients of the first and second fundamental forms.

    W = IS⁻¹ IIS                                                                (70)
      = 1/(EG − F²) [G  −F; −F  E] [L M; M N]                                   (71)
      = 1/(EG − F²) [GL − FM   GM − FN; EM − FL   EN − FM]                      (72)

Then we use the representation of W in Equation 72 and compute the eigendecomposition. The eigenvalues are the principal curvatures, and the eigenvectors are the coordinate coefficients over the given parametric basis vectors.

    κ1,2 = λ1,2 = (GL − 2FM + EN
                   ∓ √((GL − 2FM + EN)² − 4 (EG − F²)(LN − M²))) / (2 (EG − F²))   (73)

    κM = (κ1 + κ2)/2 = (GL − 2FM + EN) / (2 (EG − F²)) = Trace(W)/2             (74)
    κG = κ1 κ2 = (LN − M²) / (EG − F²) = Det(W)                                 (75)

References

GREINER, G., LOOS, J., AND WESSELINK, W. Data Dependent Thin Plate Energy and its use in Interactive Surface Modeling.
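The metric construction of Equations 31-39 can be sanity checked numerically. The following is a minimal sketch (the skewed basis vectors Su, Sv are an assumed example, not from the original notes): it builds A from a = ‖Su‖, b = ‖Sv‖, and the angle θ between the basis vectors, and confirms that AAᵗ reproduces [E F; F G].

```python
import math

# Assumed example basis vectors Su, Sv in the tangent plane (skewed, non-unit).
Su = (2.0, 0.0, 0.0)
Sv = (1.0, 2.0, 0.0)

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

a = math.sqrt(dot(Su, Su))                 # a = |Su|
b = math.sqrt(dot(Sv, Sv))                 # b = |Sv|
theta = math.acos(dot(Su, Sv) / (a * b))   # angle between Su and Sv

# Equation 31: change of basis from the orthonormal (Ss, St) to (Su, Sv).
A = [[a, 0.0],
     [b * math.cos(theta), b * math.sin(theta)]]

# Equations 38-39: I_S = A A^t = [[a^2, ab cos θ], [ab cos θ, b^2]].
AAt = [[A[0][0] * A[0][0] + A[0][1] * A[0][1],
        A[0][0] * A[1][0] + A[0][1] * A[1][1]],
       [A[1][0] * A[0][0] + A[1][1] * A[0][1],
        A[1][0] * A[1][0] + A[1][1] * A[1][1]]]

E, F, G = dot(Su, Su), dot(Su, Sv), dot(Sv, Sv)
# AAt reproduces [[E, F], [F, G]] up to floating-point error.
```

For this choice of Su and Sv, a = 2, b = √5, and AAᵗ = [4 2; 2 5] = [E F; F G].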

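The closed forms of Equations 73-75 translate directly into code. The sketch below (an assumed illustration, not part of the original notes) computes κ1, κ2, the mean curvature κM, and the Gaussian curvature κG from the fundamental form coefficients; the test values E = G = 1, F = 0, L = N = 2, M = 0 correspond to the paraboloid z = u² + v² at the origin, where both principal curvatures equal 2.

```python
import math

def principal_curvatures(E, F, G, L, M, N):
    # Equations 73-75: eigenvalue data of the Weingarten operator
    # W = IS^-1 IIS, written in fundamental form coefficients.
    denom = 2.0 * (E * G - F * F)
    trace_term = G * L - 2.0 * F * M + E * N
    disc = math.sqrt(trace_term ** 2
                     - 4.0 * (E * G - F * F) * (L * N - M * M))
    k1 = (trace_term - disc) / denom            # Equation 73, minus branch
    k2 = (trace_term + disc) / denom            # Equation 73, plus branch
    mean = trace_term / denom                   # Equation 74: Trace(W)/2
    gauss = (L * N - M * M) / (E * G - F * F)   # Equation 75: Det(W)
    return k1, k2, mean, gauss

# Assumed example coefficients (paraboloid at the origin): k1 = k2 = 2.
k1, k2, mean, gauss = principal_curvatures(1.0, 0.0, 1.0, 2.0, 0.0, 2.0)
```

Note that κM and κG come out directly as Trace(W)/2 and Det(W), without forming W explicitly.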
A Square Matrices: A Quick Reference

A.1 General 2 × 2 Matrices

We will be manipulating the 2 × 2 matrices IS and IIS, so it is important to review some fundamental properties. Matrix B in Equation A-1 is a general 2 × 2 matrix. The inverse, B⁻¹, can be computed in closed form as shown in Equation A-2, if the determinant of B, Det(B) = (ad − bc)/α², does not equal zero.

    B = 1/α [a b; c d]                                                          (A-1)
    B⁻¹ = α/(ad − bc) [d  −b; −c  a]                                            (A-2)

A.2 Eigenvalues

For a general n × n square matrix A, the eigenvalues of A are n special scalar values λi (i: 1, n) such that for special corresponding eigenvectors vi (i: 1, n), multiplication of A with vi is the same as scaling vi by λi, Equation A-3. By substituting vi with itself multiplied by the identity matrix I in Equation A-4, we can then combine terms in Equation A-5.

    λi vi = A vi                                                                (A-3)
    λi (I vi) = A vi                                                            (A-4)
    0 = (A − λi I) vi                                                           (A-5)

vi is in the nullspace of the matrix (A − λi I), but the vector vi = 0 always satisfies Equation A-5. We are interested in nonzero eigenvectors vi only, so (A − λi I) must be singular so that its nullspace will be nonempty. (A − λi I) is singular if its determinant is zero. Equation A-6 is the characteristic equation of A, and the n roots of this equation define the eigenvalues λi (i: 1, n).

    0 = Det(A − λi I)                                                           (A-6)

For the case of the general 2 × 2 matrix B, the characteristic equation, Equation A-7, can be expanded analytically to yield an equation that is quadratic in λi, Equation A-11.

    0 = Det(B − λi I)                                                           (A-7)
    0 = Det(1/α [a b; c d] − [λi 0; 0 λi])                                      (A-8)
    0 = Det(1/α ([a b; c d] − [αλi 0; 0 αλi]))                                  (A-9)
    0 = Det(1/α [a − αλi  b; c  d − αλi])                                       (A-10)
    0 = 1/α² (α²λi² − α(a + d) λi + (ad − bc))                                  (A-11)

Equation A-11 can be solved analytically using the quadratic equation to yield the 2 eigenvalues λ1 and λ2 in Equation A-12.

    λ1,2 = (a + d ∓ √((a − d)² + 4bc)) / (2α)                                   (A-12)

    (λ1 + λ2)/2 = (a + d)/(2α) = Trace(B)/2                                     (A-31)
    λ1 λ2 = (ad − bc)/α² = Det(B)                                               (A-32)

A.3 Eigenvectors

For a general n × n square matrix A, once the eigenvalues λi (i: 1, n) have been computed, the corresponding eigenvectors vi (i: 1, n) are computed by finding the nullspace of the matrix (A − λi I). The nullspace is computed for each λi by substituting its value into the matrix (A − λi I) and then performing Gaussian elimination. The matrix will be singular, so at least one row will become all zeroes. It is then possible to solve for the components of vi as parametric equations of a subset of free components.

The parametric nature of the vi's shows that the eigenvectors are not unique. In Equation A-3, vi can be replaced with itself multiplied by an arbitrary scalar. This arbitrary scaling of the eigenvectors is the only degree of freedom if the eigenvalues are unique, i.e. their algebraic multiplicities are all one. In this case, the nullspaces are all 1-dimensional subspaces, lines through the origin.

If there are multiple roots to the characteristic equation, then the algebraic multiplicity m of some of the eigenvalues will be greater than one. The m eigenvectors corresponding to such an eigenvalue will be a set of linearly independent vectors that span the m-dimensional subspace. The choice of these vectors has more degrees of freedom than the scaling in the 1-dimensional case. If m = 2, then the subspace is a plane through the origin, and there is a rotational degree of freedom for choosing the eigenvectors as well as the scaling degree of freedom.

In general, there are some matrices with eigenvalues of algebraic multiplicity m > 1 that do not have a complete set of m associated eigenvectors, i.e. the geometric multiplicity is less than the algebraic multiplicity. These matrices are defective. For our purposes for the case of 2 × 2 matrices, we will assume that the matrices are non-defective.

For the case of a non-defective 2 × 2 matrix B, we will try to solve for the eigenvectors analytically by substituting the eigenvalues from Equation A-12 into the matrix in Equation A-10 and then performing Gaussian elimination.

    0 = 1/α [a − αλi  b; c  d − αλi] vi                                         (A-13)
    0 = [a − αλi  b; c  d − αλi] [vi,x; vi,y]                                   (A-14)
    0 = [a − (a + d ∓ √((a−d)² + 4bc))/2   b;
         c   d − (a + d ∓ √((a−d)² + 4bc))/2] [vi,x; vi,y]                      (A-15)
    0 = [((a − d) ± √((a−d)² + 4bc))/2   b;
         c   (−(a − d) ± √((a−d)² + 4bc))/2] [vi,x; vi,y]                       (A-16)
    0 = [1   (−(a − d) ± √((a−d)² + 4bc))/(2c);  0   0] [vi,x; vi,y]            (A-17)

Equation A-17 shows that the matrix (B − λi I) is singular. We will let vi,y = 1; then we can solve for vi,x, Equation A-18.

    v1,2 = [((a − d) ∓ √((a−d)² + 4bc))/(2c);  1]                               (A-18)

In Equation A-18, if c = 0, then the eigenvector is not well defined. We must handle the case where c = 0 or b = 0 separately. In this case the eigenvalues simplify to λ1 = a/α and λ2 = d/α, and Gaussian elimination will produce a matrix with a single non-zero element. Hence, v1 = [1; 0] and v2 = [0; 1], respectively.

A.4 Eigen Decomposition

For a non-defective n × n square matrix A, the geometric multiplicity equals the algebraic multiplicity for all of the eigenvalues λi (i: 1, n), so there is a corresponding set of n linearly independent eigenvectors vi (i: 1, n). It is then possible to diagonalize A by transforming A by the eigenvectors and their inverse. Remembering Equation A-19 that defines each eigenvector, we can write all n equations as columns next to each other in Equation A-20.

    A vi = vi λi                                                                (A-19)
    A [v1 v2 ... vn] = [v1 λ1  v2 λ2  ...  vn λn]                               (A-20)

We will define V as the matrix of column eigenvectors. We can rewrite the right side of Equation A-20 as the eigenvector matrix V multiplied by a diagonal matrix Λ that has the n eigenvalues along its diagonal, Equation A-21. After right multiplying by V⁻¹, we derive the eigen decomposition of A, Equation A-24.

    A V = [v1 v2 ... vn] [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn]              (A-21)
    A V = V Λ                                                                   (A-22)
    A V V⁻¹ = V Λ V⁻¹                                                           (A-23)
    A = V Λ V⁻¹                                                                 (A-24)

The eigen decomposition of A, Equation A-24, makes it very efficient to raise A to an integer exponent e. Consider first squaring A, Equation A-25.

    A² = A A = V Λ V⁻¹ V Λ V⁻¹ = V Λ Λ V⁻¹ = V Λ² V⁻¹                           (A-25)

Equation A-26 for A raised to an integer exponent e is then derived by induction. This is very efficient to compute because raising Λ to the exponent e can be computed by simply raising the n diagonal elements to e.

    Aᵉ = V Λᵉ V⁻¹                                                               (A-26)

For the 2 × 2 matrix B, the eigen decomposition is the following:

    B = V₂ Λ₂ V₂⁻¹                                                              (A-27)
    V₂ = [(a − d − √((a−d)² + 4bc))/(2c)   (a − d + √((a−d)² + 4bc))/(2c);
          1   1]                                                                (A-28)
    Λ₂ = [(a + d − √((a−d)² + 4bc))/(2α)   0;
          0   (a + d + √((a−d)² + 4bc))/(2α)]                                   (A-29)
    V₂⁻¹ = −c/√((a−d)² + 4bc) [1   −(a − d + √((a−d)² + 4bc))/(2c);
                               −1   (a − d − √((a−d)² + 4bc))/(2c)]             (A-30)

We remember that the determinant of the multiplication of two matrices is the product of their determinants.

    Det(A B) = Det(A) Det(B)                                                    (A-33)

We can then prove that the determinant of a matrix is equal to the product of its eigenvalues.

    A = V Λ V⁻¹                                                                 (A-34)
    Det(A) = Det(V Λ V⁻¹)                                                       (A-35)
           = Det(V) Det(Λ) Det(V⁻¹)                                             (A-36)
           = Det(V⁻¹ V Λ)                                                       (A-37)
           = Det(Λ)                                                             (A-38)
           = ∏ λi                                                               (A-39)
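The closed-form eigenvalues and eigenvectors of Equations A-12 and A-18 can be exercised directly. The following sketch (an assumed illustration with α = 1 and c ≠ 0, not part of the original notes) computes both eigenpairs of a 2 × 2 matrix and verifies the defining relation B vi = λi vi of Equation A-3.

```python
import math

def eig2x2(a, b, c, d, alpha=1.0):
    # Closed-form eigenvalues (Equation A-12) and eigenvectors
    # (Equation A-18) of B = (1/alpha) [[a, b], [c, d]], assuming c != 0.
    disc = math.sqrt((a - d) ** 2 + 4.0 * b * c)
    lam1 = (a + d - disc) / (2.0 * alpha)
    lam2 = (a + d + disc) / (2.0 * alpha)
    v1 = ((a - d - disc) / (2.0 * c), 1.0)   # column 1 of V2, Equation A-28
    v2 = ((a - d + disc) / (2.0 * c), 1.0)   # column 2 of V2, Equation A-28
    return (lam1, lam2), (v1, v2)

def matvec(M, x):
    return (M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1])

# Assumed example matrix with distinct real eigenvalues (1 and 3).
B = [[2.0, 1.0],
     [1.0, 2.0]]
(lam1, lam2), (v1, v2) = eig2x2(2.0, 1.0, 1.0, 2.0)

# Check B v = lambda v for each eigenpair, Equation A-3.
Bv1 = matvec(B, v1)
Bv2 = matvec(B, v2)
```

Note the c = 0 (or b = 0) special case discussed after Equation A-18 is deliberately left out of this sketch; there the eigenvectors degenerate to the coordinate axes.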