Hodge Theory


Victor Guillemin
Notes, Spring 1997

Contents

1 Definition and properties of $\star$.
2 Exterior and interior multiplication.
3 The case of $B$ symmetric positive definite.
4 The case of $B$ symplectic.
5 Graded $\mathfrak{sl}(2)$.
6 Hermitian vector spaces.
7 Symplectic Hodge theory.
8 Excursus on $\mathfrak{sl}(2)$ modules.
9 The strong Lefschetz property.
10 Riemannian Hodge theory.
11 Kaehler Hodge theory.

1 Definition and properties of $\star$.

Let $V$ be an $n$-dimensional vector space over $\mathbb{R}$ and $B$ a bilinear form on $V$. Then $B$ induces a bilinear form on $\wedge^p V$, also denoted by $B$, determined by its values on decomposable elements:
$$
B(\mu,\nu) := \det\bigl(B(u_i, v_j)\bigr), \qquad \mu = u_1 \wedge \cdots \wedge u_p, \quad \nu = v_1 \wedge \cdots \wedge v_p.
$$
Suppose we have also fixed an element $\Omega \in \wedge^n V$, which identifies $\wedge^n V$ with $\mathbb{R}$. Exterior multiplication then identifies $\wedge^{n-p} V$ with $(\wedge^p V)^*$, and $B$ gives a map $\wedge^p V \to (\wedge^p V)^*$. We thus get a composite map
$$
\star : \wedge^p V \to \wedge^{n-p} V
$$
characterized by
$$
\alpha \wedge \star\beta = B(\alpha, \beta)\,\Omega. \tag{1}
$$

Properties of $\star$.

• Dependence on $\Omega$. If $\Omega_1 = \lambda\Omega$ then $\star_1 = \lambda\star$, as follows immediately from the definition.

• Dependence on $B$. Suppose that $B_1(v, w) = B(v, Jw)$, $J \in \operatorname{End} V$. Extend $J$ to $\wedge V$ by $J(v_1 \wedge \cdots \wedge v_p) := Jv_1 \wedge \cdots \wedge Jv_p$. Then the extended bilinear forms are also related by $B_1(\mu, \nu) = B(\mu, J\nu)$, and hence $\star_1 = \star \circ J$.

• Behavior under direct sums. Suppose
$$
V = V_1 \oplus V_2, \qquad B = B_1 \oplus B_2, \qquad \Omega = \Omega_1 \wedge \Omega_2
$$
under the identification $\wedge V = \wedge V_1 \otimes \wedge V_2$. Then for $\alpha_1, \beta_1 \in \wedge^r V_1$ and $\alpha_2, \beta_2 \in \wedge^s V_2$ we have
$$
B(\alpha_1 \wedge \alpha_2,\ \beta_1 \wedge \beta_2) = B(\alpha_1, \beta_1)\, B(\alpha_2, \beta_2)
$$
and
$$
(\alpha_1 \wedge \alpha_2) \wedge \star(\beta_1 \wedge \beta_2) = B(\alpha_1 \wedge \alpha_2,\ \beta_1 \wedge \beta_2)\,\Omega = B(\alpha_1, \beta_1)\Omega_1 \wedge B(\alpha_2, \beta_2)\Omega_2,
$$
while
$$
\alpha_1 \wedge \star_1\beta_1 = B(\alpha_1, \beta_1)\Omega_1, \qquad \alpha_2 \wedge \star_2\beta_2 = B(\alpha_2, \beta_2)\Omega_2.
$$
Hence
$$
\star(\omega_1 \wedge \omega_2) = (-1)^{(n_1 - r)s}\, \star_1\omega_1 \wedge \star_2\omega_2 \qquad \text{for } \omega_1 \in \wedge^r V_1,\ \omega_2 \in \wedge^s V_2.
$$
Since $\star_1\omega_1 \wedge \star_2\omega_2 = (-1)^{(n_1 - r)(n_2 - s)}\, \star_2\omega_2 \wedge \star_1\omega_1$, we can rewrite the preceding equation as
$$
\star(\omega_1 \wedge \omega_2) = (-1)^{(n_1 - r)n_2}\, \star_2\omega_2 \wedge \star_1\omega_1.
$$
In particular, if $n_2$ is even we get the simpler looking formula
$$
\star(\omega_1 \wedge \omega_2) = \star_2\omega_2 \wedge \star_1\omega_1.
$$
So, by induction, if $V = V_1 \oplus \cdots \oplus V_m$ is a direct sum of even dimensional subspaces and $\Omega = \Omega_1 \wedge \cdots \wedge \Omega_m$, then
$$
\star(\omega_1 \wedge \cdots \wedge \omega_m) = \star_m\omega_m \wedge \cdots \wedge \star_1\omega_1, \qquad \omega_i \in \wedge V_i. \tag{2}
$$
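As a quick illustration of (1) — an added example, not part of the original notes — take $V = \mathbb{R}^2$ with $B$ the standard inner product, $e_1, e_2$ the standard basis, and $\Omega = e_1 \wedge e_2$. Unwinding $\alpha \wedge \star\beta = B(\alpha,\beta)\,\Omega$ degree by degree gives
$$
\star 1 = e_1 \wedge e_2, \qquad \star e_1 = e_2, \qquad \star e_2 = -e_1, \qquad \star(e_1 \wedge e_2) = 1;
$$
for instance $e_2 \wedge \star e_2 = e_2 \wedge (-e_1) = e_1 \wedge e_2 = B(e_2, e_2)\,\Omega$. In particular $\star^2 = -\mathrm{id}$ on $\wedge^1 \mathbb{R}^2$, a sign that reappears in (11) below.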
2 Exterior and interior multiplication.

Suppose that $B$ is non-degenerate. For $u \in V$ we let $e_u : \wedge V \to \wedge V$ denote exterior multiplication by $u$. For $\gamma \in V^*$ we let $i_\gamma : \wedge V \to \wedge V$ denote interior multiplication by $\gamma$. We can also consider the transposes of these operators with respect to $B$:
$$
e_v^\dagger : \wedge^p V \to \wedge^{p-1} V,
$$
defined by
$$
B(e_v\alpha, \beta) = B(\alpha, e_v^\dagger\beta), \qquad \alpha \in \wedge^{p-1} V,\ \beta \in \wedge^p V,
$$
and
$$
i_\gamma^\dagger : \wedge^p V \to \wedge^{p+1} V,
$$
defined by
$$
B(i_\gamma\alpha, \beta) = B(\alpha, i_\gamma^\dagger\beta), \qquad \alpha \in \wedge^{p+1} V,\ \beta \in \wedge^p V.
$$
We claim that
$$
e_v^\dagger = (-1)^{p-1}\, \star^{-1} e_v\, \star \tag{3}
$$
and
$$
i_\gamma^\dagger = (-1)^p\, \star^{-1} i_\gamma\, \star \tag{4}
$$
on $\wedge^p V$.

Proof of (3). For $\alpha \in \wedge^{p-1} V$, $\beta \in \wedge^p V$ we have
$$
B(e_v\alpha, \beta)\,\Omega = e_v\alpha \wedge \star\beta = (-1)^{p-1}\,\alpha \wedge v \wedge \star\beta = (-1)^{p-1}\,\alpha \wedge \star\,\star^{-1} e_v \star \beta = (-1)^{p-1}\, B(\alpha,\ \star^{-1} e_v \star \beta)\,\Omega.
$$

Proof of (4). Let $\alpha \in \wedge^{p+1} V$ and $\beta \in \wedge^p V$, so that $\alpha \wedge \star\beta = 0$ (its degree exceeds $n$). We have
$$
0 = i_\gamma(\alpha \wedge \star\beta) = (i_\gamma\alpha) \wedge \star\beta + (-1)^{p-1}\,\alpha \wedge i_\gamma \star \beta = (i_\gamma\alpha) \wedge \star\beta + (-1)^{p-1}\,\alpha \wedge \star(\star^{-1} i_\gamma \star)\beta,
$$
so
$$
B(i_\gamma\alpha, \beta)\,\Omega = (-1)^p\, B(\alpha,\ \star^{-1} i_\gamma \star \beta)\,\Omega.
$$

There are alternative formulas for $e_v^\dagger$ and $i_\gamma^\dagger$ which are useful, and which involve the dualities between $V$ and $V^*$ induced by $B$. We let $\langle\ ,\ \rangle$ denote the pairing of $V$ and $V^*$, so $\langle v, \ell \rangle$ denotes the value of the linear function $\ell \in V^*$ on $v \in V$. Define the maps $L = L_B$ and $L^{\mathrm{op}} = L_B^{\mathrm{op}} : V \to V^*$ by
$$
\langle v, Lw \rangle = B(v, w), \qquad \langle v, L^{\mathrm{op}}w \rangle = B(w, v), \qquad v, w \in V. \tag{5}
$$
We claim that
$$
e_v^\dagger = i_{L^{\mathrm{op}}v} \tag{6}
$$
$$
i_{Lv}^\dagger = e_v. \tag{7}
$$

Proof. We may suppose that $v \neq 0$ and extend it to a basis $v_1, \dots, v_n$ of $V$ with $v_1 = v$. Let $w_1, \dots, w_n$ be the basis of $V$ determined by $B(v_i, w_j) = \delta_{ij}$. Let $\gamma^1, \dots, \gamma^n$ be the basis of $V^*$ dual to $w_1, \dots, w_n$, and set $\gamma := \gamma^1$. Then
$$
\langle w_i, L^{\mathrm{op}}v \rangle = B(v, w_i) = \delta_{1i} = \langle w_i, \gamma \rangle,
$$
so $\gamma = L^{\mathrm{op}}v$. If $J = (j_1, \dots, j_p)$ and $K = (k_1, \dots, k_{p+1})$ are (increasing) multi-indices, then
$$
B(e_v v^J, w^K) = 0
$$
unless $k_1 = 1$ and $k_{r+1} = j_r$, $r = 1, \dots, p$, in which case
$$
B(e_v v^J, w^K) = 1.
$$
The same is true for
$$
B(v^J, i_\gamma w^K).
$$
Hence $e_v^\dagger = i_\gamma$, which is the content of (6).

Similarly, let $w = w_1$ and $\beta = L(w)$, so that
$$
i_\beta v_j = B(v_j, w_1) = \delta_{1j}.
$$
Then
$$
B(i_\beta(v^K), w^J) = 0
$$
unless $k_1 = 1$ and $k_{r+1} = j_r$, $r = 1, \dots, p$, in which case
$$
B(i_\beta(v^K), w^J) = 1,
$$
and the same holds for $B(v^K, w \wedge w^J)$. This proves (7).

Combining (3) and (6) gives
$$
\star^{-1} e_v \star = (-1)^{p-1}\, i_{L^{\mathrm{op}}v}, \tag{8}
$$
while combining (4) and (7) gives
$$
\star^{-1} i_{Lv} \star = (-1)^p\, e_v. \tag{9}
$$

On any vector space, independent of any choice of bilinear form, we always have the identity
$$
i_\gamma e_w + e_w i_\gamma = \langle w, \gamma \rangle\, I, \qquad w \in V,\ \gamma \in V^*.
$$
If $\gamma = L^{\mathrm{op}}v$, then $\langle w, \gamma \rangle = B(v, w)$, so (6) implies
$$
e_v^\dagger e_w + e_w e_v^\dagger = B(v, w)\, I. \tag{10}
$$

3 The case of $B$ symmetric positive definite.

In this case it is usual to choose $\Omega$ such that $\|\Omega\| = 1$. The only choice left is then of an orientation. Suppose we have fixed an orientation, and so a choice of $\Omega$. To compute $\star$ it is enough to compute it on decomposable elements. So let $U$ be a $p$-dimensional subspace of $V$ and $u_1, \dots, u_p$ an orthonormal basis of $U$. Let $W$ be the orthogonal complement of $U$ and let $w_1, \dots, w_q$ be an orthonormal basis of $W$, where $q := n - p$. Then
$$
u_1 \wedge \cdots \wedge u_p \wedge w_1 \wedge \cdots \wedge w_q = \epsilon\,\Omega, \qquad \epsilon = \pm 1.
$$
We claim that
$$
\star(u_1 \wedge \cdots \wedge u_p) = \epsilon\, w_1 \wedge \cdots \wedge w_q.
$$
We need only check that
$$
B(\alpha,\ u_1 \wedge \cdots \wedge u_p)\,\Omega = \alpha \wedge \epsilon\, w_1 \wedge \cdots \wedge w_q
$$
for those $\alpha \in \wedge^p V$ which are wedge products of the $u_i$ and $w_j$, since $u_1, \dots, u_p, w_1, \dots, w_q$ form a basis of $V$. Now if any $w$'s are involved in this product decomposition, both sides vanish. And if $\alpha = u_1 \wedge \cdots \wedge u_p$, then this is the definition of the $\epsilon$ occurring in the formula.

Suppose we have chosen both bases so that $\epsilon = +1$. Then
$$
\star(u_1 \wedge \cdots \wedge u_p) = w_1 \wedge \cdots \wedge w_q
$$
while
$$
\star(w_1 \wedge \cdots \wedge w_q) = \epsilon'\, u_1 \wedge \cdots \wedge u_p,
$$
where $\epsilon'$ is the sign of the permutation involved in moving all the $w$'s past the $u$'s. This sign is $(-1)^{p(n-p)}$. We conclude
$$
\star^2 = (-1)^{p(n-p)} \ \text{ on } \wedge^p V. \tag{11}
$$
In particular,
$$
\star^2 = (-1)^p \ \text{ on } \wedge^p V \ \text{ if } n \text{ is even.} \tag{12}
$$

4 The case of $B$ symplectic.

Suppose $n = 2m$ and $e_1, \dots, e_m, f_1, \dots, f_m$ is a basis of $V$ with
$$
B(e_i, f_j) = \delta_{ij}, \qquad B(e_i, e_j) = B(f_i, f_j) = 0.
$$
We take
$$
\Omega := e_1 \wedge f_1 \wedge e_2 \wedge f_2 \wedge \cdots \wedge e_m \wedge f_m,
$$
which is clearly independent of the choice of basis with the above properties. If we let $V_i$ denote the two dimensional space spanned by $e_i, f_i$, with $B_i$ the restriction of $B$ to $V_i$ and $\Omega_i := e_i \wedge f_i$, then we are in the direct sum situation and so can apply (2). So to compute $\star$ in the symplectic situation it is enough to compute it for a two dimensional vector space with basis $e, f$ satisfying
$$
B(e, f) = 1, \qquad e \wedge f = \Omega.
$$
Now
$$
B(e, e) = 0 = e \wedge e, \qquad B(f, e)\,\Omega = -\Omega = f \wedge e,
$$
so $\star e = e$. Similarly $\star f = f$. On any vector space the "induced bilinear form" on $\wedge^0$ is given by $B(1, 1) = 1$, so $\star 1 = \Omega$. On the other hand,
$$
B(e \wedge f, e \wedge f) = \det\begin{pmatrix} B(e,e) & B(e,f) \\ B(f,e) & B(f,f) \end{pmatrix} = \det\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = 1,
$$
so $\star(e \wedge f) = 1$. This computes $\star$ in all cases for a two dimensional symplectic vector space. We conclude that
$$
\star^2 = \mathrm{id}, \tag{13}
$$
first for a two dimensional symplectic vector space and then, from (2), for all symplectic vector spaces.
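Before moving on to the graded $\mathfrak{sl}(2)$ picture, here is a minimal computational sanity check — our addition, not part of the original notes. It realizes $\wedge\mathbb{R}^n$ on the basis $\{e_S\}$ indexed by increasing multi-indices $S$, builds the operators $e_u$ and $i_\gamma$ of Section 2 as matrices, and verifies the anticommutator identity underlying (10) on basis vectors and dual covectors; the general case follows by bilinearity. The helper names `ext` and `intr` are ours.

```python
# A minimal sketch (our addition): realize wedge(R^n) on the basis {e_S},
# S an increasing multi-index, and verify
#     i_gamma e_w + e_w i_gamma = <w, gamma> I
# on basis vectors and dual covectors.
from itertools import combinations
import numpy as np

n = 4
subsets = [s for p in range(n + 1) for s in combinations(range(n), p)]
index = {s: a for a, s in enumerate(subsets)}      # basis of wedge(R^n)
dim = len(subsets)                                 # 2**n

def ext(k):
    """Matrix of e_k, exterior multiplication by the basis vector e_k."""
    mat = np.zeros((dim, dim))
    for s in subsets:
        if k not in s:
            t = tuple(sorted(s + (k,)))
            # sign from moving e_k past the elements of S smaller than k
            mat[index[t], index[s]] = (-1) ** sum(1 for j in s if j < k)
    return mat

def intr(k):
    """Matrix of i_k, interior multiplication by the dual covector gamma^k."""
    mat = np.zeros((dim, dim))
    for s in subsets:
        if k in s:
            t = tuple(j for j in s if j != k)
            # antiderivation sign: (-1)^(position of k in S)
            mat[index[t], index[s]] = (-1) ** s.index(k)
    return mat

e_ops = [ext(k) for k in range(n)]
i_ops = [intr(k) for k in range(n)]
for k in range(n):
    for l in range(n):
        anti = i_ops[l] @ e_ops[k] + e_ops[k] @ i_ops[l]
        assert np.allclose(anti, (1.0 if k == l else 0.0) * np.eye(dim))
print("i_gamma e_w + e_w i_gamma = <w, gamma> I holds on wedge(R^4)")
```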
5 Graded $\mathfrak{sl}(2)$.

We consider the three dimensional graded Lie algebra
$$
\mathfrak{g} = \mathfrak{g}_{-2} \oplus \mathfrak{g}_0 \oplus \mathfrak{g}_2,
$$
where each summand is one dimensional with basis $F$, $H$, $E$ respectively, and bracket relations
$$
[H, E] = 2E, \qquad [H, F] = -2F, \qquad [E, F] = H.
$$
For example, $\mathfrak{g} = \mathfrak{sl}(2)$ with
$$
E = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad F = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
$$

Let $V$ be a symplectic vector space with symplectic form $B$ and symplectic basis $u_1, \dots, u_m, v_1, \dots, v_m$, so
$$
B(u_i, u_j) = 0 = B(v_i, v_j), \qquad B(u_i, v_j) = \delta_{ij}.
$$
Let
$$
\omega := u_1 \wedge v_1 + \cdots + u_m \wedge v_m.
$$
This element is independent of the choice of symplectic basis. (It is the image in $\wedge^2 V$ of $B$ under the identification of $\wedge^2 V^*$ with $\wedge^2 V$ induced by $B$.)

Let $E(\omega) : \wedge V \to \wedge V$ denote the operation of exterior multiplication by $\omega$, so
$$
E(\omega) = \sum e_{u_i} e_{v_i}.
$$
Let
$$
F(\omega) := E(\omega)^\dagger,
$$
so
$$
F(\omega) = \sum e_{v_i}^\dagger e_{u_i}^\dagger.
$$
For $\alpha \in \wedge^p V$ we have, by (3),
$$
e_v^\dagger e_u^\dagger \alpha = (-1)^{p-1}(-1)^{p-2}\, \star^{-1} e_v e_u \star \alpha = -\star^{-1} e_v e_u \star \alpha = \star^{-1} e_u e_v \star \alpha,
$$
and since $\star^{-1} = \star$ by (13), this gives
$$
F(\omega) = \star\, E(\omega)\, \star. \tag{14}
$$
Alternatively, if $\mu^1, \dots, \mu^m, \nu^1, \dots, \nu^m$ is the basis of $V^*$ dual to $u_1, \dots, u_m, v_1, \dots, v_m$, then
$$
F(\omega) = \sum i_{\nu^j} i_{\mu^j}. \tag{15}
$$

We now prove the Kaehler-Weil identity
$$
[E(\omega), F(\omega)]\,\alpha = (p - m)\,\alpha, \qquad \alpha \in \wedge^p V. \tag{16}
$$
Write
$$
E(\omega) = E_1 + \cdots + E_m, \qquad E_j := e_{u_j} e_{v_j},
$$
and
$$
F(\omega) = F_1 + \cdots + F_m, \qquad F_j := i_{\nu^j} i_{\mu^j}.
$$
Let $V_j$ be the two dimensional space spanned by $u_j, v_j$ and write
$$
\alpha = \alpha_1 \wedge \cdots \wedge \alpha_m, \qquad \alpha_j \in \wedge V_j.
$$
Then $E_i$ really only affects the $i$-th factor, since we are multiplying by an even element:
$$
E_i(\alpha) = \alpha_1 \wedge \cdots \wedge E_i\alpha_i \wedge \cdots \wedge \alpha_m,
$$
and $F_i$ likewise acts only on the $i$-th factor:
$$
F_i(\alpha) = \alpha_1 \wedge \cdots \wedge F_i\alpha_i \wedge \cdots \wedge \alpha_m.
$$
So if $i < j$,
$$
E_i F_j(\alpha) = F_j E_i(\alpha) = \alpha_1 \wedge \cdots \wedge E_i\alpha_i \wedge \cdots \wedge F_j\alpha_j \wedge \cdots \wedge \alpha_m.
$$
In other words,
$$
[E_i, F_j] = 0, \qquad i \neq j.
$$
So
$$
[E(\omega), F(\omega)]\,\alpha = \sum \alpha_1 \wedge \cdots \wedge [E_i, F_i]\alpha_i \wedge \cdots \wedge \alpha_m.
$$
Since the degrees of the $\alpha_i$ add up to $p$, it is sufficient to prove (16) for the case of a two dimensional symplectic vector space with symplectic basis $u, v$: if $[E_i, F_i]\alpha_i = (p_i - 1)\alpha_i$ with $p_i = \deg \alpha_i$, then summing over $i$ gives $\sum_i (p_i - 1) = p - m$.
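As with the check after Section 4, one can confirm (16) by machine. The sketch below — our addition, with the same hypothetical helpers `ext` and `intr` — represents $\wedge V$ for $V = \mathbb{R}^{2m}$ on subsets of $\{0, \dots, 2m-1\}$, with $u_j$ at index $j$ and $v_j$ at index $m + j$, builds $E(\omega) = \sum_j e_{u_j} e_{v_j}$ and $F(\omega) = \sum_j i_{\nu^j} i_{\mu^j}$ as in (15), and verifies that $[E(\omega), F(\omega)]$ acts as $(p - m)\,\mathrm{id}$ on each $\wedge^p V$.

```python
# A hedged numerical check of the Kaehler-Weil identity (16) -- our
# addition, not part of the notes.  V = R^(2m) with symplectic basis
# u_1,...,u_m (indices 0..m-1) and v_1,...,v_m (indices m..2m-1).
from itertools import combinations
import numpy as np

m = 2
n = 2 * m
subsets = [s for p in range(n + 1) for s in combinations(range(n), p)]
index = {s: a for a, s in enumerate(subsets)}
dim = len(subsets)

def ext(k):
    """Matrix of exterior multiplication by basis vector number k."""
    mat = np.zeros((dim, dim))
    for s in subsets:
        if k not in s:
            t = tuple(sorted(s + (k,)))
            mat[index[t], index[s]] = (-1) ** sum(1 for j in s if j < k)
    return mat

def intr(k):
    """Matrix of interior multiplication by the k-th dual covector."""
    mat = np.zeros((dim, dim))
    for s in subsets:
        if k in s:
            t = tuple(j for j in s if j != k)
            mat[index[t], index[s]] = (-1) ** s.index(k)
    return mat

# E(omega) = sum_j e_{u_j} e_{v_j}: apply e_{v_j} first, then e_{u_j}.
E = sum(ext(j) @ ext(m + j) for j in range(m))
# F(omega) = sum_j i_{nu^j} i_{mu^j} as in (15): i_{mu^j} first, then i_{nu^j}.
F = sum(intr(m + j) @ intr(j) for j in range(m))
bracket = E @ F - F @ E

for s in subsets:                      # expect [E,F] e_S = (|S| - m) e_S
    expected = np.zeros(dim)
    expected[index[s]] = len(s) - m
    assert np.allclose(bracket[:, index[s]], expected)
print(f"[E(omega), F(omega)] = (p - m) I verified on wedge(R^{n})")
```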