Overview of Linear Algebra, Basic Topology, and Multivariate Calculus



John Nachbar
Washington University
March 25, 2018

A Quick Introduction to Linear Algebra, Topology, and Multivariate Calculus^1

^1 cbna. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.

1 Vector Spaces and Linear Algebra.

1.1 Overview.

Definition 1. A set V ⊆ R^N is a vector space iff the following two properties hold.

1. For any x, x̂ ∈ V, x + x̂ ∈ V.
2. For any x ∈ V, for any γ ∈ R, γx ∈ V.

Because γ can equal 0, we always have that the origin (0, …, 0) is in V. I will write the origin in R^N as simply "0," but to avoid confusion with the real number 0, some authors write the origin in R^N as, say, θ.

A vector space is a line, plane, or higher dimensional analog thereof, through the origin. Thus, for example, for x ∈ R, the graph of the line f(x) = ax is a vector space in R^2. On the other hand, the graph of f̂(x) = ax + b, with b ≠ 0, is not a vector space because the graph does not go through the origin in R^2. The graph of f̂(x) is instead an example of a linear manifold. A linear manifold is the result of taking a vector space and shifting it in a parallel fashion away from the origin.

R^N itself is a vector space. Thus it is also common to see a vector space V ⊆ R^N called a vector subspace of R^N.

1.2 Spanning, Linear Independence, and Bases

In the example above, the vector space V given by the graph of f(x) = ax can also be represented in the form V = {(x, y) ∈ R^2 : there is a γ ∈ R such that (x, y) = γ(1, a)}. The vector (1, a) ∈ V is said to span V. More generally, we have the following. As a matter of notation, s^t denotes the vector s^t = (s^t_1, …, s^t_N) ∈ R^N.

Definition 2. Given a vector space V ⊆ R^N and a set of T vectors S = {s^1, …, s^T}, all in V, S spans V iff for any x ∈ V there exist γ_1, …, γ_T, all in R, such that

x = γ_1 s^1 + ··· + γ_T s^T.

In the example, note that although {(1, a)} spans V, so does {(1, a), (2, 2a)}. In particular, it is always possible to take γ_2 = 0, which puts us back in the previous case. Including (2, 2a) in the spanning set when we already have (1, a) is redundant. We are interested in spanning by sets of vectors that are minimal in the sense of not having any such redundancies.

Given a set S = {s^1, …, s^T} of T vectors, there is a redundancy if it is possible to write one of the vectors, say s^1, as a linear combination of the other vectors,

s^1 = γ_2 s^2 + ··· + γ_T s^T.

This can be rewritten as

−s^1 + γ_2 s^2 + ··· + γ_T s^T = 0.

In the example above, I could write either (1, a) = (1/2)(2, 2a) or (2, 2a) = 2(1, a). This motivates the following definition.

Definition 3. A set S = {s^1, …, s^T} of T vectors in R^N is linearly dependent iff there exist T numbers γ_1, …, γ_T, at least one not equal to zero, such that

γ_1 s^1 + ··· + γ_T s^T = 0.

If S is not linearly dependent then it is linearly independent. Equivalently, S is linearly independent iff whenever γ_1 s^1 + ··· + γ_T s^T = 0, γ_1 = ··· = γ_T = 0.

In particular, if γ_1 s^1 + ··· + γ_T s^T = 0 and, say, γ_1 ≠ 0, then s^1 is redundant in the sense that s^1 = −(1/γ_1)(γ_2 s^2 + ··· + γ_T s^T).

If S contains the origin as one of its vectors then it is automatically linearly dependent. In particular, if S contains only the origin then it is linearly dependent. Specializing even further, in the case N = 1, the "matrix" [0] is linearly dependent; on the other hand, the "matrix" [1] is linearly independent.
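A quick numerical check of the redundancy discussed above may help. The following is a minimal sketch (assuming NumPy is available; the slope a = 3 is chosen arbitrarily) verifying that S = {(1, a), (2, 2a)} is linearly dependent in the sense of Definition 3, and that γ(1, a) traces out the graph of f(x) = ax.

```python
import numpy as np

a = 3.0                      # arbitrary slope for the example f(x) = ax
s1 = np.array([1.0, a])      # first spanning vector
s2 = np.array([2.0, 2 * a])  # second spanning vector; a multiple of s1

# Redundancy: s1 can be written in terms of the other vector.
print(np.allclose(s1, 0.5 * s2))                        # True: (1, a) = (1/2)(2, 2a)

# Linear dependence: a nontrivial combination equals the origin.
print(np.allclose(-1.0 * s1 + 0.5 * s2, np.zeros(2)))   # True

# Spanning: any point (x, ax) on the graph of f is gamma * (1, a), with gamma = x.
x = 1.7
print(np.allclose(np.array([x, a * x]), x * s1))         # True
```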
Even if S is linearly dependent, some subset of S may be linearly independent. And when there is one linearly independent subset of S, there is often more than one. For example, if S = {(1, a), (2, 2a)} then S is linearly dependent (take γ_1 = −2, γ_2 = 1), but both Ŝ = {(1, a)} and S̃ = {(2, 2a)} are linearly independent.

Definition 4. S is a basis for V iff S is linearly independent and spans V.

Except for the trivial case V = {0}, there will be many (uncountably infinitely many, in fact) bases. In the example above, {(1, a)} is a basis, but so is {(2, 2a)} and so is {(−1, −a)}. One can show that if there is a basis for V with T vectors then every basis for V has exactly T vectors. This allows us to define the dimension of V.

Definition 5. If V is a vector space then the dimension of V is T iff there is a basis for V with T vectors.

In particular, the dimension of R^N is N, because the N unit vectors e^1 = (1, 0, …, 0), e^2 = (0, 1, 0, …, 0), and so on are a basis for R^N. This basis is called the standard basis.

Finally, note that if V is a vector space and S is a basis for V then any x ∈ V is uniquely represented in the form

x = γ_1 s^1 + ··· + γ_T s^T.

To see this, suppose that we also have

x = γ̂_1 s^1 + ··· + γ̂_T s^T.

Then setting these equal and rearranging,

0 = (γ_1 − γ̂_1) s^1 + ··· + (γ_T − γ̂_T) s^T.

Since S is linearly independent, all of the γ_t − γ̂_t must equal 0, which implies that for all t, γ̂_t = γ_t.

1.3 Linear Functions and Matrices.

Definition 6. A function f : R^N → R^M is linear iff the following hold.

1. For any x, x̂ ∈ R^N, f(x + x̂) = f(x) + f(x̂).
2. For any x ∈ R^N and any γ ∈ R, f(γx) = γf(x).

When M > 1, linear functions are often called linear maps. Map and function mean the same thing here.

Setting γ = 0 implies that if f is linear then f(0) = 0. Thus, in the examples above, f(x) = ax is linear but f̂(x) = ax + b is not when b ≠ 0 (f̂ is affine).

A fundamental fact is that a function f is linear iff it can be represented in matrix form: there is an M × N matrix A (note the dimensions) such that for any x ∈ R^N,

f(x) = Ax.

To see that this is true, note that for any x ∈ R^N, x = x_1 e^1 + ··· + x_N e^N, where e^n is the unit vector with a 1 in coordinate n and 0s everywhere else. Then, since f is linear,

f(x) = x_1 f(e^1) + ··· + x_N f(e^N).

Let a^n = f(e^n). Let A be the M × N matrix in which column n is a^n. Then f(x) = Ax. In our simple example in which f(x) = ax, f(1) = a and so A = [a].
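To make the matrix construction concrete, here is a minimal sketch (NumPy assumed; the particular map f below is invented for illustration and is not from the notes). Column n of A is f(e^n), and then f(x) = Ax for every x.

```python
import numpy as np

def f(x):
    # An example linear map f : R^2 -> R^3 (chosen only for illustration).
    x1, x2 = x
    return np.array([2 * x1 + x2, x1 - 3 * x2, 5 * x2])

N = 2
E = np.eye(N)                                          # unit vectors e^1, ..., e^N as columns
A = np.column_stack([f(E[:, n]) for n in range(N)])    # column n of A is f(e^n)

x = np.array([1.5, -2.0])
print(np.allclose(f(x), A @ x))                        # True: f(x) = Ax
```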
1.4 The Fundamental Spaces of a Linear Function

Given an M × N matrix A, let a^n denote column n of A. Then for the linear function f(x) = Ax,

f(x) = x_1 a^1 + ··· + x_N a^N.

In words, this says that f(x) is in the vector space in R^M that is spanned by the columns of A. This space is called the column space.

Similarly, one can consider the vector space spanned in R^N by the rows of A, considered as vectors (equivalently, consider the columns of A', the transpose of A). This space is called the row space.

Finally, let K(A) be the set of points x such that

Ax = 0.

K(A) is the kernel of A. (It is also called the null space of A.) It is easy to verify that K(A) is a vector space in R^N.

1.5 The Fundamental Dimensionality Theorem.

One can prove the following theorem.

Theorem 1 (The Dimension Theorem). Let A be an M × N matrix. The dimension of the column space of A plus the dimension of K(A) equals N.

One consequence of the Dimension Theorem is that, with some additional work, one can show that, for any given matrix, the maximum number of linearly independent columns (i.e., the number of columns in the largest linearly independent subset of columns) equals the maximum number of linearly independent rows (i.e., the maximum number of linearly independent columns of A'). This number is called the rank of A. In particular, this says that the column space of A and the row space of A have the same dimension. And this says that the dimension of K(A) is N minus the rank of A.

The rank of A cannot exceed min{M, N} but could be strictly less. For example, if N = M = 2 and

A = [ 1   2
      a  2a ],

then the rank is 1.

A matrix A has full rank iff its rank is min{M, N}. Otherwise, A is singular. As discussed below, the linear function f(x) = Ax is one-to-one, and hence invertible, iff A has full rank.

1.6 Vector Subspaces Revisited.

Consider any M × N matrix A. Suppose that M < N and that A has full rank, which is M. Then, by the Dimension Theorem, the vector space K(A) has dimension N − M.
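The rank computation and the Dimension Theorem are easy to verify numerically for the 2 × 2 example in Section 1.5. The sketch below (NumPy assumed, with a = 3 again chosen arbitrarily) estimates the rank of A and the dimension of K(A) from the singular values.

```python
import numpy as np

a = 3.0
A = np.array([[1.0, 2.0],
              [a, 2 * a]])          # second row is a times the first, so rank 1

M, N = A.shape
rank = np.linalg.matrix_rank(A)     # 1
sigma = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(sigma < 1e-10 * sigma.max()))   # dimension of K(A)

print(rank, nullity)                # 1 1
print(rank + nullity == N)          # True: the Dimension Theorem
```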
Recommended publications
  • The Grassmann Manifold
The Grassmann Manifold

1. For vector spaces V and W denote by L(V, W) the vector space of linear maps from V to W. Thus L(R^k, R^n) may be identified with the space R^{k×n} of k × n matrices. An injective linear map u : R^k → V is called a k-frame in V. The set

GF_{k,n} = {u ∈ L(R^k, R^n) : rank(u) = k}

of k-frames in R^n is called the Stiefel manifold. Note that the special case k = n is the general linear group:

GL_k = {a ∈ L(R^k, R^k) : det(a) ≠ 0}.

The set of all k-dimensional (vector) subspaces λ ⊂ R^n is called the Grassmann manifold of k-planes in R^n and denoted by GR_{k,n} or sometimes GR_{k,n}(R) or GR_k(R^n). Let

π : GF_{k,n} → GR_{k,n},   π(u) = u(R^k)

denote the map which assigns to each k-frame u the subspace u(R^k) it spans. For λ ∈ GR_{k,n} the fiber (preimage) π^{-1}(λ) consists of those k-frames which form a basis for the subspace λ, i.e. for any u ∈ π^{-1}(λ) we have

π^{-1}(λ) = {u ∘ a : a ∈ GL_k}.

Hence we can (and will) view GR_{k,n} as the orbit space of the group action

GF_{k,n} × GL_k → GF_{k,n} : (u, a) ↦ u ∘ a.

The exercises below will prove the following

Theorem 2. The Stiefel manifold GF_{k,n} is an open subset of the set R^{n×k} of all n × k matrices. There is a unique differentiable structure on the Grassmann manifold GR_{k,n} such that the map π is a submersion.
  • Relativistic Dynamics
Chapter 4: Relativistic dynamics

We have seen in the previous lectures that our relativity postulates suggest that the most efficient (lazy but smart) approach to relativistic physics is in terms of 4-vectors, and that velocities never exceed c in magnitude. In this chapter we will see how this 4-vector approach works for dynamics, i.e., for the interplay between motion and forces. A particle subject to forces will undergo non-inertial motion. According to Newton, there is a simple (3-vector) relation between force and acceleration,

f = m a,   (4.0.1)

where acceleration is the second time derivative of position,

a = dv/dt = d^2 x/dt^2.   (4.0.2)

There is just one problem with these relations: they are wrong! Newtonian dynamics is a good approximation when velocities are very small compared to c, but outside of this regime the relation (4.0.1) is simply incorrect. In particular, these relations are inconsistent with our relativity postulates. To see this, it is sufficient to note that Newton's equations (4.0.1) and (4.0.2) predict that a particle subject to a constant force (and initially at rest) will acquire a velocity which can become arbitrarily large,

v(t) = ∫_0^t (dv/dt') dt' = (f/m) t → ∞ as t → ∞.   (4.0.3)

This flatly contradicts the prediction of special relativity (and causality) that no signal can propagate faster than c. Our task is to understand how to formulate the dynamics of non-inertial particles in a manner which is consistent with our relativity postulates (and then verify that it matches observation, including in the non-relativistic regime).
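As a numerical aside (not part of the excerpt): the sketch below contrasts the Newtonian prediction (4.0.3) with the standard special-relativistic velocity of a particle under constant force, v(t) = (ft/m) / sqrt(1 + (ft/(mc))^2). The mass and force values are made up, and the relativistic formula is supplied here only to illustrate the point the chapter goes on to develop.

```python
import numpy as np

c = 3.0e8          # speed of light, m/s
m = 1.0            # particle mass, kg (illustrative value)
f = 1.0e8          # constant force, N (illustrative value)
t = np.array([1.0, 5.0, 10.0, 100.0])   # times in seconds

v_newton = f * t / m                                           # eq. (4.0.3): grows without bound
v_rel = (f * t / m) / np.sqrt(1.0 + (f * t / (m * c)) ** 2)    # stays below c

print(v_newton / c)   # exceeds 1 (i.e., exceeds c) once t > m c / f
print(v_rel / c)      # always less than 1
```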
  • On Manifolds of Tensors of Fixed TT-Rank
ON MANIFOLDS OF TENSORS OF FIXED TT-RANK

SEBASTIAN HOLTZ, THORSTEN ROHWEDDER, AND REINHOLD SCHNEIDER

Abstract. Recently, the format of TT tensors [19, 38, 34, 39] has turned out to be a promising new format for the approximation of solutions of high dimensional problems. In this paper, we prove some new results for the TT representation of a tensor U ∈ R^{n_1×···×n_d} and for the manifold of tensors of TT-rank r. As a first result, we prove that the TT (or compression) ranks r_i of a tensor U are unique and equal to the respective separation ranks of U if the components of the TT decomposition are required to fulfil a certain maximal rank condition. We then show that the set T of TT tensors of fixed rank r forms an embedded manifold in R^{n^d}, therefore preserving the essential theoretical properties of the Tucker format, but often showing an improved scaling behaviour. Extending a similar approach for matrices [7], we introduce certain gauge conditions to obtain a unique representation of the tangent space T_U T of T and deduce a local parametrization of the TT manifold. The parametrisation of T_U T is often crucial for an algorithmic treatment of high-dimensional time-dependent PDEs and minimisation problems [33]. We conclude with remarks on those applications and present some numerical examples.

1. Introduction

The treatment of high-dimensional problems, typically of problems involving quantities from R^d for larger dimensions d, is still a challenging task for numerical approximation. This is owed to the principal problem that classical approaches for their treatment normally scale exponentially in the dimension d in both needed storage and computational time and thus quickly become computationally infeasible for sensible discretizations of problems of interest.
  • More on Vectors Math 122 Calculus III D Joyce, Fall 2012
More on Vectors
Math 122 Calculus III
D Joyce, Fall 2012

Unit vectors. A unit vector is a vector whose length is 1. If a unit vector u in the plane R^2 is placed in standard position with its tail at the origin, then its head will land on the unit circle x^2 + y^2 = 1. Every point (x, y) on the unit circle is of the form (cos θ, sin θ), where θ is the angle measured from the positive x-axis in the counterclockwise direction.

[Figure: the unit vector u = (x, y) = (cos θ, sin θ) drawn from the origin to the unit circle, at angle θ.]

Thus, every unit vector in the plane is of the form u = (cos θ, sin θ). We can interpret unit vectors as being directions, and we can use them in place of angles since they carry the same information as an angle. In three dimensions, we also use unit vectors and they will still signify directions. Unit vectors in R^3 correspond to points on the sphere because if u = (u_1, u_2, u_3) is a unit vector, then u_1^2 + u_2^2 + u_3^2 = 1. Each unit vector in R^3 carries more information than just one angle since, if you want to name a point on a sphere, you need to give two angles, longitude and latitude.

Now that we have unit vectors, we can treat every vector v as a length and a direction. The length of v is ‖v‖, of course. And its direction is the unit vector u in the same direction, which can be found by

u = v / ‖v‖.

The vector v can be reconstituted from its length and direction by multiplying v = ‖v‖ u.
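A minimal sketch of the last two formulas (NumPy assumed; the vector v is an arbitrary example): the direction of v is u = v/‖v‖, and v is recovered as ‖v‖ u.

```python
import numpy as np

v = np.array([3.0, -4.0, 12.0])      # an arbitrary nonzero vector in R^3
length = np.linalg.norm(v)           # ||v|| = 13
u = v / length                       # unit vector in the direction of v

print(np.isclose(np.linalg.norm(u), 1.0))   # True: u has length 1
print(np.allclose(length * u, v))           # True: v = ||v|| u
```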
  • Lecture 3.Pdf
ENGR-1100 Introduction to Engineering Analysis

Lecture 3: POSITION VECTORS & FORCE VECTORS

Today's Objectives: Students will be able to:
a) Represent a position vector in Cartesian coordinate form, from given geometry.
b) Represent a force vector directed along a line.

In-Class Activities:
• Applications / Relevance
• Write Position Vectors
• Write a Force Vector along a line

DOT PRODUCT

Today's Objective: Students will be able to use the vector dot product to:
a) determine an angle between two vectors, and
b) determine the projection of a vector along a specified line.

In-Class Activities:
• Applications / Relevance
• Dot product - Definition
• Angle Determination
• Determining the Projection

APPLICATIONS

This ship's mooring line, connected to the bow, can be represented as a Cartesian vector. What are the forces in the mooring line and how do we find their directions? Why would we want to know these things?

APPLICATIONS (continued)

This awning is held up by three chains. What are the forces in the chains and how do we find their directions? Why would we want to know these things?

POSITION VECTOR

A position vector is defined as a fixed vector that locates a point in space relative to another point. Consider two points, A and B, in 3-D space. Let their coordinates be (X_A, Y_A, Z_A) and (X_B, Y_B, Z_B), respectively.

The position vector directed from A to B, r_AB, is defined as

r_AB = {(X_B − X_A) i + (Y_B − Y_A) j + (Z_B − Z_A) k} m

Please note that B is the ending point and A is the starting point.
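A minimal sketch of the two ideas in this lecture (NumPy assumed; the coordinates of A and B and the force magnitude are made-up values): the position vector r_AB and a force of given magnitude directed along the line from A to B.

```python
import numpy as np

A = np.array([1.0, 0.0, 2.0])        # point A, coordinates in m (illustrative)
B = np.array([4.0, 4.0, 2.0])        # point B, coordinates in m (illustrative)

r_AB = B - A                         # position vector A -> B: (XB-XA, YB-YA, ZB-ZA)
u_AB = r_AB / np.linalg.norm(r_AB)   # unit vector along the line from A to B

F_mag = 100.0                        # force magnitude in N (illustrative)
F = F_mag * u_AB                     # force vector directed along the line

print(r_AB)    # components of r_AB: 3, 4, 0 (m)
print(F)       # force components: 60, 80, 0 (N)
```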
  • Concept of a Dyad and Dyadic
CHAPTER 1: Introductory Concepts
• Elements of Vector Analysis
• Newton's Laws
• Units
• The basis of Newtonian Mechanics
• D'Alembert's Principle

Science of Mechanics: It is concerned with the motion of material bodies.
• Bodies have different scales: microscopic, macroscopic and astronomic scales. In mechanics, mostly macroscopic bodies are considered.
• Speed of motion serves as another important variable: small and high (approaching the speed of light).
• In Newtonian mechanics we study the motion of bodies much bigger than particles at atomic scale, moving at relative speeds much smaller than the speed of light.
• Two general approaches:
  – Vectorial dynamics: uses Newton's laws to write the equations of motion of a system; motion is described in physical coordinates and their derivatives.
  – Analytical dynamics: uses energy-like quantities to define the equations of motion and generalized coordinates to describe motion.

1.1 Vector Analysis:
• Scalars, vectors, tensors:
  – Scalar: a quantity expressible by a single real number. Examples include mass, time, temperature, energy, etc.
  – Vector: a quantity which needs both direction and magnitude for complete specification. Actually (mathematically), it must also have certain transformation properties. These properties are: vector magnitude remains unchanged under rotation of axes. Examples: force, moment of a force, velocity, acceleration, etc.
  – Geometrically, vectors are shown or depicted as directed line segments of proper magnitude and direction.

[Figure: a vector A along the unit vector e, so A = A e.]

  – If we use a coordinate system, we define a basis set (î, ĵ, k̂); we can write A = A_x i + A_y j + A_z k, or we can also use the three components and define {A} = {A_x, A_y, A_z}^T.
  – The three components A_x, A_y, A_z can be used as 3-dimensional vector elements to specify the vector.
  • Introduction to Algebraic Geometry
INTRODUCTION TO ALGEBRAIC GEOMETRY

WEI-PING LI

1. Preliminary of Calculus on Manifolds

1.1. Tangent Vectors. What are tangent vectors we encounter in Calculus?

(1) Given a parametrised curve α(t) = (x(t), y(t)) in R^2, α'(t) = (x'(t), y'(t)) is a tangent vector of the curve.

(2) Given a surface given by a parameterisation

x(u, v) = (x(u, v), y(u, v), z(u, v)),

n = ∂x/∂u × ∂x/∂v is a normal vector of the surface. Any vector perpendicular to n is a tangent vector of the surface at the corresponding point.

(3) Let v = (a, b, c) be a unit tangent vector of R^3 at a point p ∈ R^3, and let f(x, y, z) be a differentiable function in an open neighbourhood of p. We can form the directional derivative of f in the direction v:

D_v f = a ∂f/∂x(p) + b ∂f/∂y(p) + c ∂f/∂z(p).   (1.1)

In fact, given any tangent vector v = (a, b, c), not necessarily a unit vector, we still can define an operator on the set of functions which are differentiable in an open neighbourhood of p as in (1.1). Thus we can take the viewpoint that each tangent vector of R^3 at p is an operator on the set of differentiable functions at p, i.e.

v = (a, b, c) → a ∂/∂x + b ∂/∂y + c ∂/∂z |_p,

or simply

v = (a, b, c) → a ∂/∂x + b ∂/∂y + c ∂/∂z   (1.2)

with the evaluation at p understood.
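A quick numerical sketch of formula (1.1) (NumPy assumed; the function f and the point p are invented for illustration): the directional derivative built from the partial derivatives agrees with a centered finite-difference estimate of d/dt f(p + tv) at t = 0.

```python
import numpy as np

def f(x, y, z):
    return x**2 * y + np.sin(z)                       # an arbitrary smooth function

def grad_f(x, y, z):
    return np.array([2 * x * y, x**2, np.cos(z)])     # its partial derivatives

p = np.array([1.0, 2.0, 0.5])                         # base point
v = np.array([1.0, -1.0, 2.0])
v = v / np.linalg.norm(v)                             # unit tangent vector

D_v = grad_f(*p) @ v                                  # formula (1.1): a f_x + b f_y + c f_z

h = 1e-6                                              # finite-difference check
D_num = (f(*(p + h * v)) - f(*(p - h * v))) / (2 * h)
print(np.isclose(D_v, D_num))                         # True (up to numerical error)
```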
  • Two Worked out Examples of Rotations Using Quaternions
TWO WORKED OUT EXAMPLES OF ROTATIONS USING QUATERNIONS

This note is an attachment to the article "Rotations and Quaternions", which in turn is a companion to the video of the talk by the same title.

Example 1. Determine the image of the point (1, −1, 2) under the rotation by an angle of 60° about an axis in the yz-plane that is inclined at an angle of 60° to the positive y-axis.

Solution: The unit vector u in the direction of the axis of rotation is cos 60° j + sin 60° k = (1/2) j + (√3/2) k. The quaternion (or vector) corresponding to the point p = (1, −1, 2) is of course p = i − j + 2k. To find the image of p under the rotation, we calculate q p q^{-1}, where q is the quaternion cos(θ/2) + sin(θ/2) u and θ is the angle of rotation (60° in this case). The resulting quaternion, if we did the calculation right, would have no constant term and therefore we can interpret it as a vector. That vector gives us the answer.

We have q = √3/2 + (1/2) u = √3/2 + (1/4) j + (√3/4) k = (1/4)(2√3 + j + √3 k). Since q is by construction a unit quaternion, its inverse is its conjugate: q^{-1} = (1/4)(2√3 − j − √3 k). Now, computing qp in the routine way, we get

qp = (1/4)((1 − 2√3) + (2 + 3√3) i − √3 j + (4√3 − 1) k),

and then another long but routine computation gives

q p q^{-1} = (1/8)((10 + 4√3) i + (1 + 2√3) j + (14 − 3√3) k).

The point corresponding to the vector on the right-hand side in the above equation is the image of (1, −1, 2) under the given rotation.
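The arithmetic above is easy to verify numerically. The sketch below (NumPy assumed; the quaternion product helper is written out here rather than taken from any library) rotates p = (1, −1, 2) with q = cos(θ/2) + sin(θ/2) u and compares the result with the closed-form answer from the worked example.

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

theta = np.radians(60.0)
u = np.array([0.0, 0.5, np.sqrt(3) / 2])                 # axis in the yz-plane, 60 deg from +y
q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * u))
q_inv = q * np.array([1, -1, -1, -1])                     # conjugate = inverse for a unit quaternion

p = np.array([0.0, 1.0, -1.0, 2.0])                       # the point (1, -1, 2) as a pure quaternion
image = qmul(qmul(q, p), q_inv)

expected = np.array([0.0, 10 + 4*np.sqrt(3), 1 + 2*np.sqrt(3), 14 - 3*np.sqrt(3)]) / 8
print(np.allclose(image, expected))                        # True
```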
  • Differential Geometry Course Notes
DIFFERENTIAL GEOMETRY COURSE NOTES

KO HONDA

1. REVIEW OF TOPOLOGY AND LINEAR ALGEBRA

1.1. Review of topology.

Definition 1.1. A topological space is a pair (X, T) consisting of a set X and a collection T = {U_α} of subsets of X, satisfying the following:
(1) ∅, X ∈ T,
(2) if U_α, U_β ∈ T, then U_α ∩ U_β ∈ T,
(3) if U_α ∈ T for all α ∈ I, then ∪_{α∈I} U_α ∈ T. (Here I is an indexing set, and is not necessarily finite.)

T is called a topology for X and U_α ∈ T is called an open set of X.

Example 1: R^n = R × R × ··· × R (n times) = {(x_1, …, x_n) | x_i ∈ R, i = 1, …, n}, called real n-dimensional space. How to define a topology T on R^n? We would at least like to include open balls of radius r about y ∈ R^n:

B_r(y) = {x ∈ R^n | |x − y| < r},

where

|x − y| = √((x_1 − y_1)^2 + ··· + (x_n − y_n)^2).

Question: Is T_0 = {B_r(y) | y ∈ R^n, r ∈ (0, ∞)} a valid topology for R^n?

No, so you must add more open sets to T_0 to get a valid topology for R^n:

T = {U | ∀ y ∈ U, ∃ B_r(y) ⊂ U}.

Example 2A: S^1 = {(x, y) ∈ R^2 | x^2 + y^2 = 1}. A reasonable topology on S^1 is the topology induced by the inclusion S^1 ⊂ R^2.

Definition 1.2. Let (X, T) be a topological space and let f : Y → X.
  • Manifold Reconstruction in Arbitrary Dimensions Using Witness Complexes
Manifold Reconstruction in Arbitrary Dimensions using Witness Complexes

Jean-Daniel Boissonnat, Leonidas J. Guibas, Steve Oudot. Manifold Reconstruction in Arbitrary Dimensions using Witness Complexes. Discrete and Computational Geometry, Springer Verlag, 2009, pp. 37. HAL Id: hal-00488434, https://hal.archives-ouvertes.fr/hal-00488434. Submitted on 2 Jun 2010.

Jean-Daniel Boissonnat (INRIA, Géométrica Team, 2004 route des lucioles, 06902 Sophia-Antipolis, France, [email protected]); Leonidas J. Guibas (Dept. Computer Science, Stanford University, Stanford, CA 94305, [email protected]); Steve Y. Oudot (Dept. Computer Science, Stanford University, Stanford, CA 94305, [email protected]).

Abstract. It is a well-established fact that the witness complex is closely related to the restricted Delaunay triangulation in low dimensions. Specifically, it has been proved that the witness complex coincides with the restricted Delaunay triangulation on curves, and is still a subset of it on surfaces, under mild sampling conditions. In this paper, we prove that these results do not extend to higher-dimensional manifolds, even under strong sampling conditions such as uniform point density.
  • Chapter 3 Vectors
Chapter 3 Vectors

Contents:
3.1 Vector Analysis
  3.1.1 Introduction to Vectors
  3.1.2 Properties of Vectors
3.2 Cartesian Coordinate System
  3.2.1 Cartesian Coordinates
3.3 Application of Vectors
  Example 3.1 Vector Addition
  Example 3.2 Sinking Sailboat
  Example 3.3 Vector Description of a Point on a Line
  Example 3.4 Rotated Coordinate Systems
  Example 3.5 Vector Description in Rotated Coordinate Systems
  Example 3.6 Vector Addition

Chapter 3 Vectors

Philosophy is written in this grand book, the universe which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and
  • Hodge Theory
HODGE THEORY

PETER S. PARK

Abstract. This exposition of Hodge theory is a slightly retooled version of the author's Harvard minor thesis, advised by Professor Joe Harris.

Contents:
1. Introduction
2. Hodge Theory of Compact Oriented Riemannian Manifolds
  2.1. Hodge star operator
  2.2. The main theorem
  2.3. Sobolev spaces
  2.4. Elliptic theory
  2.5. Proof of the main theorem
3. Hodge Theory of Compact Kähler Manifolds
  3.1. Differential operators on complex manifolds
  3.2. Differential operators on Kähler manifolds
  3.3. Bott–Chern cohomology and the ∂∂̄-Lemma
  3.4. Lefschetz decomposition and the Hodge index theorem
Acknowledgments
References

1. Introduction

Our objective in this exposition is to state and prove the main theorems of Hodge theory. In Section 2, we first describe a key motivation behind the Hodge theory for compact, closed, oriented Riemannian manifolds: the observation that the differential forms that satisfy certain partial differential equations depending on the choice of Riemannian metric (forms in the kernel of the associated Laplacian operator, or harmonic forms) turn out to be precisely the norm-minimizing representatives of the de Rham cohomology classes. This naturally leads to the statement of our first main theorem, the Hodge decomposition (for a given compact, closed, oriented Riemannian manifold) of the space of smooth k-forms into the image of the Laplacian and its kernel, the subspace of harmonic forms. We then develop the analytic machinery (specifically, Sobolev spaces and the theory of elliptic differential operators) that we use to prove the aforementioned decomposition, which immediately yields as a corollary the phenomenon of Poincaré duality.