VISUALIZING EXTERIOR CALCULUS

GREGORY BIXLER

Abstract. Exterior calculus, broadly, is the structure of differential forms. These are usually presented algebraically, or as computational tools to simplify Stokes' Theorem. But geometric objects like forms should be visualized! Here, I present visualizations of forms that attempt to capture their geometry.

Contents

Introduction
1. Exterior algebra
  1.1. Vectors and dual vectors
  1.2. The wedge of two vectors
  1.3. Defining the exterior algebra
  1.4. The wedge of two dual vectors
  1.5. $\mathbb{R}^3$ has no mixed wedges
  1.6. Interior multiplication
2. Integration
  2.1. Differential forms
  2.2. Visualizing differential forms
  2.3. Integrating differential forms over manifolds
3. Differentiation
  3.1. Exterior Derivative
  3.2. Visualizing the exterior derivative
  3.3. Closed and exact forms
  3.4. Future work: Lie derivative
Acknowledgments
References

Introduction

What's the natural way to integrate over a smooth manifold? Well, a $k$-dimensional manifold is (locally) parametrized by $k$ coordinate functions. At each point we get $k$ tangent vectors, which together form an infinitesimal $k$-dimensional volume element. And how should we measure this element? It seems reasonable to ask for a function $f$ taking $k$ vectors, so that

  • $f$ is linear in each vector,
  • $f$ is zero for degenerate elements (i.e. when the vectors are linearly dependent),
  • $f$ is smooth.

These are the properties of a differential form.

Introducing structure to a manifold usually follows a certain pattern:

  • Create it in $\mathbb{R}^n$. We define the exterior algebra in section 1.
  • Introduce it locally in coordinate neighborhoods. We devote sections 2 and 3 to this.
  • Understand it globally by patching together the coordinate neighborhoods. This is beyond the scope of this paper, except for a few remarks at the end of section 3.

1. Exterior algebra

Given a vector space $V$ (in this paper, all vector spaces are finite-dimensional), we can construct its exterior algebra $\Lambda(V)$, a sequence of vector spaces $\Lambda^k(V)$ which represent $k$-dimensional volume elements. (The notation "$\Lambda$" seems to be a corruption of the symbol $\wedge$, the exterior/wedge product. Some authors write $\bigwedge(V)$ instead.) We measure these volume elements via the dual space $\Lambda^k(V)^*$. This generalizes the line integrals seen in a typical introductory multivariable calculus class: $\int \mathbf{F} \cdot d\mathbf{r}$. Here, "$\mathbf{F} \cdot$" is a dual vector measuring the line element $d\mathbf{r}$.

We face a double challenge: to see the exterior algebra of dual vectors. We'll tackle this by first visualizing dual vectors, and then visualizing the exterior algebra of (regular) vectors and dual vectors in tandem.

1.1. Vectors and dual vectors. Visually, a vector in $\mathbb{R}^n$ is an arrow whose tail is at the origin. It's less clear how to visualize dual vectors. Since there is an isomorphism $V^* \cong V$, why not simply visualize $V^*$ as $V$ itself? The main problem with this approach is that it transforms the wrong way under a change of basis. For example, suppose we're initially measuring in feet and switch to inches: if our old basis is $\{e_1, \dots, e_n\}$, then our new basis is $\{e_1/12, \dots, e_n/12\}$. Vectors appear to get 12 times bigger. But for any $T \in V^*$ and $v \in V$, the value of $T(v)$ is independent of basis, so $T$ must shrink by a factor of 12, not grow.
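To make the transformation rule concrete, here is a minimal numerical sketch (Python; my own illustration, not from the paper, with made-up component values). Vector components grow by 12 under the feet-to-inches change of basis, dual-vector components shrink by 12, and the pairing $T(v)$ comes out the same either way:

```python
import numpy as np

# Components of a vector v and a dual vector T in the "feet" basis
# (arbitrary illustrative values).
v_feet = np.array([3.0, 2.0])
T_feet = np.array([5.0, -1.0])

# Feet -> inches: the new basis vectors are e_i / 12, so vector
# components grow by 12 while dual-vector components shrink by 12.
v_inches = 12 * v_feet
T_inches = T_feet / 12

# The pairing T(v) is the sum of products of components; it is
# the same number in either basis.
print(T_feet @ v_feet)      # 13.0
print(T_inches @ v_inches)  # 13.0
```

This invariance is what the tick-mark picture below encodes: the tick marks of $T$ sit at the level sets $T^{-1}(n)$, which don't depend on any choice of basis.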
Instead, in $\mathbb{R}^1$, view a dual vector as a collection of "tick marks" on the axis. In $\mathbb{R}^n$, we can do the same, with a caveat: we must choose an axis with tick marks, and project the vector onto this axis. This doesn't really measure the vector; it measures the projection. We can measure vectors more directly by orthogonally extending the tick marks. This yields a series of lines in $\mathbb{R}^2$, planes in $\mathbb{R}^3$, and hyperplanes in general. Formally, we can think of this geometric representation of a dual vector $T$ as the set of pre-images $T^{-1}(n)$ of integers $n$.

1.2. The wedge of two vectors. Allow me to start with a historical note. In $\mathbb{R}^3$, in addition to the vector space operations of addition and scaling, we have the dot product and cross product. We compute the dot product $u \cdot v$ by projecting $u$ along $v$ and then multiplying their signed lengths. We compute the cross product $u \times v$ by first forming the parallelogram $P$ spanned by $u$ and $v$, and then setting the length of $u \times v$ to be the area of $P$, in the direction of the (oriented) normal to $P$.

These operations were not always called "dot product" and "cross product". Hermann Grassmann essentially invented linear algebra in his 1844 book Die lineale Ausdehnungslehre ([3]), including the exterior algebra. In the preface, he calls these the "interior product" and "exterior product," respectively. (His interior product is not the same as the interior multiplication we define later! Also, this is probably where the term "inner product" comes from.) He reasons that the dot product takes its maximum value when one vector is inside the span of the other, while the cross product is only nonzero if each vector is outside the span of the other.

Anyways, back to describing multi-dimensional volume. The cross product is essentially the right idea: it's no accident that it shows up whenever we compute a surface integral. To generalize it, we need to fix a quirk of the cross product: it describes a 2-dimensional area as a 1-dimensional vector. The fix is to describe the result as a "2-dimensional vector"! (Perhaps I'm being unfair to the cross product here. It really is remarkable that in $\mathbb{R}^3$ we can describe area using vectors, and the cross product is particularly useful since it spits out an object of the same type as its inputs.)

We implement this fix by defining a new operation, the wedge product (or exterior product) $u \wedge v$ of vectors $u$ and $v$. We want it to satisfy many of the same properties as the cross product:

  • $(u_1 + u_2) \wedge v = u_1 \wedge v + u_2 \wedge v$
  • $c(u \wedge v) = (cu) \wedge v = u \wedge (cv)$
  • $u \wedge v = -(v \wedge u)$, so in particular, $v \wedge v = 0$

1.3. Defining the exterior algebra. The wedge product looks essentially like the tensor product, but with some extra relations (in the third bullet above). So, it should seem natural to define the space of 2-vectors, $\Lambda^2(V)$, as a quotient of the tensor power $V \otimes V$. Before we do this, recall a little trick, which shows that the relations $v \wedge v = 0$ imply $u \wedge v = -v \wedge u$:
$$0 = (u + v) \wedge (u + v) = u \wedge u + u \wedge v + v \wedge u + v \wedge v = u \wedge v + v \wedge u.$$

We now define
$$\Lambda^2(V) = \frac{V \otimes V}{I},$$
where $I$ is the subspace of $V \otimes V$ generated by the tensors $v \otimes v$. We can generalize this definition to higher powers:

Definition 1.1. Let $V$ be a vector space, and $k$ a nonnegative integer. Then
$$\Lambda^k(V) = \frac{V^{\otimes k}}{I_k},$$
where $I_k$ is the subspace of $V^{\otimes k}$ generated by tensors $v_1 \otimes \cdots \otimes v_k$ where at least one of the $v_i$ is repeated. (Or equivalently, where the $v_i$ are linearly dependent.)

Recall that, if $\{e_1, \dots, e_n\}$ is a basis for $V$, then the elements $e_I = e_{i_1} \wedge \cdots \wedge e_{i_k}$ form a basis for $\Lambda^k(V)$. Here, $I$ ranges over all strictly increasing $k$-tuples in $\{1, \dots, n\}$. The dimension of $\Lambda^k(V)$ is the number of such tuples, which is $\binom{n}{k}$.
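To make these relations concrete, here is a small Python sketch (my own illustration, not from the paper). It models $u \wedge v$ by the antisymmetrized tensor $u \otimes v - v \otimes u$, one standard concrete realization of $\Lambda^2(V)$ inside $V \otimes V$; the asserts check the bulleted properties, the link to the cross product in $\mathbb{R}^3$, and the $\binom{n}{k}$ count of basis wedges:

```python
import numpy as np
from itertools import combinations
from math import comb

def wedge(u, v):
    """Model u ^ v as the antisymmetrized tensor u (x) v - v (x) u."""
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

W = wedge(u, v)
assert np.allclose(W, -wedge(v, u))             # u ^ v = -(v ^ u)
assert np.allclose(wedge(u, u), 0)              # u ^ u = 0
assert np.allclose(wedge(u + u, v), 2 * W)      # bilinearity

# In R^3, the independent components (u ^ v)_{ij} = u_i v_j - u_j v_i
# recover the cross product, up to the usual index bookkeeping.
c = np.cross(u, v)
assert np.isclose(W[1, 2], c[0])  # yz-component
assert np.isclose(W[2, 0], c[1])  # zx-component
assert np.isclose(W[0, 1], c[2])  # xy-component

# A basis of Lambda^k(V): wedges e_I over strictly increasing k-tuples I.
n, k = 4, 2
tuples = list(combinations(range(n), k))
print(tuples)  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert len(tuples) == comb(n, k)  # dim Lambda^k(V) = n choose k
```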
Finally, treating the exterior algebra as a whole makes the definition even simpler: let $T(V)$ be the tensor algebra for $V$. Then
$$\Lambda(V) = T(V)/I,$$
where $I$ is the ideal generated by the tensors $v \otimes v$. (This ideal $I$ is different from the subspace $I$ in the definition of $\Lambda^2(V)$, since now we also use $\otimes$ when generating $I$.)

1.4. The wedge of two dual vectors. We've seen that the wedge of two vectors looks like a stretchy, oriented parallelogram. How about the wedge of two dual vectors? First, recall the usual definition of 2-forms.

Proposition 1.2. Let $V$ be a finite-dimensional vector space. Then
$$\Lambda^2(V^*) = W,$$
where $W$ is the space of bilinear, alternating functions $V \times V \to \mathbb{R}$. (A function is alternating if its sign flips when swapping any two of its arguments.)

Let's start simply, in $\mathbb{R}^2$. Here, we have a way to measure oriented parallelograms: the determinant. Indeed, the function $(u, v) \mapsto \det[u \; v]$ (the determinant of the matrix with columns $u$ and $v$) is bilinear and alternating. The determinant in $\mathbb{R}^2$ is a particular 2-form; recall that in general, the determinant $\det_n$ in $\mathbb{R}^n$ is equal to $e_1^* \wedge \cdots \wedge e_n^*$, where $e_j^*$ is the usual dual basis for $(\mathbb{R}^n)^*$.

Here's how we can visualize the determinant: place a dot at each lattice point in the plane. To approximate $\det(u \wedge v)$, draw the parallelogram $u \wedge v$ and count how many dots are inside. Keep track of orientations with a small circle around each dot. For example, in the paper's figure (not reproduced here), $\det(u_1 \wedge u_2) \approx 3$ and $\det(v_1 \wedge v_2) \approx -4$. From now on, I'll mostly ignore orientations. To rescale the determinant, move the dots closer together or farther apart, reversing the circles if the scalar is negative. Due to the bilinearity of the determinant, it doesn't matter how you choose to move the dots.
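Since the figure is not reproduced here, a small Python sketch can stand in for the picture (the helper `lattice_count` is my own hypothetical name, not from the paper). It refines the lattice by a factor `scale`, i.e. moves the dots closer together, counts the dots inside the half-open parallelogram spanned by $u$ and $v$, and compares the count with $|\det[u \; v]|$:

```python
import numpy as np

def lattice_count(u, v, scale=10):
    """Approximate |det[u v]| by counting dots of the lattice
    (Z/scale)^2 inside the parallelogram spanned by u and v."""
    # Equivalently: scale the parallelogram up and count points of Z^2.
    A = scale * np.column_stack([u, v]).astype(float)
    corners = np.array([[0.0, 0.0], A[:, 0], A[:, 1], A[:, 0] + A[:, 1]])
    lo, hi = corners.min(axis=0), corners.max(axis=0)
    count = 0
    for x in range(int(np.floor(lo[0])), int(np.ceil(hi[0])) + 1):
        for y in range(int(np.floor(lo[1])), int(np.ceil(hi[1])) + 1):
            # Write (x, y) = s*A[:,0] + t*A[:,1]; the point lies inside
            # the half-open parallelogram when 0 <= s, t < 1.
            s, t = np.linalg.solve(A, [x, y])
            if 0 <= s < 1 and 0 <= t < 1:
                count += 1
    return count / scale**2

u, v = np.array([2.0, 1.0]), np.array([0.5, 2.0])
print(lattice_count(u, v))                          # 3.5
print(abs(np.linalg.det(np.column_stack([u, v]))))  # 3.5
```

At `scale=1` this is literally counting whole dots, which is why the figure reads $\det(u_1 \wedge u_2) \approx 3$ rather than an exact value; moving the dots closer together sharpens the approximation.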