Dyadic Tensor Notation Similar to What I Will Be Using in Class, with Just a Couple of Changes in Notation


Here are some notes on vector and dyadic tensor notation similar to what I will be using in class, with just a couple of changes in notation:

    Notes (Littlewood)        Dowling (class)
    ∧                      =  ×          British vs. American (Dowling's) notation for the cross product
    A (underlined)         =  A (bold)   notation for a vector vs. standard (Dowling's)
    B (doubly underlined)  =  B (bold)   notation for a dyadic tensor vs. standard (Dowling's)

With these notes you should be able to make sense of the expressions in the vector identities pages below that involve the dyadic tensor T. (Notes courtesy of Peter Littlewood, University of Cambridge.) The vector identity pages are from the NRL Plasma Formulary – order your very own online or download the entire thing at: http://wwwppd.nrl.navy.mil/nrlformulary/ --JPD

VECTOR IDENTITIES

Notation: f, g, etc. are scalars; A, B, etc. are vectors; T is a tensor; I is the unit dyad.

(1) A·B×C = A×B·C = B·C×A = B×C·A = C·A×B = C×A·B
(2) A×(B×C) = (C×B)×A = (A·C)B − (A·B)C
(3) A×(B×C) + B×(C×A) + C×(A×B) = 0
(4) (A×B)·(C×D) = (A·C)(B·D) − (A·D)(B·C)
(5) (A×B)×(C×D) = (A×B·D)C − (A×B·C)D
(6) ∇(fg) = ∇(gf) = f∇g + g∇f
(7) ∇·(fA) = f∇·A + A·∇f
(8) ∇×(fA) = f∇×A + ∇f×A
(9) ∇·(A×B) = B·∇×A − A·∇×B
(10) ∇×(A×B) = A(∇·B) − B(∇·A) + (B·∇)A − (A·∇)B
(11) A×(∇×B) = (∇B)·A − (A·∇)B
(12) ∇(A·B) = A×(∇×B) + B×(∇×A) + (A·∇)B + (B·∇)A
(13) ∇²f = ∇·∇f
(14) ∇²A = ∇(∇·A) − ∇×∇×A
(15) ∇×∇f = 0
(16) ∇·∇×A = 0

If e_1, e_2, e_3 are orthonormal unit vectors, a second-order tensor T can be written in the dyadic form

(17) T = Σ_{i,j} T_ij e_i e_j

In cartesian coordinates the divergence of a tensor is a vector with components

(18) (∇·T)_i = Σ_j (∂T_ji/∂x_j)

[This definition is required for consistency with Eq. (29).]
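The purely algebraic identities (1)–(5) hold for arbitrary vectors, so they can be spot-checked numerically. Here is a minimal sketch using numpy with randomly chosen 3-vectors (numpy and the variable names are assumptions of this sketch, not part of the formulary); it verifies (2), (4) and (5), and illustrates the dyad of Eq. (17) as an outer product:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = rng.standard_normal((4, 3))  # four random 3-vectors

# (2) A×(B×C) = (A·C)B − (A·B)C   (the "BAC−CAB" rule)
lhs2 = np.cross(A, np.cross(B, C))
rhs2 = np.dot(A, C) * B - np.dot(A, B) * C
assert np.allclose(lhs2, rhs2)

# (4) (A×B)·(C×D) = (A·C)(B·D) − (A·D)(B·C)
lhs4 = np.dot(np.cross(A, B), np.cross(C, D))
rhs4 = np.dot(A, C) * np.dot(B, D) - np.dot(A, D) * np.dot(B, C)
assert np.isclose(lhs4, rhs4)

# (5) (A×B)×(C×D) = (A×B·D)C − (A×B·C)D
AxB = np.cross(A, B)
lhs5 = np.cross(AxB, np.cross(C, D))
rhs5 = np.dot(AxB, D) * C - np.dot(AxB, C) * D
assert np.allclose(lhs5, rhs5)

# (17): a dyad such as T = e1 e2 has components T[i, j] = (e1)_i (e2)_j,
# i.e. it is the outer product of the two vectors.
e1, e2 = np.eye(3)[0], np.eye(3)[1]
T = np.outer(e1, e2)
assert T[0, 1] == 1.0 and T.sum() == 1.0

print("identities (2), (4), (5) verified")
```

The differential identities (6)–(16) can be checked the same way with symbolic fields, e.g. in sympy.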
In general

(19) ∇·(AB) = (∇·A)B + (A·∇)B
(20) ∇·(fT) = ∇f·T + f∇·T

Let r = ix + jy + kz be the radius vector of magnitude r, from the origin to the point x, y, z. Then

(21) ∇·r = 3
(22) ∇×r = 0
(23) ∇r = r/r
(24) ∇(1/r) = −r/r³
(25) ∇·(r/r³) = 4πδ(r)
(26) ∇r = I

If V is a volume enclosed by a surface S and dS = n dS, where n is the unit normal outward from V,

(27) ∫_V dV ∇f = ∫_S dS f
(28) ∫_V dV ∇·A = ∫_S dS·A
(29) ∫_V dV ∇·T = ∫_S dS·T
(30) ∫_V dV ∇×A = ∫_S dS×A
(31) ∫_V dV (f∇²g − g∇²f) = ∫_S dS·(f∇g − g∇f)
(32) ∫_V dV (A·∇×∇×B − B·∇×∇×A) = ∫_S dS·(B×∇×A − A×∇×B)

If S is an open surface bounded by the contour C, of which the line element is dl,

(33) ∫_S dS×∇f = ∮_C dl f
(34) ∫_S dS·∇×A = ∮_C dl·A
(35) ∫_S (dS×∇)×A = ∮_C dl×A
(36) ∫_S dS·(∇f×∇g) = ∮_C f dg = −∮_C g df

EXAMPLES CLASS IN MATHEMATICAL PHYSICS V
Introduction to Tensors

Commentary

This is the first of two sheets on tensors; the second will come in class VIII. Tensors have many applications in theoretical physics, not only in polarization problems like the ones described below, but in dynamics, elasticity, fluid mechanics, quantum theory, etc. The mathematical apparatus for dealing with tensor problems may not yet be familiar, so this sheet contains quite a high proportion of formal material.

Section A introduces the concept of a tensor (in this case, a linear operator that transforms one vector into another) in the physical context of studying the polarisation of an aspherical molecule. Section B summarizes various mathematical and notational ideas that are useful in physics; these include suffix notation and concepts such as the outer product of two vectors and projection operators (mathematical objects which take the projection, or component, of a vector along a particular direction). Section C returns to a physical problem, also about polarization, and leads to a discussion of principal axes, eigenvectors and eigenvalues of tensor quantities. Section D introduces the properties of the antisymmetric third-rank tensor ε_ijk, which can be used to derive many results in vector calculus much more rapidly than by other methods. This ground will be covered again in sheet VIII, so it doesn't matter if you run out of time.

A  Introduction

We often find vectors that are linearly related to other vectors; for instance, the polarisation P induced in a medium by applying a field E is

    P = αE.                                                    (1)

Here P and E are parallel; their cartesian components are related to each other by the same constant of proportionality α for each component. Here we have denoted vectors by underlinings, as one would do if writing this out by hand. Usually books denote vectors by bold type; in the rest of this sheet we treat the two notations as completely interchangeable.

It is possible to imagine situations where the medium is anisotropic, so that the response factor in (1) is different for components in different directions. The two vectors P and E are then no longer necessarily parallel, and what connects them is called a tensor. We will use a simple model of molecular polarisability to show the underlying physics and introduce the ideas of tensors.

A1  An Electrostatics Problem

The diacetylene molecule HC≡C–C≡CH is highly polarisable along its long axis, p_∥ = α_∥ E_∥ being the induced dipole for a field E_∥ applied along the molecule. The response to perpendicular fields is p_⊥ = α_⊥ E_⊥, where α_⊥ ≪ α_∥. Let the cylindrical molecule point in the direction n̂ = (1/√2)(1, 1, 0) and apply a field E = E x̂ in the x direction.

(i) Prove

    p_x = ½(α_∥ + α_⊥)E_x,
    p_y = ½(α_∥ − α_⊥)E_x,
    p_z = 0.

Hence one sees that p and E are not parallel, and in general we must write

    p = α·E,                                                   (2)

where α is the polarisability tensor. Tensors are written with a double underline, as matrices often are, or in books sometimes as doubly bold (bold sans-serif) typeface characters. In component form (2) is

    p_x = α_xx E_x + α_xy E_y + α_xz E_z,                      (3)
    p_y = α_yx E_x + α_yy E_y + α_yz E_z,                      (4)
    p_z = α_zx E_x + α_zy E_y + α_zz E_z.                      (5)

(ii) What are the coefficients α_ij explicitly for the above example? Write α as a matrix of its components. Note that although the tensor α is the same physical entity in all coordinate systems (it relates p with E in whatever basis these are expressed), its components α_ij depend on the coordinate system used.

Remark: If many molecules are present with the same orientation, P = N p, where there are N molecules per unit volume. A hypothetical diacetylene solid might consist of such rods arranged in an array, all directed along n̂. Then the macroscopic polarisability to replace α in (1) would, neglecting internal-field corrections, be Nα.

B  Miscellaneous Ideas and Notations

B1  Suffix notation

Commonly, suffices such as i and j are dummy symbols, and when repeated in an expression they are to be summed over; this is the Einstein Convention. For instance, (3) to (5) can be succinctly written as

    p_i = α_ij E_j,                                            (6)

which is Eq. (2) rewritten in suffix notation. Note the order of the indices on the right. Thus we have written the vector p simply as p_i, and it will be clear from context that a vector is intended and not simply one of its components. Likewise α can be denoted α_ij.

In suffix notation the dot product (also called scalar product or inner product) of two vectors a, b is written a_i b_i. The operation of a tensor on a vector also involves an inner product (a sum over repeated indices), as described above. The unit tensor δ_ij by definition transforms any vector into itself: δ_ij v_j = v_i.

B2  The outer product

Consider the object Q_ij defined as

    Q_ij = a_i b_j,                                            (7)

where a and b are vectors.

(i) Show, using suffix notation, that if p is a vector, Q_ij p_j is also a vector; find its magnitude and direction. Note that Q_ij p_j ≠ Q_ji p_j. We see that Q is a linear operator that transforms one vector p into another; it is therefore a tensor. This defines the outer product, or dyadic product, of two vectors. In non-suffix notation (sometimes called dyadic notation) Eq. (7) is written as

    Q = ab,

where there is no dot between the two vectors. In this notation we have

    Q·c = (ab)·c = a(b·c)

(prove this using suffix notation, if you did not do so already above). This notation is used in various areas of mathematics and physics, but in general suffix notation is more versatile and can always be used instead.

(ii) We can write the tensor α in A1 in the following form:

    α = α_xx x̂x̂ + α_xy x̂ŷ + α_xz x̂ẑ + (other terms).

Check that this indeed agrees with (3) to (5) by finding the result of applying a field in the x direction, E = E x̂:

    p = α·E = (α_xx x̂ + α_yx ŷ + α_zx ẑ)E_x.

(iii) Projection operators. In dyadic notation the identity tensor δ_ij is often written 1, as the unit operator. Consider the action on a vector v of the operators uu and (1 − uu), where u is a unit vector. Show that these perform the job of resolving the vector v into vectors v_∥ and v_⊥, respectively parallel and perpendicular to u. Prove that these projection operators are idempotent. (The definition of an idempotent operator G is that GG = G.) Show that the polarizability tensor in problem A1 can be written

    α = α_∥ uu + α_⊥(1 − uu),

where u = n̂ = (1/√2)(1, 1, 0).

C  Use of Principal Axes

C1  A fixed object is placed in a uniform electric field of … kV m⁻¹. When the field is in the x direction it acquires a dipole moment with components …, … and … C m in the x, y, z directions. For the same field strength in the y and z directions, the dipole moments are (…, …, …) and (…, …, …) C m respectively.

(i) Write down the components of the tensor α that describes the polarisability of the object, using the x, y, z coordinate frame.

(ii) What is the torque T = p × E on the object when it is placed in an electric field of … kV m⁻¹ in the … direction? (Ans: … N m.)

(iii) Suppose there are field directions u, v, w for which there is no torque on the object; what relation do E and p then have? Show that u, v and w are the eigenvectors of α.

(iv) Show that if we use as a coordinate system the unit vectors u, v, w, there are no off-diagonal components of α.
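The model of sections A–C can be explored numerically. Below is a minimal numpy sketch; the values α_∥ = 5 and α_⊥ = 1 are made up for illustration, and the molecular axis u = (1, 1, 0)/√2 is the one assumed in problem A1. It builds α = α_∥ uu + α_⊥(1 − uu), checks that the projectors uu and (1 − uu) are idempotent, shows that p = α·E is not parallel to a field applied along x, and confirms that α is diagonal in its eigenvector (principal-axis) basis:

```python
import numpy as np

a_par, a_perp = 5.0, 1.0                     # made-up polarisabilities, a_perp << a_par
u = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)  # molecular axis (unit vector)

P_par = np.outer(u, u)         # dyad uu: projector onto u
P_perp = np.eye(3) - P_par     # 1 - uu: projector perpendicular to u

# Projection operators are idempotent: GG = G
assert np.allclose(P_par @ P_par, P_par)
assert np.allclose(P_perp @ P_perp, P_perp)

# Polarisability tensor  alpha = a_par uu + a_perp (1 - uu)
alpha = a_par * P_par + a_perp * P_perp

# Field along x: the induced dipole p is NOT parallel to E
E = np.array([1.0, 0.0, 0.0])
p = alpha @ E
assert np.isclose(p[0], 0.5 * (a_par + a_perp))  # p_x = ½(a_par + a_perp)E_x
assert np.isclose(p[1], 0.5 * (a_par - a_perp))  # p_y = ½(a_par − a_perp)E_x
assert np.isclose(p[2], 0.0)                     # p_z = 0
assert np.linalg.norm(np.cross(p, E)) > 0        # torque p × E is nonzero: p not parallel to E

# In the principal-axis (eigenvector) basis, alpha has no off-diagonal elements
evals, evecs = np.linalg.eigh(alpha)
alpha_principal = evecs.T @ alpha @ evecs
assert np.allclose(alpha_principal, np.diag(evals))

print("eigenvalues:", np.round(evals, 6))
```

The eigenvalues come out as α_⊥ (twice, for the two directions perpendicular to u) and α_∥ (along u), which is exactly the principal-axis picture of section C.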