
<p><strong>APPENDIX A</strong></p><p><strong>TENSORS AND DIFFERENTIAL FORMS</strong><br><strong>ON VECTOR SPACES</strong></p><p>Since only a small part of the vast and growing field of differential forms and differentiable manifolds will actually be used in this survey, we shall briefly review how the calculus of exterior differential forms on vector spaces can serve as a replacement for the more conventional vector calculus, and then introduce only the most elementary notions regarding more topologically-general differentiable manifolds, which will mostly serve as the basis for the discussion of Lie groups in the following appendix. <br>Since exterior differential forms are special kinds of tensor fields – namely, completely-antisymmetric covariant ones – and tensors are important to physics in their own right, we shall first review the basic notions concerning tensors and multilinear algebra. Presumably, the reader is familiar with linear algebra as it is usually taught to physicists, but for the “basis-free” approach to linear and multilinear algebra (which we shall not always adhere to fanatically), it would also help to have some familiarity with the more “abstract-algebraic” approach to linear algebra, such as one might learn from Hoffman and Kunze [<strong>1</strong>].</p><p><strong>1. Tensor algebra.</strong> – A tensor algebra is an algebra in which multiplication takes the form of the tensor product.</p><p><em>a. Tensor product. 
</em>– Although the tensor product of vector spaces can be given a rigorous definition in a more abstract-algebraic context (see Greub [<strong>2</strong>], for instance), for the purposes of actual calculations with tensors and tensor fields, it is usually sufficient to say that if <em>V</em> and <em>W</em> are vector spaces of dimensions <em>n</em> and <em>m</em>, respectively, then the <em>tensor product V</em> ⊗ <em>W</em> will be a vector space of dimension <em>nm</em> whose elements are finite linear combinations of elements of the form <strong>v</strong> ⊗ <strong>w</strong>, where <strong>v</strong> is a vector in <em>V</em> and <strong>w</strong> is a vector in <em>W</em>. The tensor product ⊗ then takes the form of a bilinear map <em>V</em> × <em>W</em> → <em>V</em> ⊗ <em>W</em>, (<strong>v</strong>, <strong>w</strong>) ↦ <strong>v</strong> ⊗ <strong>w</strong>. In particular, that kind of product is not closed, since the tensor product of two vectors will belong to a different vector space. <br><em>Bilinearity</em> means that the map is linear in each factor individually, but not collectively linear. Hence:</p><p>(α <strong>v</strong> + β <strong>v</strong>′) ⊗ <strong>w</strong> = α <strong>v</strong> ⊗ <strong>w</strong> + β <strong>v</strong>′ ⊗ <strong>w</strong>, (1.1)</p><p><strong>v</strong> ⊗ (α <strong>w</strong> + β <strong>w</strong>′) = α <strong>v</strong> ⊗ <strong>w</strong> + β <strong>v</strong> ⊗ <strong>w</strong>′. (1.2)</p><p>One can also see that this means that the tensor product is left and right distributive over vector addition.
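As a numerical sketch of the bilinearity rule (1.1): representing vectors by their component lists, the components of <strong>v</strong> ⊗ <strong>w</strong> are just the products of components, so linearity in each factor can be checked directly. The helper names and the dimensions <em>n</em> = 3, <em>m</em> = 2 below are purely illustrative, not from the text.

```python
# A sketch of bilinearity (1.1): on components, the tensor product of v and w
# is the n-by-m array of products v[i] * w[a].  Helper names are illustrative.

def tensor_product(v, w):
    """Component array of v (x) w: result[i][a] = v[i] * w[a]."""
    return [[vi * wa for wa in w] for vi in v]

def add(s, t):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(s, t)]

def scale(c, t):
    return [[c * x for x in row] for row in t]

v, vp = [1.0, 2.0, -1.0], [0.5, 0.0, 3.0]
w = [2.0, -1.0]
alpha, beta = 2.0, -3.0

# (alpha v + beta v') (x) w  ==  alpha (v (x) w) + beta (v' (x) w):
lhs = tensor_product([alpha*x + beta*y for x, y in zip(v, vp)], w)
rhs = add(scale(alpha, tensor_product(v, w)), scale(beta, tensor_product(vp, w)))
assert lhs == rhs
```

Linearity in the second factor, rule (1.2), can be checked the same way.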
<br>Because of this bilinearity, if {<strong>e</strong><sub><em>i</em></sub>, <em>i</em> = 1, …, <em>n</em>} is a basis for <em>V</em> and {<strong>f</strong><sub><em>a</em></sub>, <em>a</em> = 1, …, <em>m</em>} is a basis for <em>W</em> then {<strong>e</strong><sub><em>i</em></sub> ⊗ <strong>f</strong><sub><em>a</em></sub>, <em>i</em> = 1, …, <em>n</em>, <em>a</em> = 1, …, <em>m</em>} will constitute a basis for <em>V</em> ⊗ <em>W</em>. Hence, if <strong>t</strong> is an element of <em>V</em> ⊗ <em>W</em> then it can be expressed as a linear combination of the basis elements in the form:</p><p><strong>t</strong> = <em>t</em><sup><em>ia</em></sup> <strong>e</strong><sub><em>i</em></sub> ⊗ <strong>f</strong><sub><em>a</em></sub> ≡ ∑<sub><em>i</em>=1</sub><sup><em>n</em></sup> ∑<sub><em>a</em>=1</sub><sup><em>m</em></sup> <em>t</em><sup><em>ia</em></sup> <strong>e</strong><sub><em>i</em></sub> ⊗ <strong>f</strong><sub><em>a</em></sub>. (1.3)</p><p>The numbers <em>t</em><sup><em>ia</em></sup> are referred to as the <em>components</em> of <strong>t</strong> with respect to the chosen basis on <em>V</em> ⊗ <em>W</em>.
Most of the literature of theoretical physics, even up to the present era, deals exclusively with the components of tensors, although if brevity be the soul of wit then one can easily see that dealing with the intrinsic objects, such as <strong>v</strong>, <strong>w</strong>, and <strong>t</strong>, can add a certain clarity and conciseness to one’s mathematical expressions, even if it does involve a small investment of abstraction in the process. <br>One can also use the bilinearity of the tensor product to express the tensor product <strong>v</strong> ⊗ <strong>w</strong> in terms of components. Suppose that <strong>v</strong> = <em>v</em><sup><em>i</em></sup> <strong>e</strong><sub><em>i</em></sub> and <strong>w</strong> = <em>w</em><sup><em>a</em></sup> <strong>f</strong><sub><em>a</em></sub> (<sup>1</sup>). One will then have:</p><p><strong>v</strong> ⊗ <strong>w</strong> = <em>v</em><sup><em>i</em></sup> <em>w</em><sup><em>a</em></sup> <strong>e</strong><sub><em>i</em></sub> ⊗ <strong>f</strong><sub><em>a</em></sub>; (1.4)</p><p>i.e., the components of <strong>v</strong> ⊗ <strong>w</strong> with respect to the chosen basis will be <em>v</em><sup><em>i</em></sup> <em>w</em><sup><em>a</em></sup>. <br>We now see that there are two distinct types of elements in <em>V</em> ⊗ <em>W</em>, namely, <em>decomposable</em> elements, which have the form <strong>v</strong> ⊗ <strong>w</strong>, and <em>indecomposable</em> elements, which have the more general form of finite linear combinations of decomposable elements, as in (1.3). The fact that not all elements are decomposable is due to the fact that linear combinations of decomposable elements do not have to be decomposable.
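The distinction can be made concrete in the smallest interesting case: for <em>n</em> = <em>m</em> = 2, the component array <em>t</em><sup><em>ia</em></sup> is a 2×2 matrix, and <strong>t</strong> is decomposable precisely when that matrix is singular. The numerical values below are illustrative.

```python
# Decomposability test for n = m = 2: the components t^{ia} form a 2x2 matrix,
# and t = v (x) w exactly when that matrix has vanishing determinant.

def det2(t):
    return t[0][0]*t[1][1] - t[0][1]*t[1][0]

v, w = [1.0, 2.0], [3.0, -1.0]
decomposable = [[vi * wa for wa in w] for vi in v]   # components of v (x) w
assert det2(decomposable) == 0.0                     # a product array is always singular

# e_1 (x) f_1 + e_2 (x) f_2: a sum of decomposables that is not itself
# decomposable, since its component matrix (the identity) is invertible.
indecomposable = [[1.0, 0.0], [0.0, 1.0]]
assert det2(indecomposable) != 0.0
```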
In the case of exterior algebra, which we shall discuss shortly, the decomposable elements will sometimes define quadric hypersurfaces in the tensor product space, rather than vector subspaces.</p><p><em>b. Contravariant tensors</em>. – A common situation in tensor algebra (as well as in physics) is when the vector space <em>W</em> is the vector space <em>V</em>. One can then refer to the elements of <em>V</em> ⊗ <em>V</em> as <em>second-rank contravariant</em> tensors (over <em>V</em>). The term “second-rank” refers to the fact that there are two copies of <em>V</em> in the tensor product. Hence, a basis can be defined by {<strong>e</strong><sub><em>i</em></sub> ⊗ <strong>e</strong><sub><em>j</em></sub>, <em>i</em>, <em>j</em> = 1, …, <em>n</em>} and components will look like <em>t</em><sup><em>ij</em></sup>:</p><p><strong>t</strong> = <em>t</em><sup><em>ij</em></sup> <strong>e</strong><sub><em>i</em></sub> ⊗ <strong>e</strong><sub><em>j</em></sub>. (1.5)</p><p>The term “contravariant” refers to the way that the components transform under a change of basis.
In particular, if:</p><p><strong>ẽ</strong><sub><em>i</em></sub> = <strong>e</strong><sub><em>j</em></sub> <em>A</em><sup><em>j</em></sup><sub><em>i</em></sub> (1.6)</p><p>is a change of linear basis in <em>V</em> (so <em>A</em><sup><em>j</em></sup><sub><em>i</em></sub> is an invertible matrix) then the components of <strong>v</strong> and <strong>w</strong> (which were <em>v</em><sup><em>i</em></sup> and <em>w</em><sup><em>i</em></sup> with respect to <strong>e</strong><sub><em>i</em></sub>) will now be:</p><p><em>ṽ</em><sup><em>i</em></sup> = <em>Ã</em><sup><em>i</em></sup><sub><em>j</em></sub> <em>v</em><sup><em>j</em></sup>, <em>w̃</em><sup><em>i</em></sup> = <em>Ã</em><sup><em>i</em></sup><sub><em>j</em></sub> <em>w</em><sup><em>j</em></sup> (1.7)</p><p>with respect to <strong>ẽ</strong><sub><em>i</em></sub>.</p><p>(<sup>1</sup>) From now on, we shall invoke the “summation convention,” which is often attributed to Einstein: namely, whenever a superscript agrees with a subscript, one sums over all defined values of the index in question. In the occasional situations where it is necessary to refer to components with doubled indices, such as the diagonal elements of matrices, the convention will usually be rescinded explicitly.</p><p>
The notation <em>Ã</em><sup><em>i</em></sup><sub><em>j</em></sub> refers to the inverse of the matrix <em>A</em><sup><em>i</em></sup><sub><em>j</em></sub>, so this type of transformation is referred to as <em>contravariant</em>. <br>From the bilinearity of the tensor product, the resulting transformation of the components of <strong>v</strong> ⊗ <strong>w</strong> will be:</p><p><em>ṽ</em><sup><em>i</em></sup> <em>w̃</em><sup><em>j</em></sup> = <em>Ã</em><sup><em>i</em></sup><sub><em>k</em></sub> <em>Ã</em><sup><em>j</em></sup><sub><em>l</em></sub> <em>v</em><sup><em>k</em></sup> <em>w</em><sup><em>l</em></sup>, (1.8)</p><p>and more generally (also due to the bilinearity of the tensor product), the components <em>t</em><sup><em>ij</em></sup> of <strong>t</strong>, as in (1.5), will transform to:</p><p><em>t̃</em><sup><em>ij</em></sup> = <em>Ã</em><sup><em>i</em></sup><sub><em>k</em></sub> <em>Ã</em><sup><em>j</em></sup><sub><em>l</em></sub> <em>t</em><sup><em>kl</em></sup>. (1.9)</p><p>Hence, one can say that they are <em>doubly-contravariant</em>. <br>Since the tensor product is also associative:</p><p>(<strong>a</strong> ⊗ <strong>b</strong>) ⊗ <strong>c</strong> = <strong>a</strong> ⊗ (<strong>b</strong> ⊗ <strong>c</strong>), (1.10)</p><p>one can define higher tensor products <em>V</em> ⊗ … ⊗ <em>V</em> of a finite number of copies of <em>V</em>, and the elements of ⊗<sub><em>k</em></sub><em>V</em> = <em>V</em> ⊗ … ⊗ <em>V</em> when there are – say – <em>k</em> copies of <em>V</em> are then referred to as <em>rank</em>-<em>k contravariant tensors</em> over <em>V</em>.
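The change-of-basis rules (1.6) and (1.7) can be verified numerically for <em>n</em> = 2: transforming the components with the inverse matrix is exactly what keeps the vector itself unchanged. Matrix values and the helper names (inv2, matvec) are illustrative.

```python
# A sketch of (1.6)-(1.7) for n = 2: new basis vectors are the columns of A
# (in the old frame), and components transform with the inverse of A.

def inv2(A):
    """Inverse of a 2x2 matrix stored as nested lists."""
    d = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[ A[1][1]/d, -A[0][1]/d],
            [-A[1][0]/d,  A[0][0]/d]]

def matvec(A, v):
    return [sum(A[i][j]*v[j] for j in range(2)) for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]   # change of basis: the i-th new basis vector is column i of A
v = [3.0, -2.0]                # components v^i in the old basis
v_new = matvec(inv2(A), v)     # contravariant rule (1.7)

# The vector itself is unchanged: summing v_new[i] times the new basis
# columns reproduces the old components of v.
recovered = [sum(A[j][i]*v_new[i] for i in range(2)) for j in range(2)]
assert recovered == v
```

The rank-two law (1.9) works the same way, with one factor of the inverse matrix per index.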
Hence, a basis for ⊗<sub><em>k</em></sub><em>V</em> can be given by all tensor products of the form <strong>e</strong><sub><em>i</em><sub>1</sub></sub> ⊗ … ⊗ <strong>e</strong><sub><em>i</em><sub><em>k</em></sub></sub>, the components of a general element <strong>t</strong> ∈ ⊗<sub><em>k</em></sub><em>V</em> will take the form <em>t</em><sup><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sup>, and they will transform to:</p><p><em>t̃</em><sup><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sup> = <em>Ã</em><sup><em>i</em><sub>1</sub></sup><sub><em>j</em><sub>1</sub></sub> ⋯ <em>Ã</em><sup><em>i</em><sub><em>k</em></sub></sup><sub><em>j</em><sub><em>k</em></sub></sub> <em>t</em><sup><em>j</em><sub>1</sub>⋯<em>j</em><sub><em>k</em></sub></sup> (1.11)</p><p>under a change of basis on <em>V</em>. <br>Clearly, the dimension of ⊗<sub><em>k</em></sub><em>V</em> will be <em>n</em><sup><em>k</em></sup>. One can also form tensors of mixed rank over <em>V</em> by forming finite linear combinations of elements in various ⊗<sub><em>k</em></sub><em>V</em>’s for different values of <em>k</em>. For instance, one might form the linear combination <strong>a</strong> + <strong>b</strong> ⊗ <strong>c</strong>.
Such expressions cannot generally be simplified further, unless some further structure is imposed upon ⊗<sub><em>k</em></sub><em>V</em>, which will usually be another algebra, for us. A tensor that does not have mixed rank is referred to as <em>homogeneous</em>, although in most cases, that will be tacitly implied.</p><p>The direct sum ⊗<sub>0</sub><em>V</em> ⊕ ⊗<sub>1</sub><em>V</em> ⊕ … ⊕ ⊗<sub><em>k</em></sub><em>V</em> ⊕ … of all the vector spaces ⊗<sub><em>k</em></sub><em>V</em> as <em>k</em> varies from 0 (⊗<sub>0</sub><em>V</em> ≡ ℝ) to infinity will be denoted by simply ⊗<sub>*</sub><em>V</em>. It will be referred to as the <em>contravariant tensor algebra</em> over <em>V</em>, and will clearly be infinite-dimensional. One sees that the tensor product then makes the vector space ⊗<sub>*</sub><em>V</em> into an <em>algebra</em>, since it defines a bilinear binary product on the vector space. It will also be associative and possess a unity element (namely, 1) as an algebra. In addition, since the tensor product of a homogeneous tensor of rank <em>k</em> with one of rank <em>l</em> will be a homogeneous tensor of rank <em>k</em> + <em>l</em>, one refers to the algebra ⊗<sub>*</sub><em>V</em> as a <em>graded</em> algebra.</p><p><em>c. Covariant tensors.</em> – The dual space <em>V</em><sup>*</sup> to the vector space <em>V</em> (viz., the vector space of all linear functionals on <em>V</em>) is itself a vector space, so one can still define tensors of any rank over it.
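Before building tensors over the dual space, the grading rule just described (rank <em>k</em> times rank <em>l</em> gives rank <em>k</em> + <em>l</em>), together with the associativity rule (1.10), can be checked on components. The recursive helpers below, with components stored as nested lists, are purely illustrative.

```python
# Nested-list components: the tensor product of a rank-k and a rank-l
# contravariant tensor has rank k + l, and the product is associative (1.10).

def tensor(s, t):
    """Tensor product on component arrays stored as nested lists (rank 0 = scalar)."""
    if isinstance(s, list):
        return [tensor(si, t) for si in s]
    if isinstance(t, list):
        return [tensor(s, u) for u in t]
    return s * t

def rank(t):
    return 1 + rank(t[0]) if isinstance(t, list) else 0

a = [1.0, 2.0]                     # rank 1, n = 2
b = [[0.0, 1.0], [1.0, 0.0]]       # rank 2
c = [3.0, -1.0]                    # rank 1

assert rank(tensor(a, b)) == 3                              # ranks add: 1 + 2 = 3
assert tensor(a, b)[1][0][1] == a[1] * b[0][1]              # components just multiply
assert tensor(tensor(a, b), c) == tensor(a, tensor(b, c))   # associativity (1.10)
```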
Hence, ⊗<sup><em>k</em></sup><em>V</em> ≡ ⊗<sub><em>k</em></sub><em>V</em><sup>*</sup> = <em>V</em><sup>*</sup> ⊗ … ⊗ <em>V</em><sup>*</sup> (<em>k</em> copies) is a vector space of dimension <em>n</em><sup><em>k</em></sup>, and if {θ<sup><em>i</em></sup>, <em>i</em> = 1, …, <em>n</em>} is a basis for <em>V</em><sup>*</sup> then a basis for ⊗<sup><em>k</em></sup><em>V</em> can be defined by θ<sup><em>i</em><sub>1</sub></sup> ⊗ … ⊗ θ<sup><em>i</em><sub><em>k</em></sub></sup>, and a general element <em>t</em> ∈ ⊗<sup><em>k</em></sup><em>V</em> will then take the form:</p><p><em>t</em> = <em>t</em><sub><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sub> θ<sup><em>i</em><sub>1</sub></sup> ⊗ … ⊗ θ<sup><em>i</em><sub><em>k</em></sub></sup>. (1.12)</p><p>The scalar components <em>t</em><sub><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sub> are then the components of a rank-<em>k</em> covariant tensor over <em>V</em>. Under a change of basis on <em>V</em><sup>*</sup>:</p><p>θ̃<sup><em>i</em></sup> = <em>Ã</em><sup><em>i</em></sup><sub><em>j</em></sub> θ<sup><em>j</em></sup>, (1.13)</p><p>the components <em>t</em><sub><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sub> will transform to:</p><p><em>t̃</em><sub><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sub> = <em>A</em><sup><em>j</em><sub>1</sub></sup><sub><em>i</em><sub>1</sub></sub> ⋯ <em>A</em><sup><em>j</em><sub><em>k</em></sub></sup><sub><em>i</em><sub><em>k</em></sub></sub> <em>t</em><sub><em>j</em><sub>1</sub>⋯<em>j</em><sub><em>k</em></sub></sub>. (1.14)</p><p>Of particular interest is the case in which the basis θ<sup><em>i</em></sup> for <em>V</em><sup>*</sup> is <em>reciprocal</em> to the basis <strong>e</strong><sub><em>i</em></sub> for <em>V</em>. In that case, one will have, by definition:</p><p>θ<sup><em>i</em></sup>(<strong>e</strong><sub><em>j</em></sub>) = δ<sup><em>i</em></sup><sub><em>j</em></sub> = 1 if <em>i</em> = <em>j</em>, 0 if <em>i</em> ≠ <em>j</em>. (1.15)</p><p>δ<sup><em>i</em></sup><sub><em>j</em></sub> is then the <em>Kronecker delta symbol.
</em></p><p>In order for a reciprocal basis to go to a reciprocal basis under a change of basis <strong>e</strong><sub><em>i</em></sub>, such as in (1.6), one must have θ̃<sup><em>i</em></sup> = <em>B</em><sup><em>i</em></sup><sub><em>l</em></sub> θ<sup><em>l</em></sup>, with:</p><p>δ<sup><em>i</em></sup><sub><em>j</em></sub> = θ̃<sup><em>i</em></sup>(<strong>ẽ</strong><sub><em>j</em></sub>) = <em>B</em><sup><em>i</em></sup><sub><em>l</em></sub> θ<sup><em>l</em></sup>(<strong>e</strong><sub><em>k</em></sub> <em>A</em><sup><em>k</em></sup><sub><em>j</em></sub>) = <em>B</em><sup><em>i</em></sup><sub><em>l</em></sub> <em>A</em><sup><em>k</em></sup><sub><em>j</em></sub> θ<sup><em>l</em></sup>(<strong>e</strong><sub><em>k</em></sub>) = <em>B</em><sup><em>i</em></sup><sub><em>l</em></sub> <em>A</em><sup><em>k</em></sup><sub><em>j</em></sub> δ<sup><em>l</em></sup><sub><em>k</em></sub> = <em>B</em><sup><em>i</em></sup><sub><em>k</em></sub> <em>A</em><sup><em>k</em></sup><sub><em>j</em></sub>.</p><p>Hence, <em>B</em><sup><em>i</em></sup><sub><em>j</em></sub> can only be <em>Ã</em><sup><em>i</em></sup><sub><em>j</em></sub>. One then says that the basis θ<sup><em>i</em></sup> transforms <em>contragrediently</em> to the basis <strong>e</strong><sub><em>i</em></sub>, and in fact, so do the components of tensors over <em>V</em>.
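The conclusion that <em>B</em> can only be the inverse of <em>A</em> admits a quick numerical check for <em>n</em> = 2: if the new basis vectors are the columns of <em>A</em> in the old frame, then pairing the rows of the inverse matrix against them reproduces the Kronecker delta. The helper inv2 is an illustrative 2×2 inverse.

```python
# Checking the contragredient rule for n = 2: take the new basis vectors e~_j
# to be the columns of A, and the candidate reciprocal covectors theta~^i to be
# the rows of B = A^{-1}; their pairing must be the Kronecker delta.

def inv2(A):
    d = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[ A[1][1]/d, -A[0][1]/d],
            [-A[1][0]/d,  A[0][0]/d]]

A = [[2.0, 1.0], [1.0, 1.0]]
B = inv2(A)

# pairing[i][j] = theta~^i(e~_j) = sum_k B^i_k A^k_j
pairing = [[sum(B[i][k]*A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
assert pairing == [[1.0, 0.0], [0.0, 1.0]]
```

Any other choice of <em>B</em> would fail this pairing test, which is the content of the derivation above.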
</p><p>One can also form the direct sum ⊗<sup style="top: -0.46em;">*</sup><em>V </em>≡ ⊗<sub style="top: 0.12em;">*</sub><em>V</em><sup style="top: -0.46em;">* </sup>= ⊗<sup style="top: -0.46em;">0</sup><em>V </em>⊕ ⊗<sup style="top: -0.46em;">1</sup><em>V </em>⊕ … ⊕ ⊗<sup style="top: -0.46em;"><em>k</em></sup><em>V </em>⊕ … over </p><p>all <em>k </em>and call it the <em>algebra of covariant tensors over V</em>. It will once more be an infinitedimensional graded associative algebra with unity. </p><p><em>d. Tensors mixed variance. </em>– One can take the tensor product ⊗<sup style="top: -0.44em;"><em>k</em></sup><em>V </em>≡ (⊗<sup style="top: -0.46em;"><em>k</em></sup><em>V</em>) ⊗ (⊗ <em>V</em>) </p><p><em>l</em></p><p><em>l</em></p><p>and obtain a vector space of dimension <em>n</em><sup style="top: -0.46em;"><em>k</em>+<em>l </em></sup>whose elements are finite linear combinations of (homogeneous) elements of the form α<sup style="top: -0.44em;"><em>i</em></sup><sup style="top: 0.14em;">1 </sup>⊗ … ⊗ α<sup style="top: -0.46em;"><em>i</em></sup><sup style="top: 0.16em;"><em>k </em></sup>⊗ <strong>v </strong><sub style="top: 0.26em;"><em>j </em></sub>⊗ … ⊗ <strong>v </strong><sub style="top: 0.26em;"><em>j </em></sub>, in which the α’s </p><p>1</p><p><em>l</em></p><p>are linear functionals on <em>V</em>, and the <strong>v</strong>’s are vectors in <em>V</em>. Such an element will then be a tensor that is <em>k</em>-times covariant and <em>l</em>-times contravariant. 
Hence, a basis for ⊗<sup><em>k</em></sup><sub><em>l</em></sub><em>V</em> can be defined by the tensor products θ<sup><em>i</em><sub>1</sub></sup> ⊗ … ⊗ θ<sup><em>i</em><sub><em>k</em></sub></sup> ⊗ <strong>e</strong><sub><em>j</em><sub>1</sub></sub> ⊗ … ⊗ <strong>e</strong><sub><em>j</em><sub><em>l</em></sub></sub>, and a general element <em>t</em> ∈ ⊗<sup><em>k</em></sup><sub><em>l</em></sub><em>V</em> will take the component form:</p><p><em>t</em> = <em>t</em><sup><em>j</em><sub>1</sub>⋯<em>j</em><sub><em>l</em></sub></sup><sub><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sub> θ<sup><em>i</em><sub>1</sub></sup> ⊗ … ⊗ θ<sup><em>i</em><sub><em>k</em></sub></sup> ⊗ <strong>e</strong><sub><em>j</em><sub>1</sub></sub> ⊗ … ⊗ <strong>e</strong><sub><em>j</em><sub><em>l</em></sub></sub>. (1.16)</p><p>Under a change of basis (1.6) for <em>V</em> and the contragredient change of the reciprocal basis for <em>V</em><sup>*</sup>, the components <em>t</em><sup><em>j</em><sub>1</sub>⋯<em>j</em><sub><em>l</em></sub></sup><sub><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sub> will become:</p><p><em>t̃</em><sup><em>j</em><sub>1</sub>⋯<em>j</em><sub><em>l</em></sub></sup><sub><em>i</em><sub>1</sub>⋯<em>i</em><sub><em>k</em></sub></sub> = <em>Ã</em><sup><em>j</em><sub>1</sub></sup><sub><em>s</em><sub>1</sub></sub> ⋯ <em>Ã</em><sup><em>j</em><sub><em>l</em></sub></sup><sub><em>s</em><sub><em>l</em></sub></sub> <em>A</em><sup><em>r</em><sub>1</sub></sup><sub><em>i</em><sub>1</sub></sub> ⋯ <em>A</em><sup><em>r</em><sub><em>k</em></sub></sup><sub><em>i</em><sub><em>k</em></sub></sub> <em>t</em><sup><em>s</em><sub>1</sub>⋯<em>s</em><sub><em>l</em></sub></sup><sub><em>r</em><sub>1</sub>⋯<em>r</em><sub><em>k</em></sub></sub>. (1.17)</p>