TENSOR ALGEBRA

INNA ZAKHAREVICH

In these notes we will be working in a field F with char F ≠ 2. The goal of these notes is to introduce tensor products and skew-symmetric tensor products of vector spaces, with the goal of introducing determinants as an invariant of a linear transformation. Hoffman & Kunze has a discussion of determinants from this perspective as well, in Chapter 5, Sections 6 and 7; however, they discuss it from the dual perspective, in terms of multilinear functionals.

Let V be a vector space over F. Recall that V* = L(V, F), and that we discussed that for finite-dimensional V there exists a (non-canonical!) isomorphism V → V*. However, for infinite-dimensional V this is not necessarily the case. We do have a linear transformation V → V**, however, defined by sending the vector v to the map f_v : V* → F given by f_v(α) = α(v). Equivalently (you are asked to show that this equivalence holds on the next homework), there is a map

    ev : V × V* → F    given by    (v, α) ↦ α(v).

Note that this map is not linear: if it were, then

    α(v) + β(w) = ev(v, α) + ev(w, β) = ev((v, α) + (w, β)) = ev(v + w, α + β) = (α + β)(v + w).

However, note that directly from the definition we can see that ev is linear in each variable separately. Thus

    (α + β)(v + w) = (α + β)(v) + (α + β)(w) = α(v) + β(v) + α(w) + β(w),

which is not in general equal to α(v) + β(w). Thus we need either to develop a separate theory of bilinear maps, or to construct a representing object: a vector space W such that

    L(W, F) ≅ {bilinear maps V × V* → F}.

This is where we diverge from Hoffman & Kunze. They proceed to develop the theory of bilinear maps, and we will construct the representing object. In addition, we solve a slightly more general problem which will in fact be easier to work with: for any two vector spaces V and W, we will construct a vector space V ⊗ W which will have the property that

    L(V ⊗ W, Z) ≅ {bilinear maps V × W → Z}

for any vector space Z.

We will do the construction in two ways. First, we will do a completely general construction, which works for both vector spaces over fields and modules over rings. The advantage of this construction is that it involves no choices, and thus showing that various nice properties hold will be easy with this construction. After this we will do a much more computational construction using bases; this will not be a natural construction, but it will make some later computations easier.

Let A be the free vector space on pairs v ⊗ w, with v ∈ V and w ∈ W; by this we mean that, as a set, A is the set of formal linear combinations of vectors v ⊗ w, for v ∈ V and w ∈ W. A is very large, and if F is infinite it will always be infinite-dimensional. From this definition we have that

    L(A, Z) = {set maps V × W → Z}.

Now obviously the set of set maps V × W → Z is much larger than the set of bilinear maps, so we need to make A smaller. We do this by enforcing the bilinearity: for any f ∈ L(V ⊗ W, Z) we want to have f((v + v′) ⊗ w) = f(v ⊗ w) + f(v′ ⊗ w); we can enforce this by setting (v + v′) ⊗ w = v ⊗ w + v′ ⊗ w in V ⊗ W. We define

    A_0 = span{ (v + v′) ⊗ w − v ⊗ w − v′ ⊗ w,
                v ⊗ (w + w′) − v ⊗ w − v ⊗ w′,
                (av) ⊗ w − a(v ⊗ w),
                v ⊗ (aw) − a(v ⊗ w)   :   a ∈ F, v, v′ ∈ V, w, w′ ∈ W }.

We then define V ⊗ W = A/A_0; since A and A_0 are abelian groups this definition is well-defined as an abelian group.
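As a sanity check on this construction, here is a short Python/NumPy sketch (not part of the original notes). It works in the concrete model V = R^m, W = R^n, with v ⊗ w realized as the outer product np.outer(v, w); that identification is only justified by the basis construction later in these notes, so the code is an illustration of the relations spanning A_0, not a definition.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 4
    v, v2 = rng.standard_normal(m), rng.standard_normal(m)   # v, v' in V = R^m
    w, w2 = rng.standard_normal(n), rng.standard_normal(n)   # w, w' in W = R^n
    a = 2.5                                                   # a scalar in F = R

    t = np.outer   # model v ⊗ w as the m-by-n matrix with entries v_i * w_j

    # Each of the four generators of A_0 maps to the zero matrix in this model,
    # i.e. the map (v, w) ↦ v ⊗ w is bilinear:
    assert np.allclose(t(v + v2, w), t(v, w) + t(v2, w))   # (v+v')⊗w − v⊗w − v'⊗w
    assert np.allclose(t(v, w + w2), t(v, w) + t(v, w2))   # v⊗(w+w') − v⊗w − v⊗w'
    assert np.allclose(t(a * v, w), a * t(v, w))           # (av)⊗w − a(v⊗w)
    assert np.allclose(t(v, a * w), a * t(v, w))           # v⊗(aw) − a(v⊗w)

In other words, the outer-product model already satisfies every relation we impose on A, which is why the quotient A/A_0 is a reasonable candidate for "formal products v ⊗ w subject only to bilinearity".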
All that needs to be checked is that it inherits a scalar multiplication from A. (Note that this definition works equally well if V and W are R-modules; this may be helpful on the homework.)

To get a bilinear map from a linear map V ⊗ W → Z it suffices to check that (1) the map V × W → V ⊗ W given by (v, w) ↦ v ⊗ w is bilinear, and (2) the composition of a bilinear map and a linear map is bilinear. Thus to check that V ⊗ W gives the right bijection it remains to check that a bilinear map V × W → Z gives us a linear map V ⊗ W → Z. Given a bilinear map f : V × W → Z we can define a linear map g : A → Z by defining g(v ⊗ w) = f(v, w) for all v ∈ V and w ∈ W. Since these form a basis of A, g is a well-defined linear map. In order to check that we get a linear map V ⊗ W → Z from g we just need to check that g(A_0) = {0}; then g descends to a well-defined linear map on A/A_0. In particular, we need to check that g is 0 on each of the four types of generators of A_0; that this holds follows directly from the bilinearity of f.

This construction of V ⊗ W is very clean and formal, but it does not give us a good model for how to work with V ⊗ W explicitly. For example, if dim V = m and dim W = n, what is dim V ⊗ W? To answer this question, we give a second construction of V ⊗ W, this time using a basis.

Let {v_1, ..., v_m} be a basis for V and {w_1, ..., w_n} be a basis for W. Then we claim that {v_i ⊗ w_j | 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for V ⊗ W. Indeed, note that any v ⊗ w ∈ V ⊗ W can be written in terms of these: if v = Σ_{i=1}^m a_i v_i and w = Σ_{j=1}^n b_j w_j then

    v ⊗ w = (Σ_i a_i v_i) ⊗ w = Σ_i (a_i v_i) ⊗ w = Σ_i a_i (v_i ⊗ w)
          = Σ_i a_i (v_i ⊗ Σ_j b_j w_j) = Σ_i a_i Σ_j v_i ⊗ (b_j w_j) = Σ_i a_i Σ_j b_j (v_i ⊗ w_j)
          = Σ_i Σ_j a_i b_j (v_i ⊗ w_j).

Thus

    V ⊗ W = span(v ⊗ w | v ∈ V, w ∈ W) = span(v_i ⊗ w_j | 1 ≤ i ≤ m, 1 ≤ j ≤ n).

We claim that this is actually a basis for V ⊗ W, and thus that we can define V ⊗ W to be the vector space with this basis. Checking that these are linearly independent directly from the definition is somewhat difficult, so we do something slightly indirect. Let Z be a vector space with a basis z_{ij}, with 1 ≤ i ≤ m and 1 ≤ j ≤ n. We clearly have a linear transformation Z → V ⊗ W given by z_{ij} ↦ v_i ⊗ w_j. We will show that this is an isomorphism by constructing an inverse. To do this, note that the above calculation of v ⊗ w in terms of the v_i ⊗ w_j defines a linear map A → Z, so it suffices to check that this is 0 on A_0; this follows directly from the definition of A_0. Note that since the v_i ⊗ w_j lie in A, the map V ⊗ W → Z is surjective, and checking that it is in fact the inverse to the map Z → V ⊗ W is straightforward. Thus we see that we could alternately define V ⊗ W as the vector space with basis v_i ⊗ w_j. From this it follows that dim V ⊗ W = mn.

Important: the vector space V ⊗ W is spanned by vectors of the form v ⊗ w, but not every vector in V ⊗ W can be written in this form. For example, if dim V, dim W ≥ 2 and {v_1, v_2} and {w_1, w_2} are linearly independent in V and W, respectively, then v_1 ⊗ w_1 + v_2 ⊗ w_2 is not a pure tensor.

Example 0.1. If W = F then V ⊗ W ≅ V.

Example 0.2. Let V = W = F, and let µ : F × F → F be the usual multiplication in F. This is a bilinear map by the distributivity property, so it gives us a linear map F ⊗ F → F.

Example 0.3. Let V, W, Z be vector spaces. Composition of linear maps is bilinear (check this!) and thus composition gives us a linear map L(W, Z) ⊗ L(V, W) → L(V, Z).

Note that if T : V → W and S : V′ → W′ are linear transformations, then we get a linear transformation T ⊗ S : V ⊗ V′ → W ⊗ W′ by defining

    (T ⊗ S)(v ⊗ v′) = (Tv) ⊗ (Sv′).

To check that this is well defined and linear it suffices to check that the map V × V′ → W ⊗ W′ given by (v, v′) ↦ (Tv) ⊗ (Sv′) is bilinear. If all of these vector spaces are finite-dimensional, then the matrix of the linear transformation T ⊗ S is the (dim W)(dim W′) × (dim V)(dim V′) matrix whose entries are the pairwise products of entries of the matrix of T with entries of the matrix of S.
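Because its entries are exactly these pairwise products, in coordinates T ⊗ S is the Kronecker product of the two matrices. Here is a short Python/NumPy sketch (not part of the original notes), assuming the coordinate identification of v ⊗ v′ with np.kron(v, v′), whose entries are the pairwise products a_i b_j of the coordinates of v and v′:

    import numpy as np

    rng = np.random.default_rng(1)
    T = rng.standard_normal((3, 2))    # matrix of T : V → W   (dim V = 2, dim W = 3)
    S = rng.standard_normal((4, 5))    # matrix of S : V' → W'  (dim V' = 5, dim W' = 4)
    v, v2 = rng.standard_normal(2), rng.standard_normal(5)

    TS = np.kron(T, S)                 # pairwise products of entries of T and S
    assert TS.shape == (3 * 4, 2 * 5)  # (dim W)(dim W') rows, (dim V)(dim V') columns

    # Defining property (T ⊗ S)(v ⊗ v') = (Tv) ⊗ (Sv'), with ⊗ of vectors modeled by np.kron:
    assert np.allclose(TS @ np.kron(v, v2), np.kron(T @ v, S @ v2))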
There is one more important definition we need before we can introduce the determinant: the skew-symmetric tensor product. This is the tensor product of a vector space with itself, with one extra relation imposed, namely that v ⊗ v′ = −v′ ⊗ v.
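To make that relation concrete, here is a minimal Python/NumPy sketch (not part of the original notes). It assumes the coordinate model of V ⊗ V as square matrices and represents the class of v ⊗ v′ after imposing the relation by the antisymmetrized outer product; this is only an illustration, not the construction itself.

    import numpy as np

    rng = np.random.default_rng(2)
    v, v2 = rng.standard_normal(5), rng.standard_normal(5)   # v, v' in V = R^5

    def wedge(x, y):
        # antisymmetrized outer product: one matrix model of the class of x ⊗ y
        # once the relation x ⊗ y = −y ⊗ x has been imposed
        return np.outer(x, y) - np.outer(y, x)

    assert np.allclose(wedge(v, v2), -wedge(v2, v))   # v ⊗ v' = −v' ⊗ v
    assert np.allclose(wedge(v, v), 0)                # hence v ⊗ v = 0 when char F ≠ 2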