
MULTILINEAR ALGEBRA NOTES

EUGENE LERMAN

Contents

1. (Multi)linear algebra
1.1. Tensor products
1.2. The Grassmann (exterior) algebra and alternating maps
1.3. Pairings

1. (Multi)linear algebra

The goal of this note is to define tensors, the tensor algebra and the Grassmann (exterior) algebra. Unless noted otherwise, all vector spaces are over the real numbers and are finite dimensional. There are two ways to think about tensors:

(1) tensors are multilinear maps;
(2) tensors are elements of a "tensor product" of two or more vector spaces.

The first way is more concrete. The second is more abstract but also more powerful.

1.1. Tensor products. We start by reviewing multilinear maps.

Definition 1.1. Let V_1, ..., V_n and U be vector spaces. A map

    f : V_1 × ··· × V_n → U,    (v_1, ..., v_n) ↦ f(v_1, ..., v_n)

(with n factors in the domain) is multilinear if for each fixed index i and each fixed (n − 1)-tuple of vectors v_1, ..., v_{i−1}, v_{i+1}, ..., v_n the map

    V_i → U,    w ↦ f(v_1, ..., v_{i−1}, w, v_{i+1}, ..., v_n)

is linear. When the number of factors is n, as above, we will also say that f is n-linear.

For example, if we identify R^{n²} ≅ R^n × ··· × R^n (n factors) by thinking of an n × n matrix as an n-tuple of column vectors, then the determinant

    det : R^n × ··· × R^n → R,    (v_1, ..., v_n) ↦ det(v_1 | ··· | v_n)

is an n-linear map. Here is an example of a bilinear map: any inner product on a vector space V,

    V × V ∋ (v, w) ↦ v · w ∈ R,

is bilinear.

There is no standard notation for the space of n-linear maps from V_1 × ··· × V_n to U. We will denote it by Mult(V_1 × ··· × V_n, U) = Mult_n(V_1 × ··· × V_n, U) (the subscript n indicates that these are n-linear maps). This space, Mult(V_1 × ··· × V_n, U), is a vector space: any linear combination of two n-linear maps is n-linear.

We now take a closer look at the space of bilinear maps Mult_2(V × W, U).
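The defining conditions above can be checked numerically. The following short sketch (a hypothetical illustration using NumPy, not part of the notes) verifies linearity of the determinant in its first column, with the remaining columns held fixed, and bilinearity of the dot product:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a, b = 2.0, -3.0

# Fix the columns v_2, ..., v_n; det is then linear in the first column.
v1, u1 = rng.standard_normal(n), rng.standard_normal(n)
rest = rng.standard_normal((n, n - 1))  # the fixed columns

def det_with_first(col):
    # det(col | v_2 | ... | v_n) as a function of the first column only
    return np.linalg.det(np.column_stack([col, rest]))

lhs = det_with_first(a * v1 + b * u1)
rhs = a * det_with_first(v1) + b * det_with_first(u1)
assert np.isclose(lhs, rhs)  # linearity in the first slot

# The dot product is bilinear: linear in each slot separately.
v, w, x = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(np.dot(a * v + b * x, w),
                  a * np.dot(v, w) + b * np.dot(x, w))
```

The same style of check works for any slot of any multilinear map: freeze all other arguments and test linearity in the remaining one on random inputs.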
This case is complicated enough to show what happens with multilinear maps in general, but simple enough not to bog us down in notation.

(typeset February 21, 2011)

Lemma 1.2. Let {v_i}, {w_j} and {u_k} denote bases of V, W and U respectively, and {v_i^*}, {w_j^*} and {u_k^*} the corresponding dual bases. Then the maps

    φ_{ij}^k : V × W → U,    φ_{ij}^k(v, w) = v_i^*(v) w_j^*(w) u_k

are bilinear and form a basis of Mult_2(V × W, U). Hence

    dim Mult_2(V × W, U) = dim V · dim W · dim U.

Proof. It is easy to see that the φ_{ij}^k are bilinear. Next, for any b ∈ Mult_2(V × W, U), any w ∈ W and any v ∈ V,

    b(v, w) = b( Σ_i v_i^*(v) v_i , Σ_j w_j^*(w) w_j )
            = Σ_{i,j} v_i^*(v) w_j^*(w) b(v_i, w_j)
            = Σ_{i,j,k} v_i^*(v) w_j^*(w) u_k^*(b(v_i, w_j)) u_k
            = Σ_{i,j,k} u_k^*(b(v_i, w_j)) φ_{ij}^k(v, w).

Hence the maps φ_{ij}^k span Mult_2(V × W, U). Also, the collection of numbers u_k^*(b(v_i, w_j)) uniquely determines the bilinear map b. Hence the φ_{ij}^k are linearly independent. □

We now turn to the definition of the tensor product V ⊗ W [pronounced "V tensor W"] of two vector spaces V and W. Informally it consists of finite linear combinations of symbols v ⊗ w, where v ∈ V and w ∈ W. Additionally, these symbols are subject to the following identities:

    (v_1 + v_2) ⊗ w − v_1 ⊗ w − v_2 ⊗ w = 0
    v ⊗ (w_1 + w_2) − v ⊗ w_1 − v ⊗ w_2 = 0
    α(v ⊗ w) − (αv) ⊗ w = 0
    α(v ⊗ w) − v ⊗ (αw) = 0

for all v, v_1, v_2 ∈ V, w, w_1, w_2 ∈ W and α ∈ R. These identities simply say that the map ⊗ : V × W → V ⊗ W, (v, w) ↦ v ⊗ w, is a bilinear map. The fact that everything in V ⊗ W is a linear combination of symbols v ⊗ w means that the image of the map ⊗ : V × W → V ⊗ W spans V ⊗ W.¹

Here is the formal definition of the tensor product of two vector spaces.

Definition 1.3. A tensor product of two finite dimensional vector spaces V and W is a vector space V ⊗ W together with a bilinear map ⊗ : V × W → V ⊗ W, (v, w) ↦ v ⊗ w,² such that for any bilinear map b : V × W → U there is a unique linear map b̄ : V ⊗ W → U with b̄(v ⊗ w) = b(v, w).
That is, the diagram

            b
    V × W -----> U
        \       ^
      ⊗  \     / b̄
          v   /
         V ⊗ W

commutes. The existence of the unique map b̄ satisfying the above condition is called the universal property of the tensor product.

This definition is quite abstract. It is not clear that such objects exist and, if they do exist, that they are unique. Setting the question of existence and uniqueness of tensor products aside, let us sort out the relationship between V ⊗ W and the spaces of bilinear maps Mult(V × W, U). Recall that Hom(X, Y) is the space of all linear maps from a vector space X to a vector space Y, and is itself a vector space.

Lemma 1.4. Assume that V ⊗ W exists. Then there is a canonical isomorphism

    Hom(V ⊗ W, U) ≅ Mult(V × W, U).

Proof. The isomorphism in question is built into the definition of the tensor product. Given a linear map A : V ⊗ W → U, the composition A ∘ ⊗ : V × W → U is bilinear. And conversely, given a bilinear map b ∈ Mult(V × W, U), there is a unique linear map b̄ : V ⊗ W → U so that (b̄ ∘ ⊗)(v, w) = b(v, w) for all (v, w) ∈ V × W. In other words, the maps

    Hom(V ⊗ W, U) ∋ A ↦ A ∘ ⊗ ∈ Mult(V × W, U)

and

    Mult(V × W, U) ∋ b ↦ b̄ ∈ Hom(V ⊗ W, U)

are inverses of each other. □

Next we observe that the uniqueness of the tensor product is also built into the definition of the tensor product.

Proposition 1.5. If tensor products exist, they are unique up to isomorphism.

Proof. The proof is quite formal and uses nothing but the universal property. Suppose there are two vector spaces V ⊗₁ W and V ⊗₂ W with corresponding bilinear maps ⊗₁ : V × W → V ⊗₁ W and ⊗₂ : V × W → V ⊗₂ W which satisfy the conditions of Definition 1.3. We will argue that these vector spaces are isomorphic. By the universal property there exists a unique linear map ⊗̄₁ : V ⊗₂ W → V ⊗₁ W so that the diagram

           ⊗₁
    V × W -----> V ⊗₁ W
        \          ^
     ⊗₂  \        / ⊗̄₁
          v      /
         V ⊗₂ W

commutes.

-----
¹ But the image of ⊗ is not all of V ⊗ W. The elements of the image are called decomposable tensors.
² The symbol v ⊗ w stands for the value of the map ⊗ on the pair (v, w).
By the same argument, switching the roles of ⊗₁ and ⊗₂, there is a unique linear map ⊗̄₂ : V ⊗₁ W → V ⊗₂ W making the diagram

           ⊗₂
    V × W -----> V ⊗₂ W
        \          ^
     ⊗₁  \        / ⊗̄₂
          v      /
         V ⊗₁ W

commute. Define

    T₁ = ⊗̄₁ ∘ ⊗̄₂ : V ⊗₁ W → V ⊗₁ W,
    T₂ = ⊗̄₂ ∘ ⊗̄₁ : V ⊗₂ W → V ⊗₂ W.

These are linear maps making the diagrams

           ⊗₁                            ⊗₂
    V × W -----> V ⊗₁ W          V × W -----> V ⊗₂ W
        \          ^                 \           ^
     ⊗₁  \        / T₁            ⊗₂  \         / T₂
          v      /                     v       /
         V ⊗₁ W                       V ⊗₂ W

commute. But the identity maps id_i : V ⊗_i W → V ⊗_i W, i = 1, 2, are linear and also make the respective diagrams commute. By uniqueness, T_i = id_i. Hence ⊗̄₁ and ⊗̄₂ are inverses of each other and provide the desired isomorphism. □

Now we construct the tensor product as a quotient of an infinite dimensional vector space by an infinite dimensional subspace, thereby proving its existence.

Proposition 1.6. Tensor products exist.

Proof. Let V and W be two finite dimensional vector spaces. We want to construct a new vector space V ⊗ W and a bilinear map ⊗ : V × W → V ⊗ W satisfying the conditions of Definition 1.3.

We start with a vector space F(V × W) made of formal finite linear combinations of ordered pairs (v, w), v ∈ V, w ∈ W. Its basis is the set {(v, w) | v ∈ V, w ∈ W} = V × W. If you prefer, you can think of F(V × W) as the set of functions

    { f : V × W → R | f(v, w) ≠ 0 for only finitely many pairs (v, w) }.

This set of functions is an infinite dimensional vector space. Its basis consists of the functions that take the value 1 on a given pair (v₀, w₀) and 0 on all other pairs; it is tempting to call such a basis function simply (v₀, w₀). The vector space F(V × W) is called the free vector space generated by the set V × W. Note that we have an inclusion map ι : V × W → F(V × W), ι(v, w) = (v, w). It is not bilinear, since (v₁ + v₂, w) ≠ (v₁, w) + (v₂, w) in F(V × W).
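As a side illustration (hypothetical code, not from the notes), formal finite linear combinations in F(V × W) can be modeled as finitely supported coefficient dictionaries keyed by pairs. This makes the failure of bilinearity of ι concrete: ι(v₁ + v₂, w) is a single basis vector, while ι(v₁, w) + ι(v₂, w) is a sum of two different basis vectors.

```python
def iota(v, w):
    # the inclusion ι : V × W → F(V × W); ι(v, w) is the formal combination 1·(v, w)
    return {(v, w): 1.0}

def add(f, g):
    # addition in F(V × W): add coefficients of the basis pairs
    h = dict(f)
    for pair, c in g.items():
        h[pair] = h.get(pair, 0.0) + c
    return {p: c for p, c in h.items() if c != 0.0}

def vadd(x, y):
    # addition in V (vectors modeled as tuples so they can serve as dict keys)
    return tuple(a + b for a, b in zip(x, y))

v1, v2, w = (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)

lhs = iota(vadd(v1, v2), w)          # one basis vector: (v1 + v2, w)
rhs = add(iota(v1, w), iota(v2, w))  # two basis vectors: (v1, w) and (v2, w)
assert lhs != rhs  # ι is not bilinear: these are different formal combinations
```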
Consider the smallest subspace K of F(V × W) containing the following collection of vectors:

    S = {  (v₁ + v₂, w) − (v₁, w) − (v₂, w),
           (v, w₁ + w₂) − (v, w₁) − (v, w₂),
           c(v, w) − (cv, w),
           c(v, w) − (v, cw)
        :  v, v₁, v₂ ∈ V,  w, w₁, w₂ ∈ W  and  c ∈ R  }.

In other words, consider the subspace K of F(V × W) spanned by the set S.
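The construction proceeds by taking the quotient of F(V × W) by K. For V = R^n and W = R^m there is also a concrete model (an illustration, not the construction used in the notes): take V ⊗ W = R^{nm} with ⊗ the Kronecker product. One can check numerically that this map satisfies exactly the four identities that the elements of S impose, and that any bilinear map factors through it:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4
v, v1, v2 = rng.standard_normal((3, n))
w, w1, w2 = rng.standard_normal((3, m))
c = 2.5

t = np.kron  # the bilinear map ⊗ : R^n × R^m → R^{nm}, (v, w) ↦ (v_i w_j)

# The four families of relations generating K all vanish for this map:
assert np.allclose(t(v1 + v2, w), t(v1, w) + t(v2, w))
assert np.allclose(t(v, w1 + w2), t(v, w1) + t(v, w2))
assert np.allclose(c * t(v, w), t(c * v, w))
assert np.allclose(c * t(v, w), t(v, c * w))

# Universal property in this model: any bilinear b(v, w) = v^T B w factors
# through ⊗ as the linear map b̄(x) = B.reshape(-1) @ x on R^{nm}.
B = rng.standard_normal((n, m))
bilinear = v @ B @ w
factored = B.reshape(-1) @ t(v, w)
assert np.isclose(bilinear, factored)
```

The factorization works because `np.kron(v, w)` lists the products v_i w_j in the same row-major order as `B.reshape(-1)` lists the coefficients B_ij.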