Multilinear Algebra and Applications July 15, 2014
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
-
Package 'Einsum'
Package ‘einsum’, May 15, 2021. Type: Package. Title: Einstein Summation. Version: 0.1.0. Description: The summation notation suggested by Einstein (1916) <doi:10.1002/andp.19163540702> is a concise mathematical notation that implicitly sums over repeated indices of n-dimensional arrays. Many ordinary matrix operations (e.g. transpose, matrix multiplication, scalar product, diag(), trace, etc.) can be written using Einstein notation. The notation is particularly convenient for expressing operations on arrays with more than two dimensions because the respective operators ("tensor products") might not have a standardized name. License: MIT + file LICENSE. Encoding: UTF-8. SystemRequirements: C++11. Suggests: testthat, covr. RdMacros: mathjaxr. RoxygenNote: 7.1.1. LinkingTo: Rcpp. Imports: Rcpp, glue, mathjaxr. R topics documented: einsum, einsum_package. einsum: Einstein Summation. Description: Einstein summation is a convenient and concise notation for operations on n-dimensional arrays. Usage: einsum(equation_string, ...); einsum_generator(equation_string, compile_function = TRUE). Arguments: equation_string, a string in Einstein notation where arrays are separated by ',' and the result is separated by '->'. For example "ij,jk->ik" corresponds to a standard matrix multiplication. Whitespace inside the equation_string is ignored. Unlike the equivalent functions in Python, einsum() only supports the explicit mode; this means that the equation_string must contain '->'. ...: the arrays that are combined. All arguments are converted to arrays with
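The entry above describes the R package einsum; the same equation-string notation is available in Python via numpy.einsum. A minimal sketch of the notation, using NumPy rather than the package's own R interface:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
M = np.arange(9).reshape(3, 3)
v = np.arange(3)

# "ij,jk->ik": sum over the repeated index j, i.e. ordinary matrix multiplication
assert np.allclose(np.einsum("ij,jk->ik", A, B), A @ B)

# A few of the operations mentioned in the package description
assert np.allclose(np.einsum("ij->ji", A), A.T)        # transpose
assert np.isclose(np.einsum("i,i->", v, v), v @ v)     # scalar product
assert np.allclose(np.einsum("ii->i", M), np.diag(M))  # diag()
assert np.isclose(np.einsum("ii->", M), np.trace(M))   # trace
```
-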
Notes on Manifolds
Notes on Manifolds. Justin H. Le, Department of Electrical & Computer Engineering, University of Nevada, Las Vegas, [email protected]. August 3, 2016. 1 Multilinear maps. A tensor T of order r can be expressed as the tensor product of r vectors: T = u_1 ⊗ u_2 ⊗ … ⊗ u_r (1). We herein fix r = 3 whenever it eases exposition. Recall that a vector u ∈ U can be expressed as a combination of the basis vectors of U. Transform these basis vectors with a matrix A: if the resulting vector u′ is equivalent to uA, then the components of u are said to be covariant; if u′ = A^{-1}u, i.e., the vector changes inversely with the change of basis, then the components of u are contravariant. In Einstein notation, we index the covariant components of a tensor with subscripts and the contravariant components with superscripts. Just as the components of a vector u can be indexed by an integer i (as in u_i), tensor components can be indexed as T_ijk. Additionally, just as we can view a matrix as a linear map M : U → V from one finite-dimensional vector space to another, we can consider a tensor to be a multilinear map T : V^{*r} × V^s → R, where V^s denotes the s-th-order Cartesian product of the vector space V with itself, and likewise V^{*r} for its algebraic dual space V^*. In this sense, a tensor maps an ordered sequence of vectors to one of its (scalar) components. Just as a linear map satisfies M(a_1 u_1 + a_2 u_2) = a_1 M(u_1) + a_2 M(u_2), we call an r-th-order tensor multilinear if it satisfies T(u_1, …, a_1 v_1 + a_2 v_2, …, u_r) = a_1 T(u_1, …, v_1, …, u_r) + a_2 T(u_1, …, v_2, …, u_r) (2), for scalars a_1 and a_2.
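As a quick numerical check of the multilinearity property (2), the sketch below (added for illustration, not part of the notes) contracts a random order-3 tensor against vectors with numpy.einsum and verifies linearity in one slot.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))        # components T_ijk of an order-3 tensor
u, v1, v2, w = (rng.standard_normal(3) for _ in range(4))
a1, a2 = 2.0, -0.5

def apply_tensor(T, u, v, w):
    # T(u, v, w) = T_ijk u_i v_j w_k, summing over the repeated indices
    return np.einsum("ijk,i,j,k->", T, u, v, w)

# Multilinearity in the middle slot, as in equation (2)
lhs = apply_tensor(T, u, a1 * v1 + a2 * v2, w)
rhs = a1 * apply_tensor(T, u, v1, w) + a2 * apply_tensor(T, u, v2, w)
assert np.isclose(lhs, rhs)
```
-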
21. Orthonormal Bases
21. Orthonormal Bases. The canonical/standard basis e_1 = (1, 0, …, 0)^T, e_2 = (0, 1, …, 0)^T, …, e_n = (0, 0, …, 1)^T has many useful properties. • Each of the standard basis vectors has unit length: ||e_i|| = √(e_i · e_i) = √(e_i^T e_i) = 1. • The standard basis vectors are orthogonal (in other words, at right angles or perpendicular): e_i · e_j = e_i^T e_j = 0 when i ≠ j. This is summarized by e_i^T e_j = δ_ij, where δ_ij is the Kronecker delta, equal to 1 when i = j and 0 when i ≠ j. Notice that the Kronecker delta gives the entries of the identity matrix. Given column vectors v and w, we have seen that the dot product v · w is the same as the matrix multiplication v^T w. This is the inner product on R^n. We can also form the outer product vw^T, which gives a square matrix. The outer product on the standard basis vectors is interesting. Set Π_1 = e_1 e_1^T, the square matrix with a 1 in the (1,1) entry and zeros everywhere else, …, Π_n = e_n e_n^T, the square matrix with a 1 in the (n,n) entry and zeros everywhere else. In short, Π_i is the diagonal square matrix with a 1 in the ith diagonal position and zeros everywhere else.
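A short NumPy sketch (added for illustration) checks these facts: the standard basis satisfies e_i^T e_j = δ_ij, and the outer products Π_i = e_i e_i^T are the one-hot diagonal matrices, which sum to the identity.

```python
import numpy as np

n = 4
E = np.eye(n)                              # column i is the standard basis vector e_i

# Orthonormality: e_i^T e_j = delta_ij, the entries of the identity matrix
assert np.allclose(E.T @ E, np.eye(n))

# Outer products Pi_i = e_i e_i^T: a 1 in the i-th diagonal slot, zeros elsewhere
Pi = [np.outer(E[:, i], E[:, i]) for i in range(n)]
assert np.allclose(Pi[0], np.diag([1.0, 0, 0, 0]))
assert np.allclose(sum(Pi), np.eye(n))     # the projectors add up to the identity
```
-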
Abstract Tensor Systems and Diagrammatic Representations
Abstract tensor systems and diagrammatic representations. Jānis Lazovskis, September 28, 2012. Abstract: The diagrammatic tensor calculus used by Roger Penrose (most notably in [7]) is introduced without a solid mathematical grounding. We will attempt to derive the tools of such a system, but in a broader setting. We show that Penrose's work comes from the diagrammisation of the symmetric algebra. Lie algebra representations and their extensions to knot theory are also discussed. Contents: 1 Abstract tensors and derived structures (1.1 Abstract tensor notation; 1.2 Some basic operations; 1.3 Tensor diagrams); 2 A diagrammised abstract tensor system (2.1 Generation; 2.2 Tensor concepts); 3 Representations of algebras (3.1 The symmetric algebra; 3.2 Lie algebras; 3.3 The tensor algebra T(g); 3.4 The symmetric Lie algebra S(g); 3.5 The universal enveloping algebra U(g); 3.6 The metrized Lie algebra (3.6.1 Diagrammisation with a bilinear form; 3.6.2 Diagrammisation with a symmetric bilinear form; 3.6.3 Diagrammisation with a symmetric bilinear form and an orthonormal basis; 3.6.4 Diagrammisation under ad-invariance); 3.7 The universal enveloping algebra U(g) for a metrized Lie algebra g); 4 Ensuing connections; A Appendix. Note: This work relies heavily upon the text of Chapter 12 of a draft of "An Introduction to Quantum and Vassiliev Invariants of Knots," by David M.R. Jackson and Iain Moffatt, a yet-unpublished book at the time of writing.
-
The Mechanics of the Fermionic and Bosonic Fields: an Introduction to the Standard Model and Particle Physics
The Mechanics of the Fermionic and Bosonic Fields: An Introduction to the Standard Model and Particle Physics. Evan McCarthy, Phys. 460: Seminar in Physics, Spring 2014. Aug. 27, 2014. Contents: 1. Introduction; 2. The Standard Model of Particle Physics (2.1 The Standard Model Lagrangian; 2.2 Gauge Invariance); 3. Mechanics of the Fermionic Field (3.1 Fermi-Dirac Statistics; 3.2 Fermion Spinor Field); 4. Mechanics of the Bosonic Field (4.1 Spin-Statistics Theorem; 4.2 Bose-Einstein Statistics); 5. Conclusion. 1. Introduction. While Quantum Field Theory (QFT) is a remarkably successful tool of quantum particle physics, it is not used as a strictly predictive model. Rather, it is used as a framework within which predictive models, such as the Standard Model of particle physics (SM), may operate. The overarching success of QFT lends it the ability to mathematically unify three of the four forces of nature, namely the strong and weak nuclear forces and electromagnetism. Recently substantiated further by the prediction and discovery of the Higgs boson, the SM has proven to be an extraordinarily proficient predictive model for all the subatomic particles and forces. The question remains: what is to be done with gravity, the fourth force of nature? Within the framework of QFT, theoreticians have predicted the existence of yet another boson called the graviton. For this reason QFT has a very attractive allure, despite its limitations. According to QFT the gravitational force is attributed to the interaction between two gravitons; however, when applying the equations of General Relativity (GR), the force between two gravitons becomes infinite! Results like this are nonsensical and must be resolved for the theory to stand.
-
Causalx: Causal Explanations and Block Multilinear Factor Analysis
To appear: Proc. of the 2020 25th International Conference on Pattern Recognition (ICPR 2020), Milan, Italy, Jan. 10-15, 2021. CausalX: Causal eXplanations and Block Multilinear Factor Analysis. M. Alex O. Vasilescu (1,2) [email protected], Eric Kim (2,1) [email protected], Xiao S. Zeng (2) [email protected]. (1) Tensor Vision Technologies, Los Angeles, California; (2) Department of Computer Science, University of California, Los Angeles.
Abstract—By adhering to the dictum, “No causation without manipulation (treatment, intervention)”, cause and effect data analysis represents changes in observed data in terms of changes in the causal factors. When causal factors are not amenable for active manipulation in the real world due to current technological limitations or ethical considerations, a counterfactual approach performs an intervention on the model of data formation. In the case of object representation or activity (temporal object) representation, varying object parts is generally unfeasible whether they be spatial and/or temporal. Multilinear algebra, the algebra of higher order tensors, is a suitable and transparent framework for disentangling the causal factors of data formation. Learning a part-based intrinsic causal factor representations in a multilinear framework requires applying a set of interventions on a part-
I. INTRODUCTION: PROBLEM DEFINITION. Developing causal explanations for correct results or for failures from mathematical equations and data is important in developing a trustworthy artificial intelligence, and retaining public trust. Causal explanations are germane to the “right to an explanation” statute [15], [13], i.e., to data driven decisions, such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions [40], [42], consistent with Donald Rubin’s quantitative definition of causality, where “A causes B” means “the effect of A is B”, a measurable and experimentally repeatable quantity [14], [17].
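As background for the multilinear (tensor) factor analysis the paper builds on, below is a minimal truncated higher-order SVD sketch in NumPy: a generic way of factoring a data tensor into mode-wise factor matrices and a core tensor. It is not the authors' block multilinear algorithm; the function names and tensor shapes are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: mode-wise factor matrices U_n and core tensor G."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = T.copy()
    for n, Un in enumerate(U):
        # Multiply mode n of G by Un^T (project onto the mode-n subspace)
        G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

# Toy data tensor, e.g. (people x views x pixels) in a TensorFaces-style setup
rng = np.random.default_rng(0)
T = rng.standard_normal((6, 5, 40))
G, U = hosvd(T, ranks=(3, 3, 10))
print(G.shape, [u.shape for u in U])   # (3, 3, 10), [(6, 3), (5, 3), (40, 10)]
```
-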
Geometric-Algebra Adaptive Filters Wilder B
Geometric-Algebra Adaptive Filters. Wilder B. Lopes∗, Member, IEEE, Cassio G. Lopes†, Senior Member, IEEE. Abstract—This paper presents a new class of adaptive filters, namely Geometric-Algebra Adaptive Filters (GAAFs). They are generated by formulating the underlying minimization problem (a deterministic cost function) from the perspective of Geometric Algebra (GA), a comprehensive mathematical language well-suited for the description of geometric transformations. Also, differently from standard adaptive-filtering theory, Geometric Calculus (the extension of GA to differential calculus) allows for applying the same derivation techniques regardless of the type (subalgebra) of the data, i.e., real, complex numbers, quaternions, etc. Relying on those characteristics (among others), a deterministic quadratic cost function is posed, from which the GAAFs are devised, providing a generalization of regular adaptive filters to subalgebras of GA. From the obtained update rule, it is shown how to recover the following least-mean squares (LMS) adaptive filter variants: real-entries LMS, complex LMS, and quaternions LMS. Mean-square analysis and simulations in a system identification scenario are provided, showing very good agreement for different levels of measurement noise. Index Terms—Adaptive filtering, geometric algebra, quaternions.
[Fig. 1 (labels: Faces; Edges (directed lines)): A polyhedron (3-dimensional polytope) can be completely described by the geometric multiplication of its edges (oriented lines, vectors), which generate the faces and hypersurfaces (in the case of a general n-dimensional polytope).]
… perform calculus with hypercomplex quantities, i.e., elements that generalize complex numbers for higher dimensions [2]–[10]. GA-based AFs were first introduced in [11], [12], where they were successfully employed to estimate the geometric transformation (rotation and translation) that aligns a pair of
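For context on the LMS variants that the abstract says the GAAFs reduce to, here is a standard real-entries LMS filter in a system-identification setup. This is a textbook sketch, not the paper's geometric-algebra formulation, and all names and parameter values in it are illustrative.

```python
import numpy as np

def lms(x, d, num_taps=4, mu=0.05):
    """Real-entries LMS: adapt w so that w . [x_k, ..., x_{k-M+1}] tracks d_k."""
    w = np.zeros(num_taps)
    e = np.zeros(len(d))
    for k in range(num_taps - 1, len(d)):
        xk = x[k - num_taps + 1:k + 1][::-1]   # regressor, most recent sample first
        e[k] = d[k] - w @ xk                   # estimation error
        w += mu * e[k] * xk                    # stochastic-gradient (LMS) update
    return w, e

# System identification: recover an unknown 4-tap FIR channel from noisy output
rng = np.random.default_rng(0)
h_true = np.array([0.6, -0.3, 0.1, 0.05])
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, _ = lms(x, d)
print(np.round(w_hat, 3))                      # close to h_true
```
-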
Matrices and Tensors
APPENDIX: MATRICES AND TENSORS. A.1. INTRODUCTION AND RATIONALE. The purpose of this appendix is to present the notation and most of the mathematical techniques that are used in the body of the text. The audience is assumed to have been through several years of college-level mathematics, which included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors, and thus represent these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects in the text of the book, but a minor one. This is described in §A.11, along with the rationale for this approach. A.2. DEFINITION OF SQUARE, COLUMN, AND ROW MATRICES. An r-by-c matrix, M, is a rectangular array of numbers consisting of r rows and c columns: M = (M_11 … M_1c; M_21 … M_2c; …; M_r1 … M_rc).
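The six-dimensional representation mentioned above amounts to flattening symmetric 3×3 second-order tensors into 6-component columns (and fourth-order elasticity tensors into 6×6 matrices). The sketch below assumes one common Voigt-style index ordering; the appendix's own convention in §A.11 may differ.

```python
import numpy as np

# Assumed index ordering: (11, 22, 33, 23, 13, 12); the book's convention may differ.
PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def to_six(sigma):
    """Symmetric 3x3 second-order tensor -> 6-component column matrix."""
    return np.array([sigma[i, j] for i, j in PAIRS])

def from_six(s):
    """Inverse map: rebuild the symmetric 3x3 tensor from its 6 components."""
    sigma = np.zeros((3, 3))
    for k, (i, j) in enumerate(PAIRS):
        sigma[i, j] = sigma[j, i] = s[k]
    return sigma

sigma = np.array([[1.0, 0.4, 0.2],
                  [0.4, 2.0, 0.1],
                  [0.2, 0.1, 3.0]])
assert np.allclose(from_six(to_six(sigma)), sigma)
```
-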
28. Exterior Powers
28. Exterior powers: 28.1 Desiderata; 28.2 Definitions, uniqueness, existence; 28.3 Some elementary facts; 28.4 Exterior powers ∧i f of maps; 28.5 Exterior powers of free modules; 28.6 Determinants revisited; 28.7 Minors of matrices; 28.8 Uniqueness in the structure theorem; 28.9 Cartan's lemma; 28.10 Cayley-Hamilton Theorem; 28.11 Worked examples. While many of the arguments here have analogues for tensor products, it is worthwhile to repeat these arguments with the relevant variations, both for practice, and to be sensitive to the differences. 1. Desiderata. Again, we review missing items in our development of linear algebra. We are missing a development of determinants of matrices whose entries may be in commutative rings, rather than fields. We would like an intrinsic definition of determinants of endomorphisms, rather than one that depends upon a choice of coordinates, even if we eventually prove that the determinant is independent of the coordinates. We anticipate that Artin's axiomatization of determinants of matrices should be mirrored in much of what we do here. We want a direct and natural proof of the Cayley-Hamilton theorem. Linear algebra over fields is insufficient, since the introduction of the indeterminate x in the definition of the characteristic polynomial takes us outside the class of vector spaces over fields. We want to give a conceptual proof for the uniqueness part of the structure theorem for finitely-generated modules over principal ideal domains. Multi-linear algebra over fields is surely insufficient for this. 2. Definitions, uniqueness, existence. Let R be a commutative ring with 1.
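The intrinsic, coordinate-free determinant asked for in the desiderata falls out of the top exterior power: for a free module of rank n, the n-th exterior power is free of rank one, so an induced endomorphism acts on it by a scalar. A standard statement of this, consistent with the chapter's goals:

```latex
% For a free R-module M of rank n with basis m_1, ..., m_n and an
% endomorphism f : M -> M, the induced map on the top exterior power
% \Lambda^n M (free of rank 1) is multiplication by a scalar; that
% scalar is the determinant, defined without choosing coordinates:
\[
  (\Lambda^n f)(m_1 \wedge \cdots \wedge m_n)
    = f(m_1) \wedge \cdots \wedge f(m_n)
    = (\det f)\, m_1 \wedge \cdots \wedge m_n .
\]
```
-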
Multilinear Algebra in Data Analysis: Tensors, Symmetric Tensors, Nonnegative Tensors
Multilinear Algebra in Data Analysis: tensors, symmetric tensors, nonnegative tensors. Lek-Heng Lim, Stanford University. Workshop on Algorithms for Modern Massive Datasets, Stanford, CA, June 21–24, 2006. Thanks: G. Carlsson, L. De Lathauwer, J.M. Landsberg, M. Mahoney, L. Qi, B. Sturmfels; Collaborators: P. Comon, V. de Silva, P. Drineas, G. Golub. References: http://www-sccm.stanford.edu/nf-publications-tech.html [CGLM2] P. Comon, G. Golub, L.-H. Lim, and B. Mourrain, “Symmetric tensors and symmetric tensor rank,” SCCM Tech. Rep., 06-02, 2006. [CGLM1] P. Comon, B. Mourrain, L.-H. Lim, and G.H. Golub, “Genericity and rank deficiency of high order symmetric tensors,” Proc. IEEE Int. Conference on Acoustics, Speech, and Signal Processing (ICASSP), 31 (2006), no. 3, pp. 125–128. [dSL] V. de Silva and L.-H. Lim, “Tensor rank and the ill-posedness of the best low-rank approximation problem,” SCCM Tech. Rep., 06-06 (2006). [GL] G. Golub and L.-H. Lim, “Nonnegative decomposition and approximation of nonnegative matrices and tensors,” SCCM Tech. Rep., 06-01 (2006), forthcoming. [L] L.-H. Lim, “Singular values and eigenvalues of tensors: a variational approach,” Proc. IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 1 (2005), pp. 129–132. What is not a tensor, I. • What is a vector? – Mathematician: An element of a vector space. – Physicist: “What kind of physical quantities can be represented by vectors?” Answer: Once a basis is chosen, an n-dimensional vector is something that is represented by n real numbers only if those real numbers transform themselves as expected (i.e.
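The condition that the n real numbers "transform themselves as expected" is presumably the usual change-of-basis rule: if the columns of P express a new basis in terms of the old one, the components of a fixed vector transform with P^{-1} (contravariantly). A small NumPy sketch of that rule, added for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))      # columns of P: the new basis written in the old basis
assert abs(np.linalg.det(P)) > 1e-8  # genuinely a basis (invertible change of basis)

v_old = np.array([1.0, 2.0, 3.0])    # components of a vector in the old basis
v_new = np.linalg.solve(P, v_old)    # components transform with P^{-1} (contravariantly)

# Same geometric vector, two coordinate descriptions
assert np.allclose(P @ v_new, v_old)
```
-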
Abstract Tensor Systems As Monoidal Categories
Abstract Tensor Systems as Monoidal Categories. Aleks Kissinger. Dedicated to Joachim Lambek on the occasion of his 90th birthday. October 31, 2018. arXiv:1308.3586v1 [math.CT] 16 Aug 2013. Abstract: The primary contribution of this paper is to give a formal, categorical treatment to Penrose's abstract tensor notation, in the context of traced symmetric monoidal categories. To do so, we introduce a typed, sum-free version of an abstract tensor system and demonstrate the construction of its associated category. We then show that the associated category of the free abstract tensor system is in fact the free traced symmetric monoidal category on a monoidal signature. A notable consequence of this result is a simple proof for the soundness and completeness of the diagrammatic language for traced symmetric monoidal categories. 1 Introduction. This paper formalises the connection between monoidal categories and the abstract index notation developed by Penrose in the 1970s, which has been used by physicists directly, and category theorists implicitly, via the diagrammatic languages for traced symmetric monoidal and compact closed categories. This connection is given as a representation theorem for the free traced symmetric monoidal category as a syntactically-defined strict monoidal category whose morphisms are equivalence classes of certain kinds of terms called Einstein expressions. Representation theorems of this kind form a rich history of coherence results for monoidal categories originating in the 1960s [17, 6]. Lambek's contribution [15, 16] plays an essential role in this history, providing some of the earliest examples of syntactically-constructed free categories and most of the key ingredients in Kelly and Mac Lane's proof of the coherence theorem for closed monoidal categories [11].
-
Quantum Chromodynamics (QCD) QCD Is the Theory That Describes the Action of the Strong Force
Quantum chromodynamics (QCD). QCD is the theory that describes the action of the strong force. QCD was constructed in analogy to quantum electrodynamics (QED), the quantum field theory of the electromagnetic force. In QED the electromagnetic interactions of charged particles are described through the emission and subsequent absorption of massless photons (the force carriers of QED); such interactions are not possible between uncharged, electrically neutral particles. By analogy with QED, quantum chromodynamics predicts the existence of gluons, which transmit the strong force between particles of matter that carry color, a strong charge. The color charge was introduced in 1964 by Greenberg to resolve spin-statistics contradictions in hadron spectroscopy. In 1965 Nambu and Han introduced the octet of gluons. In 1972, Gell-Mann and Fritzsch coined the term quantum chromodynamics as the gauge theory of the strong interaction. In particular, they employed the general field theory developed in the 1950s by Yang and Mills, in which the carrier particles of a force can themselves radiate further carrier particles. (This is different from QED, where the photons that carry the electromagnetic force do not radiate further photons.) First, the QED Lagrangian:

L_QED = ψ̄_e iγ^μ(∂_μ + ieA_μ)ψ_e − m_e ψ̄_e ψ_e − (1/4) F_μν F^μν

where (Einstein notation: when an index variable appears twice in a single term, it implies summation of that term over all the values of the index):
• F^μν = ∂^μ A^ν − ∂^ν A^μ is the electromagnetic field tensor,
• A^μ is the four-potential of the photon field,
• γ^μ are the Dirac 4×4 matrices.
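Spelled out, the repeated-index (Einstein) convention used in the Lagrangian above means, for example:

```latex
% A twice-repeated Greek index is summed over 0..3, so
\[
  F_{\mu\nu}F^{\mu\nu} \;\equiv\; \sum_{\mu=0}^{3}\sum_{\nu=0}^{3} F_{\mu\nu}F^{\mu\nu},
  \qquad
  \gamma^{\mu}\partial_{\mu} \;\equiv\; \sum_{\mu=0}^{3}\gamma^{\mu}\partial_{\mu},
  \qquad
  F^{\mu\nu} = \partial^{\mu}A^{\nu} - \partial^{\nu}A^{\mu}.
\]
```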