Hyperdeterminant and Tensor Rank

Lek-Heng Lim
Workshop on Tensor Decompositions and Applications
CIRM, Luminy, France, August 29–September 2, 2005
Thanks: NSF DMS 01-01364

Collaborators: Pierre Comon, Laboratoire I3S, Université de Nice Sophia Antipolis; Vin de Silva, Department of Mathematics, Stanford University.

Matrix Multiplication

Let $f : U \to V$ and $g : V \to W$ be linear maps, with $U, V, W$ vector spaces over $\mathbb{R}$ of dimensions $n, m, l$. With a choice of bases on $U, V, W$, the maps $g, f$ have matrix representations $A = [a_{ij}] \in \mathbb{R}^{l \times m}$ and $B = [b_{jk}] \in \mathbb{R}^{m \times n}$. The matrix representation of $h = g \circ f$ (i.e. $h(x) := g(f(x))$) is then $C = [c_{ik}] \in \mathbb{R}^{l \times n}$, where
$$c_{ik} := \sum_{j=1}^{n} a_{ij} b_{jk}.$$
Similarly, for a bilinear map $g : V_1 \times V_2 \to \mathbb{R}$ and linear maps $f_1 : U_1 \to V_1$, $f_2 : U_2 \to V_2$ with matrix representations $A \in \mathbb{R}^{d_1 \times d_2}$, $B_1 \in \mathbb{R}^{d_1 \times s_1}$, $B_2 \in \mathbb{R}^{d_2 \times s_2}$, the composite map $h$, where $h(\mathbf{x}, \mathbf{y}) := g(f_1(\mathbf{x}), f_2(\mathbf{y}))$, has matrix representation
$$C = B_1^\top A B_2 \in \mathbb{R}^{s_1 \times s_2}.$$

Multilinear Matrix Multiplication

Do the same for a multilinear map $g : V_1 \times \cdots \times V_k \to \mathbb{R}$ and linear maps $f_1 : U_1 \to V_1, \dots, f_k : U_k \to V_k$, with $\dim(V_i) = d_i$, $\dim(U_i) = s_i$. With a choice of bases on the $V_i$'s and $U_i$'s, $g$ is represented by $A = \llbracket a_{j_1 \cdots j_k} \rrbracket \in \mathbb{R}^{d_1 \times \cdots \times d_k}$ and $f_1, \dots, f_k$ by $M_1 = [m^1_{j_1 i_1}] \in \mathbb{R}^{d_1 \times s_1}, \dots, M_k = [m^k_{j_k i_k}] \in \mathbb{R}^{d_k \times s_k}$.

If we compose $g$ with $f_1, \dots, f_k$ to get $h : U_1 \times \cdots \times U_k \to \mathbb{R}$ defined by $h(\mathbf{x}_1, \dots, \mathbf{x}_k) = g(f_1(\mathbf{x}_1), \dots, f_k(\mathbf{x}_k))$, then $h$ is represented by $\llbracket c_{i_1 \cdots i_k} \rrbracket \in \mathbb{R}^{s_1 \times \cdots \times s_k}$, where
$$c_{i_1 \cdots i_k} := \sum_{j_1=1}^{d_1} \cdots \sum_{j_k=1}^{d_k} a_{j_1 \cdots j_k}\, m^1_{j_1 i_1} \cdots m^k_{j_k i_k}. \qquad (1)$$
The covariant multilinear matrix multiplication will be written
$$A(M_1, \dots, M_k) := \llbracket c_{i_1 \cdots i_k} \rrbracket \in \mathbb{R}^{s_1 \times \cdots \times s_k}.$$

Contravariant Version

The contravariant multilinear matrix multiplication of $\llbracket a_{j_1 \cdots j_k} \rrbracket \in \mathbb{R}^{d_1 \times \cdots \times d_k}$ by matrices $L_1 = [\ell^1_{i_1 j_1}] \in \mathbb{R}^{r_1 \times d_1}, \dots, L_k = [\ell^k_{i_k j_k}] \in \mathbb{R}^{r_k \times d_k}$ is defined by
$$(L_1, \dots, L_k)A = \llbracket b_{i_1 \cdots i_k} \rrbracket \in \mathbb{R}^{r_1 \times \cdots \times r_k}, \qquad b_{i_1 \cdots i_k} := \sum_{j_1=1}^{d_1} \cdots \sum_{j_k=1}^{d_k} \ell^1_{i_1 j_1} \cdots \ell^k_{i_k j_k}\, a_{j_1 \cdots j_k}. \qquad (2)$$
This comes from composing a multilinear map $g : V_1^* \times \cdots \times V_k^* \to \mathbb{R}$ with linear maps $f_1 : V_1 \to U_1, \dots, f_k : V_k \to U_k$.

There is a simple relation if we disregard covariance/contravariance:
$$(L_1, \dots, L_k)A = A(L_1^\top, \dots, L_k^\top), \qquad A(M_1, \dots, M_k) = (M_1^\top, \dots, M_k^\top)A.$$
This works over $\mathbb{C}$ too (replace $L_i^\top$ by $L_i^\dagger$).

Properties

• Let $A, B \in \mathbb{R}^{d_1 \times \cdots \times d_k}$, $\lambda, \mu \in \mathbb{R}$, and $L_1 \in \mathbb{R}^{r_1 \times d_1}, \dots, L_k \in \mathbb{R}^{r_k \times d_k}$. Then
$$(L_1, \dots, L_k)(\lambda A + \mu B) = \lambda (L_1, \dots, L_k)A + \mu (L_1, \dots, L_k)B.$$

• Let $A \in \mathbb{R}^{d_1 \times \cdots \times d_k}$, $L_1 \in \mathbb{R}^{r_1 \times d_1}, \dots, L_k \in \mathbb{R}^{r_k \times d_k}$, and $M_1 \in \mathbb{R}^{s_1 \times r_1}, \dots, M_k \in \mathbb{R}^{s_k \times r_k}$. Then
$$(M_1, \dots, M_k)(L_1, \dots, L_k)A = (M_1 L_1, \dots, M_k L_k)A,$$
where $M_i L_i \in \mathbb{R}^{s_i \times d_i}$ is simply the matrix-matrix product of $M_i$ and $L_i$.

• Let $A \in \mathbb{R}^{d_1 \times \cdots \times d_k}$, $\lambda, \mu \in \mathbb{R}$, and $L_1 \in \mathbb{R}^{r_1 \times d_1}, \dots, L_j, M_j \in \mathbb{R}^{r_j \times d_j}, \dots, L_k \in \mathbb{R}^{r_k \times d_k}$. Then
$$(L_1, \dots, \lambda L_j + \mu M_j, \dots, L_k)A = \lambda (L_1, \dots, L_j, \dots, L_k)A + \mu (L_1, \dots, M_j, \dots, L_k)A.$$

Aside: Relation with Kronecker Product

Consider the forgetful map $\mathbb{R}^{d_1 \times \cdots \times d_k} \to \mathbb{R}^{d_1 \cdots d_k}$, $A \mapsto \operatorname{vec}(A)$, which 'forgets' the multilinear structure. Then
$$\operatorname{vec}((L_1, \dots, L_k)A) = (L_1 \otimes \cdots \otimes L_k)\operatorname{vec}(A),$$
where $L_1 \otimes \cdots \otimes L_k \in \mathbb{R}^{r_1 \cdots r_k \times d_1 \cdots d_k}$ is the Kronecker product of $L_1, \dots, L_k$.
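
Both of these products are just mode-by-mode contractions, so they are easy to compute with standard array tools. The following is a minimal NumPy sketch, not part of the slides: the helper names covariant_mult and contravariant_mult are mine, and it assumes $\operatorname{vec}(\cdot)$ is the row-major flattening, which is the convention under which the Kronecker identity above holds verbatim with np.kron.

```python
# Sketch of covariant/contravariant multilinear matrix multiplication (NumPy).
# Assumption: vec(.) is the row-major (C-order) flattening, so that
# vec((L_1,...,L_k)A) = (L_1 kron ... kron L_k) vec(A) with np.kron.
import numpy as np
from functools import reduce

def contravariant_mult(A, *Ls):
    """(L_1,...,L_k)A: contract L_i (r_i x d_i) against the i-th mode of A."""
    B = A
    for i, L in enumerate(Ls):
        B = np.tensordot(L, B, axes=([1], [i]))  # contracted mode moves to front
        B = np.moveaxis(B, 0, i)                 # move it back to position i
    return B

def covariant_mult(A, *Ms):
    """A(M_1,...,M_k) = (M_1^T,...,M_k^T)A for M_i of size d_i x s_i."""
    return contravariant_mult(A, *(M.T for M in Ms))

rng = np.random.default_rng(0)
d, r = (2, 3, 4), (5, 2, 3)
A = rng.standard_normal(d)
Ls = [rng.standard_normal((ri, di)) for ri, di in zip(r, d)]

B = contravariant_mult(A, *Ls)                   # shape (5, 2, 3)
K = reduce(np.kron, Ls)                          # (r_1...r_k) x (d_1...d_k)
assert np.allclose(B.ravel(), K @ A.ravel())     # vec/Kronecker identity
```

The same contraction could be written as a single np.einsum call; tensordot plus moveaxis is used here only to mirror the mode-by-mode definition (2).
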
Matrix Techniques

Start with $\mathbb{R}^{m \times n \times l}$ and $\mathbb{C}^{m \times n \times l}$. The case $l = 2$ is well understood: such a tensor may be regarded as a pair of matrices $(A, B) \in (\mathbb{C}^{m \times n})^2$ or $(\mathbb{R}^{m \times n})^2$, or equivalently as a matrix pencil $\lambda A + \mu B \in \mathbb{C}[\lambda, \mu]^{m \times n}$ or $\mathbb{R}[\lambda, \mu]^{m \times n}$.

Kronecker–Weierstrass theory: there exist $S \in \mathrm{GL}(m)$, $T \in \mathrm{GL}(n)$ such that $(SAT, SBT)$ can be decomposed into block pairs of the following forms:
$$\left(\begin{bmatrix} 1 & & \\ 0 & \ddots & \\ & \ddots & 1 \\ & & 0 \end{bmatrix},\ \begin{bmatrix} 0 & & \\ 1 & \ddots & \\ & \ddots & 0 \\ & & 1 \end{bmatrix}\right) \in \mathbb{R}^{(p+1) \times p} \times \mathbb{R}^{(p+1) \times p},$$
$$\left(\begin{bmatrix} 1 & 0 & & \\ & \ddots & \ddots & \\ & & 1 & 0 \end{bmatrix},\ \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \end{bmatrix}\right) \in \mathbb{R}^{q \times (q+1)} \times \mathbb{R}^{q \times (q+1)},$$
$$\left(I_r,\ \begin{bmatrix} 0 & & & -a_0 \\ 1 & \ddots & & \vdots \\ & \ddots & 0 & -a_{r-2} \\ & & 1 & -a_{r-1} \end{bmatrix}\right) \in \mathbb{R}^{r \times r} \times \mathbb{R}^{r \times r}.$$
Likewise for $\mathbb{C}$. Similar but simpler results were obtained by Jos ten Berge for generic pairs.

Larger Sizes and Higher Orders

We want to obtain results as general as possible — for tensors of arbitrary size and order over both $\mathbb{R}$ and $\mathbb{C}$. For larger values of $k$ or $d_1, \dots, d_k$, techniques relying on multilinear matrix multiplications become increasingly less effective. There is an inherent limitation:
$$\dim(\mathbb{R}^{d_1 \times \cdots \times d_k}) = d_1 \cdots d_k = O(d^k),$$
while
$$\dim(\mathrm{GL}(d_1) \times \cdots \times \mathrm{GL}(d_k)) = d_1^2 + \cdots + d_k^2 = O(kd^2)$$
and
$$\dim(\mathrm{O}(d_1) \times \cdots \times \mathrm{O}(d_k)) = d_1(d_1-1)/2 + \cdots + d_k(d_k-1)/2 = O(kd^2).$$
The action of $\mathrm{GL}(d_1) \times \cdots \times \mathrm{GL}(d_k)$ on $\mathbb{R}^{d_1 \times \cdots \times d_k}$, with orbits
$$\{(L_1, \dots, L_k)A \mid (L_1, \dots, L_k) \in \mathrm{GL}(d_1) \times \cdots \times \mathrm{GL}(d_k)\},$$
has uncountably many orbits as soon as $d_i \geq 2$, $k \geq 4$.

Multilinear Functional and its Gradient

The multilinear functional associated with $A \in \mathbb{R}^{d_1 \times \cdots \times d_k}$ is
$$f_A : \mathbb{R}^{d_1} \times \cdots \times \mathbb{R}^{d_k} \to \mathbb{R}, \qquad (\mathbf{x}_1, \dots, \mathbf{x}_k) \mapsto \sum_{j_1=1}^{d_1} \cdots \sum_{j_k=1}^{d_k} a_{j_1 \cdots j_k}\, x^1_{j_1} \cdots x^k_{j_k}. \qquad (3)$$
It can be written as
$$f_A(\mathbf{x}_1, \dots, \mathbf{x}_k) = A(\mathbf{x}_1, \dots, \mathbf{x}_k), \qquad (4)$$
where the right-hand side is the covariant multilinear matrix multiplication by $\mathbf{x}_i = (x^i_1, \dots, x^i_{d_i})^\top$, regarded as a $d_i \times 1$ matrix.

The gradient of $f_A$ may be written as $\nabla f_A = (\nabla_{\mathbf{x}_1} f_A, \dots, \nabla_{\mathbf{x}_k} f_A)$, where
$$\nabla_{\mathbf{x}_i} f_A(\mathbf{x}_1, \dots, \mathbf{x}_k) = \left(\frac{\partial f_A}{\partial x^i_1}, \dots, \frac{\partial f_A}{\partial x^i_{d_i}}\right) = A(\mathbf{x}_1, \dots, \mathbf{x}_{i-1}, I_{d_i}, \mathbf{x}_{i+1}, \dots, \mathbf{x}_k),$$
and $I_{d_i}$ denotes the $d_i \times d_i$ identity matrix.
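
As a small numerical illustration (not part of the slides), the sketch below evaluates $f_A$ and the partial gradients $\nabla_{\mathbf{x}_i} f_A = A(\mathbf{x}_1, \dots, I_{d_i}, \dots, \mathbf{x}_k)$ using the hypothetical covariant_mult helper from the earlier snippet, and checks one of them against central finite differences.

```python
# Sketch: f_A(x_1,...,x_k) = A(x_1,...,x_k) and its partial gradients,
# reusing the covariant_mult helper defined in the previous snippet.
import numpy as np

rng = np.random.default_rng(1)
d = (2, 3, 4)
A = rng.standard_normal(d)
xs = [rng.standard_normal((di, 1)) for di in d]    # x_i as d_i x 1 matrices

def f_A(A, xs):
    return covariant_mult(A, *xs).item()           # 1 x ... x 1 tensor -> scalar

def grad_i(A, xs, i):
    args = [np.eye(d[i]) if j == i else xs[j] for j in range(len(xs))]
    return covariant_mult(A, *args).reshape(d[i])  # all modes but i have size 1

i, h = 1, 1e-6                                     # check the gradient w.r.t. x_2
g = grad_i(A, xs, i)
for m in range(d[i]):
    xp = [x.copy() for x in xs]; xp[i][m, 0] += h
    xm = [x.copy() for x in xs]; xm[i][m, 0] -= h
    assert abs((f_A(A, xp) - f_A(A, xm)) / (2 * h) - g[m]) < 1e-5
```
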
Hyperdeterminant

Work in $\mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)}$ for the time being ($d_i \geq 1$). Consider
$$S := \{A \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)} \mid \nabla f_A(\mathbf{x}_1, \dots, \mathbf{x}_k) = 0 \text{ for some non-zero } (\mathbf{x}_1, \dots, \mathbf{x}_k)\}.$$

Theorem (Gelfand, Kapranov, Zelevinsky, 1992). $S$ is a hypersurface if and only if
$$d_j \leq \sum_{i \neq j} d_i \quad \text{for all } j = 1, \dots, k.$$

Let $\Delta$ be the equation of the hypersurface, i.e. a multivariate polynomial in the entries of $A$ such that
$$S = \{A \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)} \mid \Delta(A) = 0\}.$$
Then $\Delta$ may be chosen to have integer coefficients. For $\mathbb{C}^{m \times n}$, the condition becomes $m \leq n$ and $n \leq m$ — that is why the matrix determinant is defined only for square matrices. Since $\Delta$ has integer coefficients, $\Delta(A)$ is real-valued for $A \in \mathbb{R}^{(d_1+1) \times \cdots \times (d_k+1)}$.

Geometric View

Let $X = \{\mathbf{x}_1 \otimes \cdots \otimes \mathbf{x}_k \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)} \mid \mathbf{x}_i \in \mathbb{C}^{d_i+1}\}$ be the (smooth) manifold of decomposable tensors ($X$ is often called the Segre variety). Let $A \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)}$. Then the condition that $\nabla f_A(\mathbf{x}_1, \dots, \mathbf{x}_k) = 0$ for some non-zero $(\mathbf{x}_1, \dots, \mathbf{x}_k)$ is equivalent to saying that the hyperplane orthogonal to $A$, i.e.
$$H_A := \{B \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)} \mid \langle A, B \rangle = 0\},$$
contains a tangent to $X$ at the point $\mathbf{x}_1 \otimes \cdots \otimes \mathbf{x}_k$. This may also be taken as an alternative definition of the hyperdeterminant $\Delta(A)$. Projective duality: $X^* = S$.

[Figure: $2 \times 2 \times 2$ examples labelled by hyperdeterminant and rank: $\Delta(A) = 0$ with $\mathrm{rank}(A) = 1$; $\Delta(B) \neq 0$ with $\mathrm{rank}(B) = 2$; and $\Delta(B) = 0$ with $\mathrm{rank}(B) = 2$.]

Minor Inaccuracy

We should really be working in the projective space $\mathbb{P}(\mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)}) = \mathbb{P}^{(d_1+1) \cdots (d_k+1) - 1}$, the set of equivalence classes
$$[A] := \{\lambda A \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)} \mid \lambda \in \mathbb{C}^\times\}.$$
The thing to note is that for any $A \in \mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)}$ and $\lambda \in \mathbb{C}^\times$, $\mathrm{rank}_\otimes(\lambda A) = \mathrm{rank}_\otimes(A)$. So outer-product rank is well-defined on $\mathbb{P}(\mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)})$: given $[A] \in \mathbb{P}(\mathbb{C}^{(d_1+1) \times \cdots \times (d_k+1)})$, define $\mathrm{rank}_\otimes([A]) = \mathrm{rank}_\otimes(A)$ for any $A \in [A]$.

Examples

A. Cayley, "On the theory of linear transformation," Cambridge Math. J., 4 (1845), pp. 193–209.

The hyperdeterminant of $A = \llbracket a_{ijk} \rrbracket \in \mathbb{R}^{2 \times 2 \times 2}$ is
$$\Delta(A) = \frac{1}{4}\left[\det\!\left(\begin{bmatrix} a_{000} & a_{010} \\ a_{001} & a_{011} \end{bmatrix} + \begin{bmatrix} a_{100} & a_{110} \\ a_{101} & a_{111} \end{bmatrix}\right) - \det\!\left(\begin{bmatrix} a_{000} & a_{010} \\ a_{001} & a_{011} \end{bmatrix} - \begin{bmatrix} a_{100} & a_{110} \\ a_{101} & a_{111} \end{bmatrix}\right)\right]^2 - 4\det\begin{bmatrix} a_{000} & a_{010} \\ a_{001} & a_{011} \end{bmatrix}\det\begin{bmatrix} a_{100} & a_{110} \\ a_{101} & a_{111} \end{bmatrix}.$$

A result that parallels the matrix case is the following: the system of bilinear equations
$$\begin{aligned}
a_{000}x_0y_0 + a_{010}x_0y_1 + a_{100}x_1y_0 + a_{110}x_1y_1 &= 0,\\
a_{001}x_0y_0 + a_{011}x_0y_1 + a_{101}x_1y_0 + a_{111}x_1y_1 &= 0,\\
a_{000}x_0z_0 + a_{001}x_0z_1 + a_{100}x_1z_0 + a_{101}x_1z_1 &= 0,\\
a_{010}x_0z_0 + a_{011}x_0z_1 + a_{110}x_1z_0 + a_{111}x_1z_1 &= 0,\\
a_{000}y_0z_0 + a_{001}y_0z_1 + a_{010}y_1z_0 + a_{011}y_1z_1 &= 0,\\
a_{100}y_0z_0 + a_{101}y_0z_1 + a_{110}y_1z_0 + a_{111}y_1z_1 &= 0,
\end{aligned}$$
has a non-trivial solution if and only if $\Delta(A) = 0$.

The hyperdeterminant of $A = \llbracket a_{ijk} \rrbracket \in \mathbb{R}^{2 \times 2 \times 3}$ is
$$\Delta(A) = \det\begin{bmatrix} a_{000} & a_{001} & a_{002} \\ a_{100} & a_{101} & a_{102} \\ a_{010} & a_{011} & a_{012} \end{bmatrix}\det\begin{bmatrix} a_{100} & a_{101} & a_{102} \\ a_{010} & a_{011} & a_{012} \\ a_{110} & a_{111} & a_{112} \end{bmatrix} - \det\begin{bmatrix} a_{000} & a_{001} & a_{002} \\ a_{100} & a_{101} & a_{102} \\ a_{110} & a_{111} & a_{112} \end{bmatrix}\det\begin{bmatrix} a_{000} & a_{001} & a_{002} \\ a_{010} & a_{011} & a_{012} \\ a_{110} & a_{111} & a_{112} \end{bmatrix}.$$

Again, the following is true: the system
$$\begin{aligned}
a_{000}x_0y_0 + a_{010}x_0y_1 + a_{100}x_1y_0 + a_{110}x_1y_1 &= 0,\\
a_{001}x_0y_0 + a_{011}x_0y_1 + a_{101}x_1y_0 + a_{111}x_1y_1 &= 0,\\
a_{002}x_0y_0 + a_{012}x_0y_1 + a_{102}x_1y_0 + a_{112}x_1y_1 &= 0,\\
a_{000}x_0z_0 + a_{001}x_0z_1 + a_{002}x_0z_2 + a_{100}x_1z_0 + a_{101}x_1z_1 + a_{102}x_1z_2 &= 0,\\
a_{010}x_0z_0 + a_{011}x_0z_1 + a_{012}x_0z_2 + a_{110}x_1z_0 + a_{111}x_1z_1 + a_{112}x_1z_2 &= 0,\\
a_{000}y_0z_0 + a_{001}y_0z_1 + a_{002}y_0z_2 + a_{010}y_1z_0 + a_{011}y_1z_1 + a_{012}y_1z_2 &= 0,\\
a_{100}y_0z_0 + a_{101}y_0z_1 + a_{102}y_0z_2 + a_{110}y_1z_0 + a_{111}y_1z_1 + a_{112}y_1z_2 &= 0,
\end{aligned}$$
has a non-trivial solution if and only if $\Delta(A) = 0$.
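
As a hedged numerical check (not from the slides; the function name hyperdet_222 is mine), the sketch below evaluates Cayley's $2 \times 2 \times 2$ formula from the two slices $A_0 = [a_{0jk}]$ and $A_1 = [a_{1jk}]$ and confirms that $\Delta$ vanishes on a decomposable (rank-one) tensor, consistent with the labels $\Delta(A) = 0$, $\mathrm{rank}(A) = 1$ in the figure above.

```python
# Sketch: Cayley's 2x2x2 hyperdeterminant via the slice-determinant formula above.
# A0 = A[0], A1 = A[1] are used; transposing the slices changes none of the
# determinants involved, so the value of Delta is unaffected.
import numpy as np

def hyperdet_222(A):
    A0, A1 = A[0], A[1]
    diff = np.linalg.det(A0 + A1) - np.linalg.det(A0 - A1)
    return 0.25 * diff**2 - 4.0 * np.linalg.det(A0) * np.linalg.det(A1)

rng = np.random.default_rng(2)
x, y, z = (rng.standard_normal(2) for _ in range(3))
rank_one = np.einsum('i,j,k->ijk', x, y, z)          # decomposable tensor x (x) y (x) z
print(hyperdet_222(rank_one))                        # ~ 0 up to round-off
print(hyperdet_222(rng.standard_normal((2, 2, 2))))  # generically nonzero
```
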