Cayley's Hyperdeterminant, the Principal Minors of a Symmetric Matrix and the Entropy Region of 4 Gaussian Random Variables

Total Pages: 16

File Type: PDF, Size: 1020 KB

Forty-Sixth Annual Allerton Conference (WeA5.6), Allerton House, UIUC, Illinois, USA, September 23-26, 2008

Cayley's Hyperdeterminant, the Principal Minors of a Symmetric Matrix and the Entropy Region of 4 Gaussian Random Variables

Sormeh Shadbakht and Babak Hassibi, Electrical Engineering Department, California Institute of Technology, Pasadena, California 91125. Email: sormeh, [email protected]

(This work was supported in part by the National Science Foundation through grant CCF-0729203, by the David and Lucille Packard Foundation, by the Office of Naval Research through a MURI under contract no. N00014-08-1-0747, and by Caltech's Lee Center for Advanced Networking.)

Abstract— It has recently been shown that there is a connection between Cayley's hyperdeterminant and the principal minors of a symmetric matrix. With an eye towards characterizing the entropy region of jointly Gaussian random variables, we obtain three new results on the relationship between Gaussian random variables and the hyperdeterminant. The first is a new (determinant) formula for the 2 × 2 × 2 hyperdeterminant. The second is a new (transparent) proof of the fact that the principal minors of an n × n symmetric matrix satisfy the 2 × 2 × ... × 2 (n times) hyperdeterminant relations. The third is a minimal set of 5 equations that 15 real numbers must satisfy to be the principal minors of a 4 × 4 symmetric matrix.

I. INTRODUCTION

Let X_1, ..., X_n be n jointly distributed discrete random variables with arbitrary alphabet size N. The vector of all 2^n − 1 joint entropies of these random variables is referred to as their "entropy vector" and, conversely, any (2^n − 1)-dimensional vector whose elements can be regarded as the joint entropies of some n random variables, for some alphabet size N, is called "entropic". The entropy region is defined as the region of all possible entropic vectors and is denoted by Γ*_n [1]. Due to its deep connections with important problems in information theory and probabilistic reasoning, such as the capacity of information networks [2][3] or the conditional independence compatibility problem [4], characterizing this region turns out to be of fundamental importance. While it is completely solved for n = 2, 3 random variables, the complete characterization for n ≥ 4 remains an interesting open problem.

The above discussion focused on discrete random variables; however, characterizing the entropy region of a number of continuous random variables is as important. In fact, it has been shown in [5] that there is a correspondence between the continuous and discrete information inequalities, and therefore one can characterize one region from the other.

Let N = {1, ..., n} and for any α ⊆ N, let H_α = H(X_i, i ∈ α) (or h_α whenever the underlying probability distributions are continuous) be the joint entropies. A valid discrete information inequality of the form Σ_α a_α H_α ≥ 0 is called balanced if for all i ∈ N, Σ_{α: i∈α} a_α = 0. For example, H_1 + H_2 − H_12 ≥ 0 is balanced and H_1 ≥ 0 is not.

Theorem 1 (Discrete/continuous information inequalities) [5]:
1) A linear continuous information inequality Σ_α a_α h_α ≥ 0 is valid if and only if its discrete counterpart Σ_α a_α H_α ≥ 0 is balanced and valid.
2) A linear discrete information inequality Σ_α a_α H_α ≥ 0 is valid if and only if it can be written as Σ_α β_α H_α + Σ_{i=1}^n r_i (H_{i,i^c} − H_{i^c}) ≥ 0 for some r_i ≥ 0, where Σ_α β_α h_α ≥ 0 is a valid continuous information inequality (i^c denotes the complement of i in N).

Therefore one can study continuous random variables to determine Γ*_n. Among all continuous random variables, the Gaussians are the most natural ones to study first. In fact, it turns out that these distributions have interesting properties that make them even more desirable to study.

Let X_1, ..., X_n ∈ R^T be n jointly distributed zero-mean (since differential entropy is invariant to shifts, there is no point in assuming nonzero means for the X_i) vector-valued Gaussian random variables of dimension T, with covariance matrix R ∈ R^{nT×nT}. Clearly, R is symmetric, positive semi-definite, and consists of block matrices of size T × T (corresponding to each random variable). We will allow T to be arbitrary and will therefore consider the normalized joint entropy of any subset α ⊆ N of these random variables,

  h_α = (1/T) · (1/2) log( (2πe)^{T|α|} det R_α ),   (1)

where |α| denotes the cardinality of the set α and R_α is the |α|T × |α|T matrix obtained by keeping those block rows and block columns of R that are indexed by α. Note that our normalization is by the dimensionality of the X_i, i.e., by T, and that we have used h to denote normalized entropy. Normalization has the following important consequence.

Theorem 2 (Convexity of the region for h): The closure of the region of normalized Gaussian entropy vectors is convex [6].
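As a concrete illustration of (1), here is a minimal numerical sketch in Python (assuming NumPy is available; the covariance matrix, block size, and function name are illustrative choices, not from the paper) that builds the normalized entropy vector of n = 3 vector-valued Gaussians directly from a block covariance matrix.

```python
import itertools
import numpy as np

def normalized_gaussian_entropies(R, n, T):
    """Return {alpha: h_alpha} per equation (1):
    h_alpha = (1/T) * (1/2) * log((2*pi*e)^(T*|alpha|) * det R_alpha)."""
    h = {}
    for size in range(1, n + 1):
        for alpha in itertools.combinations(range(n), size):
            # keep the block rows/columns of R indexed by alpha
            idx = np.concatenate([np.arange(i * T, (i + 1) * T) for i in alpha])
            R_alpha = R[np.ix_(idx, idx)]
            h[alpha] = 0.5 * (T * len(alpha) * np.log(2 * np.pi * np.e)
                              + np.linalg.slogdet(R_alpha)[1]) / T
    return h

# Illustrative example: n = 3 variables with blocks of size T = 2 and a
# randomly generated symmetric positive-definite covariance matrix.
rng = np.random.default_rng(0)
n, T = 3, 2
G = rng.standard_normal((n * T, n * T))
R = G @ G.T + n * T * np.eye(n * T)
h = normalized_gaussian_entropies(R, n, T)
for alpha, value in sorted(h.items()):
    print(alpha, round(value, 4))
```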
It further turns out that for n = 2, 3 random variables, vector-valued Gaussian random variables can be used to obtain the entire entropy region for continuous random variables [6].

Theorem 3 (Gaussians generate the entropy region for n = 2, 3): For two and three random variables, the cone generated by the space of vector-valued Gaussian entropy vectors is the entire entropy region for continuous random variables.

In an effort to characterize the entropy region of discrete random variables, some inner and outer bounds have been established, among which the Ingleton bound is the most well-known. The Ingleton inequality was first discovered for the ranks of representable matroids [7]. In fact, let v_1, ..., v_n be n vector subspaces and N = {1, ..., n}. Further let α ⊆ N and r_α be the rank function defined as the dimension of the subspace ⊕_{i∈α} v_i. Then for any subsets α_1, α_2, α_3, α_4 ⊆ N, the Ingleton inequality is defined as

  r_{α1} + r_{α2} + r_{α1∪α2∪α3} + r_{α1∪α2∪α4} + r_{α3∪α4}
    − r_{α1∪α2} − r_{α1∪α3} − r_{α1∪α4} − r_{α2∪α3} − r_{α2∪α4} ≤ 0.   (2)

Although not all entropy vectors satisfy this inequality [8], it turns out that certain types of entropy vectors, in particular all the linearly representable ones (corresponding to linear codes over finite fields) and the abelian-group-characterizable ones, do, and hence fall into this inner bound. An important property of Gaussian random variables is that the entropy vector of 4 jointly Gaussian distributed random variables can be arranged so as to violate the Ingleton bound [6][9].
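To make (2) easy to experiment with, the following sketch (Python; the helper name and the toy rank function are my own illustrative choices, not from the paper) evaluates the Ingleton expression for α_1 = {1}, α_2 = {2}, α_3 = {3}, α_4 = {4} given any real-valued set function on the subsets of {1, 2, 3, 4}. Feeding it a subspace rank function, or the entropy vector of four random variables, shows whether that particular instance of Ingleton is satisfied or violated.

```python
from itertools import combinations

def ingleton_expression(r,
                        a1=frozenset({1}), a2=frozenset({2}),
                        a3=frozenset({3}), a4=frozenset({4})):
    """Left-hand side of (2); r maps frozensets of {1,2,3,4} to reals.
    This instance of Ingleton holds iff the returned value is <= 0."""
    u = lambda *sets: frozenset().union(*sets)
    return (r[u(a1)] + r[u(a2)] + r[u(a1, a2, a3)] + r[u(a1, a2, a4)] + r[u(a3, a4)]
            - r[u(a1, a2)] - r[u(a1, a3)] - r[u(a1, a4)]
            - r[u(a2, a3)] - r[u(a2, a4)])

# Toy check: ranks coming from four generic lines (1-dimensional subspaces) of R^2,
# so r_alpha = min(|alpha|, 2).
r = {frozenset(s): min(len(s), 2)
     for k in range(1, 5) for s in combinations(range(1, 5), k)}
print(ingleton_expression(r))   # -2 <= 0, so this instance of Ingleton holds
```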
A. Cayley's Hyperdeterminant

Recall that the entropy of a collection of Gaussian random variables is simply the "log-determinant" of their covariance matrix. Similarly, the entropy of any subset of variables from a collection of Gaussian random variables is simply the "log" of the principal minor of the covariance matrix corresponding to this subset. Therefore one approach to characterizing the entropy region of Gaussians is to study the determinantal relations of a symmetric positive semi-definite matrix.

For example, consider 3 Gaussian random variables. While the entropy vector of 3 random variables is a 7-dimensional object, there are only 6 free parameters in a symmetric positive semi-definite matrix. Therefore the minors should satisfy a relation. It has very recently been shown that this relation is given by Cayley's so-called 2 × 2 × 2 "hyperdeterminant" [10]. The hyperdeterminant is a generalization of the determinant concept for matrices to tensors, and it was first introduced by Cayley in 1845 [11].

There are a couple of equivalent definitions of the hyperdeterminant, among which we choose the definition through the degeneracy of a multilinear form. Consider the multilinear form in the vector variables X_j = (x_{j,1}, ..., x_{j,k_j}), j = 1, ..., n, with coefficient tensor A = (a_{i1,i2,...,in}),

  f(X_1, ..., X_n) = Σ_{i1,...,in} a_{i1,i2,...,in} x_{1,i1} x_{2,i2} ··· x_{n,in}.   (3)

The multilinear form f is said to be degenerate if and only if there is a non-trivial solution (X_1, X_2, ..., X_n) to the following system of partial derivative equations [12]:

  ∂f/∂x_{j,i} = 0  for all j = 1, ..., n and i = 1, ..., k_j.   (4)

The unique (up to a scale) irreducible polynomial with integral coefficients in the entries a_{i1,i2,...,in} of a tensor A that vanishes when f is degenerate is called the hyperdeterminant.

Example (2 × 2 hyperdeterminant): Consider the 2 × 2 hyperdeterminant, with f(X_1, X_2) = Σ_{i,j=0}^{1} a_{i,j} x_i y_j. The multilinear form f is degenerate if there is a non-trivial solution for X_1, X_2 of

  ∂f/∂x_0 = a_00 y_0 + a_01 y_1 = 0,   (5)
  ∂f/∂y_0 = a_00 x_0 + a_10 x_1 = 0,   (6)
  ∂f/∂x_1 = a_10 y_0 + a_11 y_1 = 0,   (7)
  ∂f/∂y_1 = a_01 x_0 + a_11 x_1 = 0.   (8)

Trying to solve this system of equations, we obtain

  y_0/y_1 = −a_01/a_00 = −a_11/a_10,   (9)
  x_0/x_1 = −a_10/a_00 = −a_11/a_01.   (10)

We see that a non-trivial solution exists if and only if a_00 a_11 − a_10 a_01 = 0, i.e., the hyperdeterminant is simply the determinant in this case.

The hyperdeterminant of a 2 × 2 × 2 multilinear form was first computed by Cayley [11] and is as follows:

  −a_000² a_111² − a_100² a_011² − a_010² a_101² − a_001² a_110²
  − 4 a_000 a_110 a_101 a_011 − 4 a_100 a_010 a_001 a_111
  + 2 a_000 a_100 a_011 a_111 + 2 a_000 a_010 a_101 a_111
  + 2 a_000 a_001 a_110 a_111 + 2 a_100 a_010 a_101 a_011
  + 2 a_100 a_001 a_110 a_011 + 2 a_010 a_001 a_110 a_101 = 0.   (11)

In [10] it is further shown that the principal minors of an n × n symmetric matrix satisfy the 2 × 2 × ... × 2 (n times) hyperdeterminant relations. It is thus clear that determining the entropy region of Gaussian random variables is intimately related to Cayley's hyperdeterminant. It is with this viewpoint in mind that we study the hyperdeterminant in this paper.
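As a sanity check on the result from [10] cited above, the sketch below (Python with NumPy; a numerical verification, not the paper's proof, with function names of my own choosing) assembles the 2 × 2 × 2 tensor of principal minors of a random symmetric 3 × 3 matrix (with the empty minor set to 1) and confirms that relation (11) holds to machine precision.

```python
import numpy as np

def principal_minor_tensor(A):
    """a[e1, e2, e3] = det of the principal submatrix of A indexed by {i : e_i = 1};
    the empty minor is 1."""
    a = np.empty((2, 2, 2))
    for e in np.ndindex(2, 2, 2):
        idx = [i for i in range(3) if e[i] == 1]
        a[e] = np.linalg.det(A[np.ix_(idx, idx)]) if idx else 1.0
    return a

def hyperdet_222(a):
    """Cayley's 2 x 2 x 2 hyperdeterminant, with the sign convention of equation (11)."""
    return (-a[0,0,0]**2 * a[1,1,1]**2 - a[1,0,0]**2 * a[0,1,1]**2
            - a[0,1,0]**2 * a[1,0,1]**2 - a[0,0,1]**2 * a[1,1,0]**2
            - 4 * a[0,0,0] * a[1,1,0] * a[1,0,1] * a[0,1,1]
            - 4 * a[1,0,0] * a[0,1,0] * a[0,0,1] * a[1,1,1]
            + 2 * a[0,0,0] * a[1,0,0] * a[0,1,1] * a[1,1,1]
            + 2 * a[0,0,0] * a[0,1,0] * a[1,0,1] * a[1,1,1]
            + 2 * a[0,0,0] * a[0,0,1] * a[1,1,0] * a[1,1,1]
            + 2 * a[1,0,0] * a[0,1,0] * a[1,0,1] * a[0,1,1]
            + 2 * a[1,0,0] * a[0,0,1] * a[1,1,0] * a[0,1,1]
            + 2 * a[0,1,0] * a[0,0,1] * a[1,1,0] * a[1,0,1])

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2                      # a random symmetric 3 x 3 matrix
a = principal_minor_tensor(A)
print(hyperdet_222(a))                 # ~ 0 (up to floating-point error)
```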
Recommended publications
  • arXiv:1810.05857v3 [math.AG] 11 Jun 2020
    HYPERDETERMINANTS FROM THE E8 DISCRIMINANT. FRÉDÉRIC HOLWECK AND LUKE OEDING. Abstract. We find expressions of the polynomials defining the dual varieties of Grassmannians Gr(3, 9) and Gr(4, 8) both in terms of the fundamental invariants and in terms of a generic semi-simple element. We restrict the polynomial defining the dual of the adjoint orbit of E8 and obtain the polynomials of interest as factors. To find an expression of the Gr(4, 8) discriminant in terms of fundamental invariants, which has 15,942 terms, we perform interpolation with mod-p reductions and rational reconstruction. From these expressions for the discriminants of Gr(3, 9) and Gr(4, 8) we also obtain expressions for well-known hyperdeterminants of formats 3 × 3 × 3 and 2 × 2 × 2 × 2. 1. Introduction. Cayley's 2 × 2 × 2 hyperdeterminant is the well-known polynomial ∆222 = x_000² x_111² + x_001² x_110² + x_010² x_101² + x_100² x_011² + 4(x_000 x_011 x_101 x_110 + x_001 x_010 x_100 x_111) − 2(x_000 x_001 x_110 x_111 + x_000 x_010 x_101 x_111 + x_000 x_100 x_011 x_111 + x_001 x_010 x_101 x_110 + x_001 x_100 x_011 x_110 + x_010 x_100 x_011 x_101). It generates the ring of invariants for the group SL(2)^×3 ⋉ S3 acting on the tensor space C^{2×2×2}. It is well-studied in Algebraic Geometry. Its vanishing defines the projective dual of the Segre embedding of three copies of the projective line (a toric variety) [13], and also coincides with the tangential variety of the same Segre product [24, 28, 33]. On real tensors it separates real ranks 2 and 3 [9]. It is the unique relation among the principal minors of a general 3 × 3 symmetric matrix [18].
    [Show full text]
  • Singularities of Hyperdeterminants Annales De L’Institut Fourier, Tome 46, No 3 (1996), P
    ANNALES DE L'INSTITUT FOURIER. JERZY WEYMAN, ANDREI ZELEVINSKY. Singularities of hyperdeterminants. Annales de l'institut Fourier, tome 46, no. 3 (1996), p. 591-644. <http://www.numdam.org/item?id=AIF_1996__46_3_591_0> © Annales de l'institut Fourier, 1996, all rights reserved. Article digitized as part of the program "Numérisation de documents anciens mathématiques", http://www.numdam.org/. Ann. Inst. Fourier, Grenoble 46, 3 (1996), 591-644. SINGULARITIES OF HYPERDETERMINANTS by J. WEYMAN (1) and A. ZELEVINSKY (2). Contents: 0. Introduction; 1. Symmetric matrices with diagonal lacunae; 2. Cusp type singularities; 3. Eliminating special node components; 4. The generic node component; 5. Exceptional cases: the zoo of three- and four-dimensional matrices; 6. Decomposition of the singular locus into cusp and node parts; 7. Multi-dimensional "diagonal" matrices and the Vandermonde matrix. 0. Introduction. In this paper we continue the study of hyperdeterminants recently undertaken in [4], [5], [12]. The hyperdeterminants are analogs of determinants for multi-dimensional "matrices". Their study was initiated by... (1) Partially supported by the NSF (DMS-9102432). (2) Partially supported by the NSF (DMS-9304247). Key words: Hyperdeterminant - Singular locus - Cusp type singularities - Node type singularities - Projectively dual variety - Segre embedding. Math. classification: 14B05.
    [Show full text]
  • arXiv:1404.6852v1 [quant-ph] 28 Apr 2014
    SLOCC Invariants for Multipartite Mixed States. Naihuan Jing (1,5,*), Ming Li (2,3), Xianqing Li-Jost (2), Tinggui Zhang (2), and Shao-Ming Fei (2,4). (1) School of Sciences, South China University of Technology, Guangzhou 510640, China; (2) Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany; (3) Department of Mathematics, China University of Petroleum, Qingdao 266555, China; (4) School of Mathematical Sciences, Capital Normal University, Beijing 100048, China; (5) Department of Mathematics, North Carolina State University, Raleigh, NC 27695, USA. Abstract. We construct a nontrivial set of invariants for any multipartite mixed states under the SLOCC symmetry. These invariants are given by hyperdeterminants and independent from basis change. In particular, a family of d² invariants for arbitrary d-dimensional even partite mixed states are explicitly given. PACS numbers: 03.67.-a, 02.20.Hj, 03.65.-w. arXiv:1404.6852v1 [quant-ph] 28 Apr 2014. *Electronic address: [email protected]. I. INTRODUCTION. Classification of multipartite states under stochastic local operations and classical communication (SLOCC) has been a central problem in quantum communication and computation. Recently advances have been made for the classification of pure multipartite states under SLOCC [1, 2] and the dimension of the space of homogeneous SLOCC-invariants in a fixed degree is given as a function of the number of qudits. In this work we present a general method to construct polynomial invariants for mixed states under SLOCC. In particular we also derive general invariants under local unitary (LU) symmetry for mixed states. Polynomial invariants have been investigated in [3–6], which allow in principle to determine all the invariants of local unitary transformations.
    [Show full text]
  • Hyperdeterminants As Integrable Discrete Systems
    Konrad-Zuse-Zentrum für Informationstechnik Berlin, Takustraße 7, D-14195 Berlin-Dahlem, Germany. SERGEY P. TSAREV (1), THOMAS WOLF (2). Hyperdeterminants as integrable discrete systems. (1) Siberian Federal University, Krasnoyarsk, Russia. (2) Department of Mathematics, Brock University, St. Catharines, Canada, and ZIB Fellow. ZIB-Report 09-17 (May 2009). Hyperdeterminants as integrable discrete systems. S.P. Tsarev and T. Wolf. Siberian Federal University, Svobodnyi avenue, 79, 660041, Krasnoyarsk, Russia, and Department of Mathematics, Brock University, 500 Glenridge Avenue, St. Catharines, Ontario, Canada L2S 3A1. e-mails: [email protected] [email protected]. Abstract. We give the basic definitions and some theoretical results about hyperdeterminants, introduced by A. Cayley in 1845. We prove integrability (understood as 4d-consistency) of a nonlinear difference equation defined by the 2×2×2 hyperdeterminant. This result gives rise to the following hypothesis: the difference equations defined by hyperdeterminants of any size are integrable. We show that this hypothesis already fails in the case of the 2×2×2×2 hyperdeterminant. 1. Introduction. Discrete integrable equations have become a very vivid topic in the last decade. A number of important results on the classification of different classes of such equations, based on the notion of consistency [3], were obtained in [1, 2, 17] (cf. also references to earlier publications given there). As a rule, discrete equations describe relations on the scalar field variables f_{i1...in} ∈ C associated with the points of a lattice Z^n with vertices at integer points in the n-dimensional space R^n = {(x_1,...,x_n) | x_s ∈ R}. If we take the elementary cubic cell K_n = {(i_1,...,i_n) | i_s ∈ {0,1}} of this lattice and the field variables f_{i1...in} associated to its 2^n vertices, an n-dimensional discrete system of the type considered here is given by an equation of the form Q_n(f) = 0.
    [Show full text]
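The idea of a difference equation defined by the 2 × 2 × 2 hyperdeterminant can be sketched as follows (Python with SymPy; an illustration of the general setup rather than the report's construction, with symbol names of my own choosing): the hyperdeterminant is quadratic in each entry, so prescribing the field variables on seven vertices of the elementary cube determines the eighth value up to the two roots of a quadratic.

```python
from itertools import product
import sympy as sp

# Field variables f_{i1 i2 i3} attached to the 8 vertices of the elementary cube.
f = {e: sp.Symbol(f"f{e[0]}{e[1]}{e[2]}") for e in product((0, 1), repeat=3)}

# Cayley's 2 x 2 x 2 hyperdeterminant Delta_222; its vanishing is equation (11)
# of the article above (up to an overall sign).
Delta = (f[0,0,0]**2*f[1,1,1]**2 + f[0,0,1]**2*f[1,1,0]**2
         + f[0,1,0]**2*f[1,0,1]**2 + f[1,0,0]**2*f[0,1,1]**2
         + 4*(f[0,0,0]*f[0,1,1]*f[1,0,1]*f[1,1,0] + f[0,0,1]*f[0,1,0]*f[1,0,0]*f[1,1,1])
         - 2*(f[0,0,0]*f[0,0,1]*f[1,1,0]*f[1,1,1] + f[0,0,0]*f[0,1,0]*f[1,0,1]*f[1,1,1]
              + f[0,0,0]*f[1,0,0]*f[0,1,1]*f[1,1,1] + f[0,0,1]*f[0,1,0]*f[1,0,1]*f[1,1,0]
              + f[0,0,1]*f[1,0,0]*f[0,1,1]*f[1,1,0] + f[0,1,0]*f[1,0,0]*f[0,1,1]*f[1,0,1]))

# Delta = 0 is quadratic in the "unknown" corner f_{111}: the other seven corner
# values determine f_{111} up to two branches.
branches = sp.solve(sp.Eq(Delta, 0), f[1, 1, 1])
print(len(branches))   # 2
```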
  • Lectures on Algebraic Statistics
    Lectures on Algebraic Statistics. Mathias Drton, Bernd Sturmfels, Seth Sullivant. September 27, 2008. Contents: 1 Markov Bases (7): 1.1 Hypothesis Tests for Contingency Tables (7); 1.2 Markov Bases of Hierarchical Models (17); 1.3 The Many Bases of an Integer Lattice (26). 2 Likelihood Inference (35): 2.1 Discrete and Gaussian Models (35); 2.2 Likelihood Equations for Implicit Models (46); 2.3 Likelihood Ratio Tests (54). 3 Conditional Independence (67): 3.1 Conditional Independence Models (67); 3.2 Graphical Models (75); 3.3 Parametrizations of Graphical Models (85). 4 Hidden Variables (95): 4.1 Secant Varieties in Statistics (95); 4.2 Factor Analysis (105). 5 Bayesian Integrals (113): 5.1 Information Criteria and Asymptotics (113); 5.2 Exact Integration for Discrete Models (122). 6 Exercises (131): 6.1 Markov Bases Fixing Subtable Sums (131); 6.2 Quasi-symmetry and Cycles (136); 6.3 A Colored Gaussian Graphical Model (139); 6.4 Instrumental Variables and Tangent Cones (143); 6.5 Fisher Information for Multivariate Normals (150); 6.6 The Intersection Axiom and Its Failure (152); 6.7 Primary Decomposition for CI Inference (155); 6.8 An Independence Model and Its Mixture (158). 7 Open Problems (165). Preface. Algebraic statistics is concerned with the development of techniques in algebraic geometry, commutative algebra, and combinatorics, to address problems in statistics and its applications. On the one hand, algebra provides a powerful tool set for addressing statistical problems. On the other hand, it is rarely the case that algebraic techniques are ready-made to address statistical challenges, and usually new algebraic results need to be developed.
    [Show full text]
  • Hyperdeterminants of Polynomials
    Hyperdeterminants of Polynomials. Luke Oeding, University of California, Berkeley. June 5, 2012. Support: NSF IRFP (#0853000) while at the University of Florence. Quadratic polynomials, matrices, and discriminants. A square matrix A = (a_{1,1}, a_{1,2}; a_{2,1}, a_{2,2}) has determinant a_{1,1} a_{2,2} − a_{1,2} a_{2,1}, which vanishes when A is singular, e.g. when the columns are linearly dependent. A quadratic polynomial f = ax² + bx + c has discriminant ∆(f) = ac − b²/4, which vanishes when f is singular: ∆(f) = 0 ⟺ f has a repeated root. Notice we can associate to f the symmetric matrix A_f = (a, b/2; b/2, c), and det(A_f) = ∆(f). More generally, to the quadratic form f = Σ_{1≤i≤n} a_{i,i} x_i² + Σ_{1≤i<j≤n} 2 a_{i,j} x_i x_j we associate the n × n symmetric matrix A_f = (a_{i,j}). The determinant det(A_f) vanishes when A_f is singular, e.g. when the columns are linearly dependent, and the discriminant ∆(f) vanishes when f is singular, i.e. has a repeated root. If f is quadratic, then det(A_f) = ∆(f): the symmetrization of the determinant of a matrix = the determinant of a symmetrized matrix.
    [Show full text]
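A quick symbolic check of the slide's identity det(A_f) = ∆(f) for a binary quadratic (Python with SymPy; the symbol names are mine, chosen for the example).

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
f = a*x**2 + b*x + c

# Symmetric matrix associated to f, and the discriminant normalization used on the slide.
A_f = sp.Matrix([[a, b/2], [b/2, c]])
disc = a*c - b**2/4

print(sp.simplify(A_f.det() - disc))                # 0: det(A_f) = Delta(f)
print(sp.simplify(disc + sp.discriminant(f, x)/4))  # 0: Delta(f) = -(b^2 - 4ac)/4
```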
  • Hyperdeterminants and Symmetric Functions
    Hyperdeterminants and symmetric functions. Jean-Gabriel Luque, in collaboration with Christophe Carré. 24 November 2012. Hyperdeterminants: a little history. The simplest generalization of the determinant to higher tensors (arrays M = (M_{i1,...,ik})_{1≤i1,...,ik≤n}) is

    Det(M) = (1/n!) Σ_{σ1,...,σk ∈ Sn} ε(σ1) ··· ε(σk) M_{σ1(1),...,σk(1)} ··· M_{σ1(n),...,σk(n)},

    where ε(σ) is the sign of the permutation σ. The notion is due to Cayley (1846). For k = 2 we recover the classical determinant.
    [Show full text]
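Here is a brute-force sketch of Cayley's 1846 hyperdeterminant as defined above (Python; it enumerates all k-tuples of permutations, so it is only practical for tiny n and k, and the function names are mine), checking that for k = 2 it reduces to the classical determinant.

```python
from itertools import permutations, product
from math import factorial
import numpy as np

def sign(p):
    """Sign of a permutation of 0..n-1, via its cycle decomposition."""
    s, seen = 1, set()
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j, length = p[j], length + 1
            s *= (-1) ** (length - 1)   # a cycle of length L contributes (-1)^(L-1)
    return s

def cayley_hyperdet(M):
    """(1/n!) * sum over (s1,...,sk) in S_n^k of sign(s1)...sign(sk) * prod_i M[s1(i),...,sk(i)]."""
    M = np.asarray(M)
    n, k = M.shape[0], M.ndim
    total = 0.0
    for sigmas in product(permutations(range(n)), repeat=k):
        term = np.prod([sign(s) for s in sigmas])
        for i in range(n):
            term *= M[tuple(s[i] for s in sigmas)]
        total += term
    return total / factorial(n)

M = np.array([[1.0, 2.0, 3.0], [0.0, 4.0, 5.0], [1.0, 0.0, 6.0]])
print(cayley_hyperdet(M), np.linalg.det(M))   # both ~ 22.0: k = 2 recovers the determinant
```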
  • Discriminants, Resultants, and Their Tropicalization
    Discriminants, resultants, and their tropicalization. Course by Bernd Sturmfels - Notes by Silvia Adduci. 2006. Contents: 1 Introduction (3). 2 Newton polytopes and tropical varieties (3): 2.1 Polytopes (3); 2.2 Newton polytope (6); 2.3 Term orders and initial monomials (8); 2.4 Tropical hypersurfaces and tropical varieties (9); 2.5 Computing tropical varieties (12); 2.5.1 Implementation in GFan (12); 2.6 Valuations and Connectivity (12); 2.7 Tropicalization of linear spaces (13). 3 Discriminants & Resultants (14): 3.1 Introduction (14); 3.2 The A-Discriminant (16); 3.3 Computing the A-discriminant (17); 3.4 Determinantal Varieties (18); 3.5 Elliptic Curves (19); 3.6 2 × 2 × 2-Hyperdeterminant (20); 3.7 2 × 2 × 2 × 2-Hyperdeterminant (21); 3.8 Gel'fand, Kapranov, Zelevinsky (21). 4 Tropical Discriminants (22): 4.1 Elliptic curves revisited (23); 4.2 Tropical Horn uniformization (24); 4.3 Recovering the Newton polytope (25); 4.4 Horn uniformization (29). 5 Tropical Implicitization (30): 5.1 The problem of implicitization (30); 5.2 Tropical implicitization (31); 5.3 A simple test case: tropical implicitization of curves (32); 5.4 How about for d ≥ 2 unknowns? (33); 5.5 Tropical implicitization of plane curves (34). 6 References (35). 1 Introduction. The aim of this course is to introduce discriminants and resultants, in the sense of Gel'fand, Kapranov and Zelevinsky [6], with emphasis on the tropical approach which was developed by Dickenstein, Feichtner, and the lecturer [3]. This tropical approach in mathematics has gotten a lot of attention recently in combinatorics, algebraic geometry and related fields.
    [Show full text]
  • Hyperdeterminants from the E8 Discriminant
    HYPERDETERMINANTS FROM THE E8 DISCRIMINANT. FRÉDÉRIC HOLWECK AND LUKE OEDING. Abstract. We find expressions of the polynomials defining the dual varieties of Grassmannians Gr(3, 9) and Gr(4, 8) both in terms of the fundamental invariants and in terms of a generic semi-simple element. We project the polynomial defining the dual of the adjoint orbit of E8, and obtain the polynomials of interest as factors. To find an expression of the Gr(4, 8) discriminant in terms of fundamental invariants, which has 15,942 terms, we perform interpolation with mod-p reduction and rational reconstruction. From these expressions for the discriminants of Gr(3, 9) and Gr(4, 8) we also obtain expressions for well-known hyperdeterminants of formats 3 × 3 × 3 and 2 × 2 × 2 × 2. 1. Introduction. Cayley's 2 × 2 × 2 hyperdeterminant is the well-known polynomial ∆222 = x_000² x_111² + x_010² x_101² + x_001² x_110² + x_011² x_100² + 4(x_000 x_011 x_101 x_110 + x_001 x_010 x_100 x_111) − 2(x_000 x_001 x_110 x_111 + x_000 x_010 x_101 x_111 + x_000 x_011 x_100 x_111 + x_001 x_010 x_101 x_110 + x_001 x_011 x_100 x_110 + x_010 x_011 x_100 x_101). It generates the ring of invariants for the natural group SL(2)^×3 ⋉ S3 acting on the tensor space C^{2×2×2}. It is well-studied in Algebraic Geometry. Its vanishing defines the projective dual of the Segre embedding of three copies of the projective line (a toric variety) [11, 32], and also coincides with the tangential variety of the same Segre product [21, 25, 30]. It is the unique relation among the principal minors of a general 3 × 3 symmetric matrix [15]. It is the starting point for many interesting studies in combinatorics [11].
    [Show full text]
  • Hyperdeterminants As Integrable 3D Difference Equations (Joint Project with Sergey Tsarev, Krasnoyarsk State Pedagogical University)
    Hyperdeterminants as integrable 3D difference equations (joint project with Sergey Tsarev, Krasnoyarsk State Pedagogical University). Thomas Wolf, Brock University, Ontario. Ste-Adèle, June 26, 2008. Plan: the simplest (hyper)determinants: 2 × 2 and 2 × 2 × 2; modern applications of hyperdeterminants; the classical heritage: A. Cayley et al.; the definition of hyperdeterminants and its variations; the next step: the (a?) 2 × 2 × 2 × 2 hyperdeterminant (B. Sturmfels et al., 2006); FORM computations: how far can we reach now?; Principal Minor Assignment Problem (O. Holtz, B. Sturmfels, 2006); hyperdeterminants as discrete integrable systems: 2 × 2 and 2 × 2 × 2; is the 2 × 2 × 2 × 2 hyperdeterminant integrable?
    [Show full text]
  • Subtracting a Best Rank-1 Approximation Does Not Necessarily Decrease Tensor Rank Alwin Stegeman, Pierre Comon
    Subtracting a best rank-1 approximation does not necessarily decrease tensor rank. Alwin Stegeman, Pierre Comon. To cite this version: Alwin Stegeman, Pierre Comon. Subtracting a best rank-1 approximation does not necessarily decrease tensor rank. Linear Algebra and its Applications, Elsevier, 2010, 433 (7), pp. 1276–1300. 10.1016/j.laa.2010.06.027. hal-00512275. HAL Id: hal-00512275, https://hal.archives-ouvertes.fr/hal-00512275. Submitted on 29 Aug 2010. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Subtracting a best rank-1 approximation may increase tensor rank. Alwin Stegeman and Pierre Comon. April 7, 2010. Abstract. It has been shown that a best rank-R approximation of an order-k tensor may not exist when R ≥ 2 and k ≥ 3. This poses a serious problem to data analysts using tensor decompositions. It has been observed numerically that, generally, this issue cannot be solved by consecutively computing and subtracting best rank-1 approximations. The reason for this is that subtracting a best rank-1 approximation generally does not decrease tensor rank. In this paper, we provide a mathematical treatment of this property for real-valued 2 × 2 × 2 tensors, with symmetric tensors as a special case.
    [Show full text]
  • Tensor Decomposition and Algebraic Geometry
    TENSOR DECOMPOSITION AND ALGEBRAIC GEOMETRY. LUKE OEDING. 1. Summary. In this lecture I will introduce tensors and tensor rank from an algebraic perspective. I will introduce multilinear rank and tensor rank, and I will discuss the related classical algebraic varieties – subspace varieties and secant varieties. I will give basic tools for computing dimensions, Terracini's lemma and the notion of the abstract secant variety. This will lead to the notion of generic rank. I will briefly discuss implicit equations, which lead to rank tests. Some basic references: [CGO14, Lan12]. In the second lecture I will focus on one special case of tensor decomposition – symmetric tensors and Waring decomposition. I will start by discussing the naive approach, then I will discuss Sylvester's algorithm for binary forms. As a bonus I will show how Sylvester's algorithm for symmetric tensor decomposition also gives a method to find the roots of a cubic polynomial in one variable. I will discuss what to expect regarding generic rank and uniqueness of tensor decomposition. With the remaining time I will discuss the recently defined notion of an eigenvector of a (symmetric) tensor ([Lim05, Qi05]), which leads to a new method (developed with Ottaviani) for exact Waring decomposition ([OO13]). Lecture 1: Tensors and classical Algebraic Geometry. 2. Tensors with and without coordinates. The main goal in these lectures is to explain the basics of tensors from an Algebraic Geometry point of view. The main goal is to find exact expressions for specific tensors that are as efficient as possible. The guiding questions are the following: • Find the space using the least number of variables with which to represent a tensor.
    [Show full text]