
MATRICES WITH HIGH COMPLETELY POSITIVE SEMIDEFINITE RANK

SANDER GRIBLING, DAVID DE LAAT, AND MONIQUE LAURENT

Abstract. A real symmetric matrix M is completely positive semidefinite if it admits a Gram representation by positive semidefinite matrices (of any size d). The smallest such d is called the completely positive semidefinite rank of M, and it is an open question whether there exists an upper bound on this number as a function of the matrix size. We show that if such an upper bound exists, it has to be at least exponential in the matrix size. For this we exploit connections to quantum information theory and we construct extremal bipartite correlation matrices of large rank. We also exhibit a class of completely positive matrices with quadratic (in terms of the matrix size) completely positive rank, but with linear completely positive semidefinite rank, and we make a connection to the existence of Hadamard matrices.

Date: May 3, 2016.

1. Introduction

A matrix is said to be completely positive semidefinite if it admits a Gram representation by positive semidefinite matrices (of any size). The n × n completely positive semidefinite matrices form a convex cone, called the completely positive semidefinite cone, which we denote by CS_+^n.

The motivation for the study of the completely positive semidefinite cone is twofold. Firstly, the completely positive semidefinite cone CS_+^n is a natural analog of the completely positive cone CP^n, which consists of the matrices admitting a factorization by nonnegative vectors. The cone CP^n is well studied (see, for example, the monograph [1]) and, in particular, it can be used to model classical graph parameters. For instance, [12] shows how to model the stability number of a graph as a conic optimization problem over the completely positive cone. A second motivation lies in the connection to quantum information theory. Indeed, the cone CS_+^n was introduced in [13] to model quantum graph parameters (including quantum stability numbers) as conic optimization problems, an approach extended in [17] for quantum graph homomorphisms and in [20] for quantum correlations.

In this paper we are interested in the size of the factors needed in Gram representations of matrices. This type of question is of interest for factorizations by nonnegative vectors as well as by positive semidefinite matrices. A matrix M is said to be completely positive if there exist nonnegative vectors v_1, ..., v_n ∈ R_+^d such that M_{i,j} = ⟨v_i, v_j⟩ for all i and j, where ⟨v_i, v_j⟩ = v_i^T v_j denotes the Euclidean inner product. The smallest d for which these vectors exist is denoted by cp-rank(M) and is called the completely positive rank of M. Similarly, a matrix M is called completely positive semidefinite if there exist positive semidefinite matrices X_1, ..., X_n ∈ S_+^d such that M_{i,j} = ⟨X_i, X_j⟩ for all i and j, where ⟨X_i, X_j⟩ = Tr(X_i X_j) is the trace inner product. The smallest d for which these matrices exist is denoted by cpsd-rank(M), and this is called the completely positive semidefinite rank of M. We call such a set of vectors (matrices) a Gram representation or factorization of M by nonnegative vectors (positive semidefinite matrices).

By construction, we have the following inclusions:

CP^n ⊆ CS_+^n ⊆ S_+^n ∩ R_+^{n×n}.

The three cones coincide for n ≤ 4 (since doubly nonnegative matrices of size n ≤ 4 are completely positive), but both inclusions are strict for n ≥ 5 (see [13] for details).

By Carathéodory's theorem, the completely positive rank of a matrix in CP^n is at most (n+1 choose 2). In [19] the following stronger bound is given:

(1)   cp-rank(M) ≤ (n+1 choose 2) − 4   for M ∈ CP^n and n ≥ 5,

which is also not known to be tight. No upper bound is known for the completely positive semidefinite rank of matrices in CS_+^n. It is not even known whether such a bound exists. A positive answer would have strong implications. It would imply that the cone CS_+^n is closed. This, in turn, would imply that the set of quantum correlations is closed, since it can be seen as a projection of an affine slice of the completely positive semidefinite cone (see [16, 20]). Whether the set of quantum correlations is closed is an open question in quantum information theory. In contrast, as an application of the upper bound (1), the completely positive cone CP^n is easily seen to be closed. A description of the closure of the completely positive semidefinite cone in terms of factorizations by positive elements in von Neumann algebras can be found in [4]. Such factorizations were used to show a separation between the closure of CS_+^n and the doubly nonnegative cone S_+^n ∩ R_+^{n×n} (see [9, 13]).

In this paper we show that if an upper bound exists for the completely positive semidefinite rank of matrices in CS_+^n, then it needs to grow at least exponentially in the matrix size n. Our main result is the following:

Theorem 1.1. For each positive integer r, there exists a completely positive semidefinite matrix M of size r^2 + r + 2 with cpsd-rank(M) ≥ √2^{⌊r/2⌋}.

The proof of this result relies on a connection with quantum information theory and geometric properties of (bipartite) correlation matrices. We refer to the main text for the definitions of quantum and bipartite correlations. A first basic ingredient is the fact from [20] that a quantum correlation p can be realized in local dimension d if and only if there exists a certain completely positive semidefinite matrix with cpsd-rank at most d. Then, the key idea is to construct a class of quantum correlations p that need large local dimension. We construct such quantum correlations from bipartite correlation matrices. For this we use classical results of Tsirelson [21, 22], which characterize bipartite correlation matrices in terms of operator representations and provide a link to Clifford algebras. In this way we reduce the problem to the problem of finding bipartite correlation matrices that are extreme points of the set of bipartite correlations and have large rank. Interestingly, for our construction we use and combine classical results of Tsirelson with recent results in rigidity theory.

For the completely positive rank we have the quadratic upper bound (1), and completely positive matrices have been constructed whose completely positive rank grows quadratically in the size of the matrix. This is the case, for instance, for the matrices

M_k = [ I_k  (1/k) J_k ; (1/k) J_k  I_k ] ∈ CP^{2k},

where cp-rank(M_k) = k^2. Here I_k ∈ S^k is the identity matrix and J_k ∈ S^k is the all-ones matrix. This leads to the natural question of how fast cpsd-rank(M_k) grows.
As a second result we show that the completely positive semidefinite rank grows linearly for the matrices M_k, and we exhibit a link to the question of existence of Hadamard matrices. More precisely, we show that

k ≤ cpsd-rank(M_k) ≤ 2k,

with equality cpsd-rank(M_k) = k if and only if there exists a real Hadamard matrix of order k.

The completely positive and completely positive semidefinite rank are symmetric analogs of the nonnegative and positive semidefinite rank. Here the nonnegative rank, denoted rank_+(M), of a matrix M ∈ R_+^{m×n} is the smallest integer d for which there exist nonnegative vectors u_1, ..., u_m, v_1, ..., v_n ∈ R_+^d such that M_{i,j} = ⟨u_i, v_j⟩ for all i and j, and the positive semidefinite rank, denoted rank_psd(M), is the smallest d for which there exist positive semidefinite matrices X_1, ..., X_m, Y_1, ..., Y_n ∈ S_+^d such that M_{i,j} = ⟨X_i, Y_j⟩ for all i and j. These notions have many applications, in particular to communication complexity and to the study of efficient linear or semidefinite extensions of convex polyhedra (see [25, 10]). Unlike in the symmetric setting, in the asymmetric setting the following bounds, which show a linear regime, can easily be checked:

rank_psd(M) ≤ rank_+(M) ≤ min(m, n).

We refer to [8] and the references therein for a recent overview of results about the positive semidefinite rank.

Organization of the paper. In Section 2 we first present some simple properties of the completely positive semidefinite rank, and then investigate its value for the matrices M_k, where we also show a link to the existence of Hadamard matrices. We also give a simple heuristic for finding approximate positive semidefinite factorizations. The proof of our main result in Theorem 1.1 boils down to several key ingredients, which we treat in the subsequent sections.

In Section 3 we group old and new results about the set of bipartite correlation matrices. In particular, we give a geometric characterization of the extreme points, we revisit some conditions due to Tsirelson and links to rigidity theory, and we construct a class of extreme bipartite correlations with optimal parameters.

In Section 4 we recall some characterizations, due to Tsirelson, of bipartite correlations in terms of operator representations. We also recall connections to Clifford algebras, and for bipartite correlations that are extreme points we relate their rank to the dimension of their operator representations.

Finally, in Section 5 we introduce quantum correlations and recall their link to completely positive semidefinite matrices. We show how to construct quantum correlations from bipartite correlation matrices, and we prove the main theorem.

Note. Upon completion of this paper we learned of the recent independent work [18], where a class of matrices with exponential cpsd-rank is also constructed. The key idea of using extremal bipartite correlation matrices having large rank is the same. Our construction uses bipartite correlation matrices with optimized parameters meeting Tsirelson's upper bound (see Corollary 3.10). As a consequence, our completely positive semidefinite matrices have the best ratio between cpsd-rank and size that can be obtained using this technique.

2. Some properties of the completely positive semidefinite rank

In this section we consider the completely positive semidefinite rank of matrices in the completely positive cone. In particular, for a class of matrices, we show a quadratic separation in terms of the matrix size between both ranks. We also mention a simple heuristic for building completely positive semidefinite factorizations, which we have used to test several explicit examples.

We start by collecting some simple properties of the completely positive semidefinite rank. A first observation is that if a matrix M admits a Gram representation by Hermitian positive semidefinite matrices of size d, then it also admits a Gram representation by real symmetric positive semidefinite matrices of size 2d, and thus M is completely positive semidefinite with cpsd-rank(M) ≤ 2d. This is based on the well-known fact that mapping a Hermitian d × d matrix X to

(1/√2) [ Re(X)  −Im(X) ; Im(X)  Re(X) ] ∈ S^{2d}

is an isometry that preserves positive semidefiniteness.

The completely positive semidefinite rank is subadditive, that is, for A, B ∈ CS_+^n we have

cpsd-rank(A + B) ≤ cpsd-rank(A) + cpsd-rank(B),

which can be seen as follows: If A is the Gram matrix of X_1, ..., X_n ∈ S_+^k and B is the Gram matrix of Y_1, ..., Y_n ∈ S_+^r, then A + B is the Gram matrix of the matrices X_1 ⊕ Y_1, ..., X_n ⊕ Y_n ∈ S_+^{k+r}.

For M ∈ CS_+^n with r = cpsd-rank(M), we have the inequality

(r+1 choose 2) ≥ rank(M),

since the factors of M in S_+^r yield another factorization of M by vectors in R^{(r+1 choose 2)}. Finally, the next lemma shows that if the completely positive semidefinite rank of a matrix is high, then each factorization by positive semidefinite matrices must contain at least one matrix with high rank.

Lemma 2.1. If M ∈ R^{n×n} has a Gram representation by positive semidefinite matrices X_1, ..., X_n with rank(X_i) = r_i, then cpsd-rank(M) ≤ r_1 + ... + r_n.

Proof. Let X_1, ..., X_n ∈ S_+^d be a Gram representation of M with rank(X_i) = r_i for all i ∈ [n]. Let k = r_1 + ... + r_n. If d > k, then there exist orthonormal vectors v_1, ..., v_{d−k} ∈ R^d such that X_i v_j = 0 for all i and j. Let u_1, ..., u_k be an orthonormal basis of span{v_1, ..., v_{d−k}}^⊥, and let U be the matrix whose ith column is u_i. With Y_i = U^T X_i U we have ⟨X_i, X_j⟩ = ⟨Y_i, Y_j⟩ for all i and j, so M = Gram(Y_1, ..., Y_n). And since Y_i ∈ S_+^k for all i, this completes the proof. □

2.1. A connection to the existence of Hadamard matrices. Consider the 2k × 2k matrix

M_k = [ I_k  (1/k) J_k ; (1/k) J_k  I_k ],

where I_k is the k × k identity matrix and J_k the k × k all-ones matrix. The completely positive rank of M_k equals k^2, which is well known and easy to check (see Proposition 2.2 below). The Drew-Johnson-Loewy (DJL) conjecture in [6] states that ⌊n^2/4⌋ is an upper bound on the completely positive rank of n × n matrices. Thus, for the matrices M_k, the completely positive rank attains this upper bound. The DJL conjecture holds for n ≤ 5, but it was recently disproved in [3], where explicit matrices are constructed of size 7 ≤ n ≤ 11 that have completely positive rank larger than the conjectured bound. For k ≥ 6 the matrices M_k still have the largest known completely positive rank.

It is therefore natural to ask whether the matrices M_k also have large (quadratic in k) completely positive semidefinite rank. As we see below this is not the case. We show that the completely positive semidefinite rank is at most 2k. We moreover show that it is equal to k if and only if a real Hadamard matrix of order k exists, which suggests that determining the completely positive semidefinite rank is a difficult problem in general. A real Hadamard matrix of order k is a matrix H ∈ {±1}^{k×k} with pairwise orthogonal columns. We use the following notation: the support of a vector u ∈ R^d is the set of indices i ∈ [d] such that u_i ≠ 0. For completeness we first give a proof that the completely positive rank is k^2.

Proposition 2.2. The completely positive rank of M_k is equal to k^2.

Proof. For i ∈ [k] consider the vectors v_i = (1/√k) e_i ⊗ 1 and u_i = (1/√k) 1 ⊗ e_i, where e_i is the ith standard basis vector in R^k and 1 is the all-ones vector in R^k. The vectors v_1, ..., v_k, u_1, ..., u_k are nonnegative and form a Gram representation of M_k, which shows cp-rank(M_k) ≤ k^2.

Suppose M_k = Gram(v_1, v_2, ..., v_k, u_1, u_2, ..., u_k) with v_i, u_i ∈ R_+^d. In the remainder of the proof we show d ≥ k^2. We have (M_k)_{i,j} = δ_{ij} for 1 ≤ i, j ≤ k. Since the vectors v_i are nonnegative, they must have pairwise disjoint supports. The same holds for the vectors u_1, ..., u_k. Since (M_k)_{i,j} = 1/k > 0 for 1 ≤ i ≤ k and k+1 ≤ j ≤ 2k, the support of v_i overlaps with the support of u_j for each i and j. This means that for each i ∈ [k], the size of the support of the vector v_i is at least k. As the supports of v_1, ..., v_k are pairwise disjoint, this is only possible if d ≥ k^2. □
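The nonnegative factorization used in the first part of the proof can be checked directly; the following is a small sketch of ours (not from the paper, assuming numpy is available).

```python
import numpy as np

k = 4
e = np.eye(k)
ones = np.ones(k)
v = [np.kron(e[i], ones) / np.sqrt(k) for i in range(k)]   # v_i = (1/sqrt(k)) e_i ⊗ 1
u = [np.kron(ones, e[i]) / np.sqrt(k) for i in range(k)]   # u_i = (1/sqrt(k)) 1 ⊗ e_i
vecs = v + u
G = np.array([[a @ b for b in vecs] for a in vecs])        # Gram matrix of the 2k vectors
M = np.block([[np.eye(k), np.ones((k, k)) / k],
              [np.ones((k, k)) / k, np.eye(k)]])
print(np.allclose(G, M))  # True: a nonnegative Gram representation of M_k in dimension k^2
```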

Proposition 2.3. For each k we have k ≤ cpsd-rank(Mk) ≤ 2k, with equality cpsd-rank(Mk) = k if and only if there exists a real Hadamard matrix of order k.

Proof. Since cpsd-rank(I_k) = k, the lower bound cpsd-rank(M_k) ≥ k follows because I_k is a principal submatrix of M_k. We show the upper bound by giving an explicit factorization using real positive semidefinite 2k × 2k matrices. For this we first give a factorization with Hermitian positive semidefinite k × k matrices. Let H_k be a complex Hadamard matrix of size k; that is, its entries are complex numbers with modulus one and its columns are pairwise orthogonal. Such a matrix exists for every k; take for example (H_k)_{i,j} = e^{2πi(i−1)(j−1)/k} for i, j ∈ [k]. We define the factors

X_i = e_i e_i^T  and  Y_i = u_i u_i^*/k   for i ∈ [k],

where e_i is the ith standard basis vector of R^k and u_i is the ith column of H_k. By direct computation it follows that M_k = Gram(X_1, ..., X_k, Y_1, ..., Y_k). As observed earlier, from the Hermitian positive semidefinite matrices X_i, Y_i we can construct real positive semidefinite matrices of size 2k forming a Gram factorization of M_k. This shows cpsd-rank(M_k) ≤ 2k.

We now show that cpsd-rank(M_k) = k if and only if there exists a real Hadamard matrix of order k. One direction follows directly from the above proof: If a real Hadamard matrix of size k exists, then we can replace H_k by this real matrix and this yields a factorization by real positive semidefinite k × k matrices.

Conversely, assume that cpsd-rank(M_k) = k and let X_1, ..., X_k, Y_1, ..., Y_k ∈ S_+^k be a Gram representation of M_k. We first show there exist two orthonormal bases u_1, ..., u_k and v_1, ..., v_k of R^k such that X_i = u_i u_i^T and Y_i = v_i v_i^T. For this we observe that I = Gram(X_1, ..., X_k), which implies X_i ≠ 0 and X_i X_j = 0 for all i ≠ j. Hence, the range of X_j is contained in the kernel of X_i for all i ≠ j. This means that the range of X_i is orthogonal to the range of X_j. We now have Σ_{i=1}^k dim(range(X_i)) ≤ k and dim(range(X_i)) ≥ 1 for all i. From this it follows that rank(X_i) = 1 for all i ∈ [k]. This means there exist u_1, ..., u_k ∈ R^k such that X_i = u_i u_i^T for all i. From I = Gram(X_1, ..., X_k) it follows that the vectors u_1, ..., u_k form an orthonormal basis of R^k. The same argument can be made for the matrices Y_i, thus Y_i = v_i v_i^T and the vectors v_1, ..., v_k form an orthonormal basis of R^k. Up to an orthogonal transformation we may assume that the first basis is the standard basis; that is, u_i = e_i for i ∈ [k]. We then obtain

1/k = (M_k)_{i,j+k} = ⟨e_i, v_j⟩^2 = ((v_j)_i)^2   for i, j ∈ [k],

hence (v_j)_i = ±1/√k. Therefore, the k × k matrix whose jth column is √k v_j is a real Hadamard matrix. □
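The complex Hadamard factorization in the first part of the proof is easy to verify numerically. The following sketch of ours (not from the paper, assuming numpy is available) builds the factors from the discrete Fourier matrix and checks that their Gram matrix is M_k.

```python
import numpy as np

def m_k(k):
    I, J = np.eye(k), np.ones((k, k))
    return np.block([[I, J / k], [J / k, I]])

def dft_factors(k):
    # Hermitian psd factors X_i = e_i e_i^T and Y_i = u_i u_i^* / k,
    # where u_i is the ith column of the discrete Fourier (complex Hadamard) matrix
    H = np.exp(2j * np.pi * np.outer(np.arange(k), np.arange(k)) / k)
    X = [np.outer(np.eye(k)[i], np.eye(k)[i]).astype(complex) for i in range(k)]
    Y = [np.outer(H[:, i], H[:, i].conj()) / k for i in range(k)]
    return X + Y

k = 5
F = dft_factors(k)
G = np.array([[np.trace(A @ B).real for B in F] for A in F])
print(np.allclose(G, m_k(k)))  # True: the 2k factors form a Gram representation of M_k
```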

The above proposition leaves open the exact determination of cpsd-rank(Mk) for the cases where a real Hadamard matrix of order k does not exist. Extensive experimentation using the heuristic from Section 2.2 suggests that for k = 3, 5, 6, 7 the cpsd-rank of Mk equals 2k, which leads to the following question:

Question 2.4. Is the completely positive semidefinite rank of M_k equal to 2k if a real Hadamard matrix of size k × k does not exist?

We also used the heuristic from Section 2.2 to check numerically that the aforementioned matrices from [3], which have completely positive rank greater than ⌊n^2/4⌋, have small (smaller than n) cpsd-rank. In fact, in our numerical experiments we never found a completely positive n × n matrix for which we could not find a factorization in dimension n, which leads to the following question:

Question 2.5. Is the completely positive semidefinite rank of a completely positive n × n matrix upper bounded by n?

2.2. A heuristic for finding Gram representations. In this section we give an adaptation to the symmetric setting of the seesaw method from [24], which is used to find good quantum strategies for nonlocal games. Given a matrix M ∈ CS_+^n with cpsd-rank(M) ≤ d, we give a heuristic to find a Gram representation of M by positive semidefinite d × d matrices. Although this heuristic is not guaranteed to converge to a factorization of M, for small n and d (say, n, d ≤ 10) it works well in practice by restarting the algorithm several times. The following algorithm seeks to minimize the function

E(X_1, ..., X_n) = ‖Gram(X_1, ..., X_n) − M‖_∞.

Algorithm 2.6. Initialize the algorithm by setting k = 1 and generating random matrices X_1^0, ..., X_n^0 ∈ S_+^d that satisfy ⟨X_i^0, X_i^0⟩ = M_{i,i} for all i ∈ [n]. Iterate the following steps:

(1) Let (δ, Y_1, ..., Y_n) be a (near) optimal solution of the semidefinite program

    min{ δ : δ ∈ R_+, Y_1, ..., Y_n ∈ S_+^d, |⟨X_i^{k−1}, Y_j⟩ − M_{i,j}| ≤ δ for i, j ∈ [n] }.

(2) Perform a line search to find the scalar r ∈ [0, 1] minimizing

    E((1 − r)X_1^{k−1} + rY_1, ..., (1 − r)X_n^{k−1} + rY_n),

    and set X_i^k = (1 − r)X_i^{k−1} + rY_i for each i ∈ [n].

(3) If E(X_1^k, ..., X_n^k) is not small enough, increase k by one and go to step (1). Otherwise, return the matrices X_1^k, ..., X_n^k.
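The following is a minimal implementation sketch of Algorithm 2.6 (ours, not the authors'). It assumes numpy and cvxpy are available, initializes with random psd matrices of the right norm, and replaces the exact line search in step (2) by a simple grid search.

```python
import numpy as np
import cvxpy as cp

def gram(mats):
    return np.array([[np.trace(A @ B) for B in mats] for A in mats])

def seesaw(M, d, iters=50, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    # random psd starting points X_i^0 with <X_i^0, X_i^0> = M_ii
    X = []
    for i in range(n):
        A = rng.standard_normal((d, d))
        P = A @ A.T
        X.append(P * np.sqrt(M[i, i] / np.trace(P @ P)))
    for _ in range(iters):
        # step (1): solve the SDP in the Y variables for the fixed factors X
        Y = [cp.Variable((d, d), PSD=True) for _ in range(n)]
        delta = cp.Variable(nonneg=True)
        cons = [cp.abs(cp.trace(X[i] @ Y[j]) - M[i, j]) <= delta
                for i in range(n) for j in range(n)]
        cp.Problem(cp.Minimize(delta), cons).solve()
        Yv = [Yi.value for Yi in Y]
        # step (2): grid search over r in [0, 1] for the best convex combination
        rs = np.linspace(0, 1, 101)
        best = min(rs, key=lambda r: np.abs(
            gram([(1 - r) * Xi + r * Yi for Xi, Yi in zip(X, Yv)]) - M).max())
        X = [(1 - best) * Xi + best * Yi for Xi, Yi in zip(X, Yv)]
        # step (3): stop once the factorization error is small enough
        if np.abs(gram(X) - M).max() < tol:
            break
    return X
```

In practice one restarts with several random seeds, as suggested above, and keeps the run with the smallest final error.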

3. The set of bipartite correlations

In this section we define the set Cor(m, n) of bipartite correlations and we discuss properties of the extreme points of Cor(m, n), which will play a crucial role in the construction of CS_+-matrices with large cpsd-rank. In particular, we give a characterization of the extreme points of Cor(m, n) in terms of extreme points of the related set E_{m+n} of correlation matrices. We use it to give a simple construction of a class of extreme points of Cor(m, n) with rank r, when m = n = (r+1 choose 2). We also revisit conditions for extreme points introduced by Tsirelson [21] and point out links with universal rigidity. Based on these we can construct extreme points of Cor(m, n) with rank r when m = r and n = (r choose 2) + 1, which are used to prove our main result (Theorem 1.1).

A matrix C ∈ R^{m×n} is called a bipartite correlation matrix if there exist real unit vectors x_1, ..., x_m, y_1, ..., y_n ∈ R^d (for some d ≥ 1) such that C_{s,t} = ⟨x_s, y_t⟩ for all s ∈ [m] and t ∈ [n]; throughout we write S = [m] and T = [n] for these two index sets. Following Tsirelson [21], any such system of real unit vectors is called a C-system. We let Cor(m, n) denote the set of all m × n bipartite correlation matrices. The elliptope E_n is defined as

E_n = { E ∈ S_+^n : E_{ii} = 1 for i = 1, ..., n };

its elements are the correlation matrices, which can alternatively be defined as all matrices of the form (⟨z_i, z_j⟩)_{i,j=1}^n for some real unit vectors z_1, ..., z_n ∈ R^d (d ≥ 1). We have the surjective projection

(2)   π : E_{m+n} → Cor(m, n),   [ Q  C ; C^T  R ] ↦ C.

Hence, Cor(m, n) is a projection of the elliptope E_{m+n} and therefore a convex set. Given C ∈ Cor(m, n), any matrix E ∈ E_{m+n} such that π(E) = C is called an extension of C to the elliptope, and we let fib(C) denote the fiber (the set of extensions) of C.

Theorem 3.3 below characterizes extreme points of Cor(m, n) in terms of extreme points of E_{m+n}. It is based on two intermediary results. The first result relates extreme points C ∈ Cor(m, n) to properties of their set of extensions fib(C). It is shown in [7] (in a more general setting); we omit the (easy) proof.

Lemma 3.1 ([7, Lemma 2.4]). Let C ∈ Cor(m, n). Then C is an extreme point of Cor(m, n) if and only if the set fib(C) is a face of E_{m+n}. Moreover, if C is an extreme point of Cor(m, n), then every extreme point of fib(C) is an extreme point of E_{m+n}.

The second result (from Tsirelson [21]) shows that every extreme point C of Cor(m, n) has a unique extension E in E_{m+n}; we give a proof for completeness.

Lemma 3.2 ([21]). Assume C is an extreme point of Cor(m, n).

(i) If x1, . . . , xm, y1, . . . , yn is a C-system, then

Span{x1, . . . , xm} = Span{y1, . . . , yn}.

(ii) The matrix C has a unique extension to a matrix E ∈ E_{m+n}, and there exists a C-system x_1, ..., x_m, y_1, ..., y_n ∈ R^r, with r = rank(C), such that

E = Gram(x1, . . . , xm, y1, . . . , yn).

Proof. We will use the following observation: each matrix C = (⟨a_s, b_t⟩)_{s∈[m], t∈[n]}, where the a_s, b_t are vectors with ‖a_s‖, ‖b_t‖ ≤ 1, belongs to Cor(m, n), since for all (s, t) ∈ [m] × [n] it satisfies

C_{s,t} = ⟨ (a_s, √(1 − ‖a_s‖^2), 0), (b_t, 0, √(1 − ‖b_t‖^2)) ⟩,

where both padded vectors are unit vectors.

(i) Set V = Span{x_1, ..., x_m} and assume y_k ∉ V for some k ∈ [n]. Let w denote the orthogonal projection of y_k onto V. Then ‖w‖ < 1 and one can choose a nonzero vector u ∈ V such that ‖w ± u‖ ≤ 1. Define the matrices C^± ∈ R^{m×n} by

C^±_{s,t} = ⟨x_s, w ± u⟩ if t = k,   and   C^±_{s,t} = ⟨x_s, y_t⟩ if t ≠ k.

Then C^± ∈ Cor(m, n) (by the above observation) and C = (C^+ + C^−)/2. As C is an extreme point of Cor(m, n), one must have C = C^+ = C^−. Hence u is orthogonal to each x_s and thus u = 0, a contradiction. This shows the inclusion Span{y_1, ..., y_n} ⊆ Span{x_1, ..., x_m}, and the reverse one follows in the same way.

(ii) Assume {x'_s, y'_t} and {x''_s, y''_t} are two C-systems. We show ⟨x'_r, x'_s⟩ = ⟨x''_r, x''_s⟩ for all r, s ∈ S and ⟨y'_t, y'_u⟩ = ⟨y''_t, y''_u⟩ for all t, u ∈ T. For this define the vectors x_s = (x'_s ⊕ x''_s)/√2 and y_t = (y'_t ⊕ y''_t)/√2, which again form a C-system. Using (i), for any s ∈ S there exist scalars λ^s_t such that x_s = Σ_{t∈T} λ^s_t y_t, and thus x'_s = Σ_{t∈T} λ^s_t y'_t and x''_s = Σ_{t∈T} λ^s_t y''_t. This shows

⟨x'_r, x'_s⟩ = Σ_{t∈T} λ^r_t ⟨y'_t, x'_s⟩ = Σ_{t∈T} λ^r_t C_{s,t} = Σ_{t∈T} λ^r_t ⟨y''_t, x''_s⟩ = ⟨x''_r, x''_s⟩

for all r, s ∈ S. The analogous argument shows ⟨y'_t, y'_u⟩ = ⟨y''_t, y''_u⟩ for all t, u ∈ T. This shows that C has a unique extension to a matrix E ∈ E_{m+n}.

Finally, we show that rank(E) = rank(C). Say E is the Gram matrix of x_1, ..., x_m, y_1, ..., y_n. In view of (i), rank(E) = rank{x_1, ..., x_m} and thus it suffices to show that rank{x_1, ..., x_m} ≤ rank(C). For this note that if {x_s : s ∈ I} (for some I ⊆ S) is linearly independent, then the corresponding rows of C are linearly independent, since Σ_{s∈I} λ_s ⟨x_s, y_t⟩ = 0 (for all t ∈ T) implies Σ_{s∈I} λ_s x_s = 0 (using (i)) and thus λ_s = 0 for all s. □

Theorem 3.3. A matrix C is an extreme point of Cor(m, n) if and only if C has a unique extension to a matrix E ∈ Em+n and E is an extreme point of Em+n.

Proof. Direct application of Lemma 3.1 and Lemma 3.2 (ii). 

We can use the following lemma to construct explicit examples of extreme points of Cor(m, n) for the case m = n.

Lemma 3.4. Each extreme point of En is an extreme point of Cor(n, n).

Proof. Let C be an extreme point of E_n. Define the matrix

E = [ C  C ; C  C ].

Then E ∈ E_{2n} is an extension of C. In view of Theorem 3.3 it suffices to show that E is the unique extension of C and that E is an extreme point of E_{2n}. With e_1, ..., e_n denoting the standard unit vectors in R^n, observe that the vectors e_i ⊕ −e_i (i ∈ [n]) lie in the kernel of any matrix E' ∈ fib(C), since E' and C have an all-ones diagonal. This implies that fib(C) = {E}.

We now show that E is an extreme point of E_{2n}. For this let E_1, E_2 ∈ E_{2n} and 0 < λ < 1 such that E = λE_1 + (1 − λ)E_2. As the kernel of E is the intersection of the kernels of E_1 and E_2, the vectors e_i ⊕ −e_i belong to the kernel of E_1 and E_2, and thus

E_1 = [ C_1  C_1 ; C_1  C_1 ]   and   E_2 = [ C_2  C_2 ; C_2  C_2 ]

for some C_1, C_2 ∈ E_n. Hence, C = λC_1 + (1 − λ)C_2, which implies C = C_1 = C_2, since C is an extreme point of E_n. Thus E = E_1 = E_2, which completes the proof. □

The above lemma shows how to construct extreme points of Cor(n, n) from extreme points of the elliptope E_n. Li and Tam [15] give the following characterization of the extreme points of E_n.

Theorem 3.5 ([15]). Consider a matrix E ∈ E_n with rank r and unit vectors z_1, ..., z_n ∈ R^r such that E = Gram(z_1, ..., z_n). Then E is an extreme point of E_n if and only if

(3)   (r+1 choose 2) = dim(Span{z_1 z_1^T, ..., z_n z_n^T}).

In particular, if E is an extreme point of E_n, then (r+1 choose 2) ≤ n.

Example 3.6 ([15]). For each integer r ≥ 1 there exists an extreme point of E_n of rank r, where n = (r+1 choose 2). For example, let e_1, ..., e_r be the standard basis vectors of R^r and define

E = Gram( e_1, ..., e_r, (e_1 + e_2)/√2, (e_1 + e_3)/√2, ..., (e_{r−1} + e_r)/√2 ).
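A quick numerical check of criterion (3) for this example (a sketch of ours, not from the paper, assuming numpy is available):

```python
import numpy as np
from itertools import combinations

r = 4
e = np.eye(r)
zs = [e[i] for i in range(r)] + [(e[i] + e[j]) / np.sqrt(2)
                                 for i, j in combinations(range(r), 2)]
n = len(zs)                                   # n = binom(r+1, 2)
E = np.array([[zi @ zj for zj in zs] for zi in zs])
outer = np.array([np.outer(z, z).ravel() for z in zs])
print(n == r * (r + 1) // 2)                  # True
print(np.linalg.matrix_rank(outer) == n)      # True: criterion (3) holds, so E is extreme
print(np.linalg.matrix_rank(E) == r)          # True: E has rank r
```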

This matrix E is an extreme point of E_n of rank r. Note that the above example is optimal in the sense that a rank r extreme point of E_n can exist only if n ≥ (r+1 choose 2) (by Theorem 3.5). Combining this with Lemma 3.4 gives a class of extreme points of Cor(m, n) with rank r and m = n = (r+1 choose 2).

If C is an extreme point of Cor(m, n) with rank r, then by Theorems 3.3 and 3.5 we have (r+1 choose 2) ≤ m + n. Tsirelson [21] claimed the stronger bound (r+1 choose 2) ≤ m + n − 1 (see Corollary 3.10 below). In what follows we will construct extreme points of Cor(m, n) with rank r and m = r, n = (r choose 2) + 1, thus with optimal parameters with respect to Tsirelson's upper bound. For this we need to investigate in more detail the unique extension property of extreme points of Cor(m, n).

Let C ∈ Cor(m, n) with rank r, let {x_s}, {y_t} be a C-system in R^r, and let E = Gram(x_1, ..., x_m, y_1, ..., y_n) ∈ E_{m+n}. In view of Theorem 3.3, if C is an extreme point of Cor(m, n), then E is the unique extension of C in E_{m+n}. This uniqueness property can be rephrased as the requirement that an associated semidefinite program has a unique solution. Namely, consider the following dual pair of semidefinite programs:

(4)   max{ 0 : X ∈ S_+^{S∪T}, X_{k,k} = 1 for k ∈ S ∪ T, X_{s,t} = C_{s,t} for s ∈ S, t ∈ T },

(5)   min{ Σ_{s∈S} λ_s + Σ_{t∈T} µ_t + 2 Σ_{s∈S, t∈T} W_{s,t} C_{s,t} : Ω = [ Diag(λ)  W ; W^T  Diag(µ) ] ∈ S_+^{S∪T} }.

The feasible region of problem (4) consists of all possible extensions of C in E_{m+n}, and the feasible region of (5) consists of the positive semidefinite matrices Ω whose support (consisting of all off-diagonal pairs (i, j) with Ω_{i,j} ≠ 0) is contained in the complete bipartite graph K_{m,n} corresponding to the bipartition S ∪ T. Moreover, the optimal values of both problems are equal to 0, and for any primal feasible (optimal) X and dual optimal Ω the equality XΩ = 0 holds, implying rank(X) + rank(Ω) ≤ m + n. The next result, shown in [14] in the more general context of universal rigidity, shows that the equality rank(X) + rank(Ω) = m + n (also known as strict complementarity) implies that X is the unique feasible solution of program (4), and thus that C has a unique extension in E_{m+n}.

Theorem 3.7 ([14, Theorem 3.2]). Let C ∈ Cor(m, n) and let {x_s}, {y_t} be a C-system spanning R^r. Assume E = Gram(x_1, ..., x_m, y_1, ..., y_n) is an extreme point of E_{m+n}. If there exists an optimal solution Ω of program (5) with rank(Ω) = m + n − r, then E is the only extension of C in E_{m+n}.

In addition, one can relate uniqueness of an extension of C in the elliptope to the existence of a quadric separating the two point sets {x_s} and {y_t} (Theorem 3.9 below). Roughly speaking, such a quadric allows us to construct a suitable optimal dual solution Ω and to apply Theorem 3.7. This property was stated by Tsirelson [21, 22], however without proof. Interestingly, an analogous result was shown recently by Connelly and Gortler [5] in the setting of universal rigidity. We will give a proof sketch of Theorem 3.9, based on Theorem 3.7, arguments in [5], and the following basic property of semidefinite programs (which can be seen as an analog of Farkas' lemma for linear programs).

Theorem 3.8. Let A_1, ..., A_m ∈ S^n and b ∈ R^m, and assume that there exists a matrix X ∈ S^n such that ⟨A_j, X⟩ = b_j for all j ∈ [m]. Then exactly one of the following two alternatives holds:

(i) There exists a matrix X ≻ 0 such that ⟨A_j, X⟩ = b_j for all j ∈ [m].
(ii) There exists y ∈ R^m such that Ω = Σ_{j=1}^m y_j A_j ⪰ 0, Ω ≠ 0, and ΩX = 0.

Theorem 3.9 ([21, 22]). Let C ∈ Cor(m, n), let x_1, ..., x_m, y_1, ..., y_n ∈ R^r be a C-system spanning R^r, and let E = Gram(x_1, ..., x_m, y_1, ..., y_n) ∈ E_{m+n}.

(i) If C is an extreme point of Cor(m, n), then there exist nonnegative scalars λ_1, ..., λ_m, µ_1, ..., µ_n, not all equal to zero, such that

(6)   Σ_{s=1}^m λ_s x_s x_s^T = Σ_{t=1}^n µ_t y_t y_t^T.

(ii) If E is an extreme point of Em+n and there exist strictly positive scalars λ1, . . . , λm, µ1, . . . , µn for which relation (6) holds, then C is an extreme point of Cor(m, n).

Proof. (i) By assumption, C is an extreme point of Cor(m, n), so by Lemma 3.2 (ii) E is the only feasible solution of the program (4). As E has rank r < m + n, it follows that the program (4) does not have a positive definite feasible solution. Applying Theorem 3.8, it follows that there exists a nonzero matrix Ω that is feasible for the dual program (5) and satisfies ΩE = 0. This gives:

λ_s x_s + Σ_{t∈T} W_{s,t} y_t = 0  (s ∈ S),   µ_t y_t + Σ_{s∈S} W_{s,t} x_s = 0  (t ∈ T).

Since Ω ⪰ 0, the scalars λ_s, µ_t are nonnegative. We claim that they satisfy (6). We multiply the left relation by x_s^T and the right one by y_t^T to obtain

λ_s x_s x_s^T + Σ_{t∈T} W_{s,t} y_t x_s^T = 0  (s ∈ S),   µ_t y_t y_t^T + Σ_{s∈S} W_{s,t} x_s y_t^T = 0  (t ∈ T).

Summing the left relation over s ∈ S and the right one over t ∈ T we get:

Σ_{s∈S} λ_s x_s x_s^T = − Σ_{s∈S, t∈T} W_{s,t} y_t x_s^T = Σ_{t∈T} µ_t y_t y_t^T,

and thus (6) holds.

(ii) Assume that E is an extreme point of E_{m+n} and that there exist strictly positive scalars λ_1, ..., λ_m, µ_1, ..., µ_n for which (6) holds. The key idea is to construct a matrix Ω that is optimal for the program (5) and has rank m + n − r, since then we can apply Theorem 3.7 and conclude that E is the only extension of C in E_{m+n}. The construction of such a matrix Ω is analogous to the construction given in [5] for frameworks (see Theorem 4.3 and its proof), so we omit the details. □

Corollary 3.10 ([22]). If C is an extreme point of Cor(m, n) with rank(C) = r, then

(7)   (r+1 choose 2) ≤ m + n − 1.

Proof. Let x_1, ..., x_m, y_1, ..., y_n ∈ R^r be a C-system spanning R^r and E their Gram matrix. As E is an extreme point of E_{m+n}, it follows from relation (3) that S^r is spanned by the m + n matrices x_i x_i^T, y_j y_j^T (i ∈ S, j ∈ T). Combining this with the identity (6), this implies that S^r is spanned by a set of m + n − 1 matrices and thus its dimension (r+1 choose 2) is at most m + n − 1. □

We conclude with constructing a new family of extreme points C of Cor(m, n) with rank(C) = r, m = r, and n = (r choose 2) + 1, thus showing that inequality (7) is tight. Such a family of bipartite correlation matrices can also be inferred from [23], where the correlation matrices are obtained through analytical methods as optimal solutions of linear optimization problems over Cor(m, n). Instead, we use the sufficient conditions for extremality of bipartite correlations given above.

We construct matrices E, Ω ∈ S^{r+n} that satisfy the conditions of Theorem 3.7; that is, E is an extreme point of E_{r+n}, Ω is positive semidefinite with support contained in the complete bipartite graph K_{r,n}, rank(E) = r, rank(Ω) = n, and EΩ = 0. Our construction of Ω is inspired by [11], which studies the maximum possible rank of extremal positive semidefinite matrices with a complete bipartite support.

Consider the matrix B ∈ R^{r×n} whose last column is the all-ones vector and whose remaining n − 1 = (r choose 2) columns are indexed by the pairs (i, j) with 1 ≤ i < j ≤ r, with entries B_{i,(i,j)} = 1, B_{j,(i,j)} = −1 for 1 ≤ i < j ≤ r, and all other entries 0. Note that BB^T = r I_r. Then define the following matrices:

Ω_0 = [ n I_r  √n B ; √n B^T  r I_n ],   E_0 = [ I_r  −(√n/r) B ; −(√n/r) B^T  (n/r^2) B^T B ]   ∈ S^{r+n}.

One can easily verify that Ω_0, E_0 ⪰ 0, Ω_0 E_0 = 0, rank(Ω_0) = n, and rank(E_0) = r. It suffices now to modify the matrix E_0 in order to get a matrix E with an all-ones diagonal. For this, consider the diagonal matrix

D = I_r ⊕ (r/√(2n)) I_{n−1} ⊕ √(r/n) I_1

and set E = D E_0 D and Ω = D^{−1} Ω_0 D^{−1}. Then E has an all-ones diagonal; it is in fact the Gram matrix of the vectors e_1, ..., e_r, (e_i − e_j)/√2 (for 1 ≤ i < j ≤ r), and (e_1 + ... + e_r)/√r, and thus E is an extreme point of E_{r+n}. Moreover, ΩE = 0, rank(E) = r, and rank(Ω) = n. Therefore the conditions of Theorem 3.7 are fulfilled and we can conclude that the matrix C = π(E) is an extreme point of Cor(r, n). So we have shown the following result.

Lemma 3.11. For each integer r ≥ 1 and n = (r choose 2) + 1, there exists a matrix C ∈ Cor(r, n) that is an extreme point of Cor(r, n) and has rank r. We can take C to be the matrix with columns (e_i − e_j)/√2 (for 1 ≤ i < j ≤ r) and (e_1 + ... + e_r)/√r, where e_1, ..., e_r ∈ R^r is the standard basis.
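The construction above is easy to check numerically. The following sketch of ours (not from the paper, assuming numpy is available) builds B, Ω_0, E_0, and D for a small value of r and verifies the properties claimed in the proof of Lemma 3.11.

```python
import numpy as np
from itertools import combinations

r = 4
pairs = list(combinations(range(r), 2))
n = len(pairs) + 1                       # n = binom(r, 2) + 1

# B has one column per pair (i, j) and a final all-ones column; B B^T = r I_r
B = np.zeros((r, n))
for c, (i, j) in enumerate(pairs):
    B[i, c], B[j, c] = 1.0, -1.0
B[:, -1] = 1.0
assert np.allclose(B @ B.T, r * np.eye(r))

Omega0 = np.block([[n * np.eye(r),     np.sqrt(n) * B],
                   [np.sqrt(n) * B.T,  r * np.eye(n)]])
E0 = np.block([[np.eye(r),                 -np.sqrt(n) / r * B],
               [-np.sqrt(n) / r * B.T,     n / r**2 * B.T @ B]])

D = np.diag(np.concatenate([np.ones(r),
                            r / np.sqrt(2 * n) * np.ones(n - 1),
                            [np.sqrt(r / n)]]))
E = D @ E0 @ D
Omega = np.linalg.inv(D) @ Omega0 @ np.linalg.inv(D)

print(np.allclose(np.diag(E), 1))                               # all-ones diagonal
print(np.allclose(Omega @ E, 0))                                # complementarity
print(np.linalg.matrix_rank(E), np.linalg.matrix_rank(Omega))   # r and n
print(np.all(np.linalg.eigvalsh(Omega) > -1e-9))                # Omega is psd
```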

4. Lower bounding the size of operator representations

We start with recalling, in Theorem 4.1, some equivalent characterizations for bipartite correlations in terms of operator representations, due to Tsirelson. For this consider a matrix C ∈ R^{m×n}. We say that C admits a tensor operator representation if there exist an integer d (the local dimension), a unit vector ψ ∈ C^d ⊗ C^d, and Hermitian d × d matrices {X_s}_{s=1}^m and {Y_t}_{t=1}^n with spectra contained in [−1, 1], such that C_{s,t} = ψ*(X_s ⊗ Y_t)ψ for all s and t.

Moreover, we say that C admits a (finite dimensional) commuting operator representation if there exist an integer d, a Hermitian positive semidefinite d × d matrix W with trace(W) = 1, and Hermitian d × d matrices {X_s} and {Y_t} with spectra contained in [−1, 1], such that X_s Y_t = Y_t X_s and C_{s,t} = Tr(X_s Y_t W) for all s and t. A commuting operator representation is said to be pure if rank(W) = 1.

Existence of these various operator representations relies on using Clifford algebras. For an integer r ≥ 1 the Clifford algebra C(r) of order r can be defined as the universal C*-algebra with Hermitian generators a_1, ..., a_r and relations

(8)   a_i^2 = 1 and a_i a_j + a_j a_i = 0 for i ≠ j.

We call these relations the Clifford relations. To represent the elements of C(r) by matrices we can use the following map, which is a ∗-isomorphism onto its image:

(9)   ϕ_r : C(r) → C^{2^{⌈r/2⌉} × 2^{⌈r/2⌉}},   ϕ_r(a_i) = Z^{⊗(i−1)/2} ⊗ X ⊗ I^{⊗(⌈r/2⌉−(i+1)/2)}   for i odd,
                                                 ϕ_r(a_i) = Z^{⊗(i−2)/2} ⊗ Y ⊗ I^{⊗(⌈r/2⌉−i/2)}     for i even.

Here we use the Pauli matrices

X = [ 0 1 ; 1 0 ],   Y = [ 0 −i ; i 0 ],   Z = [ 1 0 ; 0 −1 ].
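As an illustration of the map (9), a small sketch of ours (not from the paper, assuming numpy is available) builds ϕ_r(a_1), ..., ϕ_r(a_r) as Kronecker products of Pauli matrices and verifies that they satisfy the Clifford relations (8).

```python
import numpy as np
from functools import reduce

# Pauli matrices and the 2 x 2 identity
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def clifford_generators(r):
    """Return phi_r(a_1), ..., phi_r(a_r) as matrices of size 2^ceil(r/2)."""
    m = (r + 1) // 2  # number of tensor factors, ceil(r/2)
    gens = []
    for i in range(1, r + 1):
        if i % 2 == 1:  # i odd: (i-1)/2 copies of Z, then X, then identities
            factors = [Z] * ((i - 1) // 2) + [X] + [I2] * (m - (i + 1) // 2)
        else:           # i even: (i-2)/2 copies of Z, then Y, then identities
            factors = [Z] * ((i - 2) // 2) + [Y] + [I2] * (m - i // 2)
        gens.append(reduce(np.kron, factors))
    return gens

r = 5
A = clifford_generators(r)
d = 2 ** ((r + 1) // 2)
ok = all(np.allclose(A[i] @ A[j] + A[j] @ A[i], 2 * (i == j) * np.eye(d))
         for i in range(r) for j in range(r))
print(ok)  # True: the Clifford relations (8) hold
```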

For even r the representation ϕ_r is irreducible and thus C(r) is isomorphic to the full matrix algebra with matrix size 2^{⌈r/2⌉}. For odd r the representation ϕ_r decomposes as a direct sum of two irreducible representations, each of dimension 2^{⌊r/2⌋}. Therefore, if X_1, ..., X_r is a set of Hermitian matrices satisfying the relations X_i^2 = I and X_i X_j + X_j X_i = 0 for i ≠ j, then they must have size at least 2^{⌊r/2⌋}.

Theorem 4.1 ([21]). Let C ∈ R^{m×n}. The following statements are equivalent:
(1) C is a bipartite correlation.
(2) C admits a tensor operator representation.
(3) C admits a pure commuting operator representation.
(4) C admits a commuting operator representation.

Proof. (1) ⇒ (2) Let C ∈ Cor(m, n). That means there exist unit vectors {x_s} and {y_t} in R^r, where r = rank(C), such that C_{s,t} = ⟨x_s, y_t⟩ for all s and t. Set d = 2^{⌈r/2⌉} and define

X_s = Σ_{i=1}^r (x_s)_i ϕ_r(a_i),   Y_t = Σ_{i=1}^r (y_t)_i ϕ_r(a_i)^T,

where ϕ_r is the representation of C(r) as defined in (9). With ψ = (1/√d) Σ_{i=1}^d e_i ⊗ e_i one can derive the following identity (see for example [2]):

C_{s,t} = ⟨x_s, y_t⟩ = Tr(X_s Y_t^T)/d = ψ*(X_s ⊗ Y_t)ψ   for all s ∈ S, t ∈ T.

The eigenvalues of the matrices ϕ_r(a_1), ..., ϕ_r(a_r) lie in {−1, 1}, and the Clifford relations (8) can be used to derive that the eigenvalues of X_s and Y_t also lie in {−1, 1}. Thus, ({X_s}, {Y_t}, ψ) is a tensor operator representation of C.

(2) ⇒ (3) If ({X_s}, {Y_t}, ψ) is a tensor operator representation of C, then the operators X_s ⊗ I and I ⊗ Y_t commute, and by using the identity

ψ*(X_s ⊗ Y_t)ψ = Tr((X_s ⊗ I)(I ⊗ Y_t)ψψ*)

we see that ({X_s ⊗ I}, {I ⊗ Y_t}, ψψ*) is a pure commuting operator representation.

(3) ⇒ (4) This is immediate.

(4) ⇒ (1) Suppose ({X_s}, {Y_t}, W) is a commuting operator representation of C. Since W is positive semidefinite and has trace 1, there exist nonnegative scalars

λ_i and orthonormal vectors ψ_i ∈ C^d such that W = Σ_i λ_i ψ_i ψ_i^* and Σ_i λ_i = 1. Then,

C_{s,t} = Tr(X_s Y_t W) = Σ_i λ_i Tr(X_s Y_t ψ_i ψ_i^*) = Σ_i λ_i ψ_i^* X_s Y_t ψ_i.

So, with

x_s = ⊕_i √λ_i ( Re(X_s ψ_i), Im(X_s ψ_i) )   and   y_t = ⊕_i √λ_i ( Re(Y_t ψ_i), Im(Y_t ψ_i) ),

we have C_{s,t} = ⟨x_s, y_t⟩ and ‖x_s‖, ‖y_t‖ ≤ 1, and by using the observation in the proof of Lemma 3.2 we can extend the vectors x_s and y_t to unit vectors. □

0 XsXs0 + Xs0 Xs = 2Qs,s0 I for all s, s ∈ S, 0 YtYt0 + Yt0 Yt = 2Rt,t0 I for all t, t ∈ T. We will use the following theorem from Tsirelson as crucial ingredient. Theorem 4.3 ([21]). If C is an extreme point of Cor(m, n), then any nondegenerate commuting operator representation of C is Clifford. We can now state and prove the main result of this section. Theorem 4.4. Let C be an extreme point of Cor(m, n) and let r = rank(C). Every commuting operator representation of C uses matrices of size at least 2br/2c.

Proof. Let ({Xs}, {Yt},W ) be a commuting operator representation of C where br/2c Xs,Yt and W are matrices of size d, our goal is to show d ≥ 2 . If this rep- resentation is degenerate, then there exists a projection matrix P 6= I such that Pk ∗ P WP = W , XsP = PXs, and YtP = PYt for all s and t. Let P = i=1 vivi be its spectral decomposition, where the vectors v1, . . . , vk are orthonormal, and ∗ ∗ ∗ set U = (v1, . . . , vk). Then, one can verify that ({U XsU}, {U YsU},U WU) is a commuting operator representation of C of smaller dimension. We are proving a lower bound on the dimension, therefore we can assume ({Xs}, {Yt},W ) is a nondegenerate commuting operator representation. By extremality of C we may assume the operator representation is pure. Hence, there is a unit vector ψ such that W = ψψ∗. This gives

∗ Cs,t = Tr(XsYtW ) = ψ XsYtψ = hxs, yti, where     Re(Xsψ) Re(Ytψ) xs = and yt = . Im(Xsψ) Im(Ytψ)

These vectors {xs} and {yt} are unit vectors because C is extreme (see the proof of Lemma 3.2), and therefore, they form a C-system. 14 SANDER GRIBLING, DAVID DE LAAT, AND MONIQUE LAURENT

By Theorem 4.3 the commuting operator representation ({Xs}, {Yt},W ) is Clif- ford. So, there exist matrices Q ∈ Rm×m and R ∈ Rn×n with all-one diagonals such that 0 XsXs0 + Xs0 Xs = 2Qs,s0 I for all s, s ∈ S, 0 YtYt0 + Yt0 Yt = 2Rt,t0 I for all t, t ∈ T. We show that E is an extension of C to the elliptope, where  QC E = . CT R

For this, we have to show Qs,s0 = hxs, xs0 i and Rt,t0 = hyt, yt0 i. Indeed, ∗ ∗  hxs, xs0 i + hxs0 , xsi = Re ψ XsXs0 ψ + ψ Xs0 Xsψ ∗  = Re ψ (XsXs0 + Xs0 Xs)ψ ∗  = Re ψ (2Qs,s0 I)ψ = 2Qs,s0 , and in the same way hyt, yt0 i + hyt0 , yti = 2Rt,t0 . By Theorem 3.3 the matrix E is the unique extension of C to the elliptope. Furthermore, Lemma 3.2 tells us that rank(Q) = rank(R) = rank(C) = r. Pr ∗ Consider the spectral decomposition of Q: Q = i=1 αivivi , where the vectors v1, . . . , vr are orthonormal, and consider the algebra ChA1,...,Ari, where m 1 X A = √ (v ) X for i ∈ [r]. i α i s s i s=1 We have m 1 X AiAj + AjAi = √ ((vi)s(vj)s0 XsXs0 + (vj)s(vi)s0 XsXs0 ) αiαj s,s0=1 m 1 X = √ (vi)s(vj)s0 (XsXs0 + Xs0 Xs) αiαj s,s0=1 m 1 X 2 ∗ = √ (vi)s(vj)s0 2Qs,s0 I = √ vi QvjI = 2δi,jI, αiαj αiαj s,s0=1 which means that ChA1,...,Ari is a representation of the Clifford algebra C(r). br/2c Hence, the matrices A1,...,Ar must have size at least 2 , which implies that br/2c the matrices {Xs} and {Yt} have size d ≥ 2 .  Corollary 4.5. Let C be an extreme point of Cor(m, n) and let r = rank(C). Every √ br/2c tensor operator representation of C has local dimension at least 2 .

Proof. Directly from Theorem 4.4 using Remark 4.2: a tensor operator representation in local dimension d yields a commuting operator representation by matrices of size d^2, so d^2 ≥ 2^{⌊r/2⌋}. □

5. Matrices with high completely positive semidefinite rank

In this section we prove our main result and construct completely positive semidefinite matrices with exponentially large cpsd-rank. In order to do so we are going to use an additional link between bipartite correlations and quantum correlations, combined with the fact that quantum correlations arise as projections of completely positive semidefinite matrices.

We start with recalling the facts that we need about quantum correlations. Let A, B, S, and T be finite sets. A function p: A × B × S × T → [0, 1] is called a quantum correlation, realizable in local dimension d, if there exist a unit vector ψ ∈ C^d ⊗ C^d and Hermitian positive semidefinite d × d matrices X_s^a (s ∈ S, a ∈ A) and Y_t^b (t ∈ T, b ∈ B) satisfying the following two conditions:

(10)   Σ_{a∈A} X_s^a = Σ_{b∈B} Y_t^b = I   for all s ∈ S, t ∈ T,

(11)   p(a, b|s, t) = ψ*(X_s^a ⊗ Y_t^b)ψ   for all a ∈ A, b ∈ B, s ∈ S, t ∈ T.

The next theorem shows a link between quantum correlations and CS_+-matrices. This result can be found in [20, Theorem 3.2] (see also [16]). This link allows us to construct CS_+-matrices with large cpsd-rank by finding quantum correlations that cannot be realized in a small local dimension.

Theorem 5.1. A function p: A × B × S × T → [0, 1] is a quantum correlation that can be realized in local dimension d if and only if there exists a completely positive semidefinite matrix M, with rows and columns indexed by the disjoint union (A × S) ⊔ (B × T), satisfying the following conditions:

(12)   cpsd-rank(M) ≤ d,

(13)   M_{(a,s),(b,t)} = p(a, b|s, t)   for all a ∈ A, b ∈ B, s ∈ S, t ∈ T, and

(14)   Σ_{a∈A, b∈B} M_{(a,s),(b,t)} = Σ_{a,a'∈A} M_{(a,s),(a',s')} = Σ_{b,b'∈B} M_{(b,t),(b',t')} = 1

for all s, s' ∈ S and t, t' ∈ T.

Next we show how to construct from a bipartite correlation C ∈ Cor(m, n) a quantum correlation p, with |A| = |B| = 2 and S = [m], T = [n], having the property that the smallest local dimension in which p can be realized is lower bounded by the smallest local dimension of a tensor representation of C.

Lemma 5.2. Let C ∈ Cor(m, n) and assume that each tensor representation of C requires local dimension at least d. There exists a quantum correlation p defined on {0, 1} × {0, 1} × [m] × [n], satisfying the relations

(15)   C_{s,t} = p(0, 0|s, t) + p(1, 1|s, t) − p(0, 1|s, t) − p(1, 0|s, t)   for s ∈ [m], t ∈ [n],

that can be realized only in local dimension at least d.

Proof. We first show the existence of a quantum correlation that satisfies (15). Let C ∈ Cor(m, n). Theorem 4.1 tells us that there exist an integer d, a unit vector ψ ∈ C^d ⊗ C^d, and Hermitian d × d matrices X_1, ..., X_m, Y_1, ..., Y_n, whose spectra are contained in [−1, 1], such that C_{s,t} = ψ*(X_s ⊗ Y_t)ψ for all s and t. We define the Hermitian positive semidefinite matrices

(16)   X_s^a = (I + (−1)^a X_s)/2,   Y_t^b = (I + (−1)^b Y_t)/2   for a, b ∈ {0, 1}.

Using the fact that X_s^0 + X_s^1 = Y_t^0 + Y_t^1 = I, X_s = X_s^0 − X_s^1, and Y_t = Y_t^0 − Y_t^1, it follows that the function p(a, b|s, t) = ψ*(X_s^a ⊗ Y_t^b)ψ is a quantum correlation that satisfies (15).

Assume that p can be realized in local dimension k; we show that k ≥ d. As p is realizable in dimension k, there exist a unit vector ψ ∈ C^k ⊗ C^k and Hermitian positive semidefinite k × k matrices {X_s^a} and {Y_t^b} such that

Σ_{a∈{0,1}} X_s^a = Σ_{b∈{0,1}} Y_t^b = I   for all s ∈ S, t ∈ T,

and for which we have p(a, b|s, t) = ψ*(X_s^a ⊗ Y_t^b)ψ. Observe that the spectrum of the operators X_s^a and Y_t^b is contained in [0, 1]. We define X_s = X_s^0 − X_s^1 and Y_t = Y_t^0 − Y_t^1. Then, using (15), we can conclude

C_{s,t} = ψ*(X_s ⊗ Y_t)ψ.

This means that C has a tensor operator representation in local dimension k and thus, by the assumption of the lemma, k ≥ d. □

By combining the above with the result of Corollary 4.5 from the previous section we can now prove our main theorem.

Theorem 1.1. For each positive integer r there exists a completely positive semidefinite matrix M of size r^2 + r + 2 with cpsd-rank(M) ≥ √2^{⌊r/2⌋}.

Proof. Let r be a positive integer and set n = (r choose 2) + 1 and d = √2^{⌊r/2⌋}. By Lemma 3.11 there exists an extreme point C of Cor(r, n) with rank r. Corollary 4.5 tells us that every tensor operator representation of C requires local dimension at least d. Let p: A × B × S × T → [0, 1], with |A| = |B| = 2 and |S| = r, |T| = n, be the quantum correlation constructed from C as indicated in Lemma 5.2. Let M be the completely positive semidefinite matrix constructed from p as indicated in Theorem 5.1, with size |A||S| + |B||T| = 2(r + n) = r^2 + r + 2. Then, by Lemma 5.2, p is not realizable in dimension smaller than d and thus, by Theorem 5.1, cpsd-rank(M) ≥ d. □
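As a small numerical illustration of the construction (16), the following sketch of ours (not from the paper, assuming numpy; the matrices and the state are random and purely illustrative) checks that the resulting function p is normalized and satisfies relation (15).

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 4, 3, 3
I = np.eye(d)

def random_contraction(d):
    # random Hermitian matrix with spectrum contained in [-1, 1]
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    H = (A + A.conj().T) / 2
    return H / np.abs(np.linalg.eigvalsh(H)).max()

Xs = [random_contraction(d) for _ in range(m)]
Yt = [random_contraction(d) for _ in range(n)]
psi = rng.standard_normal(d * d) + 1j * rng.standard_normal(d * d)
psi /= np.linalg.norm(psi)

def p(a, b, s, t):
    # the psd operators of (16) and the correlation defined by (11)
    Xa = (I + (-1) ** a * Xs[s]) / 2
    Yb = (I + (-1) ** b * Yt[t]) / 2
    return (psi.conj() @ np.kron(Xa, Yb) @ psi).real

for s in range(m):
    for t in range(n):
        C_st = (psi.conj() @ np.kron(Xs[s], Yt[t]) @ psi).real
        signed = p(0, 0, s, t) + p(1, 1, s, t) - p(0, 1, s, t) - p(1, 0, s, t)
        total = sum(p(a, b, s, t) for a in (0, 1) for b in (0, 1))
        assert np.isclose(signed, C_st) and np.isclose(total, 1.0)
print("relation (15) and normalization hold")
```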

References

[1] A. Berman and N. Shaked-Monderer. Completely Positive Matrices. World Scientific Publishing, 2003.
[2] J. Briët. Grothendieck Inequalities, Nonlocal Games and Optimization. PhD Thesis, 2011.
[3] I.M. Bomze, W. Schachinger, and R. Ullrich. From seven to eleven: completely positive matrices with high cp-rank. Linear Algebra and its Applications, 459:208–221, 2014.
[4] S. Burgdorf, M. Laurent, and T. Piovesan. On the closure of the completely positive semidefinite cone and linear approximations to quantum colorings. In S. Beigi and R. Koenig (Eds.), Proceedings of the 10th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2015), pp. 127–146. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (Leibniz International Proceedings in Informatics, 44), 2015.
[5] R. Connelly and S.J. Gortler. Universal rigidity of complete bipartite graphs. Preprint, 2015.
[6] J.H. Drew, C.R. Johnson, and R. Loewy. Completely positive matrices associated with M-matrices. Linear and Multilinear Algebra, 37:303–310, 1994.
[7] M. E.-Nagy, M. Laurent, and A. Varvitsiotis. Forbidden minor characterizations for low-rank optimal solutions to semidefinite programs over the elliptope. Journal of Combinatorial Theory, Series B, 108:40–80, 2014.
[8] H. Fawzi, J. Gouveia, P.A. Parrilo, R.Z. Robinson, and R.R. Thomas. Positive semidefinite rank. Mathematical Programming, Series B, 153(1):133–177, 2015.
[9] P.E. Frenkel and M. Weiner. On vector configurations that can be realized in the cone of positive semidefinite matrices. Linear Algebra and its Applications, 459:465–474, 2014.
[10] J. Gouveia, R.R. Thomas, and P. Parrilo. Lifts of convex sets and cone factorizations. Mathematics of Operations Research, 38:248–264, 2013.
[11] R. Grone and S. Pierce. Extremal bipartite matrices. Linear Algebra and its Applications, 131:39–50, 1990.
[12] E. de Klerk and D.V. Pasechnik. Approximation of the stability number of a graph via copositive programming. SIAM Journal on Optimization, 12:875–892, 2002.
[13] M. Laurent and T. Piovesan. Conic approach to quantum graph parameters using linear optimization over the completely positive semidefinite cone. SIAM Journal on Optimization, 25(4):2461–2493, 2015.
[14] M. Laurent and A. Varvitsiotis. Positive semidefinite matrix completion, universal rigidity and the Strong Arnold Property. Linear Algebra and its Applications, 452(1):292–317, 2014.
[15] C.-K. Li and B.S. Tam. A Note on Extreme Correlation Matrices. SIAM Journal on Matrix Analysis and Applications, 15(3):903–908, 1995.

[16] L. Mančinska and D.E. Roberson. Note on the correspondence between quantum correlations and the completely positive semidefinite cone, 2014. (Available at quantuminfo.quantumlah.org/memberpages/laura/corr.pdf)
[17] L. Mančinska and D.E. Roberson. Graph homomorphisms for quantum players. Journal of Combinatorial Theory, Series B, to appear, 2016.
[18] A. Prakash, J. Sikora, A. Varvitsiotis, and Z. Wei. Completely positive semidefinite rank. arXiv:1604.07199, 2016.
[19] N. Shaked-Monderer, A. Berman, I.M. Bomze, F. Jarre, and W. Schachinger. New results on the cp-rank and related properties of co(mpletely) positive matrices. Linear and Multilinear Algebra, 63(2):384–396, 2015.
[20] J. Sikora and A. Varvitsiotis. Linear conic formulations for two-party correlations and values of nonlocal games. arXiv:1506.07297, 2015.
[21] B.S. Tsirel'son. Quantum analogues of the Bell inequalities. The case of two spatially separated domains. Journal of Soviet Mathematics, 36(4):557–570, 1987.
[22] B.S. Tsirel'son. Some results and problems on quantum Bell-type inequalities. Hadronic Journal Supplement, 8(4):329–345, 1993.
[23] T. Vértesi and K.F. Pál. Bounding the dimension of bipartite quantum systems. Physical Review A, 79, 2009.
[24] R.F. Werner and M.M. Wolf. Bell inequalities and entanglement. Quantum Information & Computation, 1(3):1–25, 2001.
[25] M. Yannakakis. Expressing combinatorial optimization problems by linear programs. Journal of Computer and System Sciences, 43(3):441–466, 1991.

Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands E-mail address: [email protected]

Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands E-mail address: [email protected]

Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands, Tilburg University, Tilburg, The Netherlands E-mail address: [email protected]