Copositive Plus Matrices

Willemieke van Vliet

Master Thesis in Applied Mathematics October 2011


Summary

In this report we discuss the set of copositive plus matrices and their properties. We examine certain subsets of copositive plus matrices, copositive plus matrices with small dimensions, and the copositive plus cone and its dual. Furthermore, we consider the Copositive Plus Completion Problem, which is the problem of deciding whether a matrix with unspecified entries can be completed to obtain a copositive plus matrix.

The set of copositive plus matrices is important for Lemke's algorithm, which is an algorithm for solving the Linear Complementarity Problem (LCP). The LCP is the problem of deciding whether a solution for a specific system of equations exists and finding such a solution. Lemke's algorithm always terminates in a finite number of steps, but for some problems it terminates with no solution while the problem does have a solution. However, when the data matrix of the LCP is copositive plus, Lemke's algorithm always gives a solution if such a solution exists.

Master Thesis in Applied Mathematics
Author: Willemieke van Vliet
First supervisor: Dr. Mirjam E. Dür
Second supervisor: Prof. dr. Harry L. Trentelman
Date: October 2011

Johann Bernoulli Institute of Mathematics and Computer Science
P.O. Box 407
9700 AK Groningen
The Netherlands

Contents

1 Introduction
  1.1 Structure
  1.2 Notation

2 Copositive Plus Matrices and their Properties
  2.1 The Class of Copositive Matrices
  2.2 Properties of Copositive Matrices
  2.3 Properties of Copositive Plus Matrices
  2.4 Subsets
  2.5 Small Dimensions
  2.6 The Copositive Plus Cone and its Dual Cone
    2.6.1 The Copositive Plus Cone
    2.6.2 The Dual Copositive Plus Cone
  2.7 Copositive Plus of Order r
  2.8 Copositive Plus Matrices with −1, 0, 1 Entries

3 The Copositive Plus Completion Problem
  3.1 Unspecified Non-diagonal Elements
  3.2 Unspecified Diagonal Entries

4 Lemke's Algorithm
  4.1 The Linear Complementarity Problem
  4.2 Lemke's Algorithm
  4.3 Termination and Correctness
    4.3.1 Termination for Nondegenerate Problems
    4.3.2 Termination for Degenerate Problems
    4.3.3 Conditions under which Lemke's Algorithm is Correct
  4.4 Applications in Linear and Quadratic Programming
    4.4.1 Linear Programming
    4.4.2 Quadratic Programming
  4.5 Applications in Game Theory
    4.5.1 Two Person Games
    4.5.2 Polymatrix Games
  4.6 An Application in Economics

Nomenclature


Index

Bibliography

Chapter 1

Introduction

1.1 Structure

In 1968 Cottle and Dantzig proposed the Linear Complementarity Problem (LCP) [2]. The LCP is the problem of deciding whether a solution for a specific system of equations exists. An algorithm for solving the LCP is Lemke's algorithm, which is also called the complementary pivot algorithm. It was proposed by Lemke in 1965 [12] for finding equilibrium points. Lemke's algorithm always terminates in a finite number of steps, but for some problems it terminates with no solution while the problem does have a solution. However, when the data matrix of the LCP is copositive plus, Lemke's algorithm always gives a solution if such a solution exists. In this report we discuss the LCP as well as Lemke's algorithm. Further, we examine the set of copositive plus matrices and their properties.

In chapters 2 and 3, we focus on copositive plus matrices. In chapter 2, we discuss some basic properties of the copositive plus matrices. We examine certain subsets of copositive plus matrices, copositive plus matrices with small dimensions, and the copositive plus cone and its dual. Furthermore, we consider matrices which are copositive plus of order r and we consider copositive plus matrices with only −1, 0, 1 entries.

In chapter 3, we discuss the Copositive Plus Completion Problem. We consider matrices in which some entries are specified and the remaining entries are unspecified and free to be chosen; such matrices are called partial matrices. A choice of values for the unspecified entries is a completion of the partial matrix. The Copositive Plus Completion Problem is the problem of deciding which partial matrices have a copositive plus completion. In the first part of that chapter we examine matrices with unspecified non-diagonal entries and in the second part we examine matrices with unspecified diagonal entries.

In chapter 4, we discuss the LCP and Lemke's algorithm. We show that Lemke's algorithm always terminates in a finite number of steps. Furthermore, we discuss some applications of the LCP: linear and quadratic programming, the problem of finding equilibrium points in two person and polymatrix games, and the problem of finding equilibrium points in economics.

1.2 Notation

In this report we will use the following notation. The set of nonnegative matrices is denoted by N and the set of symmetric matrices is denoted by S.


The set ℝ is the set of real numbers. The set of nonnegative real numbers is denoted by ℝ_+. So if a vector v is in ℝ^n_+, then all n entries of the vector v are nonnegative. Further, the n-dimensional sphere with radius 1 is defined as the set S^n = {v ∈ ℝ^{n+1} | ‖v‖ = 1}. The nonnegative quadrant of this sphere is denoted by S^n_+ = {v ∈ ℝ^{n+1}_+ | ‖v‖ = 1}.

We denote the ith element of a vector v by v_i, and the element in the ith row and jth column of a matrix M is denoted by M_ij. The vector e is the vector with ones everywhere. The unit vector e_i is the vector with a one at the ith entry and zeros everywhere else. Inequality of vectors is always meant entrywise. For example, given a vector v, v ≥ 0 means that every entry of v is nonnegative.

Finally, the inner product of two vectors v_1 and v_2 is denoted by ⟨v_1, v_2⟩ = v_1^T v_2 (= v_2^T v_1). The norm of a vector v is given by ‖v‖ = √⟨v, v⟩. Furthermore, the infinity norm of a vector v is given by ‖v‖_∞ = max(|v_1|, |v_2|, . . . , |v_n|).

Chapter 2

Copositive Plus Matrices and their Properties

In the last sixty years, several articles about the properties of the set of copositive matrices have been published; see for example [3], [4], [17], [16], [6] and [5]. It is known what the cone and the dual cone of these matrices look like and what we can say about this set of matrices for small dimensions. Further, many sufficient and necessary conditions for copositivity have been found. Much less is known about the copositive plus matrices, which form a subset of the copositive matrices. These matrices were introduced by C.E. Lemke [12] and their properties have been studied by R.W. Cottle, G.J. Habetler, and C.E. Lemke in [3] and [4]; by A.J. Hoffman and F. Pereira in [8]; and by H. Väliaho in [17]. In this chapter the most important results of these articles will be presented and we will present some new theorems about copositive plus matrices.

2.1 The Class of Copositive Matrices

We will give here the definitions of copositive and copositive plus matrices with respect to symmetric matrices. However, for every non-symmetric matrix M, we have that M̃ = (1/2)(M + M^T) is a symmetric matrix. So if a definition of a property holds for M̃, we say that the corresponding non-symmetric matrix M also satisfies this property. We provide the following definitions and notation for the class of copositive matrices.

Definition 1. Let M be a real symmetric n × n matrix. The matrix M is said to be copositive, denoted by M ∈ C, if

    z^T M z ≥ 0 for all z ≥ 0.

The matrix M is said to be copositive plus, denoted by M ∈ C^+, if

    M ∈ C and for z ≥ 0, z^T M z = 0 implies M z = 0.

The matrix M is said to be strictly copositive if

    z^T M z > 0 for all nonzero z ≥ 0.

The interior of C is the set of strictly copositive matrices. Therefore, if a matrix M is strictly copositive it will be denoted by M ∈ int(C).


Note that for a non-symmetric matrix M and its corresponding symmetric matrix M̃, the quadratic product

    z^T M z = (1/2) z^T M z + (1/2) z^T M^T z = z^T ((1/2)(M + M^T)) z = z^T M̃ z.

So the above definitions almost hold for non-symmetric matrices; the only difference is that for a non-symmetric copositive plus matrix M, z ≥ 0 with z^T M z = 0 implies that (M + M^T) z = 0. A class of matrices which is close to the class of copositive matrices is the class of positive definite matrices.

Definition 2. Let M be a real symmetric n × n matrix. The matrix M is said to be positive semidefinite, denoted by M ∈ S^+, if

    z^T M z ≥ 0 for all z.

The matrix M is said to be positive definite, denoted by M ∈ S^{++}, if

    z^T M z > 0 for all z ≠ 0.

Two important properties are the property of inheritance and the property of closure under principal rearrangements. All classes of matrices defined in this section satisfy both properties; see [3]. The first property is about the principal submatrices of a matrix; such a principal submatrix can be obtained by removing similarly indexed rows and columns of a given square matrix. The second property is about the principal rearrangements of a matrix; by a principal rearrangement of a matrix we mean a matrix P^T M P where P is a permutation matrix.

Definition 3. A class X satisfies the property of inheritance if any principal submatrix of a matrix in class X is again in class X. Further, a class X satisfies the property of closure under principal rearrangements if any principal rearrangement of a matrix in class X is again in class X.

2.2 Properties of Copositive Matrices

Here we discuss some properties of the values of the entries of copositive matrices. It is easy to see that the diagonal elements of a copositive matrix must be nonnegative. This can be shown by contradiction. Assume there is a copositive matrix M with M_ii < 0; then a contradiction occurs for the quadratic product of M with the corresponding unit vector e_i. The product e_i^T M e_i = M_ii < 0 and this contradicts the copositivity of M. If all diagonal entries are equal to one, then we can say something about the other entries. This result is shown in the following theorem.

Theorem 1. If M is a copositive n × n matrix with M_ii = 1 for all i, then

  • the entries M_ij ≥ −1 for all i ≠ j,

  • the sum Σ_{i≠j} M_ij ≥ −n.

Proof. We use in this proof that the quadratic product of a symmetric matrix M, with only ones on the diagonal, is equal to

    x^T M x = Σ_{i,j} M_ij x_i x_j = Σ_i M_ii x_i^2 + Σ_{i≠j} M_ij x_i x_j = Σ_i x_i^2 + Σ_{i≠j} M_ij x_i x_j.

  • If x = e_i + e_j with i ≠ j, then the quadratic product x^T M x is equal to 2 + 2M_ij. This product is nonnegative, since M is copositive and x ≥ 0. It follows that M_ij ≥ −1 for all i ≠ j.

  • If x = e, then x^T M x = n + Σ_{i≠j} M_ij and again this is nonnegative. It follows that Σ_{i≠j} M_ij ≥ −n.

This theorem requires that the diagonal entries are equal to one; however, every matrix with positive diagonal entries can be scaled to a matrix with only ones on the diagonal. We can rewrite this theorem for general copositive matrices.

Theorem 2. If M is a copositive matrix, then

  • the entries M_ij ≥ −(M_ii + M_jj)/2 for all i ≠ j,

  • the sum Σ_{i≠j} M_ij ≥ −Σ_i M_ii.

Proof. This proof is similar to the proof of Theorem 1.

The previous theorem gives a lower bound for the entry M_ij for all i ≠ j. The next theorem gives a tighter lower bound for M_ij.

Theorem 3. If M is a copositive matrix, then M_ij ≥ −√(M_ii M_jj) for all i ≠ j.

Proof. If x = √(M_jj) e_i + √(M_ii) e_j with i ≠ j, then x^T M x = 2 M_ii M_jj + 2 M_ij √(M_ii M_jj). This product is nonnegative, since M is copositive and x ≥ 0. It follows that M_ij ≥ −√(M_ii M_jj) for all i ≠ j.
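The proof of Theorem 3 is constructive: the vector x = √(M_jj) e_i + √(M_ii) e_j is an explicit witness. A small numeric companion (function name ours) evaluates this witness; a negative value refutes copositivity:

    import numpy as np

    def bound_witness(M, i, j):
        """Evaluate x^T M x for the witness x = sqrt(Mjj) e_i + sqrt(Mii) e_j.

        By the computation in the proof this equals
        2*Mii*Mjj + 2*Mij*sqrt(Mii*Mjj); a negative value refutes
        copositivity of M."""
        x = np.zeros(M.shape[0])
        x[i] = np.sqrt(M[j, j])
        x[j] = np.sqrt(M[i, i])
        return x @ M @ x

    M = np.array([[4.0, -7.0], [-7.0, 9.0]])   # M_01 = -7 < -sqrt(4*9) = -6
    print(bound_witness(M, 0, 1))              # -12.0, so M is not copositive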

2.3 Properties of Copositive Plus Matrices

A copositive plus matrix is copositive, so the results of the previous section hold for copositive plus matrices. In this section we discuss some specific results for copositive plus matrices. From the previous section we know that all the diagonal elements of a copositive plus matrix are nonnegative. If a copositive plus matrix has a zero diagonal entry, then this gives restrictions on the entries in the corresponding row and column.

Theorem 4 ([3]). If M is copositive plus and Mii = 0, then Mij = Mji = 0 for all j.

Note that Theorem 4 also holds for positive (semi)definite matrices. With a principal rearrangement we can change the order of the rows and the columns in such a way that every zero column and the corresponding zero row move to the right and to the bottom, respectively. This gives the following result.

Theorem 5 ([3]). If M ≠ 0 is copositive plus, then there is a principal rearrangement M* of M such that

    M* = [ A  0 ]
         [ 0  0 ],

where A_ii > 0.

The following theorem gives another principal rearrangement for copositive plus matrices.

Theorem 6 ([4]). Let M be a copositive n × n matrix. M is copositive plus if and only if there is a principal rearrangement M* of M which in block form is

    M* = [ A    B ]
         [ B^T  D ],

such that

  • A is a positive semidefinite r × r matrix with 0 ≤ r ≤ n;

  • B = AB*, for some B*;

  • D − (B*)^T A B* is strictly copositive (hence D is strictly copositive).

The following theorem, Theorem 7, is about strictly copositive matrices. Theorem 8 is a similar theorem about copositive plus matrices.

Theorem 7. If M is a strictly copositive matrix, then there is an ε > 0 such that M − εI is strictly copositive.

Proof. Consider the constant

    k = min_{x≥0, x≠0} ( x^T M x / ‖x‖^2 ).

This k is well defined if this minimum exists. If x ≥ 0 and x ≠ 0, then there is a normalized vector y such that x = ‖x‖ y. We have that

    min_{x≥0, x≠0} ( x^T M x / ‖x‖^2 ) = min_{y≥0, ‖y‖=1} ( ‖x‖^2 y^T M y / (‖x‖^2 ‖y‖^2) ) = min_{y≥0, ‖y‖=1} ( y^T M y ).    (2.1)

We take the minimum over the set S^{n−1}_+ = {y ∈ ℝ^n_+ | ‖y‖ = 1}. This set is compact, because S^{n−1}_+ is a closed subset of S^{n−1} and S^{n−1} is compact. Furthermore, the function y ↦ y^T M y is continuous. The extreme value theorem states that the minimum (2.1) exists. So k is well defined.

The matrix M is strictly copositive, so k is a positive constant. Choose ε such that 0 < ε < k. If z ≥ 0 is an arbitrary vector with z ≠ 0, then

    z^T (M − εI) z = z^T M z − ε ‖z‖^2
                   > z^T M z − k ‖z‖^2
                   = ‖z‖^2 ( z^T M z / ‖z‖^2 − min_{x≥0, x≠0} ( x^T M x / ‖x‖^2 ) ) ≥ 0.

Hence, if we choose ε such that 0 < ε < k, then the matrix M − εI is strictly copositive.
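The constant k from this proof can be estimated numerically. The sketch below (our own construction) samples the nonnegative part of the unit sphere; since sampling only yields an upper estimate of the true minimum, any ε should be chosen with a margin:

    import numpy as np

    def estimate_k(M, samples=200000, seed=1):
        """Crude estimate of k = min { y^T M y : y >= 0, ||y|| = 1 }.

        Sampling can only overestimate the true minimum, so an epsilon
        derived from this value should include a safety margin."""
        rng = np.random.default_rng(seed)
        Y = rng.random((samples, M.shape[0]))
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # project onto the sphere
        return float(np.min(np.einsum('ij,jk,ik->i', Y, M, Y)))

    M = np.array([[2.0, -1.0], [-1.0, 2.0]])   # positive definite, so strictly copositive
    k_est = estimate_k(M)                      # the true k is 1 here, at y = (1, 1)/sqrt(2)
    eps = 0.5 * k_est                          # stay well below the estimate
    print(k_est, eps)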

Theorem 8. Let M be a copositive plus matrix, let

    W = {i | ∃ x ∈ ker(M) ∩ ℝ^n_+ with x_i > 0},

and let

    (I_W)_ij = 1 if i = j and i ∉ W,
               0 otherwise.

There exists an ε > 0 such that the matrix M − εI_W is copositive plus.

To prove this theorem we use the set

    Z = {y ∈ S^{n−1}_+ | supp(x) ⊄ supp(y) ∀ x ∈ ker(M) ∩ S^{n−1}_+}.    (2.2)

Here supp(x) = {i | x_i ≠ 0}.

Theorem 9. If M is a nonzero matrix, then the set Z, as defined by (2.2), is non-empty and compact.

Proof. If M is a nonzero matrix, then there are indices i and j such that M_ij ≠ 0. Therefore, the vector e_j ∈ S^{n−1}_+ is not in the kernel of M. Further, supp(e_j) = {j} and supp(x) ⊄ {j} for all x ∈ ker(M) ∩ S^{n−1}_+. So the vector e_j ∈ Z and hence Z is non-empty.

The set Z ⊆ S^{n−1}_+ is bounded, since S^{n−1}_+ is bounded. Left to show is that Z is closed. If y ∈ S^{n−1}_+ \ Z, then there is an x ∈ ker(M) ∩ S^{n−1}_+ such that supp(x) ⊆ supp(y). Let ε = min_{i∈supp(x)} y_i > 0. We consider all w ∈ S^{n−1}_+ with ‖w − y‖_∞ < ε.

    ‖w − y‖_∞ < ε ⇒ |w_j − y_j| < ε ∀ j
                  ⇒ |w_j − y_j| < ε ∀ j ∈ supp(x)
                  ⇒ |w_j − y_j| < min_{i∈supp(x)} y_i ∀ j ∈ supp(x)
                  ⇒ w_j > 0 ∀ j ∈ supp(x)
                  ⇒ supp(x) ⊆ supp(w).

Hence w ∈ S^{n−1}_+ \ Z. So for all y ∈ S^{n−1}_+ \ Z, there is an ε > 0 such that all vectors w with ‖w − y‖_∞ < ε are in S^{n−1}_+ \ Z. So the set S^{n−1}_+ \ Z is open in S^{n−1}_+ and therefore Z is closed in S^{n−1}_+. The set S^{n−1}_+ is closed, so the set Z is closed. Hence Z is compact.

We will now prove Theorem 8.

Proof of Theorem 8. Consider the constant

    k = min_{x∈Z} ( x^T M x / x^T I_W x ).

This k is well defined and positive, since Z is nonempty and compact, and no x ∈ Z is in the kernel of M. Choose ε such that 0 < ε < k. Take an arbitrary vector x ≥ 0 with x ≠ 0. There is a vector z such that x = ‖x‖ z and ‖z‖ = 1. The product x^T (M − εI_W) x ≥ 0 if and only if z^T (M − εI_W) z ≥ 0. We split the problem into three cases.

  • Case 1: z ∈ ker(M). If z ∈ ker(M), then z ∈ ker(I_W) and

        z^T (M − εI_W) z = z^T M z − ε z^T I_W z = 0.

  • Case 2: z ∈ Z. If z ∈ Z, then

        z^T (M − εI_W) z = z^T M z − ε z^T I_W z
                         > z^T M z − min_{x∈Z} ( x^T M x / x^T I_W x ) z^T I_W z
                         ≥ z^T M z − ( z^T M z / z^T I_W z ) z^T I_W z = 0.

  • Case 3: z ∉ ker(M) and z ∉ Z. If z ∉ Z, then there is a y ∈ ker(M) with supp(y) ⊆ supp(z). If

        α = min_{i∈supp(y)} ( z_i / y_i ),

    then z ≥ αy and there is an i ∈ supp(y) such that z_i = αy_i. Let p = z − αy ≥ 0. Due to the choice of α, supp(y) ⊄ supp(p). If p ∈ Z, then p^T (M − εI_W) p > 0; see case 2. If p ∉ Z, then there is a v ∈ ker(M) with supp(v) ⊆ supp(p) and we can repeat the previous steps until we find a p ∈ Z. We will eventually find a p ∈ Z, because due to the choice of α the support of the remaining vector p becomes smaller. So there is a moment that there is no y ∈ ker(M) with supp(y) ⊆ supp(p). Further,

        p^T (M − εI_W) p = (z − αy)^T (M − εI_W) (z − αy) = z^T (M − εI_W) z,

    since y lies in the kernel of both M and I_W. So z^T (M − εI_W) z is positive.

Hence, if we choose ε such that 0 < ε < k, then x^T (M − εI_W) x ≥ 0 for all x ≥ 0. So M − εI_W is copositive. Furthermore, for x ≥ 0, x^T (M − εI_W) x = 0 if and only if x ∈ ker(M), and in that case (M − εI_W) x = 0. So M − εI_W is copositive plus.

2.4 Subsets

In this section we discuss certain subsets of the copositive plus matrices. We have the following inclusions:

    int(C) ⊆ C^+ ⊆ C and S^{++} ⊆ S^+.

These inclusions follow directly from the definitions of these sets. Two other inclusions which follow easily from the definitions are

    S^{++} ⊆ int(C) and S^+ ⊆ C.

In the following theorem we see an inclusion which is not trivial. It is proved in [3], but we also propose another proof here.

Theorem 10 ([3]). Every positive semidefinite matrix is copositive plus, that is, S^+ ⊆ C^+.

Proof. Let M be a positive semidefinite matrix. The matrix M is copositive, since S^+ ⊆ C. Further, the matrix M has a Cholesky decomposition, that is, M = A^T A. If z ≥ 0 and z^T M z = 0, then

    z^T M z = z^T A^T A z = (Az)^T (Az) = ‖Az‖^2 = 0 ⇒ Az = 0 ⇒ A^T A z = M z = 0.

So, if z ≥ 0 and z^T M z = 0, then M z = 0 and hence M is copositive plus.
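As a small numeric illustration of Theorem 10 (the example matrix is our own), take a rank-deficient positive semidefinite M = A^T A and a nonnegative kernel vector z:

    import numpy as np

    A = np.array([[1.0, -1.0, 0.0]])
    M = A.T @ A            # M = A^T A is positive semidefinite with rank 1

    z = np.array([1.0, 1.0, 0.0])   # nonnegative vector in the kernel of M
    print(z @ M @ z)                # 0.0
    print(M @ z)                    # [0. 0. 0.], exactly as Theorem 10 predicts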

The nonnegative matrices are a subset of the copositive matrices. However, they are not a subset of the copositive plus matrices. An example of a nonnegative matrix which is not copositive plus is the matrix

    M = [ 0  1 ]
        [ 1  0 ].

It follows directly from Theorem 4 that M is not copositive plus. However, there is a subset of the nonnegative matrices for which every element is a copositive plus matrix. We define this subset as the flatly nonnegative matrices; see [4]. A matrix M is said to be flatly nonnegative, denoted by M ∈ N^+, if

    M ∈ N and M_ii = 0 imply M_ij = M_ji = 0 for all i ≠ j.

Note that the interior of the nonnegative matrices, the strictly positive matrices, is in N^+.

Theorem 11. Every flatly nonnegative matrix is copositive plus, that is, N^+ ⊆ C^+.

Proof. Let M be a flatly nonnegative matrix. It is easy to see that M is copositive, because M ∈ N^+ ⊆ N ⊆ C. Left to prove is that x ≥ 0 with x^T M x = 0 implies M x = 0.

For x ≥ 0, every term of x^T M x = Σ_{i,j} x_i x_j M_ij is nonnegative. So if x^T M x = 0, then all terms of the sum x^T M x = Σ_{i,j} x_i x_j M_ij are zero. We have the following implications:

    x ≥ 0 and x^T M x = 0 ⇒ x_i x_j M_ij = 0 ∀ i, j
                          ⇒ x_i^2 M_ii = 0 ∀ i
                          ⇒ x_i = 0 ∨ M_ii = 0 ∀ i
                          ⇒ x_i = 0 ∨ M_ii = M_ij = M_ji = 0 ∀ i, j   (because M ∈ N^+)
                          ⇒ (Mx)_j = Σ_{i=1}^n M_ij x_i = 0 ∀ j
                          ⇒ Mx = 0.

So M is copositive plus.

[Figure 2.1: Venn diagram illustrating N ∩ C^+ = N^+]

Further, if a copositive plus matrix is nonnegative it is also flatly nonnegative; this follows from Theorem 4. So N ∩ C^+ = N^+.

The Minkowski sum of two sets of matrices A and B is the result of adding every element of A to every element of B, that is, the set A + B = {a + b | a ∈ A, b ∈ B}. We know that the Minkowski sum of the nonnegative matrices and the positive semidefinite matrices is a subset of the copositive matrices. We will show that the Minkowski sum of the flatly nonnegative matrices and the positive semidefinite matrices is a subset of the copositive plus matrices.

Theorem 12. The Minkowski sum of the flatly nonnegative matrices and the positive semidefinite matrices is copositive plus, that is, N^+ + S^+ ⊆ C^+.

Proof. Let A be a flatly nonnegative matrix and let B be a positive semidefinite matrix. The matrix A + B is copositive, because A + B ∈ N^+ + S^+ ⊆ N + S^+ ⊆ C. If x is a nonnegative vector, then

    x^T (A + B) x = 0 ⇔ x^T A x + x^T B x = 0
                      ⇔ x^T A x = 0 and x^T B x = 0   (because A ∈ N^+ ⊆ C, B ∈ S^+ ⊆ C)
                      ⇔ Ax = 0 and Bx = 0             (because A ∈ N^+ ⊆ C^+, B ∈ S^+ ⊆ C^+).

So (A + B)x = 0 and hence A + B is copositive plus.

2.5 Small Dimensions

In this section we discuss the properties of copositive plus matrices with small dimensions. We know already that for dimension n = 2, the set of copositive matrices is equal to N ∪ S^+. So every copositive 2 × 2 matrix is nonnegative and/or positive semidefinite. We can say something similar about copositive plus 2 × 2 matrices.

Theorem 13. Let M be a symmetric 2 × 2 matrix. The matrix M is copositive plus if and only if it is flatly nonnegative or positive semidefinite. That is, C^+_{2×2} = N^+_{2×2} ∪ S^+_{2×2}.

Proof. Let M be a copositive plus 2 × 2 matrix of the form

    M = [ a  b ]
        [ b  c ].

The matrix M is copositive, so a and c are nonnegative. We split the proof in two cases.

  • Case 1: a, c > 0. If b ≥ 0, then M is flatly nonnegative, so we are done. If b < 0, we can easily prove that M is positive semidefinite. For x ≥ 0, we have x^T M x ≥ 0 because M is copositive. For x ≤ 0, we have x^T M x = (−x)^T M (−x) ≥ 0 because −x ≥ 0 and M is copositive. Finally, if x has one positive entry and one negative entry, then x^T M x = a x_1^2 + 2b x_1 x_2 + c x_2^2 has only positive terms and x^T M x ≥ 0. So x^T M x ≥ 0 for all x and hence M is positive semidefinite.

  • Case 2: a and/or c is equal to zero. Without loss of generality we can say a = 0. From Theorem 4 it follows that b is zero as well; hence M is flatly nonnegative.

So M is flatly nonnegative and/or positive semidefinite. So C^+_{2×2} ⊆ N^+_{2×2} ∪ S^+_{2×2}. We already know that N^+ ∪ S^+ ⊆ C^+; see Theorems 10 and 11. Hence, C^+_{2×2} = N^+_{2×2} ∪ S^+_{2×2}.

Let int(N) be the set of strictly positive matrices. Note that N^+_{2×2} ∪ S^+_{2×2} = int(N)_{2×2} ∪ S^+_{2×2}. So in Theorem 13 we can replace N^+_{2×2} ∪ S^+_{2×2} with int(N)_{2×2} ∪ S^+_{2×2}. So a symmetric 2 × 2 matrix is copositive plus if and only if it is positive semidefinite or strictly positive.
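Theorem 13 turns membership in C^+_{2×2} into an exact finite test. A sketch of such a test (function names and tolerances ours; positive semidefiniteness checked via eigenvalues):

    import numpy as np

    def is_flatly_nonnegative(M, tol=1e-12):
        """M in N+: M is nonnegative and a zero diagonal entry forces
        zeros in the corresponding row (and, by symmetry, column)."""
        if np.any(M < -tol):
            return False
        for i in range(M.shape[0]):
            if abs(M[i, i]) <= tol and np.any(np.abs(M[i, :]) > tol):
                return False
        return True

    def is_copositive_plus_2x2(M, tol=1e-12):
        """Theorem 13: a symmetric 2x2 matrix is copositive plus iff it
        is flatly nonnegative or positive semidefinite."""
        psd = np.min(np.linalg.eigvalsh(M)) >= -tol
        return is_flatly_nonnegative(M) or psd

    print(is_copositive_plus_2x2(np.array([[1.0, -1.0], [-1.0, 1.0]])))  # True (psd)
    print(is_copositive_plus_2x2(np.array([[0.0, 1.0], [1.0, 0.0]])))    # False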

 8  1 − 10 1 8 M =  − 10 1 1  . (2.3) 1 1 1

If x ≥ 0, then

    x^T M x = (x_1, x_2, x_3) M (x_1, x_2, x_3)^T = (x_1 − x_2)^2 + (2/5) x_1 x_2 + 2 x_1 x_3 + 2 x_2 x_3.

The product x^T M x > 0 for all x ≠ 0 with x ≥ 0, so M is strictly copositive and also copositive plus. However it is clearly not flatly nonnegative. Neither is it positive semidefinite, because for a vector x with x_1 = x_2 = 1 and x_3 = −1 the quadratic form of M is negative.

Hannu Väliaho [17] has characterized all the copositive plus matrices of dimension n ≤ 3.

Theorem 14 ([17]). Let M be a symmetric n × n matrix with n ≤ 3. The matrix M is copositive plus if and only if it is positive semidefinite or, after deleting the possible zero rows and columns, strictly copositive.

This theorem is proved in [17]. That proof uses the claim that a copositive plus 3 × 3 matrix of the form

    M = [ 1  a  b ]
        [ a  1  c ]
        [ b  c  1 ],

with a, b, c ≥ −1 and |a| < 1, |b| < 1 or |c| < 1 is positive semidefinite. However this is not always true; see matrix (2.3) for a counterexample. Therefore, we propose a different and more detailed proof here. For this proof we need the following theorem for copositive matrices.

Theorem 15 ([6]). Let M be a symmetric 3 × 3 matrix. The matrix M is copositive if and only if the conditions

    M_11 ≥ 0, M_22 ≥ 0, M_33 ≥ 0,    (2.4)

    M_12 ≥ −√(M_11 M_22), M_23 ≥ −√(M_22 M_33), M_13 ≥ −√(M_11 M_33),    (2.5)

are satisfied, as well as at least one of the following conditions:

    M_12 √(M_33) + M_23 √(M_11) + M_31 √(M_22) + √(M_11 M_22 M_33) ≥ 0,    (2.6)

    det(M) ≥ 0.    (2.7)

The matrix is strictly copositive if and only if these conditions are satisfied with strict inequality in (2.4), (2.5) and (2.7).

We will now prove Theorem 14.

Proof of Theorem 14. Sufficiency is immediate, because both S^+ and int(C) are subsets of C^+. Further, necessity is clear for n = 1. The necessity for n = 2 follows from Theorem 13, because if a matrix is flatly nonnegative, then it is also, after deleting the possible zero rows and columns, strictly copositive. So it is left to show that this theorem holds for n = 3.

If a 3 × 3 matrix has zero rows and columns, we can delete them and we obtain a matrix of lower dimension. For matrices with dimension lower than three we have already proved that the theorem is correct. So for n = 3 it suffices to consider the scaled matrix

    M = [ 1  a  b ]
        [ a  1  c ]
        [ b  c  1 ].

The matrix M is copositive, so it satisfies (2.4) and (2.5) and at least one of (2.6) or (2.7); see Theorem 15. The diagonal entries are one, so condition (2.4) is strict. From (2.5) it follows that a, b, c ≥ −1. If a, b, c ≥ 0, then M is strictly copositive. Let us now consider the cases where at least one of a, b, c is negative; assume without loss of generality a < 0. We split the proof in three cases:

  • Case 1: (2.5) is not strict, take a = −1. If x = e_1 + e_2, then x^T M x = 0. The vector Mx = 0, since M is copositive plus. In particular, (Mx)_3 = b x_1 + c x_2 + x_3 = b + c = 0 and therefore b = −c. We know that a, b, c ≥ −1, so |b| and |c| are less than or equal to one. One of the eigenvalues of M is equal to zero and the other eigenvalues are equal to λ = 3/2 ± (1/2)√(1 + 8b^2). The value of b^2 is between zero and one, so these two eigenvalues are nonnegative. This gives that all eigenvalues are nonnegative, so M is positive semidefinite.

  • Case 2: (2.5) is strict and (2.6) is satisfied or (2.7) with strict inequality sign is satisfied. It follows from Theorem 15 that M is strictly copositive.

  • Case 3: (2.5) is strict, det(M) = 0, and (2.6) is not satisfied. One of the eigenvalues of M is zero, since det(M) = 0. The other eigenvalues are equal to λ = 3/2 ± (1/2)√(4(a^2 + b^2 + c^2) − 3); note that the eigenvalues are real because the matrix M is symmetric. Further, the values |a|, |b|, |c| < 1, since (2.5) is strict and (2.6) is not satisfied. Therefore, a^2 + b^2 + c^2 < 3 and all eigenvalues are nonnegative, so M is positive semidefinite.

So we have proved that M is positive semidefinite or strictly copositive.

In [17] an example is given which shows that the preceding theorem does not hold for dimensions larger than n = 3. Consider the copositive plus matrix

    M = [ M_11  M_12 ] = [  1 −1  0  0 ]
        [ M_21  M_22 ]   [ −1  1  0  0 ]
                         [  0  0  1  2 ]
                         [  0  0  2  1 ].

Here M_11 is positive semidefinite but not strictly copositive, and M_22 is strictly copositive but not positive semidefinite. In the following theorem we characterize again the 2 × 2 and 3 × 3 copositive plus matrices, but this time the characterization also holds for 4 × 4 copositive plus matrices. The following theorem resembles the theorem for copositive n × n matrices with n ≤ 4, which says that C_{n×n} = N_{n×n} + S^+_{n×n} for n ≤ 4; see [13].

Theorem 16. Let M be a symmetric n × n matrix with n ≤ 4. The matrix M is copositive plus if and only if there is a flatly nonnegative matrix A and a positive semidefinite matrix B such that A + B = M. That is, C^+_{n×n} = N^+_{n×n} + S^+_{n×n} with n ≤ 4.

Proof. We know N^+ + S^+ ⊆ C^+; see Theorem 12. Left to show is that for n ≤ 4 it holds that C^+ ⊆ N^+ + S^+. Let M be a symmetric n × n matrix with n ≤ 4, let

    W = {i | ∃ x ∈ ker(M) ∩ ℝ^n_+ with x_i > 0},

and let

    (I_W)_ij = 1 if i = j and i ∉ W,
               0 otherwise.

From Theorem 8 it follows that there is an ε > 0 such that the matrix M − εI_W is copositive. Therefore there exist an A ∈ N and a B ∈ S^+ such that M − εI_W = A + B; see [13]. It follows that M = A + εI_W + B =: Ã + B with Ã = A + εI_W ∈ N. If x ∈ ker(M) ∩ ℝ^n_+, then

    x^T M x = 0 ⇒ x^T Ã x + x^T B x = 0 ⇒ x^T Ã x = 0 and x^T B x = 0 ⇒ x^T Ã x = 0 and Bx = 0,    (2.8)

    x^T M x = 0 ⇒ Mx = 0 ⇒ Ãx + Bx = 0.    (2.9)

The implications (2.8) and (2.9) imply that every x ∈ ker(M) ∩ ℝ^n_+ is in the kernel of Ã. This together with Ã ∈ N gives that if i ∈ W or j ∈ W, then Ã_ij = 0. Furthermore, we have that Ã_ii ≥ ε > 0 for all i ∉ W. Therefore, Ã is flatly nonnegative. We have constructed a matrix Ã ∈ N^+ and a matrix B ∈ S^+ such that M = Ã + B. Therefore, C^+ ⊆ N^+ + S^+.

2.6 The Copositive Plus Cone and its Dual Cone

The set of copositive matrices is a closed convex pointed cone with nonempty interior. In this section we will see that the set of copositive plus matrices is also a cone and that it shares many properties with the copositive cone. However, the copositive plus cone is not closed and we examine its closure. At the end of this section we will consider the dual cone of the copositive plus matrices.

2.6.1 The Copositive Plus Cone

A set K is called a cone if for every x ∈ K and every α ≥ 0, we have αx ∈ K. Furthermore, a set K is called a convex cone if for every x, y ∈ K and every λ_1, λ_2 ≥ 0, we have λ_1 x + λ_2 y ∈ K. A cone K is pointed if −K ∩ K = {0}, that is, the cone K does not contain a straight line. In the next theorem we will see that the set of copositive plus matrices is a convex cone.

Theorem 17. The set of copositive plus matrices is a convex pointed cone with nonempty interior.

Proof. Take two arbitrary copositive plus matrices A and B and scalars λ_1, λ_2 > 0. Let D be the conic combination of the matrices A and B, that is, D = λ_1 A + λ_2 B. The matrix D is copositive, because the set of copositive matrices is a convex cone. The matrices A and B are both copositive, so z^T A z ≥ 0 and z^T B z ≥ 0 for all z ≥ 0. If z ≥ 0 and z^T D z = 0, then

    z^T D z = λ_1 z^T A z + λ_2 z^T B z = 0 ⇔ z^T A z = 0 and z^T B z = 0 ⇔ Az = 0 and Bz = 0.

Hence, Dz = λ_1 Az + λ_2 Bz = 0. Consequently, D is copositive plus and the set of copositive plus matrices is a convex cone.

The copositive cone is pointed and the copositive plus cone is a subset of this cone, so the copositive plus cone is also pointed. Further, the interior of C, the set of strictly copositive matrices, is a subset of C^+ and the interior of C is nonempty. Hence, the interior of C^+ is nonempty.

The copositive plus cone is not closed. We will illustrate this with an example in dimension n = 2. Consider the sequence of 2 × 2 matrices of the form

    M_i = [ a_i  1  ]
          [ 1    a_i ],

where a_i is a sequence of positive numbers which converges to zero. Each matrix in this sequence is copositive plus, because they are all in N^+. However, the matrix to which the sequence converges is not copositive plus. Hence, the copositive plus cone is not closed.

Theorem 18. The closure of the copositive plus matrices is the set of copositive matrices.

Proof. We have that cl(int(C)) ⊆ cl(C^+) ⊆ cl(C), since int(C) ⊆ C^+ ⊆ C. The closure of C is C, since C is closed. Furthermore, the closure of the interior of C is C. Hence, the closure of C^+ is C.

2.6.2 The Dual Copositive Plus Cone

The dual cone of a set K is defined as K* = {A ∈ S | ⟨A, B⟩ ≥ 0 for all B ∈ K}, where ⟨A, B⟩ = trace(A^T B). The dual cone of the copositive matrices is equal to C* = {A ∈ S_{n×n} | A = F F^T with F ∈ N_{n×m}}; see [6].

Theorem 19. The dual cone of the copositive plus matrices is equal to the dual cone of the copositive matrices. That is, (C^+)* = {A ∈ S_{n×n} | A = F F^T with F ∈ N_{n×m}}.

Proof. The dual cone of the copositive matrices satisfies C* ⊆ (C^+)*, since C^+ ⊆ C. Left to show is that (C^+)* ⊆ C*. We will prove that if M ∉ C*, then M ∉ (C^+)*. If a matrix M ∉ C*, then there is a matrix B ∈ C with ⟨B, M⟩ < 0. For every ε > 0, the matrix B + εI ∈ int(C) ⊆ C^+, and for ε small enough ⟨B + εI, M⟩ = ⟨B, M⟩ + ε⟨I, M⟩ < 0. So if M ∉ C*, then M ∉ (C^+)*. Hence, (C^+)* ⊆ C*.

2.7 Copositive Plus of Order r

Matrices which are not copositive plus can still have copositive plus principal submatrices. In this section we will consider matrices for which all r × r principal submatrices are copositive plus. More precisely, we say that M is copositive plus of order r if and only if every r × r principal submatrix is copositive plus. We will present here some theorems with necessary, but also sufficient, conditions for a matrix to be copositive plus.

Theorem 20 ([16]). If M ∈ ℝ^{n×n} is copositive plus of order n − 1 but not strictly copositive, then it is copositive plus if and only if it is singular.

Theorem 21 ([17]). If M ∈ ℝ^{n×n} has p < n positive eigenvalues, then it is copositive plus if and only if it is copositive plus of order p + 1.

Theorem 22 ([17]). If M ∈ ℝ^{n×n} is of rank r < n, then it is copositive plus if and only if it is copositive plus of order r.

2.8 Copositive Plus Matrices with −1, 0, 1 Entries

In this section we characterize the matrices with −1, 0, 1 entries. Let E be the set of symmetric matrices with ones on the diagonal and zeros, ones and minus ones elsewhere. In [8], A. J. Hoffman and F. Pereira have shown under which conditions a matrix in E is copositive, copositive plus or positive semidefinite. Below, we will give part of their main results. For this, we will refer to the following set of 3 × 3 matrices:

    [  1 −1 −1 ]          [  1 −1 −1 ]          [ 1 1 1 ]
    [ −1  1 −1 ]  (2.10)  [ −1  1  0 ]  (2.11)  [ 1 1 0 ]  (2.12)
    [ −1 −1  1 ]          [ −1  0  1 ]          [ 1 0 1 ]

    [  1  1 −1 ]          [  1 −1  1 ]
    [  1  1  0 ]  (2.13)  [ −1  1  1 ]  (2.14)
    [ −1  0  1 ]          [  1  1  1 ]

Theorem 23 ([8]). Let A ∈ E.

  • The matrix A is copositive if and only if it has no 3 × 3 principal submatrices which, after principal rearrangement, are of the form (2.10) or (2.11).

  • The matrix A is positive semidefinite if and only if it has no 3 × 3 principal submatrices which, after principal rearrangement, are of the form (2.10)–(2.14).

  • The matrix A is copositive plus if and only if A contains no 3 × 3 principal submatrices which, after principal rearrangement, are of the form (2.10), (2.11), (2.13), (2.14).

Let G_{−1}(A), G_0(A) and G_1(A) be undirected graphs associated with a symmetric n × n matrix A ∈ E. Here we define G_{−1}(A) (respectively G_0(A), G_1(A)) to be the graph with n vertices such that the vertices i and j are adjacent if and only if A_ij = −1 (respectively A_ij = 0, A_ij = 1). The characterization of the graphs G_{−1}(A), G_0(A) and G_1(A), where A is a copositive matrix, is given in [8]. The following theorem is about the graphs G_{−1}(A), G_0(A) and G_1(A), where A is copositive plus.

Theorem 24. Let A ∈ E. The matrix A ∈ C^+ if and only if each of the following statements is true:

1. G_{−1}(A) contains no triangles.

2. G_1(A) contains those edges (i, j) where i and j are at distance 2 in G_{−1}(A).

3. G_0(A) contains those edges (i, j) where i and j are at distance 2 in G_1(A).

4. G_{−1}(A) ∪ G_0(A), G_{−1}(A) ∪ G_1(A), or G_0(A) ∪ G_1(A) contains a triangle for every subset of three vertices.

Proof. Statement 1 excludes submatrix (2.10), statement 2 excludes submatrix (2.11), statement 3 excludes submatrix (2.14) and statement 4 excludes submatrix (2.13).
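Theorem 23 reduces the copositive plus test on E to a finite search, which is easy to implement. A sketch (our own naming) checks every 3 × 3 principal submatrix, under all principal rearrangements, against the forbidden forms (2.10), (2.11), (2.13) and (2.14):

    import numpy as np
    from itertools import combinations, permutations

    # The forbidden 3x3 forms (2.10), (2.11), (2.13), (2.14):
    FORBIDDEN = [np.array([[1, -1, -1], [-1, 1, -1], [-1, -1, 1]]),   # (2.10)
                 np.array([[1, -1, -1], [-1, 1, 0], [-1, 0, 1]]),     # (2.11)
                 np.array([[1, 1, -1], [1, 1, 0], [-1, 0, 1]]),       # (2.13)
                 np.array([[1, -1, 1], [-1, 1, 1], [1, 1, 1]])]       # (2.14)

    def is_copositive_plus_E(A):
        """Theorem 23 for A in E: copositive plus iff no 3x3 principal
        submatrix equals a forbidden form after principal rearrangement."""
        n = A.shape[0]
        for idx in combinations(range(n), 3):
            for p in permutations(idx):
                S = A[np.ix_(p, p)]
                if any(np.array_equal(S, F) for F in FORBIDDEN):
                    return False
        return True

    print(is_copositive_plus_E(np.ones((3, 3), dtype=int)))   # True: all-ones is in N+
    print(is_copositive_plus_E(FORBIDDEN[3]))                 # False: it is the form (2.14)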

A subset of E is the set of symmetric matrices with ones on the diagonal and ones and minus ones elsewhere. This set is denoted by E^+. Rewriting Theorem 23 for matrices in E^+ gives the following theorem.

Theorem 25. Let A ∈ E^+.

  • The matrix A is copositive if and only if it has no 3 × 3 principal submatrices which, after principal rearrangement, are of the form (2.10).

  • The matrix A is positive semidefinite if and only if it has no 3 × 3 principal submatrices which, after principal rearrangement, are of the form (2.10), (2.14).

  • The matrix A is copositive plus if and only if A contains no 3 × 3 principal submatrices which, after principal rearrangement, are of the form (2.10), (2.14).

Proof. Delete from Theorem 23 all principal submatrices which contain zeros and we are left with Theorem 25.

Given a matrix A ∈ E^+, Theorem 25 gives the same conditions for A to be copositive plus as to be positive semidefinite. We get the following theorem.

Theorem 26. Let M ∈ E^+. The matrix M is copositive plus if and only if M is positive semidefinite.

Let G(M) be an undirected graph associated with a symmetric n × n matrix M ∈ E^+. Here we define G(M) to be the graph with n vertices such that the vertices i and j are adjacent if and only if M_ij = −1. The following theorem is about the graphs G(M), where M is copositive plus.

Theorem 27 ([7]). Let M ∈ E^+. The matrix M is copositive plus (or positive semidefinite) if and only if G(M) is K_{p,n−p} for some 0 < p < n.

Here K_{p,n−p} is a complete bipartite graph with partitions of size p and size n − p.

Chapter 3

The Copositive Plus Completion Problem

In this chapter we consider matrices in which some entries are specified, but where the remaining entries are unspecified and are free to be chosen. Such matrices are called partial matrices. A choice of values for the unspecified entries is a completion of the partial matrix. In a completion problem we ask for which partial matrices we can find a completion such that some desired property is satisfied. The (strictly) copositive (plus) completion problem is the problem of deciding which partial matrices have a (strictly) copositive (plus) completion.

A necessary condition for a partial matrix to have a (strictly) copositive (plus) completion is that all fully specified principal submatrices have the desired property; otherwise the property of inheritance is violated; see Definition 3. A partial matrix for which every fully specified principal submatrix is (strictly) copositive (plus) is called partial (strictly) copositive (plus).

3.1 Unspecified Non-diagonal Elements

Throughout this section, we assume that all diagonal entries of a partial matrix are specified. We first assume that only one pair of non-diagonal entries is unspecified; without loss of generality we can take the entries in the upper right and lower left corners as the unspecified entries. So in this section, we consider the partial matrix A of the form

    A = [ a   b^T  ? ]
        [ b   A′   c ]    (3.1)
        [ ?   c^T  d ],

where the question marks denote the unspecified entries. For (strictly) copositive matrices it is shown that every partial (strictly) copositive matrix has (strictly) copositive completions. See the following theorem.

Theorem 28 ([10]). If A is a partial copositive matrix of the form (3.1), then

    A = [ a   b^T  s ]
        [ b   A′   c ]
        [ s   c^T  d ]

is a copositive matrix for s ≥ √(ad). If A is partial strictly copositive, then A with s ≥ √(ad) is strictly copositive.
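Theorem 28 is constructive, so a completion routine is one line of arithmetic. A minimal sketch (function name ours), assuming the input is partial copositive with the corner pair unspecified:

    import numpy as np

    def complete_corner(A_partial):
        """Fill the unspecified corner pair of a partial copositive matrix
        of the form (3.1) with s = sqrt(a*d), as Theorem 28 allows."""
        A = A_partial.copy()
        s = np.sqrt(A[0, 0] * A[-1, -1])
        A[0, -1] = A[-1, 0] = s
        return A

    # Corner entries below are placeholders for the unspecified pair:
    A = np.array([[1.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 1.0]])
    print(complete_corner(A))   # corner filled with sqrt(1*1) = 1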


We cannot replace copositive in Theorem 28 with copositive plus. Consider the counterexample

    A = [  1 −1  s ]
        [ −1  1  2 ]    (3.2)
        [  s  2  1 ].

This matrix is partial copositive plus, because the upper left 2 × 2 submatrix is positive semidefinite and the lower right 2 × 2 submatrix is flatly nonnegative; see Theorems 10 and 11. If we take x = (1, 1, 0)^T, then x^T A x is zero. But Ax = (0, 0, s + 2)^T and this cannot be zero if s ≥ √(ad) = 1. In this section we will see that this matrix is an example of a partial copositive plus matrix which does not have a copositive plus completion at all.

The following theorem tells us under which conditions a partial copositive plus matrix has a copositive plus completion.

Theorem 29. If A is a partial copositive plus matrix of the form (3.1), then

    A = [ a   b^T  s ]
        [ b   A′   c ]    (3.3)
        [ s   c^T  d ]

is a copositive plus matrix if s ≥ √(ad) and the following two conditions hold for x ≥ 0:

    [ a b^T ; b A′ ] x = 0  ⇒  [ s  c^T ] x = 0,    (3.4)

    [ A′ c ; c^T d ] x = 0  ⇒  [ b^T  s ] x = 0.    (3.5)

To prove this theorem, we need the following theorem.

Theorem 30. Let A be a copositive matrix and let x be a strictly positive vector. If x^T A x = 0, then Ax = 0.

Proof. Consider the model

    min_{x≥0} x^T A x.

The objective value of this model is always nonnegative, because A is copositive. Further, if x = 0, then x^T A x = 0. So the absolute minimum value of this model is zero. We need to show that if x* is an absolute minimum, then Ax* = 0. For proving this we will use the KKT conditions.

Let f(x) = x^T A x and let g_i(x) = −x_i ≤ 0 for i = 1, . . . , n. The KKT conditions for this model are

    ∇f(x) + Σ_{i=1}^n μ_i ∇g_i(x) = 2Ax − Σ_{i=1}^n μ_i e_i = 0,
    μ_i ≥ 0, μ_i g_i(x) = −μ_i x_i = 0, and g_i(x) = −x_i ≤ 0 for all i.

A vector x satisfies the linear independence constraint qualification if the gradients of the active inequality constraints are linearly independent at x. Further, if x* is a local minimum that satisfies this constraint qualification, then there exist constants μ_i such that the KKT conditions hold.

Take a vector x* > 0 with x*^T A x* = 0; then x* is an absolute minimum of the model. None of the inequality constraints of the model is active, since g_i(x*) = −x_i* < 0 because x* > 0. Consequently, the linear independence constraint qualification is satisfied and therefore there exist constants μ_i* such that the KKT conditions hold. From μ_i* g_i(x*) = 0 it follows that μ_i* = 0 for all i. The first KKT condition becomes ∇f(x*) = 0. The gradient ∇f(x*) = 2Ax* = 0, so Ax* = 0.

Before proving Theorem 29, we introduce some notation. Let A be a matrix of the form (3.3); then we denote the principal submatrix of the first n − 1 columns by A_u and the principal submatrix of the last n − 1 columns by A_l, that is,

    A_u = [ a  b^T ]    and    A_l = [ A′   c ]
          [ b  A′  ]                 [ c^T  d ].

Recall that if A is partial copositive plus, every fully specified principal submatrix is copositive plus. Therefore, A_u and A_l are copositive plus.

Further, let x ∈ ℝ^n be the vector which consists of the three components x_1, x_n ∈ ℝ and x′ ∈ ℝ^{n−2} such that x = (x_1, x′^T, x_n)^T. Let x_u = (x_1, x′^T)^T and let x_l = (x′^T, x_n)^T. The quadratic product of A is equal to

    x^T A x = a x_1^2 + x′^T A′ x′ + d x_n^2 + 2 x_1 b^T x′ + 2 x_n c^T x′ + 2 s x_1 x_n    (3.6)
            = x_u^T A_u x_u + d x_n^2 + 2 x_n c^T x′ + 2 s x_1 x_n    (3.7)
            = x_l^T A_l x_l + a x_1^2 + 2 x_1 b^T x′ + 2 s x_1 x_n.    (3.8)

Further, the product Ax is equal to

    Ax = [ a x_1 + b^T x′ + s x_n ]
         [ b x_1 + A′ x′ + c x_n  ]    (3.9)
         [ s x_1 + c^T x′ + d x_n ].

Proof of Theorem 29. The first restriction for s, namely s ≥ √(ad), guarantees the copositivity of the matrix A; see Theorem 28. Left to show is that the matrix A is copositive plus. Take an x ≥ 0 such that x^T A x = 0; we will show that this always implies Ax = 0. We split the problem into five cases: the case where x > 0, the three cases where respectively x_1 = 0, x_n = 0 and x′ = 0, and the final case where x_1, x_n, x′ ≠ 0 and x_i = 0 for certain i.

  • Case 1: x > 0. The vector Ax = 0, since A is copositive and x > 0; see Theorem 30.

  • Case 2: x_1 = 0. Consider the terms of Ax as in (3.9). The first term of each entry of Ax is zero, since x_1 = 0. Further, A_l x_l = 0, since x^T A x = x_l^T A_l x_l = 0 and A_l is copositive plus. If A_l x_l = 0, then b^T x′ + s x_n = 0; see restriction (3.5). So the sum of the last two terms of each entry of Ax is also zero. Hence, Ax = 0.

  • Case 3: x_n = 0. This case can be proven analogously to case 2, but in this case we need restriction (3.4).

  • Case 4: x′ = 0. Consider the terms of Ax as in (3.9). The second term of each entry of Ax is zero, since x′ = 0. The product x^T A x = a x_1^2 + d x_n^2 + 2 s x_1 x_n = 0 and all terms of x^T A x are nonnegative. Therefore, all terms of x^T A x are zero. If a x_1^2 = 0, then x_1 = 0 or a = 0. If a = 0, then b = 0; see Theorem 4. Furthermore, if a and b are zero and we substitute x = e_1 in restriction (3.4), then also s = 0. So a = b = s = 0 or x_1 = 0, which shows that the first term of each entry of Ax is zero. If d x_n^2 = 0, then we can show in a similar way that the last term of each entry of Ax is zero. Hence, all terms of each entry of Ax are zero, so Ax = 0.

  • Case 5: x_1, x_n, x′ ≠ 0 and there is an i such that x_i = 0. Let

        X = {x = (x_1, x′^T, x_n)^T ∈ ℝ^n_+ | x_1, x_n, x′ ≠ 0, ∃ i s.t. x_i = 0, x^T A x = 0}.

We will show with an iterative process that if x ∈ X, then Ax = 0. For this, we introduce the set X_left; at the start of the process X_left = X. We will show for one vector x at a time that Ax = 0. If we have shown for a vector x ∈ X_left that Ax = 0, then we remove all vectors βx with β ≥ 0 from X_left. We will continue this process until X_left is empty, and then we have proved that Ax = 0 for all vectors x of X. For all vectors x ≥ 0 with x ∉ X_left and x^T A x = 0 we have already proven that Ax = 0 in a previous case or in a previous step of this case. So if a vector x ≥ 0 with x^T A x = 0 is not in X_left, then Ax = 0.

If X_left is not empty, then we take a vector x ∈ X_left for which there is no vector w ∈ X_left with supp(w) ⊂ supp(x), where supp(x) = {i | x_i ≠ 0}.

For all i with x_i = 0, we delete the ith row and ith column of A. The remaining matrix Ã has at least dimension 3, since x_1 ≠ 0, x_n ≠ 0, and x′ has at least one nonzero element. Consider the matrix

    Ã = [ a   b̃^T  s ]
        [ b̃   Ã′   c̃ ]
        [ s   c̃^T  d ].

Further, let Ã_u denote the principal submatrix of Ã which consists of all the rows and columns of Ã except the last row and column. Let Ã_l denote the principal submatrix of Ã which consists of all the rows and columns of Ã except the first row and column. Finally, x̃ is the vector which we get if we remove all zeros from x. Likewise, if we go back from x̃ to x, we add zeros to x̃.

Note that Ã is not partial strictly copositive. This can be shown by contradiction: if Ã were partial strictly copositive, then Ã would be strictly copositive since s ≥ √(ad). This is not true, because there exists an x̃ ≠ 0 with x̃^T Ã x̃ = 0. So Ã is not partial strictly copositive and therefore one of the matrices Ã_u or Ã_l is not strictly copositive. Assume without loss of generality that Ã_u is not strictly copositive. So there is a vector p̃ ≥ 0 with p̃ ≠ 0 such that p̃^T Ã_u p̃ = 0. If α = min_{i∈supp(p̃)} (x̃_i / p̃_i), then αp̃ = (αp̃_1, αp̃′^T)^T ≤ (x_1, x̃′^T)^T and there is an i such that αp̃_i = x̃_i. Consider the vector ṽ = (v_1, ṽ′^T, v_n)^T defined by

    (x_1, x̃′^T, x_n)^T = (αp̃_1, αp̃′^T, 0)^T + (v_1, ṽ′^T, v_n)^T.

Due to the choice of α, the vector ṽ has at least one zero. Therefore, supp(ṽ) ⊂ supp(x̃) and also supp(v) ⊂ supp(x). Remember that we have chosen x in such a way that there is no vector w ∈ X_left with supp(w) ⊂ supp(x). So v is not in X_left.

We can rewrite the product Ãx̃ as follows:

    Ãx̃ = Ã (αp̃^T, 0)^T + Ãṽ.    (3.10)

The matrix Ã is copositive, because it is a principal submatrix of A. Further, the vector x̃ is strictly positive. The product x̃^T Ã x̃ = x^T A x = 0, so from Theorem 30 it follows that Ãx̃ = 0. Further,

    p̃^T Ã_u p̃ = 0 ⇒ p^T A_u p = 0
                  ⇒ A_u p = 0            (A_u ∈ C^+)
                  ⇒ A (p^T, 0)^T = 0     (see (3.4))    (3.11)
                  ⇒ Ã (p̃^T, 0)^T = 0.

So Ãṽ = 0, since Ãx̃ = 0 and Ã(αp̃^T, 0)^T = 0; see (3.10). We have that

    Ãṽ = 0 ⇒ ṽ^T Ã ṽ = 0 ⇒ v^T A v = 0 ⇒ Av = 0    (v ∉ X_left).    (3.12)

Finally,

    Ax = A (αp^T, 0)^T + Av = 0,

since A(αp^T, 0)^T = 0 and Av = 0; see (3.11) and (3.12).

Remove all vectors βx with β ≥ 0 from X_left and repeat this process until X_left is empty.

The restrictions (3.4) and (3.5) are necessary. For example, if (3.4) is not satisfied, then there is a vector x_u such that A_u x_u = 0 and [s c^T] x_u ≠ 0. The vector x = (x_u^T, 0)^T gives x^T A x = x_u^T A_u x_u = 0, but (Ax)_n = [s c^T] x_u ≠ 0. So (3.4) is necessary. It can be shown analogously that (3.5) is necessary. So matrices which cannot satisfy both (3.4) and (3.5) do not have a copositive plus completion. An example of this is the matrix

    A = [  1   −1   −1    s   ]
        [ −1    1    1  −0.5  ]
        [ −1    1    1    0   ]
        [  s  −0.5   0    1   ].

This matrix is partial copositive plus, because all fully specified principal submatrices are positive semidefinite. Take x_1 = (1, 1, 0, 0)^T and x_2 = (1, 0, 1, 0)^T; then x_1^T A x_1 and x_2^T A x_2 are both zero. Further, Ax_1 is zero if and only if s − 0.5 = 0 and Ax_2 is zero if and only if s = 0. So we cannot find a value for s for which (3.4) is satisfied. So this matrix does not have a copositive plus completion.

In [10] it is mentioned that the restriction s ≥ √(ad) is not always necessary; there are examples for which we can choose s smaller. For examples we refer to [10]. However, the restriction s ≥ −√(ad) is necessary for copositivity; see Theorem 3. Given a partial copositive plus matrix A, let S_A be the set of all possible values for s such that (3.4) and (3.5) are satisfied. We can have the following cases:

  • There is a value s ∈ S_A such that s ≥ √(ad) ⇒ the matrix A has a copositive plus completion (see Theorem 29).

  • The set S_A is empty or all s ∈ S_A satisfy s ≤ −√(ad) ⇒ the matrix A does not have a copositive plus completion (see Theorem 3).

  • All s ∈ S_A satisfy s ≤ √(ad) and there is at least one s ∈ S_A such that s ≥ −√(ad) ⇒ it is possible that the matrix A has a copositive plus completion.
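For a concrete matrix, the admissible values S_A can be extracted from kernel vectors. The sketch below (our own, using SciPy's null_space) derives the unique candidate s = −2 for the matrix (3.2); since Theorem 28 would require s ≥ √(ad) = 1, no copositive plus completion exists, matching the second case above:

    import numpy as np
    from scipy.linalg import null_space

    # Data of the partial matrix (3.2): a = d = 1, b = (-1), A' = (1), c = (2).
    a, d = 1.0, 1.0
    b, c = np.array([-1.0]), np.array([2.0])
    Aprime = np.array([[1.0]])

    A_u = np.block([[np.array([[a]]), b[None, :]],
                    [b[:, None], Aprime]])     # upper-left block [a b^T; b A']

    x = null_space(A_u)[:, 0]                  # spanned by (1, 1)/sqrt(2) here
    # Restriction (3.4) demands s*x_1 + c^T x' = 0 for this kernel vector:
    s = -(c @ x[1:]) / x[0]
    print(s)   # -2.0, incompatible with s >= sqrt(a*d) = 1: no completion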

An example of the second case is matrix (3.2). The following matrix is an example of the third case:

    A = [  1 −1  s ]
        [ −1  1  1 ]
        [  s  1  1 ].

This matrix is partial copositive plus, because the upper left 2 × 2 submatrix is positive semidefinite and the lower right 2 × 2 submatrix is positive. From (3.4) and (3.5) it follows that s = −1. If s = −1, then the matrix A is copositive plus; see Theorem 25. So this is an example of a partial copositive plus matrix for which we cannot see with the theorem that it has a copositive plus completion. Another example of the third case is the following matrix:

    A = [  1 −1  −1   s ]
        [ −1  1   1   1 ]
        [ −1  1  1.1  0 ]
        [  s  1   0   1 ].

This matrix is partial copositive plus, because the upper left 3 × 3 submatrix is positive semidefinite and the lower right 3 × 3 submatrix is flatly nonnegative. From restrictions (3.4) and (3.5) it follows that s must be −1. If s = −1, then A is not copositive, since the quadratic product of A and the vector e − e_2 is −9/10. So this is an example which shows that we cannot replace the restriction s ≥ √(ad) with s ≥ −√(ad).

Theorem 31. If A is a partial copositive plus matrix of the form (3.1), then

  • the matrices for which some s satisfies all conditions of Theorem 29 certainly have a copositive plus completion;

  • further, the matrices for which no s ≥ −√(ad) satisfies both restrictions (3.4) and (3.5) cannot be completed to a copositive plus matrix.

For the other partial copositive plus matrices we cannot say whether the matrix has a copositive plus completion.

So far we only considered partial matrices with one pair of unspecified non-diagonal entries. If a partial (strictly) copositive matrix A has more unspecified non-diagonal entries, then we can still complete it to a (strictly) copositive matrix. This can be done by applying the following rule repeatedly: "fill in s in place A_pq and A_qp, where s ≥ √(A_pp A_qq)"; see [10]. We obtain the following theorem.

Theorem 32 ([10]). If A is a partial (strictly) copositive n × n matrix with all diagonal elements specified, then there is a completion of A which is (strictly) copositive.

For partial copositive plus matrices with multiple pairs of unspecified non-diagonal elements it is difficult to say whether a copositive plus completion exists, since the restriction s ≥ √(ad) is not necessary. Therefore, the copositive plus completion problem with multiple pairs of unspecified non-diagonal elements is still open for further research.
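The repeated rule behind Theorem 32 is mechanical to apply. A sketch (naming ours), assuming a partial copositive input with the listed off-diagonal pairs unspecified:

    import numpy as np

    def complete_copositive(A, unspecified_pairs):
        """Repeatedly apply the rule behind Theorem 32: fill each
        unspecified pair (p, q) with s = sqrt(A[p, p] * A[q, q])."""
        A = A.copy()
        for p, q in unspecified_pairs:
            s = np.sqrt(A[p, p] * A[q, q])
            A[p, q] = A[q, p] = s
        return A

    A = np.array([[1.0, 0.0, -2.0],    # entries (0,1) and (1,2) are
                  [0.0, 4.0, 0.0],     # treated as unspecified here
                  [-2.0, 0.0, 9.0]])
    print(complete_copositive(A, [(0, 1), (1, 2)]))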

3.2 Unspecified Diagonal Entries

In this section we consider partial matrices in which all non-diagonal entries are specified. We first assume that only one diagonal entry is unspecified; without loss of generality the diagonal entry in the upper left corner is unspecified. We consider the partial matrix

    A = [ a_11  b^T ]
        [ b     A′  ].    (3.13)

Here A is a matrix in ℝ^{n×n}, A′ is a matrix in ℝ^{(n−1)×(n−1)}, b is a vector in ℝ^{n−1}, and a_11 ∈ ℝ is an unspecified entry. In [9], L. Hogben proposed some theorems about the copositive completion problem with unspecified diagonal entries. It turns out that for partial strictly copositive matrices of the form (3.13), there is always a strictly copositive completion.

Theorem 33 ([9]). Let A be a partial strictly copositive matrix of the form (3.13) and let

    β = min_{y ∈ S^{n−2}_+} b^T y    and    γ = min_{y ∈ S^{n−2}_+} y^T A′ y.

Any value of a_11 > β^2/γ completes A to a strictly copositive matrix.
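β and γ are minima over the compact set S^{n−2}_+, so they can be approximated by sampling. The sketch below (our own construction) does this; note that sampling can only overestimate a minimum, so the resulting threshold for a_11 is a heuristic and a safety margin is advisable:

    import numpy as np

    def estimate_beta_gamma(b, Aprime, samples=100000, seed=2):
        """Sampled estimates of beta = min b^T y and gamma = min y^T A' y
        over {y >= 0, ||y|| = 1}.  Sampling overestimates both minima, so
        the derived threshold for a_11 is heuristic only."""
        rng = np.random.default_rng(seed)
        Y = rng.random((samples, len(b)))
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)
        beta = float(np.min(Y @ b))
        gamma = float(np.min(np.einsum('ij,jk,ik->i', Y, Aprime, Y)))
        return beta, gamma

    b = np.array([-1.0, 0.5])
    Aprime = np.array([[2.0, 0.0], [0.0, 3.0]])   # strictly copositive block
    beta, gamma = estimate_beta_gamma(b, Aprime)
    a11 = beta**2 / gamma + 0.1                   # margin on a11 > beta^2/gamma
    print(beta, gamma, a11)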

However, not all partial copositive matrices of the form (3.13) have a copositive completion. An example for this is given in [9]. Consider the following matrix:

    A = [  ?  −1 ]
        [ −1   0 ].    (3.14)

This matrix is partial copositive, but it does not have a copositive completion. Note that this matrix is also partial copositive plus, so this example also shows that not all partial copositive plus matrices of the form (3.13) have a copositive plus completion. The matrix (3.14) cannot have a copositive plus completion, because it does not satisfy Theorem 4. A condition which prevents this, and which is necessary for partial copositive plus matrices of the form (3.13) to have a copositive plus completion, is the following:

    "For all y ≥ 0: A′y = 0 ⇒ b^T y = 0."    (3.15)

That this condition is necessary can easily be proven. Take a vector y ≥ 0 with A′y = 0 and b^T y ≠ 0. If v^T = (0, y^T) ≥ 0, then v^T A v = y^T A′ y = 0 and (Av)^T = (b^T y, (A′y)^T) = (b^T y, 0) ≠ 0. So A cannot be completed to a copositive plus matrix if there exists a vector y ≥ 0 which does not satisfy condition (3.15).

If we change the choice of β and γ a little, then we can adapt Theorem 33 such that it also holds for copositive plus matrices. For this, let

    Z = {y ∈ S^{n−2}_+ | supp(x) ⊄ supp(y) ∀ x ∈ ker(A′) ∩ S^{n−2}_+}.    (3.16)

Recall that supp(x) = {i | x_i ≠ 0}. Let

    β̃ = min_{y∈Z} b^T y    and    γ̃ = min_{y∈Z} y^T A′ y.

Note that, if A′ is a nonzero matrix, then β̃ and γ̃ are well-defined; see Theorem 9. Further, γ̃ is strictly positive, because the quadratic product of A′ is nonnegative and we consider only vectors in Z. So if A′ is not the zero matrix, then β̃^2/γ̃ is well defined and nonnegative.

Theorem 34. Let A be a partial copositive plus matrix of the form (3.13) which satisfies condition (3.15). If A′ is a nonzero matrix, then any value of

    a_11 > β̃^2/γ̃ ≥ 0    (3.17)

completes A to a copositive plus matrix. Furthermore, if A′ is the zero matrix, then any value of a_11 ≥ 0 completes A to a copositive plus matrix.

Proof. Assume that A′ is a nonzero matrix and let

    a_11 > β̃^2/γ̃ ≥ 0.

First we will show that A is copositive. Consider the vector v ∈ ℝ^n with v^T = (z, y^T) ≥ 0, where z ∈ ℝ and y ∈ ℝ^{n−1}.

If y = 0, then v^T A v = a_11 z^2 + 2 b^T y z + y^T A′ y = a_11 z^2 ≥ 0. Note that a_11 is strictly positive and therefore equality only occurs when z = 0. If y ≠ 0, then we rescale the vector v^T to ṽ^T = (z̃, ỹ^T) such that ‖ỹ‖ = 1. We get

    ṽ = v / ‖y‖.

The quadratic product v^T A v = ‖y‖^2 ṽ^T A ṽ. So ṽ^T A ṽ is nonnegative if and only if v^T A v is nonnegative. Furthermore, ṽ^T A ṽ is zero if and only if v^T A v is zero. So it is enough to show that ṽ^T A ṽ is nonnegative. We split the problem into three cases:

  • Case 1: ỹ ∈ ker(A′). If ỹ ∈ ker(A′), then A′ỹ = 0 and b^T ỹ = 0; see condition (3.15). Therefore, ṽ^T A ṽ = a_11 z̃^2 + 2 b^T ỹ z̃ + ỹ^T A′ ỹ = a_11 z̃^2 ≥ 0. Note that equality only occurs when z̃ = 0.

  • Case 2: ỹ ∈ Z. The quadratic product

        ṽ^T A ṽ = a_11 z̃^2 + 2 b^T ỹ z̃ + ỹ^T A′ ỹ
                > (β̃^2/γ̃) z̃^2 + 2 β̃ z̃ + γ̃
                = (1/γ̃) (β̃ z̃ + γ̃)^2 ≥ 0.

    So for this case, the quadratic product is strictly positive.

  • Case 3: ỹ ∉ ker(A′) and ỹ ∉ Z. If ỹ ∉ Z, then there is an x ∈ ker(A′) ∩ S^{n−2}_+ such that supp(x) ⊆ supp(ỹ). Further, if

        α = min_{i∈supp(x)} (ỹ_i / x_i) > 0,

    then αx ≤ ỹ and there is an i ∈ supp(x) such that αx_i = ỹ_i. Let p = ỹ − αx, so ỹ = p + αx. The vector p is nonnegative and p is nonzero, since ỹ is not in the kernel of A′. The quadratic product

        ṽ^T A ṽ = a_11 z̃^2 + 2 b^T ỹ z̃ + ỹ^T A′ ỹ
                = a_11 z̃^2 + 2 b^T (p + αx) z̃ + (p + αx)^T A′ (p + αx)
                = a_11 z̃^2 + 2 b^T p z̃ + 2α b^T x z̃ + p^T A′ p + α^2 x^T A′ x + 2α p^T A′ x
                = a_11 z̃^2 + 2 b^T p z̃ + p^T A′ p
                = (z̃, p^T) A (z̃, p^T)^T,

    where the third step uses A′x = 0 and, by condition (3.15), b^T x = 0. So the quadratic product of A and ṽ is equal to the quadratic product of A and (z̃, p^T). We rescale the vector (z̃, p^T) to v̂^T = (ẑ, p̂^T) such that ‖p̂‖ = 1, that is,

        v̂^T = (z̃, p^T) / ‖p‖.

    If p̂ ∈ Z, then we can continue the proof as we will describe below. If the vector p̂ ∉ Z, then we can repeat the steps above until there remains a vector which is in Z. We will eventually find a vector p̂ ∈ Z, because due to the choice of α there is an i ∈ supp(x) such that p_i = ỹ_i − αx_i = 0. Therefore, we have that supp(p) ⊂ supp(ỹ). So in each step the support of the remaining vector p̂ becomes smaller and hence there will be a moment when there is no x ∈ ker(A′) ∩ ℝ^{n−1}_+ with x ≠ 0 such that supp(x) is a subset of supp(p̂). When we have found a vector p̂ ∈ Z, it follows from case 2 that v̂^T A v̂ is strictly positive. The quadratic product ṽ^T A ṽ = ‖p‖^2 v̂^T A v̂. So ṽ^T A ṽ is strictly positive.

So ṽ^T A ṽ is nonnegative for all ṽ ≥ 0. Therefore, v^T A v is nonnegative for all v ≥ 0 and hence A is copositive. Furthermore, ṽ ≥ 0 and ṽ^T A ṽ = 0 if and only if z̃ = 0 and ỹ ∈ ker(A′). Therefore, v^T = (z, y^T) ≥ 0 and v^T A v = 0 if and only if z = 0 and y ∈ ker(A′). This together with (3.15) implies that if v^T A v = 0 then

    Av = [ a_11 z + b^T y ] = [ 0 ]
         [ b z + A′ y     ]   [ 0 ].

So A is copositive plus.

If A′ is the zero matrix, then A satisfies condition (3.15) if and only if b = 0. Therefore, it is clear that A is copositive plus for all a_11 ≥ 0.

So far we have considered partial copositive plus matrices with one unspecified diagonal entry. Let us now consider partial copositive plus matrices with more unspecified diagonal entries. Consider the matrix

    A = [ a1,1  a1,2  ...  a1,n  b1^T ]
        [ a2,1  a2,2  ...  a2,n  b2^T ]
        [  ...   ...   ...   ...  ... ]        (3.18)
        [ an,1  an,2  ...  an,n  bn^T ]
        [ b1    b2    ...  bn    A′   ].

Here, the diagonal entries a1,1 till an,n are unspecified and A′ is a fully specified matrix in R^{k×k}. It turns out that every partial strictly copositive matrix of the form (3.18) still has a strictly copositive completion; see [9]. However, not all partial copositive plus matrices of the form (3.18) have a copositive plus completion. A necessary condition for a copositive plus matrix of the form (3.18) is

    "For all y ≥ 0 with A′y = 0, it holds that bj^T y = 0 for all j = 1, ..., n."        (3.19)

That this condition is necessary follows directly from condition (3.15). We have the following theorem for partial copositive plus matrices with multiple unspecified diagonal entries.

Theorem 35. If A is a partial copositive plus matrix of the form (3.18) which satisfies condition (3.19), then A has a copositive plus completion.

Proof. Consider the following principal submatrix of A:

    [ an,n  bn^T ]        (3.20)
    [ bn    A′   ].

This partial submatrix has one unspecified diagonal element and it satisfies (3.15), since A satisfies (3.19). Every value for an,n satisfying condition (3.17) completes the submatrix (3.20) to a copositive plus matrix; see Theorem 34. From the proof of Theorem 34 we know that if an,n satisfies (3.17), then

    ker [ an,n  bn^T ] ∩ R₊^{k+1} = { (0, y^T)^T | y ∈ ker(A′) ∩ R₊^k }.        (3.21)
        [ bn    A′   ]

Consider the next unspecified value a_{n−1,n−1} and the principal submatrix of A

    [ a_{n−1,n−1}  a_{n−1,n}  b_{n−1}^T ]     [ a_{n−1,n−1}  b̃_{n−1}^T ]
    [ a_{n,n−1}    an,n       bn^T      ]  =  [ b̃_{n−1}      Ã′        ].        (3.22)
    [ b_{n−1}      bn         A′        ]

We have already set an,n to a value, so the submatrix (3.20) is fully specified; it is denoted by Ã′. So we again consider a partial submatrix with one unspecified diagonal element. Further, (3.15) is satisfied, since (3.21) holds for the kernel of Ã′ and condition (3.19) holds for A. So again every value for a_{n−1,n−1} satisfying condition (3.17) completes the submatrix (3.22) to a copositive plus matrix; see Theorem 34. We can continue this process, because the nonnegative kernel of every newly fully specified submatrix is again, as in equation (3.21), equal to { (0, ..., 0, y^T)^T | y ∈ ker(A′) ∩ R₊^k }. So each time condition (3.15) holds and we can choose an appropriate value according to (3.17).
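The proof is constructive: the unspecified diagonal entries are filled in one by one, each time with a value above the corresponding bound (3.17). As a loose illustration, the following Python sketch (all names are my own choices, and the single fixed value is an assumption that it exceeds every bound) fills the diagonal and then checks copositivity numerically on random nonnegative vectors; this is only a sanity check, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def complete_diagonal(partial, unspecified, value=1e6):
    """Fill the listed unspecified diagonal positions with one large value.

    By Theorem 35, any value above the bound (3.17) works for each entry;
    here we assume `value` is large enough for all of them.
    """
    A = partial.copy()
    for i in unspecified:
        A[i, i] = value
    return A

def looks_copositive(A, trials=100000):
    """Test v^T A v >= 0 on many random nonnegative vectors v."""
    V = rng.random((trials, A.shape[0]))
    return bool(np.all(np.einsum('ij,jk,ik->i', V, A, V) >= -1e-9))
```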

Chapter 4

Lemke’s Algorithm

In the previous chapters we treated some properties and theorems about copositive plus matrices, but we did not show why copositive plus matrices are interesting. This is the content of the current chapter. Here we will discuss Lemke’s algorithm, for which copositive plus matrices are useful. In the first section we will describe the problem for which Lemke’s algorithm is used, the Linear Complementarity Problem, and in the second section we will describe the algorithm. At the end of the chapter we will treat some applications. Several articles and books about Lemke’s algorithm have been published. This chapter is mainly based on the article of C. E. Lemke [12] and on the book of K. G. Murty [15]. In the last sections, about the applications in game theory and economics, we also made use of the articles [14] and [1].

4.1 The Linear Complementarity Problem

In 1968 Cottle and Dantzig proposed the Linear Complementarity Problem (LCP) in [2]. The LCP is the problem of finding w, z ∈ R^n such that

    Iw − Mz = q,                 (4.1)
    w, z ≥ 0,                    (4.2)
    wj zj = 0 for all j.         (4.3)

Here M is a matrix in R^{n×n} and q is a vector in R^n. In this problem we have 2n decision variables, the variables w1, ..., wn and z1, ..., zn. By (4.2) and (4.3), either wj or zj is zero for each j. The pair (wj, zj) is a complementary pair. We set one variable of each complementary pair to zero. By doing this we make sure that the final constraint is fulfilled. The variables which we have set to zero are the nonbasic variables. There are n nonbasic variables, since there are n complementary pairs. The remaining variables are the basic variables. Note that it is possible to have more than n variables equal to zero, so basic variables can also be equal to zero. For every set of nonbasic variables, we can check whether we can find nonnegative values for the basic variables such that (4.1) is satisfied. If we cannot find such values, then we know that for the chosen set of nonbasic variables there does not exist a solution.


[Figure 4.1: Complementary Cone. Two panels in the (x, y)-plane showing q = (−2, 6): (a) the cone of the basic variables z1 and z2, spanned by (−2, 1) and (−1, 2); (b) the cone of the basic variables w2 and z1, spanned by (0, 1) and (−2, 1).]

Let us consider an example. Let

    M = [  2   1 ]   and   q = [ -2 ]
        [ -1  -2 ]             [  6 ].

The LCP is to find w1, w2, z1, and z2 such that

    w1 [ 1 ] + w2 [ 0 ] + z1 [ -2 ] + z2 [ -1 ] = [ -2 ]
       [ 0 ]      [ 1 ]      [  1 ]      [  2 ]   [  6 ],

w1, w2, z1, z2 ≥ 0, and w1z1 = w2z2 = 0.

In this problem there are four possible sets of nonbasic variables. Let w1 and w2 be the nonbasic variables. For this set of nonbasic variables the problem reduces to: Do there exist nonnegative values for the basic variables z1 and z2 such that

    z1 [ -2 ] + z2 [ -1 ] = [ -2 ] ?
       [  1 ]      [  2 ]   [  6 ]

This is the same question as: Can we write q as a nonnegative linear combination of the vectors (−2, 1)^T and (−1, 2)^T? Geometrically, this is depicted in figure 4.1(a). Here we see the complementary cone of these vectors. The complementary cone of a set of vectors consists of all points which can be written as a nonnegative linear combination of these vectors. If the point q is in this cone, then q is a nonnegative linear combination of these vectors and we have found a solution.

Unfortunately, the point q is not in the cone. So this set of nonbasic variables does not yield a solution. Let us consider another set of nonbasic variables. If w1 and z2 are the nonbasic variables, then the variables w2 and z1 are the basic variables and we can form the complementary cone of the vectors (−2, 1)^T and (0, 1)^T. Figure 4.1(b) shows that the point q is in this cone and hence we can write q as a nonnegative linear combination of these vectors. If we take w1 = 0, w2 = 5, z1 = 1, and z2 = 0, then this is indeed a solution of the LCP.

In this example we considered a set of nonbasic variables and checked whether the point q was in the corresponding complementary cone. This cone is formed with the vectors corre- sponding to the basic variables, where the vector I.,i corresponds to the variable wi and the vector −M.,i corresponds to the variable zi. The pair (I.,i, −M.,i) is the ith complementary pair of vectors. One of these vectors corresponds to a nonbasic variable and the other vector corresponds to a basic variable. The vectors corresponding to a basic variable form the com- plementary cone and these vectors are denoted by C.,i. The vectors C.,i are the basic columns and together they form the basic matrix C. We can now define the complementary cone more precisely. The complementary cone is the set

{y | y = α1C.,1 + ... + αnC.,n, αi ≥ 0 for all i}.

In an LCP we want to find a set of column vectors (C.,1, ..., C.,n) such that

• C.,i ∈ {I.,i, −M.,i} for all i,

• q is a nonnegative linear combination of (C.,1, ..., C.,n).

We can check whether q is a nonnegative linear combination of (C.,1, ..., C.,n) by checking whether q is in the corresponding complementary cone. However, simply checking all the complementary cones only works for low values of n. In our example n = 2 and we have to check at most four complementary cones. In general, there are 2^n complementary cones, so checking them all will take a great amount of time for large n. In the following section we describe Lemke’s algorithm, which checks in a smarter way whether the LCP has a solution.

4.2 Lemke’s Algorithm

An algorithm for solving the LCP is Lemke’s algorithm, which is also called the complementary pivot algorithm. Before describing this algorithm, we will first make some definitions of the previous section more precise and introduce some new notation. The LCP has 2n decision variables. Let the variables w1, ..., wn be the first n variables, so these variables correspond to the set of indices {1, ..., n}. Further, let the variables z1, ..., zn be the last n variables, so these variables correspond to the set of indices {n + 1, ..., 2n}. So if we refer to the ith variable with i ∈ {1, ..., n}, then we mean wi, and if we refer to the (n + i)th variable with i ∈ {1, ..., n}, then we mean zi.

In the previous section we saw that (4.2) and (4.3) imply that at least n decision variables are equal to zero, because at least one variable of each complementary pair has to be zero. From every complementary pair, choose one variable and set it to zero. These variables are the nonbasic variables and we will denote them by xN. Further, the collection of the indices of these variables is the set N. The remaining n variables are the basic variables and we will denote them by xB. The collection of the indices of the basic variables is the set B = {1, ..., 2n} \ N. Note that of the indices i and n + i with i ∈ {1, ..., n}, one is in N and the other is in B. Finally, we define the matrices AB and AN. The basic matrix AB, denoted C in the previous section, is the matrix which consists of the columns corresponding to the basic variables. More precisely, if A = [I, −M], then AB is the matrix with the columns of A corresponding to the set B. A set B ⊆ {1, ..., 2n} is called a basis if the matrix AB is nonsingular. The nonbasic matrix AN is the matrix which consists of the columns of A corresponding to the nonbasic variables, that is, the columns of A corresponding to the set N. For a specific basis B we can rewrite the first equation of the problem as

    Iw − Mz = q  ⇔  AB xB + AN xN = q  ⇔  AB xB = q  ⇔  xB = AB^{-1} q.

If AB^{-1} q ≥ 0, then B is a feasible basis. If B is feasible, then we have found a feasible solution with xB = AB^{-1} q ≥ 0 and xN = 0. Our goal is to find a feasible basis for the LCP if one exists, because a feasible basis yields a solution of the LCP. If there is no feasible basis for the LCP, then the problem does not have a solution.

Let us consider the same example as in the previous section. Can we find w1, w2, z1, and z2 such that

    w1 [ 1 ] + w2 [ 0 ] + z1 [ -2 ] + z2 [ -1 ] = [ -2 ]
       [ 0 ]      [ 1 ]      [  1 ]      [  2 ]   [  6 ],

w1, w2, z1, z2 ≥ 0, and w1 z1 = w2 z2 = 0?

This LCP has four different bases; see the table below.

    N       B       AB            xN                xB
    {1,2}   {3,4}   [ -2  -1 ]    w1 = 0, w2 = 0    z1 = -2/3, z2 = 10/3    not feasible
                    [  1   2 ]
    {1,4}   {2,3}   [  0  -2 ]    w1 = 0, z2 = 0    w2 = 5, z1 = 1          feasible
                    [  1   1 ]
    {2,3}   {1,4}   [  1  -1 ]    w2 = 0, z1 = 0    w1 = 1, z2 = 3          feasible
                    [  0   2 ]
    {3,4}   {1,2}   [  1   0 ]    z1 = 0, z2 = 0    w1 = -2, w2 = 6         not feasible
                    [  0   1 ]

In the table we see that two of these bases are feasible, so these bases correspond to a solution of the LCP.
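The table can be reproduced by brute force. The following sketch in Python (the function name is my own choice) enumerates all 2^n complementary bases, solves AB xB = q for each, and tests feasibility:

```python
import numpy as np
from itertools import product

def enumerate_complementary_bases(M, q):
    """For every complementary basis, solve AB xB = q and test xB >= 0."""
    n = len(q)
    I = np.eye(n)
    for choice in product([0, 1], repeat=n):      # 0: w_i basic, 1: z_i basic
        AB = np.column_stack([-M[:, i] if c else I[:, i]
                              for i, c in enumerate(choice)])
        try:
            xB = np.linalg.solve(AB, q)
        except np.linalg.LinAlgError:
            continue                              # singular AB: not a basis
        print(choice, xB, "feasible" if np.all(xB >= 0) else "not feasible")

M = np.array([[2.0, 1.0], [-1.0, -2.0]])
q = np.array([-2.0, 6.0])
enumerate_complementary_bases(M, q)               # reproduces the four rows
```

For n = 2 this is only four linear systems, but the count doubles with every extra complementary pair, which is exactly why this enumeration does not scale.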

We consider a modified LCP which is used in Lemke’s algorithm. To construct the modified problem, we introduce an artificial variable z0 ∈ R with a corresponding column e. Consider the following problems:

• The Original Problem:
  Find w, z ∈ R^n such that

      Iw − Mz = q,
      w, z ≥ 0,
      wj zj = 0 for all j.

• The Modified Problem:
  Find w, z ∈ R^n and z0 ∈ R such that

      Iw − Mz − e z0 = q,
      w, z, z0 ≥ 0,
      wj zj = 0 for all j.

Note that the modified problem has 2n + 1 decision variables. Let z0 be the (2n + 1)th variable, so its index is 2n + 1. For this problem the basic variables xB will still consist of n variables. However, the nonbasic variables xN will consist of n + 1 variables. This modified LCP always has a feasible basis and solution. We can see this geometrically: consider the complementary cones which correspond to n vectors out of the set {e1, ..., en, −e}. These complementary cones span the whole space, so q has to be in at least one of them. We can construct a feasible basis of the modified problem as follows:

• If q is nonnegative, then q is a nonnegative linear combination of the columns of I. The feasible basis is B = {1, ..., n} and the basic matrix is AB = I. The feasible solution is wi = qi and zi = 0 for all i. Note that this solution is also a solution of the original problem. So if q is nonnegative, then the original LCP is solvable and we can find the solution as just described. This case is therefore not interesting, so from now on we assume that q has at least one negative entry.

• If the vector q has at least one negative entry, then for constructing a feasible basis we consider the entry of q with the lowest value. Call this entry qj, so qj ≤ qi for all i. The feasible basis is B = {1, ..., j − 1, 2n + 1, j + 1, ..., n} and the basic matrix is AB = [e1, ..., e_{j−1}, −e, e_{j+1}, ..., en]. The feasible solution corresponding to this basis is z0 = −qj, zi = 0, and wi = qi − qj for all i ≠ j.

Lemke’s algorithm uses the fact that a feasible basis of the original problem is also a feasible basis of the modified problem. So instead of checking all bases of the original problem, Lemke’s algorithm checks the feasible bases of the modified problem. We will first describe the basic idea of the algorithm and then describe the algorithm in more detail.

The main idea of Lemke’s algorithm: We start with a constructed feasible basis of the modified problem. From this, we move iteratively from the current feasible basis to a new feasible basis in the neighborhood. A new feasible basis is in the neighborhood of the current feasible basis if they differ in exactly one element. So in every iteration one variable leaves the basis and one variable enters the basis. The entering variable is decided with the complementary pivot rule. This rule is specific for Lemke’s algorithm and it will be explained later in this section. The leaving variable is decided with the minimum ratio test, and due to this rule the new basis is still feasible for the modified problem. The algorithm can terminate in two ways. If after an iteration z0 = 0, then it terminates with a solution. Or the algorithm can terminate with ray-termination, which in general means that the algorithm cannot solve the problem and we still do not know whether the LCP has a solution. Later we will see that if M satisfies certain conditions, ray-termination means that the LCP does not have a solution. We will now describe the algorithm more precisely.

Lemke’s Algorithm

• Input: A matrix M and a vector q.

• Output: A solution to the LCP or the statement ray-termination.

• Initial step: Introduce the artificial variable z0 and construct a feasible basis for the modified problem. If qj ≤ qi for all i, then z0 and all wi with i ∈ {1, ..., n} \ {j} are the basic variables.
  Basis B = {1, ..., j − 1, 2n + 1, j + 1, ..., n}.
  Basic matrix AB = [e1, ..., e_{j−1}, −e, e_{j+1}, ..., en].
  Basic variables xB = (w1, ..., w_{j−1}, z0, w_{j+1}, ..., wn)^T = AB^{-1} q.
  Total matrix A = [I, −M, −e].
  Index of the last leaving variable l = j.

• Step 1, Determine the entering variable: According to the complementary pivot rule, the entering variable is the complement of the last leaving variable. The last leaving variable is the lth variable, so if l ≤ n, then the (n + l)th variable will enter the basis, and if l > n, then the (l − n)th variable will enter the basis. Call the index of this entering variable h.

• Step 2, Compute w: w = AB^{-1} A.h. If w ≤ 0, then the algorithm stops here with ray-termination. Otherwise, go to step 3.

• Step 3, Determine the leaving variable: We use the minimum ratio test. Compute

      γ = min_k { (AB^{-1} q)_k / w_k | w_k > 0 }.        (4.4)

  Let the variable for which (4.4) attains its minimum be the ith variable of the basis. Then the ith element of B is the index of the leaving variable; denote this index by g. So the gth variable will leave the basis.

• Step 4, Update the data:
  Basis: B_i ← h.
  Basic matrix: (AB)_i ← A.h.
  Basic variables: xB ← xB − γw.
  Entering basic variable: x_h ← γ.
  Index of the last leaving variable: l ← g.

After updating the data, check whether z0 equals zero. If z0 = 0, then the algorithm terminates with a solution. Otherwise, go to step 1.
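The steps above translate almost directly into code. Below is a minimal sketch in Python with NumPy; the function name and the tolerance are my own choices, the basis matrix is refactorized in every iteration instead of being updated, and the plain minimum ratio test is used, so degenerate problems may cycle (see section 4.3.2).

```python
import numpy as np

def lemke(M, q, max_iter=1000, tol=1e-12):
    """Solve w - M z = q, w, z >= 0, w_j z_j = 0.

    Returns (w, z), or None on ray-termination. Columns 0..n-1 of
    A = [I, -M, -e] belong to w, columns n..2n-1 to z, column 2n to z0.
    """
    n = len(q)
    if np.all(q >= 0):                        # trivial case: w = q, z = 0
        return q.astype(float), np.zeros(n)
    A = np.hstack([np.eye(n), -M, -np.ones((n, 1))])
    basis = list(range(n))                    # start from the all-w basis ...
    j = int(np.argmin(q))
    basis[j] = 2 * n                          # ... and let z0 replace w_j
    l = j                                     # index of the last leaving variable
    for _ in range(max_iter):
        h = l + n if l < n else l - n         # complementary pivot rule
        AB = A[:, basis]
        w_dir = np.linalg.solve(AB, A[:, h])
        if np.all(w_dir <= tol):
            return None                       # ray-termination
        xB = np.linalg.solve(AB, q)
        # minimum ratio test over the rows with positive pivot entry
        gamma, i = min((xB[k] / w_dir[k], k) for k in range(n) if w_dir[k] > tol)
        l, basis[i] = basis[i], h             # pivot: h enters, basis[i] leaves
        if l == 2 * n:                        # z0 left the basis: solution found
            xB = np.linalg.solve(A[:, basis], q)
            w, z = np.zeros(n), np.zeros(n)
            for idx, val in zip(basis, xB):
                if idx < n:
                    w[idx] = val
                else:
                    z[idx - n] = val
            return w, z
    raise RuntimeError("iteration limit reached (possible cycling)")
```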

Let us consider again the example from earlier in this chapter. For solving this problem we will use tableaus, because it is easier to do the calculations that way. In a tableau we can see the current basic variables and their corresponding values, as well as the pivot matrix AB^{-1}[I, −M, −e]. The standard form of a tableau is:

    basic variables |          x          |
    xB              | AB^{-1}[I, -M, -e]  | AB^{-1} q

We assume that the use of tableaus is known, because they are also frequently used for the Simplex method. If the use of tableaus is not known, we refer to chapter 2 of [15]. The tableau of the LCP of our example is as follows; here we have already introduced the variable z0.

    basic variables | w1   w2   z1   z2   z0 |  q
    w1              |  1    0   -2   -1   -1 | -2
    w2              |  0    1    1    2   -1 |  6

The lowest value of q is q1, so w1 will leave the basis and z0 will enter the basis. A feasible basis of the modified problem consists of the basic variables z0 and w2; see the first column in the table below, and the second-to-last column for their values.

    basic variables | w1   w2   z1   z2   z0 |  q | ratio
    z0              | -1    0    2    1    1 |  2 | 2/2  Min
    w2              | -1    1    3    3    0 |  8 | 8/3

Variable w1 has left the basis, so the entering variable in the next step is z1. The corresponding column of the pivot matrix is not nonpositive, so we can do the minimum ratio test; see the last column in the table. From this it follows that in the next step z0 will leave the basis.

    basic variables | w1    w2   z1   z2    z0  |  q
    z1              | -1/2   0    1   1/2   1/2 |  1
    w2              |  1/2   1    0   3/2  -3/2 |  5

Variable z0 has left the basis, so the current basis is also feasible for the original problem. From the last column in the table, we see that a solution of this LCP is w1 = 0, w2 = 5, z1 = 1, and z2 = 0.
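Running the lemke sketch given after the algorithm description on this example reproduces the tableau computation (this assumes that sketch is in scope):

```python
import numpy as np

M = np.array([[2.0, 1.0], [-1.0, -2.0]])
q = np.array([-2.0, 6.0])
w, z = lemke(M, q)
print(w, z)    # -> [0. 5.] [1. 0.]: w2 = 5, z1 = 1, as in the tableaus
```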

4.3 Termination and Correctness

We saw that the algorithm can terminate in two ways. The first way is when z0 becomes zero, because then we have found a feasible basis and hence a solution for the LCP. The other way is when the pivot column of the entering variable is nonpositive; then ray-termination occurs and the algorithm cannot solve the LCP. In this case we cannot say whether the LCP has a solution. An important question is: Does the algorithm always terminate? Or is it possible that it cycles and never terminates? To answer these questions we first introduce some new definitions. A basis is nondegenerate if xB = AB^{-1} q is strictly positive. Furthermore, a basis is degenerate if xB has at least one zero entry. Problems with only nondegenerate bases are nondegenerate problems; otherwise a problem is degenerate. Another term which we introduce is the almost feasible basic solution (afbs); these are solutions of the modified problem of the form xB = (y1, ..., y_{j−1}, z0, y_{j+1}, ..., yn) where yi ∈ {wi, zi}. In Lemke’s algorithm we go from one almost feasible basis to another almost feasible basis until the algorithm terminates. We will first prove termination for nondegenerate problems.

4.3.1 Termination for Nondegenerate Problems

For nondegenerate problems we have three properties:

• Consider an afbs with xB = (y1, ..., y_{j−1}, z0, y_{j+1}, ..., yn) and yi ∈ {wi, zi}. Except for the initial and the terminal almost feasible bases, every almost feasible basis has two almost feasible neighbors: one neighbor is obtained when wj enters the basis and the other when zj enters the basis. These neighbors are both unique, because the minimum ratio test uniquely determines the leaving variable, since the problem is nondegenerate.

• Consider an arbitrary point in the algorithm; we are at an afbs. In the previous step we were at one almost feasible neighbor of this afbs, and in the next step we move to the other almost feasible neighbor. We cannot move back to the almost feasible neighbor of the previous step, because we use the complementary pivot rule.

• The number of almost feasible basic solutions is finite.

These three properties together guarantee that the algorithm terminates in a finite number of steps. We will explain this with the following picture.

[Figure 4.2: Termination. A chain of dots: the infeasible solution (w1, ..., wn), then the almost feasible solutions, starting at (w1, ..., w_{j−1}, z0, w_{j+1}, ..., wn) and ending in a termination point.]

In this picture every dot is an afbs. We start with the infeasible basis (w1, ..., wn) and go, as described before, to an afbs which differs in exactly one element. This current afbs has two almost feasible neighbors; we have just moved away from one, and due to the complementary pivot rule we move to the other. We continue this process and go from afbs to afbs. During this process we can never visit an afbs twice, because an afbs cannot have a third neighbor. So we go from afbs to afbs until we find a termination point. We will reach a termination point in finitely many steps, because the number of almost feasible basic solutions is finite and we visit every afbs at most once. Hence, the algorithm terminates in a finite number of steps.

4.3.2 Termination for Degenerate Problems

For degenerate problems Lemke’s algorithm does not always terminate, because sometimes cycling occurs. Here is an example for which Lemke’s algorithm cycles. This example was constructed by M. M. Kostreva [11].

Consider the following initial tableau of an LCP with infeasible basis B = {1, 2, 3}.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q
    w1              |  1    0    0   -1   -2    0   -1 | -1
    w2              |  0    1    0    0   -1   -2   -1 | -1
    w3              |  0    0    1   -2    0   -1   -1 | -1

An almost feasible basis for this LCP is obtained if w1 leaves and z0 enters the basis. We obtain B = {7, 2, 3}; see the following tableau.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z0              | -1    0    0    1    2    0    1 |  1 | 1/1
    w2              | -1    1    0    1    1   -2    0 |  0 | 0/1  Min
    w3              | -1    0    1   -1    2   -1    0 |  0 |  -

The variable w1 has just left the basis, so z1 will enter the basis in the next tableau. With the minimum ratio test we see that w2 will leave the basis.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z0              |  0   -1    0    0    1    2    1 |  1 | 1/1
    z1              | -1    1    0    1    1   -2    0 |  0 | 0/1  Min
    w3              | -2    1    1    0    3   -3    0 |  0 | 0/3  Min

In the next step z2 will enter the basis. However, the leaving variable is not uniquely determined, because there are two minimum values in the minimum ratio test. We can choose either z1 or w3 as the leaving variable. Let z1 leave the basis.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z0              |  1   -2    0   -1    0    4    1 |  1 | 1/1
    z2              | -1    1    0    1    1   -2    0 |  0 |  -
    w3              |  1   -2    1   -3    0    3    0 |  0 | 0/1  Min

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z0              |  0    0   -1    2    0    1    1 |  1 | 1/1
    z2              |  0   -1    1   -2    1    1    0 |  0 | 0/1  Min
    w1              |  1   -2    1   -3    0    3    0 |  0 | 0/3  Min

Also here the minimum ratio test does not select a unique leaving variable. Let z2 leave the basis.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z0              |  0    1   -2    4   -1    0    1 |  1 | 1/1
    z3              |  0   -1    1   -2    1    1    0 |  0 |  -
    w1              |  1    1   -2    3   -3    0    0 |  0 | 0/1  Min

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z0              | -1    0    0    1    2    0    1 |  1 | 1/1
    z3              |  1    0   -1    1   -2    1    0 |  0 | 0/1  Min
    w2              |  1    1   -2    3   -3    0    0 |  0 | 0/3  Min

Let z3 leave the basis.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z0              | -2    0    1    0    4   -1    1 |  1 | 1/1
    z1              |  1    0   -1    1   -2    1    0 |  0 |  -
    w2              | -2    1    1    0    3   -3    0 |  0 | 0/1  Min

In the next step w2 will leave the basis and w3 will enter the basis. Note that we get the same basis as in the third tableau. We can repeat the previous steps over and over again, the algorithm cycles.

So Lemke’s algorithm does not always terminate for degenerate problems. Cycling can occur because the minimum ratio test does not always give a unique leaving variable. In some steps of the example, two variables attain the minimum value in the ratio test, so these variables will both be equal to zero in the next basis. However, it is not uniquely determined which of these variables becomes a nonbasic variable. There is a method to prevent Lemke’s algorithm from cycling: instead of the minimum ratio test we use the Lexico minimum ratio test, which is also used to prevent the Simplex method from cycling.

The Lexico Minimum Ratio Test. A vector x ∈ R^n is said to be Lexico positive if x ≠ 0 and the first nonzero element of x is strictly positive. This is denoted by x ≻ 0. Likewise, if −x ≻ 0 then we say that x is Lexico negative, that is, x ≺ 0. Given two vectors x ∈ R^n and y ∈ R^n, we have x ≻ y if and only if x − y ≻ 0. If x ≻ y then we say that the vector x is Lexico bigger than y. To determine which of two vectors is the Lexico minimum, we look at the first element in which x and y differ; the one for which this element is smallest is the Lexico minimum. We can extend this definition of Lexico minimum to an arbitrary set of vectors. Finally, a basis AB is said to be Lexico feasible if the basis is feasible and every row of (AB^{-1} q, AB^{-1}) is Lexico positive. Recall that the ratio test selects the variable with the lowest ratio γ; see equation (4.4). The Lexico minimum ratio test selects the variable which corresponds to the Lexico minimum in

    { ( (AB^{-1}q)_k / w_k, (AB^{-1})_{k1} / w_k, (AB^{-1})_{k2} / w_k, ..., (AB^{-1})_{kn} / w_k ) | w_k > 0 },        (4.5)

where w = AB^{-1} A.h. The values used in equation (4.5) can also be read off from the tableau. For an LCP, the tableau has the following form; the values of the last column, of the second till (n+1)th columns, and of the (h+1)th column are used in equation (4.5).

    basic variables |    w     |      z      |    z0      |
    xB              | AB^{-1}  | -AB^{-1} M  | -AB^{-1} e | AB^{-1} q

If the initial basis is Lexico feasible, the Lexico minimum ratio test guarantees that the Lexico feasibility of the bases is maintained. Further, every row of (4.5) is unique, because

AB^{-1} has full rank. Therefore, a tie in the Lexico minimum ratio test cannot occur and the Lexico minimum ratio test always selects a unique leaving variable. So, except for the initial Lexico feasible basis and the Lexico feasible termination basis, every Lexico feasible basis has exactly two Lexico feasible neighbors. Now we can argue termination for degenerate problems in the same way as for nondegenerate problems. When the Lexico minimum ratio test is used we again have three properties:

• Consider an almost Lexico feasible basic solution (alfbs) with xB = (y1, ..., y_{j−1}, z0, y_{j+1}, ..., yn)^T and yi ∈ {wi, zi}. This alfbs has two Lexico feasible neighbors: one neighbor is obtained when wj enters the basis and the other when zj enters the basis. These neighbors are both unique, because the Lexico minimum ratio test uniquely determines the leaving variable.

• Consider an arbitrary point in the algorithm; we are at an alfbs. In the previous step we were at one Lexico feasible neighbor of this alfbs, and in the next step we move to the other Lexico feasible neighbor.

• The number of almost Lexico feasible basic solutions is finite.

These three properties together guarantee that the algorithm terminates in a finite number of steps also for degenerate problems. The argumentation is the same as in the nondegenerate case.
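In code, the Lexico minimum ratio test amounts to comparing the vectors of equation (4.5) componentwise. A sketch in Python (the function name and tolerance are my own choices):

```python
import numpy as np

def lexico_min_ratio(xB, AB_inv, w_dir, tol=1e-12):
    """Row index selected by the Lexico minimum ratio test.

    xB = AB^{-1} q, AB_inv = AB^{-1}, w_dir = AB^{-1} A.h.
    """
    candidates = [k for k in range(len(w_dir)) if w_dir[k] > tol]
    rows = {k: np.concatenate(([xB[k]], AB_inv[k])) / w_dir[k]
            for k in candidates}
    best = candidates[0]
    for k in candidates[1:]:
        diff = rows[k] - rows[best]
        nonzero = np.nonzero(np.abs(diff) > tol)[0]
        if len(nonzero) > 0 and diff[nonzero[0]] < 0:  # rows[k] is Lexico smaller
            best = k
    return best
```

Replacing the plain minimum ratio test in the lemke sketch of section 4.2 by this rule is what guarantees termination on degenerate problems.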

Consider the previous example, where cycling occurred. We will now use the Lexico minimum ratio test and show that the algorithm indeed does not cycle. Consider the initial tableau of the problem.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q
    w1              |  1    0    0   -1   -2    0   -1 | -1
    w2              |  0    1    0    0   -1   -2   -1 | -1
    w3              |  0    0    1   -2    0   -1   -1 | -1

In this tableau we see that the elements of q are equal, so we can choose any wi as the leaving variable to get an almost feasible basis. However, for using the Lexico minimum ratio test we have to start with an almost Lexico feasible basis. For this we choose the variable which corresponds to the Lexico minimum of the rows of (q, I). We see that we have to choose w3 as the leaving variable. We get the following tableau; observe that the rows of (AB^{-1} q, AB^{-1}) are indeed Lexico positive and therefore this basis is indeed Lexico feasible.

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    w1              |  1    0   -1    1   -2    1    0 |  0 | 0/1  Min
    w2              |  0    1   -1    2   -1   -1    0 |  0 |  -
    z0              |  0    0   -1    2    0    1    1 |  1 | 1/1

    basic variables | w1   w2   w3   z1   z2   z3   z0 |  q | ratio
    z3              |  1    0   -1    1   -2    1    0 |  0 | 0/1  Min
    w2              |  1    1   -2    3   -3    0    0 |  0 | 0/3  Min
    z0              | -1    0    0    1    2    0    1 |  1 | 1/1

In the tableau above, we have a tie in the normal ratio test. The Lexico ratio test selects the variable corresponding to the Lexico minimum of the following vectors: (0, 1, 0, −1), (0, 1/3, 1/3, −2/3), (1, −1, 0, 0).

The second row is the Lexico minimum, so w2 will leave the basis.

    basic variables | w1    w2    w3   z1   z2   z3   z0 |  q | ratio
    z3              |  2/3  -1/3  -1/3  0   -1    1    0 |  0 |  -
    z1              |  1/3   1/3  -2/3  1   -1    0    0 |  0 |  -
    z0              | -4/3  -1/3   2/3  0    3    0    1 |  1 | 1/3

    basic variables | w1    w2    w3   z1   z2   z3   z0  |  q
    z3              |  2/9  -4/9  -1/9  0    0    1   1/3 | 1/3
    z1              | -1/9   2/9  -4/9  1    0    0   1/3 | 1/3
    z2              | -4/9  -1/9   2/9  0    1    0   1/3 | 1/3

The variable z0 has left the basis, so we have found a solution and the algorithm did not cycle.

So Lemke’s algorithm terminates in a finite number of steps when operated with the Lexico minimum ratio test for choosing the leaving variable in every pivot step.

4.3.3 Conditions under which Lemke’s Algorithm is Correct

Lemke showed in [12] under which conditions on M his algorithm works correctly, that is, the algorithm gives a solution if there is a feasible solution, and it terminates with ray-termination only if there is no feasible solution. Lemke’s algorithm does not always work correctly; this is not surprising, because the LCP is NP-hard in general.

Theorem 36 ([12]). If M is a copositive plus matrix, then Lemke’s algorithm works correctly: it terminates with ray-termination only if the LCP has no solution, and if the LCP has a solution, then it always terminates with a solution. Further, if M is a strictly copositive matrix, then Lemke’s algorithm works correctly and always terminates with a solution. Hence, an LCP with a strictly copositive matrix M always has a solution.

The proof of this theorem can be found in [12] and [15].

4.4 Applications in Linear- and Quadratic Programming

The LCP has applications in different fields. In this section we discuss linear programs and quadratic programs with linear constraints and write them as LCPs. Further, we show that for the LCPs corresponding to linear programs and some quadratic programs, the matrix M is copositive plus. Hence, Lemke’s algorithm works correctly for such LCPs.

4.4.1 Linear Programming

A class of problems which can be written as LCPs are Linear Programming problems (LPs). Consider an LP in standard form (every LP can be written in this standard form) and its corresponding dual problem:

    Primal Problem:                    Dual Problem:
    Minimize   c^T x,        (4.6)     Maximize   b^T y,        (4.7)
    subject to Ax ≤ b,                 subject to −A^T y ≤ −c,
               x ≥ 0.                              y ≥ 0.

We introduce slack variables for both problems, u and v respectively. This results in the following problems:

    Primal Problem:                    Dual Problem:
    Minimize   c^T x,                  Maximize   b^T y,
    subject to Ax + Iu = b,            subject to −A^T y + Iv = −c,
               x, u ≥ 0.                           y, v ≥ 0.

Recall that if the optimal value is attained, then the optimal value of the primal problem is equal to the optimal value of the dual problem. Furthermore, two vectors x and y are optimal for respectively the primal and the dual problem if and only if

    x_i v_i = y_j u_j = 0 for all i, j.        (4.8)

These conditions are the so-called Complementary Slackness Conditions. The Complementary Slackness Conditions together with the constraints of the primal and the dual problem give the following problem: Can we find x, y, u and v such that

    −A^T y + Iv = −c,
    Ax + Iu = b,
    x, y, u, v ≥ 0,
    x_i v_i = y_j u_j = 0 for all i, j?

This is an LCP. We see that in general the LCP corresponding to an LP in standard form (4.6) has the following form:

    w = [ v ],   z = [ x ],   M = [  0   A^T ],   q = [ -c ]
        [ u ]        [ y ]        [ -A    0  ]        [  b ].
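Assembling this LCP from the LP data is mechanical. A sketch in Python (the function name is my own choice); the resulting (M, q) can be passed to the lemke sketch of section 4.2:

```python
import numpy as np

def lp_to_lcp(A, b, c):
    """Build the LCP for the LP min c^T x s.t. Ax <= b, x >= 0:
    w = (v, u), z = (x, y), M = [[0, A^T], [-A, 0]], q = (-c, b)."""
    m, n = A.shape
    M = np.block([[np.zeros((n, n)), A.T],
                  [-A, np.zeros((m, m))]])
    q = np.concatenate([-c, b])
    return M, q
```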

Let us consider an example. Look at the following LP in standard form:

    Max  3x1 − 2x2 + 4x3,           ⟹   Min  −3x1 + 2x2 − 4x3,
    s.t. −4x1 − 3x2 + x3 ≥ 3,            s.t.  4x1 + 3x2 − x3 ≤ −3,
          x1 + 2x2 + x3 ≤ 4,                   x1 + 2x2 + x3 ≤ 4,
          x1, x2, x3 ≥ 0.                      x1, x2, x3 ≥ 0.

We determine the dual problem and we introduce slack variables for both problems.

    Primal Problem:                          Dual Problem:
    Min  −3x1 + 2x2 − 4x3,                   Max  −3y1 + 4y2,
    s.t.  4x1 + 3x2 − x3 + u1 = −3,          s.t. −4y1 − y2 + v1 = 3,
          x1 + 2x2 + x3 + u2 = 4,                 −3y1 − 2y2 + v2 = −2,
          x1, x2, x3, u1, u2 ≥ 0.                  y1 − y2 + v3 = 4,
                                                   y1, y2, v1, v2, v3 ≥ 0.

Combining the constraints of the two problems with the complementary slackness conditions, we get the following LCP: Can we find x, y, u and v such that

    4x1 + 3x2 − x3 + u1 = −3,
    x1 + 2x2 + x3 + u2 = 4,
    −4y1 − y2 + v1 = 3,
    −3y1 − 2y2 + v2 = −2,
    y1 − y2 + v3 = 4,
    x, y, u, v ≥ 0,
    x_i v_i = y_j u_j = 0 for all i, j?

We can write this in the following matrix notation:

    [ 1 0 0 0 0 ] [ v1 ]   [  0  0  0  4  1 ] [ x1 ]   [  3 ]
    [ 0 1 0 0 0 ] [ v2 ]   [  0  0  0  3  2 ] [ x2 ]   [ -2 ]
    [ 0 0 1 0 0 ] [ v3 ] − [  0  0  0 -1  1 ] [ x3 ] = [  4 ],
    [ 0 0 0 1 0 ] [ u1 ]   [ -4 -3  1  0  0 ] [ y1 ]   [ -3 ]
    [ 0 0 0 0 1 ] [ u2 ]   [ -1 -2 -1  0  0 ] [ y2 ]   [  4 ]

    x, v, y, u ≥ 0, and x^T v = y^T u = 0.

In section 4.3.3, we saw that Lemke’s algorithm works correctly for LCPs if the corresponding matrix M is copositive plus. The LCP corresponding to an LP satisfies this condition.

Theorem 37 ([15]). For LCPs corresponding to LPs, the matrix M is positive semidefinite and hence copositive plus. Hence, if Lemke’s algorithm applied to the LCP corresponding to an LP terminates in ray-termination, then the LP is infeasible, or it is feasible but the objective value is unbounded on the set of feasible solutions.

Proof. The matrix M of an LCP corresponding to an LP is not symmetric, so we consider the matrix M̃ = (1/2)(M + M^T). We get

    M̃ = (1/2) ( [  0   A^T ] + [ 0  -A^T ] ) = 0.
                ( [ -A    0  ]   [ A    0  ] )

The zero matrix is indeed positive semidefinite, therefore so is M, and hence M is copositive plus. From Theorem 36 it follows that Lemke’s algorithm works correctly for this LCP.

So we can use Lemke’s algorithm for solving an LP. Note that for solving an LP the Simplex algorithm can also be used. The Simplex method can be used directly on an LP, whereas for Lemke’s algorithm we first have to rewrite the LP as an LCP. Doing this makes the problem larger, and solving it will usually take more time.

4.4.2 Quadratic Programming

We can write every problem with a quadratic objective function and linear constraints as an LCP. Consider a Quadratic Program (QP) in standard form (every QP can be written in this form):

    Minimize   Q(x) = c^T x + (1/2) x^T Dx,        (4.9)
    subject to Ax ≤ b,
               x ≥ 0.

Here A is a matrix in R^{m×n} and D is a matrix in R^{n×n}. We assume that D is symmetric; if D is not symmetric we can consider D̃ = (D + D^T)/2.

Theorem 38 ([15]). If x̄ is an optimum solution of (4.9), then x̄ is also an optimum solution of the LP

    Minimize   (c^T + x̄^T D) x,        (4.10)
    subject to Ax ≤ b,
               x ≥ 0.

Proof. This proof can also be found in [15]. Let x̄ be an optimum solution of (4.9). The problems (4.9) and (4.10) have the same constraints, so x̄ is a feasible solution of (4.10). Take an arbitrary feasible solution x̂ and consider the convex combination of x̄ and x̂,

    x_λ = λx̂ + (1 − λ)x̄ = x̄ + λ(x̂ − x̄)   with 0 < λ < 1.

The point x_λ is feasible for both problems, since the set of feasible solutions of (4.9) and (4.10) is a convex polyhedron. Since x̄ is an optimum of (4.9), Q(x_λ) − Q(x̄) ≥ 0. For all 0 < λ < 1 we have

    Q(x_λ) − Q(x̄) = λ(c^T + x̄^T D)(x̂ − x̄) + 0.5 λ² (x̂ − x̄)^T D (x̂ − x̄) ≥ 0
        ⟺ (c^T + x̄^T D)(x̂ − x̄) ≥ −0.5 λ (x̂ − x̄)^T D (x̂ − x̄).

This holds in particular for very small λ. So when we take the limit λ ↓ 0, the right-hand side goes to zero. Hence (c^T + x̄^T D)(x̂ − x̄) ≥ 0, so (c^T + x̄^T D)x̂ ≥ (c^T + x̄^T D)x̄. The point x̂ is an arbitrary feasible solution, so x̄ is also an optimum solution of (4.10).

Theorem 39 ([15]). If x̄ is an optimum solution of (4.9), then there exist a vector ȳ ∈ R^m and slack variables v̄ ∈ R^n and ū ∈ R^m such that x̄, ȳ, ū, v̄ together satisfy

    [ v̄ ]   [ -D^T   A^T ] [ x̄ ]   [ -c ]
    [ ū ] − [ -A      0  ] [ ȳ ] = [  b ],        (4.11)

    [ v̄ ] ≥ 0,   [ x̄ ] ≥ 0,   and   (v̄^T, ū^T) [ x̄ ] = 0.        (4.12)
    [ ū ]        [ ȳ ]                           [ ȳ ]

Proof. If x̄ is an optimum of (4.9), then x̄ is an optimum of (4.10); see Theorem 38. So there exists an optimum solution ȳ of the dual problem of (4.10) such that

    Ax̄ + Iū = b  and  x̄, ū ≥ 0,                      (constraints of the primal problem)
    −A^T ȳ + Iv̄ = −c − D^T x̄  and  ȳ, v̄ ≥ 0,         (constraints of the dual problem)
    x̄^T v̄ = 0  and  ȳ^T ū = 0                         (Complementary Slackness Conditions).

These equations can be written as (4.11) and (4.12).
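In code, the construction of the LCP (4.11)-(4.12) mirrors the LP case. A sketch (Python; the function name is my own choice):

```python
import numpy as np

def qp_to_lcp(D, A, b, c):
    """Build the LCP data of Theorem 39 for the QP (4.9):
    w = (v, u), z = (x, y), M = [[-D^T, A^T], [-A, 0]], q = (-c, b)."""
    m, n = A.shape
    M = np.block([[-D.T, A.T],
                  [-A, np.zeros((m, m))]])
    q = np.concatenate([-c, b])
    return M, q
```

With the data of the example below, this yields exactly the system (4.13).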

This is an LCP, so we can write every QP of the form (4.9) as an LCP. So if this LCP has no solution, the QP is infeasible, or it is feasible but the objective function is unbounded. Further, every optimal solution of the QP is a solution of the LCP. However, not every solution of the LCP is necessarily an optimal solution of the QP. We will show this with an example. Consider the QP with

    D = [ 1  2 ],   A = [ 10  20 ],   c = [ 1 ],   and   b = [ 20 ]
        [ 2  1 ]        [ 20  40 ]        [ 2 ]             [ 20 ].

The only optimal solution of this QP is (0, 0)^T with a zero objective value. Consider the LCP corresponding to this QP:

    [ v̄ ]   [  -1  -2  10  20 ] [ x̄ ]   [ -1 ]
    [ ū ] − [  -2  -1  20  40 ] [ ȳ ] = [ -2 ],        (4.13)
            [ -10 -20   0   0 ]          [ 20 ]
            [ -20 -40   0   0 ]          [ 20 ]

    [ v̄ ] ≥ 0,   [ x̄ ] ≥ 0,   and   (v̄^T, ū^T) [ x̄ ] = 0.        (4.14)
    [ ū ]        [ ȳ ]                           [ ȳ ]

The vector (x̄^T, ȳ^T, v̄^T, ū^T) = (1, 0, 0, 1/10, 0, 0, 10, 0) is a solution of this LCP. However, x̄^T = (1, 0) is not an optimal solution of the QP. So if we want to solve the QP, then we have to check for all solutions of the LCP whether they are optimal. In the previous subsection we saw that the matrix M of an LCP corresponding to an LP is copositive plus. The same holds for quadratic programs for which −D is positive semidefinite.

Theorem 40 ([15]). For LCPs corresponding to QPs for which −D is positive semidefinite, the matrix M is positive semidefinite and hence copositive plus. Hence, if Lemke’s algorithm applied to such an LCP terminates in ray-termination, then the corresponding QP is infeasible, or it is feasible but the objective value is unbounded on the set of feasible solutions.

Proof. The matrix M of an LCP corresponding to a QP is not symmetric, so we consider the matrix M̃ = (1/2)(M + M^T). We get

    M̃ = (1/2) ( [ -D^T  A^T ] + [ -D  -A^T ] ) = [ -(1/2)(D + D^T)  0 ] = [ -D  0 ]
                ( [ -A     0  ]   [  A    0  ] )   [        0         0 ]   [  0  0 ].

This matrix is positive semidefinite if and only if −D is positive semidefinite. Therefore, for QPs for which −D is positive semidefinite, the matrix M of the corresponding LCP is positive semidefinite and hence M is copositive plus. From Theorem 36 it follows that Lemke’s algorithm works correctly for this LCP.

4.5 Applications in Game Theory

In this section we discuss the problem of finding equilibria of two person and polymatrix games. This problem can be written as an LCP. 4.5. APPLICATIONS IN THE GAME THEORY 45

4.5.1 Two Person Games

In two person games there are two players: player I and player II. The players have m and n choices respectively, and each has to choose one amongst them. If both have made their choice, say player I chooses choice i and player II chooses choice j, then player I incurs a loss of a′_ij and player II incurs a loss of b′_ij. Note that if the loss is negative, the player has a reward instead of a loss. In game theory we want to find the best strategy for both players. More precisely, we want to find the equilibrium point(s) of the game. If both players have chosen their strategy, then this is an equilibrium point if and only if no player can benefit by changing his strategy while the other player keeps his strategy unchanged.

Both players have a loss-matrix: player I has loss-matrix A′ = (a′_ij) and player II has loss-matrix B′ = (b′_ij). If a′_ij + b′_ij is zero for all i and j, then we have a zero-sum game. For this kind of two person game it is known how an equilibrium point of strategies can be found. A more difficult case is when a′_ij + b′_ij ≠ 0; we call these bi-matrix games. The problem of finding the equilibrium point(s) of a bi-matrix game can be written as an LCP.

A strategy vector is a vector v in which v_i denotes the probability that the player chooses choice i. Such a vector is a probability vector, that is, every element of v is nonnegative and the sum of the elements of v is one. Let x be the strategy vector of player I and let y be the strategy vector of player II. The expected loss for player I is x^T A′ y and the expected loss for player II is x^T B′ y. In an equilibrium point (x̄, ȳ) no player benefits from changing his strategy, so x̄^T A′ ȳ ≤ x^T A′ ȳ for all probability vectors x, and x̄^T B′ ȳ ≤ x̄^T B′ y for all probability vectors y.

Let α and β be positive numbers such that a_ij = a′_ij + α > 0 and b_ij = b′_ij + β > 0 for all i and j, and let A = (a_ij) and B = (b_ij). We have x^T A′ y = x^T A y − α and x^T B′ y = x^T B y − β for all probability vectors x and y. Therefore, (x̄, ȳ) is an equilibrium point of the bi-matrix game with matrices A′ and B′ if and only if (x̄, ȳ) is an equilibrium point of the bi-matrix game with matrices A and B. So for determining the equilibrium points of the game with A′ and B′ we can also consider the game with matrices A and B. We have that (x̄, ȳ) is an equilibrium point of A and B if and only if

    x̄^T A ȳ ≤ x^T A ȳ for all probability vectors x,  and
    x̄^T B ȳ ≤ x̄^T B y for all probability vectors y.

The above inequalities hold for all probability vectors x and y; in particular they hold for the unit vectors e_i. On the other hand, if the inequalities hold for all unit vectors, then they also hold for all convex combinations of the unit vectors, which are exactly the probability vectors. So the above constraints are equivalent to

    x̄^T A ȳ ≤ A_{i.} ȳ for all i  and  x̄^T B ȳ ≤ x̄^T B_{.j} for all j
        ⟺ (x̄^T A ȳ) e ≤ A ȳ  and  (x̄^T B ȳ) e ≤ B^T x̄.        (4.15)

The matrices A and B are strictly positive and therefore x̄^T A ȳ and x̄^T B ȳ are also strictly positive numbers. Let

    ξ̄ = x̄ / (x̄^T B ȳ)   and   η̄ = ȳ / (x̄^T A ȳ).

Since x̄ and ȳ are nonnegative, ξ̄ ≥ 0, η̄ ≥ 0,

    Σ_i ξ̄_i = Σ_i x̄_i / (x̄^T B ȳ) = 1 / (x̄^T B ȳ),   and   Σ_i η̄_i = Σ_i ȳ_i / (x̄^T A ȳ) = 1 / (x̄^T A ȳ).        (4.16)

Substituting ξ̄ and η̄ in (4.15) and introducing the slack variables ū and v̄, we have

    A η̄ ≥ e ⟹ −A η̄ + ū = −e   and   B^T ξ̄ ≥ e ⟹ −B^T ξ̄ + v̄ = −e.        (4.17)

From these equations and (4.16) it follows for ξ̄, η̄, ū, and v̄ that

    −Σ_i ξ̄_i − Σ_i η̄_i + 1/(x̄^T B ȳ) + 1/(x̄^T A ȳ) = 0
    ⟺ −Σ_i ξ̄_i − Σ_i η̄_i + (x̄^T A ȳ)/((x̄^T B ȳ)(x̄^T A ȳ)) + (x̄^T B ȳ)/((x̄^T B ȳ)(x̄^T A ȳ)) = 0
    ⟺ −Σ_i ξ̄_i − Σ_i η̄_i + ξ̄^T A η̄ + ξ̄^T B η̄ = 0
    ⟺ ( (−e + A η̄)^T, (−e + B^T ξ̄)^T ) (ξ̄^T, η̄^T)^T = 0
    ⟺ ( ū^T, v̄^T ) (ξ̄^T, η̄^T)^T = 0.        (4.18)

The equations (4.17), (4.18), and the nonnegativity constraints give the following LCP:

    [ ū ]   [  0   A ] [ ξ̄ ]   [ -e ]
    [ v̄ ] − [ B^T  0 ] [ η̄ ] = [ -e ],

    [ ū ] ≥ 0,   [ ξ̄ ] ≥ 0,   and   (ū^T, v̄^T) [ ξ̄ ] = 0.
    [ v̄ ]        [ η̄ ]                          [ η̄ ]

If the LCP has a solution η̄ and ξ̄, then we can compute the equilibrium point (x̄, ȳ) via

    x̄ = ξ̄ / Σ_i ξ̄_i   and   ȳ = η̄ / Σ_i η̄_i.

4.5.2 Polymatrix Games

In the previous subsection we have written the problem of finding the equilibrium points of two person games as an LCP. The matrix M corresponding to such an LCP is not copositive plus; see Theorem 4. However, it is possible to write two person games as copositive plus LCPs. In general, it is even possible to write every n-person game with n ∈ N as a copositive plus LCP. In this section we will show this. This section is a summary of the article "Copositive-plus Lemke algorithm solves polymatrix games" [14].

Consider an n-person game. For simplicity, we assume that every player has m pure strategies. Let p_i(k) be the probability that player i chooses choice k. The vectors p_1 till p_n are probability vectors, so

    p_i(k) ≥ 0 for all i and all choices k,        (4.19)
    Σ_k p_i(k) = 1 for all i.                      (4.20)

Let r′_ij(k, l) be the payoff for player i from player j when they choose choice k and choice l respectively. If i = j, then r′_ij(k, l) = 0 for all k and l. The total payoff of player i is Σ_{j,k,l} p_i(k) r′_ij(k, l) p_j(l). Consider the payoff-matrix

    R′ = [ R′_11  R′_12  ...  R′_1n ]
         [ R′_21  R′_22  ...  R′_2n ]
         [  ...    ...   ...   ...  ]
         [ R′_n1  R′_n2  ...  R′_nn ],

where R′_ij is the m × m block with entries (R′_ij)_{kl} = r′_ij(k, l).

Let α be a positive constant such that r_ij(k, l) := r′_ij(k, l) − α < 0 for all i, j, k, and l, and let R be the new payoff-matrix. Note that if i = j, then r_ij(k, l) = −α. In the previous subsection we updated the payoff-matrix to a strictly positive matrix, but here we update the payoff-matrix to a strictly negative matrix. If the payoff-matrix is strictly negative, then it is not rewarding for player i to let the sum of his strategy be greater than 1. So we can replace constraint (4.20) with

    Σ_k p_i(k) ≥ 1 for all i.        (4.21)

Let P be the set of vectors which satisfy (4.19) and (4.21). A point p̄ ∈ P is an equilibrium point of the game if and only if there is no point p ∈ P such that p̄^T R′ p̄ < p^T R′ p̄, which is the case if and only if there is no point p ∈ P such that (p̄ − p)^T R′ p̄ < 0.

Let d = p − p̄ with p̄, p ∈ P. If p̄_i(k) = 0, then d_i(k) ≥ 0. Further, if Σ_k p̄_i(k) = 1, then Σ_k d_i(k) ≥ 0. A point p̄ ∈ P is an equilibrium point if and only if there is no such vector d with −d^T R p̄ < 0.

Let p^T = (p_1^T, ..., p_n^T) ∈ R^{nm} and let A be the n × nm matrix

    A = [ -1 ... -1    0 ...  0   ...   0 ...  0 ]
        [  0 ...  0   -1 ... -1   ...   0 ...  0 ]
        [               ...                      ]
        [  0 ...  0    0 ...  0   ...  -1 ... -1 ].

A point p̄ ∈ P is an equilibrium point if and only if there is no vector d with −d^T R p̄ < 0 and d satisfying

    if (Ap̄)_i = −e_i, then (Ad)_i ≤ 0 for all i,
    if p̄_i(k) = 0, then −d_i(k) ≤ 0 for all i, k.

Let v = −e − Ap̄. If v_i = 0, then (Ap̄)_i = −e_i. Let M be a big number. The vector p̄ is an equilibrium point if and only if the following system has no solution:

    −d^T R p̄ < 0,    [  A   v ] [  d ] ≤ 0.        (4.22)
                     [ -I   p̄ ] [ -M ]

Farkas’ Lemma states: Given a real matrix B and a corresponding vector b, exactly one of the following systems has a solution:

    −b^T μ < 0,  Bμ ≤ 0,    or    B^T ψ = b,  ψ ≥ 0.

If

    B = [  A   v ],   b = [ Rp̄ ],   μ = [  d ],   and   ψ = [ y ],
        [ -I   p̄ ]        [  0 ]        [ -M ]              [ u ]

then (4.22) does not have a solution if and only if the following system has a solution:

    [ A^T   -I  ] [ y ]   [ Rp̄ ],     [ y ] ≥ 0.
    [ v^T   p̄^T ] [ u ] = [  0 ]      [ u ]

This can be rewritten as the LCP

    [ ū ]   [ -R   A^T ] [ p̄ ]   [  0 ]
    [ v̄ ] − [ -A    0  ] [ ȳ ] = [ -e ],        (4.23)

    [ ū ] ≥ 0,   [ p̄ ] ≥ 0,   and   (ū^T, v̄^T) [ p̄ ] = 0.
    [ v̄ ]        [ ȳ ]                          [ ȳ ]

Theorem 41. For LCPs of the form (4.23), the matrix M is flatly nonnegative and hence copositive plus. Lemke’s algorithm applied to (4.23) terminates in ray-termination if and only if the n-person game has no equilibrium point. If Lemke’s algorithm terminates with a solution of the LCP, then it is an equilibrium point of the n-person game.

Proof. The matrix M of (4.23) is not necessarily symmetric, so we consider the matrix M̃ = (1/2)(M + M^T). We get

    M̃ = (1/2) ( [ -R   A^T ] + [ -R^T  -A^T ] ) = [ -(1/2)(R + R^T)   0 ]
                ( [ -A    0  ]   [  A      0  ] )   [        0          0 ].

This matrix is indeed flatly nonnegative, because the matrix R is strictly negative. Hence the matrix M is copositive plus. From Theorem 36 it follows that Lemke’s algorithm works correctly for this LCP.
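A sketch assembling the LCP (4.23) for n players with m pure strategies each, given a strictly negative payoff-matrix R (Python; the function name is my own choice):

```python
import numpy as np

def polymatrix_lcp(R, n, m):
    """Build M = [[-R, A^T], [-A, 0]] and q = (0, -e) as in (4.23)."""
    A = np.kron(np.eye(n), -np.ones((1, m)))      # the n x nm matrix A above
    M = np.block([[-R, A.T],
                  [-A, np.zeros((n, n))]])
    q = np.concatenate([np.zeros(n * m), -np.ones(n)])
    return M, q
```

Since this M is copositive plus, passing (M, q) to the lemke sketch of section 4.2 either yields a solution, from which p̄ is read off directly, or ray-terminates, which by Theorem 41 means that the game has no equilibrium point.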

In particular, this result holds for two person games. Consider a two person game as defined in subsection 4.5.1. Recall that the loss-matrix of player I is A′, so his payoff-matrix is −A′. Further, the loss-matrix of player II is (B′)^T, so his payoff-matrix is −(B′)^T. Consider the total payoff-matrix

    R′ = [    0       -A′ ]
         [ -(B′)^T     0  ].

We update this matrix to a strictly negative matrix. If α is a positive constant such that r′_ij(k, l) − α < 0 for all entries, then the zero matrices in the upper-left and lower-right corners of R′ become strictly negative matrices C. Further, −A′ becomes the strictly negative matrix −A and −(B′)^T becomes the strictly negative matrix −B^T. Note that the matrix A here is not the constraint matrix A introduced earlier in this subsection, but the matrix A of the previous subsection. The new total payoff-matrix becomes

    R = [  C    -A ]
        [ -B^T   C ].

We get the following LCP:

    [ ū ] − M [ p̄ ] = [  0 ],   with   M = [ -C    A   -e    0 ]
    [ v̄ ]     [ ȳ ]   [ -e ]               [ B^T  -C    0   -e ]
                                            [ e^T   0    0    0 ]
                                            [  0   e^T   0    0 ],

    [ ū ] ≥ 0,   [ p̄ ] ≥ 0,   and   (ū^T, v̄^T) [ p̄ ] = 0.
    [ v̄ ]        [ ȳ ]                          [ ȳ ]

This is indeed a copositive plus LCP.

4.6 An Application in Economics

In this section we discuss an application in the field of economics. The article "Studying Economic Equilibria on Affine Networks via Lemke’s algorithm" [1] describes how the economic equilibria of multi-commodity transshipment networks can be found with Lemke’s algorithm. In this section we give a summary of this article.

A transshipment network is a network with producers, consumers, and transport links on which commodities can be transported. We consider multi-commodity transshipment networks; these transport networks deal with multiple kinds of commodities. For instance, a commodity can be a "raw material", an "intermediate product", or a "finished product". For these multi-commodity transshipment networks we want to find the economic equilibrium points. In an economic equilibrium the system is balanced, that is, the quantity demanded and the quantity supplied are equal, and if there are no influences from outside, the prices and quantities do not change.

Mathematically, we can represent transport networks by directed graphs. These graphs have a finite number of nodes and arcs. The nodes represent locations with producers and/or consumers. The arcs represent the transport links on which a finite number of commodities can be transported for a certain price per unit. The directed arcs point in the direction in which commodities are transported. If transport is possible both ways between two nodes, then there are at least two arcs between these nodes pointing in opposite directions. There are no loops in the graph. All nodes and arcs are enumerated. We will always denote nodes by the symbol i or j and arcs by the symbol s or t. Further, a commodity is always indexed by the symbol c. In the table below we introduce some notation.

    Notation   Explanation
    i→         The set of all links directed out of node i.
    →i         The set of all links directed into node i.
    )s         The tail of link s.
    s)         The head of link s.
    q_ic       The excess quantity of commodity c produced by node i; q_i denotes the vector with components q_ic.
    p_ic       The unit price of commodity c at node i; p_i denotes the vector with components p_ic.
    z_sc       The quantity of commodity c transported via link s; z^s denotes the vector with components z_sc.
    p_sc       The unit price for transporting commodity c via link s; p^s denotes the vector with components p_sc.

Let z denote the vector with vector components z^s and let q denote the vector with vector components q_i. Moreover, let p denote the vector with vector components p^s and p_i. Now we want to find the economic equilibria (z, q, p). There are six conditions for a point (z, q, p) to be an economic equilibrium point.

The conditions for an economic equilibrium:

1. z^s ≥ 0 for all s: it is not possible to transport a negative quantity of commodities.

2. q_i = Σ_{s∈i→} z^s − Σ_{s∈→i} z^s for all i: the quantity that leaves node i minus the quantity that comes into node i is equal to the excess quantity of node i.

3. p_{)s} + p^s ≥ p_{s)} for all s: if this condition does not hold for link s and commodity c, then a person in node s) can buy as much as possible from node )s and resell it to the consumers in node s) for a higher price. This is an economically unstable situation.

4. ⟨z^s, p_{)s} + p^s − p_{s)}⟩ = 0 for all s: intuitively, if p_{)s} + p^s − p_{s)} = 0, then the price is right, and if p_{)s} + p^s − p_{s)} > 0, then the price is too high. So if p_{)s} + p^s − p_{s)} > 0, no one will buy from node )s and transport over link s, so z^s will be zero. On the other hand, if z_sc > 0, then the price for buying and transporting is not too high and p_{)s} + p^s − p_{s)} = 0.

5. p_i = A_i q_i + a_i for all i, where A_i and a_i are given constant matrices and vectors: this condition relates the supply and the demand to the price.

6. p^s = A^s z^s + a^s for all s, where A^s and a^s are given constant matrices and vectors: this condition relates the transport prices to the transport volumes.

Now we want to write the problem of finding the economic equilibria as an LCP. For this we define the slack variables w^s = p_{)s} + p^s − p_{s)}, after which we can replace condition 3 with w^s ≥ 0 and condition 4 with ⟨w^s, z^s⟩ = 0. Using conditions 2, 5, and 6, we get the following:

    w^s = p_{)s} + p^s − p_{s)}
        = A_{)s} q_{)s} + a_{)s} + A^s z^s + a^s − (A_{s)} q_{s)} + a_{s)})
        = A_{)s} q_{)s} + A^s z^s − A_{s)} q_{s)} + v^s
        = A_{)s} { Σ_{t∈)s→} z^t − Σ_{t∈→)s} z^t } + A^s z^s − A_{s)} { Σ_{t∈s)→} z^t − Σ_{t∈→s)} z^t } + v^s,        (4.24)

where v^s = a_{)s} + a^s − a_{s)}. In equation (4.24) we see that w^s only depends on v^s, z, and the given matrices A^s and A_i. Furthermore, the only conditions left are conditions 1, 3, and 4. This gives the following LCP:

    Iw − Mz = v,   ⟨z, w⟩ = 0,   and   z, w ≥ 0.

Here the matrix M only depends on the given matrices A^s and A_i. The blocks M^{st} of M are given below:

    M^{st} = { A^s + A_{)s} + A_{s)}    if t = s,
             { A_{)s} + A_{s)}          if t ∈ )s→ ∩ →s) but t ≠ s,
             { A_{)s}                   if t ∈ )s→ but t ∉ →s),
             { A_{s)}                   if t ∈ →s) but t ∉ )s→,
             { -A_{)s} - A_{s)}         if t ∈ s)→ ∩ →)s,
             { -A_{)s}                  if t ∈ →)s but t ∉ s)→,
             { -A_{s)}                  if t ∈ s)→ but t ∉ →)s,
             { 0                        otherwise.

So if we have the matrices A^s and A_i and a directed graph representing the transshipment network, then we can construct the matrix M. For an example of a 2-commodity transshipment network and its corresponding LCP, we refer to [1]. If we have found a solution (z, w) of the LCP, then we can compute the corresponding economic equilibrium point with conditions 2, 5, and 6. The following two theorems are the main results of the paper [1]. They tell us when Lemke’s algorithm works correctly.

Theorem 42 ([1]). If the matrix A_i is positive semidefinite for each node i, and if the matrix A^s is copositive plus for each link s, then the matrix M is copositive plus, and hence Lemke’s algorithm terminates either with a solution to the LCP or with the demonstration that no such solution exists.

Theorem 43 ([1]). If the matrix A_i is positive semidefinite for each node i, and if the matrix A^s is strictly copositive for each link s, then Lemke’s algorithm always generates a solution to the LCP.

Nomenclature

R^n_+        The set of nonnegative vectors of length n, page 2

S^n          The n-dimensional sphere with radius 1, page 2

S^n_+        The nonnegative quadrant of the n-dimensional sphere with radius 1, page 2

C            The set of copositive matrices, page 3

C+           The set of copositive plus matrices, page 3

E            The set of symmetric matrices with ones on the diagonal and zeros, ones, and minus ones elsewhere, page 15

E+           The set of symmetric matrices with ones on the diagonal and ones and minus ones elsewhere, page 16

K*           The dual cone of the set K, page 14

N            The set of nonnegative matrices, page 1

N+           The set of flatly nonnegative matrices, page 9

S            The set of symmetric matrices, page 1

S+           The set of positive semidefinite matrices, page 4

S++          The set of positive definite matrices, page 4

int(C)       The set of strictly copositive matrices, page 3

A + B        The Minkowski sum of two sets of matrices A and B, page 9

K_{p,n−p}    The complete bipartite graph with partitions of size p and size n − p, page 16

Index

Almost feasible basic solution, 36

Basic
    column, 31
    matrix, 31, 32
    variable, 29, 32

Basis, 32
    feasible basis, 32

Complementary pair of vectors, 31

Completion, 19

Completion problem, 19
    copositive completion problem, 19
    copositive plus completion problem, 19
    strictly copositive completion problem, 19

Cone, 14
    complementary cone, 30, 31
    convex cone, 14
    dual cone, 14

Copositive, 3
    partial copositive, 19

Copositive plus, 3
    of order r, 15
    partial copositive plus, 19

Degenerate, 35

Equilibrium point, 45

Flatly nonnegative, 9

Lexico
    feasible, 38
    minimum, 38
    negative, 38
    positive, 38

Linear Complementarity Problem, 29

Minkowski sum, 10

Nonbasic
    matrix, 32
    variable, 29, 32

Nondegenerate, 35

Partial matrix, 19

Positive definite, 4

Positive semidefinite, 4

Property of closure under principal rearrangements, 4

Property of inheritance, 4

Strictly copositive, 3
    partial strictly copositive, 19

Transshipment network, 49

Two person game, 45
    bi-matrix game, 45

Bibliography

[1] R. Asmuth, B. C. Eaves, and E. L. Peterson. Studying economic equilibria on affine networks via Lemke’s algorithm. Discussion Papers 314, Northwestern University, Center for Mathematical Studies in Economics and Management Science, 1978.

[2] R. W. Cottle and G. B. Dantzig. Complementary pivot theory of mathematical programming. Linear Algebra and Its Applications, 1:103–125, 1968.

[3] R. W. Cottle, G. J. Habetler, and C. E. Lemke. Quadratic forms semi-definite over convex cones. In Proceedings of the Princeton Symposium on Mathematical Programming, pages 551–565, 1967.

[4] R. W. Cottle, G. J. Habetler, and C. E. Lemke. On classes of copositive matrices. Linear Algebra and Its Applications, 3:295–310, 1970.

[5] P. H. Diananda. On non-negative forms in real variables some or all of which are non- negative. Mathematical Proceedings of the Cambridge Philosophical Society, 58:17–25, 1962.

[6] K. P. Hadeler. On copositive matrices. Linear Algebra and Its Applications, 49:79–89, 1983.

[7] E. Haynsworth and A. J. Hoffman. Two remarks on copositive matrices. Linear Algebra and Its Applications, 2:387–392, 1969.

[8] A. J. Hoffman and F. J. Pereira. On copositive matrices with −1, 0, 1 entries. Journal of Combinatorial Theory, Series A, 14:302–309, 1973.

[9] L. Hogben. The copositive completion problem: Unspecified diagonal entries. Linear Algebra and Its Applications, 420:160–162, 2007.

[10] L. Hogben, C. R. Johnson, and R. Reams. The copositive completion problem. Linear Algebra and Its Applications, 408:207–211, 2005.

[11] M. M. Kostreva. Cycling in linear complementarity problems. Mathematical Programming, 16:127–130, 1979.

[12] C. E. Lemke. Bimatrix equilibrium points and mathematical programming. Management Science, 11(7):681–689, 1965.

[13] J. E. Maxfield and H. Minc. On the matrix equation X′X = A. Proceedings of the Edinburgh Mathematical Society (Series 2), 13:125–129, 1962.


[14] D. A. Miller and S. W. Zucker. Copositive-plus Lemke algorithm solves polymatrix games. Operations Research Letters, 10:285–290, 1991.

[15] K. G. Murty. Linear Complementarity, Linear and Nonlinear Programming. Heldermann Verlag, 1988.

[16] F. J. Pereira. On Characterizations of Copositive Matrices. PhD thesis, Stanford University, 1972.

[17] H. Väliaho. Criteria for copositive matrices. Linear Algebra and Its Applications, 81:19–34, 1986.