arXiv:1808.00281v4 [math.OC] 9 Sep 2019

On Semimonotone Star Matrices and Linear Complementarity Problem

R. Jana^{a,1}, A. K. Das^{a,2}, S. Sinha^{b,3}

^a Indian Statistical Institute, 203 B. T. Road, Kolkata, 700108, India.
^b Jadavpur University, Kolkata, 700 032, India.
^1 Corresponding author. Email: [email protected]
^2 Email: [email protected]
^3 Email: [email protected]

Abstract

In this article, we introduce the class of semimonotone star (E0^s) matrices. We establish the importance of the class of E0^s-matrices in the context of complementarity theory. We show that the principal pivot transform of an E0^s-matrix is not necessarily an E0^s-matrix in general. However, we prove that Ẽ0^s-matrices, a subclass of E0^s-matrices with some additional conditions, are in P0, and that LCP(q, A) given A ∈ Ẽ0^s can be processed by Lemke's algorithm. We find some conditions for which the solution set of LCP(q, A) is bounded and stable under the Ẽ0^s-property. We propose an algorithm based on an interior point method to solve LCP(q, A) given A ∈ Ẽ0^s.

Keywords: Linear complementarity problem, principal pivot transform, Lemke's algorithm, interior point method, semimonotone star matrix, E0^s-matrix.

AMS subject classifications: 90C33, 90C90.

1 Introduction

The concept of pseudomonotone or copositive star matrices with respect to the complementarity condition was studied by Gowda [12]. The properties of copositive matrices on a closed convex cone are well studied in the literature of the linear complementarity problem. The star property of a matrix A [11] requires that any point x of the solution set of LCP(0, A) satisfies A^T x ≤ 0. Flores-Bazán and López [11] studied matrices with the star property and proved necessary and sufficient conditions in the context of characterizing Q-matrices. In linear complementarity theory, much of the research is devoted to finding constructive characterizations of Q0- and Q-matrices. The set K(A) denotes the closed cone containing the nonnegative orthant R^n_+. Eaves [10] showed that A ∈ Q0 if and only if K(A) is convex. A subclass Q of Q0 is defined by the property that A ∈ Q if and only if K(A) = R^n. Aganagic and Cottle [1] showed that Lemke's algorithm processes LCP(q, A) if A ∈ P0 ∩ Q0. Many of the concepts and algorithms in optimization theory are developed based on the principal pivot transform (PPT). The notion of PPT is originally motivated by the well known linear complementarity problem. The class of semimonotone matrices (E0) introduced by Eaves [10] (also denoted by L1) consists of all real square matrices A such that LCP(q, A) has a unique solution for every q > 0. Cottle and Stone [6] introduced the notion of a fully semimonotone matrix (E0^f) by requiring that every PPT of such a matrix is a semimonotone matrix. Stone studied various properties of E0^f-matrices and conjectured that E0^f-matrices with the Q0-property are contained in P0.
The linear complementarity problem is a combination of linear and nonlinear systems of inequalities and equations. The problem may be stated as follows: Given A ∈ R^{n×n} and a vector q ∈ R^n, the linear complementarity problem LCP(q, A) is the problem of finding vectors w ∈ R^n and z ∈ R^n satisfying the following system:

w − Az = q, w ≥ 0, z ≥ 0 (1.1)

w^T z = 0 (1.2)

Let FEA(q, A) = {z : q + Az ≥ 0} and SOL(q, A) = {z ∈ FEA(q, A) : z^T(q + Az) = 0} denote the feasible set and the solution set of LCP(q, A) respectively. In this article, we introduce a class of matrices called semimonotone star (E0^s) matrices by introducing the notion of the star property.
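Conditions (1.1)-(1.2) translate directly into a numerical test. The following Python sketch (the function name and the sample data are ours, chosen only for illustration) checks whether a given z belongs to SOL(q, A) up to a tolerance.

```python
import numpy as np

def in_SOL(A, q, z, tol=1e-8):
    """Check z in SOL(q, A): z >= 0, w = q + A z >= 0 and w^T z = 0 (up to tol)."""
    w = q + A @ z
    return bool(np.all(z >= -tol) and np.all(w >= -tol) and abs(w @ z) <= tol)

# Illustrative data (not from the paper): z = (1/3, 1/3) solves LCP(q, A) here.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
print(in_SOL(A, q, np.array([1.0, 1.0]) / 3))   # True, since w = q + A z = 0
```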

The outline of the article is as follows. In Section 2, some notations, definitions and results are presented that are used in the subsequent sections. In Section 3, we introduce the semimonotone star (E0^s) matrix and study some properties of this class in connection with complementarity theory and the principal pivot transform. Section 4 deals with PPT based matrix classes under the E0^s-property. In Section 5, we consider SOL(q, A) under the E0^s-property. In this connection we partially settle an open problem raised by Gowda and Jones [15]. In Section 6, we propose an iterative algorithm [9] to process LCP(q, A) where A ∈ Ẽ0^s, a subclass of E0^s-matrices. A numerical example is presented to demonstrate the performance of the proposed algorithm in Section 7.

2 Preliminaries

We denote the n-dimensional real space by R^n, where R^n_+ and R^n_{++} denote the nonnegative and positive orthants of R^n respectively. We consider vectors and matrices with real entries. For any set β ⊆ {1, 2, ..., n}, β̄ denotes its complement in {1, 2, ..., n}. Any vector x ∈ R^n is a column vector unless otherwise specified. For any matrix A ∈ R^{n×n}, a_ij denotes the entry in its ith row and jth column, A_{·j} denotes the jth column and A_{i·} denotes the ith row of A.

If A is a matrix of order n, ∅ ≠ α ⊆ {1, 2, ..., n} and ∅ ≠ β ⊆ {1, 2, ..., n}, then A_{αβ} denotes the submatrix of A consisting of only the rows and columns of A whose indices are in α and β, respectively. For any set α, |α| denotes its cardinality. ‖A‖ and ‖q‖ denote the norms of a matrix A and a vector q respectively. The principal pivot transform (PPT) of A, a real n × n matrix, with respect to α ⊆ {1, 2, ..., n} is defined as the matrix given by

M = \begin{pmatrix} M_{αα} & M_{αᾱ} \\ M_{ᾱα} & M_{ᾱᾱ} \end{pmatrix}

where M_{αα} = (A_{αα})^{-1}, M_{αᾱ} = −(A_{αα})^{-1} A_{αᾱ}, M_{ᾱα} = A_{ᾱα} (A_{αα})^{-1} and M_{ᾱᾱ} = A_{ᾱᾱ} − A_{ᾱα} (A_{αα})^{-1} A_{αᾱ}. Note that the PPT is defined only with respect to those α for which det A_{αα} ≠ 0. By a legitimate principal pivot transform we mean the PPT obtained from A by performing a principal pivot on one of its nonsingular principal submatrices. When α = ∅, by convention det A_{αα} = 1 and M = A. For further details see [4], [7], [20] and [21] in this connection. The PPT of LCP(q, A) with respect to α (obtained by pivoting on A_{αα}) is given by LCP(q′, M), where M has the structure mentioned above with q′_α = −(A_{αα})^{-1} q_α and q′_ᾱ = q_ᾱ − A_{ᾱα} (A_{αα})^{-1} q_α.
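The block formulas above translate into a short computation. The sketch below (our own helper names, assuming A_{αα} is nonsingular) forms the PPT M and the transformed vector q′ of LCP(q, A).

```python
import numpy as np

def principal_pivot_transform(A, alpha):
    """PPT M of A with respect to the index set alpha (0-based), assuming
    det A[alpha, alpha] != 0; the four blocks follow the formulas above."""
    n = A.shape[0]
    alpha = list(alpha)
    abar = [i for i in range(n) if i not in alpha]
    Aaa_inv = np.linalg.inv(A[np.ix_(alpha, alpha)])
    M = np.array(A, dtype=float)
    M[np.ix_(alpha, alpha)] = Aaa_inv
    M[np.ix_(alpha, abar)] = -Aaa_inv @ A[np.ix_(alpha, abar)]
    M[np.ix_(abar, alpha)] = A[np.ix_(abar, alpha)] @ Aaa_inv
    M[np.ix_(abar, abar)] = (A[np.ix_(abar, abar)]
                             - A[np.ix_(abar, alpha)] @ Aaa_inv @ A[np.ix_(alpha, abar)])
    return M

def ppt_of_lcp(A, q, alpha):
    """Transformed vector q' of LCP(q', M), the PPT of LCP(q, A) w.r.t. alpha."""
    alpha = list(alpha)
    abar = [i for i in range(A.shape[0]) if i not in alpha]
    Aaa_inv = np.linalg.inv(A[np.ix_(alpha, alpha)])
    qp = np.array(q, dtype=float)
    qp[alpha] = -Aaa_inv @ q[alpha]
    qp[abar] = q[abar] - A[np.ix_(abar, alpha)] @ Aaa_inv @ q[alpha]
    return principal_pivot_transform(A, alpha), qp
```

Since the PPT with respect to the same α is an involution, applying principal_pivot_transform twice with the same α returns the original matrix, which gives a quick sanity check of the code.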

We say that A ∈ Rn×n is

− positive definite (PD) matrix if x^T Ax > 0, ∀ 0 ≠ x ∈ R^n.
− positive semidefinite (PSD) matrix if x^T Ax ≥ 0, ∀ x ∈ R^n.
− column sufficient matrix if x_i(Ax)_i ≤ 0 ∀ i =⇒ x_i(Ax)_i = 0 ∀ i.
− row sufficient matrix if A^T is column sufficient.
− sufficient matrix if A is both column and row sufficient.
− P(P0)-matrix if all its principal minors are positive (nonnegative); a brute-force check of this definition is sketched after this list.
− N(N0)-matrix if all its principal minors are negative (nonpositive).
− copositive (C0) matrix if x^T Ax ≥ 0, ∀ x ≥ 0.
− strictly copositive (C) matrix if x^T Ax > 0, ∀ 0 ≠ x ≥ 0.
− copositive plus (C0^+) matrix if A is copositive and x^T Ax = 0, x ≥ 0 =⇒ (A + A^T)x = 0.
− copositive star (C0^*) matrix if A is copositive and x^T Ax = 0, Ax ≥ 0, x ≥ 0 =⇒ A^T x ≤ 0.
− semimonotone (E0) matrix if for every 0 ≠ x ≥ 0, ∃ an i such that x_i > 0 and (Ax)_i ≥ 0.
− L2-matrix if for every 0 ≠ x ≥ 0, x ∈ R^n, such that Ax ≥ 0, x^T Ax = 0, ∃ two diagonal matrices D1 ≥ 0 and D2 ≥ 0 such that D2 x ≠ 0 and (D1 A + A^T D2)x = 0.
− L-matrix if it is in E0 ∩ L2.
− strictly semimonotone (E) matrix if for every 0 ≠ x ≥ 0, ∃ an i such that x_i > 0 and (Ax)_i > 0.
− pseudomonotone matrix if for all x, y ≥ 0, (y − x)^T Ax ≥ 0 =⇒ (y − x)^T Ay ≥ 0.
− positive subdefinite (PSBD) matrix if ∀ x ∈ R^n, x^T Ax < 0 =⇒ either A^T x ≤ 0 or A^T x ≥ 0.
− fully copositive (C0^f) matrix if every legitimate PPT of A is a C0-matrix.
− fully semimonotone (E0^f) matrix if every legitimate PPT of A is an E0-matrix.

− almost P0(P)-matrix if det A_{αα} ≥ 0 (> 0) ∀ α ⊂ {1, 2, ..., n} and det A < 0.
− an almost N0(N)-matrix if det A_{αα} ≤ 0 (< 0) ∀ α ⊂ {1, 2, ..., n} and det A > 0.
− almost copositive matrix if it is copositive of order n − 1 but not of order n.
− almost E matrix if it is E of order n − 1 but not of order n.
− almost fully copositive (almost C0^f) matrix if its PPTs are either C0 or almost C0 and there exists at least one PPT M of A for some α ⊂ {1, 2, ..., n} that is almost C0.
− copositive of exact order k matrix if it is copositive up to order n − k.
− Z-matrix if a_ij ≤ 0 for all i ≠ j.
− K0-matrix [3] if it is a Z-matrix as well as a P0-matrix.
− connected (Ec) matrix if ∀ q, LCP(q, A) has a connected solution set.
− R-matrix if ∄ z ∈ R^n_+, t (≥ 0) ∈ R satisfying

A_{i·} z + t = 0 for all i such that z_i > 0, and A_{i·} z + t ≥ 0 for all i such that z_i = 0.

− R0-matrix if LCP(0, A) has a unique solution.
− Qb-matrix if SOL(q, A) is nonempty and compact ∀ q ∈ R^n.
− Q-matrix if for every q ∈ R^n, LCP(q, A) has a solution.
− Q0-matrix if for any q ∈ R^n, feasibility of (1.1) implies that LCP(q, A) has a solution.
− completely Q-matrix (Q̄) if all its principal submatrices are Q-matrices.
− completely Q0-matrix (Q̄0) if all its principal submatrices are Q0-matrices.
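Several of the classes above are defined through principal minors or sign conditions and can be tested directly for small matrices. The sketch below (our own brute-force helpers, exponential in n and intended only for small examples) verifies the P, P0 and Z definitions exactly as stated above.

```python
import itertools
import numpy as np

def principal_minors(A):
    """Yield (alpha, det A[alpha, alpha]) over all nonempty index sets alpha."""
    n = A.shape[0]
    for r in range(1, n + 1):
        for alpha in itertools.combinations(range(n), r):
            yield alpha, float(np.linalg.det(A[np.ix_(alpha, alpha)]))

def is_P0(A, tol=1e-10):
    return all(d >= -tol for _, d in principal_minors(A))

def is_P(A, tol=1e-10):
    return all(d > tol for _, d in principal_minors(A))

def is_Z(A):
    off_diagonal = ~np.eye(A.shape[0], dtype=bool)
    return bool(np.all(np.asarray(A)[off_diagonal] <= 0))

# True for the P0 matrix used later in Example 4.9.
print(is_P0(np.array([[1.0, -1.0, -2.0], [-1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])))
```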

We state some game theoretic results due to von Neumann [27] which are needed in the sequel. In a two person zero-sum matrix game, let v(A) denote the value of the game corresponding to the pay-off matrix A. The value of the game v(A) is positive (nonnegative) if there exists a 0 ≠ x ≥ 0 such that Ax > 0 (Ax ≥ 0). Similarly, v(A) is negative (nonpositive) if there exists a 0 ≠ y ≥ 0 such that A^T y < 0 (A^T y ≤ 0).
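For small matrices the value v(A) can be computed by the standard linear programming formulation of a matrix game. The sketch below (the function name is ours) follows the convention of the text, namely that v(A) > 0 exactly when some 0 ≠ x ≥ 0 gives Ax > 0, by solving max t subject to (Ax)_i ≥ t for all i, Σ_i x_i = 1, x ≥ 0 with scipy.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value v(A) of the two person zero-sum game with pay-off matrix A
    (convention: v(A) > 0 iff some 0 != x >= 0 gives Ax > 0)."""
    n = A.shape[0]
    c = np.zeros(n + 1); c[-1] = -1.0                     # minimise -t
    A_ub = np.hstack([-A, np.ones((n, 1))])               # t - (Ax)_i <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))]) # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return -res.fun

print(game_value(np.eye(3)))    # 1/3 for the 3 x 3 identity matrix
```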

The following result was proved by Väliaho [26] for symmetric almost copositive matrices. However this is true for nonsymmetric almost copositive matrices as well.

Theorem 2.1. [8] Let A ∈ Rn×n be almost copositive. Then A is PSD of order n−1, and A is PD of order n − 2.

Theorem 2.2. [17] Suppose A ∈ R^{n×n} is a PSBD matrix and rank(A) ≥ 2. Then A^T is PSBD and at least one of the following conditions holds:
(i) A is a PSD matrix.
(ii) (A + A^T) ≤ 0.
(iii) A ∈ C0^*.

Theorem 2.3. [17] Suppose A ∈ R^{n×n} is a PSBD matrix with rank(A) ≥ 2 and A + A^T ≤ 0. If A is not a skew-symmetric matrix, then A ≤ 0.

Here we consider some more results which will be required in the next section.

Theorem 2.4. [3] Suppose A ∈ R^{n×n} satisfies the (++)-property. If A ∈ E0 then A ∈ P0.

Theorem 2.5. [13] Let A ∈ R^{n×n} be given. Consider the statements
(i) A ∈ R.
(ii) A ∈ int(Q) ∩ R0.
(iii) the zero vector is a stable solution of LCP(0, A).
(iv) A ∈ Q ∩ R0.
(v) A ∈ R0.
Then the following implications hold: (i) =⇒ (ii) =⇒ (iii) =⇒ (iv) =⇒ (v). Moreover, if A ∈ E0, then all five statements are equivalent.

Theorem 2.6. [13] Let A ∈ int(Q) ∩ R0. If LCP(q, A) has a unique solution x^*, then LCP(q, A) is stable at x^*.

Theorem 2.7. [23] Let A ∈ R^{n×n} be such that for some index set α (possibly empty), A_{ᾱᾱ} = 0. If A_{αα} ∈ P0 ∩ Q, then SOL(q, A) is connected for every q.

Theorem 2.8. [2] Suppose A ∈ Ec ∩ Q0. Then Lemke's algorithm terminates at a solution of LCP(q, A) or determines that FEA(q, A) = ∅.

Theorem 2.9. [12] Suppose that A is pseudomonotone on R^n_+. Then A is a P0-matrix.

Theorem 2.10. [15] Suppose that A ∈ R^{n×n} ∩ Ec. Then A ∈ E0^f.

Theorem 2.11. [26] Any 2 × 2 P0-matrix with positive diagonal is sufficient.

Theorem 2.12. [10] L-matrices are Q0-matrices.

Theorem 2.13. [5] Let A ∈ R^{n×n} where n ≥ 2. Then A is sufficient if and only if A and each of its principal pivot transforms are sufficient of order 2.

Theorem 2.14. [22] [19] Suppose A ∈ E0 ∩ R^{n×n}. If A ∈ R0 then A ∈ Q.

Theorem 2.15. [11] Qb = Q ∩ R0.

3 Some properties of E0^s-matrices

We begin with the definition of a semimonotone star (E0^s) matrix.

Definition 3.1. A semimonotone matrix A is said to be a semimonotone star (E0^s) matrix if x^T Ax = 0, Ax ≥ 0, x ≥ 0 =⇒ A^T x ≤ 0.

Example 3.1. Consider the matrix A = \begin{pmatrix} 0 & −5 \\ 2 & 0 \end{pmatrix}. Now x^T Ax = −3 x_1 x_2. Consider x = (k_1, k_2)^T, where k_1, k_2 ≥ 0. Hence we consider the following cases.
Case I: For k_1 = k_2 = 0, x = 0, Ax = 0, x^T Ax = 0 implies A^T x = 0.
Case II: For k_1 > 0, k_2 = 0, x ≥ 0, Ax ≥ 0, x^T Ax = 0 implies A^T x ≤ 0.
Case III: For k_1 = 0, k_2 > 0, x ≥ 0. However Ax ≱ 0.
Hence A ∈ E0^s.
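Since the hypothesis x ≥ 0, Ax ≥ 0, x^T Ax = 0 exactly describes SOL(0, A), the star condition of Definition 3.1 can be verified for small matrices by sweeping over support patterns of SOL(0, A) and maximising each coordinate of A^T x over the corresponding polyhedral piece. The following Python sketch is our own construction using scipy's LP solver; it checks only the star condition (semimonotonicity of A must be verified separately) and automates the case analysis of Example 3.1.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def is_semimonotone_star(A, tol=1e-9):
    """Check the star condition: x >= 0, Ax >= 0, x^T A x = 0 ==> A^T x <= 0.
    SOL(0, A) is a union of polyhedral cones indexed by support patterns alpha;
    on each piece (normalised by sum(x) = 1) we maximise every coordinate of
    A^T x by linear programming."""
    n = A.shape[0]
    for r in range(1, n + 1):
        for alpha in itertools.combinations(range(n), r):
            comp = [i for i in range(n) if i not in alpha]
            bounds = [(0, None) if i in alpha else (0, 0) for i in range(n)]
            A_eq = np.vstack([A[list(alpha), :], np.ones((1, n))])  # (Ax)_alpha = 0, sum x = 1
            b_eq = np.zeros(len(alpha) + 1); b_eq[-1] = 1.0
            A_ub = -A[comp, :] if comp else None                    # (Ax)_comp >= 0
            b_ub = np.zeros(len(comp)) if comp else None
            for j in range(n):
                # maximise (A^T x)_j = A[:, j] . x over this piece of SOL(0, A)
                res = linprog(-A[:, j], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                              b_eq=b_eq, bounds=bounds, method="highs")
                if res.status == 0 and -res.fun > tol:
                    return False, res.x        # witness x with (A^T x)_j > 0
    return True, None

# Example 3.1: the star condition holds.
print(is_semimonotone_star(np.array([[0.0, -5.0], [2.0, 0.0]])))   # (True, None)
```

The support sweep is exact up to LP tolerances because every nonzero element of SOL(0, A) lies, after normalisation, in one of the enumerated pieces.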

The following result shows that E0^s-matrices are invariant under principal rearrangement and scaling operations.

Theorem 3.16. If A ∈ R^{n×n} is an E0^s-matrix and P ∈ R^{n×n} is any permutation matrix, then PAP^T ∈ E0^s.

Proof. Let A ∈ E0^s and let P ∈ R^{n×n} be any permutation matrix. Then PAP^T is an E0-matrix by Theorem 4.3 of [25]. Now for any x ∈ R^n_+, let y = P^T x. Note that x^T PAP^T x = y^T Ay = 0 and AP^T x = Ay ≥ 0 imply A^T y = A^T P^T x ≤ 0, since P is a permutation matrix. Hence (PAP^T)^T x = PA^T P^T x ≤ 0. It follows that PAP^T is an E0^s-matrix. The converse of the above theorem follows from the fact that P^T P = I and A = P^T(PAP^T)(P^T)^T.

Example 3.2. Let A = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 2 \\ −4 & −5 & 0 \end{pmatrix}. Clearly, A ∈ E0. The nonzero vectors in SOL(0, A) are of the form x = (0, 0, k)^T for k > 0, and for such x the inequality A^T x ≤ 0 holds. Therefore A ∈ E0^s. Consider P = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}. We get PAP^T = \begin{pmatrix} 0 & −4 & −5 \\ 1 & 0 & 1 \\ 2 & 2 & 0 \end{pmatrix}. Hence x^T PAP^T x = 0, PAP^T x ≥ 0, x ≥ 0 imply (PAP^T)^T x = PA^T P^T x ≤ 0. Therefore PAP^T is an E0^s-matrix.

Theorem 3.17. Suppose A ∈ R^{n×n} and let D ∈ R^{n×n} be a positive diagonal matrix. Then A ∈ E0^s if and only if DAD^T ∈ E0^s.

Proof. Let A ∈ E0^s. For any x ∈ R^n_+, let y = D^T x. Note that x^T DAD^T x = y^T Ay = 0 and AD^T x = Ay ≥ 0 imply A^T y = A^T D^T x ≤ 0, since D is a positive diagonal matrix. Hence (DAD^T)^T x = DA^T D^T x ≤ 0. Thus DAD^T ∈ E0^s. The converse follows from the fact that D^{-1} is a positive diagonal matrix and A = D^{-1}(DAD^T)(D^{-1})^T.

The following example shows that A ∈ E0^s does not imply (A + A^T) ∈ E0^s.

Example 3.3. Let A = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 1 \\ −1 & −1 & 0 \end{pmatrix}. Clearly A ∈ E0^s, since x^T Ax = 0, Ax ≥ 0, x ≥ 0 imply A^T x ≤ 0. It is easy to show that A + A^T = \begin{pmatrix} 0 & 3 & 0 \\ 3 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} is not an E0^s-matrix.

We show that a PPT of an E0^s-matrix need not be an E0^s-matrix.

Example 3.4. Consider the matrix A = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 1 \\ −1 & −1 & 0 \end{pmatrix}. Note that A ∈ E0^s and it is easy to show that A^{-1} = \frac{1}{3}\begin{pmatrix} −1 & 1 & −1 \\ 1 & −1 & −2 \\ 2 & 1 & 2 \end{pmatrix} is not an E0^s-matrix. Therefore a PPT of an E0^s-matrix need not be an E0^s-matrix.

Note that a matrix is in E0 if and only if its transpose is in E0. We show that A ∈ E0^s does not imply A^T ∈ E0^s in general.

Example 3.5. Consider the matrix A = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 1 \\ −1 & −1 & 0 \end{pmatrix}. Note that A ∈ E0^s. However A^T = \begin{pmatrix} 0 & 2 & −1 \\ 1 & 0 & −1 \\ 1 & 1 & 0 \end{pmatrix} is not an E0^s-matrix.

Now we show a condition under which A^T satisfies the E0^s-property.

Theorem 3.18. Suppose that A is pseudomonotone on R^n_+ and A^T ∈ R0. Then A^T satisfies the E0^s-property.

Proof. Since A is pseudomonotone on R^n_+, A is a P0-matrix by Theorem 2.9. Hence A^T ∈ E0. We have to show that A^T satisfies the following property:
0 ≠ x ≥ 0, A^T x ≥ 0 and x^T A^T x = 0 =⇒ Ax ≤ 0.
Since A^T ∈ R0, the system 0 ≠ x ≥ 0, A^T x = 0 has no solution. Therefore for at least one i, (A^T x)_i > 0. Let us consider the vector e_i which has one at the ith position and zeros elsewhere. Now consider y = e_i + λ e_j, where i ≠ j and λ ≥ 0. Then, for any small δ > 0, we get
(x − δy)^T A(δy) = δ[(A^T x)_i + λ(A^T x)_j − δ y^T Ay] ≥ 0.
By pseudomonotonicity, (x − δy)^T Ax ≥ 0. Thus y^T Ax ≤ 0. This gives (Ax)_i + λ(Ax)_j ≤ 0. As λ ≥ 0 and j ≠ i are arbitrary, (Ax)_i ≤ 0 and (Ax)_j ≤ 0 for every j. Hence Ax ≤ 0.

Corollary 3.1. Suppose that A is pseudomonotone on R^n_+ and satisfies one of the following conditions:

(i) A is invertible.

(ii) A is normal, i.e., AA^T = A^T A.

Then A^T ∈ E0^s.

Note that C0^* ⊆ E0^s.

Definition 3.2. A matrix A is said to be a completely semimonotone star (Ē0^s) matrix if all its principal submatrices are semimonotone star matrices.

We observe that E0^s is not a complete class, which can be illustrated with the following example.

Example 3.6. Consider the matrix A = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 1 \\ −4 & −5 & 0 \end{pmatrix}. Note that A ∈ E0^s. It is easy to show that the principal submatrix A_{αα} = \begin{pmatrix} 0 & 1 \\ 2 & 0 \end{pmatrix} with α = {1, 2} is not an E0^s-matrix.

Theorem 3.19. Let A ∈ PSBD ∩ E0 with rank(A) ≥ 2. Further suppose A is not a skew symmetric matrix. Then A is an E0^s-matrix.

Proof. Let A be a PSBD matrix with rank(A) ≥ 2. By Theorem 2.2 we have the following three cases.
Case I: A is a PSD matrix. This implies A ∈ E0^s.
Case II: A ∈ C0^*. This implies A ∈ E0^s.
Case III: (A + A^T) ≤ 0. As A is a PSBD matrix with rank(A) ≥ 2 and A is not skew symmetric, A ≤ 0 by Theorem 2.3. Note that A ∈ E0 by assumption. To show A ∈ E0^s, consider x^T Ax = 0, Ax ≥ 0, x ≥ 0; since A ≤ 0 and x ≥ 0, we get A^T x ≤ 0. Hence A ∈ E0^s.

Example 3.7. Consider the matrix A = \begin{pmatrix} 0 & 3 \\ −1 & 0 \end{pmatrix}. It is easy to show that A is a PSBD matrix with rank(A) ≥ 2. Hence by Theorem 3.19, A ∈ E0^s.

4 PPT based matrix classes under the E0^s-property

We consider some PPT based matrix classes with the E0^s-property in the context of the linear complementarity problem to show that these classes are processable by Lemke's algorithm under certain conditions. We settle the processability of Lemke's algorithm through identification of a new subclass of P0 ∩ Q0-matrices.

Definition 4.3. A matrix A ∈ E0^s is said to be an Ẽ0^s-matrix if for every x ∈ SOL(0, A), (A^T x)_i ≠ 0 =⇒ (Ax)_i ≠ 0 ∀ i.

Example 4.8. Consider A = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 2 \\ −2 & −4 & 0 \end{pmatrix}. Note that A ∉ C0^*. For k > 0 and x = (0, 0, k)^T, x ≥ 0, Ax ≥ 0, x^T Ax = 0 implies A^T x ≤ 0. Hence A ∈ E0^s. Taking k = 1, A^T x = (−2, −4, 0)^T and Ax = (1, 2, 0)^T. Therefore ∀ i, (A^T x)_i ≠ 0 =⇒ (Ax)_i ≠ 0. Hence A ∈ Ẽ0^s.

Remark 4.1. It is easy to show that C0^+ ⊆ Ẽ0^s.
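The extra requirement in Definition 4.3 is a pointwise sign condition and is easy to test once an element of SOL(0, A) is in hand. A minimal sketch (our own helper, checking the implication for one given x only; a complete test would range over all of SOL(0, A)):

```python
import numpy as np

def tilde_condition(A, x, tol=1e-9):
    """Check, for one x in SOL(0, A), the implication of Definition 4.3:
    (A^T x)_i != 0  ==>  (Ax)_i != 0 for every i."""
    ATx, Ax = A.T @ x, A @ x
    return bool(np.all((np.abs(ATx) <= tol) | (np.abs(Ax) > tol)))

# Example 4.8 with x = (0, 0, 1)^T: A^T x = (-2, -4, 0), Ax = (1, 2, 0).
A = np.array([[0.0, 1.0, 1.0], [2.0, 0.0, 2.0], [-2.0, -4.0, 0.0]])
print(tilde_condition(A, np.array([0.0, 0.0, 1.0])))                 # True
```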

Note that not every E0^s-matrix is an Ẽ0^s-matrix. We consider the following example from [14].

Example 4.9. Consider A = \begin{pmatrix} 1 & −1 & −2 \\ −1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. Note that A ∈ P0. Hence A ∈ E0. The only nonzero vectors in SOL(0, A) are of the form x = (k, k, 0)^T for k > 0. Now for such x, A^T x ≤ 0 holds. Hence A ∈ E0^s. Now A^T x = (0, 0, −2k)^T and Ax = (0, 0, 0)^T. Note that (A^T x)_3 ≠ 0 but (Ax)_3 = 0. Hence A ∉ Ẽ0^s.

Theorem 4.20. Let A ∈ R^{n×n} ∩ Ẽ0^s. Assume that every legitimate PPT of A is either almost E or completely Ẽ0^s. Then A ∈ P0.

T  −(A x)i  , (Ax)i =06 , Dii =  (Ax)i  0, (Ax)i =0.    s Where Dii denotes the ith diagonal of D1. So A ∈ E0 ∩ L2. Therefore A ∈ Q0 by the Theorem 2.12. For rest of the proof, we follow the approach given by Das [8]. However for the sake of completeness we give the proof. Note that every legitimate ˜s PPT of A is either almost E or completely E0. Suppose M is a PPT of A so that M ∈ almost E. Then all principal submatrices of M upto n − 1 order are Q.¯ Hence ¯ ˜s M ∈ Q0. Since the PPT of A is either almost E or completely E0, it follows that all proper principal submatrices are P0. Now to complete the proof, we need to show that det A ≥ 0. Suppose not. Then −1 −1 det A< 0. This implies that A is an almost P0-matrix. Therefore A ∈ N0. If A ∈ almost E then this contradicts that the diagonal entries are positive. Therefore det A ≥ 0. It follows that A ∈ P0.

Corollary 4.2. Let A ∈ R^{n×n} ∩ Ẽ0^s. Assume that every legitimate PPT of A is either almost E or completely Ẽ0^s. Then LCP(q, A) is processable by Lemke's algorithm.

Earlier Das [8] proposed exact order 2 C0^f-matrices in connection with PPT based matrix classes. We define exact order k C0^f-matrices.

Definition 4.4. A is said to be an exact order k C0^f-matrix if its PPTs are either exact order k C0 or E0 and there exists at least one PPT M of A for some α ⊂ {1, 2, ..., n} that is exact order k C0.

We prove the following theorem.

Theorem 4.21. Let A ∈ Ẽ0^s ∩ exact order k C0^f (n ≥ k + 2). Assume that every PPT of A is either exact order k C0 or E0 with at least k positive diagonal entries. Then LCP(q, A) is processable by Lemke's algorithm.

Proof. We show that A ∈ P0. Suppose M is a PPT of A such that M ∈ exact order k C0. By Theorem 2.1, all the principal submatrices of order n − k of M are PSD. Now, to show M ∈ P0, it is enough to show that det M^{(n−k+1)} ≥ 0 for every principal submatrix M^{(n−k+1)} of order n − k + 1. Suppose not. Then det M^{(n−k+1)} < 0, so B = M^{(n−k+1)} is an almost P0-matrix. Therefore B^{-1} ∈ N0 and there exists a nonempty subset α ⊂ {1, 2, ..., n − 1} satisfying [8]

B^{-1}_{αα} ≤ 0, B^{-1}_{ᾱᾱ} ≤ 0, B^{-1}_{αᾱ} ≥ 0 and B^{-1}_{ᾱα} ≥ 0. (4.1)

By the assumption, B^{-1} ∈ E0 with at least k positive diagonal entries. This contradicts Equation (4.1). Therefore det M^{(n−k+1)} ≥ 0. Now by the same argument as above, we show that det M ≥ 0. Therefore it follows that A ∈ P0. Hence A ∈ P0 ∩ Ẽ0^s. So LCP(q, A) is processable by Lemke's algorithm.

A matrix A is said to satisfy the (++)-property if there exists a matrix X ∈ K0 such that AX is a Z-matrix. For details see [3]. Now we establish the condition under which a matrix A satisfying the (++)-property is sufficient.

Theorem 4.22. Suppose A ∈ R^{n×n} ∩ E0 satisfies the (++)-property. If each legitimate PPT of A is either almost C0 or completely Ẽ0^s with full rank second order principal submatrices, then A is sufficient.

Proof. As A ∈ E0 with the (++)-property, A ∈ P0 by Theorem 2.4. Suppose M is a PPT of A. We consider the following cases.

Case I: If M is almost C0, then by Theorem 2.1, M is PSD of order n − 1. Hence M is PSD of order 2 also. So by Theorem 2.13, M is sufficient of order n − 1.
Case II: If M is completely Ẽ0^s, then the sign patterns of all 2 × 2 principal submatrices of M fall into the following subcases:
Subcase I: If the sign pattern is \begin{pmatrix} 0 & + \\ − & 0 \end{pmatrix} or \begin{pmatrix} 0 & − \\ + & 0 \end{pmatrix}, then these two patterns are sufficient.
Subcase II: If the sign pattern is \begin{pmatrix} + & + \\ − & + \end{pmatrix} or \begin{pmatrix} + & − \\ + & + \end{pmatrix}, then by Theorem 2.11 these two patterns are sufficient.
Subcase III: If the sign pattern is \begin{pmatrix} + & + \\ + & + \end{pmatrix}, then by Theorem 2.11 this pattern is sufficient.

Subcase IV: If the sign pattern is \begin{pmatrix} + & − \\ − & + \end{pmatrix}, then by Theorem 2.11 this pattern is sufficient.
Thus every 2 × 2 principal submatrix A_{αα} of each PPT is sufficient. By Theorem 2.13, A is sufficient.

5 Properties of SOL(q, A) under the E0^s-property

We show that the solution set of LCP(q, A) is connected if A ∈ E0^s has the following structure: A = \begin{pmatrix} A_{αα} & + \\ − & 0 \end{pmatrix}, where A_{αα} ∈ R^{(n−1)×(n−1)}.

Theorem 5.23. Let A ∈ R^{n×n} with A = \begin{pmatrix} A_{αα} & + \\ − & 0 \end{pmatrix} and A_{αα} ∈ P0. Then A is an Ẽ0^s-matrix.

Proof. First we show that A = \begin{pmatrix} A_{αα} & + \\ − & 0 \end{pmatrix} with A_{αα} ∈ P0 is an E0-matrix. Let (u_α, v) ∈ R^n_+ be a given vector, where α = {1, 2, ..., n − 1}. Without loss of generality we assume u_α ≠ 0 (if u_α = 0 then v > 0 and the last coordinate of A(u_α, v) equals 0, so the E0 condition holds trivially). Now as A_{αα} ∈ P0, we can write A_{αα} ∈ E0. By the semimonotonicity of A_{αα} there exists an index i such that (u_α)_i > 0 and (A_{αα} u_α)_i ≥ 0. For such an index i, (A_{αα} u_α + A_{αᾱ} v)_i ≥ 0, since the entries of A_{αᾱ} are positive. Hence A ∈ E0. We consider the following two cases:
Case I: First take x = [x_α, 0]^T with 0 ≠ x_α ≥ 0. Then x^T Ax = 0 and x ≥ 0 may hold, but in this case Ax ≱ 0, since (Ax)_n = A_{ᾱα} x_α < 0.
Case II: Take x = [x_α, x_ᾱ]^T with 0 ≠ x_α ≥ 0 and x_ᾱ ≥ 0. Then x^T Ax = 0 may hold for such x, but again Ax ≱ 0.
So the vectors x for which x^T Ax = 0, Ax ≥ 0, x ≥ 0 are the zero vector and x = [0, 0, ..., c]^T with c > 0, and in both cases A^T x ≤ 0. Hence A is an E0^s-matrix. Now it is easy to show that for x = [0, 0, ..., c]^T, (A^T x)_i ≠ 0 =⇒ (Ax)_i ≠ 0 for each i. Hence A ∈ Ẽ0^s.

Remark 5.2. Suppose A ∈ R^{n×n} with A = \begin{pmatrix} A_{αα} & + \\ − & 0 \end{pmatrix} and A_{αα} ∈ P0 ∩ Q. Then A is a connected matrix (Ec) by Theorem 2.7 ([23]).

Remark 5.3. Suppose A ∈ R^{n×n} with A = \begin{pmatrix} A_{αα} & + \\ − & 0 \end{pmatrix} and A_{αα} ∈ P0 ∩ Q. Now as A ∈ Ec, we have A ∈ Ec ∩ Q0, and by Theorem 2.8 Lemke's algorithm processes LCP(q, A).

Theorem 5.24. Suppose that A ∈ R^{n×n} with A = \begin{pmatrix} A_{αα} & + \\ − & 0 \end{pmatrix} and A_{αα} ∈ P0 ∩ Q. Then A ∈ P0.

Proof. Since A ∈ R^{n×n} with A = \begin{pmatrix} A_{αα} & + \\ − & 0 \end{pmatrix} and A_{αα} ∈ P0 ∩ Q, by Remark 5.2, A ∈ Ec. Again by Theorem 2.10, A ∈ E0^f. As A ∈ Ẽ0^s by Theorem 5.23, A ∈ L by Theorem 4.20. By applying degree theory, A ∈ P0 in view of Corollary 3.1 of [18].

Remark 5.4. Gowda and Jones [15] raised the following open problem: Is it true that P0 ∩ Q0 = Ec ∩ Q0? Cao and Ferris [2] showed that P0 ∩ Q0 = Ec ∩ Q0 is true for second order matrices. We settle the above open problem partially by considering the subclass P0 ∩ Ẽ0^s of P0 ∩ Q0.

In general, SOL(q, A) is not bounded for every q ∈ int pos[−A, I] when A ∈ Ẽ0^s. Here we establish the following results.

Theorem 5.25. Let A ∈ Ẽ0^s and suppose SOL(q, A) is not bounded for all q ∈ int pos[−A, I]. Suppose r ∈ K(A) and there exist vectors z and z^λ = ẑ + λz such that z ∈ SOL(0, A) \ {0}, z^λ ∈ SOL(q, A) ∀ λ ≥ 0 and w ∈ SOL(r, A). Then (z^λ − w)_α (A(z^λ − w))_α < 0, where α = {i : z_i ≠ 0}.

Proof. Suppose A ∈ Ẽ0^s and SOL(q, A) is not bounded for all q ∈ int pos[−A, I]. Note that A ∈ E0^s ∩ L2 as shown in Theorem 4.20, q ∈ int pos[−A, I], and there exist vectors z and z^λ = ẑ + λz such that z ∈ SOL(0, A) \ {0} and z^λ ∈ SOL(q, A) ∀ λ ≥ 0. We select r ∈ K(A) in such a way that r_i − q_i < 0 for i ∈ α = {i : z_i ≠ 0}. Now for sufficiently large λ, (z^λ − w)_α > 0 and w ∈ SOL(r, A). We write
(A(z^λ − w))_α = −q_α − (Aw)_α ≤ −q_α + r_α < 0.
This implies
(z^λ − w)_α (A(z^λ − w))_α < 0.

However, strict inequality need not hold when α ≠ {i : z_i ≠ 0}.

Theorem 5.26. Let A ∈ Ẽ0^s and suppose SOL(q, A) is not bounded for all q ∈ int pos[−A, I]. Suppose r ∈ K(A) and there exist vectors z and z^λ = ẑ + λz such that z ∈ SOL(0, A) \ {0}, z^λ ∈ SOL(q, A) ∀ λ ≥ 0 and w ∈ SOL(r, A). Then (z^λ − w)_α (A(z^λ − w))_α ≤ 0 for α = {i : ẑ_i ≥ 0, z_i = 0}.

Proof. The first part of the proof follows from the proof of Theorem 5.25. Now we select an r ∈ K(A) and consider α = {i : z_i = 0}. Then r_i − q_i ≥ 0 for i ∈ α. Now for sufficiently large λ, (z^λ − w)_α > 0 and w ∈ SOL(r, A). We consider the following two cases.
Case I: Let α = {i : ẑ_i > 0, z_i = 0}. Then r_i − q_i = 0. We write
(z^λ − w)_i (A(z^λ − w))_i = (z^λ − w)_i ((Az^λ)_i − (Aw)_i + q_i − r_i)
= z^λ_i ((Az^λ)_i + q_i) − w_i ((Az^λ)_i + q_i) + z^λ_i (−(Aw)_i − r_i) − w_i (−(Aw)_i − r_i) ≤ 0.

Case II: Let α = {i : z_i = ẑ_i = 0}. Then r_i − q_i > 0. We write
(z^λ − w)_i (A(z^λ − w))_i = −w_i ((Az^λ)_i − (Aw)_i)
≤ −w_i ((Az^λ)_i − (Aw)_i + q_i − r_i)
= −w_i (Az^λ + q)_i + w_i (Aw + r)_i
= −w_i (Az^λ + q)_i ≤ 0.

Now we show a condition under which SOL(q, A) is compact when A ∈ Ẽ0^s. To establish the result we use a game theoretic approach and Ville's theorem of the alternative.

Theorem 5.27. Suppose A ∈ Ẽ0^s with v(A) > 0. Then SOL(q, A) is compact.

Proof. By Theorem 4.20, Ẽ0^s ⊆ E0^s ∩ L2. Since v(A) > 0, A ∈ E0 ∩ Q. Now to establish A ∈ R0 it is enough to show that LCP(0, A) has only the trivial solution. Suppose not; then LCP(0, A) has a nontrivial solution, say 0 ≠ x ∈ SOL(0, A), so that 0 ≠ x ≥ 0, Ax ≥ 0 and x^T Ax = 0. Since A ∈ E0^s, we can write A^T x ≤ 0. Thus A^T x ≤ 0, 0 ≠ x ≥ 0 has a solution. According to Ville's theorem of the alternative, there does not exist x > 0 such that Ax > 0. However, Ax > 0, x > 0 has a solution since A ∈ Q. This is a contradiction. Hence LCP(0, A) has only the trivial solution. Therefore A ∈ Q ∩ R0. Now by Theorem 2.15, A ∈ Qb. Hence SOL(q, A) is nonempty and compact.

We illustrate the result with the help of an example.

Example 5.10. Consider the matrix A = \begin{pmatrix} 0 & 2 & 1 \\ 1 & 0 & 1 \\ −2 & −2 & 1 \end{pmatrix}. Now x^T Ax = 3x_1 x_2 + x_3^2 − x_3(x_1 + x_2). We consider the following four cases.
Case I: For x_1 = 0, x_2 = k, x_3 = 0, where k > 0. Here x ≥ 0 and x^T Ax = 0 hold, but in this case Ax ≱ 0.
Case II: For x_1 = k, x_2 = 0, x_3 = 0, where k > 0. Here x ≥ 0 and x^T Ax = 0 hold, but in this case Ax ≱ 0.
Case III: For x_1 = 0, x_2 = k, x_3 = k, where k > 0. Here x ≥ 0 and x^T Ax = 0 hold, but in this case Ax ≱ 0.
Case IV: For x_1 = k, x_2 = 0, x_3 = k, where k > 0. Here x ≥ 0 and x^T Ax = 0 hold, but in this case Ax ≱ 0.
Hence the zero vector is the only vector for which x ≥ 0, Ax ≥ 0, x^T Ax = 0, and the implication A^T x ≤ 0 holds trivially. So A ∈ E0^s. Also it is clear that A ∈ Ẽ0^s. Here we get that LCP(0, A) has a unique solution. Hence A ∈ R0.

We state some notions of stability of a linear complementarity problem at a solution point.

Definition 5.5. A solution x^* is said to be stable if there are neighborhoods V of x^* and U of (q, A) such that
(i) for all (q̄, Ā) ∈ U, the set SOL(q̄, Ā) ∩ V ≠ ∅;
(ii) sup{‖y − x^*‖ : y ∈ SOL(q̄, Ā) ∩ V} goes to zero as (q̄, Ā) approaches (q, A).

Definition 5.6. A solution x^* is said to be strongly stable if there are neighborhoods V of x^* and U of (q, A) such that for all (q̄, Ā) ∈ U, the set SOL(q̄, Ā) ∩ V is a singleton.

Definition 5.7. A solution x^* is said to be locally unique if there exists a neighborhood V of x^* such that SOL(q, A) ∩ V = {x^*}.

The following result shows that the solution of LCP(q, A) is stable when A ∈ Ẽ0^s.

Theorem 5.28. Suppose A ∈ Ẽ0^s with v(A) > 0. If LCP(q, A) has a unique solution x^*, then LCP(q, A) is stable at x^*.

Proof. As A ∈ Ẽ0^s with v(A) > 0, by Theorem 5.27, A ∈ R0. Again, as shown in Theorem 2.5, A ∈ int(Q) ∩ R0. So by Theorem 2.6, if LCP(q, A) has a unique solution x^*, then LCP(q, A) is stable at x^*.

6 Iterative algorithm to process LCP(q, A)

Aganagic and Cottle [1] proved that Lemke's algorithm processes LCP(q, A) if A ∈ P0 ∩ Q0. Todd and Ye [24] proposed a projective algorithm to solve the linear programming problem by considering a suitable merit function. Using the same merit function, Pang [22] proposed an iterative descent type algorithm with a fixed value of the parameter κ to process LCP(q, A) where A is a row sufficient matrix. Kojima et al. [16] proposed an interior point method to process P0-matrices using a similar type of merit function. Here we propose a modified version of the interior point algorithm using a dynamic κ at each iteration, in line with Pang [22], for finding a solution of LCP(q, A) given that A ∈ Ẽ0^s. Note that Ẽ0^s contains P0-matrices as well as non-P0-matrices. We prove that the search directions generated by the algorithm are descent directions and show that the proposed algorithm converges to the solution under some defined conditions.

Algorithm.
Let z > 0, w = q + Az > 0, and ψ : R^n_{++} × R^n_{++} → R be such that ψ(z, w) = κ log(z^T w) − Σ_{i=1}^n log(z_i w_i) ≥ 0. Further suppose ρ^k = min_i {z^k_i w^k_i} and κ^k > max(n, (z^k)^T w^k / ρ^k) for the kth iteration.

Step 1: Let β ∈ (0, 1) and σ ∈ (0, 1/2) for the following line search step, and let z^0 be a strictly feasible point of LCP(q, A) with w^0 = q + Az^0 > 0. Set

∇_z ψ_k = ∇_z ψ(z^k, w^k), ∇_w ψ_k = ∇_w ψ(z^k, w^k)

and

Z^k = diag(z^k), W^k = diag(w^k).

Step 2: Now to find the search direction, consider the following problem

minimize (∇_z ψ_k)^T d_z + (∇_w ψ_k)^T d_w

subject to d_w = A d_z, ‖(Z^k)^{-1} d_z‖^2 + ‖(W^k)^{-1} d_w‖^2 ≤ β^2.

Step 3: Find the smallest mk ≥ 0 such that

ψ(z^k + 2^{−m_k} d^k_z, w^k + 2^{−m_k} d^k_w) − ψ(z^k, w^k) ≤ σ 2^{−m_k} [(∇_z ψ_k)^T d^k_z + (∇_w ψ_k)^T d^k_w].

Step 4: Set

(z^{k+1}, w^{k+1}) = (z^k, w^k) + 2^{−m_k} (d^k_z, d^k_w).

Step 5: If (z^{k+1})^T w^{k+1} ≤ ε, where ε is a very small positive quantity, stop; else set k = k + 1 and go to Step 2.

Remark 6.5. The algorithm is based on the existence of a strictly feasible point. As A ∈ Ẽ0^s implies A ∈ Q0 in view of Theorem 4.20, the existence of a strictly feasible point for such a matrix will eventually lead to the solution of LCP(q, A).
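A compact implementation of Steps 1-5 may clarify how the pieces fit together. The sketch below is our own rendering under the stated assumptions (a strictly feasible z^0, κ^k chosen as any value exceeding max(n, (z^k)^T w^k / ρ^k), and the closed-form minimiser of the Step 2 subproblem from Lemma 6.1, namely d_z = −(A^k)^{-1} r^k / τ_k with A^k = (Z^k)^{-2} + A^T (W^k)^{-2} A); it is not the authors' code, and it adds a positivity guard in the line search so that the logarithms stay defined.

```python
import numpy as np

def solve_lcp_interior(A, q, z0, beta=0.5, sigma=0.2, eps=1e-5, max_iter=500):
    """Sketch of Steps 1-5 for LCP(q, A), assuming z0 strictly feasible."""
    n = len(q)
    z = np.asarray(z0, dtype=float)
    w = q + A @ z
    assert np.all(z > 0) and np.all(w > 0), "z0 must be strictly feasible"

    def psi(z, w, kappa):
        # merit function psi(z, w) = kappa log(z^T w) - sum_i log(z_i w_i)
        return kappa * np.log(z @ w) - np.sum(np.log(z * w))

    for _ in range(max_iter):
        gap = z @ w
        if gap <= eps:                       # Step 5: stopping rule
            break
        rho = np.min(z * w)
        kappa = max(n, gap / rho) + 1.0      # any kappa > max(n, z^T w / rho)
        grad_z = kappa * w / gap - 1.0 / z   # gradient of psi w.r.t. z
        grad_w = kappa * z / gap - 1.0 / w   # gradient of psi w.r.t. w
        r = grad_z + A.T @ grad_w            # nonzero for E0 matrices (Lemma 6.1)
        H = np.diag(1.0 / z**2) + A.T @ np.diag(1.0 / w**2) @ A   # A^k, positive definite
        Hinv_r = np.linalg.solve(H, r)
        tau = np.sqrt(r @ Hinv_r) / beta
        d_z = -Hinv_r / tau                  # Step 2: minimiser on the ellipsoid
        d_w = A @ d_z
        slope = grad_z @ d_z + grad_w @ d_w  # equals -tau * beta^2 < 0
        # Step 3: Armijo-type backtracking with step 2^{-m}; the positivity
        # guard keeps the logarithms in psi well defined.
        t = 1.0
        while t > 1e-14 and (np.any(z + t * d_z <= 0) or np.any(w + t * d_w <= 0)
                             or psi(z + t * d_z, w + t * d_w, kappa)
                                - psi(z, w, kappa) > sigma * t * slope):
            t *= 0.5
        z, w = z + t * d_z, w + t * d_w      # Step 4
    return z, w
```

The backtracking step t = 2^{-m_k} reproduces Step 3, and the loop exits as soon as the complementarity gap (z^{k+1})^T w^{k+1} falls below ε, as in Step 5.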

Now we prove the following lemma for E0-matrices.

Lemma 6.1. Suppose A ∈ E0, z > 0, w = q + Az > 0, and ψ : R^n_{++} × R^n_{++} → R is such that ψ(z, w) = κ log(z^T w) − Σ_{i=1}^n log(z_i w_i). Further suppose ρ^k = min_i {z^k_i w^k_i} and κ^k > max(n, (z^k)^T w^k / ρ^k) for each iteration k. Then the search direction (d^k_z, d^k_w) generated by the algorithm is a descent direction.

Therefore ∇wψ(z,w) i > 0 ∀i. In a similar way we can show that ∇zψ(z,w) i > 0 T ∀i. Now A ∈ E0. So A ∈ E0. By the definition of semimonotonicity for ∇wψ(z,w ) > T T 0 ∃ a j such that (A ∇wψ(z,w))j ≥ 0. Therefore (∇zψ(z,w))j +(A ∇vψ(z,w))j =06  k − k T k (A ) 1r for atleast one j. Hence ∇zψ(z,w)+ A ∇vψ(z,w) =6 0. We have dz = − τk , k k k k −2 T k −2 dw = Adz from the algorithm. Again A =(Z ) + A (W ) A is positive definite as xT AT (W )−2Ax =(Ax)T (W )−2Ax =(y)T (W )−2y T −2 n T −2 and (y) (W ) y ≥ 0, ∀ y ∈ R , A (W ) A is positive semidefinite. So τk = (rk)T (Ak)−1rk is positive. Now we show that (∇ ψ )T dk +(∇ ψ )T dk < 0. We p β z k z w k w derive T k T k T T k (∇zψk) dz +(∇wψk) dw = ∇zψk + A ∇wψk dw 1 k T k −1 k 2 = − τk ( (r ) (A ) r ) 2 = −τkβp< 0.

Therefore (∇_w ψ(z, w))_i > 0 ∀ i. In a similar way we can show that (∇_z ψ(z, w))_i > 0 ∀ i. Now A ∈ E0, so A^T ∈ E0. By the definition of semimonotonicity, for ∇_w ψ(z, w) > 0 there exists a j such that (A^T ∇_w ψ(z, w))_j ≥ 0. Therefore (∇_z ψ(z, w))_j + (A^T ∇_w ψ(z, w))_j ≠ 0 for at least one j. Hence ∇_z ψ(z, w) + A^T ∇_w ψ(z, w) ≠ 0. We have d^k_z = −(A^k)^{-1} r^k / τ_k and d^k_w = A d^k_z from the algorithm. Again, A^k = (Z^k)^{-2} + A^T (W^k)^{-2} A is positive definite, since x^T A^T (W)^{-2} A x = (Ax)^T (W)^{-2} (Ax) = y^T (W)^{-2} y ≥ 0 ∀ y ∈ R^n, so A^T (W)^{-2} A is positive semidefinite. So τ_k = \sqrt{(r^k)^T (A^k)^{-1} r^k}/β is positive. Now we show that (∇_z ψ_k)^T d^k_z + (∇_w ψ_k)^T d^k_w < 0. We derive

(∇_z ψ_k)^T d^k_z + (∇_w ψ_k)^T d^k_w = (∇_z ψ_k + A^T ∇_w ψ_k)^T d^k_z = −(1/τ_k)(r^k)^T (A^k)^{-1} r^k = −τ_k β^2 < 0.

We consider ψ(z^k + 2^{−m_k} d^k_z, w^k + 2^{−m_k} d^k_w) − ψ(z^k, w^k) ≤ σ 2^{−m_k} [(∇_z ψ_k)^T d^k_z + (∇_w ψ_k)^T d^k_w]. Since 0 < β, σ < 1, we get ψ(z^k + 2^{−m_k} d^k_z, w^k + 2^{−m_k} d^k_w) − ψ(z^k, w^k) < 0. Hence (d^k_z, d^k_w) is a descent direction in this algorithm.

Remark 6.6. Note that Lemma 6.1 is true for Ẽ0^s-matrices as Ẽ0^s ⊆ E0. We prove the following theorem to show that the proposed algorithm converges to the solution under some defined conditions.

Theorem 6.29. If A ∈ Ẽ0^s and LCP(q, A) has a strictly feasible point, then every accumulation point of {z^k} is a solution of LCP(q, A), i.e., the algorithm converges to a solution.

Proof. If there exists a strictly feasible point, then LCP(q, A) has a solution, since A ∈ Ẽ0^s ⊆ Q0. Let us consider a convergent subsequence {z^k : k ∈ ω}. Suppose z̃ is the limit of the subsequence and w̃ = q + Az̃. Again we know ψ(z̃, w̃) < ∞, so either z̃^T w̃ = 0 or (z̃, w̃) > 0. If the first case happens, then z̃ is a solution. So let us suppose that (z̃, w̃) > 0. Also suppose r̃ and Ã are the limits of the subsequences {r^k : k ∈ ω} and {A^k : k ∈ ω} respectively. Then τ_k converges to τ̃ = \sqrt{r̃^T Ã^{-1} r̃}/β > 0, where Ã remains positive definite. Let (d̃_z, d̃_w) be the limits of the sequences of directions (d^k_z, d^k_w). So from the algorithm we get

d̃_z = −Ã^{-1} r̃ / τ̃,  d̃_w = A d̃_z.

Now as {ψ(z^{k+1}, w^{k+1}) − ψ(z^k, w^k)} converges to zero and since lim m_k = ∞ as k → ∞, the sequences {(z^{k+1}, w^{k+1}) : k ∈ ω} and {(z^k + 2^{−(m_k−1)} d^k_z, w^k + 2^{−(m_k−1)} d^k_w) : k ∈ ω} converge to (z̃, w̃). As m_k is the smallest nonnegative integer satisfying the line search condition, we have

[ψ(z^k + 2^{−(m_k−1)} d^k_z, w^k + 2^{−(m_k−1)} d^k_w) − ψ(z^k, w^k)] / 2^{−(m_k−1)} > −σ β^2 τ_k.

Again, on the other hand, from the algorithm,

[ψ(z^{k+1}, w^{k+1}) − ψ(z^k, w^k)] / 2^{−m_k} ≤ −σ β^2 τ_k.

Now taking the limit k → ∞ in the first inequality, we write

∇_z ψ(z̃, w̃)^T d̃_z + ∇_w ψ(z̃, w̃)^T d̃_w ≥ −σ τ̃ β^2.

Again from Lemma 6.1 we know

(∇_z ψ_k)^T d^k_z + (∇_w ψ_k)^T d^k_w = −τ_k β^2.

Hence by taking the limit k → ∞, we get

∇_z ψ(z̃, w̃)^T d̃_z + ∇_w ψ(z̃, w̃)^T d̃_w = −τ̃ β^2.

Since σ < 1 and τ̃ > 0, we arrive at a contradiction. So our proposed algorithm converges to a solution.

7 Numerical illustration

A numerical example is considered to demonstrate the effectiveness and efficiency of the proposed algorithm.

Example 7.11. We consider the following example of LCP(q, A), where

A = \begin{pmatrix} 0 & 1 & 1 \\ 2 & 0 & 2 \\ −2 & −5 & 0 \end{pmatrix} and q = \begin{pmatrix} −4 \\ −7 \\ 10 \end{pmatrix}.

It is easy to show that A ∈ Ẽ0^s. We apply the proposed algorithm to find a solution of the given problem. According to Theorem 6.29 the algorithm converges to a solution with z^0, w^0 > 0. To start with, we initialize β = 0.5, γ = 0.5, σ = 0.2 and ε = 0.00001. We set z^0 = (1, 1, 5)^T and obtain w^0 = (2, 5, 3)^T.

k = 1:   z^k = (1.05, 1.09, 4.76),  w^k = (1.85, 4.62, 2.42),  d^k_z = (0.106, 0.189, −0.487),  d^k_w = (−0.298, −0.761, −1.155),  ψ(z^k, w^k) = 29.3308
k = 2:   z^k = (1.1, 1.17, 4.53),  w^k = (1.7, 4.25, 1.94),  d^k_z = (0.0853, 0.1607, −0.4551),  d^k_w = (−0.294, −0.74, −0.974),  ψ(z^k, w^k) = 23.2919
...
k = 50:  z^k = (1.07, 1.57, 2.43),  w^k = (0.00608, 0.00389, 0.00281),  d^k_z = (0.00047, −0.00017, −0.00154),  d^k_w = (−0.00171, −0.00215, −0.00009),  ψ(z^k, w^k) = 2.4617
...
k = 96:  z^k = (1.07, 1.57, 2.43),  w^k = (0.00001, 0.000000, 0.00000),  d^k_z = (−0.000001, −0.00000, −0.000003),  d^k_w = (−0.00000, −0.00000, −0.00000),  ψ(z^k, w^k) = 1.1684
k = 97:  z^k = (1.07, 1.57, 2.43),  w^k = (0.00001, 0.000009, 0.000005),  d^k_z = (0.000002, 0.000000, −0.000000),  d^k_w = (−0.000000, −0.000000, −0.000000),  ψ(z^k, w^k) = 1.1684
...
k = 100: z^k = (1.07, 1.57, 2.43),  w^k = (0.00000, 0.00000, 0.00000),  d^k_z = (0.000000, −0.000000, −0.000000),  d^k_w = (−0.000000, −0.000000, 0.00000),  ψ(z^k, w^k) = 1.0565

Table 1: Summary of computation for the proposed algorithm

Table 1 summarizes the computations for the first two iterations, the 50th iteration, the 96th and 97th iterations and the 100th iteration. At the 100th iteration, the sequences {z^k} and {w^k} produced by the proposed algorithm converge to the solution of the given LCP(q, A), i.e. z^* = (1.0714, 1.5714, 2.4285)^T and w^* = (0, 0, 0)^T.
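Reusing the solve_lcp_interior sketch from Section 6 (our illustrative code, not the authors' implementation), the data of Example 7.11 can be fed in directly; with the same starting point the iterates should approach the solution reported in Table 1.

```python
import numpy as np

# Data of Example 7.11, with the same starting point as in the text.
A = np.array([[0.0, 1.0, 1.0], [2.0, 0.0, 2.0], [-2.0, -5.0, 0.0]])
q = np.array([-4.0, -7.0, 10.0])
z0 = np.array([1.0, 1.0, 5.0])          # strictly feasible: w0 = q + A z0 = (2, 5, 3) > 0

z, w = solve_lcp_interior(A, q, z0, beta=0.5, sigma=0.2, eps=1e-5)
print(np.round(z, 4), np.round(w, 4))   # expected near z* = (1.0714, 1.5714, 2.4285), w* = 0
```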

8 Concluding remark

In this article, we show that LCP(q, A) is processable by Lemke's algorithm and the solution set of LCP(q, A) is bounded if A ∈ Ẽ0^s ∩ P0, a subclass of E0^s ∩ P0. It can be shown that a nonnegative matrix with zero diagonal and at least one a_ij > 0 with i ≠ j is not an Ẽ0^s-matrix. Whether a matrix class belongs to P0 ∩ Q0 or not is difficult to verify. We find some conditions under which an Ẽ0^s-matrix belongs to P0 ∩ Q0, which will motivate further study and applications in matrix theory. Finally, we propose an iterative, descent type interior point method to compute a solution of LCP(q, A).

References

[1] Muhamed Aganagić and Richard W Cottle. A constructive characterization of Q0-matrices with nonnegative principal minors. Mathematical Programming, 37(2):223–231, 1987.

[2] Menglin Cao and Michael C Ferris. Pc-matrices and the linear complementarity problem. Linear Algebra and its Applications, 246:299–312, 1996.

[3] Teresa H Chu. On semimonotone matrices with nonnegative principal minors. Linear Algebra and its Applications, 367:147–154, 2003.

[4] Richard W Cottle. The principal pivoting method revisited. Mathematical Programming, 48(1):369–385, 1990.

[5] Richard W Cottle and Sy-Ming Guu. Two characterizations of sufficient matrices. Linear Algebra and its Applications, 170:65–74, 1992.

[6] Richard W Cottle and Richard E Stone. On the uniqueness of solutions to linear complementarity problems. Mathematical Programming, 27(2):191–213, 1983.

[7] RW Cottle, JS Pang, and RE Stone. The Linear Complementarity Problem. Academic Press, New York, 1992.

[8] AK Das. Properties of some matrix classes based on principal pivot transform. Annals of Operations Research, 243(1-2):375–382, 2016.

[9] AK Das, R Jana, et al. On generalized positive subdefinite matrices and interior point algorithm. In International Conference on Frontiers in Optimization: Theory and Applications, pages 3–16. Springer, 2016.

[10] B Curtis Eaves. The linear complementarity problem. Management Science, 17(9):612–634, 1971.

[11] F Flores-Bazán and R López. Characterizing Q-matrices beyond L-matrices. Journal of Optimization Theory and Applications, 127(2):447–457, 2005.

[12] M Seetharama Gowda. Pseudomonotone and copositive star matrices. Linear Algebra and its Applications, 113:107–118, 1989.

[13] M Seetharama Gowda and Jong-Shi Pang. On solution stability of the linear complementarity problem. Mathematics of Operations Research, 17(1):77–83, 1992.

[14] R Jana, AK Das, and S Sinha. On processability of Lemke's algorithm. Applications & Applied Mathematics, 13(2), 2018.

[15] Cristen Jones and M Seetharama Gowda. On the connectedness of solution sets in linear complementarity problems. Linear Algebra and its Applications, 272:33–44, 1998.

[16] Masakazu Kojima, Nimrod Megiddo, Toshihito Noma, and Akiko Yoshise. A unified approach to interior point algorithms for linear complementarity problems, volume 538. Springer Science & Business Media, 1991.

[17] SR Mohan, SK Neogy, and AK Das. More on positive subdefinite matrices and the linear complementarity problem. Linear Algebra and its Applications, 338(1-3):275–285, 2001.

[18] SR Mohan, SK Neogy, and AK Das. On the classes of fully copositive and fully semimonotone matrices. Linear Algebra and its Applications, 323(1-3):87–97, 2001.

[19] SK Neogy and AK Das. On almost type classes of matrices with Q-property. Linear and Multilinear Algebra, 53(4):243–257, 2005.

[20] SK Neogy and AK Das. Principal pivot transforms of some classes of matrices. Linear Algebra and its Applications, 400:243–252, 2005.

[21] SK Neogy, AK Das, and Abhijit Gupta. Generalized principal pivot transforms, complementarity theory and their applications in stochastic games. Optimization Letters, 6(2):339–356, 2012.

[22] Jong-Shi Pang. Iterative descent algorithms for a row sufficient linear complementarity problem. SIAM Journal on Matrix Analysis and Applications, 12(4):611–624, 1991.

[23] T Parthasarathy and B Sriparna. On the solution sets of linear complementarity problems. SIAM Journal on Matrix Analysis and Applications, 21(4):1229–1235, 2000.

[24] Michael J Todd and Yinyu Ye. A centered projective algorithm for linear programming. Mathematics of Operations Research, 15(3):508–529, 1990.

[25] Michael J Tsatsomeros and Megan Wendler. Semimonotone matrices. Linear Algebra and its Applications, 2019.

[26] Hannu Väliaho. Almost copositive matrices. Linear Algebra and its Applications, 116:121–134, 1989.

[27] John von Neumann. A certain zero-sum two-person game equivalent to the optimal assignment problem. Contributions to the Theory of Games, 2:5–12, 1953.