
Linear Complementarity and Oriented Matroids

Robin Leroy Laurin Stenz

December 29, 2015

Contents

1 LCP Duality Theorem

2 Oriented Matroids
 2.1 Definitions
 2.2 Dual
 2.3 OMCP Duality Theorem
 2.4 Basis
 2.5 Tableau

3 Criss-Cross Method
 3.1 Basis-Form Duality Theorem
 3.2 Algorithm

This report relies heavily on the paper Linear Complementarity and Oriented Matroids by Komei Fukuda and Tamás Terlaky. If not mentioned otherwise, one can assume that the ideas originated in this paper.

Robin Leroy, Laurin Stenz 1 Linear Complementarity and Oriented Matroids

1 LCP Duality Theorem

In the following, n is fixed; vectors in R^{2n+1} are indexed from 0, and vectors in R^{2n} or R^n are indexed from 1. Recall the linear complementarity problem (LCP): given an n × n matrix A and b ∈ R^n, find w and z in R^n with nonnegative entries satisfying

w = Az + b,    z^T w = 0.
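As a small illustration, a candidate solution of an LCP can be verified directly from this definition. The following sketch uses a made-up instance (A, b, z are not from the text):

```python
# Check an LCP solution: w = A z + b, w >= 0, z >= 0, z^T w = 0.
# The instance (A, b) and the candidate z are made-up examples.

def lcp_residual(A, b, z):
    """Return w = A z + b for a square matrix A given as a list of rows."""
    n = len(b)
    return [sum(A[i][j] * z[j] for j in range(n)) + b[i] for i in range(n)]

def is_lcp_solution(A, b, z, tol=1e-12):
    w = lcp_residual(A, b, z)
    nonneg = all(zi >= -tol for zi in z) and all(wi >= -tol for wi in w)
    complementary = abs(sum(zi * wi for zi, wi in zip(z, w))) <= tol
    return nonneg and complementary

A = [[2.0, 1.0],
     [1.0, 2.0]]
b = [-1.0, -1.0]

# z = (1/3, 1/3) gives w = A z + b = (0, 0): nonnegative and complementary.
z = [1.0 / 3.0, 1.0 / 3.0]
print(is_lcp_solution(A, b, z))  # True
```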

This problem can be reformulated as follows. Define the subspace

V(A, b) = { x ∈ R^{2n+1} | (−b | −A | 1)x = 0 } = ker(−b | −A | 1),

where 1 denotes the n × n identity matrix.

The above linear complementarity problem is then equivalent to finding x ∈ V (A, b) with nonnegative entries satisfying

x_0 = 1 and x_i x_{n+i} = 0 for i = 1, …, n.

A vector satisfying x_i x_{n+i} = 0 for i = 1, …, n is called complementary. The equivalence is given by

x = (1, z, w)^T,  i.e. x_0 = 1, (x_1, …, x_n) = z, (x_{n+1}, …, x_{2n}) = w.

Indeed, substituting yields

(−b | −A | 1)(1, z, w)^T = −b − Az + w = 0,

z_i w_i = 0 for i = 1, …, n,

where the second condition is equivalent to z^T w = 0, since the z_i w_i are nonnegative. We can generalize the problem further to finding a complementary vector, with nonnegative entries and first entry equal to one, in any given subspace V of R^{2n+1}. We call a vector x ∈ R^{2n+1} or x ∈ R^{2n} strictly sign preserving (s.s.p.) if

x_i x_{n+i} ≥ 0 for i = 1, …, n,

x_j x_{n+j} > 0 for some j ∈ {1, …, n}.


We call it strictly sign reversing (s.s.r.) if

x_i x_{n+i} ≤ 0 for i = 1, …, n,

x_j x_{n+j} < 0 for some j ∈ {1, …, n}.

We will then prove the following result about this problem.

Theorem 1.1
Let V be a subspace of R^{2n+1} satisfying the following conditions:

(1) either V contains no s.s.r. vector x with x_0 = 0, or V contains no s.s.p. vector x with x_0 ≠ 0,

and

(2) either V^⊥ contains no s.s.r. vector y with y_0 = 0, or V^⊥ contains no s.s.p. vector y with y_0 ≠ 0.

Then exactly one of the following statements holds:

(a) There exists a nonnegative complementary vector x ∈ V with x_0 = 1.

(b) There exists a nonnegative complementary vector y ∈ V^⊥ with y_0 = 1.

The above theorem suggests a dual problem to that of finding a nonnegative complementary vector in V with its first entry equal to one, namely finding a nonnegative complementary vector in V^⊥ with its first entry equal to one. Going back to the original LCP, where V = V(A, b), the dual problem becomes to find a nonnegative complementary y ∈ V(A, b)^⊥ with its first entry equal to 1. Since

V(A, b)^⊥ = (ker(−b | −A | 1))^⊥ = coim(−b | −A | 1) = im((−b | −A | 1)^T) = im (−b^T ; −A^T ; 1),

where (−b^T ; −A^T ; 1) denotes the matrix with the blocks −b^T, −A^T, and 1 stacked vertically,

this is equivalent to finding a u ∈ R^n such that

(−b^T ; −A^T ; 1) u = (−b^T u ; −A^T u ; u)

has nonnegative entries, first entry 1, and is complementary. This is, in turn, equivalent to u having nonnegative entries, A^T u having nonpositive entries, −b^T u = 1, and u^T A^T u = 0.
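As a check of this reformulation, the dual vector (−b^T u ; −A^T u ; u) can be built and tested directly. This is a minimal sketch; the instance A, b, u is made up:

```python
# Build y = (-b^T u, -A^T u, u) in R^{2n+1} and check that it is a
# nonnegative complementary vector with first entry 1, as required of a
# dual LCP solution. The instance (A, b, u) is a made-up example.

def dual_vector(A, b, u):
    n = len(u)
    bt_u = sum(b[i] * u[i] for i in range(n))
    At_u = [sum(A[i][j] * u[i] for i in range(n)) for j in range(n)]
    return [-bt_u] + [-v for v in At_u] + list(u)

def is_dual_solution(A, b, u, tol=1e-12):
    y = dual_vector(A, b, u)
    n = len(u)
    return (abs(y[0] - 1.0) <= tol
            and all(v >= -tol for v in y)
            and all(abs(y[i] * y[n + i]) <= tol for i in range(1, n + 1)))

A = [[0.0, -1.0],
     [0.0,  0.0]]
b = [-1.0, 5.0]
u = [1.0, 0.0]
print(is_dual_solution(A, b, u))  # True
```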


We note that, when written directly in terms of the initial A and b, the dual of the LCP looks very different from the primal LCP; the more general linear-algebraic approach yields a much more symmetric statement. The actual proof of this result will use an even more general formulation, using combinatorial objects instead of vector spaces. Before we get to the proof, note that the matrices A such that, for all b, V(A, b) contains no s.s.r. vector whose first entry vanishes and V(A, b)^⊥ contains no s.s.p. vector whose first entry does not vanish are exactly the sufficient matrices (as shown in the talk from November 17th).

2 Oriented Matroids

Remark that the theorem as formulated in the previous section made assumptions and statements only about the signs of the entries of vectors. It is possible to come up with objects more general than vectors that express only those signs.

2.1 Definitions

Definition Notations
Let E be a finite set. Let X, Y ∈ {+, −, 0}^E be so-called sign vectors, or just vectors. Let S ⊂ E.

• E is the ground set.

• 0 := (0 0 ··· 0)^T ∈ {0}^E is the zero sign vector.

• The negative −X is defined componentwise by

(−X)_e := −X_e := + if X_e = −,  − if X_e = +,  0 if X_e = 0.

• The composition X ◦ Y is defined componentwise by

(X ◦ Y)_e = X_e if X_e ≠ 0, and (X ◦ Y)_e = Y_e otherwise.

• D(X, Y) := {e ∈ E | X_e = −Y_e ≠ 0} is the set of separating elements.


• If e ∈ D(X, Y), we say e separates X and Y.

• The support of a sign vector X is

supp(X) := {j ∈ E | X_j ≠ 0}.

• The restriction of a sign vector X omitting S is the subvector

X \ S ∈ {+, −, 0}^{E\S} such that (X \ S)_e = X_e for all e ∉ S.
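The operations above are easy to implement. The sketch below encodes sign vectors as tuples with entries in {+1, −1, 0}, a representational choice of ours, not the text's:

```python
# Sign vectors over a ground set E = {0, ..., m-1}, encoded as tuples with
# entries in {+1, -1, 0}. The integer encoding is our choice.

def negative(X):
    """Componentwise negative -X."""
    return tuple(-x for x in X)

def compose(X, Y):
    """Composition X o Y: take X_e where X_e != 0, else Y_e."""
    return tuple(x if x != 0 else y for x, y in zip(X, Y))

def separating(X, Y):
    """D(X, Y) = { e | X_e = -Y_e != 0 }."""
    return {e for e, (x, y) in enumerate(zip(X, Y)) if x != 0 and x == -y}

def support(X):
    return {e for e, x in enumerate(X) if x != 0}

X = (1, 0, -1, 1)
Y = (-1, 1, 0, 1)
print(compose(X, Y))     # (1, 1, -1, 1)
print(separating(X, Y))  # {0}
print(support(X))        # {0, 2, 3}
```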

Definition Oriented Matroid, Sign Vectors
An oriented matroid M is a pair (E, F), where E is a finite set and F ⊂ {+, −, 0}^E is a set of sign vectors (or just vectors) for which the following axioms are valid.

(OM1) 0 ∈ F.

(OM2) If X ∈ F, then −X ∈ F (symmetry).

(OM3) If X, Y ∈ F, then X ◦ Y ∈ F (composition).

(OM4) If X, Y ∈ F and f ∈ D(X, Y), then there exists Z ∈ F such that

Z_f = 0 and Z_j = (X ◦ Y)_j for all j ∈ E \ D(X, Y) (covector elimination).

Intuitively, a sign vector X will correspond to the vectors x in a subspace of R^E whose entries have the corresponding signs. Under that interpretation, −X corresponds to −x, and the composition X ◦ Y corresponds to "x + εy for some small ε > 0".
Formally, define δ : R^E → {+, −, 0}^E by δ(x)_j = sign(x_j); δ(x) is called the incidence vector of x. Then, for any vector space V, the axioms (OM1)–(OM4) hold for δ(V). The first two axioms are immediate: 0 = δ(0), and −δ(x) = δ(−x). For (OM3), given x, x′ ∈ V, let ε > 0 be such that |x_i| > ε|x′_i| for all i such that x_i ≠ 0; then δ(x) ◦ δ(x′) = δ(x + εx′). Finally,


to prove (OM4), note that f ∈ D(δ(x), δ(x′)) implies x_f = −λx′_f for some λ > 0. Let Z = δ(x + λx′); Z then has the required properties.
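The argument for (OM3) can be traced numerically. In this minimal sketch the vectors and the choice of ε are made up:

```python
# Incidence vector delta(x) and a numerical check of the composition axiom
# (OM3): delta(x) o delta(x') = delta(x + eps * x') for suitably small eps.
# The example vectors are made up.

def delta(x):
    """Incidence (sign) vector of a real vector x."""
    return tuple((v > 0) - (v < 0) for v in x)

def compose(X, Y):
    return tuple(a if a != 0 else b for a, b in zip(X, Y))

x  = (2.0, 0.0, -0.5, 1.0)
xp = (-100.0, 3.0, 1.0, 0.0)

# eps must satisfy |x_i| > eps * |xp_i| whenever x_i != 0; eps = 1e-3 works.
eps = 1e-3
lhs = compose(delta(x), delta(xp))
rhs = delta(tuple(a + eps * b for a, b in zip(x, xp)))
print(lhs == rhs)  # True
```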

2.2 Dual

We can now transfer the notion of orthogonal complement to the world of oriented matroids. Two vectors v and w are orthogonal if Σ_i v_i w_i = 0. A necessary (but not sufficient) condition for that is that the nonzero terms of this sum, if there are any, should not all be of the same sign. Moreover, if Σ_i v_i w_i has nonzero terms of both signs, then the sum can be made 0 by rescaling some entries of w by a positive factor, so that there is a w′ orthogonal to v with the same incidence vector as w. This motivates the following definition: two sign vectors X, Y ∈ {+, −, 0}^E are orthogonal, denoted X ∗ Y, if

either X_g Y_g = 0 for all g ∈ E, or X_f Y_f = −X_g Y_g ≠ 0 for some f, g ∈ E.

The dual of a matroid (E, F) is (E, F^∗), defined by

F^∗ = { Y ∈ {+, −, 0}^E | X ∗ Y for all X ∈ F }.

It can be seen that this is again an oriented matroid, and that F^∗∗ = F. The considerations above show that δ(V)^∗ = δ(V^⊥), so that this does indeed generalize orthogonal complements to oriented matroids.
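Sign-vector orthogonality is a one-line check. The sketch below encodes sign vectors as tuples over {+1, −1, 0} (our choice); the example vectors are made up:

```python
# Orthogonality of sign vectors: X * Y holds iff the products X_e Y_e are
# either all zero, or take both signs. Entries are encoded in {+1, -1, 0}.

def orthogonal(X, Y):
    products = {x * y for x, y in zip(X, Y)}
    if products <= {0}:
        return True
    return 1 in products and -1 in products

print(orthogonal((1, 0, -1), (0, 1, 0)))  # True  (all products zero)
print(orthogonal((1, 1, 0), (1, -1, 0)))  # True  (products +1 and -1)
print(orthogonal((1, 1, 0), (1, 1, 0)))   # False (only +1 occurs)
```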

Let E_{2n} = {1, …, 2n} and Ê_{2n} = {0, 1, …, 2n}. Moreover, let

ī = i + n if 1 ≤ i ≤ n,  and  ī = i − n if n + 1 ≤ i ≤ 2n.

More generally, we will use E_{2n} for a set of 2n elements partitioned into n pairs, and Ê_{2n} for E_{2n} with an additional element g. A sign vector X ∈ F of an oriented matroid (Ê_{2n}, F) is said to be complementary under the same condition as before, namely if X_i X_ī = 0 for all i ∈ E_{2n}. Note that x ∈ R^{2n+1} is complementary if and only if δ(x) is. Similarly, define strictly sign preserving and strictly sign reversing sign vectors as for vectors in R^{2n+1}, and note that these properties are preserved by δ. We now have the formalism needed to state the complementarity problem and the duality theorem using oriented matroids instead of subspaces of R^{2n+1}. The oriented matroid complementarity problem (OMCP) consists, given an oriented matroid (Ê_{2n}, F), in finding a complementary X ∈ F with nonnegative entries and entry + at g.
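With g stored at index 0 and the pairing ī as above (an indexing choice of this sketch), complementarity of a sign vector is checked as follows:

```python
# Complementarity over E_hat = {0, 1, ..., 2n}: index 0 plays the role of g,
# and i is paired with i + n for i in {1, ..., n}. Encoding in {+1, -1, 0}.

def is_complementary(X):
    n = (len(X) - 1) // 2
    return all(X[i] * X[i + n] == 0 for i in range(1, n + 1))

print(is_complementary((1, 1, 0, 0, 1)))   # True:  pairs (1,3) and (2,4)
print(is_complementary((1, 1, 0, -1, 0)))  # False: X_1 * X_3 = -1
```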


Note that this is solvable if and only if the linear complementarity problem is solvable; the correspondence is given by incidence vectors.

2.3 OMCP Duality Theorem

The generalization of the LCP duality theorem to oriented matroids is now immediate.

Theorem 2.1 OMCP Duality Theorem

Let M = (Ê_{2n}, F) be an oriented matroid which satisfies the following conditions:

(3) either F contains no s.s.r. vector X with X_g = 0, or F contains no s.s.p. vector X with X_g ≠ 0,

and

(4) either F^∗ contains no s.s.r. vector Y with Y_g = 0, or F^∗ contains no s.s.p. vector Y with Y_g ≠ 0.

Then exactly one of the following statements holds:

(a) There exists a nonnegative complementary vector X ∈ F with X_g = +.

(b) There exists a nonnegative complementary vector Y ∈ F^∗ with Y_g = +.

Again, this implies LCP duality, via incidence vectors. Let us now give an elementary proof of this result. For that purpose, some more definitions are needed.

Let S be a subset of E_{2n}. Its complement S̄ is the set {s̄ | s ∈ S}. S is complementary if S ∩ S̄ = ∅. Given a matroid (E, F), define

F \ S = { X \ S | X ∈ F and X_j = 0 for all j ∈ S },
F / S = { X \ S | X ∈ F }.

Let M = (Ê_{2n}, F) be an oriented matroid, and S ⊆ E_{2n} complementary. We define the complementary minor

M(S) = (Ê_{2n} \ (S ∪ S̄), F(S)) as M(S) = (M \ S) / S̄.


Moreover, we write M(e) for M({e}). We then have the following properties of complementary minors:

X ∈ F and X_e = 0 for all e ∈ S  =⇒  X \ (S ∪ S̄) ∈ F(S), (5)

X′ ∈ F(S)  =⇒  ∃X ∈ F with X \ (S ∪ S̄) = X′ and X_e = 0 for all e ∈ S, (6)

Y ∈ F^∗ and Y_ē = 0 for all e ∈ S  =⇒  Y \ (S ∪ S̄) ∈ F(S)^∗, (7)

Y′ ∈ F(S)^∗  =⇒  ∃Y ∈ F^∗ with Y \ (S ∪ S̄) = Y′ and Y_ē = 0 for all e ∈ S. (8)

The following lemma follows immediately.

Lemma 2.2

Let M = (Eˆ2n, F) be an oriented matroid satisfying the assumptions (3) and (4). Let S ⊆ E2n be complementary. Then the complementary minor M(S) satisfies (3) and (4).

Lemma 2.3

Let M = (Eˆ2n, F) be an oriented matroid satisfying the assumptions (3) and (4). Let e ∈ E2n. Then at most one of the following holds:

(a1) ∃ X^1 ∈ F complementary with X^1 \ {e, ē} ≥ 0, X^1_ē = −, and X^1_g = +;

(a2) ∃ X^2 ∈ F complementary with X^2 \ {e, ē} ≥ 0, X^2_e = −, and X^2_g = +;

(b1) ∃ Y^1 ∈ F^∗ complementary with Y^1 \ {e, ē} ≥ 0, Y^1_e = −, and Y^1_g = +;

(b2) ∃ Y^2 ∈ F^∗ complementary with Y^2 \ {e, ē} ≥ 0, Y^2_ē = −, and Y^2_g = +.

Proof Assume two of the above hold.
If (a1) and (a2) hold, let Z′ = X^1 ◦ X^2. Then Z′ is s.s.p. by (a1) and (a2), and Z′_g ≠ 0. By (OM4), there is a Z″ ∈ F such that Z″_g = 0 and Z″_j = (X^1 ◦ −X^2)_j for all j ∉ D(X^1, −X^2). By (a1) and (a2), Z″_j Z″_j̄ ≤ 0 for all j ∈ E_{2n}, and Z″_e Z″_ē < 0. Thus Z″ is s.s.r., and Z″_g = 0. The existence of both Z′ and Z″ violates (3), a contradiction. The same argument can be applied if (b1) and (b2) hold. If (a1) and (b1) hold, X^1 and Y^1 are not orthogonal, a contradiction; similarly for (a2) and (b2), for (a1) and (b2), and for (a2) and (b1). □
With that lemma, we can now prove the OMCP duality theorem.

Proof OMCP Duality Theorem We prove the theorem by induction on n. For n = 0 the theorem is obvious. Fix n > 0, and assume the theorem holds for smaller values of n. Assume (3) and (4), and let e ∈ E_{2n}. By Lemma 2.2, the assumptions of the theorem hold for the minors M(e) and M(ē). By the induction hypothesis, exactly one of

(a1′) ∃ X̂^1 ∈ F(e) complementary nonnegative with X̂^1_g = +;
(b1′) ∃ Ŷ^1 ∈ F(e)^∗ complementary nonnegative with Ŷ^1_g = +

holds, and exactly one of

(a2′) ∃ X̂^2 ∈ F(ē) complementary nonnegative with X̂^2_g = +;
(b2′) ∃ Ŷ^2 ∈ F(ē)^∗ complementary nonnegative with Ŷ^2_g = +

holds. Assume that neither (a) nor (b) from the theorem holds. Then, by (6) and (8), the statements (a1′), (a2′), (b1′), and (b2′) respectively imply (a1), (a2), (b1), and (b2) from Lemma 2.3. Since one statement of each of the two pairs holds, two of those statements must hold, a contradiction to Lemma 2.3; thus one of (a) and (b) must hold. Finally, (a) and (b) cannot hold simultaneously: nonnegative X ∈ F and Y ∈ F^∗ with X_g = Y_g = + would satisfy X_g Y_g = + while no product X_f Y_f is negative, so X and Y would not be orthogonal. □

2.4 Basis

We have now proved the main theorem, but we can go further and present an algorithm which finds a solution in finite time. To reach this goal, we need more definitions for oriented matroids.

A circuit of an oriented matroid is a vector with minimal nonempty support.

Definition Basis and its Rank
Let M = (E, F) be an oriented matroid. A basis is a maximal subset B ⊂ E such that there is no X ∈ F \ {0} with supp(X) ⊂ B. Its cardinality is called the rank r(M) of M.


For bases we have the following properties.

Proposition 2.4 Properties of a Basis
Let M = (E, F) be an oriented matroid.

1. Every basis of M has the same cardinality.

2. For each basis B of M and each j ∈ E \ B there exists a unique circuit X(B, j) (called the fundamental circuit) such that

• X(B, j)_j = +,
• supp(X(B, j)) ⊂ B ∪ {j}.

3. B ⊂ E is a basis of M ⟺ E \ B is a basis of the dual M^∗ (called the cobasis, denoted N).

2.5 Tableau

Definition Tableau of a Basis
Let B be a basis of the oriented matroid M, with cobasis N = E \ B. Then the tableau T(B) is the matrix (t_ij)_{i∈B, j∈N} with entries in {+, 0, −} defined by

t_ij = X(B, j)_i for i ∈ B, j ∈ N.

This tableau contains all the information about all fundamental circuits for a particular basis B, and also about every fundamental cocircuit Y(N, i) for the cobasis N, since we have the well-known relation

t_ij = −Y(N, i)_j for i ∈ B, j ∈ N.

Hence the column j lists the possibly nonzero entries of X(B, j), completed by the fact that X(B, j)_j = +; every other entry of X(B, j) is zero. Likewise, the row i lists the possibly nonzero entries of −Y(N, i), completed by Y(N, i)_i = +; every other entry is zero. This relation comes from the uniqueness of the fundamental circuits.


Lemma 2.5 Change of Basis
Let M = (E, F) be an oriented matroid and let B be a basis. Assume the entry t_rs, for r ∈ B, s ∈ N, is nonzero. Then the set B′ = (B \ {r}) ∪ {s} is a basis, and the tableaux T(B) = (t_ij) and T(B′) = (t′_ij) are related as follows:

• t′_sr = t_rs,

• t′_sj = −t_rs t_rj for j ∈ N \ {s},

• t′_ir = t_rs t_is for i ∈ B \ {r},

• t′_ij = t_ij ◦ (−t_rs t_is t_rj) for i ∈ B \ {r}, j ∈ N \ {s}.

This lemma is well-known and can be easily verified by looking at the axioms of oriented matroids (OM1) to (OM4).

Definition Pivot Operation on the Position (r, s) The replacement of a tableau T (B) by a tableau T (B0) as in the previous Lemma is called a pivot operation on the position (r, s).
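A pivot operation on a sign tableau can be sketched directly from the four rules of Lemma 2.5. In this minimal sketch the tableau is stored as a dict over (row, column) pairs with entries in {+1, −1, 0}; the encoding and the tiny example are our own choices:

```python
# Sign-tableau pivot following the four rules of Lemma 2.5. Signs are
# encoded as +1, -1, 0, so the sign product is ordinary multiplication
# and the composition a o b is "a if a != 0 else b".

def pivot(T, B, N, r, s):
    """Pivot on position (r, s) with r in B, s in N and T[r, s] != 0."""
    assert T[r, s] != 0
    B2 = [i for i in B if i != r] + [s]
    N2 = [j for j in N if j != s] + [r]
    T2 = {(s, r): T[r, s]}                     # t'_sr = t_rs
    for j in N:
        if j != s:
            T2[s, j] = -T[r, s] * T[r, j]      # t'_sj = -t_rs t_rj
    for i in B:
        if i != r:
            T2[i, r] = T[r, s] * T[i, s]       # t'_ir = t_rs t_is
            for j in N:
                if j != s:
                    t = T[i, j]                # t'_ij = t_ij o (-t_rs t_is t_rj)
                    T2[i, j] = t if t != 0 else -T[r, s] * T[i, s] * T[r, j]
    return T2, B2, N2

# Tiny made-up example: B = {1}, N = {2}, single entry t_12 = +.
T2, B2, N2 = pivot({(1, 2): 1}, [1], [2], 1, 2)
print(T2, B2, N2)  # {(2, 1): 1} [2] [1]
```

Note that pivoting back on the new position (2, 1) restores the original tableau, as one expects from the sign rules.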

3 Criss-Cross Method

Now we are equipped with everything needed to introduce the algorithm.

Note that throughout this section the symbol ± stands for a nonzero entry, ⊕ for a nonnegative entry, and ⊖ for a nonpositive entry.

Assumption 3.1 Special Basis for (Ê_{2n}, F)

The oriented matroid M = (Ê_{2n}, F) contains a basis B with

• |B| = n,

• g ∉ B,

• B complementary, i.e. B ∩ B̄ = ∅.


Figure 1: Illustrates feasible and cofeasible tableaux.

In the case of an LCP, where the set of sign vectors arises from a linear subspace V(A, b), this assumption always holds! This can be seen by looking at V(A, b) = ker(−b | −A | 1_n): we can always choose B = {n + 1, …, 2n}. Assume that x ∈ V(A, b), x ≠ 0, has supp(x) ⊂ B (support defined in the analogous way for real vectors); then only the last n entries of x can be nonzero, so

(−b | −A | 1_n) · x ≠ 0,

which is a contradiction to x ∈ V(A, b).

Definition Feasible and Cofeasible Basis/Tableau

Let M = (Ê_{2n}, F) be an oriented matroid. A basis B, or a tableau T(B), for M is called feasible if

• g ∉ B,

• t_ig ≥ 0 for all i ∈ B,

and cofeasible if

• g ∉ B,

• ∃ r ∈ B such that t_rg = − and t_rj ≤ 0 for all j ∈ N.

(Illustration: Figure 1)

3.1 Basis-Form Duality Theorem

Theorem 3.2 Basis-Form Duality Theorem

Let M = (Eˆ2n, F) be an oriented matroid satisfying Assumption 3.1 and the as- sumptions (3) and (4) of Theorem 2.1.


Then exactly one of the following statements holds:

(a) There exists a complementary feasible basis.

(b) There exists a complementary cofeasible basis.

Remark Basis-Form Duality Theorem =⇒ OMCP Duality Theorem
If we have a matroid M arising from a linear subspace V(A, b) (so Assumption 3.1 is fulfilled) and satisfying the conditions of Theorem 2.1, all the assumptions of Theorem 3.2 are fulfilled. Hence, once this theorem is proved, we may conclude that exactly one of the two complementary bases exists.
Now note that if the basis is feasible, then the g-column of T(B) is nonnegative. If we look at the fundamental circuit X(B, g), we immediately get that it is a nonnegative vector, since the entries that are not listed in the column vanish, with the exception of X(B, g)_g = +. As the basis is complementary, so is X(B, g). Hence we have found a nonnegative, complementary vector X = X(B, g) such that X_g = +, and conclusion (a) of Theorem 2.1 is fulfilled.
On the other hand, if the basis is cofeasible, we get some row r of T(B) that satisfies t_rg = − while all the other entries are nonpositive. If we look at the fundamental cocircuit Y(N, r), all its listed entries have the opposite sign of our row r, and all other entries vanish except Y(N, r)_r = +. Hence we have a nonnegative cocircuit, which is complementary, since B is complementary, and Y(N, r)_g = −t_rg = +. Hence conclusion (b) of Theorem 2.1 is fulfilled.
This shows that the Basis-Form Duality Theorem (3.2) implies the OMCP Duality Theorem (2.1) in the case of matroids arising from linear subspaces V(A, b).

Lemma 3.3

Let M = (Eˆ2n, F) be an oriented matroid satisfying the assumptions of Theorem 3.2. Let B be a complementary basis with g 6∈ B.


Figure 2: Illustrates the implications provided by Lemma 3.3. The left tableau corresponds to (1), the center one to (2), and the right one to (3).

Then for all r, s ∈ B the following hold:

(1) t_rg ≠ 0  =⇒  t_{rr̄} ≥ 0;

(2) t_rg = −, t_{rr̄} = t_{sr̄} = 0 and t_{rs̄} = +  =⇒  t_sg = − and t_{ss̄} = +;

(3) t_rg = −, t_{rr̄} = t_{ss̄} = 0 and t_{rs̄} = +  =⇒  t_{sr̄} = −.

(Illustration: Figure 2)

Proof Let B be a complementary basis with g ∉ B. For this proof, we set

X^0 = X(B, g),  X^1 = X(B, r̄),  Y^1 = Y(N, r),  X^2 = X(B, s̄),  Y^2 = Y(N, s).

Recall the definition of the fundamental circuit, as it will be very important in this proof.

(1) Assume t_rg ≠ 0 for some r ∈ B. Further assume, by contradiction, that t_{rr̄} = −. Let us look at the vectors W = X^1 and Z = (t_rg · X^0) ◦ X^1.


Their relevant entries are (for i ∈ B \ {r}):

W = X^1:  W_g = 0,  W_r = t_{rr̄} = −,  W_{r̄} = +,  W_ī = 0;
Z = (t_rg · X^0) ◦ X^1:  Z_g = t_rg = ±,  Z_r = +,  Z_{r̄} = +,  Z_ī = 0.

This yields that W is a strictly sign-reversing vector with W_g = 0, and Z is a strictly sign-preserving vector with Z_g ≠ 0. But both W and Z are in F by the axioms of oriented matroids, which contradicts assumption (3) of Theorem 2.1.

(2) Assume t_rg = −, t_{rr̄} = t_{sr̄} = 0 and t_{rs̄} = + for some r ∈ B and s ∈ B. Let γ = t_sg and β = t_{ss̄}.

First we show that β = +. Suppose by contradiction that β ≤ 0. Consider the vectors W = X^2 ◦ (−X^1) and Z = X^0 ◦ (−X^1), whose relevant entries are (for i ∈ B \ {r, s}):

W = X^2 ◦ (−X^1):  W_g = 0,  W_r = t_{rs̄} = +,  W_{r̄} = −,  W_s = β = ⊖,  W_{s̄} = +,  W_ī = 0;
Z = X^0 ◦ (−X^1):  Z_g = +,  Z_r = t_rg = −,  Z_{r̄} = −,  Z_ī = 0 (here i ∈ B \ {r}).

Hence W is a strictly sign-reversing vector with W_g = 0, and Z is a strictly sign-preserving vector with Z_g ≠ 0. Both Z and W are in F, which contradicts assumption (3) of Theorem 2.1.
Now we prove γ = −. So suppose that γ ≥ 0. For the strictly sign-reversing vector W we have to consider two separate cases: in the case of γ = 0 we look at the vector W = Y^2, and in the case of γ = + we get the desired vector by looking at Y^1 and Y^2 and applying the fourth axiom of oriented matroids, (OM4), eliminating g. For the strictly sign-preserving vector we just take Z = Y^1 ◦ (−Y^2).

The relevant entries (for i ∈ B \ {r, s}, so that every pair {i, ī} has product 0) are:

Case γ = 0:  W = Y^2 with  W_g = −γ = 0,  W_r = W_{r̄} = 0,  W_s = +,  W_{s̄} = −β = −,  W_i = 0;
Case γ = +:  Y^1_g = + and Y^2_g = −, so g ∈ D(Y^1, Y^2), and (OM4) yields W ∈ F^∗ with  W_g = 0,  W_r = +,  W_{r̄} = 0,  W_s = +,  W_{s̄} = −,  W_i = 0;
Both cases:  Z = Y^1 ◦ (−Y^2) with  Z_g = +,  Z_r = +,  Z_{r̄} = 0,  Z_s = −,  Z_{s̄} = −,  Z_i = 0.

Thus in both cases W is a strictly sign-reversing vector with W_g = 0, and Z is a strictly sign-preserving vector with Z_g ≠ 0. But both W and Z are contained in the dual F^∗, which contradicts assumption (4) of Theorem 2.1.

(3) Assume t_rg = −, t_{rr̄} = t_{ss̄} = 0 and t_{rs̄} = +. Let δ = t_{sr̄}. We show that δ = −. Suppose that δ ≥ 0, and look at the vectors W = X^2 ◦ (−X^1) and Z = X^0 ◦ (−X^1), whose relevant entries are (for i ∈ B \ {r, s}):

W = X^2 ◦ (−X^1):  W_g = 0,  W_r = t_{rs̄} = +,  W_{r̄} = −,  W_s = −δ = ⊖,  W_{s̄} = +,  W_ī = 0;
Z = X^0 ◦ (−X^1):  Z_g = +,  Z_r = t_rg = −,  Z_{r̄} = −,  Z_ī = 0 (here i ∈ B \ {r}).

Hence W is a strictly sign-reversing vector with W_g = 0, and Z is a strictly sign-preserving vector with Z_g ≠ 0. Both Z and W are in F, which contradicts assumption (3) of Theorem 2.1. □

3.2 Algorithm

The Criss-Cross Method assumes that the n complementary pairs {j, j̄} are linearly ordered.

Lemma 3.4 Basis after Diagonal Pivots

Let M = (Ê_{2n}, F) be an oriented matroid satisfying the conditions of Theorem 2.1. Then line 13 of Algorithm 1 (Diagonal Pivot) produces a complementary basis.

Proof This is a straightforward application of Lemma 2.5. When Diagonal Pivot is executed, we must have t_{pp̄} ≠ 0. Thus the condition of the lemma is satisfied, and we are certain to get a basis again after exchanging p and p̄; since the whole pair {p, p̄} is swapped, the new basis is again complementary. □

Lemma 3.5 Basis after Exchange Pivots

Let M = (Ê_{2n}, F) be an oriented matroid satisfying the conditions of Theorem 2.1. Then line 15 of Algorithm 1 (Exchange Pivots) produces a complementary basis.


Algorithm 1 Criss-Cross Method
 1: procedure Criss-Cross(B: complementary basis)
 2:   repeat
 3:     if B is feasible then
 4:       return (B, g)    ▷ found a feasible complementary basis
 5:     else
 6:       r := min {i ∈ B | t_ig < 0}
 7:       if t_{rī} ≤ 0 for all i ∈ B then
 8:         return (B, r)    ▷ found a cofeasible complementary basis
 9:       else
10:         s := min {i ∈ B | t_{rī} > 0}
11:         p := max {r, s}
12:         if t_{pp̄} ≠ 0 then
13:           apply Diagonal Pivot: B := (B \ {p}) ∪ {p̄}
14:         else
15:           apply Exchange Pivots: B := (B \ {r, s}) ∪ {r̄, s̄}
16:         end if
17:       end if
18:     end if
19:   until true = false
20: end procedure
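To make the control flow concrete, here is a self-contained sketch of the method on a sign tableau. The data layout (a dict of {+1, −1, 0} entries, g at index 0, pairing i ↔ i ± n) and the tiny instance are our own illustration, not the paper's implementation:

```python
# A sketch of the Criss-Cross Method on a sign tableau. Elements of
# E_2n = {1, ..., 2n} are paired by bar(i); the element g is index 0.
# T is a dict over (i, j) for i in the basis B and j in the cobasis,
# with entries in {+1, -1, 0}. Layout and instance are our own choices.

def cobasis(B, n):
    return [0] + [j for j in range(1, 2 * n + 1) if j not in B]

def pivot(T, B, n, r, s):
    """Sign pivot of Lemma 2.5 on (r, s): r leaves the basis, s enters."""
    assert T[r, s] != 0
    N = cobasis(B, n)
    B2 = [i for i in B if i != r] + [s]
    T2 = {(s, r): T[r, s]}
    for j in N:
        if j != s:
            T2[s, j] = -T[r, s] * T[r, j]
    for i in B:
        if i != r:
            T2[i, r] = T[r, s] * T[i, s]
            for j in N:
                if j != s:
                    t = T[i, j]  # composition t o (-t_rs t_is t_rj)
                    T2[i, j] = t if t != 0 else -T[r, s] * T[i, s] * T[r, j]
    return T2, B2

def criss_cross(T, B, n, max_steps=1000):
    bar = lambda i: i + n if i <= n else i - n
    key = lambda i: min(i, bar(i))  # the linear order on the pairs
    for _ in range(max_steps):
        negative = [i for i in B if T[i, 0] < 0]
        if not negative:
            return ('feasible', B)
        r = min(negative, key=key)
        positive = [i for i in B if T[r, bar(i)] > 0]
        if not positive:
            return ('cofeasible', B, r)
        s = min(positive, key=key)
        p = max(r, s, key=key)
        if T[p, bar(p)] != 0:
            T, B = pivot(T, B, n, p, bar(p))   # diagonal pivot
        else:
            T, B = pivot(T, B, n, r, bar(s))   # exchange pivots
            T, B = pivot(T, B, n, s, bar(r))
    raise RuntimeError('no termination within max_steps')

# n = 1, basis B = {2}, t_{2,g} = -, t_{2,1} = +: one diagonal pivot
# reaches the feasible complementary basis {1}.
print(criss_cross({(2, 0): -1, (2, 1): 1}, [2], 1))  # ('feasible', [1])
```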


Proof Let B be the complementary basis in the algorithm when we reach Exchange Pivots, and let r, s, p have the corresponding values. As p = max {r, s}, we consider two cases.

First look at the case p = r. Then r > s, and furthermore t_rg = −, t_{rr̄} = 0, t_{rs̄} = +, and t_sg ≥ 0.

By Lemma 3.3 (2), the assumptions t_rg = −, t_{rr̄} = t_{sr̄} = 0 and t_{rs̄} = + would force t_sg = −, which is impossible since t_sg ≥ 0; as all the other assumptions hold, we must have t_{sr̄} ≠ 0. Then we can apply Lemma 2.5 with the indices r and s̄ and get a basis B′ := (B \ {r}) ∪ {s̄}. The corresponding tableau T(B′) = (t′_ij) still has a nonzero entry at t′_{sr̄} = t_{sr̄}. This allows us to reapply Lemma 2.5, this time with s and r̄. We then get a new basis B″ = (B′ \ {s}) ∪ {r̄}.

Note that B″ = (B \ {r, s}) ∪ {r̄, s̄} is exactly the new basis provided by Exchange Pivots, which proves this case.

The second case is p = s. Then r < s, and furthermore t_rg = −, t_{rs̄} = +, and t_{ss̄} = 0.

Lemma 3.3 (1) gives us that t_{rr̄} is nonnegative, and by the definition of s it is not possible for t_{rr̄} to be positive (otherwise we would have s ≤ r < s). So we reach the conclusion that t_{rr̄} = 0. By Lemma 3.3 (3), t_{sr̄} = −. This situation resembles the previous case; the important point is that t_{rs̄} ≠ 0 and t_{sr̄} ≠ 0. By exactly the same argument as in the previous case, we get that Exchange Pivots again yields a complementary basis. □

Lemma 3.6

Let M = (Eˆ2n, F) be an oriented matroid satisfying the conditions of Theorem 2.1.

Then for all e ∈ E_{2n} at most one of the following statements holds:

(A1) ∃ B^1, a complementary basis not containing g, such that

e ∈ B^1,  t^1_{eg} = −,  and t^1_{ig} ≥ 0 for all i ∈ B^1 \ {e};

(A2) ∃ B^2, a complementary basis not containing g, such that

ē ∈ B^2,  t^2_{ēg} = −,  and t^2_{ig} ≥ 0 for all i ∈ B^2 \ {ē};

(B1) ∃ B^3, a complementary basis not containing g, such that

ē ∈ B^3,  t^3_{rg} = −,  t^3_{re} = +,  and t^3_{rī} ≤ 0 for all i ∈ B^3 \ {ē},

for some r ∈ B^3 with r ≠ ē;

(B2) ∃ B^4, a complementary basis not containing g, such that

e ∈ B^4,  t^4_{sg} = −,  t^4_{sē} = +,  and t^4_{sī} ≤ 0 for all i ∈ B^4 \ {e},

for some s ∈ B^4 with s ≠ e;

where T(B^k) = (t^k_{ij}) for k = 1, 2, 3, 4. (Illustration: Figure 3)


Figure 3: Lists the four types of tableaux in Lemma 3.6.

Proof Look at the following fundamental circuits and cocircuits:

X^1 = X(B^1, g),  Y^1 = Y(N^3, r),  X^2 = X(B^2, g),  Y^2 = Y(N^4, s),

where N^j = Ê_{2n} \ B^j. The entries t^3_{rr̄} and t^4_{ss̄} are both nonpositive but also nonnegative due to Lemma 3.3 (1), which implies that they are zero. The vectors above have the following form:

X^1:  X^1_g = +,  X^1_e = −,  X^1_ē = 0,  and X^1 \ {e, ē} ≥ 0;
X^2:  X^2_g = +,  X^2_ē = −,  X^2_e = 0,  and X^2 \ {e, ē} ≥ 0;
Y^1:  Y^1_g = +,  Y^1_e = −,  Y^1_ē = 0,  Y^1_r = +,  and Y^1 \ {e, ē} ≥ 0;
Y^2:  Y^2_g = +,  Y^2_ē = −,  Y^2_e = 0,  Y^2_s = +,  and Y^2 \ {e, ē} ≥ 0.

Note that these vectors are complementary. Comparing these vectors with the vectors in Lemma 2.3, one sees that the prerequisites are satisfied; hence at most one of the four vectors can exist, which proves the lemma. □


Theorem 3.7 Finiteness of the Criss-Cross Method

Let M = (Ê_{2n}, F) be an oriented matroid satisfying the conditions of Theorem 3.2. Then the Criss-Cross Method (Algorithm 1) terminates in finitely many steps, and one obtains either the feasible or the cofeasible complementary basis promised in Theorem 3.2.

Proof

We prove this by contradiction. Suppose that there is a matroid M = (Ê_{2n}, F), with a minimal number of elements in Ê_{2n}, such that the Criss-Cross Method (Algorithm 1) does not terminate.

Since there are finitely many elements in Ê_{2n}, there are also finitely many bases. The fact that the algorithm does not terminate for this matroid means that it produces a cycle of bases

B^0, B^1, …, B^k = B^0 for some k ∈ N.

By minimality of E_{2n}, every element j has to both enter and leave one of these bases at some point in the cycle. Otherwise one could leave the pair j, j̄ out and obtain a matroid with a smaller index set on which the algorithm still cycles, which would contradict the minimality. For this, note that the axioms of oriented matroids are stated componentwise, hence one can easily leave an index out and again gets an oriented matroid.

Let {e, ē} ⊆ E_{2n} be the largest pair in E_{2n} (in the beginning of this section we assumed a linear ordering of the pairs). The element e has to enter a basis at some point, so by looking at the algorithm one of the following cases has to occur:

(E1) r = ē: then ē ∈ B, t_{ēg} = −, and t_ig ≥ 0 for all i ∈ B \ {ē};

(E2) r ≠ ē and s = ē: then ē ∈ B, t_rg = −, t_{re} = +, and t_{rī} ≤ 0 for all i ∈ B \ {ē}.

(In both cases the sign conditions on the whole column or row follow from the minimality of r and s, since {e, ē} is the largest pair.) But the element e has to leave as well, so again by looking at the algorithm one of the following two cases has to occur:

(L1) r = e: then e ∈ B, t_eg = −, and t_ig ≥ 0 for all i ∈ B \ {e};

(L2) r ≠ e and s = e: then e ∈ B, t_rg = −, t_{rē} = +, and t_{rī} ≤ 0 for all i ∈ B \ {e}.

Compare these cases with the cases (A1), (A2), (B1), (B2) of Lemma 3.6: they correspond exactly to our four cases. Hence, by the conclusion of the lemma, only one of them can occur. But, as already mentioned, one case of each pair has to occur. This is a contradiction. Thus the algorithm always terminates. □
