Introduction
This dissertation is a reading of Chapters 16 (Introduction to Integer Linear Programming) and 19 (Totally Unimodular Matrices: Fundamental Properties and Examples) of the book Theory of Linear and Integer Programming, Alexander Schrijver, John Wiley & Sons, 1986.
Chapter one is a collection of basic definitions (polyhedron, polyhedral cone, polytope, etc.) and the statement of the decomposition theorem for polyhedra.
Chapter two is “Introduction to Integer Linear Programming”. A finite set of vectors a_1, …, a_t is a Hilbert basis if each integral vector b in cone{a_1, …, a_t} is a nonnegative integral combination of a_1, …, a_t. We shall prove the Hilbert basis theorem: each rational polyhedral cone C is generated by an integral Hilbert basis.
Further, an analogue of Carathéodory's theorem is proved: if a system a_1x ≤ β_1, …, a_mx ≤ β_m of linear inequalities in n variables has no integral solution, then there are 2^n or fewer constraints among these inequalities which already have no integral solution.
Chapter three contains some basic results on totally unimodular matrices. The main theorem is due to Hoffman and Kruskal: an integral matrix A is totally unimodular if and only if for each integral vector b the polyhedron {x | x ≥ 0, Ax ≤ b} is integral. Next, seven equivalent characterizations of total unimodularity are proved. These characterizations are due to Hoffman and Kruskal, Ghouila-Houri, Camion, and R. E. Gomory.
Basic examples of totally unimodular matrices are incidence matrices of bipartite graphs and of directed graphs, and network matrices. We prove the König-Egerváry theorem for bipartite graphs.
Chapter 1
Preliminaries

Definition 1.1: (Polyhedron) A polyhedron P is the set of points satisfying a finite number of linear inequalities, i.e., P = {x | Ax ≤ b}, where (A, b) is an m × (n + 1) matrix.

Definition 1.2: (Polyhedral Cone) A cone C is polyhedral if C = {x | Ax ≤ 0} for some matrix A, i.e., C is the intersection of finitely many linear half-spaces.

Definition 1.3: (Rational Polyhedral Cone) A cone C is rational polyhedral if C = {x | Ax ≤ 0} for some rational matrix A.

Definition 1.4: (Characteristic Cone) The characteristic cone of P = {x | Ax ≤ b}, denoted by char.cone P, is the polyhedral cone

char.cone P := {y | Ay ≤ 0}. (1)

The nonzero vectors in char.cone P are called the infinite directions of P.

Definition 1.5: (Polytope) A bounded polyhedron is called a polytope.

Definition 1.6: (Pointed Cone) The linearity space of P is {y | Ay = 0}, which is char.cone P ∩ −char.cone P. Clearly it is a linear space, being the kernel of A. If the dimension of this space is zero, then P is called pointed.

Definition 1.7: (Characteristic Vector) Let S be a finite set. If T ⊆ S, the characteristic vector of T is the {0, 1}-vector in ℝ^S, denoted by χ^T, satisfying χ^T_s = 1 if s ∈ T and χ^T_s = 0 if s ∈ S \ T.

Theorem 1.8: (Farkas-Minkowski-Weyl theorem) A convex cone is polyhedral if and only if it is finitely generated.

Theorem 1.9: (Decomposition theorem for polyhedra) P is a polyhedron in ℝ^n if and only if P = Q + C for some polytope Q and some polyhedral cone C.
Theorem 1.10: (Farkas' Lemma) Let A be a real m × n matrix and let c be a real nonzero n-vector. Then either the primal system Ax ≤ 0, cx > 0 has a solution x, or the dual system yA = c, y ≥ 0 has a solution y, but never both.
Chapter 2 Integer Linear Programming
Definition 2.1: (Integer Linear Programming) Given a rational matrix A and rational vectors b and c, determine

max {cx | Ax ≤ b; x integral}. (1)

Another formulation is

Definition 2.2: Given a rational matrix A and rational vectors b and c, determine

max {cx | x ≥ 0; Ax = b; x integral}. (2)

Remark 2.3: It is easy to see that one can pass from one formulation to the other.
Note 2.4: We have the duality relation:
max {cx | Ax ≤ b; x integral} ≤ min {yb | y ≥ 0, yA = c; y integral} (3)

[Since, if Ax ≤ b and y ≥ 0 with yA = c, then cx = yAx ≤ yb.] The inequality can be strict. For example, take A = (2), b = (1), c = (1). Thus the primal problem is

max {x | 2x ≤ 1; x integral}.

Clearly, the maximum is 0 [x ∈ {0, −1, −2, …}]. The dual is

min {y | y ≥ 0, 2y = 1; y integral},

which is infeasible. But the corresponding LP-optima are both 1/2.

Note 2.5: We may write an analogous statement of Farkas' lemma:
The rational system Ax = b has a nonnegative integral solution x if and only if yb is a nonnegative integer whenever yA is a nonnegative integral vector.

But the statement is not true. For example, take A = (2 3), b = (1). The rational system is

2x_1 + 3x_2 = 1.

If yA = (2y, 3y) is a nonnegative integral vector, then y = 3y − 2y is an integer and 2y ≥ 0 implies y ≥ 0, so yb = y is a nonnegative integer. Yet 2x_1 + 3x_2 = 1 has no nonnegative integral solution, so the "if" direction fails.
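Both counterexamples above can be verified by brute force; the following is a minimal sketch (the finite search windows are an assumption of mine, adequate for these one- and two-variable instances):

```python
from fractions import Fraction

# Primal ILP of Note 2.4: max { x : 2x <= 1, x integral } -- optimum 0.
primal_opt = max(x for x in range(-50, 51) if 2 * x <= 1)

# Dual ILP: min { y : y >= 0, 2y = 1, y integral } -- no feasible y at all.
dual_feasible = [y for y in range(0, 51) if 2 * y == 1]

# Both LP relaxations have the optimum value 1/2, strictly between the two.
lp_value = Fraction(1, 2)

# Note 2.5: 2*x1 + 3*x2 = 1 has no nonnegative integral solution, although
# yb = y is a nonnegative integer whenever yA = (2y, 3y) is a nonnegative
# integral vector (y = 3y - 2y is then an integer, and 2y >= 0 forces y >= 0).
farkas_solutions = [(x1, x2)
                    for x1 in range(100) for x2 in range(100)
                    if 2 * x1 + 3 * x2 == 1]

print(primal_opt)        # 0
print(dual_feasible)     # []
print(farkas_solutions)  # []
```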
Note 2.6: We restrict to integer linear programming with rational input data. Otherwise, there may not be an optimum to the given problem. For example,

sup {ξ − η√2 | ξ ≤ η√2; η ≥ 1; ξ, η integral}. (4)

The supremum is 0, but no ξ, η attain it (since √2 is irrational, ξ − η√2 ≠ 0 for all such integral ξ, η).

Definition 2.7: The LP-relaxation of the integer linear programming problem in Definition 2.1 is the following LP problem:
max {cx | Ax ≤ b}. (5)

Clearly, the LP-relaxation gives an upper bound for the corresponding integer linear program.
Definition 2.8: (The Integer Hull of a Polyhedron) For any polyhedron P, the integer hull of P is

P_I = the convex hull of the integral vectors in P. (6)

Note 2.9: The ILP problem (1) is equivalent to determining

max {cx | x ∈ P_I} for P = {x | Ax ≤ b}.

Remark 2.10: For any rational polyhedral cone C,

C_I = C (7)
(as C is generated by rational, and hence by integral, vectors).
Theorem 2.11: [Meyer] For any rational polyhedron P, the set P_I is again a polyhedron. If P_I is nonempty, then char.cone(P) = char.cone(P_I).

Proof: Consider a rational polyhedron P with a decomposition

P = Q + C,

where Q is a polytope and C is the characteristic cone of P. As C = C_I, let C be generated by the integral vectors y_1, …, y_s, and let B be the polytope defined by

B = {µ_1 y_1 + … + µ_s y_s | 0 ≤ µ_i ≤ 1}. (8)

It is enough to show that

P_I = (Q + B)_I + C.

Note that (Q + B)_I is a polytope, as both Q and B are polytopes. Observe,

(Q + B)_I + C = (Q + B)_I + C_I (Remark 2.10)
⊆ ((Q + B) + C)_I = P_I (as Q + B + C = Q + C = P),

i.e., (Q + B)_I + C ⊆ P_I.

Now to show the reverse inclusion, take any integral vector p in P, i.e., p ∈ P_I. Now, p = q + c for some q ∈ Q and c ∈ C. We have

c = µ_1 y_1 + … + µ_s y_s (µ_i ≥ 0)
= (⌊µ_1⌋ y_1 + … + ⌊µ_s⌋ y_s) + ((µ_1 − ⌊µ_1⌋) y_1 + … + (µ_s − ⌊µ_s⌋) y_s).

Denote the first term by c′ and the second by b (= c − c′). Clearly c′ ∈ C and b ∈ B. Hence,

p = q + c′ + b = (q + b) + c′,
q + b = p − c′.

So q + b is an integral vector, as p and c′ are integral, and q + b ∈ Q + B. Hence

p ∈ (Q + B)_I + C, i.e., P_I = (Q + B)_I + C. ∎

Remark 2.12: The above theorem implies that any integer linear programming problem can be written as max {cx | x ∈ Q} for some polyhedron Q, which is again a linear programming problem.
This means we can represent P_I by linear inequalities, but generally this is a difficult task. Theorem 2.11 can be extended to: for each rational matrix A there exists an integral matrix M such that for each column vector b there exists a column vector d such that

{x | Ax ≤ b}_I = {x | Mx ≤ d}. (9)

So the coefficients of the inequalities defining P_I can be described in terms of the coefficients of the inequalities defining P.
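For intuition, the integer hull of a small bounded polyhedron can be computed directly by enumerating its integral points and taking their convex hull. The polytope below is my own illustrative example, not one from the text; its LP vertices (3/2, 0) and (0, 3/2) are fractional, while P_I has only integral vertices:

```python
from itertools import product

def in_P(p):
    # P = { (x, y) : x >= 0, y >= 0, 2x + 2y <= 3 }
    x, y = p
    return x >= 0 and y >= 0 and 2 * x + 2 * y <= 3

# Integral points of P (the bounding box 0..2 suffices for this example).
pts = sorted(p for p in product(range(3), repeat=2) if in_P(p))

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    points = sorted(set(points))
    if len(points) <= 2:
        return points
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for seq, out in ((points, lower), (reversed(points), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    return lower[:-1] + upper[:-1]

print(pts)               # [(0, 0), (0, 1), (1, 0)]
print(convex_hull(pts))  # the vertices of P_I
```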
Definition 2.13: (Integral Polyhedron) A rational polyhedron P with the property P = P_I is called an integral polyhedron.

Remark 2.14: It is easy to see that for a rational polyhedron P the following are equivalent:

(i) P is integral, i.e., P = P_I, i.e., P is the convex hull of the integral vectors in P.
(ii) Each face of P contains integral vectors.
(iii) Each minimal face of P contains integral vectors.
(iv) max {cx | x ∈ P} is attained by an integral vector for each c for which the maximum is finite.
Definition 2.15: (Hilbert Basis) A finite set of vectors a_1, …, a_t is a Hilbert basis if each integral vector b in cone{a_1, …, a_t} is a nonnegative integral combination of a_1, …, a_t.

Note 2.16: When each vector in a Hilbert basis is integral, it is called an integral Hilbert basis.

Theorem 2.17: [Gordan] Each rational polyhedral cone C is generated by an integral Hilbert basis.

[Van der Corput] If C is pointed, there is a unique minimal integral Hilbert basis generating C (minimal relative to taking subsets).
Proof: Let C be a rational polyhedral cone, generated by b_1, …, b_k say [Theorem 1.8]. Without loss of generality, we can assume that b_1, …, b_k are integral vectors. Let a_1, …, a_t be all the integral vectors in the polytope

{λ_1 b_1 + … + λ_k b_k | 0 ≤ λ_i ≤ 1, i = 1, …, k}. (10)

We claim that a_1, …, a_t form an integral Hilbert basis. In particular, those b_1, …, b_k among a_1, …, a_t generate C. Let b be any integral point in C. We have

b = µ_1 b_1 + … + µ_k b_k, µ_i ≥ 0. (11)

Then

b = ⌊µ_1⌋ b_1 + … + ⌊µ_k⌋ b_k + (µ_1 − ⌊µ_1⌋) b_1 + … + (µ_k − ⌊µ_k⌋) b_k, (12)

b − (⌊µ_1⌋ b_1 + … + ⌊µ_k⌋ b_k) = (µ_1 − ⌊µ_1⌋) b_1 + … + (µ_k − ⌊µ_k⌋) b_k. (13)

The left-hand side vector, as it is integral, occurs among a_1, …, a_t: the right-hand side of (13) clearly belongs to the polytope (10), because 0 ≤ µ_i − ⌊µ_i⌋ ≤ 1. Since b_1, …, b_k also occur among a_1, …, a_t, it follows that b is a nonnegative integral combination of a_1, …, a_t. So a_1, …, a_t form a Hilbert basis.

Now, assume that the cone C is pointed. Define

H := {a ∈ C | a ≠ 0, a integral, a is not the sum of two other nonzero integral vectors in C}. (14)

It is clear that any integral Hilbert basis generating C must contain H, and H is finite, as H is contained in (10).

We claim that H itself is a Hilbert basis generating C. Let b be a vector such that bx > 0 for all x ∈ C \ {0} (such b exists, as C is pointed; we may take b integral). Suppose not every integral vector in C is a nonnegative integral combination of vectors in H, and let c be such a vector with bc as small as possible (this choice is possible, as b is integral, so bc ranges over nonnegative integers). Then c is not in H. Hence

c = c_1 + c_2 for certain nonzero integral vectors c_1 and c_2 in C.

Then

0 < bc_1 < bc and 0 < bc_2 < bc.

Hence, by the minimality of bc, both c_1 and c_2 are nonnegative integral combinations of vectors in H, and therefore c is also, a contradiction. Therefore H is a Hilbert basis. As H is contained in any integral Hilbert basis generating C, it is minimal. ∎

Remarks 2.18:
(i) Combining the methods of Theorems 2.11 and 2.17, for any rational polyhedron P there exist integral vectors x_1, …, x_t, y_1, …, y_s such that

{x | x ∈ P, x integral} = {λ_1 x_1 + … + λ_t x_t + µ_1 y_1 + … + µ_s y_s | λ_1, …, λ_t, µ_1, …, µ_s nonnegative integers with λ_1 + … + λ_t = 1}. (15)

(ii) If the cone is not pointed, there is no unique minimal integral Hilbert basis generating the cone.
(iii) If a vector c belongs to a minimal integral Hilbert basis generating a pointed cone then the components of c are relatively prime integers.
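The polytope construction (10) in the proof of Theorem 2.17 can be carried out explicitly for a small cone. The generators b_1 = (1, 0), b_2 = (1, 2) and the search bounds below are my own choices for illustration:

```python
from fractions import Fraction
from itertools import product

b1, b2 = (1, 0), (1, 2)

def in_cone(p):
    # p = l1*b1 + l2*b2 with l1, l2 >= 0  <=>  l2 = y/2 >= 0 and l1 = x - y/2 >= 0
    x, y = p
    l2 = Fraction(y, 2)
    l1 = x - l2
    return l1 >= 0 and l2 >= 0

# All integral vectors in the polytope { l1*b1 + l2*b2 : 0 <= l1, l2 <= 1 }.
basis = []
for x, y in product(range(0, 5), repeat=2):
    l2 = Fraction(y, 2)
    l1 = x - l2
    if 0 <= l1 <= 1 and 0 <= l2 <= 1 and (x, y) != (0, 0):
        basis.append((x, y))

print(basis)  # [(1, 0), (1, 1), (1, 2), (2, 2)]

# As the theorem predicts, every integral vector in the cone (checked in a
# small box) is a nonnegative integral combination of the basis vectors.
def representable(p, coeff_bound=10):
    return any(
        all(p[k] == sum(c * v[k] for c, v in zip(cs, basis)) for k in (0, 1))
        for cs in product(range(coeff_bound), repeat=len(basis)))

assert all(representable(p)
           for p in product(range(6), repeat=2) if in_cone(p))
```

Note that the generator b_1 = (1, 0) alone is not a Hilbert basis for this cone: (1, 1) lies in the cone but is not an integral combination of (1, 0) and (1, 2).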
Theorem 2.19: [A Theorem of Doignon] Let a system
a_1 x ≤ β_1, …, a_m x ≤ β_m (16)

of linear inequalities in n variables be given. If (16) has no integral solution, then there are 2^n or fewer constraints among (16) which already have no integral solution.
Proof: Suppose (16) has no integral solution. We may assume that if we delete any one of the constraints in (16), the remaining system has an integral solution. This means there exist integral vectors x_1, …, x_m so that, for j = 1, …, m,

a_j x_j > β_j and a_i x_j ≤ β_i for all i ≠ j.

We must show m ≤ 2^n. So assume m > 2^n. Let
Z = ℤ^n ∩ conv.hull{x_1, …, x_m}. (17)
Choose γ_1, …, γ_m so that:

(i) γ_j ≥ min {a_j z | z ∈ Z, a_j z > β_j} for each j;
(ii) the system a_1 x < γ_1, …, a_m x < γ_m has no solution in Z;
(iii) γ_1 + … + γ_m is as large as possible. (18)

We claim that such γ_1, …, γ_m exist. This is proved by showing that the set of (γ_1, …, γ_m) satisfying (i) and (ii) is nonempty, bounded, and closed. Note that x_j ∈ {z ∈ Z | a_j z > β_j}, so this set is nonempty and the minimum in (i) exists.
If we take

γ_j = min {a_j z | z ∈ Z, a_j z > β_j} for each j,

then (i) holds; note also that γ_j > β_j. Any z ∈ Z with a_j z < γ_j for all j would satisfy a_j z ≤ β_j for all j, contradicting that (16) has no integral solution; so the system in (ii) has no solution in Z. This shows that the set in (18) is nonempty. Next, if γ_j > a_j x_j for some j, then, as a_i x_j ≤ β_i < γ_i for i ≠ j, the vector x_j would be a solution of the system in (ii). Therefore

γ_j ≤ a_j x_j for each j,

i.e., the set of (γ_1, …, γ_m) satisfying (18) is bounded.
Now, the set of (γ_1, …, γ_m) for which the system in (ii) does have a solution in Z is

∪_{z ∈ Z} {γ | γ_j > a_j z for all j},

a union over the finite set Z of finite intersections of open half-spaces, hence an open set. So the set in (18) is closed, and the maximum in (iii) is attained.

Since γ_1 + … + γ_m is as large as possible, for each j = 1, …, m there exists y_j ∈ Z so that

a_j y_j = γ_j and a_i y_j < γ_i for all i ≠ j. (19)

As m > 2^n, there exist k ≠ l so that y_k ≡ y_l (mod 2), i.e., y_k and y_l agree componentwise in parity (there are only 2^n parity classes of integral vectors). Thus (1/2)(y_k + y_l) is an integral vector, belongs to Z, and, in view of (19), satisfies the system in (ii): a_i · (1/2)(y_k + y_l) < γ_i for i ∉ {k, l}, while a_k · (1/2)(y_k + y_l) = (1/2)(γ_k + a_k y_l) < γ_k, and similarly for l. This contradicts (ii). Therefore m ≤ 2^n. ∎

Corollary 2.20: [Scarf] Let Ax ≤ b be a system of linear inequalities in n variables, and let c ∈ ℚ^n. If max {cx | Ax ≤ b; x integral} is finite, then
max {cx | Ax ≤ b; x integral} = max {cx | A′x ≤ b′; x integral} (20)

for some subsystem A′x ≤ b′ of Ax ≤ b with at most 2^n − 1 inequalities.

Proof: Let

µ = max {cx | Ax ≤ b; x integral}.

Then for each t ∈ ℕ the system

Ax ≤ b, cx ≥ µ + 1/t (21)

has no integral solution. Therefore, by Theorem 2.19, for each t ∈ ℕ there is a subsystem of (21) of at most 2^n constraints having no integral solution. Since Ax ≤ b does have an integral solution (as µ is finite), each such subsystem must contain the constraint cx ≥ µ + 1/t. As Ax ≤ b has only finitely many subsystems, there is one subsystem A′x ≤ b′ of at most 2^n − 1 constraints so that A′x ≤ b′, cx ≥ µ + 1/t has no integral solution for infinitely many values of t. Therefore A′x ≤ b′, cx > µ has no integral solution. This gives (20). ∎

Note 2.21: The bound 2^n in Theorem 2.19 is best possible. This is shown by the system
Σ_{i∈I} x_i − Σ_{i∉I} x_i ≤ |I| − 1 (I ⊆ {1, …, n}) (22)

of 2^n constraints in the n variables x_1, …, x_n. Observe that for n = 1 the above system is

x_1 ≤ 0 (I = {1}),
−x_1 ≤ −1 (I = ∅),

which is clearly infeasible.
Next, for n = 2, the system is
x_1 + x_2 ≤ 1 (I = {1, 2}),
x_1 − x_2 ≤ 0 (I = {1}),
x_2 − x_1 ≤ 0 (I = {2}),
−x_1 − x_2 ≤ −1 (I = ∅).

In particular, we have

x_1 + x_2 ≤ 1 and −x_1 − x_2 ≤ −1.

Clearly, the system has no integral solution: the middle two constraints give x_1 = x_2, and the other two force x_1 + x_2 = 1, i.e., x_1 = x_2 = 1/2.

Now, take n + 1 variables, and put Ĩ = I ∪ {n + 1} for each I ⊆ {1, …, n}. Observe that |Ĩ| = |I| + 1. Hence we can arrange the inequalities of the system (22) for n + 1 variables in pairs:
Σ_{i∈I} x_i − Σ_{i∈{1,…,n}\I} x_i − x_{n+1} ≤ |I| − 1 (I ⊆ {1, …, n}),
Σ_{i∈I} x_i + x_{n+1} − Σ_{i∈{1,…,n}\I} x_i ≤ |Ĩ| − 1 = |I|.

If this system has an integral solution, then adding each pair we get an integral solution of the system

2 Σ_{i∈I} x_i − 2 Σ_{i∈{1,…,n}\I} x_i ≤ 2|I| − 1 (I ⊆ {1, …, n}),

which for integral x means

Σ_{i∈I} x_i − Σ_{i∈{1,…,n}\I} x_i ≤ |I| − 1 (I ⊆ {1, …, n}),

which has no integral solution by the induction hypothesis.
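The tightness claim can be checked computationally for n = 2: the full system of 2^2 = 4 constraints has no integral solution, while every subsystem of 3 constraints does. A small sketch (the finite search box is an assumption of mine, adequate for this instance):

```python
from itertools import combinations, product

# One constraint a.x <= |I| - 1 for each I subseteq {1, 2} (indices 0, 1 here).
constraints = []
for I in [(), (0,), (1,), (0, 1)]:
    a = [1 if i in I else -1 for i in range(2)]
    constraints.append((a, len(I) - 1))

def has_integral_solution(cons, box=range(-4, 5)):
    return any(all(sum(a[i] * x[i] for i in range(2)) <= beta
                   for a, beta in cons)
               for x in product(box, repeat=2))

# The full system is integrally infeasible ...
assert not has_integral_solution(constraints)
# ... but every proper subsystem of 3 constraints has an integral solution.
assert all(has_integral_solution(list(sub))
           for sub in combinations(constraints, 3))
```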
Chapter 3
Totally Unimodular Matrices
Definition 3.1: (Totally Unimodular Matrix) A matrix A is totally unimodular if each subdeterminant of A is 0, +1, or −1.

Note 3.2: In particular, each entry in a totally unimodular matrix is 0, +1, or −1.
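The definition can be tested directly for small matrices by enumerating all square subdeterminants; the two example matrices below are my own, not from the text:

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (exact integer arithmetic).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of a directed 3-cycle (vertices x arcs): totally unimodular.
A1 = [[1, 0, -1],
      [-1, 1, 0],
      [0, -1, 1]]
# A {0, +1, -1} matrix with a 2x2 subdeterminant equal to 2: not TU.
A2 = [[1, 1],
      [-1, 1]]
print(is_totally_unimodular(A1))  # True
print(is_totally_unimodular(A2))  # False
```

This brute-force test is exponential in the matrix size and is meant only to make the definition concrete.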
Remark 3.3: It is easy to see that if A is totally unimodular, then the following matrices are also totally unimodular:

A^T, −A, [A I], [A −A], [A A].

Further, if A is a nonsingular square totally unimodular matrix, then A^{−1} is also totally unimodular.
A relation between total unimodularity and integer linear programming is given by the following result.
Theorem 3.4: Let A be a totally unimodular matrix and let b be an integral vector. Then the polyhedron P = {x | Ax ≤ b} is integral.

Proof: Consider a minimal face F of P,

F = {x | A′x = b′},

where A′x ≤ b′ is a subsystem of Ax ≤ b with A′ having full row rank. Then we may permute the columns of A′ in such a way that

A′ = [U V],

where U is a nonsingular matrix with det U = ±1. A basic feasible solution of A′x = b′ is

x = (U^{−1} b′, 0). (1)

Since U is totally unimodular, U^{−1} is also totally unimodular; in particular each entry of U^{−1} is 0 or ±1, so U^{−1} b′, and hence x, is integral. Thus every minimal face contains an integral vector. Hence P is an integral polyhedron. ∎

Note 3.5: The following corollary makes clear that each linear program with integer data and totally unimodular constraint matrix has an integral optimum solution.
Corollary 3.6: Let A be a totally unimodular matrix and let b and c be integral vectors. Then both problems in the LP-duality equation

max {cx | Ax ≤ b} = min {yb | y ≥ 0, yA = c} (2)

have integral optimum solutions.

Proof: By the above theorem the polyhedron {x | Ax ≤ b} is integral, and hence max {cx | Ax ≤ b} is attained by an integral vector. Further, as A is totally unimodular, so is the matrix

[A^T; −A^T; −I], (3)

which is the constraint matrix of the minimization problem (writing yA = c as A^T y^T ≤ c^T and −A^T y^T ≤ −c^T, and y ≥ 0 as −y^T ≤ 0, with integral right-hand side). We again use the above theorem to conclude that min {yb | y ≥ 0, yA = c} is attained by an integral vector. ∎

Remark 3.7: The Hoffman and Kruskal theorem below characterizes total unimodularity in a similar fashion.
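Corollary 3.6 can be observed numerically. The sketch below enumerates the vertices of {x | x ≥ 0, Ax ≤ b} for a small totally unimodular matrix (an interval matrix of my own choosing) by solving all tight n × n subsystems in exact arithmetic; every vertex turns out to be integral:

```python
from fractions import Fraction
from itertools import combinations

A = [[1, 1, 0],
     [0, 1, 1]]          # interval matrix: totally unimodular
b = [1, 2]
n = 3

# Stack A x <= b and -x <= 0 into one system M x <= r.
M = [row[:] for row in A] + [[-(i == j) for j in range(n)] for i in range(n)]
r = b + [0] * n

def solve(rows):
    """Solve the n x n equality subsystem by Gauss-Jordan; None if singular."""
    mat = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(r[i])]
           for i in rows]
    for col in range(n):
        piv = next((k for k in range(col, n) if mat[k][col] != 0), None)
        if piv is None:
            return None
        mat[col], mat[piv] = mat[piv], mat[col]
        mat[col] = [v / mat[col][col] for v in mat[col]]
        for k in range(n):
            if k != col and mat[k][col] != 0:
                mat[k] = [u - mat[k][col] * v for u, v in zip(mat[k], mat[col])]
    return [mat[i][n] for i in range(n)]

vertices = set()
for rows in combinations(range(len(M)), n):
    x = solve(rows)
    if x is not None and all(sum(M[i][j] * x[j] for j in range(n)) <= r[i]
                             for i in range(len(M))):
        vertices.add(tuple(x))

print(sorted(vertices))
# Hoffman-Kruskal / Corollary 3.6: all vertices are integral.
assert all(coord.denominator == 1 for v in vertices for coord in v)
```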
Definition 3.8: Let A be an m × n matrix of full row rank. A is called unimodular if A is integral and each basis of A (i.e., each nonsingular m × m submatrix) has determinant ±1.

Proposition 3.9: The matrix A ∈ ℤ^{m×n} is totally unimodular if and only if [I A] is unimodular.

Proof: [⇒] Let A be totally unimodular, and consider a basis of [I A]. If it contains no columns from A, it is the identity matrix, with determinant 1. Otherwise the basis can be rearranged (if necessary) into the form

[I B_1; 0 B_2], (4)

where B_2 is a square submatrix of A. Note that det B_2 ≠ 0, as the columns form a basis. But A is totally unimodular, so det B_2 = ±1, and hence the determinant of the basis is ±1.

[⇐] Let [I A] be unimodular, and consider a square k × k submatrix B of A, k ≤ m. If det B = 0 there is nothing to prove, so assume B is nonsingular. We can extend the columns of [I A] containing B by the m − k columns of I corresponding to the rows not met by B, obtaining a basis of [I A], which can be rearranged into the form

[I B_1; 0 B].

Now, by the unimodularity of [I A], ±1 = determinant of the basis = ± det B, so det B = ±1. ∎

Theorem 3.10: Let A be an integral matrix of full row rank. Then the polyhedron {x | x ≥ 0, Ax = b} is integral for each integral vector b if and only if A is unimodular.

Proof: Let A be an m × n matrix. First suppose that A is unimodular. Let b be an integral vector, and let x be a vertex of the polyhedron {x | x ≥ 0, Ax = b}. Then there are n linearly independent constraints satisfied by x with equality. Therefore the columns of A corresponding to the nonzero components of x are linearly independent.
We can extend these columns to a basis B of A. Then x, restricted to the coordinates corresponding to B, equals B^{−1}b, which is integral, as det B = ±1 makes B^{−1} an integral matrix. Since outside B the components of x are zero, it follows that x is integral.
[⇐] Suppose that {x | x ≥ 0, Ax = b} is integral for each integral vector b. Let B be a basis of A. To prove that det B = ±1, it suffices to show that B^{−1}t is integral for each integral vector t (then B^{−1} is an integral matrix, and det B · det B^{−1} = 1 forces det B = ±1). So let t be an integral vector. There exists an integral vector y such that

z := y + B^{−1}t ≥ 0.
Then b := Bz = By + t is integral. Now, extend z by adding zero components to obtain a vector z′ with

Az′ = Bz = b.

Then z′ is a vertex of the polyhedron {x | x ≥ 0, Ax = b} (as it is in the polyhedron and satisfies n linearly independent constraints with equality). Therefore z′ is integral, so z is integral, and z − y = B^{−1}t is integral. ∎

Corollary 3.11: (Hoffman and Kruskal's theorem) Let A be an integral matrix. Then A is totally unimodular if and only if for each integral vector b the polyhedron {x | x ≥ 0, Ax ≤ b} is integral.

Proof: Note that, for any integral vector b, the vertices of the polyhedron {x | x ≥ 0, Ax ≤ b} are integral if and only if the vertices of the polyhedron {z | z ≥ 0, [I A]z = b} are integral (transform Ax ≤ b into y + Ax = b with y ≥ 0, and put z = (y, x)). By Proposition 3.9, A is totally unimodular if and only if [I A] is unimodular. Hence Theorem 3.10 proves the corollary. ∎

Remark 3.12: An integral matrix A is totally unimodular if and only if for all integral vectors a, b, c, d the vertices of the polytope {x | c ≤ x ≤ d, a ≤ Ax ≤ b} are integral. Observe that the constraints can be written as x ≤ d, −x ≤ −c, Ax ≤ b, −Ax ≤ −a. Hence the corresponding constraint matrix has the form
[I; −I; A; −A]. (5)

Note 3.13: It is clear from the Hoffman and Kruskal theorem that an integral matrix A is totally unimodular if and only if one of the following polyhedra has all vertices integral, for each integral vector b and for some integral vector c:

{x | x ≥ c, Ax ≤ b},  {x | x ≤ c, Ax ≤ b},
{x | x ≥ c, Ax ≥ b},  {x | x ≤ c, Ax ≥ b}.

Corollary 3.14: An integral matrix A is totally unimodular if and only if for all integral vectors b and c both sides of the linear programming duality equation
max {cx | x ≥ 0, Ax ≤ b} = min {yb | y ≥ 0, yA ≥ c} (7)

are achieved by integral vectors x and y (if they are finite).
Proof: Clear from the above corollary, noting that A is totally unimodular if and only if A^T is totally unimodular. ∎

Theorem 3.15: Let A be a matrix with entries 0, +1, or −1. Then the following are equivalent:
(i) A is totally unimodular, i.e., each square submatrix of A has determinant 0, +1, or −1.
(ii) [Hoffman & Kruskal] For each integral vector b the polyhedron {x | x ≥ 0, Ax ≤ b} has only integral vertices.
(iii) [Hoffman &