Matroid Optimization and Algorithms
Robert E. Bixby (Rice University) and William H. Cunningham (Carleton University)
June, 1990. TR90-15

Contents

1. Introduction
2. Matroid Optimization
3. Applications of Matroid Intersection
4. Submodular Functions and Polymatroids
5. Submodular Flows and Other General Models
6. Matroid Connectivity Algorithms
7. Recognition of Representability
8. Matroid Flows and Linear Programming

1. INTRODUCTION

This chapter considers matroid theory from a constructive and algorithmic viewpoint. A substantial part of the developments in this direction have been motivated by optimization. Matroid theory has led to a unification of fundamental ideas of combinatorial optimization as well as to the solution of significant open problems in the subject. In addition to its influence on this larger subject, matroid optimization is itself a beautiful part of matroid theory.

The most basic optimizational property of matroids is that for any subset, every maximal independent set contained in it is maximum. Alternatively, a trivial algorithm maximizes any $\{0,1\}$-valued weight function over the independent sets. Most of matroid optimization consists of attempts to solve successive generalizations of this problem. In one direction it is generalized to the problem of finding a largest common independent set of two matroids: the matroid intersection problem. This problem includes the matching problem for bipartite graphs, and several other combinatorial problems. In Edmonds' solution of it and the equivalent matroid partition problem, he introduced the notions of good characterization (intimately related to the class NP) and matroid (oracle) algorithm. A second direction of generalization is to maximizing any weight function over the independent sets.
Here a greedy algorithm also works: consider the elements in order of non-increasing weight, at each step accepting an element if its weight is positive and it is independent with the set of previously accepted elements. A most significant step was Edmonds' recognition of a polyhedral interpretation of this fact. He used the fact that the greedy algorithm optimizes any linear function over the convex hull of characteristic vectors of independent sets to establish a linear-inequality description of that polyhedron. He then showed that a greedy algorithm also works for the much larger class of polymatroids: polyhedra that are defined from functions that, like matroid rank functions, are submodular. This polyhedral approach led to the solution of the weighted matroid intersection problem and extensive further generalizations, culminating in the optimal submodular flow problem. Other important related problems are that of minimizing an arbitrary submodular function and a polymatroid generalization of nonbipartite matching. Complete solutions of these problems remain to be discovered, but substantial progress has been made.

There is a second aspect of matroid theory, other than the optimizational one, for which a constructive approach is desirable. This might be called the structural aspect. We seek constructive answers to such fundamental questions as the connectivity, graphicness, or linear representability of a given matroid. It is pleasant to realize how many of the classical structural results of Tutte and Seymour are essentially constructive in nature. In fact, the most important structural result to date, Seymour's characterization of regular matroids, yields an algorithm to recognize regularity by reducing that problem to recognizing graphicness and connectivity, two problems to which Tutte contributed substantially. On the other hand, many structural questions can be proved to be unsolvable by matroid algorithms.
Indeed, recognizing regularity turns out to be essentially the only question on linear representability that is solvable.

We use the matroid terminology introduced in the Handbook chapter of Welsh. The following additional notation is used. For a subset $A$ of the set $S$, we use $\bar{A}$ to denote $S \setminus A$. Where $J$ is an independent set of a matroid $M = (S, \mathcal{I})$ and $e \in S$ with $J \cup \{e\} \notin \mathcal{I}$, we use $C(J, e)$ to denote the unique circuit (fundamental circuit) contained in $J \cup \{e\}$. Use of this notation will imply that $J \cup \{e\} \notin \mathcal{I}$.

2. MATROID OPTIMIZATION

Two of the most natural optimization problems for a matroid $M = (S, \mathcal{I})$ with weight vector $c \in \mathbf{R}^S$ are to find an independent set of maximum weight, and to find a circuit of minimum weight. These generalize standard optimization problems on graphs. We show in this section that the classical greedy algorithm solves the independent set problem. While this problem is easy, it leads to very useful polyhedral methods. Considering the greedy algorithm requires the discussion of efficiency of matroid algorithms. It turns out that, with respect to the resulting notion of algorithmic solvability, the circuit problem above is intractable.

The Optimal Independent Set Problem

If we are asked to find a maximum cardinality independent set, we know from the independent set axioms (Chapter Welsh) that any maximal independent set is a solution. Hence the following trivial algorithm works: where $S = \{e_1, e_2, \ldots, e_n\}$, start with $J = \emptyset$ and treat the $e_i$ sequentially, adding $e_i$ to $J$ if and only if $J \cup \{e_i\}$ is independent. The maximum weight independent set problem

(2.1)  $\max(c(J) : J \in \mathcal{I})$

includes the above problem as a special case (take each $c_i = 1$). Moreover, (2.1) is solved by making two simple modifications to the above algorithm. First, we treat the elements of $S$ in order of non-increasing weight, and second, we do not add to $J$ any negative-weight elements. The resulting method is the greedy algorithm (GA).
Greedy Algorithm for the Maximum-Weight Independent Set Problem

Order $S = \{e_1, e_2, \ldots, e_n\}$ so that $c_{e_1} \ge c_{e_2} \ge \cdots \ge c_{e_m} \ge 0 \ge c_{e_{m+1}} \ge \cdots \ge c_{e_n}$
$J := \emptyset$
For $i = 1$ to $m$ do: if $J \cup \{e_i\} \in \mathcal{I}$ then $J := J \cup \{e_i\}$

(2.2) Theorem. For any matroid $M = (S, \mathcal{I})$ and any $c \in \mathbf{R}^S$, GA solves (2.1).

Proof. Suppose that $J = \{j_1, \ldots, j_k\}$ is found by GA, but $Q = \{q_1, \ldots, q_\ell\}$ has larger weight. Assume that the $j_i$ are in the order in which GA added them, and that $c_{q_1} \ge c_{q_2} \ge \cdots \ge c_{q_\ell}$. There is a least index $i$ such that $c_{q_i} > c_{j_i}$, or such that $c_{q_i} > 0$ and $i > k$. Then $\{j_1, \ldots, j_{i-1}\}$ is a basis of $A = \{j_1, \ldots, j_{i-1}, q_1, \ldots, q_i\}$, for otherwise GA would choose one of $q_1, \ldots, q_i$ as $j_i$. But $\{q_1, \ldots, q_i\}$ is a larger independent subset of $A$, a contradiction. □

A pair $(S, \mathcal{I})$ satisfying only that $\mathcal{I} \neq \emptyset$ and that $A \subseteq B \in \mathcal{I}$ implies $A \in \mathcal{I}$ is sometimes called an independence system. Both the problem (2.1) and the greedy algorithm can be stated for any independence system, and one wonders whether other independence systems are similarly nice. However, the matroid axioms say that $(S, \mathcal{I})$ is a matroid if and only if GA solves (2.1) for every $c \in \{0, 1\}^S$. In view of this observation, Theorem (2.2) can be restated as follows.

(2.3) Theorem. An independence system $(S, \mathcal{I})$ is a matroid if and only if GA solves (2.1) for every $c \in \mathbf{R}^S$.

A second proof of (2.2), almost as short and much more useful, leads us to the polyhedral method.

Proof (of 2.2). Let $x$ be the characteristic vector of the set $J$ produced by GA, and let $\bar{x}$ be the characteristic vector of any independent set $J'$. Then $c(J') = \sum c_{e_i} \bar{x}_{e_i} = c \cdot \bar{x}$. Let $T_i = \{e_1, \ldots, e_i\}$ for $0 \le i \le n$. Notice that $x(T_i) \ge \bar{x}(T_i)$ for $1 \le i \le m$, because $J \cap T_i$ is a maximal independent subset of $T_i$. Then

$$\begin{aligned}
c \cdot \bar{x} &= \sum_{i=1}^{m} c_{e_i} \bar{x}_{e_i} + \sum_{i=m+1}^{n} c_{e_i} \bar{x}_{e_i} \\
&= \sum_{i=1}^{m-1} (c_{e_i} - c_{e_{i+1}}) \bar{x}(T_i) + c_{e_m} \bar{x}(T_m) + \sum_{i=m+1}^{n} c_{e_i} \bar{x}_{e_i} \\
&\le \sum_{i=1}^{m-1} (c_{e_i} - c_{e_{i+1}}) x(T_i) + c_{e_m} x(T_m) + \sum_{i=m+1}^{n} c_{e_i} x_{e_i}.
\end{aligned}$$

But the last line is $c \cdot x$, since the inequality holds with equality for $\bar{x} = x$. □
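As a concrete illustration of GA (a sketch of ours, not code from the text), the following specializes it to the graphic matroid of a graph, whose independent sets are the forests; a union-find structure serves as the independence oracle, since adding an edge keeps $J$ independent exactly when its ends lie in different components of $J$.

```python
# Illustrative sketch: GA on the graphic matroid (independent sets = forests).

def find(parent, x):
    """Union-find root lookup with path halving."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def greedy_max_weight_forest(edges, weight):
    """GA: scan edges by non-increasing weight, stop at negative weights,
    accept an edge iff it joins two distinct components of J."""
    parent, J = {}, []
    for e in sorted(edges, key=lambda e: -weight[e]):
        if weight[e] < 0:
            break  # all remaining elements have negative weight
        u, v = e
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:          # J + e is still a forest, i.e. independent
            parent[ru] = rv
            J.append(e)
    return J

w = {(1, 2): 3, (2, 3): 2, (1, 3): 5, (3, 4): -1}
J = greedy_max_weight_forest(list(w), w)
print(J, sum(w[e] for e in J))   # [(1, 3), (1, 2)] 8
```

Here the edge $(2,3)$ is rejected because it closes a cycle with the already-accepted heavier edges, and $(3,4)$ is never considered because its weight is negative, exactly as in the statement of GA.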
Notice that the only properties of $x$, $\bar{x}$ used in the second proof of (2.2) were that $\bar{x} \ge 0$ and $\bar{x}(T_i) \le x(T_i)\,(= r(T_i))$, $1 \le i \le m$. So GA actually solves the following linear programming problem, since $x(T_i) = r(T_i)$ implies $\bar{x}(T_i) \le x(T_i)$:

(2.4)  maximize $c \cdot x$
       subject to $x(A) \le r(A)$, $A \subseteq S$;
       $x \ge 0$.

This observation implies the following Matroid Polytope Theorem of Edmonds (1970).

(2.5) Theorem. For any matroid $M = (S, \mathcal{I})$, the extreme points of the polytope $P(M) = \{x \in \mathbf{R}^S_+ : x(A) \le r(A) \text{ for all } A \subseteq S\}$ are precisely the characteristic vectors of independent sets of $M$.

Proof. It is easy to see that the characteristic vector of any independent set is an extreme point of $P(M)$. Now let $x'$ be an extreme point of $P(M)$. Then there is $c \in \mathbf{R}^S$ such that $x'$ is the unique optimal solution of $\max(c \cdot x : x \in P(M))$. Applying GA to $M$ and this $c$, we obtain $x$, the characteristic vector of an independent set, and $x$ solves the same linear programming problem. Hence $x' = x$, as required. □

The dual linear program to (2.4) is:

(2.6)  minimize $\sum(r(A)\,y_A : A \subseteq S)$
       subject to $\sum(y_A : j \in A) \ge c_j$, for $j \in S$;
       $y_A \ge 0$, for $A \subseteq S$.

Analysis of the second proof of Theorem (2.2) shows that the following formula, sometimes called the dual greedy algorithm, gives an optimal solution $y'$ to (2.6):

$$y'_{T_i} = c_{e_i} - c_{e_{i+1}}, \ 1 \le i < m; \qquad y'_{T_m} = c_{e_m}; \qquad y'_A = 0 \text{ for all other } A \subseteq S.$$
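The primal-dual pair (2.4)/(2.6) and the dual greedy formula can be checked numerically on a small example. The sketch below is an illustration of ours, not from the text; it uses the uniform matroid $U_{k,n}$, whose rank function $r(A) = \min(|A|, k)$ is easy to write down, and confirms that the GA objective equals the dual greedy objective $\sum_i r(T_i)\,y'_{T_i}$, as LP duality requires.

```python
# Illustration (assumed example): on the uniform matroid U_{k,n} with
# r(A) = min(|A|, k), compare GA's primal value with the dual greedy
# objective of (2.6).

def greedy_and_dual_values(c, k):
    order = sorted(range(len(c)), key=lambda j: -c[j])   # e_1, ..., e_n
    m = sum(1 for cj in c if cj > 0)                     # positive-weight prefix
    # Primal: on U_{k,n}, GA keeps the k heaviest positive elements.
    primal = sum(c[order[i]] for i in range(min(m, k)))
    # Dual greedy: y'_{T_i} = c_{e_i} - c_{e_{i+1}} for i < m, y'_{T_m} = c_{e_m};
    # dual objective is sum over i of r(T_i) * y'_{T_i}, with r(T_i) = min(i, k).
    dual = 0
    for i in range(1, m + 1):
        y = c[order[i - 1]] - (c[order[i]] if i < m else 0)
        dual += min(i, k) * y
    return primal, dual

p, d = greedy_and_dual_values([5, 3, 2, -1], k=2)
print(p, d)   # 8 8
```

With $c = (5, 3, 2, -1)$ and $k = 2$, GA picks the weights 5 and 3, while the dual greedy solution places $y'_{T_1} = 2$, $y'_{T_2} = 1$, $y'_{T_3} = 2$ on the nested sets $T_i$, and both objectives come to 8.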