On Bisubmodular Maximization


On Bisubmodular Maximization

Ajit P. Singh ([email protected]), Andrew Guillory ([email protected]), Jeff Bilmes ([email protected])
University of Washington, Seattle

Abstract

Bisubmodularity extends the concept of submodularity to set functions with two arguments. We show how bisubmodular maximization leads to richer value-of-information problems, using examples in sensor placement and feature selection. We present the first constant-factor approximation algorithm for a wide class of bisubmodular maximizations.

1 Introduction

Central to decision making is the problem of selecting informative observations, subject to constraints on the cost of data acquisition. For example:

Sensor Placement: One is given a set of n locations, V, and a set of k ≤ n sensors, each of which can cover a fixed area around it. Which subset X ⊆ V should be instrumented, to maximize the total area covered?

Feature Selection: Given a regression model on n features, V, and an objective which measures feature quality, f : 2^V → R, select the best k features.

Both examples are selection problems, for which a near-optimal approximation can be found efficiently when the objective is submodular (see Definition 1, below). Many selection problems involve a submodular objective: e.g., sensor placement [16, 15], document summarization [20], influence maximization in social networks [12], and feature selection [13]. All can be framed as submodular function maximization:

    max_{S ⊆ V} f(S)    (1)
    subject to S ∈ I_i, ∀i ∈ {1 … r}.

The constraint sets I_i are independent sets of matroids, which include knapsack or cardinality constraints. Fully polynomial-time approximation algorithms exist when the matroid constraints consist only of a cardinality constraint [22], or under multiple matroid constraints [6].

Our interest lies in enriching the scope of selection problems which can be efficiently solved. We discuss a richer class of properties on the objective, known as bisubmodularity [24, 1], to describe biset optimizations:

    max_{A,B ⊆ V} f(A, B)    (2)
    subject to (A, B) ∈ I_i, ∀i ∈ {1 … r}.

Bisubmodular function maximization allows for richer problem structures than submodular max: i.e., simultaneously selecting and partitioning elements into two groups, where the value of adding an element to a set can depend on interactions between the two sets.

To illustrate the potential of bisubmodular maximization, we consider two distinct problems:

Coupled Sensor Placement (Section 5): One is given a set of n locations, V. There are two different kinds of sensors, which differ in cost and area covered. Given a total sensor budget, k, select sites to instrument with sensors of each type (A, B), to maximize the total area covered.

Coupled Feature Selection (Section 6): Given a regression model on n features, V, select a set of k features and partition them into disjoint sets A and B, such that a joint measure of feature quality f : A × B → R is maximized.

This paper is the first to (i) propose bisubmodular maximization to describe a richer class of value-of-information problems; (ii) define sufficient conditions under which bisubmodular max is efficiently solvable; (iii) provide an algorithm for such instances (algorithms for bisubmodular min exist [7]); (iv) prove, by construction, the existence of an embedding of a directed bisubmodular function into a submodular one. Ancillary to our study of bisubmodular maximization is a new result on the extension of submodular functions defined over matroids (Theorem 2).

Appearing in Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS) 2012, La Palma, Canary Islands. Volume 22 of JMLR: W&CP 22. Copyright 2012 by the authors.
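To make the biset objective in Equation (2) concrete, the following sketch solves a toy instance of coupled sensor placement by brute force. Everything here, the one-dimensional geometry, the radii, and the candidate sites, is a hypothetical illustration, not the paper's setup: type-1 sensors cover radius 1, type-2 sensors cover radius 2, and f(A, B) counts the covered cells.

```python
from itertools import combinations

def covered(sites, radius):
    """Cells on the integer line covered by sensors at the given sites."""
    cells = set()
    for s in sites:
        cells.update(range(s - radius, s + radius + 1))
    return cells

def f(A, B):
    """Biset objective f(A, B): cells covered when A gets radius-1 sensors
    and B gets radius-2 sensors (monotone in both arguments)."""
    return len(covered(A, 1) | covered(B, 2))

V = [0, 3, 6, 9]

# Brute force over disjoint pairs (A, B) with |A| + |B| <= 2, i.e.
# constraints in the style of (iii) and (iv) of Corollary 1.
candidates = []
for a in range(3):
    for A in combinations(V, a):
        rest = [v for v in V if v not in A]
        for b in range(3 - a):
            for B in combinations(rest, b):
                candidates.append((f(set(A), set(B)), set(A), set(B)))

best_val, best_A, best_B = max(candidates, key=lambda t: t[0])
# Two radius-2 sensors placed far apart win on this instance.
```

The point of the toy is that the best assignment depends jointly on both sets: spending the whole budget on the wider sensor type beats any mixed assignment here, which a single-argument set function cannot express.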
Our key observation is that many bisubmodular maximization problems can be reduced to matroid-constrained submodular maximization, for which efficient, constant-factor approximation algorithms already exist. We first introduce the concept of simple bisubmodularity (Section 3), which describes the biset analogue of submodularity. Directed bisubmodularity [23] allows for more complex interaction between the set arguments. The question of whether or not directed bisubmodular functions can be efficiently maximized has been an open problem for over twenty years. Our reduction approach proves that a wide range of directed bisubmodular objectives can be efficiently maximized (Section 4).

2 Preliminaries

We first introduce basic concepts and notation.

Definition 1 (Submodularity). A set-valued function over ground set V, f : 2^V → R, is submodular if for any A ⊆ A′ ⊆ V \ v,

    f(A + v) − f(A) ≥ f(A′ + v) − f(A′).    (3)

Equivalently, ∀A, A′ ⊆ V,

    f(A) + f(A′) ≥ f(A ∪ A′) + f(A ∩ A′).    (4)

+ is used to denote adding an element to a set.

A set function f(A, B) with two arguments A ⊆ V and B ⊆ V is a biset function. In some cases we assume the domain of f to be all ordered pairs of subsets of V:

    2^{2V} ≜ {(A, B) : A ⊆ V, B ⊆ V}.

In other cases we assume the domain of f to be ordered pairs of disjoint subsets of V:

    3^V ≜ {(A, B) : A ⊆ V, B ⊆ V, A ∩ B = ∅}.

A biset function is normalized if f(∅, ∅) = 0. In some cases we will also assume f(A, B) is monotone.

Definition 2 (Monotonicity). A biset function f over 2^{2V} is monotone (monotone non-decreasing) if for any s ∈ V, (A, B) ∈ 2^{2V}:

    f(A + s, B) ≥ f(A, B) and f(A, B + s) ≥ f(A, B).

A biset function f over 3^V is monotone (monotone non-decreasing) if for any s ∈ V \ (A ∪ B), (A, B) ∈ 3^V:

    f(A + s, B) ≥ f(A, B) and f(A, B + s) ≥ f(A, B).

Monotone submodular maximization is nontrivial only when constrained. We focus on matroid constraints:

Definition 3 (Matroid). A matroid is a set of sets I with three properties: (i) ∅ ∈ I, (ii) if A ∈ I and B ⊆ A then B ∈ I (i.e., I is an independence system), and (iii) if A ∈ I and B ∈ I and |B| > |A| then there is an s ∈ B such that s ∉ A and (A + s) ∈ I.

3 Simple Bisubmodularity

A natural analogue to submodularity for biset functions is simple bisubmodularity:

Definition 4 (Simple Bisubmodularity). f : 2^{2V} → R is simple bisubmodular iff for each (A, B) ∈ 2^{2V}, (A′, B′) ∈ 2^{2V} with A ⊆ A′, B ⊆ B′, we have for s ∉ A′ and s ∉ B′:

    f(A + s, B) − f(A, B) ≥ f(A′ + s, B′) − f(A′, B′),
    f(A, B + s) − f(A, B) ≥ f(A′, B′ + s) − f(A′, B′).

Equivalently, ∀(A, B), (A′, B′) ∈ 2^{2V},

    f(A, B) + f(A′, B′) ≥ f(A ∪ A′, B ∪ B′) + f(A ∩ A′, B ∩ B′).

Fixing one of the coordinates of f(A, B) yields a submodular function in the free coordinate.

3.1 Reduction to Submodular Maximization

The reduction uses a duplication of the ground set (cf. [10, 2]). Define V̄ to be an extended ground set formed by taking the disjoint union of 2 copies of the original ground set V. For an element s ∈ V, we use s_i, where i ∈ {1, 2}, to refer to the ith copy in V̄. Then V̄ = V_1 ∪ V_2, where V_i ≜ {s_i : s ∈ V}. For an element s ∈ V̄ we use abs(s) to refer to the corresponding original element in V (i.e. abs(s_i) ≜ s). For a set S ⊆ V̄ we similarly use abs(S) ≜ {abs(s) : s ∈ S}.

Given any simple bisubmodular function f, define a single-argument set function g for S ⊆ V̄:

    g(S) ≜ f(abs(S ∩ V_1), abs(S ∩ V_2)).

Note that there is a one-to-one mapping between f(A, B) for (A, B) ∈ 2^{2V} and g(S) for S ⊆ V̄, and therefore maximizing g(S) is equivalent to maximizing f(A, B). Furthermore,

Lemma 1. g(S) is submodular if f(A, B) is a simple bisubmodular function.

Note also that g is monotone when f is monotone. Based on this connection, maximizing a simple bisubmodular function f reduces to maximizing a normal submodular function g. We can then exploit constant-factor approximation algorithms for submodular function maximization.

Corollary 1. If f(A, B) is non-negative and simple bisubmodular, then there exists a constant-factor approximation algorithm for solving Equation 2 when the constraints are either: (i) non-existent; (ii) |A| ≤ k_1, |B| ≤ k_2; (iii) |A| + |B| ≤ k; (iv) A ∩ B = ∅; (v) any combination of the above.

Proof. With no constraints, maximizing f(A, B) corresponds to maximizing non-negative, submodular g(S), solvable using Feige et al. [5]. The only question is how to represent the other constraints on f as matroid constraints on g. Cases (ii) and (iii) reduce to uniform matroids. In case (ii), |S ∩ V_1| ≤ k_1, |S ∩ V_2| ≤ k_2; in case (iii), |S ∩ V_1| + |S ∩ V_2| ≤ k. For case (iv) the constraint is a partition matroid constraint: ∀v ∈ V, |S ∩ {v_1, v_2}| ≤ 1. For any intersection of a constant number of matroid constraints we can use the algorithms of Lee et al. [18]. For the special case of monotone f (and therefore g) we can use the simple […]

The previous section answers this question for simple bisubmodular functions. In many situations, including those described in Sections 5 and 6, simple bisubmodularity is sufficient. The contributions of this section are primarily theoretical: we (i) connect simple to directed bisubmodularity, (ii) give a method for embedding a directed bisubmodular function into a submodular one; (iii) provide sufficient conditions under which directed bisubmodular maximization can be reduced to submodular maximization.

Definition 5 (Directed Bisubmodularity [24]). […]
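The reduction behind Lemma 1 and Corollary 1 is mechanical enough to sketch in code. The toy instance below (a coverage-style biset function; the specific sites and radii are hypothetical) builds the duplicated ground set V̄ as tagged pairs, defines g(S) = f(abs(S ∩ V1), abs(S ∩ V2)), verifies Lemma 1 for this instance by brute force, and then runs plain greedy on g under a cardinality constraint.

```python
from itertools import combinations

def f(A, B):
    """A toy simple bisubmodular function: coverage on the integer line,
    radius 1 for elements of A, radius 2 for elements of B."""
    cells = set()
    for s in A:
        cells.update(range(s - 1, s + 2))
    for s in B:
        cells.update(range(s - 2, s + 3))
    return len(cells)

V = [0, 2, 4, 6]
# Extended ground set V_bar: copy i of element s is the tagged pair (s, i).
V_bar = [(s, 1) for s in V] + [(s, 2) for s in V]

def g(S):
    """The reduction: g(S) = f(abs(S ∩ V1), abs(S ∩ V2))."""
    A = {s for (s, i) in S if i == 1}
    B = {s for (s, i) in S if i == 2}
    return f(A, B)

# Brute-force check of Lemma 1 on this instance:
# g(X) + g(Y) >= g(X | Y) + g(X & Y) for all X, Y.
subsets = [frozenset(c) for r in range(len(V_bar) + 1)
           for c in combinations(V_bar, r)]
is_submodular = all(g(X) + g(Y) >= g(X | Y) + g(X & Y)
                    for X in subsets for Y in subsets)

# Greedy maximization of g under the cardinality constraint |S| <= 3,
# i.e. constraint (iii) of Corollary 1 after the reduction.
S = set()
for _ in range(3):
    e = max((x for x in V_bar if x not in S), key=lambda x: g(S | {x}) - g(S))
    S.add(e)
```

Interpreting S back through abs recovers the biset solution (A, B); on this instance greedy covers all 11 reachable cells.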
Recommended publications
  • Fast Semidifferential-Based Submodular Function Optimization
    Fast Semidifferential-based Submodular Function Optimization: Extended Version. Rishabh Iyer ([email protected]), University of Washington, Seattle, WA 98195, USA; Stefanie Jegelka ([email protected]), University of California, Berkeley, CA 94720, USA; Jeff Bilmes ([email protected]), University of Washington, Seattle, WA 98195, USA. Abstract: We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and super-differentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new and generalize many old methods for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular minimization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to […] family of feasible solution sets. The set C could express, for example, that solutions must be an independent set in a matroid, a limited budget knapsack, or a cut (or spanning tree, path, or matching) in a graph. Without making any further assumptions about f, the above problems are trivially worst-case exponential time and moreover inapproximable. If we assume that f is submodular, however, then in many cases the above problems can be approximated and in some cases solved exactly in polynomial time. A function f : 2^V → R is said to be submodular (Fujishige, 2005) if for all subsets S, T ⊆ V, it holds that f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T). Defining f(j|S) ≜ f(S ∪ {j}) − f(S) as the gain of j ∈ V with respect to S ⊆ V, f is submodular if and only if f(j|S) ≥ f(j|T) for all S ⊆ T and j ∉ T.
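The modular (semigradient) bounds this framework rests on are easy to exhibit on a small example: for a submodular f and any permutation σ of the ground set, summing the chain gains f(j | predecessors of j in σ) gives a modular function that lower-bounds f everywhere and is tight on every prefix of σ. A minimal sketch with a hypothetical coverage function and permutation (this illustrates the bound, not the full algorithm of the paper):

```python
from itertools import combinations

COVER = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {0, 3}}  # toy coverage map

def f(S):
    """A small normalized submodular function: size of the covered set."""
    covered = set()
    for e in S:
        covered |= COVER[e]
    return len(covered)

V = [0, 1, 2, 3]
sigma = [2, 0, 3, 1]  # an arbitrary permutation of V

def h(A):
    """Modular lower bound induced by sigma: for each j in A, add the gain
    of j with respect to *all* of j's sigma-predecessors."""
    total, prefix = 0, set()
    for j in sigma:
        if j in A:
            total += f(prefix | {j}) - f(prefix)
        prefix = prefix | {j}
    return total

all_sets = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]
is_lower_bound = all(h(A) <= f(A) for A in all_sets)
is_tight_on_chain = all(h(set(sigma[:i])) == f(set(sigma[:i]))
                        for i in range(len(sigma) + 1))
```

Diminishing returns makes each chain gain no larger than the corresponding gain inside A, and the gains telescope exactly on prefixes, which is what the two checks confirm.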
  • Approximate Submodularity and Its Applications: Subset Selection, Sparse Approximation and Dictionary Selection∗
    Journal of Machine Learning Research 19 (2018) 1–34. Submitted 10/16; Revised 8/18; Published 8/18. Approximate Submodularity and its Applications: Subset Selection, Sparse Approximation and Dictionary Selection. Abhimanyu Das ([email protected]), Google; David Kempe ([email protected]), Department of Computer Science, University of Southern California. Editor: Jeff Bilmes. Abstract: We introduce the submodularity ratio as a measure of how "close" to submodular a set function f is. We show that when f has submodularity ratio γ, the greedy algorithm for maximizing f provides a (1 − e^{−γ})-approximation. Furthermore, when γ is bounded away from 0, the greedy algorithm for minimum submodular cover also provides essentially an O(log n) approximation for a universe of n elements. As a main application of this framework, we study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. We analyze the performance of widely used greedy heuristics; in particular, by showing that the submodularity ratio is lower-bounded by the smallest 2k-sparse eigenvalue of the covariance matrix, we obtain the strongest known approximation guarantees for the Forward Regression and Orthogonal Matching Pursuit algorithms. As a second application, we analyze greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; in particular, we focus on an analysis of how tight various spectral parameters and the submodularity ratio are in terms of predicting the performance of the greedy algorithms.
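The submodularity ratio can be computed exactly on small instances by enumerating the pairs in its definition: the minimum, over a set L and a disjoint set S of bounded size, of the sum of individual gains divided by the joint gain. A brute-force sketch with two hypothetical toy functions, a coverage function (which, being submodular, should give γ = 1) and a pure-complementarity function (which drives γ to 0):

```python
from itertools import combinations

def submodularity_ratio(f, V, k):
    """Brute-force submodularity ratio gamma_{V,k}: minimum over L and
    disjoint S with |S| <= k of (sum of single-element gains) / (joint gain),
    taken over pairs whose joint gain is strictly positive."""
    gamma = float("inf")
    all_L = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]
    for L in all_L:
        rest = [v for v in V if v not in L]
        for r in range(1, k + 1):
            for S in combinations(rest, r):
                joint = f(L | set(S)) - f(L)
                if joint > 0:
                    singles = sum(f(L | {x}) - f(L) for x in S)
                    gamma = min(gamma, singles / joint)
    return gamma

COVER = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}}
def f_cov(S):              # submodular: coverage
    out = set()
    for e in S:
        out |= COVER[e]
    return len(out)

def f_and(S):              # far from submodular: pure complementarity
    return 1.0 if {0, 1} <= S else 0.0

gamma_cov = submodularity_ratio(f_cov, [0, 1, 2], 2)
gamma_and = submodularity_ratio(f_and, [0, 1], 2)
```

For submodular f the sum of individual gains always dominates the joint gain (and singleton pairs make the ratio exactly 1), while for f_and the two elements have zero individual gain but positive joint gain, so the ratio collapses to 0.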
  • Fast and Private Submodular and K-Submodular Functions
    Fast and Private Submodular and k-Submodular Functions Maximization with Matroid Constraints. Akbar Rafiey, Yuichi Yoshida. Abstract: The problem of maximizing nonnegative monotone submodular functions under a certain constraint has been intensively studied in the last decade, and a wide range of efficient approximation algorithms have been developed for this problem. Many machine learning problems, including data summarization and influence maximization, can be naturally modeled as the problem of maximizing monotone submodular functions. However, when such applications involve sensitive data about individuals, their privacy concerns should be addressed. In this paper, we study the problem of maximizing monotone submodular functions subject to matroid constraints in the framework of differential privacy. We provide a (1 − 1/e)-approximation algorithm which improves upon the previous results in terms of approximation guarantee. This is done with an almost cubic number of function evaluations in our algorithm. Moreover, we study k-submodularity, a natural generalization of submodularity. We give the first 1/2-approximation algorithm that preserves differential privacy for maximizing monotone k-submodular functions subject to matroid constraints. The approximation ratio is asymptotically tight and is obtained with an almost linear number of function evaluations. 1. Introduction. A set function F : 2^E → R is submodular if for any S ⊆ T ⊆ E and e ∈ E \ T it holds that F(S ∪ {e}) − F(S) ≥ F(T ∪ {e}) − F(T). The theory of submodular maximization provides […]
  • Maximizing Non-Monotone Submodular Functions
    Maximizing non-monotone submodular functions. Uriel Feige, Vahab S. Mirrokni (Microsoft Research), Jan Vondrák. April 19, 2007. Abstract: Submodular maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed/undirected graphs and in hypergraphs, certain constraint satisfaction problems and maximum facility location problems. Unlike the problem of minimizing submodular functions, the problem of maximizing submodular functions is NP-hard. In this paper, we design the first constant-factor approximation algorithms for maximizing nonnegative submodular functions. In particular, we give a deterministic local search 1/3-approximation and a randomized 2/5-approximation algorithm for maximizing nonnegative submodular functions. We also show that a uniformly random set gives a 1/4-approximation. For symmetric submodular functions, we show that a random set gives a 1/2-approximation, which can also be achieved by deterministic local search. These algorithms work in the value oracle model where the submodular function is accessible through a black box returning f(S) for a given set S. We show that in this model, our 1/2-approximation for symmetric submodular functions is the best one can achieve with a subexponential number of queries. For the case that the submodular function is given explicitly (specifically, as a sum of polynomially many nonnegative submodular functions, each on a constant number of elements) we prove that for any fixed ε > 0, it is NP-hard to achieve a (5/6 + ε)-approximation for symmetric nonnegative submodular functions, or a (3/4 + ε)-approximation for general nonnegative submodular functions.
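The random-set guarantee is easy to sanity-check exhaustively on a cut function, which is nonnegative, symmetric, and submodular but not monotone; for a symmetric function the expected value of a uniformly random set should be at least half the maximum. The graph below is a hypothetical toy example:

```python
from itertools import combinations

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus one chord
V = [0, 1, 2, 3]

def cut(S):
    """Cut function: number of edges with exactly one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in EDGES)

subsets = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]
opt = max(cut(S) for S in subsets)                 # max cut, by brute force
avg = sum(cut(S) for S in subsets) / len(subsets)  # E[cut(S)], S uniform

# Each edge is cut with probability 1/2 under a uniform random S, so
# avg = |EDGES| / 2, and here that is at least opt / 2 as promised.
```

On this graph the maximum cut severs the four cycle edges but never the chord, while the uniform average is |EDGES|/2, matching the symmetric-case 1/2 guarantee.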
  • Submodular Function Maximization Andreas Krause (ETH Zurich) Daniel Golovin (Google)
    Submodular Function Maximization. Andreas Krause (ETH Zurich), Daniel Golovin (Google). Submodularity is a property of set functions with deep theoretical consequences and far-reaching applications. At first glance it appears very similar to concavity; in other ways it resembles convexity. It appears in a wide variety of applications: in Computer Science it has recently been identified and utilized in domains such as viral marketing (Kempe et al., 2003), information gathering (Krause and Guestrin, 2007), image segmentation (Boykov and Jolly, 2001; Kohli et al., 2009; Jegelka and Bilmes, 2011a), document summarization (Lin and Bilmes, 2011), and speeding up satisfiability solvers (Streeter and Golovin, 2008). In this survey we will introduce submodularity and some of its generalizations, illustrate how it arises in various applications, and discuss algorithms for optimizing submodular functions. Our emphasis here is on maximization; there are many important results and applications related to minimizing submodular functions that we do not cover. As a concrete running example, we will consider the problem of deploying sensors in a drinking water distribution network (see Figure 1) in order to detect contamination. In this domain, we may have a model of how contaminants, accidentally or maliciously introduced into the network, spread over time. Such a model then allows us to quantify the benefit f(A) of deploying sensors at a particular set A of locations (junctions or pipes in the network) in terms of the detection performance (such as average time to detection). Based on this notion of utility, we then wish to find an optimal subset A ⊆ V of locations maximizing the utility, max_A f(A), subject to some constraints (such as bounded cost).
  • Maximization of Non-Monotone Submodular Functions
    University of Pennsylvania, ScholarlyCommons, Technical Reports (CIS), Department of Computer & Information Science, 1-24-2014. Maximization of Non-Monotone Submodular Functions. Jennifer Gillenwater ([email protected]). Follow this and additional works at: https://repository.upenn.edu/cis_reports. Part of the Computer Engineering Commons, and the Computer Sciences Commons. Recommended Citation: Jennifer Gillenwater, "Maximization of Non-Monotone Submodular Functions", January 2014. MS-CIS-14-01. This paper is posted at ScholarlyCommons. https://repository.upenn.edu/cis_reports/988. Abstract: A litany of questions from a wide variety of scientific disciplines can be cast as non-monotone submodular maximization problems. Since this class of problems includes max-cut, it is NP-hard. Thus, general purpose algorithms for the class tend to be approximation algorithms. For unconstrained problem instances, one recent innovation in this vein includes an algorithm of Buchbinder et al. (2012) that guarantees a ½-approximation to the maximum. Building on this, for problems subject to cardinality constraints, Buchbinder et al. (2014) offer guarantees in the range [0.356, ½ + o(1)]. Earlier work has the best approximation factors for more complex constraints and settings. For constraints that can be characterized as a solvable polytope, Chekuri et al. (2011) provide guarantees. For the online secretary setting, Gupta et al. (2010) provide guarantees.
  • 1 Introduction to Submodular Set Functions and Polymatroids
    CS 598CSC: Combinatorial Optimization. Lecture date: 4/8/2010. Instructor: Chandra Chekuri. Scribe: Bolin Ding. 1 Introduction to Submodular Set Functions and Polymatroids. Submodularity plays an important role in combinatorial optimization. Given a finite ground set S, a set function f : 2^S → R is submodular if f(A) + f(B) ≥ f(A ∩ B) + f(A ∪ B) ∀A, B ⊆ S; or equivalently, f(A + e) − f(A) ≥ f(B + e) − f(B) ∀A ⊆ B and e ∈ S \ B. Another equivalent definition is that f(A + e1) + f(A + e2) ≥ f(A) + f(A + e1 + e2) ∀A ⊆ S and distinct e1, e2 ∈ S \ A. Exercise: Prove the equivalence of the above three definitions. A set function f : 2^S → R is non-negative if f(A) ≥ 0 ∀A ⊆ S. f is symmetric if f(A) = f(S \ A) ∀A ⊆ S. f is monotone (non-decreasing) if f(A) ≤ f(B) ∀A ⊆ B. f is integer-valued if f(A) ∈ Z ∀A ⊆ S. 1.1 Examples of submodular functions. Cut functions. Given an undirected graph G = (V, E) and a 'capacity' function c : E → R+ on edges, the cut function f : 2^V → R+ is defined as f(U) = c(δ(U)), i.e., the sum of capacities of edges between U and V \ U. f is submodular (also non-negative and symmetric, but not monotone). In an undirected hypergraph G = (V, E) with capacity function c : E → R+, the cut function is defined as f(U) = c(δ_E(U)), where δ_E(U) = {e ∈ E : e ∩ U ≠ ∅ and e ∩ (V \ U) ≠ ∅}.
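The equivalence exercise above can be checked numerically on a small ground set. The sketch below (the toy functions are hypothetical) brute-forces all three definitions for a cut function, where all three should hold, and for a non-submodular function with complementarities, where all three should fail together:

```python
from itertools import combinations

def check_definitions(f, V):
    """Brute-force the three submodularity definitions; they should agree."""
    sets = [frozenset(c) for r in range(len(V) + 1)
            for c in combinations(V, r)]
    d1 = all(f(A) + f(B) >= f(A & B) + f(A | B)
             for A in sets for B in sets)
    d2 = all(f(A | {e}) - f(A) >= f(B | {e}) - f(B)
             for A in sets for B in sets if A <= B
             for e in V if e not in B)
    d3 = all(f(A | {e1}) + f(A | {e2}) >= f(A) + f(A | {e1, e2})
             for A in sets
             for e1 in V if e1 not in A
             for e2 in V if e2 not in A and e1 != e2)
    return d1, d2, d3

EDGES = [(0, 1), (1, 2), (0, 2), (2, 3)]
def cut(U):                    # unweighted cut function: submodular
    return sum((u in U) != (v in U) for u, v in EDGES)

def budget(U):                 # complementarity between 0 and 1: not submodular
    return 1 if {0, 1} <= U else 0

V = [0, 1, 2, 3]
cut_result = check_definitions(cut, V)
budget_result = check_definitions(budget, V)
```

Since the three conditions are equivalent, each function either passes all three or fails all three, which is exactly what the two results show.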
  • Concave Aspects of Submodular Functions
    Concave Aspects of Submodular Functions. Rishabh Iyer, University of Texas at Dallas, CSE Department, 2601 N. Floyd Rd. MS EC31, Richardson, TX 75083, Email: [email protected]; Jeff Bilmes, University of Washington, ECE Department, 185 E Stevens Way NE, Seattle, WA 98195, Email: [email protected]. Abstract—Submodular functions are a special class of set functions, which generalize several information-theoretic quantities such as entropy and mutual information [1]. Submodular functions have subgradients and subdifferentials [2] and admit polynomial time algorithms for minimization, both of which are fundamental characteristics of convex functions. Submodular functions also show signs similar to concavity. Submodular function maximization, though NP hard, admits constant factor approximation guarantees, and concave functions composed with modular functions are submodular. In this paper, we try to provide a more complete picture of the relationship between submodularity and concavity. We characterize the superdifferentials and polyhedra associated with upper bounds and provide optimality conditions for submodular maximization using the […] This relationship is evident by the fact that submodular function minimization is polynomial time (akin to convex minimization). A number of recent results, however, make this relationship much more formal. For example, similar to convex functions, submodular functions have tight modular lower bounds and admit a sub-differential characterization [2]. Moreover, it is possible [11] to provide optimality conditions for submodular minimization, in a manner analogous to the Karush-Kuhn-Tucker (KKT) conditions from convex programming. In addition, submodular functions also admit a natural convex extension, known as the Lovász extension, which is easy to evaluate [12] and optimize.
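The Lovász extension mentioned at the end can be evaluated by the standard sorting formula: order the coordinates of w decreasingly and accumulate the marginal gains along the resulting chain of level sets. A minimal sketch on a hypothetical toy cut function:

```python
def lovasz_extension(f, V, w):
    """Evaluate the Lovász extension of a normalized set function f at w:
    sort coordinates decreasingly and accumulate w_e times the marginal
    gain of e along the resulting chain of sets."""
    total, S, prev = 0.0, set(), f(set())
    for e in sorted(V, key=lambda x: -w[x]):
        S = S | {e}
        total += w[e] * (f(S) - prev)
        prev = f(S)
    return total

EDGES = [(0, 1), (1, 2)]          # a path graph
V = [0, 1, 2, 3]
def cut(S):
    """Number of edges with exactly one endpoint in S (submodular)."""
    return sum((u in S) != (v in S) for u, v in EDGES)

# On indicator vectors the extension agrees with f itself ...
val_ind = lovasz_extension(cut, V, {0: 1.0, 1: 0.0, 2: 1.0, 3: 0.0})
# ... and on fractional points it averages f over threshold level sets.
val_frac = lovasz_extension(cut, V, {0: 1.0, 1: 0.5, 2: 0.0, 3: 0.0})
```

The fractional evaluation matches the level-set view: for w = (1, 0.5, 0, 0), thresholds below 0.5 pick {0, 1} and thresholds above pick {0}, both with cut value 1.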
  • (05) Polymatroids, Part 1
    1. An integer polymatroid P with ground set E is a bounded set of non-negative integer-valued vectors coordinatized by E such that P includes the origin; any non-negative integer vector is a member of P if it is ≤ some member of P; and for any non-negative integer vector y, every maximal vector x in P which is ≤ y ("a P-basis of y") has the same sum |x| ≡ x(E) ≡ Σ(x_j : j ∈ E), called the rank r(y) of vector y in P. Clearly, by definition, the 0–1 valued vectors in an integral polymatroid are the "incidence vectors" of the independent sets J (i.e., J ∈ F) of a matroid M = (E, F). For any vectors x and y in Z+, x ∩ y denotes the maximal integer vector which is ≤ x and ≤ y; x ∪ y denotes the minimal integer vector which is ≥ x and ≥ y. Clearly 0 ≤ r(0) ≤ r(x) ≤ r(y) for x ≤ y ("non-negative & non-decreasing"). Theorem 2: For any x and y in Z+, r(x ∪ y) + r(x ∩ y) ≤ r(x) + r(y) ("submodular"). Proof: Let v be a P-basis of x ∩ y. Extend it to a P-basis w of x ∪ y. By the definition of integer polymatroid, r(x ∩ y) = |v| and r(x ∪ y) = |w|. r(x) + r(y) ≥ |w ∩ x| + |w ∩ y| = |w ∩ (x ∩ y)| + |w ∩ (x ∪ y)| = |v| + |w| = r(x ∪ y) + r(x ∩ y). □ For subsets S of E, define f_P(S) to be r(x) where x ∈ Z+ is very large for coordinates in S and = 0 for coordinates not in S.
  • Maximizing K-Submodular Functions and Beyond
    Maximizing k-Submodular Functions and Beyond. Justin Ward, School of Computer and Communication Sciences, EPFL, Switzerland ([email protected]); Stanislav Živný, Department of Computer Science, University of Oxford, UK ([email protected]). arXiv:1409.1399v2 [cs.DS] 23 Nov 2015. Abstract: We consider the maximization problem in the value oracle model of functions defined on k-tuples of sets that are submodular in every orthant and r-wise monotone, where k ≥ 2 and 1 ≤ r ≤ k. We give an analysis of a deterministic greedy algorithm that shows that any such function can be approximated to a factor of 1/(1 + r). For r = k, we give an analysis of a randomised greedy algorithm that shows that any such function can be approximated to a factor of 1/(1 + √(k/2)). In the case of k = r = 2, the considered functions correspond precisely to bisubmodular functions, in which case we obtain an approximation guarantee of 1/2. We show that, as in the case of submodular functions, this result is the best possible in both the value query model, and under the assumption that NP ≠ RP. Extending a result of Ando et al., we show that for any k ≥ 3, submodularity in every orthant and pairwise monotonicity (i.e. r = 2) precisely characterize k-submodular functions. Consequently, we obtain an approximation guarantee of 1/3 (and thus independent of k) for the maximization problem of k-submodular functions. 1 Introduction. Given a finite nonempty set U, a set function f : 2^U → R+ defined on subsets of U is called submodular if for all S, T ⊆ U, f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T).
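The deterministic greedy analysed above is simple to sketch for the bisubmodular case k = r = 2: scan the elements in a fixed order and put each one into whichever of the two sets has the larger marginal gain (for pairwise-monotone objectives, skipping an element never helps). The objective below is a hypothetical coverage-style toy, not an example from the paper:

```python
def f(A, B):
    """Toy objective on disjoint (A, B): cells covered when A-sites get
    radius-1 sensors and B-sites get radius-2 sensors. Coverage is
    submodular in every orthant and monotone in each argument."""
    cells = set()
    for s in A:
        cells.update(range(s - 1, s + 2))
    for s in B:
        cells.update(range(s - 2, s + 3))
    return len(cells)

V = [0, 2, 4, 8]
A, B = set(), set()
for e in V:                       # deterministic greedy, fixed element order
    gain_A = f(A | {e}, B) - f(A, B)
    gain_B = f(A, B | {e}) - f(A, B)
    if gain_A >= gain_B:          # ties broken towards A
        A.add(e)
    else:
        B.add(e)

greedy_value = f(A, B)
```

On this instance the wider sensors always have the larger gain, so greedy assigns everything to B, which happens to be optimal here; in general the 1/2 guarantee is what the analysis in the abstract establishes.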
  • Maximizing Non-Monotone Submodular Functions∗
    Maximizing non-monotone submodular functions. Uriel Feige, Dept. of Computer Science and Applied Mathematics, The Weizmann Institute, Rehovot, Israel ([email protected]); Vahab S. Mirrokni, Google Research, New York, NY ([email protected]); Jan Vondrák, IBM Almaden Research Center, San Jose, CA ([email protected]). January 29, 2011. Abstract: Submodular maximization generalizes many important problems including Max Cut in directed and undirected graphs and hypergraphs, certain constraint satisfaction problems and maximum facility location problems. Unlike the problem of minimizing submodular functions, the problem of maximizing submodular functions is NP-hard. In this paper, we design the first constant-factor approximation algorithms for maximizing nonnegative (non-monotone) submodular functions. In particular, we give a deterministic local-search 1/3-approximation and a randomized 2/5-approximation algorithm for maximizing nonnegative submodular functions. We also show that a uniformly random set gives a 1/4-approximation. For symmetric submodular functions, we show that a random set gives a 1/2-approximation, which can also be achieved by deterministic local search. These algorithms work in the value oracle model where the submodular function is accessible through a black box returning f(S) for a given set S. We show that in this model, a (1/2 + ε)-approximation for symmetric submodular functions would require an exponential number of queries for any fixed ε > 0. In the model where f is given explicitly (as a sum of nonnegative submodular functions, each depending only on a constant number of elements), we prove NP-hardness of (5/6 + ε)-approximation in the symmetric case and NP-hardness of (3/4 + ε)-approximation in the general case.
  • Disjoint Bases in a Polymatroid
    Disjoint Bases in a Polymatroid. Gruia Călinescu, Chandra Chekuri, Jan Vondrák. May 26, 2008. Abstract: Let f : 2^N → Z+ be a polymatroid (an integer-valued non-decreasing submodular set function with f(∅) = 0). We call S ⊆ N a base if f(S) = f(N). We consider the problem of finding a maximum number of disjoint bases; we denote by m∗ this base packing number. A simple upper bound on m∗ is given by k∗ = max{k : Σ_{i∈N} f_A(i) ≥ k·f_A(N), ∀A ⊆ N}, where f_A(S) = f(A ∪ S) − f(A). This upper bound is a natural generalization of the bound for matroids, where it is known that m∗ = k∗. For polymatroids, we prove that m∗ ≥ (1 − o(1))k∗/ln f(N) and give a randomized polynomial time algorithm to find (1 − o(1))k∗/ln f(N) disjoint bases, assuming an oracle for f. We also derandomize the algorithm using minwise independent permutations and give a deterministic algorithm that finds (1 − ε)k∗/ln f(N) disjoint bases. The bound we obtain is almost tight since it is known there are polymatroids for which m∗ ≤ (1 + o(1))k∗/ln f(N). Moreover it is known that unless NP ⊆ DTIME(n^{log log n}), for any ε > 0, there is no polynomial time algorithm to obtain a (1 + ε)/ln f(N)-approximation to m∗. Our result generalizes and unifies two results in the literature. 1 Introduction. A polymatroid f : 2^N → Z+ on a ground set N is a non-decreasing (monotone) integer-valued submodular function. A function f is monotone iff f(A) ≤ f(B) for all A ⊆ B.