Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

Aryan Mokhtari, Massachusetts Institute of Technology
Hamed Hassani, University of Pennsylvania
Amin Karbasi, Yale University

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. PMLR: Volume 84. Copyright 2018 by the author(s).

Abstract

In this paper, we study the problem of constrained and stochastic continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called Stochastic Continuous Greedy, which achieves a tight approximation guarantee. More precisely, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that Stochastic Continuous Greedy achieves a $[(1-1/e)\mathrm{OPT} - \epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^3)$ stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. By using stochastic continuous optimization as an interface, we also provide the first $(1-1/e)$ tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint.

1 Introduction

Many procedures in statistics and artificial intelligence require solving non-convex problems, including clustering [Abbasi and Younis, 2007], training deep neural networks [Bengio et al., 2007], and performing Bayesian optimization [Snoek et al., 2012], to name a few. Historically, the focus has been to convexify non-convex objectives; in recent years, there has been significant progress to optimize non-convex functions directly. This direct approach has led to provably good guarantees for specific problem instances. Examples include latent variable models [Anandkumar et al., 2014], non-negative matrix factorization [Arora et al., 2016], robust PCA [Netrapalli et al., 2014], matrix completions [Ge et al., 2016], and training certain specific forms of neural networks [Mei et al., 2016]. However, it is well known that in general finding the global optimum of a non-convex optimization problem is NP-hard [Murty and Kabadi, 1987]. This computational barrier has mainly shifted the goal of non-convex optimization towards two directions: a) finding an approximate local minimum by avoiding saddle points [Ge et al., 2015, Anandkumar and Ge, 2016, Jin et al., 2017, Paternain et al., 2017], or b) characterizing general conditions under which the underlying non-convex optimization is tractable [Hazan et al., 2016].

In this paper, we consider a broad class of non-convex optimization problems that possess special combinatorial structures. More specifically, we focus on constrained maximization of stochastic continuous submodular functions (CSF) that demonstrate diminishing returns, i.e., continuous DR-submodular functions,

$$\max_{x \in \mathcal{C}} F(x) \doteq \max_{x \in \mathcal{C}} \mathbb{E}_{z \sim P}\big[\tilde{F}(x, z)\big]. \tag{1}$$

Here, the functions $\tilde{F}: \mathcal{X} \times \mathcal{Z} \to \mathbb{R}_+$ are stochastic, where $x \in \mathcal{X}$ is the optimization variable, $z \in \mathcal{Z}$ is a realization of the random variable $Z$ drawn from a distribution $P$, and $\mathcal{X} \subset \mathbb{R}^n_+$ is a compact set. Our goal is to maximize the expected value of the random functions $\tilde{F}(x, z)$ over the convex body $\mathcal{C} \subseteq \mathbb{R}^n_+$. Note that we only assume that $F(x)$ is DR-submodular, and not necessarily the stochastic functions $\tilde{F}(x, z)$. We also consider situations where the distribution $P$ is either unknown (e.g., when the objective is given as an implicit stochastic model) or the domain of the random variable $Z$ is very large (e.g., when the objective is defined in terms of an empirical risk), which makes the cost of computing the expectation very high. In these regimes, stochastic optimization methods, which operate on computationally cheap estimates of gradients, arise as natural solutions. In fact, very recently, it was shown in [Hassani et al., 2017] that stochastic gradient methods achieve a $(1/2)$ approximation guarantee for Problem (1). The authors also showed that current versions of the conditional gradient method (a.k.a. Frank-Wolfe), such as continuous greedy [Vondrák, 2008] or its close variant [Bian et al.], can perform arbitrarily poorly in stochastic continuous submodular maximization settings.
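To make the form of Problem (1) concrete, the sketch below (not taken from the paper) builds one stochastic, monotone DR-submodular objective, a probabilistic-coverage function written as an expectation over a uniformly sampled summand, together with the kind of unbiased gradient oracle that the stochastic methods discussed above assume. The function, the randomly generated data `w` and `p`, and the helper name `stochastic_gradient` are assumptions made for illustration.

```python
import numpy as np

# Illustrative instance of Problem (1):
#   F(x) = E_i [ m * w_i * (1 - prod_j (1 - p_ij * x_j)) ],  x in [0, 1]^n,
# with i uniform over {1, ..., m}. The gradient is nonnegative and the cross
# second derivatives are nonpositive, so F is monotone DR-submodular on [0, 1]^n.
rng = np.random.default_rng(0)
m, n = 50, 10                       # number of summands and of coordinates (assumed sizes)
w = rng.random(m)                   # nonnegative weights (assumed data)
p = rng.random((m, n)) * 0.5        # coverage probabilities in [0, 0.5) (assumed data)

def stochastic_gradient(x):
    """Unbiased estimate of the gradient of F at x: sample one summand, rescale by m."""
    i = rng.integers(m)
    q = 1.0 - p[i] * x              # q_j = 1 - p_ij * x_j, strictly positive here
    return m * w[i] * p[i] * np.prod(q) / q
```

Only this cheap, single-sample oracle is available to the algorithm; the full expectation over the summands is never formed.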
Our contributions. We provide the first tight $(1-1/e)$ approximation guarantee for Problem (1) when the continuous function $F$ is monotone, smooth, DR-submodular, and the constraint set $\mathcal{C}$ is a bounded convex body. To this end, we develop a novel conditional gradient method, called Stochastic Continuous Greedy (SCG), that produces a solution with an objective value larger than $((1-1/e)\mathrm{OPT} - \epsilon)$ after $\mathcal{O}(1/\epsilon^3)$ iterations while only having access to unbiased estimates of the gradients (here OPT denotes the optimal value of Problem (1)). SCG is also memory efficient in the following sense: in contrast to previously proposed conditional gradient methods in stochastic convex [Hazan and Luo, 2016] and non-convex [Reddi et al., 2016] settings, SCG does not require using a minibatch in each step. Instead, it simply averages over the stochastic estimates of the previous gradients.
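The averaging idea described in this contribution, reusing past stochastic gradients instead of a minibatch while taking conditional gradient (continuous greedy) steps, can be sketched as follows. This is only a minimal illustration: the averaging schedule `rho`, the horizon `T`, and the linear-maximization oracle `lmo_cardinality` (shown for a simple cardinality polytope) are assumptions for the example, not the paper's exact specification.

```python
import numpy as np

def scg(stochastic_gradient, lmo, n, T=1000):
    """Sketch of a gradient-averaging conditional gradient (continuous greedy) loop."""
    x = np.zeros(n)                           # the origin is assumed to be feasible
    d = np.zeros(n)                           # running average of stochastic gradients
    for t in range(1, T + 1):
        rho = 2.0 / (t + 3) ** (2.0 / 3.0)    # assumed averaging weights (illustrative)
        d = (1 - rho) * d + rho * stochastic_gradient(x)   # one sample per step, no minibatch
        v = lmo(d)                            # v = argmax_{v in C} <d, v>
        x = x + v / T                         # continuous-greedy step of length 1/T
    return x                                  # average of T feasible points, hence feasible

def lmo_cardinality(d, k=3):
    """Linear maximization over {v : 0 <= v <= 1, sum_j v_j <= k} for nonnegative d:
    place mass 1 on the k largest coordinates of d."""
    v = np.zeros_like(d)
    v[np.argsort(d)[-k:]] = 1.0
    return v
```

For instance, `scg(stochastic_gradient, lmo_cardinality, n=10)` runs the loop on the coverage objective sketched earlier over the polytope $\{v : 0 \le v \le 1,\ \sum_j v_j \le 3\}$; since the final iterate is an average of $T$ points of the constraint set, it is itself feasible.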
Connection to Discrete Problems. Even though submodularity has been mainly studied in discrete domains [Fujishige, 2005], many efficient methods for optimizing such submodular set functions rely on continuous relaxations, either through a multilinear extension [Vondrák, 2008] (for maximization) or the Lovász extension [Lovász, 1983] (for minimization). In fact, Problem (1) has a discrete counterpart, recently considered in [Hassani et al., 2017, Karimi et al., 2017]:

$$\max_{S \in \mathcal{I}} f(S) \doteq \max_{S \in \mathcal{I}} \mathbb{E}_{z \sim P}\big[\tilde{f}(S, z)\big], \tag{2}$$

where the functions $\tilde{f}: 2^V \times \mathcal{Z} \to \mathbb{R}_+$ are stochastic, $S$ is the optimization set variable defined over a ground set $V$, $z \in \mathcal{Z}$ is the realization of a random variable $Z$ drawn from the distribution $P$, and $\mathcal{I}$ is a general matroid constraint. Since $P$ is unknown, Problem (2) cannot be directly solved using the current state-of-the-art techniques. Instead, Hassani et al. [2017] showed that by lifting the problem to the continuous domain (via multilinear relaxation) and using stochastic gradient methods on the continuous relaxation, one can reach a solution that is within a factor $(1/2)$ of the optimum. Contemporarily, [Karimi et al., 2017] used a concave relaxation technique to provide a $(1-1/e)$ approximation for the class of submodular coverage functions. Our work also closes the gap for stochastic submodular set maximization, namely Problem (2), by providing the first tight $(1-1/e)$ approximation guarantee for general monotone submodular set functions subject to a matroid constraint.

Notation. Lowercase boldface $\mathbf{v}$ denotes a vector and uppercase boldface $\mathbf{A}$ a matrix. We use $\|\mathbf{v}\|$ to denote the Euclidean norm of the vector $\mathbf{v}$. The $i$-th element of the vector $\mathbf{v}$ is written as $v_i$, and the element in the $i$-th row and $j$-th column of the matrix $\mathbf{A}$ is denoted by $A_{i,j}$.

2 Related Work

Maximizing a deterministic submodular set function has been extensively studied. The celebrated result of Nemhauser et al. [1978] shows that a greedy algorithm achieves a $(1-1/e)$ approximation guarantee for a monotone function subject to a cardinality constraint. It is also known that this result is tight under reasonable complexity-theoretic assumptions [Feige, 1998]. Recently, variants of the greedy algorithm have been proposed to extend the above result to non-monotone functions and more general constraints [Feige et al., 2011, Buchbinder et al., 2015, 2014, Feldman et al., 2017]. While discrete greedy algorithms are fast, they usually do not provide the tightest guarantees for many classes of feasibility constraints. This is why continuous relaxations of submodular functions, e.g., the multilinear extension, have gained a lot of interest [Vondrák, 2008, Calinescu et al., 2011, Chekuri et al., 2014, Feldman et al., 2011, Gharan and Vondrák, 2011, Sviridenko et al., 2015]. In particular, it is known that the continuous greedy algorithm achieves a $(1-1/e)$ approximation guarantee for monotone submodular functions under a general matroid constraint [Calinescu et al., 2011]. An improved $((1-e^{-c})/c)$-approximation guarantee can be obtained if $f$ has curvature $c$ [Vondrák, 2010].

Continuous submodularity naturally arises in many learning applications such as robust budget allocation [Staib and Jegelka, 2017, Soma et al., 2014], online resource allocation [Eghbali and Fazel, 2016], learning assignments [Golovin et al., 2014], as well as Adwords for e-commerce and advertising [Devanur and Jain, 2012, Mehta et al., 2007]. Maximizing a deterministic continuous submodular function dates back to the work of Wolsey [1982]. More recently, Chekuri et al. [2015] proposed a multiplicative weight update algorithm that achieves a $(1-1/e-\epsilon)$ approximation guarantee after $\tilde{O}(n/\epsilon^2)$ oracle calls to gradients of a monotone smooth submodular function $F$ (i.e., twice differentiable DR-submodular) subject to a polytope constraint. A similar approximation factor can be obtained after $\mathcal{O}(n/\epsilon)$ oracle calls to gradients of $F$ for monotone DR-submodular functions subject to a down-closed convex body using the continuous greedy method [Bian et al.]. However, such results require exact computation of the gradients $\nabla F$, which is not feasible in Problem (1). An alternative approach is then to modify the current algorithms by replacing the gradients $\nabla F(x_t)$ with their stochastic estimates $\nabla \tilde{F}(x_t, z_t)$; however, this modification may lead to arbitrarily poor solutions, as demonstrated in [Hassani et al., 2017].

Recall that a set function $f: 2^V \to \mathbb{R}_+$, defined on the ground set $V$, is called submodular if for all subsets $A, B \subseteq V$ we have

$$f(A) + f(B) \geq f(A \cap B) + f(A \cup B).$$
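To illustrate both the definition just recalled and the multilinear lifting used above to connect Problems (2) and (1), the toy sketch below defines a small coverage set function, checks the submodular inequality on one pair of sets, and forms a one-sample unbiased estimate of the gradient of the multilinear extension $F(x) = \mathbb{E}_{S\sim x}[f(S)]$. The ground set, the coverage data, and the helper names are assumptions made for the example, not objects from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
V = list(range(6))                                                  # toy ground set (assumed)
covers = [set(rng.choice(10, size=4, replace=False)) for _ in V]    # assumed coverage data

def f(S):
    """Coverage set function: number of points covered by the items in S."""
    return len(set().union(*(covers[j] for j in S)))

# Check the inequality f(A) + f(B) >= f(A ∩ B) + f(A ∪ B) from the definition above.
A, B = {0, 1, 2}, {2, 3, 4}
assert f(A) + f(B) >= f(A | B) + f(A & B)

def multilinear_gradient_estimate(x):
    """One-sample unbiased estimate of the gradient of the multilinear extension
    F(x) = E_{S~x}[f(S)]: include each element j independently with probability x_j,
    then take the marginals dF/dx_j = E[f(S ∪ {j}) - f(S - {j})]."""
    S = {j for j in V if rng.random() < x[j]}
    return np.array([f(S | {j}) - f(S - {j}) for j in V])
```

Feeding such an estimator to a conditional gradient scheme like the SCG sketch above is the lifting route described for Problem (2); recovering a set from the fractional solution then requires a rounding step, handled with standard techniques in the cited works.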
