
Subgradient Methods in Network Resource Allocation: Rate Analysis

Angelia Nedić
Department of Industrial and Enterprise Systems Engineering
University of Illinois Urbana-Champaign, IL 61801
Email: [email protected]

Asuman Ozdaglar
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology, Cambridge, MA 02142
Email: [email protected]

Abstract— We consider dual subgradient methods for solving (nonsmooth) convex constrained optimization problems. Our focus is on generating approximate primal solutions with performance guarantees and on providing convergence rate analysis. We propose and analyze methods that use averaging schemes to generate approximate primal optimal solutions. We provide estimates on the convergence rate of the generated primal solutions in terms of both the amount of feasibility violation and bounds on the primal function values. The feasibility violation and primal value estimates are given per iteration, thus providing practical stopping criteria. We provide a numerical example that illustrates the performance of the subgradient methods with averaging in a network resource allocation problem.

I. INTRODUCTION

Lagrangian relaxation and duality have been effective tools for solving large-scale convex optimization problems and for systematically providing lower bounds on the optimal value of nonconvex (continuous and discrete) optimization problems. Subgradient methods have played a key role in this framework, providing computationally efficient means to obtain near-optimal dual solutions and bounds on the optimal value of the primal optimization problem. Most remarkably, in networking applications, over the last few years, subgradient methods have been used with great success in developing decentralized cross-layer resource allocation mechanisms (see [10] and [22] for more on this subject).

Subgradient methods for solving dual problems have been extensively studied, starting with Polyak [17] and Ermoliev [4]. Their convergence properties under various stepsize rules have long been established and can be found, for example, in [21], [18], [5], and [1]. Numerous extensions and implementations, including parallel and incremental versions, have been proposed and analyzed (see [7], [11], [12], [13]). Despite the widespread use of subgradient methods for solving dual (nondifferentiable) problems, there are limited results in the existing literature on the recovery of primal solutions and on convergence rate analysis in the primal space. In many network resource allocation problems, however, the main interest is in solving the primal problem. In this case, the question arises whether we can use the subgradient method in the dual space and exploit the subgradient information to produce primal near-feasible and near-optimal solutions.

In this paper, we study generating approximate primal optimal solutions for general convex constrained optimization problems using dual subgradient methods. We consider a simple averaging scheme that constructs primal solutions by forming the running averages of the primal iterates generated when evaluating the subgradient of the dual function. We focus on methods that use a constant stepsize, both in view of its simplicity and of the potential to generate approximate solutions in a relatively small number of iterations.

We provide estimates on the convergence rate of the average primal sequences in terms of both the amount of feasibility violation and the primal objective function values. Our estimates depend on the norm of the generated dual iterates. Under the Slater condition, we show that the dual sequence is bounded, and we provide an explicit bound on the norm of the dual iterates. Combining these results, we establish convergence rate estimates for the average primal sequence. Our estimates show that under the Slater condition, the amount of constraint violation goes to zero at the rate 1/k with the number of subgradient iterations k. Moreover, the primal function values go to the optimal value within some error at the rate 1/k. Our bounds explicitly highlight the dependence of the error terms on the constant stepsize and illustrate the tradeoff between solution accuracy and computational complexity in selecting the stepsize value.

Other than the papers cited above, our paper is also related to the literature on the recovery of primal solutions from subgradient methods (see, for example, [15], [21], [8], [20], [9], [6], [16], [19]). These works focus on the asymptotic behavior of the primal sequences, i.e., the convergence properties in the limit as the number of iterations increases to infinity. Since the focus is on the asymptotic behavior, the convergence analysis has been mostly limited to diminishing stepsize rules.¹ Moreover, there is no convergence rate analysis of the generated primal sequences. In this paper, we focus on generating approximate primal solutions in finitely many iterations, with convergence rate guarantees.

We thank Ali ParandehGheibi for his assistance with the numerical example. This research was partially supported by the National Science Foundation under CAREER grants CMMI-0742538 and DMI-0545910.

¹The exception is the paper [6], where a target-level based stepsize has been considered.

The paper is organized as follows: In Section II, we define the primal and dual problems, and provide an explicit bound on the level sets of the dual function under the Slater condition. In Section III, we consider a subgradient method with a constant stepsize and study its properties under the Slater condition. In Section IV, we introduce approximate primal solutions generated through averaging and provide bounds on their feasibility violation and primal cost values. In Section V, we present a numerical example of network resource allocation that illustrates the performance of the dual subgradient method with averaging. Section VI contains our concluding remarks.

Regarding notation, we view a vector as a column vector, and we denote by x′y the inner product of two vectors x and y. We use ‖y‖ to denote the standard Euclidean norm, ‖y‖ = √(y′y). For a vector u ∈ Rᵐ, we write u⁺ to denote the projection of u on the nonnegative orthant in Rᵐ, i.e.,

  u⁺ = (max{0, u₁}, …, max{0, u_m})′.

For a concave function q : Rᵐ → [−∞, ∞], we say that a vector s_μ̄ ∈ Rᵐ is a subgradient of q at a given vector μ̄ ∈ dom(q) if

  q(μ̄) + s_μ̄′(μ − μ̄) ≥ q(μ) for all μ ∈ dom(q),  (1)

where dom(q) = {μ ∈ Rᵐ | q(μ) > −∞}. The set of all subgradients of q at μ̄ is denoted by ∂q(μ̄).

II. PRIMAL AND DUAL PROBLEMS

We focus on the following constrained optimization problem:

  minimize f(x)
  subject to g(x) ≤ 0, x ∈ X,  (2)

where f : Rⁿ → R is a convex function, g = (g₁, …, g_m)′ with each g_j : Rⁿ → R a convex function, and X ⊂ Rⁿ is a nonempty closed convex set. We refer to this as the primal problem. We denote the primal optimal value by f*, and throughout this paper, we assume that the value f* is finite.

To generate approximate solutions to the primal problem of Eq. (2), we consider approximate solutions to its dual problem. Here, the dual problem is the one arising from the Lagrangian relaxation of the inequality constraints g(x) ≤ 0, and it is given by

  maximize q(μ)
  subject to μ ≥ 0, μ ∈ Rᵐ,  (3)

where q is the dual function defined by

  q(μ) = inf_{x∈X} {f(x) + μ′g(x)}.  (4)

The minimum in Eq. (4) is attained when the set X is compact (since f and the g_j's are continuous, due to being convex over Rⁿ). Furthermore, we assume that the minimization problem in Eq. (4) is simple enough so that it can be solved efficiently. For example, this is the case when the functions f and g_j are affine, or affine plus a norm-square term [i.e., c‖x‖² + a′x + b], and the set X is the nonnegative orthant in Rⁿ. Many practical problems of interest, such as those arising in network resource allocation, often have this structure.

In our subsequent development, we consider subgradient methods as applied to the dual problem given by Eqs. (3) and (4). Due to the form of the dual function q, the subgradients of q at a vector μ are related to the primal vectors x_μ attaining the minimum in Eq. (4). Specifically, the set ∂q(μ) of subgradients of q at a given μ ≥ 0 is given by

  ∂q(μ) = conv({g(x_μ) | x_μ ∈ X_μ}),  (5)

where X_μ = {x_μ ∈ X | q(μ) = f(x_μ) + μ′g(x_μ)}, and conv(Y) denotes the convex hull of a set Y. In the following, we omit some of the proofs due to space constraints and refer the interested reader to the longer version of our paper [14].

A. Slater Condition and Boundedness of the Multiplier Sets

In this section, we consider sets of the form {μ ≥ 0 | q(μ) ≥ q(μ̄)} for a fixed μ̄ ≥ 0, which are obtained by intersecting the nonnegative orthant in Rᵐ and the (upper) level sets of the concave dual function q. We show that these sets are bounded when the primal problem satisfies the standard Slater constraint qualification, formally given in the following.

Assumption 1: (Slater Condition) There exists a vector x̄ ∈ Rⁿ such that

  g_j(x̄) < 0 for all j = 1, …, m.

We refer to a vector x̄ satisfying the Slater condition as a Slater vector.

Under the assumption that f* is finite, it is well known that the Slater condition is sufficient for a zero duality gap, as well as for the existence of a dual optimal solution (see e.g. [1]). Furthermore, the dual optimal set is bounded (see [5]). This property of the dual optimal set under the Slater condition has been observed and used as early as in Uzawa's analysis of the Arrow-Hurwicz gradient method in [23]. Interestingly, most work on subgradient methods has not made use of this powerful result, which is a key in our analysis.

The following proposition extends the result on the optimal dual set boundedness under the Slater condition. In particular, it shows that the Slater condition also guarantees the boundedness of the (level) sets {μ ≥ 0 | q(μ) ≥ q(μ̄)}.
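To make the scheme concrete — a dual subgradient iteration with constant stepsize, with the primal iterates x_μ averaged along the way — the following is a minimal Python sketch on a toy instance of problem (2). The instance, stepsize, and iteration count are our own choices for illustration, not from the paper: minimize f(x) = x₁² + x₂² subject to g(x) = 1 − x₁ − x₂ ≤ 0 over X = R²₊, whose inner minimization (4) has the closed form x_μ = (μ/2, μ/2).

```python
# Sketch of a dual subgradient method with constant stepsize and primal
# averaging, on a toy instance (our own example, not from the paper):
#   minimize  f(x) = x1^2 + x2^2
#   subject to g(x) = 1 - x1 - x2 <= 0,   x in X = R^2_+.

def inner_min(mu):
    """x_mu attaining the minimum in q(mu) = inf_{x in X} f(x) + mu*g(x)."""
    return (mu / 2.0, mu / 2.0)

def g(x):
    """Constraint value; g(x_mu) is a subgradient of q at mu, cf. Eq. (5)."""
    return 1.0 - x[0] - x[1]

def dual_subgradient(alpha=0.1, iters=500):
    mu = 0.0
    avg = [0.0, 0.0]                        # running average of primal iterates
    for k in range(1, iters + 1):
        x = inner_min(mu)
        avg = [a + (xi - a) / k for a, xi in zip(avg, x)]
        mu = max(0.0, mu + alpha * g(x))    # projected dual update, mu >= 0
    return mu, avg

mu, x_hat = dual_subgradient()
violation = max(0.0, g(x_hat))              # feasibility violation of the average
```

For alpha = 0.1 and 500 iterations, the averaged iterate x_hat is close to the primal optimum (0.5, 0.5), and the constraint violation of the average is on the order of 1/(alpha·k), consistent with the 1/k rate discussed in the introduction.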
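The subgradient relation in Eq. (5) can also be checked numerically. The sketch below uses a toy instance of problem (2) of our own choosing (not from the paper): f(x) = x₁² + x₂² with g(x) = 1 − x₁ − x₂ over X = R²₊, for which q(μ) = μ − μ²/2, and verifies the subgradient inequality (1) with s_μ = g(x_μ) over a grid of points.

```python
# Numerical check that g(x_mu) is a subgradient of the concave dual function q,
# i.e., that inequality (1) holds:  q(nu) <= q(mu) + g(x_mu)*(nu - mu), nu >= 0.
# Toy instance (our own example): q(mu) = inf_{x>=0} {x1^2 + x2^2 + mu*(1 - x1 - x2)},
# attained at x_mu = (mu/2, mu/2), so that q(mu) = mu - mu^2/2.

def q(mu):
    x = (mu / 2.0, mu / 2.0)                 # minimizer x_mu of Eq. (4)
    return x[0]**2 + x[1]**2 + mu * (1.0 - x[0] - x[1])

def subgrad(mu):
    x = (mu / 2.0, mu / 2.0)
    return 1.0 - x[0] - x[1]                 # g(x_mu), cf. Eq. (5)

grid = [i * 0.05 for i in range(81)]         # mu, nu in [0, 4]
ok = all(q(nu) <= q(mu) + subgrad(mu) * (nu - mu) + 1e-12
         for mu in grid for nu in grid)
print(ok)
```

Since q is concave and differentiable here with q′(μ) = 1 − μ = g(x_μ), the inequality holds at every grid point.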