A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

Albert S. Berahas (Department of Industrial and Operations Engineering, University of Michigan; [email protected])
Frank E. Curtis, Michael J. O'Neill, and Daniel P. Robinson (Department of Industrial and Systems Engineering, Lehigh University; [email protected], [email protected], [email protected])

COR@L Technical Report 21T-013

June 24, 2021

Abstract

A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy that is known in the literature to be widely effective in practice, wherein each search direction is computed as the sum of a normal step (toward linearized feasibility) and a tangential step (toward objective decrease in the null space of the constraint Jacobian). However, the proposed method differs from others in the literature in that it both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even in the setting in which the constraint Jacobians may be rank deficient. The results of numerical experiments demonstrate that the algorithm offers superior performance when compared to popular alternatives.

1 Introduction

We propose an algorithm for solving equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. Formulations of this type arise throughout science and engineering in important applications such as data-fitting problems, where one aims to determine a model that minimizes the discrepancy between values yielded by the model and corresponding known outputs. Our algorithm is designed for solving such problems when the decision variables are restricted to the solution set of a (potentially nonlinear) set of equations. We are particularly interested in such problems when the constraint Jacobian, i.e., the matrix of first-order derivatives of the constraint function, may be rank deficient in some or even all iterations during the run of an algorithm, since this can be an unavoidable occurrence in practice that would ruin the convergence properties of any algorithm not specifically designed for this setting.

The structure of our algorithm follows a step decomposition strategy that is common in the constrained optimization literature; in particular, our algorithm has roots in the Byrd-Omojokun approach [17]. However, our algorithm differs from previously proposed algorithms in that it offers convergence guarantees while allowing for the use of stochastic objective gradient information in each iteration.
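To fix intuition for the step decomposition just described, the following Python sketch computes a search direction as the sum of a normal step (toward linearized feasibility) and a tangential step (objective decrease within the null space of the Jacobian). It is only an illustration under simplifying assumptions: a plain least-squares normal step and a projected negative-gradient tangential step, with no trust regions, merit function, or quadratic subproblem. The function and variable names are ours, not the paper's.

```python
import numpy as np

def decomposed_step(g_est, c_k, J_k):
    """Illustrative Byrd-Omojokun-style step decomposition (a sketch, not the
    paper's exact method): normal step plus tangential step.

    g_est : stochastic estimate of the objective gradient at x_k
    c_k   : constraint values c(x_k)
    J_k   : constraint Jacobian at x_k (may be rank deficient)
    """
    # Normal step: least-squares solution of J_k v = -c_k; the minimum-norm
    # solution returned by lstsq is well defined even when J_k loses rank.
    v = np.linalg.lstsq(J_k, -c_k, rcond=None)[0]

    # Orthonormal basis Z_k for Null(J_k) via the SVD; under rank deficiency
    # the null space is larger than n - m.
    _, s, Vt = np.linalg.svd(J_k)
    rank = int(np.sum(s > 1e-12 * s[0])) if s.size else 0
    Z = Vt[rank:].T                       # columns span Null(J_k)

    # Tangential step: reduce the estimated objective within Null(J_k); here
    # simply a projected negative-gradient step for illustration.
    u = -Z @ (Z.T @ g_est) if Z.size else np.zeros_like(v)

    return v + u                          # search direction d_k = v_k + u_k
```

Note that both the minimum-norm least-squares solve and the SVD-based null-space basis remain well defined when the Jacobian loses rank, which is precisely the regime this paper targets.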
We prove that our algorithm converges to stationarity (in expectation), both in desirable cases when the constraints are feasible and convergence to the feasible region can be guaranteed (in expectation), and in less desirable cases, such as when the constraints are infeasible and one can only guarantee convergence to an infeasible stationary point. To the best of our knowledge, there exist no other algorithms in the literature that have been designed specifically for this setting, namely, stochastic optimization with equality constraints that may exhibit rank deficiency.

Our algorithm builds upon the method for solving equality constrained stochastic optimization problems proposed in [1]. The method proposed in that article assumes that the singular values of the constraint Jacobians are bounded below by a positive constant throughout the optimization process, which implies that the linear independence constraint qualification (LICQ) holds at all iterates. By contrast, the algorithm proposed in this paper makes no such assumption. Handling the potential lack of full-rank Jacobians necessitates a different algorithmic structure and a distinct approach to proving convergence guarantees; e.g., one needs to account for the fact that primal-dual stationarity conditions may not be necessary and/or the constraints may be infeasible. Similar to the context in [1], our algorithm is intended for the highly stochastic regime in which the stochastic gradient estimates might only be unbiased estimators of the gradients of the objective at the algorithm iterates that satisfy a loose variance condition. Indeed, we show that in nice cases (in particular, when the adaptive merit parameter employed in our algorithm eventually settles at a value that is sufficiently small) our algorithm has convergence properties in expectation that match those of the algorithm in [1]. These results parallel those for the stochastic gradient method in the context of unconstrained optimization [2, 21, 22]. However, for cases not considered in [1], when the merit parameter sequence may vanish, we require the stronger assumption that the difference between each stochastic gradient estimate and the corresponding true gradient of the objective is eventually bounded deterministically in each iteration. This is appropriate in many ways, since in such a scenario the algorithm aims to transition from solving a stochastic optimization problem to the deterministic one of minimizing constraint violation. Finally, we discuss how, in any particular run of the algorithm, the probability is zero that the merit parameter settles at too large of a value, and we provide commentary on what it means to assume that the total probability of such an event (over all possible runs of the algorithm) is zero.

Our algorithm has some similarities with, but many differences from, another recently proposed algorithm, namely, that in [14]. That algorithm is also designed for equality constrained stochastic optimization, but: (i) as for the algorithm in [1], for the algorithm in [14] the LICQ is assumed to hold at all algorithm iterates, and (ii) the algorithm in [14] employs an adaptive line search that may require the algorithm to compute relatively accurate stochastic gradient estimates throughout the optimization process. Our algorithm, on the other hand, does not require the LICQ to hold and is meant for a more stochastic regime, meaning that it does not require a procedure for refining the stochastic gradient estimate within an iteration.
Consequently, the convergence guarantees that can be proved for our method, and the expectations that one should have about the practical performance of our method, are quite distinct from those for the algorithm in [14].

Besides the methods in [1, 14], there have been few proposed algorithms that might be used to solve problems of the form (1). Some methods have been proposed that employ stochastic (proximal) gradient strategies applied to minimizing penalty functions derived from constrained problems [4, 11, 15], but these do not offer convergence guarantees to stationarity with respect to the original constrained problem. On the other hand, stochastic Frank-Wolfe methods have been proposed [10, 12, 13, 19, 20, 24], but these can only be applied in the context of convex feasible regions. Our algorithm, by contrast, is designed for nonlinear equality constrained stochastic optimization.

1.1 Notation

The set of real numbers is denoted as $\mathbb{R}$, the set of real numbers greater than (respectively, greater than or equal to) $r \in \mathbb{R}$ is denoted as $\mathbb{R}_{>r}$ (respectively, $\mathbb{R}_{\geq r}$), the set of $n$-dimensional real vectors is denoted as $\mathbb{R}^n$, the set of $m$-by-$n$-dimensional real matrices is denoted as $\mathbb{R}^{m \times n}$, and the set of $n$-by-$n$-dimensional real symmetric matrices is denoted as $\mathbb{S}^n$. Given $J \in \mathbb{R}^{m \times n}$, the range space of $J^T$ is denoted as $\operatorname{Range}(J^T)$ and the null space of $J$ is denoted as $\operatorname{Null}(J)$. (By the Fundamental Theorem of Linear Algebra, for any $J \in \mathbb{R}^{m \times n}$, the spaces $\operatorname{Range}(J^T)$ and $\operatorname{Null}(J)$ are orthogonal and $\operatorname{Range}(J^T) + \operatorname{Null}(J) = \mathbb{R}^n$.) The set of nonnegative integers is denoted as $\mathbb{N} := \{0, 1, 2, \dots\}$. For any $m \in \mathbb{N}$, let $[m]$ denote the set of integers $\{0, 1, \dots, m\}$.

The algorithm that we propose is iterative in the sense that, given a starting point $x_0 \in \mathbb{R}^n$, it generates a sequence of iterates $\{x_k\}$ with $x_k \in \mathbb{R}^n$ for all $k \in \mathbb{N}$. For simplicity of notation, the iteration number is appended as a subscript to other quantities corresponding to each iteration; e.g., with a function $c : \mathbb{R}^n \to \mathbb{R}^m$, its value at $x_k$ is denoted as $c_k := c(x_k)$ for all $k \in \mathbb{N}$. Given $J_k \in \mathbb{R}^{m \times n}$, we use $Z_k$ to denote a matrix whose columns form an orthonormal basis for $\operatorname{Null}(J_k)$.

1.2 Organization

Our problem of interest and basic assumptions about the problem and the behavior of our algorithm are presented in Section 2. Our algorithm is motivated and presented in Section 3. Convergence guarantees for our algorithm are presented in Section 4. The results of numerical experiments are provided in Section 5, and concluding remarks are provided in Section 6.

2 Problem Statement

Our algorithm is designed for solving (potentially nonlinear and/or nonconvex) equality constrained optimization problems of the form

$$\min_{x \in \mathbb{R}^n} \ f(x) \quad \text{s.t.} \quad c(x) = 0, \quad \text{with} \quad f(x) = \mathbb{E}[F(x, \iota)], \tag{1}$$

where the functions $f : \mathbb{R}^n \to \mathbb{R}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$ are smooth, $\iota$ is a random variable with associated probability space $(\Omega, \mathcal{F}, P)$, $F : \mathbb{R}^n \times \Omega \to \mathbb{R}$, and $\mathbb{E}[\cdot]$ denotes expectation taken with respect to $P$. We assume that values and first-order derivatives of the constraint functions can be computed, but that the objective and its associated first-order derivatives are intractable to compute, and one must instead employ stochastic estimates.
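For concreteness, the following sketch shows a hypothetical instance of (1) together with the kind of stochastic estimate such an algorithm consumes: a data-fitting objective whose expectation is taken over an empirical sample distribution, a mini-batch estimator that is unbiased for $\nabla f$, and a nonlinear equality constraint whose values and Jacobian are exactly computable but whose Jacobian is rank deficient at $x = 0$. The specific problem and all names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-fitting instance of (1): F(x, i) = (a_i' x - b_i)^2, with
# the expectation taken over a uniform draw of the sample index i, and with
# the feasible set being the unit sphere, c(x) = x'x - 1 = 0.
A = rng.standard_normal((1000, 5))   # sample feature vectors a_i
b = rng.standard_normal(1000)        # sample targets b_i

def stochastic_grad(x, batch_size=32):
    """Unbiased mini-batch estimate of the objective gradient E[grad F(x, i)]."""
    idx = rng.integers(A.shape[0], size=batch_size)
    residual = A[idx] @ x - b[idx]
    return 2.0 * A[idx].T @ residual / batch_size

def c(x):
    """Constraint function; values and derivatives are assumed computable."""
    return np.array([x @ x - 1.0])

def c_jacobian(x):
    """Constraint Jacobian, J(x) = 2 x'; rank deficient exactly at x = 0."""
    return 2.0 * x.reshape(1, -1)
```

This mirrors the assumption stated above: the constraint quantities `c(x)` and `c_jacobian(x)` are exact, while the objective gradient is available only through the stochastic estimate `stochastic_grad(x)`.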
