
A Wolfe line search algorithm for vector optimization

L. R. LUCAMBIO PÉREZ and L. F. PRUDENTE, Universidade Federal de Goiás, Brazil

In a recent paper, Lucambio Pérez and Prudente extended the Wolfe conditions to vector-valued optimization. Here, we propose a line search algorithm for finding a step size satisfying the strong Wolfe conditions in the vector optimization setting. Well-definedness and finite termination results are provided. We discuss practical aspects related to the algorithm and present some numerical experiments illustrating its applicability. Codes supporting this paper are written in Fortran 90 and are freely available for download.

CCS Concepts: • Mathematics of computing → Mathematical software; Continuous optimization.

Additional Key Words and Phrases: line search algorithm, Wolfe conditions, vector optimization

ACM Reference Format:
L. R. Lucambio Pérez and L. F. Prudente. 2019. A Wolfe line search algorithm for vector optimization. ACM Trans. Math. Softw. 1, 1 (June 2019), 23 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

Authors' address: L. R. Lucambio Pérez, [email protected]; L. F. Prudente, [email protected], Universidade Federal de Goiás, Instituto de Matemática e Estatística, Avenida Esperança, s/n, Campus Samambaia, 74690-900, Goiânia, GO, Brazil.

1 INTRODUCTION

In vector optimization, several objectives have to be minimized simultaneously with respect to a partial order induced by a closed convex pointed cone with non-empty interior. The particular case where the cone is the so-called Pareto cone, i.e. the positive orthant, corresponds to multiobjective optimization. Many practical models in different areas lead to multiobjective and vector optimization problems; see, for example, [De et al. 1992; Fliege and Vicente 2006; Fliege and Werner 2014; Gravel et al. 1992; Hong et al. 2008; Leschine et al. 1992; Tavana 2004; Tavana et al. 2010; White 1998]. Recently, the extension of iterative methods from scalar-valued to multiobjective and vector-valued optimization has received considerable attention from the optimization community. Some works dealing with this subject include steepest descent and projected gradient [Chuong and Yao 2012; Cruz et al. 2011; Fliege and Svaiter 2000; Fukuda and Graña Drummond 2011, 2013; Graña Drummond and Iusem 2004; Graña Drummond and Svaiter 2005; Miglierina et al. 2008], conjugate gradient [Lucambio Pérez and Prudente 2018], Newton [Carrizo et al. 2016; Chuong 2013; Fliege et al. 2009; Graña Drummond et al. 2014; Lu and Chen 2014], quasi-Newton [Ansary and Panda 2015; Qu et al. 2011, 2013], subgradient [Bello Cruz 2013], interior point [Villacorta and Oliveira 2011], and proximal methods [Bonnel et al. 2005; Ceng et al. 2010; Ceng and Yao 2007; Chuong 2011; Chuong et al. 2011].

Line search and trust region techniques are essential components to globalize nonlinear descent optimization algorithms. Concepts such as the Armijo and Wolfe conditions are the basis for a broad class of minimization methods that employ line search strategies. Hence, the development of efficient line search procedures satisfying these kinds of conditions is a crucial ingredient in the formulation of several practical algorithms for optimization. This is a well-explored topic in classical optimization, see [Al-Baali and Fletcher 1986; Dennis and Schnabel 1996; Fletcher 2013;
Hager 1989; Hager and Zhang 2005; Lemaréchal 1981; Moré and Sorensen 1982; Moré and Thuente 1994]. In particular, we highlight the paper [Moré and Thuente 1994], where Moré and Thuente proposed an algorithm designed to find a step size satisfying the strong scalar-valued Wolfe conditions. On the other hand, the Wolfe conditions were extended to the vector optimization setting only recently in [Lucambio Pérez and Prudente 2018]. In that work, the authors also introduced the Zoutendijk condition for vector optimization and proved that it holds for a general descent line search method satisfying the vector-valued Wolfe conditions. The Zoutendijk condition is a powerful tool for analyzing the convergence of line search methods. These extensions opened perspectives for the formulation of new algorithms for vector optimization problems.

In the present paper, we take a step beyond [Lucambio Pérez and Prudente 2018] from the perspective of actual implementations of line search strategies. We propose a line search algorithm for finding a step size satisfying the strong vector-valued Wolfe conditions. At each iteration, our algorithm works with a scalar function and uses an inner solver designed to find a step size satisfying the strong scalar-valued Wolfe conditions. In the multiobjective optimization case, this scalar function corresponds to one of the objectives. Assuming that the inner solver stops in finite time, we show that the proposed algorithm is well defined and terminates its execution in a finite number of (outer) iterations. In our implementation, we use the algorithm of Moré and Thuente as the inner solver; its finite termination is guaranteed except in pathological cases, see [Moré and Thuente 1994]. It should be mentioned that an early version of the algorithm proposed here was used in the numerical experiments of [Lucambio Pérez and Prudente 2018], where a family of conjugate gradient methods for vector optimization problems was introduced and tested. We also present a theoretical result: using the convergence result of our algorithm, we improve the theorem in [Lucambio Pérez and Prudente 2018] on the existence of step sizes satisfying the vector-valued Wolfe conditions. We also discuss some practical aspects related to our implementation of the proposed algorithm. In particular, using the geometry of the image set under F in a multiobjective optimization problem, we show that a convex quadratic objective can be neglected, simplifying the problem. Numerical experiments illustrating the applicability of the algorithm are presented. Our codes are written in Fortran 90 and are freely available for download at https://lfprudente.ime.ufg.br/.
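For reference, the scalar-valued strong Wolfe conditions that the inner solver targets for φ(t) = f(x + td) are the sufficient-decrease condition φ(t) ≤ φ(0) + c1 t φ'(0) and the curvature condition |φ'(t)| ≤ c2 |φ'(0)|, with 0 < c1 < c2 < 1. The following minimal Python sketch checks these two inequalities; the function names, the constants c1 = 10⁻⁴ and c2 = 0.1, and the quadratic example are illustrative choices and are not taken from the paper or its Fortran code.

```python
def satisfies_strong_wolfe(phi, dphi, t, c1=1e-4, c2=0.1):
    """Check the classical strong (scalar-valued) Wolfe conditions at step size t.

    phi(t)  = f(x + t*d)            -- objective restricted to the search direction
    dphi(t) = <grad f(x + t*d), d>  -- its derivative along d
    Assumes dphi(0) < 0, i.e., d is a descent direction for f at x.
    """
    sufficient_decrease = phi(t) <= phi(0.0) + c1 * t * dphi(0.0)
    curvature = abs(dphi(t)) <= c2 * abs(dphi(0.0))
    return sufficient_decrease and curvature

# Toy example with f(x) = x^2 at x = 1 and d = -1, so phi(t) = (1 - t)^2:
phi = lambda t: (1.0 - t) ** 2
dphi = lambda t: -2.0 * (1.0 - t)
print(satisfies_strong_wolfe(phi, dphi, t=0.9))  # True: t = 0.9 is close to the minimizer
```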
This paper is organized as follows. In Section 2 we provide some useful definitions, notations, and preliminary results. In Section 3 we describe the Wolfe line search algorithm for vector optimization and present its convergence analysis. Section 4 is devoted to practical aspects related to the algorithm. Section 5 describes the numerical experiments. Finally, we make some comments about this work in Section 6.

Notation. The symbol ⟨·, ·⟩ is the usual inner product in R^m and ∥·∥ denotes the Euclidean norm. If K = (k1, k2, ...) ⊆ N (with kj < kj+1 for all j), we denote K ⊂∞ N.

2 PRELIMINARIES

A closed, convex, and pointed cone K ⊂ R^m with non-empty interior induces a partial order in R^m as follows:

    u ⪯K v (u ≺K v) if and only if v − u ∈ K (v − u ∈ int(K)),

where int(K) denotes the interior of K. In vector optimization, given a function F: R^n → R^m, we seek to find a K-efficient or K-optimal point of F, that is, a point x ∈ R^n for which there is no other y ∈ R^n with F(y) ⪯K F(x) and F(y) ≠ F(x). We denote this problem by

    MinimizeK F(x), x ∈ R^n.    (1)

When F is continuously differentiable, a necessary condition for K-optimality of x ∈ R^n is

    −int(K) ∩ Image(JF(x)) = ∅,    (2)

where JF(x) is the Jacobian of F at x and Image(JF(x)) denotes the image of R^n under the linear mapping JF(x). A point x satisfying (2) is called K-critical. If x is not K-critical, then there is d ∈ R^n such that JF(x)d ∈ −int(K). In this case, it is possible to show that d is a K-descent direction for F at x, i.e., there exists T > 0 such that 0 < t < T implies F(x + td) ≺K F(x), see [Luc 1989].

The positive polar cone of K is defined by K* := {w ∈ R^m | ⟨w, y⟩ ≥ 0 for all y ∈ K}. Let C ⊂ K* \ {0} be a compact set such that

    K* = cone(conv(C)),    (3)

where conv(C) denotes the convex hull of C, and cone(conv(C)) is the cone generated by conv(C). When K is polyhedral, K* is also polyhedral and the finite set of its extremal rays is such that (3) holds. For example, in multiobjective optimization, where K = R^m_+, we have K* = K and C can be defined as the canonical basis of R^m.
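To make the K-descent condition concrete in the multiobjective case, where K = R^m_+ and C is the canonical basis of R^m, a direction d is a K-descent direction at x precisely when every component of JF(x)d, or equivalently every product ⟨c, JF(x)d⟩ with c ∈ C, is strictly negative. The following minimal Python sketch performs this test; the function name and the sample Jacobian are illustrative and are not part of the paper's Fortran package.

```python
import numpy as np

def is_pareto_descent_direction(JF_x, d):
    """For the Pareto cone K = R^m_+, d is a K-descent direction at x exactly
    when JF(x) d lies in -int(K), i.e., all components of JF(x) d are strictly
    negative. JF_x is the m-by-n Jacobian of F evaluated at x."""
    return bool(np.all(JF_x @ d < 0.0))

# Illustration with a hypothetical Jacobian of a bi-objective problem (m = n = 2):
JF_x = np.array([[1.0, 2.0],
                 [3.0, -1.0]])
print(is_pareto_descent_direction(JF_x, np.array([-1.0, 0.0])))  # True:  JF(x) d = (-1, -3)
print(is_pareto_descent_direction(JF_x, np.array([0.0, 1.0])))   # False: JF(x) d = ( 2, -1)
```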