Incorporating Nonmonotone Strategies into the Trust Region Method for Unconstrained Optimization

Computers and Mathematics with Applications 55 (2008) 2158–2172
www.elsevier.com/locate/camwa

Neng-zhu Gu (a), Jiang-tao Mo (b,*)

(a) School of Business, University of Shanghai for Science and Technology, Shanghai 200093, China
(b) College of Mathematics and Information Science, Guangxi University, Nanning 530004, China

Received 7 July 2005; received in revised form 13 August 2007; accepted 30 August 2007

Abstract

This paper concerns a nonmonotone line search technique and its application to the trust region method for unconstrained optimization problems. In our line search technique, the current nonmonotone term is a convex combination of the previous nonmonotone term and the current objective function value, instead of the average of successive objective function values introduced by Zhang and Hager [H. Zhang, W.W. Hager, A nonmonotone line search technique and its application to unconstrained optimization, SIAM J. Optim. 14 (4) (2004) 1043–1056]. We incorporate this nonmonotone scheme into the traditional trust region method so that the new algorithm possesses nonmonotonicity. Unlike the traditional trust region method, our algorithm performs a nonmonotone line search to find a new iterate when a trial step is not accepted, instead of resolving the subproblem. Under mild conditions, we prove that the algorithm is globally and superlinearly convergent. Preliminary numerical results are reported.

© 2008 Published by Elsevier Ltd

Keywords: Unconstrained optimization; Nonmonotone trust region method; Nonmonotone line search; Global convergence; Superlinear convergence

1. Introduction

We consider the unconstrained optimization problem

    min { f(x) : x ∈ R^n },                                                 (1.1)

where f : R^n → R is twice continuously differentiable.
Since it was proposed by Levenberg [1] and Marquardt [2] for nonlinear least-squares problems and then developed by Goldfeld [3] for unconstrained optimization, the trust region method has been studied extensively by many researchers. For example, Powell [4] established the convergence of the trust region method for unconstrained optimization, and Fletcher [5], Yuan [6], and Powell and Yuan [7] proposed various trust region algorithms for constrained optimization problems. For a more complete introduction, we refer the reader to [8].

[This work was supported by the Science Foundation of Guangxi, No. 0542043. * Corresponding author. E-mail addresses: [email protected] (N.-z. Gu), [email protected] (J.-t. Mo). doi:10.1016/j.camwa.2007.08.038]

Trust region methods solve an optimization problem iteratively. At each iteration, a trial step dk is generated by solving the subproblem

    min_{d ∈ R^n}  φk(d) = gk^T d + (1/2) d^T Bk d                          (1.2)
    s.t.  ‖d‖ ≤ ∆k,

where gk = ∇f(xk), Bk ∈ R^{n×n} is an approximation of the Hessian matrix of f at xk, and ∆k > 0 is the trust region radius. Some criterion is used to decide whether the trial step dk is accepted. If it is not, traditional trust region methods resolve the subproblem (1.2) with a reduced trust region radius until an acceptable step is found. The subproblem may therefore be solved several times in one iteration, and these repeated solves are likely to increase the total cost of computation for large scale problems. To improve the computational efficiency of trust region methods, trust region algorithms that incorporate line search techniques have been developed (e.g., Nocedal and Yuan [9] and Gertz [10]).
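A standard inexpensive way to solve (1.2) approximately is the Cauchy point: the minimizer of the model φk along the steepest descent direction inside the trust region. The sketch below is illustrative only and is not the subproblem solver used in this paper; the function name and the use of NumPy are our own assumptions.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Approximately solve min phi(d) = g^T d + 0.5 d^T B d, ||d|| <= delta,
    by minimizing the model along -g within the trust region (Cauchy point)."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gBg = g @ B @ g
    if gBg <= 0.0:
        # Model is non-convex along -g: step all the way to the boundary.
        tau = 1.0
    else:
        # Unconstrained minimizer along -g, clipped to the boundary.
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g
```

The Cauchy step always achieves at least a fixed fraction of the model decrease of the form ‖gk‖ min{∆k, ‖gk‖/‖Bk‖}, which is the kind of sufficient-decrease bound typically imposed on inexact subproblem solutions.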
The prominent virtue of these algorithms is that they perform a line search to find a successful iterate when a trial step is not accepted; this mechanism means that the subproblem needs to be solved only once per iteration.

Most methods for optimization problems require the objective function values to decrease monotonically at each iteration. However, Grippo, Lampariello and Lucidi [11] showed that this requirement can considerably slow the rate of convergence in the intermediate stages of the minimization process, especially in the presence of narrow curved valleys. To overcome this shortcoming, they introduced a nonmonotone line search technique of the form

    f(xk + αdk) ≤ max_{0≤j≤m(k)} f(xk−j) + δα∇f(xk)^T dk,                   (1.3)

where δ ∈ (0, 1) is a constant, N is an integer, and

    m(k) = k,                      k ≤ N;
           min{m(k − 1) + 1, N},   k > N.                                   (1.4)

This technique led to a breakthrough in nonmonotone algorithms for nonlinear optimization. For example, based on (1.3), Deng, Xiao and Zhou [12] first proposed a nonmonotone trust region method for unconstrained optimization, and Ke and Han [13], Sun [14], and Fu and Sun [15] presented various nonmonotone trust region methods.

Recently, Zhang and Hager [16] observed that the nonmonotone technique (1.3) still has some drawbacks. First, a good function value generated in any iteration might be discarded because of the max in (1.3). Second, in many cases the numerical performance depends on the choice of N (see [11,17]). Moreover, for any given bound N on the memory, even when an iterative method generates R-linearly convergent iterates for a strongly convex function, the iterates may not satisfy condition (1.3) for k sufficiently large (see [18]). To overcome these limitations, Zhang and Hager [16] proposed a nonmonotone gradient method for unconstrained optimization.
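A backtracking search that enforces the max-based condition (1.3) can be written in a few lines. This is an illustrative sketch, not the authors' code; the function name `grippo_search`, the list-based memory of past values, and the parameter defaults are our own assumptions.

```python
import numpy as np

def grippo_search(f, grad_f, x, d, fvals, N, delta=1e-4, beta=0.5):
    """Backtracking line search with the max-based nonmonotone condition (1.3):
    f(x + a*d) <= max of the last min(k, N)+1 function values + delta*a*g^T d.
    `fvals` holds the previous function values, newest last."""
    ref = max(fvals[-(N + 1):])      # reference value: max over the memory window
    slope = grad_f(x) @ d            # directional derivative, assumed negative
    a = 1.0
    while f(x + a * d) > ref + delta * a * slope:
        a *= beta                    # backtrack until (1.3) holds
    return a
```

Because the reference value is a max over several past iterates rather than f(xk) alone, a step that temporarily increases the objective can still be accepted.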
Their method requires a weighted average of the successive function values to decrease; that is, the steplength α is computed to satisfy the line search condition

    f(xk + αdk) ≤ Ck + δα∇f(xk)^T dk,                                       (1.5)

where

    Ck = f(xk),                           k = 1;
         (ηk−1 Qk−1 Ck−1 + f(xk))/Qk,     k ≥ 2,                            (1.6)

    Qk = 1,                               k = 1;
         ηk−1 Qk−1 + 1,                   k ≥ 2,                            (1.7)

and ηk−1 ∈ [ηmin, ηmax], where ηmin ∈ (0, ηmax) and ηmax ∈ (0, 1) are two chosen parameters. The numerical results reported in [16] show that the nonmonotone technique (1.5) is particularly efficient for unconstrained problems. Inspired by this nonmonotone technique, Mo, Liu and Yan [19] introduced it into the trust region method and developed a nonmonotone algorithm; their numerical results indicate that the algorithm is robust and encouraging.

The definition of Ck implies that each Ck is a convex combination of the previous Ck−1 and fk, but it involves the auxiliary quantities ηk and Qk. In theory, the best convergence results were obtained by dynamically varying ηk, using values closer to 1 when the iterates were far from the optimum and values closer to 0 when the iterates were near the optimum (see [16] for details). In practice, however, updating ηk and Qk at each iteration becomes an encumbrance. Therefore, in this study, we introduce another nonmonotone line search

    f(xk + αdk) ≤ Dk + δα∇f(xk)^T dk,                                       (1.8)

where Dk is a simple convex combination of the previous Dk−1 and fk:

    Dk = f(xk),                    k = 1;
         ηDk−1 + (1 − η)f(xk),     k ≥ 2,                                   (1.9)

for some fixed η ∈ (0, 1), or a variable ηk.

In the rest of this paper, our purpose is to develop an algorithm that combines the nonmonotone technique (1.8) with the trust region method for unconstrained optimization problems. The algorithm does not require the objective function values to decrease monotonically.
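The contrast between the two averaging schemes is easy to see in code: the Zhang–Hager term (1.6) needs the extra sequence Qk from (1.7), while (1.9) is a single convex combination. The sketch below is illustrative, with a fixed η chosen by us for concreteness.

```python
def update_zhang_hager(C, Q, f_new, eta=0.85):
    """Zhang-Hager update (1.6)-(1.7): C_k is a weighted average of all past
    function values, maintained via the auxiliary quantity Q_k."""
    Q_new = eta * Q + 1.0
    C_new = (eta * Q * C + f_new) / Q_new
    return C_new, Q_new

def update_D(D, f_new, eta=0.85):
    """The simpler update (1.9): a direct convex combination of the previous
    nonmonotone term and the current function value; no Q_k is needed."""
    return eta * D + (1.0 - eta) * f_new
```

In both schemes η near 0 recovers an almost monotone method (the reference value tracks fk closely), while η near 1 gives a long memory; the second update avoids carrying Qk from iteration to iteration.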
Furthermore, if a trial step is not accepted, the algorithm performs the nonmonotone line search (1.8) to find a new iterate instead of resolving the subproblem. The new algorithm can be viewed as a generalization of the Nocedal and Yuan algorithm [9] from monotone to nonmonotone, or of the Mo et al. algorithm [19] from non-line-search to line-search.

The paper is organized as follows. In Section 2, we describe the algorithm. In Section 3, we establish the global convergence and superlinear convergence of the algorithm. Preliminary numerical results are presented in Section 4. Finally, our conclusions are given in Section 5.

2. Algorithm

In this section, we state our method in the form of an algorithm. Throughout this paper, we use ‖·‖ to denote the Euclidean norm, and we write fk for f(xk), gk for ∇f(xk), etc. Vectors are column vectors unless a transpose is used.

The method we present is also an iterative method. At each iteration, a trial step dk is generated by solving the trust region subproblem (1.2). Similarly to [9], we solve (1.2) inexactly, requiring only that ‖dk‖ ≤ ∆k,

    φk(0) − φk(dk) ≥ τ‖gk‖ min{∆k, ‖gk‖/‖Bk‖}                               (2.1)

and

    dk^T gk ≤ −τ‖gk‖ min{∆k, ‖gk‖/‖Bk‖},                                    (2.2)

where τ ∈ (0, 1) is a constant.

To determine whether a trial step is accepted, we compute ρk, the ratio between the actual reduction and the predicted reduction. Instead of using fk − f(xk + dk) as the actual reduction, we use Dk − f(xk + dk), so that the algorithm does not require the objective function values to decrease monotonically. ρk is given by

    ρk = (Dk − f(xk + dk)) / (φk(0) − φk(dk)),                              (2.3)

where Dk is defined by (1.9). If ρk ≥ µ, where µ is a constant, we accept dk as a successful step and let xk+1 = xk + dk. Otherwise, we generate the new iterate as xk+1 = xk + αk dk, where αk is a steplength satisfying the line search condition (1.8).
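The per-iteration decision just described, accept via the ratio (2.3) or fall back to the nonmonotone line search (1.8), can be sketched as follows. This is our own illustration, not the paper's Algorithm 1; the helper name `accept_or_search` and the parameter defaults are hypothetical.

```python
import numpy as np

def accept_or_search(f, g, x, d, D, pred, mu=0.25, delta_ls=1e-4, beta=0.5):
    """Accept x + d when rho_k = (D_k - f(x_k + d_k)) / (phi_k(0) - phi_k(d_k))
    is at least mu; otherwise backtrack along d until condition (1.8) holds.
    `pred` is the predicted reduction phi_k(0) - phi_k(d_k) > 0; `g` is grad f(x)."""
    rho = (D - f(x + d)) / pred
    if rho >= mu:
        return x + d, 1.0                  # trial step accepted
    slope = g @ d                          # negative by condition (2.2)
    a = beta
    while f(x + a * d) > D + delta_ls * a * slope:
        a *= beta                          # nonmonotone backtracking (1.8)
    return x + a * d, a
```

Note that both the ratio test and the line search compare against Dk rather than fk, which is what allows an occasional increase in the objective.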
More precisely, we describe the nonmonotone trust region algorithm as follows: Algorithm 1 (Nonmonotone Trust Region Method with Line Search).
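The pieces above can be assembled into a minimal driver loop. This is only our reconstruction from the preceding prose, not the authors' Algorithm 1: the Cauchy-step subproblem solve with Bk = I, the radius-update rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

def nonmonotone_tr(f, grad, x, eta=0.85, mu=0.25, delta_ls=1e-4,
                   beta=0.5, radius=1.0, tol=1e-8, max_iter=200):
    """Sketch: solve (1.2) approximately (Cauchy step with B_k = I), test the
    ratio rho_k of (2.3) against mu, fall back to the nonmonotone line search
    (1.8) on rejection, and update D_k by (1.9). Radius rule is conventional."""
    D = f(x)                                   # D_1 = f(x_1) per (1.9)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        # Cauchy step for (1.2) with B_k = I: along -g, clipped to the radius.
        d = -min(radius, gnorm) * g / gnorm
        pred = -(g @ d + 0.5 * d @ d)          # phi_k(0) - phi_k(d_k) > 0
        rho = (D - f(x + d)) / pred            # ratio (2.3)
        if rho >= mu:
            x = x + d                          # trial step accepted
            if rho > 0.75:
                radius = min(2.0 * radius, 1e3)
        else:
            a, slope = beta, g @ d
            while f(x + a * d) > D + delta_ls * a * slope:
                a *= beta                      # nonmonotone search (1.8)
            x = x + a * d
            radius *= 0.5                      # shrink for the next iteration
        D = eta * D + (1.0 - eta) * f(x)       # update (1.9)
    return x
```

Even on rejected trial steps the iterate still moves (via the line search), so the subproblem is solved exactly once per iteration, which is the efficiency argument made in the introduction.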
