A Penalty Function Algorithm with Objective Parameters for Nonlinear Mathematical Programming

Journal of Industrial and Management Optimization, Volume 5, Number 3, August 2009, pp. 585–601. doi:10.3934/jimo.2009.5.585

Zhiqing Meng, College of Business and Administration, Zhejiang University of Technology, Zhejiang 310023, China
Qiying Hu, School of Management, Fudan University, Shanghai 201800, China
Chuangyin Dang, Department of Manufacturing Engineering & Engineering Management, City University of Hong Kong, Kowloon, Hong Kong, China

(Communicated by Jan Ruckmann)

Abstract. In this paper, we present a penalty function with objective parameters for inequality constrained optimization problems. We prove that this type of penalty function has good properties for helping to solve inequality constrained optimization problems. Moreover, based on the penalty function, we develop an algorithm for solving inequality constrained optimization problems and prove its convergence under some conditions. Numerical experiments show that a satisfactory approximate solution can be obtained for some constrained optimization problems, just as with an exact penalty function.

2000 Mathematics Subject Classification. Primary: 65K05; Secondary: 90C30.
Key words and phrases. Nonlinear programming, penalty function, objective parameter, constraint.
The first author is supported by the National Natural Science Foundation of China under grant 10671039 and SRG7002006.

1. Introduction. The problem we consider in this paper is the following inequality constrained optimization problem:

$$(P) \quad \min f_0(x) \quad \text{s.t.} \quad f_i(x) \le 0, \; i \in I = \{1, 2, \dots, m\},$$

where f_i : R^n → R, i ∈ I_0 = {0, 1, 2, ..., m}. Its feasible set is denoted by X = {x ∈ R^n | f_i(x) ≤ 0, i ∈ I}.

The penalty function method provides an important approach to solving (P). Its main idea is to transform (P) into a sequence of unconstrained optimization problems that are easier to solve. In recent years, many researchers have studied it in both theoretical and practical respects. A well-known penalty function for (P) is defined as

$$F(x, \rho) = f_0(x) + \rho \sum_{i \in I} \max\{f_i(x), 0\}^2,$$

and the corresponding penalty optimization problem for (P) is

$$(P_\rho) \quad \min F(x, \rho) \quad \text{s.t.} \quad x \in R^n.$$

The penalty function F(x, ρ) is smooth if the constraints and the objective function are differentiable, but it is not necessarily exact. Here, a penalty function F(x, ρ) is exact if there is some ρ* such that an optimal solution to (P_ρ) is also an optimal solution to (P) for all ρ ≥ ρ*. We call ρ a penalty parameter. In 1967, Zangwill [19] presented the penalty function

$$F_1(x, \rho) = f_0(x) + \rho \sum_{i \in I} \max\{f_i(x), 0\},$$

with the corresponding penalty optimization problem for (P)

$$(EP_\rho) \quad \min F_1(x, \rho) \quad \text{s.t.} \quad x \in R^n.$$

The penalty function F_1(x, ρ) is exact under certain assumptions, but it is not smooth.
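For concreteness, the following is a minimal sketch of how the smooth but inexact penalty F(x, ρ) is used in practice: minimize it for an increasing sequence of penalty parameters. The solver (scipy.optimize.minimize), the schedule for ρ, and the toy instance min x1 + x2 s.t. x1² + x2² − 2 ≤ 0, whose optimal solution is x* = (−1, −1), are our own illustrative choices, not taken from the paper.

```python
# A minimal sketch of the classical quadratic penalty method for (P).
# Assumptions (ours, not the paper's): scipy.optimize.minimize as the
# unconstrained solver, and the toy instance
#     min x1 + x2   s.t.   x1^2 + x2^2 - 2 <= 0,
# whose optimal solution is x* = (-1, -1) with f* = -2.
import numpy as np
from scipy.optimize import minimize

f0 = lambda x: x[0] + x[1]                    # objective f_0
fs = [lambda x: x[0]**2 + x[1]**2 - 2.0]      # constraints f_i(x) <= 0

def F(x, rho):
    """F(x, rho) = f0(x) + rho * sum_i max{f_i(x), 0}^2."""
    return f0(x) + rho * sum(max(f(x), 0.0) ** 2 for f in fs)

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:        # rho must grow: F is not exact
    x = minimize(lambda y: F(y, rho), x).x    # warm-start at previous minimizer
    print(f"rho={rho:7.1f}  x={x}  f0={f0(x):.4f}")
# The iterates approach (-1, -1) from outside the feasible set; no finite
# rho recovers the exact solution, which is the inexactness noted above.
```

The sketch makes visible why ρ must be driven upward, which is exactly the numerical burden the objective-parameter approach discussed below seeks to avoid.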
Exact penalty functions have attracted many researchers. For example, Han and Mangasarian [5] presented an exact penalty function for nonlinear programming. Rosenberg [14] gave a globally convergent algorithm for convex programming based on an exact penalty function, and in [15] further studied the stability of exact penalty functions in locally Lipschitz programming. Di Pillo and Grippo [12] proposed an exact penalty function method with global convergence for nonlinear programming problems. Zenios et al. [20] discussed a smooth penalty function algorithm for network-structured problems. Pinar and Zenios [13] presented a smooth exact penalty function for convex constrained optimization problems, in which both the objective function and the constraint functions are convex and the smoothed penalty function is first-order differentiable. Mongeau and Sartenaer [10] discussed automatic decrease of the penalty parameter in exact penalty function methods. However, none of the papers [5], [10], [12], [13], [14], [15], [20] presents a new type of penalty function: the type of penalty function studied in them is the same as that presented in [19]. Moreover, the existing exact penalty function algorithms still need to increase the penalty parameter in order to find a better solution, and the penalty functions are not differentiable [5], [12], [14], [15], [19]. Hence, efficient algorithms such as Newton's method cannot be used to solve constrained optimization problems via those exact penalty function methods.

Recently, Rubinov, Glover, Yang, and Huang [6], [16], [17], [18] presented a nonlinear Lagrangian penalty function, defined as

$$F_k(x, \rho) = \Big[ f_0(x)^k + \rho \sum_{i \in I} \max\{f_i(x), 0\}^k \Big]^{1/k}$$

for some k > 0. The corresponding penalty optimization problem for (P) is

$$(EP^k_\rho) \quad \min F_k(x, \rho) \quad \text{s.t.} \quad x \in R^n.$$

When k = 1, the problem (EP^k_ρ) is exactly the same as (EP_ρ). The penalty function F_k(x, ρ) is smooth for k > 1 if all the constraints and the objective function are differentiable, but it is not necessarily smooth for 0 < k ≤ 1. Conditions ensuring the exactness of this penalty function are given in [16], [17]; however, they are not easy to check.

All penalty function algorithms with a constraint penalty parameter need to increase the penalty parameter ρ gradually, and so do the exact penalty function methods, because we do not know in advance how large the penalty parameter ρ needs to be. In fact, from a computational point of view, it is impossible to take a very large value of the penalty parameter ρ due to the limited precision of a computer.

The penalty function method with an objective penalty parameter has been discussed in [1], [2], [3], [4], [11], where the penalty function is defined as

$$\phi(x, M) = (f_0(x) - M)^p + \sum_{i \in I} f_i(x)^p,$$

where p > 0. Suppose x* is an optimal solution and f* is the optimal value of the objective function; then a sequential penalty function method can be envisaged, in which a convergent sequence M^k → f* is generated so that the minimizers x(M^k) → x*. Morrison [11] considered the problem min{f(x) | g(x) = 0} and defined the penalty problem min (f(x) − M)² + |g(x)|². Without convexity or continuity assumptions, a sequence of problems is constructed by choosing an appropriate convergent sequence M^k. Fletcher [3], [4] discussed a similar type of φ(x, M), and Burke [1] considered a more general type. Fiacco and McCormick [2] gave a general introduction to sequential unconstrained minimization techniques. Mauricio and Maculan [8] discussed a Boolean penalty method for zero-one nonlinear programming and defined another type of penalty function:

$$H(x, M) = \max\{f_0(x) - M, f_1(x), \dots, f_m(x)\}.$$

Meng, Hu and Dang [9] also studied an objective penalty function method with

$$F(x, M) = (f_0(x) - M)^2 + \sum_{i \in I} \max\{f_i(x), 0\}^p,$$

which is a good smooth penalty function; a sketch of the sequential idea behind such methods follows.
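To make the sequential objective-parameter idea concrete, here is a hedged sketch built on the Meng–Hu–Dang form of F(x, M) with p = 2. The bisection update for M, the bracket, the solver, and the toy problem (the same one as in the previous sketch) are all our own illustrative choices; this is not the algorithm analyzed in [9] or [11].

```python
# A hedged sketch of a sequential objective-parameter method using
# F(x, M) = (f0(x) - M)^2 + sum_i max{f_i(x), 0}^2. It relies on the
# observation that min_x F(x, M) = 0 exactly when some feasible point
# attains objective value M, so the smallest such M is the optimal
# value f*. The bisection update for M is our own illustrative choice.
import numpy as np
from scipy.optimize import minimize

f0 = lambda x: x[0] + x[1]
fs = [lambda x: x[0]**2 + x[1]**2 - 2.0]      # same toy problem as above

def F(x, M):
    return (f0(x) - M) ** 2 + sum(max(f(x), 0.0) ** 2 for f in fs)

x_feas = np.zeros(2)             # a known feasible point, so f* <= f0(x_feas)
M_lo, M_hi = -10.0, f0(x_feas)   # assumed bracket with M_lo <= f* <= M_hi
x = x_feas
for _ in range(30):
    M = 0.5 * (M_lo + M_hi)
    res = minimize(lambda y: F(y, M), x)
    x = res.x
    if res.fun < 1e-8:           # zero minimum: M is attainable, hence M >= f*
        M_hi = M
    else:                        # positive minimum: no feasible x reaches M
        M_lo = M
print(M_hi, x)                   # M -> f* = -2 and x -> (-1, -1)
```

Note that M, unlike ρ, stays bounded: the bisection narrows a bracket around f* instead of driving a parameter toward infinity, which echoes the computational motivation above.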
In this paper, we present a more general type of penalty function, which offers good prospects for solving constrained optimization problems. Let the functions Q : R → R ∪ {+∞} and P : R → R ∪ {+∞} satisfy, respectively,

$$Q(t) > 0 \;\text{for all}\; t \in R \setminus \{0\}, \qquad Q(0) = 0, \qquad Q(t_1) < Q(t_2) \;\text{for}\; 0 \le t_1 < t_2,$$

and

$$P(t) = 0 \;\text{if and only if}\; t \le 0, \qquad P(t) > 0 \;\text{if and only if}\; t > 0.$$

We present the following penalty function with objective parameters:

$$F(x, M) = Q(f_0(x) - M) + \sum_{i \in I} P(f_i(x)),$$

where the objective parameter M ∈ R. If Q(t), P(t), and f_i(x) (i ∈ I_0) are all differentiable, then F(x, M) is obviously differentiable as well. For example, letting Q(t) = t² and P(t) = max{t, 0}⁴, we obtain the penalty function

$$F(x, M) = (f_0(x) - M)^2 + \sum_{i \in I} f_i^+(x)^4,$$

where f_i^+(x) = max{0, f_i(x)}, i ∈ I. If each f_i (i ∈ I_0) is first-order (respectively, second-order) differentiable, then each term of F(x, M) is first-order (respectively, second-order) differentiable, since max{t, 0}⁴ is twice continuously differentiable; hence so is F(x, M).

The remainder of this paper is organized as follows. In Section 2, we prove some properties of the penalty function F(x, M) and present an algorithm that solves the original problem (P), with global convergence and without any convexity conditions. In Section 3, we give numerical examples, which show that the algorithm needs only a few iterations to obtain a good approximate solution to (P).

2. A penalty function method with objective parameters. Consider the following nonlinear optimization problem:

$$(P(M)) \quad \min F(x, M) \quad \text{s.t.} \quad x \in Y,$$

where Y ⊂ R^n and the feasible set X ⊂ Y. In particular, when Y = R^n, (P(M)) is an unconstrained optimization problem. If an optimal solution to (P(M)) for some M is also an optimal solution to (P), then M is called an appropriate penalty parameter. Next, we prove a theorem on the penalty function.

Theorem 2.1. If x* is an optimal solution to (P) and M = f_0(x*), then x* is also an optimal solution to (P(M)), with F(x*, M) = 0.

Proof. Since x* is an optimal solution to (P) and M = f_0(x*), we have f_0(x*) − M = 0 and f_i(x*) ≤ 0 for all i ∈ I, so

$$F(x^*, M) = Q(f_0(x^*) - M) + \sum_{i \in I} P(f_i(x^*)) = 0.$$

Since F(x, M) ≥ 0 for all x ∈ R^n, it follows that x* is an optimal solution to (P(M)).
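As a quick numerical check of Theorem 2.1, one can set M to the known optimal value of the toy problem used above and verify that the unconstrained minimum of F(·, M), with the example choices Q(t) = t² and P(t) = max{t, 0}⁴, is zero and is attained at the optimal solution of (P). The problem and solver remain our own illustrative choices.

```python
# A small numerical check of Theorem 2.1 with Q(t) = t^2, P(t) = max{t, 0}^4,
# on the toy problem min x1 + x2 s.t. x1^2 + x2^2 - 2 <= 0 (our own example).
# Its optimal solution is x* = (-1, -1) with M = f0(x*) = -2, so Theorem 2.1
# says min_x F(x, -2) = 0 with minimizer x*.
import numpy as np
from scipy.optimize import minimize

f0 = lambda x: x[0] + x[1]
fs = [lambda x: x[0]**2 + x[1]**2 - 2.0]

def F(x, M):
    return (f0(x) - M) ** 2 + sum(max(f(x), 0.0) ** 4 for f in fs)

res = minimize(lambda y: F(y, -2.0), np.array([0.5, -0.5]))
print(res.fun, res.x)  # ~0, x close to (-1, -1); the quartic term is flat
                       # near the boundary, so the minimizer is loosely pinned
```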
