IAENG International Journal of Applied Mathematics, 40:3, IJAM_40_3_06

Polynomial Penalty Method for Solving Linear Programming Problems

Parwadi Moengin, Member, IAENG

Abstract—In this work, we study a class of polynomial order-even penalty functions for solving linear programming problems, with the essential property that each member is a convex polynomial of even order when viewed as a function of the multiplier. Under certain assumptions on the parameters of the penalty function, we give a rule for choosing these parameters. We also give an algorithm for solving this problem.

Index Terms—linear programming, penalty method, polynomial order-even.

I. INTRODUCTION

The basic idea of penalty methods is to eliminate some or all of the constraints and to add to the objective function a penalty term which prescribes a high cost to infeasible points (Wright, 2001; Zboo et al., 1999).

Associated with this method is a parameter σ, which determines the severity of the penalty and, as a consequence, the extent to which the resulting unconstrained problem approximates the original constrained problem (Kas et al., 1999; Parwadi et al., 2002). In this paper we restrict attention to the polynomial order-even penalty function; other penalty functions will appear elsewhere. The paper is concerned with the study of polynomial penalty function methods for solving linear programming problems. It presents some background on the methods, describes the corresponding theorems and algorithms, and closes with some conclusions and comments on the methods.

II. STATEMENT OF THE PROBLEM

Throughout this paper we consider the problem

    minimize    c^T x
    subject to  Ax = b,
                x ≥ 0,                                        (1)

where A ∈ R^{m×n}, c, x ∈ R^n, and b ∈ R^m. Without loss of generality we assume that A has full rank m, and we assume that problem (1) has at least one feasible solution. This problem can be solved with Karmarkar's algorithm or the simplex method (Durazzi, 2000), but in this paper we propose a polynomial penalty method as an alternative for solving the linear programming problem (1).

Manuscript submitted March 6, 2010. This work was supported by the Faculty of Industrial Engineering, Trisakti University, Jakarta. Parwadi Moengin is with the Department of Industrial Engineering, Faculty of Industrial Technology, Trisakti University, Jakarta 11440, Indonesia (email: [email protected], [email protected]).

(Advance online publication: 19 August 2010)


III. POLYNOMIAL PENALTY METHOD

For any scalar σ > 0, we define the polynomial penalty function P(·, σ): R^n → R for problem (1) by

    P(x, σ) = c^T x + σ Σ_{i=1}^{m} (A_i x − b_i)^ρ,          (2)

where ρ > 0 is an even number. Here, A_i and b_i denote the ith row of A and the ith component of b, respectively. The positive even number ρ is chosen to ensure that the function (2) is convex; hence P(x, σ) has a global minimum. We refer to σ as the penalty parameter.

This is the ordinary Lagrangian function in which, in the altered problem, the constraints A_i x − b_i (i = 1, ..., m) are replaced by (A_i x − b_i)^ρ. The penalty terms are thus formed from a sum of order-ρ polynomials of the constraint violations, and the penalty parameter σ determines the amount of the penalty. The motivation behind the introduction of the polynomial order-ρ term is that it may lead to a representation of an optimal solution of problem (1) in terms of a local unconstrained minimum.
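To make the definition concrete, the following is a minimal sketch of how P(x, σ) in (2) and its gradient might be evaluated numerically (Python with NumPy; the function names penalty_value and penalty_gradient are ours and not part of the paper).

    import numpy as np

    def penalty_value(x, sigma, c, A, b, rho=2):
        # P(x, sigma) = c^T x + sigma * sum_i (A_i x - b_i)^rho, with rho a positive even number
        residual = A @ x - b
        return c @ x + sigma * np.sum(residual ** rho)

    def penalty_gradient(x, sigma, c, A, b, rho=2):
        # gradient of (2): c + sigma * rho * A^T (A x - b)^(rho - 1)
        residual = A @ x - b
        return c + sigma * rho * (A.T @ residual ** (rho - 1))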

Simply stating the definition (2) does not give an adequate impression of the dramatic effect of the imposed penalty. In order to understand the function defined by (2), we give an example with several values of σ. Graphs of P(x, σ) are shown in Figures 1−3 for the trivial problem

    minimize    f(x) = x
    subject to  x − 1 = 0,

for which the polynomial penalty function is given by

    P(x, σ) = x + σ(x − 1)^ρ.

Figures 1, 2 and 3 depict the one-dimensional behaviour of the penalty function for ρ = 2, 4 and 6, respectively, each for the three values σ = 1, σ = 2 and σ = 6 of the penalty parameter.

[Figure 1. The quadratic penalty function for ρ = 2, plotted for σ = 1, 2, 6.]
[Figure 2. The polynomial penalty function for ρ = 4, plotted for σ = 1, 2, 6.]
[Figure 3. The polynomial penalty function for ρ = 6, plotted for σ = 1, 2, 6.]

The y-ordinates of these figures represent P(x, σ) for ρ = 2, ρ = 4 and ρ = 6, respectively. Clearly, if the solution x* = 1 of this example is compared with the points which minimize P(x, σ), then x* is a limit point of the unconstrained minimizers of P(x, σ) as σ → ∞.
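The effect of increasing σ in this example can also be read off in closed form: for even ρ the unconstrained minimizer of x + σ(x − 1)^ρ is x(σ) = 1 − (σρ)^{−1/(ρ−1)}, so that x(σ) → x* = 1 as σ → ∞ (for ρ = 2 this is x(σ) = 1 − 1/(2σ)). A short sketch of this computation (ours, not from the paper) is:

    # Closed-form unconstrained minimizer of P(x, sigma) = x + sigma*(x - 1)**rho
    # for the trivial example (rho even): setting the derivative to zero gives
    # x(sigma) = 1 - (sigma*rho)**(-1/(rho - 1)), which tends to x* = 1 as sigma grows.
    def trivial_minimizer(sigma, rho=2):
        return 1.0 - (sigma * rho) ** (-1.0 / (rho - 1))

    for sigma in (1, 2, 6, 100, 10000):
        print(sigma, trivial_minimizer(sigma, rho=2), trivial_minimizer(sigma, rho=4))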

The intuitive motivation for the penalty method is that we seek unconstrained minimizers of P(x, σ) for values of σ increasing to infinity, so a method that solves a sequence of minimization problems can be considered. The polynomial penalty method for problem (1) consists of solving a sequence of problems of the form

    minimize    P(x, σ^k)
    subject to  x ≥ 0,                                        (3)

where {σ^k} is a penalty parameter sequence satisfying 0 < σ^k < σ^{k+1} for all k and σ^k → ∞. The method depends for its success on sequentially increasing the penalty parameter to infinity, and in this paper we concentrate on the effect of the penalty parameter.

The rationale for the penalty method is based on the fact that, as σ^k → ∞, the term

    σ^k Σ_{i=1}^{m} (A_i x − b_i)^ρ,

when added to the objective function, tends to infinity if A_i x − b_i ≠ 0 for some i and equals zero if A_i x − b_i = 0 for all i. Thus, we define the function f: R^n → (−∞, +∞] by

    f(x) = c^T x   if A_i x − b_i = 0 for all i,
    f(x) = ∞       otherwise.

The optimal value of the original problem (1) can then be written as

    f* = inf_{Ax = b, x ≥ 0} c^T x = inf_{x ≥ 0} f(x) = inf_{x ≥ 0} lim_{k→∞} P(x, σ^k).    (4)

On the other hand, the penalty method determines, via the sequence of minimizations (3), the value

    lim_{k→∞} inf_{x ≥ 0} P(x, σ^k).                          (5)

Thus, in order for the penalty method to be successful, the original problem should be such that the interchange of "lim" and "inf" in (4) and (5) is valid.
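For the trivial example above the interchange can be checked directly: with ρ = 2 the minimum value of P(·, σ^k) over x is 1 − 1/(4σ^k), which tends to the optimal value f* = 1 appearing in (4). A small numerical check (ours; it uses SciPy's scalar minimizer, which is not part of the paper) is:

    from scipy.optimize import minimize_scalar

    # Trivial example: minimize x subject to x - 1 = 0, so f* = 1.
    rho = 2
    for sigma in (1, 10, 100, 1000):
        res = minimize_scalar(lambda x: x + sigma * (x - 1.0) ** rho)
        print(sigma, res.fun)   # inf_x P(x, sigma) = 1 - 1/(4*sigma) -> f* = 1 as sigma -> infinity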

Before we give a guarantee for the validity of this interchange, we investigate some properties of the function defined in (2). First, the convexity of the polynomial penalty function (2) is stated in the following theorem.

Theorem 1 (Convexity)
The polynomial penalty function P(x, σ) is convex in its domain for every σ > 0.



Proof.
It is straightforward to prove the convexity of P(x, σ) using the convexity of c^T x and of each (A_i x − b_i)^ρ. This proves the theorem. ∎

The local and global behaviour of the polynomial penalty function (2) is stated in the next theorem, which is a consequence of Theorem 1.

Theorem 2 (Local and global behaviour)
Consider the function P(x, σ) defined in (2). Then
(a) P(x, σ) has a finite unconstrained minimizer in its domain for every σ > 0, and the set M_σ of unconstrained minimizers of P(x, σ) in its domain is convex and compact for every σ > 0.
(b) Any unconstrained local minimizer of P(x, σ) in its domain is also a global unconstrained minimizer of P(x, σ).

Proof.
It follows from Theorem 1 that the smooth function P(x, σ) achieves its minimum in its domain, so P(x, σ) has at least one finite unconstrained minimizer. By Theorem 1, P(x, σ) is convex, so any local minimizer is also a global minimizer. The set M_σ of unconstrained minimizers of P(x, σ) is bounded and closed, because the minimum value of P(x, σ) is unique, and it follows that M_σ is compact. Finally, the convexity of M_σ follows from the fact that the set of optimal points of the convex function P(x, σ) is convex. This verifies Theorem 2. ∎

As a consequence of Theorem 2 we derive the monotonicity behaviour of the objective function of problem (1), of the penalty terms in P(x, σ), and of the minimum value of the polynomial penalty function P(x, σ). To do this, for any σ^k > 0 we denote by x^k and P(x^k, σ^k) a minimizer and the minimum value of problem (3), respectively.

Theorem 3 (Monotonicity)
Let {σ^k} be an increasing sequence of positive penalty parameters such that σ^k → ∞ as k → ∞. Then
(a) {c^T x^k} is non-decreasing;
(b) {Σ_{i=1}^{m} (A_i x^k − b_i)^ρ} is non-increasing;
(c) {P(x^k, σ^k)} is non-decreasing.

Proof.
Let x^k and x^{k+1} denote global minimizers of problem (3) for the penalty parameters σ^k and σ^{k+1}, respectively. By the definition of x^k and x^{k+1} as minimizers and since σ^k < σ^{k+1}, we have

    c^T x^k + σ^k Σ_{i=1}^{m} (A_i x^k − b_i)^ρ ≤ c^T x^{k+1} + σ^k Σ_{i=1}^{m} (A_i x^{k+1} − b_i)^ρ,              (6a)

    c^T x^{k+1} + σ^k Σ_{i=1}^{m} (A_i x^{k+1} − b_i)^ρ ≤ c^T x^{k+1} + σ^{k+1} Σ_{i=1}^{m} (A_i x^{k+1} − b_i)^ρ,   (6b)

    c^T x^{k+1} + σ^{k+1} Σ_{i=1}^{m} (A_i x^{k+1} − b_i)^ρ ≤ c^T x^k + σ^{k+1} Σ_{i=1}^{m} (A_i x^k − b_i)^ρ.       (6c)

Multiplying inequality (6a) by the ratio σ^{k+1}/σ^k and adding the result to inequality (6c), we obtain

    (σ^{k+1}/σ^k − 1) c^T x^k ≤ (σ^{k+1}/σ^k − 1) c^T x^{k+1}.

Since 0 < σ^k < σ^{k+1}, it follows that c^T x^k ≤ c^T x^{k+1}, and part (a) is established.

To prove part (b), we add inequality (6a) to inequality (6c) to get

    (σ^{k+1} − σ^k) Σ_{i=1}^{m} (A_i x^{k+1} − b_i)^ρ ≤ (σ^{k+1} − σ^k) Σ_{i=1}^{m} (A_i x^k − b_i)^ρ,

and thus

    Σ_{i=1}^{m} (A_i x^{k+1} − b_i)^ρ ≤ Σ_{i=1}^{m} (A_i x^k − b_i)^ρ,

as required for part (b).

Using inequalities (6a) and (6b), we obtain

    c^T x^k + σ^k Σ_{i=1}^{m} (A_i x^k − b_i)^ρ ≤ c^T x^{k+1} + σ^{k+1} Σ_{i=1}^{m} (A_i x^{k+1} − b_i)^ρ.

Hence, part (c) of the theorem is established. ∎

We now give the main theorem concerning the polynomial penalty method for the linear programming problem (1).
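The three monotone sequences of Theorem 3 can be observed numerically on the trivial example of Section III; the sketch below (ours, with ρ = 2 and the hypothetical choice σ^k = 10^k) prints c^T x^k, the penalty term and P(x^k, σ^k), which are respectively non-decreasing, non-increasing and non-decreasing.

    # Trivial example with rho = 2: the minimizer of P(., sigma_k) is x^k = 1 - 1/(2*sigma_k).
    for k in range(1, 6):
        sigma_k = 10.0 ** k
        x_k = 1.0 - 1.0 / (2.0 * sigma_k)
        obj = x_k                          # c^T x^k                    (non-decreasing, Theorem 3(a))
        pen = (x_k - 1.0) ** 2             # sum of (A_i x^k - b_i)^rho (non-increasing, Theorem 3(b))
        val = obj + sigma_k * pen          # P(x^k, sigma_k)            (non-decreasing, Theorem 3(c))
        print(k, obj, pen, val)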



Theorem 4 (Convergence of the polynomial penalty function)
Let {σ^k} be an increasing sequence of positive penalty parameters such that σ^k → ∞ as k → ∞, and let x^k and P(x^k, σ^k) be as in Theorem 3. Then
(a) A x^k → b as k → ∞;
(b) c^T x^k → f* as k → ∞;
(c) P(x^k, σ^k) → f* as k → ∞.

Proof.
By the definition of x^k and P(x^k, σ^k), we have

    c^T x^k ≤ P(x^k, σ^k) ≤ P(x, σ^k)   for all x ≥ 0.        (7)

Let f* denote the optimal value of problem (1). We have

    f* = inf_{Ax = b, x ≥ 0} c^T x = inf_{Ax = b, x ≥ 0} P(x, σ^k).

Hence, by taking the infimum of the right-hand side of (7) over x ≥ 0 and Ax = b, we obtain

    P(x^k, σ^k) = c^T x^k + σ^k Σ_{i=1}^{m} (A_i x^k − b_i)^ρ ≤ f*.

Let x̄ be a limit point of {x^k}. Taking the limit superior in the above relation and using the continuity of c^T x and of A_i x − b_i, we obtain

    c^T x̄ + lim sup_{k→∞} σ^k Σ_{i=1}^{m} (A_i x^k − b_i)^ρ ≤ f*.        (8)

Since Σ_{i=1}^{m} (A_i x^k − b_i)^ρ ≥ 0 and σ^k → ∞, we must have

    Σ_{i=1}^{m} (A_i x^k − b_i)^ρ → 0

and

    A_i x̄ − b_i = 0   for all i = 1, ..., m,                  (9)

otherwise the limit superior on the left-hand side of (8) would equal +∞. This proves part (a) of the theorem.

Since {x ∈ R^n : x ≥ 0} is a closed set, we also obtain x̄ ≥ 0. Hence x̄ is feasible, and

    f* ≤ c^T x̄.                                               (10)

Using (8)-(10), we obtain

    f* + lim sup_{k→∞} σ^k Σ_{i=1}^{m} (A_i x^k − b_i)^ρ ≤ c^T x̄ + lim sup_{k→∞} σ^k Σ_{i=1}^{m} (A_i x^k − b_i)^ρ ≤ f*.

Hence,

    lim sup_{k→∞} σ^k Σ_{i=1}^{m} (A_i x^k − b_i)^ρ = 0

and

    f* = c^T x̄,

which proves that x̄ is a global minimum of problem (1). This proves part (b) of the theorem. To prove part (c), we apply the results of parts (a) and (b) and take k → ∞ in the definition of P(x^k, σ^k). ∎

A few remarks about this theorem are in order. First, it assumes that problem (3) has a global minimum. This may not be true if the objective function of problem (1) is replaced by a nonlinear function; however, this situation may be handled by choosing an appropriate value of ρ. We also note that the constraint x ≥ 0 in problem (3) is important to ensure that any limit point of the sequence {x^k} satisfies the condition x ≥ 0.

IV. ALGORITHM

The implications of these theorems are remarkably strong. The polynomial penalty function has a finite unconstrained minimizer for every value of the penalty parameter, and every limit point of a minimizing sequence for the penalty function is a constrained minimizer of problem (1). Thus an algorithm that solves a sequence of minimization problems is suggested. Based on Theorem 4, we formulate the following algorithm for solving problem (1).

Algorithm 1
Given Ax = b, σ^1 > 0, a maximum number of iterations N and a tolerance ε > 0:
1. Choose x^1 ∈ R^n such that A x^1 = b and x^1 ≥ 0.
2. If the optimality conditions for problem (1) are satisfied at x^1, then stop.
3. Compute P(x^1, σ^1) := min_{x ≥ 0} P(x, σ^1) and the corresponding minimizer x^1.
4. For k ≥ 2, set σ^k := 10 σ^{k−1} and compute P(x^k, σ^k) := min_{x ≥ 0} P(x, σ^k) and the corresponding minimizer x^k.
5. If ||x^k − x^{k−1}|| < ε, or |P(x^k, σ^k) − P(x^{k−1}, σ^{k−1})| < ε, or ||A x^k − b|| < ε, or k = N, then stop; else set k := k + 1 and go to step 4. ∎
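A minimal sketch of how Algorithm 1 might be realized is given below (Python with NumPy and SciPy). The inner minimization over x ≥ 0 in steps 3-4 is delegated to a bound-constrained quasi-Newton solver, and the function name polynomial_penalty_lp, the default parameter values and the stopping logic are our own illustration, not the implementation used to produce Table 1.

    import numpy as np
    from scipy.optimize import minimize

    def polynomial_penalty_lp(c, A, b, x1, rho=2, sigma=1.0, N=50, eps=1e-6):
        """Algorithm 1 sketch: min c^T x s.t. Ax = b, x >= 0 via the polynomial penalty method."""
        def P(x, sig):                          # the penalty function (2)
            r = A @ x - b
            return c @ x + sig * np.sum(r ** rho)

        bounds = [(0.0, None)] * len(c)         # the constraint x >= 0 of problem (3)
        x_prev, P_prev = np.asarray(x1, dtype=float), np.inf
        for k in range(1, N + 1):
            res = minimize(P, x_prev, args=(sigma,), method="L-BFGS-B", bounds=bounds)
            x_k, P_k = res.x, res.fun
            if (np.linalg.norm(x_k - x_prev) < eps
                    or abs(P_k - P_prev) < eps
                    or np.linalg.norm(A @ x_k - b) < eps):
                return x_k, k                   # step 5: stopping tests satisfied
            x_prev, P_prev = x_k, P_k
            sigma *= 10.0                       # step 4: sigma^k := 10 * sigma^(k-1)
        return x_prev, N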



Examples. Consider the following problems.

1. Minimize f = 2x_1 + 5x_2 + 7x_3
   subject to 2x_1 + 3x_2 + x_3 = 6,
              x_j ≥ 0, for j = 1, 2, 3.

2. Minimize f = 0.4x_1 + 0.5x_2
   subject to 0.3x_1 + 0.1x_2 ≥ 2.7,
              0.5x_1 + 0.5x_2 = 6,
              x_j ≥ 0, for j = 1, 2.

3. Minimize f = 4x_1 + 3x_2
   subject to 2x_1 + 3x_2 ≥ 6,
              4x_1 + x_2 ≥ 4,
              x_j ≥ 0, for j = 1, 2.

4. Minimize f = −3x_1 + 4x_2
   subject to x_1 − x_2 ≥ 0,
              −x_1 + 2x_2 ≥ 2,
              x_j ≥ 0, for j = 1, 2.

5. Minimize f = 3x_1 + 8x_2
   subject to 3x_1 + 4x_2 ≤ 20,
              3x_1 + x_2 ≥ 12,
              x_j ≥ 0, for j = 1, 2.
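To illustrate how an inequality-constrained test problem is cast into the equality form (1), the sketch below (ours) encodes Problem 2 with a surplus variable and passes it to the polynomial_penalty_lp sketch given after Algorithm 1; the feasible starting point is one possible choice, not necessarily the one used by the author.

    import numpy as np
    # Problem 2 in the form (1): a surplus variable s turns 0.3x1 + 0.1x2 >= 2.7 into an equality.
    c = np.array([0.4, 0.5, 0.0])               # the surplus variable has zero cost
    A = np.array([[0.3, 0.1, -1.0],
                  [0.5, 0.5,  0.0]])
    b = np.array([2.7, 6.0])
    x1 = np.array([12.0, 0.0, 0.9])             # feasible start: A @ x1 = b and x1 >= 0
    # polynomial_penalty_lp is the sketch shown in Section IV
    x_star, iters = polynomial_penalty_lp(c, A, b, x1, rho=2, sigma=1.0)
    print(x_star[:2], iters)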

Table 1 reports the computational results for Algorithm 1 (ρ = 2), Algorithm 1 (ρ = 4) and Karmarkar's algorithm. The first column of Table 1 contains the problem number, and the next two columns for each algorithm contain the total number of iterations and the time (in seconds) taken by that algorithm.

Table 1. Test statistics for Algorithm 1 (ρ = 2), Algorithm 1 (ρ = 4) and Karmarkar's algorithm.

                 Algorithm 1 (ρ = 2)        Algorithm 1 (ρ = 4)        Karmarkar's Algorithm
Problem No.      Iterations  Time (secs.)   Iterations  Time (secs.)   Iterations  Time (secs.)
1.               10          3.4            25          189.1          16          3.6
2.               9           3.9            7           75.3           19          3.7
3.               9           8.5            9           851.4          19          3.7
4.               10          8.7            12          2978.9         12          2.8
5.               10          9.9            19          5441.7         18          3.8

V. CONCLUSION

As described above, the paper has presented penalty functions with penalty terms of polynomial order ρ for solving problem (1), together with the corresponding algorithm; Algorithm 1 is used to solve problem (1). An important feature of these methods is that they do not need an interior-point assumption.

REFERENCES

[1] Durazzi, C. (2000). On the Newton interior-point method for nonlinear programming problems. Journal of Optimization Theory and Applications, 104(1), pp. 73−90.
[2] Kas, P., Klafszky, E., & Malyusz, L. (1999). Convex program based on the Young inequality and its relation to linear programming. Central European Journal for Operations Research, 7(3), pp. 291−304.
[3] Parwadi, M., Mohd, I.B., & Ibrahim, N.A. (2002). Solving bounded LP problems using modified logarithmic-exponential functions. In Purwanto (Ed.), Proceedings of the National Conference on Mathematics and Its Applications in UM Malang (pp. 135−141). Malang: Department of Mathematics UM Malang.
[4] Wright, S.J. (2001). On the convergence of the Newton/log-barrier method. Mathematical Programming, 90(1), pp. 71−100.
[5] Zboo, R.A., Yadav, S.P., & Mohan, C. (1999). Penalty method for an optimal control problem with equality and inequality constraints. Indian Journal of Pure and Applied Mathematics, 30(1), pp. 1−14.
