Intelligent Information Management, 2010, 2, 40-45 doi:10.4236/iim.2010.21005 Published Online January 2010 (http://www.scirp.org/journal/iim)
New Variants of Newton’s Method for Nonlinear Unconstrained Optimization Problems
V. KANWAR¹, Kapil K. SHARMA², Ramandeep BEHL²
¹University Institute of Engineering and Technology, Panjab University, Chandigarh-160 014, India
²Department of Mathematics, Panjab University, Chandigarh-160 014, India
Email: [email protected], [email protected], [email protected]
Abstract: In this paper, we propose new variants of Newton's method based on quadrature formulas and power means for solving nonlinear unconstrained optimization problems. It is proved that the order of convergence of the proposed family is three. Numerical comparisons are made to show the performance of the presented methods. Furthermore, the numerical experiments demonstrate that the logarithmic mean Newton's method outperforms the classical Newton's method and its other variants. MSC: 65H05.
Keywords: Unconstrained Optimization, Newton’s Method, Order of Convergence, Power Means, Initial Guess
1. Introduction

The celebrated Newton's method

x_{n+1} = x_n - f'(x_n)/f''(x_n),   (1)

used to approximate the optimum of a function, is one of the most fundamental tools in computational mathematics, operations research, optimization and control theory. It has many applications in management science, industrial and financial research, chaos and fractals, dynamical systems, stability analysis, variational inequalities and even equilibrium-type problems. Its role in optimization theory cannot be overestimated, as the method is the basis for the most effective procedures in linear and nonlinear programming. For a more detailed survey, one can refer to [1-4] and the references cited therein. The idea behind Newton's method is to approximate the objective function locally by a quadratic function which agrees with the function at a point. The process can be repeated at the point that optimizes the approximate function. Recently, many new modified Newton-type methods and their variants have been reported in the literature [5-8]. One of the reasons for discussing one-dimensional optimization is that some of the iterative methods for higher-dimensional problems involve steps of searching for extrema along certain directions in R^n [8]. Finding the step size α_n along the direction vector d_n involves solving the subproblem of minimizing f(x_{n+1}) = f(x_n + α_n d_n), which is a one-dimensional search problem [9]. Hence the one-dimensional methods are indispensable, and the efficiency of any higher-dimensional method partly depends on them [10]. The purpose of this work is to provide some alternative derivations through power means and to revisit some well-known methods for solving nonlinear unconstrained optimization problems.

2. Review of Definitions of Various Means

For a given finite real number λ, the λ-th power mean m_λ of positive scalars a and b is defined as follows [11]:

m_λ = ( (a^λ + b^λ)/2 )^{1/λ}.   (2)

It is easy to see that

for λ = -1,  m_{-1} = 2ab/(a + b)  (harmonic mean),   (3)

for λ = 1/2,  m_{1/2} = ( (√a + √b)/2 )^2,   (4)

for λ = 1,  m_1 = (a + b)/2  (arithmetic mean).   (5)

For λ → 0, we have

lim_{λ→0} m_λ = √(ab),   (6)

which is the so-called geometric mean of a and b and may be denoted by m_g.
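To make the power mean (2) and its special cases (3)-(6) concrete, here is a small sketch (in Python; the paper's own experiments were carried out in C, and the function name below is ours, used only for illustration):

```python
import math

def power_mean(a, b, lam):
    """Power mean (2) of positive scalars a, b; lam = 0 returns the
    limiting value, which is the geometric mean (6)."""
    if lam == 0:
        return math.sqrt(a * b)
    return ((a ** lam + b ** lam) / 2.0) ** (1.0 / lam)

a, b = 4.0, 9.0
print(power_mean(a, b, -1.0))  # harmonic mean 2ab/(a+b) = 72/13
print(power_mean(a, b, 1.0))   # arithmetic mean (a+b)/2 = 6.5
print(power_mean(a, b, 0.0))   # geometric mean sqrt(ab) = 6.0
```

For small nonzero λ the direct formula already approaches the geometric mean, which is why the λ = 0 case is defined through the limit rather than by the formula itself.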
For given positive scalars a and b, some other well-known means are defined as

N = ( a + √(ab) + b )/3  (Heronian mean),   (7)

C = ( a^2 + b^2 )/( a + b )  (contra-harmonic mean),   (8)

T = 2( a^2 + ab + b^2 )/( 3(a + b) )  (centroidal mean),   (9)

L = ( a - b )/( log a - log b )  (logarithmic mean).   (10)

3. Variants of Newton's Method for Nonlinear Equations

Recently, some modified Newton's methods with cubic convergence have been developed by considering different quadrature formulas for computing the integral arising in Newton's theorem [12]:

f(x) = f(x_n) + ∫_{x_n}^{x} f'(t) dt.   (11)

Weerakoon and Fernando [13] re-derived the classical Newton's method by approximating the indefinite integral in (11) by the rectangular rule. Suppose that x_{n+1} is the root of the equation f(x) = 0; we then put the left side f(x_{n+1}) = 0 in the identity (11) and approximate the integral by the rectangular rule, according to which

∫_{x_n}^{x_{n+1}} f'(t) dt ≈ ( x_{n+1} - x_n ) f'(x_n).   (12)

Therefore, from (11) and (12), they obtain the well-known Newton's method. Weerakoon and Fernando [13] further used the trapezoidal approximation to the definite integral, according to which

∫_{x_n}^{x_{n+1}} f'(t) dt ≈ ( x_{n+1} - x_n )( f'(x_n) + f'(x_{n+1}) )/2,   (13)

to obtain the modified Newton's method given by

x_{n+1} = x_n - 2 f(x_n) / ( f'(x_n) + f'(x^N_{n+1}) ),   (14)

where

x^N_{n+1} = x_n - f(x_n)/f'(x_n)   (15)

is the Newton iterate. In contrast to the classical Newton's method, this method uses the arithmetic mean of f'(x_n) and f'(x^N_{n+1}) instead of f'(x_n). Therefore, it is called the arithmetic mean Newton's method [13]. By using different approximations to the indefinite integral in Newton's theorem (11), different iterative formulas can be obtained for solving nonlinear equations [14].

4. Variants of Newton's Method for Unconstrained Optimization Problems

Now we shall extend this idea to the case of unconstrained optimization problems. Suppose that f(x) is a sufficiently differentiable function. Let x* be an extremum point of f(x); then x* is a root of

f'(x) = 0.   (16)

Extending Newton's theorem (11), we have

f'(x) = f'(x_n) + ∫_{x_n}^{x} f''(t) dt.   (17)

Approximating the indefinite integral in (17) by the rectangular rule, according to which

∫_{x_n}^{x_{n+1}} f''(t) dt ≈ ( x_{n+1} - x_n ) f''(x_n),   (18)

and using f'(x_{n+1}) = 0, we get

x_{n+1} = x_n - f'(x_n)/f''(x_n),  f''(x_n) ≠ 0.   (19)

This is the well-known quadratically convergent Newton's method for unconstrained optimization problems. Approximating the integral in (17) by the trapezoidal approximation

∫_{x_n}^{x_{n+1}} f''(t) dt ≈ ( x_{n+1} - x_n )( f''(x_n) + f''(x_{n+1}) )/2,   (20)

in combination with the approximation f''(x_{n+1}) ≈ f''(x^N_{n+1}), where x^N_{n+1} = x_n - f'(x_n)/f''(x_n) and f''(x_n) ≠ 0, we get the following arithmetic mean Newton's method

x_{n+1} = x_n - 2 f'(x_n) / ( f''(x_n) + f''(x^N_{n+1}) )   (21)

for unconstrained optimization problems. This formula was also derived independently by Kahya [5].
x_{n+1} = x_n - f'(x_n) / f''( ( x_n + x^N_{n+1} )/2 ).   (22)

This formula may be called the midpoint Newton's formula for unconstrained optimization problems. Now, for generalization, approximating the two second derivatives in the correction factor in (21) by a power mean of a = f''(x_n) and b = f''(x^N_{n+1}), we have

x_{n+1} = x_n - f'(x_n) / { sign(f''(x_0)) [ ( |f''(x_n)|^λ + |f''(x^N_{n+1})|^λ )/2 ]^{1/λ} }.   (23)

This family may be called the λ-th power mean iterative family of Newton's method for unconstrained optimization problems.

Special cases: It is interesting to note that for different specific values of λ, various methods can be deduced from Formula (23) as follows:

1) For λ = 1 (arithmetic mean), Formula (23) corresponds to the cubically convergent arithmetic mean Newton's method

x_{n+1} = x_n - 2 f'(x_n) / ( f''(x_n) + f''(x^N_{n+1}) ).   (24)

2) For λ = -1 (harmonic mean), Formula (23) corresponds to a cubically convergent harmonic mean Newton's method

x_{n+1} = x_n - f'(x_n) ( f''(x_n) + f''(x^N_{n+1}) ) / ( 2 f''(x_n) f''(x^N_{n+1}) ).   (25)

3) For λ = 0 (geometric mean), Formula (23) corresponds to a new cubically convergent geometric mean Newton's method

x_{n+1} = x_n - f'(x_n) / ( sign(f''(x_0)) √( f''(x_n) f''(x^N_{n+1}) ) ).   (26)

4) For λ = 1/2, Formula (23) corresponds to a new cubically convergent method

x_{n+1} = x_n - 4 f'(x_n) / ( f''(x_n) + f''(x^N_{n+1}) + 2 sign(f''(x_0)) √( f''(x_n) f''(x^N_{n+1}) ) ).   (27)

5) For λ = 2 (root mean square), Formula (23) corresponds to a new cubically convergent root mean square Newton's method

x_{n+1} = x_n - f'(x_n) / ( sign(f''(x_0)) √( ( f''(x_n)^2 + f''(x^N_{n+1})^2 )/2 ) ).   (28)

Some other new third-order iterative methods, based on the Heronian mean, contra-harmonic mean, centroidal mean, logarithmic mean, etc., can also be obtained from Formula (21).

6) A new cubically convergent iteration method based on the Heronian mean is

x_{n+1} = x_n - 3 f'(x_n) / ( f''(x_n) + f''(x^N_{n+1}) + sign(f''(x_0)) √( f''(x_n) f''(x^N_{n+1}) ) ).   (29)
7) A new cubically convergent iteration method based on the contra-harmonic mean is

x_{n+1} = x_n - f'(x_n) ( f''(x_n) + f''(x^N_{n+1}) ) / ( f''(x_n)^2 + f''(x^N_{n+1})^2 ).   (30)

8) A new cubically convergent iteration method based on the centroidal mean is

x_{n+1} = x_n - 3 f'(x_n) ( f''(x_n) + f''(x^N_{n+1}) ) / ( 2 ( f''(x_n)^2 + f''(x_n) f''(x^N_{n+1}) + f''(x^N_{n+1})^2 ) ).   (31)

9) A new cubically convergent iteration method based on the logarithmic mean is

x_{n+1} = x_n - f'(x_n) ( log f''(x_n) - log f''(x^N_{n+1}) ) / ( f''(x_n) - f''(x^N_{n+1}) ).   (32)

5. Convergence Analysis

Theorem 5.1. Let x* ∈ I be an optimum point of a sufficiently differentiable function f : I → R, I an open interval. If the initial guess x_0 is sufficiently close to x*, then for λ ∈ R the family of methods defined by (23) has cubic convergence, with the following error equation:

e_{n+1} = [ (9(1+λ)/2) c_3^2 + 2 c_4 ] e_n^3 + O(e_n^4),   (33)

where x_n = x* + e_n and c_k = f^{(k)}(x*) / ( k! f''(x*) ), k = 3, 4, ....

Proof: Let x* be an extremum of the function f(x) (i.e., f'(x*) = 0 and f''(x*) ≠ 0). Since f(x) is a sufficiently differentiable function, expanding f(x_n), f'(x_n) and f''(x_n) about x* by means of Taylor's expansion, we have

f(x_n) = f(x*) + (1/2!) f''(x*) e_n^2 + (1/3!) f'''(x*) e_n^3 + O(e_n^4),   (34)

f'(x_n) = f''(x*) [ e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + O(e_n^4) ],   (35)

f''(x_n) = f''(x*) [ 1 + 6 c_3 e_n + 12 c_4 e_n^2 + O(e_n^3) ].   (36)

Using (35) and (36) in (19), we get

f''(x^N_{n+1}) = f''(x*) [ 1 + 18 c_3^2 e_n^2 + 6 c_3 ( 8 c_4 - 18 c_3^2 ) e_n^3 + O(e_n^4) ].   (37)

Case 1) For λ ∈ R \ {0}, Formula (23) may be written as

x_{n+1} = x_n - f'(x_n) / [ ( ( f''(x_n)^λ + f''(x^N_{n+1})^λ )/2 )^{1/λ} ].   (38)

Using the binomial theorem and the Formulae (35), (36) and (37) in (38), we finally obtain

e_{n+1} = [ (9(1+λ)/2) c_3^2 + 2 c_4 ] e_n^3 + O(e_n^4).   (39)

Case 2) For λ = 0, Formula (23) can be written as

x_{n+1} = x_n - f'(x_n) / ( sign(f''(x_0)) √( f''(x_n) f''(x^N_{n+1}) ) ).   (40)

Upon using (35), (36) and (37), we obtain

sign(f''(x_0)) √( f''(x_n) f''(x^N_{n+1}) ) = f''(x*) [ 1 + 3 c_3 e_n + ( (9/2) c_3^2 + 6 c_4 ) e_n^2 + O(e_n^3) ].   (41)

From (40) and (41), we obtain

e_{n+1} = ( (9/2) c_3^2 + 2 c_4 ) e_n^3 + O(e_n^4).   (42)

Therefore, it can be concluded that for all λ ∈ R, the λ-th power mean iterative family (23) for unconstrained optimization problems converges cubically. On similar lines, we can prove the convergence of the remaining Formulae (29)-(32) respectively.

6. Further Modifications of the Family (23)

The two main practical deficiencies of Newton's method, the need for analytic derivatives and the possible failure to converge to a solution from poor starting points, are the key issues in unconstrained optimization problems. Family (23) and the other methods (29)-(32) are also variants of Newton's method and will fail if, at any stage of the computation, the second-order derivative of the function is either zero or very close to zero. This defect can easily be eliminated by a simple modification of the iteration process: we apply Newton's method (19) to the modified function

F(x) = e^{p(x - x_n)} f'(x),   (43)

where p ∈ R. This function has better behavior, as well as the same optimum point, as f(x), and we get the modified Newton's method given by

x_{n+1} = x_n - f'(x_n) / ( f''(x_n) + p f'(x_n) ).   (44)

This is a one-parameter family of Newton's iteration methods for unconstrained optimization problems, and it does not fail if f''(x_n) = 0, unlike Newton's method. In order to obtain quadratic convergence, the sign of the entity p should be chosen so that the denominator is largest in magnitude. On similar lines, we can also modify some of the above-mentioned cubically convergent variants of Newton's method for unconstrained optimization problems. Kahya [5] has also derived a similar formula by using a different approach based on the ideas of Mamta et al. [16]. Similar approaches for finding the simple root of a nonlinear equation or system of equations have been used by Ben-Israel [17] and Kanwar and Tomar [18].

7. Numerical Results

In this section, we shall present the numerical results
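Before turning to the results, the safeguarded iteration (44) from Section 6 can be sketched as follows (our own Python rendering; the magnitude of p is fixed at 1 and its sign is chosen at each step so that the denominator f''(x_n) + p f'(x_n) is largest in magnitude, as suggested above). The starting point is chosen so that f''(x_0) = 0, exactly where the classical iteration (19) breaks down:

```python
import math

def fp(x):   # f'(x) for f(x) = exp(x) - 3x^2
    return math.exp(x) - 6.0 * x

def fpp(x):  # f''(x)
    return math.exp(x) - 6.0

def modified_newton(x, p_abs=1.0, steps=25):
    """Modified Newton's method (44); survives f''(x_n) = 0."""
    for _ in range(steps):
        d_plus = fpp(x) + p_abs * fp(x)
        d_minus = fpp(x) - p_abs * fp(x)
        # pick the sign of p that maximizes |f'' + p f'|
        d = d_plus if abs(d_plus) >= abs(d_minus) else d_minus
        x = x - fp(x) / d
    return x

x0 = math.log(6.0)     # here f''(x0) = 0, so (19) would divide by (nearly) zero
print(abs(fpp(x0)))    # essentially zero
xs = modified_newton(x0)
print(xs, abs(fp(xs))) # xs is a critical point of f: f'(xs) is essentially zero
```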
Table 1. Test problems
No.  Example                                     Initial guesses   Optimum point
1    x^4 - 8.5x^3 - 31.0625x^2 - 7.5x + 45       7.0, 10.0         8.278462409973145
2    e^x - 3x^2                                  -1.0, 1.0         0.204481452703476
3    cos x + (x - 2)^2                           1.0, 3.0          2.35424280166260
4    6.2x x^3, x > 0                             0.5, 2.0          0.860054146289254
5    3774.522/x + 2.27x - 181.529, x > 0         32.0, 45.0        40.777259826660156
Table 2. Comparison table
No.  x0     NM   AMN  HMN  GMN  RMSNM  HERNM  CNM  TMNM  LMNM  (27)
1    7.0    5    4    3    5    4      4      4    4     4     3
1    10.0   5    4    3    4    4      4      4    4     4     3
2    -1.0   5    3    4    5    3      4      3    4     3     3
2    1.0    5    3    4    5    4      4      4    3     3     2
3    1.0    5    4    4    5    4      4      4    4     4     2
3    3.0    4    3    3    4    3      3      3    3     3     2
4    0.5    6    4    4    6    4      4      4    5     4     3
4    2.0    5    4    4    5    4      4      4    4     4     3
5    32.0   5    4    4    5    4      4      4    4     4     3
5    45.0   5    4    3    5    4      4      4    4     4     3

obtained by employing the iterative methods, namely Newton's method (NM), the arithmetic mean Newton's method (AMN), the harmonic mean Newton's method (HMN), the geometric mean Newton's method (GMN), method (27), the root mean square Newton's method (RMSNM), the Heronian mean Newton's method (HERNM), the contra-harmonic mean Newton's method (CNM), the centroidal mean Newton's method (TMNM) and the logarithmic mean Newton's method (LMNM), respectively, to compute the extremum of the functions given in Table 1. We use the same functions as Kahya [6,7]. The results are summarized in Table 2. We use ε = 10⁻¹⁵ as the tolerance. Computations have been performed using C in double precision arithmetic. The following stopping criteria are used for the computer programs:

1) |x_{n+1} - x_n| < ε,    2) |f'(x_{n+1})| < ε.

8. Conclusions

It is well known that quadrature formulas play an important and significant role in the evaluation of definite integrals. We have shown that these quadrature formulas can also be used to develop iterative methods for solving unconstrained optimization problems. Further, this work proposes a family of Newton-type methods based on nonlinear means that can be used to solve unconstrained optimization problems efficiently. The proposed family (23) unifies some of the best-known third-order methods for unconstrained optimization problems and also provides many new methods. All of the proposed third-order methods require three function evaluations per iteration, so that their efficiency index is 3^{1/3} ≈ 1.442, which is better than that of Newton's method, 2^{1/2} ≈ 1.414. Finally, the experimental results showed that the harmonic mean and logarithmic mean Newton's methods are the best among the proposed iteration methods.

9. Acknowledgement

We would like to thank the anonymous referee for his or her valuable comments on improving the original manuscript. We are also thankful to Professor S. K. Tomar, Department of Mathematics, Panjab University, Chandigarh, for several useful suggestions. Ramandeep Behl further acknowledges the financial support of CSIR, New Delhi.

10. References

[1] B. T. Polyak, "Newton's method and its use in optimization," European Journal of Operational Research, Vol. 181, pp. 1086-1096, 2007.
[2] Y. P. Laptin, "An approach to the solution of nonlinear unconstrained optimization problems (brief communications)," Cybernetics and Systems Analysis, Vol. 45, No. 3, pp. 497-502, 2009.
[3] G. Gundersen and T. Steihaug, "On large-scale unconstrained optimization problems and higher order methods," Optimization Methods & Software, DOI: 10.1080/10556780903239071, pp. 1-22, 2009.
[4] H. B. Zhang, "On the Halley class of methods for unconstrained optimization problems," Optimization Methods & Software, DOI: 10.1080/10556780902951643, pp. 1-10, 2009.
[5] E. Kahya, "Modified secant-type methods for unconstrained optimization," Applied Mathematics and Computation, Vol. 181, No. 2, pp. 1349-1356, 2007.
[6] E. Kahya and J. Chen, "A modified secant method for unconstrained optimization," Applied Mathematics and Computation, Vol. 186, No. 2, pp. 1000-1004, 2007.
[7] E. Kahya, "A class of exponential quadratically convergent iterative formulae for unconstrained optimization," Applied Mathematics and Computation, Vol. 186, pp. 1010-1017, 2007.
[8] C. L. Tseng, "A Newton-type univariate optimization algorithm for locating the nearest extremum," European Journal of Operational Research, Vol. 105, pp. 236-246, 1998.
[9] E. Kahya, "A new unidimensional search method for optimization: Linear interpolation method," Applied Mathematics and Computation, Vol. 171, No. 2, pp. 912-926, 2005.
[10] M. V. C. Rao and N. D. Bhat, "A new unidimensional search scheme for optimization," Computers & Chemical Engineering, Vol. 15, No. 9, pp. 671-674, 1991.
[11] P. S. Bullen, "Handbook of Means and Their Inequalities," Kluwer, Dordrecht, Netherlands, 2003.
[12] J. E. Dennis and R. B. Schnabel, "Numerical Methods for Unconstrained Optimization and Nonlinear Equations," Prentice-Hall, New York, 1983.
[13] S. Weerakoon and T. G. I. Fernando, "A variant of Newton's method with accelerated third-order convergence," Applied Mathematics Letters, Vol. 13, pp. 87-93, 2000.
[14] M. Frontini and E. Sormani, "Some variants of Newton's method with third-order convergence," Applied Mathematics and Computation, Vol. 140, pp. 419-426, 2003.
[15] W. Gautschi, "Numerical Analysis: An Introduction," Birkhäuser, Boston, 1997.
[16] Mamta, V. Kanwar, V. K. Kukreja and S. Singh, "On a class of quadratically convergent iteration formulae," Applied Mathematics and Computation, Vol. 166, pp. 633-637, 2005.
[17] A. Ben-Israel, "Newton's method with modified functions," Contemporary Mathematics, Vol. 204, pp. 39-50, 1997.
[18] V. Kanwar and S. K. Tomar, "Exponentially fitted variants of Newton's method with quadratic and cubic convergence," International Journal of Computer Mathematics, Vol. 86, No. 9, pp. 603-611, 2009.