Intelligent Information Management, 2010, 2, 40-45
doi:10.4236/iim.2010.21005 Published Online January 2010 (http://www.scirp.org/journal/iim)

New Variants of Newton's Method for Nonlinear Unconstrained Optimization Problems

V. KANWAR 1, Kapil K. SHARMA 2, Ramandeep BEHL 2
1 University Institute of Engineering and Technology, Panjab University, Chandigarh 160 014, India
2 Department of Mathematics, Panjab University, Chandigarh 160 014, India
Email: [email protected], [email protected], [email protected]

Abstract
In this paper, we propose new variants of Newton's method based on quadrature formulas and power means for solving nonlinear unconstrained optimization problems. It is proved that the order of convergence of the proposed family is three. Numerical comparisons are made to show the performance of the presented methods. Furthermore, numerical experiments demonstrate that the logarithmic mean Newton's method outperforms the classical Newton's method and the other variants of Newton's method. MSC: 65H05.

Keywords: Unconstrained Optimization, Newton's Method, Order of Convergence, Power Means, Initial Guess

1. Introduction

The celebrated Newton's method

    x_{n+1} = x_n - f'(x_n)/f''(x_n),    (1)

used to approximate the optimum of a function, is one of the most fundamental tools in computational mathematics, operations research, optimization and control theory. It has many applications in management science, industrial and financial research, chaos and fractals, dynamical systems, stability analysis, variational inequalities and even equilibrium-type problems.
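Iteration (1) can be sketched in a few lines. The following Python snippet is our illustration, not code from the paper; the test function f(x) = e^x - 2x (with minimizer x = ln 2) and the helper name `newton_opt` are assumptions made for the example.

```python
import math

def newton_opt(df, d2f, x0, tol=1e-12, max_iter=50):
    """Classical Newton's method (1) for 1-D unconstrained optimization:
    x_{n+1} = x_n - f'(x_n)/f''(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = exp(x) - 2x has f'(x) = exp(x) - 2, f''(x) = exp(x),
# so the minimizer is x = ln 2.
x_star = newton_opt(lambda x: math.exp(x) - 2.0, lambda x: math.exp(x), x0=1.0)
```

Starting from x_0 = 1, the iterates converge quadratically toward ln 2 ≈ 0.6931.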
Its role in optimization theory cannot be overestimated, as the method is the basis for the most effective procedures in linear and nonlinear programming. For a more detailed survey, one can refer to [1-4] and the references cited therein. The idea behind Newton's method is to approximate the objective function locally by a quadratic function which agrees with the function at a point. The process can then be repeated at the point that optimizes the approximate function. Recently, many new modified Newton-type methods and their variants have been reported in the literature [5-8]. One of the reasons for discussing one-dimensional optimization is that some of the iterative methods for higher-dimensional problems involve steps of searching for extrema along certain directions in R^n [8]. Finding the step size lambda_n along the direction vector d_n involves solving the subproblem of minimizing f(x_{n+1}) = f(x_n + lambda_n d_n), which is a one-dimensional search problem in lambda_n [9]. Hence the one-dimensional methods are indispensable, and the efficiency of any method partly depends on them [10]. The purpose of this work is to provide some alternative derivations through power means and to revisit some well-known methods for solving nonlinear unconstrained optimization problems.

2. Review of Definitions of Various Means

For a given finite real number lambda, the lambda-th power mean m_lambda of positive scalars a and b is defined as follows [11]:

    m_lambda = ( (a^lambda + b^lambda)/2 )^{1/lambda}.    (2)

It is easy to see that

    for lambda = -1,   m_{-1} = 2ab/(a + b)   (harmonic mean),    (3)

    for lambda = 1/2,  m_{1/2} = ( (sqrt(a) + sqrt(b))/2 )^2,    (4)

    for lambda = 1,    m_1 = (a + b)/2   (arithmetic mean),    (5)

    for lambda -> 0,   lim_{lambda->0} m_lambda = sqrt(ab),    (6)

which is the so-called geometric mean of a and b and may be denoted by m_g.

Copyright © 2010 SciRes IIM

For given positive scalars a and b, some other well-known means are defined as

    N = ( a + sqrt(ab) + b )/3   (Heronian mean),    (7)

    C = (a^2 + b^2)/(a + b)   (contra-harmonic mean),    (8)

    T = 2(a^2 + ab + b^2)/( 3(a + b) )   (centroidal mean),    (9)

and

    L = (a - b)/(log a - log b)   (logarithmic mean).    (10)
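The means (2)-(10) are straightforward to compute. The sketch below is our illustration (the function names are our own, not from the paper); it also shows that the power mean (2) approaches the geometric mean (6) as lambda -> 0.

```python
import math

def power_mean(a, b, lam):
    """lambda-th power mean (2); the lam -> 0 limit is the geometric mean (6)."""
    if lam == 0:
        return math.sqrt(a * b)
    return ((a**lam + b**lam) / 2.0) ** (1.0 / lam)

def heronian(a, b):        # Heronian mean (7)
    return (a + math.sqrt(a * b) + b) / 3.0

def contraharmonic(a, b):  # contra-harmonic mean (8)
    return (a * a + b * b) / (a + b)

def centroidal(a, b):      # centroidal mean (9)
    return 2.0 * (a * a + a * b + b * b) / (3.0 * (a + b))

def logarithmic(a, b):     # logarithmic mean (10); equals a when a == b
    return a if a == b else (a - b) / (math.log(a) - math.log(b))
```

For a = 2, b = 8, for instance, the harmonic, geometric and arithmetic means are 3.2, 4 and 5, and the logarithmic mean lies between the geometric and arithmetic means.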
3. Variants of Newton's Method for Nonlinear Equations

Recently, some modified Newton's methods with cubic convergence have been developed by considering different quadrature formulas for computing the integral arising in Newton's theorem [12]:

    f(x) = f(x_n) + INT_{x_n}^{x} f'(t) dt.    (11)

Weerakoon and Fernando [13] re-derived the classical Newton's method by approximating the indefinite integral in (11) by the rectangular rule. Suppose that x_{n+1} is the root of the equation f(x) = 0; we then put the left side f(x_{n+1}) = 0 in the identity (11) and approximate the integral by the rectangular rule, according to which

    INT_{x_n}^{x_{n+1}} f'(t) dt ~ (x_{n+1} - x_n) f'(x_n).    (12)

Therefore, from (11) and (12), they obtain the well-known Newton's method. Weerakoon and Fernando [13] further used the trapezoidal approximation to the definite integral, according to which

    INT_{x_n}^{x_{n+1}} f'(t) dt ~ (1/2)(x_{n+1} - x_n)( f'(x_n) + f'(x_{n+1}) ),    (13)

to obtain the modified Newton's method given by

    x_{n+1} = x_n - 2 f(x_n) / ( f'(x_n) + f'(x^N_{n+1}) ),    (14)

where

    x^N_{n+1} = x_n - f(x_n)/f'(x_n)    (15)

is the Newton's iterate. In contrast to the classical Newton's method, this method uses the arithmetic mean of f'(x_n) and f'(x^N_{n+1}) instead of f'(x_n); therefore, it is called the arithmetic mean Newton's method [13]. By using different approximations to the indefinite integral in Newton's theorem (11), different iterative formulas can be obtained for solving nonlinear equations [14].

4. Variants of Newton's Method for Unconstrained Optimization Problems

Now we shall extend this idea to the case of unconstrained optimization problems. Suppose that the function f(x) is a sufficiently differentiable function. Let alpha be an extremum point of f(x); then alpha is a root of

    f'(x) = 0.    (16)

Extending Newton's theorem (11), we have

    f'(x) = f'(x_n) + INT_{x_n}^{x} f''(t) dt.    (17)

Approximating the indefinite integral in (17) by the rectangular rule, according to which

    INT_{x_n}^{x_{n+1}} f''(t) dt ~ (x_{n+1} - x_n) f''(x_n),    (18)

and using f'(x_{n+1}) = 0, we get

    x^N_{n+1} = x_n - f'(x_n)/f''(x_n),   f''(x_n) != 0.    (19)

This is the well-known quadratically convergent Newton's method for unconstrained optimization problems. Approximating the integral in (17) by the trapezoidal approximation

    INT_{x_n}^{x_{n+1}} f''(t) dt ~ (1/2)(x_{n+1} - x_n)( f''(x_n) + f''(x_{n+1}) ),    (20)

in combination with the approximation

    f''(x_{n+1}) ~ f''(x^N_{n+1}),  where  x^N_{n+1} = x_n - f'(x_n)/f''(x_n)  and  f''(x_n) != 0,

we get the following arithmetic mean Newton's method:

    x_{n+1} = x_n - 2 f'(x_n) / ( f''(x_n) + f''(x^N_{n+1}) )    (21)

for unconstrained optimization problems. This formula was also derived independently by Kahya [5].
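The predictor-corrector structure of (21) can be sketched as follows. This snippet is our illustration, not code from the paper; the test function f(x) = e^x - 2x (minimizer ln 2) is an assumption made for the example.

```python
import math

def newton_arith_mean(df, d2f, x0, iters=8):
    """Arithmetic mean Newton's method (21):
    x_{n+1} = x_n - 2 f'(x_n) / (f''(x_n) + f''(x_N)),
    where x_N is the classical Newton predictor (19)."""
    x = x0
    for _ in range(iters):
        x_N = x - df(x) / d2f(x)                     # predictor step (19)
        x = x - 2.0 * df(x) / (d2f(x) + d2f(x_N))    # corrector step (21)
    return x

# Example: f(x) = exp(x) - 2x, so f'(x) = exp(x) - 2 and f''(x) = exp(x).
df = lambda x: math.exp(x) - 2.0
d2f = lambda x: math.exp(x)
root = newton_arith_mean(df, d2f, 1.0)
```

Each step costs one extra second-derivative evaluation compared with (19), in exchange for third-order instead of second-order convergence.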
If we use the midpoint rule of integration in (20) (the Gauss-Legendre formula with one knot) [15], then we obtain a new formula given by

    x_{n+1} = x_n - f'(x_n) / f''( (x_n + x^N_{n+1})/2 ).    (22)

This formula may be called the midpoint Newton's formula for unconstrained optimization problems. Now, for generalization, approximating the functions in the correction factor in (21) by a power mean of a = f''(x_n) and b = f''(x^N_{n+1}), we have

    x_{n+1} = x_n - f'(x_n) / [ sign(f''(x_0)) ( ( |f''(x_n)|^lambda + |f''(x^N_{n+1})|^lambda ) / 2 )^{1/lambda} ].    (23)

This family may be called the lambda-th power mean iterative family of Newton's method for unconstrained optimization problems.

Special cases:
It is interesting to note that for different specific values of lambda, various new methods can be deduced from Formula (23) as follows:

1) For lambda = 1 (arithmetic mean), Formula (23) corresponds to the cubically convergent arithmetic mean Newton's method

    x_{n+1} = x_n - 2 f'(x_n) / ( f''(x_n) + f''(x^N_{n+1}) ).    (24)

2) For lambda = -1 (harmonic mean), Formula (23) corresponds to a cubically convergent harmonic mean Newton's method

    x_{n+1} = x_n - (f'(x_n)/2) [ 1/f''(x_n) + 1/f''(x^N_{n+1}) ].    (25)

3) For lambda -> 0 (geometric mean), Formula (23) corresponds to a new cubically convergent geometric mean Newton's method

    x_{n+1} = x_n - f'(x_n) / ( sign(f''(x_0)) sqrt( f''(x_n) f''(x^N_{n+1}) ) ).    (26)

4) For lambda = 1/2, Formula (23) corresponds to a new cubically convergent method

    x_{n+1} = x_n - 4 f'(x_n) / ( f''(x_n) + f''(x^N_{n+1}) + 2 sign(f''(x_0)) sqrt( f''(x_n) f''(x^N_{n+1}) ) ).    (27)

5) For lambda = 2 (root mean square), Formula (23) corresponds to a new cubically convergent root mean square Newton's method

    x_{n+1} = x_n - f'(x_n) / ( sign(f''(x_0)) sqrt( ( f''(x_n)^2 + f''(x^N_{n+1})^2 ) / 2 ) ).    (28)

Some other new third-order iterative methods based on the Heronian mean, contra-harmonic mean, centroidal mean, logarithmic mean, etc. can also be obtained from Formula (21), respectively.
6) A new cubically convergent iteration method based on the Heronian mean is

    x_{n+1} = x_n - 3 f'(x_n) / ( f''(x_n) + sign(f''(x_0)) sqrt( f''(x_n) f''(x^N_{n+1}) ) + f''(x^N_{n+1}) ).    (29)

7) A new cubically convergent iteration method based on the contra-harmonic mean is

    x_{n+1} = x_n - f'(x_n)( f''(x_n) + f''(x^N_{n+1}) ) / ( f''(x_n)^2 + f''(x^N_{n+1})^2 ).    (30)

8) A new cubically convergent iteration method based on the centroidal mean is

    x_{n+1} = x_n - 3 f'(x_n)( f''(x_n) + f''(x^N_{n+1}) ) / ( 2( f''(x_n)^2 + f''(x_n) f''(x^N_{n+1}) + f''(x^N_{n+1})^2 ) ).    (31)

9) A new cubically convergent iteration method based on the logarithmic mean is

    x_{n+1} = x_n - f'(x_n)( log f''(x_n) - log f''(x^N_{n+1}) ) / ( f''(x_n) - f''(x^N_{n+1}) ).    (32)

5. Convergence Analysis

Theorem 5.1 Let alpha in I be an optimum point of a sufficiently differentiable function f: I subset R -> R for an open interval I. If the initial guess x_0 is sufficiently close to alpha, then for lambda in R, the family of methods defined by (23) has cubic convergence with the following error equation:

    e_{n+1} = ( (9(lambda + 1)/2) c_3^2 + 2 c_4 ) e_n^3 + O(e_n^4),    (33)

where x_n = alpha + e_n and c_k = f^{(k)}(alpha) / ( k! f''(alpha) ), k = 3, 4, ....

Proof: Let alpha be an extremum of the function f(x) (i.e. f'(alpha) = 0 and f''(alpha) != 0). Since f(x) is a sufficiently differentiable function, expanding f'(x_n) and f''(x_n) in Taylor series about alpha, substituting the resulting expansions into (23) and simplifying yields the error Equation (33). For example, in the geometric mean case (lambda -> 0), expanding the denominator

    sign(f''(x_0)) sqrt( f''(x_n) f''(x^N_{n+1}) )    (41)

and combining it with the expansion of f'(x_n), we obtain

    e_{n+1} = ( 4.5 c_3^2 + 2 c_4 ) e_n^3 + O(e_n^4).    (42)

Therefore, it can be concluded that for all lambda in R, the lambda-th power mean iterative family (23) for unconstrained optimization problems converges cubically. On similar lines, we can prove the convergence of the remaining Formulae (29)-(32), respectively.
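The cubic convergence asserted by Theorem 5.1 can be checked numerically. The sketch below is our illustration (not from the paper); it runs the geometric mean method (26) on the assumed test function f(x) = e^x - 2x and estimates the computational order of convergence via p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).

```python
import math

def geometric_mean_newton(df, d2f, x0, iters=6):
    """Geometric mean Newton's method (26); returns the list of iterates."""
    sign0 = 1.0 if d2f(x0) > 0 else -1.0   # sign(f''(x_0)) factor from (23)
    xs = [x0]
    x = x0
    for _ in range(iters):
        x_N = x - df(x) / d2f(x)                                 # predictor (19)
        x = x - df(x) / (sign0 * math.sqrt(d2f(x) * d2f(x_N)))   # corrector (26)
        xs.append(x)
    return xs

# Example: f(x) = exp(x) - 2x with minimizer a = ln 2.
a = math.log(2.0)
xs = geometric_mean_newton(lambda x: math.exp(x) - 2.0,
                           lambda x: math.exp(x), 1.3, iters=4)
errs = [abs(x - a) for x in xs]  # errors e_0, e_1, e_2, ... per iteration
```

With the first three errors, the estimated order p comes out close to 3, consistent with the error equation (33); later errors hit machine precision and are unusable for the estimate.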