Evaluating and Generalization of Methods of Cyclic Coordinate, Hooke – Jeeves, and Rosenbrock
Computational Research 2(3): 31-43, 2014
DOI: 10.13189/cr.2014.020301
http://www.hrpub.org

Mohammad Ali Khandan, Mehdi Delkhosh*
Department of Mathematics and Computer, Damghan University, Damghan, Iran
*Corresponding Author: [email protected]

Copyright © 2014 Horizon Research Publishing. All rights reserved.

Abstract: In optimization problems we look for the best point for our problem. Problems are classified as unary or n-ary, and as bounded or unbounded. In this paper we evaluate three methods, Cyclic Coordinate, Hooke-Jeeves, and Rosenbrock, on n-dimensional space for unbounded problems, and we discuss the advantages and disadvantages of these three methods. We offer solutions for some of the disadvantages and give generalized algorithms that improve the methods.

Keywords: Golden Section Method, Cyclic Coordinate Method, Hooke-Jeeves Method, Rosenbrock Method

1. Introduction

In optimization problems we look for the best point for our problem. Problems are classified as unary or n-ary, and as bounded or unbounded. Several methods exist for such problems, among them the Cyclic Coordinate method, the Hooke-Jeeves method, and the Rosenbrock method on n-dimensional space for unbounded problems.

Here we consider the problem of minimizing a function f of several variables without using derivatives. The methods described here proceed in the following manner: given a vector x, a suitable direction d is first determined, and then f is minimized from x in the direction d by one of the techniques we will describe.

Direct search methods are suitable tools for maximizing or minimizing functions that are non-smooth and non-differentiable. The authors of the comprehensive book on non-linear programming [7] mention three multidimensional search methods that do not use differentiation: the cyclic coordinate method, the Hooke-Jeeves method, and the Rosenbrock method. The cyclic coordinate method alters the value of one decision variable at a time, i.e. it uses the coordinate axes as the search directions. The Hooke-Jeeves method uses two search modes: exploratory search in the direction of the coordinate axes (one decision variable altered at a time) and pattern search in other directions (more than one decision variable altered simultaneously). The Rosenbrock method tests m orthogonal search directions (m is the number of optimized decision variables), which coincide with the coordinate axes only at the first iteration. Rosenbrock search is a numerical optimization algorithm applicable to problems in which the objective function is inexpensive to compute and the derivative either does not exist or cannot be computed efficiently.

Another type of method, namely dynamic programming, has been used extensively in stand management optimization [8]. It differs from direct search methods in that, instead of seeking optimal values for continuous decision variables, it finds the optimal sequence of stand states defined by classified values of several growing stock characteristics (for instance, basal area, mean diameter, and stand age). Direct search methods can be used in both even-aged and uneven-aged forestry [9]. Uneven-aged management can also be optimized using linear programming and a transition matrix model [10]. Of the direct search methods mentioned in [7], the Hooke-Jeeves algorithm has been used much in forestry, at least in Finland [11, 12].

In this paper we first introduce these three methods, then discuss their advantages and disadvantages, and finally give generalized algorithms that improve them.

2. The Method of Cyclic Coordinate

In this method we move along the coordinate axes, by a step length λ at a time, until the optimal point is reached [1, 4, and 6].

2.1. Algorithm for Cyclic Coordinate Method

Initial step:
Choose a scalar ε > 0 to be used for terminating the algorithm, and let d_1, d_2, ..., d_n be the coordinate directions. Choose an initial point x_1, let y_1 = x_1, let k = j = 1, and go to the main step.

Main step:
1. Let λ_j be an optimal solution to the problem: minimize f(y_j + λ d_j) over λ ∈ ℝ, and let y_{j+1} = y_j + λ_j d_j. If j < n, replace j by j + 1 and repeat step 1. Otherwise, if j = n, go to step 2.
2. Let x_{k+1} = y_{n+1}. If ||x_{k+1} − x_k|| < ε, then stop. Otherwise let y_1 = x_{k+1}, set j = 1, replace k by k + 1, and repeat step 1.
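To make the steps above concrete, here is a minimal Python sketch of the method. It is our illustration under stated assumptions, not the authors' code: the one-dimensional subproblem of step 1 is delegated to scipy.optimize.minimize_scalar, whereas the paper uses the Golden Section method (see Appendix), and all names are ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cyclic_coordinate(f, x1, eps=0.2, max_cycles=100):
    """Sketch of the cyclic coordinate algorithm of Section 2.1."""
    n = len(x1)
    x = np.asarray(x1, dtype=float)
    for _ in range(max_cycles):
        y = x.copy()
        for j in range(n):                    # step 1: sweep the n coordinate axes
            d = np.zeros(n)
            d[j] = 1.0                        # coordinate direction d_j
            lam = minimize_scalar(lambda t: f(y + t * d)).x
            y = y + lam * d                   # y_{j+1} = y_j + lambda_j * d_j
        if np.linalg.norm(y - x) < eps:       # step 2: ||x_{k+1} - x_k|| < eps
            return y
        x = y                                 # x_{k+1} = y_{n+1}; start the next cycle
    return x

# The test problem used later in Section 2.4.3:
f = lambda x: (x[0] - 2) ** 4 + (x[0] - 2 * x[1]) ** 2
print(cyclic_coordinate(f, [0.0, 3.0]))       # moves from (0, 3) toward the minimizer (2, 1)
```

With an exact line minimizer the iterates behave like the worked example in Section 2.4.3; the choice of the one-dimensional solver is exactly where the problems discussed below arise.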
2.2. Advantages of Cyclic Coordinate Method

1. The algorithm is simple to implement.
2. The algorithm can be used both for functions that are differentiable and for functions that are not.

2.3. Disadvantages of Cyclic Coordinate Method

1. When we use this method on a differentiable function, it converges to a point with zero gradient. But when the function is not differentiable, the method may stall at a non-optimal point.
2. The method converts the n-ary function into a unary one, obtains an optimal point with an algorithm for unary functions, and substitutes it back into the n-ary function. This increases the overhead of the algorithm.
3. A unary subproblem is solved at each step, and attention must be paid to its domain. If the domain does not contain a minimum point, the subproblem does not yield a good solution.

2.4. Checking the Problems of the Cyclic Coordinate Method

2.4.1. Problem 1
The Hooke-Jeeves algorithm is proposed as a solution to the first problem.

2.4.2. Problem 2
The simplex method can be used to solve the second problem (it must be kept in mind that the complexity of the simplex method is not better than that of the present approach).

2.4.3. Problem 3
Consider the following problem, solved by the Cyclic Coordinate method. We use the Golden Section method (see Appendix) to solve the unary subproblems. That method starts with an interval of uncertainty known to contain the minimum point, and at each step it shrinks the interval until it is sufficiently small. Now suppose the interval contains several extremum points, or no minimum point at all. We examine this special state in the following example:

Minimize f(x1, x2) = (x1 − 2)^4 + (x1 − 2x2)^2
subject to (x1, x2) ∈ ℝ².

Start point: x1 = (0, 3), d1 = (1, 0)^t, d2 = (0, 1)^t, ε = 0.2.

In the first step the unary function takes the following form and values (Figure 1):
y1 = (0, 3), d1 = (1, 0)^t,
f(y1 + λd1) = (−6 + λ)^2 + (−2 + λ)^4.

Figure 1. The unary function for y1 = (0, 3), d1 = (1, 0)^t

In the second step the unary function takes the following form and values (Figure 2):
y1 = (0, 3), d2 = (0, 1)^t,
f(y1 + λd2) = 16 + 4(3 + λ)^2.

Figure 2. The unary function for y1 = (0, 3), d2 = (0, 1)^t

As the figures show, in the first step the minimum point lies inside (−4, 4), and at each step the Golden Section method narrows this interval.
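The Golden Section routine itself is given in the paper's Appendix, which is not reproduced in this section; the sketch below is a standard golden-section search written by us under that assumption, applied to the first unary subproblem of the example.

```python
import math

def golden_section(g, a, b, eps=1e-5):
    """Standard golden-section search for a minimum of g on [a, b]."""
    r = (math.sqrt(5) - 1) / 2        # golden ratio conjugate, ~0.618
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    while b - a > eps:
        if g(x1) > g(x2):             # minimum lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + r * (b - a)
        else:                         # minimum lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - r * (b - a)
    return (a + b) / 2                # midpoint of the final interval

# First unary subproblem of the example: g(lam) = f(y1 + lam * d1)
g = lambda lam: (-6 + lam) ** 2 + (-2 + lam) ** 4
print(golden_section(g, -4, 4))       # ~3.13; Table 1 reports the coarser value 3.088
```

This sketch re-evaluates g at both interior points on every pass for clarity; a tuned version would cache one function value per iteration. Note that it can only return a point inside the initial interval [a, b], which is the source of the failure discussed next.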
Now, if we run the algorithm with the initial interval (−4, 4) for the Golden Section method [2], we get the data in Table 1.

Table 1. Run of the Cyclic Coordinate algorithm with initial interval (−4, 4) for the Golden Section algorithm

| k | (x, f(x)) | j | d | y | λ | y_{j+1} |
|---|-----------|---|---|---|---|---------|
| 1 | (0, 3), 52 | 1 | (1, 0) | (0, 3) | 3.088 | (3.088, 3) |
|   |            | 2 | (0, 1) | (3.088, 3) | -1.475 | (3.088, 1.524) |
| 2 | (3.088, 1.524), 1.403 | 1 | (1, 0) | (3.088, 1.524) | -0.478 | (2.609, 1.524) |
|   |            | 2 | (0, 1) | (2.609, 1.524) | -0.242 | (2.609, 1.281) |
| 3 | (2.609, 1.281), 0.140 | 1 | (1, 0) | (2.609, 1.281) | -0.242 | (2.366, 1.281) |
|   |            | 2 | (0, 1) | (2.366, 1.281) | -0.137 | (2.366, 1.143) |
| 4 | (2.366, 1.143), 0.024 | 1 | (1, 0) | (2.366, 1.143) | -0.137 | (2.228, 1.143) |
|   |            | 2 | (0, 1) | (2.228, 1.143) | -0.032 | (2.228, 1.111) |

We find the answer in iteration 4. But if we choose the initial interval (1, 6) for the Golden Section method, we get the data in Table 2. As Table 2 shows, the iterates not only fail to approach the minimum point but grow larger at each step and move away from it, presumably because the later one-dimensional minimizers (for example λ = −1.475) lie outside the fixed interval (1, 6) and the Golden Section search cannot reach them. In the thirtieth iteration x is (33.51, 34.49), far from the values in the previous table. The algorithm does not halt at this point, and we can continue to iteration 31 and beyond.

2.5. Modifications to the Algorithm of the Cyclic Coordinate Method (Solution for Problem 3)

To solve this problem, we would have to find the critical points for the Golden Section algorithm. That is, we would need to differentiate the function in order to find its critical points, and examine the state of the function at points just before and after each point where the gradient is zero. We cannot take this route, for the following reasons:

1. The function we are working on may not be differentiable.
2. At each step we would have to apply another algorithm in order to specify a critical interval for the Golden Section method. This increases the running time of the main algorithm and may not be cost effective.
3. Differentiating the function may be complex.
4. If we can differentiate the function and find the minimum point where the gradient is zero, then we need not trouble ourselves with implementing the algorithm at all, because we already have the minimum point!

3. The Method of Hooke-Jeeves

In this method we move along the coordinate axes until the optimal point is reached [5, 6, and 7].

3.1. Algorithm for Hooke-Jeeves Method

Initial step: