ISSN: 2377-3316 | http://dx.doi.org/10.21174/josetap.v3i1.44

A New Class of Scaling Matrices for Scaled

Aydin Ayanzadeh*1, Shokoufeh Yazdanian2, Ehsan Shahamatnia3
1 Department of Applied Informatics, Informatics Institute, Istanbul Technical University, Istanbul, Turkey
2 Department of Computer Science, University of Tabriz, Tabriz, Iran
3 Department of Computer and Electrical Engineering, Universidade Nova de Lisboa, Lisbon, Portugal

Abstract— A new class of scaling matrices for interior point Newton-type methods for solving nonlinear systems with simple bounds is considered. We review the essential properties of a scaling matrix and consider several well-known scaling matrices proposed in the literature. We then define a new scaling matrix as a convex combination of these matrices. The proposed scaling matrix inherits the attractive properties of the individual matrices and satisfies additional desired requirements. Numerical experiments demonstrate the superiority of the new scaling matrix in solving several important test problems.

Keywords— Diagonal Scaling; Bound-Constrained Equations; Scaling Matrices; Trust Region Methods; Affine Scaling.

I. INTRODUCTION

Diverse applications including signal processing [36] and compressive sensing [3, 4] incorporate many convex nonlinear optimization problems. Although specialized algorithms have been developed for some of these problems, interior-point methods are still the main tool for tackling them. These methods require a feasible initial point [19]. It should be noted that proper data collection plays an important role in assessing the results obtained [39]. In addition, a wide variety of soft-computing techniques, such as evolutionary computing methods [2, 18, 28, 37, 38] and neural networks [1, 31], involve optimization problems that are sensitive to the initial points. Like the task of verifying solution uniqueness conditions, these tasks reduce to linear feasibility problems with strict inequalities or to nonlinear feasibility problems with bound constraints [16, 34]. The latter are often challenging, so the existing algorithms need to be improved both theoretically and computationally. The nonlinear system with bound constraints is:

F(x) = 0,  x ∈ Ω = {x ∈ ℝⁿ | lᵢ ≤ xᵢ ≤ uᵢ, i = 1, ..., n},   (1)

where F : X → ℝⁿ is a continuously differentiable mapping and X ⊆ ℝⁿ is an open set containing the n-dimensional box Ω = {x ∈ ℝⁿ | l ≤ x ≤ u}. The vectors l ∈ (ℝ ∪ {−∞})ⁿ and u ∈ (ℝ ∪ {+∞})ⁿ are lower and upper bounds on the variables such that Ω has a nonempty interior.

Efficient methods for the solution of this problem with good local convergence behavior have been proposed. The affine scaling trust region approach forms a practical framework for smooth and nonsmooth box-constrained systems of nonlinear equations [6-9]. These methods use an ellipsoidal trust region defined by a diagonal scaling matrix [12]. The diagonal scaling handles the bounds while, at each iteration, a quadratic model of the objective function (1/2)‖F‖² is minimized within a trust region around the current iterate.

The main motivation for the current work is a series of papers by Bellavia et al. [6-11]. The methods they introduced, STRN and CoDoSol, have very good numerical properties [6, 7]. These methods are widely used in practice [14, 35] and their efficiency has been
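To make the data of problem (1) concrete, the sketch below represents a bound-constrained system by its residual mapping F and its bound vectors. The two-dimensional system here is made up for illustration only; it is not one of the paper's test problems.

```python
import numpy as np

# Illustrative instance of problem (1): F(x) = 0 with box constraints
# l <= x <= u. This particular F is an arbitrary smooth example.
def F(x):
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] - x[1]**2])

l = np.array([0.0, 0.0])
u = np.array([2.0, 2.0])

def in_interior(x, l, u):
    """Check x in int(Omega), i.e. l < x < u componentwise."""
    return bool(np.all(x > l) and np.all(x < u))

x = np.array([1.0, 1.0])
# x = (1, 1) lies in int(Omega) and satisfies F(x) = 0
print(in_interior(x, l, u), np.allclose(F(x), 0.0))
```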

* Corresponding author can be contacted via the journal website.

Journal of Software Engineering: Theories and Practices 3 (1): 1-8, 2018 | https://lsp.institute

proved in several papers [7, 23]. In [8] the authors studied the global and fast local convergence of an inexact dogleg method. They did not investigate the choice of a suitable scaling matrix and only reported preliminary results. Later, in [11], they focused on medium scale problems and replaced the inexact Newton step by the exact solution of the Newton equation. They considered several diagonal scaling matrices and showed the assumptions required to ensure convergence. The resulting method is the Constrained Dogleg (CoDo) method, which is freely accessible through the website http://codosol.de.unifi.it. The effectiveness of CoDo is verified by comparing it to STRSCNE [7] and to IATR [9].

In these scaling-based algorithms, the performance is influenced by the selection of a scaling matrix. We introduce a new class of scaling matrices obtained as the convex combination of currently known matrices. We analyze the numerical performance of CoDoSol (the Matlab solver of CoDo) for different convex combinations of the scaling matrices and compare the results using the performance profile approach. We also use a Projected Affine Scaling Interior Point method to check the local convergence properties of the new scaling matrix.

In Section 2 we explain the role of the scaling matrices. In Section 3 we consider several scaling matrices and review the requirements and assumptions. In Section 4 we introduce a new class of scaling matrices and prove that the new matrices satisfy the required assumptions. Finally, in Section 5 we report our conclusions from the computational results.

II. SCALED TRUST REGIONS

In this section we describe the idea behind using scaling matrices to solve problem (1). First note that (1) is closely related to the box constrained optimization problem:

min f(x) = (1/2)‖F(x)‖²  s.t.  x ∈ Ω.   (2)

Every solution of (1) is a global minimum of (2), and if x* is a minimum of (2) such that f(x*) = 0, then x* is a solution of (1). The first order optimality conditions of (2) are equivalent to the nonlinear system of equations

D(x) ∇f(x) = 0,  x ∈ Ω,   (3)

where D is a suitable scaling matrix of order n,

D(x) = diag(d₁(x), ..., dₙ(x)).

Coleman and Li [12] considered only one choice of the scaling matrix; Heinkenschloss et al. [21] later noted that the optimality conditions hold for a general class of scaling matrices satisfying the conditions

(4)

for all i = 1, ..., n and all x ∈ Ω. In affine scaling methods, the bounds are handled through the direction of the scaled gradient, −D(x)∇f(x). Given an iterate xₖ ∈ int(Ω) and the trust region size Δₖ > 0, the trust region subproblem for (2) is:

min over p ∈ ℝⁿ of mₖ(p) ;  ‖Gₖ p‖ ≤ Δₖ,  xₖ + p ∈ int(Ω),   (5)

where mₖ is the norm of the linear model for F(x) at xₖ, i.e. mₖ(p) = ‖Fₖ + F′ₖ p‖, and G = G(x) ∈ ℝⁿˣⁿ. For Gₖ = I the standard spherical trust region is obtained, and for Gₖ = Dₖ^(−1/2) the elliptical trust region. Different approaches have been proposed to solve this subproblem, like STRN [6], which combines ideas from the classical trust-region Newton method for unconstrained nonlinear equations with the interior affine scaling approach for constrained optimization problems, or CoDoSol [11], which is based on a dogleg procedure tailored for constrained problems.

III. SCALING MATRICES

We consider the following well known matrices:

• D^CL(x) given by Coleman and Li [12]. The diagonal elements are:

(6)

• D^HUU(x) given by Heinkenschloss et al. [21]:

(7)

where p > 1 is a fixed constant.

• D^KK given by Kanzow and Klug [26, 27]:
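As a concrete illustration of how such a diagonal scaling handles the bounds, the sketch below implements one standard form of the Coleman-Li diagonal as it is commonly stated in the literature [12]: each entry is the distance to the bound selected by the sign of the gradient component, or 1 when that bound is infinite. This is a sketch of the usual convention, not a transcription of the paper's equation (6).

```python
import numpy as np

def coleman_li_diagonal(x, grad, l, u):
    """Diagonal of a Coleman-Li style scaling matrix D(x).

    For each component i: if the gradient points toward the upper
    bound (grad[i] < 0) use the distance u[i] - x[i]; otherwise use
    x[i] - l[i]; entries stay 1 when the relevant bound is infinite.
    """
    d = np.ones_like(x)
    for i in range(x.size):
        if grad[i] < 0:
            if np.isfinite(u[i]):
                d[i] = u[i] - x[i]
        else:
            if np.isfinite(l[i]):
                d[i] = x[i] - l[i]
    return d

# For x interior to [l, u], the scaled gradient -D(x) grad f(x) shrinks
# near an active bound, which is what keeps iterates strictly feasible.
d = coleman_li_diagonal(np.array([0.5]), np.array([-1.0]),
                        np.array([0.0]), np.array([1.0]))
```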


for a given constant γ > 0.   (8)

• D^HMZ(x) given by Hager et al. [20]:

(9)

where

(10)

and a(x) is a continuous function, strictly positive for any x and uniformly bounded away from zero. This matrix has been used by the authors in their study of a cyclic Barzilai-Borwein method for bound constrained minimization problems: they replaced the Hessian of the objective function with λₖI, where λₖ is the classical Barzilai-Borwein parameter. Then, to compute the new iterate, they move along the scaled gradient with a(xₖ) = λₖ.

Bellavia et al. [11] introduced the following assumptions for a scaling matrix and verified that all of the above scaling matrices satisfy almost all the requirements specified by the assumption.

A. Assumption

(i) D(x) satisfies (4);
(ii) D(x) is bounded in Ω ∩ Bᵣ(x̄) for any x̄ ∈ Ω and r > 0;
(iii) there exists λ̄ > 0 such that the step size λₖ to the boundary from xₖ along the search direction satisfies λₖ > λ̄ whenever the direction is uniformly bounded above;
(iv) for any x̄ ∈ int(Ω) there exist two positive constants r̄ and c such that B_r̄(x̄) ⊂ int(Ω) and ‖D(x)⁻¹‖ ≤ c for any x ∈ B_(r̄/2)(x̄).

We define the convex combination of the above scaling matrices as a new class of scaling matrices as follows:

D^CON = a₁ D^CL + a₂ D^HUU + a₃ D^KK + a₄ D^HMZ,  0 ≤ aᵢ ≤ 1, i = 1, 2, 3, 4,  Σ aᵢ = 1 (i = 1, ..., 4).   (11)

This scaling matrix combines the advantages of all the scaling matrices involved in its definition and seems an appropriate matrix with combined properties. We first have to verify that this matrix satisfies the four desired requirements:

(i) Clearly D^CON satisfies (4).

(ii) As Bellavia et al. [11] showed, the above three matrices are bounded in Ω ∩ Bᵣ(x̄) for x̄ ∈ Ω and r > 0. This implies that the combination of these matrices is also bounded.

(iii) It has been shown in [11] that all the matrices except D^HUU enjoy this property. Thus if we assume a₂ = 0, then D^CON satisfies this condition.

(iv) As before, since all the matrices satisfy condition (iv), the convex combination also verifies this property.

It is worth mentioning that D^CL is discontinuous at points where there exists an index i for which (∇f(x))ᵢ = 0, whereas the matrices D^KK and D^HUU are locally Lipschitz continuous and continuous, respectively. The new matrix D^CON is continuous, so we can have a matrix with the advantages of all three: as fast and efficient as the first one and as smooth and continuous as the second one.

Table 1. Test problems.

IV. NUMERICAL EXPERIMENTS

We ran two experiments: CoDoSol on 15 problems and the Projected Affine Scaling Interior Point method on 2 problems.
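Since all four matrices are diagonal, the convex combination in (11) acts entrywise on their diagonals. The sketch below forms such a combination and enforces the weight constraints 0 ≤ aᵢ ≤ 1, Σaᵢ = 1; the diagonal values used in the example are made up for illustration.

```python
import numpy as np

def convex_combination(diagonals, weights):
    """Diagonal of D^CON = sum_i a_i D_i, given the diagonals of the
    individual scaling matrices. Raises if the weights do not form a
    convex combination (0 <= a_i <= 1, sum a_i = 1), as required by (11).
    """
    a = np.asarray(weights, dtype=float)
    if np.any(a < 0) or np.any(a > 1) or not np.isclose(a.sum(), 1.0):
        raise ValueError("weights must form a convex combination")
    D = np.vstack(diagonals)   # each row: diagonal of one scaling matrix
    return a @ D               # entrywise weighted sum of the diagonals

# e.g. equal weights on two matrices (setting the other weights to zero,
# as done for D^HUU in requirement (iii)); values are hypothetical
d_cl = np.array([0.5, 1.0])
d_kk = np.array([0.3, 0.8])
d_con = convex_combination([d_cl, d_kk], [0.5, 0.5])
```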

21 Journal of Software Engineering: Theories and Practices

A. Experiment 1

In this section, we apply the CoDoSol algorithm to several problems using the 3 scaling matrices D^CL, D^HUU, D^KK and 4 combined matrices:

(1/2)D^CL + (1/2)D^HUU
(1/2)D^CL + (1/2)D^KK
(1/2)D^KK + (1/2)D^HUU
(1/3)D^CL + (1/3)D^HUU + (1/3)D^KK.   (12)

We implement the Constrained Dogleg method in the Matlab code CoDoSol using an elliptical trust region with initial radius 1. The limiting number of iterations is set to 300 and the limiting number of F-evaluations is set to 1000. The experiment was run on 15 problems with dimension between n = 2 and n = 1000, specified in Table 1. Different types of constrained systems can be found in this table, including systems with solutions both within Ω and on the boundary, systems with only lower (upper) bounds, and systems with variable components bounded from above and below. The nonlinear constrained systems come from [13, 15] (problems Pb1 to Pb6) and [40] (problems Pb10 to Pb13), a chemical equilibrium system is given in [17, 32, 33, 41] (Pb7), and nonlinear complementarity problems (NCPs) are given in [25, 29] (Pb8, Pb9), while Pb15 [20] comes from nonlinear BVPs. Dealing with large dimensional problems is also critical: these problems need more CPU and, in some cases, they need to be reformed before feeding them into the solver [5]. Problems Pb14 and Pb15 are examples of this kind of problem [30]. The starting points for the problems with finite lower and upper bounds are selected as x₀ = l + 0.25v(u − l), v = 1, 2, 3, and for the problems with infinite lower and upper bounds as x₀ = 10^v (1, ..., 1)^T and x₀ = −10^v (1, ..., 1)^T, v = 0, 1, 2, respectively.

In CoDoSol the trust region size is updated as in [10, 11]; the failure criteria and parameter selection are the same as in [24], where the authors introduced an innovative and efficient method for parameter selection. We tested the algorithm with the scaling matrices and report the numerical results. Thus, CoDoSol is tested with the following scaling matrices:

• D^CL
• D^KK with γ = 1 as suggested in [27]
• D^HUU with p = 1
• D^CON defined by (11)

and G(x) = D(x)^(−1/2). The efficiency of the scaling matrices has been measured by It, the number of iterations, and Fe, the number of function evaluations needed to reach convergence (Tables 2, 3, 4).

Bellavia et al. showed that CL is superior to KK and KK is superior to HUU on their studied test problems; on our test problems KK is slightly better, while HUU shows the poorest performance.
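The starting-point rule for the finitely bounded problems can be sketched as follows. This is a minimal illustration of the formula x₀ = l + 0.25v(u − l), not the CoDoSol code; the bounds in the example are hypothetical.

```python
import numpy as np

def starting_points(l, u, vs=(1, 2, 3)):
    """Starting points x0 = l + 0.25*v*(u - l), v = 1, 2, 3, used for
    problems with finite lower and upper bounds (Section IV.A)."""
    return [l + 0.25 * v * (u - l) for v in vs]

# For v = 1, 2, 3 the points sit at 25%, 50% and 75% of the box,
# so each starting point is strictly interior to [l, u].
l = np.zeros(4)
u = np.ones(4)
points = starting_points(l, u)
for x0 in points:
    assert np.all(x0 > l) and np.all(x0 < u)
```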

Table 2. CoDoSol with different scaling matrices: Number of iterations for the first starting point


Table 3. CoDoSol with different scaling matrices: Number of iterations for the second starting point

Table 4. CoDoSol with different scaling matrices: Number of iterations for the third starting point

The combination of the scaling matrices shows interesting performance. For example, in 80% of the cases (1/2)CL + (1/2)HUU performs as well as or better than the individual matrices.

To compare these seven scaling matrices, we use performance profiles, and to obtain a more reliable comparison we used Nested Performance Profiles [22], which remove a negative side effect of standard performance profiles. In Fig 1 and Fig 3 the computational effort is measured in terms of mean It (the mean number of iterations over the three starting points) and mean Fe (the mean number of F-evaluations over the three starting points), respectively. In these figures we drop the convex coefficients and write C1 + C2 for (1/2)C1 + (1/2)C2. The black dotted lines correspond to the individual matrices and the red line corresponds to the convex combination of the scaling matrices.

Looking at the performance profiles corresponding to CL+HUU, we can see that it is efficient in solving about 75% of the tests and solves about 95% of the tests within a factor 1 from the best solver. CL+HUU is the best scaling matrix for the studied problems. This can be verified by looking at Figure 2 and Figure 4: Figure 2 is based on It, while Figure 4 is based on Fe.
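For readers unfamiliar with performance profiles, the sketch below computes a standard Dolan-Moré style profile: for each solver, the fraction of problems solved within a factor τ of the best solver on that problem. This is a generic illustration (the nested variant of [22] refines this construction), and the cost table used is hypothetical.

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s] = cost (e.g. iterations) of solver s on problem p,
    np.inf on failure. Returns rho[s, k] = fraction of problems that
    solver s solves within a factor taus[k] of the best solver."""
    best = T.min(axis=1, keepdims=True)   # best cost on each problem
    ratios = T / best                     # performance ratios r[p, s]
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# hypothetical cost table: 3 problems, 2 solvers; inf marks a failure
T = np.array([[4.0, 8.0],
              [10.0, 5.0],
              [3.0, np.inf]])
rho = performance_profile(T, taus=[1.0, 2.0])
```

A higher curve on the left means the solver is the best one more often; a higher curve on the right means it is more robust.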


Table 5. Projected Affine-Scaling Interior-Point Newton Method with different scaling matrices, Rosenbrock function

Figure 1. Performance Profiles for the number of iterations, basic comparisons

Figure 3. Performance Profiles for the F-evaluations, basic comparisons

B. Experiment 2

Table 6. Projected Affine-Scaling Interior-Point Newton Method with different scaling matrices, Wood function

In this section, we illustrate the local behavior of the different scaling strategies using two standard test problems. We implemented the algorithm "Projected Affine-Scaling Interior-Point Newton Method" [29].

The first test example is the famous Rosenbrock function:

f(x) = 100(x₂ − x₁²)² + (1 − x₁)².

This function has a unique global minimum at x* = (1, 1). The lower and upper bounds are l = (0, 0) and u = (1, 1). To study the local convergence properties, the standard starting point is set to x₀ = (0.999, 0.999).

Figure 2. Performance Profiles for the number of iterations, the performance of CL+HUU

Table 5 contains the corresponding numerical results for the scaling matrices. They indicate a slow linear rate of convergence for CL and significantly better behavior for HUU and KK, while for the convex combination we observe fast convergence. This naturally removes the weak point of the scaling matrix CL, since the number of iterations reduces from 34 to 4.
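For reference, the Rosenbrock test function and its analytic gradient (needed, e.g., to build the scaling matrices at each iterate) can be sketched as follows; this is a minimal illustration, not the solver implementation.

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock function f(x) = 100(x2 - x1^2)^2 + (1 - x1)^2."""
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    """Analytic gradient of the Rosenbrock function."""
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

# The global minimizer x* = (1, 1) gives f(x*) = 0 and a zero gradient;
# the starting point (0.999, 0.999) lies strictly inside [0, 1]^2.
x_star = np.array([1.0, 1.0])
x0 = np.array([0.999, 0.999])
```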


The second test problem is the Wood function:

f(x) = 100(x₂ − x₁²)² + (1 − x₁)² + 90(x₄ − x₃²)² + (1 − x₃)² + 10(x₂ + x₄ − 2)² + 0.1(x₂ − x₄)².

This function admits an unconstrained minimum at x* = (1, 1, 1, 1). We use the bounds l = (1, 1, 1, 0.99) and u = (3, 3, 3, 3). The starting point is x₀ = 1.001(1, 1, 1, 1). The corresponding numerical results are given in Table 6.

Again, we see that the convergence of the Coleman-Li matrix is rather slow, although according to Bellavia et al. it is the fastest one for medium scale problems. Clearly, the convex combination of CL and KK helps to mitigate the violation of the strict complementarity assumption that slows down the convergence rate of the affine-scaling Newton method using the Coleman-Li scaling.

Figure 4. Performance Profiles for the F-evaluations, the performance of CL+HUU

V. CONCLUSION

Our experiments show that the performance of the scaling matrices depends on the test problems. Each matrix has its own advantages and disadvantages. For a new problem, there is no way to select the best possible scaling matrix in advance; it has to be done by trial and error. By changing the dimension of the problem or the parameter values of a problem, the previously best matrix will not necessarily remain the best one. The performance of the scaling matrices also depends strongly on the algorithm: a matrix may demonstrate fast convergence for one algorithm and slow convergence for another. To overcome this problem, one can use the convex combination of the scaling matrices. This new class of scaling matrices has the advantages of the individual matrices and in some cases works as the best option.

REFERENCES

[1] Ayanzadeh, Aydin, and Sahand Vahidnia. "A Modified Deep Neural Networks for Dog Breeds Identification." (2018).

[2] Ayanzadeh, Ramin, E. Shahamatnia, and S. Setayeshi. "Determining optimum queue length in computer networks by using memetic algorithms." Applied Sci 9 (2009): 2847-2851. https://doi.org/10.3923/jas.2009.2847.2851

[3] Ayanzadeh, Ramin, Milton Halem, and Tim Finin. "SAT-based Compressive Sensing." arXiv preprint arXiv:1903.03650 (2019).

[4] Ayanzadeh, Ramin, Seyedahmad Mousavi, Milton Halem, and Tim Finin. "Based Binary Compressive Sensing with Matrix Uncertainty." arXiv preprint arXiv:1901.00088 (2019).

[5] Azencott, Robert, Viktoria Muravina, Rasoul Hekmati, Wei Zhang, and Michael Paldino. "Automatic clustering in large sets of time series." In Contributions to Partial Differential Equations and Applications, pp. 65-75. Springer, Cham, 2019. https://doi.org/10.1007/978-3-319-78325-3_6

[6] Bellavia, Stefania, Maria Macconi, and Benedetta Morini. "An affine scaling trust-region approach to bound-constrained nonlinear systems." Applied Numerical Mathematics 44, no. 3 (2003): 257-280. https://doi.org/10.1016/S0168-9274(02)00170-8

[7] Bellavia, Stefania, Maria Macconi, and Benedetta Morini. "STRSCNE: A scaled trust-region solver for constrained nonlinear equations." Computational Optimization and Applications 28, no. 1 (2004): 31-50. https://doi.org/10.1023/B:COAP.0000018878.95983.4e

[8] Bellavia, Stefania, Maria Macconi, and Sandra Pieraccini. "On affine scaling inexact dogleg methods for bound-constrained nonlinear systems."

[9] Bellavia, Stefania, and Benedetta Morini. "An interior global method for nonlinear systems with simple bounds." Optimization Methods and Software 20, no. 4-5 (2005): 453-474. https://doi.org/10.1080/10556780500140516

[10] Bellavia, Stefania, and Benedetta Morini. "Subspace trust-region methods for large bound-constrained nonlinear equations." SIAM Journal on Numerical Analysis 44, no. 4 (2006): 1535-1555. https://doi.org/10.1137/040611951

[11] Bellavia, Stefania, Maria Macconi, and Sandra Pieraccini. "Constrained Dogleg Methods for nonlinear systems with simple bounds." Computational Optimization and Applications 53, no. 3 (2012): 771-794. https://doi.org/10.1007/s10589-012-9469-8

[12] Coleman, Thomas F., and Yuying Li. "An interior trust region approach for nonlinear minimization subject to bounds." SIAM Journal on Optimization 6, no. 2 (1996): 418-445. https://doi.org/10.1137/0806023

[13] Dirkse, Steven P., and Michael C. Ferris. "MCPLIB: A collection of nonlinear mixed complementarity problems." Optimization Methods and Software 5, no. 4 (1995): 319-345. https://doi.org/10.1080/10556789508805619

[14] Ferkl, Lukáš, and Gjerrit Meinsma. "Finding optimal ventilation control for highway tunnels." Tunnelling and Underground Space Technology 22, no. 2 (2007): 222-229. https://doi.org/10.1016/j.tust.2006.04.002

[15] Floudas, Christodoulos A., Panos M. Pardalos, Claire Adjiman, William R. Esposito, Zeynep H. Gümüs, Stephen T. Harding, John L. Klepeis, Clifford A. Meyer, and Carl A. Schweiger. Handbook of Test Problems in Local and Global Optimization. Vol. 33. Springer Science & Business Media, 2013.

[16] Forsgren, Anders, Philip E. Gill, and Margaret H. Wright. "Interior methods for nonlinear optimization." SIAM Review 44, no. 4 (2002): 525-597. https://doi.org/10.1137/S0036144502414942


[17] Ghassemi, Zahra, and Gymama Slaughter. "Dynamic modeling of direct electron transfer PQQ-GDH MWCNTs bioanode function." In Nano/Micro Engineered and Molecular Systems (NEMS), 2017 IEEE 12th International Conference on, pp. 383-387. IEEE, 2017. https://doi.org/10.1109/NEMS.2017.8017047

[18] Gholami, Ali-Asghar, Ramin Ayanzadeh, and Elaheh Raisi. "Fuzzy honey bees foraging optimization: approach for clustering." Journal of Artificial Intelligence 7, no. 1 (2014): 13. https://doi.org/10.3923/jai.2014.13.23

[19] Gondzio, Jacek. "Crash start of interior point methods." European Journal of Operational Research 255, no. 1 (2016): 308-314. https://doi.org/10.1016/j.ejor.2016.05.030

[20] Hager, William W., Bernard A. Mair, and Hongchao Zhang. "An affine-scaling interior-point CBB method for box-constrained optimization." Mathematical Programming 119, no. 1 (2009): 1-32. https://doi.org/10.1007/s10107-007-0199-0

[21] Heinkenschloss, Matthias, Michael Ulbrich, and Stefan Ulbrich. "Superlinear and quadratic convergence of affine-scaling interior-point Newton methods for problems with simple bounds without strict complementarity assumption." Mathematical Programming 86, no. 3 (1999): 615-635. https://doi.org/10.1007/s101070050107

[22] Hekmati, Rasoul, and Hanieh Mirhajianmoghadam. "Nested Performance Profiles for Benchmarking Software." arXiv preprint arXiv:1809.06270 (2018).

[23] Hekmati, Rasoul. "On efficiency of non-monotone adaptive trust region and scaled trust region methods in solving nonlinear systems of equations." Biquarterly Control and Optimization in Applied Mathematics 1, no. 1 (2016): 31-40.

[24] Hekmati, Rasoul, Robert Azencott, Wei Zhang, Zili D. Chu, and Michael J. Paldino. "Localization of Epileptic Seizure Focus by Computerized Analysis of fMRI Recordings." arXiv preprint arXiv:1812.04533 (2018).

[25] Hock, W., and K. Schittkowski. "Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems No. 187." (1981).

[26] Kanzow, Christian, and Andreas Klug. "On affine-scaling interior-point Newton methods for nonlinear minimization with bound constraints." Computational Optimization and Applications 35, no. 2 (2006): 177-197. https://doi.org/10.1007/s10589-006-6514-5

[27] Kanzow, Christian, and Andreas Klug. "An interior-point affine-scaling trust-region method for semismooth equations with box constraints." Computational Optimization and Applications 37, no. 3 (2007): 329-353. https://doi.org/10.1007/s10589-007-9029-9

[28] Khosravani-Rad, Amir, Ramin Ayanzadeh, and Elaheh Raisi. "Dynamic parameters optimization for enhancing performance and stability of PSO." Trends in Applied Sciences Research 9, no. 5 (2014): 238. https://doi.org/10.3923/tasr.2014.238.245

[29] Klug, Andreas. "Affine-Scaling Methods for Nonlinear Minimization Problems and Nonlinear Systems of Equations with Bound Constraints." (2006).

[30] Lukšan, L., and Jan Vlček. Subroutines for Testing Large Sparse and Partially Separable Unconstrained and Equality Constrained Optimization Problems. Vol. 767. Technical Report, 1999.

[31] Marzban, Forough, Ramin Ayanzadeh, and Pouria Marzban. "Discrete time dynamic neural networks for predicting chaotic time series." Journal of Artificial Intelligence 7, no. 1 (2014): 24. https://doi.org/10.3923/jai.2014.24.34

[32] Meintjes, Keith, and Alexander P. Morgan. "Chemical equilibrium systems as numerical test problems." ACM Transactions on Mathematical Software (TOMS) 16, no. 2 (1990): 143-151. https://doi.org/10.1145/78928.78930

[33] Meintjes, Keith, and Alexander P. Morgan. "A methodology for solving chemical equilibrium systems." Applied Mathematics and Computation 22, no. 4 (1987): 333-361. https://doi.org/10.1016/0096-3003(87)90076-2

[34] Mousavi, Seyedahmad, and Jinglai Shen. "Solution Uniqueness of Convex Piecewise Affine Functions Based Optimization with Applications to Constrained ℓ1 Minimization." arXiv preprint arXiv:1711.05882 (2017).

[35] Nezami, Saman, Soobum Lee, Kiweon Kang, and Jaehoon Kim. "Improving Durability of a Vibration Energy Harvester Using Structural Design Optimization." In ASME 2016 Conference on Smart Materials, Adaptive Structures and Intelligent Systems, pp. V002T07A018. American Society of Mechanical Engineers, 2016.

[36] Palomar, Daniel P., and Yonina C. Eldar, eds. Convex Optimization in Signal Processing and Communications. Cambridge University Press, 2010.

[37] Shahamatnia, Ehsan, Ramin Ayanzadeh, Rita A. Ribeiro, and Saeid Setayeshi. "Adaptive imitation scheme for memetic algorithms." In Doctoral Conference on Computing, Electrical and Industrial Systems, pp. 109-116. Springer, Berlin, Heidelberg, 2011. https://doi.org/10.1007/978-3-642-19170-1_12

[38] Shahamatnia, Ehsan, André Mora, Ivan Dorotovič, Rita A. Ribeiro, and José M. Fonseca. "Evaluative Study of PSO/Snake Hybrid Algorithm and Gradient Path Labeling for Calculating Solar Differential Rotation." In: Nguyen N., Kowalczyk R., Filipe J. (eds) Transactions on Computational Collective Intelligence XXIV. Lecture Notes in Computer Science, vol 9770. Springer, Berlin, Heidelberg, 2016. https://doi.org/10.1007/978-3-662-53525-7_2

[39] Shahamatnia, Ehsan, Ivan Dorotovič, André Mora, José Fonseca, and Rita Ribeiro. "Data Inconsistency in Sunspot Detection." In: Filev D. et al. (eds) Intelligent Systems'2014. Advances in Intelligent Systems and Computing, vol 323. Springer, 2015. https://doi.org/10.1007/978-3-319-11310-4_49

[40] Tsoulos, I. G., and Athanassios Stavrakoudis. "On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods." Nonlinear Analysis: Real World Applications 11, no. 4 (2010): 2465-2471. https://doi.org/10.1016/j.nonrwa.2009.08.003

[41] Wächter, Andreas, and Lorenz T. Biegler. "Failure of global convergence for a class of interior point methods for nonlinear programming." Mathematical Programming 88, no. 3 (2000): 565-574. https://doi.org/10.1007/PL00011386
