ILNumerics Optimization Toolbox
1 INTRODUCTION
Optimization deals with the minimization or maximization of functions. The ILNumerics Optimization Toolbox provides functions that perform minimization (or maximization) of general nonlinear functions. An optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete. Here, we focus on continuous optimization problems.
The standard form of a continuous optimization problem is
$$
\begin{aligned}
\min_{x}\quad & f(x) \\
\text{subject to}\quad & g_i(x) \le 0, \quad i = 1, \dots, m, \\
& h_j(x) = 0, \quad j = 1, \dots, p, \\
& x \in \Omega,
\end{aligned}
$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is the objective function to be minimized over the variable $x$, the $g_i : \mathbb{R}^n \to \mathbb{R}$ are called the inequality constraints, the $h_j : \mathbb{R}^n \to \mathbb{R}$ are called the equality constraints, and $\Omega \subseteq \mathbb{R}^n$ is a convex set, called the bound constraints.
By convention, the standard form defines a minimization problem. A maximization problem can be handled by negating the objective function. Based on the description of the function $f$ and the feasible set $\Omega$, the problem can be classified as a linear, quadratic, nonlinear, semi-infinite, semi-definite, multiple-objective, or discrete optimization problem. However, in its current state, the ILNumerics Optimization Toolbox provides only nonlinear unconstrained and constrained optimization functions.
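As a small illustration (not taken from the toolbox documentation), the problem of finding the point closest to $(1, 2)$ within the half-plane $x_1 + x_2 \le 1$ reads, in standard form,
$$
\begin{aligned}
\min_{x \in \mathbb{R}^2}\quad & f(x) = (x_1 - 1)^2 + (x_2 - 2)^2 \\
\text{subject to}\quad & g_1(x) = x_1 + x_2 - 1 \le 0,
\end{aligned}
$$
and a maximization problem $\max_x f(x)$ is handled as $\min_x \,(-f(x))$.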
2 UNCONSTRAINED OPTIMIZATION
The function available for unconstrained optimization problems in ILNumerics is called optimUnconst. The optimUnconst function solves optimization problems with nonlinear objectives and without bound constraints on the unknown variables. It implements a quasi-Newton method that uses the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula to update the approximate Hessian matrix. The quasi-Newton method has an O(n²) memory requirement. At the moment, only the BFGS algorithm and the classical adaptive Newton method are available for unconstrained optimization problems.
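For reference, the textbook BFGS update of the approximate Hessian $B_k$ is
$$
B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
$$
Storing the dense $n \times n$ matrix $B_k$ accounts for the O(n²) memory requirement mentioned above (the toolbox's internal implementation details may differ from this textbook form).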
optimUnconst gives the option to provide user-defined functions for the computation of the Hessian or the gradient. By default, the gradient is computed using finite differences based on an optimal step size.
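The following minimal sketch illustrates the finite-difference idea with a central-difference scheme. It uses plain .NET types for clarity; the fixed step size h = 1e-6 is a placeholder (optimUnconst chooses an optimal step size), and the callback signature expected by the toolbox is an assumption here.

    using System;

    class FiniteDifferenceSketch {
        // Central-difference approximation of the gradient of f at x.
        static double[] NumGrad(Func<double[], double> f, double[] x, double h = 1e-6) {
            var g = new double[x.Length];
            for (int i = 0; i < x.Length; i++) {
                double xi = x[i];
                x[i] = xi + h; double fp = f(x);   // f(x + h e_i)
                x[i] = xi - h; double fm = f(x);   // f(x - h e_i)
                x[i] = xi;                         // restore x
                g[i] = (fp - fm) / (2 * h);        // central difference, O(h^2) error
            }
            return g;
        }

        static void Main() {
            // Gradient of f(x) = x0^2 + 3*x1 at (1, 2): expect [2, 3].
            double[] g = NumGrad(v => v[0] * v[0] + 3 * v[1], new double[] { 1, 2 });
            Console.WriteLine($"{g[0]:F4}, {g[1]:F4}");
        }
    }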
The optimUnconst function is essentially an unconstrained nonlinear optimization solver:
• xopt = optimUnconst(objfunc, x0);
• xopt = optimUnconst(objfunc, x0, gradfunc: gradient);
• xopt = optimUnconst(objfunc, x0, hessianFunc: hessian);
• xopt = optimUnconst(objfunc, x0, gradfunc: gradient, hessianFunc: hessian);
where
• objfunc is the objective function,
• x0 is the initial guess,
• gradient is the gradient of the objective function,
• hessian is the Hessian function giving the explicit expression of the Hessian matrix,
• xopt is the optimal point, or the minimizer.
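An end-to-end call might look as follows. This is a sketch only: the namespace and the Optimization class exposing optimUnconst are assumptions, as are the exact delegate signatures; only the call forms listed above are taken from the toolbox.

    using System;
    using ILNumerics;
    using ILNumerics.Toolboxes;   // assumed namespace of the Optimization Toolbox

    class UnconstrainedSketch {
        // Objective: f(x) = sum(x_i^2); smooth, with minimum at the origin.
        static ILRetArray<double> Objfunc(ILInArray<double> x) {
            using (ILScope.Enter(x)) {
                return ILMath.sum(x * x);
            }
        }

        // Analytic gradient of f: grad f(x) = 2 x.
        static ILRetArray<double> Gradient(ILInArray<double> x) {
            using (ILScope.Enter(x)) {
                return 2.0 * x;
            }
        }

        static void Main() {
            ILArray<double> x0 = new double[] { 1.0, -2.5 };   // initial guess
            // Assumed entry point; the class exposing optimUnconst may differ
            // in your installation of the toolbox.
            ILArray<double> xopt = Optimization.optimUnconst(Objfunc, x0,
                                                             gradfunc: Gradient);
            Console.WriteLine(xopt.ToString());   // expect values near [0; 0]
        }
    }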
2.1 THE COST FUNCTION
The cost function is passed directly to optimUnconst as a function parameter. In most cases, this will be an ILNumerics function, but it is also fine to pass the cost function as an anonymous function. Requirements on the cost function: the cost function has to be “smooth enough”, i.e. the second derivative of the cost function (the Hessian matrix) is expected to exist and to be non-zero on the whole definition set.
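As an illustration, the anonymous-function variant might look like this (a sketch; the delegate type and the class exposing optimUnconst are assumptions):

    // Cost function supplied inline as a lambda instead of a named
    // ILNumerics function. f(x) = sum(x_i^2) + 1 is smooth everywhere.
    Func<ILInArray<double>, ILRetArray<double>> objfunc =
        x => ILMath.sum(x * x) + 1.0;
    ILArray<double> x0 = new double[] { 0.5, 0.5 };
    ILArray<double> xopt = Optimization.optimUnconst(objfunc, x0);   // assumed entry point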
2.2 GETTING STARTED WITH UNCONSTRAINED OPTIMIZATION
The simplest use of the optimUnconst algorithm is as follows:
xopt = optimUnconst(objfunc, x0);
where
• objfunc is the objective function,
• x0 is the initial guess,
• xopt is the optimal point, or the minimizer.
2.2.1 Example
In the following example, we compute the unconstrained minimum of the Rosenbrock function. The function is given by