
An Augmented Lagrangian Neural Network for the Fixed-Time Solution of Linear Programming

Dayanna T. Toro and José M. Lozano
Department of Electrical Engineering, Universidad de Guanajuato
Carretera Salamanca - Valle de Santiago, km 3.5 + 1.8, Comunidad de Palo Blanco, P.C. 36885, Salamanca, México
Email: [email protected], [email protected]

Juan Diego Sánchez-Torres
Research Laboratory on Optimal Design, Devices and Advanced Materials -OPTIMA-, ITESO University
Tlaquepaque, México
Email: [email protected]

Abstract—In this paper, a recurrent neural network is proposed using the augmented Lagrangian method for solving linear programming problems. The design of this neural network is based on the Karush-Kuhn-Tucker (KKT) optimality conditions and on a function that guarantees fixed-time convergence. With this aim, the use of slack variables allows transforming the initial linear programming problem into an equivalent one which only contains equality constraints. Subsequently, the activation functions of the neural network are designed as fixed-time controllers to meet the KKT optimality conditions. Simulation results in an academic example and an application example show the effectiveness of the neural network.

I. INTRODUCTION

Linear programming is an important field of optimization, since many applications can be formulated and solved through a linear representation, which has generated major research within this area. Over the years, a variety of numerical algorithms have been developed for many applications, with Pyne being one of the first to introduce the use of dynamic systems to solve optimization problems. The main advantage of this technique is that the system constantly seeks new solutions as the parameters of the problem are varied [1].

Since then, many extensions of these systems have been developed, among which recurrent neural networks have received substantial attention. In 1986, Tank and Hopfield proposed the first recurrent neural network for solving linear programming problems [2], which inspired the development of numerous neural networks in the area of optimization. In 1988, Kennedy and Chua [3] proposed a neural network with a finite penalty parameter for nonlinear programming, with the disadvantage that the penalty parameter had to grow infinitely to achieve convergence. To avoid using the penalty parameter, other methods such as Lagrange multipliers were introduced [4].

In this sense, Wang [5] proposed a recurrent neural network with a time-varying threshold vector to solve linear programming problems, with the disadvantage that the proposed network is only asymptotically stable. Similarly, it was shown [6] that the use of differential algebra, in conjunction with the KKT conditions, yields faster convergence than other methods used in linear programming; other investigators [7]–[9] also use recurrent neural networks to study the convergence of a linear problem, obtaining the optimal solution of a system in finite time.

In the same way, investigations have also been carried out on nonlinear systems with discontinuous activation functions [10], [11]. In later studies, a neural network was proposed that solves problems where the objective function may or may not be continuously differentiable [12], and a neural network capable of solving non-convex problems [13] was successfully proposed, which expanded the universe of systems that can be solved. Other studies have used the augmented Lagrangian method for solving optimization problems, achieving better numerical stability [14]. For this reason, the aim of this paper is to develop a recurrent neural network based on the augmented Lagrangian and slack variables to solve linear programming problems, illustrating its operation in a classic optimization problem.

The remainder of this paper is organized as follows: in Section II, the preliminaries related to the development of the neural network are presented. Section III describes the model of the recurrent neural network and its performance on an academic example. In Section IV, an application is presented where the energy delivered by the different generation units of a microgrid is maximized. Finally, conclusions are given in Section V.
II. MATHEMATICAL PRELIMINARIES

In this section, the concepts needed for the development of the network structure are presented.

A. Slack Variables

Proposition 1. If a and b are two real numbers, then a ≤ b if and only if a + y² = b for some number y [15].

A classical optimization problem usually has both equality and inequality constraints that, when combined, often interact in complex ways; slack variables can be used to transform the original problem into one that only has equality restrictions. Now, consider the general programming problem

\[
\begin{aligned}
\min\ & c^{T}x\\
\text{s.t.}\ & Ax = b\\
& l \le x \le h
\end{aligned}
\tag{1}
\]

where x ∈ ℝⁿ is the vector of decision variables, c, l, h ∈ ℝⁿ, b ∈ ℝᵐ, and A ∈ ℝ^{m×n} is a full row-rank matrix (rank(A) = m, m ≤ n).

The general problem (1) can be transformed into a problem with only equality constraints by well-known techniques such as adding slack variables. To do so, the box constraint is first split into independent inequalities, as follows

\[
\begin{aligned}
\min\ & c^{T}x\\
\text{s.t.}\ & Ax - b = 0\\
& -x \le -l\\
& x \le h
\end{aligned}
\tag{2}
\]

Note that problem (2) is equivalent to problem (1); once this form is obtained, the slack variables are introduced in order to eliminate the inequalities

\[
\begin{aligned}
\min\ & c^{T}x\\
\text{s.t.}\ & Ax - b = 0\\
& -x + s_{1} + l = 0\\
& x + s_{2} - h = 0
\end{aligned}
\]

where s₁ = [y₁², …, y_n²]ᵀ and s₂ = [y_{n+1}², …, y_{2n}²]ᵀ are the vectors that contain the slack variables.

When a problem has only equality constraints, it can be written in vectorial form as shown below

\[
\begin{aligned}
\min\ & c^{T}x\\
\text{s.t.}\ & Mz - d = 0
\end{aligned}
\tag{3}
\]

where

\[
M = \begin{bmatrix} A & O_{m\times n} & O_{m\times n}\\ -I_{n\times n} & I_{n\times n} & O_{n\times n}\\ I_{n\times n} & O_{n\times n} & I_{n\times n} \end{bmatrix},
\qquad
d = \begin{bmatrix} b\\ -l\\ h \end{bmatrix}
\tag{4}
\]

\[
z = \begin{bmatrix} x_{1}, \ldots, x_{n}, y_{1}^{2}, \ldots, y_{2n}^{2} \end{bmatrix}^{T}
\tag{5}
\]

A is the matrix containing the equality constraints, O_{m×n} is a matrix of zeros of dimension m × n, O_{n×n} is another matrix of zeros of dimension n × n, I_{n×n} is an identity matrix of dimension n × n, and z is the vector containing both the decision variables (x) and the slack variables (yᵢ²), which represent the additional value needed to satisfy each equality; each slack variable is squared to ensure that it is nonnegative.

B. Augmented Lagrangian

The augmented Lagrangian method is a technique used to solve constrained optimization problems by transforming them into unconstrained equivalent ones. This method can be considered a hybrid between the method of Lagrange multipliers and the penalty method, since it consists of adding an additional term to the Lagrangian.

Given σ = Mz − d and starting from problem (3), the augmented Lagrangian is defined as

\[
L_{\rho}(x, z, \lambda) = c^{T}x + \lambda^{T}\sigma + \int_{0}^{\sigma} \psi(s)\, ds
\tag{6}
\]

where λ = [λ₁, …, λ_k]ᵀ represents the vector of Lagrange multipliers and the integral term is the penalty function. The most commonly used penalty term is (ρ/2)‖σ‖², but in this work a different function is used to illustrate its behavior.
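As a concrete illustration of the reformulation (1)–(5), the following Python sketch (not part of the original paper) builds M and d as in (4) and the lifted variable z of (5) for the data of the academic example in Section III-B, and then checks that a point satisfying the constraints of (1) also satisfies Mz − d = 0. The helper names build_equality_form and lift are chosen here for illustration only.

```python
# Minimal sketch (not from the paper) of the reformulation (1) -> (3).
import numpy as np

def build_equality_form(A, b, l, h):
    """Return (M, d) of the equality-constrained form Mz = d, as in (4)."""
    m, n = A.shape
    I, O = np.eye(n), np.zeros((n, n))
    M = np.block([[A, np.zeros((m, n)), np.zeros((m, n))],
                  [-I, I, O],          # -x + s1 + l = 0
                  [I, O, I]])          #  x + s2 - h = 0
    d = np.concatenate([b, -l, h])
    return M, d

def lift(x, l, h):
    """Return z = [x, y_1^2, ..., y_{2n}^2] of (5) for a point with l <= x <= h."""
    s1 = x - l                         # y_i^2,     i = 1..n
    s2 = h - x                         # y_{n+i}^2, i = 1..n
    return np.concatenate([x, s1, s2])

# Data of the academic example in Section III-B
A = np.array([[1.0, -2.0, 1.0],
              [-1.0, 2.0, 1.0]])
b = np.array([2.0, 1.0])
l = np.full(3, -5.0)
h = np.full(3, 5.0)

M, d = build_equality_form(A, b, l, h)
x_feas = np.array([0.5, 0.0, 1.5])     # satisfies Ax = b and the bounds
z = lift(x_feas, l, h)
print("max |Mz - d| =", np.max(np.abs(M @ z - d)))   # ~0: z is feasible for (3)
```

The slack blocks of z are simply the distances of x to its lower and upper bounds, which makes the squared-slack construction of Proposition 1 explicit.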
III. STRUCTURE OF THE NETWORK

A. Recurrent Neural Network Design

Starting from (6) and according to the KKT conditions [16], (x*, λ*, z*) is an optimal solution if

\[
\nabla_{x} L_{\rho}(x^{*}, z^{*}, \lambda^{*}) = 0,\qquad
\nabla_{y} L_{\rho}(x^{*}, z^{*}, \lambda^{*}) = 0,\qquad
\nabla_{\lambda} L_{\rho}(x^{*}, z^{*}, \lambda^{*}) = 0.
\]

Subsequently, the following conditions must be fulfilled

\[
c + \frac{\partial z}{\partial x}\frac{\partial \sigma}{\partial z}\lambda + \frac{\partial z}{\partial x}\frac{\partial \sigma}{\partial z}\,\psi(\sigma) = 0
\tag{7}
\]
\[
\frac{\partial z}{\partial y}\frac{\partial \sigma}{\partial z}\lambda + \frac{\partial z}{\partial y}\frac{\partial \sigma}{\partial z}\,\psi(\sigma) = 0
\tag{8}
\]
\[
\sigma = 0
\tag{9}
\]

where y = [0, …, 0, y₁, y₂, …, y_{2n}]ᵀ ∈ ℝᵏ and x = [x₁, x₂, …, x_n, 0, …, 0]ᵀ ∈ ℝᵏ; this padding is done so that both vectors have the same dimension as z, and thus ∂z/∂y and ∂z/∂x are square k × k matrices of the form

\[
\frac{\partial z}{\partial y} =
\begin{bmatrix}
\frac{\partial z_{1}}{\partial y_{1}} & \frac{\partial z_{2}}{\partial y_{1}} & \cdots & \frac{\partial z_{k}}{\partial y_{1}}\\
\frac{\partial z_{1}}{\partial y_{2}} & \frac{\partial z_{2}}{\partial y_{2}} & \cdots & \frac{\partial z_{k}}{\partial y_{2}}\\
\vdots & & \ddots & \vdots\\
\frac{\partial z_{1}}{\partial y_{k}} & \frac{\partial z_{2}}{\partial y_{k}} & \cdots & \frac{\partial z_{k}}{\partial y_{k}}
\end{bmatrix},
\qquad
\frac{\partial z}{\partial x} =
\begin{bmatrix}
\frac{\partial z_{1}}{\partial x_{1}} & \frac{\partial z_{2}}{\partial x_{1}} & \cdots & \frac{\partial z_{k}}{\partial x_{1}}\\
\frac{\partial z_{1}}{\partial x_{2}} & \frac{\partial z_{2}}{\partial x_{2}} & \cdots & \frac{\partial z_{k}}{\partial x_{2}}\\
\vdots & & \ddots & \vdots\\
\frac{\partial z_{1}}{\partial x_{k}} & \frac{\partial z_{2}}{\partial x_{k}} & \cdots & \frac{\partial z_{k}}{\partial x_{k}}
\end{bmatrix}.
\]

The KKT conditions (7)–(9) are used to propose the following recurrent neural network in order to solve the linear program given in (3)

\[
\dot{x} = -\gamma\left[c + \frac{\partial z}{\partial x}\frac{\partial \sigma}{\partial z}\,\psi(\sigma) + \frac{\partial z}{\partial x}\frac{\partial \sigma}{\partial z}\lambda\right]
\tag{10}
\]
\[
\dot{y} = -\gamma\left[\frac{\partial z}{\partial y}\frac{\partial \sigma}{\partial z}\,\psi(\sigma) + \frac{\partial z}{\partial y}\frac{\partial \sigma}{\partial z}\lambda\right]
\tag{11}
\]
\[
\dot{\lambda} = \gamma\alpha\sigma
\tag{12}
\]

where γ is a positive scaling constant, α is a nonnegative gain, and ψ(σ) is the derivative of the penalty function.

Given that σ = Mz − d, then σ̇ = M(∂z/∂x)ẋ + M(∂z/∂y)ẏ. Hence, from (10) and (11), it results

\[
\dot{\sigma} = -\gamma\left( M\frac{\partial z}{\partial x}c + M\left[\left(\frac{\partial z}{\partial x}\right)^{2} + \left(\frac{\partial z}{\partial y}\right)^{2}\right]\frac{\partial \sigma}{\partial z}\lambda \right)
- \gamma M\left[\left(\frac{\partial z}{\partial x}\right)^{2} + \left(\frac{\partial z}{\partial y}\right)^{2}\right]\frac{\partial \sigma}{\partial z}\,\psi(\sigma).
\tag{13}
\]

Selecting ψ(σ) in (13) as

\[
\psi(\sigma) = -\left( \gamma M\left[\left(\frac{\partial z}{\partial x}\right)^{2} + \left(\frac{\partial z}{\partial y}\right)^{2}\right]\frac{\partial \sigma}{\partial z} \right)^{-1}
\left[ \gamma\left( M\frac{\partial z}{\partial x}c + M\left[\left(\frac{\partial z}{\partial x}\right)^{2} + \left(\frac{\partial z}{\partial y}\right)^{2}\right]\frac{\partial \sigma}{\partial z}\lambda \right) - \phi(\sigma) \right]
\tag{14}
\]

where φ(σ) is the design function, given later in (20), that guarantees fixed-time convergence of σ to zero.

B. An Academic Example

Consider the linear programming problem [18]

\[
\begin{aligned}
\min\ & 4x_{1} + x_{2} + 2x_{3}\\
\text{s.t.}\ & x_{1} - 2x_{2} + x_{3} = 2\\
& -x_{1} + 2x_{2} + x_{3} = 1\\
& -5 \le x_{1}, x_{2}, x_{3} \le 5
\end{aligned}
\]

The proposed neural network (10)–(12) with the design functions (14) and (20) is tested, with k₁ = 8, k₂ = 3, k₃ = 3, α = 1, ρ = 1, γ = 10⁵, and initial conditions randomly selected within the range −10 to 10. Results are shown in Fig. 1 and Fig. 2.
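Because the fixed-time design function φ(σ) of (20) is not reproduced in this excerpt, the following Python sketch (not part of the original paper) integrates the network (10)–(12) for the academic example using the common quadratic-penalty derivative ψ(σ) = ρσ from Section II-B as a stand-in, and with smaller gains than the paper's γ = 10⁵ so that the ODE remains easy to integrate. With this simplified setup only asymptotic (not fixed-time) convergence toward the LP optimum should be expected; the endpoint is cross-checked against an off-the-shelf LP solver.

```python
# Minimal numerical sketch (not from the paper) of the network (10)-(12).
# Assumptions: psi(sigma) = rho*sigma stands in for the paper's fixed-time
# design, and gamma is reduced; gains and seed are chosen here for illustration.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import linprog

# Academic example data
A = np.array([[1.0, -2.0, 1.0],
              [-1.0, 2.0, 1.0]])
b = np.array([2.0, 1.0])
c = np.array([4.0, 1.0, 2.0])
l = np.full(3, -5.0)
h = np.full(3, 5.0)
n, m = 3, 2

# Equality form (3)-(5): z = [x, y_1^2, ..., y_{2n}^2], sigma = M z - d
I, O = np.eye(n), np.zeros((n, n))
M = np.block([[A, np.zeros((m, n)), np.zeros((m, n))],
              [-I, I, O],
              [I, O, I]])
d = np.concatenate([b, -l, h])

gamma, alpha, rho = 10.0, 1.0, 1.0      # reduced gains (assumption)

def dynamics(t, w):
    x, y, lam = w[:n], w[n:3 * n], w[3 * n:]
    z = np.concatenate([x, y ** 2])
    sigma = M @ z - d
    psi = rho * sigma                    # stand-in penalty derivative
    g = M.T @ (psi + lam)                # (d sigma / d z) applied to psi + lambda
    x_dot = -gamma * (c + g[:n])         # eq. (10)
    y_dot = -gamma * (2.0 * y) * g[n:]   # eq. (11), since dz/dy = diag(2y)
    lam_dot = gamma * alpha * sigma      # eq. (12)
    return np.concatenate([x_dot, y_dot, lam_dot])

rng = np.random.default_rng(0)
w0 = rng.uniform(-10, 10, size=3 * n + (m + 2 * n))   # random initial state
sol = solve_ivp(dynamics, (0.0, 50.0), w0, method="LSODA", rtol=1e-8, atol=1e-8)

x_net = sol.y[:n, -1]
print("network endpoint x :", x_net, " objective:", c @ x_net)

# Cross-check with an off-the-shelf LP solver
ref = linprog(c, A_eq=A, b_eq=b, bounds=list(zip(l, h)), method="highs")
print("reference solution x:", ref.x, " objective:", ref.fun)
```

The accuracy of the network endpoint depends on the chosen gains and integration horizon; the paper's results in Fig. 1 and Fig. 2 correspond to the fixed-time design functions (14) and (20), not to this quadratic-penalty stand-in.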