Procedia Computer Science 103 (2017) 14–19. doi: 10.1016/j.procs.2017.01.002

XIIth International Symposium Intelligent Systems, INTELS'16, 5-7 October 2016, Moscow, Russia

Grammatical evolution for neural network optimization in the control system synthesis problem

D.E. Kazaryan*, A.V. Savinkov

RUDN University, Miklukho-Maklaya str. 6, Moscow 117198, Russia

* Corresponding author. Tel.: +7-495-955-0792. E-mail address: kazaryan [email protected]

Abstract

Grammatical evolution is a promising branch of genetic programming. It uses an evolutionary search engine together with a Backus-Naur form specification of a domain-specific language grammar to find symbolic expressions. This paper describes an application of this method to the control function synthesis problem. A feed-forward neural network is used to approximate the control function, which depends on the object state variables. A two-stage algorithm is presented: grammatical evolution optimizes the neural network structure, and a genetic algorithm tunes the weights. Computational experiments were performed on a simple kinematic model of a two-wheel-drive mobile robot. Training was performed on a set of initial conditions. The results show that the proposed algorithm successfully synthesizes a control function.

© 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the scientific committee of the XIIth International Symposium "Intelligent Systems".

Keywords: grammatical evolution; control system synthesis; artificial neural networks.

1. Introduction

Control synthesis is a complex problem that usually involves a great amount of analytical computation, especially for nonlinear problems, or consists of tedious uniform tasks. For nonlinear problems a common approach is to linearize the plant around operating points and then design a linear controller. This approach can lead to oversimplification of the model. These issues can be avoided if a computer takes the brunt of the solution search. Artificial neural networks (ANNs) are often used in control applications, as they are well developed theoretically and have computationally efficient implementations. There have been successful attempts to use an ANN as a nonlinear controller that takes the plant state variables as inputs and produces a control signal [1,2]. Arguably the most widely used ANN model in control applications is the nonlinear autoregressive model with exogenous inputs (NARX), which uses delayed inputs and outputs:

h(k) = F(y(k-1), y(k-2), \ldots, y(k-n), u(k), u(k-1), \ldots, u(k-m)).   (1)

NARX networks are used both for identification [3,4,5] and control [3]. Usually control engineers select the ANN architecture using their expert knowledge of the problem domain.
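To make (1) concrete, the sketch below shows how such a controller can be evaluated as a plain feed-forward mapping over a window of delayed outputs and controls. It is only an illustration of the idea, not code from the paper: the single tanh hidden layer, the layer sizes and the NumPy-based implementation are our own assumptions.

import numpy as np

def narx_controller(y_hist, u_hist, W1, b1, W2, b2):
    """Evaluate h(k) = F(y(k-1)..y(k-n), u(k)..u(k-m)) with one tanh hidden layer.

    y_hist : the n most recent plant outputs, newest first
    u_hist : the m+1 most recent controls, newest first
    W1, b1 : hidden-layer weights and biases
    W2, b2 : output-layer weights and biases
    """
    z = np.concatenate([y_hist, u_hist])   # regressor vector of (1)
    hidden = np.tanh(W1 @ z + b1)          # single hidden layer
    return W2 @ hidden + b2                # control signal h(k)

# Illustrative sizes: n = 3 delayed outputs, m = 2 delayed controls, 8 hidden neurons.
rng = np.random.default_rng(0)
n, m, hidden_size = 3, 2, 8
W1 = rng.normal(size=(hidden_size, n + m + 1)); b1 = np.zeros(hidden_size)
W2 = rng.normal(size=(1, hidden_size));         b2 = np.zeros(1)
u_k = narx_controller(np.zeros(n), np.zeros(m + 1), W1, b1, W2, b2)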
In this paper we propose to use a symbolic expression search method to find the optimal ANN structure. Symbolic expression search methods arose from the work [6] that introduced genetic programming. Genetic programming was followed by Cartesian genetic programming [7], grammatical evolution (GE) [8], analytic programming [9], the network operator method [10], etc. These methods perform a search for a structure using functions as "building blocks". Symbolic expression search methods have recently been actively used in control synthesis [11,12]. We choose grammatical evolution for the optimal ANN structure search. There are several works in this field [13,14]. In our work we define grammatical rules that modify an existing neural network structure. We consider incorporating expert knowledge into the search process to be important and therefore introduce elements of the basic solution principle [11].

The paper is organized as follows. In Section 2 the control synthesis problem is formally stated. In Section 3 we describe the neural controller model used in the paper. Section 4 introduces the structure of the GE algorithm. In Section 5 we show the neural controller performance on a simple nonlinear mobile robot kinematic model. The conclusion summarizes the paper and discusses possible directions of future work.

2. Control system synthesis problem statement

Consider the following ODE system

\dot{x}(t) = f(x(t), u(t)),   (2)

where x ∈ X is the system state, X ⊆ R^n, u ∈ U is a control function, U ⊂ R^m is closed and bounded, m < n, and u(t) is defined on [t_0; t_f]. Initial conditions for (2) are given as X_0 ⊂ X, and the set of target terminal states is defined as X_f ⊂ X, X_0 ∩ X_f = ∅. Generally X_0 and X_f are continuous, but our assumption is the following: if we synthesize a control function that can move the system (2) from any x^{(0)}_i ∈ D_{X_0} to any x^{(f)}_j ∈ D_{X_f}, where D_{X_0} ⊂ X_0 and D_{X_f} ⊂ X_f are finite and given with a small enough discretization step, then this control function will also be able to move the system (2) from any x^{(0)} ∈ X_0 to any x^{(f)} ∈ X_f:

D_{X_0} = \{ x^{(0)}_0(t_0), x^{(0)}_1(t_0), \ldots, x^{(0)}_{c-1}(t_0) \},   (3)

D_{X_f} = \{ x^{(f)}_0(t_f), x^{(f)}_1(t_f), \ldots, x^{(f)}_{d-1}(t_f) \}.   (4)

The control synthesis goal is to find a function

h(t, x, x^{(0)}_i, x^{(f)}_j) ∈ U   (5)

that moves the system (2) from x^{(0)}_i ∈ D_{X_0} to x^{(f)}_j ∈ D_{X_f} while minimizing, in general in the Pareto sense [15, p. 25], a set of functionals:

J = \{ J_1(h), J_2(h), \ldots, J_r(h) \},   (6)

where

J_k = F_k(h(t, x, x^{(0)}_i, x^{(f)}_j)),   k = \overline{1, r}.   (7)

This general problem statement can always be reduced to a single-functional optimization problem. As h(t, x, x^{(0)}_i, x^{(f)}_j) is nonlinear, we can approximate it with an artificial neural network. The ANN architecture is described in the next section.
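As an illustration of how a candidate control function can be scored over the finite sets (3) and (4), the sketch below integrates the system (2) with explicit Euler steps from every initial state in D_{X_0} and averages the terminal distance to the target state. This is not the authors' procedure: the integration scheme, the time grid and the terminal-error functional are placeholder assumptions chosen for brevity.

import numpy as np

def evaluate_control(h, f, D_x0, x_target, t0=0.0, tf=2.0, dt=0.01):
    """Approximate a functional J(h): mean terminal error over the discretized set D_X0.

    h : candidate control function h(t, x, x0, xf) -> u
    f : plant right-hand side f(x, u) of the system (2)
    """
    total = 0.0
    for x0 in D_x0:
        x = np.array(x0, dtype=float)
        for k in range(int((tf - t0) / dt)):   # explicit Euler integration
            t = t0 + k * dt
            u = h(t, x, x0, x_target)
            x = x + dt * f(x, u)
        total += np.linalg.norm(x - x_target)  # terminal-state error
    return total / len(D_x0)

# Toy usage with a single-integrator plant and a proportional control law.
score = evaluate_control(h=lambda t, x, x0, xf: xf - x,
                         f=lambda x, u: u,
                         D_x0=[np.array([0.0]), np.array([1.0])],
                         x_target=np.array([2.0]))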
Fig. 1. Control system with neural controller

3. Neural controller model

Artificial neural networks are great approximators. It has been proven [16] that even a neural network with a single hidden layer, discriminatory activation functions and a sufficient number of parameters (weights) is able to approximate any nonlinear function with the required accuracy. In this paper we use this ability of ANNs to approximate the desired function (5). The ANN for the control system architecture shown in Fig. 1 can be expressed as a function

\hat{h}(t, x, x^{(0)}_i, x^{(f)}_j, w) = f_n(w_n, \ldots f_3(w_3, f_2(w_2, f_1(w_1, (t, x, x^{(0)}_i, x^{(f)}_j)))) \ldots),   (8)

where w_i, i = \overline{0, n-1}, are subject to parametric optimization, whereas n, p_i = \dim w_i and f_i ∈ F are subject to structural optimization; F is the ordered set of allowed activation functions. In this paper we use fully-connected layers for the ANN.

4. Grammatical evolution for neural controller synthesis

Grammatical evolution is an evolutionary algorithm that uses formal grammar rules given in Backus-Naur form (BNF). BNF is a way of defining a language grammar in the form of production rules. Rules are formed from terminals and nonterminals. Terminals are elements of the language, and nonterminals are expressions that can be replaced, using production rules, by other nonterminals, terminals or their combinations. Using these rules GE builds possible problem solutions in string form. The obtained string is then subject to evaluation. Usually during evaluation it has to be translated or interpreted, so it is preferable to use a programming language that has a built-in eval operator.

For the search process GE uses a search engine, usually a genetic algorithm [8] or a particle swarm optimization algorithm [17]. The search algorithm operates over a population of integer or binary arrays of variable length. During the search these arrays are transformed using operators specific to the chosen algorithm. In our work a genetic algorithm is used, so crossover and mutation operators are applied.

GE requires several sets to be defined: N, the set of nonterminals; T, the set of terminals; S, the set of possible initial symbols (usually the initial symbol is a nonterminal, but it is possible to use a predefined string that contains at least one nonterminal); and P, the set of production rules. GE can be easily adapted to the structural optimization of the neural network (8). Let us define the sets described above:

N = {<expr>, <modification>, <f_num>, <l_num>, <n_num>}
T = {0, 1, ..., 9, add_l, add_n, rmv_l, rmv_n, chng_f, max_l, max_n, max_f}
S = {<expr>}

and P can be represented as

(1) <expr> ::= <expr>,<modification>,<expr>      (0)
             | <modification>                    (1)
(2) <modification> ::= add_l(<l_num>, <n_num>)   (0)
                     | add_n(<l_num>, <n_num>)   (1)
                     | rmv_l(<l_num>)            (2)
                     | rmv_n(<l_num>, <n_num>)   (3)
                     | chng_f(<l_num>, <f_num>)  (4)
(3) <l_num> ::= 0|1|...|max_l-1                  (0)-(max_l-1)
(4) <n_num> ::= 0|1|...|max_n-1                  (0)-(max_n-1)
(5) <f_num> ::= 0|1|...|max_f-1                  (0)-(max_f-1)

where the <modification> options are functions that change the structure of the ANN, <l_num> is a layer position, <n_num> is a number of neurons, and <f_num> is an index of an activation function in F. The constants max_l, max_n and max_f bound the layer index, the number of neurons in a layer and the activation function index, respectively.

Fig. 2. Initial neural network structure for grammatical evolution
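In standard GE the genotype-to-phenotype mapping selects, for each nonterminal, the production whose index equals the current codon modulo the number of available productions. The sketch below reconstructs such a mapping for the grammar above; it is our own simplified illustration rather than the authors' implementation, and the bounds max_l = 4, max_n = 16, max_f = 3 are arbitrary example values.

MAX_L, MAX_N, MAX_F = 4, 16, 3   # illustrative bounds, not the paper's settings

GRAMMAR = {
    "<expr>": [["<expr>", ",", "<modification>", ",", "<expr>"], ["<modification>"]],
    "<modification>": [["add_l(", "<l_num>", ", ", "<n_num>", ")"],
                       ["add_n(", "<l_num>", ", ", "<n_num>", ")"],
                       ["rmv_l(", "<l_num>", ")"],
                       ["rmv_n(", "<l_num>", ", ", "<n_num>", ")"],
                       ["chng_f(", "<l_num>", ", ", "<f_num>", ")"]],
    "<l_num>": [[str(i)] for i in range(MAX_L)],
    "<n_num>": [[str(i)] for i in range(MAX_N)],
    "<f_num>": [[str(i)] for i in range(MAX_F)],
}

def ge_map(chromosome, start="<expr>", max_wraps=2):
    """Map an integer chromosome to a phenotype string using codon % rule-count."""
    symbols, out, idx = [start], [], 0
    codons = list(chromosome) * (max_wraps + 1)   # chromosome wrapping
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:                    # terminal: emit it
            out.append(sym)
            continue
        if idx >= len(codons):                    # ran out of codons: invalid individual
            return None
        rules = GRAMMAR[sym]
        choice = rules[codons[idx] % len(rules)]  # production selection
        idx += 1
        symbols = list(choice) + symbols          # leftmost derivation
    return "".join(out)

print(ge_map([1, 0, 2, 7]))   # e.g. "add_l(2, 7)" for this chromosome

The resulting string of modification calls can then be applied to the initial network (Fig. 2), after which the genetic algorithm tunes the weights of the modified network.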