Evolution Strategies for Constants Optimization in Genetic Programming


César L. Alonso
Centro de Inteligencia Artificial, Universidad de Oviedo
Campus de Viesques, 33271 Gijón
[email protected]

José Luis Montaña, Cruz Enrique Borges
Dpto. de Matemáticas, Estadística y Computación, Universidad de Cantabria
Avda. de los Castros s.n.
[email protected], [email protected]

(The first two authors are supported by Spanish grant TIN2007-67466-C02-02. Cruz Enrique Borges is supported by the FPU program and MTM2004-01167.)

Abstract

Evolutionary computation methods have been used to solve several optimization and learning problems. This paper describes an application of evolutionary computation methods to constants optimization in Genetic Programming. A general evolution strategy technique is proposed for approximating the optimal constants in a computer program representing the solution of a symbolic regression problem. The new algorithm has been compared with a recent linear genetic programming approach based on straight-line programs. The experimental results show that the proposed algorithm improves on that technique.

1 Introduction

In recent years Genetic Programming (GP) has been applied to a range of complex problems in a variety of fields like quantum computing, electronic design, sorting, searching, game playing, etc. Most of these applications can be seen as evolutionary optimization or evolutionary learning. For dealing with these complex tasks, GP evolves a population composed of symbolic expressions built from a set of functionals and a set of terminals (including the variables and the constants). In this paper we want to exploit the following intuitive idea: once the shape of the symbolic expression representing some optimal solution has been found, we try to determine the best values of the constants appearing in that symbolic expression. More specifically, the terminals that represent constants are not fixed numeric values but references to numeric values. Specializing these references to fixed values, a specific symbolic expression is obtained whose corresponding semantic function is a candidate solution for the problem instance. One simple way to exemplify this situation is the following. Assume that we have to guess the equation of a geometric figure. If somebody (for example a GP algorithm) tells us that this figure is a quartic function, it only remains for us to guess the appropriate coefficients. This point of view is not new: it constitutes the underlying idea of many successful methods in Machine Learning that combine a space of hypotheses with least squares methods. Previous work in which the constants of a symbolic expression have been effectively optimized has also dealt with memetic algorithms, in which classical local optimization techniques (gradient descent [9], linear scaling [3] or other methods based on diversity measures [7]) were used.

We have tested the performance of our strategy on symbolic regression problem instances. The problem of symbolic regression consists of finding, in symbolic form, a function that fits a given finite sample set of data points. More formally, we consider an input space X = IR^n and an output space Y = IR. We are given a sample of m pairs z = (x_i, y_i), 1 ≤ i ≤ m. These examples are drawn according to an unknown probability measure ρ on the product space Z = X × Y and they are independent and identically distributed (i.i.d.). The goal is to construct a function f : X → Y which predicts the value y ∈ Y from a given x ∈ X. The criterion for choosing f is a low probability of error, and as usual in this context we estimate this error by the empirical error.
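As an aside not drawn from the paper, the empirical error referred to above is typically the mean squared deviation between a candidate model's predictions and the sampled outputs; the following minimal Python sketch (all names hypothetical) illustrates it on the quartic example:

```python
# Minimal sketch (hypothetical names): empirical error of a candidate model f
# on a sample z = [(x_1, y_1), ..., (x_m, y_m)] of input/output pairs.
def empirical_error(f, sample):
    m = len(sample)
    return sum((f(x) - y) ** 2 for x, y in sample) / m

# Example: a quartic "shape" guessed by GP, with coefficients still to be tuned.
sample = [(x / 10.0, 2.0 * (x / 10.0) ** 4 - 3.0) for x in range(-10, 11)]
print(empirical_error(lambda x: 2.0 * x ** 4 - 3.0, sample))  # 0.0: exact coefficients
print(empirical_error(lambda x: 1.5 * x ** 4 - 3.0, sample))  # > 0: wrong coefficients
```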
In our algorithm for finding the function f, a GP algorithm will try to guess its shape whereas the evolution strategy (ES) will try to find its coefficients.

The paper is organized as follows: in Section 2 we describe the components that constitute the ES for obtaining good values for the constants. Section 3 provides the definition of the structure that will represent the programs and also includes the details of the designed GP algorithm. Section 4 presents some numerical experiments. Finally, Section 5 draws some conclusions and addresses future research directions.

2 The algorithm and its convergence

In this section we describe an evolution strategy (ES) that provides good values for the numeric terminal symbols C = {c_1, ..., c_q} used by a population of symbolic expressions that evolves during a GP process. We assume a population P = {Γ_1, ..., Γ_N} constituted by N symbolic expressions over a set of numeric functionals F and a set of numeric terminals T = V ∪ C. Let [a, b] ⊂ IR be the search space for the constants c_i, 1 ≤ i ≤ q. In this situation, each individual c is represented by a vector of floating point numbers in [a, b]^q.

There are several ways of defining the fitness of a vector of constants c, but all of them involve the current population P of symbolic expressions that evolves in the GP process. Let z = (x_i, y_i) ∈ IR^n × IR, 1 ≤ i ≤ m, be a sample defining a symbolic regression instance. Given a vector of values for the constants c = (c_1, ..., c_q), we define the fitness of c by the following expression:

    F_z^ES(c) = min{ F_z(Γ_i^c) : 1 ≤ i ≤ N }        (1)

where F_z(Γ_i^c) represents the fitness of the symbolic expression Γ_i after substituting the references to the constants in C by the numeric values of c. The expression that computes F_z(Γ_i^c) is defined below by Equation 6.

Observe that when the fitness of c is computed by means of the above expression, the GP fitness values of a whole population of symbolic expressions are also computed. This could require too much computational effort when the sizes of both populations increase. In order to prevent this situation, new fitness functions for c can be introduced in which only a subset of the population P of symbolic expressions is evaluated. Previous work based on cooperative coevolutionary architectures suggests two basic methods for selecting the subset of collaborators (see [11]). The first one, in our case, consists of selecting the best symbolic expression of the current population P, considering the GP fitness obtained from the last evaluation of P. The second one selects two individuals from P, the best one and a random symbolic expression, then evaluates both symbolic expressions with the references to constants specialized to c and assigns the best value as the fitness of c. In general, there are three aspects to consider when selecting the subset of collaborators: the degree of greediness when choosing a collaborator (collaborator selection pressure), the number of collaborators (collaboration pool size) and the method of assigning a fitness value given the multiple partial results from the collaborators (collaboration credit assignment). In this paper we will consider the first method for computing the fitness of c. Then Expression 1 becomes

    F_z^ES(c) = F_z(Γ_0^c)        (2)

where Γ_0 is the best symbolic expression of the population P, obtained by the execution of a previous GP algorithm. The details of this GP algorithm will be described in the next section.

Next we describe the ES for optimizing the constants. As in many other evolutionary algorithms, the ES always maintains a population of constant vectors {c_1, ..., c_M}. The initial vector of constants comes from the GP algorithm evolving symbolic expressions. Different vectors of constants can be generated at random from the uniform distribution over the search space.

Recombination in the ES involves all individuals in the population: if the population size is M, then the recombination has M parents and generates M offspring through linear combination. Mutation is achieved by performing an affine transformation. The fitness of a vector of constants is evaluated by Equation 2. The main steps of the ES are as follows:

1. Recombination: Let A := (a_ij), 1 ≤ i, j ≤ M, be an M × M matrix that satisfies a_ij ≥ 0 for 1 ≤ i, j ≤ M and Σ_{j=1..M} a_ij = 1 for 1 ≤ i ≤ M. Then generate an intermediate population of constant vectors X^I = (c_1^I, ..., c_M^I) from X = (c_1, ..., c_M) through the following recombination:

        (X^I)^t = A X^t,        (3)

   where X^t represents the transposition of vector X. Many different methods for choosing the matrix A can be used; in practice A can be chosen either deterministically or stochastically.

2. Mutation: Generate the next intermediate population X^J from X^I as follows: for each individual c_i^I in population X^I produce an offspring according to

        (c_i^J)^t = B_i (c_i^I)^t + g_i,        (4)

   where B_i is a q × q matrix and g_i is a q-vector. The matrix B_i and the vector g_i can be chosen either deterministically or stochastically.
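The following Python sketch is a purely illustrative rendering of one ES generation built from steps 1 and 2; the random row-stochastic matrix A, the choice B_i = I with additive Gaussian noise as the affine mutation, and the truncation-style survivor selection are assumptions of this sketch, not prescriptions of the paper:

```python
import random

def es_generation(population, fitness, a=-1.0, b=1.0, sigma=0.1):
    """One illustrative ES generation over a list of constant vectors.

    population : list of M lists, each of length q (the constant vectors c_i)
    fitness    : callable mapping a constant vector to a number (lower is better),
                 e.g. an implementation of F_z^ES(c) = F_z(Gamma_0^c) from Eq. (2)
    """
    M, q = len(population), len(population[0])

    # 1. Recombination: build a random row-stochastic M x M matrix A
    #    (non-negative rows summing to 1) and set X^I = A X.
    A = []
    for _ in range(M):
        row = [random.random() for _ in range(M)]
        s = sum(row)
        A.append([w / s for w in row])
    intermediate = [
        [sum(A[i][j] * population[j][k] for j in range(M)) for k in range(q)]
        for i in range(M)
    ]

    # 2. Mutation: affine transformation c^J = B c^I + g; here B = identity and
    #    g is Gaussian noise, clipped back into the search interval [a, b].
    mutated = [
        [min(b, max(a, x + random.gauss(0.0, sigma))) for x in c]
        for c in intermediate
    ]

    # Survivor selection (illustrative only): keep the best M vectors among
    # parents and offspring according to the fitness of Equation (2).
    return sorted(population + mutated, key=fitness)[:M]
```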
Next we show some calculations that justify the proposed recombination and mutation procedures. Suppose that the individuals at time I of the evolution process are X = (c_1, ..., c_M). Let c be an optimal set of constants and let e_j := c_j − c. According to the recombination,

    c_i^I = Σ_{j=1..M} a_ij c_j,    i = 1, ..., M.

Since Σ_{j=1..M} a_ij = 1 and a_ij ≥ 0, for i = 1, ..., M we have

    ||e_i^I|| = ||c_i^I − c|| ≤ Σ_{j=1..M} a_ij ||c_j − c|| ≤ max_{1≤j≤M} ||e_j||.

This means essentially that the recombination procedure does not make population X^I worse than population X.

C = {c_1, ..., c_q} is a finite set of references to constants. The number of instructions l is the length of Γ. Observe that a slp Γ = {I_1, ..., I_l} is identified with the set of variables u_i that are introduced by means of the instructions I_i; thus the slp Γ can be denoted by Γ = {u_1, ..., u_l}.

Let Γ = {u_1, ..., u_l} be a slp over F and T. The symbolic expression represented by Γ is u_l, considered as an expression over the set of terminals T constructed by a sequence of recursive compositions from the set of functions F. Provided that V = {x_1, ..., x_n} ⊂ T is the set of terminal variables and C = {c_1, ..., c_q} ⊂ T is the set of references to the constants, for each specialization c ∈ [a, b]^q ...
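Although the excerpt breaks off here, a short illustrative sketch can connect the straight-line program (slp) representation just described with the fitness of Section 2: the constant references c_1, ..., c_q are specialized to a concrete vector c and the resulting program is evaluated over the sample. The encoding and helper names below are assumptions made only for illustration:

```python
# Hypothetical encoding of a straight-line program Gamma = {u_1, ..., u_l}: each
# instruction is a triple (operator, arg1, arg2), where an argument is either a
# previously computed result ("u", i), an input variable ("x", j), or a
# constant reference ("c", k).  The expression represented by Gamma is u_l.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else 1.0,  # protected division
}

def eval_slp(slp, x, c):
    """Evaluate the slp on input x with the constant references specialized to c."""
    u = []
    def val(arg):
        kind, idx = arg
        return u[idx] if kind == "u" else (x[idx] if kind == "x" else c[idx])
    for op, a, b in slp:
        u.append(OPS[op](val(a), val(b)))
    return u[-1]

def fitness_specialized(slp, sample, c):
    """Empirical error F_z(Gamma^c) of the specialized program (cf. Eqs. (1)-(2))."""
    return sum((eval_slp(slp, x, c) - y) ** 2 for x, y in sample) / len(sample)

# Example: Gamma = {u_1 = x_1 * x_1, u_2 = c_1 * u_1}, i.e. c_1 * x_1^2.
gamma = [("*", ("x", 0), ("x", 0)), ("*", ("c", 0), ("u", 0))]
print(eval_slp(gamma, [3.0], [2.0]))  # 18.0
```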
Recommended publications
  • Metaheuristics
    METAHEURISTICS Kenneth Sörensen University of Antwerp, Belgium Fred Glover University of Colorado and OptTek Systems, Inc., USA 1 Definition A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms (Sörensen and Glover, To appear). Notable examples of metaheuristics include genetic/evolutionary algorithms, tabu search, simulated annealing, and ant colony optimization, although many more exist. A problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in a metaheuristic framework is also referred to as a metaheuristic. The term was coined by Glover (1986) and combines the Greek prefix meta- (metá, beyond in the sense of high-level) with heuristic (from the Greek heuriskein or euriskein, to search). Metaheuristic algorithms, i.e., optimization methods designed according to the strategies laid out in a metaheuristic framework, are — as the name suggests — always heuristic in nature. This fact distinguishes them from exact methods, that do come with a proof that the optimal solution will be found in a finite (although often prohibitively large) amount of time. Metaheuristics are therefore developed specifically to find a solution that is “good enough” in a computing time that is “small enough”. As a result, they are not subject to combinatorial explosion – the phenomenon where the computing time required to find the optimal solution of NP-hard problems increases as an exponential function of the problem size. Metaheuristics have been demonstrated by the scientific community to be a viable, and often superior, alternative to more traditional (exact) methods of mixed-integer optimization such as branch and bound and dynamic programming.
  • Genetic Programming: Theory, Implementation, and the Evolution of Unconstrained Solutions
    Genetic Programming: Theory, Implementation, and the Evolution of Unconstrained Solutions. Alan Robinson. Division III Thesis, Hampshire College, May 2001. Thesis Committee: Lee Spector, Jaime Davila, Mark Feinstein. Contents, Part I: Background. 1 Introduction: 1.1 Background – Automatic Programming; 1.2 This Project; 1.3 Summary of Chapters. 2 Genetic Programming Review: 2.1 What Is Genetic Programming: A Brief Overview; 2.2 Contemporary Genetic Programming: In Depth; 2.3 Prerequisite: A Language Amenable to (Semi) Random Modification; 2.4 Steps Specific to Each Problem (2.4.1 Create Fitness Function; 2.4.2 Choose Run Parameters; 2.4.3 Select Function/Terminals); 2.5 The Genetic Programming Algorithm in Action (2.5.1 Generate Random Population)
  • A Hybrid LSTM-Based Genetic Programming Approach for Short-Term Prediction of Global Solar Radiation Using Weather Data
    processes Article A Hybrid LSTM-Based Genetic Programming Approach for Short-Term Prediction of Global Solar Radiation Using Weather Data Rami Al-Hajj 1,*, Ali Assi 2, Mohamad Fouad 3 and Emad Mabrouk 1 1 College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait; [email protected] 2 Independent Researcher, Senior IEEE Member, Montreal, QC H1X1M4, Canada; [email protected] 3 Department of Computer Engineering, University of Mansoura, Mansoura 35516, Egypt; [email protected] * Correspondence: [email protected] or [email protected] Abstract: The integration of solar energy in smart grids and other utilities is continuously increasing due to its economic and environmental benefits. However, the uncertainty of available solar energy creates challenges regarding the stability of the generated power and the supply-demand balance's consistency. An accurate global solar radiation (GSR) prediction model can ensure overall system reliability and power generation scheduling. This article describes a nonlinear hybrid model based on Long Short-Term Memory (LSTM) models and the Genetic Programming technique for short-term prediction of global solar radiation. The LSTMs are Recurrent Neural Network (RNN) models that are successfully used to predict time-series data. We use these models as base predictors of GSR using weather and solar radiation (SR) data. Genetic programming (GP) is an evolutionary heuristic computing technique that enables automatic search for complex solution formulas. We use the GP in a post-processing stage to combine the LSTM models' outputs to find the best prediction of the ...
  • Long Term Memory Assistance for Evolutionary Algorithms
    mathematics Article Long Term Memory Assistance for Evolutionary Algorithms Matej Črepinšek 1,*, Shih-Hsi Liu 2, Marjan Mernik 1 and Miha Ravber 1 1 Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia; [email protected] (M.M.); [email protected] (M.R.) 2 Department of Computer Science, California State University Fresno, Fresno, CA 93740, USA; [email protected] * Correspondence: [email protected] Received: 7 September 2019; Accepted: 12 November 2019; Published: 18 November 2019 Abstract: Short term memory that records the current population has been an inherent component of Evolutionary Algorithms (EAs). As hardware technologies advance, currently inexpensive memory with massive capacities could become a performance boost to EAs. This paper introduces a Long Term Memory Assistance (LTMA) that records the entire search history of an evolutionary process. With LTMA, individuals already visited (i.e., duplicate solutions) do not need to be re-evaluated, and thus, resources originally designated to fitness evaluations could be reallocated to continue search space exploration or exploitation. Three sets of experiments were conducted to prove the superiority of LTMA. In the first experiment, it was shown that LTMA recorded at least 50% more duplicate individuals than a short term memory. In the second experiment, ABC and jDElscop were applied to the CEC-2015 benchmark functions. By avoiding fitness re-evaluation, LTMA improved the execution time of the most time-consuming problems, F03 and F05, by between 7% and 28% and between 7% and 16%, respectively. In the third experiment, a hard real-world problem for determining soil models' parameters, LTMA improved execution time between 26% and 69%.
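The core LTMA idea, skipping fitness re-evaluation for individuals that have already been visited, can be illustrated with a simple hash-based cache; this is only a sketch of the principle, not the authors' implementation, and all names are assumptions:

```python
# Illustrative sketch of long-term-memory fitness caching: genotypes already
# evaluated are looked up instead of re-evaluated, so the saved evaluations can
# be spent on further exploration or exploitation.
class CachedFitness:
    def __init__(self, raw_fitness):
        self.raw_fitness = raw_fitness
        self.memory = {}          # genotype -> fitness value
        self.evaluations = 0      # true (expensive) evaluations performed

    def __call__(self, genotype):
        key = tuple(genotype)     # genotype must be hashable to serve as a key
        if key not in self.memory:
            self.memory[key] = self.raw_fitness(genotype)
            self.evaluations += 1
        return self.memory[key]

# Usage: f = CachedFitness(expensive_fitness); f([1, 2, 3]) evaluates once,
# and later duplicates of [1, 2, 3] are served from memory.
```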
  • A Genetic Programming-Based Low-Level Instructions Robot for Realtimebattle
    entropy Article A Genetic Programming-Based Low-Level Instructions Robot for Realtimebattle Juan Romero 1,2,* , Antonino Santos 3 , Adrian Carballal 1,3 , Nereida Rodriguez-Fernandez 1,2 , Iria Santos 1,2 , Alvaro Torrente-Patiño 3 , Juan Tuñas 3 and Penousal Machado 4 1 CITIC-Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain; [email protected] (A.C.); [email protected] (N.R.-F.); [email protected] (I.S.) 2 Department of Computer Science and Information Technologies, Faculty of Communication Science, University of A Coruña, Campus Elviña s/n, 15071 A Coruña, Spain 3 Department of Computer Science and Information Technologies, Faculty of Computer Science, University of A Coruña, Campus Elviña s/n, 15071 A Coruña, Spain; [email protected] (A.S.); [email protected] (A.T.-P.); [email protected] (J.T.) 4 Centre for Informatics and Systems of the University of Coimbra (CISUC), DEI, University of Coimbra, 3030-790 Coimbra, Portugal; [email protected] * Correspondence: [email protected] Received: 26 November 2020; Accepted: 30 November 2020; Published: 30 November 2020 Abstract: RealTimeBattle is an environment in which robots controlled by programs fight each other. Programs control the simulated robots using low-level messages (e.g., turn radar, accelerate). Unlike other tools like Robocode, each of these robots can be developed using different programming languages. Our purpose is to generate, without human programming or other intervention, a robot that is highly competitive in RealTimeBattle. To that end, we implemented an Evolutionary Computation technique: Genetic Programming.
  • Computational Creativity: Three Generations of Research and Beyond
    Computational Creativity: Three Generations of Research and Beyond Debasis Mitra Department of Computer Science Florida Institute of Technology [email protected] Abstract: In this article we have classified computational creativity research activities into three generations. Although the respective system developers were not necessarily targeting their research for computational creativity, we consider their works as contributions to this emerging field. Possibly, the first recognition of the implication of intelligent systems toward creativity came with an AAAI Spring Symposium on AI and Creativity (Dartnall and Kim, 1993). We have here tried to chart the progress of the field by describing some sample projects. Our hope is that this article will provide some direction to interested researchers and help create a vision for the community. 1. Introduction One of the meanings of the word "create" is "to produce by imaginative skill" and that of the word "creativity" is "the ability to create," according to the Webster Dictionary. 2. Philosophical Angles Philosophers try to understand creativity from the historical perspectives – how different acts of creativity (primarily in science) might have happened. Historical investigation of the process involved in scientific discovery relied heavily on philosophical viewpoints. Within philosophy there is an ongoing old debate regarding whether the process of scientific discovery has a normative basis. Within the computing community this question transpires in asking if analyzing and computationally emulating creativity is feasible or not. In order to answer this question artificial intelligence (AI) researchers have tried to develop computing systems to mimic scientific discovery processes (e.g., BACON, KEKADA, etc. that we will discuss), almost since the beginning of the inception of ...
  • Geometric Semantic Genetic Programming Algorithm and Slump Prediction
    Geometric Semantic Genetic Programming Algorithm and Slump Prediction Juncai Xu 1, Zhenzhong Shen 1, Qingwen Ren 1, Xin Xie 2, and Zhengyu Yang 2 1 College of Water Conservancy and Hydropower Engineering, Hohai University, Nanjing 210098, China 2 Department of Electrical and Engineering, Northeastern University, Boston, MA 02115, USA ABSTRACT Research on the performance of recycled concrete as building material in the current world is an important subject. Given the complex composition of recycled concrete, conventional methods for forecasting slump scarcely obtain satisfactory results. Based on theory of nonlinear prediction method, we propose a recycled concrete slump prediction model based on geometric semantic genetic programming (GSGP) and combined it with recycled concrete features. Tests show that the model can accurately predict the recycled concrete slump by using the established prediction model to calculate the recycled concrete slump with different mixing ratios in practical projects and by comparing the predicted values with the experimental values. By comparing the model with several other nonlinear prediction models, we can conclude that GSGP has higher accuracy and reliability than conventional methods. Keywords: recycled concrete; geometric semantics; genetic programming; slump 1. Introduction The rapid development of the construction industry has resulted in a huge demand for concrete, which, in turn, caused overexploitation of natural sand and gravel as well as serious damage to the ecological environment. Such demand produces a large amount of waste concrete in construction, entailing high costs for dealing with these wastes [1–3]. In recent years, various properties of recycled concrete were validated by researchers from all over the world to protect the environment and reduce processing costs.
  • A Genetic Programming Strategy to Induce Logical Rules for Clinical Data Analysis
    processes Article A Genetic Programming Strategy to Induce Logical Rules for Clinical Data Analysis José A. Castellanos-Garzón 1,2,*, Yeray Mezquita Martín 1, José Luis Jaimes Sánchez 3, Santiago Manuel López García 3 and Ernesto Costa 2 1 Department of Computer Science and Automatic, Faculty of Sciences, BISITE Research Group, University of Salamanca, Plaza de los Caídos, s/n, 37008 Salamanca, Spain; [email protected] 2 CISUC, Department of Computer Engineering, ECOS Research Group, University of Coimbra, Pólo II - Pinhal de Marrocos, 3030-290 Coimbra, Portugal; [email protected] 3 Instituto Universitario de Estudios de la Ciencia y la Tecnología, University of Salamanca, 37008 Salamanca, Spain; [email protected] (J.L.J.S.); [email protected] (S.M.L.G.) * Correspondence: [email protected] Received: 31 October 2020; Accepted: 26 November 2020; Published: 27 November 2020 Abstract: This paper proposes a machine learning approach dealing with genetic programming to build classifiers through logical rule induction. In this context, we define and test a set of mutation operators across different clinical datasets to improve the performance of the proposal for each dataset. The use of genetic programming for rule induction has generated interesting results in machine learning problems. Hence, genetic programming represents a flexible and powerful evolutionary technique for automatic generation of classifiers. Since logical rules disclose knowledge from the analyzed data, we use such knowledge to interpret the results and filter the most important features from clinical data as a process of knowledge discovery. The ultimate goal of this proposal is to provide the experts in the data domain with prior knowledge (as a guide) about the structure of the data and the rules found for each class, especially to track dichotomies and inequality.
  • Evolving Evolutionary Algorithms Using Linear Genetic Programming
    Evolving Evolutionary Algorithms Using Linear Genetic Programming Mihai Oltean [email protected] Department of Computer Science, Babeş-Bolyai University, Kogalniceanu 1, Cluj-Napoca 3400, Romania Abstract A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly and sometimes even better than standard approaches for several well-known benchmarking problems. Keywords Genetic algorithms, genetic programming, linear genetic programming, evolving evolutionary algorithms 1 Introduction Evolutionary Algorithms (EAs) (Goldberg, 1989; Holland, 1975) are new and powerful tools used for solving difficult real-world problems. They have been developed in order to solve some real-world problems that the classical (mathematical) methods failed to successfully tackle. Many of these unsolved problems are (or could be turned into) optimization problems. The solving of an optimization problem means finding solutions that maximize or minimize a criteria function (Goldberg, 1989; Holland, 1975; Yao et al., 1999). Many Evolutionary Algorithms have been proposed for dealing with optimization problems. Many solution representations ...
  • Genetic Programming for Julia: Fast Performance and Parallel Island Model Implementation
    Genetic Programming for Julia: fast performance and parallel island model implementation Morgan R. Frank November 30, 2015 Abstract I introduce a Julia implementation for genetic programming (GP), which is an evolutionary algorithm that evolves models as syntax trees. While some abstract high-level genetic algorithm packages, such as GeneticAlgorithms.jl, already exist for Julia, this package is not optimized for genetic programming, and I provide a relatively fast implementation here by utilizing the low-level Expr Julia type. The resulting GP implementation has a simple programmatic interface that provides ample access to the parameters controlling the evolution. Finally, I provide the option for the GP to run in parallel using the highly scalable "island model" for genetic algorithms, which has been shown to improve search results in a variety of genetic algorithms by maintaining solution diversity and explorative dynamics across the global population of solutions. 1 Introduction Evolving human beings and other complex organisms from single-cell bacteria is indeed a daunting task, yet biological evolution has been able to provide a variety of solutions to the problem. Evolutionary algorithms [1–3] refers to a field of algorithms that solve problems by mimicking the structure and dynamics of genetic evolution. [...] be inflexible when attempting to handle diverse problems. Julia's easy vectorization, access to low-level data types, and user-friendly parallel implementation make it an ideal language for developing a GP for symbolic regression. Finally, I will discuss a powerful and widely used method for utilizing multiple computing cores to improve solution discovery using evolutionary algorithms. Since ...
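Independently of Julia or of this particular package, the island model mentioned above can be summarized by a schematic loop in which subpopulations evolve in isolation and periodically exchange their best individuals; the Python sketch below is an assumption-laden illustration (ring topology, random replacement, minimizing fitness), not the package's implementation:

```python
import random

def island_model(islands, evolve_one_generation, fitness,
                 generations=100, migration_interval=10, migrants=1):
    """Illustrative island-model loop: each island evolves independently and
    periodically sends copies of its best individuals to the next island.
    `fitness` is assumed to be minimized (lower is better)."""
    for gen in range(1, generations + 1):
        islands = [evolve_one_generation(pop) for pop in islands]
        if gen % migration_interval == 0:
            for i, pop in enumerate(islands):
                best = sorted(pop, key=fitness)[:migrants]
                target = islands[(i + 1) % len(islands)]   # ring topology
                # replace random individuals on the target island with migrants
                for b in best:
                    target[random.randrange(len(target))] = b
    return islands
```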
  • Basic Concepts of Linear Genetic Programming
    Chapter 2 BASIC CONCEPTS OF LINEAR GENETIC PROGRAMMING In this chapter linear genetic programming (LGP) will be explored in further detail. The basis of the specific linear GP variant we want to investigate in this book will be described, in particular the programming language used for evolution, the representation of individuals, and the specific evolutionary algorithm employed. This will form the core of our LGP system, while fundamental concepts of linear GP will also be discussed, including various forms of program execution. Linear GP operates with imperative programs. All discussions and experiments in this book are conducted independently from a special type of programming language or processor architecture. Even though genetic programs are interpreted and partly noted in the high-level language C, the applied programming concepts exist principally in or may be translated into most modern imperative programming languages, down to the level of machine languages. 2.1 Representation of Programs The imperative programming concept is closely related to the underlying machine language, in contrast to the functional programming paradigm. All modern CPUs are based on the principle of the von Neumann architecture, a computing machine composed of a set of registers and basic instructions that operate and manipulate their content. A program of such a register machine, accordingly, denotes a sequence of instructions whose order has to be respected during execution.

    void gp(r)
    double r[8];
    {
      ...
      r[0] = r[5] + 71;
      // r[7] = r[0] - 59;
      if (r[1] > 0)
        if (r[5] > 2)
          r[4] = r[2] * r[1];
      // r[2] = r[5] + r[4];
      r[6] = r[4] * 13;
      r[1] = r[3] / 2;
      // if (r[0] > r[1])
      //   r[3] = r[5] * r[5];
      r[7] = r[6] - 2;
      // r[5] = r[7] + 15;
      if (r[1] <= r[6])
        r[0] = sin(r[7]);
    }

    Example 2.1.
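To complement Example 2.1, a minimal interpreter for register-machine programs of this kind might look as follows; the instruction encoding is an assumption made for illustration (branching instructions are omitted for brevity), not the book's LGP system:

```python
import math

# Illustrative interpreter for a linear genetic program: each instruction writes
# one destination register from an operation on registers or constants.
# Assumed encoding: (dest, op, src1, src2), where a source is either
# ("r", i) for register r[i] or ("k", value) for a constant.
def run_linear_program(program, registers):
    r = list(registers)
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "sin": lambda a, _b: math.sin(a)}
    fetch = lambda s: r[s[1]] if s[0] == "r" else s[1]
    for dest, op, s1, s2 in program:
        r[dest] = ops[op](fetch(s1), fetch(s2))
    return r[0]   # by convention here, r[0] holds the program output

# Two instructions from Example 2.1, re-encoded: r[0] = r[5] + 71; r[6] = r[4] * 13
prog = [(0, "+", ("r", 5), ("k", 71)), (6, "*", ("r", 4), ("k", 13))]
print(run_linear_program(prog, [0.0] * 8))   # 71.0
```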
  • Meta-Genetic Programming: Co-Evolving the Operators of Variation
    Meta-Genetic Programming: Co-evolving the Operators of Variation Bruce Edmonds, Centre for Policy Modelling, Manchester Metropolitan University, Aytoun Building, Aytoun Street, Manchester, M1 3GH, UK. Tel: +44 161 247 6479 Fax: +44 161 247 6802 E-mail: [email protected] URL: http://www.fmb.mmu.ac.uk/~bruce Abstract The standard Genetic Programming approach is augmented by co-evolving the genetic operators. To do this the operators are coded as trees of indefinite length. In order for this technique to work, the language that the operators are defined in must be such that it preserves the variation in the base population. This technique can be varied by adding further populations of operators and changing which populations act as operators for others, including itself, thus providing a framework for a whole set of augmented GP techniques. The technique is tested on the parity problem. The pros and cons of the technique are discussed. Keywords genetic programming, automatic programming, genetic operators, co-evolution 1. Introduction Evolutionary algorithms are relatively robust over many problem domains. It is for this reason that they are often chosen for use where there is little domain knowledge. However for particular problem domains their performance can often be improved by tuning the parameters of the algorithm. The most studied example of this is the ideal mix of crossover and mutation in genetic algorithms. One possible compromise in this regard is to let such parameters be adjusted (or even evolve) along with the population of solutions so that the general performance can be improved without losing the generality of the algorithm.