Review on Adaptive Genetic Algorithm and Metaheuristic Methods Within Stochastic Optimisation


Int. J. Curr. Microbiol. App. Sci (2018) Special Issue-6: 431-441
International Journal of Current Microbiology and Applied Sciences
ISSN: 2319-7706, Special Issue-6, pp. 431-441
Journal homepage: http://www.ijcmas.com

Review Article

Review on Adaptive Genetic Algorithm and Metaheuristic Methods within Stochastic Optimisation

Manjusha P. Dhoke, Jyoti S. Kale and Pavan K. Dhoke
Vasantrao Naik Marathwada Krishi Vidyapeeth, Parbhani, Maharashtra, India
*Corresponding author

Keywords: Adaptive genetic algorithm, Stochastic optimisation

ABSTRACT

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection. In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions.

Introduction

In a genetic algorithm, each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.[2] The evolution usually starts from a population of randomly generated individuals and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.

A typical genetic algorithm requires:

A genetic representation of the solution domain,

A fitness function to evaluate the solution domain.

A standard representation of each candidate solution is as an array of bits.[2] Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.

The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
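To make the knapsack fitness function just described concrete, here is a minimal sketch in Python; the weights, values and capacity are made-up illustration data, not figures from the paper.

```python
# Minimal sketch of the knapsack fitness function described above.
# The weights, values and capacity are hypothetical example data.

weights = [12, 7, 11, 8, 9]     # size of each object
values = [24, 13, 23, 15, 16]   # value of each object
capacity = 26                   # fixed capacity of the knapsack

def knapsack_fitness(chromosome):
    """Chromosome is a list of 0/1 bits, one bit per object."""
    total_weight = sum(w for w, bit in zip(weights, chromosome) if bit)
    total_value = sum(v for v, bit in zip(values, chromosome) if bit)
    # Invalid (overweight) representations receive fitness 0, as in the text.
    return total_value if total_weight <= capacity else 0

print(knapsack_fitness([1, 0, 1, 0, 0]))  # valid selection -> 47
print(knapsack_fitness([1, 1, 1, 0, 0]))  # overweight -> 0
```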
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.

Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.

Initialization

The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

Selection

During each successive generation, a portion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.

Genetic operators

The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination) and mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods that are based on the use of two parents are more "biology inspired", some research[3][4] suggests that more than two "parents" generate higher quality chromosomes.

These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
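The selection, crossover and mutation steps just described can be sketched as follows. This is an illustrative implementation, not the paper's own: it assumes bit-string chromosomes, fitness-proportionate (roulette-wheel) selection, single-point crossover and bit-flip mutation, with arbitrary default rates.

```python
import random

def roulette_select(population, fitnesses):
    """Fitness-proportionate selection of one parent (one common fitness-based scheme)."""
    total = sum(fitnesses)
    if total == 0:                      # degenerate case: every fitness is zero
        return random.choice(population)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]

def crossover(parent_a, parent_b):
    """Single-point crossover of two bit-string parents, producing one child."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.01):
    """Bit-flip mutation applied independently to each gene."""
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

def next_generation(population, fitness_fn, crossover_rate=0.9, mutation_rate=0.01):
    """Breed a new population of the same size from the current one."""
    fitnesses = [fitness_fn(ind) for ind in population]
    children = []
    while len(children) < len(population):
        mother = roulette_select(population, fitnesses)
        father = roulette_select(population, fitnesses)
        child = crossover(mother, father) if random.random() < crossover_rate else mother[:]
        children.append(mutate(child, mutation_rate))
    return children
```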
Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.[5]

It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed.

Heuristics

In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.[6][7]

Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:

A solution is found that satisfies minimum criteria

Fixed number of generations reached

Allocated budget (computation time/money) reached

The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results

Manual inspection

Combinations of the above

The building block hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:

A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness.

A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

Goldberg describes the heuristic as follows: "Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks] ..."

There is a reasonable amount of work that attempts to understand the limitations of the building block hypothesis from the perspective of estimation of distribution algorithms.[11][12][13]

Limitations

There are limitations of the use of a genetic algorithm compared to alternative optimization algorithms:

Repeated fitness function evaluation for complex problems is often the most
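Pulling the sketches above together, a minimal generational loop with the common termination conditions listed under Termination might look like the following; the population size, generation budget and plateau length are illustrative choices, not values prescribed by the paper.

```python
import random

def run_ga(fitness_fn, chromosome_length, population_size=100,
           max_generations=200, target_fitness=None, plateau_generations=25):
    """Minimal generational GA loop implementing three of the termination
    conditions listed above: generation budget, satisfactory fitness, plateau."""
    population = [[random.randint(0, 1) for _ in range(chromosome_length)]
                  for _ in range(population_size)]
    best, best_fitness, stagnant = None, float("-inf"), 0

    for generation in range(max_generations):            # fixed number of generations
        fitnesses = [fitness_fn(ind) for ind in population]
        generation_best = max(fitnesses)
        if generation_best > best_fitness:
            best_fitness = generation_best
            best = population[fitnesses.index(generation_best)]
            stagnant = 0
        else:
            stagnant += 1                                 # highest fitness has not improved
        if target_fitness is not None and best_fitness >= target_fitness:
            break                                         # satisfactory fitness level reached
        if stagnant >= plateau_generations:
            break                                         # successive iterations no longer improve
        population = next_generation(population, fitness_fn)  # defined in the earlier sketch
    return best, best_fitness

# Example: solve the small knapsack instance from the earlier sketch.
# best, value = run_ga(knapsack_fitness, chromosome_length=5, population_size=30)
```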
Recommended publications
  • Metaheuristics
    METAHEURISTICS Kenneth Sörensen, University of Antwerp, Belgium; Fred Glover, University of Colorado and OptTek Systems, Inc., USA. 1 Definition A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms (Sörensen and Glover, To appear). Notable examples of metaheuristics include genetic/evolutionary algorithms, tabu search, simulated annealing, and ant colony optimization, although many more exist. A problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in a metaheuristic framework is also referred to as a metaheuristic. The term was coined by Glover (1986) and combines the Greek prefix meta- (metá, beyond in the sense of high-level) with heuristic (from the Greek heuriskein or euriskein, to search). Metaheuristic algorithms, i.e., optimization methods designed according to the strategies laid out in a metaheuristic framework, are — as the name suggests — always heuristic in nature. This fact distinguishes them from exact methods, which do come with a proof that the optimal solution will be found in a finite (although often prohibitively large) amount of time. Metaheuristics are therefore developed specifically to find a solution that is “good enough” in a computing time that is “small enough”. As a result, they are not subject to combinatorial explosion – the phenomenon where the computing time required to find the optimal solution of NP-hard problems increases as an exponential function of the problem size. Metaheuristics have been demonstrated by the scientific community to be a viable, and often superior, alternative to more traditional (exact) methods of mixed-integer optimization such as branch and bound and dynamic programming.
  • Removing the Genetics from the Standard Genetic Algorithm
    Removing the Genetics from the Standard Genetic Algorithm. Shumeet Baluja ([email protected]) and Rich Caruana ([email protected]), School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213.

    Abstract: We present an abstraction of the genetic algorithm (GA), termed population-based incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population, but which abstracts away the crossover operator and redefines the role of the population. This results in PBIL being simpler, both computationally and theoretically, than the GA. Empirical results reported elsewhere show that PBIL is faster and more effective than the GA on a large set of commonly used benchmark problems. Here we present results on a problem custom designed to benefit both from the GA's crossover operator and from its use of a population.

    ... from both parents into their respective positions in a member of the subsequent generation. Due to random factors involved in producing "children" chromosomes, the children may, or may not, have higher fitness values than their parents. Nevertheless, because of the selective pressure applied through a number of generations, the overall trend is towards higher fitness chromosomes. Mutations are used to help preserve diversity in the population. Mutations introduce random changes into the chromosomes. A good overview of GAs can be found in [Goldberg, 1989] [De Jong, 1975]. Although there has recently been some controversy in the GA community as to whether GAs should be used for static function optimization, a large amount of research has been, and continues to be, conducted in this direction. [De Jong, 1992] claims that the GA is not a function optimizer, and that typical GAs which are used for function
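    As a rough illustration of the PBIL idea summarized in this abstract (keep the population's statistics rather than the population itself), here is a minimal sketch; the learning rate, sample count and toy objective are assumptions for illustration, not the settings used by Baluja and Caruana.

```python
import random

def pbil(fitness_fn, length, generations=100, samples=50, learning_rate=0.1):
    """PBIL-style search: maintain a probability vector instead of a population,
    sample candidate bit strings from it, and nudge it toward the best sample."""
    prob = [0.5] * length                       # initial probability of a 1 in each position
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        population = [[1 if random.random() < p else 0 for p in prob]
                      for _ in range(samples)]
        winner = max(population, key=fitness_fn)
        fit = fitness_fn(winner)
        if fit > best_fit:
            best, best_fit = winner, fit
        # Move the probability vector toward this generation's best sample.
        prob = [(1 - learning_rate) * p + learning_rate * bit
                for p, bit in zip(prob, winner)]
    return best, best_fit

# Example with a toy objective: maximize the number of 1 bits.
# print(pbil(sum, length=20))
```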
  • Genetic Algorithm
    A More "Realistic" Genetic Algorithm -----CGA A thesis submitted to Cardiff University for the degree of Master of Philosophy in Engineering By Hui Chen February 2013 1 Acknowledgements I am heartily thankful to my supervisors Professor John Miles and Dr Alan Kwan for giving me the opportunity to carry out this study and encouraging me a lot during these years. Then, I offer my regards and blessings to all of those who helped and supported me in any respect during the completion of the project; Dr Haijiang Li, my parents Jianying Yang and Xueping Chen and all my friends. Last but not least, I also wish to thank Dr Michael Packianather, Dr Shuqi Pan and all the staff in the research office for their patience and support. Hui Chen 2013 2 Abstract Genetic Algorithms (GAs) are loosely based on the concept of the natural cycle of reproduction with selective pressures favouring the individuals which are best suited to their environment (i.e. fitness function). However, there are many features of natural reproduction which are not replicated in GAs, such as population members taking some time to reach puberty. This thesis describes a programme of research which set out to investigate the impact on the performance of a GA of introducing additional features which more closely replicate real life processes. The motivation for the work was curiosity. The approach has been tested using various standard test functions. The results are interesting and show that, when compared with a Canonical GA, introducing various features such as the need to reach puberty before reproduction can occur and risk of illness can enhance the effectiveness of GAs in terms of the overall effort needed to find a solution.
  • Randomized Derivative-Free Optimization
    Sebastian U. Stich ([email protected]), MADALGO & CTIC Summer School 2011. Randomized Derivative-Free Optimization. Institute of Theoretical Computer Science; Swiss Institute of Bioinformatics. Poster covering: Standard Approaches (pure random search, pure adaptive random search, hit-and-run, DIRECT); Evolution Strategies Explained (the (1+1)-ES, step-size adaptation, the generic (µ + λ)-ES, linear convergence of the (1+1)-ES on quadratic functions); The Right Metric (gradient versus Newton direction on quadratic functions); and Other Methods.
  • Genetic Algorithm: Reviews, Implementations, and Applications
    Genetic Algorithm: Reviews, Implementations, and Applications. Tanweer Alam, Faculty of Computer and Information Systems, Islamic University of Madinah, Saudi Arabia, [email protected]; Shamimul Qamar, Computer Engineering Department, King Khalid University, Abha, Saudi Arabia; Amit Dixit, Department of ECE, Quantum School of Technology, Roorkee, India; Mohamed Benaida, Faculty of Computer and Information Systems, Islamic University of Madinah, Saudi Arabia. How to cite this article? Tanweer Alam, Shamimul Qamar, Amit Dixit, Mohamed Benaida. "Genetic Algorithm: Reviews, Implementations, and Applications." International Journal of Engineering Pedagogy (iJEP), 2020. Abstract—Nowadays genetic algorithm (GA) is greatly used in engineering pedagogy as an adaptive technique to learn and solve complex problems and issues. It is a meta-heuristic approach that is used to solve hybrid computation challenges. GA utilizes selection, crossover, and mutation operators to effectively manage the searching system strategy. This algorithm is derived from natural selection and genetics concepts. GA is an intelligent use of random search supported with historical data to contribute the search in an area of the improved outcome within a coverage framework. Such algorithms are widely used for maintaining high-quality reactions to optimize issues and problems investigation. These techniques are recognized to be somewhat of a statistical investigation process to search for a suitable solution or prevent an accurate strategy for challenges in optimization or searches. These techniques have been produced from natural selection or genetics principles. For random testing, historical information is provided with intelligent enslavement to continue moving the search out from the area of improved features for processing of the outcomes.
  • Gaussian Adaptation Revisited – an Entropic View on Covariance Matrix Adaptation
    Zurich Open Repository and Archive, University of Zurich, Main Library, Strickhofstrasse 39, CH-8057 Zurich, www.zora.uzh.ch. Year: 2010. Gaussian Adaptation Revisited – An Entropic View on Covariance Matrix Adaptation. Müller, Christian L.; Sbalzarini, Ivo F. Abstract: We revisit Gaussian Adaptation (GaA), a black-box optimizer for discrete and continuous problems that has been developed in the late 1960s. This largely neglected search heuristic shares several interesting features with the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and with Simulated Annealing (SA). GaA samples single candidate solutions from a multivariate normal distribution and continuously adapts its first and second moments (mean and covariance) such as to maximize the entropy of the search distribution. Sample-point selection is controlled by a monotonically decreasing acceptance threshold, reminiscent of the cooling schedule in SA. We describe the theoretical foundations of GaA and analyze some key features of this algorithm. We empirically show that GaA converges log-linearly on the sphere function and analyze its behavior on selected non-convex test functions. DOI: https://doi.org/10.1007/978-3-642-12239-2_45. Posted at the Zurich Open Repository and Archive, University of Zurich. ZORA URL: https://doi.org/10.5167/uzh-79216. Journal Article. Originally published at: Müller, Christian L.; Sbalzarini, Ivo F. (2010). Gaussian Adaptation Revisited – An Entropic View on Covariance Matrix Adaptation. Lecture Notes in Computer Science, 6024:432-441. DOI: https://doi.org/10.1007/978-3-642-12239-2_45. Gaussian Adaptation revisited – an entropic view on Covariance Matrix Adaptation. Christian L. Müller and Ivo F.
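    The mechanism described in this abstract can be sketched roughly as follows: sample one candidate per step from a multivariate normal, accept it when it beats a decreasing threshold, and adapt the mean and covariance on acceptance. The adaptation rate and the expansion/contraction factors below are illustrative guesses, not the constants analyzed by Müller and Sbalzarini.

```python
import numpy as np

def gaussian_adaptation(f, x0, iterations=2000, expand=1.05, contract=0.95, rate=0.1):
    """Gaussian Adaptation-style minimization sketch: sample one candidate per step
    from N(mean, C), accept it if f(x) falls below a decreasing threshold, and adapt
    the mean, covariance and threshold on acceptance."""
    dim = len(x0)
    mean = np.array(x0, dtype=float)
    cov = np.eye(dim)
    threshold = f(mean)
    for _ in range(iterations):
        x = np.random.multivariate_normal(mean, cov)
        if f(x) < threshold:
            delta = x - mean
            mean = mean + rate * delta                              # adapt first moment
            cov = (1 - rate) * cov + rate * np.outer(delta, delta)  # adapt second moment
            cov *= expand                                           # widen search after success
            threshold = (1 - rate) * threshold + rate * f(x)        # decreasing acceptance threshold
        else:
            cov *= contract                                         # shrink search after failure
    return mean, f(mean)

# Example on the sphere function mentioned in the abstract:
# print(gaussian_adaptation(lambda x: float(np.dot(x, x)), x0=[3.0, -2.0, 1.0]))
```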
  • A Sequential Hybridization of Genetic Algorithm and Particle Swarm Optimization for the Optimal Reactive Power Flow
    sustainability Article A Sequential Hybridization of Genetic Algorithm and Particle Swarm Optimization for the Optimal Reactive Power Flow Imene Cherki 1,*, Abdelkader Chaker 1, Zohra Djidar 1, Naima Khalfallah 1 and Fadela Benzergua 2 1 SCAMRE Laboratory, ENPO-MA National Polytechnic School of Oran Maurice Audin, Oran 31000, Algeria 2 Department of Electrical Engineering, University of Science and Technology of Oran Mohamed Bodiaf, Oran 31000, Algeria * Correspondence: [email protected] Received: 21 June 2019; Accepted: 12 July 2019; Published: 16 July 2019 Abstract: In this paper, the problem of the Optimal Reactive Power Flow (ORPF) in the Algerian Western Network with 102 nodes is solved by the sequential hybridization of metaheuristic methods, which consists of the combination of both the Genetic Algorithm (GA) and the Particle Swarm Optimization (PSO). The aim of this optimization is the minimization of the power losses while keeping the voltage, the generated power, and the transformation ratio of the transformers within their real limits. The results obtained from this method are compared to those obtained from the two population-based methods used separately. It seems that the hybridization method gives better minimization of the power losses than GA and PSO considered individually. However, the hybrid method seems to be faster than the PSO but slower than GA. Keywords: reactive power flow; metaheuristic methods; metaheuristic hybridization; genetic algorithm; particle swarms; electrical network 1. Introduction The objective of any company producing and distributing electrical energy is to ensure that the required power is available at all points and at all times.
  • Recent Advances in Differential Evolution – an Updated Survey
    Swarm and Evolutionary Computation 27 (2016) 1–30. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/swevo. Survey Paper: Recent advances in differential evolution – An updated survey. Swagatam Das and Sankha Subhra Mullick (Electronics and Communication Sciences Unit, Indian Statistical Institute, Kolkata 700108, India); P.N. Suganthan (School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore). Article history: Received 10 October 2015; Received in revised form 13 January 2016; Accepted 15 January 2016; Available online 1 February 2016. Keywords: Differential evolution; Continuous evolutionary optimization; Numerical optimization; Parameter adaptation; Recombination.

    Abstract: Differential Evolution (DE) is arguably one of the most powerful and versatile evolutionary optimizers for the continuous parameter spaces in recent times. Almost 5 years have passed since the first comprehensive survey article was published on DE by Das and Suganthan in 2011. Several developments have been reported on various aspects of the algorithm in these 5 years, and the research on and with DE has now reached an impressive state. Considering the huge progress of research with DE and its applications in diverse domains of science and technology, we find that it is high time to provide a critical review of the latest literature published and also to point out some important future avenues of research. The purpose of this paper is to summarize and organize the information on these current developments in DE. Beginning with a comprehensive foundation of the basic DE family of algorithms, we proceed through the recent proposals on parameter adaptation of DE, DE-based single-objective global optimizers, DE adopted for various optimization scenarios including constrained, large-scale, multi-objective, multi-modal and dynamic optimization, hybridization of DE with other optimizers, and also the multifaceted literature on applications of DE.
  • An Improved Adaptive Genetic Algorithm for Two-Dimensional Rectangular Packing Problem
    applied sciences Article An Improved Adaptive Genetic Algorithm for Two-Dimensional Rectangular Packing Problem Yi-Bo Li 1, Hong-Bao Sang 1,*, Xiang Xiong 1 and Yu-Rou Li 2 1 State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; [email protected] (Y.-B.L.); [email protected] (X.X.) 2 Canterbury School, 101 Aspetuck Ave, New Milford, CT 06776, USA; [email protected] * Correspondence: [email protected] Abstract: This paper proposes the hybrid adaptive genetic algorithm (HAGA) as an improved method for solving the NP-hard two-dimensional rectangular packing problem to maximize the filling rate of a rectangular sheet. The packing sequence and rotation state are encoded in a two-stage approach, and the initial population is constructed from random generation by a combination of sorting rules. After using the sort-based method as an improved selection operator for the hybrid adaptive genetic algorithm, the crossover probability and mutation probability are adjusted adaptively according to the joint action of individual fitness from the local perspective and the global perspective of population evolution. The approach can not only obtain differential performance for individuals but also deal with the impact of dynamic changes on population evolution to quickly find a further improved solution. The heuristic placement algorithm decodes the rectangular packing sequence and addresses the two-dimensional rectangular packing problem through continuous iterative optimization. The computational results on a wide range of benchmark instances from zero-waste to non-zero-waste problems show that the HAGA outperforms two adaptive genetic algorithms from the related literature.
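    The fitness-driven adjustment of crossover and mutation probabilities described here can be illustrated with a classic adaptive rule in the spirit of Srinivas and Patnaik's adaptive GA; the HAGA's own formulas are not given in this abstract, and the constants k1 to k4 below are conventional illustrative values.

```python
def adaptive_rates(fitness, partner_fitness, avg_fitness, max_fitness,
                   k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Fitness-dependent crossover/mutation probabilities in the spirit of
    Srinivas and Patnaik's adaptive GA: fit individuals are disturbed less,
    weak ones more. Illustrative constants, not the HAGA's own rule."""
    better = max(fitness, partner_fitness)          # fitness of the better parent in the pair
    spread = max(max_fitness - avg_fitness, 1e-12)  # guard against division by zero
    if better >= avg_fitness:
        pc = k1 * (max_fitness - better) / spread   # crossover probability shrinks near the best
    else:
        pc = k3                                     # below-average pairs always recombine
    if fitness >= avg_fitness:
        pm = k2 * (max_fitness - fitness) / spread  # mutation probability shrinks for fit individuals
    else:
        pm = k4
    return pc, pm
```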
  • Chapter 12 Gene Selection and Sample Classification Using a Genetic Algorithm/K-Nearest Neighbor Method
    Chapter 12 Gene Selection and Sample Classification Using a Genetic Algorithm/k-Nearest Neighbor Method Leping Li and Clarice R. Weinberg Biostatistics Branch, National Institute of Environmental Health Sciences, Research Triangle Park, NC 27709, USA, e-mail: {li3,weinberg}@niehs.nih.gov 1. INTRODUCTION Advances in microarray technology have made it possible to study the global gene expression patterns of tens of thousands of genes in parallel (Brown and Botstein, 1999; Lipshutz et al., 1999). Such large-scale expression profiling has been used to compare gene expressions in normal and transformed human cells in several tumors (Alon et al., 1999; Golub et al., 1999; Alizadeh et al., 2000; Perou et al., 2000; Bhattacharjee et al., 2001; Ramaswamy et al., 2001; van’t Veer et al., 2002) and cells under different conditions or environments (Ooi et al., 2001; Raghuraman et al., 2001; Wyrick and Young, 2002). The goals of these experiments are to identify differentially expressed genes, gene-gene interaction networks, and/or expression patterns that may be used to predict class membership for unknown samples. Among these applications, class prediction has recently received a great deal of attention. Supervised class prediction first identifies a set of discriminative genes that differentiate different categories of samples, e.g., tumor versus normal, or chemically exposed versus unexposed, using a learning set with known classification. The selected set of discriminative genes is subsequently used to predict the category of unknown samples. This method promises both refined diagnosis of disease subtypes, including markers for prognosis and better targeted treatment, and improved understanding of disease and toxicity processes at the cellular level.
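    A minimal sketch of the GA/kNN fitness evaluation described in this chapter summary: a chromosome is a fixed-size set of gene indices, and its fitness is the leave-one-out accuracy of a k-nearest-neighbour classifier restricted to those genes. The distance metric, the value of k, and the subset size are illustrative assumptions, not the chapter's exact settings.

```python
from collections import Counter

def knn_accuracy(data, labels, gene_subset, k=3):
    """Leave-one-out accuracy of a k-nearest-neighbour classifier using only
    the selected genes (squared Euclidean distance on those coordinates)."""
    correct = 0
    for i, sample in enumerate(data):
        dists = []
        for j, other in enumerate(data):
            if i == j:
                continue
            d = sum((sample[g] - other[g]) ** 2 for g in gene_subset)
            dists.append((d, labels[j]))
        dists.sort(key=lambda t: t[0])
        vote = Counter(lbl for _, lbl in dists[:k]).most_common(1)[0][0]
        correct += (vote == labels[i])
    return correct / len(data)

def ga_knn_fitness(chromosome, data, labels):
    """Chromosome = fixed-size list of gene indices; fitness = classification accuracy."""
    return knn_accuracy(data, labels, chromosome)

# A chromosome here might be, e.g., random.sample(range(num_genes), 30).
```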
  • A Note on Evolutionary Algorithms and Its Applications
    Bhargava, S. (2013). A Note on Evolutionary Algorithms and Its Applications. Adults Learning Mathematics: An International Journal, 8(1), 31-45 A Note on Evolutionary Algorithms and Its Applications Shifali Bhargava Dept. of Mathematics, B.S.A. College, Mathura (U.P)- India. <[email protected]> Abstract This paper introduces evolutionary algorithms with its applications in multi-objective optimization. Here elitist and non-elitist multiobjective evolutionary algorithms are discussed with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas. Key words: evolutionary algorithms, multi-objective optimization, pareto-optimality, elitist. Introduction The term evolutionary algorithm (EA) stands for a class of stochastic optimization methods that simulate the process of natural evolution. The origins of EAs can be traced back to the late 1950s, and since the 1970’s several evolutionary methodologies have been proposed, mainly genetic algorithms, evolutionary programming, and evolution strategies. All of these approaches operate on a set of candidate solutions. Using strong simplifications, this set is subsequently modified by the two basic principles of evolution: selection and variation. Selection represents the competition for resources among living beings. Some are better than others and more likely to survive and to reproduce their genetic information. In evolutionary algorithms, natural selection is simulated by a stochastic selection process. Each solution is given a chance to reproduce a certain number of times, dependent on their quality. Thereby, quality is assessed by evaluating the individuals and assigning them scalar fitness values. The other principle, variation, imitates natural capability of creating “new” living beings by means of recombination and mutation.
  • Genetic Algorithm by Ben Shedlofsky
    Genetic Algorithm. By Ben Shedlofsky. Executive Summary: The genetic algorithm is a way of using computers to model how a population evolves genetically: how it moves toward a healthier (fitter) population and how it changes through breeding. Each generation produces new offspring; some offspring move the population toward better fitness and some do not. The algorithm expresses, in mathematical terms and through computer simulation, how a population breeds. The major problem was understanding, step by step, how a genetic algorithm works and where it fails; the other was presenting it to the audience without losing them or making it look like a magic show. In computational form the genetic algorithm uses binary (base 2) rather than base 10, which makes it a little harder for a business end user to read and understand. Once you understand that there are just 1s and 0s instead of the digits 0 through 9, you start to recognize that this is a computational system for deriving populations. Breeding in each generation happens either with two parents or with one parent: the two-parent case is called crossover and happens most of the time, while the single-parent case, mutation, occurs with a very low probability. This makeup is very similar to a biological population, such as monkeys breeding amongst themselves or single-cell organisms reproducing. Each of us has genes or traits that make us who we are, and differences in those genes or traits make each individual unique.