Genetic Algorithms for Applied Path Planning

A Thesis Presented in Partial Fulfillment of the Honors Program

Vincent R. Ragusa

Abstract

Path planning is the computational task of choosing a path through an environment. As a task humans perform hundreds of times a day, path planning may seem easy, and perhaps naturally suited for a computer to solve. This is not the case, however. There are many ways in which NP-hard problems like path planning can be made easier for computers to solve, but the most significant of these is the use of approximation algorithms. One such approximation algorithm is called a genetic algorithm. Genetic algorithms belong to an area of computer science called evolutionary computation. The techniques used in evolutionary computation algorithms are modeled after the principles of Darwinian evolution by natural selection. Solutions to the problem are literally bred for their problem-solving ability through many generations of selective breeding. The goal of the research presented is to examine the viability of genetic algorithms as a practical solution to the path planning problem. Various modifications to a well-known genetic algorithm (NSGA-II) were implemented and tested experimentally to determine whether each modification had an effect on the operational efficiency of the algorithm. Two new forms of crossover were implemented with positive results. The notion of mass extinction driving evolution was tested with inconclusive results. A path correction algorithm called make valid was created, which has proven to be extremely powerful. Finally, several additional objective functions were tested, including a path smoothness measure and an obstacle intrusion measure, the latter showing an enormous positive result.

Department of Computer Science
Under the supervision of Dr. H. David Mathias
Florida Southern College
May 2017

Contents

Abstract
Acknowledgments
1 Introduction
  1.1 Optimization Problems
    1.1.1 Single-Objective Optimization
    1.1.2 Multi-Objective Optimization
    1.1.3 Computational Complexity
  1.2 Path Planning
  1.3 UAVs and MAVs
  1.4 Genetic Algorithms
    1.4.1 Solution Encoding Scheme
    1.4.2 Genetic Operators
    1.4.3 Selection and Niching
    1.4.4 Tuning a Genetic Algorithm
2 Problem Statement
3 The Algorithm
  3.1 Pathfinder
    3.1.1 Genome & Solution Encoding
    3.1.2 Objectives
    3.1.3 Algorithm Structure
    3.1.4 Genetic Operators
4 Experiments and Results
  4.1 Crossover and Mass Extinction
    4.1.1 Tested Components
    4.1.2 Experimental Method
    4.1.3 Results
  4.2 Crossover, Obstacle Intrusion, Path Correction, and Mutation Size
    4.2.1 Tested Components
    4.2.2 Experimental Method
    4.2.3 Results
5 Final Conclusions
  5.1 Crossover and Mass Extinction
  5.2 Crossover, Obstacle Intrusion, and Path Correction
References

Acknowledgments

I would like to thank Florida Southern College for financial and other support for this research; 3DR for providing educational pricing and for supporting open source projects important to research, commercial, and hobbyist projects for MAVs; and Peter Barker for assistance with the drone flight simulator. I would also like to thank Annie Wu and Vera Kazakova for valuable feedback and collaboration, Isabel Loyd and Susan Serrano for assistance with the statistical analysis, and David Mathias for all of his support and guidance.

1 Introduction

The very idea of efficiency is born from asking "Why is this task accomplished in this way?" and following that question with "Is there a better way?" This way of arriving at efficiency leaves a lot of room to interpret "better," and as such the task of finding "the best" of anything can be extremely complicated. In mathematics and in computer science, the task of finding the best of something is called an optimization problem.

1.1 Optimization Problems

Optimization problems can be thought of as search problems where the task is to search for the optimal solution to a problem from among a pool of candidate solutions. There are typically many candidate solutions, each usually with an associated value (or values) to be either maximized or minimized [1]. It is convenient to treat all optimization problems as minimization problems, because any value to be maximized can be multiplied by $-1$ and then minimized to achieve the same effect. Also note that it is common for more than one candidate solution to achieve the optimal value [1].

1.1.1 Single-Objective Optimization

Single-objective optimization problems seek the minimum of a single value. They are typically solved by mathematical methods such as evaluating derivatives and gradient descent, as well as by basic computational methods like greedy strategies. For example, consider a single-objective optimization problem where the goal is to minimize the potential energy of a spring,

$$U(x) = \frac{1}{2} k x^2.$$

Evaluating $\frac{dU}{dx} = 0$,

$$\frac{d}{dx} U(x) = kx = 0 \implies x = 0,$$

it is seen that there is a critical point at $x = 0$. Furthermore, evaluating $\frac{d^2 U}{dx^2}$ at the critical point,

$$\left. \frac{d^2 U}{dx^2} \right|_{x=0} = k > 0,$$

reveals that the critical point is a minimum, because the second derivative is positive there. Therefore the minimum of $U(x)$ is at $x = 0$, when the spring is neither compressed nor stretched. Mathematical methods are often the easiest way to solve single-objective optimization problems.
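Since gradient descent is mentioned above as the standard computational alternative to the analytic approach, the following is a minimal illustrative sketch in Python (not code from the thesis; the starting point, step size, and iteration count are arbitrary assumptions) that recovers the same spring minimum numerically:

```python
# Gradient descent on the spring energy U(x) = (1/2) k x^2.
# The derivative dU/dx = k x points uphill, so each step moves against it.

def grad_U(x, k=1.0):
    return k * x  # dU/dx = kx

x = 5.0      # arbitrary starting guess
step = 0.1   # illustrative step size (learning rate)
for _ in range(200):
    x -= step * grad_U(x)  # step opposite the gradient

print(x)  # approaches the analytic minimum x = 0
```

Each iteration shrinks $x$ by the constant factor $(1 - \text{step} \cdot k)$, so the iterate converges geometrically to the minimum found analytically above.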
1.1.2 Multi-Objective Optimization

Multi-objective optimization problems seek to minimize $n$ values simultaneously. Under ideal circumstances each objective is independent, and the problem reduces to solving $n$ single-objective optimization problems in parallel. This is almost never the case, however, as a single property of a candidate solution usually affects multiple objective values. Consider the task of maximizing the volume of a box (minimizing the volume multiplied by $-1$) while minimizing the surface area of the box, where $x, y, z \in (0, \infty)$:

$$V(x, y, z) = xyz$$

$$A(x, y, z) = 2(xy + xz + yz)$$

If an attempt is made to optimize $V(x, y, z)$ using the same mathematical method used for the spring example in Section 1.1.1,

$$\nabla V = (yz,\ xz,\ xy) = (0, 0, 0) \implies (x = y = 0) \text{ or } (x = z = 0) \text{ or } (y = z = 0),$$

a contradiction is reached because $x$, $y$, and $z$ cannot equal zero. A second attempt, this time to minimize $A(x, y, z)$, reveals

$$\nabla A = 2(y + z,\ x + z,\ x + y) = (0, 0, 0) \implies (y + z = 0) \text{ and } (x + z = 0) \text{ and } (x + y = 0).$$

Solving by elimination gives $(x, y, z) = (0, 0, 0)$. Again a contradiction is reached because $x$, $y$, and $z$ cannot equal zero. Finally, let $R(x, y, z)$ be defined as the ratio of the area to the volume:

$$R(x, y, z) = \frac{A(x, y, z)}{V(x, y, z)} = \frac{2(xy + xz + yz)}{xyz} = \frac{2}{z} + \frac{2}{y} + \frac{2}{x}$$

Searching for critical points reveals

$$\nabla R = -2 \left( \frac{1}{x^2},\ \frac{1}{y^2},\ \frac{1}{z^2} \right) = (0, 0, 0),$$

which has no solutions. It seems that without some new mathematical or computational tools these problems cannot be solved.

Scalarization

Sometimes it is still useful to attempt to optimize a multi-objective problem by using a method known as scalarization. Scalarization is simply the transformation of $n$ objective functions $\{f_1, \ldots, f_n\}$ into a single function $F$ via a weighted sum [2], where $c_i \neq 0$ is the weight on $f_i$:

$$F(p) = \sum_{i=1}^{n} c_i f_i(p)$$

Often the weights sum to 1, so that each weight represents a percentage of the final sum:

$$\sum_{i=1}^{n} c_i = 1$$

Individual objectives can be given preference by assigning them larger weights. Raising an objective function to a power $k_i \neq 0$ can be used to emphasize ($k_i > 0$) or de-emphasize ($k_i < 0$) changes in that objective:

$$F(p) = \sum_{i=1}^{n} c_i \left( f_i(p) \right)^{k_i}$$
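To make the weighted sum concrete, the following is a minimal Python sketch (illustrative only; the function names, candidate boxes, and weights are assumptions, not from the thesis) that scalarizes the two box objectives from Section 1.1.2, with volume negated so that both objectives are minimized:

```python
# Weighted-sum scalarization of the box objectives from Section 1.1.2.
# Maximizing volume is recast as minimizing -V, so F is minimized overall.

def neg_volume(x, y, z):
    return -(x * y * z)           # f1 = -V(x, y, z)

def surface_area(x, y, z):
    return 2 * (x*y + x*z + y*z)  # f2 = A(x, y, z)

def scalarize(box, weights):
    """F(p) = sum_i c_i * f_i(p) for a candidate box p = (x, y, z)."""
    objectives = (neg_volume, surface_area)
    return sum(c * f(*box) for c, f in zip(weights, objectives))

# Weights summing to 1; changing them changes which box "wins",
# illustrating scalarization's sensitivity to the chosen weights.
weights = (0.5, 0.5)
candidates = [(1.0, 1.0, 1.0), (2.0, 1.0, 0.5), (1.5, 1.5, 1.5)]
print(min(candidates, key=lambda box: scalarize(box, weights)))
```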
The biggest drawback of scalarization is that the contributions of individual fitness measures are anonymous, hiding useful data about the search space [2]. It also falls victim to problems like change blindness, in which multiple fitness functions change but the weighted sum has a net change of zero. Scalarization is often abandoned in favor of better methods of dealing with multiple objectives simultaneously.

Pareto Optimization

One popular method of handling multiple objectives in an optimization problem is to rank candidate solutions not by their fitness values directly, but by their relative Pareto efficiency. Simply put, a candidate solution to a multi-objective optimization problem is Pareto efficient (or Pareto optimal) if it cannot make progress in one of its objectives without causing negative changes in some or all of the other objectives [3]. This is more easily illustrated by first defining a vector comparison operator, $\preceq$. Deb et al. [4] define this operator such that a vector $v_1$ is partially less than a vector $v_2$ when

$$v_1 \preceq v_2 \iff (\forall i \mid v_{1,i} \le v_{2,i}) \wedge (\exists i \mid v_{1,i} < v_{2,i}).$$

With this, two solutions are compared by treating each objective's fitness score as a component of a fitness vector whose dimension equals the number of objectives. If the fitness vector of one solution is partially less than that of another, the former dominates the latter. With this operator, a Pareto optimal solution can be defined as one that cannot be dominated by any other solution. Note that there can be (and often are) an infinite number of Pareto optimal solutions to a given multi-objective optimization problem.
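The domination test above translates almost directly into code. The following is a minimal sketch (function names and sample fitness vectors are illustrative assumptions, not from the thesis) of the operator for minimized objectives, together with a naive filter that extracts the non-dominated solutions:

```python
# "Partially less than": v1 dominates v2 when v1 is no worse in every
# objective and strictly better in at least one (all objectives minimized).

def dominates(v1, v2):
    """Return True if fitness vector v1 Pareto-dominates v2."""
    return (all(a <= b for a, b in zip(v1, v2))
            and any(a < b for a, b in zip(v1, v2)))

def pareto_front(vectors):
    """Keep only vectors not dominated by any other vector (O(n^2))."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u is not v)]

# Example fitness vectors (negated volume, surface area) for three boxes:
fits = [(-1.0, 6.0), (-1.0, 7.0), (-3.375, 13.5)]
print(pareto_front(fits))  # (-1.0, 7.0) is dominated by (-1.0, 6.0)
```

NSGA-II, mentioned in the abstract, performs this kind of non-dominated ranking over entire populations, using a more efficient sorting procedure than the quadratic filter sketched here.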
