Research on New Artificial Intelligence Based Path Planning Algorithms with Focus on Autonomous Driving

Department of Geoinformatics
University of Applied Sciences Munich
Prof. Dr. Thomas Abmayr

Research on new Artificial Intelligence based Path Planning Algorithms with Focus on Autonomous Driving

Master Thesis
University of Applied Sciences Munich, Department of Geoinformatics
Executed at Audi Electronics Venture GmbH

Advisors: Prof. Dr. Thomas Abmayr (University of Applied Sciences Munich), Gordon Taft (Audi Electronics Venture GmbH)
Author: Christoph Oberndorfer
Course of study: Geomatics (M. Eng.)
Realized in the summer semester 2017, submitted in September 2017

Abstract

Keywords: Artificial Intelligence, Neural Networks, Reinforcement Learning, Evolutionary Algorithms, Routing Algorithms, Path Planning, Autonomous Driving

Artificial intelligence based technologies have strongly gained importance in recent years across various fields of research and are likely to significantly influence future developments in the automotive industry. Autonomous driving in dynamic, intra-urban environments is therefore one of the most challenging areas for automobile manufacturers at present. One of the corresponding key technologies is intelligent path planning. Within this field of research, classical routing algorithms have proven not flexible enough to handle complex situations in road traffic.

Building on these challenges, the main goal of this work is to develop an artificial intelligence based approach to path planning that offers better applicability than classical routing algorithms. In addition, a comparison of both approaches reviews their respective assets and drawbacks. In this way, the developed approaches broaden the perspective of classical path planning methods.

The present work targets the realization of two different artificial intelligence based approaches. The first implemented algorithm uses a neural network together with reinforcement learning methods. In the second approach, an evolutionary algorithm is applied to evolve a neural network. The resulting algorithms are tested in a grid based environment to carry out an accuracy analysis. Various experiments furthermore help to identify the most suitable parameter sets for the respective algorithms.

Both algorithms turned out to exhibit sufficient functionality. The reinforcement learning based approach finds its way from the starting point to the target point with a probability of 98.74%. A comparison with a classical state-of-the-art path planning algorithm (A*) showed that the reinforcement learning based approach takes the shortest path in 99.52% of all successful test runs. These findings clearly show that this algorithm is highly suitable for the challenge of path planning. The second, evolution based approach provided only insufficient results, with a success probability of 69.96%; several potential extensions are recommended to improve these results. Overall, the present work demonstrates the effectiveness of artificial intelligence based methods in path planning tasks by realizing a holistic and powerful path planning algorithm.
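To give an impression of the kind of reinforcement learning used in the first approach, the following is a minimal Q-learning sketch for path planning on a small grid with walls, a start cell and a target cell, written in Python. The grid layout, reward values and hyperparameters (learning rate, discount factor, epsilon-greedy exploration) are illustrative assumptions and are not taken from the thesis itself.

# Minimal Q-learning sketch for grid-based path planning (illustrative only).
# The grid layout, rewards and hyperparameters below are assumptions for
# demonstration purposes and do not reproduce the thesis implementation.

import random

GRID = [                     # 0 = free cell, 1 = wall
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
]
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration
Q = {}                                         # Q[(state, action_index)] -> value


def step(state, move):
    """Apply a move; hitting a wall or the border leaves the agent in place."""
    r, c = state[0] + move[0], state[1] + move[1]
    if not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])) or GRID[r][c] == 1:
        return state, -1.0      # penalty for an invalid move
    if (r, c) == GOAL:
        return (r, c), 10.0     # reward for reaching the target
    return (r, c), -0.1         # small step cost favours short paths


def choose_action(state):
    """Epsilon-greedy action selection over the four possible moves."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = [Q.get((state, a), 0.0) for a in range(len(ACTIONS))]
    return values.index(max(values))


for episode in range(2000):
    state = START
    for _ in range(100):                        # cap the episode length
        a = choose_action(state)
        nxt, reward = step(state, ACTIONS[a])
        best_next = max(Q.get((nxt, b), 0.0) for b in range(len(ACTIONS)))
        old = Q.get((state, a), 0.0)
        # Standard Q-learning update rule.
        Q[(state, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
        state = nxt
        if state == GOAL:
            break

# After training, follow the greedy policy to read off a path from start to goal.
state, path = START, [START]
while state != GOAL and len(path) < 20:
    values = [Q.get((state, a), 0.0) for a in range(len(ACTIONS))]
    state, _ = step(state, ACTIONS[values.index(max(values))])
    path.append(state)
print(path)

The sketch only illustrates the tabular update rule; according to the abstract, the first approach of the thesis combines such reinforcement learning feedback with a neural network rather than a plain lookup table.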
Contents

List of Figures
List of Tables
List of Abbreviations

1 Introduction
  1.1 Artificial Intelligence today
  1.2 Motivation
  1.3 Related work
  1.4 Scope of the work
  1.5 Structure of the work
2 Fundamental Methods
  2.1 Routing algorithms
    2.1.1 Dijkstra Algorithm
    2.1.2 A* Algorithm
    2.1.3 Ant Colony Optimisation Algorithm
  2.2 Artificial Intelligence
    2.2.1 Reinforcement Learning
    2.2.2 Neural Network
    2.2.3 Evolutionary Algorithm
3 Approach
  3.1 A* Algorithm
  3.2 Q-Learning Algorithm
  3.3 Multilayer Perceptron
  3.4 Neuroevolution of Augmenting Topologies
4 Experiments and Results
  4.1 Multilayer Perceptron
    4.1.1 Independent runs
    4.1.2 Training five epochs
    4.1.3 Different ratio of data sets
    4.1.4 Different optimisers
    4.1.5 Different alpha values
    4.1.6 One hidden layer
    4.1.7 Two hidden layers
    4.1.8 One to five hidden layers
    4.1.9 Reduction of unsolved grids in training
    4.1.10 Reduction of the number of steps for final result
  4.2 Neuroevolution of Augmenting Topologies
    4.2.1 Independent runs
    4.2.2 Different number of training sets
    4.2.3 Different number of neurons
    4.2.4 Different MutateAddNeuronProb values
    4.2.5 Different ActivationLevel values
    4.2.6 Different SurvivalRate values
    4.2.7 Different YoungAgeFitnessBoost values
    4.2.8 Other parameters
    4.2.9 Experiments with final parameters
    4.2.10 Analysis of the final result
5 Discussion and Outlook
  5.1 Evaluation of Multilayer Perceptron
  5.2 Evaluation of Neuroevolution of Augmenting Topologies
  5.3 Comparison of algorithms
  5.4 Summary
  5.5 Outlook
Appendix
  A XOR function example step-by-step
  B NEAT parameter list
Bibliography

List of Figures
2.1 Visualisation of a graph with nodes and links
2.2 Development of a search tree with two different search algorithms
2.3 Two subgraphs with Dijkstra Algorithm
2.4 Two subgraphs with A* Algorithm
2.5 Comparison of Dijkstra and A* algorithms
2.6 Development of the Ant Colony Optimisation algorithm
2.7 The agent interacts with the environment
2.8 Reinforcement Learning framework
2.9 Map of five rooms connected through doors
2.10 Graph representation of the rooms with rewards for each link
2.11 R-matrix includes the initial rewards for all links
2.12 Q-matrix as memory of the algorithm
2.13 Biological and artificial neuron
2.14 Schematic representation of an ANN
2.15 Stochastic gradient descent with a two-dimensional error function
2.16 Overview of the most common activation functions
2.17 Examples of underfitting, fitting and overfitting
2.18 Crossover and mutation as genetic operators
3.1 Graph and shortest path created by A*
3.2 Graph and shortest path created by Q-Learning
3.3 Grid with player, walls, positive and negative items
3.4 Process of calculating an action to a state
3.5 Workflow of the learning mechanism in the Neural Network
3.6 Workflow of the NEAT algorithm
4.1 MSE development over five epochs
4.2 MSE development with different numbers of hidden layers
4.3 Development of the result value
4.4 Analysis of the quantity of neurons in each layer
4.5 Visualisation of the final Neural Network

List of Tables
1.1 Different levels of automation for self-driving cars
2.1 All possible combinations of input and output of the XOR function
2.2 Results after learning the XOR function
4.1 Results of the MSE development over five epochs
4.2 Results with different numbers of training and test sets
4.3 Results of different optimisers
4.4 Results of different alpha values of the SGD
4.5 Results of one hidden layer
4.6 Results of two hidden layers
4.7 Run time with two hidden layers
4.8 Results of two hidden layers after five epochs
4.9 Results of one to five hidden layers
4.10 Detailed results of one to five hidden layers
4.11 Results of unsolved grids in training phase
4.12 Results of different numbers of maximum repetitions
4.13 Results of minimised steps
4.14 Different numbers of training sets
4.15 Different neurons of initial hidden layer
4.16 Different MutateAddNeuronProb values
4.17 Different ActivationLevel values
4.18 Different SurvivalRate values
4.19 Different YoungAgeFitnessBoost values
4.20 Best parameters for the final evaluation
