Deep Reinforcement Learning for Autonomous Driving: a Survey

B Ravi Kiran1, Ibrahim Sobh2, Victor Talpaert3, Patrick Mannion4, Ahmad A. Al Sallab2, Senthil Yogamani5, Patrick Pérez6

arXiv:2002.00444v2 [cs.LG] 23 Jan 2021

Abstract—With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning and inverse reinforcement learning, which are related to but are not classical RL algorithms. The role of simulators in training agents, and methods to validate, test and robustify existing solutions in RL, are also discussed.

Index Terms—Deep reinforcement learning, Autonomous driving, Imitation learning, Inverse reinforcement learning, Controller learning, Trajectory optimisation, Motion planning, Safe reinforcement learning.

I. INTRODUCTION

Autonomous driving (AD)1 systems consist of multiple perception-level tasks that have now achieved high precision on account of deep learning architectures. Beyond perception, autonomous driving systems involve multiple tasks where classical supervised learning methods are no longer applicable. First, the prediction of the agent's action changes the future sensor observations received from the environment in which the autonomous driving agent operates, for example in the task of choosing an optimal driving speed in an urban area. Second, supervisory signals such as time to collision (TTC) or the lateral error w.r.t. the agent's optimal trajectory represent the dynamics of the agent as well as the uncertainty in the environment; such problems require defining a stochastic reward function to be maximized. Third, the agent is required to learn new configurations of the environment, as well as to predict an optimal decision at each instant while driving in its environment. This constitutes a high-dimensional space, since the number of unique configurations under which the agent and environment can be observed is combinatorially large. In all such scenarios we aim to solve a sequential decision process, formalized under the classical setting of Reinforcement Learning (RL), in which the agent is required to learn and represent its environment and to act optimally at each instant [1]. The mapping from states to optimal actions is referred to as the policy.
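To make this sequential decision formulation concrete ahead of the formal treatment in Section III, the sketch below runs tabular Q-learning on a toy speed-selection task. Everything in it (the discretisation into five speed levels, the reward, the hyperparameters) is an illustrative assumption of ours, not a method from the surveyed literature.

```python
import numpy as np

# Toy MDP (illustrative only): states are discretised speed levels,
# actions nudge the speed, and the reward penalises deviation from a
# hypothetical optimal urban driving speed.
n_states, n_actions, target = 5, 3, 2     # actions: 0=slow down, 1=keep, 2=speed up
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1        # learning rate, discount, exploration rate

def step(s, a):
    s_next = int(np.clip(s + (a - 1), 0, n_states - 1))
    return s_next, -abs(s_next - target)  # reward: negative distance to target speed

rng = np.random.default_rng(0)
s = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # Q-learning update: move Q(s, a) towards the bootstrapped target
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

policy = Q.argmax(axis=1)  # the policy: the learned best action in each state
```

Deep Q-learning, covered later in the survey, replaces the table Q with a neural network so that the same update rule scales to high-dimensional states.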
In this review we cover the notions of reinforcement learning and provide a taxonomy of tasks where RL is a promising solution, especially in the domains of driving policy, predictive perception, path and motion planning, and low-level controller design. We also review the different real-world deployments of RL in the domain of autonomous driving, expanding on our conference paper [2], since these deployments have not previously been reviewed in an academic setting. Finally, we motivate users by demonstrating the key computational challenges and risks in applying current-day RL algorithms such as imitation learning and deep Q-learning, among others. We also note from the publication trends in figure 2 that the use of RL or deep RL applied to autonomous driving or the self-driving domain is an emergent field, owing to the only recent adoption of RL/DRL algorithms in this domain, which leaves open multiple real-world challenges in implementation and deployment. We address these open problems in Section VI.

The main contributions of this work can be summarized as follows:
• A self-contained overview of RL background for the automotive community, where it is not well known.
• A detailed literature review of the use of RL for different autonomous driving tasks.
• A discussion of the key challenges and opportunities for RL applied to real-world autonomous driving.

The rest of the paper is organized as follows. Section II provides an overview of the components of a typical autonomous driving system. Section III provides an introduction to reinforcement learning and briefly discusses key concepts. Section IV discusses more sophisticated extensions on top of the basic RL framework. Section V provides an overview of RL applications for autonomous driving problems. Section VI discusses challenges in deploying RL for real-world autonomous driving systems. Section VII concludes this paper with some final remarks.

1 Navya, Paris. [email protected]
2 Valeo Cairo AI team, Egypt. {ibrahim.sobh, ahmad.el-sallab}@valeo.com
3 U2IS, ENSTA Paris, Institut Polytechnique de Paris & AKKA Technologies, France. [email protected]
4 School of Computer Science, National University of Ireland, Galway. [email protected]
5 Valeo Vision Systems. [email protected]
6 Valeo.ai. [email protected]
1 For easy reference, the main acronyms used in this article are listed in the Appendix (Table IV).

II. COMPONENTS OF AD SYSTEM

Fig. 1. Standard components in a modern autonomous driving system pipeline, listing the various tasks. The key problems addressed by these modules are scene understanding, decision making and planning.

Figure 1 shows the standard blocks of an AD system, demonstrating the pipeline from sensor stream to control actuation. The sensor architecture in a modern autonomous driving system notably includes multiple sets of cameras, radars and LIDARs, as well as a GPS-GNSS system for absolute localisation and inertial measurement units (IMUs) that provide the 3D pose of the vehicle in space.

The goal of the perception module is the creation of an intermediate-level representation of the environment state (for example, a bird's-eye-view map of all obstacles and agents) that is later utilised by a decision-making system which ultimately produces the driving policy. This state would include lane position, drivable zone, location of agents such as cars and pedestrians, the state of traffic lights, and more. Uncertainties in the perception propagate to the rest of the information chain, and robust sensing is critical for safety, so using redundant sources increases confidence in detection. This is achieved by a combination of several perception tasks like semantic segmentation [3], [4], motion estimation [5], depth estimation [6], soiling detection [7], etc., which can be efficiently unified into a multi-task model [8], [9].
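The survey does not detail the architecture of the unified multi-task models in [8], [9]; the following is a generic, hypothetical sketch of the shared-encoder pattern such models commonly follow, in which one backbone feeds several task-specific heads:

```python
import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    """Illustrative shared-encoder sketch (not the model of [8], [9]):
    segmentation and depth heads reuse one backbone's features."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in for a real backbone (e.g. a ResNet)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_classes, 1)  # per-pixel class logits
        self.depth_head = nn.Conv2d(64, 1, 1)        # per-pixel depth estimate

    def forward(self, x):
        feats = self.encoder(x)  # features computed once, shared by all tasks
        return {"segmentation": self.seg_head(feats), "depth": self.depth_head(feats)}

model = MultiTaskPerception()
outputs = model(torch.randn(1, 3, 128, 256))  # dummy camera frame
# Training would minimise a weighted sum of per-task losses, e.g.
# loss = w_seg * seg_loss + w_depth * depth_loss
```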
A. Scene Understanding

This key module maps the abstract mid-level representation of the perception state, obtained from the perception module, to the high-level action or decision-making module. Conceptually, three tasks are grouped under this module, as seen in figure 1: scene understanding, decision and planning. The scene understanding component aims to provide a higher-level understanding of the scene and is built on top of the algorithmic tasks of detection and localisation. By fusing heterogeneous sensor sources, it aims to generalise robustly to situations as the content becomes more abstract. This information fusion provides a general and simplified context for the decision-making components.

Fusion provides a sensor-agnostic representation of the environment and models the sensor noise and detection uncertainties across multiple modalities such as LIDAR, camera, radar and ultrasound. This essentially requires weighting the predictions in a principled way.

B. Localization and Mapping

Mapping is one of the key pillars of automated driving [10]. Once an area is mapped, the current position of the vehicle can be localized within the map. The first reliable demonstrations of automated driving by Google were primarily reliant on localisation to pre-mapped areas. Because of the scale of the problem, traditional mapping techniques are augmented by semantic object detection for reliable disambiguation. In addition, localised high-definition maps (HD maps) can be used as a prior for object detection.

C. Planning and Driving Policy

Trajectory planning is a crucial module in the autonomous driving pipeline. Given a route-level plan from HD maps or GPS-based maps, this module is required to generate motion-level commands that steer the agent.

Classical motion planning ignores dynamics and differential constraints and uses the translations and rotations required to move an agent from its source pose to its destination pose [11]. A robotic agent capable of controlling all 6 degrees of freedom (DOF) is said to be holonomic, while an agent with fewer controllable DOFs than its total DOF is said to be non-holonomic. Classical algorithms such as the A* algorithm, which builds on Dijkstra's algorithm, do not work in the non-holonomic case for autonomous driving. Rapidly-exploring random trees (RRT) [12] are non-holonomic algorithms that explore the configuration space by random sampling and obstacle-free path generation (a minimal sketch is given at the end of this section). Various versions of RRT are currently used for motion planning in autonomous driving pipelines.

D. Control

A controller defines the speed, steering angle and braking actions necessary at every point of the path obtained from a pre-determined map such as Google Maps, or from an expert driving recording of the same values at every waypoint. Trajectory tracking, in contrast, involves a temporal model of the dynamics of the vehicle that visits the waypoints sequentially over time.

Current vehicle control methods are founded in classical optimal control theory, which can be stated as the minimisation of a cost function defined over a set of states x(t) and control actions u(t), subject to the vehicle dynamics ẋ = f(x(t), u(t)). The control …
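To make this optimal-control formulation concrete, the sketch below solves the minimisation for the simplest textbook case: a linear 1D vehicle model with a quadratic cost, i.e. a discrete-time LQR tracking a single waypoint. The model, cost weights and horizon are our illustrative assumptions, not a controller described in the survey.

```python
import numpy as np

# 1D vehicle: x = [position error, velocity], u = acceleration.
# Discretised dynamics x_{t+1} = A x_t + B u_t stand in for xdot = f(x, u).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])   # state cost: penalise position/velocity error
R = np.array([[0.01]])    # control cost: penalise harsh acceleration

# Iterate the discrete Riccati recursion to get the optimal feedback gain K
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed loop: u_t = -K x_t drives the error to zero at minimum cost
x = np.array([[5.0], [0.0]])  # start 5 m behind the waypoint, at rest
for _ in range(100):
    u = -K @ x
    x = A @ x + B @ u
print("final position error:", float(x[0]))
```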

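Returning to the planning module of Section II-C, below is the promised minimal RRT sketch: the configuration space is sampled at random and the tree is extended towards each sample with collision checking. The 2D world, single circular obstacle and goal bias are hypothetical; practical AD planners use kinodynamic RRT variants that respect vehicle dynamics.

```python
import math, random

random.seed(0)
start, goal, step = (0.1, 0.1), (0.9, 0.9), 0.05
obstacle = ((0.5, 0.5), 0.2)  # circular obstacle: (centre, radius)

def collides(p):
    (cx, cy), r = obstacle
    return math.hypot(p[0] - cx, p[1] - cy) < r

nodes, parent = [start], {0: None}
for _ in range(5000):
    # Goal-biased random sampling of the configuration space
    sample = goal if random.random() < 0.1 else (random.random(), random.random())
    i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))  # nearest node
    nx, ny = nodes[i]
    d = math.dist((nx, ny), sample) or 1e-9
    new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
    if collides(new):
        continue                      # reject extensions that hit the obstacle
    parent[len(nodes)] = i
    nodes.append(new)
    if math.dist(new, goal) < step:   # close enough: backtrack the path to the root
        path, k = [], len(nodes) - 1
        while k is not None:
            path.append(nodes[k])
            k = parent[k]
        print("found path with", len(path), "waypoints")
        break
```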