Explainability of Vision-Based Autonomous Driving Systems: Review and Challenges

Noname manuscript No. (will be inserted by the editor)

Éloi Zablocki*,1 · Hédi Ben-Younes*,1 · Patrick Pérez1 · Matthieu Cord1,2

arXiv:2101.05307v1 [cs.CV] 13 Jan 2021

Abstract This survey reviews explainability methods for vision-based self-driving systems. The concept of explainability has several facets, and the need for explainability is strong in driving, a safety-critical application. Gathering contributions from several research fields, namely computer vision, deep learning, autonomous driving, and explainable AI (X-AI), this survey tackles several points. First, it discusses definitions, context, and motivation for gaining more interpretability and explainability from self-driving systems. Second, major recent state-of-the-art approaches to develop self-driving systems are quickly presented. Third, methods providing explanations to a black-box self-driving system in a post-hoc fashion are comprehensively organized and detailed. Fourth, approaches from the literature that aim at building more interpretable self-driving systems by design are presented and discussed in detail. Finally, remaining open challenges and potential future research directions are identified and examined.

Keywords Autonomous driving · Explainability · Interpretability · Black-box · Post-hoc interpretability

Éloi Zablocki, E-mail: [email protected]
Hédi Ben-Younes, E-mail: [email protected]
Patrick Pérez, E-mail: [email protected]
Matthieu Cord, E-mail: [email protected]
* equal contribution
1 Valeo.ai
2 Sorbonne Université

1 Introduction

1.1 Self-driving systems

Research on autonomous vehicles is blooming thanks to recent advances in deep learning and computer vision (Krizhevsky et al, 2012; LeCun et al, 2015), as well as the development of autonomous driving datasets and simulators (Geiger et al, 2013; Dosovitskiy et al, 2017; Yu et al, 2020). The number of academic publications on this subject is rising in most machine learning, computer vision, robotics, and transportation conferences and journals. On the industry side, several manufacturers already produce cars equipped with advanced computer vision technologies for automatic lane following, assisted parking, or collision detection, among other things. Meanwhile, car makers are designing prototypes with level 4 and 5 autonomy. The development of autonomous vehicles has the potential to reduce congestion, fuel consumption, and crashes, and it can increase personal mobility and save lives, given that the vast majority of car crashes today are caused by human error (Anderson et al, 2014).

The first steps in the development of autonomous driving systems were taken with the collaborative European project PROMETHEUS (Program for a European Traffic with Highest Efficiency and Unprecedented Safety) (Xie et al, 1993) at the end of the 1980s and the DARPA Grand Challenges in the late 2000s. At that time, systems were heavily engineered pipelines (Urmson et al, 2008; Thrun et al, 2006) whose modular design decomposes the task of driving into several smaller tasks, from perception to planning, which has the advantage of offering interpretability and transparency to the processing. Nevertheless, modular pipelines also have known limitations, such as a lack of flexibility, the need for handcrafted representations, and the risk of error propagation.

In the 2010s, interest grew in approaches that train driving systems, usually in the form of neural networks, either by leveraging large quantities of expert recordings (Bojarski et al, 2016; Codevilla et al, 2018; Ly and Akhloufi, 2020) or through simulation (Espié et al, 2005; Toromanoff et al, 2020; Dosovitskiy et al, 2017). In both cases, these systems learn a highly complex transformation that operates over input sensor data and produces end-commands (steering angle, throttle). While these neural driving models overcome some of the limitations of the modular pipeline stack, they are sometimes described as black boxes for their critical lack of transparency and interpretability.

1.2 Need for explainability

The need for explainability is multi-factorial and depends on the people concerned, whether they are end-users, legal authorities, or self-driving car designers. End-users and citizens need to trust the autonomous system and to be reassured (Choi and Ji, 2015). Designers of self-driving models need to understand the limitations of current models in order to validate them and improve future versions (Tian et al, 2018). Legal and regulatory bodies need access to explanations of the system for liability purposes, especially in the case of accidents (Rathi, 2019; Li et al, 2018c).

The fact that self-driving systems are not inherently interpretable has two main origins. On the one hand, models are designed and trained within the deep learning paradigm, which has known explainability-related limitations: datasets contain numerous biases and are generally not precisely curated; the learning and generalization capacity remains empirical, in the sense that the system may learn from spurious correlations and overfit on common situations; and the final trained model represents a highly non-linear function that is not robust to slight changes in the input space. On the other hand, self-driving systems have to simultaneously solve intertwined tasks of very different natures: perception tasks, with the detection of lanes and objects; planning and reasoning tasks, with motion forecasting of surrounding objects and of the ego-vehicle; and control tasks, to produce the driving end-commands. Explaining a self-driving system thus means disentangling the predictions of each implicit task and making them human-interpretable.

1.3 Research questions and focus of the survey

Two complementary questions are the focus of this survey, and they guide its organization:

1. Given a trained self-driving model, coming as a black box, how can we explain its behavior?
2. How can we design learning-based self-driving models that are more interpretable?

Regardless of driving considerations, these questions are asked and answered in many generic machine learning papers. Besides, some papers from the vision-based autonomous driving literature propose interpretable driving systems. In this survey, we bridge the gap between general X-AI methods that can be applied to the self-driving literature and driving-based approaches claiming explainability. In practice, we reorganize and cast the autonomous driving literature into an X-AI taxonomy that we introduce. Moreover, we detail generic X-AI approaches, some of which have not yet been used in the autonomous driving context, that can be leveraged to increase the explainability of self-driving models.

1.4 Positioning

Many works advocate for the need for explainable driving models (Ly and Akhloufi, 2020), and published reviews about explainability often mention autonomous driving as an important application for X-AI methods. However, there are only a few works on interpretable autonomous driving systems, and, to the best of our knowledge, there exists no survey focusing on the interpretability of autonomous driving systems. Our goal is to bridge this gap, to organize and detail existing methods, and to present challenges and perspectives for building more interpretable self-driving systems.

This survey is the first to organize and review self-driving models in the light of explainability. Its scope is thus different from that of papers reviewing self-driving models in general. For example, Janai et al (2020) review vision-based problems arising in self-driving research, Di and Shi (2020) provide a high-level review of the link between human and automated driving, Ly and Akhloufi (2020) review imitation-based self-driving models, Manzo et al (2020) survey deep learning models for predicting the steering angle, and Kiran et al (2020) review self-driving models based on deep reinforcement learning.

Besides, there exist reviews on X-AI, interpretability, and explainability in machine learning in general (Beaudouin et al, 2020; Gilpin et al, 2018; Adadi and Berrada, 2018; Das and Rad, 2020). Among others, Xie et al (2020) give a pedagogic review for non-expert readers, while Vilone and Longo (2020) offer the most exhaustive and complete review of the X-AI field. Moraffah et al (2020) focus on causal interpretability in machine learning. Moreover, there also exist reviews on explainability applied to decision-critical fields other than driving; this includes interpretable machine learning for medical applications (Tjoa and Guan, 2019; Fellous et al, 2019).

Overall, the goal of this survey is diverse, and we hope that it contributes to the following:

- Interpretability and explainability notions are clarified in the context of autonomous driving, depend[…]

[…] further explainability of self-driving systems. Section 6 presents the particular use-case of explaining a self-driving system by means of natural language justifications.

2 Explainability in the context of autonomous driving

This section contextualizes the need for interpretable driving models. In particular, we present the main motivations for requiring increased explainability in Section 2.1, and we define and organize explainability-related […]
