Self-Driving Car Autonomous System Overview

Total pages: 16
File type: PDF, size: 1020 KB

Self-Driving Car Autonomous System Overview
Industrial Electronics Engineering, Bachelor's Thesis
Author: Daniel Casado Herráez
Thesis Director: Javier Díaz Dorronsoro, PhD
Thesis Supervisor: Andoni Medina, MSc
San Sebastián - Donostia, June 2020

"When something is important enough, you do it even if the odds are not in your favor." - Elon Musk

To my Grandfather, Family, Friends & to my supervisor Javier Díaz

1. Contents
   1.1. Index
   1.2. Figures
   1.3. Tables
   1.4. Algorithms
2. Abstract
3. Introduction
   3.1. Motivation
   3.2. Objectives and Scope
   3.3. Introduction to Autonomous Vehicles
4. Project Timeline and Gantt Chart
5. Sensing
   5.1. LIDAR
      5.1.1. LIDAR Sensor
      5.1.2. Point Clouds
      5.1.3. Ground Plane and Noise Removal
      5.1.4. Iterative Closest Point (ICP)
   5.2. Camera
      5.2.1. Stereo Vision
   5.3. Radar
   5.4. GPS/GNSS
   5.5. IMU
   5.6. Odometry Sensors
   5.7. Sensor Fusion
6. Perception
   6.1. Lane Detection
   6.2. Object Detection
      6.2.1. Classical Object Detection
      6.2.2. Modern Object Detection: Convolutional Neural Networks
   6.3. SLAM (Simultaneous Localization and Mapping)
7. Control
   7.1. Car Model
      7.1.1. Car-like Model
      7.1.2. Bicycle Model
      7.1.3. Geometric Vehicle Model
      7.1.4. Kinematic Model
      7.1.5. Dynamic Model
      7.1.6. Frequency Domain
   7.2. Path Planning
      7.2.1. Motion Planning
      7.2.2. Motion Planning Algorithms
   7.3. Motion Control
      7.3.1. Optimization-Based Control
      7.3.2. Proportional Integral Derivative (PID) Controller
8. Formula Student Vehicles Overview
   8.1. AMZ
      8.1.1. Sensing
      8.1.2. Perception
      8.1.3. Control
   8.2. Beijing Institute of Technology
      8.2.1. Sensing
      8.2.2. Perception
      8.2.3. Control
   8.3. TUW Racing
      8.3.1. Sensing
      8.3.2. Perception
      8.3.3. Control
   8.4. University of Western Australia Formula SAE
      8.4.1. Sensing, Perception and Control
9. Conclusion
10. Appendixes
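Among the sensing topics listed above, section 5.1.4 covers Iterative Closest Point (ICP) registration of LIDAR point clouds. As a rough illustration of the idea only, here is a minimal point-to-point ICP sketch in Python; this is a generic textbook variant, not the thesis's implementation, and the iteration count is an arbitrary choice:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Align `source` (Nx3) to `target` (Mx3) by alternating nearest-neighbour
    matching with a closed-form rigid fit (Kabsch/SVD)."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbour correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)   # cross-covariance of centred points
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                 # apply the incremental transform
    return src
```

In a real LIDAR pipeline this step would follow the ground-plane and noise removal of section 5.1.3, since otherwise flat-ground points dominate the matching.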
Recommended publications
  • Seamless Texture Mapping of 3D Point Clouds
    Seamless Texture Mapping of 3D Point Clouds. Dan Goldberg; Mentor: Carl Salvaggio. Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY. November 25, 2014. Abstract: The two similar, quickly growing fields of computer vision and computer graphics give users the ability to immerse themselves in a realistic computer-generated environment by combining the ability to create a 3D scene from images with the texture-mapping process of computer graphics. The output of a popular computer vision algorithm, structure from motion (obtaining a 3D point cloud from images), is incomplete from a computer graphics standpoint; the final product should be a textured mesh. The goal of this project is to produce the most aesthetically pleasing output scene. To achieve this, auxiliary information from the structure-from-motion process was used to texture map a meshed 3D structure. 1 Introduction: The overall goal of this project is to create a textured 3D computer model from images of an object or scene. This problem combines two different yet similar areas of study. Computer graphics and computer vision are two quickly growing fields that take advantage of the ever-expanding abilities of our computer hardware. Computer vision focuses on a computer capturing and understanding the world. Computer graphics concentrates on accurately representing and displaying scenes to a human user. In the computer vision field, constructing three-dimensional (3D) data sets from images is becoming more common. Microsoft's Photosynth (Snavely et al., 2006) is one application which brought attention to the 3D scene reconstruction field. Many structure-from-motion algorithms are being applied to data sets of images in order to obtain a 3D point cloud (Koenderink and van Doorn, 1991; Mohr et al., 1993; Snavely et al., 2006; Crandall et al., 2011; Weng et al., 2012; Yu and Gallup, 2014; Agisoft, 2014).
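    The gap the abstract describes, from an SfM point cloud to the mesh that graphics applications expect, can be illustrated with the meshing step alone. A minimal sketch using Open3D's Poisson surface reconstruction; the file paths are placeholders, and this generic stand-in is not the project's actual texture-mapping pipeline:

    ```python
    import open3d as o3d

    # Placeholder path: an SfM point cloud exported, e.g., as PLY.
    pcd = o3d.io.read_point_cloud("cloud.ply")
    pcd.estimate_normals()   # Poisson reconstruction needs oriented normals
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    o3d.io.write_triangle_mesh("mesh.ply", mesh)
    ```

    Projecting the original photographs onto that mesh, which is the project's actual contribution, is a further step that needs the per-image camera poses recovered by structure from motion.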
  • Robust Tracking and Structure from Motion with Sampling Method
    ROBUST TRACKING AND STRUCTURE FROM MOTION WITH SAMPLING METHOD. A dissertation submitted in partial fulfillment of the requirements for the degree Doctor of Philosophy in Robotics, Robotics Institute, Carnegie Mellon University. Peng Chang, July 2002. Thesis Committee: Martial Hebert (Chair), Robert Collins, Jianbo Shi, Michael Black. This work was supported in part by the DARPA Tactical Mobile Robotics program under JPL contract 1200008 and by the Army Research Labs Robotics Collaborative Technology Alliance Contract DAAD 19-01-2-0012. © 2002 Peng Chang. Keywords: structure from motion, visual tracking, visual navigation, visual servoing. Abstract: Robust tracking and structure from motion (SFM) are fundamental problems in computer vision that have important applications for robot visual navigation and other computer vision tasks. Although the geometry of the SFM problem is well understood and effective optimization algorithms have been proposed, SFM is still difficult to apply in practice. The reason is twofold. First, finding correspondences, or "data association", is still a challenging problem in practice. For visual navigation tasks, the correspondences are usually found by tracking, which often assumes constancy in feature appearance and smoothness in camera motion so that the search space for correspondences is much reduced. Tracking is therefore intrinsically difficult under degenerate conditions, such as occlusions or abrupt camera motion, which violate the very assumptions tracking relies on. Second, the result of SFM is often observed to be extremely sensitive to errors in the correspondences, which are often caused by tracking failures. This thesis aims to tackle both problems simultaneously.
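    The tracking assumptions named in the abstract, appearance constancy and smooth camera motion, are exactly those behind KLT-style trackers. A minimal sketch with OpenCV's pyramidal Lucas-Kanade tracker; the frame paths are placeholders, and this generic tracker is not the thesis's sampling method:

    ```python
    import cv2

    prev_img = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
    next_img = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # Corners to track, then pyramidal Lucas-Kanade: the small search window and
    # few pyramid levels encode the "small, smooth motion" assumption.
    p0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_img, next_img, p0, None,
                                               winSize=(21, 21), maxLevel=3)
    tracked_old = p0[status.ravel() == 1]   # correspondences that survived tracking
    tracked_new = p1[status.ravel() == 1]
    ```

    When occlusion or abrupt motion breaks those assumptions, `status` drops to zero for the affected features, which is precisely the failure mode the thesis says propagates into SFM.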
  • Complete 2020 Annual Report (PDF)
    2020 REPORT TO THE COMMUNITY. Youth wade into Crissy Field Marsh during Project WISE in fall 2019 (see story, page 5). DEAR FRIEND OF THE PARKS, What a year to start as only the second CEO in the history of the Golden Gate National Parks Conservancy. Throughout this Report to the Community, we shine a light on our major accomplishments of 2019. We had no idea what was just around the corner. I came into this job believing strongly in the power of national parks to inspire and heal. The Bay Area shelter-in-place orders somehow strengthened that conviction. When we lose something, we miss it more than ever. And, we learn a powerful lesson in not taking it for granted. Crissy Field Center moves into new space at the Tunnel Tops in 2021. We'll improve those trails we missed so much. We'll welcome back our volunteers and visitors with open arms—or maybe a friendly wave. With our partners, our focus on making parks accessible for all—so that everyone feels welcome in parks and can enjoy the many health benefits of nature—is more important now than ever. That's why I'm so grateful for my first year at the helm of the Parks Conservancy. I've gotten to see the park spirit shine bright under the toughest conditions. The snapshot of 2019 you'll get in this report shows us what's possible for our long-term future, and I can't wait to get there. It may take some time to recover, but with your help, our parks will
  • Program Analysis Based Approaches to Ensure Security and Safety of Emerging Software Platforms
    Program Analysis Based Approaches to Ensure Security and Safety of Emerging Software Platforms, by Yunhan Jia. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science and Engineering) in The University of Michigan, 2018. Doctoral Committee: Professor Z. Morley Mao, Chair; Professor Atul Prakash; Assistant Professor Zhiyun Qian, University of California Riverside; Assistant Professor Florian Schaub. Yunhan Jia, [email protected], ORCID iD: 0000-0003-2809-5534. © Yunhan Jia 2018. All Rights Reserved. To my parents, my grandparents and Xiyu. Acknowledgements: Five years have passed since I moved into the Northwood cabin in Ann Arbor to chase my dream of obtaining a Ph.D. degree. Now, looking back from the end of this road, there are so many people I would like to thank, who are an indispensable part of this wonderful journey full of passion, love, learning, and growth. Foremost, I would like to gratefully thank my advisor, Professor Zhuoqing Morley Mao, for believing and investing in me. Her constant support was a definite factor in bringing this dissertation to its completion. Whenever I got lost or stuck in my research, she would always keep a clear big picture of things in mind and point me in the right direction. With her guidance and support over these years, I have grown from a rookie into a researcher who can independently conduct research. Besides my advisor, I would like to thank my thesis committee, Professor Atul Prakash, Professor Zhiyun Qian, and Professor Florian Schaub, for their insightful suggestions, comments, and support.
  • Euclidean Reconstruction of Natural Underwater Scenes Using Optic Imagery Sequence
    EUCLIDEAN RECONSTRUCTION OF NATURAL UNDERWATER SCENES USING OPTIC IMAGERY SEQUENCE. By Han Hu. B.S. in Remote Sensing Science and Technology, Wuhan University, 2005; M.S. in Photogrammetry and Remote Sensing, Wuhan University, 2009. Thesis submitted to the University of New Hampshire in partial fulfillment of the requirements for the degree of Master of Science in Ocean Engineering: Ocean Mapping, September 2015. This thesis has been examined and approved in partial fulfillment of the requirements for the degree of M.S. in Ocean Engineering: Ocean Mapping by: Thesis Director, Yuri Rzhanov, Research Professor, Ocean Engineering; Philip J. Hatcher, Professor, Computer Science; R. Daniel Bergeron, Professor, Computer Science; on May 26, 2015. Original approval signatures are on file with the University of New Hampshire Graduate School. Acknowledgements: I would like to thank all those people who made this thesis possible. My first debt of sincere gratitude must go to my advisor, Dr. Yuri Rzhanov. Throughout the entire three-year period, he offered his unreserved help, guidance, and support to me in both work and life. Without him, my thesis could not have been completed successfully; I could not have imagined having a better advisor. Besides my advisor in Ocean Engineering, I would like to give many thanks to the rest of my thesis committee and my advisors in the Department of Computer Science, Prof. Philip J. Hatcher and Prof. R. Daniel Bergeron, for their encouragement, insightful comments, and all their efforts in making this thesis successful. I would also like to express my thanks to the Center for Coastal and Ocean Mapping and the Department of Computer Science at the University of New Hampshire for offering me the opportunities and environment to work on exciting projects and study in useful courses from which I learned many advanced skills and knowledge.
  • Motion Estimation for Self-Driving Cars with a Generalized Camera
    2013 IEEE Conference on Computer Vision and Pattern Recognition. Motion Estimation for Self-Driving Cars With a Generalized Camera. Gim Hee Lee¹, Friedrich Fraundorfer², Marc Pollefeys¹. ¹Department of Computer Science, ETH Zürich, Switzerland; ²Faculty of Civil Engineering and Surveying, Technische Universität München, Germany. {glee@student, pomarc@inf}.ethz.ch, [email protected]. Abstract: In this paper, we present a visual ego-motion estimation algorithm for a self-driving car equipped with a close-to-market multi-camera system. By modeling the multi-camera system as a generalized camera and applying the non-holonomic motion constraint of a car, we show that this leads to a novel 2-point minimal solution for the generalized essential matrix where the full relative motion including metric scale can be obtained. We provide the analytical solutions for the general case with at least one inter-camera correspondence and a special case with only intra-camera correspondences. We show that up to a maximum of 6 solutions exist for both cases. We identify the existence of degeneracy when the car undergoes straight motion in the special case with only intra-camera correspondences, where the scale becomes unobservable, and provide a practical alternative solution. Our formulation can be efficiently implemented within RANSAC for robust estimation. (Figure 1: Car with a multi-camera system consisting of 4 cameras: front, rear, and side cameras in the mirrors.) [...] already available in some COTS cars, we focus this paper on using a multi-camera setup for ego-motion estimation, which is one of the key features for self-driving cars. A multi-camera setup can be modeled as a generalized camera.
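    The last sentence of the abstract is the part that generalizes: any minimal solver can be wrapped in a RANSAC loop. A sketch of that wrapper follows; the 2-point solver and residual function are hypothetical stand-ins, and the paper's actual solver for the generalized essential matrix is not reproduced here:

    ```python
    import numpy as np

    def ransac_motion(corrs, solve_two_point, residual, thresh=1e-2, iters=500):
        """Generic RANSAC over 2-point minimal samples. `solve_two_point` returns
        a list of candidate motions (the paper reports up to 6 per sample);
        `residual` scores one correspondence against a candidate motion."""
        rng = np.random.default_rng(0)
        best_model, best_inliers = None, 0
        for _ in range(iters):
            i, j = rng.choice(len(corrs), size=2, replace=False)
            for model in solve_two_point([corrs[i], corrs[j]]):
                errs = np.array([residual(model, c) for c in corrs])
                n_in = int((errs < thresh).sum())
                if n_in > best_inliers:       # keep the motion with most inliers
                    best_model, best_inliers = model, n_in
        return best_model, best_inliers
    ```

    The appeal of a 2-point minimal solution is visible in the loop itself: the smaller the sample, the fewer iterations RANSAC needs to draw an all-inlier sample with high probability.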
  • Structure from Motion Using a Hybrid Stereo-Vision System François Rameau, Désiré Sidibé, Cédric Demonceaux, David Fofi
    Structure from motion using a hybrid stereo-vision system. François Rameau, Désiré Sidibé, Cédric Demonceaux, David Fofi. Université de Bourgogne, Le2i UMR 5158 CNRS, 12 rue de la fonderie, 71200 Le Creusot, France. The 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2015), October 28-30, 2015, KINTEX, Goyang City, South Korea. HAL Id: hal-01238551, https://hal.archives-ouvertes.fr/hal-01238551, submitted on 5 Dec 2015. Abstract: This paper is dedicated to robotic navigation using an original hybrid-vision setup combining the advantages offered by two different types of camera. This couple of cameras is composed of one perspective camera associated with one fisheye camera. [...] vision system and review the previous works in 3D reconstruction using hybrid vision systems. In Section 3 we describe our SFM framework, adapted for a heterogeneous vision system, which does not need inter-camera correspondences.
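    For context, the classical calibrated two-view building block that such SFM frameworks extend looks like this in OpenCV. The point arrays and the intrinsics K are placeholders, and a fisheye camera, as used in the paper, would need a different projection model; this is a generic sketch, not the paper's method:

    ```python
    import cv2
    import numpy as np

    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics
    pts1 = np.load("pts1.npy")   # placeholder: Nx2 matched pixel coordinates
    pts2 = np.load("pts2.npy")

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                          # relative pose, scale unknown
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4xN homogeneous points
    X = (X[:3] / X[3]).T                                # Euclidean 3D structure
    ```

    As with any single-camera two-view reconstruction, the translation (and hence the structure) is recovered only up to scale, which is one motivation for multi-camera and hybrid setups.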
  • Cybersecurity for Connected Cars Exploring Risks in 5G, Cloud, and Other Connected Technologies
    Cybersecurity for Connected Cars: Exploring Risks in 5G, Cloud, and Other Connected Technologies. Numaan Huq, Craig Gibson, Vladimir Kropotov, Rainer Vosseler (Trend Micro). Contents: The Concept of Connected Cars (p. 4); Research on Remote Vehicle Attacks (p. 7); Cybersecurity Risks of Connected Cars (p. 13).
  • Platform Mergers and Antitrust (Bruegel Working Paper)
    A service of zbw Leibniz-Informationszentrum Wirtschaft (Leibniz Information Centre for Economics), EconStor (www.econstor.eu). Working Paper: Platform mergers and antitrust. Suggested citation: Parker, Geoffrey; Petropoulos, Georgios; Van Alstyne, Marshall (2021): Platform mergers and antitrust, Bruegel Working Paper, No. 2021/01, Bruegel, Brussels. This version is available at: http://hdl.handle.net/10419/237621 (updated version, 15 June 2021). Abstract: Platform ecosystems rely on economies of scale, data-driven economies of scope, high-quality algorithmic systems, and strong network effects that frequently promote winner-takes-most markets.
  • Autonomous Vehicle (AV) Policy Framework, Part I: Cataloging Selected National and State Policy Efforts to Drive Safe AV Development
    Autonomous Vehicle (AV) Policy Framework, Part I: Cataloging Selected National and State Policy Efforts to Drive Safe AV Development. Insight Report, October 2020. Cover: Reuters/Brendan McDermid; inside: Reuters/Stephen Lam, Reuters/Fabian Bimmer, Getty Images/Galimovma79, Getty Images/IMNATURE, Reuters/Edgar Su. Contents: Foreword, Miri Regev MK, Minister of Transport & Road Safety (p. 3); Foreword, Dr. Ami Appelbaum, Chief Scientist and Chairman of the Board of Israel Innovation Authority, and Murat Sunmez, Managing Director, Head of the Centre for the Fourth Industrial Revolution Network, World Economic Forum (p. 4); Executive Summary (p. 5); Key Terms (p. 8); 1. Introduction (p. 10); 2. What Is an Autonomous Vehicle? (p. 15); 3. Israel's AV Policy (p. 17); 4. National and State AV Policy Comparative Review (p. 20): 4.1 National and State AV Policy Summary (p. 20), 4.1.1 Singapore's AV Policy (p. 20), 4.1.2 The United Kingdom's AV Policy (p. 25), 4.1.3 Australia's AV Policy (p. 30), 4.1.4 The United States' AV Policy in Two Selected States: California and Arizona (p. 34), 4.2 A Comparative Review of Selected AV Policy Elements (p. 43); 5. Synthesis and Recommendations (p. 44); Acknowledgements (p. 46); Appendix A: Key Principles of Driverless AV Pilots Legislation Draft (p. 47); Appendix B: Analysis of American Autonomous Vehicle Companies' Safety Reports (p. 51); Appendix C: A Comparative Review of Selected AV Policy Elements (p. 62); Endnotes (p. 72). © 2020 World Economic Forum. All rights reserved.
  • Noise Models in Feature-Based Stereo Visual Odometry
    Noise Models in Feature-based Stereo Visual Odometry. Pablo F. Alcantarilla and Oliver J. Woodford. Abstract: Feature-based visual structure and motion reconstruction pipelines, common in visual odometry and large-scale reconstruction from photos, use the location of corresponding features in different images to determine the 3D structure of the scene, as well as the camera parameters associated with each image. The noise model, which defines the likelihood of the location of each feature in each image, is a key factor in the accuracy of such pipelines, alongside the optimization strategy. Many different noise models have been proposed in the literature; in this paper we investigate the performance of several. We evaluate these models specifically w.r.t. stereo visual odometry, as this task is both simple (camera intrinsics are constant and known; geometry can be initialized reliably) and has datasets with ground truth readily available (KITTI Odometry and the New Tsukuba Stereo Dataset). Our evaluation shows that noise models which are more adaptable to the varying nature of noise generally perform better. (Fig. 1: Given a set of 2D feature correspondences z_i (red circles) between two images, the inverse problem to be solved is to estimate the camera poses and the 3D locations of the features Y_i. The true 3D locations project onto each image plane to give a true image location (blue circles). The noise model represents a probability distribution over the error e between the true and measured feature locations. The size of the ellipses depends on the uncertainty in the feature locations.) I. INTRODUCTION: Inverse problems, given a set of observed measurements, [...]
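    The role the noise model plays can be made concrete with two candidate negative log-likelihoods for the feature-location error e: a Gaussian, whose quadratic cost lets a single gross outlier dominate, and a heavy-tailed Cauchy cost, whose influence saturates. These are generic textbook models chosen for illustration, not necessarily the ones evaluated in the paper:

    ```python
    import numpy as np

    def gaussian_nll(e, sigma=1.0):
        """Isotropic Gaussian: cost grows quadratically with the residual."""
        return 0.5 * (e ** 2).sum(axis=-1) / sigma ** 2

    def cauchy_nll(e, sigma=1.0):
        """Heavy-tailed Cauchy/Lorentzian: cost flattens for large residuals."""
        return np.log1p((e ** 2).sum(axis=-1) / sigma ** 2)

    e = np.array([[0.3, -0.2],    # a typical inlier residual (pixels)
                  [8.0,  5.0]])   # a gross outlier, e.g. a mistracked feature
    print(gaussian_nll(e))        # outlier contributes ~44.5, dominating the cost
    print(cauchy_nll(e))          # outlier contributes ~4.5, bounded influence
    ```

    A model that adapts its tails (or its sigma) to the observed residuals penalizes outliers less, which is consistent with the paper's finding that more adaptable noise models generally perform better.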