Real-Time Collision Risk Estimation Based on Pearson's Correlation

Total Pages: 16

File Type: PDF, Size: 1020 KB

Real-time Collision Risk Estimation based on Pearson's Correlation Coefficient

Arthur Miranda Neto, Alessandro Corrêa Victorino, Isabelle Fantoni, Janito Vaqueiro Ferreira

To cite this version: Arthur Miranda Neto, Alessandro Corrêa Victorino, Isabelle Fantoni, Janito Vaqueiro Ferreira. Real-time Collision Risk Estimation based on Pearson's Correlation Coefficient. IEEE Workshop on Robot Vision (WORV 2013), Jan 2013, Clearwater Beach, FL, United States. pp. 40-45, 10.1109/WORV.2013.6521911. hal-00861087

HAL Id: hal-00861087, https://hal.archives-ouvertes.fr/hal-00861087. Submitted on 11 Sep 2013. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Real-Time Collision Risk Estimation based on Pearson's Correlation Coefficient
A. Miranda Neto (1), A. Corrêa Victorino (2), I. Fantoni (2) and J. V. Ferreira (1)
(1) Autonomous Mobility Laboratory (LMA) at FEM/UNICAMP, Brazil
(2) Heudiasyc Laboratory at UTC - CNRS UMR 7253, France

Abstract

The perception of the environment is a major issue in autonomous robots. In our previous works, we have proposed a visual perception system based on an automatic image discarding method as a simple solution to improve the performance of a real-time navigation system. In this paper, we place ourselves in the obstacle avoidance context for vehicles in dynamic and unknown environments, and we propose a new method for Collision Risk Estimation (CRE) based on Pearson's Correlation Coefficient (PCC). Applying the PCC to real-time CRE has not been done before, making the concept unique. This paper provides a novel way of calculating collision risk and applying it to obstacle avoidance using the PCC. This real-time perception system has been evaluated on real data obtained by our intelligent vehicle.

1. Introduction

Lately, several applications for the control of autonomous and (semi-)autonomous vehicles have been developed. The challenge of constructing robust methods, in most cases for real-time systems, is far from being met, as can be observed from the great number of studies published in the last few years.

For military or civil purposes, some of these applications include: the Grand Challenge [1] and Urban Challenge [2]; Advanced Driver Assistance Systems (ADAS) [3]; autonomous perception systems [4], [5]; and aerial robots [6]. The goal of the Grand Challenge was the development of an autonomous robot capable of traversing unrehearsed, off-road terrain [1]. For the Urban Challenge, the goal was to drive a car autonomously in a city environment, with crossings and with static and dynamic obstacles [2]. On the other hand, driven by the high number of vehicles all over the world, ADAS systems emerged to help the driver in the driving task [3]. Examples of such systems are: autonomous cruise control, laser-based systems, radar-based systems, collision avoidance systems and pre-crash systems [7].

In all these cases, the important factors are the variety and complexity of environments and situations. These intelligent vehicle developments have a common issue: providing the vehicle platform with the capability of perceiving and interacting with its neighbouring environment.

The importance of motion in visual processing cannot be overstated [8]. The real nature of the information used by humans to evaluate time-to-contact is still an open question. Humans adapt their motion to avoid collisions in order to preserve an admissible time-to-contact. Velocity, distance, and time are intrinsically linked together [9].

In 1895, Karl Pearson published the Pearson's Correlation Coefficient (PCC) [10]. Pearson's method is widely used in statistical analysis, pattern recognition and image processing. Applications in the latter include the comparison of two images for image registration purposes, object recognition, and disparity measurement [11]. Based on Pearson's method, we have proposed a discarding criterion [12], [13] as a simple solution to improve the performance of a real-time navigation system by exploiting the temporal coherence between consecutive frames; it also automatically determines the reference frame during real-time execution. In this paper, a real-time perception problem is applied to intelligent vehicles (human-operated or autonomous systems). Based on the variation of the PCC, we estimate the collision risk in dynamic and unknown environments using a single monocular system.
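To make the temporal-coherence idea concrete, the minimal Python sketch below skips frames that are strongly correlated with the last processed frame. It only illustrates the general principle behind the discarding criteria of [12], [13], not the exact criterion from those works; the threshold value, the grayscale conversion and the function names are assumptions made for this example.

    import numpy as np

    DISCARD_THRESHOLD = 0.95  # illustrative value, not taken from [12], [13]

    def grayscale(frame):
        """Collapse an H x W x 3 color frame to a single intensity channel."""
        return frame.mean(axis=2) if frame.ndim == 3 else frame

    def should_discard(frame, reference):
        """Discard a frame that is strongly correlated with the reference frame."""
        x = grayscale(frame).ravel()
        y = grayscale(reference).ravel()
        pcc = np.corrcoef(x, y)[0, 1]   # Pearson's Correlation Coefficient
        return pcc > DISCARD_THRESHOLD

    # Usage sketch: run the expensive perception step only on frames that
    # differ enough from the last processed (reference) frame.
    # reference = first_frame
    # for frame in video_stream:
    #     if not should_discard(frame, reference):
    #         process(frame)
    #         reference = frame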
In Section 2 we present a review of previous works. Sections 3, 4 and 5 introduce the Pearson's Correlation Coefficient, the Otsu thresholding method and the region-merging algorithm. Section 6 describes the proposed Collision Risk Estimation methodology. The results are presented in Section 7 and the conclusions are given in Section 8.

2. Related Works

The perception layer uses many types of sensors [1], [2], [14], including ultrasonic sensors, laser rangefinders, radar and cameras, which in many cases may be limited in scope and subject to noise. These sensors are not perfect: ultrasonic sensors are cheap but suffer from specular reflections and are limited in range, while laser rangefinders and radar provide better resolution but are more complex and expensive [4].

Vision-based sensors are passive sensors, and image scanning is performed fast enough for Intelligent Transportation Systems. However, vision sensors are less robust than millimeter-wave radars in foggy, night, or direct-sunshine conditions [15]. All range-based obstacle detection systems have difficulty detecting small or flat objects on the ground, and range sensors are also unable to distinguish between different types of ground surfaces [4]. Notwithstanding, the main problem with active sensors is interference among sensors of the same type; hence, foreseeing a massive and widespread use of these sensing agents, passive sensors gain key advantages [15]. For example, the monocular vision contribution to the DARPA Grand Challenge [16] showed that the reach of the lasers is approximately 22 meters, whereas the monocular vision module often looks 70 meters ahead.

On the safety front, progressive safety systems will be developed through the manufacturing of an "intelligent bumper" peripheral to the vehicle, answering new features such as blind spot detection and frontal and lateral pre-crash. The target cost of ADAS functions has to be much lower than that of current Adaptive Cruise Control (500 Euros) [17]. In the obstacle avoidance context, collision warning algorithms typically issue a warning when the current range to an object is less than a critical warning distance, where safety can be measured in terms of the minimum time-to-collision (TTC) [18]. Several techniques to calculate the TTC are presented in the literature [19], [20], [21], [22], [23]. For example, results from the fusion of radar and vision have demonstrated the advantages of both sensors in improving collision-sensing accuracy [24]: the radar gives accurate range and range-rate measurements, while the vision solves the angular accuracy problem of the radar; however, this fusion solution is costly [21]. Moreover, many authors focus on algorithms that do not meet real-time requirements such as low computational cost [19]. Measuring distances is a non-native task for a monocular camera system [19]. Nevertheless, TTC (time-to-contact) estimation is an approach to visual collision detection from an image sequence; it is a biologically inspired method that does not require scene reconstruction or 3D depth estimation [20]. Indeed, TTC is an interesting and well-studied research topic [19]. Optical flow may be used to compute TTC [8], [25], [26]; however, computing TTC from an optical flow has proven impractical for real applications in dynamic environments [22]. Additionally, gradient-based methods can be used with a certain degree of confidence in environments such as indoors, where the lighting conditions can be controlled, but they are computationally expensive [27].

Inspired by the TTC approaches, this paper presents a novel approach to obtain Collision Risk Estimation (CRE) based on Pearson's Correlation Coefficient (PCC).

3. Pearson's Correlation Coefficient

According to [29], an empirical and theoretical development that established regression and correlation as statistical topics was presented by Sir Francis Galton in 1885. In 1895, Karl Pearson published the PCC [10]. It is widely used in statistical analysis, pattern recognition and image processing. It is described by Eq. (1) [11]:

r_1 = \frac{\sum_i (x_i - x_m)(y_i - y_m)}{\sqrt{\sum_i (x_i - x_m)^2 \, \sum_i (y_i - y_m)^2}}    (1)

where x_i is the intensity of the i-th pixel in image 1, y_i is the intensity of the i-th pixel in image 2, x_m is the mean intensity of image 1, and y_m is the mean intensity of image 2.
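As a concrete reading of Eq. (1), the sketch below computes r_1 between two equally sized grayscale images with NumPy. The function name pearson_cc and the zero-denominator convention for constant images are assumptions of this example, not part of the paper.

    import numpy as np

    def pearson_cc(img1, img2):
        """Pearson's Correlation Coefficient, Eq. (1), between two equally sized images."""
        x = img1.astype(np.float64).ravel()
        y = img2.astype(np.float64).ravel()
        xd = x - x.mean()                       # x_i - x_m
        yd = y - y.mean()                       # y_i - y_m
        denom = np.sqrt((xd ** 2).sum() * (yd ** 2).sum())
        if denom == 0.0:                        # one image has constant intensity
            return 0.0                          # convention chosen for this sketch
        return float((xd * yd).sum() / denom)

    # Two identical images give r_1 = 1; an image and its negative give r_1 = -1.
    a = np.random.default_rng(0).integers(0, 256, size=(240, 320))
    print(pearson_cc(a, a), pearson_cc(a, 255 - a))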
4. Otsu Thresholding Method

4.1 Image pre-processing

We use a color or gray-level image. If the image is colored, then in order to exploit the most important information in the color image, the candidate color channel that is dominant in a certain color space is selected to generate the histogram image [30]. It is described by Eq. (2):

C_c = \arg\max_{\{R,G,B\}} (N_R, N_G, N_B)    (2)

where C_c denotes the dominant color channel in a certain referenced region.

4.2 Otsu thresholding method (OTM): description

Region recognition can be handled by popular thresholding algorithms such as Maximum Entropy, Invariant Moments and the OTM. For road detection, the OTM was used to overcome the negative impacts caused by environmental variation, because it provides a more satisfactory performance in image segmentation [30]. Furthermore, some authors consider the OTM one of the best choices for real-time applications in machine vision [31], [32].
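The following sketch, offered under stated assumptions, makes Eq. (2) and the Otsu step concrete: N_R, N_G and N_B are taken to be the summed channel intensities inside the reference region (the excerpt above does not define them precisely), and the Otsu threshold is found by maximizing the between-class variance of the 8-bit gray-level histogram.

    import numpy as np

    def dominant_channel(region):
        """Eq. (2): pick the channel (0=R, 1=G, 2=B) with the largest response N_c.
        N_c is assumed here to be the summed intensity of channel c in the region."""
        sums = region.reshape(-1, 3).sum(axis=0)        # (N_R, N_G, N_B)
        return int(np.argmax(sums))

    def otsu_threshold(gray):
        """Otsu's method: the threshold that maximizes between-class variance."""
        hist = np.bincount(gray.ravel().astype(np.uint8), minlength=256).astype(np.float64)
        prob = hist / hist.sum()
        omega = np.cumsum(prob)                          # background class probability
        mu = np.cumsum(prob * np.arange(256))            # cumulative mean
        mu_t = mu[-1]                                    # global mean
        denom = omega * (1.0 - omega)
        denom[denom == 0] = np.nan                       # undefined at the histogram ends
        sigma_b2 = (mu_t * omega - mu) ** 2 / denom      # between-class variance
        return int(np.nanargmax(sigma_b2))

    # Usage sketch: segment the dominant channel of an RGB road image.
    rgb = np.random.default_rng(1).integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
    c = dominant_channel(rgb)
    t = otsu_threshold(rgb[:, :, c])
    binary = rgb[:, :, c] > t
    print(c, t, binary.mean())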
Recommended publications
  • Optimized Route Network Graph As Map Reference for Autonomous Cars Operating on German Autobahn
    Optimized Route Network Graph as Map Reference for Autonomous Cars Operating on German Autobahn. Paul Czerwionka, Miao Wang. Artificial Intelligence Group, Institute of Computer Science, Freie Universität Berlin, Germany. Abstract: This paper describes several optimization techniques used to create an adequate route network graph for autonomous cars as a map reference for driving on German autobahn or similar highway tracks. We have taken the Route Network Definition File Format (RNDF) specified by DARPA and identified multiple flaws of the RNDF for creating digital maps for autonomous vehicles. Thus, we introduce various enhancements to it to form a digital map graph called RNDFGraph, which is well suited to map almost any urban transportation infrastructure. We will also outline and show results of fast optimizations to reduce the graph size. The RNDFGraph has been used for path-planning and trajectory evaluation by the behavior module of our two autonomous cars "Spirit of Berlin" and "MadeInGermany". … mapping errors are more crucial when driving at high speeds, performing lane changes or overtaking maneuvers. A universal format to define such digital road maps has been issued by the Defense Advanced Research Projects Agency (DARPA) for its autonomous vehicle competitions in desert terrain in 2004 and 2005, and in urban environments in 2007. The Grand Challenges 2004 and 2005 used a route definition data format (RDDF), which consists of a list of longitudes, latitudes, speed limits, and corridor widths that define the course boundary. It was later updated to a route network definition file (RNDF) for the Urban Challenge, including stop signs, parking spots and intersections.
  • A Voice and Pointing Gesture Interaction System for Supporting
    Clemson University, TigerPrints, All Dissertations, 5-2017. A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars. Pablo Sauras-Perez, Clemson University, [email protected]. Recommended Citation: Sauras-Perez, Pablo, "A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars" (2017). All Dissertations. 1943. https://tigerprints.clemson.edu/all_dissertations/1943. A Dissertation Presented to the Graduate School of Clemson University in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy, Automotive Engineering, by Pablo Sauras-Perez, May 2017. Accepted by: Dr. Pierluigi Pisu, Committee Chair; Dr. Joachim Taiber; Dr. Yunyi Jia; Dr. Mashrur Chowdhury. ABSTRACT: Autonomous cars are expected to improve road safety, traffic and mobility. It is projected that in the next 20-30 years fully autonomous vehicles will be on the market. The advancement of the research and development of this technology will allow the disengagement of humans from the driving task, which will be the responsibility of the vehicle intelligence. In this scenario, new vehicle interior designs are proposed, enabling more flexible human-vehicle interactions inside them. In addition, as some important stakeholders propose, control elements such as the steering wheel and the accelerator and brake pedals may not be needed any longer.
  • Study on Electrical Vehicle and Its Scope in Future
    © 2018 JETIR, February 2018, Volume 5, Issue 2, www.jetir.org (ISSN 2349-5162). Study on Electrical Vehicle and Its Scope in Future. Divya Sharma, Department of Electrical Engineering, Galgotias University, Yamuna Expressway, Greater Noida, Uttar Pradesh. ABSTRACT: The cars are based on advanced programming to give the individual driver relaxed driving. In the automobile industry, various opportunities are known which make a car automatic. Google, the largest network, has been working on self-driving cars since 2010, and an innovative adaptation to present a mechanised vehicle in a radically new model continues to be launched. The paper also addresses the modelling, calibration and study of changes in city morphology with autonomous vehicles (AVs). Transport systems are used to travel between outlying homes and intermediate workplaces and require transportation space. The main benefits of an autonomous vehicle are that daytime parking areas can be freed for other uses, mitigating metropolitan ground use, and that the cost per kilometre of driving is reduced. Researchers are interested in the area of automotive automation, where most applications are found in different places. The technology in this research paper will help to understand the past, present and future technologies used in the automotive field to render automotive automation. KEYWORDS: Autonomous Vehicle, Linriccan Wonder, Robotic Van, Self-Driving Car, Tesla Corporation. INTRODUCTION: Users all over the world are delighted with the introduction of an automated vehicle for the general …
  • Multimodal Machine Learning for Intelligent Mobility
    Multimodal Machine Learning for Intelligent Mobility, by Aubrey James Roche. Doctoral Thesis submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy, Institute of Digital Technologies, Loughborough University London. © Aubrey James Roche, May 2020. Dedicated to my partner Sádhbh & my son Ruadhán – Nunc scio quid sit amor, semper gratiam habebo ("Now I know what love is; I will always be grateful"). Ever tried. Ever failed. No matter. Try Again. Fail again. Fail better. (Samuel Beckett) Certificate of Originality, Thesis Access Conditions and Deposit Agreement: Students should consult the guidance notes on the electronic thesis deposit and the access conditions in the University's Code of Practice on Research Degree Programmes. Author: Jamie Roche. Title: Multimodal Machine Learning for Intelligent Mobility. I, Jamie Roche, 2 Herbert Tec, Herbert Road, Bray, Co Wicklow, Ireland, "the Depositor", would like to deposit Multimodal Machine Learning for Intelligent Mobility, hereafter referred to as the "Work", once it has successfully been examined, in Loughborough University Research Repository. Status of access: OPEN / RESTRICTED / CONFIDENTIAL. Moratorium Period: 3 years 3 months, ending 13 …
  • Chapter 8. Automated Driving
    CHAPTER 8. Automated Driving. Jorge Villagra(1), Leopoldo Acosta(2), Antonio Artuñedo(1), Rosa Blanco(3), Miguel Clavijo(4), Carlos Fernández(5), Jorge Godoy(1), Rodolfo Haber(1), Felipe Jiménez(4), Carlos Martínez(4), José E. Naranjo(4), Pedro J. Navarro(5), Ana Paúl(3) and Francisco Sánchez(3). (1) CSIC, Madrid, Spain; (2) Universidad de La Laguna, Santa Cruz de Tenerife, Spain; (3) CTAG - Centro Tecnológico de Automoción de Galicia, Porriño, Spain; (4) Universidad Politécnica de Madrid, Madrid, Spain; (5) Universidad Politécnica de Cartagena, Cartagena, Spain. Contents: 8.1 Fundamentals; 8.2 Technology Bricks (8.2.1 Control Architectures, 8.2.2 Situation Awareness and Risk Assessment, 8.2.3 Decision Making, 8.2.4 Driver-Vehicle Interaction, 8.2.5 Motion Planning, 8.2.6 Vehicle Control); 8.3 Cooperative Automated Driving (8.3.1 Platooning, 8.3.2 Urban Road Transport); 8.4 Verification and Validation; 8.5 Main Initiatives and Applications (8.5.1 Prototypes, 8.5.2 Projects, 8.5.3 Special Applications); 8.6 Socioregulatory Aspects (8.6.1 Legal Pathways, 8.6.2 Ethical Aspects); References. 8.1 FUNDAMENTALS. Following the Webster definition, an autonomous vehicle "has the right or power of self-government," "exists or acts separately from other things or people," "is undertaken or carried on without outside control," and "responds, reacts, or develops independently of the whole." If we take these words at face value, it is therefore clear that it is a common mistake to confuse this definition with the kind of systems currently appearing in the mass media, where the driver has very little intervention. (Intelligent Vehicles, © 2018 Elsevier Inc. DOI: http://dx.doi.org/10.1016/B978-0-12-812800-8.00008-4. All rights reserved.)
  • Autonomous Cars: Past, Present and Future
    Autonomous Cars: Past, Present and Future. A Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology. Keshav Bimbraw, Mechanical Engineering Department, Thapar University, P.O. Box 32, Patiala, Punjab, India. Keywords: Autonomous Cars, Autonomous Vehicles, Cars, Mechatronics Systems, Intelligent Transportation Technologies and Systems, Automation. Abstract: The field of autonomous automation is of interest to researchers, and much has been accomplished in this area, of which this paper presents a detailed chronology. This paper can help one understand the trends in autonomous vehicle technology for the past, present, and future. We see a drastic change in autonomous vehicle technology since the 1920s, when the first radio-controlled vehicles were designed. In the subsequent decades, we see fairly autonomous electric cars powered by embedded circuits in the roads. By the 1960s, autonomous cars with similar electronic guide systems came into the picture. The 1980s saw vision-guided autonomous vehicles, which were a major milestone in technology, and to date we use similar or modified forms of vision- and radio-guided technologies. Various semi-autonomous features introduced in modern cars, such as lane keeping, automatic braking and adaptive cruise control, are based on such systems. Extensive network-guided systems in conjunction with vision-guided features are the future of autonomous vehicles. It is predicted that most companies will launch fully autonomous vehicles by the advent of the next decade. The future of autonomous vehicles is an ambitious era of safe and comfortable transportation. 1 INTRODUCTION. Consumers all around the whole world are enthusiastic about the advent of autonomous cars for the public. … 'Linriccan Wonder'. Significant advances in autonomous car technology have been made after the advent of the vision-guided Mercedes-Benz robotic …
  • Robust Horizon Finding Algorithm for Real-Time Autonomous Navigation
    Robust Horizon Finding Algorithm for Real-Time Autonomous Navigation based on Monocular Vision. Arthur de Miranda Neto, Alessandro Correa Victorino, Isabelle Fantoni, Douglas Eduardo Zampieri. To cite this version: Arthur de Miranda Neto, Alessandro Correa Victorino, Isabelle Fantoni, Douglas Eduardo Zampieri. Robust Horizon Finding Algorithm for Real-Time Autonomous Navigation based on Monocular Vision. 14th IEEE International Conference on Intelligent Transportation Systems (ITSC 2011), Oct 2011, Washington DC, United States. pp. 532-537. hal-00657425. HAL Id: hal-00657425, https://hal.archives-ouvertes.fr/hal-00657425. Submitted on 6 Jan 2012. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. A. Miranda Neto, A. Corrêa Victorino, I. Fantoni and D. E. Zampieri. Abstract: Navigation of an Autonomous Vehicle is based on its interaction with the environment, through information acquired by sensors. The perception of the environment is a major issue in autonomous and (semi)-autonomous systems. This work presents the embedded real-time visual perception problem applied to an experimental platform. … techniques [8]. Besides their intrinsic higher computational complexity, caused by a significant increment in the amount of data to be processed, the most common vision systems (such as the processing of non-monocular images) must also be robust enough to tolerate noise caused by …
  • Automated Vehicles and Regulation: Panel Discussion
    TRANSPORTATION RESEARCH Number E-C232 April 2018 USERS. VEHICLES. INFRASTRUCTURE. TRANSPORTATION RESEARCH BOARD 2018 EXECUTIVE COMMITTEE OFFICERS Chair: Katherine F. Turnbull, Executive Associate Director and Research Scientist, Texas A&M Transportation Institute, College Station Vice Chair: Victoria A. Arroyo, Executive Director, Georgetown Climate Center; Assistant Dean, Centers and Institutes; and Professor and Director, Environmental Law Program, Georgetown University Law Center, Washington, D.C. Division Chair for NRC Oversight: Susan Hanson, Distinguished University Professor Emerita, School of Geography, Clark University, Worcester, Massachusetts Executive Director: Neil J. Pedersen, Transportation Research Board TRANSPORTATION RESEARCH BOARD 2017–2018 TECHNICAL ACTIVITIES COUNCIL Chair: Hyun-A C. Park, President, Spy Pond Partners, LLC, Arlington, Massachusetts Technical Activities Director: Ann M. Brach, Transportation Research Board David Ballard, Senior Economist, Gellman Research Associates, Inc., Jenkintown, Pennsylvania, Aviation Group Chair Coco Briseno, Deputy Director, Planning and Modal Programs, California Department of Transportation, Sacramento, State DOT Representative Anne Goodchild, Associate Professor, University of Washington, Seattle, Freight Systems Group Chair George Grimes, CEO Advisor, Patriot Rail Company, Denver, Colorado, Rail Group Chair David Harkey, Director, Highway Safety Research Center, University of North Carolina, Chapel Hill, Safety and Systems Users Group Chair Dennis Hinebaugh, Director, National
  • Keeping Autonomous Driving Alive
    Göde Both, Keeping Autonomous Driving Alive: An Ethnography of Visions, Masculinity and Fragility. Budrich Academic Press GmbH, Opladen, Berlin & Toronto, 2020. © 2020. This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 license (CC-BY-SA 4.0). It permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you share under the same license, give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/. This book is available as a free download from www.barbara-budrich.net (https://doi.org/10.3224/96665009). A paperback version is available at a charge. The page numbers of the open access edition correspond with the paperback edition. This dissertation was accepted by the Faculty of Human Sciences of the University of Cologne in November 2019. A CIP catalogue record for this book is available from Die Deutsche Bibliothek (The German Library). www.budrich-academic-press.de. ISBN 978-3-96665-009-0, eISBN 978-3-96665-983-3, DOI 10.3224/96665009.
  • University of California Santa Cruz Tightly-Coupled Lidar
    UNIVERSITY OF CALIFORNIA SANTA CRUZ. TIGHTLY-COUPLED LIDAR AND CAMERA FOR AUTONOMOUS VEHICLES. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Electrical Engineering by Mohammad Hossein Daraei, June 2018. The Dissertation of Mohammad Hossein Daraei is approved: Professor Hamid Sadjadpour, Chair; Professor Yu Zhang; Professor Roberto Manduchi; Tyrus Miller, Vice Provost and Dean of Graduate Studies. Copyright © by Mohammad Hossein Daraei 2018. Table of Contents: List of Figures; List of Tables; Abstract; Dedication; Acknowledgments; 1 Introduction; 2 Environment Perception for Autonomous Driving (2.1 Literature on Environment Perception, 2.2 Measuring the Surroundings, 2.3 Multi-modal Spatio-temporal Association, 2.3.1 Continuous Mapping Between Sensor Spaces, 2.3.2 Discrete Multi-modal Measurement Association, 2.4 Distance-aware Processing Resolution, 2.5 Proposed Fusion Architecture); 3 Velocity Estimation (3.1 Related Work, 3.2 Problem Formulation, 3.3 Proposed Motion Estimation and Depth Reconstruction, 3.3.1 Energy Function Derivation, 3.3.2 LiDAR Data Term E_P(v, d), 3.3.3 Surface Smoothness Term E_S(d), 3.3.4 LiDAR Likelihood Term E_L(v), 3.3.5 Image-based Data Term E_I(v), 3.3.6 Occlusion and Parallax Modeling, 3.4 Optimization, 3.4.1 Inference and Tracking, 3.4.2 Non-linear Penalization, 3.4.3 Multi-resolution Analysis, 3.5 B-Spline Wavelets as Basis Functions) …
  • Object Detection from a Vehicle Using Deep Learning Network and Future Integration with Multi-Sensor Fusion Algorithm
    2017-01-0117. Object Detection from a Vehicle using Deep Learning Network and Future Integration with Multi-Sensor Fusion Algorithm. Raja Sekhar Rao Dheekonda*, Sampad K. Panda§, MD. Nazmuzzaman Khan§, Mohammad Al-Hasan*, Sohel Anwar§. Indiana University Purdue University Indianapolis; §Department of Mechanical Engineering; *Department of Computer Science. Copyright © 2017 SAE International. ABSTRACT: Accuracy in detecting a moving object is critical to autonomous driving or advanced driver assistance systems (ADAS). By including the object classification from multiple sensor detections, the model of the object or environment can be identified more accurately. The critical parameters involved in improving the accuracy are the size and the speed of the moving object. All sensor data are to be used in defining a composite object representation so that it could be used for the class information in the core object's description. This composite data can then be used by a deep learning network for complete perception fusion in order to solve … The driver assistance systems that currently exist (e.g. collision avoidance system, lane keeping assist, blind spot detection, adaptive cruise control) can provide the basics of an autonomous vehicles' framework. However, many challenges remain with the current state of the art in ADAS technologies that hinder safe and robust implementation of autonomous vehicles for consumers. One of the key challenges is the real-time detection and identification of a moving object of various shapes and speeds under all possible environments and backgrounds. For object detection and classification, image and video data from a camera is the only information source in the existing methodologies [17-23].
  • Investigating the Emotional Aspect and Experience of Autonomous Vehicles from the Perspective of an Industrial Designer.”
    “Investigating the emotional aspect and experience of autonomous vehicles from the perspective of an Industrial Designer.” Process Book Vol. 1: Research. Diderik Severin Astrup Westby, Fall 2016, Design Core 2. Contents: I Contents; II Preliminary Research; III Autonomous Vehicles; IV Iterative Design Brief; V Precedence and Speculation (Future timeline, Speculation timeline, Index of Tech, Index of Accident, Index of Safety); VI Using Autonomous Vehicles; VII Research Plan; VIII Ideation (Tape Drawing V1); IX Contact Part 1; X Mid-Term Video; XI Contact Part 2. Project Outline: "The summer before your degree project is a good opportunity to think openly about your degree project. Your ideas for the degree project should be taking shape over the summer, and you are asked to begin initial research and investigation early, and choose two topic areas capable of sustaining your interests and energy. 4th year: Projects are self-managed. Students choose the topic, determine the criteria for success, and select …" In addition to this, we were also asked to create a reflection portfolio. The aim of this was to aid us in framing our interest and design language, to ensure we could create a project which was related to our wants as a designer, and to be able to fully focus through the entire term of the project's duration. There was a lot of emphasis on cars in this portfolio, so I intentionally tried looking elsewhere to see if there were any other opportunities I might have found interesting.