Robust Horizon Finding Algorithm for Real-Time Autonomous Navigation


Robust Horizon Finding Algorithm for Real-Time Autonomous Navigation based on Monocular Vision

Arthur de Miranda Neto, Alessandro Corrêa Victorino, Isabelle Fantoni, Douglas Eduardo Zampieri

To cite this version: Arthur de Miranda Neto, Alessandro Corrêa Victorino, Isabelle Fantoni, Douglas Eduardo Zampieri. Robust Horizon Finding Algorithm for Real-Time Autonomous Navigation based on Monocular Vision. 14th IEEE International Conference on Intelligent Transportation Systems (ITSC 2011), Oct 2011, Washington DC, United States. pp. 532-537. hal-00657425

HAL Id: hal-00657425, https://hal.archives-ouvertes.fr/hal-00657425, submitted on 6 Jan 2012. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Manuscript received July 13, 2011. Arthur de Miranda Neto is a Ph.D. student in Mechanical Engineering at the State University of Campinas, Brazil, and in Information Technology and Systems at the University of Technology of Compiègne – CNRS UMR 6599, Heudiasyc, France; he is currently a research engineer at the CNRS. Alessandro Corrêa Victorino is an associate professor at the University of Technology of Compiègne – CNRS UMR 6599, Heudiasyc, France. Isabelle Fantoni is a researcher at the University of Technology of Compiègne – CNRS UMR 6599, Heudiasyc, France. Douglas Eduardo Zampieri is an associate professor in the Computational Mechanics Department at the State University of Campinas, Brazil.

Abstract — The navigation of an autonomous vehicle is based on its interaction with the environment, through information acquired by sensors. The perception of the environment is a major issue in autonomous and semi-autonomous systems. This work addresses the embedded real-time visual perception problem applied to an experimental platform. A robust horizon finding algorithm is proposed and applied to generate the navigable area, which makes it possible to investigate dynamically only a small portion of the image (the road) ahead of the vehicle. Built on a dynamic threshold search method based on Otsu segmentation and the Hough transform, the system is robust to illumination changes and does not need any contrast adjustments. It can also be used with an infrared camera.

I. INTRODUCTION

Robot navigation is based on the robot's interaction with its environment. In order to obtain information about this environment, sensors are needed, which in many cases may be limited in scope and subject to noise. However, incorporating several types of sensors increases the degree of autonomy and "intelligence", especially for navigation in unknown environments. On the other hand, the type and quantity of sensors determine the volume of data to be processed, which requires a high computational cost. Further important factors are the variety and complexity of environments and situations. For military or civil purposes, some of these applications include: the Grand Challenge [1] and Urban Challenge [2]; Advanced Driver Assistance Systems (ADAS) [3]; autonomous perception systems [4], [5]; and aerial robots [6], [7]. These applications share a common issue: providing the robot/vehicle platform with the capability of perceiving and interacting with its neighbouring environment.

Although extremely complex and highly demanding, machine vision, thanks to the great deal of information it can deliver, is a powerful means of sensing the environment and has been widely employed to deal with a large number of tasks in the automotive field [8]. Due to its general applicability, the problem of mobile robot navigation is dealt with using more complex techniques [8]. Besides their intrinsically higher computational complexity, caused by a significant increase in the amount of data to be processed, the most common vision systems (such as those processing non-monocular images) must also be robust enough to tolerate the noise caused by platform movements and by drifts in the calibration of the multiple-camera setup.

For land vehicle navigation, monocular vision systems have been designed to investigate the road information and, in order to decrease the volume of data processing, some systems investigate only a small portion of the road ahead of the vehicle, where the absence of other vehicles is assumed [8]. Since the sky region is not a region of interest, the horizon line threshold is applied to generate a road image [9]. The Stanford Racing Team [10] implemented the horizon finding algorithm originally proposed by [11] to eliminate all pixels above that horizon. Additionally, these techniques have been used in flight stability and control systems for Micro Air Vehicles [11], for the control of an airship [12] and for landing aircraft control [13]. They have also been employed as an absolute attitude sensor, which is useful for the low-level control of an unmanned aerial vehicle [14].

In this way, from a dynamic threshold search method based on Otsu segmentation [15] and the Hough transform [16], a robust horizon finding algorithm is proposed and applied to generate the navigable area. It makes it possible to investigate dynamically only a small portion of the image (the road) ahead of the vehicle. The algorithm is robust to illumination changes and does not need any contrast adjustments.
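As an illustrative sketch only (not the authors' implementation), such a pipeline could be assembled with OpenCV roughly as follows: smooth a reduced grayscale frame, let Otsu's method pick the sky/ground threshold, and use the Hough transform to extract the dominant near-horizontal line. All numeric parameters (blur kernel, Canny thresholds, Hough vote count, angular tolerance) are assumptions chosen for the example.

```python
# Minimal sketch: Otsu segmentation + Hough transform for horizon estimation.
# Assumes OpenCV (cv2) and NumPy; parameter values are illustrative only.
import cv2
import numpy as np

def find_horizon(image_bgr):
    """Return (rho, theta) of the strongest near-horizontal line, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (128, 96))               # reduce the data volume
    smooth = cv2.GaussianBlur(small, (5, 5), 0)       # low-pass filtering

    # Otsu automatically selects the threshold that separates the two main
    # intensity classes (typically sky vs. ground); no contrast adjustment.
    _, binary = cv2.threshold(smooth, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # The sky/ground boundary appears as the edge of the binary mask;
    # the Hough transform then votes for the dominant straight line.
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=40)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        if abs(theta - np.pi / 2) < np.radians(20):   # keep near-horizontal lines
            return float(rho), float(theta)
    return None
```

Pixels above the detected line can then be discarded (sky removal), so that subsequent processing considers only the road region.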
Section II presents the sensor perception review and the monocular vision contributions. Sections III and IV present the results, and the conclusions are given in Section V.

II. SENSOR PERCEPTION

Several models of distributed vehicle navigation architecture are presented in the literature [1], [2], [17]. The perception layer uses many types of sensors, including ultrasonic sensors, laser rangefinders, radar, cameras, etc. However, these sensors are not perfect: ultrasonic sensors are cheap but suffer from specular reflections, while laser rangefinders and radar provide better resolution but are more complex and more expensive [4].

According to [8], vision-based sensors are passive sensors, their image scanning is performed fast enough for Intelligent Transportation Systems, and they can be used for several specific applications: road marking localization, traffic sign recognition and obstacle identification. However, vision sensors are less robust than millimeter-wave radars in foggy, night, or direct-sunshine conditions. On the other hand, range-based obstacle detection systems have difficulty detecting small or flat objects on the ground, and range sensors are also unable to distinguish between different types of ground surfaces [4]. Notwithstanding, the main problem with the use of active sensors is the interference among sensors of the same type; hence, foreseeing a massive and widespread use of these sensing agents, the use of passive sensors offers key advantages [8].

Fig. 1 illustrates the monocular vision contribution to the DARPA Grand Challenge [10]. This diagram shows that the reach of the lasers was approximately 22 meters (left), whereas the monocular vision module often looks 70 meters ahead (right). It also shows the horizon detection used for sky removal.

Fig. 1 – Monocular vision: comparison of the laser-based (left) and the image-based (right) mapper. Circles are spaced around the vehicle at 10-meter intervals. The figure also shows the horizon detection for sky removal [10].

On the safety front, ADAS now include front-bumper-mounted cameras. Progressive safety systems will be developed through the manufacturing of an "intelligent bumper" (safety belt) peripheral to the vehicle, addressing new features such as lane change assistance, blind spot detection, frontal and lateral pre-crash, etc. The cost target for ADAS functions has to be much lower than that of the current Adaptive Cruise Control (500 Euros) [18].

[...] operations on images, with the purpose of reducing noise and segmenting them. One way to perform the segmentation of an image is to use thresholds. This segmentation technique, called thresholding, is very simple and computationally fast; however, identifying the ideal threshold can be quite complicated. The best approach in this case is to use techniques and algorithms that search for the thresholds automatically. Thresholding methods are divided into two groups: global and local. Global methods divide the image using only one threshold, while local methods divide the image into sub-images and define a threshold for each of them [19].

What this work proposes is a global thresholding method, which seeks not the ideal threshold for the whole image, but an ideal threshold associated with each portion (local information) that contributes to the final decision. Details of this search are described below, after an illustrative sketch of the idea.
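As a purely illustrative sketch of this idea (not the combination rule used in the paper), the snippet below computes one Otsu threshold per image portion and merges the local values into a single global threshold; the horizontal-strip partition and the simple averaging are assumptions made for the example.

```python
# Hedged sketch: per-portion Otsu thresholds contributing to one global value.
# The strip partition and the averaging rule are illustrative assumptions.
import cv2
import numpy as np

def combined_otsu_threshold(gray, n_strips=4):
    """Split a grayscale image into horizontal strips, take Otsu's threshold
    on each strip, and return their mean as a single global threshold."""
    height = gray.shape[0]
    bounds = np.linspace(0, height, n_strips + 1, dtype=int)
    local_thresholds = []
    for top, bottom in zip(bounds[:-1], bounds[1:]):
        strip = gray[top:bottom]
        t, _ = cv2.threshold(strip, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        local_thresholds.append(t)
    return float(np.mean(local_thresholds))

# Usage: binarize the whole image with the combined threshold.
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# t = combined_otsu_threshold(gray)
# _, mask = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
```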
A. Image pre-processing

Most research groups face this problem using highly sophisticated image filtering algorithms [8]. This work uses a color or gray-level image and smooths it with a Gaussian filter. The Gaussian smoothing operator is a 2-D convolution operator that acts as a low-pass frequency filter [20]. In order to reduce the amount of data, the pre-processing also includes a resolution reduction of the image (to 128x96).

If the image is colored, in order to exploit the most important information of the color image, the candidate color channel that is dominant in a certain space is selected to generate the histogram image [21]; for us, this space represents 60% of the image (Fig. 5). The selection is described by Eq. (1):

Cc = arg max_{R,G,B} (N_R, N_G, N_B)    (1)

where Cc denotes the selected color channel and N_R (likewise N_G and N_B) denotes the number of times the corresponding channel is dominant in the referenced region.
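The sketch below is an illustrative reading (not the authors' code) of this pre-processing chain and of Eq. (1): the frame is reduced to 128x96 and smoothed, and N_R, N_G, N_B are taken as the number of pixels, inside an assumed reference region (here the lower 60% of the frame), for which each channel is the largest. Both the placement of the 60% region and this per-pixel interpretation of the counts are assumptions made for the example.

```python
# Hedged sketch of the pre-processing and of Eq. (1) channel selection.
# The lower-60% reference region and the per-pixel dominance counts are
# illustrative assumptions, not taken from the paper.
import cv2
import numpy as np

def select_dominant_channel(image_bgr, region_ratio=0.6):
    """Return (channel_name, single-channel image) chosen as in Eq. (1)."""
    small = cv2.resize(image_bgr, (128, 96))          # resolution reduction
    smooth = cv2.GaussianBlur(small, (5, 5), 0)       # Gaussian low-pass filter

    height = smooth.shape[0]
    region = smooth[int(height * (1.0 - region_ratio)):]   # assumed lower 60%

    b, g, r = cv2.split(region)
    stacked = np.stack([r, g, b], axis=0)             # order: R, G, B
    dominant = np.argmax(stacked, axis=0)             # per-pixel winning channel
    counts = [int(np.sum(dominant == k)) for k in range(3)]  # (N_R, N_G, N_B)

    cc = int(np.argmax(counts))                       # Eq. (1): arg max over {R,G,B}
    bgr_index = {0: 2, 1: 1, 2: 0}[cc]                # map choice back to BGR planes
    channel_image = cv2.split(smooth)[bgr_index]
    return ["R", "G", "B"][cc], channel_image
```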