Sensor Fusion in Time-Triggered Systems

Total Pages: 16

File Type: PDF, Size: 1020 KB

Sensor Fusion in Time-Triggered Systems

DISSERTATION

Sensor Fusion in Time-Triggered Systems

Carried out for the purpose of obtaining the academic degree of Doktor der technischen Wissenschaften, under the supervision of O. Univ.-Prof. Dr. Hermann Kopetz, Institut für Technische Informatik 182, and submitted to the Technische Universität Wien, Fakultät für Technische Naturwissenschaften und Informatik, by Wilfried Elmenreich, Matr.-Nr. 9226605, Südtirolerstraße 19, A-8280 Fürstenfeld. Vienna, October 2002.

Sensor Fusion in Time-Triggered Systems

Sensor fusion is the combining of sensory data, or data derived from sensory data, in order to produce enhanced data in the form of an internal representation of the process environment. The achievements of sensor fusion are robustness, extended spatial and temporal coverage, increased confidence, reduced ambiguity and uncertainty, and improved resolution.

This thesis examines the application of sensor fusion in real-time systems. The time-triggered approach provides a well-suited basis for building real-time systems due to its highly deterministic behavior. The integration of sensor fusion applications in a time-triggered framework supports resource-efficient, dependable real-time systems. We present a time-triggered approach for real-time sensor fusion applications that partitions the system into three levels: first, a transducer level contains the sensors and the actuators; second, a fusion/dissemination level gathers measurements, performs sensor fusion and distributes control information to the actuators; third, a control level contains a control program making control decisions based on environmental information provided by the fusion level. Using this architecture, complex applications can be decomposed into smaller, manageable subsystems.

Furthermore, this thesis evaluates different approaches for achieving dependability. These approaches attack the problem at different levels. At the transducer level, we introduce a filter algorithm that performs successive measurements in order to improve the data quality of a single sensor. A different approach improves data by combining data of multiple sensors at the fusion/dissemination level. We propose two sensor fusion algorithms to accomplish this task: the systematic confidence-weighted averaging algorithm and the application-specific robust certainty grid algorithm. The proposed methods are evaluated and used in a case study featuring an autonomous mobile robot.
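The abstract names a systematic confidence-weighted averaging algorithm for fusing redundant observations at the fusion/dissemination level, but the algorithm itself is not part of this excerpt. The following is only a minimal sketch of the general idea of confidence-weighted averaging; function and variable names are illustrative, and the fused-confidence rule is an assumption, not the formulation from the thesis.

```python
def confidence_weighted_average(measurements, confidences):
    """Fuse redundant scalar observations of the same real-time entity.

    measurements: sensor readings taken at the same (globally synchronized) instant
    confidences:  non-negative weights, higher meaning a more trustworthy sensor
    Returns (fused_value, fused_confidence).
    """
    total = sum(confidences)
    if total == 0.0:
        raise ValueError("at least one observation needs a non-zero confidence")
    fused_value = sum(m * c for m, c in zip(measurements, confidences)) / total
    # Illustrative choice only: let agreement among several sensors raise the
    # fused confidence by summing the individual weights.
    fused_confidence = total
    return fused_value, fused_confidence

# Example: three distance sensors observing the same obstacle
value, conf = confidence_weighted_average([1.02, 0.98, 1.10], [0.8, 0.9, 0.3])
```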
Sensor Fusion in zeitgesteuerten Systemen

Sensor fusion is the intelligent combination of sensor data into a consistent picture of the observed environment. The use of sensor fusion yields stable behavior in the presence of disturbances, an improved temporal or spatial measurement range, increased confidence in a measurement, unambiguous interpretation of the data, and improved resolution of the measurement data.

This thesis examines the integration of sensor fusion into real-time systems. Real-time systems must react to given input data within defined time bounds. Time-triggered real-time systems employ global clock synchronization and derive all control signals and measurement instants from the progression of real time. The availability of a globally synchronized time eases the implementation of sensor fusion algorithms, since the instants of distributed observations can be interpreted globally.

This thesis presents a time-triggered approach for the design and implementation of sensor fusion in real-time applications. The proposed architectural model partitions a sensor fusion application into three levels: the transducer level, the fusion level, and the control level. This partitioning allows a complex application to be decomposed into manageable, separately implementable and reusable parts.

The second focus of this thesis is increasing the reliability of sensor data by means of sensor fusion. To achieve reliability, different approaches are presented for the different levels. At the transducer level, a filter is employed that combines several successive measurements in order to improve the measurement result. At the fusion level, algorithms are presented that compute a robust picture of the observed environment from a number of redundant measurements of different sensors.

The application of the presented methods is demonstrated and evaluated experimentally on an autonomous mobile robot, whose environment is observed with infrared and ultrasonic sensors in order to autonomously find a passable path between obstacles.

Danksagung

This work was carried out in the course of my research and teaching activities at the Institut für Technische Informatik, Real-Time Systems Group, at the Technische Universität Wien.

I particularly want to thank my thesis advisor, Prof. Dr. Hermann Kopetz, who made my research work at the Institut für Technische Informatik possible. He supported my work with valuable suggestions and stimulating discussions and thereby shaped my scientific career.

I want to thank all my colleagues and friends at the institute for the pleasant working atmosphere. The friendly relationships we maintain with one another, as well as numerous technical discussions, had a substantial influence on the quality of this work. I also thank Michael Paulitsch, Claudia Pribil, Peter Puschner, Thomas Galla, Wolfgang Haidinger, Stefan Pitzek, Philipp Peti and Günther Bauer for their conscientious proofreading and for valuable hints during the work on this dissertation.

Finally, I want to thank my girlfriend Claudia Pribil for her patience and support during my work on this dissertation.

Contents

1 Introduction
  1.1 Related Work
  1.2 Motivation and Objectives
  1.3 Structure of the Thesis
2 Basic Terms and Concepts
  2.1 Principles of Sensor Fusion
    2.1.1 Motivation for Sensor Fusion
    2.1.2 Limitations of Sensor Fusion
    2.1.3 Types of Sensor Fusion
  2.2 Real-Time Systems
    2.2.1 Classification of Real-Time Systems
    2.2.2 Model of Time
    2.2.3 Real-Time Entities and Real-Time Images
  2.3 Dependability
    2.3.1 Attributes of Dependability
    2.3.2 Means of Dependability
    2.3.3 Impairments of Dependability
  2.4 Distributed Fault-Tolerant Systems
    2.4.1 Fault Modelling
    2.4.2 Fault Tolerance through Redundancy
    2.4.3 Transparency, Layering, and Abstraction
  2.5 Smart Transducer Networks
    2.5.1 Sensors and Actuators
    2.5.2 Microcontrollers for Embedded Systems
    2.5.3 Smart Transducer Interfaces
  2.6 Chapter Summary
3 Sensor Fusion Architectures and Applications
  3.1 Architectures for Sensor Fusion
    3.1.1 The JDL Fusion Architecture
    3.1.2 Waterfall Fusion Process Model
    3.1.3 Boyd Model
    3.1.4 The LAAS Architecture
    3.1.5 The Omnibus Model
  3.2 Methods and Applications
    3.2.1 Smoothing, Filtering, and Prediction
    3.2.2 Kalman Filtering
    3.2.3 Inference Methods
    3.2.4 Occupancy Maps
    3.2.5 Certainty Grid
    3.2.6 Reliable Abstract Sensors
  3.3 Chapter Summary
4 Architectural Model
  4.1 Design Principles
  4.2 Time-Triggered Sensor Fusion Model
    4.2.1 Transducer Level
    4.2.2 Fusion/Dissemination Level
    4.2.3 Control Level
    4.2.4 Operator
  4.3 Interfaces
    4.3.1 Interface Separation
    4.3.2 Interfaces in the Time-Triggered Sensor Fusion Model
    4.3.3 Interface File System
  4.4 Communication Protocol
    4.4.1 Bus Scheduling
    4.4.2 Clock Synchronization
  4.5 Discussion
    4.5.1 Benefits at Transducer Level
    4.5.2 Benefits at Fusion/Dissemination Level
    4.5.3 Benefits at Control Level
  4.6 Chapter Summary
5 Achieving Dependability by Sensor Fusion
  5.1 Systematic versus Application-Specific Approach
  5.2 Systematic Dependability Framework
    5.2.1 Problem Statement
    5.2.2 Modelling of Observations
    5.2.3 Sensor Representation Model
    5.2.4 Representation of Confidence Values
    5.2.5 Fusion Algorithms
  5.3 Robust Certainty Grid
    5.3.1 Problem Statement
    5.3.2 Robust Certainty Grid Algorithm
  5.4 Chapter Summary
6 Case Study Setup
  6.1 Problem Statement
    6.1.1 Hardware Constraints
    6.1.2 Software Constraints
    6.1.3 Real-Time Constraints
  6.2 Development Environment
    6.2.1 Target System
    6.2.2 Programming Language
    6.2.3 Compiler
    6.2.4 Programming Tools
  6.3 System Architecture
  6.4 Demonstrator Hardware
    6.4.1 Electrical and Electromechanical Hardware
    6.4.2 TTP/A Nodes
  6.5 Demonstrator Software
    6.5.1 Infrared Sensor Filter
    6.5.2 Servo Control
    6.5.3 Grid Generation
    6.5.4 Navigation and Path Planning
    6.5.5 Fusion of Ultrasonic Observations
    6.5.6 Intelligent Motion Control
  6.6 Chapter Summary
7 Experiments and Evaluation
  7.1 Analysis of Sensor Behavior
    7.1.1 Raw Sensor Data
    7.1.2 Sensor Filtering
    7.1.3 Fused Sensor Data
    7.1.4 Comparison of Results
  7.2 Evaluation of Certainty Grid
    7.2.1 Free Space Detection
    7.2.2 Dead End Detection
    7.2.3 Typical Situation with Three Obstacles
  7.3 Discussion and Chapter Summary
8 Conclusion
  8.1 Time-Triggered Architecture for Sensor Fusion
  8.2 Sensor Fusion Algorithms
  8.3 Outlook
Bibliography
List of Publications
Curriculum Vitae

List of Figures
2.1 Block diagram of sensor fusion and multisensor integration
2.2 Alternative fusion characterizations based on input/output
2.3 Competitive, complementary, and cooperative fusion
2.4 Parts of a real-time system
2.5 Dependability tree
3.1 JDL fusion model
3.2 The waterfall fusion process model
Recommended publications
  • Getting Started with MotionFX Sensor Fusion Library in X-CUBE-MEMS1 Expansion for STM32Cube
    UM2220 User manual. Getting started with MotionFX sensor fusion library in X-CUBE-MEMS1 expansion for STM32Cube. (UM2220, Rev 8, April 2021, www.st.com)
    Introduction. The MotionFX is a middleware library component of the X-CUBE-MEMS1 software and runs on STM32. It provides real-time motion-sensor data fusion. It also performs gyroscope bias and magnetometer hard iron calibration. This library is intended to work with ST MEMS only. The algorithm is provided in static library format and is designed to be used on STM32 microcontrollers based on the ARM® Cortex®-M0+, ARM® Cortex®-M3, ARM® Cortex®-M4 or ARM® Cortex®-M7 architectures. It is built on top of STM32Cube software technology to ease portability across different STM32 microcontrollers. The software comes with sample implementation running on an X-NUCLEO-IKS01A2, X-NUCLEO-IKS01A3 or X-NUCLEO-IKS02A1 expansion board on a NUCLEO-F401RE, NUCLEO-L476RG, NUCLEO-L152RE or NUCLEO-L073RZ development board.
    Acronyms and abbreviations: API (application programming interface), BSP (board support package), GUI (graphical user interface), HAL (hardware abstraction layer), IDE (integrated development environment).
    2 MotionFX middleware library in X-CUBE-MEMS1 software expansion for STM32Cube. 2.1 MotionFX overview. The MotionFX library expands the functionality of the X-CUBE-MEMS1 software. The library acquires data from the accelerometer, gyroscope (6-axis fusion) and magnetometer (9-axis fusion) and provides real-time motion-sensor data fusion.
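The entry above describes what the library's 6-axis (accelerometer plus gyroscope) fusion computes, but the MotionFX API itself is not shown in this excerpt and is not reproduced here. Purely as an illustration of the underlying idea, a minimal complementary filter that blends gyroscope integration with accelerometer tilt is sketched below; all names are hypothetical and unrelated to the MotionFX interface.

```python
import math

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One step of a basic accelerometer+gyroscope (6-axis) orientation filter.

    roll, pitch: previous orientation estimate in radians
    gyro:  (gx, gy, gz) angular rates in rad/s
    accel: (ax, ay, az) accelerations in m/s^2
    dt:    sample period in seconds
    alpha: weight of the integrated gyroscope path (close to 1.0)
    """
    # Propagate the previous estimate with the gyroscope rates.
    roll_gyro = roll + gyro[0] * dt
    pitch_gyro = pitch + gyro[1] * dt
    # Derive absolute but noisy tilt angles from the gravity direction.
    ax, ay, az = accel
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.hypot(ay, az))
    # Blend: the gyro path dominates short-term, the accelerometer corrects drift.
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    return roll, pitch

# Example update at 100 Hz
roll, pitch = complementary_filter(0.0, 0.0, gyro=(0.01, -0.02, 0.0),
                                   accel=(0.0, 0.3, 9.76), dt=0.01)
```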
  • D2.1 Sensor Data Fusion V1
    Grant Agreement Number: 687458. Project acronym: INLANE. Project full title: Low Cost GNSS and Computer Vision Fusion for Accurate Lane Level Navigation and Enhanced Automatic Map Generation. D2.1 Sensor Data Fusion. Due delivery date: 31/12/2016. Actual delivery date: 30/12/2016. Organization name of lead participant for this deliverable: TCA. Project co-funded by the European Commission within Horizon 2020 and managed by the European GNSS Agency (GSA). Dissemination level: PU (Public).
    Document Control Sheet. Deliverable number: D2.1. Deliverable responsible: TeleConsult Austria. Workpackage: 2. Editor: Axel Koppert. Author(s): Axel Koppert (TCA, [email protected]).
    Document Revision History: V0.1, 18/11/2016, Table of Contents (Axel Koppert); V0.2, 05/12/2016, Filling the document with content (Axel Koppert); V0.3, 19/12/2016, Internal Review (François Fischer); V1.0, 23/12/2016, First final version after internal review (Claudia Fösleitner).
    Abstract: This Deliverable is the first release of the Report on the development of the sensor fusion and vision based software modules. The report presents the INLANE sensor-data fusion and GNSS processing approach. This first report release presents the status of the development (tasks 2.1 and 2.5) at M12.
    Legal Disclaimer: The information in this document is provided "as is", and no guarantee or warranty is given that the information is fit for any particular purpose.
  • Multi-Sensor Perceptual System for Mobile Robot and Sensor Fusion-Based Localization
    2012 International Conference on Control, Automation and Information Sciences (ICCAIS). Multi-Sensor Perceptual System for Mobile Robot and Sensor Fusion-based Localization. T. T. Hoang, P. M. Duong, N. T. T. Van, D. A. Viet and T. Q. Vinh, Department of Electronics and Computer Engineering, University of Engineering and Technology, Vietnam National University, Hanoi.
    Abstract - This paper presents an Extended Kalman Filter (EKF) approach to localize a mobile robot with two quadrature encoders, a compass sensor, a laser range finder (LRF) and an omni-directional camera. The prediction step is performed by employing the kinematic model of the robot as well as estimating the input noise covariance matrix as being proportional to the wheel's angular speed. At the correction step, the measurements from all sensors, including incremental pulses of the encoders, line segments of the LRF, robot orientation from the compass and the deflection angle from the omni-directional camera, are fused. Experiments in an indoor structured environment were carried out, and the good localization results prove the effectiveness and applicability of the algorithm.
    From the introduction: ...dealing with Dirac delta function [1]. Al-Dhaher and Makesy proposed a generic architecture that employed a standard Kalman filter and fuzzy logic techniques to adaptively incorporate sensor measurements in a highly accurate manner [2]. In [3], a Kalman filter update equation was developed to obtain the correspondence of a line segment to a model, and this correspondence was then used to correct position estimation. In [4], an extended Kalman filter was conducted to fuse DGPS and odometry measurements for outdoor-robot navigation. In our work, one novel platform of mobile robot with some
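As a schematic reduction of the EKF cycle summarized in the abstract above, the sketch below assumes a planar pose state [x, y, theta], an odometry-driven prediction and only the compass correction; the paper's filter additionally fuses LRF line segments and omni-directional camera measurements, which are omitted here.

```python
import numpy as np

def ekf_step(state, P, d, dtheta, compass_z, Q, R):
    """One predict/correct cycle for a differential-drive robot pose [x, y, theta].

    d, dtheta: travelled distance and heading change from the wheel encoders
    compass_z: absolute heading measurement in radians
    Q, R:      process noise (3x3) and compass measurement noise (1x1) covariances
    """
    x, y, theta = state
    # Prediction: dead reckoning from the encoder increments.
    state_pred = np.array([x + d * np.cos(theta),
                           y + d * np.sin(theta),
                           theta + dtheta])
    F = np.array([[1.0, 0.0, -d * np.sin(theta)],
                  [0.0, 1.0,  d * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    # Correction: the compass observes the heading component directly.
    H = np.array([[0.0, 0.0, 1.0]])
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    state_new = state_pred + (K @ np.array([compass_z - state_pred[2]])).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return state_new, P_new

# Example: 0.1 m forward, slight turn, compass reads 0.05 rad
state, P = ekf_step(np.zeros(3), np.eye(3) * 0.01, 0.1, 0.02, 0.05,
                    Q=np.diag([1e-4, 1e-4, 1e-3]), R=np.array([[1e-2]]))
```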
  • Sensor Fusion and Deep Learning for Indoor Agent Localization
    Rochester Institute of Technology, RIT Scholar Works, Theses, 6-2017. Sensor Fusion and Deep Learning for Indoor Agent Localization. Jacob F. Lauzon, [email protected]. Follow this and additional works at: https://scholarworks.rit.edu/theses
    Recommended Citation: Lauzon, Jacob F., "Sensor Fusion and Deep Learning for Indoor Agent Localization" (2017). Thesis. Rochester Institute of Technology.
    Sensor Fusion and Deep Learning for Indoor Agent Localization, by Jacob F. Lauzon, June 2017. A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Engineering. Approved by: Dr. Raymond Ptucha, Assistant Professor, Thesis Advisor, Department of Computer Engineering; Dr. Roy Melton, Principal Lecturer, Committee Member, Department of Computer Engineering; Dr. Andres Kwasinski, Associate Professor, Committee Member, Department of Computer Engineering.
    Acknowledgments. Firstly, I would like to sincerely thank my advisor Dr. Raymond Ptucha for his unrelenting support throughout the entire process and for always pushing me to better my work and myself. Without him, and his mentorship, none of this would have been possible. I would also like to thank my committee, Dr. Roy Melton and Dr. Andres Kwasinski, for their added support and flexibility. I also must thank the amazing team of people that I have worked with on the Milpet project, both past and present, that has dedicated many long hours to making the project a success.
  • An Overview of IoT Sensor Data Processing, Fusion, and Analysis Techniques
    Sensors (Review). An Overview of IoT Sensor Data Processing, Fusion, and Analysis Techniques. Rajalakshmi Krishnamurthi 1, Adarsh Kumar 2, Dhanalekshmi Gopinathan 1, Anand Nayyar 3,4,* and Basit Qureshi 5. 1 Department of Computer Science and Engineering, Jaypee Institute of Information Technology, Noida 201309, India; [email protected] (R.K.); [email protected] (D.G.). 2 School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India; [email protected]. 3 Graduate School, Duy Tan University, Da Nang 550000, Vietnam. 4 Faculty of Information Technology, Duy Tan University, Da Nang 550000, Vietnam. 5 Department of Computer Science, Prince Sultan University, Riyadh 11586, Saudi Arabia; [email protected]. * Correspondence: [email protected]. Received: 21 August 2020; Accepted: 22 October 2020; Published: 26 October 2020.
    Abstract: In the recent era of the Internet of Things, the dominant role of sensors and the Internet provides a solution to a wide variety of real-life problems. Such applications include smart city, smart healthcare systems, smart building, smart transport and smart environment. However, the real-time IoT sensor data include several challenges, such as a deluge of unclean sensor data and a high resource-consumption cost. As such, this paper addresses how to process IoT sensor data, fusion with other data sources, and analyses to produce knowledgeable insight into hidden data patterns for rapid decision-making. This paper addresses the data processing techniques such as data denoising, data outlier detection, missing data imputation and data aggregation. Further, it elaborates on the necessity of data fusion and various data fusion methods such as direct fusion, associated feature extraction, and identity declaration data fusion.
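As a toy illustration of two of the pre-processing steps the review surveys, outlier detection and denoising, the sketch below applies a z-score filter followed by a moving average; it is not code from the paper, and the thresholds are arbitrary.

```python
import statistics

def clean_sensor_stream(samples, window=5, z_thresh=3.0):
    """Reject gross outliers, then smooth the remaining samples."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    # Outlier detection: drop samples further than z_thresh standard deviations from the mean.
    kept = [s for s in samples if stdev == 0 or abs(s - mean) / stdev <= z_thresh]
    # Denoising: simple trailing moving average over the retained samples.
    return [statistics.fmean(kept[max(0, i - window + 1): i + 1]) for i in range(len(kept))]

# Example: the 35.0 spike is rejected, the remaining readings are smoothed
cleaned = clean_sensor_stream([21.1, 21.3, 35.0, 21.2, 21.0, 21.4], z_thresh=2.0)
```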
  • Heterogeneous Sensor Fusion for Accurate State Estimation of Dynamic Legged Robots
    Heterogeneous Sensor Fusion for Accurate State Estimation of Dynamic Legged Robots. Simona Nobili 1*, Marco Camurri 2*, Victor Barasuol 2, Michele Focchi 2, Darwin G. Caldwell 2, Claudio Semini 2 and Maurice Fallon 3. 1 School of Informatics, University of Edinburgh, Edinburgh, UK; 2 Advanced Robotics Department, Istituto Italiano di Tecnologia, Genoa, Italy; 3 Oxford Robotics Institute, University of Oxford, Oxford, UK.
    Abstract—In this paper we present a system for the state estimation of a dynamically walking and trotting quadruped. The approach fuses four heterogeneous sensor sources (inertial, kinematic, stereo vision and LIDAR) to maintain an accurate and consistent estimate of the robot's base link velocity and position in the presence of disturbances such as slips and missteps. We demonstrate the performance of our system, which is robust to changes in the structure and lighting of the environment, as well as the terrain over which the robot crosses. Our approach builds upon a modular inertial-driven Extended Kalman Filter which incorporates a rugged, probabilistic leg odometry component with additional inputs from stereo visual odometry and LIDAR registration. The simultaneous use of both stereo vision and LIDAR helps combat operational issues which occur in real applications. To the best of our knowledge, this paper is the first to discuss the complexity of consistent estimation of pose and velocity states, as well as the fusion of multiple exteroceptive signal sources at largely different frequencies and latencies, in a manner which is acceptable for a quadruped's feedback controller.
    Fig. 1: Left: the Hydraulic Quadruped robot (HyQ) with the Carnegie Robotics Multisense SL sensor head at the front. Right: the main coordinate frames used in this paper.
    In this paper, we demonstrate how inertial, kinematic, stereo vision and LIDAR sensing can be combined to produce a modular, low-latency and high-frequency state estimate which
  • Utility of Sensor Fusion of GPS and Motion Sensor in Android Devices in GPS-Deprived Environment
    Utility of Sensor Fusion of GPS and Motion Sensor in Android Devices in GPS-Deprived Environment. Suresh Shrestha, Amrit Karmacharya, Dipesh Suwal.
    Introduction: the use of Global Positioning System (GPS) signals to deliver location-based services, implemented by smart phones, for example in smart cities. The GPS antenna needs a clear view of the sky; GPS-deprived conditions arise in urban settlements and indoors (inside hotels, malls).
    Application in GPS: GPS accuracies of smart phones have not been studied nearly as thoroughly as typical GPS receivers. The median horizontal error in a static outdoor environment is around 5.0 to 8.5 m; in dense and crowded areas with more buildings it is even worse. Most Android devices are equipped with motion sensors, which can be used to complement GPS accuracy.
    INS (Inertial Navigation System): continuously updates the position, orientation and velocity of the moving object from motion sensor readings. An INS typically contains a gyroscope (measuring angular velocity) and an accelerometer (measuring linear acceleration) and is immune to jamming and deception.
    Drawbacks of INS: measurements of acceleration and angular velocity cannot be free of error, which leads to integration drift and unrestrained positioning errors in the absence of coordinate updates from an external source. GPS can be used for these updates. Solution: a hybrid positioning system that integrates GPS, INS and other non-GPS methods in GPS-denied environments in order to enhance the availability and robustness of positioning solutions.
    Kalman Filter: an estimating algorithm that uses a series of measurements (with some inaccuracies) observed over time to produce more precise estimates by reducing noise.
    Algorithm: the algorithm works in a two-step process. Prediction phase: the Kalman filter produces estimates of the current state variables, along with their uncertainties.
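The slide deck breaks off during the prediction phase; a minimal scalar Kalman filter with the two phases it describes is sketched below (illustrative only, not the authors' implementation).

```python
def kalman_1d(x, p, u, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x, p : previous state estimate and its variance
    u    : predicted change of the state (e.g. displacement integrated from the INS)
    z    : absolute measurement of the state (e.g. a GPS position fix)
    q, r : process and measurement noise variances
    """
    # Prediction phase: propagate the estimate and grow its uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update phase: blend in the measurement according to the Kalman gain.
    k = p_pred / (p_pred + r)
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

# Example: INS predicts 0.8 m of motion, GPS reports position 1.1 m
x, p = kalman_1d(x=0.0, p=1.0, u=0.8, z=1.1, q=0.05, r=4.0)
```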
  • Sensor Fusion Using Fuzzy Logic Enhanced Kalman Filter for Autonomous Vehicle Guidance in Citrus Groves
    SENSOR FUSION USING FUZZY LOGIC ENHANCED KALMAN FILTER FOR AUTONOMOUS VEHICLE GUIDANCE IN CITRUS GROVES. V. Subramanian, T. F. Burks, W. E. Dixon.
    ABSTRACT. This article discusses the development of a sensor fusion system for guiding an autonomous vehicle through citrus grove alleyways. The sensor system for path finding consists of machine vision and laser radar. An inertial measurement unit (IMU) is used for detecting the tilt of the vehicle, and a speed sensor is used to find the travel speed. A fuzzy logic enhanced Kalman filter was developed to fuse the information from machine vision, laser radar, IMU, and speed sensor. The fused information is used to guide a vehicle. The algorithm was simulated and then implemented on a tractor guidance system. The guidance system's ability to navigate the vehicle at the middle of the path was first tested in a test path. Average errors of 1.9 cm at 3.1 m/s and 1.5 cm at 1.8 m/s were observed in the tests. A comparison was made between guiding the vehicle using the sensors independently and using fusion. Guidance based on sensor fusion was found to be more accurate than guidance using independent sensors. The guidance system was then tested in citrus grove alleyways, and average errors of 7.6 cm at 3.1 m/s and 9.1 cm at 1.8 m/s were observed. Visually, the navigation in the citrus grove alleyway was as good as human driving.
    Keywords. Autonomous vehicle, Fuzzy logic, Guidance, Kalman filter, Sensor fusion, Vision.
  • Deep Learning Sensor Fusion for Autonomous Vehicles Perception and Localization: A Review, by Jamil Fayyad, Mohammad A. Jaradat, Dominique Gruyer, Homayoun Najjaran
    To cite this version: Jamil Fayyad, Mohammad A. Jaradat, Dominique Gruyer, Homayoun Najjaran. Deep Learning Sensor Fusion for Autonomous Vehicles Perception and Localization: A Review. Sensors, special issue "Sensor Data Fusion for Autonomous and Connected Driving", 2020, 20 (15), 35 p., doi: 10.3390/s20154220. HAL Id: hal-02942600, https://hal.archives-ouvertes.fr/hal-02942600, submitted on 18 Sep 2020.
    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    Sensors (Review). Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Jamil Fayyad 1, Mohammad A. Jaradat 2,3, Dominique Gruyer 4 and Homayoun Najjaran 1,*. 1 School of Engineering, University of British Columbia, Kelowna, BC V1V 1V7, Canada; [email protected]. 2 Department of Mechanical Engineering, American University of Sharjah, Sharjah, UAE; [email protected]. 3 Department of Mechanical Engineering, Jordan University of Science & Technology, Irbid 22110, Jordan. 4 PICS-L, COSYS, University Gustave Eiffel, IFSTTAR, 25 allée des Marronniers, 78000 Versailles, France; dominique.gruyer@univ-eiffel.fr. * Correspondence: [email protected]. Received: 16 June 2020; Accepted: 24 July 2020; Published: 29 July 2020.
    Abstract: Autonomous vehicles (AV) are expected to improve, reshape, and revolutionize the future of ground transportation.
  • Freescale Sensor Fusion Library for Kinetis MCUs, Updated for Freescale Sensor Fusion Release 5.00
    Freescale Semiconductor. Document Number: FSFLK_DS. Data Sheet: Product Preview, Rev. 0.7, 9/2015. Freescale Sensor Fusion Library for Kinetis MCUs, updated for Freescale Sensor Fusion Release 5.00.
    Contents: 1 Introduction; 2 Functional Overview (2.1 Introduction; 2.2 Accelerometer Only; 2.3 Accelerometer Plus Magnetometer; 2.4 Accelerometer Plus Gyroscope; 2.5 Accelerometer Plus Magnetometer Plus Gyroscope); 3 Additional Support (3.1 Freescale Sensor Fusion Toolbox for Android; 3.2 Freescale Sensor Fusion Toolbox for Windows; 3.3
  • Sensor Fusion
    RD02: Sensor Fusion for System Wellbeing (Sensor Fusion 101)
    Learning Objectives. You will learn: what sensor fusion is; sensor tradeoffs; frames of reference; an introduction to sensor fusion focusing on motion sensors (magnetic calibration, electronic compass, separating gravity and linear acceleration, virtual gyro, Kalman filters); review and wrap-up.
    About Me: Michael Stanley. Formal training in electrical engineering: B.S.E. from Michigan State, 1980; MS from Arizona State, 1986. Employed at Motorola / Freescale Semiconductor from June 1980 to the present, with multiple careers, most recently SoC integration / MCU architecture and sensors architecture / algorithms / product definition; basically, solving systems-level problems. I blog on sensor-related topics at http://www.freescale.com/blogs/mikestanley and http://memsblog.wordpress.com/. Email = [email protected]
    What is sensor fusion? Sensor fusion encompasses a variety of techniques which can: trade off strengths and weaknesses of the various sensors to compute something more than can be calculated using the individual components; improve the quality and noise level of computed results by taking advantage of known data redundancies between sensors and knowledge of system transfer functions, dynamics and/or kinematics.
    Sensor Fusion is Everywhere. Google augmented reality headset photo from: http://commons.wikimedia.org/wiki/File:Google_Glass_Explorer_Edition.jpeg. Image of Surface Windows 8 Pro with Type Cover used with permission of
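One of the listed topics, separating gravity and linear acceleration, is commonly done by low-pass filtering the accelerometer signal to track gravity and subtracting that estimate from the raw reading; the sketch below shows that idea only and is not taken from the slides.

```python
def separate_gravity(accel, gravity_est, alpha=0.9):
    """Split a raw accelerometer sample into gravity and linear acceleration.

    accel:       (ax, ay, az) raw accelerometer reading in m/s^2
    gravity_est: previous low-pass-filtered gravity estimate, same units
    alpha:       filter constant; closer to 1.0 gives a slower, smoother gravity estimate
    Returns (new_gravity_est, linear_acceleration).
    """
    gravity = tuple(alpha * g + (1.0 - alpha) * a for g, a in zip(gravity_est, accel))
    linear = tuple(a - g for a, g in zip(accel, gravity))
    return gravity, linear

# Example: device roughly at rest with its z axis up; linear acceleration stays small
g_est = (0.0, 0.0, 9.81)
g_est, lin = separate_gravity((0.1, -0.05, 9.78), g_est)
```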
  • Intelligent Real-Time MEMS Sensor Fusion and Calibration," in IEEE Sensors Journal, Vol
    This is the accepted version of the manuscript: D. Nemec, A. Janota, M. Hruboš and V. Šimák, "Intelligent Real-Time MEMS Sensor Fusion and Calibration," in IEEE Sensors Journal, vol. 16, no. 19, pp. 7150-7160, Oct. 1, 2016, doi: 10.1109/JSEN.2016.2597292, URL: http://ieeexplore.ieee.org/document/7529156
    Intelligent real-time MEMS sensor fusion and calibration. Dušan Nemec, Aleš Janota, Marián Hruboš, and Vojtech Šimák.
    Abstract—This paper discusses an innovative adaptive heterogeneous fusion algorithm based on estimation of the mean square error of all variables used in real-time processing. The algorithm is designed for a fusion between derivative and absolute sensors and is explained by the fusion of the 3-axial gyroscope, 3-axial accelerometer and 3-axial magnetometer into attitude and heading estimation. Our algorithm has similar error performance in the steady state but much faster dynamic response compared to the fixed-gain fusion algorithm. In comparison with the extended Kalman filter the proposed algorithm converges faster and takes less computational time.
    From the introduction: ...example of velocity measurement when the sensor is measuring the time derivative of the desired variable. The main disadvantage of this method is great sensitivity of the output quality to the precision of the sensor measurements (especially sensor bias will cause increasing drift of the integrated result). The first step in eliminating the integrated error is calibration of the sensor. A standard way of calibration measures the raw output of the sensor as a response to a stimulus with known amplitude. The relation between raw and real sensor outputs is formed into the transfer function (calibration curve) and its parameters are
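The adaptive algorithm itself is not reproduced in this excerpt. As a loose illustration of the contrast the abstract draws between fixed-gain and adaptive fusion, the sketch below integrates the gyroscope and corrects toward the magnetometer heading with a gain that is attenuated whenever the accelerometer magnitude deviates from 1 g; the attenuation heuristic and all names are assumptions, not the authors' method.

```python
import math

def adaptive_gain(accel, base_gain=0.02, g=9.81):
    """Shrink the correction gain when the device appears to be accelerating,
    i.e. when the measured acceleration magnitude deviates from 1 g."""
    deviation = abs(math.sqrt(sum(a * a for a in accel)) - g) / g
    return base_gain / (1.0 + 10.0 * deviation)

def fuse_heading(heading, gyro_z, mag_heading, dt, gain):
    """Integrate the gyro, then nudge the result toward the magnetometer heading."""
    predicted = heading + gyro_z * dt
    # Wrap the heading error to [-pi, pi] before applying the correction.
    error = math.atan2(math.sin(mag_heading - predicted),
                       math.cos(mag_heading - predicted))
    return predicted + gain * error

# Example: nearly at rest, so the magnetometer correction keeps most of its weight
k = adaptive_gain(accel=(0.2, -0.1, 9.9))
heading = fuse_heading(heading=1.00, gyro_z=0.05, mag_heading=1.10, dt=0.01, gain=k)
```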