
Survey Of Robust Multisensor Datafusion Techniques For LiDAR And Optical Sensors On Autonomous Navigation

Prasanna Kolar, Patrick Benavidez and Mohammed Jamshidi
Department of Electrical and Computer Engineering
The University of Texas at San Antonio
One UTSA Circle, San Antonio, TX 78249, USA
Email: [email protected], [email protected], [email protected]
* ACE Lab, Department of Electrical Engineering

Abstract—A system is as good as its sensors and, in turn, a sensor is only as good as the data it measures. Accurate, optimal sensor data can be used in autonomous systems applications such as environment mapping, obstacle detection and avoidance, and similar applications. In order to obtain such accurate data, we need optimal technology to read the sensor data, process the data, eliminate the noise and then utilize it. As part of this paper, we present a survey of the current data processing techniques that implement data fusion using various sensors like LiDAR, stereo/depth cameras and RGB monocular cameras, and we mention the implementations or usage of this fused data in tasks like obstacle detection and avoidance, localization, etc. In the future, we plan on implementing a state-of-the-art fusion system on an intelligent wheelchair, controlled by human thought, without intervention of any of the user's motor skills. We propose an efficient and fast algorithm that senses and learns the projection from the camera space to the LiDAR space and outputs camera data in the form of LiDAR detections (distance and angle), and a multi-sensor, multi-modal detection system that fuses both the camera and LiDAR detections to obtain better accuracy and robustness.

Index Terms—Riemannian Geometry, Minimum Distance to Mean, Deep Learning, Convolutional Neural Networks, Sliding Window, Internet of Things, Smart City, Smart Community

1. INTRODUCTION

Autonomous systems play a vital role in our daily life, in a wide range of applications like driverless cars, humanoid robots, assistive systems, domestic systems, military systems and manipulator systems, to name a few. Assistive robotics systems are a crucial area of autonomous systems that help people in need of medical, mobility, domestic, physical and mental assistance, and they are gaining popularity in domestic applications like autonomous wheelchair systems [1], [2], autonomous walkers [3], lawn mowers [4], [5], vacuum cleaners [6], intelligent canes [7] and surveillance systems. The present research focuses on the development of state-of-the-art techniques using LiDAR and camera, which will in turn be used as part of an object detection and obstacle avoidance mechanism in an intelligent wheelchair for persons with limited or no motor skills. In this survey, we concentrate on using sensors for autonomous navigation of the robot; the main sensors that will be used for obstacle detection are the LiDAR and the camera. As we will see in the upcoming sections, these two sensors can complement each other and hence are being used extensively for detection in autonomous systems. The LiDAR and camera market is expected to reach USD 52.5 Billion by the year 2032, according to a recent survey by the Yole group, as reported in a write-up by the "First Sensors" group. The fusion of these sensors is playing a significant role in perceiving the environment in many applications, including the autonomous domain. Reliable fusion is also critical to the safety of these technologies. Many challenges lie ahead, and it is one of the exciting problems in this industry.

The navigation of an autonomous system typically comprises three important components, namely:
a) Mapping system: which senses and understands the environment the system is in.
b) Localization system: which informs the robot of its current position at any given time.
c) Obstacle avoidance system: which keeps the vehicle from running into obstacles and keeps the vehicle in a safe zone.
The navigation system is also responsible for the decision-making capability of the robot when it faces situations that demand negotiating with humans and/or other robots. This research focuses on the obstacle avoidance module of autonomous navigation.

Efficient mapping is a critical process that enables accurate localization and driving decision making in the autonomous system. The mechanical system chosen for this work has several features. It will be an intelligent power wheelchair that is capable of semi-autonomous driving in a known environment. This system utilizes a range of sensors such as cameras, Light Imaging Detection and Ranging (LiDAR), ultrasound sensors, navigation sensors, etc. Each sensor has its own preferred usage. We choose LiDARs as they are well known for their high-speed sensing and are used for long-range sensing and long-range mapping, while depth cameras and stereo cameras can be used for short-range mapping and to efficiently detect obstacles, pedestrians [8], etc.
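The abstract proposes learning a projection from camera space to LiDAR space, so that camera detections can be reported as LiDAR-style (distance, angle) measurements. As a minimal sketch of what such a conversion involves (not the authors' implementation), the snippet below assumes a pinhole camera with known intrinsics, a per-detection depth estimate, and a camera optical axis aligned with the LiDAR's zero-angle direction; all names and numeric values are hypothetical.

import math

def camera_detection_to_lidar_frame(u, depth, fx, cx):
    """Convert a camera detection (pixel column u, depth in meters) into a
    LiDAR-style (distance, angle) pair, assuming a pinhole camera whose
    optical axis coincides with the LiDAR's zero-angle direction."""
    # Lateral offset of the detection from the optical axis, in meters.
    x = (u - cx) * depth / fx
    # Range and bearing exactly as a planar LiDAR would report them.
    distance = math.hypot(x, depth)
    angle = math.atan2(x, depth)  # radians; sign convention is hypothetical
    return distance, angle

# Hypothetical usage: a pedestrian detected at pixel column 512, 4.0 m deep,
# with focal length fx = 600 px and principal point cx = 320 px.
d, a = camera_detection_to_lidar_frame(u=512, depth=4.0, fx=600.0, cx=320.0)
print(f"range = {d:.2f} m, bearing = {math.degrees(a):.1f} deg")

Note that the paper proposes learning this camera-to-LiDAR mapping from data, rather than deriving it from a fixed calibration as done in this sketch.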
Obstacle avoidance during navigation is a critical component of autonomous systems. Autonomous vehicles must be able to navigate their environment safely. While path planning requires the vehicle to go in the direction nearest to the goal, and generally the map of the area is known, obstacle avoidance entails selection of the best direction among several unobstructed directions in real time.

As mentioned earlier, this publication is limited to developing a state-of-the-art framework using a LiDAR and a depth camera to provide robust data fusion to be used for object detection, object identification and avoidance as required. But why do we need multiple sensors? The answer is that every sensor used provides a different type of information about the selected environment (which includes the tracked object, the avoided object, the autonomous vehicle itself, the world in which it is being used, and so on), and that information is provided with differing accuracy and differing detail. Due to the usage of multiple sensors, an optimal technique for fusing the data to obtain the best information at the time is of the essence. Multiple sensor fusion has been a topic of research for several decades; there is a dire need to optimally combine information from different views of the environment to get an accurate model of the environment at hand. The most common process is to combine redundant and complementary measurements of the environment. The information required by the intelligent system cannot be obtained by a single sensor due to that sensor's limitations and uncertainty.

The tasks of a navigation system, namely mapping, localization, object detection and tracking, can also be interpreted as the following processes:
• Mapping: a process of establishing spatial relationships among stationary objects in an environment
• Localization: a process of establishing the spatial relationship between the intelligent system and the stationary objects
• Object detection: a process of identifying objects that are present in the environment
• Mobile object tracking: a process of establishing temporal and spatial relationships between the intelligent system, the mobile objects in the environment and the stationary objects
These tasks vary in several ways, and therefore a single sensor will not be able to provide all the information that is necessary to optimally perform the above-mentioned tasks. Hence we have the need to use multiple sensors that may be redundant but are complementary and are able to provide the information to the perception module in the intelligent system. The perception module therefore uses information from sensors like LiDAR, camera, ultrasonic, etc. We will detail these sensors and the above-mentioned tasks in the following sections.

Fig. 1: Concepts of perception

Combining information from several sensors remains a challenging problem and an area of state-of-the-art solutions [9], [10]. Every sensor has an amount of noise that is inherent to its properties. There have been many attempts at reducing or removing this noise, for instance in object detection [11] and background noise removal [12]. In this survey we discuss filtering noise using the Kalman filter. The Kalman filter is over five decades old and is one of the most sought-after filtering techniques. We will discuss two flavors of the Kalman filter, namely the Extended Kalman Filter and the Unscented Kalman Filter.
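Before the EKF and UKF variants are detailed in a later section, the sketch below shows the shared predict/update cycle that both build upon, using the standard linear Kalman filter equations. This is a generic textbook formulation rather than code from any surveyed system; the one-dimensional random-walk model and the noise covariances are assumptions chosen purely for illustration.

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : state estimate and its covariance
    z    : new measurement (e.g., a noisy LiDAR range reading)
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances
    The EKF and UKF keep this structure but handle nonlinear models."""
    # Predict: propagate the state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction and the measurement via the Kalman gain.
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical example: smoothing noisy range readings to a static obstacle
# with a one-dimensional random-walk state model and assumed noise levels.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[0.05]])
x, P = np.array([4.0]), np.array([[1.0]])
for z in [4.1, 3.9, 4.05, 3.95]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(f"filtered range: {x[0]:.3f} m")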
Different types of sensor data fusion technologies are also compared in this survey, organized into three levels:
1) Decision or high-level fusion
2) Feature or mid-level fusion
3) Raw-data or low-level fusion
Decision or high-level fusion: at the highest level, the system decides the major tasks and takes decisions based on the fusion of information that is input from the feature level.
Feature or mid-level fusion: at the feature level, feature maps containing lines, corners, edges and textures are integrated and decisions are made for tasks like obstacle detection, object recognition, etc.
Raw-data or low-level fusion: at this most basic or lowest level, better or improved data is obtained by integrating raw data directly from multiple sensors, so that it can be used directly in downstream tasks. This new combined raw data will contain more information than the individual sensor data. A schematic sketch contrasting these three levels is given at the end of this section.
We have not performed an exhaustive review of data fusion, since there has been extensive research in this area; instead, we have summarized the most common data fusion techniques and the steps involved.

This paper is organized as follows: Section 1 (this section) gives a brief introduction to data fusion and robot navigation. Section 2 details the accomplishments in the area of perception, the benefits of data fusion and the usefulness of multiple sensors. Section 3 gives details on the design of the system, while Section 4 discusses some sample data fusion techniques. Section 6 details the sensor noise and gives details of noise filtering using techniques such as Kalman filtering, while Section 7 gives the architecture and framework of the proposed system while discussing some of the previous methods. Section 8-B describes the methodology.
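As noted above, the following schematic sketch contrasts where fusion happens at each of the three levels. The functions are hypothetical stubs standing in for real detectors and classifiers, meant only to make the level distinction concrete, not an implementation from the surveyed literature.

def low_level_fusion(lidar_points, camera_depth_points):
    """Raw-data fusion: merge raw measurements into one richer point set
    before any interpretation takes place."""
    return lidar_points + camera_depth_points  # combined raw point cloud

def mid_level_fusion(lidar_features, camera_features):
    """Feature fusion: integrate per-sensor features (lines, corners, edges,
    textures) and decide on the joint feature set."""
    return classify(lidar_features + camera_features)

def high_level_fusion(lidar_decision, camera_decision):
    """Decision fusion: each sensor pipeline decides independently, and the
    system combines the decisions (here, a conservative OR for safety)."""
    return lidar_decision or camera_decision

def classify(features):
    # Hypothetical stand-in for a real obstacle classifier.
    return len(features) > 0

# Hypothetical usage with toy data:
print(low_level_fusion([(1.0, 0.2)], [(1.1, 0.25)]))                  # merged points
print(high_level_fusion(lidar_decision=False, camera_decision=True))  # True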