Real-Time Collision Risk Estimation based on Pearson's Correlation Coefficient

Arthur Miranda Neto 1, Alessandro Corrêa Victorino 2, Isabelle Fantoni 2, Janito Vaqueiro Ferreira 1
1 Autonomous Mobility Laboratory (LMA) at FEM/UNICAMP, Brazil
2 Heudiasyc Laboratory at UTC – CNRS UMR 7253, France

IEEE Workshop on Robot Vision (WORV 2013), Jan 2013, Clearwater Beach, FL, United States, pp. 40-45, DOI 10.1109/WORV.2013.6521911. HAL Id: hal-00861087, https://hal.archives-ouvertes.fr/hal-00861087, submitted on 11 Sep 2013.

Abstract

The perception of the environment is a major issue for autonomous robots. In our previous works, we proposed a visual perception system based on an automatic image discarding method as a simple solution to improve the performance of a real-time navigation system. In this paper, we address obstacle avoidance for vehicles in dynamic and unknown environments and propose a new method for Collision Risk Estimation (CRE) based on Pearson's Correlation Coefficient (PCC). To the best of our knowledge, the PCC has not previously been applied to real-time CRE, which makes the concept unique. This paper provides a novel way of computing collision risk and of applying it to obstacle avoidance using the PCC. The real-time perception system has been evaluated on real data obtained by our intelligent vehicle.

1. Introduction

Lately, several applications for the control of autonomous and semi-autonomous vehicles have been developed. The challenge of constructing robust methods, in most cases under real-time constraints, is far from being met, as can be observed from the large number of studies published in the last few years.

For military or civil purposes, some of these applications include: the Grand Challenge [1] and the Urban Challenge [2]; Advanced Driver Assistance Systems (ADAS) [3]; autonomous perception systems [4], [5]; and aerial robots [6].

The goal of the Grand Challenge was the development of an autonomous robot capable of traversing unrehearsed, off-road terrain [1]. For the Urban Challenge, the goal was to drive a car autonomously in a city environment, with road crossings and with static and dynamic obstacles [2]. On the other hand, driven by the growing number of vehicles all over the world, ADAS systems have emerged to assist the driver in the driving task [3]. Examples of such systems are autonomous cruise control, laser-based systems, radar-based systems, collision avoidance systems and pre-crash systems [7].

In all these cases, the important factors are the variety and complexity of environments and situations. These intelligent vehicle developments share a common issue: providing the vehicle platform with the capability of perceiving and interacting with its neighbouring environment.

The importance of motion in visual processing cannot be overstated [8]. The real nature of the information used by humans to evaluate time-to-contact is still an open question. Humans adapt their motion to avoid collisions so as to preserve an admissible time-to-contact; velocity, distance and time are intrinsically linked together [9].

In 1895, Karl Pearson published the Pearson's Correlation Coefficient (PCC) [10]. Pearson's method is widely used in statistical analysis, pattern recognition and image processing. Applications in image processing include the comparison of two images for image registration, object recognition and disparity measurement [11]. Based on Pearson's method, we have proposed discarding criteria [12], [13] as a simple solution to improve the performance of a real-time navigation system by exploiting the temporal coherence between consecutive frames; the method also automatically determines the reference frame during real-time execution. In this paper, a real-time perception problem is addressed for intelligent vehicles (human-operated or autonomous systems): based on the variation of the PCC, we estimate the collision risk in dynamic and unknown environments using a single monocular system.

In Section 2 we present a review of previous works. Sections 3, 4 and 5 introduce the Pearson's Correlation Coefficient, the Otsu thresholding method and the Region-Merging Algorithm. Section 6 describes the proposed Collision Risk Estimation methodology. The results are presented in Section 7 and the conclusions are given in Section 8.
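As a rough illustration of the image discarding idea recalled above, the following sketch skips a frame when it is highly correlated with the current reference frame. It is a minimal sketch and not the criteria of [12], [13]; the gray-level input, the NumPy-based implementation and the 0.95 threshold are illustrative assumptions.

import numpy as np

def pcc(a: np.ndarray, b: np.ndarray) -> float:
    # Pearson's correlation coefficient between two same-size gray-level images.
    return float(np.corrcoef(a.ravel().astype(np.float64),
                             b.ravel().astype(np.float64))[0, 1])

def discard_frame(frame: np.ndarray, reference: np.ndarray,
                  threshold: float = 0.95) -> bool:
    # Exploit temporal coherence: a frame nearly identical to the reference
    # carries little new information and can be discarded.
    return pcc(frame, reference) >= threshold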
2. Related Works

The perception layer uses many types of sensors [1], [2], [14], including ultrasonic sensors, laser rangefinders, radars and cameras, which in many cases may be limited in scope and subject to noise. These sensors are not perfect: ultrasonic sensors are cheap but suffer from specular reflections and are limited in range, while laser rangefinders and radars provide better resolution but are more complex and expensive [4].

Vision-based sensors are passive sensors, and image scanning is performed fast enough for Intelligent Transportation Systems. However, vision sensors are less robust than millimeter-wave radars in foggy, night or direct-sunshine conditions [15]. All range-based obstacle detection systems have difficulty detecting small or flat objects on the ground, and range sensors are also unable to distinguish between different types of ground surfaces [4].

Notwithstanding, the main problem with the use of active sensors is the interference among sensors of the same type; hence, foreseeing a massive and widespread use of these sensing agents, the use of passive sensors offers key advantages [15]. For example, the monocular vision contribution to the DARPA Grand Challenge [16] showed that the reach of the lasers was approximately 22 meters, whereas the monocular vision module often looked 70 meters ahead.

On the safety front, progressive safety systems will be developed through the manufacturing of an "intelligent bumper" peripheral to the vehicle, providing new features such as blind spot detection and frontal and lateral pre-crash. The target cost of these ADAS functions has to be much lower than that of current Adaptive Cruise Control systems (500 Euros) [17]. In the obstacle avoidance context, collision warning algorithms typically issue a warning when the current range to an object is less than a critical warning distance, where safety can be measured in terms of the minimum time-to-collision (TTC) [18]. Several techniques to calculate the TTC are presented in the literature [19], [20], [21], [22], [23]. For example, the fusion of radar and vision has demonstrated the advantages of both sensors in improving collision-sensing accuracy [24]: the radar gives accurate range and range-rate measurements while the vision solves the angular accuracy problem of the radar; however, this fusion solution is costly [21]. Moreover, many authors focus on algorithms that are not suitable for real-time requirements such as low computational cost [19].

Measuring distances is not a native task for a monocular camera system [19]. Nevertheless, TTC (time-to-contact) estimation is an approach to visual collision detection from an image sequence; it is a biologically inspired method that does not require scene reconstruction or 3D depth estimation [20]. TTC is an interesting and well-studied research topic [19]. Optical flow may be used to compute the TTC [8], [25], [26], but computing TTC from optical flow has proven impractical for real applications in dynamic environments [22]. Additionally, gradient-based methods can be used with a certain degree of confidence in environments such as indoors, where the lighting conditions can be controlled, but they are computationally expensive [27].

Inspired by the TTC approaches, this paper presents a novel approach to obtain Collision Risk Estimation (CRE) based on the PCC.

3. Pearson's Correlation Coefficient

According to [29], an empirical and theoretical development that defines regression and correlation as statistical topics was presented by Sir Francis Galton in 1885. In 1895, Karl Pearson published the PCC [10]. It is widely used in statistical analysis, pattern recognition and image processing. It is described by Eq. (1) [11]:

r_1 = \frac{\sum_i (x_i - x_m)(y_i - y_m)}{\sqrt{\sum_i (x_i - x_m)^2 \, \sum_i (y_i - y_m)^2}}    (1)

where x_i is the intensity of the i-th pixel in image 1, y_i is the intensity of the i-th pixel in image 2, x_m is the mean intensity of image 1, and y_m is the mean intensity of image 2.
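For concreteness, a minimal NumPy sketch of Eq. (1) is given below. It is an illustration of the formula for two gray-level images of identical size, not the authors' implementation.

import numpy as np

def pearson_r1(image1: np.ndarray, image2: np.ndarray) -> float:
    # r1 of Eq. (1): correlation between the pixel intensities of two images.
    x = image1.astype(np.float64).ravel()
    y = image2.astype(np.float64).ravel()
    xd = x - x.mean()                      # x_i - x_m
    yd = y - y.mean()                      # y_i - y_m
    denom = np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))
    return float(np.sum(xd * yd) / denom) if denom > 0 else 0.0

Identical images give r_1 = 1, and a drop in r_1 between consecutive frames indicates a change in the observed scene; this variation of the PCC is what the proposed CRE exploits.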
4. Otsu Thresholding Method

4.1 Image pre-processing

We use a color or gray-level image. If the image is colored, then in order to exploit the most important information in the color image, the candidate color channel that is dominant in a certain color space is selected to generate the histogram image [30]. This selection is described by Eq. (2):

C_c = \arg\max_{\{R, G, B\}} (N_R, N_G, N_B)    (2)

where C_c denotes the dominant color channel in a given reference region.

4.2 Otsu thresholding method (OTM): description

Region recognition can be handled by popular thresholding algorithms such as Maximum Entropy, Invariant Moments and the OTM. For road detection, the OTM was used to overcome the negative impacts caused by environmental variation, because it provides a more satisfactory performance in image segmentation [30]. Furthermore, some authors consider the OTM one of the best choices for real-time applications in machine vision [31], [32].
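To make the pre-processing and segmentation steps concrete, the sketch below selects a dominant channel in the spirit of Eq. (2) and binarizes it with the OTM. It assumes OpenCV and NumPy and is not the authors' implementation; in particular, reading N_R, N_G and N_B as per-channel intensity sums over the reference region is an assumption, as are the optional region argument and the placeholder file name.

import cv2
import numpy as np

def dominant_channel(image_bgr: np.ndarray, region=None) -> np.ndarray:
    # Eq. (2), under the assumption that N_R, N_G and N_B are per-channel
    # intensity sums over the reference region (full image by default).
    roi = image_bgr if region is None else image_bgr[region]
    sums = [int(roi[..., c].sum()) for c in range(3)]  # OpenCV channel order: B, G, R
    return image_bgr[..., int(np.argmax(sums))]

def otsu_segment(channel: np.ndarray) -> np.ndarray:
    # Binarize the selected 8-bit channel with the Otsu thresholding method.
    channel = np.ascontiguousarray(channel)  # OpenCV expects a contiguous buffer
    _, binary = cv2.threshold(channel, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

# Usage sketch (the file name is a placeholder):
# frame = cv2.imread("frame.png")
# road_mask = otsu_segment(dominant_channel(frame))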