Traffic Light Mapping, Localization, and State Detection for Autonomous Vehicles

Jesse Levinson*, Jake Askeland†, Jennifer Dolson*, Sebastian Thrun*
*Stanford Artificial Intelligence Laboratory, Stanford University, CA 94305
†Volkswagen Group of America Electronics Research Lab, Belmont, CA 94002

Abstract— Detection of traffic light state is essential for autonomous driving in cities. Currently, the only reliable systems for determining traffic light state information are non-passive proofs of concept, requiring explicit communication between a traffic signal and vehicle. Here, we present a passive camera-based pipeline for traffic light state detection, using (imperfect) vehicle localization and assuming prior knowledge of traffic light location. First, we introduce a convenient technique for mapping traffic light locations from recorded video data using tracking, back-projection, and triangulation. In order to achieve robust real-time detection results in a variety of lighting conditions, we combine several probabilistic stages that explicitly account for the corresponding sources of sensor and data uncertainty. In addition, our approach is the first to account for multiple lights per intersection, which yields superior results by probabilistically combining evidence from all available lights. To evaluate the performance of our method, we present several results across a variety of lighting conditions in a real-world environment.

Fig. 1. This figure shows two consecutive camera images overlaid with our detection grid, projected from global coordinates into the image frame. In this visualization, recorded as the light changed from red to green, grid cells most likely to contain the light are colored by their state predictions.

To overcome the limitations of purely vision-based approaches, we take advantage of temporal information, track-
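The mapping step described in the abstract combines tracking, back-projection, and triangulation: each camera observation of a light, together with the vehicle's pose, yields a 3D ray, and the light's global position is the point closest to all such rays. The paper does not give an explicit formulation, but a standard least-squares ray triangulation can sketch the idea; the function name and the ray representation (origin plus unit direction in global coordinates) are assumptions for illustration.

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares intersection of 3D rays (illustrative sketch, not the
    paper's exact method). Each ray i is given by an origin o_i (camera
    position in global coordinates) and a direction d_i (back-projected
    pixel ray). We minimize the sum of squared point-to-ray distances:
    sum_i || (I - d_i d_i^T) (p - o_i) ||^2, which reduces to the linear
    system A p = b with A = sum_i (I - d_i d_i^T), b = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)                # ensure unit direction
        M = np.eye(3) - np.outer(d, d)           # projector onto plane normal to d
        A += M
        b += M @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)                 # estimated 3D light position
```

With two or more rays from distinct vehicle poses, the system is well-conditioned and the solve returns the mapped light location; in practice the rays come from tracked detections across many frames, which averages out pixel and localization noise.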
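The abstract's claim of "probabilistically combining evidence from all available lights" suggests that per-light state distributions at one intersection are fused into a single posterior. The paper's exact fusion rule is not shown in this excerpt; a minimal sketch, assuming the lights at an intersection display the same state and that per-light observations are conditionally independent, is a log-domain product of the individual distributions:

```python
import numpy as np

STATES = ("red", "yellow", "green")

def fuse_light_evidence(per_light_probs):
    """Fuse state distributions from several lights at one intersection
    (illustrative sketch under an independence assumption, not the paper's
    exact model). Each element of per_light_probs is a length-3 distribution
    over STATES for one light; the fused posterior is the normalized
    elementwise product, computed in log space for numerical stability."""
    log_post = np.zeros(len(STATES))
    for probs in per_light_probs:
        log_post += np.log(np.asarray(probs, dtype=float) + 1e-12)  # avoid log(0)
    post = np.exp(log_post - log_post.max())      # shift before exp for stability
    return post / post.sum()                       # normalized posterior
```

A weak detection on one light (e.g. partially occluded) is then outvoted by confident detections on the others, which is the source of the superior multi-light results the abstract reports.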