Intelligent Transportation Systems

Using Fuzzy Logic in Automated Vehicle Control

José E. Naranjo, Carlos González, Ricardo García, and Teresa de Pedro, Instituto de Automática Industrial

Miguel A. Sotelo, Universidad de Alcalá de Henares

Automated versions of a mass-produced vehicle use fuzzy logic techniques to both address common challenges and incorporate human procedural knowledge into the vehicle control algorithms.

Until recently, in-vehicle computing has been largely relegated to auxiliary tasks such as regulating cabin temperature, opening doors, and monitoring fuel, oil, and battery-charge levels. Now, however, computers are increasingly assuming driving-related tasks in some commercial models. Among those tasks are

• maintaining a reference velocity or keeping a safe distance from other vehicles,
• improving night vision with infrared cameras, and
• building maps and providing alternative routes.

Still, many traffic situations remain complex and difficult to manage, particularly in urban settings. The driving task belongs to a class of problems that depend on underlying systems for logical reasoning and dealing with uncertainty. So, to move vehicle computers beyond monitoring and into tasks related to environment perception or driving, we must integrate aspects of human intelligence and behaviors so that vehicles can manage driving actuators in a way similar to humans.

This is the motivation behind the AUTOPIA program, a set of national research projects in Spain. AUTOPIA has two primary objectives. First, we want to implement automatic driving using real, mass-produced vehicles tested on real roads. Although this objective might be called "utopian" at the moment, it's a great starting point for exploring the future. Our second aim is to develop our automated system using modular components that can be immediately applied in the automotive industry.

AUTOPIA builds on the Instituto de Automática Industrial's extensive experience developing autonomous robots and fuzzy control systems and the Universidad de Alcalá de Henares's knowledge of artificial vision. (The "Intelligent-Vehicle Systems" sidebar discusses other such projects.) We're developing a testbed infrastructure for vehicle driving that includes control-system experimentation, strategies, and sensors. All of our facilities and instruments are available for collaboration with other research groups in this field. So far, we've automated and instrumented two Citroën Berlingo mass-produced vans to carry out our objectives.

Automated-vehicle equipment

Figure 1 shows two mass-produced electric Citroën Berlingo vans, which we've automated using an embedded fuzzy-logic-based control system to control their speed and steering. The system's main sensor inputs are a CCD (charge-coupled device) color camera and a high-precision global positioning system. Through these, the system controls the vehicle-driving actuators, that is, the steering, throttle, and brake pedals. Both vehicles include an onboard PC-based computer; a centimetric, real-time kinematic differential GPS (RTK DGPS); wireless LAN support; two servomotors; and an analog/digital I/O card. We added a vision system in another computer connected to the control computer. Figure 2 shows the control system that we developed to handle all these devices.

Sidebar: Intelligent-Vehicle Systems

According to Ernst Dickmanns, it's likely to be at least 20 years before we'll apply intelligent transportation systems to road transportation and achieve the ultimate goal of fully autonomous vehicle driving.1 The STARDUST final report assumes that ITS progress toward this goal will be rooted in adaptive cruise control.2

In 1977, Sadayuki Tsugawa's team in Japan presented the first intelligent vehicle with fully automatic driving.3 Although that system was limited, it demonstrated autonomous vehicles' technical feasibility. Tsugawa's group continues its autonomous-vehicle work today, and many other international projects have also been launched, including the following efforts in Europe and the US.

European projects

In Europe, researchers completed autonomous-vehicle developments such as VaMoRs and Argo as part of the Prometheus program (the Program for European Traffic with Highest Efficiency and Unprecedented Safety).

As part of the VaMoRs project, Ernst Dickmanns' team at the Universität der Bundeswehr developed an automatic-driving system using artificial vision and transputers in the late 1980s. The team installed the system in a van with automated actuators and ran a range of automatic-driving experiments, some of which were on conventional highways at speeds up to 130 km/h. Projects such as VaMP (the advanced platform for visual autonomous road vehicle guidance) are continuing this work today.4

Alberto Broggi's team developed the ARGO vehicle at Parma University.5 In 1998, the team drove ARGO 2,000 km along Italian highways in automatic mode using artificial-vision-based steering.

In France, projects such as Praxitele and "La route automatisée" focus on driving in urban environments, as do the European Union's Cybercars and CyberCars-2 projects. Another European project, Chauffeur, focuses on truck platoon driving.

US projects

California's PATH (Partners for Advanced Transit and Highways)6 is a multidisciplinary program launched in 1986. Its ultimate goal is to solve the state's traffic problems by totally or partially automating vehicles. PATH uses special lanes for automatic vehicles, which will circulate autonomously (guided by magnetic marks) and form platoons.

Carnegie Mellon University's NavLab program has automated 11 vehicles since 1984 to study and develop autonomous-driving techniques. In 1995, NavLab 5 carried out the No Hands Across America experiment, a trip from Washington, D.C., to California along public highways in which artificial vision and a neural network control system managed the vehicle's steering wheel.

The University of Arizona's VISTA (Vehicles with Intelligent Systems for Transport Automation) project started in 1998 to conduct intelligent-vehicle research and develop technology for vehicle control.7 Since 2000, the project has been cooperating with the Chinese Academy of Sciences, whose intelligent-transportation-system activities include research8 and managing the National Field Testing Complex.9

In 2004, DARPA decided to test automatic-vehicle technology by organizing the DARPA Grand Challenge. The experiment consisted of a set of activities for autonomous vehicles, with 25 out of 106 groups selected to participate.10 Only 14 groups qualified for the final, which was a race of approximately 320 km; the winner made it only 12 km.11

A second edition was held on 8 October 2005 in the US. The Stanford Racing Team won, with a winning time of 6 hours, 53 minutes. In all, five teams completed the Grand Challenge course, which was 132 miles over desert terrain. DARPA has announced a third Grand Challenge, set for 3 November 2007. In it, participants will attempt an autonomous, 60-mile route in an urban area, obeying traffic laws and merging into moving traffic.

References

1. E. Dickmanns, "The Development of Machine Vision for Road Vehicles in the Last Decade," Proc. IEEE Intelligent Vehicle Symp., vol. 1, IEEE Press, 2002, pp. 268–281.
2. Sustainable Town Development: A Research on Deployment of Sustainable Urban Transport (Stardust), Critical Analysis of ADAS/AVG Options to 2010, Selection of Options to Be Investigated, European Commission Fifth Framework Programme on Energy, Environment, and Sustainable Development, 2001.
3. S. Kato et al., "Vehicle Control Algorithms for Cooperative Driving with Automated Vehicles and Intervehicle Communications," IEEE Trans. Intelligent Transportation Systems, vol. 3, no. 3, 2002, pp. 155–161.
4. U. Franke et al., "Autonomous Driving Goes Downtown," IEEE Intelligent Systems, vol. 13, no. 6, 1998, pp. 40–48.
5. A. Broggi et al., Automatic Vehicle Guidance: The Experience of the ARGO Autonomous Vehicle, World Scientific, 1999.
6. R. Rajamani et al., "Demonstration of Integrated Longitudinal and Lateral Control for the Operation of Automated Vehicles in Platoons," IEEE Trans. Control Systems Technology, vol. 8, no. 4, 2000, pp. 695–708.
7. F.Y. Wang, P.B. Mirchandani, and Z. Wang, "The VISTA Project and Its Applications," IEEE Intelligent Systems, vol. 17, no. 6, 2002, pp. 72–75.
8. N.N. Zheng et al., "Toward Intelligent Driver-Assistance and Safety Warning System," IEEE Intelligent Systems, vol. 19, no. 2, 2004, pp. 8–11.
9. F.Y. Wang et al., "Creating a Digital-Vehicle Proving Ground," IEEE Intelligent Systems, vol. 18, no. 2, 2003, pp. 12–15.
10. Q. Chen, U. Ozguner, and K. Redmill, "Ohio State University at the DARPA Grand Challenge: Developing a Completely Autonomous Vehicle," IEEE Intelligent Systems, vol. 19, no. 5, 2004, pp. 8–11.
11. J. Kumagai, "Sand Trap," IEEE Spectrum, June 2004, pp. 34–40.

The computer drives the vans using two fuzzy-logic-based controllers: the steering (lateral) control and the speed (longitudinal) control. To automate the steering, we installed a DC servomotor in the steering wheel column. The Berlingo has an electronic throttle control, so we shortened the electronic circuit to actuate the throttle using an analog output card. The brake pedal is fully mechanical; we automated it using a pulley and a DC servomotor. We equipped the transmission with an electronic gearbox with forward and reverse selection, which we automated using a digital I/O card that sends the correct gear to the internal vehicle computer. We designed our driving area to emulate an urban environment because automatic urban driving is one of ITS's less researched topics.


Figure 1. The AUTOPIA testbed vehicles. An embedded fuzzy-logic-based control system controls both speed and steering in each Citroën Berlingo.

Figure 2. The AUTOPIA system control structure. The sensorial equipment (tachometer, GPS, wireless LAN, and camera with its vision system) supplies the necessary data to the fuzzy-logic-based guidance system, which decides the optimal control signals to manage the vehicle actuators (steering-wheel DC motor, throttle analog card, and brake DC motor).

Guidance system

We modeled the guidance system using fuzzy variables and rules. In addition to the steering wheel and vehicle velocity functionalities, we also consider variables that the system can use in adaptive cruise control (ACC) and overtaking capabilities. Among these variables are the distance to the next bend and the distance to the lead vehicle (that is, any vehicle driving directly in front of the automated vehicle).

Car driving is a special control problem because mathematical models are highly complex and can't be accurately linearized. We use fuzzy logic because it's a well-tested method for dealing with this kind of system, provides good results, and can incorporate human procedural knowledge into control algorithms. Also, fuzzy logic lets us mimic human driving behavior to some extent.

Steering control

The steering control system's objective is to track a trajectory. To model the lateral and angular tracking deviations perceived by a human driver, we use two fuzzy variables: Lateral_Error and Angular_Error. These variables represent the difference between the vehicle's current position and orientation and the correct ones relative to a reference trajectory. Both variables can take left or right linguistic values. Angular_Error represents the angle between the orientation and vehicle velocity vectors; its value is left if this angle is counterclockwise and right if it's clockwise. Lateral_Error represents the distance from the vehicle to the reference trajectory; its value is left if the vehicle is positioned on the trajectory's left and right if the vehicle is on the right. We compute the variables' instantaneous values using the DGPS data and a digital environment map.

The fuzzy output variable is Steering_Wheel; it indicates which direction the system must turn the steering wheel to correct the input errors. Again, the variable has left and right linguistic values: left if the steering wheel must turn counterclockwise and right if it must turn clockwise. We define the fuzzy sets for the left and right values in an interval from –540 to 540 degrees.

As with human behavior, our guidance system works differently when tracking lanes than when turning on sharp bends. When traveling along a straight road, people drive at relatively high speeds while gently turning the steering wheel. In contrast, on sharp bends, they rapidly reduce speed and quickly turn the steering wheel. We emulate such behavior by changing the membership function parameters of the Lateral_Error, Angular_Error, and Steering_Wheel linguistic variables. To represent the human procedural knowledge in the driving task, we need only two fuzzy rules. These rules tell the fuzzy inference engine how to relate the fuzzy input and output variables:

IF Angular_Error left OR Lateral_Error left THEN Steering_Wheel right
IF Angular_Error right OR Lateral_Error right THEN Steering_Wheel left

Although these rules are simple, they generate results that are close to human driving. The rules are the same for all situations, but the definitions of the fuzzy variables' linguistic values change. Figure 3 shows this feature in the membership function definitions for Lateral_Error and Angular_Error. Figures 3a and 3b show the degree of truth for the input error values in straight-path tracking situations. This definition lets the system act quickly when trajectory deviations occur, again in keeping with human behavior.

To prevent accidents, we must limit the maximum turning angle for straight-lane driving. This limitation is also similar to human behavior; we achieve it by defining the output variable membership function as a singleton, confining this turning to 2.5 percent of the total. Figures 3c and 3d show similar function definitions, but their shapes' gradients are lower. This makes the driving system less reactive than when tracking a straight trajectory and assures that the vehicle adapts to the route smoothly.
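To make the two-rule steering inference concrete, here's a minimal Python sketch. The input breakpoints follow the straight-lane membership functions in figures 3a and 3b, and the singleton placement uses the 2.5 percent turning limit described above; the ramp shapes, sign conventions, and weighted-average defuzzification are our own illustrative assumptions, not the exact AUTOPIA implementation.

```python
"""Minimal two-rule fuzzy steering sketch (illustrative, not the AUTOPIA code)."""

def ramp(x, zero_at, one_at):
    """Linear membership rising from 0 at `zero_at` to 1 at `one_at` (clamped)."""
    t = (x - zero_at) / (one_at - zero_at)
    return max(0.0, min(1.0, t))

# Input fuzzification, straight-lane mode (breakpoints from figures 3a/3b).
def lateral_left(e_m):    return ramp(e_m, 0.0, 0.8)    # vehicle left of path (assumed sign)
def lateral_right(e_m):   return ramp(e_m, 0.0, -0.8)   # vehicle right of path
def angular_left(e_deg):  return ramp(e_deg, 0.0, 2.0)
def angular_right(e_deg): return ramp(e_deg, 0.0, -2.0)

# Output singletons: 2.5 percent of the +/-540-degree steering range = 13.5 degrees.
TURN_LIMIT = 0.025 * 540.0

def steering_command(lat_err_m, ang_err_deg):
    """Fire the two rules (OR = max) and defuzzify by weighted average."""
    # IF Angular_Error left OR Lateral_Error left THEN Steering_Wheel right
    w_right = max(angular_left(ang_err_deg), lateral_left(lat_err_m))
    # IF Angular_Error right OR Lateral_Error right THEN Steering_Wheel left
    w_left = max(angular_right(ang_err_deg), lateral_right(lat_err_m))
    if w_left + w_right == 0.0:
        return 0.0
    # Left singleton at +TURN_LIMIT, right at -TURN_LIMIT (assumed convention).
    return (w_left * TURN_LIMIT + w_right * -TURN_LIMIT) / (w_left + w_right)

print(steering_command(0.4, 1.0))  # drifted left -> -13.5, i.e., turn right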

[Figure 3 plots, residue summarized: x-axes are meters for (a, c), degrees for (b, d), km/h for (e, g), and km/h/s for (f, h). Linguistic labels are left/right for the steering variables and LESS THAN null / null / MORE THAN null (nullf for the brake controller) for the speed variables. Approximate breakpoints: (a) ±0.8 m; (b) ±2 degrees within ±180; (c) –5.9 to 3.75 m; (d) –53 to 63 degrees within ±180; (e) –15 to 20 km/h; (f) –35 to 13 km/h/s; (g) –14, 0, 3, 25 km/h; (h) –5 to 0 km/h/s.]

Figure 3. The membership function definitions for fuzzy variables: (a) Lateral_Error straight, (b) Angular_Error straight, (c) Lateral_Error curves, (d) Angular_Error curves, (e) Speed_Error throttle, (f) Acceleration throttle, (g) Speed_Error brake, and (h) Acceleration brake.

We can also represent the output using a singleton without turning limitations. We fine-tuned the membership functions experimentally, comparing their behavior with human operations and correcting them accordingly until the system performed acceptably. So, the driving system selects a fuzzy membership function set depending on the situation, which leads to different reactions for each route segment.

Speed control

To control speed, we use two fuzzy input variables: Speed_Error and Acceleration. To control the accelerator and the brake, we use two fuzzy output variables: Throttle and Brake. The Speed_Error crisp value is the difference between the vehicle's real speed and the user-defined target speed, and the Acceleration crisp value is the speed's variation during a time interval. The throttle pressure range is 2–4 volts, and the brake pedal range is 0–240 degrees of the actuation motor.

The fuzzy rules containing procedural knowledge for throttle control are

IF Speed_Error MORE THAN null THEN Throttle up
IF Speed_Error LESS THAN null THEN Throttle down
IF Acceleration MORE THAN null THEN Throttle up
IF Acceleration LESS THAN null THEN Throttle down

The rules for brake control are

IF Speed_Error MORE THAN nullf THEN Brake down
IF Speed_Error LESS THAN nullf THEN Brake up
IF Acceleration LESS THAN nullf THEN Brake up

where brake/throttle down means depress the brake and throttle, and brake/throttle up means release the brake and throttle. The membership functions associated with the fuzzy linguistic labels null and nullf define the degree of nearness to 0 of Acceleration and Speed_Error, respectively.

Figures 3e through 3h show the membership functions of null (for the throttle controller) and nullf (for the brake controller) for Speed_Error and Acceleration, respectively. An asymmetry exists in the two variable definitions for two reasons:

• to account for the difference in how accelerating and braking vehicles behave, and
• to coordinate both pedals' actuation to emulate human driving.
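The following sketch renders the throttle and brake rule bases above in code. The MORE THAN / LESS THAN hedges are modeled as ramps away from the fuzzy-zero labels null and nullf; the breakpoints loosely follow figures 3e through 3h, and treating the output as a signed pedal increment (later mapped to the 2–4 V throttle range and 0–240 degree brake range) is our assumption, not the published tuning.

```python
"""Illustrative throttle/brake rule base (a sketch, not the production tuning)."""

def toward(x, full):
    """Degree (0..1) to which x has moved from 0 toward the breakpoint `full`."""
    return max(0.0, min(1.0, x / full))

def throttle_update(speed_err, accel):
    """Speed_Error = real minus target speed (km/h); Acceleration in km/h/s."""
    up = toward(speed_err, 20.0)            # IF Speed_Error MORE THAN null THEN Throttle up
    down = toward(speed_err, -15.0)         # IF Speed_Error LESS THAN null THEN Throttle down
    up = max(up, toward(accel, 13.0))       # IF Acceleration MORE THAN null THEN Throttle up
    down = max(down, toward(accel, -35.0))  # IF Acceleration LESS THAN null THEN Throttle down
    return down - up   # >0: depress the throttle; <0: release it

def brake_update(speed_err, accel):
    down = toward(speed_err, 3.0)           # IF Speed_Error MORE THAN nullf THEN Brake down
    up = toward(speed_err, -14.0)           # IF Speed_Error LESS THAN nullf THEN Brake up
    up = max(up, toward(accel, -5.0))       # IF Acceleration LESS THAN nullf THEN Brake up
    return down - up   # >0: depress the brake; <0: release it

print(throttle_update(10.0, 0.0))  # 10 km/h too fast -> negative: ease off the throttle
```

Note how the two controllers share one speed error but use differently shaped nearness-to-zero labels, which is exactly the asymmetry the article describes.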


Figure 4. The adaptive cruise control + Stop&Go controller's performance in an automated vehicle: keeping a safe distance (a) at 30 km/h (target headway 2 s, distance 16.6 m), (b) during speed reduction (target headway 2 s, speed 9 km/h, distance 5 m), and (c) in stop-and-go situations (target headway 2 m, speed 0 km/h, distance 2 m).

Throttle and brake controllers are independent, but they must work cooperatively. Activating the two pedals produces similar outcomes and can

• increase the target speed (stepping on the throttle or stepping off the brake on downhill roads),
• maintain speed (stepping on or off either pedal when necessary), and
• reduce the vehicle's speed (stepping off the throttle or stepping on the brake).

ACC+Stop&Go

With ACC, the system can change the vehicle's speed to keep a safe distance from the lead vehicle. As an extreme example, the lead vehicle might come to a complete stop owing to a traffic jam. In this case, the ACC must stop the vehicle using a stop-and-go maneuver; when the road is clear, the ACC reaccelerates the vehicle until it reaches the target speed. Combining ACC with stop-and-go maneuvers increases driving comfort, regulates traffic speed, and breaks up bottlenecks more quickly. Many rear-end collisions happen in stop-and-go situations because of driver distractions.

ACC systems have been on the market since 1995, when Mitsubishi offered the Preview Distance Control system in its Diamante model. Several sensors can provide the vehicle with ACC capability: radar, laser, vision, or a combination thereof. Almost all car manufacturers now offer ACC systems for their vehicles, but they all have two clear drawbacks. First, the ACC systems don't work at speeds lower than 40 km/h, so they can't offer stop-and-go maneuvers. Second, the systems manage only the throttle automatically; consequently, the speed adaptation range is limited.

Our system overcomes these limitations by automating the throttle and brake, which lets the system act across the vehicle's entire speed range. In our case, we selected GPS as the safety-distance sensor. We installed GPS in both vehicles, and they communicate their positions to one another via WLAN.

Keeping a user-defined safety distance from the next vehicle is a speed-dependent function: the higher the speed, the larger the required intervehicle gap. This is the time-headway concept: a time-dependent safety distance maintained between two vehicles. If we set a safety time gap of two seconds, for example, the space gap is 22.2 meters for a vehicle moving at 40 km/h but approximately 55.5 meters at 100 km/h. The time gap setting depends on the vehicle's braking power, the weather, the maximum speed, and so on. For example, Article 54 of the Spanish Highway Code states,

The driver of a vehicle trailing another shall keep a distance such that he or she can stop his or her vehicle without colliding with the leading vehicle should this brake suddenly, taking into account speed, adherence and braking conditions.
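The space gap behind the time-headway concept is simply speed times the time gap. This tiny sketch (our illustration, with an assumed function name) reproduces the article's numbers:

```python
def headway_gap_m(speed_kmh: float, time_gap_s: float = 2.0) -> float:
    """Speed-dependent safety distance: gap = speed (m/s) * time headway (s)."""
    return speed_kmh / 3.6 * time_gap_s

print(round(headway_gap_m(40), 1))   # 22.2 m, as in the article
print(round(headway_gap_m(100), 1))  # 55.6 m (~55.5 m in the article)
```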
Figure 4 shows our ACC+Stop&Go controller's performance in one of our automated vehicles. At the experiment's beginning, the trailing vehicle starts moving, speeds up, and eventually stops because the lead vehicle is blocking the way. The lead vehicle then starts moving, gains speed, and brakes again, emulating a congested traffic situation. A few seconds later, the trailing vehicle starts up again, eventually stopping behind the lead vehicle. Figure 5a shows the system using the throttle and brake to continuously control the distance and time headway. Figure 5b shows that the trailing vehicle maintains its distance even if the speed is low and the time headway is no longer significant. When the lead vehicle starts up, the trailing vehicle follows, respecting the time headway (see figure 5c). Finally, figure 5d shows each vehicle's speed profile, which indicates the cars' behavior in relation to pedal pressure.

Overtaking

The system can also manage obstacles or other vehicles in the vehicle's path by calculating when the vehicle should change lanes to overtake the (mobile or immobile) obstacle. First,

• the vehicle must be in the straight-lane driving mode,
• the left lane must be free, and
• there must be room for the overtaking.2

Given this, overtaking occurs as follows (see the state-machine sketch after this list):

1. Initially, the vehicle is in straight-lane mode.
2. The driving mode changes to lane-change mode, and the vehicle moves into the left lane.
3. The driving mode changes to straight-lane mode until the vehicle has passed the obstacle or vehicle.
4. The driving mode again changes to lane-change mode, and the vehicle returns to the right lane.
5. When the vehicle is centered in the lane, the driving mode changes back to straight-lane mode, and driving continues as usual.
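The five steps map naturally onto a small state machine. In this sketch the predicate arguments are hypothetical placeholders; the article's actual transition tests (discussed with figures 6 and 7) would plug in where the comments indicate.

```python
"""Skeleton of the five-step overtaking sequence as a state machine (illustrative)."""
from enum import Enum, auto

class Mode(Enum):
    STRAIGHT = auto()      # step 1: straight-lane tracking
    CHANGE_LEFT = auto()   # step 2: move into the left lane
    PASS = auto()          # step 3: straight-lane mode alongside the obstacle
    CHANGE_RIGHT = auto()  # step 4: return to the right lane
    # step 5 re-enters STRAIGHT once the vehicle is centered again

def step(mode, lane_change_done, passed_obstacle):
    """Advance the overtaking mode given the current transition tests."""
    if mode is Mode.STRAIGHT:
        return Mode.CHANGE_LEFT            # overtaking conditions already checked
    if mode is Mode.CHANGE_LEFT and lane_change_done:
        return Mode.PASS                   # tracking errors below thresholds (see text)
    if mode is Mode.PASS and passed_obstacle:
        return Mode.CHANGE_RIGHT           # rear end clears by one car length l
    if mode is Mode.CHANGE_RIGHT and lane_change_done:
        return Mode.STRAIGHT               # step 5: centered in the right lane
    return mode
```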

Figure 5. Performance measures during the 70-second ACC+Stop&Go experiment: (a) normalized throttle and brake pedal pressure, (b) headway distance (meters), (c) time-headway error (seconds), and (d) each vehicle's speed (pursued and pursuer, km/h).


[Figure 6 flowchart, residue summarized: boxes include adaptive cruise control + Stop&Go straight-lane tracking; car ahead/overtaking condition?; select overtaking speed; headway?; select lane-change mode; change to the left lane; lane change completed?; select straight-lane tracking; pass the overtaken car; overtake completed?; change to the right lane.]

Figure 6. A flowchart of the overtaking algorithm.

Figure 7. The overtaking maneuver (a) starts with a change from the right to the left lane and (b) ends with a change from the left to the right lane. A is the distance at which the vehicle changes lanes, and l is the vehicle's length.

Figure 6 shows a detailed flowchart of this algorithm. We calculate the time for starting the transition from the first to the second step as a function of the vehicles' relative speed and the overtaking vehicle's length. Figure 7 illustrates the overtaking maneuver. The overtaking vehicle must change lanes at point A + l, where A is the distance at which the vehicle changes lanes and l is the vehicle's length. The dot on the back of each vehicle represents a GPS antenna, located over the rear axle. Vehicles use the GPS receptor and the WLAN link to continuously track their own position and that of other vehicles. The lane change proceeds only if the front of the overtaking vehicle is completely in the left lane upon reaching the rear of the overtaken vehicle in the right lane.

A is speed dependent: A = F(v), where v is the relative speed between the overtaking and overtaken vehicles, because the higher the relative speed, the larger the lane-change distance. In this case, l is 4 meters, a Citroën Berlingo's length.

The system transitions from step 2 to step 3 when the overtaking vehicle's angular and lateral errors are both low. Specifically, Angular_Error must be less than 2 degrees and Lateral_Error less than 0.8 meter. The system transitions to step 4 when the overtaking vehicle's rear end passes the overtaken vehicle's front end and the separation is l (see figure 7b). Finally, the transition to step 5 uses the same test as the transition from step 2 to step 3.
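In code, the step 2 to step 3 test is a simple threshold check on the two tracking errors. The article gives only A = F(v) without specifying F, so the linear form and the maneuver time below are purely our assumptions for illustration:

```python
CAR_LENGTH_L = 4.0  # meters, a Citroen Berlingo's length (from the article)

def lane_change_complete(angular_err_deg: float, lateral_err_m: float) -> bool:
    """Step 2 -> 3 (and 4 -> 5) test, using the article's thresholds."""
    return abs(angular_err_deg) < 2.0 and abs(lateral_err_m) < 0.8

def lane_change_point(v_rel_mps: float, t_maneuver_s: float = 5.0) -> float:
    """Distance A + l at which to start the lane change.

    The article states only that A = F(v); the linear model and the assumed
    5 s maneuver time here are hypothetical choices, not the AUTOPIA F.
    """
    return v_rel_mps * t_maneuver_s + CAR_LENGTH_L
```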


Figure 8. Two vehicle detection examples: (a) the original image, (b) the image with region-of-interest edge enhancement, (c) a vertical symmetry map, and (d) the detected vehicle's position.

Vision-based vehicle detection

To achieve reliable navigation, all autonomous vehicles must master the basic skill of obstacle detection. This vision-based task is complex. Consider, for example, common situations in urban environments, such as missing lane markers, vehicles parked on both sides of the street, or crosswalks. All such situations make it difficult for a system to reliably detect other vehicles, creating hazards for the host vehicle. To address this, we use a monocular color-vision system to give our GPS-based navigator visual reactive capacity.

Search and vehicle detection

We sharply reduce execution time by limiting obstacle detection to a predefined area in which obstacles are more likely to appear. This rectangular area, or region of interest (ROI), covers the image's central section.

To robustly detect and track vehicles along the road, we need two consecutive processing stages. First, the system locates vehicles on the basis of their color and shape properties, using vertical-edge and color-symmetry characteristics. It combines this analysis with temporal constraints for consistency, assuming that vehicles generally have artificial rectangular and symmetrical shapes that make their vertical edges easily distinguishable. Second, the system tracks the detected vehicle using a real-time estimator.

Vertical-edge and symmetry discriminating analysis

After identifying candidate edges representing the target vehicle's limits, the system computes a symmetry map of the ROI3 to enhance the objects with strong color-symmetry characteristics. It computes these characteristics using pixel intensity to measure the match between two halves of an image region around a vertical axis. It then considers the vertical edges of paired ROIs with high symmetry measures (rejecting uniform areas). It does this only for pairs representing possible vehicle contours, disregarding any combinations that lead to unrealistic vehicle shapes.

Temporal consistency

In the real world, using only spatial features to detect obstacles leads to sporadic, incorrect detection due to noise. We therefore use a temporal-validation filter to remove inconsistent objects from the scene.4 That is, the system must detect any spatially interesting object in several consecutive image iterations to consider that object a real vehicle; it discards all other objects.

We use the value t = 0.5 s to ensure that a vehicle appears in a consistent time sequence. A major challenge of temporal-spatial validation is for the system to identify the same vehicle's appearance in two consecutive frames. To this end, our system uses the object's (x, y) position in correlative frames. That is, it can use the position differences to describe the vehicle's evolution in the image plane. At time instant t0, the system annotates each target object's (x, y) position in a dynamic list and starts a time count to track all candidate vehicles' temporal consistency. At time t0 + 1, it repeats the process using the same spatial-validation criterion. We increase the time count only for those objects whose distance from some previous candidate vehicle is less than dv; otherwise, we reset the time count. A candidate object is validated as a real vehicle when its time count reaches t = 0.5 s.

Given that the vision algorithm's complete execution time is 100 ms, an empirical value dv = 1 m has proven successful in effectively detecting real vehicles in the scene. Figure 8 shows examples of the original and filtered images along with the ROI symmetry map and the detected vehicle's final position.
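A minimal sketch of the temporal-validation filter follows; the data structures and matching loop are our assumptions, while the constants come from the article (dv = 1 m, t = 0.5 s, 100 ms vision cycle):

```python
"""Sketch of the temporal-validation filter (data structures assumed)."""
import math

DV = 1.0        # max displacement (m) between frames to count as the same object
CYCLE_S = 0.1   # vision algorithm execution time: 100 ms
T_VALID = 0.5   # persistence required for validation (s)

class TemporalFilter:
    def __init__(self):
        self.candidates = []  # list of ((x, y), time_count) pairs

    def update(self, detections):
        """Match detections against prior candidates; return validated positions."""
        nxt, validated = [], []
        for pos in detections:
            prev = next((c for c in self.candidates
                         if math.dist(c[0], pos) < DV), None)
            count = prev[1] + CYCLE_S if prev else 0.0  # reset count if unmatched
            nxt.append((pos, count))
            if count >= T_VALID:
                validated.append(pos)
        self.candidates = nxt  # unmatched old candidates drop out of the scene
        return validated
```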
Vehicle tracking

We track the detected vehicle's position using position measurement and estimation. We use the detected vehicle's ROI image as a template to detect position updates in the next image using a best-fit correlation. We then use the vehicle's (x, y) location in data association for position validation. Basically, we want to determine whether any object in the current frame matches the vehicle being tracked. To do this, we specify a limited search area around the vehicle position, leading to fast, efficient detection. We also establish a minimum correlation value and template size to end the tracking process if the system obtains poor correlations or if the vehicle moves too far away or leaves the scene.

Next, we filter the vehicle position measurements using a recursive least-squares estimator with exponential decay.5 To cope with partial occlusions, the system keeps the previously estimated vehicle position for five consecutive iterations (without calculating any validated position) before considering the vehicle track as lost. Given a loss, the system stops vehicle tracking and restarts the vehicle detection stage. Figure 9 illustrates our algorithm, showing how the system tracked the lead vehicle in real traffic situations.
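For reference, here's a minimal recursive least-squares filter with exponential forgetting, the general family of estimator cited above (Schneiderman and Nashman). The constant-velocity measurement model and forgetting factor are our assumptions; each image coordinate would be filtered independently.

```python
"""Minimal recursive least-squares tracker with exponential decay (a sketch)."""
import numpy as np

class RLSTracker:
    def __init__(self, lam=0.9):
        self.lam = lam               # exponential forgetting factor (assumed)
        self.theta = np.zeros(2)     # [position, velocity] in the model z = p + v*t
        self.P = np.eye(2) * 1e3     # parameter covariance, large initial uncertainty

    def update(self, t, z):
        """Fold in measurement z (one pixel coordinate) taken at time t."""
        phi = np.array([1.0, t])                              # regressor
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain
        self.theta += k * (z - phi @ self.theta)              # parameter update
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta[0] + self.theta[1] * t              # filtered position
```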


Figure 9. Vehicle tracking in two real-world traffic situations.

Adaptive navigation

After detecting the lead vehicle's position, we must ensure safe navigation in ACC mode if the lead vehicle suddenly brakes within the safety gap limits. This event could easily lead to a crash unless the host vehicle rapidly detects the braking situation and brakes hard. To ensure this, the system must detect the lead vehicle's brake light activation, which clearly indicates braking.

A vehicle's brake light position varies depending on its model and manufacturer, so the system must carry out a detailed search to accurately locate these lights inside the vehicle's ROI. We do have some a priori information to ease the search: brake indicators are typically two red lights symmetrically located near the vehicle's rear left and right sides. Once the system locates these lights, it must detect sudden brake light activation; it does this by continuously monitoring the lights' luminance. In case of sudden activation, the system raises an alarm to the vehicle navigator to provoke emergency braking. Figure 10 shows an example of sudden-braking detection.

Figure 10. Sudden-braking detection. Once the system locates the brake lights, it continuously monitors their luminance to detect brake-light activation.

Brake lights are a redundant safety feature: if they're activated, a braking procedure has already started. Fortunately, our system continuously computes the distance to the lead vehicle. If this distance is too short, it automatically stops the vehicle.

We carried out all of our experiments with real vehicles on real roads, albeit within a private circuit. The results show that the fuzzy controllers perfectly mimic human driving behavior in driving and route tracking, as well as in more complex, multiple-vehicle maneuvers, such as ACC or overtaking. In the near future, we're planning to run new experiments involving three automatic driving cars in more complex situations, such as intersections or roundabouts.

Fuzzy control's flexibility let us integrate a host of sensorial information to achieve our results. Also, using vision for vehicle and obstacle detection lets the host vehicle react to real traffic conditions and has proven a crucial complement to the GPS-based navigation system.

To improve and further this work, we're collaborating with other European institutions specializing in autonomous-vehicle development under the EU contract CyberCars-2. Through this collaboration, we plan to perform cooperative driving involving more than six vehicles, adding new sensors for pedestrian detection, traffic-sign detection, and infrastructure monitoring. We'll also integrate new wireless communication systems that include vehicle-to-vehicle, vehicle-to-infrastructure, and in-vehicle information transmission. Finally, we're planning to use Galileo and GPS-2, next-generation satellite positioning systems that address some existing GPS positioning problems and improve location accuracy.

References

1. M.A. Sotelo et al., "Vehicle Fuzzy Driving Based on DGPS and Vision," Proc. 9th Int'l Fuzzy Systems Assoc. World Congress, Springer, 2001, pp. 1472–1477.

The Authors

José E. Naranjo is a researcher in the Industrial Computer Science Department at the Instituto de Automática Industrial in Madrid. His research interests include fuzzy logic control and intelligent transportation systems. He received his PhD in computer science from the Polytechnic University of Madrid. Contact him at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].

Miguel A. Sotelo is an associate professor in the University of Alcalá's Department of Electronics. His research interests include real-time computer vision and control systems for autonomous and assisted intelligent road vehicles. He received Spain's Best Research Award in Automotive and Vehicle Applications in 2002, the 3M Foundation awards in eSafety in 2003 and 2004, and the University of Alcalá's Best Young Researcher Award in 2004. He received his PhD in electrical engineering from the University of Alcalá. He's a member of the IEEE and the IEEE ITS Society. Contact him at Universidad de Alcalá de Henares, Departamento de Electrónica, Madrid, Spain; [email protected].

Carlos González is a software specialist in automation projects at the Instituto de Automática Industrial. He received his PhD in physics from Madrid University and is an IEEE member. Contact him at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].

Ricardo García is a research professor and founder of the Consejo Superior de Investigaciones Científicas' Instituto de Automática Industrial, where he works in intelligent systems. His AUTOPIA project earned the 2002 Barreiros Research on Automotive Field Prize. He received his PhD in physics from Bilbao University. Contact him at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].

Teresa de Pedro is a researcher at the Consejo Superior de Investigaciones Científicas' Instituto de Automática Industrial, where she works on AI as applied to automation and leads the ISAAC (Integration of Sensors to Active Aided Conduction) project. Her research interests include fuzzy models for unmanned vehicles. She received her PhD in physics from the Universidad Complutense of Madrid. Contact her at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].

2. R. García et al., "Frontal and Lateral Control for Unmanned Vehicles in Urban Tracks," Proc. IEEE Intelligent Vehicle Symp. (IV2002), vol. 2, IEEE Press, 2002, pp. 583–588.

3. A. Broggi et al., "The Argo Autonomous Vehicle's Vision and Control Systems," Int'l J. Intelligent Control and Systems, vol. 3, no. 4, 2000, pp. 409–441.

4. M.A. Sotelo, Global Navigation System Applied to the Guidance of a Terrestrial Autonomous Vehicle in Partially Known Outdoor Environments, doctoral dissertation, Univ. of Alcalá, 2001.

5. H. Schneiderman and M. Nashman, "A Discriminating Feature Tracker for Vision-Based Autonomous Driving," IEEE Trans. Robotics and Automation, vol. 10, no. 6, 1994, pp. 769–775.
