
AI Magazine Volume 17 Number 1 (1996) (© AAAI)

The 1995 Robot Competition and Exhibition

David Hinkle, David Kortenkamp, and David Miller

■ The 1995 Robot Competition and Exhibition was held in Montreal, Canada, in conjunction with the 1995 International Joint Conference on Artificial Intelligence. The competition was designed to demonstrate state-of-the-art autonomous mobile robots, highlighting such tasks as goal-directed navigation, feature detection, object recognition, identification, and physical manipulation as well as effective human-robot communication. The competition consisted of two separate events: (1) Office Delivery and (2) Office Cleanup. The exhibition also consisted of two events: (1) demonstrations of research that was not related to the contest and (2) robotics focused on aiding people who are mobility impaired. There was also a Robotics Forum for technical exchange of information between robotics researchers. Thus, this year's events covered the gamut of robotics research, from discussions of control strategies to demonstrations of useful prototype application systems.

This article describes the organization and results of the 1995 Robot Competition and Exhibition, which was held in Montreal, Canada, on 21 through 24 August 1995, in conjunction with the 1995 International Joint Conference on Artificial Intelligence (IJCAI-95).

The 1995 Robot Competition and Exhibition was the fourth annual competition in a string of competitions that began at the Tenth National Conference on Artificial Intelligence (AAAI-92) in San Jose, California. During this inaugural competition, 10 robots searched for and approached a set of tall poles in a large arena (Dean and Bonasso 1993). The next competition, held at AAAI-93 in Washington, D.C., saw robots participating in events that involved maneuvering around an office-building layout and moving large boxes into patterns (Konolige 1994; Nourbakhsh et al. 1993). The third competition, held at AAAI-94 in Seattle, Washington, included such events as office-building navigation and trash pickup (Simmons 1995). The fourth competition (the first at an IJCAI conference) built on the successes of the previous competitions.

The goals for the competition and exhibition have remained the same over the years. The first goal is to allow robot researchers from around the world to gather (with their robots) under one roof, work on the same tasks, and exchange technical information. Second is to assess (and push) the state of the art in robotics. Third is to contrast and compare competing approaches as applied to the same task. Fourth is to provide a public forum in which robotics research can be seen by the AI community, the media, and the general public.

Because of this broad range of goals, determining an appropriate format for the competition and exhibition was a challenge. After much discussion with robotics researchers, we decided on a format similar to the previous year's. There were four basic parts: (1) a formal competition with fixed tasks and scoring, (2) a wheelchair exhibition (a new addition) in which the results of mobile robotics research can be shown in a practical application, (3) a robot exhibition in which researchers can display robotics research that is not applicable in the competition, and (4) a forum in which all participants and invited researchers can discuss the state of robotics research.

The formal competition was itself divided into two different tasks: The first task involved navigating within an office-building environment using directions entered at run time by a human. The second task involved distinguishing between trash and recyclables and depositing each in the correct receptacle.


Figure 1. The Competition Arena (rooms A through I, roughly 18 meters by 14 meters). A closed door (room B) is inset from the walls.

For each task, teams had two preliminary trials in which to demonstrate to the judges their ability to perform the task and a final trial in which teams competed against each other for points. Each task is described in detail in the following sections, and results are given. Then, the wheelchair exhibition, the robot exhibition, and the forum are discussed in turn.

Robot Competition Event 1: Office Delivery

In addition to the traditional goal-directed navigation, the first event of the robot competition was designed to promote the ability of robots to detect when there is a problem and ask for help through effective robot-human interaction. The event took place in a re-creation of a typical office environment with partitioned offices (figure 1). The robots were asked to follow a series of instructions that told them to which room they were supposed to go. The instructions consisted of statements such as "exit the room and turn left," "go to the end of the hallway and turn right," "enter the room on your third right," and so on. (See figure 2 for the actual instructions for a trial in event 1.) The robots would then proceed to search the start room for the door to the hallway, exit the room, and then follow the instructions to the goal room. However, sometimes the instructions that were given by the human contained an error: the given instructions would not lead to the goal room, or to any room in fact!


Original Instructions:
Start Room E
Exit Room Left
Turn First Right
Turn Third Right
Go Past Lobby
Turn Third Right
Turn Third Right
Enter First Left

Corrected Instructions:
Turn First Right
Turn Third Right
Enter First Left

Figure 2. Instructions for Getting from Room E to Room D in Event 1. One instruction in the original set (italicized in the printed figure) is incorrect.

The robots were required to monitor their progress as they carried out the instructions, detect the error, and request further assistance (new instructions) when an error was detected. The corrected instructions would be given to the robot, which would then proceed to the goal room, enter, and announce that it had arrived.

The robots were awarded points based on completion of the task and the time it took to complete. Exiting the room was worth 20 points, detecting an error in the instructions was worth 40 points, and entering the goal room was worth 40 points. The total points for completing the task was 100. The time taken to complete the task (in minutes) was subtracted from the maximum time points of 25; so, if a robot took 5 minutes to complete the task, it would receive 20 points (25 points – 5 minutes). Extra points were awarded based on human-robot communication. Robots received 10 bonus points for effective communication and an additional 10 points for accepting the instructions verbally. Points were deducted for marking objects such as doors or hallways, at one point for each marker. Penalties were assessed for any mistakes the robots made. Points were deducted if the robot became confused and required assistance. The number of points varied between –5 and –40, depending on how many times assistance was required. However, only half as many points were deducted if the robot realized that it had become confused and requested the assistance. Also, in the event of a collision with a stationary obstacle or wall, 30 points were deducted.

This event was similar to last year's Office Delivery event but with greater emphasis placed on recovery from mistakes and human-robot interaction. At the third robot competition in 1994 (Simmons 1995), instead of being given just a set of directions to follow, the robots were given a topological map showing all the connections between hallways and rooms. Only one team (Nourbakhsh, Powers, and Birchfield 1995) was able to complete the event in 1994.

Results

This year, three teams were able to successfully complete the event: (1) Korea Advanced Institute of Science and Technology (KAIST), (2) North Carolina State University (NCSU), and (3) Kansas State University (KSU). A fourth entry, from the University of New Mexico (UNM), was damaged in transit and was unable to compete. The results of the final round of competition were

1. CAIR-2 (KAIST), 130.5 points
2. LOLA (NCSU), 114.0 points
3. WILLIE (KSU), 70.5 points
4. LOBOT (UNM)
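For concreteness, the event 1 scoring rules can be written out as a short program. The following Python sketch uses the published point values; the exact schedule for the –5 to –40 assistance penalty was not published, so the schedule below (5 points per request, capped at 40) is only our guess, and the function itself is illustrative rather than the judges' actual procedure.

def score_event1(exited_room, detected_error, entered_goal, minutes_taken,
                 effective_communication, verbal_instructions,
                 markers_used, assists, self_detected_confusion, collisions):
    """Compute an event 1 score from the published point values."""
    score = 0.0
    score += 20 if exited_room else 0       # exiting the start room
    score += 40 if detected_error else 0    # detecting the bad instruction
    score += 40 if entered_goal else 0      # entering the goal room
    score += 25 - minutes_taken             # time bonus: 25 maximum, minus 1 per minute
    score += 10 if effective_communication else 0
    score += 10 if verbal_instructions else 0
    score -= markers_used                   # one point per marker on doors or hallways
    penalty = min(40, 5 * assists)          # guessed schedule for the -5 to -40 range
    if self_detected_confusion:
        penalty /= 2                        # halved when the robot asked for help itself
    score -= penalty
    score -= 30 * collisions                # each collision with a wall or obstacle
    return score

# Example: a clean run finished in 5 minutes with verbal instructions.
print(score_event1(True, True, True, 5, True, True, 0, 0, False, 0))  # 140.0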


Teams

Now we briefly describe each team's robot and its approach to the event.

CAIR-2: Korea Advanced Institute of Science and Technology

CAIR-2 is a custom-built prototype robot, developed at KAIST. It uses sonar and infrared sensors and has a stereo vision system with pan and tilt. It has a custom-built voice-recognition system and speech synthesizer. Although its native language is Korean, it spoke perfect English throughout the competition. The control architecture was designed to combine both behavior-based and knowledge-based approaches. Major components of the navigation system are the collection of behaviors (such as find-a-door, go-forward, and avoid-obstacles), a high-level task executor, a fuzzy-state estimator, information extractors, and a coordinator for behaviors.

CAIR-2's basic strategy was to scan the room looking for a special marker on the doorway. The doorways for the start and goal rooms were marked with a special symbol to assist in detection. Once the robot detected the doorway, it proceeded to exit the room, keeping its two cameras focused on the doorway marker. The human-to-robot instructions were entered verbally. As each instruction was entered, the robot would repeat it back for confirmation. If the robot misunderstood an instruction, the speaker would supply a corrected instruction.

CAIR-2 consistently performed well throughout the trials and in the finals. In the finals, CAIR-2 exited the start room in about a minute and took a total of 5 minutes and 25 seconds to complete the event. Because the instructions were given to CAIR-2 verbally, supplying the corrected instructions to the robot took an extra minute or two, which was, of course, not counted as part of the running time. For more information on CAIR-2, see "CAIR-2: Intelligent Mobile Robot for Guidance and Delivery," also in this issue.

LOLA: North Carolina State University

LOLA is a NOMAD 200 robot equipped with a pan-tilt–mounted color camera and sonar sensors. LOLA's on-board computation includes a dual C40 digital signal–processing (DSP) image processor and a 486 DX2-66 running LINUX. Communication with the robot for delivery of the directions and feedback from the robot was done by radio Ethernet. The control architecture comprises four modules: (1) state-set progression for establishing a probabilistic framework for feature-based navigation in a topological space; (2) feature detection for identifying doorways, hallways, and so on; (3) low-level motion control for determining the direction and velocity of the robot as well as performing obstacle avoidance; and (4) registration for determining the direction of the hallway from sonar data.

LOLA's basic strategy was to maneuver from the initial starting position toward the center of the start room. This strategy afforded the robot a better position from which to scan the room for the exit door. Each doorway was marked with a colored circle to assist in detection. Once the robot had detected the doorway of the start room, it proceeded to exit into the hallway and get its bearings by aligning itself with the hallway walls.

LOLA performed well throughout both the finals and the trials. The scores for both the trials and the finals were virtually identical. In the finals, LOLA exited the start room in about 1-1/2 minutes and completed the event in 6 minutes and 15 seconds. For more information on LOLA, see "LOLA: Probabilistic Navigation for Topological Maps" and "LOLA: Object Manipulation in an Unstructured Environment," also in this issue.
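LOLA's state-set progression module is worth a closer look. The sketch below is our own minimal illustration of the general idea (a discrete Bayes filter over places in a topological map), not the NCSU code; the toy map, feature names, and probabilities are invented for the example.

def progress_states(belief, transitions, observation_likelihood, feature):
    """One belief update over a topological map: motion step, then sensor step."""
    predicted = {}
    for place, p in belief.items():         # spread belief along connections
        for nxt, p_move in transitions.get(place, {}).items():
            predicted[nxt] = predicted.get(nxt, 0.0) + p * p_move
    posterior = {place: p * observation_likelihood(place, feature)
                 for place, p in predicted.items()}
    total = sum(posterior.values())
    if total == 0:                          # no place explains the feature:
        return None                         # evidence that the directions are wrong
    return {place: p / total for place, p in posterior.items()}

# Hypothetical hallway fragment with doorways D1, D2, D3 in sequence.
transitions = {"D1": {"D2": 1.0}, "D2": {"D3": 1.0}}
likelihood = lambda place, feature: 0.9 if feature == "door" else 0.1
print(progress_states({"D1": 1.0}, transitions, likelihood, "door"))  # {'D2': 1.0}

A belief that collapses to nothing is exactly the situation event 1 rewards detecting: the given directions cannot be reconciled with what the robot sees.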
WILLIE: Kansas State University

The KSU team used a NOMAD 200 robot (WILLIE) from Nomadic Technologies. The robot was equipped with two sonar rings. The robot relied on a radio Ethernet to communicate between the control program running on a workstation and the actual robot.

The basic strategy was navigation using sonar widths to position the robot and identify doors in the hallway. A subsumption architecture, with threads running the sonar-detection and sonar-avoidance routines, was used. For example, the exit-room strategy first involved finding a wall; then, the robot did wall following while a separate thread detected doorways. When a door was found, the robot aligned itself on the door and exited the room.

The performance during the first trial run was good. WILLIE exited the start room in 1-1/2 minutes, detected the error in the human-supplied instructions, accepted the new corrected instructions, and completed the task within 5 minutes. However, during the second trial run, because of radio interference, the performance of the radio Ethernet degraded severely.

Continuing in the contest required porting the 10,000 lines of the control program from the UNIX workstation to the 486 processor on board the robot. It also meant installing LINUX and a thread package on the 486 processor. The porting took 12 hours and involved retuning the control program to account for differences in timing related to running directly on the robot instead of on the workstation.

The final run was not as successful as the team would have liked. Some of the timing problems caused the robot to miscount a doorway and caused a collision with the wall. However, the team finished with a sense of accomplishment and a desire to prepare for the 1996 competition.

LOBOT: University of New Mexico

The UNM LOBOT is a custom-built mobile robot designed by UNM engineering students. The LOBOT is driven by a two-wheel differential configuration with supporting casters. It is octagonal in shape, stands about 75 centimeters tall, and measures about 60 centimeters in width. Sensing is achieved using a layout of 16 ultrasonic transducers. The on-board distributed processing and control system consists of a 486 PC-based master and networked MC68HC11 slaves.

The LOBOT uses an object-oriented behavioral approach based on a task decomposition of event 1 (that is, exit-room, navigate-hallway, enter-room). It is written in C++ and uses commercial software for generating speech and recognizing high-level verbal instructions. The design philosophy views the LOBOT as a collection of objects that cooperate to achieve goals. In this hierarchical architecture, three main objects exist at the highest level: (1) navigation; (2) the helm; and (3) object detection, recognition, and ranging (ODRR). The helm and the ODRR encapsulate the usable resources of the LOBOT, such as motor control and sensor data. Navigation is a client of the ODRR and the helm, using them for perception, sensor-based decision making, and the initiating of appropriate actions. The motion behaviors are implemented as objects within navigation that operate using an artificial force-field approach. In particular, the exit-room behavior is based on an augmented force-field approach adapted from Kweon et al. (1992). That is, actual sonar-range data are augmented with additional virtual-range data that collectively act as a virtual wall behind the robot. Repulsive forces from the virtual wall combine with actual range forces to push the robot toward free space. Starting from any initial location in a single-exit room, these artificial forces eventually flush the robot out through the exit. For robustness, a doorway-traversal behavior is also activated in the vicinity of the exit.

Unfortunately, the LOBOT was damaged during shipping and was unable to compete. However, the team vowed to try again next year.
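The virtual-wall idea behind the LOBOT's exit-room behavior is simple to state in code. The following is only a sketch under invented geometry and gains, not the UNM implementation: every range reading, real or virtual, contributes a repulsive force that falls off with distance, and the robot drives along the force sum.

import math

def force_field_step(sonar, virtual_wall, gain=1.0):
    """Sum repulsive forces from real and virtual (bearing, range) readings.

    Returns an (fx, fy) force vector in the robot frame; driving along it
    moves the robot toward free space."""
    fx = fy = 0.0
    for bearing, rng in list(sonar) + list(virtual_wall):
        if rng <= 0.0:
            continue
        magnitude = gain / (rng * rng)       # closer readings push harder
        fx -= magnitude * math.cos(bearing)  # force points away from the reading
        fy -= magnitude * math.sin(bearing)
    return fx, fy

# Example: a wall close on the left plus a virtual wall behind the robot
# combine to push it forward and to the right, toward the open doorway.
sonar = [(math.pi / 2, 0.5)]                 # real reading 0.5 m to the left
virtual_wall = [(math.pi, 0.8)]              # virtual reading 0.8 m behind
print(force_field_step(sonar, virtual_wall))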
Robot Competition Event 2: Office Cleanup

The second event of the robot competition was designed to promote interaction between mobile robots and their environment. The event took place in room C of the competition arena (figure 1). The exits from this room were blocked and, on the floor, were empty soda cans and empty Styrofoam coffee cups. Also in the room, in the four corners, were trash and recycling bins (two of each). The task had the robots pick up the soda cans and deposit them in the recycling bin and then pick up the Styrofoam cups and deposit them in the trash bin. Scoring was based on the number of objects picked up and correctly deposited within 20 minutes. Penalties were assessed for modifying the environment or colliding with obstacles.

In designing this event, the competition organizers wanted to promote the research area of mobile manipulation. Although virtual manipulation was allowed (that is, the robot could approach the trash and announce that it was picking up the trash without actually doing so), the penalty was severe enough that only one team used this approach. All the other robots had some form of manipulation, which is a large improvement over last year's competition, in which a similar event attracted only two teams that performed actual manipulation (see Simmons [1995]).

The competition organizers also wanted to promote the use of computer vision to distinguish between different objects and then have intelligent-control software make decisions based on these perceptions. Thus, the robots needed to recognize two classes of object (trash and recyclables) and two classes of container (trash bins and recycling bins). The containers were marked with a T symbol for trash and a closed-circle recycling symbol for recyclables (figure 3). Again, there was an advancement over last year's event, where there was only one type of trash and only one type of receptacle.

Figure 3. Symbols for the Trash Bin (left) and Recycle Bin (right).

Results

We saw a vast improvement over last year in the performance of robots in this event. All the robots successfully used vision to distinguish among the objects. All the robots successfully distinguished between the bins and navigated to the correct bin for each object. The final results reflected the differences in manipulation and object recognition.


The top robots (see related articles in this issue) actually picked up the objects and deposited them in the correct bins. The next tier of robots did not pick up objects but pushed them next to the appropriate trash and recycling bins; they received fewer points than those robots that actually picked up objects. One team used virtual manipulation. Also, several teams modified the trash or the bins and received penalties for doing so. The final results were

1. LOLA (NCSU), 196 points
2. CHIP (University of Chicago), 165 points
3. WALLEYE (University of Minnesota), 135 points
4. NEWTON 1 & 2 (Massachusetts Institute of Technology [MIT] and Newton Labs), 105 points
5. CLEMENTINE (Colorado School of Mines), 79 points

Teams

Now we briefly describe each team's robot and its approach to the event.

LOLA: North Carolina State University

LOLA is a NOMAD 200 robot equipped with a prototype Nomadic Technologies arm, pan-tilt–mounted color cameras, and sonar sensors. LOLA's on-board computation included a dual C40 DSP image processor and a 486 DX2-66 running LINUX. LOLA's basic methodology was as follows: LOLA first locates trash using predefined color-histogram models of the different types of trash and histogram back projection. Second, LOLA heads off in pursuit of the trash. Third, during pursuit, LOLA tracks the centroid of the trash as it moves down the image plane and utilizes a nonlinear least squares algorithm to calculate its location relative to the robot. Fourth, once within range, LOLA grasps the trash using position estimation.

Once LOLA grasps a piece of trash, it looks for the appropriate receptacle and deposits the trash using the same method just described. The trash can and recycle bin are distinguished by a color marker at the base of the receptacle (pink for trash, yellow for recyclable).

LOLA performed well in the preliminary round, depositing 13 objects. In the final round, LOLA was again performing well until optical sensors on its prototype arm started to give false readings. It was later determined that the optical sensors were being triggered by the audience cameras! However, by this point, LOLA had already deposited enough trash to win the event. In the final round, LOLA correctly deposited seven objects. For more information on LOLA, see "LOLA: Object Manipulation in an Unstructured Environment" and "LOLA: Probabilistic Navigation for Topological Maps."
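Histogram back projection, which LOLA used to find trash, is easy to sketch. The version below is our own simplification (a two-dimensional hue-saturation histogram and a thresholded centroid), not the NCSU code; bin counts and thresholds are invented.

import numpy as np

def back_project(image_hs, model_hist, bins=16):
    """image_hs: H x W x 2 array of hue/saturation values in [0, 1).
    model_hist: bins x bins color histogram of the target object.
    Each pixel is replaced by the support its color has in the model."""
    idx = np.minimum((image_hs * bins).astype(int), bins - 1)
    return model_hist[idx[..., 0], idx[..., 1]]

def target_centroid(back_projection, threshold=0.5):
    """Centroid of strongly matching pixels; None when the object is absent."""
    ys, xs = np.nonzero(back_projection > threshold)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Example: a model dominated by one color bin lights up the matching patch.
model = np.zeros((16, 16))
model[2, 12] = 1.0                          # the target's dominant color
image = np.zeros((8, 8, 2))
image[2:4, 5:7] = [2 / 16.0, 12 / 16.0]     # a small patch of that color
print(target_centroid(back_project(image, model)))  # approximately (5.5, 2.5)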
LOLA’s on-board computation included a into regions of possible objects and, for each

It segments an edge image into regions of possible objects and, for each segment, computes the size, aspect ratio, edge density, average color, fraction of white, and contour regularity. The resulting feature vector is classified against a set of fuzzy exemplars by choosing the nearest neighbor within a maximal distance.

CHIP steadily improved over the two preliminary rounds and into the finals. In the initial preliminary round, CHIP was only able to correctly deposit one object. In the second preliminary round, CHIP deposited three objects. Then, after many hours of late-night hacking, CHIP really shined in the finals, giving LOLA a run for her money by depositing four objects. For more information on CHIP, see "Programming CHIP for the IJCAI-95 Robot Competition," also in this issue.

WALLEYE: University of Minnesota

The chassis of WALLEYE is built from an inexpensive radio-controlled car with the body shell of the car and the original electronics removed and replaced by specially designed boards. All boards are built around the 68HC11 microcontroller and have, at most, 16K of EPROM (electronically programmable read-only memory) and 32K of random-access memory. WALLEYE uses three microcontrollers, one for the main board, one to control the motor, and one for the vision system. The vision system uses a CCD (charge-coupled device) chip with digital output, a wide-angle lens, and a frame grabber board on which all the vision processing is done. The images are 160 x 160 pixels, with 256 gray levels. The camera can grab as many as 20 frames a second. WALLEYE has a gripper with a beam across the fingers to detect when something has been grasped. The gripper cannot lift objects, only hold them.

The basic strategy of WALLEYE was to look around for a cup or a can. When a blob that could correspond to such objects is found in the image, WALLEYE starts driving toward it, tracking the object while approaching. If the object seen was not really an object, the illusory object disappears while it is being tracked. Then, WALLEYE again starts its search for another object. Tracking an object is, in general, easier than finding it and much faster. When WALLEYE gets close to an object, the beam in the fingers is broken, signaling the presence of something between the fingers. To guarantee that the object is indeed a cup or a can, WALLEYE backs up and verifies that the fingers are still holding on to the object. In this way, WALLEYE will not confuse legs of chairs or other immovable objects with trash. Once an object has been grasped, WALLEYE looks for the appropriate trash bin and pushes the trash in front as it moves toward the receptacle. When WALLEYE gets within the trash zone, it lets go of the trash, depositing it. It then continues the search for more trash.

WALLEYE performed well in the first preliminary round, pushing 11 objects to the correct bin. Reflections in the floor because of an overnight cleaning were responsible for a subpar performance in the second round, where WALLEYE only deposited three pieces of trash. In the finals, WALLEYE's performance improved somewhat, and it pushed four objects. Because it could not place the objects in the bins, only near them, it did not receive as many points for each object as LOLA or CHIP, landing it in third place. However, WALLEYE showed that the task can be performed with extremely limited computing power under a variety of environmental conditions.
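WALLEYE's back-up check translates almost directly into code. The robot interface below is hypothetical (the real code ran on a 68HC11), but the logic is the one described: a chair leg stays behind when the robot backs up, while a grasped cup comes along.

def verify_grasp(robot):
    """Back up slightly and confirm the object moved with the gripper."""
    if not robot.beam_broken():
        return False                 # nothing between the fingers yet
    robot.drive(-0.1)                # back up about 10 centimeters
    return robot.beam_broken()       # still broken: the object came with us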
NEWTON 1 and 2: MIT and Newton Labs

The MIT–Newton Labs entry in the trash-collection contest was two vision cars. The vision car uses an off-the-shelf remote-control car as its robot base. A small vision system (the COGNACHROME vision system made by Newton Research Labs) and a color camera are mounted on the car. The COGNACHROME vision system includes a 68332-based processor board and a custom video-processing board. The video-processing board takes NTSC (National Television Systems Committee) input from the color camera, digitizes the signal, and classifies the pixels on the basis of color. This board sends a 1-bit signal for each color channel to the 68332 board. (The system allows three color channels, although only one was used for the contest.) The 68332 board processes this signal to find all the objects of the specified color in the scene; it processes these data at 60 Hertz and uses the results to control the car. The camera is the only sensor on the car.

The COGNACHROME vision system includes software for tracking objects on the basis of color. Beyond this software, about 65 lines of C were written for the low-level interface to the car and about 300 lines of C to implement the control software for the contest. There were two cars in the contest. Each car focused on one color. Trash was colored blue, and recyclables were colored orange. The trash and recycling bins were goals in the corners, with blue or orange swatches above them. In the competition, the cars moved randomly until they saw a piece of trash and a goal of the same color in the same image. The car would then move toward the trash and push it toward the goal.
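The contest behavior amounts to a small control loop run at frame rate. The following sketch is our own reconstruction of the behavior described (the real version was a few hundred lines of C on the 68332); the gains, thresholds, and command format are invented.

def steer_toward_blob(blob_x, image_width=160, gain=0.02):
    """Map a blob's horizontal pixel position to a steering command in [-1, 1]."""
    error = blob_x - image_width / 2.0      # pixels off center
    return max(-1.0, min(1.0, gain * error))

def control_step(trash_blob, goal_blob):
    """Wander until trash and a same-colored goal are both visible, then
    drive at the trash so that it gets pushed toward the goal."""
    if trash_blob is None or goal_blob is None:
        return {"throttle": 0.5, "steering": 0.3}   # keep moving randomly
    return {"throttle": 1.0, "steering": steer_toward_blob(trash_blob[0])}

print(control_step((120, 40), (80, 10)))    # steers right, toward the trash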


The MIT–Newton Lab robots were entered less than 48 hours before the competition, but they proved to be crowd pleasers because they moved quickly, and they moved constantly. The robots knocked cans and cups into the bin area with great force. They sometimes rammed into each other and even tipped over. At the end of their frenzied activity, the robots managed to push four objects near the correct bin in the first trial and five objects near the correct bin in the final round. Because the team modified both the bins and the objects and did not place the objects in the bins, they received fewer points for an object than other teams ahead of them.

CLEMENTINE: Colorado School of Mines

CLEMENTINE is a Denning-Branch MRV4 mobile robot with a ring of 24 ultrasonic sensors, a color camcorder, and a laser-navigation system. CLEMENTINE was the only entry without a manipulator. CLEMENTINE is controlled by an on-board 75MHz Pentium PC. The team consisted of four undergraduate computer science students who programmed the robot as part of their six-week senior practical design course. The team took a behavioral approach, focusing on the issues of recognition, search, and sensor fusion.

CLEMENTINE began the task by systematically looking for red regions using the color camcorder. If a red region was close to the appropriate size of a can seen from that distance, CLEMENTINE would move to the can and ask a helper to pick up the can. CLEMENTINE would then continue to look for another can, to a maximum of three. If CLEMENTINE did not find another can, it would go to the nearest recycle bin, drop off the can (again, asking a helper to deposit the can), and then return to the center of the ring and scan for more trash. CLEMENTINE used its laser-navigation system, which triangulated its position from three barcode-like artificial landmarks. It also knew a priori where the trash bins were.

The trash-recognition process was successful, and in a preliminary round, CLEMENTINE detected all 10 cans, depositing 7 of them. In the second round, CLEMENTINE deposited nine cans. However, the algorithm was sensitive to lighting changes and, in the final round, deposited only seven cans, tying the number deposited by the first-place team. However, because CLEMENTINE was performing virtual manipulation, each object was worth fewer points.
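Triangulation from known landmarks is a classic fix. The sketch below is a simplified version of the idea, not the Denning system's algorithm: it assumes the scanner reports absolute bearings to two landmarks and intersects the two bearing rays (the real three-landmark resection also recovers heading and gains an error check from the redundant measurement).

import math

def triangulate(l1, b1, l2, b2):
    """l1, l2: known landmark positions (x, y); b1, b2: measured absolute
    bearings (radians) from the robot to each landmark. Returns the robot
    position as the intersection of the two bearing rays."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = l1[0] - l2[0], l1[1] - l2[1]
    t1 = (rx * d2[1] - ry * d2[0]) / det    # distance along the first ray
    return l1[0] - t1 * d1[0], l1[1] - t1 * d1[1]

# Example: two corner landmarks; the computed fix comes out near (1, 1).
print(triangulate((0.0, 4.0), math.atan2(3.0, -1.0),
                  (5.0, 0.0), math.atan2(-1.0, 4.0)))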
Wheelchair Exhibition

A robotic wheelchair exhibition was added to this year's event to demonstrate how the robotics technology that has been developed over the last several years could be applied successfully. Many people are mobility impaired but are unable to safely operate a normal power wheelchair (Miller and Grant 1994). This year's exhibitors concentrated on supplementing the control system of a power wheelchair to endow it with some semiautonomous navigation and obstacle-avoidance capabilities. The chairs, of course, also had to be able to integrate continuous human commands as well as follow their programmed instructions.

Three chairs with automatic guidance systems were brought to IJCAI-95. The NAVCHAIR from the University of Michigan has been under development as a research project for several years. WHEELESLEY from Wellesley College and TAO-1 from Applied AI Systems were both built for this event. PENNWHEELS, from the University of Pennsylvania, was also exhibited. PENNWHEELS uses an innovative mobility system but does not have any guidance system (in fact, it is tethered to its power supply and computer).

Although there was no formal contest for the chairs, a wheelchair limbo contest was held. The contest consisted of the chairs automatically aligning and passing through a continually narrowing set of doorways. Although the NAVCHAIR and WHEELESLEY use totally different sensors, both were able to go through narrow doorways, and both got stuck at the same point (when there were less than two inches of clearance on a side). TAO-1 was demonstrated successfully but suffered an electronics failure during some maintenance right before the limbo contest.

NAVCHAIR

The NAVCHAIR assistive-navigation system is being developed to provide mobility to those individuals who would otherwise find it difficult or impossible to use a powered wheelchair because of cognitive, perceptual, or motor impairments. By sharing vehicle-control decisions regarding obstacle avoidance, safe-object approach, maintenance of a straight path, and so on, it is hoped that the motor and cognitive effort of operating a wheelchair can be reduced.

The NAVCHAIR prototype is based on a standard Lancer powered wheelchair from Everest and Jennings. The Lancer's controller is divided into two components: (1) the joystick module, which receives input from the user through the joystick and converts it to a signal representing desired direction, and (2) the power module, which converts the output of the joystick module to a control signal for the left- and right-wheel motors.

The components of the NAVCHAIR system are attached to the Lancer and receive power from the chair's batteries. The NAVCHAIR system consists of three units: (1) an IBM-compatible 33MHz 80486-based computer, (2) an array of 12 Polaroid ultrasonic transducers mounted on the front of a standard wheelchair lap tray, and (3) an interface module that provides the necessary interface circuits for the system. During operation, the NAVCHAIR system interrupts the connection between the joystick module and the power module. The joystick position (representing the user's desired trajectory) and the readings from the sonar sensors (reflecting the wheelchair's immediate environment) are used to determine the control signals sent to the power module.

During the course of developing NAVCHAIR, advances have not only been made in the technology of smart wheelchairs but in other areas as well. Work on the NAVCHAIR has prompted the development of an obstacle-avoidance method, called the minimum vector field histogram (MVFH) method (developed by David Bell). MVFH is based on the vector field histogram algorithm by Borenstein and Koren (1991) that was originally designed for autonomous robots. MVFH allows NAVCHAIR to perform otherwise unmanageable tasks and forms the basis of an adaptive controller. A method of modeling the wheelchair user (stimulus-response modeling) to make control-adaptation decisions has also been developed and experimentally validated as part of the research on the NAVCHAIR. Current work focuses on using probabilistic reasoning techniques from AI research to extend this modeling capability (Simpson 1995).
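To give the flavor of the vector-field-histogram family, here is a toy version of the shared-control decision: sonar readings build a polar obstacle-density histogram, and the chair picks the heading that best trades obstacle density against deviation from the joystick direction. Sector size, weights, and the blending rule are our own inventions; see Borenstein and Koren (1991) for the real VFH and Simpson et al. (1995) for how NAVCHAIR uses MVFH.

import math

SECTORS = 36                                 # ten-degree sectors around the chair

def polar_histogram(sonar):
    """sonar: list of (bearing_radians, range_meters). Nearer objects score higher."""
    hist = [0.0] * SECTORS
    for bearing, rng in sonar:
        sector = int((bearing % (2 * math.pi)) / (2 * math.pi) * SECTORS)
        hist[sector] += max(0.0, 2.0 - rng)  # ignore objects beyond two meters
    return hist

def choose_direction(hist, joystick_bearing, obstacle_weight=2.0):
    """Pick the sector minimizing obstacle density plus angular deviation
    from the user's intended direction."""
    def cost(sector):
        center = (sector + 0.5) * 2 * math.pi / SECTORS
        deviation = abs(math.atan2(math.sin(center - joystick_bearing),
                                   math.cos(center - joystick_bearing)))
        return obstacle_weight * hist[sector] + deviation
    return min(range(SECTORS), key=cost)

# Example: the user pushes straight ahead, but a close object dead ahead
# shifts the chosen heading to an adjacent, clear sector.
sonar = [(0.0, 0.4), (0.1, 0.5)]
print(choose_direction(polar_histogram(sonar), joystick_bearing=0.0))  # 35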
WHEELESLEY

Robotics researchers do not often discuss user interfaces when explaining their systems. If they do, it is usually in terms of a programming interface. However, when we move from autonomous robots to wheelchair robots, we need to carefully consider the user interface. A robotic wheelchair must interact with the user and must do it well. The user should control the wheelchair system, not be controlled or constrained by it.

Unlike other wheelchair robots at the workshop that used a joystick as the sole interface, WHEELESLEY's user has the option of interacting with the robot through a joystick or the user interface. The joystick mode is similar to the other teams' joystick modes, so only the user interface is discussed here.

The user interface runs on a MACINTOSH POWERBOOK. Although the input to the interface is currently through the touch pad and button, a system could be built on top of this interface to customize the system for the user. Some wheelchair users have some upper-body control, but others need to use a sip-and-puff system. Some users can use voice; others cannot. The interface that was shown at IJCAI-95 is general and would have to be tailored to the needs of specific users.

WHEELESLEY's interface provides information while it allows the user to control the system. The user can track the speed of the wheelchair and can set the default speed of the wheelchair. (The default speed is the maximum traveling speed when no obstacles are present.) For users who are unable to turn their heads to see obstacles, a map of the wheelchair shows where obstacles are present. The interface allows the user to switch between manual mode (no computer control), joystick mode (navigation using the joystick with computer assistance), and interface mode (navigation using the interface with computer assistance).

The system was demonstrated at IJCAI-95. WHEELESLEY was the only system that could drive through doorways without being steered by a human in the chair. The wheelchair was built by the KISS Institute for Practical Robotics. Software and user interface development was done by a team of five undergraduates at Wellesley College, supervised by Holly Yanco.

TAO-1

The autonomous wheelchair development at Applied AI Systems, Inc. (AAI), is based on a behavior-based approach. Compared to more conventional AI approaches, this approach allows greatly increased performance in both efficiency and flexibility. In this approach, the concepts of "situatedness" and embodiment are central to the development of the autonomous control system. Situatedness emphasizes the importance of collecting information through sensors directly interfacing the real world, and embodiment stresses the significance of doing things in physical terms in the real operational environment. The robustness and graceful degradation characteristics of a system built using the behavior-based approach also make it attractive for this development.


The base wheelchair used for the current implementation of the autonomous wheelchair (TAO-1) is produced by FORTRESS of Quebec. The electronics and control mechanics that came with the wheelchair were left intact. In fact, the chair can still be operated using the joystick; the user can override the autonomous control mode whenever he/she wishes.

The control system for the autonomous wheelchair developed at AAI is based on a Motorola 68332 32-bit microcontroller (a single-chip computer with on-chip memory and control electronics). It has AAI's own multitasking, real-time operating system that allows the controller to receive real-time signals from a large number of sensors, and it sends control output to two motors to drive the left and the right wheels. It looks after both forward-backward and left-right movements of the chair.

Two color CCD cameras mounted on the chair detect free space and motion as much as 10 meters in front of the chair. Six active infrared sensors detect obstacles in close vicinity, to one meter from the chair. The signal from the cameras is processed by an intelligent vision-processing unit that is also built on behavior-based principles. The control program for all the vision processing occupies 9.5 kilobytes (KB), and the other behavior control occupies 2.75 KB. This program is significantly smaller than similar vision-based control programs operating in a real environment implemented using conventional AI methods.

Development is expected to continue in a staged approach. We are now in the first phase, the safety phase, where work concentrates on improved ability to avoid obstacles and dangerous situations in the environment. On completion of this phase, the chair will be mobile without hitting any objects or other moving things while it avoids common pitfalls that currently require human attention. In the future, the vision system will be capable of detecting many other things, such as landmarks found in the path of the wheelchair, unusual appearances in the pavement, and traffic signals. The number of infrared sensors will be increased to allow it to move in more confined spaces. Later phases will develop sophisticated interactions between the human and the chair, improve mobility aspects of the chair, and introduce evolutionary computation methodologies to facilitate the chair adjusting to the needs and the situations of each individual user.

PENNWHEELS

PENNWHEELS is a prototype mobility system under development at the University of Pennsylvania. The robot uses two motorized wheels and two caster wheels to move over flat surfaces, just like a typical power wheelchair. However, PENNWHEELS also has two large two-degree-of-freedom arms that can lift the front or rear wheels off the ground. By using the arms and powered wheels in concert, PENNWHEELS is capable of negotiating single steps, moving on to podiums, and so on.

Although PENNWHEELS can go where few other wheelchairs dare tread, it is definitely still in the conceptual prototype stage. The robot is tethered to its power system and to a computer that calculates the arm and the wheel movements. The motors are not sufficiently powerful to lift the chair's weight, let alone that of a passenger. Even with these limitations, PENNWHEELS was able to give an impressive demonstration of the possibilities of using hybrid wheel-legged mobility.

Robot Exhibition

This year's robot exhibition was an extraordinary crowd pleaser because all the robots that were demonstrated were highly interactive with the audience. The KISS Institute for Practical Robotics demonstrated some of its educational robot systems, giving elementary school students from the audience a chance to operate and control the robots. Newton Labs demonstrated its high-speed color-tracking system by having its robots chase after objects tossed into the ring by audience members. Finally, the Stanford University CHESHM robot interacted directly with large crowds of people as they tried to fool the robot and trick it into taking a dive down the stairwell. Everyone came out of the exhibition better educated and well entertained.

ED.BOT and FIREFLY CATCHER

ED.BOT, built by the KISS Institute, is a small mobile Lego robot the size of a shoe box. Its on-board brain is an MIT 6.270 board, and standard equipment includes front bump sensors, phototransistors, and wheel encoders. Powered by a small internal rechargeable battery pack, ED.BOT's Lego motors enable it to move forward or in reverse at the lightning speed of almost two miles an hour.

ED.BOT's purpose is purely educational. It is designed for classroom use at all elementary-school–age levels. ED.BOT's Lego structure is both familiar and understandable to young students. Its on-board programs demonstrate each of the sensors and motors that are used both individually and in combination to achieve simple tasks such as hiding in dark places, moving through figure eights, and hunting down light bulbs.

Grade-school students use ED.BOT to gain an understanding of robot fundamentals, including exploring the basic systems and learning about design, system integration, and navigation. The little robot is also used as a base on which to build more complicated mechanisms.

ED.BOT participated at IJCAI-95 as an exhibition and hands-on demonstration of an educational robot; therefore, it was accessible to the many children walking by. Children as young as five years old were interested in leading this colorful little robot around by shining a light at its phototransistors. Even the youngest were able to grasp that the robot would turn toward the phototransistor that received the most light. Older children and adults could understand that the phototransistors were wired crosswise to the opposing motor-wheel unit, making the unit turn faster and the robot turn toward the light. Perhaps it was best demonstrated by seven-year-old Kate Murphy, who enjoyed leading the little robot around with a flashlight and reading the appropriate light values off the displays as she assisted during one of ED.BOT's official demos in the arena. Murphy especially liked to make ED.BOT hide in the dark using its version of Hide, a program that teaches the concept of calibration, among other things.

ED.BOT's cousin, FIREFLY CATCHER, was also a big hit with the younger "roboteers." FIREFLY CATCHER, which was built as a design exercise for a robot class for 10 year olds, uses a similar robot base equipped with a large green net in a raised position in front. The net snaps down whenever the front bumpers register contact and the three phototransistors show light values in the correct pattern. A light bulb with toy wings on a small pedestal served as our "firefly." Occasionally, the children would start the robot angled away from the goal so that it would have to turn several times, orienting itself toward the light, and bump a few times against the pedestal before centering itself and swinging down the net on its innocent prey. It never missed.
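The crosswise wiring is the classic light-seeking scheme: each phototransistor speeds up the motor on the opposite side, so the robot turns toward the brighter side. In code (the function and scaling are our own illustration; ED.BOT does this in hardware and in its on-board programs):

def light_follow(left_light, right_light, base_speed=0.3, gain=0.5):
    """Return (left_motor, right_motor) speeds from light readings in [0, 1]."""
    left_motor = base_speed + gain * right_light   # right sensor drives left wheel
    right_motor = base_speed + gain * left_light   # left sensor drives right wheel
    return left_motor, right_motor

# Brighter light on the left: the right wheel spins faster, and the robot
# turns left, toward the light.
print(light_follow(0.8, 0.2))  # (0.4, 0.7)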


NEWTON and Many Colored Things

The NEWTON vision cars originally came to Montreal as part of the robot exhibition. It was not until after they arrived that their code was modified so that they could compete in the Office Cleanup event of the robot competition.

The vision cars use the same hardware and color-tracking algorithms described earlier in the section NEWTON 1 and 2: MIT and Newton Labs. The key difference in programming was that for the exhibition, the robots went at full speed and tried to keep the objects they were looking for centered in their visual field.

The effectiveness of the tracking algorithms could best be seen in the Man versus Machine Contest, where an audience member was given the joystick to a radio-controlled car. The car was colored orange, and the driver's goal was simply to keep the car away from the NEWTON vision car. This task proved difficult. The audience member's turn ended when the vision car had rammed the radio-controlled car off its wheels or into a dead-end corner.

The vision cars also chased rubber balls, went after frisbees, and were even able to keep hoops rolling indefinitely, at least until the far wall came up to meet them (at about 20 miles an hour!).

CHESHM

The umbrella project at Stanford University under which CHESHM was developed is called the Bookstore Project. The immediate goal of the Bookstore Project is easy to state: Create a totally autonomous robot that goes from the Stanford Computer Science Department to the bookstore and returns with a book. The more general goal is to create an autonomous navigator that can travel the entire campus, coexisting with bicyclists, cars, tourists, and even students.

Three important pieces of the Bookstore Project puzzle have been addressed over the past few years: (1) the ability to interleave planning and execution intelligently, (2) the ability to navigate (that is, move without becoming lost), and (3) the ability to stay alive. CHESHM is Stanford's best attempt at solving the problem of staying alive, that is, designing a general-purpose obstacle-avoidance system.


The real challenge is perception: A safe robot must detect all sorts of static and moving obstacles, not to mention pot holes, ledges, and staircases. CHESHM uses a totally passive vision system to perceive obstacles robustly. The current system has three important features: First, the depth-recovery system makes no domain assumptions. As long as there is enough light to see by, and the obstacle has some contrast, it will be avoided. Second, the vision system is totally passive and, therefore, does not have the interference or washout problems that active sensor systems such as infrared sometimes have. Third, the vision system is entirely on board, a necessity for truly autonomous mobile robotics.

CHESHM comprises a NOMAD 150 base and a vision system. The NOMAD 150 has no sonar, infrared, or tactile sensors. The vision system is an on-board Pentium PC with a frame grabber and three Sony CCD cameras. The three cameras are pointed in the same direction so that the images received from the cameras are almost identical. Our depth-recovery system is based on the idea of depth from focus; so, the focusing rings of the three cameras are at different but known positions.

By examining which of the three cameras maximizes sharpness for each image region, CHESHM can form a scene depth map. Obstacle recognition is easy because the angle of the cameras to the ground is known. Therefore, the floor is expected to be a specific distance away in the image. If the depth-map distance is closer than the floor for a particular region, then there is an obstacle there. If the depth-map distance is farther than the floor should be, then there is a pot hole or a staircase. This simple method for detecting steps has proven to be surprisingly reliable and might be a somewhat novel achievement for mobile robots.

CHESHM's motion is programmed using a MACINTOSH POWERBOOK 170 that is fixed on top of the robot. The POWERBOOK receives depth-map information (by serial port B) from the vision system and communicates velocity commands to the NOMAD 150 base (by serial port A). The program that is being used to test CHESHM is an almost purely functional wandering program that turns away from dangerously close obstacles or stairs. This program performs no filtering or sensor interpretation; therefore, it is a transparent tool for examining the reliability of the vision module through observation of the robot's wandering behavior.
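The depth-from-focus step is straightforward to sketch. The block size, sharpness measure, and floor model below are our own simplifications of the approach described; the point is only that the sharpest of the three known focus settings votes for each region's depth, and any region that disagrees with the expected floor depth is flagged.

import numpy as np

FOCUS_DISTANCES = [0.5, 1.5, 4.0]           # meters; one setting per camera

def sharpness(region):
    """Local contrast as a focus measure: mean absolute gradient."""
    gy, gx = np.gradient(region.astype(float))
    return np.abs(gx).mean() + np.abs(gy).mean()

def depth_map(images, block=16):
    """images: three aligned grayscale arrays. Returns a per-block depth map."""
    h, w = images[0].shape
    depths = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            ys = slice(i * block, (i + 1) * block)
            xs = slice(j * block, (j + 1) * block)
            scores = [sharpness(img[ys, xs]) for img in images]
            depths[i, j] = FOCUS_DISTANCES[int(np.argmax(scores))]
    return depths

def classify(depths, expected_floor, tolerance=0.5):
    """+1 = obstacle (closer than the floor), -1 = drop-off or stairs
    (farther than the floor), 0 = floor, for each block."""
    significant = np.abs(depths - expected_floor) > tolerance
    return np.sign((expected_floor - depths) * significant)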
If the depth- experienced standing-room-only crowds (at map distance is farther than the floor should the beginning of the coffee breaks) as well as be, then there is a pot hole or a staircase. This intense stress testing from individual confer- simple method for detecting steps has proven ence participants. CHESHM again avoided the to be surprisingly reliable and might be a staircase with perfect reliability and avoided somewhat novel achievement for mobile the attendees well. robots. One of CHESHM’s greatest weaknesses proved CHESHM’s motion is programmed using a to be its willingness to run over its victims’ MACINTOSH POWERBOOK 170 that is fixed on top feet. The field of view of the camera system of the robot. The POWERBOOK receives depth- simply does not see low enough to allow map information (by serial port B) from the CHESHM to recognize feet and dodge them. vision system and communicates velocity When feet are located directly underneath commands to the Nomad 150 base (by serial legs, as is customary, the feet rarely pose a port A). The program that is being used to problem. However, when individuals try to test CHESHM is an almost purely functional trip CHESHM by sticking their feet out, they are wandering program that turns away from asking for a painful experience. Over the dangerously close obstacles or stairs. This pro- course of more than three hours of testing gram performs no filtering or sensor interpre- upstairs among conference attendees, CHESHM tation; therefore, it is a transparent tool for successfully avoided all body parts (save feet) examining the reliability of the vision mod- and all static and moving obstacles save four ule through observation of the robot’s wan- direct collisions with humans. Given that the dering behavior. robot successfully avoided hundreds of


The 1996 Robot Teams.

Stanford researchers are convinced that their obstacle-avoidance solution is a good approach for the Bookstore Project. Now, they are revisiting navigation, this time using purely passive vision as the only sensor. The Bookstore Project is exciting because real-time, passive perception is beginning to look tenable using off-the-shelf processing power.

Robot Forum

The Robot Forum was held after the competition to allow for in-depth dialogue. At the forum, each team gave a short presentation on its robot entry. A small group of noncompetition researchers, including Reid Simmons, Tom Dean, and Leslie Pack Kaelbling, also gave their impressions of the competition and its impact on robotics. Then, a free-wheeling discussion occurred. The primary focus of the discussion was on the direction of the competition over the next few years. There was a general consensus that the competition needs to move toward more natural environments with moving obstacles and that longer tasks requiring more robustness should be encouraged. Many participants in the discussion felt it was time to start moving the competition out of a constructed arena and into the actual hallways and rooms of the conference center or the conference hotel. There was also a call for more interaction between the robots and the conference attendees. The discussions at the forum will help next year's organizers shape the AAAI-96 robot competition.

Conclusion

Overall, we were pleased with how the robots performed. Historically, robots tend to get stage fright. When the crowds start to gather, the robots seem to become unpredictable.


It is not uncommon to hear, "I have no idea why it is doing that. It has never done that before!" Typically, the explanation turns out to be that all the camera flashes, infrared focusing sensors, and cellular phones interfered with the robots' sensors and affected communications. Although there were a few problems this year, as in past years, the reliability of the robots has definitely improved. A major contributing factor to this improvement was that the majority of teams did all their computing on board. History has clearly shown that this approach is a much more reliable configuration.

One objective this year was to define the role of the robot contests in the greater scheme of robotics research. The wheelchair exhibition did just that. The NAVCHAIR used a sonar-processing algorithm first demonstrated by a 1992 contest winner. TAO-1 used a vision system demonstrated in the 1993 robot exhibition, and WHEELESLEY is the next-generation refinement of another 1993 contest entry. The wheelchair application is an important and practical use for intelligent robotics, and much of the research that went into the prototype systems exhibited this year can be linked directly to robot contests of a few years ago.

On a more detailed level, this year we wanted to develop a core set of rules that outline the tasks to be completed but also allow teams some flexibility in making what would otherwise be arbitrary choices. For example, one of the objectives of the second event was to demonstrate object recognition and manipulation. The rules stated that the trash to be manipulated was Coke cans and Styrofoam cups. However, we allowed teams (at no penalty) to substitute other types of cans (Pepsi perhaps) if they worked better for them, as long as the substituted trash was of the same approximate size and available in stores. One team (University of Chicago) chose to use shape instead of color to distinguish between objects. Therefore, it decided to use multiple types and colors of soda cans to demonstrate that extra generality. Also, to reduce needless anxiety among the teams, a rough outline of the arena was provided in advance, allowing teams to anticipate potential last-minute difficulties.

As the designers of previous years' competitions have attested to (Simmons 1995), designing a set of rules and a scoring mechanism that is fair to such a diverse group of robots and strategies is a difficult task. A lot of careful thought went into designing the scoring and penalties. The objective was to take the lessons learned from past years, construct unambiguous, 100-percent-objective scoring criteria, and not deviate from the announced scoring once the events began.

One of the key difficulties was in how to design a manipulation task that was fair to both physical- and virtual-manipulator robots. Although physical manipulation is obviously preferred over virtual (because of its inherently autonomous nature), past competitions have had few successful physical-manipulation robots. Because virtual manipulation can be so much faster than physical manipulation, we had to compensate somehow.

Based on past contests, we decided that physically placing trash inside the trash can would take approximately three times as long as virtual manipulation and that placing the trash near the trash can would take about twice as long. Thus, the final rules said that actually placing the trash in the trash can was worth 35 points each, pushing trash into the trash zone (near the trash can) was worth 20 points, and virtually placing the trash in the trash can was worth 10 points.
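Written out, the weighting is a direct attempt to equalize points per unit of effort across the three strategies (the timing ratios are the organizers' estimates quoted above; the code is our own illustration):

DEPOSIT_POINTS = {"in_bin": 35, "trash_zone": 20, "virtual": 10}
RELATIVE_TIME = {"in_bin": 3.0, "trash_zone": 2.0, "virtual": 1.0}

def score_event2(deposits):
    """deposits: mapping from deposit type to number of objects handled."""
    return sum(DEPOSIT_POINTS[kind] * count for kind, count in deposits.items())

# Points per unit of time come out roughly comparable for the three strategies:
for kind in DEPOSIT_POINTS:
    print(kind, DEPOSIT_POINTS[kind] / RELATIVE_TIME[kind])  # about 10-12 each

print(score_event2({"in_bin": 7}))  # seven objects placed in bins: 245, before penalties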
In addition, the event would have two first-place winners: one overall winner based on the total score and one winner in the physical-manipulator category. It turned out that all but one of the robots used physical manipulation and that the overall winner (NCSU) used physical manipulation and took both awards. We were also heartened by the fact that the final results, based on an objective scoring system, matched most observers' subjective impressions of each robot's abilities.

Overall, the last four years of robot competitions have been successful at pushing the state of the art in mobile robotics. Tasks that were beyond the reach of robots a few years ago are now being done routinely in the competition. This steady upward trend is primarily the result of advances in vision-processing techniques (especially color vision processing) and mobile manipulation. The competitions have allowed for sharing of technical information across the community of researchers, and a benchmark set of tasks has evolved that allows for comparison of competing technology approaches.

Acknowledgments

The competition organizers want to thank the many people and companies that made the competition successful. We especially want to thank the members of all the robot teams, who went through extraordinary lengths, numerous border crossings, and severe sleep deprivation to bring their robots to IJCAI-95.

44 AI MAGAZINE Articles lengths, numerous border crossings, and David Hinkle is a senior scientist severe sleep deprivation to bring their robots at Lockheed Martin’s Artificial to IJCAI-95. Additional thanks go to Intelligence Research Center. He Microsoft, the Advanced Research Projects received his B.S. in computer sci- ence from Syracuse University in Agency, IJCAI, and AAAI for their generous 1987 and his M.S. in computer donations. Members of the AAAI staff who science from Northeastern Uni- helped tremendously were Carol Hamilton, versity in 1989. His research Mike Hamilton, Sara Hedberg, and Rick interests include machine learn- Skalsky. Awards for the winning teams were ing, neural networks, and robotics. He is currently provided by Network Cybernetics Corpora- conducting research in the area of data mining. He tion (Irvington, Texas), distributors of the AI is a member of the American Association for CD-ROM. Thanks to Pete Bonasso for video- Artificial Intelligence. He is also an organizer for taping the entire competition and Linda the 1996 AAAI Robot Competition and Exhibition in Portland, Oregon. Williams for helping with timing and general organization. Finally, NASA Johnson Space Center’s Robotics, , and Simula- tion Division and the MIT AI Lab graciously David Kortenkamp is a research allowed us to establish competition World scientist with Metrica Incorporat- Wide Web sites on their computers. ed and is responsible for directing research projects at NASA John- son Space Center’s Robotics References Architecture Lab. He received his B.S. in computer science from the Borenstein, J., and Koren, Y. 1991. The Vector Field University of Minnesota in 1988. Histogram for Fast Obstacle Avoidance for Mobile He received his M.S. and Ph.D. in Robots. IEEE Journal of Robotics and Automation 7(3): computer science and engineering from the Uni- 535–539. versity of Michigan in 1990 and 1993, respectively. Dean, T., and Bonasso, R. P. 1993. 1992 AAAI Robot His research interests include software architectures Exhibition and Competition. AI Magazine 14(1): for autonomous agents, mobile robot mapping, 35–48. navigation and perception, and human-robot inter- Konolige, K. 1994. Designing the 1993 Robot Com- action and cooperation. Kortenkamp has been petition. AI Magazine 15(1): 57–62. involved in all the American Association for Artificial Intelligence (AAAI) robot competitions, Kweon, I.; Kuno, Y.; Watanabe, M.; and Onoguchi, including leading the group that finished first in K. 1992. Behavior-Based Mobile Robot Using Active the inaugural competition in 1992. He is also an Sensor Fusion. In Proceedings of the 1992 IEEE organizer for the 1996 AAAI Robot Competition International Conference on Robotics and Automa- and Exhibition in Portland, Oregon. tion. Washington, D.C.: IEEE Computer Society. Miller, D. P., and Grant, E. 1994. A Robot Wheelchair. In Proceedings of the AAIA/NASA Con- ference on Intelligent Robots in Field, Factory, Ser- David Miller received his B.A. in vice, and Space, 407–411. NASA Conference Publi- astronomy from Wesleyan Uni- cation 3251. Houston, Tex.: National Aeronautics versity and his Ph.D. in computer and Space Administration. science from Yale University in Nourbakhsh, I.; Powers, R.; and Birchfield, S. 1995. 1985. He has been building and programming robot systems for DERVISH: An Office-Navigating Robot. AI Magazine 16(2): 53–60. more than a dozen years. 
He spent several years at the Jet Nourbakhsh, I.; Morse, S.; Becker, C.; Balabanovic, Propulsion Laboratory, where he M.; Gat, E.; Simmons, R; Goodridge, S.; Potlapalli, started the Planetary Micro-Rover Program. He won H.; Hinkle, D.; Jung, K.; and Van Vactor, D. 1993. the Most Innovative Entry Award in the First AAAI The Winning Robots from the 1993 Robot Compe- Robot Competition and Exhibition for SCARECROW, tition. AI Magazine 14(4): 51–62. the robot he built with his then–five-year-old son. Simmons, R. 1995. The 1994 AAAI Robot Competi- He is currently the cochair of the Robotics, tion and Exhibition. AI Magazine 16(2): 19–30. Resources, and Manufacturing Department of the Simpson, R. C.; Levine, S. P.; Bell, D. A.; Koren, Y.; International Space University Summer Program Borenstein, J.; and Jaros L. A. 1995. The NAVCHAIR and is also the technical director of the KISS Insti- Assistive-Navigation System. In Proceedings of the tute for Practical Robotics. IJCAI-95 Workshop on Developing AI Applications for the Disabled. Menlo Park, Calif.: American Association for Artificial Intelligence.
