Robot Programming

TOMAS LOZANO-PEREZ

Invited Paper

Abstract-The industrial robot's principal advantage over traditional automation is programmability. Robots can perform arbitrary sequences of pre-stored motions or of motions computed as functions of sensory input. This paper reviews requirements for and developments in robot programming systems. The key requirements for robot programming systems examined in the paper are in the areas of sensing, world modeling, motion specification, flow of control, and programming support. Existing and proposed robot programming systems fall into three broad categories: guiding systems, in which the user leads a robot through the motions to be performed; robot-level programming systems, in which the user writes a computer program specifying motion and sensing; and task-level programming systems, in which the user specifies operations by their desired effect on objects. A representative sample of systems in each of these categories is surveyed in the paper.

Manuscript received November 29, 1982. This research was performed at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in part by the Office of Naval Research under Office of Naval Research Contract N00014-81-K-0494 and in part by the Advanced Research Projects Agency under Office of Naval Research Contracts N00014-80-C-0505 and N00014-82-K-0334. The author is with the Massachusetts Institute of Technology, Artificial Intelligence Laboratory, Cambridge, MA 02139.

I. INTRODUCTION

THE KEY characteristic of robots is versatility; they can be applied to a large variety of tasks without significant redesign. This versatility derives from the generality of the robot's physical structure and control, but it can be exploited only if the robot can be programmed easily. In some cases, the lack of adequate programming tools can make some tasks impossible to perform. In other cases, the cost of programming may be a significant fraction of the total cost of an application. For these reasons, robot programming systems play a crucial role in robot development. This paper outlines some key requirements of robot programming and reviews existing and proposed approaches to meeting these requirements.

A. Approaches to Robot Programming

The earliest and most widespread method of programming robots involves manually moving the robot to each desired position, and recording the internal joint coordinates corresponding to that position. In addition, operations such as closing the gripper or activating a welding gun are specified at some of these positions. The resulting "program" is a sequence of vectors of joint coordinates plus activation signals for external equipment. Such a program is executed by moving the robot through the specified sequence of joint coordinates and issuing the indicated signals. This method of robot programming is usually known as teaching by showing; in this paper we will use the less common, but more descriptive, term guiding [32].

Robot guiding is a programming method which is simple to use and to implement. Because guiding can be implemented without a general-purpose computer, it was in widespread use for many years before it was cost-effective to incorporate computers into industrial robots. Programming by guiding has some important limitations, however, particularly regarding the use of sensors. During guiding, the programmer specifies a single execution sequence for the robot; there are no loops, conditionals, or computations. This is adequate for some applications, such as spot welding, painting, and simple materials handling. In other applications, however, such as mechanical assembly and inspection, one needs to specify the desired action of the robot in response to sensory input, data retrieval, or computation. In these cases, robot programming requires the capabilities of a general-purpose computer programming language.

Some robot systems provide computer programming languages with commands to access sensors and to specify robot motions. We refer to these as explicit or robot-level languages. The key advantage of robot-level languages is that they enable the data from external sensors, such as vision and force, to be used in modifying the robot's motions. Through sensing, robots can cope with a greater degree of uncertainty in the position of external objects, thereby increasing their range of application. The key drawback of robot-level programming languages, relative to guiding, is that they require the robot programmer to be expert in computer programming and in the design of sensor-based motion strategies. Hence, robot-level languages are not accessible to the typical worker on the factory floor.

Many recent approaches to robot programming seek to provide the power of robot-level languages without requiring programming expertise. One approach is to extend the basic philosophy of guiding to include decision-making based on sensing. Another approach, known as task-level programming, requires specifying goals for the positions of objects, rather than the motions of the robot needed to achieve those goals. In particular, a task-level specification is meant to be completely robot-independent; no positions or paths that depend on the robot geometry or kinematics are specified by the user. Task-level programming systems require complete geometric models of the environment and of the robot as input; for this reason, they are also referred to as world-modeling systems. Task-level programming is still in the research stage, in contrast to guiding and robot-level programming, which have reached the commercial stage.

B. Goals of this Paper

The goals of this paper are twofold: one, to identify the requirements for advanced robot programming systems; the other, to describe the major approaches to the design of these systems.


The paper is not meant to be a catalog of all existing robot programming systems.

A discussion of the requirements for robot programming languages is not possible without some notion of what the tasks to be programmed will be and who the users are. The next section will discuss one task which is likely to be representative of robot tasks in the near future. We will use this task to motivate some of the detailed requirements later in the paper. The range of computer sophistication of robot users is large, ranging from factory personnel with no programming experience to Ph.D.'s in computer science. It is a fatal mistake to use this fact to argue for reducing the basic functionality of robot programming systems to that accessible to the least sophisticated user. Instead, we argue that robot programming languages should support the functional requirements of their most sophisticated users. The sophisticated users can implement special-purpose interfaces, in the language itself, for the less experienced users. This is the approach taken in the design of computer programming languages; it also echoes the design principles discussed in [96].

II. A ROBOT APPLICATION

Fig. 1. A representative robot application.

Fig. 1 illustrates a representative robot application. The task involves two robots cooperating to assemble a pump. Parts arrive, randomly oriented and in arbitrary order, on two moving conveyor belts. The robot system performs the following functions:
1) determine the position and orientation of the parts, using a vision system;
2) grasp the parts on the moving belts;
3) place each part on a fixture, add it to the assembly, or put it aside for future use, depending on the state of the assembly.

The following sequence is one segment of the application. The task is to grasp a cover on the moving belt, place it on the pump base, and insert four pins so as to align the two parts. Note the central role played by sensory information.

1) Identify, using vision, the (nonoverlapping) parts arriving on one of the belts, a pump cover in this case, and determine its position and orientation relative to the robot. During this operation, inspect the pump cover for defects such as missing holes or broken tabs.
2) Move ROBOT1 to the prespecified grasp point for the cover, relative to the cover's position and orientation as determined by the vision system. Note that if the belt continues moving during the operation, the grasp point will need to be updated using measurements of the belt's position.
3) Grasp the cover using a programmer-specified gripping force.
4) Test the measured finger opening against the expected opening at the grasp point. If it is not within the expected tolerance, signal an error [6], [103]. This condition may indicate that the vision system or the control system are malfunctioning.
5) Place the cover on the base, by moving to an approach position above the base and moving down until a programmer-specified upward force is detected by the wrist force sensor. During the downward motion, rotate the hand so as to null out any torques exerted on the cover because of misalignment of the cover and the base. Release the cover and record its current position for future use.
6) In parallel with the previous steps, move ROBOT2 to acquire an aligning pin from the feeder. Bring the pin to a point above the position of the first hole in the cover, computed from the known position of the hole relative to the cover and the position of the cover recorded above.
7) Insert the pin. One strategy for this operation requires tilting the pin slightly to increase the chances of the tip of the pin falling into the hole [43], [44]. If the pin does not fall into the hole, a spiral search can be initiated around that point [6], [31]. Once the tip of the pin is seated in the hole, the pin is straightened. During this motion, the robot is instructed to push down with a prespecified force, to push in the y direction (so as to maintain contact with the side of the hole), and to move so as to null out any forces in the x direction [44]. At the end of this operation, the pin position is tested to ascertain that it is within tolerance relative to the computed hole position. (A sketch of this strategy appears at the end of this section.)
8) In parallel with the insertion of the pin by ROBOT2, ROBOT1 fetches another pin and proceeds with the insertion when ROBOT2 is done. This cycle is repeated until all the pins are inserted. Appropriate interlocks must be maintained between the robots to avoid a collision.

This application makes use of four types of sensors:
1) Direct position sensors. The internal sensors, e.g., potentiometers or incremental encoders, in the robot joints and in the conveyor belts are used to determine the position of the robot and the belt at any instant of time.
2) Vision sensors. The camera above each belt is used to determine the identity and position of parts arriving on the belt and to inspect them.
3) Finger touch sensors. Sensors in the fingers are used to control the magnitude of the gripping force and to detect the presence or absence of objects between the fingers.
4) Wrist force sensors. The positioning errors in the robot, uncertainty in part positions, errors in grasping position, and part tolerances all conspire to make it impossible to reliably position parts relative to each other accurately enough for tight tolerance assembly. It is possible, however, to use the forces generated as the assembly progresses to suggest incremental motions that will achieve the desired final state; this is known as compliant motion,¹ e.g., [60], [79], [101], [102].

¹This is also known as active compliance, in contrast to passive compliance achievable with mechanical devices.

Most of this application is possible today with commercially available robots and vision systems. The exceptions are in the use of sensing. The pin insertion, for example, would be done today with a mechanical compliance device [102] specially designed for this type of operation. Techniques for implementing compliant motion via force feedback are known, e.g., [73], [75], [79], [88]; but current force feedback methods are not as fast or as robust as mechanical compliance devices. Current commercial vision systems would also impose limitations on the task, e.g., parts must not be touching. Improved techniques for vision and compliance are key areas of research.
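The insertion strategy of step 7 is easy to sketch in a general-purpose language. The fragment below is illustrative Python, not code from any system discussed in this paper: the spiral generator is self-contained, while robot.guarded_probe and robot.comply_straighten are hypothetical stand-ins for the guarded-move and compliant-motion primitives discussed in Section III.

    import math
    from typing import Iterator, Tuple

    def spiral_search(step_mm: float = 0.5, pitch_mm: float = 0.5,
                      max_radius_mm: float = 5.0) -> Iterator[Tuple[float, float]]:
        """Yield (dx, dy) offsets tracing an outward spiral around the
        nominal hole position, for retrying when the pin tip misses."""
        r, theta = 0.0, 0.0
        while r <= max_radius_mm:
            yield (r * math.cos(theta), r * math.sin(theta))
            theta += step_mm / max(r, step_mm)   # advance roughly step_mm of arc
            r = pitch_mm * theta / (2 * math.pi)

    # Usage with a hypothetical robot interface (the tilt, the guarded
    # probe, and the compliant straightening mirror the steps above):
    #     for dx, dy in spiral_search():
    #         if robot.guarded_probe(x0 + dx, y0 + dy, tilt_deg=3.0):
    #             robot.comply_straighten(push_down_force=2.0)
    #             break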
III. REQUIREMENTS OF ROBOT PROGRAMMING

The task described above illustrates the major aspects of sophisticated robot programming: sensing, world modeling, motion specification, and flow of control. This section discusses each of these issues and their impact on robot programming.

A. Sensing

The vast majority of current industrial robot applications are performed using position control alone, without significant external sensing. Instead, the environment is engineered so as to eliminate all significant sources of uncertainty. All parts are delivered by feeders, for example, so that their positions will be known accurately at programming time. Special-purpose devices are designed to compensate for uncertainty in each grasping or assembly operation. This approach requires large investments in design time and special-purpose equipment for each new application. Because of the magnitude of the investment, the range of profitable applications is limited; because of the special-purpose nature of the equipment, the capability of the system to respond to changes in the design of the product or in the manufacturing method is negligible. Under these conditions, much of the potential versatility of robots is wasted.

Sensing enables robots to perform tasks in the presence of significant environmental uncertainties without special-purpose tooling. Sensors can be used to identify the position of parts, to inspect parts, to detect errors during manufacturing operations, and to accommodate to unknown surfaces. Sensing places two key requirements on robot programming systems. The first requirement is to provide general input and output mechanisms for acquiring sensory data. This requirement can be met simply by providing the I/O mechanisms available in most high-level computer programming languages, although this has seldom been done. The second requirement is to provide versatile control mechanisms, such as force control, for using sensory data to determine robot motions. This need to specify parameters for sensor-based motions and to specify alternate actions based on sensory conditions is the primary motivation for using sophisticated robot programming languages.

Sensors are used for different purposes in robot programs; each purpose has a separate impact on the system design. The principal uses of sensing in robot programming are as follows:
1) initiating and terminating motions;
2) choosing among alternative actions;
3) obtaining the identity and position of objects and features of objects;
4) complying to external constraints.

The most common use of sensory data in existing systems is to initiate and terminate motions. Most robot programming systems provide mechanisms for waiting for an external binary signal before proceeding with execution of a program. This capability is used primarily to synchronize robots with other machines. One common application of this capability arises when acquiring parts from feeders; the robot's grasping motion is initiated when a light beam is interrupted by the arrival of a new part at the feeder. Another application is that of locating an imprecisely known surface by moving towards it and terminating the approach motion when a microswitch is tripped or when the value of a force sensor exceeds a threshold. This type of motion is known as a guarded move [104] or stop on force [6], [73]. Guarded moves can be used to identify points on the edges of an imprecisely located object such as a pallet. The contact points can then be used to determine the pallet's position relative to the robot and supply offsets for subsequent pickup motions. Section IV-A illustrates a limited form of this technique available within some existing guiding systems. General use of this technique requires computing new positions on the basis of stored values; hence it is limited to robot-level languages.
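Expressed in a general-purpose language, a guarded move needs only an incremental-motion primitive and a sensor read. The sketch below is a minimal Python rendering; step_fn and force_fn are assumed callables wrapping the robot and sensor interfaces, which are not specified here.

    def guarded_move(step_fn, force_fn, threshold_n: float,
                     max_steps: int = 1000) -> int:
        """Advance in small increments until the sensed force exceeds a
        threshold (a 'guarded move' or 'stop on force'); return how many
        increments were taken before contact."""
        for i in range(max_steps):
            if abs(force_fn()) > threshold_n:
                return i                     # contact detected
            step_fn()                        # one incremental motion
        raise RuntimeError("no contact within travel limit")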
This requirement can gramming systems should provide general input/output inter- be met simply by providing the 1/0 mechanisms available in faces, including communications channels to other computers, most high-level computerprogramming languages, although not just a few binary or analog channelsas is the rule in today’s this has seldom been done. The second requirement is to pro- robot systems. videversatile control mechanisms, such as force control, for Once the data from a sensor or database module are obtained, using sensory data to determine robot motions. This need to some computation must be performed on the module’s output specifyparameters for sensor-based motions and to specify so as to obtain a target robot position. For example, existing alternate actions based on sensory conditions is the primary commercial vision systems can be used to compute the position motivationfor using sophisticatedrobotprogramminglanguages. of the center of area of an object’s outline and the orientation Sensors are used for different purposes in robot programs; of the line that minimizes the second moment. These measure- each purpose has a separate impact on the system design. The ments are obtained relative to the camera’s coordinate system. principal uses of sensing in robot Programming are as follows Before the object can be grasped, these data must be related to the robot’s coordinate system and combined with informa- 1) initiating and terminating motions, tion about the relationship of the desired grasp point to the 2) choosing among alternative actions, measureddata (see Section Again, this points out the 3) obtaining the identity and positionof objects and features 111-B). interplaybetween the requirements for obtaining sensory of objects, data and for processing them. 4) complying to external constraints. The fourth mode of sensory interaction, active compliance, The most common use of sensory data in existing systems is is necessary insituations requiring continuous motion in toinitiate and terminate motions. Most robotprogramming response tocontinuous sensory input. Data from force, systems provide mechanisms for waiting for an external binary proximity, or visual sensors can be used to modify the robot’s signalbefore proceeding with execution of aprogram. This motion so as to maintainor achieve a desired relationship capability is used primarily to synchronize robots with other withother objects. The forcecontrolled motions to turn a machines.One commonapplication of thiscapability arises crank,for example, require that the target position of the 824 PROCEEDINGS OF THE IEEE, VOL. 71, NO. 7, JULY 1983

Other examples are welding on an incompletely known or moving surface, and inserting a peg in a hole when the position uncertainty is greater than the clearance between the parts. Compliant motion is an operation specific to robotics; it requires special mechanisms in a robot programming system.

There are several techniques for specifying compliant motions; for a review see [62]. One method models the robot as a spring whose stiffness along each of the six motion freedoms can be set [35], [83]. This method ensures that a linear relationship is maintained between the force which is sensed and the displacements from a nominal position along each of the motion freedoms. A motion specification of this type requires the following information:
1) A coordinate frame in which the force sensor readings are to be resolved, known as the constraint frame. Some common alternatives are: a frame attached to the robot hand, a fixed frame in the room, or a frame attached to the object being manipulated.
2) The desired position trajectory of the robot. This specifies the robot's nominal position as a function of time.
3) Stiffnesses for each of the motion freedoms relative to the constraint frame. For example, a high stiffness for translation along the x-axis means that the robot will allow only small deviations from the position specified in the trajectory, even if high forces are felt in the x direction. A low stiffness, on the other hand, means that a small force can cause a significant deviation from the position specified by the trajectory.

The specification of a compliant motion for inserting a peg in a hole [62] is as follows. The constraint frame will be located at the center of the peg's bottom surface, with its z-axis aligned with the axis of the peg. The insertion motion will be a linear displacement in the negative z direction, along the hole axis, to a position slightly below the expected final destination of the peg. The stiffnesses are specified by a matrix relating the Cartesian position parameters of the robot's end effector to the force sensor inputs

    f = K Δ

where f is a 6 × 1 vector of forces and torques, K is a 6 × 6 matrix of stiffnesses, and Δ is a 6 × 1 vector of deviations of the robot from its planned path. While inserting a peg in a hole, we wish the constraint frame to follow a trajectory straight down the middle of the hole, but complying with forces along the x- and y-axes and with torques about the x- and y-axes. The stiffness matrix K for this task would be a diagonal matrix

    K = diag(k0, k0, k1, k0, k0, k1)

where k0 indicates a low stiffness and k1 a high stiffness.²

²Unfortunately, the numerical choices for stiffnesses are dictated by detailed considerations of characteristics of the environment and of the control system [101], [35].

The complexity of specifying the details of a compliant motion argues for introducing special-purpose syntactic mechanisms into robot languages. Several such mechanisms have been proposed for different compliant motion types [67], [75], [76], [83].
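As a rough illustration of the spring model, the following Python/numpy sketch computes, once per control cycle, the deviation permitted by f = K Δ; the stiffness values are illustrative placeholders (see footnote 2), not recommendations.

    import numpy as np

    k0, k1 = 10.0, 1.0e4    # low and high stiffnesses (N/m, N·m/rad); illustrative
    K = np.diag([k0, k0, k1, k0, k0, k1])   # comply in x, y and about x, y
    C = np.linalg.inv(K)                    # compliance matrix, so Δ = C f

    def compliant_step(x_nominal: np.ndarray, wrench: np.ndarray) -> np.ndarray:
        """One control cycle: given the nominal pose (x, y, z, rx, ry, rz)
        on the planned path and the sensed wrench (fx, fy, fz, tx, ty, tz)
        in the constraint frame, return the commanded pose."""
        delta = C @ wrench      # deviation allowed by the spring law f = K Δ
        return x_nominal + delta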
One key difference between the first three sensor interaction mechanisms and active compliance is extensibility. The first three methods allow new sensors and modules to be added or changed by the user, since the semantics of the sensor is determined only by the user program. Active compliance, on the other hand, requires much more integration between the sensor and the motion control subsystem; a new type of sensor may require a significant system extension. Ideally, a user's view of compliant motion could be implemented in terms of lower level procedures in the same robot language. Sophisticated users could then modify this implementation to suit new applications, new sensors, or new motion algorithms. In practice, efficiency considerations have ruled out this possibility, since compliant motion algorithms must be executed hundreds of times a second.³ This is not a fundamental restriction, however, and increasing computer power, together with sophisticated compilation techniques, may allow future systems to provide this desirable capability.

³Reference [27] describes a robot system architecture that enables different sensors to be interfaced into the motion control subsystem from the user language level; see also [75] for a different proposal.

In summary, we have stressed the need for versatile input/output and computation mechanisms to support sensing in robot programming systems. The most natural approach for providing these capabilities is by adopting a modern high-level computer language as the basis for a robot programming language. We have identified one sensor-based mechanism, namely compliant motion, that requires specific language mechanisms beyond those of traditional computer languages.

In addition to the direct mechanisms needed to support sensing within robot programming languages, there are mechanisms needed due to indirect effects of the reliance on sensing for robot programming. Some of these effects are as follows:
1) Target positions are not known at programming time; they may be obtained from an external database or vision sensor, or simply be defined by hitting something.
2) The actual path to be followed is not known at programming time; it may be determined by the history of sensory inputs.
3) The sequence of motions is not known at programming time; the result of sensing operations will determine the actual execution sequence.

These effects of sensing have significant impact on the structure of robot programming systems. The remainder of this section explores these additional requirements.

B. World Modeling

Tasks that do not involve sensing can be specified as a sequence of desired robot configurations; there is no need to represent the geometrical structure of the environment in terms of objects. When the environment is not known a priori, however, some mechanism must be provided for representing the positions of objects and their features, such as surfaces and holes. Some of these positions are fixed throughout the task, others must be determined from sensory information, and others bear a fixed relationship with respect to variable positions. Grasping an object, for example, requires specifying the desired position of the robot's gripper relative to the object's position. At execution time, the actual object position is determined using a vision system or on-line database. The desired position for the gripper can be determined by composing the relative grasp position and the absolute object position; this gripper position must then be transformed to a robot configuration.


A robot programming system should facilitate this type of computation on object positions and robot configurations.

The most common representation for object positions in robotics and graphics is the homogeneous transform, represented by a 4 × 4 matrix [75]. A homogeneous transform matrix expresses the relation of one coordinate frame to another by combining a rotation of the axes and a translation of the origin. Two transforms can be composed by multiplying the corresponding matrices. The inverse of a transform which relates frame A to frame B is a transform which relates B to A. Coordinate frames can be associated with objects and features of interest in a task, including the robot gripper or tool. Transforms can then be used to express their positions with respect to one another.

A simple world model, with indicated coordinate frames, is shown in Fig. 2. The task is to visually locate the bracket on the table, grasp it, and insert the pin, held in a stationary fixture, into the bracket's hole. A similar task has been analyzed in [33], [93].

Fig. 2. World model with coordinate frames.

The meanings of the various transforms indicated in Fig. 2 are as follows. Cam is the transform relating the camera frame to the WORLD frame. Grasp is the transform relating the desired position of the gripper's frame to the bracket's frame. Let Bracket be the unknown transform that relates the bracket frame to WORLD. We will be able to obtain from the vision system the value of Bkt, a transform relating the bracket's frame to the camera's frame. Hole is a transform relating the hole's frame to that of the bracket. The value of Hole is known from the design of the bracket. Pin relates the frame of the pin to that of the fixture. Fixture, in turn, relates the fixture's frame to WORLD. Z relates the frame of the robot base to WORLD. Our goal is to determine the transform relating the end effector's (gripper's) frame E relative to the robot's base. Given E and Z, the robot's joint angles can be determined (see, for example, [75]).

The first step of the task is determining the value of Bracket, which is simply Cam Bkt. The desired gripper position for grasping the bracket is

    Z E = Bracket Grasp.

Since Cam is relative to WORLD, Bkt relative to Cam, and Grasp relative to Bkt, the composition gives us the desired gripper position relative to WORLD, i.e., Z E. At the target position we want the location of the hole relative to WORLD to be equal to that of the pin; this relationship can be expressed as

    Bracket Hole = Fixture Pin.

From this we can see that Bracket = Fixture Pin Hole⁻¹. Hence, the new gripper location is

    Z E = Fixture Pin Hole⁻¹ Grasp.
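This bookkeeping maps directly onto 4 × 4 matrix arithmetic. The sketch below, in Python with numpy, recomputes the gripper goals from the derivation above; the numeric transforms are invented for illustration and would in practice come from camera calibration, the vision system, and the part and fixture designs.

    import numpy as np

    def transform(angle_deg: float, translation) -> np.ndarray:
        """4 x 4 homogeneous transform: rotation about z, then translation
        (one rotation axis is enough for this illustration)."""
        c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
        T[:3, 3] = translation
        return T

    Cam     = transform(0.0,  [0.0, 0.0, 1.5])    # camera in WORLD (calibration)
    Bkt     = transform(30.0, [0.2, 0.1, -1.4])   # bracket in camera frame (vision)
    Grasp   = transform(0.0,  [0.0, 0.0, 0.05])   # gripper relative to bracket
    Fixture = transform(0.0,  [1.0, 0.5, 0.0])    # fixture in WORLD
    Pin     = transform(0.0,  [0.1, 0.0, 0.2])    # pin in fixture frame
    Hole    = transform(90.0, [0.05, 0.05, 0.0])  # hole in bracket frame
    Z       = transform(0.0,  [0.5, 0.0, 0.0])    # robot base in WORLD

    Bracket   = Cam @ Bkt                                    # first step
    ZE_grasp  = Bracket @ Grasp                              # Z E = Bracket Grasp
    ZE_insert = Fixture @ Pin @ np.linalg.inv(Hole) @ Grasp  # new gripper location
    E         = np.linalg.inv(Z) @ ZE_insert                 # gripper w.r.t. base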

The use of coordinate frames to represent positions has two drawbacks. One drawback is that a coordinate frame, in general, does not specify a robot configuration uniquely. There may be several robot configurations that place the end effector in a specified frame. For a robot with six independent motion freedoms, there are usually on the order of eight robot configurations that place the gripper at a specified frame. Some frames within the robot's workspace may be reached by an infinite number of configurations, however. Furthermore, for robots with more than six motion freedoms, the typical coordinate frames in the workspace will be achievable by an infinite number of configurations. The different configurations that achieve a frame specification may not be equivalent; some configurations, for example, may give rise to a collision while others may not. This indeterminacy needs to be settled at programming time, which may be difficult for frames determined from sensory data.

Another, dual, drawback of coordinate frames is that they may overspecify a configuration. When grasping a symmetric object such as a cylindrical pin, for example, it may not be necessary to specify the orientation of the gripper around the symmetry axis. A coordinate frame will always specify this orientation, however. Thus if the vision system describes the pin's position as a coordinate frame and the grasping position is specified likewise, the computed grasp position will specify the gripper's orientation relative to the pin's axis. In some cases this will result in a wasted alignment motion; in the worst case, the specified frame may not be reachable because of physical limits on joint travel of the robot. Another use of partially specified object positions occurs in the interpretation of sensory data. When the robot makes contact with an object, it acquires a constraint on the position of that object. This information does not uniquely specify the object's position, but several such measurements can be used to update the robot's estimate of the object's position [6]. This type of computation requires representing partially constrained positions or, equivalently, constraints on the position parameters [94], [14].

Despite these drawbacks, coordinate frames are likely to continue being the primary representation of positions in robot programs. Therefore, a robot programming system should support the representation of coordinate frames and computations on frames via transforms. But this is not all; a world model also should provide mechanisms for describing the constraints that exist between positions. The simplest case of this requirement arises in managing the various features on a rigid object. If the object is moved, then the positions of all its features are changed in a predictable way. The responsibility for updating all of these data should not be left with the programmer; the programming system should provide mechanisms for indicating the relationships between positions, so that updates can be carried out automatically. Several existing languages provide mechanisms for this, e.g., AL [67] and LM [48].

Beyond representation and computation on frames, robot systems must provide powerful mechanisms for acquiring frames. A significant component of the specification of a robot task is the specification of the positions of objects and features. Many of the required frames, such as the position of the hole relative to the bracket frame in the example above, can be obtained from drawings of the part. This process is extremely tedious and error prone, however. Several methods for obtaining these data have been proposed:
1) using the robot to define coordinate frames;
2) using geometric models from Computer-Aided Design (CAD) databases;
3) using vision systems.

The first of these methods is the most common. A robot's end effector defines a known coordinate frame; therefore, guiding the robot to a desired position provides the transform needed to define the position. Relative positions can be determined from two absolute positions. Two drawbacks of this simple approach are that some of the desired coordinate frames are inaccessible to the gripper and that the orientation accuracy achievable by guiding and visual alignment is limited.⁵ These problems can be alleviated by computing transforms from some number of points with known relationships to each other, e.g., the origin of the frame and points on two of the axes. Indicating points is easier and more reliable than aligning coordinate systems. (A sketch of this construction follows.)

⁵A common assumption is that since the accuracy of the robot limits execution, the same accuracy is sufficient during task specification. This assumption neglects the effect of the robot's limited repeatability, however. Errors in achieving the specified position, when compounded with the specification errors, might cause the operation to fail. Furthermore, if the location is used as the basis for relative locations, the propagation of errors can make reliable execution impossible.
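The origin-plus-axis-points recipe can be written compactly. The following Python helper is an assumed illustration, not code from the cited systems; it builds the frame by Gram-Schmidt orthogonalization of the guided points.

    import numpy as np

    def frame_from_points(origin, on_x_axis, in_xy_plane) -> np.ndarray:
        """Build a 4 x 4 coordinate frame from three guided points: the
        origin, a point on the +x axis, and a point in the xy plane."""
        o, px, pxy = map(np.asarray, (origin, on_x_axis, in_xy_plane))
        x = px - o
        x = x / np.linalg.norm(x)
        y = pxy - o
        y = y - np.dot(y, x) * x          # remove the x component
        y = y / np.linalg.norm(y)
        z = np.cross(x, y)                # completes a right-handed frame
        T = np.eye(4)
        T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, o
        return T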

Several systems implement this approach, e.g., AL [33], [67] and VAL [88], [98].

A second method of acquiring positions, which is likely to grow in importance, is the use of databases from CAD systems. CAD systems offer significant advantages for analysis, documentation, and management of engineering changes. Therefore, they are becoming increasingly common throughout industry. CAD databases are a natural source for the geometric data needed in robot programs. The descriptions of objects in a CAD database may not be in the form convenient for the robot programmer, however. The desired object features may not be explicitly represented, e.g., a point in the middle of a parametrically defined surface. Furthermore, positions specific to the robot task, such as grasp points, are not represented at all, and must still be specified. Therefore, the effective use of CAD databases requires a high-level interface for specifying the desired positions. Pointing on a graphics screen is one possibility, but it suffers from the two-dimensional restrictions of graphics [2]. Another method [1], [80] is to describe positions by sets of symbolic spatial relationships that hold between objects in each position. For example, the positions of Block 1 in Fig. 3 must satisfy the following relationships:

    (f3 Against f1) and (f4 Against f2).

Fig. 3. Symbolic specification of positions (Block 1 and Block 2, with faces f1 through f4).

One advantage of using symbolic spatial relationships is that the positions they denote are not limited to the accuracy of a light-pen or of a robot, but by that of the model. Another advantage of this method is that families of positions, such as those on a surface or along an edge, can be expressed. Furthermore, people easily understand these relationships. One small drawback of symbolic relations is that the specifications are less concise than specifications of coordinate frames.

Another potentially important method of acquiring positions is the use of vision. For example, two cameras can simultaneously track a point of light from a laser pointer, and the system can compute the position of the point by triangulation [37]. One disadvantage of this method, and of methods based on CAD models, is that there is no guarantee that the specified point can be reached without collisions.

We have focused on the representation of single positions; this reflects the emphasis in current robot systems on end-point specification of motions. In many applications, this emphasis is misplaced. For example, in arc-welding, grinding, glue application, and many other applications, the robot is called upon to follow a complex path. Currently these paths are specified as a sequence of positions. The next section discusses alternative methods of describing motions, which require representing surfaces and volumes. A large repertoire of representational and computational tools is already available in CAD systems and Numerically Controlled (NC) machining systems, e.g., [21].

In summary, the data manipulated by robot programs are primarily geometric. Therefore, robot programming systems have a requirement to provide suitable data input, data representation, and computational capabilities for geometric data. Of these three, data input is the most amenable to solutions that exploit the capabilities of robot systems, e.g., the availability of the robot and its sensors.

C. Motion Specification

The most obvious aspect of robot programming is motion specification. The solution appears similarly obvious: guiding. But guiding is sufficient only when all the desired positions and motions are known at programming time. We have postponed a discussion of motion specification until after a discussion of sensing and modeling to emphasize the broader range of conditions under which robot motion must be specified in sensor-based applications.

Heretofore, we have assumed that a robot motion is specified by its final position, be it in absolute coordinates or relative to some object. In many cases, this is not sufficient; a path for the robot must also be specified. A simple example of this requirement arises when grasping parts: the robot cannot approach the grasp point from arbitrary directions; it must typically approach from above or risk colliding with the part. Similarly, when bringing the part to add to a subassembly, the approach path must be specified. Paths are commonly specified by indicating a sequence of intermediate positions, known as via points, that the robot should traverse between the initial and final positions. The shape of the path between via points is chosen from among some basic repertoire of path shapes implemented by the robot control system. Three types of paths are implemented in current systems: uncoordinated joint motions, straight lines in the joint coordinate space, and straight lines in Cartesian space. Each of these represents a different tradeoff between speed of execution and "natural" behavior.
They are each suitable to some applications more than others. Robot systems should support a wide range of such motion regimes.

One important issue in motion specification arises due to the nonuniqueness of the mapping from Cartesian to joint coordinates. The system must provide some well-defined mechanism for choosing among the alternative solutions. In some cases, the user needs to identify which solution is appropriate. VAL provides a set of configuration commands that allow the user to choose one of the up to eight joint solutions available at some Cartesian positions. This mechanism is useful, but limited. In particular, it cannot be extended to redundant robots with infinite families of solutions, or to specify the behavior at a kinematic singularity.

Some applications, such as arc-welding or spray-painting, can require very fine control of the robot's speed along a path, as well as of the shape of the path [9], [75]. This type of specification is supported by providing explicit trajectory control commands in the programming system. One simple set of commands could specify speed and acceleration bounds on the trajectory. AL provides for additional specifications such as the total time of the trajectory. Given a wide range of constraints, it is very likely that the set of constraints for particular trajectories will be inconsistent. The programming system should either provide a well-defined semantics for treating inconsistent constraints⁶ or make it impossible to specify inconsistent constraints. Trajectory constraints also should be applicable to trajectories whose path is not known at programming time, for example, compliant motions.

⁶A special case occurs when the computed path goes through a kinematic singularity. It is impossible, in general, to satisfy trajectory constraints such as speed of the end-effector at the singularity.

The choice of via points for a task depends on the geometry of the parts, the geometry of the robot, the shape of the paths the robot follows between positions, and the placement of the motion in the robot workspace. When the environment is not known completely at programming time, the via points must be specified very conservatively. This can result in unnecessarily long motions.

An additional drawback of motions specified by sequences of robot configurations is that the via points are chosen, typically, without regard for the dynamics of the robot as it moves along the path. If the robot is to go through the via points very accurately, the resulting motion may have to be very slow. This is unfortunate, since it is unlikely that the programmer meant the via points exactly. Some robot systems assume that via points are not meant exactly unless told otherwise. The system then splines the motion between path segments to achieve a fast, smooth motion, but one that does not pass through the via points [75]. The trouble is that the path is then essentially unconstrained near the via points; furthermore, the actual path followed depends on the speed of the motion.

A possible remedy for both of these problems is to specify the motion by a set of constraints between features of the robot and features of objects in the environment. The execution system can then choose the "best" motion that satisfies these constraints, or signal an error if no motion is possible. This general capability is beyond the state of the art in trajectory planning, but a simple form has been implemented. The user specifies a nominal Cartesian path for the robot plus some allowed deviation from the path; the trajectory planner then plans a joint space trajectory that satisfies the constraints [95].

Another drawback of traditional motion specification is the awkwardness of specifying complex paths accurately as sequences of positions. More compact descriptions of the desired path usually exist. An approach followed in NC machining is to describe the curve as the intersection of two mathematical surfaces. A recent robot language, MCL [58], has been defined as an extension to APT, the standard NC language. The goal of MCL is to capitalize on the geometric databases and computational tools developed within existing APT systems for specifying robot motions. This approach is particularly attractive for domains, such as aircraft manufacture, in which many of the parts are numerically machined.

Another very general approach to trajectory specification is via user-supplied procedures parameterized by time. Paul [74], [75] refers to this as functionally defined motion. The programming system executes the function to obtain position goals. This method can be used, for example, to follow a surface obtained from CAD data, turn a crank, or throw objects. The limiting factor in this approach is the speed at which the function can be evaluated; in existing robot systems, no method exists for executing user procedures at servo rates.

A special case of functionally defined motion is motion specified as a function of sensor values. One example is in compliant motion specifications, where some degrees of freedom are controlled to satisfy force conditions. Another example is a motion defined relative to a moving conveyor belt. Both of these cases are common enough that special-purpose mechanisms have been provided in programming systems. There are significant advantages to having these mechanisms implemented using a common basic mechanism.
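A functionally defined motion is simply a procedure that the system samples at servo rate. The Python sketch below renders two of the examples just mentioned, a crank-turning goal parameterized by time and a goal tied to a moving belt; the numeric values and the robot.servo_to call in the usage comment are assumptions, not any system's actual interface.

    import math

    BELT_SPEED = 0.05   # m/s; a real system would read the belt encoder

    def crank_target(t: float, center=(0.5, 0.0, 0.3), radius=0.1, omega=1.0):
        """Position goal for turning a crank: trace a circle about the
        crank axis at angular rate omega (rad/s); t is seconds from start."""
        cx, cy, cz = center
        return (cx + radius * math.cos(omega * t),
                cy + radius * math.sin(omega * t),
                cz)

    def belt_target(t: float, grasp_at_t0=(0.2, 0.0, 0.1)):
        """Position goal defined relative to a moving conveyor belt."""
        x0, y0, z0 = grasp_at_t0
        return (x0 + BELT_SPEED * t, y0, z0)

    # The control loop evaluates the function at servo rate, e.g.:
    #     while not done:
    #         robot.servo_to(crank_target(t))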
In summary, the view of motion specification as simply specifying a sequence of positions or robot configurations is too limiting. Mechanisms for geometric specification of curves and for functionally defined motion should also be provided. No existing systems provide these mechanisms with any generality.

D. Flow of Control

In the absence of any form of sensing, a fixed sequence of operations is the only possible type of robot program. This model is not powerful enough to encompass sensing, however. In general, the program for a sensor-based robot must choose among alternative actions on the basis of its internal model of the task and the data from its sensors. The task of Section II, for example, may go through a very complex sequence of states, because the parts are arriving in random order and because the execution of the various phases of the operation is overlapped. In each state, the task program must specify the appropriate action for each robot. The programming system must provide capabilities for making these control decisions. The major sources of information on which control decisions can be based are: sensors, control signals, and the world model.

The simplest use of this information is to include a test at fixed places in the program to decide which action should be taken next, e.g., "If (i < j) then Signal X else Moveto Y." One important application where this type of control is suitable is error detection and correction.

Robot operations are subject to large uncertainties in the initial state of the world and in the effect of the actions. As a result, the bulk of robot programming is devoted to error detection and correction. Much of this testing consists of comparing the actual result of an operation with the expected results. One common example is testing the finger opening after a grasp operation to see if it differs from the expected value, indicating either that the part is missing or that a different part is there. This type of test can be easily handled with traditional IF-THEN tests after completion of the operation. This test is so common that robot languages such as VAL and WAVE [74] have made it part of the semantics of the grasp command.
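The finger-opening test is short enough to state exactly. The following Python sketch is a generic rendering of it, not the VAL or WAVE built-in; the expected opening and tolerance are illustrative, and the interpretation of the two failure directions is one plausible reading of the test.

    EXPECTED_OPENING_MM = 22.0    # finger opening expected at the grasp point
    TOLERANCE_MM = 1.5

    def verify_grasp(measured_opening_mm: float) -> str:
        """Compare the measured finger opening with the expected value
        after closing on the part, as described above."""
        error = measured_opening_mm - EXPECTED_OPENING_MM
        if abs(error) <= TOLERANCE_MM:
            return "ok"
        if error < 0:
            return "part missing"     # fingers closed past the part size
        return "different part"       # opening larger than expected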
For researchareas in Computer Science. Manylanguage mech- example, in the sample task, the insertion of the pins into the anismshave been proposed for process synchronization, pump cover(steps 6 through 8, Section 11) requiresthat among these are: semaphores [ 171, events, conditional critical ROBOTl and ROBOT2 be coordinated so as to minimize the regions [ 391, monitors and queues [ 11 1, and communicating duration of the operation while avoiding interference among sequentialprocesses [40]. Robot systems should build upon the robots. If we let ROBOTl be in charge, we can coordinate thesedevelopments, perhaps by usinglanguagea such as the operation using the following signal lines: ConcurrentPascal [ 11 ] orAda [42] as a base language.A fewexisting robot languages have adoptedsome of these 1) GET-PIN?: ROBOT2 asks if it is safe to get a new pin. mechanisms, e.g., AL and TEACH [81],[821. Even the 2) OK-TO-GET: ROBOT 1 says it OK. is mostsophisticated developments in computer languages do 3) INSERT?: ROBOT2 asks if it is safe to proceed to insert not address all the robot coordination problems, however. the pin. When theinteraction among robots is subject to critical 4) OK-TO-INSERT: ROBOT1 says it is OK. real-timeconstraints, the paradigm of nearly independent DONE : ROBOT 1 says it is all over. 5) controlwith periodic synchronization is inadequate. An The basic operationof the controlprograms could be as follows: exampleoccurs when multiple robots must cooperate phys- ROB OTl ROBOT2 ROBOTl ically, e.g., in lifting an object too heavy for any one. Slight Wait for COVER-ARRIVED 3: If signal DONE Goto 4 deviations from a pre-planned position trajectory would cause Signal OK-TOGET Signal GET-PIN? one of the robots to bear all the weight, leading to disaster. i:= 1 Wait for OK-TO-GET What is needed, instead, is cooperative control of both robots Call PlaceCover-in-Fixture Call Get-Pin-2 based on the force being exerted on both robots by the load Wait for INSERT-PIN? Signal INSERT-PIN? Signal OK-TO-INSERT Wait for OK-TO-INSERT [ 45 I, [ 601, [ 681. The programming system should provide a if (i < np) then do Call Insert-Pin-2 mechanism for specifying the behavior of systems more com- [Call Get-Pin-1 Goto 3 plex than a single robot. Another example of the need of this i:=i+ 11 4: ... kind of coordination is in theprogramming and control of else do [Signal DONE multifingered grippers [ 841. Goto 21 In summary, existing robot programming systems are based Wait for GET-PIN? on the view of a robot system as a single robot weakly linked if (i < np) then do toother machines.In practice, many machines including [Signal OK-TOGET i:= i+ 11 sensors,special grippers, feeders, conveyors, factory control Call Insert-Pin-1 computers,and several robots may be cooperating during a Goto 1 task.Furthermore, the interactions between them may be ... highlydynamic, e.g., to maintaina force between them, or This illustration of how a simple coordination task could be mayrequire extensive sharing of information. No existing LOZANO-PBREZ: ROBOT PROGRAMMING 829 robot programming system adequately deals with all of these this basic guiding. Inrobot-level systems, guiding is used to interactions. In fact, no existing computer language is adequate define positions while the sequencing is specified in a program. to deal with this kind of parallelism and real-time constraints. Thedifferences among basic guidingsystems are a) in the way the positions are specified and theb) repertoire of motions betweenpositions. 
The most common ways ofspecifying E. Programming Support positions are: by specifying incremental motions on a teach- Robot applications do not occur in a vacuum. Robot pro- pendant, andby moving the robot through the motions, grams often must access external manufacturing data, ask users either directly or via a master-slave linkage. for data or corrective action, and produce statistical reports. The incremental motions specified via the teach-pendant can These functions are typical of most computer applications and be interpreted as: independent motion of each joint between aresupported by all computerprogramming systems. Many positions,straight lines in thejoint-coordinate space, or robot systems neglect to support them, however. In principle, straightlines in Cartesianspace relative tosome coordinate theexercise of thesefunctions can be separated from the system, e.g., therobot’s base orthe robot’s end-effector. specification of the task itself but, in practice, they are inti- When using the teach-pendant, only a few positions areusudy matelyintertwined. A sophisticatedrobot programming sys- recorded, on command from the instructor. The path of the tem must first be a sophisticated programming system. Again, robot is then interpolated between these positions using one this requirementcan be readilyachieved by embedding the of the three typesof motion listed above. robotprogramming system within an existingprogramming When moving the robot through the motions directly, the system [ 751. Alternatively, care must be taken in the design completetrajectory can be recorded as a series of closely of newrobot programming systems not to overlook the spacedpositions on afixed time base. The latter method is “mundane” programming functions. used primarily in spray-painting,where it is important to A similar situation exists with respect to program develop- duplicate the input trajectory precisely. ment.Robot program development is oftenignored in the Theprimary advantage of guiding is itsimmediacy: what design of robotsystems and, consequently, complex robot yousee is whatyou get. In manycases, however, it is ex- programscan be very difficult to debug. The development tremely cumbersome, as when the same position (or a simple of robotprograms has several characteristicswhich merit variation)must be repeated atdifferent points in a task or special treatment. whenfine positioning is needed.Furthermore, we have 1)Robot programs have complexside-effects and their indicatedrepeatedly the importance of sensingin robotics execution time is usually long, hence it is not always feasible andthe limitations of guiding in thecontext of sensing. to re-initialize the program upon failure. Robot programming Another important limitation pf basic guiding is in expressing systemsshould allow programs to be modified on-line and controlstructures, which inherently require testing and immediately restarted. describing alternate sequences. 2) Sensoryinformation and real-time interactions are not 1) Extended Guiding: The limitations of basic guiding with usuallyrepeatable. One useful debugging toolfor sensor- respect to sensing and control can be abated, though not com- basedprograms provides the ability torecord the sensor pletely abolished, by extensions short of a full programming outputs, together with program traces. language. 
Forexample, one of themost common uses of 3) Complex geometry and motions are difficult to visualize; sensorsin robot programs is todetermine the location of simulators can play an important inrole debugging, for example, someobject to be manipulated. After the object is located, see [381, [65], [911. subsequent motions are made relativeto theobject’s coordinate Theseare not minor considerations, they are central to frame. This capability can be accomodated within the guiding increased usefulness of robot programming systems. paradigm if taught motions can be interpreted as relative to Most existingrobot systems are stand-alone, meant to be somecoordinate frame that may be modified atexecution used directly by a single user without the mediation of com- time. These coordinate frames can be determined, for example, puters. This design made perfect sense when robots were not byhaving the robot move until a touchsensor on theend- controlled by general-purpose computers; today it makes little effector encounters an object. Thisis known asguarded motion sense. A robot system should support a high-speed command or a search. This capability is part of some commercial robot interfaceto other computers. Therefore, if auser wants to systems, e.g., ASEA [3], CincinattiMilacron [41], and IBM develop an alternate interface, he need not be limited by the [321, 1921. This approachcould be extended tothe case performance of the robot system’s user interface. On the other when the coordinate frames are obtained from avision system. hand, the user can take advantage of the control system and Some guiding systems also provide simple control structures. kinematicscalculations in the existing system. This design For example, the instructions in the taught sequence are given would also facilitate the coordination of multiple robots and numbers. Then, on the basis of testson external or internal make sophisticated applications easier to develop. binarysignals, control can be transferred to differentpoints in thetaught sequence. The ASEA and Cincinatti Milacron Iv. SURVEY OF ROBOT PROGRAMMING SYSTEMS guidingsystems, for example, both support conditional branching.These systems also support simplea form of In this section, we survey several existing and proposed robot procedures. The procedures can be used to carry out common Programmingsystems. An additional survey of robotpro- gramming systems can be foundin [ 71. operations performed at different times in the taught sequence, such as commonmachining operations applied to palletized parts. The programmer can exploit these facilities to produce A. Guiding morecompact programs. These control structure capabilities All robotprogramming systems support some form of arelimited, however, primarily because guiding systems do guiding. The simplest form of guiding is to record a sequence not support explicit computation. of robotpositions that can then be “played back”; we call To illustratethe capabilities of extendedguiding systems, 830 PROCEEDINGS OF THE IEEE, VOL. 71, NO. 7, JULY 1983

To illustrate the capabilities of extended guiding systems, we present a simple task programmed in the ASEA robot's guiding system. (This program is based on two program fragments included in the ASEA manual [3].) The task is illustrated in Fig. 4; it consists of picking a series of parts of different heights from a pallet, moving them to a drilling machine, and placing them on a different pallet.

[Fig. 4. Palletizing task: the input pallet and a detail of the pickup operation, showing the target contact and target grasp positions.]

The resulting program has the following structure:

    I.No.  Instruction     Remarks
    10     OUTPUT ON 17    Flag ON indicates do pickup
    20     PATTERN         Beginning of procedure
    30     TEST JUMP 17    Skip next instruction if flag is on
    40     JUMP 170
    50     OUTPUT OFF 17   Next time do put down
    60     ...             Pickup operation (see below)
    100    MOD             End of common code for pickup
    110    ...             Positioning for first pickup
    130    MOD             Execute procedure
    140    ...             Positioning for second pickup
    160    MOD             Execute procedure
    170    ...             Machining and put down operation
    200    OUTPUT ON 17    Next time do pickup
    210    MOD             End of common code for put down
    220    ...             Position for first put down
    230    MOD             Execute procedure
    240    ...             Position for second put down

Note that the MOD operation is used with two meanings: 1) to indicate the end of a common section of the PATTERN, and 2) to indicate where the common section is to be executed. The sequence of instructions executed would be: 10, 20, 30, 50, 60, ..., 100, ..., 130, 30, 40, 170, ..., 200, ..., 230, 30, 50, ....

The key to the pickup operation is that we can use a search to locate the top surface of the part, so we need not know the heights exactly. The fingers are initially closed and the robot starts out in position P1, which is above the highest part and vertically above P2, which is at the height of the shortest part (see Fig. 4). Note that the parts are not in the workspace during the programming sequence. The pickup sequence could be programmed as follows:

1) Move vertically down towards P2 until contact is felt (steps 1-4).
2) Open the fingers (steps 5, 6). (For simplicity, we have neglected to raise the arm before opening the fingers.)
3) Move down the distance between P2 and P3, relative to the actual location where contact was detected (steps 7-9).
4) Close the fingers (steps 10, 11).

Here is the sequence:

    Programmer action                Remarks
    1.  Position vertically to P2.   Manual motion to the end position of
                                     the search.
    2.  Select speed to P2.
    3.  Key code for search and      This code indicates that the motion that
        vertical operation.          follows is a search in the vertical
                                     direction.
    4.  PTPF                         Insert positioning command to P2 in
                                     program.
    5.  Set grip opening and         Specify finger opening.
        select waiting time.
    6.  GRIPPERS                     Insert command to actuate grippers (open).
    7.  Position to P3.              Grasping position (relative to P2).
    8.  Select time for motion.
    9.  PTPL                         Coordinated joint motion, relative to the
                                     position after the search.
    10. Set grip opening and         Specify finger closing.
        select waiting time.
    11. GRIPPERS                     Insert command to actuate grippers (close).

The putdown sequence would be programmed in a similar fashion.
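At execution time, the effect of this taught pickup is a guarded downward search followed by motions expressed relative to the detected contact. A minimal sketch of that logic in Python (the move_until_contact, move_relative, and gripper calls are hypothetical stand-ins for illustration, not ASEA operations):

    def pickup(robot, p2, p3_offset):
        # Guarded motion: descend toward the taught position P2 until the
        # touch sensor fires; the contact defines the part's top surface.
        contact = robot.move_until_contact(target=p2, direction=(0, 0, -1))
        if contact is None:
            raise RuntimeError("no part found above P2")
        robot.open_gripper()
        # The taught P2-to-P3 displacement is replayed relative to the
        # actual contact point, so part heights need not be known exactly.
        robot.move_relative(contact, p3_offset)
        robot.close_gripper()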
2) Off-Line Guiding: Traditional guiding requires that the workspace for the task, all the tooling, and any parts be available during program development. If the task involves a single large or expensive part, such as an airplane, ship, or automobile, it may be impractical to wait until a completed part is available before starting the programming; this could delay the complete manufacturing process. Alternatively, the task environment may be in space or underwater. In these cases, a mockup of the task may be built, but a more attractive alternative is available when a CAD model of the task exists. In this case, the task model together with a robot model can be used to define the program by off-line guiding. In this method, the system simulates the motions of the robot in response to a program or to guiding input from a teach-pendant. Off-line guiding offers the additional advantages of safety and versatility. In particular, it is possible to experiment with different arrangements of the robot relative to the task so as to find one that, for example, minimizes task execution time [38].

B. Robot-Level Programming

In Section III we discussed a number of important functional issues in the design of robot programming systems. The design of robot-level languages, by virtue of its heritage in the design of computer languages, has inherited many of the controversies of that notoriously controversial field. A few of these controversial issues are important in robot programming:

1) Compiler versus interpreter. Language systems that compile high-level languages into a lower level language can achieve great efficiency of execution as well as early detection of some classes of programming errors. Interpreters, on the other hand, provide enhanced interactive environments, including debugging, and are more readily extensible. These human-factors issues have tended to dominate; most robot language systems are interpreter based. Performance limitations of interpreters have sometimes interfered with achieving some useful capabilities, such as functionally defined motions.

2) New versus old. Is it better to design a new language or extend an old one? A new one can be tailored to the needs of the new domain. An old one is likely to be more complete, to have an established user group, and to have supporting software packages. In practice, few designers can avoid the temptation of starting de novo; therefore, most robot languages are "new" languages. There are, in addition, difficulties in acquiring sources for existing language systems. One advantage of interpreters in this regard is that they are smaller than compilers and, therefore, easier to build.
In the remainder of the section, we examine some representative robot-level programming systems, in roughly chronological order. The languages have been chosen to span a wide range of approaches to robot-level programming. We use examples to illustrate the "style" of the languages; a detailed review of all these languages is beyond the scope of this paper. We close the section with a brief mention of some of the many other robot-level programming systems that have been developed in the past ten years.

1) MHI 1960-1961: The first robot-level programming language, MHI, was developed for one of the earliest computer-controlled robots, the MH-1 at MIT [18]. As opposed to its contemporary the Unimate, which was not controlled by a general-purpose computer and used no external sensors, MH-1 was equipped with several binary touch sensors throughout its hand, an array of pressure sensors between the fingers, and photodiodes on the bottom of the fingers. The availability of sensors fundamentally affected the mode of programming developed for the MH-1.

MHI (Mechanical Hand Interpreter) ran on an interpreter implemented on the TX-0 computer. The programming style in MHI was framed primarily around guarded moves, i.e., moving until a sensory condition was detected. The language primitives were:

1) "move": indicates a direction and a speed;
2) "until": test a sensor for some specified condition;
3) "ifgoto": branch to a program label if some condition is detected;
4) "ifcontinue": branch to continue action if some condition holds.

A sample program, taken from [18], follows:

    a, move x for 120          ; Move along x with speed 120
    until s1 10 rel 101        ; until sense organ 1 indicates a decrease of
                               ; 10, relative to the value at start of this
                               ; step (condition 1)
    until s1 206 101 abs stp   ; or until sense organ 1 indicates 206 or
                               ; less absolute, then stop (condition 2)
    ifgoto f1, b               ; if condition 1 alone is fulfilled,
                               ; go to sequence b
    ifgoto t f2, c             ; if at least condition 2 is fulfilled,
                               ; go to sequence c
    ifcontinue t, a            ; in all other cases continue sequence a

MHI did not support arithmetic or any other control structure beyond sensor monitoring. The language, still, is surprisingly "modern" and powerful. It was to be many years before a more general language was implemented.
2) WAVE 1970-1975: The WAVE system [74], developed at Stanford, was the earliest system designed as a general-purpose robot programming language. WAVE was a "new" language, whose syntax was modeled after the assembly language of the PDP-10. WAVE ran off-line as an assembler on a PDP-10 and produced a trajectory file which was executed on-line by a dedicated PDP-6. The philosophy in WAVE was that motions could be pre-planned and that only small deviations from these motions would happen during execution. This decision was motivated by the computation-intensive algorithms employed by WAVE for trajectory planning and dynamic compensation. Better algorithms and faster computers have removed this rationale from the design of robot systems today.

In spite of WAVE's low-level syntax, the system provided an extensive repertoire of high-level functions. WAVE pioneered several important mechanisms in robot programming systems; among these were:

1) the description of positions by the Cartesian coordinates of the end-effector (x, y, z, and three Euler angles);
2) the coordination of joint motions to achieve continuity in velocities and accelerations;
3) the specification of compliance in Cartesian coordinates.

The following program in WAVE, from [74], serves to pick up a pin and insert it into a hole:

        TRANS PIN ...            ; Location of pin
        TRANS HOLE ...           ; Location of hole
        ASSIGN TRIES 2           ; Number of pickup attempts
        MOVE PIN                 ; Move to PIN. MOVE first moves in +Z,
                                 ; then to a point above PIN, then -Z
    PICKUP:
        CLOSE 1                  ; Pickup pin
        SKIPE 2                  ; Skip next instruction if Error 2 occurs
                                 ; (Error 2: fingers closed beyond arg
                                 ; to CLOSE)
        JUMP OK                  ; Error did not occur, goto OK
        OPEN 5                   ; Error did occur, open the fingers
        CHANGE Z, -1, NIL, 0, 0  ; Move down one inch
        SOJG TRIES, PICKUP       ; Decrement TRIES; if not negative,
                                 ; jump to PICKUP
        WAIT NO PIN              ; Print "NO PIN" and wait for operator
        JUMP PICKUP              ; Try again when operator types PROCEED
    OK:
        MOVE HOLE                ; Move above hole
        STOP FV, NIL             ; Stop on 50 oz
        CHANGE Z, -1, NIL, 0, 0  ; Try to go down one inch
        SKIPE 23                 ; Error 23: failed to stop
        JUMP NOHOLE              ; Error did not occur (pin hit surface)
        FREE 2, X, Y             ; Proceed with insertion by complying
                                 ; with forces along x and y
        SPIN 2, X, Y             ; Also comply with torques about x and y
        STOP FV, NIL             ; Stop on 50 oz
        CHANGE Z, -2, NIL, 0, 0  ; Make the insertion
    NOHOLE:
        WAIT NO HOLE             ; Failed

Note the use of compliance and guarded moves to achieve robustness in the presence of uncertainty and for error recovery.

WAVE’S syntax was difficult, but the language supported a attempt to develop a high-level language that provides all the significant set of robot functions, many of which still are not capabilitiesrequired for robot programming as well as the available in commercial robot systems. programmizlg features of modem high-level languages, such as 3) MINI 1972-1976: MINI [go],developed at MIT,was ALGOLand Pascal. AL was designed tosupport robot-level not a “new” language, rather it was an extension to an existing and task-level specification. The robot level has been completed LISPsystem by means of a fewfunctions. The functions and will be discussed here; the task level development will be served as aninterface to a real-timeprocess running on a discussed in Section IV-C. separate machine. LISP has little syntax; it is a large collection AL, like WAVE and MINI, runs on two machines. One ma- ofprocedures with common calling conventions, with no chine is responsible for compiling the AL input into a lower distinction between user and system code. The robot control level language that is interpreted bya real-time control machine. functions of MINI simply expanded the repertoire of functions Aninterpreter for the AL languagehas been completed, as available to the LISP programmer. Users couldexpand the well [5]. AL wasdesigned to providefour major kinds of basic syntaxand semantics of the basic robotinterface at capabilities: will, subjectto the limitations of thecontrol system. The 1) Themanipulation capabilities provided by the WAVE principal limitation of MINI was the fact that the robot joints system: Cartesian specification of motions, trajectory planning, werecontrolled independently. The robot used with MINI and compliance. was Cartesian,which minimized the drawbacks of uncoordi- 2) The capabilities of a real-time language: concurrent exe- nated point-to-point motions. cution of processes, synchronization, and on-conditions. The principal attraction of “The Little Robot System” [ 441, 3) Thedata and control structures of an ALGOL-like (901 in which MINI ran was the availability of a highquality language,including data typesfor geometric calculations, 6-degree-of-freedomforce-sensing wrist [44] , [ 661which e.g., vectors, rotations, and coordinate frames. enabledsensitive force controlof the robot. Previous force- 4) Support for world modeling, especially the AFFIXMENT control systems either set the gains in the servos to control mechanism for modeling attachments between frames including compliance [43], or used theerror signals inthe servos of temporary ones such as formed by grasping. the electric joint motors to estimate the forces at the hand An AL program for thepeg-in-hole task is: [73]. In either case, the resulting force sensitivity was on the order of pounds; MIM’s sensitivity was more than an order BEGIN “insert peg into hole” FRAME peg-bottom, peg-grasp, hole-bottom, hole-top; of magnitude better (approximately 1 oz). {The coordinates frames represent actual positionsof object features, The basic functions in MINI set position or force goals for not hand positions } each of the degrees of freedom (SETM), reading the position peg-bottom + FRAME(nilrot, VECTOR(20, 30,O)*inches); and force sensors (GETM), and waiting for some condition to hole-bottom + FRAME(nilrot, VECTOR(25, 35,O)*inches); {Grasping position relativeto peg-bottom occur (WAIT). 
We will illustrate the use of MINI using a set of simple procedures developed by Inoue [44]. The central piece of a peg-in-hole program would be rendered as follows in MINI:

    (DEFUN MOVE-ABOVE (P OFFSET)
      ; set x, y, z goals and wait till they are reached
      (X = (X-LOCATION P))
      (Y = (Y-LOCATION P))
      (Z = (PLUS (Z-LOCATION P) OFFSET))
      (WAIT '(AND (?X) (?Y) (?Z))))

    (DEFUN INSERT (HOLE)
      (MOVE-ABOVE HOLE 0.25)
      ; define a target 1 inch below current position
      (SETQ ZTARGET (DIFFERENCE (GETM ZPOS) 1.0))
      ; move down until a contact force is met or until
      ; the position target is met
      (FZ = LANDING-FORCE)
      (WAIT '(OR (?FZ) (SEQ (GETM ZPOS) ZTARGET)))
      (COND ((SEQ (GETM ZPOS) ZTARGET)
             ; if the position goal was met, i.e., no surface
             ; encountered, comply with lateral forces
             (FX = 0) (FY = 0)
             ; and push down until enough resistance is met
             (FZ = INSERTION-FORCE)
             (WAIT '(?FZ)))
            (T ; if a surface was encountered
             (ERROR INSERT))))

MINI did not have any of the geometric and control operations of WAVE built in, but most of these could easily be implemented as LISP procedures. The primary functional difference between the two systems lay in the more sophisticated trajectory planning facilities of WAVE. The compensating advantage of MINI was that it did not require any pre-planning; the programs could use arbitrary LISP computations to decide on motions in response to sensory input.

4) AL 1974-Present: AL [24], [67] is an ambitious attempt to develop a high-level language that provides all the capabilities required for robot programming as well as the programming features of modern high-level languages, such as ALGOL and Pascal. AL was designed to support robot-level and task-level specification. The robot level has been completed and will be discussed here; the task-level development will be discussed in Section IV-C.

AL, like WAVE and MINI, runs on two machines. One machine is responsible for compiling the AL input into a lower level language that is interpreted by a real-time control machine. An interpreter for the AL language has been completed as well [5]. AL was designed to provide four major kinds of capabilities:

1) The manipulation capabilities provided by the WAVE system: Cartesian specification of motions, trajectory planning, and compliance.
2) The capabilities of a real-time language: concurrent execution of processes, synchronization, and on-conditions.
3) The data and control structures of an ALGOL-like language, including data types for geometric calculations, e.g., vectors, rotations, and coordinate frames.
4) Support for world modeling, especially the AFFIXMENT mechanism for modeling attachments between frames, including temporary ones such as those formed by grasping.

An AL program for the peg-in-hole task is:

    BEGIN "insert peg into hole"
      FRAME peg_bottom, peg_grasp, hole_bottom, hole_top;
      {The coordinate frames represent actual positions of object
       features, not hand positions}
      peg_bottom  ← FRAME(nilrot, VECTOR(20, 30, 0)*inches);
      hole_bottom ← FRAME(nilrot, VECTOR(25, 35, 0)*inches);
      {Grasping position relative to peg_bottom}
      peg_grasp ← FRAME(ROT(xhat, 180*degrees), 3*zhat*inches);
      tries ← 2;
      grasped ← FALSE;
      {The top of the hole is defined to have a fixed relation
       to the bottom}
      AFFIX hole_top TO hole_bottom RIGIDLY
        AT TRANS(nilrot, 3*zhat*inches);
      OPEN bhand TO peg_diameter + 1*inches;
      {Initiate the motion to the peg, note the destination frame}
      MOVE barm TO peg_bottom * peg_grasp;
      WHILE NOT grasped AND i < tries DO
        BEGIN "Attempt grasp"
          CLOSE bhand TO 0*inches;
          IF bhand < peg_diameter/2 THEN
            BEGIN "No object in grasp"
              OPEN bhand TO peg_diameter + 1*inches;
              MOVE barm TO @ - 1*inches;  {@ indicates current location}
            END
          ELSE grasped ← TRUE;
          i ← i + 1;
        END;
      IF NOT grasped THEN ABORT("Failed to grasp the peg");
      {Establish a fixed relation between arm and peg}
      AFFIX peg_bottom TO barm RIGIDLY;
      {Note that we move the peg_bottom, not barm}
      MOVE peg_bottom TO hole_top;
      {Test if a hole is below us}
      MOVE barm TO @ - 1*inches
        ON FORCE(zhat) > 10*ounces DO ABORT("No Hole");
      {Exert downward force, while complying to side forces}
      MOVE peg_bottom TO hole_bottom DIRECTLY
        WITH FORCE_FRAME = station IN WORLD
        WITH FORCE(zhat) = -10*ounces
        WITH FORCE(xhat) = 0*ounces
        WITH FORCE(yhat) = 0*ounces
        SLOWLY;
    END "insert peg in hole"
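The AFFIXMENT mechanism deserves a brief illustration. The idea is that frames form a graph of relative transforms, so that moving one frame implicitly moves everything affixed to it. Below is a minimal sketch of that idea in Python with 4x4 homogeneous matrices; it illustrates the concept only and is not AL's implementation:

    import numpy as np

    class Frame:
        def __init__(self, pose):
            self.pose = pose       # 4x4 homogeneous transform, world coords
            self.children = []     # (frame, relative transform) pairs

        def affix(self, child, rel):
            # Record the child's pose relative to this frame; from now on,
            # moving this frame drags the child along.
            self.children.append((child, rel))

        def move_to(self, new_pose):
            self.pose = new_pose
            for child, rel in self.children:
                child.move_to(new_pose @ rel)

    barm = Frame(np.eye(4))
    peg_bottom = Frame(np.eye(4))
    barm.affix(peg_bottom, np.linalg.inv(barm.pose) @ peg_bottom.pose)
    # Moving barm now updates peg_bottom as well; conversely, a goal for
    # peg_bottom can be converted into a goal for barm by composing with
    # the inverse of the recorded relative transform, which is how a
    # command like "MOVE peg_bottom TO hole_top" can be executed.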

AL is probably the most complete robot programming system yet developed; it was the first robot language to be a sophisticated computer language as well as a robot control language. AL has been a significant influence on most later robot languages.

5) VAL 1975-Present: VAL [89], [98] is the robot language used in the industrial robots of Unimation Inc., especially the PUMA series. It was designed to provide a subset of the capabilities of WAVE on a stand-alone mini-computer. VAL is an interpreter; improved trajectory calculation methods have enabled it to forego any off-line trajectory calculation phase. This has improved the ease of interaction with the language. The basic capabilities of the VAL language are as follows:

1) point-to-point, joint-interpolated, and Cartesian motions (including approach and deproach motions);
2) specification and manipulation of Cartesian coordinate frames, including the specification of locations relative to arbitrary frames;
3) integer variables and arithmetic, conditional branching, and procedures;
4) setting and testing binary signal lines, and the ability to monitor these lines and execute a procedure when an event is detected.

VAL's support of sensing is limited to binary signal lines. These lines can be used for synchronization and also for limited sensory interaction, as shown earlier. VAL's support of on-line frame computation is limited to composition of constant coordinate frames and fixed translation offsets on existing frames. It does support relative motion; this, together with the ability to halt a motion in response to a signal, provides the mechanisms needed for guarded moves. The basic VAL also has been extended to interact with an industrial vision system [30] by acquiring the coordinate frame of a part in the field of view.

As a computer language, VAL is rudimentary; it most resembles the computer language Basic. VAL only supports integer variables, not floating-point numbers or character strings. VAL does not support arithmetic on position data. VAL does not support any kind of data aggregate such as arrays or lists and, although it supports procedures, they may not take any arguments.

A sample VAL program for the peg-in-hole task is shown below. VAL does not support compliant motion, so this operation assumes either that the clearance between the peg and hole is greater than the robot's accuracy or that a passive compliance device is mounted on the robot's end-effector [102]. This limits the comparisons that can be made to other, more general, languages. In the example, we assume that a separate processor is monitoring a force sensor and communicating with VAL via signal lines. In particular, signal line 3 goes high if the Z component of force exceeds a preset threshold.

       SETI TRIES = 2
       REMARK If the hand closes to less than 100 mm, go to the
       REMARK statement labelled 20.
    10 GRASP 100, 20
       REMARK Otherwise continue at statement 30.
       GOTO 30
       REMARK Open the fingers, displace down along world Z, and try again.
    20 OPENI 500
       DRAW 0, 0, -200
       SETI TRIES = TRIES - 1
       IF TRIES GE 0 THEN 10
       TYPE NO PIN
       STOP
       REMARK Move 300 mm above HOLE following a straight line.
    30 APPROS HOLE, 300
       REMARK Monitor signal line 3 and call procedure ENDIT to stop the
       REMARK program if the signal is activated during the next motion.
       REACTI ENDIT, 3
       APPROS HOLE, 200
       REMARK Did not feel force, so continue to HOLE.
       MOVES HOLE
VAL has been designed primarily for operations involving predefined robot positions, hence its limited support of computation, data structures, and sensing. A new version of the system, VAL-2, is under development which incorporates more support for computation and communication with external processes.

6) AML 1977-Present: AML [96] is the robot language used in IBM's robot products. AML, like AL, is an attempt at developing a complete "new" programming language for robotics that is also a full-fledged interpreted computer language. The design philosophy of AML is somewhat different from that of AL, however. Where AL focuses on providing a rich set of built-in high-level primitives for robot operations, AML has focused on providing a systems environment where different user robot programming interfaces may be built. For example, extended guiding [92] and vision interfaces [50] can be programmed within the AML language itself. This approach is similar to that followed in MINI.

AML supports operations on data aggregates, which can be used to implement operations on vectors, rotations, and coordinate frames; these data types are part of recent releases of the language. AML also supports joint-space trajectory planning subject to position and velocity constraints, absolute and relative motions, and sensor monitoring that can interrupt motions. Recent AML releases support Cartesian motion and frame affixment, but not general compliant motion or multiple processes. (Compliant motions at low speed could be written as user programs in AML by using its sensor I/O operations; for high-speed motions, the real-time control process would have to be extended.)

An AML program for peg-in-hole might be:

    PICKUP: SUBR(PART_DATA, TRIES);
      MOVE(GRIPPER, DIAMETER(PART_DATA)+0.2);
      MOVE(<1, 2, 3>, XYZ_POSITION(PART_DATA)+ ...);
      TRY_PICKUP(PART_DATA, TRIES);
      END;

    TRY_PICKUP: SUBR(PART_DATA, TRIES);
      IF TRIES LT 1 THEN RETURN('NO PART');
      DMOVE(3, -1.0);
      IF GRASP(DIAMETER(PART_DATA)) = 'NO PART'
        THEN TRY_PICKUP(PART_DATA, TRIES - 1);
      END;

    GRASP: SUBR(DIAMETER, F);
      FMONS: NEW APPLY($MONITOR, PINCH_FORCE(F));
      MOVE(GRIPPER, 0, FMONS);
      RETURN( IF QPOSITION(GRIPPER) LE DIAMETER/2
              THEN 'NO PART'
              ELSE 'PART' );
      END;

    INSERT: SUBR(PART_DATA, HOLE);
      FMONS: NEW APPLY($MONITOR, TIP_FORCE(LANDING_FORCE));
      MOVE(<1, 2, 3>, HOLE+ ...);
      DMOVE(3, -1.0, FMONS);
      IF QMONITOR(FMONS) = 1 THEN RETURN('NO HOLE');
      MOVE(3, HOLE(3) + PART_LENGTH(PART_DATA));
      END;

    PART_IN_HOLE: SUBR(PART_DATA, HOLE);
      PICKUP(PART_DATA, 2);
      INSERT(PART_DATA, HOLE);
      END;

This example has shown the implementation of low-level routines, such as GRASP, that are available as primitives in AL and VAL. In general, such routines would be incorporated into a programming library available to users and would be indistinguishable from built-in routines. The important point is that such programs can be written in the language.

The AML language design has adopted many decisions from the designs of the LISP and APL programming languages. AML, like LISP, does not make distinctions between system and user programs. Also, AML provides a versatile uniform data aggregate, similar to LISP's lists, whose storage is managed by the system. AML, like APL and LISP, provides uniform facilities for manipulating aggregates and for mapping operations over the aggregates.

The languages WAVE, MINI, AL, VAL, and AML are well within the mold of traditional procedural languages, both in syntax and in the semantics of all except a few of their operations. The next three languages we consider have departed from the main line of computer programming languages in more significant ways.

7) TEACH 1975-1978: The TEACH language [81], [82] was developed as part of the PACS system at Bendix Corporation. The PACS system addressed two important issues that have received little attention in other robot programming systems: the issue of parallel execution of multiple tasks with multiple devices, including a variety of sensors; and the issue of defining robot-independent programs. In addressing these issues TEACH introduced several key innovations; among these are the following:

1) Programs are composed of partially ordered sequences of statements that can be executed sequentially or in parallel.
2) The system supports very flexible mapping between the logical devices, e.g., robots and fixtures, specified in the program and the physical devices that carry them out.
3) All motions are specified relative to local coordinate frames, so as to enable simple relocation of the motion sequence.

These features are especially important in the context of systems with multiple robots and sensors, which are likely to be common in future applications. Few attempts have been made to deal with the organization and coordination problems of complex tasks with multiple devices, not all of them robots. Ruoff [82] reports that even the facilities of TEACH proved inadequate in coping with very complex applications and argues for the use of model-based programming tools.

8) PAL 1978-Present: PAL [93] is very different in conception from the languages we have considered thus far. PAL programs consist primarily of a sequence of homogeneous coordinate equations involving the locations of objects and of the robot's end-effector. Some of the transforms in these equations, e.g., those specifying the relative location of a feature to an object's frame, are defined explicitly in the program. Other coordinate frames are defined implicitly by the equations; leading the robot through an execution of the task establishes relations among these frames. Solving for the implicitly defined frames completes the program.

PAL programs manipulate basic coordinate frames that define the position of key robot features: Z represents the base of the robot relative to the world, T6 represents the end of the sixth (last) robot link relative to Z, and E represents the position of the end-effector tool relative to T6. Motions of the tool with respect to the robot base are accomplished by specifying the value of Z + T6 + E, where + indicates composition of transforms. So, for example, the equation

    Z + T6 + E = CAM + BKT + GRASP

specifies that the end-effector should be placed at the grasp position on the bracket whose position is known relative to a camera, as discussed in Section III-B.

The MOV <exp> command in PAL indicates that the "generalized" robot tool frame, ARM + TOL, is to be moved to <exp>. For simple motions of the end-effector relative to the robot base, ARM is Z + T6 and TOL is E. We can rewrite ARM to indicate that the motion happens relative to another object; e.g., the example above can be rewritten as

    - BKT - CAM + Z + T6 + E = GRASP

where - indicates the inverse transform. In this case ARM can be set to the transform expression - BKT - CAM + Z + T6; MOV GRASP will then indicate that the end-effector is to be placed on the grasp frame of the bracket, as determined by the camera. Similarly, placing the pin in the bracket's hole can be viewed as redefining the tool frame of the robot to be at the hole. This can be expressed as

    - FIXTURE + Z + T6 + E - GRASP + HOLE = PIN.

By setting ARM to - FIXTURE + Z + T6 and TOL to E - GRASP + HOLE, MOV PIN will have the desired effect. Of course, the purpose of setting ARM and TOL is to simplify the expression of related motions in the same coordinate frame.
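The coordinate-equation style can be mimicked directly with homogeneous transforms. The following sketch (Python with NumPy; the frame values are invented for illustration) solves the bracket equation above for the commanded arm transform, which is essentially what executing MOV requires once ARM and TOL are set:

    import numpy as np

    def trans(x, y, z):
        T = np.eye(4)
        T[:3, 3] = (x, y, z)
        return T

    inv = np.linalg.inv

    # Known transforms (illustrative values only).
    Z     = trans(0, 0, 50)     # robot base in world coordinates
    E     = trans(0, 0, 10)     # tool relative to the last link
    CAM   = trans(200, 0, 300)  # camera in world coordinates
    BKT   = trans(30, 40, 0)    # bracket as seen by the camera
    GRASP = trans(0, 0, 5)      # grasp feature on the bracket

    # Solve  Z + T6 + E = CAM + BKT + GRASP  for T6, reading the
    # composition '+' as matrix product and '-' as matrix inverse.
    T6 = inv(Z) @ CAM @ BKT @ GRASP @ inv(E)

    # Check: the equation holds up to numerical round-off.
    assert np.allclose(Z @ T6 @ E, CAM @ BKT @ GRASP)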
PAL is still under development; the system described in [93] deals only with position data obtained from the user rather than the robot. Much of the development of PAL has been devoted to the natural use of guiding to define the coordinate frames. Extensions to this system to deal with sensory information are suggested in [75]. The basic idea is that sensory information serves to define the actual value of some coordinate frame in the coordinate equations.

9) MCL 1979-Present: MCL [58] is an extension of the APT language for Numerically Controlled machining to encompass robot control, including the following capabilities:

1) data types, e.g., strings, booleans, reals, and frames;
2) control structures for conditional execution, iterative execution, and multiprocessing;
3) real-time input and output;
4) a vision interface, including the ability to define a shape to be located in the visual field.

Extending APT provides some ease of interfacing with existing machining facilities, including interfaces to existing geometric databases. By retaining APT compatibility, MCL can also hope to draw on the existing body of skilled APT part programmers. On the other hand, the APT syntax, which was designed nearly 30 years ago, is not likely to gain wide acceptance outside of the NC-machining community.

10) Additional Systems: Many other robot language systems are reported in the literature; among these are the following:

1) ML [104] is a low-level robot language developed at IBM, with operations comparable to those of a computer assembly language. The motion commands specified joint motions for an (almost) Cartesian robot. The language provided support for guarded moves by means of SENSOR commands that enabled sensor monitors; when a monitor was activated by a sensor value outside of the specified range, all active motions were terminated. ML supported two parallel robot tasks and provided for simple synchronization between the tasks.
2) EMILY [19] was an off-line assembler for the ML language. It raised the syntax of ML to a level comparable to Fortran.

3) MAPLE [16] was an interpreted AL-like language, also developed at IBM. The actual manipulation operations were carried out by using the capabilities of the ML system described earlier. MAPLE never received significant use.

4) SIGLA [85], developed at Olivetti for the SIGMA robots, supports a basic set of joint motion instructions, testing of binary signals, and conditional tests. It is comparable to the ML language in syntactic level. SIGLA supports pseudoparallel execution of multiple tasks and some simple force control.

5) MAL [28], developed at Milan Polytechnic, Italy, is a Basic-like language for controlling multiple Cartesian robots. The language supports multiple tasks and task synchronization by means of semaphores.

6) LAMA-S [20], developed at IRIA, France, is a VAL-like language with support for on-line computations, for arrays, and for pseudoparallel execution of tasks.

7) LM [48], developed at IMAG, Grenoble, France, is a language that provides most of the manipulation facilities of AL in a minicomputer implementation. LM also supports affixment, but not multiprocessing. LM is being used as the programming language for a recently announced industrial robot produced by Scemi, Inc.

8) RAIL [25], developed at AUTOMATIX Inc., contains a large subset of PASCAL, including computations on a variety of data types, as well as high-level program control mechanisms. RAIL supports interfaces to binary vision and robot welding systems. The language has a flexible way of defining and accessing input or output lines, either as single or multiple bit numbers. RAIL statements are translated into an intermediate representation which can be executed efficiently while enabling interactive debugging. RAIL is syntactically more sophisticated than VAL; it is comparable to AML and LM. RAIL does not support multiprocessing or affixment.
9) HELP, developed at General Electric for their robot products, including the Allegro robot [26], is Pascal-like and supports concurrent processes to control the two arms in the Allegro system. It is comparable in level to RAIL and AML.

This is not a complete list; new languages are being developed every year, but it is representative of the state of the art.

C. Task-Level Programming

Robot-level languages describe tasks by carefully specifying the robot actions needed to carry them out. The goal of task-level programming systems [72], on the other hand, is to enable task specification to be in terms of operations on the objects in the task. The peg-in-hole task, for example, would be described as: INSERT PEG IN HOLE, instead of the sequence of robot motions needed to accomplish the insertion.

A task planner transforms the task-level specifications into robot-level specifications. To do this transformation, the task planner must have a description of the objects being manipulated, the task environment, the robot carrying out the task, the initial state of the environment, and the desired final state. The output of the task planner is a robot-level program to achieve the desired final state when executed in the specified initial state. If the synthesized program is to reliably achieve its goal, the planner must take advantage of any capabilities for compliant motion, guarded motion, and error checking. Hence the task planner must synthesize a sensor-based robot-level program.

Task-level programming is still a subject of research; many unsolved problems remain. The approach, however, is a natural outgrowth of ongoing research and development in CAD/CAM and in artificial intelligence.

Task planning can be divided into three phases: modeling, task specification, and robot-program synthesis. These phases are not computationally independent, but they provide a convenient conceptual division of the problem.

1) World Modeling: The world model for a task must contain the following information:

1) geometric descriptions of all objects and robots in the task environment;
2) physical descriptions of all objects, e.g., mass and inertia;
3) kinematic descriptions of all linkages;
4) descriptions of robot characteristics, e.g., joint limits, acceleration bounds, and sensor capabilities.

Models of task states also must include the positions of all objects and linkages in the world model. Moreover, the model must specify the uncertainty associated with each of the positions. The role that each of these items plays in the synthesis of robot programs will be discussed in the remainder of the section. But first, we will explore the nature of each of the descriptions and how they may be obtained.
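As a rough indication of what such a model might hold, here is a minimal sketch of world-model records in Python. The field choices are illustrative assumptions distilled from the list above, not a prescription from any particular system:

    from dataclasses import dataclass, field

    @dataclass
    class ObjectModel:
        name: str
        geometry: object          # e.g., a solid model built from primitives
        mass: float               # physical description
        inertia: tuple            # principal moments, for motion planning
        pose: tuple               # position and orientation in the world
        pose_uncertainty: float   # bound on the position error

    @dataclass
    class RobotModel:
        joint_limits: list        # (min, max) per joint
        accel_bounds: list        # acceleration bound per joint
        sensors: list             # e.g., ["touch", "force", "vision"]

    @dataclass
    class WorldModel:
        objects: dict = field(default_factory=dict)
        linkages: list = field(default_factory=list)  # kinematic constraints
        robot: RobotModel = None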
The geometric description of objects is the principal component of the world model. The major sources of geometric models are CAD systems, although computer vision may eventually become a major source of models [8]. There are three major types of commercial CAD systems, differing in their representations of solid objects:

1) line — objects are represented by the lines and curves needed to draw them;
2) surface — objects are represented as a set of surfaces;
3) solid — objects are represented as combinations of primitive solids.

Line systems and some surface systems do not represent all the geometric information needed for task planning. A list of edge descriptions, for example, is not sufficient to describe a unique polyhedron, e.g., [59]. In general, a solid modeling system is required to obtain a complete description. In solid modelers, models are constructed by performing set operations on a few types of primitive volumes. The objects depicted in Fig. 5, for example, can be described as the union of two solid cylinders A and B and a solid cube C, hollowed by subtracting a cylinder D, i.e., (A ∪ B ∪ C) - D. The descriptions of the primitive and compound objects vary greatly among existing systems. For surveys of geometric modeling systems see [4], [10], [90].

[Fig. 5. Models obtained by set operations on primitive volumes, e.g., (A ∪ B ∪ C) - D.]

The legal motions of an object are constrained by the presence of other objects in the environment, and the form of the constraints depends in detail on the shapes of the objects. This is the fundamental reason why a task planner needs geometric descriptions of objects. There are additional constraints on motion imposed by the kinematic structure of the robot itself. If the robot is turning a crank or opening a valve, then the kinematics of the crank and the valve impose additional restrictions on the robot's motion. The kinematic models provide the task planner with the information required to plan robot motions that are consistent with external constraints. Examples of kinematic models and their use in planning robot motions can be found in [60].

The bulk of the information in a world model remains unchanged throughout the execution of a task. The kinematic descriptions of linkages are an exception, however. As a result of the robot's operation, new linkages may be created and old linkages destroyed. For example, inserting a pin into a hole creates a new linkage with one rotational and one translational degree of freedom. Similarly, the effect of inserting the pin might be to restrict the motion of one plate relative to another, thus removing one degree of freedom from a previously existing linkage. The task planner must be apprised of these changes, either by having the user specify linkage changes with each new task state, or by having the planner deduce the new linkages from the task state description.

In planning robot operations, many of the physical characteristics of objects play important roles. The mass and inertia of parts, for example, will determine how fast they can be moved or how much force can be applied to them before they fall over. Also, the coefficient of friction between a peg and a hole affects the jamming conditions during insertion (see, e.g., [71], [102]). Hence, the world model must include a description of these characteristics.

The feasible operations of a robot are not sufficiently characterized by its geometric, kinematic, and physical descriptions. We have repeatedly stressed the importance of a robot's sensing capabilities: touch, force, and vision. For task planning purposes, vision allows obtaining the position of an object to some specified accuracy at execution time. Force sensing allows performing guarded and compliant motions. Touch information could serve in both capacities, but its use remains largely unexplored [36]. In addition to sensing, there are many individual characteristics of robots that must be described in the world model: velocity and acceleration bounds, positioning accuracy of each of the joints, and workspace bounds, for example.

Much of the complexity in a world model arises from modeling the robot, but this is done only once.
Geometric, kinematic, and physical models of other objects must be provided for each new task, however. The underlying assumption in task-level languages is that this information would have been developed as part of the design of these objects. If this assumption does not hold, the modeling effort required for a task-level specification, using current modeling methods, might dwarf the effort needed to generate a robot-level program to carry out the task.

2) Task Specification: Tasks can be specified to the task planner as a sequence of models of the world state at several steps during execution of the task. An assembly of several parts, for example, might be specified by a sequence of models as each part is added to the assembly. Fig. 6 illustrates one possible sequence of models for a simple task. All of the models in the task specification share the descriptions of the robot's environment and of the objects being manipulated; the steps in the sequence differ only in the positions of the objects. Hence, a task specification is, to a first approximation, a model of the robot's world together with a sequence of changes in the positions of the model components.

[Fig. 6. Task description as a sequence of model states: bearings, a spacer, a washer, and a nut assembled onto a shaft.]

A model state is given by the positions of all the objects in the environment. Hence, tasks may be defined, in principle, by sequences of states of the world model. The sequence of model states needed to fully specify a task depends on the capabilities of the task planner. The ultimate task planner might need only a description of the initial and final states of the task. This has been the goal of much of the research on automatic problem solving within artificial intelligence (see, e.g., [70]). These problem-solving systems typically do not specify the detailed robot motions necessary to achieve an operation. (The most prominent exception is STRIPS [69], which included mechanisms to carry out the plan in the real world.) They typically produce a plan whose primitive commands are of the form PICKUP(A) and MOVETO(p), without specifying the robot path or any sensory operations. In contrast to these systems, task planners need significant information about intermediate states, but they can be expected to produce a much more detailed robot program.

The positions needed to specify a model state are essentially similar to those needed to specify positions to robot-level systems. The option of using the robot to specify positions is not open, however. The other techniques described in Section III-B are still applicable. The use of symbolic spatial relationships is particularly attractive for high-level task specifications.

We have indicated that model states are simply sets of positions and task specifications are sequences of models. Therefore, given a method such as symbolic spatial relationships for specifying positions, we should be able to specify tasks. This approach has several important limitations, however. We noted earlier that a set of positions may overspecify a state. A typical example [23] of this difficulty arises with symmetric objects, for example a round peg in a round hole: the specific orientation of the peg around its axis given in a model is irrelevant to the task. This problem can be solved by treating the symbolic spatial relationships themselves as specifying the state, since these relationships can express families of positions. Another, more fundamental, limitation is that geometric and kinematic models of an operation's final state are not always a complete specification of the desired operation. One example of this is the need to specify how hard to tighten a bolt during an assembly. In general, a complete description of a task may need to include parameters of the operations used to reach one task state from another.

The alternative to task specification by a sequence of model states is specification by a sequence of operations. Thus, instead of building a model of an object in its desired position, we can describe the operation by which it can be achieved. The description should still be object-oriented, not robot-oriented; for example, the target torque for tightening a bolt should be specified relative to the bolt and not the robot joints. Operations will also include a goal statement involving spatial relationships between objects. The spatial relationships given in the goal not only specify positions, they also indicate the physical relationships between objects that should be achieved by the operation. Specifying that two surfaces are Against each other, for example, should produce a compliant motion that moves until the contact is actually detected, not a motion to the position where contact is supposed to occur. For these reasons, existing proposals for task-level programming languages have adopted an operation-centered approach to task specification [51], [52], [55].
The task specified as a sequence of model states in Fig. 6 can be specified by the following symbolic operations, assuming that the model includes names for objects and object features:

    PLACE BEARING1 SO (SHAFT FITS BEARING1.HOLE) AND
                      (BEARING1.BOTTOM AGAINST SHAFT.LIP)
    PLACE SPACER   SO (SHAFT FITS SPACER.HOLE) AND
                      (SPACER.BOTTOM AGAINST BEARING1.TOP)
    PLACE BEARING2 SO (SHAFT FITS BEARING2.HOLE) AND
                      (BEARING2.BOTTOM AGAINST SPACER.TOP)
    PLACE WASHER   SO (SHAFT FITS WASHER.HOLE) AND
                      (WASHER.BOTTOM AGAINST BEARING2.TOP)
    SCREW-IN NUT ON SHAFT TO (TORQUE = t0)

The first step in the task planning process is transforming the symbolic spatial relationships among object features in the SO clauses above to equations on the position parameters of objects in the model. These equations must then be simplified as far as possible to determine the legal ranges of positions of all objects [1], [78], [94]. The symbolic form of the relationships is used during program synthesis as well.
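To suggest what this transformation involves, here is a toy sketch in Python for the simplest case: each AGAINST relation between features of two rigid parts stacked along the shaft axis contributes one equation on the parts' height parameters, and solving the equations in sequence fixes each part. The encoding, the one-dimensional treatment, and the numbers are deliberate simplifications for illustration, not the methods of [1], [78], or [94]:

    # Each part's unknown pose is its height z along the shaft axis.
    # Feature offsets from each part's origin (illustrative numbers).
    features = {
        ("BEARING1", "BOTTOM"): 0.0, ("BEARING1", "TOP"): 12.0,
        ("SPACER",   "BOTTOM"): 0.0, ("SPACER",   "TOP"): 5.0,
        ("SHAFT",    "LIP"):    3.0,
    }

    # AGAINST(a.fa, b.fb) means: z[a] + off(a.fa) == z[b] + off(b.fb).
    against = [
        (("BEARING1", "BOTTOM"), ("SHAFT", "LIP")),
        (("SPACER", "BOTTOM"), ("BEARING1", "TOP")),
    ]

    z = {"SHAFT": 0.0}              # the shaft anchors the chain
    for (a, fa), (b, fb) in against:
        # Each equation determines the height of one new part.
        z[a] = z[b] + features[(b, fb)] - features[(a, fa)]

    print(z)   # {'SHAFT': 0.0, 'BEARING1': 3.0, 'SPACER': 15.0}

The FITS relations, in the same spirit, would constrain the lateral offsets of each hole to coincide with the shaft axis while leaving the rotation about the axis free, which is exactly the family-of-positions behavior noted above.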
We have mentioned that the actual positions of objects at task-execution time will differ from those in the model; among the principal sources of error are part variation, robot position errors, and modeling errors. Robot programs must tolerate some degree of uncertainty if they are to be useful, but programs that guarantee success under worst case error assumptions are difficult to write and slow to execute. Hence, the task planner must use expectations on the uncertainty to choose motion and sensing strategies that are efficient and robust [44]. If the uncertainty is too large to guarantee success, then additional sensory capabilities or fixtures may be used to limit the uncertainty [11], [94]. For this reason, estimated uncertainties are a key part of task specification. It is not desirable to specify uncertainties numerically for each position of each state. For rigid objects, a more attractive alternative is to specify the initial uncertainty of each object and use the task planner to update the uncertainty as operations are performed. For linkages, information on uncertainty at each of the joints can be used to estimate the position uncertainty of each of the links and of grasped objects [12], [94].

3) Robot Program Synthesis: The synthesis of a robot program from a task specification is the crucial phase of task planning. The major steps involved in this phase are grasp planning, motion planning, and plan checking. The output of the synthesis phase is a program composed of grasp commands, several kinds of motion specifications, sensor commands, and error tests. This program is in a robot-level language for a particular robot and is suitable for repeated execution without replanning.

Grasping is a key operation in robot programs since it affects all subsequent motions. The grasp planner must choose where to grasp objects so that no collisions will result when grasping or moving them [49], [52], [53], [63], [105]. In addition, the grasp planner must choose grasp positions so that the grasped objects are stable in the gripper [8], [34], [73]; in particular, the grasp must be able to withstand the forces generated during motion and contact with other objects. Furthermore, the grasp operation should be planned so that it reduces, or at least does not increase, any uncertainty in the position of the object to be grasped [61].

Once the object is grasped, the task planner must synthesize motions that will achieve the desired goal of the operation reliably. We have seen that robot programs involve three basic kinds of motions: free, guarded, and compliant. Motions during an assembly operation, for example, may have up to four submotions: a guarded departure from the current position, a free motion towards the destination position of the task step, a guarded approach to contact at the destination, and a compliant motion to achieve the goal position.

During free motion, the principal goal is to reach the destination without collision; therefore, planning free motions is a problem in obstacle avoidance. Many obstacle-avoidance algorithms exist, but none of them are both general and efficient. The type of algorithm that has received the most attention is that which builds an explicit description of the constraints on motion and searches for connected regions satisfying those constraints; see, e.g., [13], [15], [46], [53], [56], [86], [87], [97]. A simple example of this kind of technique is illustrated in Fig. 7. A moving polygon A = ∪_i A_i, with distinguished point r_A, must translate among obstacle polygons B_j. This problem is equivalent to one in which the point r_A translates among transformed obstacles C_{i,j}, where each C_{i,j} represents the forbidden positions of r_A arising from potential collisions between A_i and B_j. Any curve that does not overlap any of the C_{i,j} is a safe path for A among the B_j. Extensions of this approach can be used to plan the paths of Cartesian robots [53], [56].

[Fig. 7. Two equivalent obstacle avoidance problems.]
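For convex polygons under pure translation, the C_{i,j} can be computed with a Minkowski-style construction: grow each obstacle by the reflection of the moving polygon about its reference point. A compact sketch (Python with NumPy and SciPy; convex inputs assumed; an illustration of the construction, not the algorithm of any one of the references above):

    import numpy as np
    from scipy.spatial import ConvexHull

    def c_obstacle(A, B, ref):
        # Forbidden positions of reference point ref of moving polygon A
        # due to obstacle B: the Minkowski sum of B with (ref - A).
        # For convex A and B, the hull of pairwise vertex sums is exact.
        negA = [ref - a for a in A]
        sums = [b + na for b in B for na in negA]
        hull = ConvexHull(np.array(sums))
        return np.array(sums)[hull.vertices]

    A   = [np.array(p) for p in [(0, 0), (2, 0), (1, 1)]]          # mover
    B   = [np.array(p) for p in [(5, 5), (7, 5), (7, 7), (5, 7)]]  # obstacle
    ref = np.array((0, 0))                             # distinguished point

    print(c_obstacle(A, B, ref))   # vertices of the forbidden region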

Compliant motions are designed to maintain contact among objects even in the presence of uncertainty in the locations of the objects; see [62] for a review. The basic idea is that the robot can only control its position along the tangent to a surface without violating the constraints imposed by the surface. In the direction normal to the surface, the robot can only control forces if it is to guarantee contact with the surface. The planning of compliant motions therefore requires models that enable one to deduce the directions which require force control and those that require position control. This planning is most complicated when the robot interacts with other mechanisms [60].

Compliant motions assume that the robot is already in contact with an object; guarded motions are used to achieve the initial contact with an object [104]. A guarded motion in the presence of uncertainty, however, does not allow the program to determine completely the relative position of the objects; several outcomes may be possible as a result of the motion (see Fig. 8). A strategy composed of compliant motions, guarded motions, and sensing must be synthesized to reliably achieve the specified goal. In particular, for the example in Fig. 8, the strategy must guarantee that the desired final state is achieved no matter which of the possible states actually is reached [14], [47], [52], [56], [94].

[Fig. 8. Ambiguous results of a guarded motion under uncertainty.]
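The tangent/normal division of labor is the heart of compliant motion. A minimal control-loop sketch in Python (the selection between force- and position-controlled directions follows the standard hybrid-control simplification found in textbooks, and the robot interface names are hypothetical):

    import numpy as np

    def compliant_step(robot, n, f_desired, x_desired, kf=0.002, kp=0.5):
        """One control cycle: force along the surface normal, position
        along the tangent plane. n is a unit vector pointing into the
        surface; sensed_force() is the force the robot applies along n."""
        N = np.outer(n, n)       # projects onto the normal direction
        T = np.eye(3) - N        # projects onto the tangent plane
        f_err = f_desired - n.dot(robot.sensed_force())
        x_err = robot.position() - x_desired
        # Too little contact force pushes the setpoint further into the
        # surface; tangential position error is servoed directly.
        correction = kf * f_err * n - kp * T.dot(x_err)
        robot.command_position(robot.position() + correction)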
tests, and computations needed to carry out a task, but many Most of the difficulty in doing motion synthesis stems from of the parameters needed to specify motions and tests remain the need to operate under uncertainty in the positions of the to be specified. The applicability of a particular skeleton to a objects and of the robot. These individual uncertainties can be task depends on the presence of certain features in the model modeledand their combined effect on positions computed. andthe values of parameterssuch as clearances and uncer- The requirements for successful completion of task steps can tainties.Choices among alternative strategies for singlea be used to choose the strategy for execution, e.g., an insertion operation are made by first computing the values of a set of with large clearance may be achieved by a positioning motion, parameters specific to the task, such as the magnitude of un- while one with little clearance might require a guarded motion certainty region for the peg in peg-in-hole insertion, and then usingthese parameters tochoose the “best,” e.g., fastest, tofind the surface followed by a compliant motion [ 141, strategy. Having chosen a strategy, the planner computes the [ 741. In general, the uncertainty in the position of objects maybe too large to guaranteethat some motion plan will additional parameters needed to specify the strategy motions, such as grasp positions and approach positions. A program is succeed. In these cases, noncontact sensing such as vision may be usedat run-time to reduce the uncertainty. The task produced by insertingthese parameters into the procedure plannermust decide when such information is likely to be skeleton that implements the chosen strategy. useful, given that the sensory information also will be subject Theapproach to strategy synthesis based on procedure skeletonsassumes that task geometry for common subtasks to error. This phase of taskplanning has beendubbed plan is predictable and can be divided into a manageablenumber checking; it is treated in detail in [ 141. of classes each requiring a different skeleton. This assumption Task planning, as describedabove, assumes that the actual is needed because the sequence of motions in the skeleton wiU state of the world will differ from the world model, but only only be consistent with a particular classof geometries. The withinknown bounds. This will not alwaysbe the case however;objects may be outside the bounds of estimated assumption does not seem to be true in general.As an example, uncertainty,objects may beof the wrong type,or objects considerthe tasks shown in Fig. 9. A programfor task A couldperhaps be used to accomplishtasks B and C,but it LOZANO-PEREZ:ROBOT PROGRAMMING a39

The approach to strategy synthesis based on procedure skeletons assumes that task geometry for common subtasks is predictable and can be divided into a manageable number of classes, each requiring a different skeleton. This assumption is needed because the sequence of motions in the skeleton will only be consistent with a particular class of geometries. The assumption does not seem to be true in general. As an example, consider the tasks shown in Fig. 9: a program for task A could perhaps be used to accomplish tasks B and C, but it would not carry over to every geometric variation of the insertion. The skeleton-based approach contrasts with an approach which derives the strategy directly from consideration of the task description [56]. In advanced systems, both types of approaches are likely to play a role.

[Fig. 9. Similar peg-in-hole tasks which require different strategies.]

The LAMA system was designed at MIT [52], [55] as a task-level language, but only partially implemented. LAMA formulated the relationship of task specification, obstacle avoidance, grasping, skeleton-based strategy synthesis, and error detection within one system. More recent work at MIT has explored issues in task planning in more detail, outside of the context of any particular system [13], [14], [53], [57], [60], [61].

AUTOPASS, at IBM [51], defined the syntax and semantics of a task-level language and an approach to its implementation. A subset of the most general operation, the PLACE statement, was implemented. The major part of the implementation effort focused on a method for planning collision-free paths for Cartesian robots among polyhedral obstacles [56], [100].

RAPT [77] is an implemented system for transforming symbolic specifications of geometric goals, together with a program which specifies the directions of the motions but not their lengths, into a sequence of end-effector positions. RAPT's emphasis has been primarily on task specification; it does not deal with obstacle avoidance, automatic grasping, or sensory operations.

Some robot-level language systems have proposed extensions to allow some task-level specifications. LM-GEO [47] is an implemented extension to LM [48] which incorporates symbolic specifications of destinations. The specification of ROBEX [99] includes the ability to automatically plan collision-free motions and to generate programs that use sensory information available during execution. A full-blown ROBEX, including these capabilities, has not been implemented.

The deficiencies of existing methods for geometric reasoning and sensory planning have prevented implementation of a complete task-level robot programming system. There has, however, been significant progress towards solving the basic problems in task planning; see [54] for a review.
Some robot-level language systems have proposed extensions to allow some task-level specifications. LM-GEO [47] is an implemented extension to LM [48] which incorporates symbolic specifications of destinations. The specification of ROBEX [99] includes the ability to plan collision-free motions automatically and to generate programs that use sensory information available during execution. A full-blown ROBEX, including these capabilities, has not been implemented.

The deficiencies of existing methods for geometric reasoning and sensory planning have so far prevented implementation of a complete task-level robot programming system. There has, however, been significant progress towards solving the basic problems in task planning; see [54] for a review.

V. DISCUSSION AND CONCLUSIONS

Existing robot programming systems have focused primarily on the specification of sequences of robot configurations. This is only a small aspect of robot programming, however. The central problem of robot programming is that of specifying robot operations so that they can operate reliably in the presence of uncertainty and error. This has long been recognized in research laboratories but, until very recently, has found little acceptance in industrial situations. Some key reasons for this difference in viewpoint are:

1) the lack of reliable and affordable sensors, especially sensors already integrated into the control and programming systems of a robot;
2) the tendency of existing techniques for sensory processing to be slow when compared to mechanical means of reducing uncertainty.

Both of these problems are receiving significant attention today. When they are effectively overcome, the need for good robot programming tools will be acute.

The main goal of this paper has been to assess the state of the art in robot programming against the requirements of sophisticated robot tasks. Our conclusion is that all of the existing robot systems fall short of the requirements we can identify today.

The crucial problem in the development of robot programming languages is our lack of understanding of the basic issues in robot programming. The question of what basic set of operations a robot system should support remains unanswered. Initially, the only operation available was joint motion. More recently, Cartesian motion, sensing, and, especially, compliance have been recognized as important capabilities for robot systems. In future systems, a whole range of additional operations and capabilities are to be expected:

1) Increasing integration of sensing and motion: More efficient and complete implementations of compliant motions are a key priority.
2) Complete object models as a source of data for sensor interfaces and trajectory planning: Existing partial models of objects are inadequate for most sensing tasks; they are also limited as a source of path constraints. Surface and volume models, together with appropriate computational tools, should also open the way for more natural and concise robot programs.
3) Versatile trajectory specifications: Current systems overspecify trajectories and ignore dynamic constraints on motion. Furthermore, they severely restrict the vocabulary of path shapes available to users. A mechanism such as functionally defined motion can make it easy to increase the repertoire of trajectories available to the user; a sketch of this idea follows the list.
4) Coordination of multiple parallel tasks: Current robot systems have almost completely ignored this problem, but increasing use of robots with more than six degrees of freedom, grippers with twelve or more degrees of freedom, multiple special-purpose robots with two or three degrees of freedom, and multiple sensors will make the need for coordination mechanisms severe.
5) The I/O, control, and synchronization capabilities of general-purpose computer programming languages: A key problem in the development of robot languages has been the reluctance, on the part of users and researchers alike, to accept that a robot programming language must be a sophisticated computer language. The evidence points to the conclusion that a robot language should be a superset of an established computer programming language, not a subset.
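To make item 3 concrete, here is an assumed minimal sketch of functionally defined motion: the trajectory is an arbitrary user-supplied function of time, sampled by a generic executor. The helix, the sampling rate, and the send_setpoint stub are all hypothetical, not the interface of any surveyed system.

```python
# Functionally defined motion: the path is any function of time,
# sampled and streamed to a (stubbed) joint-level controller.

import math

def helix(t):
    # One turn per second of a helix of radius 20 mm and pitch 5 mm/s:
    # a path shape most systems of this era could not express directly.
    return (20 * math.cos(2 * math.pi * t),
            20 * math.sin(2 * math.pi * t),
            5 * t)

def execute(path, duration, rate=100.0):
    # Sample the path function at `rate` Hz and stream setpoints.
    steps = int(duration * rate)
    for i in range(steps + 1):
        x, y, z = path(i / rate)
        send_setpoint(x, y, z)   # stub: robot-specific command

def send_setpoint(x, y, z):
    pass  # stand-in for the actual robot interface

execute(helix, duration=1.0)
```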
These developments should be matched with continuing efforts at raising the level of robot programming towards the task level. By automating many of the routine programming functions, we can simplify the programming process and thereby expand the range of applications available to robot systems.

One problem that has plagued robot programming research has been the significant "barriers to entry" to experimental research in robot programming. Because the robot control systems on available robots are designed to be stand-alone, every research group has had to reimplement a robot control system from the ground up; this is a difficult and expensive operation.
It is to be hoped that commercial robots of the future will be designed with a view towards interfacing to other computers, rather than as stand-alone systems. This should greatly stimulate the development of the sophisticated robot programming systems that we will surely need in the future.

ACKNOWLEDGMENT

Many of the ideas discussed in this paper have evolved over the years through discussions with many people, too numerous to mention. The author has benefited, especially, from extensive discussions with M. Mason and R. Taylor, and he thanks both of them for their time and their help. The initial motivation for this paper and many of the ideas expressed herein arose as a result of the "Workshop on Robot Programming Languages" held at MIT in January 1982, sponsored by ONR. The author is indebted to all the participants of the workshop. The following people read drafts and provided valuable comments: M. Brady, R. Brooks, S. Buckley, E. Grimson, J. Hollerbach, B. Horn, M. Mason, and R. Paul. The author also wishes to thank the two referees for their suggestions.

REFERENCES

[1] A. P. Ambler and R. J. Popplestone, "Inferring the positions of bodies from specified spatial relationships," Artificial Intell., vol. 6, no. 2, pp. 157-174, 1975.
[2] A. P. Ambler, R. J. Popplestone, and K. G. Kempf, "An experiment in the offline programming of robots," in Proc. 12th Int. Symp. on Industrial Robots (Paris, France, June 1982), pp. 491-502.
[3] ASEA, "Industrial robot system," ASEA AB, Sweden, Rep. YB 110-301 E.
[4] A. Baer, C. Eastman, and M. Henrion, "Geometric modelling: A survey," Computer Aided Des., vol. 11, no. 5, pp. 253-272, Sept. 1979.
[5] T. O. Binford, "The AL language for intelligent robots," in Proc. IRIA Sem. on Languages and Methods of Programming Industrial Robots (Rocquencourt, France, June 1979), pp. 73-87.
[6] R. Bolles and R. P. Paul, "The use of sensory feedback in a programmable assembly system," Artificial Intelligence Lab., Stanford Univ., Rep. AIM 220, Oct. 1973.
[7] S. Bonner and K. G. Shin, "A comparative study of robot languages," IEEE Computer, pp. 82-96, Dec. 1982.
[8] J. M. Brady, "Parts description and acquisition using vision," Proc. SPIE, May 1982.
[9] -, "Trajectory planning," in Robot Motion: Planning and Control, M. Brady et al., Eds. Cambridge, MA: MIT Press, 1983.
[10] I. Braid, "New directions in geometric modelling," presented at the CAM-I Workshop on Geometric Modeling, Arlington, TX, 1978.
[11] P. Brinch Hansen, "The programming language Concurrent Pascal," IEEE Trans. Software Eng., vol. SE-1, no. 2, pp. 199-207, June 1975.
[12] R. A. Brooks, "Symbolic reasoning among 3-D models and 2-D images," Artificial Intell., vol. 17, pp. 285-348, 1981.
[13] -, "Solving the find-path problem by representing free space as generalized cones," Artificial Intelligence Lab., MIT, AI Memo 674, May 1982.
[14] -, "Symbolic error analysis and robot planning," Int. J. Robotics Res., vol. 1, no. 4, 1982.
[15] R. A. Brooks and T. Lozano-Pérez, "A subdivision algorithm in configuration space for findpath with rotation," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, pp. 190-197, Mar./Apr. 1983.
[16] J. A. Darringer and M. W. Blasgen, "MAPLE: A high-level language for research in mechanical assembly," IBM T. J. Watson Res. Center, Tech. Rep. RC 5606, Sept. 1975.
[17] E. W. Dijkstra, "Co-operating sequential processes," in Programming Languages, F. Genuys, Ed. New York: Academic Press, 1968, pp. 43-112.
[18] H. A. Ernst, "A computer-controlled mechanical hand," Sc.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, 1961.
[19] R. C. Evans, D. G. Garnett, and D. D. Grossman, "Software system for a computer controlled manipulator," IBM T. J. Watson Res. Center, Tech. Rep. RC 6210, May 1976.
[20] D. Falek and M. Parent, "An evolutive language for an intelligent robot," Indust. Robot, pp. 168-171, Sept. 1980.
[21] I. D. Faux and M. J. Pratt, Computational Geometry for Design and Manufacture. Chichester, England: Ellis Horwood, 1979.
[22] J. Feldman et al., "The Stanford Hand-Eye Project," in Proc. First IJCAI (London, England, Sept. 1971), pp. 350-358.
[23] R. A. Finkel, "Constructing and debugging manipulator programs," Artificial Intelligence Lab., Stanford Univ., Rep. AIM 284, Aug. 1976.
[24] R. Finkel, R. Taylor, R. Bolles, R. Paul, and J. Feldman, "AL, a programming system for automation," Artificial Intelligence Lab., Stanford Univ., Rep. AIM 243, Nov. 1974.
[25] J. W. Franklin and G. J. Vanderbrug, "Programming vision and robotics systems with RAIL," SME Robots VI, pp. 392-406, Mar. 1982.
[26] General Electric, "GE Allegro documentation," General Electric Corp., 1982.
[27] C. C. Geschke, "A system for programming and controlling sensor-based manipulators," Coordinated Sci. Lab., Univ. of Illinois, Urbana, Rep. R-837, Dec. 1978.
[28] G. Gini, M. Gini, R. Gini, and D. Giuse, "Introducing software systems in industrial robots," in Proc. 9th Int. Symp. on Industrial Robots (Washington, DC, Mar. 1979), pp. 309-321.
[29] G. Gini, M. Gini, and M. Somalvico, "Deterministic and nondeterministic programming in robot systems," Cybernetics and Systems, vol. 12, pp. 345-362, 1981.
[30] G. J. Gleason and G. J. Agin, "A modular vision system for sensor-controlled manipulation and inspection," in Proc. 9th Int. Symp. on Industrial Robots (Washington, DC, Mar. 1979), pp. 57-70.
[31] T. Goto, K. Takeyasu, and T. Inoyama, "Control algorithm for precision insert operation robots," IEEE Trans. Syst., Man, Cybern., vol. SMC-10, no. 1, pp. 19-25, Jan. 1980.
[32] D. D. Grossman, "Programming a computer controlled manipulator by guiding through the motions," IBM T. J. Watson Res. Center, Res. Rep. RC 6393, 1977 (declassified 1981).
[33] D. D. Grossman and R. H. Taylor, "Interactive generation of object models with a manipulator," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, no. 9, pp. 667-679, Sept. 1978.
[34] H. Hanafusa and H. Asada, "Mechanics of gripping form by artificial fingers," Trans. Soc. Instrum. Contr. Eng., vol. 12, no. 5, pp. 536-542, 1976.
[35] -, "A robotic hand with elastic fingers and its application to assembly process," presented at the IFAC Symp. on Information and Control Problems in Manufacturing Technology, Tokyo, Japan, 1977.
[36] L. D. Harmon, "Automated tactile sensing," Int. J. Robotics Res., vol. 1, no. 2, pp. 3-32, Summer 1982.
[37] T. Hasegawa, "A new approach to teaching object descriptions for a manipulation environment," in Proc. 12th Int. Symp. on Industrial Robots (Paris, France, June 1982), pp. 87-97.
[38] W. B. Heginbotham, M. Dooner, and K. Case, "Robot application simulation," Indust. Robot, pp. 76-80, June 1979.
[39] C. A. R. Hoare, "Towards a theory of parallel programming," in Operating Systems Techniques. New York: Academic Press, 1972, pp. 61-71.
[40] -, "Communicating sequential processes," Commun. ACM, vol. 21, no. 8, pp. 666-677, Aug. 1978.
[41] H. R. Holt, "Robot decision making," Cincinnati Milacron Inc., Rep. MS77-751, 1977.
[42] J. D. Ichbiah, Ed., Reference Manual for the Ada Programming Language, U.S. Department of Defense, Advanced Research Projects Agency, 1980.
[43] H. Inoue, "Computer controlled bilateral manipulator," Bull. JSME, vol. 14, no. 69, pp. 199-207, 1971.
[44] -, "Force feedback in precise assembly tasks," Artificial Intelligence Lab., MIT, Rep. AIM-308, Aug. 1974.
[45] T. Ishida, "Force control in coordination of two arms," presented at the Fifth Int. Joint Conf. on Artificial Intelligence, Cambridge, MA, Aug. 1977.
[46] H. B. Kuntze and W. Schill, "Methods for collision avoidance in computer controlled industrial robots," in Proc. 12th Int. Symp. on Industrial Robots (Paris, France, June 1982), pp. 519-530.
[47] J. C. Latombe, "Equipe intelligence artificielle et robotique: Etat d'avancement des recherches," Laboratoire IMAG, Grenoble, France, Rep. RR 291, Feb. 1982.
[48] J. C. Latombe and E. Mazer, "LM: A high-level language for controlling assembly robots," presented at the Eleventh Int. Symp. on Industrial Robots, Tokyo, Japan, Oct. 1981.
[49] C. Laugier, "A program for automatic grasping of objects with a robot arm," presented at the Eleventh Int. Symp. on Industrial Robots, Tokyo, Japan, Oct. 1981.
[50] M. A. Lavin and L. I. Lieberman, "AML/V: An industrial machine vision programming system," Int. J. Robotics Res., vol. 1, no. 3, 1982.
[51] L. I. Lieberman and M. A. Wesley, "AUTOPASS: An automatic programming system for computer controlled mechanical assembly," IBM J. Res. Devel., vol. 21, no. 4, pp. 321-333, 1977.
[52] T. Lozano-Pérez, "The design of a mechanical assembly system," Artificial Intelligence Lab., MIT, AI Tech. Rep. TR 397, 1976.
[53] -, "Automatic planning of manipulator transfer movements," IEEE Trans. Syst., Man, Cybern., vol. SMC-11, no. 10, pp. 681-698, Oct. 1981.
[54] -, "Task planning," in Robot Motion: Planning and Control, M. Brady et al., Eds. Cambridge, MA: MIT Press, 1983.
[55] T. Lozano-Pérez and P. H. Winston, "LAMA: A language for automatic mechanical assembly," in Proc. 5th Int. Joint Conf. on Artificial Intelligence (Massachusetts Institute of Technology, Cambridge, MA, Aug. 1977), pp. 710-716.
[56] T. Lozano-Pérez and M. A. Wesley, "An algorithm for planning collision-free paths among polyhedral obstacles," Commun. ACM, vol. 22, no. 10, pp. 560-570, Oct. 1979.
[57] T. Lozano-Pérez, M. T. Mason, and R. H. Taylor, "Automatic synthesis of fine-motion strategies for robots," Artificial Intelligence Lab., MIT, July 1983.
[58] McDonnell Douglas, Inc., "Robotic system for aerospace batch manufacturing," McDonnell Douglas, Inc., Feb. 1980.
[59] G. Markowsky and M. A. Wesley, "Fleshing out wire frames," IBM J. Res. Devel., vol. 24, no. 5, Sept. 1980.
[60] M. T. Mason, "Compliance and force control for computer controlled manipulators," IEEE Trans. Syst., Man, Cybern., vol. SMC-11, no. 6, pp. 418-432, June 1981.
[61] -, "Manipulator grasping and pushing operations," Ph.D. dissertation, Dep. Elec. Eng. Comput. Sci., MIT, Cambridge, MA, 1982.
[62] -, "Compliance," in Robot Motion: Planning and Control, M. Brady et al., Eds. Cambridge, MA: MIT Press, 1983.
[63] D. Mathur, "The grasp planner," Dep. Artificial Intelligence, Univ. of Edinburgh, DAI Working Paper 1, 1974.
[64] E. Mazer, "LM-Geo: Geometric programming of assembly robots," Laboratoire IMAG, Grenoble, France, 1982.
[65] J. M. Meyer, "An emulation system for programmable sensory robots," IBM J. Res. Devel., vol. 25, no. 6, Nov. 1981.
[66] M. Minsky, "Manipulator design vignettes," MIT Artificial Intelligence Lab., Rep. 267, Oct. 1972.
[67] S. Mujtaba and R. Goldman, "AL user's manual," Stanford Artificial Intelligence Lab., Rep. AIM 323, Jan. 1979.
[68] E. Nakano, S. Ozaki, T. Ishida, and I. Kato, "Cooperational control of the anthropomorphous manipulator 'MELARM'," in Proc. 4th Int. Symp. on Industrial Robots (Tokyo, Japan, 1974), pp. 251-260.
[69] N. Nilsson, "A mobile automaton: An application of artificial intelligence techniques," in Proc. Int. Joint Conf. on Artificial Intelligence, 1969, pp. 509-520.
[70] -, Principles of Artificial Intelligence. Palo Alto, CA: Tioga, 1980.
[71] M. S. Ohwovoriole and B. Roth, "A theory of parts mating for assembly automation," presented at Ro.Man.Sy.-81, Warsaw, Poland, 1981.
[72] W. T. Park, "Minicomputer software organization for control of industrial robots," presented at the Joint Automatic Control Conf., San Francisco, CA, 1977.
[73] R. P. Paul, "Modeling, trajectory calculation, and servoing of a computer controlled arm," Artificial Intelligence Lab., Stanford Univ., Rep. AIM 177, Nov. 1972.
[74] -, "WAVE: A model-based language for manipulator control," Indust. Robot, Mar. 1977.
[75] -, Robot Manipulators: Mathematics, Programming, and Control. Cambridge, MA: MIT Press, 1981.
[76] R. P. Paul and B. Shimano, "Compliance and control," in Proc. 1976 Joint Automatic Control Conf., pp. 694-699, 1976.
[77] R. J. Popplestone, A. P. Ambler, and I. Bellos, "RAPT: A language for describing assemblies," Indust. Robot, vol. 5, no. 3, pp. 131-137, 1978.
[78] -, "An interpreter for a language for describing assemblies," Artificial Intell., vol. 14, no. 1, pp. 79-107, 1980.
[79] M. H. Raibert and J. J. Craig, "Hybrid position/force control of manipulators," ASME J. Dynamic Syst., Meas., Contr., vol. 102, pp. 126-133, June 1981.
[80] A. A. G. Requicha, "Representations for rigid solids: Theory, methods, and systems," Comput. Surveys, vol. 12, no. 4, pp. 437-464, Dec. 1980.
[81] C. F. Ruoff, "TEACH: A concurrent robot control language," in Proc. IEEE COMPSAC (Chicago, IL, Nov. 1979), pp. 442-445.
[82] -, "An advanced multitasking robot system," Indust. Robot, June 1980.
[83] J. K. Salisbury, "Active stiffness control of a manipulator in Cartesian coordinates," presented at the IEEE Conf. on Decision and Control, Albuquerque, NM, Nov. 1980.
[84] J. K. Salisbury and J. J. Craig, "Articulated hands: Force control and kinematic issues," Int. J. Robotics Res., vol. 1, no. 1, pp. 4-17, 1982.
[85] M. Salmon, "SIGLA: The Olivetti SIGMA robot programming language," presented at the Eighth Int. Symp. on Industrial Robots, Stuttgart, West Germany, June 1978.
[86] J. T. Schwartz and M. Sharir, "On the piano movers problem I: The case of a two-dimensional rigid polygonal body moving amidst polygonal barriers," Dep. Comput. Sci., Courant Inst. Math. Sci., NYU, Rep. 39, Oct. 1981.
[87] -, "On the piano movers problem II: General techniques for computing topological properties of real algebraic manifolds," Dep. Comput. Sci., Courant Inst. Math. Sci., NYU, Rep. 41, Feb. 1982.
[88] B. Shimano, "The kinematic design and force control of computer controlled manipulators," Artificial Intelligence Lab., Stanford Univ., Memo 313, Mar. 1978.
[89] -, "VAL: An industrial robot programming and control system," in Proc. IRIA Sem. on Languages and Methods of Programming Industrial Robots (Rocquencourt, France, June 1979), pp. 47-59.
[90] D. Silver, "The little robot system," MIT Artificial Intelligence Lab., Rep. AIM 273, Jan. 1973.
[91] B. I. Soroka, "Debugging robot programs with a simulator," presented at the SME CADCAM-8, Dearborn, MI, Nov. 1980.
[92] P. D. Summers and D. D. Grossman, "XPROBE: An experimental system for programming robots by example," IBM T. J. Watson Res. Center, Rep., 1982.
[93] K. Takase, R. P. Paul, and E. J. Berg, "A structured approach to robot programming and teaching," presented at the IEEE COMPSAC, Chicago, IL, Nov. 1979.
[94] R. H. Taylor, "The synthesis of manipulator control programs from task-level specifications," Ph.D. dissertation, Artificial Intelligence Lab., Stanford Univ., Rep. AIM-282, July 1976.
[95] -, "Planning and execution of straight-line manipulator trajectories," IBM J. Res. Devel., vol. 23, pp. 424-436, 1979.
[96] R. H. Taylor, P. D. Summers, and J. M. Meyer, "AML: A manufacturing language," Int. J. Robotics Res., vol. 1, no. 3, Fall 1982.
[97] S. M. Udupa, "Collision detection and avoidance in computer controlled manipulators," presented at the Fifth Int. Joint Conf. on Artificial Intelligence, MIT, 1977.
[98] Unimation Inc., "User's guide to VAL: A robot programming and control system," Unimation Inc., Danbury, CT, version 12, June 1980.
[99] M. Weck and D. Zühlke, "Fundamentals for the development of a high-level programming language for numerically controlled industrial robots," presented at AUTOFACT West, Dearborn, MI, 1981.
[100] M. A. Wesley et al., "A geometric modeling system for automated mechanical assembly," IBM J. Res. Devel., vol. 24, no. 1, pp. 64-74, Jan. 1980.
[101] D. E. Whitney, "Force feedback control of manipulator fine motions," J. Dynamic Syst., Meas., Contr., pp. 91-97, June 1977.
[102] -, "Quasi-static assembly of compliantly supported rigid parts," J. Dynamic Syst., Meas., Contr., vol. 104, no. 1, pp. 65-77, Mar. 1982.
[103] W. M. Wichman, "Use of optical feedback in the computer control of an arm," Artificial Intelligence Lab., Stanford Univ., Rep. AIM 55, Aug. 1967.
[104] P. M. Will and D. D. Grossman, "An experimental system for computer controlled mechanical assembly," IEEE Trans. Comput., vol. C-24, no. 9, pp. 879-888, 1975.
[105] M. Wingham, "Planning how to grasp objects in a cluttered environment," M.Ph. thesis, Edinburgh Univ., Edinburgh, Scotland, 1977.