Optical Guidance Method for Robots Capable of Vision and Communication
Igor Paromtchik, Member, IEEE
The Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
Fax: +81 48-467-7248, Email: [email protected]
(Regular Paper)

Abstract— The optical guidance of robots spans such research topics as image processing, communication and real-time control. The proposed method aims to improve the accuracy of guidance along a desired route in an environment that is unknown to the robot. The key idea is to indicate the coordinates of the target position by means of projecting a laser light onto the ground. In contrast with other guidance methods, which communicate the target position numerically, using optical commands avoids the need of coordinate transformation between the robot's coordinate system and that of the environmental model ("world" reference coordinates). The visual feedback and communication ensure that the robot accurately follows the route indicated by laser beacons, and self-localization becomes less relevant for guidance. The experimental results have proved the effectiveness of this method.

Index Terms— Mobile robots, optical guidance, laser.

I. INTRODUCTION

Guidance means to show the way while in motion, and localization is to confine within a particular area [1]. Guidance is passive if no localization of the guided objects is performed (e.g. a lighthouse guiding ships), and is active when it involves communication with the guided objects. This paper introduces an optical guidance method which represents an active guidance concept. The method has been developed to guide mobile and humanoid robots in an environment that is unknown to the robots.

The motivation of this work originates in our experiments on teleoperation of wheeled robots, where the robot pose (position and orientation) is obtained from dead-reckoning and is transmitted to the teleoperation system in order to update the robot pose in the environmental model. The human operator uses this model to remotely control the robot. However, the accumulation of positional and orientational errors caused by the wheels sliding on the ground and inaccurate modeling results in a discrepancy between the actual robot pose in the environment and its estimate in the model. An accumulation of this discrepancy over time makes the teleoperation impossible from some instant because of the danger of collision with the environment.

The proposed guidance method aims to eliminate this discrepancy from the process of guidance [2], [3]. The novelty of the work described in this paper is supported by two patents [2], [4]. The key idea of the method is to show the target position instead of commanding it numerically. This is achieved by means of projecting a light onto the ground, as sketched in Fig. 1 in section III. The optical guidance system operates with the environmental model and comprises a computer-controlled laser pointer which directs a laser beam onto desired positions. The guidance system communicates with the robot when indicating its target position and subsequently checking whether the robot has attained this position. The robot's vision system processes the color images in order to detect the laser beacon on the ground and evaluate its relative coordinates. The robot's controller drives the vehicle toward the beacon. The guidance system controls the orientation and lighting of the laser in order to indicate target positions – one at each instant along the planned route. When the robot reaches the proximity of the beacon, the guidance system shows the next target position, and the robot continuously follows the path. The visual feedback ensures that the robot accurately follows the indicated positions.

The main advantage of this method is the improved accuracy of guidance. It also allows implicit localization of the robot within the environment: when the robot has reached its indicated target position and has confirmed this to the guidance system, an adequate estimate of its coordinates in the environmental model is known. Since the control system of the robot operates with the relative coordinates of target positions obtained from image processing, the transformation between the coordinate system of the environmental model ("world" reference coordinates) and that of the robot, as well as self-localization, become less relevant for guidance.

The communication ability and the updating of the environmental model by the guidance system allow us to use this system as a mediator for cooperative multiple robots [5]. For instance, the sensor data gathered by the robots and stored in the environmental model is available to all robots in the fleet, i.e. cooperative knowledge acquisition and sharing can be achieved. The distribution of tasks and their allocation to the robots is performed with the use of the environmental model as a part of the guidance system. One robot can request the system to guide another robot to a specified destination.

This paper focuses on the optical guidance method and its feasibility shown on the example of laser guidance of a mobile robot. The paper is organized as follows. The related works on guidance of robots are discussed in section II. The concept of optical guidance and the kinematic models are presented in section III. The operation of the guidance system developed and the communication diagrams are considered in section IV. The path computation by the guidance system and motion generation by the robot are explained in section V. The implementation and experimental results are described in section VI. The conclusions are given in section VII. The guidance of cooperative multiple robots, mapping of the environment, tracking control and self-localization by the robot are intentionally left beyond the scope of this paper.

II. RELATED WORKS

The robot guidance involves various tasks such as: teleoperation and communication, environment modeling and motion planning, image processing and fusion of sensor data, real-time control for path tracking and navigation. This section deals with the methods which are closely related to optical guidance, while reviews of other navigation methods and sensors can be found in [6] and [7].

Simultaneous localization and mapping (SLAM) improves the performance of a mobile robot while navigating in an uncertain or unknown environment [8], [9], [10], [11]. The robot explores the environment, builds the map and, concurrently, navigates without external guidance. The target position is commanded explicitly (in most cases, numerically) in a reference coordinate system. SLAM serves to maintain the accurate transformation between the robot's coordinate system and the reference one. This is required in order to enable the robot to attain the given target position. In contrast with SLAM, the proposed optical guidance method communicates the target position implicitly (optically), which allows us to avoid the transformation between these coordinate systems and the need of self-localization by the robot.

Visual servoing makes use of image processing to control the robot relative to a landmark [12], [13]. The target position is specified by the target image captured from this position. The objective is to achieve convergence of the current image taken by the robot toward the target one by means of controlling the robot motion. As a result, the robot attains its target position. The optical guidance method differs from visual servoing because it produces the laser beacons which indicate a feasible route for the robot in the environment. Besides, the communication between the robot and the guidance system ensures that the next target position is commanded when the robot has reached its current target position or its proximity.

The use of a laser light to indicate an area of interest has various known applications. For instance, the use of a laser pointer to indicate objects for guidance of mobile robots is discussed in [14]. The human operator or a "manager" mobile robot directs the laser pointer onto an object of interest, and two "worker" mobile robots, each equipped with a color CCD camera, detect the laser spot. The relative coordinates of the laser spot are obtained from stereo vision. This method requires that the precise poses of the "worker" robots are known. However, the precise localization of mobile robots is not a trivial task [6].

The robot guidance by means of information support from sensor devices distributed in an "intelligent space" is proposed in [15]. The sensor devices equipped with processors watch humans and robots and guide them in a networked environment. Each mobile robot is identified by a color bar code stored in the database. The robot can be localized by one of the visual sensors by means of measuring the distance between the robot and the sensor. According to the desired path and the estimated pose, the control command is transmitted to the robot via a wireless LAN. Guidance based on localization and communication is achieved.

A system with a laser attached to one of two Canon VC-C1 communication cameras which are placed on a robot manipulator is described in [16]. The laser is precisely aligned with the optical axis of its camera and is centered over the top of the camera lens. This system makes it possible to measure the distance to the object being viewed. The reported accuracy of the distance calculation between the camera and the laser spot on the surface is roughly the nearest tenth of an inch.

A computer-servoed laser pointer projects the spot of light at user-specified locations on the surface of interest for camera-space manipulation in [17]. The light spots serve as common camera-space points which are necessary to establish the mapping between the cameras viewing the workspace of the manipulator. High precision of positioning and orienting the end effector of the manipulator is achieved when a method of camera-space manipulation is used.

Numerous publications address tracking of a reference path by a mobile robot. The reference path is specified by a guidance line or, for example, magnetic markers in the ground. The on-board displacement sensor measures the tracking error, and the controller issues the adequate steering commands [18]. In contrast, the proposed method focuses on guidance, i.e. providing a feasible reference path for the robot. Tracking the laser beacons in the optical guidance method relies on communication between the guidance system and the robot, which ensures that the next target position is indicated when the robot has reached the proximity of its current target position and has confirmed this to the guidance system. As a result, the guidance accuracy is improved. Besides, the method allows us to flexibly modify the reference path.

The advantage of the proposed optical guidance method is due to the following features:
- target positions are indicated precisely in the environment by means of computer-servoing the laser pointer or pointers situated in the environment;
- visual feedback provides better positioning accuracy of the robot;
- the path to follow is indicated as a sequence of target positions (laser beacons);
- accumulation of errors caused by dead-reckoning does not influence localization of the robot in the environment because the localization is performed when the robot has attained its indicated target position;
- communication between the guidance system and the robot allows knowledge acquisition and sharing in the case of cooperative multiple robots;
- one guidance system can indicate target positions for multiple robots.

These features are addressed in the subsequent sections, except in the case of guidance of cooperative multiple robots.

III. MODELS

The guidance system indicates target positions for the robot by means of a laser light projected onto the ground [3]. The approach is sketched in Fig. 1, where the overall system involves: the laser pointer, the teleoperation board, and the mobile robots. The guidance system comprises the teleoperation board and the laser pointer with its actuators – at least two degrees of freedom are required in order to direct the optical axis of the laser to any position on the ground. The coordinates of target positions are computed from the environmental model according to the motion task, or they are set by an operator. The guidance system relies on wireless communication with the control systems of the robots. The robot's vision system processes color images to detect the laser beacon on the ground and evaluate its relative coordinates. In order to avoid occlusion of the laser beacons, two or more laser pointers are used, e.g. in the opposite corners of the work area.

Fig. 1. A sketch of a laser guidance system.

Fig. 2. A laser pointer mounted onto the pan-tilt mechanism of a Canon VC-C1 communication camera.

A. Guidance system kinematics

Let the laser pointer be mounted onto a pan-tilt mechanism, and let $\Phi = (\psi_1, \psi_2)$ denote the rotation angles of this mechanism, where $\psi_1$ and $\psi_2$ are the pan and tilt angles respectively. Let the initial angles be $\psi_1 = \psi_2 = 0$. Let the "rotation point" of the pan-tilt mechanism be situated at a height $h$ and its ground coordinates be denoted $(x_o, y_o)$, as shown in Fig. 1. Let $p = (x, y)$ denote the position of the laser beacon on the ground.

Obtain a kinematic model of a guidance system where the laser pointer is mounted onto the pan-tilt mechanism of a Canon VC-C1 communication camera, shown in Fig. 2. Let the pan angle be counted relative to the vertical plane which involves the rotation point of the pan-tilt mechanism and the origin of the ground coordinate system. Let a constant offset of the pan angle be $\gamma = \arctan(y_o / x_o)$. Let $\delta$ denote an angle between the support base of the pan-tilt mechanism and a vertical line (the same angle $\delta$ is between the laser optical axis in the initial position and a vertical line). Let $r$ denote a distance between the rotation point and the laser optical axis.

A position of the laser beacon on the ground is

$x = x_o + \rho \cos(\psi_1 + \gamma)$, $y = y_o + \rho \sin(\psi_1 + \gamma)$,   (1)

where

$\rho = h \tan(\psi_2 + \delta) + \dfrac{r}{\cos(\psi_2 + \delta)}$,   (2)

$\rho = \sqrt{(x - x_o)^2 + (y - y_o)^2}$.   (3)

The pan angle is

$\psi_1 = \arctan\dfrac{y - y_o}{x - x_o} - \gamma$,   (4)

and the tilt angle is obtained as

$\psi_2 = \arctan\dfrac{\rho}{h} - \arcsin\dfrac{r}{\sqrt{\rho^2 + h^2}} - \delta$.   (5)

The equations (4) and (5) allow us to compute $\Phi = (\psi_1, \psi_2)$ in order to direct the laser beam onto the given positions on the ground. Note that if $\gamma = 0$, $\delta = 0$ and $r = 0$, the equations (4) and (5) describe the case shown in Fig. 1.
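To make the inverse relations (4) and (5) concrete, the following is a minimal sketch in C of how a guidance program might convert a target ground position into pan and tilt commands. The structure, function and variable names, as well as the numerical geometry values, are illustrative assumptions and are not taken from the paper's software.

```c
#include <math.h>
#include <stdio.h>

/* Geometry of the pan-tilt unit (illustrative values, not from the paper). */
typedef struct {
    double xo, yo;   /* ground coordinates of the rotation point [m]            */
    double h;        /* height of the rotation point above the ground [m]       */
    double r;        /* offset between rotation point and laser optical axis [m]*/
    double gamma;    /* constant pan offset of the mechanism [rad]              */
    double delta;    /* tilt of the support base from the vertical [rad]        */
} PanTiltGeometry;

/* Compute the pan angle psi1 and tilt angle psi2, equations (4)-(5),
   that point the laser beam at the ground position (x, y). */
static void laser_angles(const PanTiltGeometry *g, double x, double y,
                         double *psi1, double *psi2)
{
    double dx  = x - g->xo;
    double dy  = y - g->yo;
    double rho = sqrt(dx * dx + dy * dy);               /* equation (3) */

    *psi1 = atan2(dy, dx) - g->gamma;                    /* equation (4) */
    *psi2 = atan(rho / g->h)
          - asin(g->r / sqrt(rho * rho + g->h * g->h))
          - g->delta;                                    /* equation (5) */
}

int main(void)
{
    PanTiltGeometry g = { 0.0, 0.0, 2.0, 0.02, 0.0, 0.0 };
    double psi1, psi2;
    laser_angles(&g, 1.5, 0.8, &psi1, &psi2);
    printf("pan = %.3f rad, tilt = %.3f rad\n", psi1, psi2);
    return 0;
}
```

The two-argument atan2 is used instead of a plain arctangent so that all four quadrants of the ground plane are handled without extra case analysis.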

B. Mobile robot kinematics

Our mobile robot is shown in Fig. 3. The robot is equipped with four omni-directional wheels which allow it to move in two directions and rotate simultaneously. The kinematic model of the vehicle is depicted in Fig. 4. The coordinates of the robot relative to a reference coordinate system $\Sigma_w$ are denoted as $q = (x(t), y(t), \theta(t))$, where $x$ and $y$ are the coordinates of the intersection point C of the imaginary axes connecting the wheel centers of each pair of the parallel wheels, $\theta = \theta(t)$ is the orientation of the robot, and $t$ is time. The translation along each direction and the rotation with respect to the robot frame $\Sigma_r$ are decoupled and can be driven by the corresponding actuators [19].

The kinematic model of the robot is described by the following equations:

$\dot{x} = v_1 \cos\theta - v_2 \sin\theta$,
$\dot{y} = v_1 \sin\theta + v_2 \cos\theta$,   (6)
$\dot{\theta} = \omega$,

where $v_1$ and $v_2$ denote the robot velocities in the longitudinal and lateral directions respectively in the robot's coordinate system, $\omega$ is the rotational velocity, and $\theta$ is the orientation angle. Note that if the constraint $v_2 = 0$ is set in (6), this results in a non-holonomic model.

Fig. 3. The omni-directional mobile robot.

Fig. 4. A kinematic model of the omni-directional mobile robot.

The equation (6) allows us to evaluate the translational and rotational velocities of the robot and estimate its pose with respect to $\Sigma_w$. The model (6) provides the robot's coordinates in the "world" coordinate system. This model is valid for a vehicle moving on flat ground with pure rolling contact and without slippage between the wheels and the ground. However, these requirements cannot be fulfilled in practice, which results in a discrepancy between the actual robot pose in the environment and its estimation in the model. The proposed optical guidance aims to eliminate this discrepancy and improve the accuracy of guidance.
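As an illustration of how the model (6) is typically used for pose estimation, the sketch below integrates the kinematic equations with a simple Euler step. It is a generic dead-reckoning routine written for this paper's notation, under the assumption of constant velocities over each sampling period, and is not the robot's actual on-board code.

```c
#include <math.h>
#include <stdio.h>

/* Robot pose q = (x, y, theta) in the world frame Sigma_w. */
typedef struct {
    double x;      /* [m]   */
    double y;      /* [m]   */
    double theta;  /* [rad] */
} Pose;

/* One Euler integration step of the kinematic model (6).
   v1, v2 are the longitudinal and lateral velocities in the robot frame,
   omega is the rotational velocity, dt is the sampling period. */
static void integrate_pose(Pose *q, double v1, double v2, double omega, double dt)
{
    q->x     += (v1 * cos(q->theta) - v2 * sin(q->theta)) * dt;
    q->y     += (v1 * sin(q->theta) + v2 * cos(q->theta)) * dt;
    q->theta += omega * dt;
}

int main(void)
{
    Pose q = { 0.0, 0.0, 0.0 };
    /* Drive forward and sideways while turning, for 2 s at 10 ms steps. */
    for (int i = 0; i < 200; ++i)
        integrate_pose(&q, 0.3, 0.1, 0.2, 0.01);
    printf("x = %.3f m, y = %.3f m, theta = %.3f rad\n", q.x, q.y, q.theta);
    return 0;
}
```

Dead-reckoning of this kind accumulates exactly the slippage and modeling errors that the optical guidance is designed to tolerate.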

IV. EVENT-BASED GUIDANCE

Let a desired path $p(t) = (x(t), y(t))$, $t \in [t_1, t_N]$, be indicated as laser beacons separated by a distance step $\Delta s > 0$, as shown in Fig. 5, where the actual path of the robot is denoted $p_r(t) = (x_r(t), y_r(t))$. When the robot reaches the $\varepsilon$-proximity of its target position, a new target position is set. Tracking the desired path $p(t)$ is achieved if

$\| p(t) - p_r(t) \| \le \varepsilon$,   (7)

where $\varepsilon > 0$ is a given constant.

The position of the laser beacon is obtained from image processing. The relative coordinates of the beacon are the distances $d_1$ and $d_2$ in the longitudinal and lateral directions respectively, as seen by the robot camera. The orientation error is computed as $\theta_e = \arctan(d_2 / d_1)$ and is compensated in order to direct the vehicle toward the beacon. The control system of the robot generates trajectories in the $(x, y, \theta)$-coordinates and drives the vehicle toward the beacon until the new target position is received. The distance between the robot and its target position during the path tracking is bounded by $\Delta s + \varepsilon$, as seen in Fig. 5.

Fig. 5. The desired path p(t) is indicated by laser beacons at a distance step Δs, and p_r(t) is the actual path of the robot approaching the desired path either from the left or right side.

A. Communication for guidance

The diagram in Fig. 6 illustrates communication between the robot control system and the guidance system. Initially, the robot sends a target request to the guidance system. The request is accepted and the system orients the laser pointer appropriately in order to indicate a target position. The laser light is turned on, and the robot's control system is informed that the target position is being indicated. The image is captured by the robot's vision system, and the image processing starts. The laser light is turned off. The image processing results in either detection of the beacon or failure. Whatever happens, the robot's control system sends the corresponding information to the guidance system along with a target request. In the case of failure to detect the beacon, the guidance system performs a failure compensation procedure (e.g. the target is set closer to the robot, or an additional laser pointer is used to avoid occlusion). The shaded areas in Fig. 6 show the time period of this event-based guidance, from requesting a target position to obtaining the results of image processing.

Fig. 6. Communication diagram I.

Let $\tau_m > 0$ denote a motion time for a transition of the laser beacon from a position $P_k$ to a target position $P_{k+1}$, shown in Fig. 5. The period $\tau_m$ is a variable delay that also includes time for oscillations in the pan-tilt mechanism to abate. When the image is captured, let the subsequent processing time period equate to $\tau_p$, which is also variable. The periods $\tau_m$ and $\tau_p$ must be shortened because they represent the waiting periods of the robot and the guidance system, respectively. For instance, the laser pointer can be re-oriented without waiting for the results of image processing – the corresponding diagram is shown in Fig. 7. When the image processing terminates, the laser pointer is ready to show the next target position to the robot. Note that this operation requires the reliable detection of laser beacons.

Fig. 7. Communication diagram II.

The indication and detection of the target positions must be fast enough to ensure the continuous motion of the robot along the path, i.e. $\tau_m + \tau_p \le \tau_s$ (diagram I) or $\tau_m \le \tau_s$ and $\tau_p \le \tau_s$ (diagram II), where $\tau_s$ is a period corresponding to robot motion over a distance $\Delta s$. Also, the laser beacon should appear at such a look-ahead distance in front of the vehicle where it can be reliably detected.
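The event-based exchange of diagram I can be summarized in a short C sketch of the robot-side loop. The helper functions (send_target_request, wait_for_light_on, capture_image, detect_beacon, drive_toward, mission_completed) are hypothetical stand-ins for the robot's communication, vision and control modules; the sketch only shows the order of events, not the actual implementation.

```c
#include <stdbool.h>

/* Relative beacon coordinates in the robot frame (longitudinal d1, lateral d2). */
typedef struct { double d1, d2; bool found; } Beacon;

/* Hypothetical interfaces to the communication, vision and control modules. */
extern void   send_target_request(bool last_detection_ok);
extern bool   wait_for_light_on(void);            /* guidance system confirms beacon is lit */
extern void   capture_image(unsigned char *rgb);
extern Beacon detect_beacon(const unsigned char *rgb);
extern void   drive_toward(double d1, double d2); /* updates the (x, y, theta) trajectory   */
extern bool   mission_completed(void);

/* Robot-side guidance loop corresponding to communication diagram I. */
void guidance_loop(void)
{
    static unsigned char image[640 * 480 * 3];
    bool detection_ok = true;

    while (!mission_completed()) {
        send_target_request(detection_ok);     /* ask for the next laser beacon      */
        if (!wait_for_light_on())              /* guidance system lights the laser   */
            continue;
        capture_image(image);                  /* laser may be switched off afterwards */
        Beacon b = detect_beacon(image);       /* image processing (section VI-A)    */
        detection_ok = b.found;
        if (b.found)
            drive_toward(b.d1, b.d2);          /* motion generation (section V-B)    */
        /* On failure the guidance system applies its compensation procedure. */
    }
}
```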

V. PATH COMPUTATION

The guidance system operates with the environmental model and computes the path for the robot to follow. This path is then indicated to the robot by means of laser beacons. The robot's control system detects the beacon, evaluates its relative coordinates and issues the control commands for the servo-systems. Obtaining the path by the guidance system and motion generation by the robot's control system are discussed in this section.

A. Guidance path

A heuristic search is performed in the environmental model to find a coarse route to the destination. The search results in a set of subgoals $q_i = (x_i, y_i)$, $i = 1, 2, \ldots, N$, representing the planned route. The guidance path $p(t) = (x(t), y(t))$, $t \in [t_1, t_N]$, is obtained by means of a spline-interpolation [20]:

$p_i(t) = a_{i0} + a_{i1}(t - t_i) + a_{i2}(t - t_i)^2 + a_{i3}(t - t_i)^3$, $t \in [t_i, t_{i+1}]$,   (8)

where $A_i = (a_{i0}, a_{i1}, a_{i2}, a_{i3})$ is a matrix of coefficients, $t_1 < t_2 < \ldots < t_N$, and $\Delta t_i = t_{i+1} - t_i$. The initial and final conditions are set on the velocities at the first and last subgoals, $v_1 = 0$ and $v_N = 0$, and an auxiliary point is introduced in order to evaluate the velocities assigned to the interior subgoals. The coefficients of (8) are:

$a_{i0} = q_i$,   (9)

$a_{i1} = v_i$,   (10)

$a_{i2} = \dfrac{3 (q_{i+1} - q_i)}{\Delta t_i^2} - \dfrac{2 v_i + v_{i+1}}{\Delta t_i}$,   (11)

$a_{i3} = \dfrac{2 (q_i - q_{i+1})}{\Delta t_i^3} + \dfrac{v_i + v_{i+1}}{\Delta t_i^2}$,   (12)

where $i = 1, 2, \ldots, N-1$ and $v_i$ denotes the velocity assigned to the subgoal $q_i$. The coefficients (9)–(12) provide the continuity of $p(t)$ and $\dot{p}(t)$ for $t \in [t_1, t_N]$. The guidance system indicates target positions along the path $p(t)$ at a distance step $\Delta s$ which is set according to a desired precision.
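To illustrate one concrete way the segment coefficients (9)–(12) can be computed and evaluated, here is a small C sketch of a single cubic segment between two subgoals. It follows the Hermite-type reconstruction given above and is an illustration only, not the guidance system's actual Java implementation.

```c
#include <stdio.h>

/* Coefficients a0..a3 of one cubic segment (8) for a single coordinate. */
typedef struct { double a0, a1, a2, a3; } CubicSegment;

/* Build the segment between subgoals qi and qn (= q_{i+1}) with assigned
   velocities vi and vn over the time interval dt, equations (9)-(12). */
static CubicSegment make_segment(double qi, double qn, double vi, double vn, double dt)
{
    CubicSegment s;
    s.a0 = qi;                                                        /* (9)  */
    s.a1 = vi;                                                        /* (10) */
    s.a2 = 3.0 * (qn - qi) / (dt * dt) - (2.0 * vi + vn) / dt;        /* (11) */
    s.a3 = 2.0 * (qi - qn) / (dt * dt * dt) + (vi + vn) / (dt * dt);  /* (12) */
    return s;
}

/* Evaluate the segment (8) at local time u in [0, dt]. */
static double eval_segment(const CubicSegment *s, double u)
{
    return s->a0 + u * (s->a1 + u * (s->a2 + u * s->a3));
}

int main(void)
{
    /* One x-coordinate segment from 0 m to 1 m in 4 s, starting and ending at rest. */
    CubicSegment s = make_segment(0.0, 1.0, 0.0, 0.0, 4.0);
    for (double u = 0.0; u <= 4.0; u += 1.0)
        printf("x(%.0f) = %.3f\n", u, eval_segment(&s, u));
    return 0;
}
```

Chaining such segments over all subgoals reproduces a path that passes through every $q_i$ with continuous position and velocity, which is the property stated for (9)–(12).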

B. Motion generation

The motion generation aims to obtain the smooth functions $x_r(t)$, $y_r(t)$ and $\theta_r(t)$ in order to drive the vehicle from its current position $p_r = (x_r, y_r)$ to a target position $p_T = (x_T, y_T)$ where the laser beacon is detected. The cubic interpolation is used, and the smooth trajectories ensure minimal accelerations and robot slippage. The motion generation is performed in the coordinate system of the robot, while the guidance path is computed in the "world" reference coordinates of the environmental model.

Consider a polynomial

$x(t) = b_0 + b_1 t + b_2 t^2 + b_3 t^3$, $t \in [0, T]$,   (13)

with the boundary conditions: $x(0) = x_r$, $\dot{x}(0) = \dot{x}_r$, $x(T) = x_v$, $\dot{x}(T) = v$, where $T > 0$ is the interpolation period and $B = (b_0, b_1, b_2, b_3)$ is a matrix of coefficients. Let $x_v$ be an auxiliary (virtual) point between $x_r$ and $x_T$ such that

$|x_v - x_r| \le \Delta x$,   (14)

where $\Delta x > 0$ is a given constant.

Let the coefficients $B$ be recomputed with a sampling period $\delta t$ such that $\delta t \ll T$, while $T$ and $\Delta x$ are kept constant. Note that the recomputation of $B$ with the sampling period $\delta t$ interrupts the interpolation that was intended for a period $T$. Let the appropriations $x_r := x(k\,\delta t)$, $\dot{x}_r := \dot{x}(k\,\delta t)$, $k = 1, 2, \ldots$, ensure the continuity of $x(t)$ and $\dot{x}(t)$ at the boundary between the subsequent computations $k$ and $k+1$. Let the recomputation of $B$ be performed until the robot reaches the proximity of $x_T$:

$|x_T - x(k\,\delta t)| \le \varepsilon_x$,   (15)

where $\varepsilon_x > 0$ is a given constant, $\varepsilon_x \le \Delta s$. The orientation $\theta$ is computed according to an equation similar to (13), with conditions which are similar to (14) and (15).

Fig. 8 illustrates this motion generation for the $x$-coordinate. Each $k$-th interpolation step is performed toward a virtual point situated at a distance $\Delta x$ from the actual position. The virtual point is shifted with the sampling period $\delta t$ in the direction of $x_T$, and the coefficients $B$ are recomputed [21], [4].

Fig. 8. A virtual point at interpolation steps k, k+1, ...

One can prove that for a given $\varepsilon > 0$ the following condition holds for $k \to \infty$:

$|x_T - x(k\,\delta t)| \le \varepsilon$,   (16)

where $\varepsilon > 0$ is a small constant, and

$|\ddot{x}(t)| \le a_{max}$,   (17)

where $a_{max} > 0$ is a given constant. The condition (16) holds because the modified spline-interpolation is used: the interpolation planned for a period $T$ is interrupted at the sampling period $\delta t \ll T$ while maintaining $\Delta x$ constant and recomputing the coefficients $B$.

Let us prove (16) and (17) for the one-dimensional case of the $x$-coordinate. When the coefficients $B$ are recomputed with the sampling period $\delta t$, one can derive from (13) for $t = \delta t$, omitting the index $r$:

$x((k+1)\,\delta t) = x(k\,\delta t) + \dot{x}(k\,\delta t)\,\delta t + b_2\,\delta t^2 + b_3\,\delta t^3$,   (18)

where $b_2$ and $b_3$ are recomputed at the $k$-th step from the boundary conditions of (13), and the notation $\alpha = \delta t / T$, $\alpha < 1$, is used. Because $\alpha < 1$, the recursion (18) converges, and the limit value of $x(k\,\delta t)$ in equation (18) satisfies [22]:

$\lim_{k \to \infty} x(k\,\delta t) \approx x_v - v\,T$ for $\delta t \ll T$.   (19)

Taking into account that $x_v$ is an auxiliary constant, one can choose $x_v = x_T$ and rewrite (19) as

$\lim_{k \to \infty} x(k\,\delta t) \approx x_T - v\,T$.   (20)

If $v$ is computed as $v = (x_T - x(k\,\delta t)) / T$, equation (20) becomes:

$\lim_{k \to \infty} x(k\,\delta t) = x_T$.   (21)

The obtained property (21) proves the convergence of $x(k\,\delta t)$ to $x_T$ as in (16) and allows us to set the first derivative while interpolating by means of a cubic polynomial. Note that this property is not apparent for the cubic interpolation, but is a result of our interpolation method [4]. This property is used to set the velocity of the robot, as described in section VI-B.

The second derivative is obtained as

$\ddot{x}((k+1)\,\delta t) = 2 b_2 + 6 b_3\,\delta t$.   (22)

The value of $\ddot{x}$ depends on the deviation between $x(k\,\delta t)$ and $x_v$. Taking into account (16), one can conclude that this deviation has its extremal value at the moment when a new actual position is given, i.e. in the beginning of the interpolation step. According to (22), for given $\delta t$ and $T$, there can always be found such $\Delta x$ that the condition (17) is satisfied. One should note that, because $\delta t \ll T$, the equation (22) can be approximated:

$\ddot{x}((k+1)\,\delta t) \approx \dfrac{6 (x_v - x(k\,\delta t))}{T^2} - \dfrac{4 \dot{x}(k\,\delta t) + 2 v}{T}$.   (23)

The features of this motion generation are summarized as follows: $x(t)$ and $\dot{x}(t)$ are continuous; $\dot{x}(t)$ tends to a given value and has a trapezoidal profile, while the convergence speed depends on $\Delta x$; $\ddot{x}(t)$ is limited and tends to zero while $\dot{x}(t)$ tends to a given value.

f 

One can prove that for a given  the following condition ¦ © tends to a given value.

holds for © :

£©  ©

f f



/

6 I

 

   (16)

b VI. EXPERIMENTS

 ©

where  is a small constant, and The overall architecture of our experimental mobile robot is

¨

£¦`   © ©

¨ shown in Fig. 9 and it involves: the vehicle with its actuators

0

   (17)

b and sensors, the on-board control system, and the remote

©

 

¨ 0 where © is a given constant. The condition (16) control interface. The reactive control ensures an autonomous

holds because the modified spline-interpolation is used: the operation of the robot in a dynamic environment. The collision

interpolation planned for a period is interrupted at a sam- avoidance algorithm operates with a rule matrix obtained from





*

/

¤   pling period while maintaining  constant and an adaptive behavior acquisition scheme which is based on

recomputing the coefficients . reinforcement learning. The reactive control processes data



 

Let us prove (16) and (17) for one-dimensional case of - gathered by the ultrasonic and infrared sensors as well as the

 ¡ 

coordinate. When the coefficients 0 are recomputed color CCD camera. Based on the sensors configuration, eight

with the sampling period / , one can derive from (13) for possible directions of motion are considered [23]. The reactive

 £0¦ 

T

b / f omitting the index :

 control algorithm ensures a collision-free motion to a given

! §

¤ target position.

 

 £0¦ S¡  £ § Q©

8 8

¤ 

/ The robot controller provides coordinated translation and

4 4 f

f (18) §

 rotation of the omni-directional vehicle when moving to a tar-

¡ § ¡P£ §  i £  

M

 

8 get position. The controller also ensures attaining the desired



6 6 6 6

4 ......

where , f , and



0

¡ ¡ §

i velocity by the robot, as explained in section V-B. The omni-

*

 

the following notations are used: and . .

§ H£.¦ 

 directional motion (holonomic case) or a constrained motion

I I

4 / Because , the limit value of f in equation (18) (non-holonomic case) can be performed according to the

is [22]:

B  £.¦ 9¡

assigned task. For instance, vision-based tracking of a dynamic

/

§ O f

 (19)

¤ 6

4 object requires the object to be kept in the view field of the on- 

 board CCD camera and is achieved by the coordinated control

Taking into account that f is an auxiliary constant, one can

 ¡

 of the translational and rotational coordinates of the robot.

choose f and rewrite (19) as

P£ § 

 The performance monitoring aims to increase the reliability

B  £0¦ S¡ i

6

. /

O and safety of the robot operation. The monitoring of measur-

 f

 (20)

¤

 6 .

! able signals allows fault detection and diagnosis in a closed-



i i ¡   ©

§!

 "! 

If is computed as f equation (20) becomes: loop during the operation, e.g. measuring the capacity level

B R£.¦ S¡ 

of the electric batteries of the robot and, if needed, requesting



/

O

f f

 (21)

¤ the control system to interrupt execution of the on-going task 7

Laser Pointer Y c' CCD Plane

Xc'

Teleoperation Camera Lens Ethernet P' Yc Camera Plane

Xc

Reactive Coordination Interpolation Servo-control control of drives Xg Ground Plane

P Zc ISA bus Yg Real-Time Control US/IR data Visual data Performance Hardware processing processing monitoring interface

0 MAX Fig. 10. A sketch of the experimental setup. D DRV A Actuators and Sensors

Fig. 9. The overall architecture of our mobile robot. and direct the robot to a location where the electric batteries can be replaced. Fig. 11. A camera snapshot of the ground: a laser beacon (in the center and The robot is equipped with a CCD color camera (focal zoomed above), a 5 Yen coin (left) and a 30 cm ruler (right).

The robot is equipped with a CCD color camera (focal length 7.5 mm) which is in an inclined position relative to the ground. A laser pointer (wavelength 635 nm, power output 2 mW, class I) is mounted on top of a Canon VC-C1 communication camera, as shown in Fig. 2. The laser pointer provides a red light with a spot of 1 cm in diameter at a distance of 1 m. The Canon communication camera is equipped with a pan-tilt mechanism with two stepping motors. The robot's software is developed in the C language and runs under the VxWorks operating system. The software of the guidance and teleoperation system is written in the JAVA language and runs under the Linux operating system.

A. Localization of a laser beacon

The detection of the laser beacon involves: edge detection (differencing and search for pixels where the maximal change of intensity of a red color occurs); image segmentation (global thresholding to evaluate the neighboring pixels relative to a given color threshold and estimate the size of the detected areas); selection of pixels according to a given range of chromaticity values [24].

Our experimental setup is sketched in Fig. 10. In this figure, P denotes a position of the laser beacon, which is seen in the image as a bright spot of the size of a few pixels, as illustrated in Fig. 11. The laser spot on the ground has a blurred contour and the shape of the beacon is elliptical, as shown in Fig. 12, where the grid represents pixels. The central part of the beacon in the image is white-colored, which shows saturation of the camera. The pixels indicated by bold contours in Fig. 12 illustrate the horizontal and vertical differencing applied in order to find the pixels where the intensity change of a red color is maximal.

Fig. 11. A camera snapshot of the ground: a laser beacon (in the center and zoomed above), a 5 Yen coin (left) and a 30 cm ruler (right).

Fig. 12. A laser beacon (zoomed part of the image).

The edge detection aims to obtain the pixel coordinates of the horizontal and vertical edges of the laser beacon. Each pixel represents the intensity of red, green and blue colors. The distribution of a red color in the image is given by a matrix $R = \{r_{ij}\}$, $i = 1, \ldots, n$, $j = 1, \ldots, m$, where $r_{ij}$ is the intensity of a red color at a pixel $(i, j)$. The edge detection is performed by means of differencing of adjacent pixels of the columns and rows of the matrix $R$. The maximal value of these differences gives the coordinates $(i^*, j^*)$ of the pixel where the intensity change of a red color is maximal.

If the obtained intensity change exceeds a given threshold, the neighboring pixels of the pixel $(i^*, j^*)$ are evaluated. An area is selected around the pixel $(i^*, j^*)$ such that the red intensities of its pixels remain within a given threshold of the detected maximum. The coordinates $(i, j)$ of the selected pixels satisfy $i_{min} \le i \le i_{max}$ and $j_{min} \le j \le j_{max}$, and the coordinates of the central pixel of the selected area are $i_c = (i_{min} + i_{max})/2$ and $j_c = (j_{min} + j_{max})/2$. The size in pixels of the selected area must correspond to the known average size of the laser beacon in the image. The RGB-values of the pixels are transformed into the CIE chromaticity values in order to check whether they correspond to the chromaticity values of the laser. Finally, the coordinates of the central pixel in the saturated area are obtained from the edge pixels, and the chromaticity values of the central pixel are computed as an average of the corresponding chromaticity values of the edge pixels.

Note that the differencing is performed on one image at a time rather than differencing of two sequential images (one captured when the laser light is turned on and another, auxiliary, image when the light is turned off). Albeit processing two images would simplify detection of the laser beacon, this introduces a delay related to communication with the guidance system and processing the auxiliary image. Since our objective is to detect the laser beacon without stopping the robot motion, the above procedure is used.

In order to transform the coordinates from pixels into meters, the camera model makes use of a plane-to-plane homography [25]. The accuracy of localizing the laser beacons on the ground relative to the robot's camera is illustrated in Fig. 13, where the ground coordinates of the camera are at the origin of the coordinate system and the camera is inclined, as shown in Fig. 10. The positions where the laser light is projected are shown as points, and the plus signs show the coordinates estimated by image processing. The accuracy is lower at the borders of the camera view field (upper and lower points along the abscissae axis in Fig. 13), while the accuracy along the optical axis of the camera (central points along the abscissae axis) is sufficient for tracking the beacons: the maximal dispersion was less than 10 cm, and the average dispersion was about 3 cm. Note that the detection of a laser beacon depends on the lighting conditions, the surface material, the laser power output and the camera sensitivity. The localization accuracy is influenced by quantization errors due to the small size of the beacon, camera displacement from the calibrated setting and camera constraints [26].

Fig. 13. Localization of projected laser beacons (the camera is at the origin in an inclined position and looks along the abscissae axis).
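A compact C sketch of the single-image detection procedure described above is given below. The image size, thresholds, demo data and function names are illustrative assumptions, and the chromaticity check and homography conversion are omitted for brevity.

```c
#include <stdio.h>
#include <stdlib.h>

#define W 640   /* image width  (illustrative) */
#define H 480   /* image height (illustrative) */

/* Find the pixel (i*, j*) with the maximal red-intensity change between
   adjacent pixels along the columns and rows of the red channel r[H][W]. */
static void max_red_gradient(unsigned char r[H][W], int *pi, int *pj, int *pd)
{
    int best = -1;
    *pi = *pj = 0;
    for (int i = 0; i + 1 < H; ++i)
        for (int j = 0; j + 1 < W; ++j) {
            int dc = abs((int)r[i + 1][j] - (int)r[i][j]);  /* column differencing */
            int dr = abs((int)r[i][j + 1] - (int)r[i][j]);  /* row differencing    */
            int d  = (dc > dr) ? dc : dr;
            if (d > best) { best = d; *pi = i; *pj = j; }
        }
    *pd = best;
}

/* Select a small area around (i*, j*) whose red intensities stay within a
   threshold of the local peak, and return its central pixel (ic, jc). */
static void beacon_center(unsigned char r[H][W], int i0, int j0,
                          int thr, int *ic, int *jc)
{
    int peak = 0;
    for (int i = i0 - 1; i <= i0 + 1; ++i)
        for (int j = j0 - 1; j <= j0 + 1; ++j)
            if (i >= 0 && j >= 0 && i < H && j < W && r[i][j] > peak)
                peak = r[i][j];

    int imin = i0, imax = i0, jmin = j0, jmax = j0;
    for (int i = i0 - 8; i <= i0 + 8; ++i)
        for (int j = j0 - 8; j <= j0 + 8; ++j) {
            if (i < 0 || j < 0 || i >= H || j >= W) continue;
            if (peak - (int)r[i][j] <= thr) {
                if (i < imin) imin = i;
                if (i > imax) imax = i;
                if (j < jmin) jmin = j;
                if (j > jmax) jmax = j;
            }
        }
    *ic = (imin + imax) / 2;   /* central pixel of the selected area */
    *jc = (jmin + jmax) / 2;
}

int main(void)
{
    static unsigned char red[H][W];     /* red channel of a captured image      */
    for (int a = 239; a <= 241; ++a)    /* synthetic 3x3 bright spot for the demo */
        for (int b = 319; b <= 321; ++b)
            red[a][b] = 255;

    int i, j, d, ic, jc;
    max_red_gradient(red, &i, &j, &d);
    if (d > 50) {                       /* illustrative intensity-change threshold */
        beacon_center(red, i, j, 30, &ic, &jc);
        printf("beacon near pixel (%d, %d)\n", ic, jc);
    }
    return 0;
}
```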

B. Motion generation

The motion generation by the robot's control system is illustrated by Fig. 14, with the desired maximum translational and rotational velocities set in the controller. Firstly, the robot moves from a start position toward an intermediate position. Then, after a period of 5 s, a new target position is set ("end position" in Fig. 14). The robot abandons its intermediate position, changes its orientation and moves toward the end position. The corresponding velocity profiles $v_1$, $v_2$ and $\omega$ are near trapezoidal, as shown in Fig. 15, and the desired velocities are maintained. This experiment shows that the motion generation developed is suitable to control the robot in a dynamic environment, in particular, to track a moving object.

Fig. 14. Example of the robot motion on the (x, y)-plane.

Fig. 15. Velocity profiles: 1 – v1 in m/s, 2 – v2 in m/s and 3 – ω in rad/s.

C. Discrepancy estimation

Obtain the discrepancy between the target coordinates received from the guidance system explicitly (numerically) and those computed during the optical guidance by means of image processing and localization of the laser beacon. Since the laser system can be set precisely in the environment, and localizing the laser beacon is accurate, the obtained discrepancy reflects the accumulated positional and orientational errors from various sources such as: wheels sliding on the ground, inaccurate modeling, and the system's dynamics.

Initially, the coordinate systems of the robot and that of the guidance system ("world" reference coordinates) coincide. In order to obtain the discrepancy, each target position is communicated to the robot in two ways concurrently: (1) numerically, i.e. the (x, y)-coordinates of the target position are transmitted to the robot via wireless communication, and (2) optically, i.e. the same target position is shown by means of a laser beacon. The robot follows the laser beacons according to the proposed method. The coordinates computed from image processing and those received explicitly from the guidance system are memorized.

A segment of the motion after the robot has traveled some distance from the calibrated pose is shown in Fig. 16. The accumulated discrepancy is represented by the vertical dashed lines and it varies during the motion. The result shown in Fig. 16 illustrates the advantage of the proposed method: the accumulated discrepancy does not affect the accuracy of guidance because the target position is shown to the robot by means of the laser beacon instead of commanding it numerically.

Fig. 16. Example of optical guidance: a segment of the motion after the robot has traveled some distance from the calibrated pose.

VII. CONCLUSION

The novel optical guidance method for robots capable of vision and communication was proposed to improve the accuracy of guidance. The key idea of this method is to interactively indicate the feasible reference path by means of laser beacons. Optical guidance was considered as an event-driven process based on image processing and communication. The modified spline-interpolation provided the feasible reference path and smooth motion trajectories. The method was illustrated in practice using a laser to guide an omni-directional mobile robot in an environment that was unknown to the robot. The architecture and kinematic models of the laser guidance system and the robot were described. The image processing for detecting and localizing the laser beacon was discussed. The experimental results obtained have shown the effectiveness of this guidance method in improving the accuracy of robot motion. A video of our experiments on optical guidance is available at http://celultra.riken.jp/~paromt/dev.html.

ACKNOWLEDGEMENT

Thanks are given to H. Asama (The University of Tokyo), T. Suzuki (Tokyo Denki University) and H. Kaetsu (RIKEN Institute) for their encouragement during this work.

REFERENCES

[1] A. Hornby, Oxford Advanced Learner's Dictionary of Current English. Oxford University Press, 1987.
[2] I. Paromtchik and H. Asama, "Method and system of optical guidance of mobile body," U.S. Patent 6,629,028, 2003.
[3] I. Paromtchik and H. Asama, "Optical guidance system for multiple mobile robots," in Proc. of the IEEE Int. Conf. on Robotics and Automation, Seoul, Korea, May 2001, pp. 2935–2940.
[4] I. Paromtchik, H. Asama, and I. Endo, "Moving route control method by spline interpolation," Japan Patent 3 394 472, 2003.
[5] Y. U. Cao, A. S. Fukunaga, and A. B. Kahng, "Cooperative mobile robotics: Antecedents and directions," Autonomous Robots, no. 4, pp. 7–27, 1997.
[6] J. Borenstein, H. Everett, and L. Feng, Navigating Mobile Robots: Systems and Techniques. Wellesley, MA, USA: A K Peters, 1996.
[7] M. Hebert, "Active and passive range sensing for robotics," in Proc. of the IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, USA, April 2000, pp. 102–110.
[8] R. Smith, M. Self, and P. Cheeseman, "Estimating uncertain spatial relationships in robotics," in Autonomous Robot Vehicles, I. Cox and G. Wilfong, Eds. Springer Verlag, 1990, pp. 167–193.
[9] J. Castellanos and J. Tardós, Mobile Robot Localization and Map Building: A Multisensor Fusion Approach. Kluwer Academic Publishers, Boston, 2000.
[10] S. Thrun, "A probabilistic on-line mapping algorithm for teams of mobile robots," Int. Journal of Robotics Research, vol. 20, no. 5, pp. 335–363, 2001.
[11] H. Choset and K. Nagatani, "Topological simultaneous localization and mapping (SLAM): Toward exact localization without explicit localization," IEEE Transactions on Robotics and Automation, vol. 17, no. 2, pp. 125–137, 2001.
[12] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
[13] P. Corke, Visual Control of Robots: High-Performance Visual Servoing. Research Studies Press Ltd., 1997.
[14] A. Nakamura, T. Arai, T. Hiroki, and J. Ota, "Development of a multiple mobile robot system controlled by a human – realisation of object command level," in Distributed Autonomous Robotic Systems 4, 2000, pp. 208–218.
[15] J.-H. Lee, N. Ando, T. Yakushi, K. Nakajima, T. Kagoshima, and H. Hashimoto, "Applying intelligent space to warehouse – the first step of intelligent space project," in Proc. of the IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, Como, Italy, July 2001, pp. 290–295.
[16] J. Redd, C. Borst, R. Volz, and L. Everett, "Remote inspection system for hazardous sites," Report ANRCP-1999-18, National Resource Center for Plutonium, Amarillo, TX, USA, April 1999.
[17] M. Seelinger, E. González-Galván, M. Robinson, and S. Skaar, "Towards a robotic plasma spraying operation using vision," IEEE Robotics and Automation Magazine, no. 12, pp. 33–49, 1998.
[18] J. Ackermann, J. Guldner, W. Sienel, R. Steinhauser, and V. Utkin, "Linear and nonlinear controller design for robust automatic steering," IEEE Transactions on Control Systems Technology, vol. 3, no. 4, pp. 132–142, 1995.
[19] H. Asama, M. Sato, L. Bogoni, H. Kaetsu, A. Matsumoto, and I. Endo, "Development of an omni-directional mobile robot with 3 DOF decoupling drive mechanism," in Proc. of the IEEE Int. Conf. on Robotics and Automation, Nagoya, Japan, May 1995, pp. 1925–1930.
[20] I. Paromtchik and U. Rembold, "A practical approach to motion generation and control for an omnidirectional mobile robot," in Proc. of the IEEE Int. Conf. on Robotics and Automation, San Diego, CA, USA, May 1994, pp. 2790–2795.
[21] I. Paromtchik and H. Asama, "A motion generation approach for an omnidirectional mobile robot," in Proc. of the IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, USA, April 2000, pp. 1213–1218.
[22] I. Bronshtein and K. Semendyayev, Handbook of Mathematics. Verlag Harri Deutsch, Van Nostrand Reinhold Co., Edition Leipzig, 1985.
[23] Y. Arai, T. Fujii, H. Asama, Y. Kataoka, H. Kaetsu, A. Matsumoto, and I. Endo, "Adaptive behaviour acquisition of collision avoidance among multiple autonomous mobile robots," in Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Grenoble, France, September 1997, pp. 1762–1767.
[24] D. Marr, Vision. W. H. Freeman and Co., New York, 1982.
[25] J. Semple and G. Kneebone, Algebraic Projective Geometry. Oxford University Press, 1979.
[26] C. Yang and F. Ciarallo, "Optimized sensor placement for active visual perception," Journal of Robotic Systems, vol. 18, pp. 1–15, 2001.