Vector Averaging for Collaborative Control of an Internet Robot

Ken Goldberg, Billy Chen, Rory Solomon
IEOR and EECS Departments, UC Berkeley

This paper describes a collaboratively controlled Internet robot, where a distributed group of users shares control of an arm. A Java applet at each browser collects motion vectors from up to 30 simultaneous users. A central server averages these vectors to produce a single control stream for the robot. Users receive visual feedback from a digital camera mounted above the robot arm. The system was online for 18 months of continuous operation and was used by over 25,000 remote users. We conclude with two experiments that quantify how vector averaging can improve performance in the presence of noise, as suggested by the Central Limit Theorem.

1 Introduction

In most teleoperation systems, a single user input controls a single robot. Several researchers have considered the problem of one user controlling multiple robots. We consider the inverse problem: combining multiple user inputs to control a single industrial robot arm.

Consider the following scenario in space or undersea: a group of users is working together to control a telerobot. Each user monitors a different sensor and submits control inputs appropriate to that sensor's information. All of these inputs must be combined to produce a single control input for the telerobot. If the inputs can be put into vector form, one solution to this "control fusion" problem is to average all inputs. Since each user has access to a different noisy sensor, the Central Limit Theorem suggests that the mean may yield a more effective control signal than that from any individual input. We note that the CLT assumes independence and zero-mean noise, which may not be satisfied in practice.

In the taxonomy proposed by Tanie et al. [9], a system with one user controlling one robot is a Single Operator Single Robot (SOSR) system. In a Multiple Operator Single Robot (MOSR) system, a group of users shares simultaneous access to a single robot. Inputs from all users must be combined to generate a single control stream for the robot. We refer to these as "collaborative control" systems.

∗Contact: [email protected]. This work was supported in part by the National Science Foundation Award IIS-0113147.

There are benefits to

collaboration: the group of users may be more reliable than a single user. In this paper we describe a MOSR teleoperation system that allows a group of online users to control the movements of a single industrial robot arm.

We describe a client-server system on the Internet that facilitates many simultaneous users cooperating to share a single robot resource. As illustrated in Figure 1, each user/client downloads two Java applets by visiting the system website. These applets interact with two PCs in our lab to control the robot arm and display video images.

[Figure 1 diagram: Client (Applet V, Applet C) connects via the Internet to Webserver V (video card, CCD camera) and Webserver C (robot control, robot arm).]

Figure 1: System Architecture for Collaborative Internet Robot: each user client runs two applets, Applet V for video feedback and Applet C for control.

Since 1994, our lab has implemented five systems that allow users to interact remotely via the Internet to control a physical robot. In the Mercury Project [18], users queued up for 5-minute individual control sessions. In the 1995 Telegarden project [19], user motion requests were interleaved so that each user seemed to have simultaneous control of the robot. These two control models are analogous to the batch and multitasking approaches to mainframe computing. They suffer from the standard latency and throughput problems associated with time sharing. In this paper we report on experiments with a control model where motion commands from multiple simultaneous users are aggregated into a single stream of motion commands.

2 Related Work

Goertz demonstrated one of the first "master-slave" teleoperators 50 years ago at the Argonne National Laboratory [15]. Remotely operated mechanisms have long been desired for use in inhospitable environments such as radiation sites, undersea [4] and space exploration [5]. See Sheridan [40] for an excellent review of the extensive literature on teleoperation and telerobotics.

Internet interfaces to soda and candy machines were demonstrated in the early 1980s, well before the introduction of the WWW in 1992. One of the first web cameras was the Trojan Room coffee pot at Cambridge. The Mercury Project's Internet robot [18] went online in August 1994. In September 1994, Ken Taylor, at the University of Western Australia, demonstrated a remotely controlled six-axis telerobot with a fixed observing camera [10]. Although Taylor's initial system required users to type in spatial coordinates to specify relative arm movements, he and his colleagues have subsequently explored a variety of user interfaces [23]. Also in October 1994, Wallace demonstrated a robotic camera [22] and Cox put up a system that allows WWW users to remotely schedule photos from a robotic telescope [24]. Paulos and Canny have implemented several Internet robots with elegant user interfaces [32, 33]. Bekey and Steve Goldberg used a six-axis robot, rotating platform, and stereo cameras to allow remote scholars to closely inspect a rare sculpture [41].

Internet robots are an active research area. In addition to the challenges associated with time delay, supervisory control, and stability, Internet robots must be designed to be operated by non-specialists through intuitive user interfaces and to be accessible 24 hours a day. See [25, 39] for recent papers and [20] for a collection of recent projects.

In a "collaborative" or Multiple Operator Single Robot (MOSR) system, a group of users shares simultaneous access to a single robot.
Inputs from all users are combined to generate a single control stream for the robot as illustrated in Figure 2.


Figure 2: In a collaborative control system, an ensemble of user inputs are aggregated into a single control signal.

Pirjanian studies how reliable robot behavior can be produced from an ensemble of sources [34]. Drawing on research in fault-tolerant software [29], Pirjanian considers systems with a number of homogeneous sources (sharing a common objective), and considers a variety of voting schemes. He shows that fault-tolerant behavior fusion can be optimized using plurality voting [6] but does not study the motion paths generated by specific malfunctioning modes.

McDonald, Small, Graves, and Cannon [31] describe an Internet-based collaborative control system that allows several users to assist in waste cleanup using Point-and-Direct (PAD) commands [8]. In their system, collaboration is pipelined, with overlapping plan and execution phases. They demonstrate that collaboration improves overall execution time but they do not address conflict resolution between users. Chong et al. [9] study a multi-operator-multi-robot (MOMR) networked system and propose several control methods to cope with collisions arising from network time delays. Adapting their terminology, ours would be a multi-operator-single-robot (MOSR) system.

Cinematrix is a system similar in spirit to MOSR. Cinematrix is a commercial software system that allows a roomful of participants to collaboratively control events projected on a screen [36]. Collaborative control is achieved using two-sided color-coded paddles and a vision system that aggregates the color field resulting from all paddle positions. The developers report that groups of untrained users are able to coordinate aggregate motion, for example to move a cursor through a complex maze on the screen.

Fong et al. use the term "collaborative control" to describe systems where collaboration occurs between a single robot and a single operator who is treated as a peer to the robot and modeled as a noisy information source [12]. This would be a SOSR system in Tanie's terminology. Related models of SOSR collaboration such as Cobots are analyzed in [14, 2, 30].

Collaborative control is related to the very active research topic of cooperative (behavior-based) robots, where groups of autonomous robots interact to solve an objective [3]. These might be described as No Operator Multiple Robot (NOMR) systems. Recent results are reported in [28, 11, 38, 35, 7]. Collaborative control is also related to MOMR online collaborative games such as Quake (Capture the Flag), where users remotely control individual avatars.
Outside of robotics, the notion of collaborative control is relevant to a very broad range of collaborative human activities including economic markets, pricing behavior, voting, traffic flows, etc. Excellent overviews of the broader context can be found in [27, 37]. There is also a substantial body of research on Distributed Artificial Intelligence (DAI) and Multi-Agent Systems (MAS), emphasizing multiple actors, multiple contexts, multiple representations, resource limitations, compatibility-based problems and robustness [13]. In our MOSR model of collaborative control, the focus is on group control of a single shared resource, in contrast to groups of resources.

A preliminary version of this paper appeared in [16]. For recent research in MOSR collaborative control, see [17, 21].

3 User Interface

To attract online users to our system, we selected the popular Ouija [26] board game. Several users play the game together by placing their hands together on a sliding plastic disk known as a "planchette". The planchette is free to slide over a board marked with letters and messages such as Yes and No. The group of users poses a question.

As each user concentrates, the planchette slides across the board to indicate an answer. Although we do not address the claim that the planchette is influenced by supernatural powers, it is in many cases influenced by the conscious and unconscious movements of all participants. In this way the rigid planchette aggregates force vectors from the group.

Figure 3: User interface in a browser window such as Internet Explorer. The top window displays live video of the robot and Ouija board in our laboratory. The bottom window displays user force inputs corresponding to mouse movements.

Figure 3 illustrates the interface at the client's Internet browser. Users first register by providing an active email address, a userid, and password. A list of online "players" (active users) is updated and displayed to the right. We display the current question to be considered at the bottom. This question is randomly selected from a file and is parameterized to include the name of one player (experience suggests that if we allowed users to type in questions, the display would quickly degenerate into graffiti). Figure 1 describes the system architecture. Each client automatically downloads two applets; each communicates with a dedicated server in our lab. Applet and Server V provide live streaming video feedback of the robot and planchette motion. Applet and Server C coordinate control of the robot.

The user's mouse serves as a local planchette. The user is asked to place both hands lightly on the mouse. Subtle mouse motions are monitored by Applet C, resulting in small motions of the displayed planchette icon in Applet C's window. These motions are continuously sampled by the client's applet to produce a motion vector that is sent back to Server C once every 5 seconds. Server C collects motion vectors from all active clients, averages them, and then sends a global motion command to the robot. The resulting planchette motion is observed by all active clients through video Applet V.
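The server-side aggregation step can be sketched in a few lines. This is an illustrative reconstruction, not the system's actual Java source; the function name and the (vx, vy) tuple format are our assumptions.

```python
# Illustrative sketch of Server C's aggregation step (hypothetical names;
# the paper does not publish its source).

def average_motion_vectors(vectors):
    """Average per-user motion vectors (vx, vy) into one global command."""
    if not vectors:
        return (0.0, 0.0)  # no active users: the robot holds its position
    n = len(vectors)
    return (sum(v[0] for v in vectors) / n,
            sum(v[1] for v in vectors) / n)

# Three users pull roughly rightward while one noisy user pulls left;
# the averaged command still points right.
users = [(1.0, 0.2), (0.8, -0.1), (1.2, 0.0), (-0.5, 0.1)]
command = average_motion_vectors(users)
```

One design consequence of averaging is that a single outlier input is diluted by the group rather than dominating the robot's motion.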

4 Hardware

Figure 4: Industrial Robot Arm with Planchette and Board.

System hardware includes two PCs, a CCD camera, and an Adept 604-S four-axis robot arm. Webserver V is a Pentium II 266 running Windows NT 4.0 SP5. Server V runs the Apache 1.3.3 server and a commercial software package called InetCam which handles live video. Server V also holds a FlyVideo '98 Windows-compatible video card that is connected to a color CCD camera.

The video card is capable of either full-motion capture, at 30 frames per second, or single still-image captures. For our purpose, we used the full-motion capture capabilities of the card. Resolution of video captures can be set up to a maximum of 640x480

with 12, 24, or 32-bit color resolutions. It supports connections of the following types: 75-ohm IEC coaxial input (cable TV), composite video input (RCA), S-Video input (SVHS), audio input, and line audio output. The supplied driver allows us to set resolution rates and color resolution, as well as configure optimal settings for hue, contrast, color, and brightness.

Webserver C is an Intel Pentium II, 266 MHz, running Red Hat Linux version 5.2. This machine runs an Apache web server, version 1.3.3, which handles HTML client requests. The Robot Server also runs on this machine. The Robot Server is attached through an RS-232 serial connection to a controller for an Adept 604-S robotic arm with four degrees of freedom. The Adept controller runs the V+ operating system and programming language.

5 Software

After a user registers and enters the system, the client downloads one 25 KB Java archive that includes all classes for Applets V and C. Initially, we kept each applet separate, but discovered that for slow (14.4 kbps) connections, Applet V would load first and then begin streaming video, which would consume the bandwidth and prevent loading of Applet C.

5.1 Applet V: Streaming Video

In our initial design, we implemented two server-side scripts: one that took a snapshot from the capture card once per second and converted it to a black-and-white GIF, and another CGI script which then pushed the image to the client's web browser once per second. This had two drawbacks. First, we wanted to use inter-frame compression to reduce bandwidth usage, since each image was roughly 70 kB. Second, Microsoft Internet Explorer did not support the server-side push function.

Instead, we used a commercially available streaming video package, InetCam version 2.1.0 [1]. We configured the InetCam server with frame resolution = 240x180, compression rate = 65%, maximum frame rate = 5 per second, and maximum number of simultaneous users = 30.

5.2 Applet C: Planchette Control

Applet C displays a small window with a representation of the planchette. Applet C also displays two text panels: one listing current players and another listing the question being considered. When Applet C has been downloaded, it establishes communication with Server C through a socket connection. Through this connection, the client sends desired force vectors to Server C every 3 seconds. Server C aggregates force commands from all active users and controls the robot. Server C also transmits information about the current question being asked and the player list back to each Applet C.

To control the robot from Server C, we developed code based on Java I/O streams that communicates with Adept's V+ programming language (which has several functions for handling serial I/O). Testing revealed that every 120 bytes or so, one or two bytes would be mysteriously lost. We fixed this problem by slowing the connection to 4800 bits per second. We originally wrote Applet C using Java 1.1, but found that as of June 1999 few browsers supported that version, so we re-wrote it in Java 1.0 using the Java Development Kit (JDK) and the Abstract Windowing Toolkit (AWT).

5.3 Planchette Motion Model

Figure 5: Planchette Coordinates.

As described above, Applet C at each client sends a desired motion vector to Server C every 3 seconds. At the client, the user's mouse position is read by a local Java applet and a "virtual" planchette is displayed in the lower window as it tracks the user's mouse. To make the interface more realistic, the planchette motion is based on an inertial model. We treat the vector from the center of the planchette screen to the current mouse position as a force command. Consider the coordinate frame defined in Figure 5. User i specifies desired acceleration (aix, aiy).

We model frictional drag on the planchette with a constant magnitude and a direction opposite the current velocity of the planchette. If the current velocity of the planchette is v0 = (v0x, v0y) and the magnitude of the constant frictional acceleration af = (afx, afy) is f, then

afx = -f v0x / sqrt(v0x^2 + v0y^2),   afy = -f v0y / sqrt(v0x^2 + v0y^2).

The resulting velocity of the planchette is v = v0 + (a + af) ∆t. The virtual planchette is updated locally 30 times a second, so ∆t = 1/30 ≈ 0.033 seconds.
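The per-frame update above can be sketched as follows. This is our reconstruction for illustration only (the applet itself was written in Java); the function name and the default friction magnitude are assumptions.

```python
import math

# Sketch of the planchette's inertial model (hypothetical names; the
# friction magnitude f = 0.5 is an assumed value, not the system's).

def step(v0, a, f=0.5, dt=1.0 / 30.0):
    """One 30 Hz update: v = v0 + (a + af) * dt, where the frictional
    acceleration af has constant magnitude f and opposes the velocity v0."""
    v0x, v0y = v0
    ax, ay = a
    speed = math.hypot(v0x, v0y)
    if speed > 0.0:
        afx = -f * v0x / speed  # friction points against the motion...
        afy = -f * v0y / speed  # ...with constant magnitude f
    else:
        afx = afy = 0.0         # no drag on a stationary planchette
    return (v0x + (ax + afx) * dt, v0y + (ay + afy) * dt)

# Coasting with no user force: friction slowly bleeds off speed.
v = step((3.0, 4.0), (0.0, 0.0))
```

Because the drag magnitude is constant rather than proportional to speed, a coasting planchette decelerates linearly and comes to rest instead of slowing asymptotically.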

One suggestion was that we should "normalize" the input force vector from the user after each polling cycle. This has the positive effect of treating no motion as a zero-force input, but has the negative effect of causing confusion due to the difference in cycle time between the local planchette motion and the remote planchette motion. For example, if the remote planchette is in the upper left and the user wants to move it to the lower right, he or she will position the local planchette in the lower right, but after one cycle this position would be treated as zero and the user would be unable to move the planchette any further in the desired direction. Thus we do not normalize the input force vector.

Summing the inputs from all users gives the net desired acceleration of the planchette: ax = Σi aix and ay = Σi aiy, so a = (ax, ay). The industrial robot accepts commands in the form of a desired goal point and velocity, computed from a.

6 Usage Statistics

The system was live and available almost continuously from January 2000 until August 2001. Figure 6 shows the rate of new users who used the system in year 2000. Access logs indicate that well over 25,000 people have used the system.

Figure 6: Plot of new users per month during 2000 (vertical axis: number of new users, 0 to 3500; horizontal axis: month, January = 1 through December = 12).

Although the robot is no longer online, the system continues to run with a simulated robot arm and attracts approximately 50 new users per day.

7 Central Limit Theorem and Collaborative Control

It follows from the Central Limit Theorem that independent random variables can be combined to yield an estimate that gets more accurate as the number of variables increases. Let Xi be the input from user i. Suppose each Xi is an independent and identically distributed random variable with mean µ and variance σ². Let χn denote the arithmetic average of the inputs and χ* denote the normalized random variable

χ* = (χn − µ) / (σ/√n).

The Central Limit Theorem states that as n approaches ∞, χ* approaches a normal distribution with mean zero and unit variance. Since the variance of χn is σ²/n, the mean estimator becomes more accurate as the number of users increases. To quantify how vector averaging can improve system performance, we designed two experiments.
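The σ/√n scaling of the averaged estimate is easy to verify numerically. The following check is an illustration we add here, not an experiment from the paper; names and parameters are ours.

```python
import random
import statistics

# Empirical check that averaging n i.i.d. inputs shrinks the spread of
# the estimate by a factor of sqrt(n).

def std_of_mean(n, sigma=1.0, mu=0.0, trials=20000, seed=1):
    """Standard deviation of the mean of n i.i.d. Gaussian inputs,
    estimated over many trials."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Averaging 25 inputs should shrink the spread by about a factor of 5.
s1, s25 = std_of_mean(1), std_of_mean(25)
```

With σ = 1, the measured spread of the single-input estimate is close to 1, while the 25-input average lands near 1/√25 = 0.2.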

8 Maze Navigation Experiment

Figure 7: Test Maze M1.

To get some intuition about how collaboration affects performance in the presence of noise, we conducted an experiment with two undergraduates. We replaced the Ouija board with a simple maze and asked the undergraduates to use our interface to navigate through the maze as shown in Figure 7. Figure 8 illustrates how the camera image looks after it is corrupted by noise. We detect maze boundaries in software to prevent the planchette from moving across a maze wall or from leaving the maze itself.

A and B were two female undergraduates. Neither had any previous experience with the system. For each trial, we recorded their total navigation time in seconds, first individually (A, B), and then when working collaboratively (A+B):

Subject   Trial 1   Trial 2   Trial 3
A           146       139       105
B           177       141       175
A+B          65        71        72

Note that each student has different noisy information about the maze and the current location of the planchette. Although this experiment is not conclusive, the data in the last line of the table shows that maze completion time improves by 31-63% when the players collaborate.

Figure 8: Two views of maze M1 corrupted by digital noise.

9 Simulation Experiment

In a second experiment, we studied how a related system performs with a large number of simulated users (agents). As illustrated in Figure 9, we designed a simulation where a set of agents works together to control the motion of a point "robot" in the plane. The goal is to move the robot from an initial starting point to a defined disk in the plane. At each time step, each agent generates a desired motion vector. These motion vectors are averaged to generate a single motion increment for the robot. This procedure is repeated until the robot converges to the goal.

Noise in sensing and control is modeled with two-dimensional Gaussian distributions. Geometrically, each trial takes place in a subset of the plane.
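A simplified re-creation of one such trial is sketched below. The start point, goal radius, step size, and noise level are our assumptions; the paper's exact noise model and parameters are not reproduced here.

```python
import math
import random

# Simplified trial: n agents each propose a noisy unit step toward the
# goal disk at the origin; the robot moves by the mean of their vectors.

def trial(n_agents, noise=1.0, eps=0.1, step=0.05, seed=0, max_iters=10000):
    """Iterations until the averaged-control robot enters the eps-goal disk."""
    rng = random.Random(seed)
    x, y = 1.0, 1.0                        # assumed start; goal disk at origin
    iterations = 0
    while math.hypot(x, y) > eps and iterations < max_iters:
        d = math.hypot(x, y)
        gx, gy = -x / d, -y / d            # true unit direction to the goal
        sx = sy = 0.0
        for _ in range(n_agents):
            dx = gx + rng.gauss(0, noise)  # each agent sees a noisy copy
            dy = gy + rng.gauss(0, noise)
            norm = math.hypot(dx, dy) or 1.0
            sx += dx / norm                # agents submit unit motion vectors
            sy += dy / norm
        x += step * sx / n_agents          # robot moves by the averaged vector
        y += step * sy / n_agents
        iterations += 1
    return iterations

# Larger groups average away more of the noise and typically converge sooner.
few, many = trial(5), trial(50)
```

Repeating such trials over several group sizes reproduces the qualitative trend of Figure 10: the averaged direction tracks the true direction better as the group grows.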


Figure 9: Geometric representation of one motion increment generated by 7 simulated agents with Gaussian noise.

Figure 10: Average convergence times for five group sizes based on 1000 trials each.

We ran 5 sets of 1000 trials each. For each set, the number of agents was set at 10, 20, 30, 40, and 50. Figure 10 illustrates the average number of iterations required for the system to converge for each group size. As the number of collaborating agents increases, the number of iterations required for the system to converge decreases. These results are consistent with the Central Limit Theorem and illustrate how vector averaging can improve aggregate performance.

10 Summary and Future Work

We consider a system where a group of users collaborates rather than competes for control of a single robot. Rather than users taking turns in a queue, we propose to allow shared control by putting control commands into vector form and averaging them.

In this paper we have described a Multi-Operator Single Robot (MOSR) system for collaborative teleoperation on the Internet. It uses vector averaging to share control among many simultaneous users. We described the system architecture, user interface, hardware, software, and usage statistics. We used commercially available hardware where possible and described the custom client and server software we developed for this project, which was online almost continuously for 18 months.

The Central Limit Theorem suggests that the vector average may yield a more effective control signal than that from any individual input. We explored this in two experiments that quantify how vector averaging can reduce error in the presence of noise. In the first, we compared the performance of two students navigating a maze in the presence of visual noise, first individually and then using vector averaging. We observed that performance time improved in the latter case. In the second experiment we considered how the number of participants affects performance, using large collections of simulated users. Here, we found that performance improves as the size of the group increases.
Neither of these experiments is conclusive and we do not claim that collaboration will improve performance in all cases. Our results do suggest the worthy hypothesis that remote collaboration is possible and in some cases may improve overall system performance. Much remains to be done to explore this hypothesis.

Acknowledgments

We thank Bobak Farzin, Steve Bui, Jacob Heitler, Derek Poon, Gordon Smith, Brian Carlisle, Gil Gershoni, Richard Rinehart, Michael Idinopulous, Hiro Narita, Tiffany Shlain and Heidi Zuckerman-Jacobson for their contributions to this project. This work was supported in part by the National Science Foundation Award IIS-0113147.

References

[1] http://www.inetcam.com.

[2] H. Arai, T. Takubo, Y. Hayashibara, and K. Tanie. Human-robot cooperative manipulation using a virtual nonholonomic constraint. In IEEE International Conference on Robotics and Automation, April 2000.

[3] Ronald C. Arkin. Cooperation without communication: Multiagent schema-based robot navigation. Journal of Robotic Systems, 9(3), 1992.

[4] R. D. Ballard. A last long look at Titanic. National Geographic, 170(6), December 1986.

[5] A. K. Bejczy. Sensors, controls, and man-machine interface for advanced teleoperation. Science, 208(4450), 1980.

[6] D. M. Blough and G. F. Sullivan. A comparison of voting strategies for fault-tolerant distributed systems. In 9th Symposium on Reliable Distributed Systems, 1990.

[7] Z. Butler, A. Rizzi, and R. Hollis. Cooperative coverage of rectilinear environments. In IEEE International Conference on Robotics and Automation, April 2000.

[8] D. J. Cannon. Point-And-Direct Telerobotics: Object Level Strategic Supervisory Control in Unstructured Interactive Human-Machine System Environments. PhD thesis, Stanford Mechanical Engineering, June 1992.

[9] N. Chong, T. Kotoku, K. Ohba, K. Komoriya, N. Matsuhira, and K. Tanie. Remote coordinated controls in multiple telerobot cooperation. In IEEE International Conference on Robotics and Automation, April 2000.

[10] Barney Dalton and Ken Taylor. A framework for internet robotics. In IEEE International Conference on Intelligent Robots and Systems (IROS): Workshop on Web Robots, Victoria, Canada, 1998.

[11] B. Donald, L. Gariepy, and D. Rus. Distributed manipulation of multiple objects using ropes. In IEEE International Conference on Robotics and Automation, April 2000.

[12] T. Fong, C. Thorpe, and C. Baur. Collaborative control: a robot-centric model for vehicle teleoperation. In Agents with Adjustable Autonomy: Papers from the 1999 AAAI Symposium, March 1999.

[13] Les Gasser. Multi-agent systems infrastructure definitions, needs, and prospects. In Proceedings of the Workshop on Scalable MAS Infrastructure, Barcelona, Spain, 2000.

[14] R. Gillespie, J. Colgate, and M. Peshkin. A general framework for cobot control. In International Conference on Robotics and Automation, 1999.

[15] Raymond Goertz and R. Thompson. Electronically controlled manipulator. Nucleonics, 1954.

[16] K. Goldberg, Steve Bui, Billy Chen, Bobak Farzin, Jacob Heitler, Derek Poon, Rory Solomon, and Gordon Smith. Collaborative teleoperation on the internet. In IEEE International Conference on Robotics and Automation (ICRA), 2000.

[17] K. Goldberg and Billy Chen. Collaborative control of robot motion: Robustness to error. In International Conference on Intelligent Robots and Systems (IROS), 2001.

[18] K. Goldberg, M. Mascha, S. Gentner, N. Rothenberg, C. Sutter, and Jeff Wiegley. Beyond the web: Manipulating the physical world via the www. Computer Networks and ISDN Systems Journal, 28(1), December 1995. Archives can be viewed at http://www.usc.edu/dept/raiders/.

[19] K. Goldberg, J. Santarromana, G. Bekey, S. Gentner, R. Morris, C. Sutter, and J. Wiegley. A telerobotic garden on the world wide web. Technical report, SPIE OE Reports, June 1996. Visit: http://telegarden.aec.at.

[20] K. Goldberg and Roland Siegwart, editors. Beyond Webcams: An Introduction to Online Robots. MIT Press, 2002.

[21] K. Goldberg, D. Song, Y. Khor, D. Pescovitz, A. Levandowski, J. Himmelstein, J. Shih, A. Ho, E. Paulos, and J. Donath. Collaborative online teleoperation with spatial dynamic voting and a human "tele-actor". In IEEE International Conference on Robotics and Automation (ICRA), May 2002.

[22] http://found.cs.nyu.edu/cgi-bin/rsw/labcam1.

[23] http://telerobot.mech.uwa.edu.au/.

[24] http://www.eia.brad.ac.uk/rti/.

[25] H. Hu, L. Yu, P. Tsui, and Q. Zhou. Internet-based robotic systems for teleoperation. Assembly Automation Journal, 21(2), 2001.

[26] Parker Brothers Inc. The Ouija board game.

[27] N. L. Johnson, S. Rasmussen, and M. Kantor. New frontiers in collective problem solving. Technical Report NM LA-UR-98-1150, May 1998.

[28] O. Khatib, K. Yokoi, K. Chang, D. Ruspini, R. Holmberg, and A. Casal. Coordination and decentralized cooperation of multiple mobile manipulators. Journal of Robotic Systems, 13(11), 1996.

[29] Y. W. Leung. Maximum likelihood voting for fault-tolerant software with finite output space. IEEE Transactions on Reliability, 44(3), 1995.

[30] K. Lynch and C. Liu. Designing motion guides for ergonomic collaborative manipulation. In IEEE International Conference on Robotics and Automation, April 2000.

[31] M. McDonald, D. Small, C. Graves, and D. Cannon. Virtual collaborative control to improve intelligent robotic system efficiency and quality. In IEEE International Conference on Robotics and Automation, April 1997.

[32] E. Paulos and J. Canny. Delivering real reality to the world wide web via telerobotics. In IEEE International Conference on Robotics and Automation (ICRA), 1996.

[33] E. Paulos and J. Canny. Designing personal tele-embodiment. In IEEE International Conference on Robotics and Automation (ICRA), Detroit, MI, 1999.

[34] Paolo Pirjanian. Multiple Objective Action Selection and Behavior Fusion Using Voting. PhD thesis, Aalborg University, 1998.

[35] Paolo Pirjanian and Maja J. Mataric. Multi-robot target acquisition using multiple objective behavior coordination. In IEEE International Conference on Robotics and Automation, April 2000.

[36] Rachel and Loren Carpenter. http://www.cinematrix.com.

[37] S. Rasmussen and N. L. Johnson. Self-organization in and around the internet. In Proceedings of the 6th Santa Fe Chaos in Manufacturing Conference, April 1998.

[38] I. Rekleitis, G. Dudek, and E. Milios. Multi-robot collaboration for robust exploration. In IEEE International Conference on Robotics and Automation, April 2000.

[39] R. Safaric, M. Debevc, R. Parkin, and S. Uran. Telerobotics experiments via internet. IEEE Transactions on Industrial Electronics, 48(2):424-31, April 2001.

[40] Thomas B. Sheridan. Telerobotics, Automation, and Human Supervisory Control. MIT Press, 1992.

[41] Steven B. Goldberg, George A. Bekey, and Yuichiro Akatsuka. Digimuse: An interactive telerobotic system for viewing of three-dimensional art objects. In IEEE International Conference on Intelligent Robots and Systems (IROS): Workshop on Web Robots, Victoria, Canada, 1998.
