
SPHERES Interact - Human-Machine Interaction aboard the International Space Station

Enrico Stoll Steffen Jaekel Jacob Katz Alvar Saenz-Otero Space Systems Laboratory Massachusetts Institute of Technology 77 Massachusetts Avenue, Cambridge Massachusetts, 02139-4307, USA [email protected] [email protected] [email protected] [email protected]

Renuganth Varatharajoo Department of Aerospace Engineering University Putra Malaysia 43400 Selangor, Malaysia [email protected]

Abstract

The deployment of space robots for servicing and maintenance operations, which are teleoperated from ground, is a valuable addition to existing autonomous systems since it will provide flexibility and robustness in mission operations. In this connection, not only robotic manipulators are of great use but also free-flying inspector satellites supporting the operations through additional feedback to the ground operator. The manual control of such an inspector satellite at a remote location is challenging since the navigation in three-dimensional space is unfamiliar and large time delays can occur in the communication channel. This paper shows a series of robotic experiments, in which satellites are controlled by astronauts aboard the International Space Station (ISS). The Synchronized Position Hold Engage Reorient Experimental Satellites (SPHERES) were utilized to study several aspects of a remotely controlled inspector satellite. The focus in this case study is to investigate different approaches to human-spacecraft interaction with varying levels of autonomy under zero-gravity conditions.

1 Introduction

Human-machine interaction is a widespread research topic since there are many terrestrial applications, such as industrial assembly or rescue robots. Analogously, there are a number of possible applications in space, such as maintenance, inspection and assembly, amongst others. Satellites are the only complex engineering systems without an infrastructure for routine maintenance and repair. The Shuttle-based satellite servicing missions, like the Hubble, Solar Maximum, SYNCOM IV-3, and INTELSAT VI (F-3) missions, which were executed in the past, are not applicable to arbitrary spacecraft. The reasons for this are, on the one hand, the costs due to the Space Shuttle deployment, which exceed the expenses for re-constructing and re-launching the specific satellite. On the other hand, using manned spacecraft like the Space Shuttle always constitutes a risk for the crew. Currently, the size and weight of spacecraft are limited by the launch vehicle: they have to fit into the envelope of the respective launcher. However, most scientific satellites would benefit from larger payload volumes. Space telescopes, for example, could be significantly improved if more room was available for larger apertures. Likewise, if the size of solar panels can be increased, the additional available energy allows for integrating more complex payloads. In-space robotic assembly (ISRA) is an approach to overcome these launcher limitations. Robotic assembly can be used to construct spacecraft in orbit after their parts have been brought to space with multiple launch vehicles. This principle has already been employed for the construction of the ISS. Different modules were separately brought to space by either the Space Shuttle or Russian launch vehicles and were subsequently assembled in space by astronauts. In contrast, ISRA can also be performed by controlling the procedures from ground. Similar to ISRA, ground controlled robotic spacecraft can be utilized for on-orbit repair and maintenance operations.
Robotic spacecraft can also be utilized for inspecting a target satellite or monitoring such on-orbit servicing (OOS) operations. Furthermore, it is planned to use ground controlled spacecraft for orbit transfer: malfunctioning spacecraft can be de-orbited from low Earth orbit (LEO), or geostationary (GEO) satellites that have exceeded their operating life can be relocated to the graveyard orbit.

1.1 State of the Art

In general, the deployment of regularly used robotic systems in space is currently limited to the Shuttle Remote Manipulator System (SRMS), the Japanese Experiment Module Remote Manipulator System (JEM-RMS) and the Mobile Servicing System (MBS) aboard the ISS. In addition to the 17 m long Canadarm2 (Space Station Remote Manipulator System, SSRMS), the MBS also features a Special Purpose Dexterous Manipulator (SPDM) (Mukherji et al., 2001). The three systems can be teleoperated by the crew and are being used for Extra Vehicular Activity (EVA) support, Space Station assembly, and satellite operations (retrieve, repair, deploy). In combination with the SRMS, the Orbiter Boom Sensor System (OBSS) (Greaves et al., 2005) is utilized for the inspection of the Shuttle's heat protection tiles. In addition to the described robotic servicing capabilities, which are strictly bound to the Shuttle or the ISS, several satellite-based demonstrators have been brought to orbit (or controlled on Earth via a satellite in orbit (Stoll et al., 2009a)) in order to demonstrate the possibility of more flexible and cost-effective future robotic on-orbit servicing systems. This section gives a brief overview of existing systems with special emphasis on missions that involve free-flyers for proximity operations and inspection. The Robot Technology Experiment (ROTEX) (Hirzinger et al., 1993) was developed by the German Aerospace Center (DLR). It was flown by the National Aeronautics and Space Administration (NASA) aboard the Space Shuttle Columbia in 1993 and was the first remotely controlled robot in space. Besides autonomous (pre-programmed) and tele-sensor-programmed (learning by showing) operations, the operator on ground could control the robot by using predictive, three dimensional (3D) computer graphics in teleoperation mode with a delay of approximately seven seconds. The Ranger program started in 1992 as the Ranger telerobotic flight experiment (RTFX) at the University of Maryland (Roderick et al., 2004).
The goal was to develop a dexterous extravehicular space telerobot with four robot manipulators and a free-flight capability in space. In 1996 the program was redirected as a Shuttle launch payload but never advanced beyond an engineering model. The Japanese Engineering Test Satellite VII (ETS-VII) (Imaida et al., 2001), launched by the Japan Aerospace Exploration Agency (JAXA) in 1997, was composed of a pair of satellites and successfully demonstrated bilateral teleoperation in space. The smaller, cooperative target satellite was autonomously inspected and captured by the servicer satellite featuring a six degrees of freedom (DoF) robotic manipulator with haptic feedback. Installed in 2005 outside the Russian Zvezda service module, the German Robotic Component Verification aboard the ISS (Rokviss) (Albu-Schaffer et al., 2006) experiment featured a two-joint robotic manipulator. It was controlled by operators on ground utilizing a haptic-visual display for telepresent manipulation via a direct S-band link with a total communication delay below 30 ms. In addition, the robot could be operated automatically in order to allow continuous experimentation without the need for a constant ground link to the experimentation platform. The Demonstration of Autonomous Rendezvous Technology (DART) (Rumford, 2003), developed by NASA and launched in 2005, was the first mission to rendezvous with a satellite completely autonomously. However, DART showed problems with its navigation system and suffered from excessive fuel usage. When DART approached its target for the execution of close proximity and formation flight operations, it overshot an important waypoint and collided with the communication satellite MUBLCOM, with which it was supposed to rendezvous. Consequently, the mission was retired prematurely.
Developed by the US Air Force Research Laboratory, the Experimental Small Satellites 10 (Davis and Melanson, 2004) and 11 (Madison, 2000) (XSS-10/11) were launched in 2003 and 2005, respectively, and were intended to demonstrate key technologies for future on-orbit servicing missions. The micro satellites demonstrated line-of-sight guidance, rendezvous as well as close-proximity maneuvering around an orbiting satellite. Both missions utilized the upper stage of the launch vehicle as a simulated target spacecraft to be serviced. Brought into geostationary orbit in 2006, the Micro-Satellite Technology Experiment's (MiTEx) (Boeing Company, 2006) purpose was to execute a variety of autonomous operations, maneuvering and station-keeping. In 2008 and 2009, both satellites conducted the first deep space inspection of the malfunctioning Defense Support Program satellite DSP-23. The goal of the Orbital Express (Shoemaker and Wright, 2003) mission, developed by the Defense Advanced Research Projects Agency (DARPA) and launched in 2007, was to validate the technical feasibility of robotic on-orbit servicing, including autonomous rendezvous, proximity operations, capture, docking and fuel transfer. The experiment was composed of two satellites: the servicer ASTRO, which featured a robotic manipulator, and a surrogate target satellite, called NextSat. After completing a successful docking maneuver, refueling and the substitution of orbital replacement units could be demonstrated on a low level of autonomy. Funded by DARPA and implemented by the Naval Center for Space Technology, the Spacecraft for the Universal Modification of Orbits (SUMO) (Bosse et al., 2004) was initially planned to be launched in 2010. The spacecraft is supposed to demonstrate the integration of machine vision and multiple robotic manipulators with autonomous control in order to perform rendezvous and grapple maneuvers for future spacecraft servicing operations in geostationary orbit.
The Orbital Life Extension Vehicle (OLEV) (Krenn et al., 2008) and the German Orbital Servicing Mission (DEOS) (Sellmaier et al., 2010) are currently under development at DLR. DEOS investigates technologies to autonomously and manually perform rendezvous and proximity operations as well as to capture a tumbling and uncooperative target satellite with a manipulator based on DLR's 7-DoF Lightweight Robot-III (LWR-III). The implemented torque controlled joints have previously been space-qualified during the Rokviss mission and allow the execution of a soft and reflexive grapple maneuver while anticipating the movement of the coupled satellite platform (Abiko et al., 2006). OLEV's purpose is to approach and dock with a depleted geostationary satellite utilizing a special capture tool for apogee motor insertion. Consequently, active attitude and position control can be performed for the coupled system, along with orbit transfer maneuvers such as relocation to a graveyard orbit for OLEV and controlled de-orbiting for DEOS. Figure 1 shows a classification of all mentioned free-flying space-robotic systems and outlines the research area focused on within the framework of this paper. The composition is based on a breakdown into different possible OOS tasks. Additionally, the respective grade of autonomy is considered. Missions including multiple aspects may be depicted twice in Figure 1. Concluding the results of the aforementioned complex OOS demonstrator missions that involve free-flyers, it becomes obvious that they are mostly applied in operations where the target is known in detail. Most of them were successful due to an autonomous approach, which had been monitored from the ground. Especially when the approached target's exact composition and state are uncertain or not known at all, the remote environment cannot be precisely modeled for predictive, dynamic computations used during close-proximity and grapple operations.
Therefore, additional teleoperated inspector satellites with real-time visual feedback become very beneficial for gaining crucial target information. In addition to OOS missions, such inspectors were supposed to be used for Space Transportation System (STS) and ISS maintenance and EVA support operations. Currently, there are no operational, free-flying inspection or maintenance support systems deployed in space. However, there have been several space and ground-based demonstrations. The Autonomous Extra-vehicular Robotic Camera (AERCam) (Choset et al., 1999), featuring a vision camera system, was designed to provide astronauts and ground control with visual feedback of the Space Shuttle's and International Space Station's exterior. The prototype (AERCam Sprint) was first deployed aboard Columbia during STS-87 in 1997. While being teleoperated by an astronaut inside the Shuttle, it flew freely inside the forward cargo bay. Currently, the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) (Fredrickson et al., 2003) is being developed at NASA as the nano-satellite class successor of AERCam. It is supposed to deliver the capability for next generation external inspection and remote viewing of human spaceflight activities, including ISS operations. Besides a teleoperation mode, the system is supposed to feature supervisory autonomous control with collision avoidance. The Space Systems Laboratory at the University of Maryland is developing the Supplemental Camera and Maneuvering Platform Space Simulation Vehicle (SCAMP SSV) (SCAMP SSV, 2006), which demonstrates free-flying camera applications within a neutral buoyancy test bed. It provides a stereo video interface for teleoperated 3-DoF navigation up to full 6-DoF autonomous control. Similar in size to SPHERES (cp. Section 2), the Personal Satellite Assistant (PSA) (Dorais and Gawdiak, 2003) was developed by NASA and is intended to act as a free floating and autonomous intra-vehicular spacecraft.
It is supposed to propel itself autonomously with eight small impellers. In contrast to the aforementioned OOS related experiments, PSA would interact with the crew in a short-sleeve environment, with the main purpose of providing a remote-sensing and diagnosis platform for astronauts and ground control.

1.2 Work Overview

The possibilities of performing representative research on human-spacecraft interaction are limited on Earth, since almost all interactions are affected by gravity. Thus, full 6-DoF experiments cannot be representatively performed and experiments are often reduced to 3-DoF. As depicted in the previous section, several ambitious missions have demonstrated key technologies and concepts for robotic operations in space. However, this often required development in the context of complex, space-proven1 systems, the challenges of which often restrict the scope of demonstration missions. The possibility of extending the functionality is limited once the spacecraft is in orbit. In contrast, the MIT SPHERES test bed aboard the ISS utilizes three experimental satellites, designed to provide researchers with a long term, replenishable, and upgradeable test bed for the validation of high risk metrology, control, and autonomy technologies. It enables easy abort-improve-repeat approaches, i.e. all experiments are observed by astronauts and can easily be aborted. After evaluating the results on ground, the algorithms can be improved and the tests can be repeated in a subsequent test session. Moreover, re-programming of the satellites allows for changing the control algorithms with respect to varying test objectives. This paper shows a case study of a teleoperated inspector satellite, as emphasized in Figure 1, which is supposed to support ISRA or OOS operations. Several tests aboard the ISS have been performed that together form a mission scenario for a satellite inspection robot and shed light on important aspects of such inspector operations.

• After initial proximity operations for approaching a target satellite, the inspector satellite will start its operations by orbiting around the target satellite to build or update a map (geometric model)

1Space is a harsh environment. Besides the constantly changing thermal conditions, spacecraft have, for example, to be designed and tested in consideration of the impacts of radiation and vacuum.

[Figure 1 arranges the discussed missions (ROTEX, Rokviss, Ranger, ETS-VII, DART, XSS-10/XSS-11, MiTEx, Orbital Express, SUMO, OLEV, DEOS, SCAMP, mini AERCam) by OOS task (assembly, maintenance, inspection, EVA support) and by grade of autonomy (teleoperated, supervised, autonomous/automated), distinguishing missions flown on orbit from those in development or ground testing, and marking the research focus of this paper.]

Figure 1: A classification of OOS demonstrator missions

of the remote environment. The Circumnavigation experiment in section 3 investigates interactions between the human operator and the controlled spacecraft.

• Such close proximity operations are always critical and the likelihood of collisions has to be reduced. In addition to autonomous abort mechanisms, the effectiveness of a Manual Abort command is therefore evaluated in the following experiment.

• Once sufficient (map) data of the remote location and its physical properties is available, the inspector will move on to a target location. This can either be a certain instrument on the target satellite that has to be inspected, or an advantageous location for monitoring the docking approach of a servicer spacecraft to the target spacecraft. The resulting precise motion towards a commanded location is considered in the Human Navigation experiment.

• The inspector satellite will not only stay at a certain location but may have to change its position to optimize its view on the incidents in space. Thus, subsequent tests investigate Collision Avoidance techniques with varying levels of onboard autonomy.

• Since all operations in space will be subjected to a time delay between operator command and spacecraft reaction (and the corresponding feedback to the operator), the concluding experiments will incorporate time delays for performing Delayed Human Navigation and Delayed Collision Avoidance.

2 SPHERES

The Synchronized Position-Hold, Engage, Reorient Experimental Satellites facility is a high fidelity test bed designed for developing and maturing algorithms for distributed satellite system concepts. The SPHERES program began as a design course at the Space Systems Laboratory (SSL) at MIT and developed over the years into a permanent robotic experiment aboard the ISS.

2.1 The SPHERES Test Environment

[Figure 2 (left) shows the main hardware components of a SPHERES satellite: thrusters, pressure regulator, hardware buttons, pressure gauge, ultrasound sensors, gas tank, and battery.]

Property                          Value
Diameter                          0.22 m
Mass (with tank and batteries)    4.3 kg
Max linear acceleration           0.17 m/s^2
Max angular acceleration          3.5 rad/s^2
Power consumption                 13 W
Battery lifetime                  2 h

Figure 2: Main components (left) and basic properties (right) of a SPHERES satellite

SPHERES are football-sized nano-satellites, currently three aboard the ISS and three on the ground. The SPHERES spacecraft control their position and attitude using a cold gas system. Multi-phase CO2 is stored in a tank located inside the satellites. It is regulated to 25 psi and fed to the twelve thruster valves, which are distributed over the surface and controlled via pulse-width modulation. Hardware buttons on a control panel are used to power and reset the satellite, initiate the bootloading of the test software, and enable the satellite for the tests. The basic properties of the satellites are summarized in Figure 2 (right). The experimental satellites feature full 6-DoF control authority. The built-in navigation system consists of a custom pseudo-GPS based on ultrasound beacons and sensors. The beacons are located at the borders of the test volume, such as the walls of the Japanese Experiment Module (JEM). This enables the SPHERES to perform absolute state measurements. Given a priori knowledge about the beacon configuration, the on-board computer performs time-of-flight measurements and additionally uses three accelerometers and three gyroscopes to estimate its state. The sensor fusion is done by an Extended Kalman Filter (EKF), reaching a precision in the vicinity of 10^-3 m (Nolet, 2007). SPHERES are currently used to mature space technology. Scientists on the ground and astronauts aboard the ISS initiate, monitor, and occasionally restart the experiments as well as change consumables such as propellant tanks and batteries. In this way, SPHERES is a risk-tolerant test bed that can be used for robotic control experiments in space. Complex tests can be performed in a representative environment under zero-gravity conditions and full 6-DoF control, without the danger of losing hardware in case the test conditions prove too challenging (Saenz-Otero, 2005).
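To illustrate the metrology principle described above, the receiver position can be recovered from beacon time-of-flight measurements by a linearized least-squares trilateration. The following Python sketch is not the flight code (which additionally fuses the ranges with gyroscope and accelerometer data in an EKF); the beacon layout and the cabin speed of sound are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 340.0  # m/s, assumed value in cabin air

def trilaterate(beacons, tofs):
    """Estimate a receiver position from ultrasound time-of-flight.

    beacons: (n, 3) array of known beacon positions [m], n >= 4
    tofs:    (n,) array of measured times of flight [s]
    Returns the least-squares position estimate as a (3,) array [m].
    """
    d = SPEED_OF_SOUND * np.asarray(tofs)   # ranges to each beacon
    b = np.asarray(beacons, dtype=float)
    # Linearize by subtracting the first range equation from the others:
    # |x - b_i|^2 = d_i^2  =>  2 (b_i - b_0) . x = |b_i|^2 - |b_0|^2 - d_i^2 + d_0^2
    A = 2.0 * (b[1:] - b[0])
    rhs = (np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2)
           - d[1:] ** 2 + d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x
```

With five beacons spanning the test volume, the linear system is overdetermined, so measurement noise is averaged out in the least-squares sense rather than amplified.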
Since its commissioning in 2006, about 28 SPHERES test sessions, each featuring approximately 10-15 tests, were executed aboard the ISS. The test sessions have included research on Formation Flight (Chung and Miller, 2008), Docking and Rendezvous (Nolet and Miller, 2007), Fluid Slosh, Fault Detection, Isolation, and Recovery (FDIR) (Pong, 2010), and general distributed satellite systems control and autonomy.

Before being uploaded to the ISS, the flight experiment software is integrated and verified with the SPHERES test bed on a 3-DoF (two degrees of translational and one degree of rotational freedom) air-bearing table at the MIT SSL. The satellites are put on floating devices that are equipped with additional CO2 tanks, cp. Figure 3 (left). A research-oriented Graphical User Interface (GUI) provides detailed text-based state information about all involved satellites as well as custom telemetry, e.g. the activation of an integrated collision avoidance system. Figure 3 (right) shows the typical course of development for a flight experiment: after the initial scenario is implemented using a C++ based guest scientist program (GSP) (Enright and Hilstad, 2004), which uses a defined programming interface, the code can be tested and debugged within a MATLAB simulation environment (Radcliffe, 2002). Subsequently, hardware-in-the-loop experiments are performed in 3 DoF on the air-bearing table. This iterative approach inherently reduces risk and allows for assessing repeatability while improving the reliability of the implemented scenario. The SPHERES are also equipped with an expansion port for adding new hardware for ground and ISS testing. Thus, the test bed can be extended to incorporate tests and validations based on new ISS hardware, such as computer vision based navigation equipment (Tweddle, 2010), or ground tests of other novel space subsystems (Varatharajoo and Kahle, 2005), (Varatharajoo et al., 2003). As Figure 3 (right) depicts, the accessibility decreases with increasing fidelity from computer simulation to space hardware.

[Figure 3 (right) summarizes three stages of increasing fidelity and decreasing accessibility: (i) computer simulation - implementation in an ideal simulation environment, source code validation and error management, simulated human-machine interaction; (ii) hardware-in-the-loop ground testing in 3-DoF - implementation for real-time computation on actual SPHERES hardware, testing on the air-bearing table at MIT SSL with real, repeatable human-machine interaction, consideration of environment imperfections; (iii) flight testing on the ISS in 6-DoF - testing in a relevant environment aboard the International Space Station under zero-gravity conditions with full 6-DoF and minor imperfections (air flow), restricted experiment time and limited repeatability.]

Figure 3: SPHERES put on floating devices on the SSL air-bearing table (left) and typical course of development for a SPHERES experiment implementation (right)

2.2 SPHERES aboard the International Space Station

[Figure 4 (left) shows the SPHERES setup in the JEM: the three satellites, five ultrasound beacons, CO2 fuel tanks, and the laptop for the GUI and communications. Figure 4 (right) shows the ISS-fixed axes: FWD along +X, AFT along -X, STBD along +Y, PORT along -Y, DECK along +Z, and OVHD along -Z.]

Figure 4: A SPHERES experiment in the JEM during TS25a under micro-gravity conditions (left) and the ISS-fixed coordinate frame as used by the astronauts (right, courtesy of NASA)

Analogously to the SPHERES testing environment on ground, the equipment aboard the ISS is composed of three nano-satellites, a custom metrology system based on ultrasound time-of-flight measurements, communications hardware, replenishable consumables (tanks and batteries), and an astronaut interface. Figure 4 (left) shows the complete SPHERES setup in the JEM, where SPHERES experiments are currently conducted. Previously, the U.S. Destiny Laboratory (USLab) was used for SPHERES test sessions. The ultrasound beacons are mounted on the walls around the current test volume, which is approximately a 2 m cube.

A complete test session (TS) usually takes between two and three hours of crew time2. After a first introductory crew conference, the astronaut sets up and configures the hardware. An ISS-supplied standard laptop is used as a control station to upload new programs to the satellites, collect telemetry and interact with the experiment. The provided simplified flight GUI is used for detailed crew instructions on the current experiment, test initiation as well as the display of test results, which are also communicated down to the MIT SPHERES team. Besides the direct visual feedback of the scene right in front of the astronaut, i.e. the movement of the satellites, the current crew-satellite interaction during a test run is limited to keyboard commands as well as text-based feedback on the laptop screen. However, current developments aim at including an upgraded GUI with augmented virtual reality (VR) elements in order to allow for more advanced human interaction experiments (as depicted in the following section). In addition, crew feedback may be entered and, if required for experiment evaluation, customized follow-up questions have to be answered within the GUI. Usually the crew has to complete the following tasks during a test session:

i) First, the crew unstows all SPHERES hardware, sets up the ultrasound beacons, connects the communication equipment to the laptop and runs the GUI. Depending on the status of the consumables installed in the satellites and the number of satellites used for the following experiments, it is sometimes required to exchange some batteries and tanks. All such actions are logged in the user interface in order for the ground team to have an accurate overview of the quantity and quality of all resources aboard the ISS.

ii) After uploading the current experiment file to the satellites, the crew carefully reads the HTML-based experiment instructions embedded into the user interface. This includes a graphical description of the satellites' initial positioning in the test volume as well as an experiment synopsis including the demanded astronaut-satellite interaction and tasks. The astronaut also gets a basic idea of the scientific background of each experiment.

iii) After approximately positioning all involved SPHERES according to the instructions and enabling them by pressing a dedicated button on the satellites' control panel, the crew starts a test run by pressing the respective test number key in the GUI. Subsequently, the satellites will initialize, locate themselves in the test volume, and move into their proper initial positions.

iv) Usually SPHERES experiments run completely autonomously with no crew interaction required. For the described human-satellite interaction scenarios within this publication, however, the crew had to issue keyboard commands for actively controlling the position and velocity of one SPHERE. After completing all steps listed in the crew instructions, test runs are usually terminated automatically if a set of certain parameters is met, e.g. a predefined position of the controlled SPHERE is reached. Alternatively, tests may also be stopped using a dedicated button in the GUI. The action chosen depends on whether or not all satellites react nominally. Numerical test results are displayed in the GUI and are communicated down to the ground team. Subsequently, it is decided whether to move on or to repeat the current test run in case of an obvious or potential failure.

v) After each test run, the crew has the chance to enter personal feedback into the GUI in order for the experiment scientist to better understand what happened, especially in case of off-nominal satellite behavior - which might be software, hardware or human-machine interaction related. In addition, mandatory crew feedback can be implemented within each test, i.e. radio buttons or a verbal description about some detail of the experiment.

vi) Subsequently, the consecutive tests are executed, either until all experiments within the test session are finished, or until the end of the scheduled crew time is reached. The SPHERES hardware is finally stowed away again.

2The time in which astronauts are available for a specific research purpose.

Figure 4 (right) shows the ISS-fixed coordinate system used for orientation and crew-satellite interaction. Instead of e.g. ±{X, Y, Z}, the directions AFT, FWD (forward), PORT, STBD (starboard), OVHD (overhead), and DECK are used. During operations, the MIT SPHERES team monitors the experiments in real time by means of a high-bandwidth video and audio downlink. Instructions are communicated, upon request, to the astronaut performing the experiments. Downloaded telemetry and recorded video of the test sessions serve as a source for experiment evaluation and, if needed, subsequent algorithm improvement for follow-up experiments.
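The astronaut-friendly direction names map directly onto the ISS-fixed axes (+X forward, +Y starboard, +Z toward the deck). A minimal sketch of such a command translation is given below; the sign convention follows Figure 4 and should be treated as illustrative rather than as flight software.

```python
# Mapping between the astronaut-friendly ISS direction names and signed
# unit vectors in the ISS-fixed frame (+X forward, +Y starboard, +Z deck).
# The exact convention is taken from Figure 4 and is illustrative.
ISS_DIRECTIONS = {
    "FWD":  (1, 0, 0),  "AFT":  (-1, 0, 0),
    "STBD": (0, 1, 0),  "PORT": (0, -1, 0),
    "DECK": (0, 0, 1),  "OVHD": (0, 0, -1),
}

def command_to_axis(direction):
    """Translate a crew command such as 'STBD' into a signed unit axis."""
    return ISS_DIRECTIONS[direction.upper()]
```

Such a lookup lets a keyboard-driven interface issue intuitive translation commands without requiring the crew to reason about signed coordinates.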

2.3 SPHERES Interact

There is a wide variety of existing applications for terrestrial telerobotics. The spectrum comprises underwater telerobotics (Ridao et al., 2007), search and rescue operations (Ruangpayoongsak et al., 2005), minimally invasive surgery (Ortmaier, 2007), and unmanned aerial vehicles (Chandler et al., 2002) amongst others. The control architectures can usually be distinguished by whether a direct control, shared control or supervisory control strategy is applied (Niemeyer et al., 2008). A direct control allows the operator to define the robot's motion, whereas the precision of a rate controlled robot is higher than under acceleration control (Massimino et al., 1989). A shared control architecture utilizes local sensory feedback loops with varying levels of operator assistance. The use of virtual elements in this context has proven to increase the accuracy and consolidate the task performance (Abbott et al., 2007). Utilizing supervisory control (Sheridan, 1992) the operator usually issues high level commands and receives a summary of the actions in terms of sensory information. If the control architectures are subject to a time delay in the communication channel, they can suffer from serious instabilities (Hirche and Buss, 2007) (Munir and Book, 2003), for which several compensation techniques such as wave variables (Tanner and Niemeyer, 2006) or time domain passivity (Ryu et al., 2005) were developed.

Most of the previous SPHERES test sessions on the ISS matured autonomous algorithms. Future servicing missions and the assembly of complex space structures will depend on increased autonomy. However, the ability of humans to provide high-level oversight and task scheduling will always be critical. In addition, elements of shared and supervised control techniques have to be evaluated to support free flyer operations. Thus, a complex test environment for demonstrating advanced concepts for human-spacecraft interaction was set up on ground in this context (Stoll and Kwon, 2009).
The SPHERES hardware is synchronized with a virtual reality entity of the test environment. The human operator is capable of controlling the experimental satellites in 3-DoF by means of a force feedback joystick (the Novint Falcon) that features three translational degrees of freedom. That way the satellites can be enveloped by virtual obstacles (cp. Figure 5) and collision avoidance or fuel optimal paths can be made perceptible by utilizing the force feedback, which is easy for the operator to understand (Stoll et al., 2010). The actual state as well as the commanded state are visible in the virtual environment, and time delays can be introduced in the communication channel.

However, high-fidelity feedback is not yet available to the human operator aboard ISS and fundamental research had to be conducted using the ISS laptops only as a means of human-machine interaction. Experiments were designed to be executable with keyboard commands and visual feedback (motion of the satellites) only. That way experiments were conducted to develop and advance algorithms for adjustable autonomy and human-spacecraft interaction. The research began with basic tests during Test Session 11, where the crew was asked to move a satellite to multiple corners in a predefined volume. The satellite autonomously prevented collisions with the walls of the ISS. The test demonstrated the ability of the crew to use the ISS laptop to control SPHERES. It provided baseline results for future tests. An ongoing sequence of ISS tests is being conducted in the framework of the SPHERES Interact program, including upgrading the SPHERES GUI to provide visual feedback about the satellites' state. The goal of the program is to conceive new algorithms that utilize both human interaction and machine autonomy to complete complex tasks in a 6-DoF environment.
The research area comprises human orientation, navigation, collision avoidance, and interaction with autonomy under the influence of time delay in the communication channel.

Figure 5: SPHERES ground test environment with haptic-visual feedback and the synchronized virtual entity (left) and the hardware entity (right)

3 Experiments aboard the International Space Station

This section presents eight experiments, which were performed in the framework of three different SPHERES test sessions aboard the ISS. All of the tests required astronauts to control SPHERES satellites under zero-gravity conditions.

• TS19 took place on August 27, 2009 and lasted approximately 3.75 hours. Amongst other tests, two SPHERES Interact experiments were performed.

• TS20 took place on December 05, 2009. A total of 28 tests were run in 2.5 hours. Four tests concerned the SPHERES Interact program.

• TS24 was executed on October 07, 2010. The session contained a total of 18 tests, of which 8 were SPHERES Interact tests.

Table 1 summarizes the research objectives of the tests performed in the framework of SPHERES Interact. The results of the tests are presented in this section. The experiments evaluated the potential benefits of a human controlled inspector satellite. Hence, each of the eight tests in this case study tries to shed light on a partial aspect of an inspector satellite mission supporting ISRA or OOS. The availability of crew time is very limited and so are the consumables (battery packs, CO2 tanks) available for SPHERES, which cannot be recharged or refilled on ISS but have to be brought to orbit. Thus, each experiment is usually only performed once. The small sample size can influence the statistical significance of the results. Therefore, the SPHERES tests are usually designed in a way that the main objective, which was evaluated in one test, will be re-verified as a secondary objective or sub-goal in a later test. Table 1 emphasizes this approach for the experiments presented here. Each test has a different objective. However, most of the tests feature a common intersecting set, which permits drawing and re-verifying the conclusions.

Table 1: Executed experiments aboard ISS and their respective research objectives

Research objectives: Orientation, Dist. est., Performance, Coll. avoid., Delayed, Autonomy

Test               TS   Objectives addressed
Circumnav.         19   x x x
Manual Abort       19   x x x x x
Human Nav.         20   x x x
Manual Avoid.      20   x x x x
Shared Avoid.      20   x x x x
Supervised Avoid.  20   x x x x
Delayed Nav.       24   x x x x
Delayed Avoid.     24   x x x x x

3.1 Human Task Performance Evaluation

The strategy for evaluating the human task performance is directly connected to the requirements derived from teleoperated satellite scenarios for OOS and ISRA missions. Since the propellant is in general the main limiting factor for space missions, fuel usage was chosen as one of the metrics to evaluate the human task performance. Especially for OOS and ISRA missions, the time window3, i.e. the time available for communicating with an orbiting spacecraft from ground, is another major constraint for interactive on-orbit operations. The time until a specific positioning task is accomplished will therefore also contribute to the test metrics. Time and fuel consumption are opposing factors, since both cannot be minimized at the same time. The crew therefore has to find a trade-off between both factors in order to maximize the task performance. Further, the accuracy in positioning the inspector satellite, i.e. the distance to the target position as well as its residual velocity are incorporated into the human task performance considerations. The latter is of importance for collision avoidance purposes. Therefore, the overall performance P for an inspector satellite positioning task is composed of the four performances for time Pt, fuel Pf , distance Pd, and velocity Pv.

P = Pt + Pf + Pd + Pv (1)

They are individually evaluated using a calibration value cali and a reference value iref where i ∈ (t, f, d, v).

P = (1 − calt(ttask − tref)) + (1 − calf(ftask − fref)) + (1 − cald|dfinal − dtarget|) + (1 − calv|vfinal|)   (2)

In this way the individual performances are normalized to a value of 1 if there is no deviation from the respective reference value. The reference values are constant throughout each test setup; for time and fuel they derive from the average data within similar tests. The reference values for distance and final velocity are predefined and will be stated at the bottom of each evaluation table. The calibration factors cali represent the performances' derivative with respect to the measured values and therefore consider their deviation from the respective reference values.

cali = αi / iref ;  i ∈ (t, f, d, v)   (3)

The factors αi allow for applying different weights to the individual performance criteria. Figure 6 shows a graphical representation of the individual performance functions in Eq. 2 depending on the values obtained in the tests. The time performance Pt approaches the value one if the time used for the task is equal to the reference time. If it exceeds the reference time, Pt will decrease. Likewise, if the task time is better than the reference, Pt will increase. The fuel performance Pf is evaluated analogously and can exceed the value one if the fuel consumption is better than the reference. In contrast, the task of locating the inspector at a given distance to the target cannot be overperformed. Deviations from the target distance will, no matter if positive or negative, always result in a decreased distance performance Pd, as Figure 6 shows. Furthermore, the performance Pv, which takes into account the velocity, will be at its maximum if there is no residual velocity. According to Eq. 2, an overall task performance of 4 means that the performance values for time and fuel coincide with their reference values and that the satellite is positioned with no deviation from its intended target position and no residual velocity. This implies a strong task performance in the respective experiment. Values below 4 reflect a decreased task performance, whereas values above 4 show an overperformance.

3This matter will be explained in more detail in section 3.9.

Figure 6: Performance criteria evaluation for human-satellite interaction with SPHERES
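The scoring of Eqs. 1-3 can be collected into one short routine. The following is a sketch under assumed naming (the function and argument names are ours, not from the flight software); the default reference values and weights are the ones quoted later for the positioning tasks (tref = 126 s, fref = 4.5 %, dref = dtarget = 30 cm, vref = 2 cm/s, αf = 2, all other αi = 1).

```python
def task_performance(t_task, f_task, d_final, v_final,
                     t_ref=126.0, f_ref=4.5,      # s, % of tank volume
                     d_target=0.30, d_ref=0.30,   # m
                     v_ref=0.02,                  # m/s
                     alpha_t=1.0, alpha_f=2.0, alpha_d=1.0, alpha_v=1.0):
    """Overall performance P = Pt + Pf + Pd + Pv (Eqs. 1-2),
    with calibration factors cal_i = alpha_i / i_ref (Eq. 3)."""
    p_t = 1.0 - (alpha_t / t_ref) * (t_task - t_ref)
    p_f = 1.0 - (alpha_f / f_ref) * (f_task - f_ref)
    p_d = 1.0 - (alpha_d / d_ref) * abs(d_final - d_target)
    p_v = 1.0 - (alpha_v / v_ref) * abs(v_final)
    return p_t + p_f + p_d + p_v
```

Meeting every reference exactly yields P = 4. As a cross-check, the Beacon 1 values reported later in Table 2 (116.6 s, 2.57 %, 39.2 cm, 1.1 cm/s) reproduce the listed overall performance of about 4.1.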

3.2 Circumnavigation

Within this test, elements of human supervision were implemented into the SPHERES test bed for the first time. This so-called supervised autonomy aims at providing the human operator (here the astronaut) with the possibility of direct interaction with the SPHERES. High level commands are given by the operator and executed by the satellites autonomously. A map building or updating task of the inspector satellites was simulated in this experiment, in which the astronaut controlled an inspector that orbited around a target satellite. Since there was no feedback on the satellite states available from the flight GUI, the operator's direct observation of the satellite motion was the only way to receive state feedback. No augmented or virtual reality techniques were supporting the astronaut and the human decision making process was purely based on the motion of the inspector satellite. This experiment in particular aimed at testing the astronauts' spatial orientation and judgment of motion in three dimensional space. It was evaluated whether a human operator is capable of recognizing motion patterns (orbits) and how precisely they can be controlled. Note that since in this experiment there was no manual control of the satellite position, the metrics of Eq. 1 will not be applied.

The test setup was therefore as follows. After an initial positioning period of the two SPHERES, the primary satellite (SPH1), which is the inspector satellite, flew around the secondary satellite (the target, SPH2) in a plane parallel to DECK, and was constantly pointing with its Velcro side4 (-X) as an imaginary camera at SPH2. The task of the astronaut was to align this camera with the Z-axis of the target. That means it should be either aligned with the pressure regulator or the tank of the target (cp. Figure 2), which served as imaginary target instruments. The test involved two phases,

i) an inclination change and

ii) an alignment adjustment of the inspector.

4Velcro is used as a simple docking mechanism.

While the inspector was permanently orbiting the target satellite, keyboard commands had to be used to change the inclination of the inspector's orbit plane to 90◦. The DECK served in this connection as the reference plane. After reaching this final orbit, the goal in the first phase was to stop the inspector at a position roughly aligned with either the regulator or the tank of the target. In the second phase, this rough positioning was to be adjusted to a precise alignment.

Figure 7 (left) depicts the motion of SPH1 in 3D and points out the inclination change start after an initial convergence of the EKF and the positioning of both SPHERES. The target satellite was autonomously staying at the origin of the coordinate system, i.e. the center of the experiment volume. The inclination increments for the inspector were not constant but depended on the actual inclination. The closer the actual inclination i was to a 90◦ inclination (final orbit), the smaller the increments of i were, which should allow the astronaut to precisely change the inclination. This was supposed to yield data on the accuracy of the human controlled orbit inclinations.

Figure 7 (right) shows the inclination of the inspector with respect to DECK. The inclination was changed to a maximum of 75◦ but the target inclination of about 90◦ was never reached. It was difficult for the astronauts to recognize an orbiting plane of SPH1 and to reference its inclination to DECK. The crew feedback stated that "it was a little difficult to understand the use of [...] commands to control inclination and alignment. Initially, the movement of the primary satellite seemed to move in a plane parallel to the deck, then moved more vertical. It was difficult to predict with the two keys and the continual motion of the primary satellite when the plane was perpendicular to deck." On the one hand, it seemed to be complicated to recognize a 3D path since the motion of the SPHERES is comparably slow (approximately 2 cm/s).
On the other hand, the change of inclination did not happen as a step but there was a transition phase of the inspector from one orbit inclination to the other. This motion superimposed the actual circumnavigation and made it difficult for the astronaut to judge the actual inclination. Therefore, this maneuver was not stopped with a rough alignment of SPH1 with the Z-axis of SPH2 (inspector "above" or "under" the target), but instead to the side. This can be seen in Figure 7 (left) as the region in which the alignment adjustment of SPH1 took place. The following alignment adjustment of SPH1 confused the astronauts since they did not see a distinct motion of the inspector as a reaction to the keyboard commands. The crew stated that "For the alignment adjustment, it was not clear what the satellite response was to the [...] command[s]." This is due to the fact that the alignment adjustment incremented only 1◦ per keyboard command. This command was intended to allow precise repositioning of the satellite in 3D but instead made the control appear unresponsive.

Even though this test was not operated as expected, it yielded very valuable information on the human interaction with SPHERES. It is evident that motion patterns are hard to recognize for a human supervisor, since the SPHERES velocity is comparably slow. The orientation of a SPHERES circumnavigation orbit is difficult to judge in three dimensions. Therefore, re-orientations of orbits or motions to change an orbit should be executed and evaluated by low-level autonomous processes instead of high-level human commands. Further, precise positioning (within the magnitude of 1◦) of a SPHERES satellite is too complicated for the human supervisor to judge, since in three dimensional space reference points are not easily utilizable and the commanded SPHERES motion is superimposed by random noisy motion.
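The text only states that the commanded inclination increments shrank as the actual inclination approached the 90◦ target; the actual flight rule is not given. A purely illustrative proportional law, with an assumed gain, captures that behavior:

```python
def inclination_increment(i_deg, i_target=90.0, gain=0.1):
    """Illustrative increment law for the supervised inclination change:
    the commanded step shrinks as the actual inclination i approaches the
    target orbit. The proportional form and the gain are assumptions for
    illustration only, not the SPHERES flight code."""
    return gain * (i_target - i_deg)
```

With this rule a keyboard command at i = 20◦ produces a much larger step than one at i = 80◦, and the step vanishes at the target inclination, which matches the qualitative description above.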

3.3 Manual Abort

While the inspector satellite is orbiting the target satellite to support the ground operator with his decision making process, it is crucial to monitor the distance between the two satellites. Collision risk is usually minimized by autonomous algorithms, which monitor the relative positions. Since the algorithms are not always fail-safe, as the DART collision showed (cp. section 1.1), the efficiency of a human controlled abort command is evaluated in the following experiment. The incorporation of a manual abort capability into mission operations can yield an additional safety layer, but false positive abort commands within an inspection task can have a significant time and fuel penalty associated with moving the satellite to a safe position and restarting proximity operations. Accordingly, the test metric to be evaluated in this test is the number of false abort commands.

Figure 7: 3D motion of the inspector satellite around the target at the center of the test volume (left) and the commanded inclinations of the inspector over time (right)

Human perception of a potential collision was evaluated in this experiment. After the two satellites underwent an initial positioning period, the inspector (SPH1) orbited around the target (SPH2) in a plane parallel to DECK. It was constantly pointing with its Velcro side (again the imaginary camera system) at the target satellite. A series of simulated errors occurred, which caused the inspector to decrease its orbiting radius and thus its absolute distance to the target satellite. It was the astronaut's task to observe the motion and detect possible collisions. In the case a potential threat to the satellites was detected, a keyboard command was used to initiate an abort maneuver. Following an abort command, SPH1 increased its distance by returning to the original formation. Six possible collision maneuvers were implemented into the test, each with a different relative position of SPH1 to SPH2, to test how the spatial positions of both satellites influence the astronaut's judgment of distance. In order not to provoke an actual collision between satellites, an autonomous abort procedure was also implemented into the test, which was executed whenever the distance between both satellites fell below a certain threshold.

Figure 8 (left) shows the maneuvers which caused SPH1 to decrease its orbiting radius. Out of those 6 maneuvers, 2 were aborted manually. The other 4 were terminated by the autonomous abort maneuver. Figure 8 (right) depicts the absolute distance between the SPH1 and SPH2 (center) positions over time. Keeping in mind that the SPHERES feature a span between 21 cm and 23 cm, it can be deduced that the astronauts were able to judge relative distances between both satellites very well and that distances of 25 cm to 26 cm have not been considered as a potential threat of a collision. There was no manual abort initiated until the distance amounted to less than 24 cm.
This demonstrated that human judgment of distances in the three dimensional environment is much better than the judgment of attitude. Further, the abort command has only been used in the two described incidents, meaning that there were no false positives initiated. The crew stated that "The satellites separated on their own except for two times when the [...][abort] command was selected. The satellites were very responsive to the command." The test suggests that a human supervisor with a third person perspective has a good understanding of when a proximity maneuver of a SPHERE constitutes a possible threat to another spacecraft. With this configuration an operator abort capability could add safety and robustness to an autonomous mission.


Figure 8: Absolute distance between the inspector and the target satellite (left) and the motion of the inspector in the x-y plane (right)
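The abort arbitration described above combines a crew-issued keyboard abort with an autonomous abort below a distance threshold. A minimal sketch follows; the function name is ours, and the 22 cm threshold is an assumption for illustration (the text only says "a certain threshold", and reports that manual aborts occurred below 24 cm):

```python
def abort_decision(distance_m, manual_abort_pressed, auto_threshold_m=0.22):
    """Illustrative abort arbitration for the Manual Abort test:
    the crew may abort at any time; otherwise an autonomous abort
    fires once the inter-satellite distance drops below a threshold
    (0.22 m is an assumed value, not from the flight software)."""
    if manual_abort_pressed:
        return "manual abort"
    if distance_m < auto_threshold_m:
        return "autonomous abort"
    return "continue"
```

In the flight test this layered logic produced 2 manual and 4 autonomous aborts across the six simulated error maneuvers, with no false positives.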

3.4 Human Navigation

After there is sufficient data of the proximity environment available from the initial scanning procedures, the observer satellite can approach special areas of interest under manual control. For an inspection, the satellite may examine fine details of an area of interest; for docking, an observer satellite can closely monitor the final berthing phase to provide more accurate distance measurements. The human navigation experiment simulated a close approach maneuver of an inspector to a target location using manual keyboard commands. The five global metrology beacons served as distinct locations to be inspected and were supposed to be approached to a distance of about 30 cm in an arbitrary sequence until the satellite exhausted its propellant.

Figure 9 (left) shows the motion of the inspector in the test volume and the actual beacon positions (right). The inspector started in the middle of the test volume and was first controlled by the crew to approach beacon number 1 in the upper back right corner. Here, the minimum distance between the inspector and the approached beacon was 39.2 cm. Beacon 4 was the second target of the crew and the minimum distance amounted to 31.0 cm. The trend of distances indicates that the crew appeared to get more confident in approaching the targets and came closer towards the demanded 30 cm of distance. The final target, before the tank depleted, was beacon 2. Here the crew was able to reach a distance of 28.0 cm. This test again showed the crew's ability to judge distances in 3D space very well, which supports the test results of the Manual Abort tests in TS 19.

The crew did not directly maneuver from one beacon to the other (e.g. directly from beacon 1 to beacon 4). Before approaching the new target, the inspector was steered back to the middle of the test volume. These re-centering maneuvers can also be seen in Figure 10, which shows the background telemetry5 of the inspector satellite.
The position coordinates as well as the velocities are depicted. The x, y, and z position of SPH1 were close to zero in the maneuvers at t = 210 s (after beacon 1), at t = 340 s (after beacon 4), and again at t = 420 s (after beacon 2, shortly before the tank depleted). Here the absolute velocity of the inspector was close to zero as well. Table 2 summarizes the performance criteria and the performance of the three beacon approaches conducted in this experiment. The re-orientation phase for beacons 4 and 2 in the middle of the test volume increases time and fuel6. Hence, the overall performance (1.3 and 2.6) decreases compared to beacon 1, which with Poverall = 4.1 constitutes a slight overperformance. The reference values for evaluating the respective performances are listed at the bottom of Table 2; tref and fref are the means of time and fuel during the tests. dtarget = 30 cm is the target distance to SPH2, whereas dref = 30 cm was chosen for this and all following positioning tasks to allow comparability between tests with different target distances. Further, vref = 2 cm/s is a heuristic value for an acceptable residual velocity. The weighting factors αi are constant for all tests and were chosen to put more emphasis on the fuel consumption since this is the most crucial aspect in space. The performances Pd,v = Pd + Pv are listed separately to allow comparability with subsequent tests, for which fuel and time values will differ since the task has changed.

5Background telemetry is available for all SPHERES during the test sessions. Not only the state vectors, but also the attitude in form of quaternions and angular rates are logged.
6The fuel consumption is stated as a fraction of the overall tank volume.


Figure 9: 3D motion of SPH1 in the test volume when approaching the beacons

Figure 10: Background telemetry of the human navigation test

3.5 Collision Avoidance Steering Law

Table 2: Human Navigation Performance

Target     time      fuel     distance   final velocity   Pd,v   Poverall
Beacon 1   116.6 s   2.57 %   39.2 cm    1.1 cm/s         1.1    4.1
Beacon 4   169.4 s   6.23 %   31.0 cm    3.04 cm/s        0.4    1.3
Beacon 2   91.8 s    4.64 %   28.0 cm    3.06 cm/s        0.4    2.6

tref = 126 s, fref = 4.5 %, dref = 30 cm, dtarget = 30 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2

The following tests evaluated the interaction between a human controlled spacecraft and autonomous collision avoidance mechanisms. This section shortly describes the collision avoidance steering law (Katz et al., 2011). It operates on the closest point of approach (CPA), defined as the point in space and time in a relative trajectory when two objects are closest. For two satellites, starting at time t = t0, the motion of satellites 1 and 2 is assumed to continue along the current velocity direction. Defining the relative position from satellite 1 to satellite 2, (x2 − x1), as r12 and the relative velocity, (ẋ2 − ẋ1), as u12, the time evolution of the relative position is

r12(t) = r12(t0) + u12(t0)t.   (4)

For clarity, the time index and subscripts will be omitted from this point forward. All values can be assumed to be from the perspective of satellite 1 at t = t0 unless otherwise specified. Taking the squared magnitude of the relative position and minimizing with respect to time gives the time at closest point of approach tCPA.

d^2 = r(tCPA)^T r(tCPA)   (5)

(d/dt) d^2 = 2(r^T u) + 2 tCPA (u^T u) = 0   (6)

tCPA = −(r^T u) / (u^T u)   (7)

The distance at closest point of approach, dCPA, can be calculated by evaluating Eq. 5 at tCPA and taking the square root.

dCPA = sqrt( r(tCPA)^T r(tCPA) ) = sqrt( r^T r + (r^T u) tCPA )   (8)

Potential collisions can be identified by examining the pair (dCPA, tCPA). If tCPA > 0 and dCPA < da, where da is a critical distance threshold, then the avoidance controller should be activated to avoid the collision. The collision avoidance controller steers a pair of satellites away from a potential collision by commanding a change in velocity that increases the magnitude of dCPA. To minimize the required velocity correction, the thrust is directed along the gradient of dCPA with respect to the satellite's current velocity, ẋ1.

∂dCPA/∂ẋ1 = (∂/∂u) sqrt( r^T r + (r^T u) tCPA ) · (∂u/∂ẋ1)
          = −(1/(2 dCPA)) ( tCPA r^T + (r^T u) ∂tCPA/∂u )   (9)

∂tCPA/∂u = ( 2(r^T u) u^T − (u^T u) r^T ) / (u^T u)^2   (10)
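A compact numerical sketch of the steering law follows: the CPA quantities of Eqs. 7-8, the activation test on (dCPA, tCPA), and the gradient and thrust magnitude of Eqs. 9-10 together with the selection of Eqs. 11-12 below. The function names are ours and this is not the SPHERES flight implementation.

```python
import numpy as np

def cpa(r, u):
    """Time and distance at closest point of approach (Eqs. 7-8) for
    relative position r = x2 - x1 and relative velocity u = x2' - x1'."""
    r, u = np.asarray(r, float), np.asarray(u, float)
    t_cpa = -np.dot(r, u) / np.dot(u, u)              # Eq. 7
    d_sq = np.dot(r, r) + np.dot(r, u) * t_cpa        # Eq. 5 evaluated at t_cpa
    return t_cpa, np.sqrt(max(d_sq, 0.0))             # Eq. 8 (clamped for rounding)

def collision_predicted(r, u, d_a):
    """Avoidance triggers when t_CPA > 0 and d_CPA < d_a."""
    t_cpa, d_cpa = cpa(r, u)
    return t_cpa > 0.0 and d_cpa < d_a

def avoidance_thrust(r, u, d_t):
    """Impulse direction g/|g| and magnitude k (Eqs. 9-12). The leading
    minus sign in Eq. 9 comes from du/dx1' = -I, since u = x2' - x1'."""
    r, u = np.asarray(r, float), np.asarray(u, float)
    ru, uu = np.dot(r, u), np.dot(u, u)
    t_cpa, d_cpa = cpa(r, u)
    dtcpa_du = (2.0 * ru * u - uu * r) / uu**2        # Eq. 10
    g = -(t_cpa * r + ru * dtcpa_du) / (2.0 * d_cpa)  # Eq. 9
    return g / np.linalg.norm(g), (d_t - d_cpa) / np.linalg.norm(g)  # Eq. 12
```

For a near head-on pass with r = (1, 0.1, 0) m and u = (−1, 0, 0) m/s, dCPA is 0.1 m; requesting dt = 0.5 m yields an impulse of 0.4 m/s along (0, −1, 0), after which the predicted dCPA grows to roughly 0.46 m, close to the linearized target.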

The gradient of dCPA can also be used in a linear approximation to select the thrust magnitude. Assuming that the satellite provides an impulsive change in velocity along g^T = ∂dCPA/∂ẋ1 with some magnitude k, the approximation for dCPA is given by Eq. 11. After specifying the desired dCPA target, dt, the resulting thrust magnitude is calculated from Eq. 12.

dCPA = dCPA,0 + g^T (g/‖g‖) k = dCPA,0 + ‖g‖ k   (11)

k = (dt − dCPA,0) / ‖g‖   (12)

3.6 Manual Avoidance

The three following tests of Manual, Shared, and Supervised Avoidance consider an inspector satellite which has to be moved from one observation point to another with an obstacle, the target satellite, in the path. The goal of the tests was to evaluate collision avoidance techniques with varying levels of onboard autonomy. A purely autonomous collision avoidance controller can be counter-productive for a human operator. It takes away the user's control capability and might steer the inspector in a different direction than the operator anticipated. This counteraction is hypothesized to have a noticeable impact on the task performance of the operator. Thus, the three tests were designed to benchmark different collision avoidance techniques with increasing autonomy of the onboard controller. The crew's general task was identical in the three tests. The inspector (SPH1) was guided by use of the keyboard commands to a position 50 cm FWD of the target (approx. 0.8 m FWD of the center of the test volume) and stopped with as little velocity as possible behind the target satellite (SPH2). Figure 11 shows the general test setup and the anticipated motion of the satellites. There was no autonomous collision avoidance (CA) involved in the Manual Avoidance test, in which the crew had to take care of the CA manually, while guiding the inspector with 3 translational degrees of freedom. Autonomous CA was introduced in the Shared Avoidance test, in which the crew had to guide the inspector in 3 degrees of freedom again. The third, Supervised Avoidance test utilized the autonomous CA but the crew was only enabled to control the inspector in 1 degree of freedom, towards and away from the target satellite. Figure 12 (left) shows the motion of the inspector satellite projected onto the x-y and the x-z plane for the Manual Avoidance test.
It can be seen that the crew maneuvered the inspector with sufficient distance to SPH2 into the target area with a 30 cm radius around the target position (50 cm FWD of SPH2). The test was automatically terminated as soon as SPH1 was positioned in the target area with an absolute velocity of less than 2 cm/s. The inspector was controlled at first to a position in close vicinity (in x direction) of the target area's center. However, the position was changed afterwards to closer proximity to SPH2. The distance between the SPHERES satellites in x direction was most complicated to judge since SPH2 was in the line of sight at that point in time. Table 3 shows the results of this CA test with purely manual CA. The crew needed 79 seconds to accomplish the task. The final distance between the inspector and the target position was 22 cm, whereas the final velocity amounted to 1.41 cm/s. 3.32 % of the tank volume was used in this test. There was no collision between the inspector and the target. It showed that the crew was capable of judging relative distances (this continues to support results of test session 19). The crew successfully avoided collision at all points in the test.

Figure 11: Initial setup of the collision avoidance tests, including an inspector satellite and a target satellite as obstacle

3.7 Shared Avoidance

For this test, the autonomous CA was activated whenever the relative distance between the two satellites was smaller than 40 cm, moving the satellite away from a potential collision. Figure 12 (middle) shows the projected motion into the x-y plane and the x-z plane. The CA area, in which the autonomous CA algorithm controlled SPH1, is made visible next to the target area. Figure 12 (middle) shows that a direct path from the starting to the target position was executed. The motion was efficient, considering the length of the path. There was almost no change in y direction. Furthermore, there was no overshooting in x direction, i.e. the x coordinate was monotonically increasing until SPH1 was stopped in the target area. Another interesting fact, considering human-spacecraft interaction, is that for manual avoidance a downward motion (change in z position) was chosen by the crew and not a sideways motion, as common on Earth.

Table 3 summarizes the results in comparison to the manual avoidance test. It shows that the crew was confident with the shared CA algorithms. The task execution time amounted to 24 seconds, which is less than one third of the time required when the crew alone had to take care of CA (manual avoidance). The final distance from the target position amounted to 21 cm. This can be considered equivalent to the first test. Further, the implementation of the shared CA enabled the crew to concentrate on the final velocity. Controlling the satellite to 0.27 cm/s, the final velocity was almost zero. The fuel consumption was halved in comparison to the first test and no collisions between SPH1 and SPH2 took place. The crew utilized and trusted the shared CA, which resulted in a faster task performance and less fuel consumption. As can be seen in Figure 12 (middle), the path of SPH1 intersected the CA area and the autonomous control overruled the manual control in that area, but it did not result in a distinct change of the crew control.
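The hand-over between crew steering and the autonomous CA in the Shared Avoidance test can be illustrated by a simple arbitration rule. The 40 cm activation radius is taken from the description above; the command interface and the `avoid_fn` callback are hypothetical constructs for illustration, not the SPHERES flight software.

```python
import numpy as np

def shared_control(cmd_manual, x1, x2, avoid_fn, activation_radius=0.40):
    """Illustrative shared-control arbitration: the crew command passes
    through unless the inspector (x1) is within the CA activation radius
    of the target (x2); inside that radius the autonomous avoidance
    command overrules the crew."""
    separation = np.linalg.norm(np.asarray(x2, float) - np.asarray(x1, float))
    if separation < activation_radius:
        return avoid_fn(x1, x2)   # autonomous CA takes over
    return cmd_manual
```

With this arbitration the operator keeps full authority on the approach path, and the autonomy only intervenes inside the 40 cm zone, which matches the observed behavior that the CA overruled the crew only while the path crossed the CA area.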

3.8 Supervised Avoidance


Figure 12: Motion of the SPHERES around the obstacle during the Manual Avoidance (left), the Shared Avoidance (middle), and the Supervised Avoidance (right) experiments

In this test, the collision avoidance responsibilities were shared between the crew and the machine autonomy. Again, the crew was asked to position SPH1 approx. 50 cm FWD of SPH2, but the crew was only allowed to control SPH1 in the x (FWD/AFT) direction. An autonomous position controller for the y- and z-axes controlled both coordinates to zero, and the CA technique could interrupt all three controls. The activation of the CA algorithm can be derived from the characteristics of the velocities in Figure 13. Between 80 s and 100 s, frequent changes in velocity can be seen, which means that the crew was steering the inspector towards the target satellite (in the x direction) and the autonomous CA became active and controlled position and velocity whenever a possible collision was detected. This can also be seen in Figure 12 (right), where the projected motion is depicted. Inside the CA area the motion is jagged, as the collision avoidance controller alternates with the y and z centering controllers. In contrast, outside the CA area the approach of the inspector towards the target is straight, since only the x direction was controlled by the crew. Table 3 summarizes the results of all three CA tests with human interaction. The supervised avoidance helped the crew to finish the task in less than half of the time needed for manual avoidance. The propellant consumption of the supervised avoidance is higher than that of the manual avoidance. This is due to the fact that in the CA area the crew control in connection with the centering controllers was counteracting the autonomous CA algorithm. The final positioning error was approximately one third less than in the other two tests, whereas the final velocity was nearly as high as in the manual avoidance test. The table also summarizes the values for the individual performance criteria and the performance of the three avoidance experiments. The fuel and time reference values are an average of all avoidance tests, i.e. the three tests presented here and the four delayed avoidance tests in Section 3.10. Manual and supervised avoidance are comparable in their performance due to similar values for fuel and velocity. The shared avoidance test shows a strong overperformance (Poverall > 4), which underlines the confidence of the crew in using this controller. The performance suggests that this grade of autonomy should be given preference over manual and supervised avoidance when considering a human-controlled inspector. Furthermore, since the CA experiments constitute in general a positioning or re-locating task, they can be compared to the human navigation experiments in Section 3.4. The performances Pd,v are therefore of interest, and the performance of shared avoidance (Pd,v = 1.2) is similar to approaching the first beacon (Pd,v = 1.1). It seems reasonable to assume that using shared CA, the crew could concentrate on the task of re-locating the inspector without having to take care of collision issues. This was similar to approaching beacon 1 from the middle of the test volume without an obstacle or having to re-orientate.
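The control allocation of the supervised test — the operator commanding x only, autonomous centering on y and z, and CA able to interrupt all three axes — can be sketched as below. The PD gains and the repulsive takeover law are illustrative assumptions, not the controllers actually flown.

```python
CA_RADIUS = 0.40   # m, activation distance, as in the shared-avoidance test
KP, KD = 0.4, 0.8  # hypothetical PD gains for the y/z centering controllers


def supervised_control(manual_x, pos, vel, obstacle):
    # The operator commands the x axis only; y and z are driven to zero
    # by PD controllers; a detected collision risk interrupts all axes.
    rel = [p - o for p, o in zip(pos, obstacle)]
    dist = sum(c * c for c in rel) ** 0.5
    if dist < CA_RADIUS:
        # CA takes over every axis: push directly away from the obstacle
        return [0.5 * c / dist for c in rel]
    return [manual_x,
            -KP * pos[1] - KD * vel[1],   # center y
            -KP * pos[2] - KD * vel[2]]   # center z
```

The alternation between the CA branch and the centering branch as the satellite skirts the 40 cm zone is what produces the jagged motion visible inside the CA area in Figure 12 (right).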

Table 3: Collision Avoidance Performance

Grade of Autonomy    time    fuel     distance   final velocity   Pd,v   Poverall
Manual               79 s    3.32 %   35.8 cm    1.41 cm/s        0.8    2.4
Shared               24 s    1.76 %   68.8 cm    0.27 cm/s        1.2    4.6
Supervised           34 s    3.57 %   64.5 cm    1.84 cm/s        0.6    2.4

tref = 100 s, fref = 2.5 %, dref = 30 cm, dtarget = 50 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2

Figure 13: Background telemetry of the supervised avoidance test


Figure 14: Beacon positions in the JEM (left) and 3D motion of the satellite towards the beacons in the JEM test volume (right)

3.9 Delayed Human Navigation

As an extension to the tests in the previous sections, this section investigates the effects of communication delay on human-spacecraft interaction. In general, there are two approaches to communicate with robotic components in low Earth orbit. A direct communication link between an operating ground station and the teleoperated satellite can limit the time delay to approximately 30 ms, as Rokviss (Albu-Schaffer et al., 2006) showed. However, this approach limits the operational time to 5-13 minutes (depending on the orbit height) for 4-6 of approximately 15 orbits per day. Using a single geostationary communication satellite for data relay, on the other hand, can expand the time windows to more than 40 minutes on each orbit. The disadvantage of this approach is the increased time delay, since the signal has to travel round trip through the geostationary orbit at 36,000 km altitude. The time delay can amount to 600 ms or more, as experiments with the Advanced Relay Technology Mission (ARTEMIS) have shown (Stoll et al., 2009b). This number multiplies if more relay satellites are utilized for global coverage or if deep space robotics is considered. The purpose of the Delayed Human Navigation tests was to evaluate the crew's performance in navigating the inspector to a certain observation position. Manual keyboard commands were translated into thrust impulses under the influence of communication delays of 1 s and 3 s. Further, the human ability to anticipate the delay and effectively apply keyboard commands with respect to the satellite's movement was examined. The crew chose a working position in the PORT-AFT corner of the experiment module with the laptop workstation pointing toward STBD. As a result, there was a spatial rotation of approximately 90° around the z-axis between the crew's personal reference frame, i.e. the crew's line of sight (PORT-STBD), and the ISS-fixed coordinate frame, which also had to be anticipated when applying keyboard commands.
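The magnitude of the relay delay quoted above can be checked with a back-of-the-envelope calculation: on a round trip the signal crosses the roughly 36,000 km to the geostationary relay four times (ground to relay, relay to the LEO spacecraft, and the same way back). Slant ranges, on-board switching, and ground processing, which push the measured value toward 600 ms, are neglected in this sketch.

```python
C = 299_792.458      # km/s, speed of light in vacuum
GEO_ALT = 36_000.0   # km, approximate geostationary altitude

legs = 4  # ground -> relay -> spacecraft, and the same way back
propagation = legs * GEO_ALT / C  # seconds of pure signal travel time
print(f"{propagation * 1000:.0f} ms")  # roughly 480 ms
```

Pure propagation thus already accounts for about 480 ms, consistent with the 600 ms or more observed once processing and routing overhead are included.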
After a period of initial positioning, the crew was asked to steer the satellite near one of the beacons on the STBD side of the module (cp. Figure 14, left), at a distance of 30 cm and with minimal residual velocity. Both beacons were at the same distance from the satellite's initial position. The difference between them, however, was the crew's alternate perspective from the controlling laptop. The criteria for evaluating the crew's task performance were again the time to relocate the satellite, the fuel consumption, as well as the final position and velocity. Figure 14 (right) shows the satellite's motion in the test volume for the first test, without communication delay. The green dots represent the initialization and finish points of the target approach. Table 4 shows the compilation of all performance values gained from the three test runs. According to crew feedback, there were no difficulties in appropriately issuing keyboard commands. As can be seen in Figure 14 (right), the satellite's movement is straight and aims precisely at the approached target. The tests substantiate results from previous tests, concluding that the crew is able to judge the spatial distance between the satellite and a fixed object such as a beacon very well from different perspectives. Furthermore, throughout all tests a low residual velocity was achieved for final positioning. Considering the performance Pd,v, the precision for final position and velocity in this test appears to be independent of the current communication delay. The time and fuel consumption for the target approach, on the other hand, increased significantly with higher communication delays. Thus, the crew required more time and an increased number of translational force commands to gain comparable final positioning values under the influence of communication delay. The overall performance decreased with increasing time delay, as Table 4 shows.
The beacon approach in the undelayed case is comparable to the first beacon approach of the human navigation experiment in Section 3.4. Both show a better performance than their respective reference values and suggest a strong confidence of the crew in commanding the inspector. Previous ground experiments on the air-bearing table indicated that most test subjects, when confronted with a comparable 3-DoF scenario for the first time, show a significant increase of 1.4 points on average in overall task performance during the first two repetitions of the same test. After the second iteration, however, only a minor increase could be observed, leaving the values at a constant level for all following tests. In this case and within the given 6-DoF zero-g environment, however, the crew did not show such a behavior. In contrast, the first executed non-delayed test run resulted in an overall task performance 1.3 points higher than the second executed test run with 1 s of communication delay.
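The effect the crew had to anticipate — a thrust command acting on the satellite only after the channel delay has elapsed — can be reproduced with a simple fixed-delay buffer. This is an illustrative one-axis simulation under assumed numbers, not the SPHERES implementation.

```python
from collections import deque


class DelayedChannel:
    """Fixed-delay command channel: inputs emerge after `delay` seconds."""

    def __init__(self, delay, dt):
        self.buffer = deque([0.0] * int(round(delay / dt)))

    def send(self, cmd):
        self.buffer.append(cmd)
        return self.buffer.popleft()


# Double-integrator satellite on one axis, 1 s delay, 0.1 s time step
dt = 0.1
chan = DelayedChannel(delay=1.0, dt=dt)
x = v = 0.0
for k in range(30):
    cmd = 0.01 if k < 10 else 0.0   # operator thrusts during the first second
    a = chan.send(cmd)              # ...but the impulse acts one second late
    v += a * dt
    x += v * dt
print(f"x = {x:.4f} m, v = {v:.4f} m/s")
```

The satellite only starts moving a full second after the operator begins commanding, which is exactly the behavior the crew had to predict when timing braking impulses near the beacon.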

3.10 Delayed Avoidance

The purpose of this test was to investigate different collision avoidance techniques with a focus on manual and shared control under the influence of communication delay. The test setup was identical to the previously executed non-delayed collision avoidance tests. The position of the crew was identical to that in the Human Navigation tests (cp. Section 3.4). Since the direction of motion was from PORT to STBD (cp. Figure 14), the crew had an undisturbed view of the scene in the first part of each test. For the second part, however, the view was partly obstructed by the blocking satellite. Figure 15 shows the motion of the satellites in the test volume for the manual control 1 s-delay test without CA support. As can be seen in the figure, a downdrift was observed by the astronaut throughout all tests. This may have been caused by the air flow in the JEM. Except for the first run, the Delayed Avoidance tests once more proved the crew's good judgment of distance and the ability to manually position a SPHERES satellite with a low final velocity. However, the performance values that were reached are slightly worse compared with the previous tests. This was likely due to the fact that the direct view of the scene was partly blocked by the obstacle satellite during the final positioning phase. This can especially be observed when considering the position and velocity values of the 1 s-delay test with CA. The crew always chose a relatively wide path around the obstacle compared to the previous, undelayed avoidance tests. The CA algorithm therefore did not influence the operator's performance in this case. A decrease of the time performance can be observed with increasing time delay and active CA. This trend has to be confirmed in upcoming test sessions that incorporate higher communication delays.

Table 4: Delayed Human Navigation Performance

Communication Delay   Target     time      fuel     distance   final velocity   Pd,v   Poverall
none                  Beacon 5   67.0 s    1.56 %   37.1 cm    0.14 cm/s        1.7    4.3
1 s                   Beacon 5   70.6 s    2.6 %    41.8 cm    0.24 cm/s        1.5    3.0
3 s                   Beacon 4   133.4 s   2.25 %   34.4 cm    0.02 cm/s        1.8    2.8

tref = 70 s, fref = 2.1 %, dref = 30 cm, dtarget = 30 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2


Figure 15: 3D motion of the satellites in the JEM test volume

Table 5: Delayed Avoidance Performance

Communication Delay   time      fuel     distance   final velocity   Pd,v   Poverall
none                  144.8 s   2.51 %   96.5 cm    0.41 cm/s        0.2    1.8
1 s                   120.4 s   2.08 %   35.6 cm    0.37 cm/s        1.3    3.5
none, with CA         135.1 s   2.21 %   55.9 cm    0.34 cm/s        1.6    3.5
1 s, with CA          160.4 s   2.17 %   50.3 cm    0.73 cm/s        1.6    3.3

tref = 100 s, fref = 2.5 %, dref = 30 cm, dtarget = 50 cm, vref = 2 cm/s, αt = αd = αv = 1, αf = 2

4 Conclusions and Future Outlook

The research presented here focused on human-machine interaction in space. In particular, experiments were performed in which astronauts controlled experimental satellites aboard the ISS. Different concepts and levels of autonomy were evaluated. A summary of the results and approaches for future work is presented.

4.1 Conclusions

A case study of a teleoperated inspector satellite was performed and partial aspects of such a mission were simulated in a series of experiments. The inspection started with a fly-around of the inspector around the servicer. The tests showed that orbital motion patterns are hard for a human operator to recognize. The orientation of a SPHERES orbit is difficult to judge in three dimensions. This was seconded by the crew feedback, which summarized that it was complicated to use manual commands to control inclination and alignment. The subsequent Manual Abort experiment investigated the potential benefit of a manual abort command, which can overrule the control algorithms and stop the autonomous operations. During the test, no false positive abort command was issued, which suggests the crew had a good understanding of when a proximity maneuver constitutes a possible threat. It was concluded that a manual abort command could add another safety layer to mission operations. While the performance of the two initial tests was evaluated with the help of the inclination accuracy and the number of false positives, all subsequent tests used the distinct performance metrics of task completion time, fuel usage, distance to target, and residual velocity. They were applied since the experiments required the astronaut to command the inspector from a starting position to a goal position near the target satellite. During this motion, different levels of autonomy, collision avoidance techniques, and the influence of time delay on the task performance were evaluated. The human navigation test showed that relocating the inspector to a target location at another spacecraft can be executed precisely and fuel-efficiently. However, when re-locating the spacecraft to a subsequent target, a re-orientation phase might be necessary for the human operator. This indirect motion from the start to the goal via a reference position is time- and fuel-intensive and decreased the overall performance. Thus, means have to be developed to provide the operator with additional (artificial) reference points or planes in order to increase the performance. The subsequent experiments considered the influence of different collision avoidance techniques on the task performance. For this purpose, the inspector had to be relocated to a position behind the target spacecraft without provoking a collision. Out of the different levels of autonomy that were applied in the scenario, shared autonomy showed a high performance and therefore might enhance the operation of a teleoperated inspector. This assumption is affirmed by the performance values reached for time, fuel, and residual velocity, which are superior to the other control levels. Since a ground controlled spacecraft will have an inherent time delay in the communication channel, the final experiments considered operations that were subject to a communication delay. According to crew feedback, the Delayed Human Navigation tasks were well executable under time delays of up to 3 seconds.

Table 6: Cross Evaluation of the Experiment Performances

                         orientation &    distance     collision   delayed   interaction
test                     re-orientation   estimation   avoidance   control   with autonomy   mean
Poverall (TS20 & TS24)   3.1              3.0          3.1         3.2       3.4             3.2
Pd,v (TS20 & TS24)       0.9              1.1          1.0         1.6       1.1             1.1
The combined performance value for distance and velocity affirms this statement with an approximately constant value throughout the three tests (0 s, 1 s, and 3 s delay). The overall performance decreased distinctly as the time delay increased. This is due to the fact that the feedback to the operator commands is delayed, which decreased the performance value for time. The Delayed Avoidance tests showed that a CA algorithm does not negatively influence the overall performance of the operator and reconfirmed that precise distances and low residual velocities can be achieved under the influence of time delay. Since every experiment is usually only run once, the tests were designed in a way that partial aspects of one experiment are re-verified in a follow-up experiment. Therefore, it was shown in multiple tests that relative distances between two spacecraft can be achieved precisely by manual control. No collisions took place in any of the tests, which emphasizes the ability of a human operator to steer spacecraft in three-dimensional space reliably and precisely given a third-person perspective of the scene. This principle of redundant (primary or secondary) test objectives was emphasized in Table 1, which can be used to cross-evaluate all experiments with positioning tasks7 under the considered aspects. Table 6 shows a cross evaluation over all experiments for which a common criterion such as delayed control applies. The corresponding means of Pd,v and Poverall were computed. Considering the overall performances Poverall, it can be concluded from Table 6 that all task performances are approximately equal and none of the individual tasks was performed distinctly worse. However, it can also be seen that the interaction with autonomy adds benefit to the operations. The crew proved that tasks with a delay in the communication channel can be fulfilled with a high degree of efficiency.
Assisting elements such as autonomous collision avoidance can improve the performance of the human operator when controlling an inspector satellite. They give more confidence to the operator and draw the attention to the actual task. That way the operator does not primarily have to care about CA and works more efficiently, which is reflected in the high overall performance reached. Moreover, the values for Pd,v in Table 6 confirm that the corresponding performance for orientation tasks is slightly worse than the mean, whereas the performances reached in distance estimation, collision avoidance, and interaction with autonomy are comparable. A remarkable result of this evaluation is that the delayed control also resulted in a strong average distance and velocity performance. An explanation for this circumstance may be that the human operator tried to act more carefully in an environment in which the feedback to issued commands is not instantaneous and, therefore, the likelihood of failure is considered higher compared with non-delayed manual operations.

7 Only experiments from TS20 and TS24 were taken into consideration, since identical performance metrics were available.

4.2 Future Outlook

Future research plans aim at simulating a whole inspection scenario including three experimental satellites. This reflects a more complex OOS or ISRA scenario with an observer, servicer, and target satellite. The human operator will control two of the satellites (servicer and inspector) and simulate procedures such as proximity operations, inspection tasks, and docking. Enhanced virtual reality tools shall be embedded not only in the SPHERES ground GUI but also in the flight GUI. The performance of control tasks can be limited by line-of-sight conditions and satellites blocking the path. As already tested on Earth, a virtual reality representation would provide the possibility to change the perspective and may also increase the task performance in space. As seen in the tests, manual re-location tasks may demand a re-orientation phase. Creating artificial reference points by means of virtual or augmented reality may make this re-orientation phase obsolete. Further, it was evident that tasks are executed more slowly under the influence of time delay. Virtual reality techniques can therefore be used to present a pre-calculated state feedback to the operator in order to improve the task performance. This approach is already used in the MIT ground test environment (cp. Section 2.1). Such augmentation approaches can further be helpful to display distances, attitude data, and CA zones. Further, a stereo-vision camera system is scheduled to extend the SPHERES hardware system, providing the option of realistically controlling the satellites from the perspective of an observer on Earth with limited accessibility to the remote environment.
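A minimal form of the pre-calculated state feedback mentioned above is a predictor that extrapolates the last received state over the known round-trip delay, so the operator steers the displayed future state rather than the stale one. The sketch below assumes force-free drift between thrust impulses; the function name and interface are illustrative, not part of the SPHERES GUI.

```python
def predict_state(pos, vel, delay):
    # Extrapolate a force-free drifting satellite over the channel delay.
    # pos [m], vel [m/s]: last received telemetry; delay [s]: round trip.
    return [p + v * delay for p, v in zip(pos, vel)]


# With 3 s of delay, a satellite drifting at 1 cm/s along x is shown
# about 3 cm ahead of its last reported position.
predicted = predict_state([0.5, 0.0, 0.0], [0.01, 0.0, 0.0], 3.0)
print(predicted)
```

Even this simple linear predictor removes much of the apparent lag for slow drifts; displaying it alongside the raw telemetry (together with distances, attitude data, and CA zones) is one way the planned augmentation could be realized.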

Acknowledgments

This work was supported in part by the post-doctoral fellowship program of the German Academic Exchange Service (DAAD). The authors acknowledge the valuable input and hints by Alessandra Babuscia, Thomas Dirlich, and Sebastian J. I. Herzig.

References

Abbott, J., Marayong, P., and Okamura, A. (2007). Haptic virtual fixtures for robot-assisted manipulation. In Springer Tracts in Advanced Robotics, volume 28, pages 49–64. Springer-Verlag, Heidelberg.

Abiko, S., Lampariello, R., and Hirzinger, G. (2006). Impedance control for a free-floating robot in the grasping of a tumbling target with parameter uncertainty. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1020–1025.

Albu-Schaffer, A., Bertleff, W., Rebele, B., Schafer, B., Landzettel, K., and Hirzinger, G. (2006). Rokviss - robotics component verification on iss: current experimental results on parameter identification. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 3879–3885.

Boeing-Company (2006). Microsatellite launch. Crosslink – The Aerospace Corporation magazine of advances in aerospace technology, 7:1–2. Retrieved August 1, 2011, from http://www.aero.org/publications/crosslink/fall2006/headlines.html.

Bosse, A. B., Barnds, W. J., Brown, M. A., Creamer, N. G., Feerst, A., Henshaw, C. G., Hope, A. S., Kelm, B. E., Klein, P. A., Pipitone, F., Plourde, B. E., and Whalen, B. P. (2004). Sumo: spacecraft for the universal modification of orbits. In Tchoryk, P., Jr., and Wright, M., editors, Spacecraft Platforms and Infrastructure, volume 5419, pages 36–46. SPIE.

Chandler, P. R., Pachter, M., Swaroop, D., Fowler, J. M., Howlett, J. K., Rasmussen, S., Schumacher, C., and Nygard, K. (2002). Complexity in UAV cooperative control. In American Control Conference, Anchorage, USA.

Choset, H., Knepper, R., Flasher, J., Walker, S., Alford, A., Jackson, D., Kortenkamp, D., Burridge, R., and Fernandez, J. (1999). Path planning and control for aercam, a free-flying inspection robot in space. In IEEE International Conference on Robotics and Automation (ICRA)., volume 2, pages 1396 –1403.

Chung, S.-J. and Miller, D. W. (2008). Propellant-free control of tethered formation flight, part 1: Linear control and experimentation. Journal of Guidance Control and Dynamics, 31:571–584.

Davis, T. M. and Melanson, D. (2004). Xss-10 microsatellite flight demonstration program results. In Tchoryk, P., Jr., and Wright, M., editors, Spacecraft Platforms and Infrastructure, volume 5419, pages 16–25. SPIE.

Dorais, G. and Gawdiak, Y. (2003). The personal satellite assistant: an internal spacecraft autonomous mobile monitor. In IEEE Aerospace Conference Proceedings, volume 1, pages 1–348.

Enright, J. and Hilstad, M. (2004). The spheres guest scientist program: Collaborative science on the iss. In Proceedings of the IEEE Aerospace Conference, Big Sky, Montana, USA.

Fredrickson, S. E., Abbott, L. W., Duran, S., Jochim, J. D., Studak, J. W., Wagenknecht, J. D., and Williams, N. M. (2003). Mini aercam: development of a free-flying nanosatellite inspection robot. In Tchoryk, P., Jr., and Shoemaker, J., editors, SPIE Space Systems Technology and Operations, volume 5088, pages 97–111. SPIE.

Greaves, S., Boyle, K., and Doshewnek, N. (2005). Orbiter boom sensor system and shuttle return to flight: Operations analyses. In AIAA Guidance, Navigation, and Control Conference and Exhibit.

Hirche, S. and Buss, M. (2007). Transparent data reduction in networked telepresence and teleaction systems. In Networked Telepresence and Teleaction Systems. Part II: Time-delayed communication, volume 16, pages 532–542. MIT Press Journals.

Hirzinger, G., Brunner, B., Dietrich, J., and Heindl, J. (1993). Sensor-based space robotics-rotex and its telerobotic features. IEEE Transactions on Robotics and Automation, 9(5):649 –663.

Imaida, T., Yokokohji, Y., Doi, T., Oda, M., and Yoshikawa, T. (2001). Ground-space bilateral teleoperation experiment using ets-vii robot arm with direct kinesthetic coupling. In IEEE International Conference on Robotics and Automation (ICRA), volume 1, pages 1031 – 1038.

Katz, J. G., Saenz-Otero, A., and Miller, D. W. (2011). Development and demonstration of an autonomous collision avoidance algorithm aboard the iss. In IEEE Aerospace Conference, Big Sky, USA.

Krenn, R., Landzettel, K., Kaiser, C., and Rank, P. (2008). Simulation of the docking phase for the smart- olev satellite servicing mission. In 9th International Symposium on Artificial Intelligence, Robotics and Automation in Space. JAXA.

Madison, R. (2000). Micro-satellite based, on-orbit servicing work at the air force research laboratory. In IEEE Aerospace Conference Proceedings, volume 4, pages 215 –226.

Massimino, M., Sheridan, T., and Roseborough, J. (1989). One handed tracking in six degrees of freedom. In IEEE Int. Conf. on Systems, Man, and Cybernetics, Cambridge, USA.

Mukherji, R., Rey, D. A., Stieber, M., and Lymer, J. (2001). Special purpose dexterous manipulator (spdm) advanced control features and development test results. In Proceedings of the 6th International Symposium on Artificial Intelligence and Robotics and Automation in Space: i-SAIRAS 2001.

Munir, S. and Book, W. (2003). Control techniques and programming issues for time delayed internet based teleoperation. Journal of Dynamic Systems, Measurement, and Control, 125:205–214.

Niemeyer, G., Preusche, C., and Hirzinger, G. (2008). Telerobotics. In Springer Handbook of Robotics. Springer-Verlag, Heidelberg.

Nolet, S. (2007). Development of a Guidance, Navigation and Control Architecture and Validation Pro- cess Enabling Autonomous Docking to a Tumbling Satellite. PhD thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Nolet, S. and Miller, D. W. (2007). Autonomous docking experiments using the spheres testbed inside the iss. Sensors and Systems for Space Applications, 6555(1):65550P.

Ortmaier, T. (2007). Robot assisted force feedback surgery. In Applications in Advances in Telerobotics. Springer-Verlag, Heidelberg.

Pong, C. (2010). Autonomous thruster failure recovery for underactuated spacecraft. PhD thesis, Mas- sachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Radcliffe, A. (2002). A Real-Time Simulator for the SPHERES Formation Flying Satellites Testbed. PhD thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Ridao, R., Carreras, M., Hernandez, E., and Palomeras, N. (2007). Underwater telerobotics for collaborative research. In Applications in Advances in Telerobotics. Springer-Verlag, Heidelberg.

Roderick, S., Roberts, B., Atkins, E., and Akin, D. (2004). The ranger robotic satellite servicer and its autonomous software-based safety system. IEEE Intelligent Systems, 19.

Ruangpayoongsak, N., Roth, H., and Chudoba, J. (2005). Mobile robots for search and rescue. In IEEE International Safety, Security and Rescue Robotics Workshop.

Rumford, T. (2003). Demonstration of autonomous rendezvous technology (dart) project summary. In Space Systems Technology and Operations Conference, Orlando, USA.

Ryu, J.-H., Preusche, C., Hannaford, B., and Hirzinger, G. (2005). Time domain passivity control with reference energy following. IEEE Transactions on Control Systems Technology, 13:737–742.

Saenz-Otero, A. (2005). Design Principles for the Development of Space Technology Maturation Laboratories Aboard the International Space Station. PhD thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

SCAMP SSV (2006). Supplemental camera platform space simulation vehicle (scamp ssv). Retrieved De- cember 1, 2010, from http://www.ssl.umd.edu/projects/SCAMP/SSV/index.shtml.

Sellmaier, F., Spurmann, J., and Boge, T. (2010). On-orbit servicing missions at dlr/gsoc. In International Astronautical Congress (IAF).

Sheridan, T. (1992). Automation and Human Supervisory Control. MIT Press, Cambridge.

Shoemaker, J. and Wright, M. (2003). Orbital express space operations architecture program. In Tchoryk, P., Jr., and Shoemaker, J., editors, SPIE Space Systems Technology and Operations, volume 5088, pages 1–9. SPIE.

Stoll, E., Artigas, J., Letschnik, J., Walter, U., Pongrac, H., Preusche, C., Kremer, P., and Hirzinger, G. (2009a). Ground verification of the feasibility of telepresent on-orbit servicing. Journal of Field Robotics, 26(3):287–307.

Stoll, E. and Kwon, D. (2009). The benefit of multimodal telepresence for in-space robotic assembly. In Proceedings of the IASTED International Conference on Robotics and Applications.

Stoll, E., Letschnik, J., Walter, U., Artigas, J., Kremer, P., Preusche, C., and Hirzinger, G. (2009b). On-orbit servicing - exploration and manipulation capabilities of robots in space. IEEE Robotics & Automation Magazine, 16(4):29–33.

Stoll, E., Saenz-Otero, A., and Tweddle, B. (2010). Multimodal human spacecraft interaction in remote environments - a new concept for free flyer control. In Machine Learning and Systems Engineering, pages 1–14. Springer-Verlag, Heidelberg.

Tanner, N. and Niemeyer, G. (2006). High-frequency acceleration feedback in wave variable telerobotics. IEEE/ASME Transactions on Mechatronics, 11:119–127.

Tweddle, B. E. (2010). Computer vision based navigation for spacecraft proximity operations. Master's thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.

Varatharajoo, R. and Kahle, R. (2005). A review of spacecraft conventional and synergistic systems. Aircraft Engineering and Aerospace Technology, 77:131–141.

Varatharajoo, R., Kahle, R., and Fasoulas, F. (2003). Approach for combining attitude and thermal control systems. Journal of Spacecraft and Rockets, 40:657–664.