
An Overview of Space Robotics © 2007 David L. Akin

3.1 Introduction

Spacecraft, dating back to the first Earth satellites, have always been referred to as “robotic” systems. For the purposes of this overview of robotic technologies, the term “robotics” will be restricted to systems which have the capability to directly interact with their environment. This may take the form of mobility, grasping, or full dexterous manipulation. The intent of this effort is to look at the current state of space robotics, and to extrapolate to future capabilities and applications, while suggesting critical development steps that need to be taken to bring the far-term vision to fruition. Very large future space systems may well require the use of nonterrestrial materials, which will in turn need extensive robotic capabilities for their collection, processing, and assembly. For this reason, the technologies and applications considered will include planetary surface systems, as well as on-orbit robotics.

3.1.1 Potential Applications of Space Manufacturing

The “market” envisioned for the purposes of this report is the development of a capable and robust space manufacturing infrastructure. The need for increased capabilities, whether in intelligence-gathering, science, or operations, drives toward larger and larger space systems. Taking observational astronomy as a representative (and unclassified) case, the transition from the 2.4-meter optics of the Hubble Space Telescope to the 6-meter optics of the James Webb Space Telescope has required a paradigm shift from the launch of a monolithic spacecraft to the development of a highly articulated deployable system. Indeed, JWST may well be at (or slightly beyond) the technological feasibility limits for single-launch deployables: one developmental design required over 100 deployment actuators, each of which was a single-point failure for the success of the mission. To move beyond the 6-meter horizon for quantum increases in system performance, whether for a terrestrial planet finder or a highly sensitive SIGINT platform, will at a minimum require on-orbit assembly, where systems and components are integrated and tested post-launch. Some of the largest potential systems, such as solar power satellites, only become feasible when we move beyond space assembly into the realm of space manufacturing: transporting raw materials to the production site, and fabricating components in situ for assembly and testing.

Why would any rational program manager accept the risks of space manufacturing? Given the predominant role of Earth launch costs in overall program budgets, a robust space manufacturing capability represents a beneficial form of “bootstrapping”. Spacecraft on conventional launch systems are traditionally volume-limited rather than mass-limited. Launching raw materials instead of finished components significantly increases the bulk density and packing efficiency of the transportation system, thereby increasing efficiency in the single most expensive element. Since the predominant structural load sources are launch vehicle accelerations and vibrations, a system fabricated on-orbit can be optimized solely for operating loads in microgravity, which are (in general) 2-3 orders of magnitude less than launch loads. This gives rise to innovative designs of extremely lightweight systems, with much lower transportation costs than a traditional system. Of course, these economic advantages are greatly enhanced when the initial development costs of the space manufacturing infrastructure are supported by government funding, and made available to a range of space systems to amortize the costs over multiple programs.

The ultimate benefit of space manufacturing, especially for extremely large projects such as space solar power, lies in the use of nonterrestrial materials. Earth is the largest rocky body in the solar system; the energy costs of getting materials from Earth’s surface to orbit are therefore the highest possible. Nonterrestrial materials are available in abundance from the lunar surface, from Apollo-Amor Earth-crossing asteroids, or (deeper in space) from Mars, main-belt asteroids, or planetary moons throughout the solar system. A study performed under contract to NASA Marshall Space Flight Center in 1978 showed that solar power satellites could be fabricated and assembled in space from lunar materials for significantly lower costs than for Earth-launched components. Despite the high-tech nature of the required systems, it was found that 95% of the spacecraft could be fabricated in situ from lunar materials. With the possible discovery by the Clementine and Lunar Prospector orbiters of water ice in the vicinity of the lunar poles, the feasible percentage of nonterrestrial material would further increase, and the costs of establishing a lunar mining and transportation infrastructure would drop precipitously.

3.1.2 Robotics Requirements for Space Manufacturing

What, then, will be required to bring space manufacturing into reality? For near-term applications, which will likely be limited to assembly and servicing, primary interest will be in manipulation capabilities, along with the ability to maneuver with high accuracy around an extended microgravity work site. With expansion to on-orbit fabrication and use of nonterrestrial materials, the entire field of planetary surface mobility becomes relevant, along with human-robot interactions and dedicated processes such as mining, refining, and parts fabrication.

In developing a taxonomy for robotics requirements, manipulation may be broken down into categories of constrained and unconstrained motion. Unconstrained motion consists of transporting a payload (of whatever mass) from one state (position and attitude) to a desired final state. This may involve avoiding obstacles in the manipulator work space, but is unconstrained motion in that no external contact is allowed as part of the maneuver. This is clearly the simplest of all manipulation tasks, and technological limitations are largely based on the capabilities of the maneuvering robot arm. Following an inadvertent maneuver or actuator runaway, the system must bring the payload to a controlled stop short of obstacles without exceeding the limitations of the end effector grasp or any braking systems on the robot actuators. This creates limits on the energy imparted to the payload, which in turn sets limits on the speed of motion.

There are several external considerations, even in unconstrained manipulator motions. Reaction control system thruster firings can transmit substantial loads to the grappled payload; it is routine, for instance, to inhibit primary RCS firings during remote manipulator system (RMS) operations. Reaction torques from manipulator motions will be imparted to the base of the arm, and into the host spacecraft. Many of the velocity limitations applied to the Space Station Remote Manipulator System (SSRMS) are due to reaction torques and control system disturbances to the space station itself, which has limited authority to absorb these disturbance forces without saturating its attitude control system.

Constrained motion consists of the controlled motion of a payload in the presence of physical constraints, which limit available degrees of freedom and create significant disturbance forces in the manipulator itself. A typical example of this would be berthing a payload into a receptacle: guides are typically provided to ensure that the mating mechanism is properly aligned, but these guides produce friction, lock the manipulator payload into a single orientation during the final insertion maneuver, and give rise to the possibility of binding or other failure of the procedure. Most manipulators planned for dexterous activity incorporate some form of active compliance control, wherein external forces (such as insertion friction) are sensed and automatically nulled out, to prevent binding and ease operator workload. Manipulators without active compliance, such as the shuttle RMS, typically utilize passive compliance elements such as flexibility in the manipulator arm, or design contact tasks to minimize insertion requirements.
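The active compliance behavior described above can be illustrated with a minimal admittance-style sketch: a sensed contact force is converted into a small corrective velocity that opposes it, so insertion friction is nulled rather than allowed to build up. The gain, control period, and force values below are hypothetical and chosen only for illustration; real systems add filtering, deadbands, and safety limits.

```python
# Illustrative admittance-style compliance loop (not the control law of any
# flown manipulator). Sensed contact forces are "nulled out" by commanding a
# small corrective velocity proportional to the measured force, so that
# insertion friction does not build up and bind the payload.
import numpy as np

K_ADMITTANCE = 0.002   # m/s per N, hypothetical gain
DT = 0.01              # control period, s

def compliance_step(x, v_nominal, f_measured):
    """One control cycle: blend the nominal trajectory velocity with a
    corrective term that moves the end effector away from sensed loads."""
    v_corrective = -K_ADMITTANCE * f_measured   # oppose the contact force
    v_cmd = v_nominal + v_corrective
    return x + v_cmd * DT                       # integrate the commanded motion

# Example: a 20 N lateral contact force during a slow insertion produces a
# small sideways correction (about -0.4 mm this cycle) while the nominal
# insertion motion along -z continues.
x = np.zeros(3)
v_nominal = np.array([0.0, 0.0, -0.01])         # insertion along -z, m/s
f_measured = np.array([20.0, 0.0, 0.0])         # misalignment side load, N
print(compliance_step(x, v_nominal, f_measured))
```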
An alternate dimension to use as a metric for manipulation is the complexity and precision required of the end effector. This may be measured in the number of degrees of freedom required to perform a particular task, to grasp a specific interface, or merely to touch or actuate a switch. This is also associated with the required precision of motion to use an interface, ranging from the ±4 inches of RMS and SSRMS grapple fixtures to the precise (±0.005 in) control required to torque a standard bolt or use a microconical interface.

3.2 Space Robotics State-of-the-Art

This section will endeavor to provide a concise overview of the current state-of-the-art of space robotics, as context for the following section on necessary future development activities. While focusing on systems flown and successfully operated to date, this section will also include systems in the development process for future flights, and major systems developed for past programs which, for whatever reason, did not survive until flight. It does not attempt to cover all of the development activities in laboratories which may be relevant to future space robotics systems, other than in identifying essential research efforts which are not currently underway.

3.2.1 Manipulation Systems

“Manipulation” implies the ability to physically interact with the local environment. Whether actuating a control, positioning a large payload, or servicing a spacecraft, this category of robot must have the capability to “reach out and touch” items in the work site.

3.2.1.1 Historical Space Manipulation Systems

Space manipulation systems may be subdivided into three general categories: sampling systems, which are low-dexterity manipulators used for scooping soil and rocks in the immediate vicinity of planetary lander spacecraft; positioning arms, or “cranes”, for maneuvering and positioning large payloads such as space station modules; and dexterous systems, capable of utilizing a greater number of interfaces and providing finer control, particularly for constrained tasks such as module removal and replacement.

3.2.1.1.1 Sampling Systems

The first true robotic systems flown in space were sampling arms on the Lunar Surveyor and Mars Viking landers. These were prismatic (linear extension) systems mounted to two degree-of-freedom (2 DOF) articulated bases, used for scooping and trenching samples from the local regolith. The Surveyor arm used a pantograph-style articulation (Figure 1) for extension, while the Viking arm was based on a bi-stem linear deployment system (Figure 2). End effectors on these systems were simple scoops, although the Viking system incorporated a grating for controlling the size of sample particles deposited into the analysis instruments.

Figure 1: Lunar Surveyor (note pantographic sampling arm to right)
Figure 2: Viking Lander Sampling Arm

Following an interregnum in planetary surface missions after Viking, a reinvigorated Mars program in the late 1990s led to advanced sampling systems. The Sojourner rover, discussed in the mobility section, is in itself a sampling system, but not germane to this particular discussion. The 1998 Mars Polar Lander mission included the development of an advanced, extremely lightweight revolute (rotary-joint) sampling manipulator (Figure 3); unfortunately, no operational data were obtained from this mission due to a landing system failure.

Figure 3: Mars Polar Lander with Sampling Manipulator

The 2003 Mars Exploration Rover (MER) program developed a 4 DOF Instrument Deployment Device in lieu of a sampling arm. This small manipulator uses a rotating-turret end effector to bring a rock abrasion tool, imaging spectrometer, and video microscope up to local rock samples (Figure 4). This represents an extension and further refinement of the Pathfinder paradigm, in which the robotic systems are used to bring instruments to in situ samples, rather than return samples to centrally located instruments.

Figure 4: Mars Exploration Rover 2003 Instrument Deployment Device

3.2.1.1.2 Positioning Systems

The bi-stem approach of the Viking sample arm was also adopted in the first true positioning manipulator used in space. A fixed bi-stem deployable boom was mounted on Skylab to transport film to and from the Apollo Telescope Mount (Figure 5). This freed the EVA crew from having to translate while controlling the large film packages, and represents the first use of robotic systems to aid and improve extravehicular performance.

Figure 5: Robotic Film Transfer System on Skylab

The first major robotic system deployed operationally in space was the Space Shuttle Remote Manipulator System (RMS). This is a 50-foot-long, 6 DOF manipulator used to deploy and retrieve payloads from the orbiter, and is the first instantiation of a class of robotics best referred to as "cranes", as they are primarily used for positioning large payloads in a manner similar to cranes on Earth (Figure 6).

Figure 6: Space Shuttle Remote Manipulator System

The end effector of the RMS is a "snare wire" system, in which three taut wires surround and capture a metal rod capped by a knob to prevent the wires from slipping off. As soon as the wires are tightened around the rod, the payload is captured, and cannot be inadvertently released. By retracting the snare wires into the barrel of the end effector, the flat plate of the end effector is compressed against a matching plate on the grapple fixture, providing a rigid connection for payloads up to the maximum payload capacity of the orbiter (Figure 7). Early experience with the capture of free-flying spacecraft, such as the attempted capture of a payload trunnion fitting on the Solar Maximum spacecraft repair (STS 41-C), clearly demonstrated the importance of this "capture before contact" feature for operational end effectors (Figure 8).

Figure 7: RMS End Effector
Figure 8: Trunnion Pin Capture Mechanism on MMU

Limitations of the shuttle RMS include its restriction to a single grapple fixture design, which has a substantial volume and mass impact on payloads. This also prevents the use of the RMS to capture spacecraft which do not incorporate an RMS grapple fixture; if a need arises to use the RMS to capture or control such a satellite, extraordinary means must be used to retrofit a grapple capability, usually through the use of EVA. A classic example of this was the retrieval of the Palapa and Westar satellites on STS 51-A (Figure 9). The EVA crew had to add a capture bar to the target communications satellites, inserted by an astronaut using the Manned Maneuvering Unit (MMU) as a free-flying spacecraft to capture the satellites and stabilize them for RMS capture. A similar attempt to provide an RMS retrieval capability on a stranded Intelsat spacecraft failed due to unappreciated effects of microgravity rotational dynamics, and ultimately resorted to a three-person EVA to accomplish the capture and stabilization manually (Figure 10).

Figure 9: Manned Maneuvering Unit Capture of Palapa Satellite
Figure 10: Three-Person EVA Manual Capture of Intelsat 6


Although joint torques for the RMS can be inferred from motor currents, there is no direct operational measurement of forces and torques applied to the manipulator payload. This prevents any implementation of active compliance control, which is essential for highly constrained assembly operations. When the first ISS shuttle assembly mission had to berth the free-flying Zarya module to the Unity node, the RMS placed Zarya in position above the docking interface and was then put into a "limp" mode (Figure 11). The orbiter thrusters were then used to effect the actual docking procedure.

Figure 11: Zarya being positioned for berthing by RMS

As part of a technology experiment, a force-torque sensor was flown on STS 62 in March 1994. This experiment involved the development and testing of an RMS force-torque sensor, which was used to demonstrate that applied loads could be monitored and displayed to the RMS operator in real time. This was a component of the “Dexterous End Effector” flight experiment, which aimed at increasing the grapple capabilities of the RMS. Since the end effector in question consisted only of an electromagnet and a matching steel plate on the target, the use of the term “dexterous” was one of the more egregious misnomers of space robotics history.

The second such "crane" system was closely related to the first, as the Canadian Space Agency and prime contractor MD Robotics delivered the Space Station Remote Manipulator System, or SSRMS, to the International Space Station in April 2001. SSRMS is clearly an evolutionary extension of the shuttle RMS, with greater payload capacity (100,000 pounds) and seven degrees of freedom. Perhaps the greatest innovation of SSRMS is its midpoint symmetry and ability to maneuver end-over-end around the ISS. Data, video, and command links may pass through either end effector, as long as it is attached to a Power-Data Grapple Fixture (PDGF) on ISS. SSRMS may also be based on the Mobile Base System as part of the station servicing infrastructure. One price paid for this operational flexibility is ambiguity in the kinematics: for any arbitrary orientation of the two end effectors, there are in general eight possible arm poses. This significantly increases the complexity of arm trajectory planning and kinematic analysis (Figure 12).
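The configuration ambiguity just described can be illustrated with a much simpler case. The sketch below solves the inverse kinematics of a planar two-link arm, which already admits two joint solutions ("elbow-down" and "elbow-up") for a single tip position; the additional joints and base options of a 7 DOF arm like SSRMS multiply such branches into the eight poses noted above. Link lengths and the target point are hypothetical, and this is not the actual SSRMS kinematic model.

```python
# Simplified planar two-link arm: a single end-effector position admits both an
# "elbow-down" and an "elbow-up" joint solution. Illustration of configuration
# ambiguity only, not the SSRMS kinematics.
import math

L1, L2 = 1.0, 1.0   # hypothetical link lengths, meters

def ik_two_link(x, y):
    """Return both (shoulder, elbow) joint solutions reaching point (x, y)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)   # law of cosines
    c2 = max(-1.0, min(1.0, c2))                                 # clamp numerical noise
    solutions = []
    for elbow in (math.acos(c2), -math.acos(c2)):                # the two branches
        k1 = L1 + L2 * math.cos(elbow)
        k2 = L2 * math.sin(elbow)
        shoulder = math.atan2(y, x) - math.atan2(k2, k1)
        solutions.append((shoulder, elbow))
    return solutions

for shoulder, elbow in ik_two_link(1.2, 0.8):
    print(f"shoulder = {math.degrees(shoulder):7.2f} deg, "
          f"elbow = {math.degrees(elbow):7.2f} deg")
```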

Figure 12: Space Station Remote Manipulator System

A further feature of SSRMS, which has already been used to advantage in ISS operations, is its design for on-orbit servicing. Following the failure of a command string in one actuator, that joint of the SSRMS was replaced on-orbit by EVA. This capability for servicing is critical for long-term reliable operations in the space environment.

ISS also illustrates an interesting feature of robotic operations in a complex space worksite. While RMS and SSRMS are highly capable systems, some activities are best done with more limited specialized systems. Both the US and Russian programs have independently developed crew-operated positioning devices to manipulate large masses during EVA servicing. The ORU Transfer Device (OTD) is either manually operated or actuated through the use of a standard EVA power tool, and is capable of placing orbital replacement units (ORUs) within 18 feet of the mounting point (Figure 13). A similar system, Strela, was developed by the Russians for their space station and adapted for external operations on the Russian segments of ISS. Strela has articulation similar to that of the OTD; exclusively human-powered, its thirty-foot maximum extension is considerably greater than that of the OTD boom (Figure 14).

Figure 13: ORU Transfer Device
Figure 14: Strela Crane

A highly innovative approach to 6 DOF spatial positioning was taken by the Charlotte robot system, flown on STS 63 in February 1995. The Charlotte system uses eight tension lines, attached to various locations in the Spacehab pressurized module. By extending and retracting the cables while maintaining tension, the robot could be maneuvered in three translational degrees of freedom, and (with some constraints) three rotational DOF as well. The system was designed to maneuver along the instrument lockers in the module, and had a simple grasping end effector intended to allow it to actuate simple switches under ground control (Figure 15).
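The geometry underlying this cable-driven approach is straightforward: once the wall anchor points and the attachment points on the robot body are known, each commanded cable length is simply the distance between an anchor and its (translated and rotated) attachment point. The sketch below uses a hypothetical anchor and body layout, not the actual Charlotte configuration, and ignores cable tension limits and sag.

```python
# Inverse-kinematics sketch for a cable-driven positioner in the spirit of
# Charlotte: each cable length is the distance from its wall anchor to its
# attachment point on the translated and rotated body. The geometry is
# hypothetical, not the flight configuration.
import numpy as np

# Eight hypothetical wall anchors at the corners of a 2 x 2 x 2 m work volume.
ANCHORS = np.array([[x, y, z] for x in (0.0, 2.0)
                              for y in (0.0, 2.0)
                              for z in (0.0, 2.0)])

# Eight hypothetical attachment points at the corners of a 0.2 m cube body,
# paired with the corresponding-corner anchors in the same ordering.
BODY_POINTS = 0.1 * np.array([[x, y, z] for x in (-1.0, 1.0)
                                        for y in (-1.0, 1.0)
                                        for z in (-1.0, 1.0)])

def cable_lengths(position, yaw):
    """Cable lengths for a desired body position (m) and yaw rotation (rad)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    attach_world = BODY_POINTS @ rot.T + position   # rotate, then translate
    return np.linalg.norm(ANCHORS - attach_world, axis=1)

# Example: body at the center of the volume with a 30 degree yaw.
print(cable_lengths(np.array([1.0, 1.0, 1.0]), np.radians(30.0)))
```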

Figure 15: Charlotte Cable-Driven Robot

3.2.1.1.3 Dexterous Systems

Two systems have flown as experiments that may legitimately be considered the precursors of truly dexterous robotics. In April 1993, the Robot Technology Experiment (ROTEX) was developed and flown by DLR on STS 55. The ROTEX manipulator arm was a small 6 DOF arm with a generic gripping end effector. The primary purpose of the experiment was to demonstrate robotic activities in a variety of control modes, including supervisory control and autonomy. One of the most advanced aspects of the hardware was a high degree of integration of multimodal sensing systems, including stereo vision, laser distance finders, and force and tactile sensors. The system was capable of capturing a free-floating object in the work cell, inside the Spacelab pressurized module (Figure 16).

Figure 16: Graphics Simulation of ROTEX in Work Cell

The Japanese Manipulator Flight Demonstration (MFD) experiment flew on shuttle mission STS 85 in August 1997. As an early test of technology for the Japanese Experiment Module on ISS, this mission flew a 5 DOF prototype of the Small Fine Arm (Figure 17), to be used for supporting the Japanese exposed experiment facility. The manipulator was teleoperated from both the ground and onboard, and performed a number of simple servicing-related experiments. To pass the shuttle payload safety review process, manipulator motions were highly constrained, ensuring that no inadvertent motions or payload releases could occur, and that the system would never violate the payload bay door moldlines, which would have prevented emergency deorbit and landing.

Figure 17: Manipulator Flight Demonstration Payload Integration In late 1997, the Japanese also flew the Engineering Test Satellite 7 (ETS-VII) mission, to investigate automation and robotics technologies for space. The system actually consisted of two linked spacecraft: a target spacecraft for rendezvous and capture experiments (described in a following section), and a “chaser” spacecraft which had the primary systems. The chaser spacecraft also included a 5 DOF manipulator with interchangeable end effectors, and a series of experiments on telerobotic spacecraft servicing and structural assembly (Figure 18).

Figure 18: ETS-VII “Chaser” and “Target” Spacecraft


3.2.1.2 Planned Space Manipulation Systems, Past and Future

Over the past decades, a number of robotic systems have been proposed or partially developed for space operations. In considering systems for inclusion in this overview, it was not feasible to include every system that was ever tested in a laboratory, or that was proposed for flight demonstration. The criterion adopted for accepting systems was that the system must have been clearly intended for flight, substantially developed, and at least partway through the flight qualification process. Some of the systems discussed are no longer under active development, terminated at some point due to financial, programmatic, technical, or other roadblocks. This section will also document systems still funded and under development for future flight opportunities.

All of the systems documented in this section fall under the category of dexterous robotics. Development of new positioning robots is limited by the singular venue for operations (ISS) as well as “market saturation” by RMS and SSRMS. While future planetary sampling manipulators will be developed, the phasing between missions (e.g., the next Mars landing opportunity will be in 2009) means that no future sampling arm is currently beyond the conceptual design phase.

In the mid-1980s, with the first major development effort for Space Station Freedom, Congress mandated the allocation of 10% of "station development funds" for an advanced robotic servicing capability. This was instantiated as the Flight Telerobotic Servicer, or FTS. As shown in Figure 19, this system adopted crude anthropomorphism, with two dexterous 7 DOF manipulators for servicing operations and “head”-mounted stereo cameras. The unit was designed to be maneuvered by SSRMS, or to berth at a power data grapple fixture for local operations.

Figure 19: An Early Grumman Concept for the Flight Telerobotic Servicer

Critical design requirements for FTS were set “blind”: that is, in the absence of specific information on the impact of those requirements. As a result, the program was a textbook example of “feature creep”, leading to greater and greater overall cost estimates for the final robot. Combined with the perception that FTS was externally mandated rather than filling a recognized need, the program was cancelled well before it reached flight status.

Several technological advances did come from the abortive FTS development process. The system architecture, with local processors co-located in the arm links and an overall data management system in the body, was an effective engineering approach that was later proven in the Ranger robotic servicing system. Had this been fully implemented, the manipulators would have been highly effective devices. However, since FTS was designed to operate for up to thirty years on-orbit with minimal servicing, a requirement was established for full operation of the robotic manipulators following any credible failure. This mandated a full analog wiring harness down the length of the arm, despite the fact that the primary system would operate with nothing more than power and data lines. These immense backup wiring bundles (with hundreds of signal and power lines) required significant new developments in flat cables, which would perform adequately in the innovative flat-wrap articulation developed for controlling wire bundles in roll joints.

Early in the FTS development process, two “development flight tests” were identified to mature the robotics technology prior to operational use of the full robot. Following FTS program cancellation, the first of these (DTF-1) was pursued as a way to validate the technologies and to build a technology basis for future robotics developments. Support from this effort allowed the completion of the first FTS flight manipulator (Figure 20), along with limited qualification testing. Even this limited flight experiment was eventually canceled due to cost growth and budget limitations.

Figure 20: Manipulator for Development Test Flight 1

During the 1980s, the Robotics Research Corporation developed a series of dexterous laboratory robots based on a set of modular components. With support from NASA Goddard Space Flight Center, one of these RRC laboratory robots was developed for flight certification. This was called the Servicing Aid Tool, or SAT, and was envisioned as a robotic augmentation for Hubble Space Telescope servicing activities (Figure 21). The SAT protoflight manipulator was developed and passed successfully through space flight environmental testing. Without sufficient funding to take the system through flight test, the program was cancelled.

Figure 21: Servicing Aid Tool

In the early 1990s, it was clear that the field of dexterous space robotics was impeded by a severe paradox: without a specific mission requirement, no funding was available for robotic flight hardware development. However, since no robotic flight hardware existed, no responsible mission manager would admit a need for robotic servicing, fearing that the robot development costs would be taken from their program budget. This was exacerbated by the cost growth of the FTS program, which by some estimates would have approached $1B by the time of flight. To break through this “Catch-22”, the University of Maryland Space Systems Laboratory proposed a low-cost dexterous robotic flight experiment to demonstrate that robotic systems could be (a) highly capable, and (b) affordable even with tight budget constraints. The Ranger Telerobotic Flight Experiment (Figure 22) was conceived as a low-cost free-flying robotic servicing system, capable of being launched on a small vehicle such as Pegasus. Once on orbit, it would use the launch vehicle upper stage as a “target” spacecraft on which to perform robotic servicing tasks, as well as to demonstrate grappling, proximity operations, and orbital maneuvering and rendezvous. NASA supported the concept (then estimated at $7M to flight, as developed in an academic environment) by funding vehicle development, but required the University of Maryland to seek out a donated launch opportunity.

Figure 22: Artist’s Concept of Ranger Telerobotic Flight Experiment

Throughout the early 1990s, the University of Maryland developed a fully operational underwater version of Ranger TFX as a validation and training unit for the flight hardware (Figure 23). This system was extensively tested for the planned space operations, and was also used for groundbreaking investigations of cooperative human/robotic work sites (Figure 24). It consisted of two 7 DOF dexterous manipulators for accomplishing the servicing tasks, a 6 DOF positioning manipulator for a stereo camera pair, and a 7 DOF grappling arm for attachment to the target spacecraft. Based on an extensive study of space telerobotics done by the SSL for NASA Marshall in 1980-1982, Ranger was designed to allow the use of EVA servicing interfaces, rather than requiring a target spacecraft to incorporate dedicated robotic interfaces. This required a more generically capable end effector than had been used on space robotics systems developed up to that time. Rather than attempt to develop an anthropomorphic hand, the UMd team developed interchangeable end effectors, allowing specialized tools to be provided for any necessary interface.

Figure 23: Ranger Neutral Buoyancy Vehicle Grappled to Target Mockup
Figure 24: Ranger NBV Performing EVA/Robot Cooperative Activities

While extensive efforts were made to find a free launch vehicle, this task proved to be more daunting (and difficult) than the technology development. In 1996, realizing that no organization was likely to donate a $25-50M launch vehicle for a $7M experiment, NASA directed the University of Maryland to redesign Ranger to fly on a space shuttle mission. The redesigned program, renamed the Ranger Telerobotic Shuttle Experiment (TSX), kept the manipulator allocations, kinematics, and general layout, but exchanged the mobility base and grappling arm for a large positioning leg to provide rigidizable attachment to a Spacelab pallet carrier (Figure 25). Due to the program redirection, as well as additional costs associated with human-rating the robotic systems, the total program cost target was revised to $18M, which included all money spent in the early TFX program. Over the next five years, the University of Maryland team built a full-scale mockup of the complete robot (Figure 26), as well as highly capable dexterous manipulators, capable of test operations both underwater and in the laboratory environment (Figure 27), and an underwater prototype of the positioning leg (Figure 28). Modular sections of the robot arms passed successfully through flight environment testing, and the overall system was approved through the NASA Phase 2 Payload Safety Review Program. (It is worth noting that no other American-made robot system has ever made it this far through the NASA flight safety system.) The neutral buoyancy validation and training unit was in final integration and testing, and 70% of all flight hardware had been fabricated and inspected, when the program was canceled due to NASA budget constraints following the recognition of ISS budget overruns. Some operations testing and limited development of Ranger robotics systems are currently ongoing under internal funds at the University of Maryland.

Figure 25: Computer Simulation of Ranger Telerobotic Shuttle Experiment
Figure 26: Neutral Buoyancy Mockup of Ranger TSX

Figure 27: Ranger Dexterous Arms Performing Cooperative Activities
Figure 28: Ranger Positioning Leg in Neutral Buoyancy Testing

At the same time as the Ranger program, NASA also supported a robotic development program at the NASA Johnson Space Center. Robonaut was intended to “push the envelope” of the anthropomorphic paradigm: the goal of this program was to develop an artificial EVA astronaut, capable of doing the same tasks, and using the same interfaces and tools, as a human in a space suit. Many of the design features of Robonaut, from the arrangement of limbs down to joint articulations and routing of wiring harnesses, are directly drawn from human anatomy (Figure 29). Perhaps the most impressive technology development of Robonaut has been the development of highly capable robotic “human hands” (Figure 30). Tests have demonstrated the ability of Robonaut to use the full suite of EVA tools, and to perform tasks ranging from removing and replacing multilayer insulation to sampling and inspecting geological samples. Like Ranger, Robonaut has also been used to investigate potential cooperative roles between dexterous robots and space-suited humans.

Figure 29: Robonaut Dexterous Anthropomorphic Hand
Figure 30: Two Robonauts Performing Cooperative Structural Assembly

Also like Ranger, NASA support of Robonaut was largely terminated in the late 1990s, as ISS budget impacts were assessed, and organizational changes in NASA Headquarters resulted in the dissolution of the highly productive NASA Telerobotics Intercenter Working Group. Continued Robonaut development is currently supported almost exclusively by the Defense Advanced Research Projects Agency (DARPA).

As the final segment of its contribution to the ISS, Canada is developing the Special Purpose Dexterous Manipulator, or SPDM. (Concern over various pronunciations of the acronym has led the Canadian Space Agency to announce recently that the system would be renamed “Dextre”.) This system is designed to perform some of the routine servicing tasks on ISS, and includes two 6 DOF manipulators on an articulated body (Figure 31). Capable of operating from a power data grapple fixture or from the end of the SSRMS, SPDM will be exclusively controlled by the on-orbit crew in teleoperation tasks. The actual flight date for SPDM is uncertain, based on return-to-flight uncertainties following the Columbia accident.

Figure 31: Special Purpose Dexterous Manipulator (“Dextre”)

Although it represents a substantial improvement in dexterity over SSRMS, SPDM still has a number of shortcomings. Concerns about stability and stiffness, particularly when positioned by the SSRMS, have led to a requirement that one of the two arms be used for making a rigid connection to the local work site, thus limiting all operational tasks to single-arm ones. The Orbital Tool Changeout Mechanism (OTCM) is the end effector for SPDM, and has a diameter of 11 inches (Figure 32). As such, it is substantially larger than the requirements for EVA glove clearance envelopes, and thus limits the tasks which can even be approached by SPDM. The large physical envelope of the OTCM also limits the external views of the end effector jaws at grappling, causing a near-total reliance on the OTCM camera and an almost universal requirement for specialized stand-off visual targets at all OTCM interfaces. Finally, the OTCM is only capable of grasping a few specialized end effectors, none of which are directly compatible with EVA servicing. Thus, all ISS servicing tasks must be specified a priori as robotic or EVA tasks. Robotic tasks may be accomplished by EVA crew using generic interface devices, which convert from the robotic interface to one more acceptable to EVA operations. EVA tasks have no viable hope of being accomplished by SPDM.

Figure 32: SPDM Orbital Tool Changeout Mechanism

Under contract to DARPA, a team led by Boeing is currently developing the Orbital Express program (Figure 33). This system is designed to provide autonomous rendezvous and servicing. While the maneuvering system is described in the following sections, the servicing system will be briefly described here.

Figure 33: Orbital Express and Target Spacecraft

Orbital Express is based on the fact that DOD mission planners can dictate to spacecraft developers that specific interfaces shall be incorporated in the design. This allows the adoption of interfaces which ease the difficulty of operations, and facilitate autonomous completion of servicing tasks. This approach is not feasible in commercial satellites, as conventional economic analysis argues for maximizing orbit maintenance propellants to maximize operational lifetimes, and therefore revenues. Orbital Express is primarily concerned with replenishment of propellants, allowing greater maneuverability of DOD orbital assets. The grapple mechanism designed for Orbital Express has integral accommodation of fluid replenishment, which will allow immediate refueling of target spacecraft after grappling is complete. Orbital Express will also incorporate a lightweight servicing arm, based on the sampling manipulator from Mars Polar Lander. This system incorporates a single-design grapple fixture, which allows the arm to capture and control an ORU, and to actuate retention latches for its removal and replacement. Blind-mate electrical connectors facilitate the use of this low-dexterity approach to ORU servicing.

3.2.2 Space Mobility Systems

While numerous systems have the capability to maneuver in orbit, this overview will be limited to systems which maneuver in close proximity to other spacecraft, or to a specific target. The problems of inadvertent contact and the constraints of proximity operations make such maneuvering much more demanding than simple orbit changes or stationkeeping.

3.2.2.1 Historical Space Mobility Systems

Mobility systems included here will be differentiated by the local gravitational environment. One class of spacecraft, focusing on rendezvous and proximity operations, is relevant to the microgravity environment found in planetary orbits. A second category will consider the requirements for planetary surface mobility.

3.2.2.1.1 Microgravity Mobility

Early in the shuttle program, there was a recognition that the shuttle was severely limited in its applications by the constraints on the orbits it could reach. For this reason, a great deal of research was done in the late 1970s and early 1980s, applicable to the Orbital Maneuvering Vehicle (OMV). Growing out of the Teleoperator Retrieval System (TRS), which was intended to be deployed from the shuttle orbiter to dock with Skylab and boost it to a higher orbit for later reuse, studies showed the clear utility of a robotic spacecraft to extend the reach of the Space Transportation System beyond the ~600 km peak orbital altitude of the shuttle. A contract was awarded to TRW for the development of an OMV that would be carried in the space shuttle and could then carry payloads to and from higher orbits. The TRW concept incorporated interchangeable propulsion modules to facilitate refueling, and to allow space-basing the OMV while reducing launch masses for OMV resupply flights (Figure 34). Due to budget restrictions, the OMV development contract was rescinded shortly after award.

Figure 34: Orbital Maneuvering Vehicle

In 1994, the University of Maryland brought its Supplemental Camera and Mobility Platform (SCAMP) vehicle to the Johnson Space Center and demonstrated the benefits of a free-flying camera platform in the Weightless Environment Training Facility (Figure 35). Shortly thereafter, the Automation and Robotics Division of NASA Johnson began the development of a vehicle to demonstrate the same capability in flight. The Autonomous EVA Robotic Camera (AERCam) system was developed as a rapid-prototyping project, leading to the designation of the flight unit as AERCam/SPRINT. Many of the components of AERCam/SPRINT were adapted from systems with prior flight heritage, including the Simplified Aid for EVA Rescue (SAFER) propulsion and control systems, and UHF video cameras from the EVA helmet cameras. AERCam/SPRINT was flown on STS-87 in late 1997 (Figure 36). For two hours during a planned EVA, the AERCam unit was controlled by an IVA crewperson in the shuttle aft flight deck. It provided two alternative video views of the EVA operations, and demonstrated that it was quite easy to safely operate a free-flying robot to supplement video coverage of external operations. Some limited funding has been found at JSC to continue AERCam development at a low level, aiming at grapefruit-sized systems, as compared to the basketball-sized SPRINT unit. However, no future flight opportunities are currently manifested.

Figure 35: SCAMP in JSC WETF
Figure 36: AERCam/SPRINT on STS-87

At almost exactly the same time as the AERCam/SPRINT flight demonstration, Engineering Test Satellite 7 (ETS-VII) was launched by the National Space Development Agency of Japan (NASDA). Composed of two spacecraft, Orihime and Hikoboshi, ETS-VII was designed to investigate robotics technologies on-orbit, including remotely controlled and autonomous rendezvous and docking (Figure 18). Degraded communications, due to the failure of the COMETS communications relay spacecraft, and thruster anomalies led to the descoping of ETS-VII maneuvering experiments. Proximity operations and grappling of the target spacecraft were accomplished in a “soft capture” mode, where the target was constrained to motions of only a meter or so by large capture arms on the chaser spacecraft. The flight did validate the NASDA approaches to near-field proximity operations and spacecraft capture, though.

Orbital Express is a DARPA project currently in Phase C/D to perform a flight demonstration of autonomous space servicing capabilities. The Orbital Express flight demonstration will involve three spacecraft: the Autonomous Space Transfer and Robotic Orbital Servicer (ASTRO); a NextSat target spacecraft representing a future generation of satellites designed for robotic servicing and upgrades; and the recently added SPAWN space awareness system, which is a small spacecraft to maneuver in proximity to ASTRO and NextSat to provide exterior views (Figure 33). The demonstration plan involves separation of ASTRO from the NextSat target by substantial distances, and then initiating autonomous orbital maneuvering, rendezvous, and docking. Launch of the Orbital Express elements is expected in 2006.

3.2.2.1.2 Planetary Surface Mobility

The first, and arguably most successful, instance of planetary surface mobility was the Russian Lunokhod rover. Developed in response to the success of the American Apollo manned lunar program, the Lunokhod vehicle was designed to autonomously land on the lunar surface, and traverse it under teleoperation from Earth for extended lunar exploration. Lunokhod stood 135 cm high and had a mass of 840 kg. It was about 170 cm long and 160 cm wide and had 8 wheels, each with an independent suspension, motor, and brake (Figure 37). Lunokhod was equipped with a cone-shaped antenna, a highly directional helical antenna, four television cameras, and special extendable devices to impact the lunar soil for soil density and mechanical property tests. An x-ray spectrometer, an x-ray telescope, cosmic-ray detectors, and a laser device were also included. The vehicle was powered by a solar cell array mounted on the underside of the lid of a tub-like electronics housing. A polonium-210 isotopic heat source was used to keep the rover warm during the lunar nights.

Figure 37: Lunokhod

Lunokhod 1 landed on the moon in late 1970. It was designed to operate over three lunar day-night cycles (each 28 days long), but Lunokhod 1 actually survived for 11 lunar days, with experiment operations terminated on October 4, 1971. At that time, it had traveled 10,540 m and had transmitted more than 20,000 TV pictures and more than 200 TV panoramas. It had also conducted more than 500 lunar soil tests. Lunokhod 2 landed on the moon in January 1973. It apparently operated until mid-May, when it failed without warning. Lunokhod 2 operated for about 4 months, covered 37 km of terrain including hilly upland areas and rilles, and sent back 86 panoramic images and over 80,000 TV pictures.

In 1997, the US sent the Pathfinder mission to Mars. This was intended to be a technology development mission, with science taking second priority to the demonstration of advanced technologies for future missions. As an augmentation to this mission, $25M was allocated for a small rover vehicle, which would be used to carry a few instruments to specific rock samples. The primary data source for the mission, however, would remain the fixed lander vehicle. The Pathfinder rover, named Sojourner in an educational outreach program, was a small (10 kg) roving vehicle deployed from a lander solar array petal. One of the more innovative systems on the rover was the six-wheel rocker-bogey suspension system, which transfers weight uniformly across all wheels as one moves over an obstacle. The rocker-bogey suspension theoretically allows a wheel to mount an obstacle up to 1.5 wheel diameters in height, while minimizing modes which could result in the rover getting caught in terrain obstacles (Figure 38).

Figure 38: Mars Pathfinder Sojourner Rover

The Sojourner rover was primarily used to place an alpha proton X-ray spectrometer (APXS) against rock samples to measure constituent elements in the material. Rover navigation was performed by using panoramic stereo images from the lander, which were decoded on Earth to provide rover position and orientation on a daily basis. Navigation during mobility tasks (which were generally performed out of communications with Earth) was performed via dead reckoning, which was updated by visual confirmation each sol, or Martian day. The Pathfinder rover was originally intended to operate for 7 sols. It was still active and operating on sol 45, when the lander (which had to act as a communications relay for the rover) stopped communicating. It was a highly successful system, proving the concept of using rovers to take instruments to various locations for in situ measurements. Perhaps as important for the long run, it excited a great deal of attention from people all over the world, making the JPL Pathfinder web site the most popular site on the Internet for the duration of the active surface mission.
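The dead-reckoning navigation described above can be sketched in a few lines: wheel odometry and heading changes propagate an on-board position estimate during a traverse, and a periodic external fix (such as a position derived from lander stereo imagery) replaces the accumulated, drifting estimate. The numbers and update scheme below are illustrative only, not the actual Sojourner flight software.

```python
# Illustrative dead-reckoning propagation of a planar rover pose from wheel
# odometry, with a periodic external fix overriding the accumulated estimate.
# Values are hypothetical, not Sojourner flight parameters.
import math

class DeadReckoner:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

    def propagate(self, distance, heading_change):
        """Advance the pose estimate by one odometry step (meters, radians)."""
        self.heading += heading_change
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def apply_fix(self, x, y, heading):
        """Replace the drifting estimate with an externally derived fix."""
        self.x, self.y, self.heading = x, y, heading

nav = DeadReckoner()
for _ in range(20):                                  # a short traverse segment
    nav.propagate(distance=0.05, heading_change=math.radians(1.0))
print(round(nav.x, 2), round(nav.y, 2))              # drifting on-board estimate
nav.apply_fix(0.95, 0.21, math.radians(20.0))        # hypothetical end-of-sol fix
```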

Based largely on the success of the Pathfinder rover, the Mars Exploration Rover (MER) program for the 2003 launch window extended the paradigm to its logical conclusion: all instruments and operating systems are resident on the rovers, and the lander vehicle is inert once landing has been accomplished. The two MER rovers, now in interplanetary cruise aiming at January and February 2004 landing dates, are each about a factor of three larger in linear scale than the Pathfinder rover. They maintain the rocker-bogey suspension system and the use of solar arrays for power, which will limit each rover to about three hours of activity each sol around the time of local noon (Figure 39). Each rover is designed for a nominal 90-sol mission lifetime, with a goal of a total traverse distance of 1 km cumulative between the two rovers.

Figure 39: Mars Exploration Rover (MER) 2003

3.2.2.2 Planned Surface Mobility Systems, Past and Future

Few specific programs are currently in place for flight operation of planetary surface mobility systems. Due to cost overruns on the MER 2003 missions, NASA has decided to defer landing opportunities for the 2007 launch window, and concentrate instead on a 2009 launch for the Mars Science Laboratory (MSL). The configuration of this spacecraft is still in definition studies, but current expectations are for a long-range rover system, powered by radioisotope thermoelectric generators, which can operate for a full Martian year and traverse several hundred kilometers. This system will also be considerably larger than the MER rovers (Figure 40), but will probably incorporate a rocker-bogey suspension and drive system with flight heritage from Pathfinder and MER.

Figure 40: Three Generations of Mars Rovers (l. to r.: 1997, 2003, 2009)

3.2.3 Conclusions from Historical Space Robots

This has been a cursory overview of robotic systems which have flown in space, or which have made it substantially through the development and qualification process (or did so prior to program cancellation). As each system was briefly discussed, attempts were made to identify unique characteristics, problems, or design issues arising from the development process. It is worth noting that a much larger number of potential space robot systems have been proposed, developed through preliminary design, or built and tested in a laboratory environment. Many of these are more “blue sky” in form or technology than the systems described here, which were all conservative in nature to pass through the arduous flight qualification process. An obvious direction for further study would be to attempt to pull together all such speculative systems into a more comprehensive study of proposed space robots; this would be a long and arduous undertaking, and far beyond the scope of the present effort.

3.3 Issues for Further Development

Hopefully, the preceding section helped to clarify the status of space robotics today and in the near future, and to highlight some of the limitations evident in current implementations. This section will focus on needed technology advances with the potential for major impact on future space robotics.

3.3.1 Mechanisms

Mechanisms include motors, gearing, brakes, fasteners, and other hardware components of robotic systems.

3.3.1.1 Limitations of Current SOA

Current robotic mechanisms are limited by the physical size and mass of their components. While the current SOA is well suited to manipulators in the 2-4 meter range, actuator technologies with greater scaling ranges would allow the development of manipulators and mobility systems at smaller scales (beneficial for microsatellite applications) and larger scales (advanced cranes as RMS and SSRMS follow-ons). Electrical motors generally include field coils wound around iron core plates, which makes them massive. An ideal actuator would develop high torque at low rotational velocities and would direct-drive the manipulator joint or drive wheel; actual motors provide high RPM at low torque, and require high-ratio gearing systems to match the needs of the application. Most robotic systems incorporate brakes to lock the joint when the actuator is not needed; invariably magnetic, brake systems also involve high mass and a substantial impact on mechanism volume. Current SOA systems only work across a typical Earth surface temperature range; future mechanisms should be capable of operation over much wider temperature ranges. This is particularly true on future optical telescopes, as most currently in planning will be infrared telescopes needing to remain at cryogenic temperatures to protect mission objectives.
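As a rough worked example of the gearing trade described above, the sketch below sizes a hypothetical joint drive: the motor's torque is multiplied by the gear ratio (less transmission losses) while its speed is divided by the ratio. The motor, ratio, and efficiency values are representative assumptions, not drawn from any flight actuator.

```python
# Hypothetical joint-drive sizing: a high-speed, low-torque motor plus a
# high-ratio reduction approximates the "ideal" low-speed, high-torque
# direct-drive actuator, at the cost of gearing mass, friction, and backlash.

MOTOR_TORQUE_NM = 0.2        # representative small motor peak torque, N*m
MOTOR_SPEED_RPM = 6000.0     # representative operating speed
GEAR_RATIO = 160.0           # e.g., a harmonic-drive-class reduction
GEAR_EFFICIENCY = 0.75       # typical-order transmission efficiency

joint_torque = MOTOR_TORQUE_NM * GEAR_RATIO * GEAR_EFFICIENCY   # N*m at joint
joint_speed = MOTOR_SPEED_RPM / GEAR_RATIO                      # rpm at joint

print(f"joint torque ~ {joint_torque:.0f} N*m, joint speed ~ {joint_speed:.1f} rpm")
# -> joint torque ~ 24 N*m, joint speed ~ 37.5 rpm
```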

3.3.1.2 Needed Research and Development

• Innovative solutions to high-torque, low-speed rotary actuators (e.g., magnetostrictive motors)
• Low mass and volume gearing systems with high transmission efficiencies and low friction and backlash (e.g., harmonic and epicycloidal systems)
• High-torque electric brakes with low pull-in power and no-power hold/release (e.g., sprag systems)
• Electrically-driven linear actuator systems to perform high-force tasks traditionally accomplished on Earth by hydraulic actuators (e.g., front-end loader bucket control)
• Actuators with inherent mechanical redundancy, and no failure modes which can result in locking up the actuator
• Mechanisms which will operate at extreme temperatures, particularly extreme cold [short-term goal: operation at 200 K; long-term goal: operation at 50 K]
• Thermal control strategies which will allow reliable, robust operations on the Mars surface, during lunar nights or in permanently shadowed regions around the lunar poles, or on infrared telescopes without needing a system warm-up
• Close-out fasteners to secure an actuator module without requiring high case wall thickness or other such impacts to system mass
• Modular interconnection systems to allow the on-orbit repair or reconfiguration of robotics to meet evolving mission requirements

3.3.1.3 Potential Benefits

Advanced mechanisms will allow the development of lightweight, highly capable robotic systems for both manipulation and mobility. These systems would be capable of routine operations in permanently shadowed regions of the lunar poles while mining ice, or of repairing infrared telescopes for science or ballistic missile early warning while the telescope is operating at cryogenic temperatures. Inherently redundant actuators will allow highly reliable systems, which can be repaired or reconfigured on-orbit as desired.

3.3.2 Sensor Systems

Sensors can be separated into proprioceptive and exteroceptive categories. Proprioceptive sensors provide information on parameters describing the internal state of the robot. Exteroceptive sensors provide information on the external world around the robot. Both are critical to robust and reliable operations.

3.3.2.1 Limitations of Current SOA

Proprioceptive sensors are used for monitoring internal parameters of the robot. Typical sensor sets would include incremental and absolute encoders, resolvers, and tachometers for actuator position and velocity measurements; torque measurements on actuator outputs; force/torque sensors (typically six-axis) for measuring all transmitted forces at a single point in a serial chain, generally a manipulator wrist; and engineering sensors monitoring parameters such as voltage, current, temperature, or strain. Position and velocity sensors, such as encoders, resolvers, and tachometers, generally mount either directly on the actuator shafts or are driven from them via gearing. These components, when off-the-shelf, are generally based on terrestrial applications where packaging is not highly constrained; as a result, they tend to be much larger than acceptable in space hardware. To manage wiring harnesses and prevent the need for external wire bundles, it is ideal to route wiring runs along the centers of actuator drive shafts, as these frequently represent the neutral axis of the articulation. Wiring that is run along these paths does not need to change length as the robot moves. However, this requires substantial thru-hole sizes in the drive shafts, and incrementally larger thru-holes in the encoders or similar sensors. Getting substantial resolution in output-side sensors for actuator position (12 bit/revolution or better) is typically done by increasing the diameter, which is also unacceptable for most space applications. Systems such as resolvers and tachometers are analog, which has some advantages but leaves them strongly affected by electromagnetic interference and cross-talk between signal wires. They are also susceptible to noise, especially for low-level signals which require substantial amplification and filtering prior to digital conversion for control signal processing. This is particularly true for torque measurements, which are almost universally based on strain gauges and instrumented flexible linkages. Analog data content is frequently in the microvolt range, and force/torque sensors require careful (and frequent) calibration, particularly due to thermal drift on the strain gauges with extended use.
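For scale, the short sketch below works out what the "12 bit/revolution or better" output-side resolution mentioned above means at the tip of a manipulator link; the link length is a hypothetical value chosen only for illustration.

```python
# What 12-bit-per-revolution output-side position sensing implies for a joint.
# The bit count is from the text; the link length is a hypothetical value
# chosen only to show the resulting tip-position granularity.
import math

ENCODER_BITS = 12
LINK_LENGTH_M = 2.0                        # hypothetical joint-to-tip distance

counts_per_rev = 2 ** ENCODER_BITS         # 4096 distinguishable positions
angle_res_deg = 360.0 / counts_per_rev     # ~0.088 degrees per count
tip_res_mm = math.radians(angle_res_deg) * LINK_LENGTH_M * 1000.0

print(f"{counts_per_rev} counts/rev, {angle_res_deg:.3f} deg/count, "
      f"~{tip_res_mm:.1f} mm at a {LINK_LENGTH_M} m link tip")
# -> 4096 counts/rev, 0.088 deg/count, ~3.1 mm at a 2.0 m link tip
```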
Video systems, including stereo and other multiple-source analysis systems, are strongly affected by this same harsh and variable lighting environment. Laser rangefinders, stripers, and scanners are often used to map the worksite in the presence of large solar illumination variations, but are perturbed by specular reflections off surfaces such as the multilayer insulation (MLI) that covers much of the exterior of modern spacecraft.
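To put the encoder resolution requirement quoted above in concrete terms, the short sketch below (a simple illustrative calculation, not taken from any flight design) converts encoder bit depth into angular resolution; the 100:1 gear ratio is an assumed example.

def output_resolution_deg(bits_per_rev, gear_ratio=1.0):
    # Counts per revolution of the sensed shaft
    counts = 2 ** bits_per_rev
    # Angular resolution (degrees) at the output, assuming the encoder is
    # mounted on the motor side of a gear reduction of the given ratio
    return 360.0 / (counts * gear_ratio)

print(output_resolution_deg(12))                    # ~0.088 deg/count on the output shaft
print(output_resolution_deg(12, gear_ratio=100.0))  # ~0.00088 deg/count through an assumed 100:1 reduction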

3.3.2.2 Needed Research and Development
• Shaft angular position sensors (both incremental and absolute) with large (1 in) shaft thru-holes and overall diameters under 3 in, with minimal height, and with encoder disks ruggedized to take launch vibration and acoustic loads
• Torque sensors for actuator output shafts which do not introduce flexibility and compliance into the mechanism, and which provide direct digital measurements without the need for conditioning and converting low-level analog signals
• Six-axis force/torque sensors for output force measurement which provide fine resolution of forces (<0.1 N) and torques while remaining immune to damage from high-level applied loads due to inadvertent contact or impact. The measurement technique should provide high-resolution, low-noise measurements and should not need frequent or temperature-dependent calibration. Internal packaging of data conversion and interface electronics should allow the sensor to interface directly to the robot data bus, such as 1553B.
• Alternative status sensors, including acoustic and thermal
• Sensor skins for proximity and contact sensing
• Tactile feedback from end effectors, providing inherent sensing of slippage, temperature, and surface finish
• Stereo video machine vision with inherent registration/calibration
• Laser scanners/stripers for range mapping of the work site
• RF systems for high-resolution local navigation around the work site

3.3.2.3 Potential Benefits Sensors (and sensor systems) are critical for all future space robotics applications. Packaging of actuator designs to minimize robot size and mass is currently largely driven by limitations in rotary shaft encoders, which are required to close the lowest-level servo loop and to understand the current state of the system. Long-lived operational robots will require highly redundant internal status sensors to monitor performance, identify degradation trends, and provide information in the event of a system failure. Robot autonomy is almost wholly driven by the depth and accuracy of world knowledge, which is currently highly limited due to shortcomings in the visual and laser data available to current systems. Safety of robotic components and target spacecraft is compromised by the lack of flight-qualified proximity and tactile sensors, which are needed to provide a reaction-layer response to move away from impending contact with some part of the robot. In the absence of such sensors, the only other viable approach is to attempt to develop a full-featured predictive world model, and monitor conflict potential in the world model in real time.
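As an illustration of the reaction-layer behavior just described, the sketch below shows one hypothetical way a proximity reading could override a commanded motion; the threshold and retreat speed are arbitrary placeholder values, not flight requirements.

def reflex_filter(commanded_velocity, proximity_m, stop_threshold_m=0.05, retreat_speed=0.02):
    # Reflex layer: if an impending contact is sensed, ignore the commanded
    # motion and back away along the sensed approach direction.
    #   commanded_velocity: desired Cartesian speed toward the target (m/s)
    #   proximity_m: range reported by a proximity sensor or sensor skin (m)
    if proximity_m < stop_threshold_m:
        return -retreat_speed   # back away from the impending contact
    return commanded_velocity

print(reflex_filter(0.10, proximity_m=0.50))   # nominal motion passes through: 0.10
print(reflex_filter(0.10, proximity_m=0.02))   # 2 cm proximity triggers retreat: -0.02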

3.3.3 Control System Hardware This category focuses on the hardware used to control robotic systems. This includes processors and support chips, wiring harnesses and bus architectures, motor controllers, and instrumentation amplifiers. In a sense, this section references almost every development of the modern electronics industry. It is clear that no space program can afford to duplicate, or expect to exceed, the rate of technological advance in the computer and consumer electronics fields. However, this section will attempt to identify technologies particularly necessary for advanced space robotics, or areas where government-supported development could significantly accelerate the pace of academic and corporate research.

3.3.3.1 Limitations of Current SOA One of the promising research trends in robotics is toward “mechatronics” – smart robot components where processors, controllers, and communications nodes are packaged directly into the actuators. On the other hand, space flight-qualified electronic components are somewhere around three generations behind commercially available components at any point in time. This is partially due to physics: as feature size decreases in semiconductor fabrication, susceptibility to radiation-induced failures increases. Flight-demonstrated communications buses, such as 1553B, are reliable and robust, but not well suited to the packaging and power restrictions of advanced robotics systems. Current electronics packaging issues for flight include the development of custom hybrid modules, particularly for high-current motor control for electrical actuators in manipulators and rovers.

3.3.3.2 Needed Research and Development
• Flight-qualified and radiation-tolerant electronics in modern (commercial/military grade) packaging

• Wiring approaches to allow wiring harnesses to be internalized to manipulator arms (modular self-contained wiring harnesses); this will require the capability to pass through the neutral axes of serial roll and pitch joints so that cable harnesses will not require special articulations to cope with changing cable lengths
• Innovative approaches to internal cable routing in robots, such as central slip rings in actuator packaging designs
• High-speed microprocessors and microcontrollers suitable for embedded control of robot actuators, incorporating on-chip communications protocols (e.g., 1553B)
• Small-footprint, high-power controllers for local servo control; such systems need highly effective means for sinking and dispersing waste heat

3.3.3.3 Potential Benefits Developing electronic control units for space robots is currently a frustrating activity, as so many things which are simple in a commercial development cycle have not been demonstrated on-orbit, or are actually known not to work. Rather than attempt to accelerate electronics development (the Moore's Law curve works quite nicely as it is), limited space robotics development funding should focus on the qualification and packaging of components so they can be space certified. Further attention should be paid to the process of adapting commercial off-the-shelf technologies for flight hardware.

3.3.4 Control Architectures and Software Robot control systems are generally tiered, or hierarchical, architectures. At the lowest level are the servo loops, driving actuators in closed loop to reach and maintain desired positions or, more typically, torque outputs. Depending on the implementation, application, and sensor suite, there may be one or more reactive or subsumptive layers, where artificial reflexes are implemented for safety or other reasons. Higher levels provide overall trajectory control of individual manipulators or vehicles, in response to known waypoints in the trajectory space. At the highest level are the systems for planning, trajectory generation, and goal accomplishment. A number of versions of this hierarchy have been implemented in laboratory or developmental space robotics, including the five-level NASREM architecture adapted from the National Institute of Standards and Technology for the Flight Telerobotic Servicer program, and the 3T (“Three Tiered”) architecture developed at the Johnson Space Center.
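The tiering described above can be summarized in a minimal control skeleton; the sketch below is deliberately simplified, the layer names follow the text, and the individual functions are placeholders rather than real planners or servo laws.

# Minimal sketch of a tiered robot control loop (all functions are illustrative stubs).

def plan(goal, world_model):
    # Strategic layer: decompose a goal into an ordered list of waypoints.
    return [goal]                                   # placeholder: go straight to the goal

def generate_trajectory(waypoints, state):
    # Tactical layer: turn the next waypoint into a desired setpoint.
    return waypoints[0]

def reflexes(setpoint, sensors):
    # Reactive/subsumptive layer: override the setpoint if a safety rule fires.
    return sensors.get("safe_pose", setpoint) if sensors.get("fault") else setpoint

def servo(setpoint, state, kp=5.0):
    # Servo layer: closed-loop command (here a bare proportional law on position).
    return kp * (setpoint - state["position"])

state = {"position": 0.0}
sensors = {"fault": False}
command = servo(reflexes(generate_trajectory(plan(1.0, {}), state), sensors), state)
print(command)   # proportional command toward the goal: 5.0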

3.3.4.1 Limitations of Current SOA The greatest problem with the current state-of-the-art in robot control architecture is not with the concept, but with the implementation. The lower levels are the province of control system engineers, who tend to build highly capable systems up to the “tactical” layer where robot actions are controlled in response to an externally supplied trajectory. The higher “strategic” layers are typically developed by computer scientists, who do a great job at goal-oriented behaviors and optimal trajectory planning, but are not fully engaged in the details of controlling the physical device. As a result, past and current implementations of robot architectures tend to be “bottom-heavy” or “top-heavy”; no instantiation currently exists of a well-integrated architecture with equivalent depth and complexity in all the layers of the hierarchy.

3.3.4.2 Needed Research and Development
• Embedded nonlinear adaptive control algorithms to monitor performance and to evolve dynamic control responses (e.g., evolve and use a feed-forward model for actuator friction as a function of temperature)
• Robust control architectures which will ensure stable control in the event of component failures
• Development of a “unified field theory” of layered robot control, integrating computer science and control engineering with equivalent depth across all layers in an optimal implementation
• Exploration of new means of specifying goals to intelligent control architectures, to reduce communications bandwidth and improve mission reliability

3.3.4.3 Potential Benefits While some impressive demonstrations of autonomy exist in laboratories, all current and planned operational space robots are teleoperators, requiring direct human control throughout their work operations. A completely vertically-integrated hierarchical control system would provide an order of magnitude reduction in demands for human direction and control during robot operations, making the system significantly more productive. Advanced nonlinear control techniques, such as monitoring actuator outputs to maintain and use a temperature-referenced model for actuator friction, will allow the robot to behave identically across all thermal environments, again yielding a more productive system that places fewer demands on a human operator who would otherwise have to learn heuristically how to adapt control inputs to varying conditions.
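A minimal sketch of the temperature-referenced friction feed-forward mentioned above is given below; the linear temperature dependence and all coefficients are illustrative assumptions, not measured actuator data.

def friction_feedforward(joint_velocity, temperature_C,
                         coulomb_at_20C=0.8, viscous_at_20C=0.05, temp_slope=0.01):
    # Hypothetical model: Coulomb and viscous friction both grow as the actuator
    # gets colder (lubricant stiffening), linearly with temperature below 20 C.
    scale = 1.0 + temp_slope * (20.0 - temperature_C)
    coulomb = coulomb_at_20C * scale
    viscous = viscous_at_20C * scale
    sign = 1.0 if joint_velocity >= 0 else -1.0
    return sign * coulomb + viscous * joint_velocity   # feed-forward torque (N-m)

print(friction_feedforward(0.5, 20.0))    # ~0.83 N-m at room temperature
print(friction_feedforward(0.5, -40.0))   # ~1.32 N-m for the same motion when cold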

3.3.5 Operator Interfaces While it is pleasant to visualize future space robotic systems as fully autonomous systems which only communicate with the ground to tell us they’re done, the truth is that any effective operational system in the foreseeable future will have a substantial amount of human involvement as a ground-based controller. This does not mean that future systems will necessarily be teleoperators, but that for many tasks some degree of human involvement and oversight will result in optimum task performance with maximum mission assurance.

3.3.5.1 Limitations of Current SOA Current flight robots are locally teleoperated devices, which consume highly limited on-orbit crew time and operate under severe control station constraints. Current crew preference is for two 3-DOF hand controllers, used to maneuver RMS and SSRMS in resolved rate mode by “flying” the end effector in rotation and translation. No provision is currently made for ground control of operational space robots, or for minimizing the effects of time delays on the command data loop. Future dexterous robots will be highly complex devices with 30-50 controllable degrees of freedom; mandating a priori the use of standard hand controllers would drop robotic productivity by a factor of at least 5. As autonomy grows and human involvement is concentrated in higher levels of supervision and goal-directed inputs, there is a real need to develop a knowledge base in the human factors of advanced manipulator control.
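The resolved rate mode described above maps the operator's commanded end-effector velocity into joint rates through the manipulator Jacobian. The two-link planar arm below is a textbook illustration of that mapping, not a model of RMS or SSRMS; link lengths and joint angles are arbitrary.

import numpy as np

def planar_2link_jacobian(q, l1=1.0, l2=1.0):
    # Jacobian of end-effector (x, y) with respect to the two joint angles
    # of a planar arm (illustrative geometry only).
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def resolved_rate(q, desired_tip_velocity):
    # Joint rates that produce the commanded Cartesian tip velocity,
    # using the pseudoinverse of the Jacobian.
    return np.linalg.pinv(planar_2link_jacobian(q)) @ desired_tip_velocity

q = np.array([0.3, 0.6])           # current joint angles (rad)
v_cmd = np.array([0.05, 0.0])      # operator commands 5 cm/s in +x
print(resolved_rate(q, v_cmd))     # required joint rates (rad/s)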

3.3.5.2 Needed Research and Development
• Control architectures which have the ability to move between control modes (such as autonomy, supervisory control, and teleoperation) simply and reliably at any point in the operation
• Commanded and predicted displays which give human operators maximum control performance in the presence of communications time delays
• Ergonomic studies of robot control stations to determine optimum interfaces for the human operator
• Detailed, focused studies on optimal systems for control inputs and displays to the human robot controller, including the best teleoperation/telepresence/supervisory control input devices for specifying motions of complex dexterous manipulators, and the best methods for presenting stereo video and computer graphics to the operator

3.3.5.3 Potential Benefits Optimizing human interfaces will maximize robotic productivity while minimizing demands on the human operator. This will increase the likelihood of a successful mission.

3.3.6 End Effectors End effectors are the distal interfaces of manipulator arms which allow them to directly interact with the external environment. Issues of end effector design include the number of different interfaces to be grappled and requirements to be placed on their design, force and torque capabilities, and reliability against inadvertent release or failure to release when commanded.

3.3.6.1 Limitations of Current SOA Current flight end effectors (on RMS and SSRMS) are designed for dedicated single-purpose grapple fixtures on potential payloads, requiring substantial mass and volume allocations. Since there are no provisions for mechanical actuation beyond the capture and release functions, these end effectors can grasp, but cannot provide actuation for releasing or mating the payloads they hold. The ORU/Tool Changeout Mechanism (OTCM) planned for SPDM will be capable of grasping two different types of interfaces directly, and there are some plans for interface tools, grasped and actuated by the OTCM, to add a few more types of potential interfaces. However, the OTCM requires a 14 in cylindrical clearance around each interface, along with an offset visual target aligned with the OTCM video camera to provide alignment guidance during grappling operations. The OTCM can provide mechanical actuation across the interface, which is used for ORU removal and installation functions on ISS. The Orbital Express manipulator will grasp a single dedicated grapple interface, which is designed to allow the manipulator (once grappled) to actuate latches and remove or install payloads into mounting fixtures.

3.3.6.2 Needed Research and Development
• Application studies to identify the necessary number and types of degrees of freedom for robotic end effectors, based on an overall systems optimization between the target spacecraft and the servicing system
• Advanced design and testing of anthropomorphic and reconfigurable hand configurations
• Improved systems for interchanging end effectors on a single manipulator, including mechanical, electrical, and fluid couplings
• Systems analysis of the trades between dedicated robotic end effectors and the use of EVA tools via anthropomorphic hands

3.3.6.3 Potential Benefits The utility of a robot is ultimately determined by the set of interfaces with which it can interact. While the approach of mandating a specific grapple fixture has worked for the large crane-type manipulators currently operational on-orbit, it is far too limiting for otherwise dexterous robotic systems. By carefully examining the role of end effector sets and the sharing of end effectors between robot arms, this task will provide a much better understanding of the benefits and limitations of each type of robotic end effector currently under consideration.

3.3.7 Robot Configurations What should a robot look like? How many manipulators should it have, where should they be placed, what tasks will it perform and how? While there are a large number of candidate tasks for space robots to perform, there have been almost no robotic systems taken to flight status. A number of conceptual robotic configurations have been proposed, but few or none of them really took a rigorous look at the task set to be performed and used that as a baseline to drive the robot configuration.

3.3.7.1 Limitations of Current SOA Whether conscious or subconscious, almost all robotic systems currently planned or operating show a substantial bias toward anthropomorphic forms. This is perhaps inevitable, in that most of the space tasks deemed feasible for robots have grown out of EVA tasks, and anthropomorphic robot forms intuitively appeal as a logical solution for a task originally performed by humans. However, an era of space manufacturing would expand the range of necessary tasks by orders of magnitude, and would also reduce the importance of classical EVA-compatible tasks. In this environment, the optimal configuration of robots is a much broader, and more critical, issue.
Robot configuration is dependent not only on the specifics of the task to be performed, but also on issues such as transportation to and from the work site, and stabilization while there. The physical sizes of RMS and SSRMS represent one approach to enabling the end effector of the robot to reach all potential locations within the work site volume. If SPDM is added to SSRMS to provide a “dexterous front end” to the system, one of the two dexterous arms must be dedicated to grappling a local fixture to stabilize the system against the large-deflection flexible modes of the 20-meter SSRMS arm. The original University of Maryland concept for the Ranger dexterous robot used a free-flying motion base for transportation between work sites, and a latchable grappling arm for attachment to the target spacecraft and for maneuvering around the local work site. The neutral buoyancy version of this vehicle also demonstrated hand-over-hand robot mobility around a simulated space platform. The lesson to be drawn here is that mobility and manipulation are not compartmentalized, but use common capabilities to provide synergistic control functions.
Especially with the wealth of candidate tasks at a space manufacturing complex, there will be a real need for robots tailored to a variety of different tasks. Some preliminary work is beginning under DARPA sponsorship at the University of Maryland to develop highly modular robots that can be based on a space platform as a stock of smart components, and assembled on demand into a wide variety of configurations to meet the requirements of a specific task. This “lego” approach to robotic capabilities has real potential for more complex work sites, and provides a useful approach to robotic system maintenance and upgrades.

3.3.7.2 Needed Research and Development
• Development and testing of integrated control approaches that blend stabilization systems with arm movements to increase the apparent work envelope of dexterous arms
• Research into the feasibility of performing dexterous manipulation from non-rigid platforms, whether free-flying or compliant due to structural flexibility of positioning systems
• Research and development into rigorous methods of deriving optimal robotic configurations from task constraints and requirements
• Research and simulation on innovative modular approaches to allow robots to be configured on-site into optimum forms for specific functions
• Investigation of robotic maintenance and technology upgrades for long-term operational programs

3.3.7.3 Potential Benefits Rather than the approach taken on ISS to select and adapt a small subset of servicing tasks to the limitations of a single robotic system, research and development into optimal robotic configurations and modular “lego” robot sets will provide the ability to create on demand a robot configuration ideally matched to the demands of any particular required task. This approach also provides easy accommodation of component failures and continual technology upgrades to an operational system.

3.3.8 Operations There has long been a “Catch-22” in space robotics. Because dexterous space robots do not currently exist, no program manager will allow a requirement to be written for robotics, lest their program be called upon to pay the cost of developing the robot. At the same time, since no operational program has a formal requirement for a robot, it has proven impossible to get the long-term commitment necessary to get a dexterous robot developed, qualified, and flown. Even an extremely low-cost program, the Ranger Telerobotic Shuttle Experiment, was canceled by NASA within $5M of completion, with 70% of all flight hardware already fabricated and ready for assembly. Until a knowledge base is created that clearly describes the capabilities and limitations of space robotic systems, there will be little or no demand for the use of robotic services on-orbit.

3.3.8.1 Limitations of Current SOA The lack of a robot operations database is a logical outgrowth of the lack of flight robotic systems. A great deal of knowledge currently exists on the use of simple positioning systems such as RMS and SSRMS, due to their extensive flight heritage. Similarly, the Mars Pathfinder program developed expertise in the use of roving vehicles to collect scientific data in planetary surface exploration. Absent the opportunity to develop and flight-test dexterous robotic systems, the next best approach is to create a knowledge base built on realistic ground-based simulation. The University of Maryland has been developing and testing a series of dexterous robotic systems in neutral buoyancy for twenty years for this very reason. (The Ranger TSX flight experiment, mentioned in the previous section, was actually designed to correlate the large neutral buoyancy database on robotic operations with quantitative flight data. In this way, it was hoped that a single flight experiment could be leveraged to validate a much larger database drawn from ground-based simulations.)

3.3.8.2 Needed Research and Development
• Focused programs to develop prototypes of space robotic systems designed for ground-based simulation environments, and testing of those systems against high-fidelity representations of space tasks to develop a knowledge base on robotic capabilities and limitations
• Reliable funding of low-cost robotic demonstration experiments all the way through flight to provide “existence proofs” of robotic operations across a broad range of application areas, not just payload positioning
• Development of a set of standard fiducial operational tasks, along with instrumentation for quantitative metrics, to allow robot concepts to be tested and compared across various simulation media and to provide hard numbers on performance for future operational planning

3.3.8.3 Potential Benefits Past robot developments have frequently been based on a single target task, or on merely developing an intuitively appealing robotic system in a laboratory environment. Seldom have these systems been rigorously applied to realistic space tasks with the intention of gleaning quantifiable performance data from end-to-end task simulations in realistic ground simulation environments. Developing this knowledge base will create a framework for shaping the direction of future space robot development, as well as educating potential operational users on the capabilities and limitations of various instantiations of space robots.

3.3.9 Simulation Before the operational potential of space robotics can be investigated and verified, there has to be a realistic means of simulating the combination of the robot(s), the items in the work site, and the local environment. Since space systems, particularly space launches, are prohibitively expensive, there will be few opportunities to perform meaningful research in the actual space environment in such a manner as to affect the development of operational programs. Therefore, any space experiments must be the logical culminations of a well-planned and extensively performed series of Earth-based simulations. There are a number of possible modes of Earth-based simulation. Each has benefits and limitations, and all will have some significant role to play in a comprehensive development program for highly capable space robotics.

3.3.9.1: Graphical Simulation This category pertains to computer graphics simulations, which require no instantiation of physical devices to create the simulation environment. This approach is the simplest and lowest-cost of all robotics simulation, particularly considering the rapid development of highly capable graphics cards for the computer gaming industry.
3.3.9.1.1 Limitations of Current SOA With the rapid development of computer hardware, including both processors and dedicated graphics cards, computer graphics systems are becoming ubiquitous. A $2000 commercial off-the-shelf computer today can outperform a dedicated $50,000 graphics computer from five or six years ago. Entry-level versions of complex and highly capable graphics software, such as Maya, are now being distributed for free. However, there are substantial limitations in the use of computer graphics alone to simulate robotic operations for space. The ability to draw frame-rate, high-resolution graphics, incorporating texture mapping, pixel mapping, and other similar technologies, only guarantees the ability to draw believable images. Many high-end graphics packages include the ability to specify and control kinematic motion; far fewer allow the development of realistic (and adaptable) dynamics for the system. Very few will accommodate realistic degrees of flexibility in the robot linkages to fully simulate motion on-orbit.

Another significant area of deficiency for these systems is the lack of availability of an accurate and real-time contact model. While dynamic models can be created which accurately represent unconstrained robot motion, the details of contact dynamics, surface texture, nonlinear friction, and other effects are extremely difficult to model in the general case. Most contact models are therefore somewhat simplistic (typically penalty spring-damper laws; a minimal example is sketched at the end of this subsection), and lead to low-fidelity prediction of realistic contact dynamics.
3.3.9.1.2 Needed Research and Development
• Available (ideally open-source) codes for generic robot graphical simulation, including modules for kinematics, dynamics, contact forces, and high-resolution scene generation
• Advanced models for contact forces and torques, capable of simulating realistic nonlinear contact dynamics in real time
• Reference metrics (in terms of fiducial task models) which can be transferred between systems and platforms to provide a “calibration” of comparable simulations in differing computing environments
3.3.9.1.3 Potential Benefits Computer graphics simulations will remain the “entry level” of space robot simulations, available to any organization with standard consumer computer systems and graphical simulation software. By increasing the range of applicability of this simulation mode through improved models, algorithms, and code distributions, more organizations can get involved in space robotics research and development.
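For reference, the simplistic contact models criticized above are often penalty laws of the form sketched below; the stiffness and damping values are arbitrary assumptions, which is precisely why such models tend to be low fidelity.

def penalty_contact_force(penetration_m, penetration_rate_mps,
                          stiffness=5.0e4, damping=2.0e2):
    # Simple spring-damper ("penalty") contact law: force proportional to
    # penetration depth plus a damping term, and zero when out of contact.
    if penetration_m <= 0.0:
        return 0.0
    return stiffness * penetration_m + damping * penetration_rate_mps

print(penalty_contact_force(0.001, 0.01))   # shallow, slow contact (N)
print(penalty_contact_force(0.005, 0.10))   # deeper, faster contact (N)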

3.3.9.2: Laboratory and Field Simulation This type of simulation involves the use of realistic robots in the laboratory or external (field) environment. It is assumed in this section that the presence of Earth gravity is not a concern, which implies either non-load-bearing tasks or the use of robotic elements (such as manipulators) strong enough to lift their own weight, as well as that of the test payloads, in Earth gravity.
3.3.9.2.1 Limitations of Current SOA This approach is commonly used for planetary rovers and, to a lesser degree, dexterous robotics. Field trials of surface mobility systems must be performed without the gravity offset systems described below, and therefore require the capability to perform all functions under Earth-normal applied loads. Manipulators for laboratory simulation require sufficient torque generation to lift their own mass in worst-case poses against Earth gravity, as well as the weight of end effectors and target interfaces. Frequently, simulated payloads are manufactured from lightweight materials such as Styrofoam and cardboard, to reduce arm payload mass and to minimize the volume of the workspace into which the manipulator cannot go because of load limits.
3.3.9.2.2 Needed Research and Development
• Higher-strength, affordable manipulators for working against gravity

• Lightweight end effectors which provide full motion without exhausting the arm's payload capacity
• Simple and effective methods for simulating orbital lighting conditions
• “Blended” laboratory/computer graphics simulations where far-field objects are computer generated for visual fidelity, but laboratory systems are used for near-field work site simulation to obtain realistic contact dynamics
3.3.9.2.3 Potential Benefits Many of the proposed research and development goals (such as high-strength, low-mass manipulators) are also appropriate for more widespread adoption of robotics in the flight community. Laboratory and field trials represent good opportunities to simulate complete sequences of space robotic operations, at reasonable cost and without significant access limitations.

3.3.9.3: Counterweight Systems Beyond the most rudimentary foamcore mockup, some form of gravity offset is necessary to allow robotic simulation in Earth gravity. In its simplest form, this may merely be an adjustable weight on a pulley to counteract the mass of the robot or payload during ground-based testing.
3.3.9.3.1 Limitations of Current SOA Counterweight systems are typically based on the use of either mass balance weights or constant-force springs for gravity offset. The simplest systems rely on long suspension lines to minimize the effects of off-center counterweights; more complex systems seek to ensure (either passively or actively) that the suspension point remains directly above the attachment to the test hardware. Mass-based systems invariably add to the inertia of the test hardware and therefore reduce the fidelity of the simulation; this can be ameliorated by the use of mechanical advantage (typically by concentric cams of different diameters) to reduce the required mass and inertia of the counterweight. However, this approach increases the travel of the counterweight, which again complicates the overall system dynamics. A limited amount of work has been done on powered active suspension systems, where electric motors provide programmable force on tension lines to the test hardware. One university-based system in Japan attempted to create such an active suspension system for a dual-arm robot with a payload element; the 18 degrees of freedom were suspended from motor-actuated wires, which were themselves moved about to stay positioned directly above the suspension points. The overall suspension system required 30 active degrees of freedom. An inevitable shortcoming of all suspension systems is that the relative motion of robots and payloads is severely constrained, as suspension lines cannot cross each other without tangling.
3.3.9.3.2 Needed Research and Development
• Simple, low-cost, mass-producible active suspension systems
• Advanced suspension harnesses to provide realistic rotational motion to payloads, as well as vertical suspension with provision for horizontal motion

• Non-vertical suspension systems where three active cables support each suspension point on the experimental hardware, and computer control is used to keep the net suspension force directed upward (this eliminates the need to move the upper suspension points to keep the lines vertical)
3.3.9.3.3 Potential Benefits Counterweight and suspension systems are always going to be of limited fidelity. Their role is primarily in extending the capabilities of laboratory simulations which are constrained by payload masses in Earth gravity. Due to their constraints on the numbers and relative motions of simulation components, they are of limited utility in Earth-based simulations of space manufacturing operational scenarios.
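As a rough illustration of the mechanical-advantage trade noted above, the sketch below sizes a counterweight driven through concentric cams of different radii; the payload weight, cam ratio, and travel are arbitrary example numbers.

def counterweight_sizing(payload_weight_N, cam_ratio, payload_travel_m):
    # Concentric cams with a radius ratio of `cam_ratio` reduce the required
    # counterweight force by that factor, but multiply its travel by the same factor.
    counterweight_force_N = payload_weight_N / cam_ratio
    counterweight_travel_m = payload_travel_m * cam_ratio
    return counterweight_force_N, counterweight_travel_m

# A 500 N payload offset through a 4:1 cam: 125 N counterweight, 4x the travel.
print(counterweight_sizing(500.0, 4.0, 1.0))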

3.3.9.4: Air-Bearing Floors An alternate approach to modeling microgravity dynamics is to use the extremely low friction of an air-bearing surface to provide realistic planar dynamics. This approach has been used to advantage in proximity maneuvering and docking simulations for several decades.
3.3.9.4.1 Limitations of Current SOA Air-bearing simulation depends on extremely low-friction air-bearing support elements (“pucks”) and the availability of an extremely flat and smooth floor. Given these two requirements, it produces high-fidelity reproduction of dynamic response to forces (such as contact loads or thruster firings) in three of the six rigid-body axes: X, Y, and yaw. The other translational axis (Z, upward) and the two remaining rotational axes (pitch and roll) are not accommodated on an unarticulated test vehicle. A simulation-class air-bearing floor must typically be smooth (less than 0.001 in. of local variation over 1 meter) and level (less than 1 mm of deviation from horizontal over 10 meters). Local surface deviations will cause dragging on the air pucks and disrupt the otherwise “frictionless” response; a sloped test floor will cause vehicle motion due to gravity, which is unrealistic. Major air-bearing simulation facilities are located at NASA Marshall Space Flight Center and at the Johnson Space Center. Air-bearing pucks have to be developed for very low friction at the appropriate scale. Following the failure of the EVA procedure to capture an Intelsat satellite, it was discovered that the flat-floor training system had a friction break-loose force of 16 lbs; this obscured dynamic responses during capture that were seen in space but were not discernible in the air-bearing training system. Lightweight payloads may have their mass properties completely obscured by the flotation system, which includes both the air-bearing pucks and bottles of pressurized gas, used for both flotation and cold-gas thruster simulations. (While it is possible to supply consumables from off-unit through the use of an umbilical, the stiffness and dynamics of the umbilical usually destroy the fidelity of disturbance responses.) Since three of the six degrees of freedom are not simulated with an air-bearing test vehicle, these are usually either ignored, simulated passively over a small range by the use of a balanced pivot mechanism on the vehicle, or actively simulated under computer control. The last approach can be accomplished either by actively-controlled articulation directly on the air-bearing test vehicle, or by mapping the appropriate relative motion into the movement of a target vehicle attached to a large robot arm rigidly mounted above or external to the air-bearing floor.
3.3.9.4.2 Needed Research and Development
• Low-mass, low-flow air suspension systems to minimize inertial effects on target vehicles
• Low-mass, computer-controlled three-axis motion systems to provide active motion simulation in axes not mapped into the air-bearing vehicle motion
• Better mapping algorithms for combining air-bearing vehicle motion with driven movements of target payloads on external robots, to extend the number of degrees of freedom simulated without exceeding the physical constraints of the facility
3.3.9.4.3 Potential Benefits Air-bearing systems can produce realistic dynamics passively in three degrees of freedom. This allows them to realistically simulate (in planar motion) the response of a spacecraft to applied forces, including thruster firings and contact forces.
These features make this approach well suited to visualization of attitude control system algorithms, proximity maneuvering, and docking dynamics.
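The three planar axes that an air-bearing vehicle reproduces passively can also be written down directly from “F = ma”. The minimal integrator below (with illustrative mass, inertia, and time step) propagates x, y, and yaw under an applied force and torque.

def step_planar_state(state, force_xy, torque_z, mass=100.0, inertia_z=20.0, dt=0.01):
    # One explicit-Euler step of planar rigid-body motion (F = m*a, T = I*alpha)
    # in the three axes an air-bearing vehicle reproduces: x, y, and yaw.
    x, y, yaw, vx, vy, wz = state
    ax, ay = force_xy[0] / mass, force_xy[1] / mass
    alpha = torque_z / inertia_z
    vx, vy, wz = vx + ax * dt, vy + ay * dt, wz + alpha * dt
    return (x + vx * dt, y + vy * dt, yaw + wz * dt, vx, vy, wz)

state = (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):                          # 1 second of a 2 N thruster firing in +x
    state = step_planar_state(state, (2.0, 0.0), 0.0)
print(state)                                  # position, attitude, and rates after the burn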

3.3.9.5: Computer-Controlled Motion Carriages Since motions in space are effectively modeled by simple dynamic systems (“F=ma”), computer predictions of motion are apt to be highly accurate if disturbance forces and torques are well modeled. Given that air-bearing simulation already requires digital simulation and computer motion control to move beyond three degrees of freedom to six, it makes sense that the “next step” would be to eliminate the cost and complexity of the large flat floor and air suspension systems in favor of a system where all six axes are actively controlled by computer.
3.3.9.5.1 Limitations of Current SOA Motion carriages range from small camera transporters (traditionally used for simulations of human-in-the-loop proximity operations, now largely supplanted by computer graphics simulations) to large gantry cranes capable of carrying robotic “end effectors” holding models of simulated spacecraft. Perhaps the newest and most elaborate such system is at the Naval Research Laboratory in Washington, DC. In a large high-bay, two gantry cranes at different heights provide gross relative motions. Each mounts a large industrial 6-DOF robot, which provides fine translational and rotational degrees of freedom for the satellite mockups, traditionally a “target” and a “chaser” vehicle. This facility is being used in support of DARPA’s Orbital Express program. It is currently limited to non-contact proximity operations, due to concerns about potential damage following inadvertent collisions. Such facilities represent a very large investment in time, money, and space, but provide an excellent capability for simulating relative motion and proximity operations, including the effects of flight sensor systems. Given appropriate instrumentation, these facilities are capable of realistic simulation of contact dynamics as well; in fact, a motion carriage facility at Martin-Marietta was human-rated and used in the early 1980’s to train astronauts for capturing the Solar Max spacecraft using the Manned Maneuvering Unit. Due to the cost of the multiaxis positioning and control systems, motion carriage facilities are limited to one (or at most two) active simulated vehicles, and are constrained by geometry as to allowable relative motions.
3.3.9.5.2 Needed Research and Development
• Integrated systems to allow the simple specification of vehicle performance parameters and the integration of protoflight sensor packages for acceptance testing of autonomous proximity control systems
• Instrumentation and modeling to provide realistic responses to contact dynamics while automatically performing collision avoidance monitoring
• Accommodation for articulated payloads, allowing the end-to-end simulation of robotic activities on a servicing spacecraft in proximity to the target spacecraft
3.3.9.5.3 Potential Benefits Motion carriages are the most common means of simulating contact dynamics for docking and berthing activities, and offer extended simulation times without undue hardships on the simulation personnel. Advanced simulation capabilities, such as the option to have a manipulator-equipped spacecraft as a motion carriage payload, would increase the potential range of simulation applications.
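One hypothetical way a motion carriage can reproduce contact dynamics is the hardware-in-the-loop scheme sketched below: the measured contact force drives a simulated “F = ma” vehicle, and the integrated position becomes the next commanded carriage position. All names and numbers are illustrative, not drawn from any specific facility.

def carriage_position_command(prev_pos, prev_vel, measured_contact_force_N,
                              thruster_force_N=0.0, mass=500.0, dt=0.02):
    # Propagate a simulated vehicle under the measured contact force (plus any
    # modeled thruster force); the result is fed back as the carriage setpoint.
    accel = (measured_contact_force_N + thruster_force_N) / mass
    vel = prev_vel + accel * dt
    return prev_pos + vel * dt, vel

pos, vel = 0.0, 0.0
for force in [0.0, 0.0, 40.0, 40.0, 0.0]:      # a brief simulated contact event
    pos, vel = carriage_position_command(pos, vel, force)
print(pos, vel)                                 # the vehicle keeps drifting after contact ends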

3.3.9.6: Neutral Buoyancy Neutral buoyancy refers to the status of a buoyant object in a fluid which is just at the neutral point between floating and sinking. Although the term is equally appropriate for balloon simulation in air (extensively used at the NASA Johnson Space Center for RMS crew training), it will be used in the context of this report to discuss hardware suspended in the underwater environment. The goal in neutral buoyancy is to adjust flotation and ballast so that the center of mass of a component is coincident with the center of flotation. In this way, the object not only will not attempt to rise or sink, but will have no preferred orientation, being neutrally stable in any attitude.
3.3.9.6.1 Limitations of Current SOA This approach to space simulation has many benefits, and more than a few drawbacks. While “neutral buoyancy” would only properly be used for microgravity simulations, the use of appropriate flotation and ballast also allows realistic simulation of the partial gravity environments of other planetary bodies, such as the Moon and Mars. It is (arguably) the best long-term ground-based simulation of weightlessness, as there are no limits on test size, complexity (in terms of numbers of components, or their relative positioning), or duration except the physical scale of the water tank and physiological limits if humans need to be in the water in conjunction with test operations. This is probably the only acceptable simulation method for studying large space structure assembly, unless a decision is made a priori to ignore the microgravity environment, in which case laboratory simulation has been used. Neutral buoyancy also allows the investigation of cooperative human/robotic interactions, as human test subjects are also in the realistic simulated microgravity environment working on full-scale hardware.
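Before turning to the drawbacks, the flotation and ballast trim described above can be quantified with a simple buoyancy balance; the sketch below computes the foam volume needed to make a mockup neutrally buoyant, using assumed densities. (Making the mockup rotationally neutral additionally constrains where the foam and ballast are placed, which is not shown here.)

RHO_WATER = 1000.0   # kg/m^3
RHO_FOAM = 50.0      # kg/m^3, assumed closed-cell flotation foam density

def foam_volume_for_neutral(mockup_mass_kg, mockup_volume_m3):
    # Foam volume required so that total weight equals total buoyancy, from:
    #   (m_mockup + rho_foam * V_foam) = rho_water * (V_mockup + V_foam)
    net_sinking_mass = mockup_mass_kg - RHO_WATER * mockup_volume_m3
    return net_sinking_mass / (RHO_WATER - RHO_FOAM)

# A 120 kg mockup displacing 0.10 m^3 of water needs roughly 0.021 m^3 of foam.
print(foam_volume_for_neutral(120.0, 0.10))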

Many of the disadvantages of underwater simulation are obvious by inspection. It requires a large tank of water. Only two neutral buoyancy facilities are currently active in the United States: the Neutral Buoyancy Laboratory at NASA Johnson (rectangular, 102 ft. x 202 ft. x 40 ft. deep) and the Neutral Buoyancy Research Facility at the University of Maryland (cylindrical, 50 ft. diameter x 25 ft. deep). While such water tanks require large buildings and represent substantial nonrecurring costs, it should be remembered that the water provides a passive suspension system, so it is not clear whether a neutral buoyancy facility would be more or less expensive than, say, a motion carriage facility of the same scale.
All hardware to be used in the underwater environment must be designed to be safe and operable in that environment. This was not such an onerous task when neutral buoyancy was used for EVA training with low-fidelity mockups, but underwater robotic simulations require robots which, although designed for space applications, are nonetheless fully functional underwater. Almost all off-the-shelf underwater manipulators are driven by hydraulics, which are totally unsuitable for space flight; this drove the Space Systems Laboratory to develop its own flight-rated dexterous manipulators. A major challenge in designing a robot for underwater use is the need for watertight enclosures around electronics housings, motors with encoders, video cameras, and other electrical equipment. Since divers are generally present in the water tank when robotic simulations are active, power supplies are constrained to stay below 32 VDC to minimize the shock hazard. The Ranger Telerobotic Shuttle Experiment sought to turn this problem into a feature, through designing a single dexterous manipulator which would be ideally suited to both environments. Although the manipulator was never flown in space, the process of developing, testing, and flight-qualifying it taught several lessons, especially that the operating environments are synergistic: a manipulator built for one generally has many of the features required to operate in the other. Indeed, the plan for the Ranger TSX flight was to use manipulators identical to the underwater units (used for validation and crew training), except for the omission of the waterproofing seals (removed to allow the manipulator interior to vent to vacuum on launch and to atmosphere on entry).
The other major issue with underwater simulation is the effect of water drag, which makes the simulation a lower-fidelity replica of the actual space environment. While this is not a significant issue for manipulator research (since the manipulator dynamics and mass properties make this effect quite small overall), it is significant for free-flying vehicle simulations or for humans. The University of Maryland, which has been actively developing neutral buoyancy simulations of robotics for more than two decades, has developed a number of workarounds to this problem, including sensor systems and a free-flying vehicle control system which nullifies the effects of water drag and behaves as if the vehicle were a spacecraft in microgravity.
3.3.9.6.2 Needed Research and Development
• Better sealing technologies, particularly low-friction systems for reliable sealing around high-speed motor shafts
• Acoustic and/or vision-based position-finding systems, capable of locating and measuring objects in translation and rotation, and in both position and velocity (i.e., full state feedback, without limitations on the number of instrumented payloads)

• Instrumented multipoint scales to aid in designing flotation and ballast so that passive underwater mockups are rotationally neutral
• “Smart” mockups with the capability to create external forces, and to respond to external forces with realistic dynamics, in order to simulate anything of interest while removing water drag effects
• Integration of computer graphics with neutral buoyancy simulation to provide a realistic scene of the space worksite while retaining the microgravity suspension advantages of neutral buoyancy
3.3.9.6.3 Potential Benefits Since the days of Skylab, neutral buoyancy has been the simulation medium of choice in human space flight operations and training. It offers all the same advantages for robotic space operations, at the cost of accepting the requirement to develop test hardware for the underwater environment.

3.3.9.7: Parabolic Flight The only method of achieving actual microgravity on Earth is by flying parabolic trajectories in aircraft. By flying zero-lift parabolic trajectories, and using engine thrust to cancel aerodynamic drag, acceptable microgravity conditions (on the order of 0.01 g) can be obtained in the aircraft interior for a short period of time. Transport-category aircraft, such as the KC-135 flown by NASA, can achieve 25-30 seconds of microgravity in repeated parabolas, separated by about 90 seconds at 1.8-2 g as the aircraft pulls out of one parabola and enters the next.
3.3.9.7.1 Limitations of Current SOA Few robots have been flown in parabolic flight research; probably the most notable example is a laboratory manipulator adapted by NASA Johnson to autonomously grapple free-floating payloads. Due to safety concerns, particularly with a large number of people on board a shared research flight, there is a need to ensure that no one can inadvertently enter the operating envelope of the robot arm during operations. Flying parabolic research flights is an arduous activity for humans, with a high rate of debilitation due to motion-induced nausea. Flight opportunities are limited, and costs are high, as operating expenses for the aircraft must be recovered from fees paid by researchers. Quality of microgravity is a strong function of individual pilot skill, and total times are limited by the physics of the situation to 25-30 seconds maximum for transport-category aircraft. Experiments are limited by cabin diameter, and manipulators longer than approximately five feet (depending on the specifics of mounting) would have the potential to impact the cabin walls, raising a catastrophic safety hazard.
3.3.9.7.2 Needed Research and Development
• Parabolic flight aircraft with larger cabin sizes to relieve limitations on experiment scale
• High-data-rate uplink/downlink to the aircraft to allow remote experiment operation, relieving experiment operators from the negative effects of aircraft maneuvers

• Proximity sensors, in conjunction with safety-qualified control systems, which will ensure that the manipulator cannot inadvertently impact aircraft structure or personnel encroaching on the operating volume
3.3.9.7.3 Potential Benefits Parabolic flight is the only feasible means of obtaining actual microgravity conditions on Earth, albeit in roughly 30-second increments. Due to the limitations on time and volume, as well as the cost of test operations, parabolic flight simulations should be limited to validation of other simulation media or to testing of technologies not otherwise accurately simulated, such as grasping of lightweight free-floating targets. The suggested development activities would increase the range of potential simulation activities, as well as providing a more flight-like environment which includes the roles of ground-based participants.
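The 25-30 second ceiling cited above follows directly from ballistic flight: the microgravity segment lasts roughly twice the vertical velocity at parabola entry divided by g. The sketch below uses assumed, illustrative entry conditions for a transport-category aircraft, not actual KC-135 flight parameters.

import math

G = 9.81  # m/s^2

def parabola_microgravity_time(true_airspeed_mps, pitch_angle_deg):
    # Duration of the ballistic (microgravity) segment: the time for the
    # vertical velocity to reverse, t = 2 * v_vertical / g.
    v_vertical = true_airspeed_mps * math.sin(math.radians(pitch_angle_deg))
    return 2.0 * v_vertical / G

# Assumed entry at ~180 m/s true airspeed with a 42-degree pitch-up:
print(parabola_microgravity_time(180.0, 42.0))   # roughly 25 seconds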

3.3.10 Human/Robot Interactions Since the demise of the X-20 and the Manned Orbiting Laboratory in the 1960's, it has become anathema for any non-NASA space program to even discuss the use of humans in routine space operations. However, given the complexities of the space manufacturing enterprises discussed previously, it would be remiss of this report not to explicitly consider humans as another option in the space manufacturing toolkit. As described earlier, the Ranger dexterous servicing system (even at a comparatively early stage in its development) has demonstrated the capability to perform 85% of the tasks currently required for the highly dexterous servicing activities on the Hubble Space Telescope. Logical evolutionary developments to the robot would bring this total into the low 90's. The Robonaut anthropomorphic robot, which is an order of magnitude more complex (at least in the end effectors) and significantly behind Ranger in flight certification, is currently estimated to be in the low 90's, and would require major new development to push that value up into the 97-98% range. From a systems perspective, the question to ask is whether it is worth paying the marginal additional development costs to get the last few percentage points of capability, or better to either forego some tasks which are beyond robotic capabilities or (when the benefit makes the costs worthwhile) provide humans on-site to accomplish the highly dexterous and complex tasks beyond robotic skill levels. (It should be mentioned that principal among this class of tasks is probably robotic repair.)

3.3.10.1 Limitations of Current SOA Humans are currently used in direct cooperative activities with robotics, in that both the shuttle and space station remote manipulator systems are used to position and restrain EVA crew for servicing operations. This has been acceptable to safety review panels, in that the manipulators are constrained to slow movements only, and externally controlled brakes are generally applied at each operating location. Experiments at the NASA Marshall Space Flight Center by the University of Maryland have demonstrated that robotic augmentation of EVA increased overall mission performance by between 60% (Hubble Servicing Missions SM1 and SM3b, both with substantial contingency activities) and 400% (SM2 and SM3a, both primarily simple ORU replacement tasks well suited to robotic involvement). Plans exist for augmentation of RMS manipulator foot restraints with two Ranger dexterous manipulators to assist EVA crew on the planned Hubble SM-4 mission; current estimates are that this configuration would reduce the total required EVA time from 30 hours to 17. To facilitate this level of human/robot interaction, there needs to be a greater emphasis on autonomous safety systems, probably in concert with sensor skins to provide external proximity warnings. Farther-term possibilities would include the direct incorporation of robotic systems into the space suit itself, augmenting human capabilities and maximizing the capabilities of the symbiotic partnership.

3.3.10.2 Needed Research and Development
• Detailed system studies of marginal costs for humans and robots in space, to identify conditions under which human/robot teams become the most cost-effective solution to space operations requirements
• Investigation through simulation of the most beneficial roles for human/robotic cooperation in a single unified work site
• Development of advanced technologies for cooperative human/robotic operations, including possible use of the partnership beyond low Earth orbit for the development of space manufacturing using nonterrestrial materials
• Development of innovative systems which integrate human and robotic capabilities to maximize mission performance and mission assurance

3.3.10.3 Potential Benefits If humans are present in the space manufacturing work site of the future, their skills and capabilities make them an essential part of the human/robot partnership for maximizing system performance. Because there will always be categories of tasks ill-suited to robots (regardless of the growth of overall capabilities), the use of humans should be approached as a systems analysis question, rather than by taking a priori dogmatic positions based on guesses at human capabilities or human costs.
3.4 Visions of Future Capabilities We came into space as tourists – gazing with awe at previously unseen wonders, looking back on our home with new perspectives, taking pictures and, in a few instances, picking up some rocks as a memento of the trip. We stand at the point of a sea change: making space into a place of commerce, of national security, a base camp for renewed exploration and the exploitation of new sources of needed materials. To make this change requires new capabilities for working in space. We have used humans in the past when we needed to interact with objects in the space environment, and that will continue and expand. However, even the most optimistic projections of increased human presence in space fall far short of the need for new capabilities and for functional presence in new and more remote locations. Only robotics can fill this need.
Unfortunately, our current ability to develop highly capable robots in space falls far short of public perception. Far from the visions of R2D2 and C3PO, we are in fact only barely able to pick up a rock, or to replace a component specially designed to make it easy for the robot. The ability to build a car, or a computer, on a highly controlled assembly line is of little value in an environment with no roads, or no gravity.
At the same time, the promise of space robotics is clearly there. For twenty years, crane-type manipulators have been indispensable for payload operations and the construction of the International Space Station. Development systems, operating in the best simulations of space we can achieve on the ground, have grabbed free-floating balls out of the air and repaired the Hubble Space Telescope. Prototype robots have demonstrated the functionality to move around a work site, both in space and on the surface of Mars, to gain new perspectives and to perform valuable tasks. Highly capable systems have been developed for space flight to demonstrate dexterous operations, but have been tied to the ground due to external budget shortfalls or internal management shortcomings.
There is, perhaps, no single area of technology with such great potential to expand our range of options and to transform our space capabilities as robotics. What is lacking is a sponsoring organization with the foresight to visualize the quantum leaps in capability that are within our reach, and the resources to unleash the eager brainpower in this country to develop the innovative technologies and systems required to bring about this revolution in space.