
H2020 PULSAR D4.1 - D11.1B - TECHNOLOGY REVIEW

Document reference: D4.1 H2020_PULSAR-TAS-D11.1b Technology Review v2.6.docx
Version: 2.6
Delivery date: 31/05/2019
Confidentiality level: Public
Lead partner: ONERA
Project reference: Grant Agreement n° 821858

Prepared by: PULSAR Consortium, 28/02/2019
Reviewed by: Vincent Bissonnette (MAG), 29/05/2019
Approved by: Thierry Germa (MAG), 31/05/2019

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 821858

Distribution:
- EC: Christos Ampatzis
- PSA: Sabine Moreno (CNES), Michel Delpech (CNES), Javier Rodriguez (CDTI), Daniel Nolke (DLR), Daniel Jones (UKSA), Gianfranco Visentin (ESA)
- PULSAR Consortium: Magellium, DLR, Space Applications, Graal Tech, ONERA, CSEM, DFKI, Thales Alenia Space in France


Document Change Record

Ed. | Vers. | Date | Change description
1 | 0 | 27/02/2019 | Document created
1 | 1 | 5/03/2019 | Draft skeleton
1 | 3 | 1/04/2019 | Draft table of contents and initial contributions
1 | 5 | 1/05/2019 | Contributions from partners
2 | 0 | 15/05/2019 | Complete restructuring; split in three volumes: D11.1a – Mission Analysis, D11.1b – Technology Review, D11.1c – System Requirement Document
2 | 1 | 22/05/2019 | Contributions from Magellium and DFKI
2 | 2 | 22/05/2019 | Merge of previous versions; addition of I3DS section (TAS)
2 | 3 | 28/05/2019 | Additional contributions from CSEM
2 | 4 | 28/05/2019 | Additional contributions from SAS
2 | 5 | 28/05/2019 | Additional contributions from ONERA
2 | 6 | 29/05/2019 | Review by TAS-F and SAS


Table of Contents

1 Introduction
  1.1 Context
  1.2 Associated documents
    1.2.1 Applicable Documents
    1.2.2 Reference Documents
2 Global System Architecture
3 Hardware Components
  3.1 Robotic Assembly System (RAS)
    3.1.1 Canadarm, Canadarm2, JEMRMS, ERA
    3.1.2 Dextre, SFA
    3.1.3 DEXARM
    3.1.4 Orbital Express Robotics (2007)
    3.1.5 Front-end Robotics Enabling Near-term Demonstration (FREND) Robotic Arm
    3.1.6 Dragonfly
    3.1.7 Compliant Assistance and Exploration SpAce Robot (CAESAR)
    3.1.8 References
  3.2 Segmented Mirror Tiles (SMT)
    3.2.1 Review of Telescopes with Segmented Primary Mirror
    3.2.2 Review of High Accuracy Multi-axis Positioning Systems
      3.2.2.1 Naos Fast Tip Tilt Mirror
      3.2.2.2 Hexapod for M2 mirror on VLT Auxiliary Telescopes
      3.2.2.3 EMIR – DTU
      3.2.2.4 SOFIA hexapod and tilt-chopping mechanism
      3.2.2.5 M2 Hexapod for the Gran Telescopio CANARIAS
    3.2.3 Technologies for High Precision Position Measurements and Actuation
      3.2.3.1 Generic Review of the Major 1D Position Sensors Families
      3.2.3.2 Generic Review of the Major Actuators Families
      3.2.3.3 Review of the Sensors Used in SMT (mainly edge sensors)
      3.2.3.4 Review of Actuators Used in SMT
      3.2.3.5 Review of the Metrology Techniques Used to Monitor Wave Front of Primary Mirrors in SMT
      3.2.3.6 Sensors for Implementation in the dSMT Within the Framework of PULSAR dPAMT
      3.2.3.7 Actuators for Implementation in the dSMT Within the Framework of PULSAR dPAMT
      3.2.3.8 Sensors Foreseen in the Framework of PULSAR dPAMT to Perform dSMT Characterization
      3.2.3.9 Metrology of SMT Used in the Framework of PULSAR dPAMT
      3.2.3.10 Conclusions and Recommendations About Sensors and Actuators Technological Review
      3.2.3.11 References
    3.2.4 SMT Review Conclusions
  3.3 Exteroceptive Sensors: I3DS Integrated 3D Sensors


    3.3.1 Review and Assessment of OG4 Components
    3.3.2 OG4 ICU
      3.3.2.1 Hardware Architecture
      3.3.2.2 FPGA IP
      3.3.2.3 The Zynq UltraScale+ Architecture
      3.3.2.4 Performance
      3.3.2.5 Limitations
    3.3.3 OG4 – Sensors
      3.3.3.1 Cameras
      3.3.3.2 LIDAR
      3.3.3.3 Star Tracker
      3.3.3.4 Radar
      3.3.3.5 IMU
      3.3.3.6 Contact/Tactile and Force/Torque Sensor
      3.3.3.7 Illumination Devices: Pattern Projector and Wide Angle Torch
      3.3.3.8 Sensors Performances Summary
    3.3.4 Conclusions
  3.4 Standard Interface
    3.4.1 Standard Interface in PULSAR
    3.4.2 OG5-SIROM Interface
      3.4.2.1 Mechanical Interface
      3.4.2.2 Electrical Interface
      3.4.2.3 Data Interface
      3.4.2.4 SIROM Controller
      3.4.2.5 OG5-SIROM Specifications
    3.4.3 HOTDOCK Interface
      3.4.3.1 Mechanical Interface
      3.4.3.2 Power Interface
      3.4.3.3 Data Interface
      3.4.3.4 HOTDOCK Controller
      3.4.3.5 Thermal Interface
    3.4.4 iSSI (IBOSS) Interface
    3.4.5 Interface Assessment
      3.4.5.1 Mechanical Interface and Mechanisms
      3.4.5.2 Power Interface
      3.4.5.3 Data Interface
      3.4.5.4 Controller
4 Software Components
  4.1 ESROCOS Framework
    4.1.1 Introduction
    4.1.2 Reference Implementation
    4.1.3 Capabilities and Limitations
    4.1.4 Conclusions
  4.2 ERGO Autonomy Framework


    4.2.1 Introduction
    4.2.2 Agent Reactors
      4.2.2.1 Reactor Definition
      4.2.2.2 Reactors Instantiation
    4.2.3 ERGO Orbital Case
  4.3 InFuse
    4.3.1 Introduction
    4.3.2 OG3-InFuse perception functions
    4.3.3 InFuse CDFF building blocks
    4.3.4 Available sensors from I3DS
    4.3.5 Calibration tools
    4.3.6 Conclusion
  4.4 RAS Motion Control
    4.4.1 Dextre, SFA
    4.4.2 DEXARM
    4.4.3 Orbital Express Robotics
    4.4.4 Front-end Robotics Enabling Near-term Demonstration (FREND) Robotic Arm
    4.4.5 Dragonfly
    4.4.6 Compliant Assistance and Exploration SpAce Robot (CAESAR)
  4.5 AOCS
    4.5.1 AOCS Functions
    4.5.2 AOCS Requirements
    4.5.3 Modelling tools
    4.5.4 Non-linear model for simulation purpose
    4.5.5 SDT model for control purpose
    4.5.6 Controller design strategies and tools
    4.5.7 Control design for the deployment phase
    4.5.8 Control design for the observation phase
    4.5.9 AOCS Implementation
    4.5.10 References
  4.6 Simulation Environments
    4.6.1 Unity3D
    4.6.2 Unreal Engine 4
    4.6.3 Gazebo
      4.6.3.1 DART
      4.6.3.2 ODE
      4.6.3.3 Bullet
    4.6.4 Conclusion
5 Conclusion


List of Figures

Figure 3-1: Canadarm2 and JEMRMS (MA + SFA)
Figure 3-2: The European Robot Arm
Figure 3-3: Dextre
Figure 3-4: DEXARM (ESA)
Figure 3-5: Orbital Express
Figure 3-6: FREND system during demonstrations in the NRL Proximity Operations Test Facility
Figure 3-7: Dragonfly Assembly Robot and Tool
Figure 3-8: Joint of CAESAR
Figure 3-9: Baseline manipulator: straight configuration (left) and storage configuration (right)
Figure 3-10: Different kinematics
Figure 3-11: Characteristics of the main space robotic arms [Dubanchet2016]
Figure 3-12: SSRMS, ERA and JEMRMS Specifications [Laryssa2002]
Figure 3-13: Naos Field Selector
Figure 3-14: FlexTec® membrane and rod parts used on the NAOS field selectors
Figure 3-15: FlexTec® linear guiding parts and linear voice-coil used on the NAOS field selectors
Figure 3-16: Silicon Carbide (SiC) mirror of the NAOS field selectors, 90 mm diameter lightweight, equipped with rods (3x) and displacement sensor targets (3x)
Figure 3-17: Naos Field Selector Specifications
Figure 3-18: M2 mirror on Hexapod mounted on the VLT auxiliary telescope in Paranal
Figure 3-19: M2 VLT auxiliary telescope Hexapod without mirror ready for implementation
Figure 3-20: M2 VLT auxiliary telescope Hexapod Specifications
Figure 3-21: M2 VLT auxiliary telescope Hexapod testing method
Figure 3-22: EMIR - DTU concept
Figure 3-23: EMIR - DTU during qualification tests
Figure 3-24: EMIR – DTU Specifications
Figure 3-25: SOFIA telescope
Figure 3-26: SOFIA hexapod and tilt-chopping mechanism concept
Figure 3-27: SOFIA Hexapod and Tilt-Chopping Mechanism
Figure 3-28: SOFIA Actuators
Figure 3-29: SOFIA Focus Center Mechanism Requirements
Figure 3-30: SOFIA Tilt-Chop Mechanism Requirements
Figure 3-31: M2 Hexapod for GranTeCan
Figure 3-32: Relations between displacement, resolution and maximal frequency of various types of sensors [3]
Figure 3-33: Relations between displacement and force for various types of actuators [3]
Figure 3-34: Relations between displacement stroke and displacement resolution for various types of actuators [3]
Figure 3-35: Relations between displacement and operating frequency for various types of actuators [3]
Figure 3-36: TMT Edge Sensors CAD and spatial arrangement
Figure 3-37: TMT Edge Sensors layout


Figure 3-38: Keck Edge Sensor layout and performances
Figure 3-39: E-ELT Edge Sensor
Figure 3-40: CAD view of an E-ELT segment and a detailed CAD view of the actuator
Figure 3-41: Principle schema of the two-stage PI actuator
Figure 3-42: The first 21 Zernike polynomials, ordered vertically by radial degree and horizontally by azimuthal degree
Figure 3-43: Unwrapping of the phased image of an eye surface (4D Technology). Left: wrapped image. Right: 3D reconstructed image after unwrapping
Figure 3-44: JWST Wavefront Sensing and Control Process
Figure 3-45: Imagine-Optic Metrology
Figure 3-46: Optocraft Metrology
Figure 3-47: AkaOptics Metrology
Figure 3-48: GEDMRF Metrology
Figure 3-49: 4D Technology Metrology
Figure 3-50: Zygo Metrology
Figure 3-51: DifroTec Metrology
Figure 3-52: Linear Gage
Figure 3-53: Hall Sensor
Figure 3-54: Capacitive Sensors
Figure 3-55: Confocal Light Sensor
Figure 3-56: A Possible Implementation of the Edge Sensors on dPAMT
Figure 3-57: I3DS Product Breakdown Structure
Figure 3-58: OG4 ICU in its rackmount configuration
Figure 3-59: OG4 ICU hardware architecture
Figure 3-60: Zynq UltraScale+ architecture, reproduced from Xilinx UltraScale+ Product Selection Guide
Figure 3-61: ASN.1 coding performance
Figure 3-62: Cosine Acquisition Board
Figure 3-63: Cosine High Resolution Camera (Note: Acquisition Board not shown)
Figure 3-64: Basler ace High Resolution Camera
Figure 3-65: Cosine Stereo Camera
Figure 3-66: Cosine TIR Camera
Figure 3-67: OG4 LIDAR sensor
Figure 3-68: Terma T1 Star Tracker Optical Head
Figure 3-69: OG4 Hertz radar
Figure 3-70: OG4 IMU – Silicon Sensing DMU-30
Figure 3-71: PIAP F/T and C/T Sensor
Figure 3-72: I3DS Structured Light Sensor
Figure 3-73: Gray coding pattern
Figure 3-74: Structured Light Sensor Dimension
Figure 3-75: I3DS Wide Angle Illumination
Figure 3-76: SIROM interface design and components
Figure 3-77: SIROM Latching Mechanism Concept
Figure 3-78: Electrical Interface Subsystem (EIS) Board


Figure 3-79: SIROM controller architecture and hardware implementation
Figure 3-80: HOTDOCK interface concept before and after coupling
Figure 3-81: HOTDOCK internal configuration
Figure 3-82: iBOSS iSSI Interface (with thermal ring)
Figure 3-83: Example of a triple connection with the Standard Interface
Figure 4-1: PULSAR Concept Architecture
Figure 4-2: ESROCOS Components (source: RD-6)
Figure 4-3: ERGO Orbital Use Case Architecture
Figure 4-6: InFuse CDFF architecture
Figure 4-7: I3DS sensor suite components
Figure 4-8: I3DS sensors for the Orbital scenario
Figure 4-9: Basler Ace acA2040-25gmNIR
Figure 4-10: I3DS stereo cameras
Figure 4-11: Theoretical depth error w.r.t. the depth distance
Figure 4-12: I3DS structured light sensor
Figure 4-13: Gray coding pattern
Figure 4-14: Structured light sensor dimension
Figure 4-15: I3DS wide illumination projector
Figure 4-16: Structure of joint level control
Figure 4-17: Structure of Cartesian impedance control
Figure 4-18: Conceptual frequency spectra of disturbance sources (bottom), spacecraft structural modes (middle) and strategies for damping these modes using either active or passive means (top) [2]
Figure 4-19: Cantilevered connection of (rigid or flexible) appendages on a rigid hub
Figure 4-20: Block diagram of the TITOP system
Figure 4-21: AOCS control structure
Figure 4-22: ERGO process [24]

List of Tables

Table 1 – List of applicable documents
Table 2 – List of reference documents
Table 11 – DLR model-based tracker InFuse results
Table 12 – Summary on model-based tracking method
Table 13 – Summary on pointcloud tracking method
Table 14 – Summary on fiducial marker detection method
Table 15 – Summary on 3D pointcloud reconstruction
Table 16 – Summary on 2D features detection methods available in InFuse
Table 17 – Summary on 2D features descriptors extraction methods available in InFuse
Table 18 – Summary on 2D features matching methods available in InFuse
Table 19 – Summary on Perspective-n-Point methods available in InFuse
Table 20 – Summary on 3D features detection methods available in InFuse
Table 21 – Summary on 3D features descriptors extraction methods available in InFuse
Table 22 – Summary on 3D features matching methods available in InFuse
Table 23 – Summary on ICP methods available in InFuse


Table 24 – Summary on image processing methods available in InFuse
Table 25 – Summary on stereo reconstruction methods available in InFuse
Table 26 – Summary on disparity to pointcloud methods available in InFuse
Table 27 – Summary on pointcloud transformation methods available in InFuse
Table 28 – Summary on 3D transformation estimation methods available in InFuse
Table 29 – Basler acA2040-25gmNIR specifications
Table 30 – Zeiss Interlock Compact 2.8/21 specifications
Table 31 – CMOSIS CMV4000 specifications
Table 32 – COSINE HRC Optics specifications
Table 33 – Comparison between different stereo parameters

Acronyms and Definitions

AOCS: Attitude and Orbit Control System
APM: Advanced Payload Module
AUV: Autonomous Underwater Vehicle
CA: Collaboration Agreement
CAD: Computer-Aided Design
CMM: Coordinate Measuring Machine
DoF: Degree of Freedom
dPAMT: demonstrator of Precise Assembly of Mirror Tiles (demonstrator 1)
dLSAFFE: demonstrator of Large Structure Assembly in Free-Floating Environment (demonstrator 2)
dISAS: demonstrator of In-Space Assembly in Simulation (demonstrator 3)
JWST: James Webb Space Telescope
RAS: Robotic Assembly System
RCOS: Robotic Control Operating System
SI: Standard Interface
SMT: Segmented Mirror Tile


1 Introduction

1.1 Context

Autonomous assembly of large structures in space is a key challenge for future missions whose structures are too large to be launched and self-deployed as a single piece. The objective of PULSAR (Prototype of an Ultra Large Structure Assembly Robot) is to develop and demonstrate key technologies for in-space assembly of the primary mirror of a large telescope.

The first main task of PULSAR aimed at providing a work baseline for the development of the project. This was achieved through three main activities:
- Mission analysis: investigating previous and near-future similar missions, deriving high-level recommendations for PULSAR-like missions and proposing a system architecture.
- Technology review: analysing in depth the different building blocks foreseen for PULSAR through a review of state-of-the-art technologies and near-future developments.
- Compilation of system requirements, derived from the mission analysis and the technology review.

This document focuses on the technology review. The first section presents the system architecture derived from the mission analysis and the trade-offs performed at system level. Sections 3 and 4 present the analysis of the different building blocks foreseen for PULSAR, identifying the available technologies and the main knowledge gaps that this project would need to address.

1.2 Associated documents

1.2.1 Applicable Documents

Ref | Description
AD-1 | Grant Agreement Nr 821858-PULSAR
AD-2 | Strategic Research "Space Robotics Technologies" Collaboration Agreement

Table 1 – List of applicable documents

1.2.2 Reference Documents

Ref | Description
RD-1 | D4.1 H2020_PULSAR-TAS-D11.1a Mission Analysis
RD-2 | ESROCOS D2.1 Product Definition
RD-3 | ESROCOS D3.1 Detailed Design Document
RD-4 | ESROCOS D4.4 RCOS APIs and Tools document
RD-5 | ESROCOS D5.4 Evaluation of Test Results
RD-6 | ESROCOS D7.2 Final Report
RD-7 | SPACE-12-TEC-2018 Guidance Document, D3.2 Compendium of SRC activities (for call 2)

Table 2 – List of reference documents


2 Global System Architecture

Please refer to D4.1-D11.1a Volume A Mission Analysis for a complete picture of the system architecture considered for the following technology review.


3 Hardware Components

3.1 Robotic Assembly System (RAS)

Robot-based assembly in the absence of gravity raises fundamental technical questions that do not exist for terrestrial applications. The robot is mounted on an actively regulated platform (either a simple satellite or a large assembly platform), and the motion of the robot has dynamic effects on the platform itself. These effects depend on the mass of the moved objects and the velocity of the motion. When the robot works on a platform different from its own floating base, the required physical contact affects both platforms. The contact can also affect the perception system, as minor changes in position might lead to significant visual occlusions.
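To make this coupling concrete, the short sketch below applies conservation of angular momentum to estimate how far an uncompensated platform rotates while its arm swings a payload around. All numerical values (inertia, payload mass, lever arm, rates) are purely hypothetical placeholders.

```python
import numpy as np

# Hypothetical values: a free-floating platform whose arm moves a payload.
# With no external torque, total angular momentum stays zero, so the base
# must rotate opposite to the payload unless the AOCS compensates.
I_base = 5.0e4      # platform inertia about the shared axis [kg m^2]
m_payload = 120.0   # payload mass [kg]
r_arm = 3.0         # payload lever arm about the system centre of mass [m]
w_payload = 0.02    # payload angular rate during the transfer [rad/s]
t_motion = 60.0     # duration of the arm motion [s]

# Momentum balance: I_base * w_base + m * r^2 * w_payload = 0
w_base = -(m_payload * r_arm**2) * w_payload / I_base
drift_deg = np.degrees(w_base * t_motion)
print(f"base rate {w_base:.2e} rad/s, attitude drift {drift_deg:.2f} deg")
```

Even this crude model shows a drift of over a degree for a minute-long transfer, which is why arm motion planning and AOCS design cannot be treated separately.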

Robotics in orbital space has a long tradition. Already in 1969, Canada was invited by NASA to participate in the Space Shuttle program, and a manipulator system was identified early on as an important component. Nevertheless, only a few robot arms have been used in orbital applications to date. These are described in the following sections.

3.1.1 Canadarm, Canadarm2, JEMRMS, ERA

Robotic manipulators were essential for the construction and maintenance of the International Space Station (ISS). Manipulators are used to support experiments, extra-vehicular activity (EVA) and scientific activities onboard the ISS. The Canadian, Japanese and European Space Agencies developed robotic arms, each with a specific role to play over the lifetime of the ISS. Each robotic manipulator has a unique design and task-specific characteristics.

Figure 3-1: Canadarm2 and JEMRMS (MA + SFA)

The best-known and most flight-proven robot arms are probably the Canadarm and Canadarm2, built by the Canadian company MDA, now a MAXAR subsidiary.


The Canadarm was mounted in the cargo bay of the Space Shuttle orbiter for deploying, manoeuvring and capturing payloads. The 6dof robot arm had a length of 15.2 m and a mass of 450 kg. The 7dof Canadarm2 attached to the International Space Station (ISS) is an advancement of the Canadarm, with strongly extended mission requirements: the hardware has to survive for years rather than weeks and must be maintainable on orbit by astronauts rather than by mechatronics experts. The workspace was extended in length and reachability, which led to a 7dof kinematics and an arm length of 17.6 m. The arm has to handle much larger payloads of up to 116 tons, which led to an arm mass of 1,800 kg. Additionally, an innovative end-over-end mobility of the arm was implemented: each end of Canadarm2 features an identical "hand", known as a Latching End Effector, containing cables that tighten to ensure a strong grip. These allow the robotic arm to firmly grasp objects or latch itself to the Station. The functional design of the officially named Space Station Remote Manipulator System (SSRMS) supports the following tasks:
- assembling and maintaining the International Space Station;
- relocating Dextre (see next subsection), science experiments, spare parts, and even astronauts;
- capturing unpiloted resupply vehicles.

The Japanese Experiment Module (JEM) Remote Manipulator System (JEMRMS) is one of the contributions of the Japan Aerospace Exploration Agency (formerly the National Space Development Agency of Japan) to the ISS. The Main Arm (MA), a 9.91 m, 6dof manipulator, is permanently attached to the JEM Pressurized Module (PM). More dexterous tasks are accomplished using JAXA's Small Fine Arm (SFA), also a 6dof manipulator, which is operated when grappled at the end of the MA. The JEMRMS is a system used primarily for experiment payload handling; the MA is used for capture/release and transfer of experiment payloads.

Figure 3-2: The European Robot Arm

The European Robot Arm (ERA) is an 11 m, 7dof manipulator with a relocatable base, which will be used to assemble and service the Russian segment of the International Space Station. The arm consists of two end-effectors, two wrists, two limbs and one elbow joint, together with electronics and cameras. Either end can act as the 'hand' of the robot or as the base from which it operates. ERA will work with the new Russian airlock to transfer small payloads directly from inside to outside the International Space Station. This will reduce the set-up time for astronauts on a spacewalk and allow ERA to work alongside them. The launch of ERA and the Russian Multipurpose Laboratory Module (Nauka), initially planned for 2007, has been repeatedly delayed for various reasons. It is currently scheduled for the summer of 2020.

Table 3-1: Characteristics of Large Space Robotic Arms

Table 3-1 summarizes the characteristics of the large space robot arms (7th ESA Workshop on Advanced Space Technologies for Robotics and Automation, ASTRA 2002, ESTEC, Noordwijk, The Netherlands, 19-21 November 2002). Among other things, they are explicitly designed for assembly tasks. It is obvious, however, that these robot systems are not applicable to assembly tasks as required in PULSAR: their positioning accuracy falls far short of the needs of structural-component assembly and of most satellite servicing tasks.

3.1.2 Dextre, SFA

SSRMS and JEMRMS are supplemented by smaller, more dexterous robot arms: the Canadarm2 by Dextre (the Special Purpose Dexterous Manipulator, SPDM) and the JEMRMS by the Small Fine Arm (SFA). Dextre is a 15dof robot with two 7-jointed arms and a body roll joint that allows swivelling of its torso. It is designed to be operated from the end of Canadarm2 or from the Mobile Base System (MBS) or the ISS Power Data Grapple Fixtures (PDGFs). At the end of each of Dextre's arms there is an ORU Tool Change-out Mechanism (OTCM) that can grasp and actuate payloads, tools, and bolts. Dextre is designed to move only one arm at a time: to maintain stability, to harmonize activities with Canadarm2 operations on the Space Station, and to minimize the possibility of self-collision. Dextre's primary role on the Space Station is to perform repair and replacement (R&R) maintenance tasks on robotically compatible hardware such as Orbital Replaceable Units (ORUs), thereby easing the burden on the ISS crew. Many spares are stored on the ISS, and Dextre is able to carry them to and from worksites and install replacements when failures occur. In 2013, CSA and NASA demonstrated in the experimental Robotic Refueling Mission (RRM) how robots could service and refuel satellites on location in space to extend their useful lifetime.

Characteristics of Dextre:
- Height: 3.66 m
- Width: 2.34 m (across shoulders)
- Arm length: 3.5 m linear stroke
- Mass (approx.): 1662 kg
- Mass handling/transportation capacity: 600 kg
- Degrees of freedom: 15
- Peak power (operational): 2000 W
- Average power (keep-alive): 600 W
- Applied tip load range: 0-111 N
- Stopping distance (under max. load): 5.9 inches (approx. 15 cm)
- Relative positioning accuracy: 2 mm

Figure 3-3: Dextre

The Small Fine Arm (SFA) of the JEMRMS (Figure 3-1) is used for capture/release, transfer, and maintenance of ORUs on the JEF and JLE, and for communicating with and providing power to captured ORUs. The SFA is attached to, and grasped by, the tip of the Main Arm during operation; it is used for dexterous tasks such as the replacement of the EF ORU (Orbital Replacement Unit). The SFA consists of two aluminium booms, six joints, an electronics unit, an end-effector, a force/moment sensor and a TV camera on the tip. The end-effector, called the "Tool", grasps the tool fixture and supplies torque to the bolt. The SFA is designed to handle small payloads with masses from 80 kg to 300 kg. Up to 80 kg, the SFA is able to use its force-moment accommodation function, which is useful when the SFA needs to push a payload so that it can follow the contour of guide plates (a minimal sketch of such a scheme is given after the characteristics list below). The SFA also has a wrench at the tip and is able to rotate a bolt head to apply torque to it. The SFA is expected to support space experiments using its dexterous tool at the tip.

Characteristics of SFA:
- Degrees of freedom: 6
- Length: 2.2 m
- Mass: 190 kg
- Handling capacity: max. 80 kg with compliance control mode; max. 300 kg without compliance control mode
- Positioning accuracy: translation ±10 mm; rotation ±1 deg
- Maximum tip force: more than 30 N
- Lifetime: more than 10 years
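The following minimal sketch illustrates the kind of 1-D admittance law behind such a force-moment accommodation function: the commanded tool velocity is proportional to the measured contact force, so a pushed payload relaxes onto guide-plate contours instead of building up load. The stiffness, gain and contact model are hypothetical placeholders, not JAXA's actual controller.

```python
# Illustrative 1-D admittance loop: back the tool off in proportion to the
# measured contact force until the load relaxes. All values are placeholders.
K_CONTACT = 2.0e4    # modelled guide-plate stiffness [N/m]
ADMITTANCE = 0.002   # commanded velocity per newton of force [m/(N*s)]
DT = 0.001           # control period [s]

x_wall = 0.0         # guide surface location [m]
x_tool = 0.0005      # tool starts 0.5 mm pressed into the surface

for _ in range(2000):                          # simulate 2 s of control
    penetration = max(0.0, x_tool - x_wall)
    f_contact = K_CONTACT * penetration        # "measured" contact force [N]
    x_tool += -ADMITTANCE * f_contact * DT     # back off when loaded

print(f"residual force: {K_CONTACT * max(0.0, x_tool - x_wall):.3f} N")
```

The contact force decays exponentially with time constant 1/(ADMITTANCE * K_CONTACT), here 25 ms, which is the essential behaviour that lets a compliant arm track a contoured surface.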

3.1.3 DEXARM

The goal of the DEXARM project (2004) was the development of a robot arm comparable in size, force and dexterity to a human arm, to be used for space robotics applications in which the manipulation/intervention tasks were originally conceived for humans. These applications are typically external or internal servicing of orbiting platforms or robotics for planetary exploration. The main challenge of this development lay in minimising the resources that the applications require. To achieve this goal, ESA encouraged the exploitation of innovative approaches and technologies to drastically minimise mass, volume and power consumption while providing adequate performance (output torque capability and positioning accuracy/repeatability).

Figure 3-4: DEXARM (ESA)

The main DEXARM characteristics are summarised below:
- Functional and performance characteristics:
  - lightweight dexterous robot arm;
  - redundant kinematics (7 joints, with angular range of ±175° for roll joints, -175°/+45° for pitch joints);
  - force-torque capability of 200 N and 20 Nm at the arm tip;
  - payload handling capability of 10 kg at 1 g.
- Physical characteristics:
  - mass of about 25 kg;
  - power consumption of about 100 W;
  - length of 1.2 m.
- Operational characteristics:
  - capability of performing 1-g operations without using any special off-loading device.
- Safety:
  - Space Station safety requirements are applicable, requiring special attention to robotics-related hazards such as the possibility of collision with other Space Station elements.

An engineering robot arm was set up and application tests were performed. DEXARM can be qualified and utilised in the frame of future ESA flight programmes.

3.1.4 Orbital Express Robotics (2007)

Orbital Express was a space mission managed by the Defence Advanced Research Projects Agency. The goal of the Orbital Express Space Operations Architecture program was to validate the technical feasibility of robotic, autonomous on-orbit refuelling and reconfiguration of satellites to support a broad range of future U.S. national security and commercial space programs. MDA developed the Orbital Express Autonomous Robotic Manipulator System comprising the following space and ground elements:

Figure 3-5 Orbital Express

- Small next-generation robotic arm:
  - DoF: 6
  - Length: 3 m
  - Mass: 71 kg
  - Power: 131 W
- Grapple fixtures and vision target for free-flyer capture and ORU transfer
- Docking interface camera and lighting system
- Standard, non-proprietary ORU containers and mating interfaces
- Proximity-ops lighting system
- Autonomous software
- Robotic ground segment

Each joint contained a motor, brake, gearbox, and position sensors. To support the large joint travel ranges required by mission operations, the arm was designed with offset joints and an external cable harness. An end-effector (EE) was mounted to the wrist cluster and formed the tip of the arm. The EE contained a Force-Moment Sensor (FMS) that was used to sense forces and moments at the tip of the arm. The EE housing was also used as the mounting support for the OEDMS Camera and Light Assembly (OLA), a component of the OEDMS vision system.


3.1.5 Front-end Robotics Enabling Near-term Demonstration (FREND) Robotic Arm

The DARPA-sponsored FREND program was created to prove the capability of autonomously executing an unaided grapple of a spacecraft which was never designed to be serviced. The FREND program developed and demonstrated a flight robotic arm system with its associated avionics, end-effector, and algorithms. The robotic arm system, developed by Alliance Space Systems (now part of MAXAR), was integrated into a high-fidelity test bed in which machine vision, trajectory planning, and force feedback control algorithms were used to accomplish autonomous rendezvous and grapple of a variety of spacecraft interfaces.

Figure 3-6: FREND system during demonstrations in the NRL Proximity Operations Test Facility

The primary requirement of the robotic arm system is to grapple a spacecraft. In addition, the arm must be highly dexterous to provide the appropriate workspace needed for a wide range of tasks and satellite targets; must be able to support its own weight during 1-g testing; and must be capable of operating in thermal and radiation environments representative of geosynchronous orbits.

The key requirements can be summarized as follows (the sketch after the list illustrates how the velocity requirement can be checked):
- One side of the arm end-effector needs to fit inside a 3" radius, 40" deep cylinder.
- The arm needs to be designed to maximize dexterity.
- The arm must achieve a 17 cm/s (6 cm/s) velocity along its end-effector axis in 75% (95%) of its dexterous workspace.
- The arm must weigh less than 80 kg.
- The arm must support its own weight as well as a 5 kg payload.
- The arm must survive launch loads.
- The arm is required to position a tool tip with a linear resolution of ±2 mm and an angular accuracy of ±0.4 deg.
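As a sketch of how such a velocity requirement can be checked against a design, the following fragment builds the Jacobian of a simplified planar 3-link arm (hypothetical link lengths and joint-rate limit, not FREND's geometry) and computes the joint rates implied by a 17 cm/s tip velocity at one sample configuration:

```python
import numpy as np

# Planar 3R arm: check what joint rates a 17 cm/s tip velocity requires.
L = np.array([1.0, 0.8, 0.5])          # link lengths [m] (placeholders)
q = np.radians([30.0, 40.0, -20.0])    # one sample configuration

# Planar Jacobian mapping joint rates to tip linear velocity.
cum = np.cumsum(q)
J = np.zeros((2, 3))
for i in range(3):
    J[0, i] = -np.sum(L[i:] * np.sin(cum[i:]))
    J[1, i] = np.sum(L[i:] * np.cos(cum[i:]))

v_des = np.array([0.17, 0.0])          # 17 cm/s along a tool axis [m/s]
q_dot = np.linalg.pinv(J) @ v_des      # least-squares joint rates [rad/s]

print("required joint rates [deg/s]:", np.degrees(q_dot).round(2))
print("feasible:", bool(np.all(np.abs(np.degrees(q_dot)) <= 10.0)))  # 10 deg/s cap
```

A real workspace analysis repeats this test over a dense sampling of configurations and tool directions, which is how fractions such as "75% of the dexterous workspace" are evaluated.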

This system includes a seven DOF flight robotic arm, a tool drive actuator and force/torque sensor at the end of the arm, the low-level closed-loop algorithms, the avionics to control it, the thermal system, launch locks, power system, spacecraft simulator, and other associated ground support equipment.


Table 3-2: FREND Characteristics

Two FREND arms are planned for the RSGS mission. The Naval Research Laboratory will assemble and test the payload, which will consist of a laser-ranging system to help with long-distance targeting, a dozen cameras, two robotic arms with cameras mounted on the ends, more than 100 circuit boards, and a tool kit of appendages the arms can swap in and out like a multi-bit screwdriver. The arms, each 88 kilograms and 2.3 meters long, are improved versions of the FREND arm. A second pair of FREND arms is planned for NASA's Restore-L mission, with a launch date of 2022.

3.1.6 Dragonfly

The Dragonfly project (2015) is to some degree comparable to PULSAR: both address the challenges of In-Space Assembly (ISA). Dragonfly's initial focus was the installation and reconfiguration of large antenna reflectors on a simulated geostationary satellite; the antennas are designed to focus the satellite signal to receivers on the ground. A first ground demonstration took place in August 2018, and further demonstrations were planned through 2018 to refine its processes and capabilities, including more fluid robotic arm movement and even more precise reflector alignments. Dragonfly is led by Space Systems Loral (SSL), a Maxar subsidiary, and funded by the Defence Advanced Research Projects Agency (DARPA). The study included a 7dof, 3.5 m ultra-lightweight robotic system, derived from Mars manipulator designs, to complete assembly of portions of the antenna system using a tool derived from DARPA Orbital Express and National Aeronautics and Space Administration (NASA) automated structural assembly experience.

Figure 3-7: Dragonfly Assembly Robot and Tool


It has two identical assembly tools (one on each end) that allow the robot to step end over end from base point to base point to reach antenna assembly points around the anti-Earth deck of a large GEO communications satellite. Important trades to conduct as the concept matures are: 1) determining the minimum number of DoF required in the robot for assembly, and 2) determining the minimum robot precision requirements. The tool uses two sets of grippers to react forces locally, thus making assembly and disassembly independent of the robot capabilities. The tool design allows nearly ±1 cm of positioning error in all axes, reducing the precision requirements on the positioning robot. Local force control in the robot handles any remaining misalignments during contact operations such as grasping and insertion/withdrawal tasks during assembly and disassembly, to achieve strain-free assembly.

3.1.7 Compliant Assistance and Exploration SpAce Robot (CAESAR)

The Compliant Assistance and Exploration SpAce Robot (CAESAR) is DLR's consistent continuation of its development of force/torque-controlled robot systems. The basis is DLR's well-known lightweight robot technology (LWR III), which was successfully transferred to KUKA, one of the world's leading suppliers of robotics. CAESAR is the space-qualified equivalent of the current service robot systems for manufacturing and human-robot cooperation. It is designed for a variety of on-orbit services, e.g. assembly, maintenance, repair, and debris removal in LEO/GEO. The dexterity and versatility of CAESAR will push the performance of space robotics to the next level, in a comparable way to how current intelligent, sensor-based service robots changed robotics on Earth. The seven degrees of freedom (DoF) robotic system is intended to be capable of catching satellites in LEO/GEO, even tumbling and/or non-cooperative ones. The dexterity and sensitivity of CAESAR enable assembly, maintenance, and repair of satellites.

The CAESAR development attracted the interest of many space stakeholders. The Japan Aerospace Exploration Agency (JAXA) has shown strong interest in cooperation between Japan and Europe on On-Orbit Servicing (OOS) and Active Debris Removal (ADR). The Johns Hopkins University Applied Physics Laboratory (APL) was interested in the robot technology for the comet sample return mission CORSAIR. In addition, Airbus DS and Jena Optronik are currently investigating a possible technology transfer of CAESAR for commercial use.

The CAESAR design requirements are driven by the use cases in Low Earth Orbit (LEO), Geostationary Earth Orbit (GEO), deep space missions (e.g. CORSAIR), as well as on the Moon. Nevertheless, the design of CAESAR is advanced by DLR RM independently of, but aligned with, existing missions. The strategic goal is to reach the Engineering Model (EM) level of a compliance-controlled space arm, demonstrating and assuring flight readiness within the time frame of future space missions.

The key to CAESAR's high performance is its intelligent impedance- and position-controlled joints. Each joint is a building block for setting up diverse robot kinematics depending on the different mission goals. The scalability of the robot is determined by the number of joints and the length of the links. CAESAR's seven DoF enable it to meet the dexterity and kinematic redundancy requirements. By extending the impedance controller, the CAESAR arm can behave compliantly while maintaining its TCP position. The compliant behaviour is triggered if any part of the robot detects contact with the environment. Compliance is a significant safety feature in dynamic environments or in close vicinity to astronauts.

As often stated in the past, there is an increasing demand for manipulation devices in On-Orbit Servicing. The main goal of CAESAR is to provide a multi-purpose, space-proof manipulation system that is able to cope with various tasks on cooperative and non-cooperative targets. Based on the heritage of ROKVISS and the experience gained in various projects and studies such as DEOS, DLR is convinced that reliable and robust manipulation of different objects can only be realised with configurable Cartesian impedance control (a minimal sketch of such a control law is given below). The robot therefore needs to provide accurate joint torque control. In addition, the production and qualification of the system have to be efficient and accurate to ensure commercial success and confidence in operation. The robot system has to be adaptable to various carriers and different types of satellites or spacecraft.
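To illustrate the family of control laws referred to here, the sketch below implements a generic Cartesian impedance law, tau = J^T (K (x_des - x) - D x_dot). The Jacobian, gains and poses are placeholders; CAESAR's actual controller additionally involves dynamics compensation, redundancy resolution and joint torque-sensor feedback, which are omitted here.

```python
import numpy as np

def cartesian_impedance_torque(J, x, x_des, x_dot, K, D):
    """tau = J^T (K (x_des - x) - D x_dot): the tool behaves like a
    spring-damper anchored at the desired 6-DoF pose."""
    err = x_des - x                      # translation + orientation error
    f_cmd = K @ err - D @ x_dot          # virtual spring-damper wrench
    return J.T @ f_cmd                   # map wrench to joint torques

# Placeholder 6x7 Jacobian for a 7-joint arm and illustrative gains.
J = np.random.default_rng(0).normal(size=(6, 7))
K = np.diag([500.0] * 3 + [50.0] * 3)    # stiffness [N/m, Nm/rad]
D = np.diag([40.0] * 3 + [4.0] * 3)      # damping

x = np.zeros(6)                                     # current pose
x_des = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0])   # 1 cm offset along x
tau = cartesian_impedance_torque(J, x, x_des, np.zeros(6), K, D)
print("joint torques [Nm]:", tau.round(2))
```

The point of the structure is that contact forces simply appear as extra terms in the pose error, so the arm yields predictably instead of fighting the environment; this is only possible if the joints can track the commanded torques accurately.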

CAESAR manipulator characteristics:
- Joint position sensor resolution: 82,830 increments / 320°
- Motor position sensor resolution (after gear): 11,650,644 increments / 320°
- Length of manipulator arm: 2.4 m + x (7dof)
- Arm mass: ~60 kg
- Housing: 2 mm aluminium
- Internal data bus: deterministic, real-time EtherCAT at 100 Mbit/s
- Range of motion: 320° for all axes
- Joint output torque: 80 Nm for all axes
- Joint velocity: up to 10°/s
- Operational temperature: -20°C to +60°C
- Non-operational temperature: -50°C to +80°C
- Radiation hardness: 40 krad TID (100 krad TID with additional shielding)
- Mission time: up to 10 years

The system concept consists of a seven-axis, impedance-controlled articulated arm with integrated electronics, a power conditioning unit and a robot control unit. Fast control loops in the joints and the high-speed real-time communication bus EtherCAT, connecting the joints and the Robot Control Unit (RCU), ensure smooth and accurate motion and force control during operation. As most of the services will be provided to GEO satellites, radiation hardness and lifetime are designed for 10 years of service in GEO.


Figure 3-8: Joint of CAESAR

The architecture is based on a modular design. Since there is no gravity load in space and the handling of large and heavy objects is envisaged, all joints have the same torque capability. The joint design provides a hollow shaft to enable internal cabling. To minimize moving harness, four electronic blocks (EB) are located between every two joints. Each electronic block controls two joints and provides two redundant control and drive systems per joint. A joint consists of a synchronous motor with commutation sensor, a harmonic drive gear and angular bearings, a torque sensor and a joint position sensor. All sensors and the motor windings are redundant. If required, and for 1-g testing purposes, all joints can be equipped with a brake, which also provides redundant windings.

The spare electronics of the fourth electronic block can be used to operate a gripper or a tool system. Thanks to this modularity, different configurations are possible, in particular ones in which the system is foldable to achieve minimum stowage volume during transportation. Depending on the configuration, several launch locks are necessary to provide robustness against vibration loads.

The electronic architecture of a joint is based on a signal processor, which computes the impedance and position control for two joints. It is connected via dual-ported memory to two motion-control DSPs that perform field-oriented control of the motor currents. Each of the two redundant motor windings is connected to a MOSFET-based power inverter. Data Position Measurement (DPM) and various filters and logic are implemented in an FPGA. Through an EtherCAT communication device, the joint processors communicate in real time with the RCU, which provides reference values for position and torque to achieve accurate Cartesian motions and damping.

Each joint is powered by a dedicated supply unit. The input voltage level and impedance, as well as the inrush current, are controlled by a unique Base Power Insulation Unit (BPIU) mounted at the base of the robot. Thereby, reliable grounding and good EMC for the complete system are guaranteed, independent of the environment of the carrier. During operation, various temperature signals as well as sensor characteristics are monitored to ensure reliable failure detection. An independent temperature control system, operated by the main controller of the carrier, is used to ensure correct heating before powering the robot. All components are housed in and thermally connected by aluminium to stay well within their operating limits and to withstand the individual total-dose radiation limits. With additional shielding, even higher radiation levels can be tolerated. With the modular design it is possible to realize different kinematics regarding the number of joints, joint configurations and limb lengths.


To be able to route all necessary cables through the hollow shaft of the joints, the EB controlling two joints must be placed between these two joints. For smaller stowage requirements, foldable kinematics are possible. The minimum foldable length for a manipulator with 7 joints is 1.302 m folded and 2.177 m in straight position (base to TCP). The DLR baseline is a manipulator design with 7 joints and alternating roll and pitch joints, starting with a roll joint at the base. In straight configuration, the length from base to TCP is 3.136 m, with a folded length of 1.781 m.

Figure 3-9: Baseline manipulator: straight configuration (left) and storage configuration (right)
Figure 3-10: Different kinematics

Figure 3-11: Characteristics of the main space robotic arms [Dubanchet2016]


Figure 3-12: SSRMS, ERA and JEMRMS Specifications [Laryssa2002]

3.1.8 References

[Dubanchet2016] Dubanchet, V. (2016). Modeling and control of a flexible space robot to capture a tumbling debris (Doctoral dissertation, École Polytechnique de Montréal).

[Laryssa2002] Laryssa, P., Lindsay, E., Layi, O., Marius, O., Nara, K., Aris, L., & Ed, T. (2002, November). International space station robotics: a comparative study of ERA, JEMRMS and MSS. In 7th ESA Workshop on Advanced Space Technologies for Robotics and Automation (pp. 19-21)

3.2 Segmented Mirror Tiles (SMT)

The technology research related to the Segmented Mirror Tiles was carried out along the three axes below:
- Telescopes with segmented primary mirrors: the review focused on the performances achieved and technologies implemented in existing telescopes with segmented primary mirrors, and extended to telescopes not yet in operation.
- High-accuracy multi-axis positioning systems: the review surveyed existing concepts of high-accuracy multi-axis positioning systems, focusing especially on 6dof hexapods and 3dof systems applicable to mirror tiles.
- Technologies for high-precision position measurements: among the aspects to consider in the development of the SMT, the capability to measure the mirror tile position precisely is essential both for the calibration of the mechanism and for the assessment of its performance. Such technologies may also be implemented in the space telescope in order to measure the actual shape of the mirror and provide corrective inputs to the primary mirror assembly to adjust its shape.


3.2.1 Review of Telescopes with Segmented Primary Mirror

On the ground, many large telescopes have a segmented primary mirror. The biggest monolithic primary mirrors are 8.4 m wide and equip the Large Binocular Telescope (LBT, Arizona, USA); such monolithic mirrors are as large as current technology permits. Segmented primary mirror technology allows the construction of wider mirrors and also provides significant advantages:
- Segments are easier to manufacture, transport, handle, install and maintain than a monolithic mirror.
- Segmented mirrors can be made thinner and thus lighter.
- The mechanisms behind both segmented and monolithic mirrors are used to correct the mirror shape under changing gravity orientation; such mechanisms are simpler and lighter behind a segmented construction.

Various large telescopes with segmented primary mirrors were reviewed and are compared in the following tables.


Table 3-3: Segmented Telescopes – General Information

Telescope | Architecture | Size | Status
Gran Telescopio Canarias (GranTeCan - GTC) | Ritchey-Chrétien | 10.4 m | Observation running since 2009
Hobby-Eberly Telescope (HET) | Tilted Arecibo design | 9.2 m (10 m) | Upgrade completed in 2015
Keck 1 & 2 telescopes | Ritchey-Chrétien | 2 × 10.0 m | Observation running since 1993 (for Keck 1)
Southern African Large Telescope (SALT) | Tilted Arecibo design | 10.0 m | Observation running since 2005
LAMOST | Meridian reflecting Schmidt telescope | 4.9 m; Ma 5.72 m × 4.4 m (reflecting Schmidt), Mb 6.67 m × 6.05 m (primary) | Observation running since 2009
Multi Mirror Telescope (MMT) - old configuration | 6 Cassegrain telescopes with beam combination | 4.5 m | From 1979 to 1998; segmented primary mirrors replaced by one in 1999
James Webb Space Telescope | Korsch | 6.5 m | Launch predicted in 2021
Giant Magellan Telescope | Aplanatic Gregorian | 25.4 m | First light planned in 2024
Thirty Meter Telescope (TMT) | Ritchey-Chrétien | 30.0 m | First light estimated in 2027
European Extremely Large Telescope (E-ELT) | Novel 5-mirror design | 39.3 m | First light planned in 2025


Table 3-4: Segmented Telescopes – Optical Parameters

Telescope | Focal length | Resolution | FoV | Observed wavelengths
Gran Telescopio Canarias (GranTeCan - GTC) | 169.9 m | 0.82 mm/arcsec; 0.4 arcsec @ 500 nm | 20 arcmin (Nasmyth focus) | 0.36-2.5 µm, 8-25 µm
Hobby-Eberly Telescope (HET) | 13.08 m | 5 arcsec/mm | 22 arcmin | 0.35-1.8 µm
Keck 1 & 2 telescopes | 17.5 m | 0.04 arcsec, 0.4 arcsec | 20 arcmin with f/15 secondary | 0.3-5 µm
Southern African Large Telescope (SALT) | | angular ≤ 0.6" | 8 arcmin | 0.32-1.7 µm
LAMOST | 20 m | Spectral resolution: 1 nm (@370 nm), 0.25 nm (@900 nm) | 5° | 0.37-0.9 µm
Multi Mirror Telescope (MMT) | 5.75 m | 0.5 arcsec with 6 mirrors | f/9 leads to 4 arcmin | 0.33-1 µm
James Webb Space Telescope | 131.4 m | ~0.1 arcsec | 9.7 arcmin (NIRCam), 74×113 arcsec (MIRI) | 0.6-28 µm
Giant Magellan Telescope | 202.7 m | 0.01 arcsec @ 1 µm | 20 arcmin diameter | 0.32-25 µm
Thirty Meter Telescope (TMT) | 450 m | 0.015 arcsec @ 2.2 µm | 40 arcmin (WFOS) | 0.31-28 µm
European Extremely Large Telescope (E-ELT) | 743.4 m | 0.005 arcsec | 10 arcmin | 0.37-14 µm

Table 3-5: Segmented Telescopes – Primary Mirror Parameters

Telescope | Shape | Surface | Number of segments | Segment diameter | Segment shape | Material
Gran Telescopio Canarias (GranTeCan - GTC) | hyperbolic | 73 m² | 36 | 1.87 m | hexagonal | Zerodur
Hobby-Eberly Telescope (HET) | spherical | 100 m² | 91 | 1 m | hexagonal | Zerodur
Keck 1 & 2 telescopes | hyperbolic | 76 m² | 36 | 1.8 m | hexagonal | Zerodur
Southern African Large Telescope (SALT) | spherical | 79 m² | 91 | 1 m | hexagonal | Sitall
LAMOST | spherical (plane for Ma) | 18.9 m² | 37 (Mb), 24 (Ma) | 1.1 m | hexagonal | Zerodur
Multi Mirror Telescope (MMT) | 6 parabolic mirrors | | 6 | 6 × 1.80 m | circular | borosilicate
James Webb Space Telescope | concave, aspherical | 25 m² | 18 | 1.32 m | hexagonal | beryllium coated with gold
Giant Magellan Telescope | concave paraboloid | 368 m² | 7 | 8.417 m | circular | E6 borosilicate glass
Thirty Meter Telescope (TMT) | hyperbolic | 655 m² | 492 | 1.40 m | hexagonal | Clearceram
European Extremely Large Telescope (E-ELT) | concave, aspherical | 978 m² | 798 | 1.4 m | hexagonal | Zerodur


Table 3-7: Segmented Telescopes – Adaptive Optics and Correction Systems

Telescope | M1 correction | M2 correction | Dynamic adaptive optics | Number of mirrors
Gran Telescopio Canarias (GranTeCan - GTC) | | | | 3
Hobby-Eberly Telescope (HET) | | | | 3 + 4-mirror wave field corrector (WFC)
Keck 1 & 2 telescopes | | | | 2
Southern African Large Telescope (SALT) | | | | x + 4 (spherical aberration corrector, SAC)
LAMOST | active optics | | |
Multi Mirror Telescope (MMT) | | deformable secondary mirror | |
James Webb Space Telescope | 7dof/tile, used for wavefront sensing and control process | 6dof, used for wavefront sensing and control process | Not needed: no atmosphere and minimal perturbation | 3 mirrors
Giant Magellan Telescope | | | |
Thirty Meter Telescope (TMT) | | | Multi-conjugate adaptive optics |
European Extremely Large Telescope (E-ELT) | | | | 5 (4.2 m, 3.8 m, 2.4 m)

Table 3-8: Segmented Telescopes – Accuracies

Telescope | Telescope wavefront | M1 tip-tilt | M1 piston | M1 centering | M1 clock rotation
Gran Telescopio Canarias (GranTeCan - GTC) | 30 nm @ 500 nm | 90 nm (?) | 90 nm (?) | |
Hobby-Eberly Telescope (HET) | | 1.651 mm (0.065''), 0.14 arcsec rms | 20 µm rms | ? |
Keck 1 & 2 telescopes | | ±0.02 arcsec | | |
Southern African Large Telescope (SALT) | | <0.07 arcsec | | |
LAMOST | | | | |
Multi Mirror Telescope (MMT) | ~15 nm (?) | | | |
James Webb Space Telescope | <100 nm rms for NIRCam after completion of the WFSC process | 7 nr (mech. tolerance) | 15 nm @ 600 nm (λ/40); 5 nm mechanical tolerance | 100 nm mechanical tolerance | 217 nr mechanical tolerance
Giant Magellan Telescope | | | | |
Thirty Meter Telescope (TMT) | | | | |
European Extremely Large Telescope (E-ELT) | | | | |

3.2.2 Review of High Accuracy Multi-axis Positioning Systems

In the past, CSEM developed various multi-axis positioning mechanisms for different applications, including telescope mirror positioning. These systems are described in the following subsections; a summary of their functions and performances is provided in Table 3-9.

Table 3-9: Reviewed Mechanism Functions and Performances

Mechanism | Function | Actuator and sensor | Performances | Mass | Environment
Naos Field Selector | Adaptive optics system: fast and precise tip-tilt mechanism; 3dof (1 fixed) | Voice-coil actuators; eddy-current position sensors; frictionless structure | Tilt amplitude: ±6°; stability: 0.42 arcsec rms; resolution: 0.62 arcsec; speed: 40 ms for 0.24° | 2 kg | Outdoor: -5°C to +45°C
VLTI M2 Hexapod | Secondary mirror positioning (with dynamic correction); 6dof | Voice-coil actuators with LVDT position sensors; frictionless structure | Ranges: piston ±1.2 mm, tip-tilt ±300 arcsec, centring ±0.7 mm. Accuracy: piston < 6 µm, tip-tilt < 20 arcsec, centring < 20 µm | 8 kg for the mechanism, 2.5 kg mirror | Outdoor: -15°C to +45°C
EMIR DTU | Translation correction without rotation cross-coupling; 6dof (3 fixed) | Screw and nut actuated by stepper motor; capacitive sensors | Ranges: piston ±2 mm, centring ±0.4 mm. Repeatability: piston < 5 µm, centring < 9 µm. Cross-coupling: tip-tilt < 200 µrad, clock negligible | 11 kg | Cryogenic: 80 K
SOFIA Hexapod | Secondary mirror positioning (with dynamic correction); 6dof | Motor and Rollvis screw actuators with integrated encoders | Ranges: piston ±5 mm, tip-tilt ±0.312°, centring ±0.7 mm. Increment & error: piston < 1 ± 0.5 µm, tip-tilt < 2 ± 1 arcsec, centring < 4 ± 2 µm | 37.5 kg with mirror and tilt-chopping mechanism | Airborne: -60°C to +70°C
SOFIA Tilt-Chopping Mechanism | Very fast tilt chopping plus tip-tilt correction; 2dof | Frictionless structure; voice-coil actuators; force sensing (piezo); capacitive position sensors | Tip-tilt correction accuracy: 0.22 arcsec; tip-tilt range: 0.625° | 37.5 kg with mirror and tilt-chopping mechanism | Airborne: -60°C to +70°C
GTC M2 Hexapod | Secondary mirror positioning (with dynamic correction) plus fast tip-tilt chopper; 6dof + 2dof | Motor and Rollvis screw actuators with integrated encoders; tip-tilt chopper: optical ruler position sensor, voice-coil actuator | Range: ±15 mm; overall accuracy: < 10 µm rms; incremental accuracy: < 0.15 µm rms over 0.2 mm stroke | 100 kg mirror | Outdoor: -15°C to +45°C

3.2.2.1 Naos Fast Tip Tilt Mirror

NAOS (Nasmyth Adaptive Optics System), with its CONICA camera (a cryogenic near-infrared camera designed by the Max Planck Institute in Heidelberg, Germany), was mounted on the Yepun 8-metre telescope of the ESO VLT facility.

Figure 3-13: Naos Field Selector

The mechanism design is based on CSEM's proven Flexure Structures Technology (FlexTec®). Three linear mobile-magnet actuators were mounted with an elastic linear guiding structure.


The mobile mirror is connected to the three actuators by three flexible steel rods of 0.2 mm diameter and through a central CuBe flexible membrane, allowing friction-free displacement. The mirror position is directly measured by three non-contact eddy current position sensors, with a resolution down to 20 nm over a total range of about 7 mm.

Figure 3-14 FlexTec® membrane and rod parts used on the NAOS field selectors

Figure 3-15: FlexTec® linear guiding parts and linear voice-coil used on the NAOS field selectors

Figure 3-16: Silicon Carbide (SiC) mirror of the NAOS field selectors, 90 mm diameter, lightweighted, equipped with rods (3x) and displacement sensor targets (3x).


Figure 3-17: Naos Field Selector Specifications

3.2.2.2 Hexapod for M2 mirror on VLT Auxiliary Telescopes

Figure 3-18: M2 mirror on Hexapod mounted on the VLT auxiliary telescope in Paranal.


Figure 3-19: M2 VLT auxiliary telescope Hexapod without mirror, ready for implementation

The hexapod is based on 6 linear actuators acting on flexible rods in a particular geometry. The actuators use frameless motors with resolver encoders acting on a Rollvis satellite roller screw. The linear displacement of each flexible pivot is measured using LVDT sensors.

Figure 3-20: M2 VLT auxiliary telescope Hexapod Specifications


Figure 3-21: M2 VLT auxiliary telescope Hexapod testing method

3.2.2.3 EMIR – DTU

This work was performed under contract with the Instituto de Astrofísica de Canarias as part of the development of the EMIR instrument to be installed at the GTC telescope. The translation mechanism specified for this function, called the DTU (Detector Translation Unit), has to provide three translation axes (X, Y, Z) while avoiding any significant tilt and rotation cross-coupling. This latter requirement led to the design of a 3-axis parallel mechanism based on kinematic principles similar to those of the Delta robotic manipulator. The mechanism operates in a cryogenic environment at 80 K.


Figure 3-22: EMIR - DTU concept


Figure 3-23: EMIR - DTU during qualification tests

Kinematic requirements / performance:
- Focus (z) range: ± 2 mm
- Focus resolution: ~1 µm
- Focus repeatability: ≤ 5 µm
- Focusing speed: > 1000 µm/s
- Center (x, y) range: ± 0.4 mm
- Center resolution: 3 µm (large displacements), 0.3 µm (displacements < 50 µm)
- Center repeatability: ≤ 9 µm (large displacements), ≤ 1 µm (displacements < 50 µm)
- Centering speed: > 400 µm/s
- Cross-coupling: < 200 µrad tilt over the entire XY range, negligible rotation

Other requirements:
- Envelope: Ø 450 x 450 x 230 mm
- Mass: ~11 kg

Figure 3-24: EMIR – DTU Specifications


3.2.2.4 SOFIA hexapod and tilt-chopping mechanism

SOFIA is an infrared telescope installed inside a Boeing 747 aircraft. CSEM was in charge of building the M2 mirror hexapod and tilt-chop mechanism.

Figure 3-25: SOFIA telescope

Figure 3-26: SOFIA hexapod and tilt-chopping mechanism concept


Figure 3-27: SOFIA Hexapod and Tilt-Chopping Mechanism

Figure 3-28: SOFIA Actuators

Figure 3-29: SOFIA Focus Center Mechanism Requirements


Figure 3-30: SOFIA Tilt-Chop Mechanism Requirements

The focusing mechanism is based on a hexapod with three horizontal actuators and three vertical actuators holding the tilt-chopping mechanism on six flexible rods. The whole tilt-chopping mechanism is based on friction-free flexible elements.


3.2.2.5 M2 Hexapod for the Gran Telescopio CANARIAS

The Gran Telescopio CANARIAS (GranTeCan, GTC) is currently the largest and one of the most advanced optical and infrared telescopes in the world. Its primary mirror consists of 36 individual hexagonal segments that together act as a single mirror; the light-collecting surface area of the GTC is equivalent to that of a telescope with a 10.4 m diameter single monolithic mirror. Thanks to its huge collecting area and advanced engineering, the GTC ranks amongst the best performing telescopes for astronomical research. CSEM, in collaboration with NTE/Sener (Spain), was in charge of developing and building the M2 mirror hexapod and tilt-chop mechanism.

Hexapod requirements:
- Range: ±15 mm
- Overall accuracy: < 10 µm RMS
- Resolution and incremental accuracy (over 0.2 mm stroke): < 0.15 µm RMS
- Speed: > 1 mm/s
- Axial stiffness: > 150 N/µm
- Operating temperature range: -15°C to +45°C

Figure 3-31: M2 Hexapod for GranTeCan

Hexapod linear actuator design:
- Each linear actuator is built around a Rollvis recirculating 1 mm pitch high precision planetary roller screw.
- A Kollmorgen DC motor is mounted on the actuator screw in a direct-drive configuration.
- A Heidenhain rotary encoder is mounted directly on the actuator shaft as well.
- A Heidenhain LIP 401 linear encoder is mounted in parallel to the actuator itself to provide absolute position feedback (evaluated with respect to its central zero reference).
- Each actuator embeds an ELECTROID electromagnetic brake directly coupled to the shaft.
- A pair of electrical limit switches is mounted in each actuator to define its working stroke.


3.2.3 Technologies for High Precision Position Measurements and Actuation

This section of the technology review is segmented according to Table 3-10. Though this representation includes some redundancy, it gives a clear overview of the different fields.

Table 3-10: Technology review segmentation

                 Generic | Final application | PULSAR, implemented in dSMT | PULSAR, for characterisation and demonstration
Sensors             1    |         3         |              6              |                       8
Actuators           2    |         4         |              7              |                       /
SMT metrology       /    |         5         |              /              |                       9

The numbers in the table have the following meaning:
1) Generic review of the major 1D position sensor families.
2) Generic review of the major actuator families.
3) Review of the sensors used in SMT (mainly edge sensors).
4) Review of the actuators used in SMT.
5) Review of the metrology techniques used to monitor the shape of primary mirrors in SMT, including the Zernike polynomial decomposition approach; commercial solutions for optics metrology are shown.
6) Sensors for implementation in the dSMT within the framework of PULSAR dPAMT; commercial 1D sensor solutions are presented.
7) Actuators for implementation in the dSMT within the framework of PULSAR dPAMT.
8) Sensors foreseen in the framework of PULSAR dPAMT to perform dSMT characterisation, including sensors available at CSEM for the project.
9) Metrology techniques foreseen in the framework of PULSAR dPAMT to perform demonstration of capabilities.

In addition, an acronyms table is given below; most of its terms relate to the metrology techniques for SMT, and it also contains generic terms from the astronomy and astrophysics fields.

Ref.: D4.1 H2020_PULSAR-TAS-D11.1b Technology 31/05/2019 Page 45 of 158 Review v2.6.docx

Acronyms table (acronym, signification, category, definition):

AO, Adaptive Optics (Optics): Adaptive optics is a technology used to improve the performance of optical systems by reducing the effect of incoming wavefront distortions by deforming a mirror in order to compensate for the distortion.
ASI, Aspheric Stitching Interferometer (Metrology technique): The ASI measures aspheres with as much as 1000 waves of departure from the best-fit sphere without the use of null lenses or computer-generated holograms, reducing the lead time for producing aspheres.
AURA, Association of Universities for Research in Astronomy (Astronomy): AURA is a consortium of universities and other institutions that operates astronomical observatories and telescopes.
CGH, Computer Generated Hologram (Metrology technique): Computer-generated holography is the method of digitally generating holographic interference patterns. A holographic image can be generated e.g. by digitally computing a holographic interference pattern and printing it onto a mask or film for subsequent illumination by a suitable coherent light source.
CoC, Center of Curvature (Geometry): In geometry, the center of curvature of a curve is found at a point that is at a distance from the curve equal to the radius of curvature, lying on the normal vector.
DAQ, Data Acquisition System (Hardware system): Data acquisition systems, abbreviated DAS or DAQ, typically convert analog waveforms into digital values for processing.
DM, Deformable Mirror (Hardware component): Deformable mirrors are mirrors whose surface can be deformed in order to achieve wavefront control and correction of optical aberrations. They are used in combination with wavefront sensors and real-time control systems in adaptive optics.
DMI, Distance Measuring Interferometer (Metrology technique): Displacement measuring interferometry for stage positioning, in its most rudimentary form, monitors the passage of fringes and calculates the change in position of an object.
EDU, Engineering Development Unit (Engineering): Functional prototype ready to be used in operational conditions.
E-ELT, European Extremely Large Telescope (Ground telescope): The Extremely Large Telescope is an astronomical observatory and the world's largest optical/near-infrared extremely large telescope, now under construction.
ELT, Extremely Large Telescope (Astronomy): An extremely large telescope is an astronomical observatory featuring an optical telescope with an aperture for its primary mirror from 20 metres up to 100 metres across.
ESP, Extra Solar Planets (Astronomy): An exoplanet or extrasolar planet is a planet outside the Solar System.
ESPI, Electronic Speckle Pattern Interferometry (Metrology technique): ESPI, also known as TV holography, is a technique which uses laser light, together with video detection, recording and processing, to visualise static and dynamic displacements of components with optically rough surfaces.
FFT, Fast Fourier Transform (Mathematics): Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa.
GMT, Giant Magellan Telescope (Ground telescope): The GMT is a ground-based extremely large telescope under construction, planned for completion in 2025. It will consist of seven 8.4 m (27.6 ft) diameter primary segments.
GSMT, Giant Segmented Mirror Telescope (Astronomy): The GSMT, a ground-based telescope with a mirror approximately 30 meters in diameter, will provide a major advance in ground-based astronomy over the world's largest optical telescopes.
HSI, High Speed Interferometer (Metrology technique): Interferometer capable of characterizing both static and dynamic characteristics of a mirror.
HST, Hubble Space Telescope (Space telescope): The HST is a space telescope that was launched into low Earth orbit in 1990 and remains in operation.
JWST, James Webb Space Telescope (Space telescope): The JWST (or "Webb") is a space telescope that will be the successor to the Hubble Space Telescope.
LSST, Large Synoptic Survey Telescope (Ground telescope): The LSST is a wide-field survey reflecting telescope with an 8.4-meter primary mirror, currently under construction, that will photograph the entire available sky every few nights.
MCAO, Multi Conjugate Adaptive Optics (Optics): The Multi-conjugate Adaptive optics Demonstrator (MAD) is an instrument that allowed the European Southern Observatory's Very Large Telescope to observe celestial objects with most of the atmosphere's blurring removed.
MMR, Mirror Master Reference (Geometry): Reference point located at the center of the primary mirror.
MRF, Magneto-Rheological Fluid (Materials): A magnetorheological fluid is a type of smart fluid in a carrier fluid, usually a type of oil. When subjected to a magnetic field, the fluid greatly increases its apparent viscosity, to the point of becoming a viscoelastic solid.
NCP, Non Common Path wavefront error (Metrology technique): NCP aberration correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel.
OPD, Optical Path Difference (Optics): A difference in optical path length between two paths is often called the optical path difference.
OPL, Optical Path Length (Optics): In optics, optical path length or optical distance is the product of the geometric length of the path followed by light through a given system and the index of refraction of the medium through which it propagates.
OTIS, Optical Telescope Element Integrated Science Instrument Module (Hardware system): The Optical Telescope Element (OTE) is the eye of the James Webb Space Telescope Observatory; it gathers the light coming from space and provides it to the science instruments.
PM, Primary Mirror (Hardware component): A primary mirror (or primary) is the principal light-gathering surface (the objective) of a reflecting telescope.
PMBSS, Primary Mirror Backplane Support Structure (Hardware component): The large structure that holds and supports the big hexagonal mirrors of a telescope.
PMSA, Primary Mirror Segment Assembly (Hardware system): Assembly process of the primary mirror segments of a large segmented mirror telescope.
PSD, Power Spectral Density (Mathematics): The power spectrum of a time series describes the distribution of power into the frequency components composing that signal.
PSF, Point Spread Function (Mathematics): The point spread function describes the response of an imaging system to a point source or point object. A more general term for the PSF is a system's impulse response, the PSF being the impulse response of a focused optical system.
PV, Peak to Valley (Mathematics): Peak to valley refers to the amplitude between the highest point of a signal and the lowest point.
QSS, Quasi Static Speckle (Metrology technique): Quasi-static speckle is a result of the temporal evolution of a speckle pattern, where variations in the scattering elements responsible for the formation of the interference pattern in the static situation produce the changes that are seen in the speckle pattern.
RMS, Root Mean Square (Mathematics): The root mean square is the square root of the average of the squares of a set of quantities or numbers.
RoC, Radius of Curvature (Geometry): In differential geometry, the radius of curvature, R, is the reciprocal of the curvature. For a curve, it equals the radius of the circular arc which best approximates the curve at that point.
SCOTS, Software Configurable Optical Test System (Metrology technique): SCOTS is based on the geometry of fringe reflection/phase measuring deflectometry for rapidly, robustly and accurately measuring large, highly aspherical shapes.
SHWS, Shack-Hartmann Wavefront Sensor (Metrology technique): A Shack-Hartmann (or Hartmann-Shack) wavefront sensor is an optical instrument used for characterizing an imaging system. It is a wavefront sensor commonly used in adaptive optics systems and consists of an array of lenses (called lenslets) of the same focal length.
SMOTS, Simultaneous Multi-segmented mirror Orientation Test System (Metrology technique): The presented SMOTS uses a conventional LCD monitor and a CMOS camera; the camera takes images of the screen in reflection from the mirror and calculates the phase shift to obtain the mirror orientation change.
SMTF, Spatially Mapped Transfer Function (Mathematics): Mapping of the reduced gain and phase lag at each pixel for a specific frequency.
SNR, Signal to Noise Ratio (Mathematics): Signal-to-noise ratio (abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise.
SPOTS, Slope-measuring Portable Optical Test System (Metrology technique): The SPOTS is a new, portable, high-resolution deflectometry device that achieves mid (20 to 1000 cyc/m) to high spatial frequency optical surface metrology with filtering and very little noise.
SSI, Sub-aperture Stitching Interferometry (Metrology technique): In 2004, QED Technologies introduced the Subaperture Stitching Interferometer (SSI®) to automatically stitch large-diameter and high numerical aperture spherical surfaces (including hemispheres).
STScI, Space Telescope Science Instrument (Hardware component): Set of science instruments embedded into a space telescope.
TF, Transfer Function (Mathematics): A transfer function is the ratio of the output of a system to the input of a system, in the Laplace domain, considering its initial conditions and equilibrium point to be zero.
TG, Twyman-Green interferometer (Metrology technique): A Twyman-Green interferometer is a variant of the Michelson interferometer principally used to test optical components.
TMT, Thirty Meter Telescope (Ground telescope): The TMT is a proposed astronomical observatory with an extremely large telescope that has become the source of controversy over its planned location on Mauna Kea on the island of Hawaii in the US state of Hawaii.
VON, Variable Optical Null (Optics): Removing astigmatism and coma with a variable optical null allows an aspheric stitching interferometer to measure surfaces with departures up to 1000 waves from the best-fit sphere.
WFIRST, Wide Field Infrared Survey Telescope (Space telescope): WFIRST is based on an existing 2.4 m wide field-of-view telescope and will carry two scientific instruments.
WFS, Wavefront Sensor (Optics): A wavefront sensor is a device for measuring the aberrations of an optical wavefront.


3.2.3.1 Generic Review of the Major 1D Position Sensors Families

The major types of linear position sensors are listed below:
- Resistive position transducers
- Capacitive position transducers
- Inductive position transducers
- LVDT position transducers
- Hall effect transducers
- Magnetoresistive transducers
- Magnetostrictive transducers
- Optical linear encoders
- Magnetic linear encoders
- Laser interferometers

Table 3-11 gives a qualitative comparison between these sensor types [1], while Table 3-12 indicates typical values for nanometric position sensors [2]. The plots of Figure 3-32 illustrate the relation between the displacement strokes of the various sensor types and their resolutions and maximum operating frequencies.

Table 3-11: Application suitability of various sensors

Table 3-12: Quantitative performances of various sensors


Figure 3-32: Relations between displacement, resolution and maximal frequency of various types of sensors [3].

3.2.3.2 Generic Review of the Major Actuators Families

In the same review paper [3], the major actuator families are identified and assessed; they are listed in Table 3-13. The plots of Figure 3-33, Figure 3-34 and Figure 3-35 illustrate the relations between the displacement strokes of the various actuator types and, respectively, their force, resolution and maximum operating frequency.


Table 3-13: Various types of actuators [3]
- Electrostatic: comb drive, scratch drive, parallel plate, inchworm, impact, distributed, repulsive force, curved electrode, S-shaped, electrostatic relay
- Piezoelectric: bimorph, expansion
- Thermal: bimorph, solid expansion, topology optimized, shape memory alloy, fluid expansion, state change, thermal relay
- Magnetic: electromagnetic, magnetostrictive, external field, magnetic relay

Figure 3-33: Relations between displacement and force for various types of actuators [3].


Figure 3-34: Relations between displacement stroke and displacement resolution for various types of actuators [3].

Figure 3-35: Relations between displacement and operating frequency for various types of actuators [3].


3.2.3.3 Review of the Sensors Used in SMT (mainly edge sensors)

In this section, examples of edge sensor implementation are given for the following three terrestrial telescopes: TMT, Keck and E-ELT; they are taken from [4]. The first two examples are capacitive (TMT: face-on; Keck: interleaved), while the third, the E-ELT, uses inductive sensors. The TMT sensors can output the height and tilt of the segments as well as the gap between them; the Keck sensors can only output height and tilt; the E-ELT edge sensors output the height and tilt of the segments as well as the gap and shear between them. Table 3-14 summarizes some properties of the edge sensors of the three SMT examples, while Figure 3-36 to Figure 3-39 illustrate the CAD, layout, spatial arrangement and numerical properties.

Table 3-14: Properties of 3 SMT edge sensors
- Telescope PM diameter: TMT 30 m; Keck 2 x 10 m; E-ELT 39 m
- Number of segments: TMT 492; Keck 36; E-ELT 798
- Number of edge sensors: TMT 2772; Keck 168; E-ELT 4524
- Sensor type: TMT capacitive; Keck capacitive; E-ELT inductive
- Configuration: TMT face-on; Keck interleaved; E-ELT face-on
- Height output: TMT yes; Keck yes; E-ELT yes
- Tilt output: TMT yes; Keck yes; E-ELT yes
- Gap output: TMT yes; Keck no; E-ELT yes
- Shear output: TMT no; Keck no; E-ELT yes
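To illustrate how height-type edge sensor readings relate to the relative motion of two neighbouring segments, the sketch below uses a simplified two-sensor-per-edge model (the sensor spacing d and the layout are assumptions for the example, not taken from any of the reviewed designs):

```python
def edge_piston_tilt(h1: float, h2: float, d: float):
    """Convert two edge-sensor height readings (m), taken a distance d (m)
    apart along a shared segment edge, into the relative piston (m) and the
    relative tilt about the edge direction (rad, small-angle approximation).

    Simplified two-sensor model for illustration only.
    """
    piston = 0.5 * (h1 + h2)   # mean height step across the edge
    tilt = (h2 - h1) / d       # height gradient along the edge
    return piston, tilt

# Example: 30 nm and 50 nm height steps measured 1.2 m apart
piston, tilt = edge_piston_tilt(30e-9, 50e-9, 1.2)
print(f"piston = {piston*1e9:.1f} nm, tilt = {tilt*1e9:.1f} nrad")
```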

Figure 3-36: TMT Edge Sensors CAD and spatial arrangement


Figure 3-37: TMT Edge Sensors layout

Figure 3-38: Keck Edge Sensor layout and performances


Figure 3-39: E-ELT Edge Sensor

3.2.3.4 Review of Actuators Used in SMT

An example of actuator developed by PI for the E-ELT is shown here (Figure 3-40 and Figure 3-41). This actuator has two stages, one for a long stroke driven by a spindle and harmonic drive gearheads and one for precision driven by a piezo. These two stages allow to dynamically reject perturbation exerted on the E-ELT structure while ensuring precision with a positioning error kept under 50 nm during the motion. The position is measured via one high resolution optical sensor. Its excellent amplitude and phase stability allows an interpolation factor of 2000 and results in a resolution of 0.125nm.
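The two quoted figures are mutually consistent with an optical sensor signal period of 250 nm (an inference from the numbers above, not stated explicitly in the source):

```latex
\text{resolution} = \frac{\text{signal period}}{\text{interpolation factor}}
                  = \frac{250~\text{nm}}{2000} = 0.125~\text{nm}
```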

Figure 3-40: CAD view of an E-ELT segment and a detailed CAD view of the actuator.

Figure 3-41 : Principle schema of the two-stage PI actuator.


3.2.3.5 Review of the Metrology Techniques Used to Monitor Wave Front of Primary Mirrors in SMT

In the context of optical metrology, Zernike polynomials are commonly used. They are a sequence of polynomials that are orthogonal on the unit disk and are defined as follows:
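In standard notation (the general definition of the Zernike polynomials, given here since the source presents it only graphically), the even and odd polynomials of radial degree n and azimuthal degree m are:

```latex
Z_n^{m}(\rho,\varphi)  = R_n^{m}(\rho)\cos(m\varphi), \qquad
Z_n^{-m}(\rho,\varphi) = R_n^{m}(\rho)\sin(m\varphi),
\quad \text{with} \quad
R_n^{m}(\rho) = \sum_{k=0}^{(n-m)/2}
  \frac{(-1)^{k}(n-k)!}{k!\left(\frac{n+m}{2}-k\right)!\left(\frac{n-m}{2}-k\right)!}\,\rho^{\,n-2k}
```

for n - m even; R_n^m vanishes identically for n - m odd.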

Figure 3-42: The first 21 Zernike polynomials, ordered vertically by radial degree and horizontally by azimuthal degree.


Table 3-15: The first few Zernike modes


The main optical metrology techniques are listed below without technical detail; for more information, refer to [7]:
- Deflectometry (SCOTS)
- Shack-Hartmann
- Lateral shear interferometry
- Radial shear interferometry
- High-density detector arrays
- Sub-Nyquist interferometry
- Long-wavelength interferometry
- Two-wavelength holography
- Two-wavelength interferometry
- Tilted wave interferometry
- Stitching interferograms
- Scanning interferometry

Most of them are based on interferometry techniques and are able to quantify the wavefront of an aspheric optic [7]. Aspheric optics are used to reduce the spherical aberrations found in optical systems that use elements with spherical surfaces. However, some errors still remain in the measured wavefront compared to a perfect wavefront. This wavefront error can be reduced to an amplitude lower than the wavelength of the light gathered by the optical system; it follows that the phasing of light waves contains much information [8]. Interferometry applied over a surface yields an interferogram, from which a phased (wrapped) image is obtained (Figure 3-43, left). It can be unwrapped by a dedicated algorithm to obtain a 3D reconstruction (Figure 3-43, right). Note that unwrapping is needed when the wavefront error is larger than one wavelength.
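As a minimal numerical sketch of the Zernike decomposition approach mentioned above (an illustration assuming a wavefront map sampled on a grid; this is not any metrology vendor's implementation), the first few Zernike modes can be fitted to a measured wavefront by linear least squares:

```python
import numpy as np

def zernike_basis(rho, phi):
    """First 6 Zernike modes (piston, tip, tilt, defocus, two astigmatisms)
    evaluated at polar coordinates on the unit disk."""
    return np.stack([
        np.ones_like(rho),                    # Z0: piston
        2 * rho * np.cos(phi),                # Z1: tip
        2 * rho * np.sin(phi),                # Z2: tilt
        np.sqrt(3) * (2 * rho**2 - 1),        # Z3: defocus
        np.sqrt(6) * rho**2 * np.sin(2*phi),  # Z4: oblique astigmatism
        np.sqrt(6) * rho**2 * np.cos(2*phi),  # Z5: vertical astigmatism
    ], axis=-1)

# Synthetic "measured" wavefront on a 64x64 grid over the unit disk:
# 50 nm of defocus-like shape plus 5 nm rms of measurement noise.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
rho, phi = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0
wavefront = 50e-9 * (2 * rho**2 - 1) + 5e-9 * np.random.randn(*rho.shape)

# Least-squares fit of the mode coefficients on the in-disk pixels only
A = zernike_basis(rho[mask], phi[mask])
coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
residual = wavefront[mask] - A @ coeffs
print("coefficients (nm):", np.round(coeffs * 1e9, 2))
print("residual RMS (nm):", 1e9 * residual.std())
```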

Figure 3-43: Unwrapping of the phased image of an eye surface (4D Technology). Left: wrapped image. Right: 3D reconstructed image after unwrapping.

Among the many techniques listed above, some have experienced huge development in recent decades, driven by the realization of large telescopes and the resulting need for fast and precise optical metrology techniques. Dynamic methods and stitching methods are thus proposed by 4D Technology and QED (MRF), respectively.

A relevant implementation of wavefront sensing is made on the JWST. In this example, interferometry is performed on the ground in order to validate the geometry and control strategy of the SMT adjustment, while in operation the sequence shown in Figure 3-44 is applied. The alignment consists of two phases: first, the image of a selected star is observed independently on each segment of the mirror, and the segments are then moved until all the images are superimposed; second, the PSF is reduced in order to obtain a clear focus of the observed star, and the Strehl ratio is thereby also maximized. The NIR camera is used to perform this alignment sequence.


Figure 3-44: JWST Wavefront Sensing and Control Process.

Commercial solutions for optical system metrology exist. Some of them are shown in the following Figures:

Figure 3-45: Imagine-Optic Metrology


Figure 3-46: Optocraft Metrology

Figure 3-47: AkaOptics Metrology

Figure 3-48: QED MRF Metrology


Figure 3-49: 4D Technology Metrology

Figure 3-50: Zygo Metrology

Figure 3-51: DifroTec Metrology


3.2.3.6 Sensors for Implementation in the dSMT Within the Framework of PULSAR dPAMT

In the framework of PULSAR, commercial edge sensors could be used in the positioning mechanism. Coarse specifications are listed below:
- Resolution: 200 nm
- Stroke: 5-10 mm
- Frequency: 2 kHz
- Number of parts: 5 to 10
- ROM price: low to mid-price range
- Sensor type: capacitive, inductive, Hall, eddy current

Linear Gages https://shop.mitutoyo.ch/web/mitutoyo/fr_CH/mitutoyo/01.04.081/Linear%20Gage%20standard %C2%A0LGF/$catalogue/mitutoyoData/PR/542-171/index.xhtml

Figure 3-52: Linear Gage

Hall sensors: AMS NSE-5310 Linear Sensor, https://ams.com/nse-5310#tab/features

Figure 3-53: Hall Sensor


Capacitive sensors: MTI Instruments ASP-5000M-PSR, https://www.mtiinstruments.com

Figure 3-54: Capacitive Sensors

Confocal light sensor: Keyence CL-P030

Figure 3-55: Confocal Light Sensor

Finally, Figure 3-56 illustrates a possible implementation of the edge sensors on the dPAMT. It would consist of a total of 5 edge sensors, i.e. 3 per tile, counting one common edge between the two active tiles. On the central tile, the sensors are located as far as possible from each other, whereas on the border tile the sensors are necessarily located closer to each other.


Figure 3-56: A Possible Implementation of the Edge Sensors on dPAMT.

3.2.3.7 Actuators for Implementation in the dSMT Within the Framework of PULSAR dPAMT

For the targeted performance, most of the actuators presented in 3.2.3.2 could be used. However, to fit within the framework of PULSAR, actuator technologies are limited to existing low cost commercial solutions. The presently favoured solution is to use rotary motors in combination with their angular position encoders. One can imagine a double-loop control design, as sketched below: a high frequency loop (about 1 kHz) controls at the motor level using the encoders, while a second, higher-level control loop acts on the edge position sensors and runs at a lower frequency (about 10 Hz). A linear encoder at the screw level can also be considered, though this would require more integration effort than rotary encoders at the motor level.
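A minimal sketch of such a cascaded scheme follows. It is illustrative only: the gains, loop rates and proportional-only control law are assumptions, and read_edge_sensor, read_encoder and set_motor_current are hypothetical placeholders for the real hardware interfaces.

```python
import time

KP_OUTER, KP_INNER = 0.5, 8.0   # illustrative proportional gains
OUTER_PERIOD = 0.1              # ~10 Hz edge-sensor loop
INNER_PERIOD = 0.001            # ~1 kHz motor/encoder loop

def control_loop(read_edge_sensor, read_encoder, set_motor_current,
                 edge_setpoint=0.0):
    """Cascaded position control: a slow outer loop on the edge sensor
    trims the setpoint of a fast inner loop on the motor encoder."""
    encoder_setpoint = read_encoder()
    next_outer = time.monotonic()
    while True:
        now = time.monotonic()
        if now >= next_outer:
            # Outer loop (~10 Hz): edge-sensor error retargets the motor
            edge_error = edge_setpoint - read_edge_sensor()
            encoder_setpoint += KP_OUTER * edge_error
            next_outer += OUTER_PERIOD
        # Inner loop (~1 kHz): track the current encoder setpoint
        inner_error = encoder_setpoint - read_encoder()
        set_motor_current(KP_INNER * inner_error)
        time.sleep(INNER_PERIOD)
```

The design choice reflected here is that the slow outer loop never drives the motor directly; it only re-targets the fast inner loop, so a noisy or delayed edge-sensor reading cannot destabilise the high-bandwidth motor control.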

3.2.3.8 Sensors Foreseen in the Framework of PULSAR dPAMT to Perform dSMT Characterization

At the tile level, the sensors used need a good resolution (< 1 µm) in order to characterize the actuated tiles themselves. The available metrology solutions are shown in Table 3-16; those suited to characterizing the attitude of moving tiles are highlighted in colour. Three solutions are possible: the first (green) consists in using one SIOS interferometer plus one autocollimator; the second (yellow) consists in using one triple interferometer; the third (salmon pink) consists in using three Micro-Epsilon sensors. Note that each of the two moving tiles will be measured independently.


Table 3-16: Metrology instruments available at CSEM
- 1x interferometer, SIOS MI 5000 (inventory 201819): 1 axis, range 5000 mm, resolution 0.1 nm, 25 kHz, no calibration
- 1x triple-beam plane-mirror interferometer, SIOS SP 2000 TR (inventory 201217): 3 x 1 axes (tip, tilt, piston), range 2000 mm, resolution 20 pm, no calibration
- 1x interferometer, Attocube FPS 3010 (inventory 201838): 3 x 1 axes, range 400 mm, resolution 1 pm, 10 MHz, no calibration
- 1x digital autocollimator, Newport LDS1000 (inventory 200256): 2 axes, range ±2000 µrad, resolution 0.1 µrad, 2 kHz, no calibration
- 3x optical sensors, Micro-Epsilon ILD 2200-20 (inventory 201193): 1 axis, range 20 mm, resolution 0.3 µm, 10 kHz, no calibration
- 3x optical sensors, Micro-Epsilon ILD 2200-30 (inventory 201798): 1 axis, range 20 mm, resolution 0.3 µm, 10 kHz, no calibration
- 4x capacitive sensors, Micro-Epsilon capaNCDT (inventory 202710): 1 axis, range 0.2 mm, resolution 0.15 nm, 8 kHz, no calibration
- 1x 3D scanner, Nikon MCAx20+: 3 axes (x, y, z), range 2000 mm, resolution 14 µm, calibrated

3.2.3.9 Metrology of SMT Used in the Framework of PULSAR dPAMT

At the demo level, one of the instruments available at CSEM and listed in Table 3-16 could be used to characterize the alignment of the tiles. The instrument suited for characterization at the demo level is the Nikon MCAx20+ 3D scanner.

3.2.3.10 Conclusions and Recommendations About Sensors and Actuators Technological Review

The technology review was segmented as much as possible in order to clearly distinguish the different technological fields involved in this broad topic. The main reviews cover edge sensors, SMT actuators, optical metrology techniques, Zernike polynomials, commercial optical metrology devices, commercial edge sensors, and the metrology solutions available at CSEM. Basic specifications are provided for the edge sensors to be implemented in the dPAMT, and a few different commercial solutions are proposed; two preferred solutions are the Mitutoyo linear gage and the Keyence confocal light sensor. Note that no edge sensors are used on JWST, whereas they are legion in ground-based segmented telescopes; one reason is that the perturbations acting on the structure differ in the two cases.

3.2.3.11 References

[1] D. S. Nyce, Linear Position Sensors: Theory and Application, 2004.
[2] A. J. Fleming, "A review of nanometer resolution position sensors: Operation and performance", Sensors and Actuators A: Physical, 190:106-126, February 2013.
[3] D. J. Bell, T. J. Lu, N. Fleck, S. M. Spearing, "MEMS actuators and sensors: observations on their performance and selection for purpose", Journal of Micromechanics and Microengineering, 15(7):S153, June 2005.
[4] C. Shelton, L. C. Roberts, Jr., "How to calibrate edge sensors on segmented mirror telescopes", Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA 91109, USA, 2012.
[5] "Ultra-High Precision Positioning Actuators to Align Mirror Segments in the ELT Telescope", www.pi-usa.usb


[6] en.wikipedia.org/wiki/Zernike_polynomials
[7] E. P. Goodwin, J. C. Wyant, Field Guide to Interferometric Optical Testing, 2006.
[8] F. Zernike, "How I Discovered Phase Contrast", March 1955.

3.2.4 SMT Review Conclusions

Throughout the review of the various existing segmented telescopes, various pieces of information were collected that are to be considered for the SMT requirements definition:

- Earth-based telescopes have two major needs for optical correction systems: gravity and atmospheric perturbation. Gravity pulls on the telescope parts, especially on the heavy primary mirror, thus deforming the overall telescope. When the telescope position changes, the deformation of the telescope under gravity also changes. This needs to be corrected by adapting the mirrors' shape and position (wavefront correction systems); mechanisms for that purpose need to be very accurate, but they do not need to be fast. Large telescopes are then limited by their capacity to correct atmospheric perturbation. To do so, they use deformable mirrors with many degrees of freedom to compensate for the rapidly changing atmospheric perturbation; this system needs to be both fast and accurate.

- The James Webb Space Telescope, when at L2, will suffer from no atmospheric perturbation and very little gravitational pull. However, some wavefront correction capability is still needed, for two reasons: the telescope is mounted and tested under gravity, and in space the absence of gravity will change its shape, so the telescope needs to be tuned for operation without gravity; the thermal condition of the telescope may also affect its shape, so such wavefront correction will be performed regularly. However, all the perturbations seen by the JWST are small enough or slow enough that the telescope remains stable during an observation sequence, so no dynamic wavefront correction is needed.

- The accuracy requirements of the SMT positioning system need to be defined along the 4 parameters described in Table 3-18. An order of magnitude for each error is proposed based on the reviewed documentation. Such levels of precision are extremely challenging and should only be considered as long-term goals. These errors are mainly related to the primary mirror's maximal acceptable wavefront error, which is part of the overall telescope wavefront error budget.

Table 3-18: SMT accuracy requirements
- Maximal piston error: error of translation positioning along the Z axis (normal to the mirror plane); expected order of magnitude: 10 nm
- Maximal tip-tilt error: error of angular positioning around the X and Y axes (in the plane of the mirror); expected order of magnitude: 10 nrad
- Maximal centring error: error of translation positioning along the X and Y axes (in the plane of the mirror); expected order of magnitude: 100 nm
- Maximal clock error: error of angular positioning around the Z axis (normal to the mirror plane); expected order of magnitude: 100 nrad

In the review of existing high accuracy multi-axis positioning systems, a wide variety of mechanisms was covered, proposing a wide range of technologies for actuation and position sensing. All these systems are, however, more than 10x less accurate than the targeted order of magnitude. The hexapod configuration nevertheless seems well suited to these methods. Note also that the development of any of these systems largely exceeds the scope of the dSMT development proposed for the PULSAR project.


In the review of technologies for high precision position measurement and actuation, a great variety of technologies was covered. However, for the application within the framework of PULSAR, the available technologies are limited to:
- commercial low cost sensors to be implemented in the dSMT
- commercial low cost actuators to be implemented in the dSMT
- existing equipment at CSEM for characterization of the dSMT
- existing equipment at CSEM or at DLR for demonstration of dSMT capabilities

3.3 Exteroceptive Sensors: I3DS Integrated 3D Sensors

I3DS is the output of the previous Operational Grant n°4 (OG4). I3DS is a suite of sensors designed and developed to aid autonomous robotics missions by providing the means to interface with and use data products from the sensor suite. The I3DS project aimed at designing and developing an inspector sensor suite with integrated pre-processing and data concentration functions. The sensors were chosen to enable pose estimation and motion tracking for two operational scenarios within the scope of OG4: a planetary use case (rover exploration) and an orbital use case (autonomous rendezvous/docking with a satellite).

The outputs from OG4 (I3DS) included: ● An Instrument Control Unit (ICU)

● A suite of sensors:

  - Inertial sensors:
    · Star Tracker (STR)
    · Inertial Measurement Unit (IMU)
  - Relative sensors:
    · Radar
    · LIght Detection And Ranging (LIDAR)
    · Time-Of-Flight (TOF) camera
    · Stereo camera
    · High-Resolution camera
    · Thermal Infra-Red (TIR) camera
    · Force/Torque sensor and tactile sensors
  - Illumination devices:
    · Wide-angle torch illumination device
    · Pattern Projector

● A software framework, based on ASN.1 encoding of data products.

The OG4 product tree is shown in the following Figure.


Figure 3-57: I3DS Product Breakdown Structure

● The star tracker, which finds the orientation and location of the vessel using the stars.
● The time-of-flight (TOF) camera, which captures depth images to generate 3D point clouds.
● The stereo camera, which delivers two synchronized image streams that can be processed into a disparity map and 3D point clouds.
● The high-resolution camera, which delivers a monochrome image stream.
● The thermal infra-red (TIR) camera, which delivers a thermal image stream.
● The force/torque and contact/tactile sensors, which deliver the contact information with the target used in the final docking phase of a rendezvous.
● The light detection and ranging (LIDAR), which delivers distance measurements used to generate 3D point clouds.
● The radar, which is used for ranging measurements.
● The inertial measurement unit (IMU), which keeps track of the system's rotational and spatial acceleration and feeds data into a Kalman filter to keep track of the system's state.

For illumination items, the projected pattern illumination can be used with the high-resolution camera and a pre-processing algorithm to create 3D point clouds, while the wide-angle torch provides general illumination when needed for both the high-resolution and stereo cameras. The ICU contains the networking equipment for connecting devices, and a high-performance MPSoC and FPGA for control of the devices, processing of data streams and interfacing with the OBC. The software components of the system are: pre-processing of imaging streams, the sensor interfaces for controlling and accessing the sensors, the system interface for receiving commands and sending data to the OBC, and the real-time operating system.


[Table: Name of the sensor / Sensor image]
- Pattern Projector
- Wide Angle Illumination
- Star Tracker
- Stereo Camera
- High Resolution Camera 1
- Thermal InfraRed Camera
- High Resolution Camera 2
- Time of Flight Camera
- High Frequency Radar
- Lidar
- Force Torque & Tactile Sensors
- Inertial Measurement Unit
- Instrument Control Unit

3.3.1 Review and Assessment of OG4 Components

This section is split into two main sub-sections - one focused on the ICU and the other detailing the OG4 Sensor Suite.

3.3.2 OG4 ICU

The OG4 ICU was used as the central point through which all of the OG4 sensor suite connected, performing a "data concentrator" function. This allowed a single, standard interface between the OBC/EGSE and all sensors via the ICU (Gigabit Ethernet was chosen for the I3DS demonstration).

3.3.2.1 Hardware Architecture

The OG4 ICU comprised two CompactPCI cards, vertically mounted inside a 6U CompactPCI rack: the Zynq ICU and the SSDP (Scalable Sensor Data Processor).


Figure 3-58: OG4 ICU in its rackmount configuration. The Zynq ICU board occupies four slots on the far right of the rack. The SSDP is housed in the left of the rack, with 3 SpW ports showing on the front panel. A mains-powered standard ATX power supply sits in the bottom left of the rack. A fan tray is fitted at the bottom of the rack (notice the perforated air intake).

The SSDP is a custom DSP designed by Thales Alenia Space in Spain, based on a LEON3 rad tolerant processor combined with two Xentium DSP cores, providing up to 900 MIPS of processing power. The SSDP is designed for spaceflight, being based on a rad tolerant processor and including industry standard interfaces (i.e. SpaceWire and CAN). However, it lacks the processing power and sheer number of physical interfaces required to provide the level of data handling/concentration that the ICU requires. The LEON3 processor provides 100 MIPS of performance, while the remaining performance figure comes from the DSP cores, which require additional expertise to fully utilise. For these reasons, it is not recommended to re-use the SSDP in this call.

The Zynq ICU is based on a Xilinx Zynq UltraScale+ (ZU3EG), a System-on-Chip (SoC) with the following components linked on the same chip on a high speed AMBA bus: ● 4x ARM A53 Application Processors. ● 2x ARM R5 Realtime Processors. ● FPGA fabric.

The FPGA fabric allows the ICU to implement a number of standard and bespoke sensor interfaces, including: ● 6x SpaceWire Ports ● 3x Gigabit Ethernet ● 1x RS422 IMU power & data interface ● 1x PIAP Force/Torque sensor interface – data & power (separate connectors) ● 1x PIAP Contact/Tactile sensor interface – data & power (separate connectors) ● 1x RS422/LVDS PPS input ● 1x RS232 for LIDAR ● 13x External Thermistor interfaces ● 13x External Power Enable outputs ● 5x Camera Trigger outputs ● 1x Pattern Projector interface ● 1x Wide Angle Illumination interface ● 1x CAN interface


● 4x USB 2.0/3.0 ports ● 2x USB 3.0 only ports ● Debug interfaces (JTAG, UART, DisplayPort)

These interfaces occupy 16 HP (horizontal pitch units), which is 4 slots of a CompactPCI rack. This width can be reduced if fewer interfaces are required on the front panel. As a 6U CompactPCI card, the PCB is 233 x 160 mm; this sets the lower bounds for the other two dimensions of the enclosure that houses it. The OG4 hardware architecture is summarised in the next figure. It is designed as a platform to allow for:
● flexible interface support (easy to develop new interfaces), and by extension:
● a mixture of space interfaces and commercial interfaces connected to the same unit
● easy migration of commercial sensor interfaces to flight-ready interfaces
● flexible processing capability: the ability to add more processing cards
● support for a credible route to flight hardware implementations:
○ future SoC offerings (EU FPGA)
○ discrete co-processing (FPGA + CPU)

The FPGA fabric was a key enabler, allowing any number of bespoke (and standard) interfaces to be easily integrated during development of OG4. It also offers the benefit of handling low-level tasks in hardware, freeing up CPU resources and providing an optimised solution. For example, the camera triggers in OG4 are generated in the FPGA using TAS-UK IP, which can be configured with different pulse widths, delays and periods, and connected to internal/external triggers as required, e.g. to trigger a shutter a precise delay after an external flash trigger signal is received, with much finer granularity than software.


Figure 3-59: OG4 ICU hardware architecture. Note that the SSDP is not being considered for use in future work, but this does illustrate the concept of being able to have multiple processing cards in a single ICU, depending on processing requirements.

The rack used in OG4 is an 8-slot CompactPCI rack with a large ATX power supply. Fully populated, it weighs approximately 12 kg, with dimensions 325 x 462 x 340 mm including handles and feet. Much of this weight is in the CPCI rack and the ATX power supply: the power supply can deliver 350+ W across 4 voltage rails, but the ICU consumes < 15 W in typical operation and uses only the 12 V rail. The use of a CPCI rack was largely for mechanical convenience and was guided by the original OG4 architecture, which included a COTS SpaceWire router board in 6U CPCI form factor; the architecture evolved over time to remove the SpaceWire router, but the CPCI form factor was maintained. Prime candidates for reducing size and mass are thus a smaller enclosure with less empty space (and thus less metalwork) and a different form factor power supply, which may also have a single rail and a much lower power rating, enabling a fanless design. The combination of ARM processors in the ICU provides flexibility for rapid development (using the A53 processors), with an option to prototype implementations that are suitable for flight (using the R5 processors).


While the processors in the I3DS configuration are relatively powerful (more so than existing flight processors, running at 1.2 GHz/500 MHz for the A53/R5), it is possible to clock them at different frequencies to emulate the performance of a flight processor. However, it became clear in OG4 that, even at higher processor clock speeds, hardware acceleration using the FPGA fabric will be a key enabler for real-time applications with intensive data processing requirements (e.g. stereo vision). All of the ICU software is built around a bespoke software framework developed by SINTEF as part of OG4. The key components of this framework are the use of ASN.1 encoding for all messages and the use of ZeroMQ (LGPL licence) for efficient messaging between components. Each producer (e.g. a sensor) and consumer (e.g. a pre-processing algorithm) has an asynchronous communication channel with a publisher-subscriber model. The EGSE uses the same framework to subscribe to producers that provide data products from the sensors, which are present as unique nodes on the network. The ICU software primarily runs on the A53 processors, with one of the R5 cores also dedicated to servicing the IMU.
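As an illustration of this publisher-subscriber pattern, the minimal pyzmq sketch below sends plain bytes; the OG4 framework itself encodes the payloads with ASN.1, and the endpoint and topic names here are invented for the example:

```python
import zmq

ENDPOINT = "tcp://127.0.0.1:5556"   # invented endpoint for the example
TOPIC = b"camera/hr"                # invented topic name

def producer():
    """Sensor-side node: publishes data products on a topic."""
    sock = zmq.Context.instance().socket(zmq.PUB)
    sock.bind(ENDPOINT)
    # Placeholder payload; OG4 would publish ASN.1-encoded messages here.
    frame = b"\x00" * 1024
    sock.send_multipart([TOPIC, frame])

def consumer():
    """Consumer-side node (e.g. a pre-processing algorithm or the EGSE):
    subscribes to the topic and receives data products asynchronously."""
    sock = zmq.Context.instance().socket(zmq.SUB)
    sock.connect(ENDPOINT)
    sock.setsockopt(zmq.SUBSCRIBE, TOPIC)
    # In a real system the subscriber connects before data starts flowing
    # (ZeroMQ PUB sockets drop messages sent before a SUB has joined).
    topic, payload = sock.recv_multipart()
    print(f"received {len(payload)} bytes on {topic.decode()}")
```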

3.3.2.2 FPGA IP

The FPGA can be configured with a large number of IP cores. In OG4, a number of these were used directly from the Xilinx catalogue of free IP cores, which is built into the Vivado design tool. This included IP cores for UARTs, Ethernet MAC interface conversion (GMII to RGMII) and various AXI bus infrastructure IP. These are easily configured in a block diagram design within the Vivado GUI and can be added/removed as needed in a relatively fast development cycle. In addition to the standard Xilinx IP, the following TAS-UK IP was generated as blocks for the Vivado IP catalogue: ● Camera Trigger IP ● ADC128 Interface IP ● SpaceWire IP

The Camera Trigger IP is used to generate accurate trigger signals, with software-configurable delays, trigger routing, pulse widths and periods. For example, it is possible to output a trigger of a given period and mark-space ratio, a specified delay after an external trigger is received: an illumination flash may output a trigger signal, which can be routed to the FPGA and used to output an exposure trigger e.g. 10 µs later, held active for a given exposure period. Alternatively, the configurable internal counters in the IP can be used to set the trigger. The IP is configured by a set of registers that are exposed directly on the memory-mapped AXI bus and are thus accessible directly by software.

The ADC128 Interface IP manages a SPI interface, implemented in VHDL, with a state machine that reads all 8 channels of a 12-bit Texas Instruments ADC128 device. This is perhaps the most common ADC used for general HKTM and low-bandwidth instrumentation in flight designs. In OG4, it is used for the thermistor interface on the ICU, but it is also used by PIAP in their Force/Torque and Contact/Tactile sensors; the IP was thus adapted to manage both ADCs of the PIAP sensors as part of OG4. The IP exposes the ADC data in a set of registers, which can be read directly by software. In most implementations, additional filtering is applied to the ADC data in the FPGA, but this was not implemented in OG4; it is relatively trivial to add this feature if required.

The SpaceWire IP is a newly developed IP that TAS-UK undertook as self-funded R&D in parallel with OG4. In addition to being included in OG4, it has been implemented in the Phase A ESA project SMILE and is currently being implemented on the Mars Sample Fetch Rover Co-Processor. The IP implements a codec that easily achieves 200 Mbps (the maximum data rate in the SpW standard) and can double this to 400 Mbps on the UltraScale+. Testing on high-end space-grade FPGAs (RTG4, V5QV…) is yet to confirm whether this could be achieved on Rad-Hard By Design (RHBD) devices. The IP has an AXI4 (full) interface, which allows it to write directly into memory; however, it is not currently RMAP-enabled. The DMA descriptors are configured via an AXI4-Lite interface from software. A baremetal driver is provided, which runs on the R5. RMAP and a Linux driver are both future development items on the IP roadmap, if sufficient project need (and funding) arises.
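As an illustration of the register-based configuration described above, the sketch below programs a trigger delay, pulse width and period through a memory-mapped AXI register block from Linux user space. The base address and register layout are hypothetical placeholders – the actual Camera Trigger IP register map is TAS-UK proprietary and not reproduced here.

```cpp
// Hypothetical example of configuring register-based FPGA IP via /dev/mem.
// Base address and register offsets are assumptions, not the real IP map.
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    constexpr off_t  kTriggerBase = 0xA0000000;  // assumed AXI address
    constexpr size_t kMapSize     = 0x1000;

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) return 1;

    void* map = mmap(nullptr, kMapSize, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, kTriggerBase);
    if (map == MAP_FAILED) { close(fd); return 1; }
    volatile uint32_t* regs = static_cast<volatile uint32_t*>(map);

    // Hypothetical layout: [0] control, [1] delay, [2] pulse width,
    // [3] period (all in microseconds).
    regs[1] = 10;       // start exposure 10 us after the flash trigger
    regs[2] = 5000;     // hold the trigger active for a 5 ms exposure
    regs[3] = 100000;   // 10 Hz period when internally triggered
    regs[0] = 0x1;      // enable the trigger generator

    munmap(map, kMapSize);
    close(fd);
    return 0;
}
```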

3.3.2.3 The Zynq UltraScale+ Architecture

This section provides some background information on the Xilinx Zynq UltraScale+ architecture, which is the key technology in the OG4 ICU. It is useful to understand the nomenclature of the Xilinx ecosystem. The most fundamental terms are “Processing System” and “Programmable Logic”. Programmable Logic (PL) refers to the FPGA fabric – the logic gates, block RAM, many of the I/O and the generic high-speed (Gigabit) transceivers. The Processing System (PS) contains all of the hard IP in the SoC – the ARM processors, supervisory circuits, graphics and I/O peripherals such as USB, UART, SPI, CAN and many more. The Processing System also contains a number of high-speed (Gigabit) transceivers that are multiplexed between specific peripheral controllers (DisplayPort, USB 3.0, SATA and PCIe). The PS also contains a hard DDR3/4 memory controller, which is used in the DDR3 configuration in the OG4 ICU.

The figure below shows a high-level view of the key components of the Zynq US+ SoC. Note that there are a number of models within the Zynq US+ family, and this figure represents the EG (Embedded Graphics) family. Within this sub-category, smaller devices lack some of the high-speed connectivity. The ZU3EG used in I3DS does not have Gigabit transceivers in its PL – it only has GTRs in the PS.

Figure 3-60: Zynq UltraScale+ architecture, reproduced from the Xilinx UltraScale+ Product Selection Guide.

The PS and PL are connected by a high-speed on-chip bus, the AXI4 bus (part of the AMBA bus specifications developed by ARM). There are high- and low-performance (32 and 64 bit) buses between the PL and PS, with both master and slave interfaces to the PS being made available. This means that the FPGA designer can write high-performance IP that performs DMA operations as an AXI master, or expose configuration registers or block RAM as an AXI slave. Almost every component of the SoC exists on the AMBA bus, with everything being memory-mapped into the same address space. This creates a powerful environment with a lot of flexibility for software and hardware to interact – for example, software drivers can easily read or configure registers in FPGA IP simply by reading from/writing to a specific memory location. The A53 processors each contain an MMU, while the R5 processors implement a simpler Memory Protection Unit. Additionally, ARM TrustZone can be employed to create tightly controlled software that runs in protected memory regions for sensitive or critical code. A Global Interrupt Controller is shared between the PL and all of the ARM processors in the PS, allowing interrupts to be generated in both directions.

3.3.2.4 Performance

There are some key performance-limiting factors in the OG4 ICU that are important to understand. It was found during the integration phase of OG4 that the image pre-processing algorithms were not performing well enough on the ICU hardware (i.e. the ARM processor). The algorithms written by Cranfield University targeted a typical x86 platform and made heavy use of OpenCV. While it makes sense to use a well-established library for common image processing operators during initial development of a processing pipeline, this does not always lead to an optimal implementation for a given platform. A combination of time constraints and a lack of available software engineers with embedded software experience within Cranfield University led to the majority of pre-processing routines being run on the EGSE laptop.

Two image processing algorithms were implemented on the ARM processor of the ICU – lens distortion correction and CLAHE (Contrast Limited Adaptive Histogram Equalisation). Running these two algorithms on a 2048 x 2048 pixel image from the high-resolution camera resulted in an output frame rate of 2 fps; the original target was a 10 Hz update from the cameras.

The Zynq ICU hardware was benchmarked to give an indication of memory throughput compared to a modern PC. Benchmarking a series of memcpy operations on different block sizes showed a fairly consistent throughput of around 1300 MiB/s, which is around 25% of the performance of the EGSE laptop (a Core i7 with DDR3 memory). Note that options for flight memories are severely restricted, and DDR2 is currently the fastest memory technology for which flight parts are readily available.

A large performance hit was observed when using the ASN1CC compiler. Although ASN1CC was found to be an early-stage ASN.1 compiler still requiring optimisation effort, it was prescribed early in the project as it is an ESA-sponsored product. However, during the close-down of OG4 it was made clear that the use of a more performant ASN.1 compiler would be accepted by the PSA, with the expectation that the ASN1CC compiler would be optimised over time. To give an indication of the overheads imposed by the ASN.1 encoding/decoding, the figure and table below show encode and decode times for various sensors. Note that the dense point cloud of the LIDAR achieves little more than a 1 Hz update rate. Further investigation into the ASN.1 compiler revealed that the encode/decode procedures used bit shifts to operate on the binary data bit-by-bit; this becomes prohibitively slow on large datasets. To overcome this issue, the ASN.1 encoding was adapted to encode only the metadata associated with a payload, while the payload itself is sent as a raw binary payload. By extracting e.g. image data from the ASN.1 packetisation, the system was able to achieve the throughput it needed (a 10 Hz update rate).
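A back-of-envelope comparison of these figures illustrates the scale of the overhead. At the measured ~1300 MiB/s memcpy throughput, simply copying the 8192 KiB high-resolution frame costs only a few milliseconds, whereas the bit-by-bit ASN.1 path limits the 5908 KiB LIDAR cloud to roughly 1 Hz:

$$t_{\mathrm{memcpy}} \approx \frac{8192\ \mathrm{KiB}}{1300\ \mathrm{MiB/s}} \approx 6\ \mathrm{ms}, \qquad r_{\mathrm{ASN.1}} \approx \frac{5908\ \mathrm{KiB}}{1\ \mathrm{s}} \approx 5.8\ \mathrm{MiB/s}$$

In other words, the encode path ran more than two orders of magnitude slower than raw memory copies, which is what motivated the metadata-only encoding workaround.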


Figure 3-61: ASN.1 coding performance

Table 3-19: ASN.1 coding performance.
Test | Description | Sample size (KiB)
tir_mono | 640 x 480, 2 bytes per pixel, mono frame | 600
hr_mono | 2048 x 2048, 2 bytes per pixel, mono frame | 8192
stereo | 1920 x 1080, 2 bytes per pixel, stereo frame | 8100
lidar | LIDAR 200K points | 5908
radar | Radar 400K points | 4004
analog | Analog 1000 points | 9.78
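The following sketch illustrates the workaround in code form: only a small metadata header passes through the (slow) ASN.1 encoder, while the bulk payload is appended as a raw second message frame. The Header type and asn1_encode stand-in are hypothetical; the real system uses the ASN.1-generated types of the OG4 framework.

```cpp
// Sketch of the metadata-only encoding workaround: encode the small
// header, send the bulk payload raw as a second ZeroMQ frame.
#include <zmq.hpp>
#include <cstdint>
#include <cstring>
#include <vector>

struct Header {                 // hypothetical stand-in for the ASN.1 type
    uint32_t width, height, bytes_per_pixel;
    uint64_t timestamp_us;
};

// Placeholder for the generated ASN.1 encoder: encoding ~20 bytes is
// cheap even with a slow bit-oriented encoder.
static std::vector<uint8_t> asn1_encode(const Header& h) {
    std::vector<uint8_t> out(sizeof(Header));
    std::memcpy(out.data(), &h, sizeof(Header));
    return out;
}

void publish_frame(zmq::socket_t& pub, const Header& h,
                   const std::vector<uint8_t>& pixels) {
    const auto meta = asn1_encode(h);
    pub.send(zmq::buffer(meta), zmq::send_flags::sndmore);
    pub.send(zmq::buffer(pixels), zmq::send_flags::none); // raw, no re-encode
}
```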

While most of the sensors were evaluated in terms of performance (resolution, accuracy, fitness for purpose…), the LIDAR was not formally evaluated, due to priority being placed on other integration tasks after schedule slippage. The performance of this sensor is thus known mainly from the datasheet specifications, although the limited empirical testing that was done did not highlight any major issues.

3.3.2.5 Limitations

Aside from performance considerations, there are some other software limitations to be aware of when considering the use of the OG4 ICU. The one major limitation is that the drivers for the Time-of-Flight camera are not currently available for the ARM64 architecture, so the ToF camera was not integrated into the ICU software; all ToF camera data/testing was performed under the direct control of the EGSE. There is no indication as to when ARM64 drivers will be available from the manufacturer. Two licensing issues exist for the ICU board support package, which limit how it may be re-distributed to partners. Most notably, the cameras provided by Cosine require the use of the Pleora SDK to grab camera images. This software requires a paid license; otherwise, the image frames that are returned have a watermark superimposed on them to indicate that a free version of the SDK is in use.


A second, minor licensing issue exists with the use of the Basler Pylon SDK, which is required to grab frames from Basler cameras. The only limitation here is that the binary may not be re-distributed, which means that the user must download the binary directly from Basler and add it to their build of the ICU software. To simplify this process, a script is provided to create the correct build directory structure from the user-provided Basler archive.

3.3.3 OG4 – Sensors

The OG4 sensor suite includes a large number of sensors. A summary of each of these sensors is provided in the following sections.

3.3.3.1 Cameras

3.3.3.1.1 Cosine Acquisition Board

Because it is common to all of the Cosine cameras, the Acquisition Board is discussed briefly in this dedicated section; it applies equally to the High-Resolution, Stereo and TIR cameras provided by Cosine. The Acquisition Board is an FPGA board designed by Cosine with a number of interfaces, supporting connections to various cameras in addition to an optional Single Board Computer (a Qseven module) which allows further processing and the provision of streaming interfaces such as Gigabit Ethernet. The Acquisition Board used for I3DS (Cosine EP12) is a breadboard design with no radiation hardening. It has dimensions of 95.89 x 90.17 mm and was housed in its own enclosure (not provided by Cosine) as part of the I3DS demonstrator setup. A Flight Model Acquisition Board (Cosine EP9) also exists. This version has been qualified for the LM-2D launcher (static load, vibration and shock), has undergone irradiation testing campaigns (230 MeV at a fluence of 10¹¹ particles/cm² and a flux of 10⁸ particles/cm²/s) and has completed a full environmental Proto-Flight Model test campaign. The Flight Model launched in August 2017 onboard the ESA-Danish CubeSat mission GOMX-4B.

Figure 3-62: Cosine Acquisition Board.

3.3.3.1.2 High-Resolution Cameras


Although they are lumped under one category, there were actually two High Resolution Cameras used in I3DS. One was provided by Cosine, while another was a COTS model from Basler. Both cameras have a Gigabit Ethernet interface and use GigE Vision for control and acquisition and both are based on the same sensor, making them largely interchangeable.

3.3.3.1.2.1 Cosine High Resolution Camera

The Cosine High Resolution Camera is based on a CMOSIS CMV4000 sensor – a scientific-quality CMOS image sensor with 12-bit output, 2048 x 2048 pixel resolution, global shutter and a 5.5 µm pixel pitch. The Camera Head is equipped with a fixed focal length COTS Zeiss Interlock Compact 2.8/21 lens. The Camera Head contains a PCB with an FPGA to handle the sensor interfacing. This then connects to the Cosine Acquisition Board, which provides standard interfaces to the outside world – in this case, Gigabit Ethernet is used.

Table 3-20: Cosine High Resolution Camera Lens Specifications.
Parameter | Description
Focal length [mm] | 21
Field of view [deg] | 30.73 x 30.73
Instantaneous Field of View [arcsec] | 54
F# | 2.8 - 22
Min working distance [m] | 0.25
GSD @ 4 m [mm] | 1
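These values are mutually consistent: the IFOV follows directly from the pixel pitch and focal length, and the GSD at 4 m follows from the IFOV (a sanity check, not an additional specification):

$$\mathrm{IFOV} = \frac{p}{f} = \frac{5.5\ \mu\mathrm{m}}{21\ \mathrm{mm}} \approx 262\ \mu\mathrm{rad} \approx 54'' , \qquad \mathrm{GSD}(4\ \mathrm{m}) = 4\ \mathrm{m} \times \mathrm{IFOV} \approx 1.0\ \mathrm{mm}$$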

Figure 3-63: Cosine High Resolution Camera (Note: Acquisition Board not shown).

3.3.3.1.2.2 Basler High Resolution Camera

The Basler ace GigE camera was also used in I3DS (Basler acA2040-25gmNIR). This model has similar characteristics to the Cosine offering, being based on the same CMOSIS CMV4000 sensor with a 12 bit, 2048 x 2048 pixel resolution, global shutter and a 5.5 µm pixel pitch. Notably, the Basler camera does not require an external Acquisition Board to provide a Gigabit Ethernet connection, and so it is quite considerably smaller in terms of its overall footprint. The Basler camera requires the use of the Basler Pylon SDK (C++) for GigE Vision.

Figure 3-64: Basler ace High Resolution Camera.


3.3.3.1.3 Stereo Camera

The Stereo Camera used in OG4 was again provided by Cosine and uses their Acquisition Board as the interface to two co-planar camera heads. The cameras have an interocular distance of 150 mm. Mounted on the alignment board, with lenses, harnessing and an acquisition board, the total mass is < 1.6 kg. The arrangement shown in Figure 0-4 occupies a space of 270 x 140 x 52 mm; this does not include the single Acquisition Board necessary to interface with both cameras. The camera heads are again based on the CMOSIS CMV4000 (12 bpp, 2048 x 2048 pixels, global shutter, 5.5 µm pitch) but are coupled with Kowa LM12JC optics, with the specifications shown in the table below. In operation, the resolution of the stereo image pair for OG4 was 1920 x 1080 pixels.

Table 3-21: Stereo Camera optics specifications
Parameter | Description
Focal length [mm] | 12
Field of view [deg] | 53.7 x 53.7
Instantaneous Field of View [arcsec] | 94.5
F# | 1.4 - 16
Min working distance [m] | 0.117
GSD @ 4 m [mm] | 2
Resolution centre [line/mm] | 100
Resolution corner [line/mm] | 60
Interocular distance [mm] | 150
Vergence [m] | 4
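For reference, the expected depth resolution at the 4 m vergence distance can be estimated with the standard stereo triangulation relation, assuming one pixel (5.5 µm) of disparity error (an illustrative assumption, not an OG4 measurement):

$$\Delta Z \approx \frac{Z^{2}\,\Delta d}{f\,B} = \frac{(4\ \mathrm{m})^{2} \times 5.5\ \mu\mathrm{m}}{12\ \mathrm{mm} \times 150\ \mathrm{mm}} \approx 49\ \mathrm{mm}$$

That is, roughly 5 cm of depth uncertainty per pixel of disparity error at 4 m; sub-pixel matching improves on this proportionally.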

Figure 3-65: Cosine Stereo Camera

3.3.3.1.4 TIR Camera

The OG4 TIR camera was provided by Cosine. It is based on a Xenics XTM-640-Analog module, a thermal imaging module based on an uncooled amorphous Silicon microbolometer array. The camera has the following characteristics, and is again connected over Gigabit Ethernet via the Cosine Acquisition Board.

Table 3-22: Cosine TIR Camera Specifications


Parameter | Specification
Array type | a-Si microbolometer
Resolution | 640 x 480
Pixel pitch | 17 µm
Spectral range | 8 µm – 14 µm
NETD | < 75 mK @ 30 °C
Shutter | Rolling
Raw image data | 0.62 MB
Frame rate | Tunable, 50 FPS
NETD (typical) | 50 mK (max < 75 mK) @ 30 °C with F/1 lens
Mass | 100 g

Figure 3-66: Cosine TIR Camera

3.3.3.1.5 ToF Camera

The OG4 Time-of-Flight camera is a COTS Basler device – the Basler ToF640-20gm, based on a Panasonic MN34902BL sensor. A summary of the camera specifications is reproduced below; the datasheet should be consulted for further detail.

IMPORTANT: Basler does not provide ARM/Linux drivers for the Basler ToF camera. This means that the ToF camera cannot be directly interfaced to the ICU using Basler's own software; a Windows x86 PC platform will be required if this ToF camera is used with the Basler software. However, third-party commercial libraries do claim support for the Basler ToF camera. This was not explored in OG4 and so carries additional risk. Third-party software support can be provided by several vendors whose software supports GenICam (Generic Interface for Cameras). These are commercial packages which come with additional costs attached, and are typically part of large image processing libraries/tools. One such software tool is provided by MVTec as part of their HALCON product. The HALCON tool and associated camera interfaces are available for 64-bit ARM. While it is a different architecture (ARMv7 versus ARMv8), the Zynq 7000 is also listed as a device on which HALCON has been tested “extensively” by the developer. This gives additional confidence that this is an appropriate choice of software.


Table 3-23: Basler ToF Camera Specifications
Specification | Basler ToF Camera
Resolution (H x V pixels) | 640 x 480
Sensor Type | Panasonic MN34902BL
Optical Size | 1/4"
Lens | 3.6 mm
Field of View (H x V) | 57° x 43°
Max. Frame Rate (full resolution, Standard processing mode) | 20 fps
Mono/Color | Mono
Wavelength | 850 nm, ±30 nm
Non-ambiguity Range | 0–13.320 m (default channel)
Absolute Accuracy | ±1 cm
Repeatability (1σ) | 8 mm
Drift with Temperature | 0.7 mm/K
Image Components | Range map, Intensity image, Confidence map
Dimensions | 142 x 77 x 62 mm
Weight | ~450 g
Enclosure Rating | IP30

3.3.3.2 LIDAR

IMPORTANT: Drivers for the Beamagine LIDAR do not exist for ARM, so the LIDAR cannot be directly interfaced to the ICU; an x86 PC platform will be required if this LIDAR is used. The drivers support Windows and have also been adapted for Linux (32 and 64 bit). The OG4 LIDAR is a development unit of a Beamagine LIDAR, customised for OG4. The LIDAR is a relatively large and heavy unit, at 190 x 170 x 130 mm and 2.5 kg. It outputs a point cloud (range and intensity) over Gigabit Ethernet (UDP). Power consumption is 30 W, operating from a 12 V supply.

The datasheet specifications are as follows:

Table 3-24: LIDAR Performance Specifications.
Parameter | Value
Field-of-view | 35 x 35°
Image resolution | 600 x 360 px
Frame rate | ~3 Hz
Point rate | 600 Kpx/s
Angular resolution (x-y) | 0.06 – 0.1°
Angular sampling accuracy | < 0.01°
Range resolution | ±35 mm (over 14.5 m) => 0.40% relative error; ±47 mm (over 14.5 m) => 0.70% relative std
Number of returns | 4

Figure 3-67: OG4 LIDAR sensor.

3.3.3.3 Star Tracker

The T1 STR Star Tracker was developed by Terma for OG4. The development resulted in a new European Star Tracker with a SpaceWire interface, delivered as an “e-box”: an Optical Head (OH) and an Electronic Unit (EU). The EU contains an embedded processor that calculates and outputs quaternion data directly, thus relieving the ICU/OBC from further pre-processing. The e-box contains a space-grade processor (GR712) – coupled with the SpaceWire interface, this makes it one of the most developed sensors in terms of flight-ready technologies.

However, there are few options for testing the Star Tracker and thus it was not used in any of the OG4 scenarios. Terma are developing a physical test harness that emulates a star field, but little is known about this as of the conclusion of OG4 and it is considered a future development. Additionally, there is currently no way to simulate star field imagery and feed this into the star tracker directly.


Figure 3-68: Terma T1 Star Tracker Optical Head.

3.3.3.4 Radar

The radar was designed by Hertz, who based the hardware around a flight processor and a SpaceWire data interface. The radar has a diameter of 0.5 m and weighs approximately 4 kg, so it is relatively large and heavy for the demonstration purposes of the project.

Figure 3-69: OG4 Hertz radar.

3.3.3.5 IMU

The OG4 IMU was provided by TAS-UK. The unit provided is a COTS equivalent of the TAS-UK IMU, of which an FM already exists; an FM unit was shipped in early 2018 for integration on the ExoMars rover. The COTS unit supplied in OG4 was a Silicon Sensing DMU-30. Before shipment, TAS-UK applied the same calibration routines as for a Flight Model.


Figure 3-70: OG4 IMU – Silicon Sensing DMU-30

In OG4, the IMU was interfaced to the ICU using a cable assembly carrying power and RS422 data from a DB9 on the ICU front panel to the 15-way Micro-D on the IMU. The IMU data is provided at a rate of 200 Hz. This is a relatively fast update rate to accommodate on Linux alongside all of the other application software running on the ICU's A53 processors. The IMU data is therefore read and processed on one of the ICU's R5 cores, and data is provided to the A53 at a reduced rate after filtering.
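A minimal sketch of this kind of rate reduction is shown below: blocks of raw 200 Hz samples are averaged on the R5 and one filtered sample is forwarded to the A53. The decimation factor and the simple block average are illustrative assumptions; the actual OG4 filter is not documented here.

```cpp
// Illustrative block-averaging decimator for the R5-side IMU servicing.
#include <cstddef>

struct ImuSample { float gyro[3]; float accel[3]; };

class Decimator {
public:
    explicit Decimator(std::size_t factor) : factor_(factor) {}

    // Feed one raw 200 Hz sample; returns true when 'out' holds a new
    // decimated sample (at 200/factor Hz) ready to send to the A53.
    bool push(const ImuSample& in, ImuSample& out) {
        for (int i = 0; i < 3; ++i) {
            acc_.gyro[i]  += in.gyro[i];
            acc_.accel[i] += in.accel[i];
        }
        if (++count_ < factor_) return false;
        for (int i = 0; i < 3; ++i) {
            out.gyro[i]  = acc_.gyro[i]  / static_cast<float>(factor_);
            out.accel[i] = acc_.accel[i] / static_cast<float>(factor_);
        }
        acc_ = ImuSample{};
        count_ = 0;
        return true;
    }

private:
    std::size_t factor_;
    std::size_t count_ = 0;
    ImuSample acc_{};
};
```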

A summary of the IMU performance is reproduced below. For full details, consult the DMU-30 datasheet.

Table 3-25: IMU Performance Specifications
Parameter | Min | Typ | Max | Notes
Dynamic Range (°/s) | -490 | - | +490 |
Scale Factor Error (ppm) | -500 | ±250 | +500 | Up to ±200°/s
Scale Factor Error Over Life (ppm) | -700 | ±500 | +700 |
Bias (°/hr) | -20 | ±15 | +20 |
Bias Instability (°/h) | - | < 0.1 | 0.2 | As measured using the Allan Variance method
Random Walk (°/√h) | - | < 0.02 | 0.04 | As measured using the Allan Variance method
Bias Repeatability (°/h) | - | 20 | 100 |

After delivery of the calibrated IMU, testing at project partner SINTEF's premises in Norway (at a latitude of 63° 26' 48'' N) showed an hourly drift (X, Y, Z) of [-2.780702, -6.767613, -15.487639] °/h.
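Note that a static gyro necessarily senses Earth's rotation (≈15.04 °/h) in addition to any sensor bias, so these figures should not be read as pure error. The magnitude of the quoted vector is

$$\lVert\boldsymbol{\omega}\rVert = \sqrt{2.78^{2} + 6.77^{2} + 15.49^{2}} \approx 17.1\ °/\mathrm{h},$$

which is dominated by the Earth rate; the residual is compatible with the ±20 °/h bias specification above.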

3.3.3.6 Contact/Tactile and Force/Torque Sensor

The OG4 Force/Torque and Tactile sensor was developed and provided by PIAP Space. The mechanical design of the sensor was adapted for the OG4 orbital demonstration with a lengthy end effector, but the next figure shows the original, shorter design. Both Force/Torque and Contact/Tactile sensors are co-located in the same mechanical head. There are three Contact sensors in the head, with force readings for each returned on the C/T sensor data interface. The Force/Torque sensor is a 6 axis sensor, returning three force and three torque measurements on its data interface. The sensors also have temperature monitoring available on one of the spare ADC channels (the ADC on each sensor is an 8 channel device).


Figure 3-71: PIAP F/T and C/T Sensor

The sensors operate from a 12 V supply, drawing a nominal 3.1 W for the F/T sensor and 2.3 W for the C/T sensor. The measurement range and noise characteristics of the sensors are shown in the following table:

Table 3-26: PIAP Sensor Measurement Range.
Sensor | Channels | Range | Time Response (at 63%) | Random Noise (3 sigma) | Maximal Bias Error | Scale Factor (error drift)
F/T Sensor Unit | Fx, Fy, Fz | -150 N / +150 N | 0.175 ms | 8.8 µV/√Hz | 0.755 | 0.001·ΔT
F/T Sensor Unit | Mx, My, Mz | -10 / +10 Nm | 0.175 ms | 8.8 µV/√Hz | 0.185 | 0.001·ΔT
Tactile Sensor Unit | F1, F2, F3 | 0 – 20 N | 0.175 ms | 8.8 µV/√Hz | 0.245 | 0.001·ΔT

The F/T and C/T sensors each have a 12V power connector (M12 connector type) and a separate data connector (9-way Micro-D). The data interface is a SPI interface via an LVDS transceiver, which connects directly to the ADC in the corresponding sensor head. The ADC used is the ADC128D102. The OG4 ICU contains TAS-UK firmware IP to interface with this particular ADC. The IP reads the ADC values at a predetermined rate (1 kHz, adjustable) and makes them available in registers, which can be directly accessed through a memory-mapped software interface. No filtering/decimation of data is performed in the IP at this point in time, so only the latest raw sample is returned. This can be adapted with relatively little effort from TAS-UK, if filtering is required. The OG4 sensor is housed in a very long probe tip due to the mechanics of the orbital test facility used in OG4.
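Until such filtering is added to the IP, equivalent smoothing can be applied in software on the raw register values. The sketch below applies a single-pole low-pass (exponential moving average) over the 8 channels; the smoothing constant and the register pointer are illustrative assumptions.

```cpp
// Software-side smoothing of the raw, memory-mapped ADC128 channel
// registers (single-pole IIR); alpha is an assumed tuning value.
#include <array>
#include <cstddef>
#include <cstdint>

class AdcFilter {
public:
    explicit AdcFilter(float alpha) : alpha_(alpha) {}

    // 'raw' points at the 8 memory-mapped 12-bit channel registers,
    // refreshed by the IP at its 1 kHz polling rate.
    void update(const volatile uint32_t* raw) {
        for (std::size_t ch = 0; ch < kChannels; ++ch) {
            const float sample = static_cast<float>(raw[ch] & 0x0FFFu);
            state_[ch] += alpha_ * (sample - state_[ch]);
        }
    }

    float channel(std::size_t ch) const { return state_[ch]; }

private:
    static constexpr std::size_t kChannels = 8;
    float alpha_;
    std::array<float, kChannels> state_{};
};
```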


3.3.3.7 Illumination Devices: Pattern Projector and Wide Angle Torch

Structured-light pattern projection is provided by SINTEF. It must be coupled with the high-resolution cameras in order to retrieve the depth of the scene. When properly calibrated, a priori knowledge of the projected pattern and of the camera/projector geometry allows 3D measurements through triangulation. The technique is known as 3D reconstruction using structured light.
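The underlying relation is the generic pinhole triangulation formula (a textbook result, not specific to the SINTEF unit): with camera focal length f, camera–projector baseline B and observed disparity d of a decoded pattern feature, the depth is

$$Z = \frac{f\,B}{d}.$$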

Figure 3-72: I3DS Structured Light Sensor

The pattern projector emits in the infrared (860 nm) at high intensity. A sequence of patterns following Gray coding with phase shifting can be used; the phase-shifting method increases the resolution of the reconstruction. An overview of the pattern images can be seen in the image below.

Figure 3-73: Gray coding pattern

The dimensions of the projector are as follows.


Figure 3-74: Structured Light Sensor Dimension

Performance of the pattern projector combined with the high-resolution camera could not be assessed properly during testing: the projected patterns were not sufficiently visible, due to the intensity of the illumination spot, the wide local illumination and the lack of a dedicated optical filter during the experiments. Good results were nevertheless obtained by SINTEF during the sensor validation tests.

The wide-angle illuminator provided by SINTEF is used to ensure close-to-optimal lighting conditions for the different vision tasks. The illumination wavelength ranges from 400 nm to 700 nm. A trigger signal is used to synchronise the illumination device with a camera, through an RS232 serial communication link.

Figure 3-75: I3DS Wide Angle Illumination

3.3.3.8 Sensors Performances Summary

A summary of the sensor performances is provided in the following table:


Table 3-27: I3DS Sensors Performance
Name | Manufacturer | Data Type | Data Unit | Average Error (bias) | Random Noise (3 sigma) | Mass | Input Voltage | Power Consumption | Interface with the ICU
STR | TERMA (T1 model) | Attitude | deg | X/Y: 4.5 arcsec, Z: 27 arcsec | 1 arcsec per 10 degC | 0.5 kg | 5.0 V | 0.75 W | SpaceWire
TOF | COTS (Basler) | Point Cloud (640 x 480 pts) | meter | +/- 9 mm (over 10 m) | +/- 12 mm (over 10 m) | 0.45 kg | 24 V | 30 W | Gigabit Ethernet
STEREO CAM | COSINE | 2x Greyscale image | - | +/- 236 mm (over 14.5 m) => 3.0% relative error | +/- 80 mm (over 14.5 m) => 1.6% relative std | 1.5 kg | 12 V | 18 W | Gigabit Ethernet
TIR CAMERA | COSINE | Grayscale image | Gray level | +/- 2 degC | - | 0.6 kg | 12 V | 18 W | Gigabit Ethernet
HIGH RES CAMERA | COTS (Basler) | Grayscale image => Aruco Markers | meter; deg | 8 mm / 107 mm; 0.8 deg / 2.1 deg | 4.3 mm / 133 mm; 0.52 deg / 2.4 deg | 0.6 kg | 12 V | 3 W | Gigabit Ethernet
HIGH RES CAMERA => Pattern projector | COTS (Basler) | Point Cloud | meter | 5 mm @ 2.4 m; 0.9 mm @ 1.3 m (proj.) * (1) | - | < 5 kg (proj.) | 24 V (proj.) | 100-150 W (proj.) | D-sub (proj.)
FORCE TORQUE SENSOR | PIAP | Force/Torque | N; Nm | < 0.1 % | < 0.1 % | 0.5 kg | +18 to +36 V | 336 mW | SpaceWire / Analog
TACTILE/CONTACT SENSOR | PIAP | Force | N | < 0.1 % | < 0.1 % | 0.01 kg | +18 to +36 V | 336 mW | SpaceWire / Analog
HF RADAR | HERTZ | range; angle | meter; deg | - | 0.01 m @ < 100 m; 0.1; 1; 5; 10 depending on distance | app. 4 kg | 18-36 V | app. 60 W | SpaceWire
IMU | COTS (SiliconSensing) | Angular Rate; Acceleration | deg/h; m/s² | 12 deg/h; 13 mg | 0.52 deg/h; 0.8 mg | 0.3 kg | 12 V | 4 W | RS422
LIDAR | BEAMAGINE | Point Cloud | meter | +/- 35 mm (over 14.5 m) => 0.40% relative error | +/- 47 mm (over 14.5 m) => 0.70% relative std | 2 kg | 12 V | 60 W | Ethernet

* (1) : the performance assessment was not finalised due to lack of time


3.3.4 Conclusions

The PULSAR scenarios focus on the use of cameras for visual servoing in order to assemble the tiles autonomously; in addition, 3D measurements are needed to obtain a reconstruction of the scene.

The OG4 components selected for PULSAR are:
- The ICU:
  o Upgrades are foreseen as part of the other OGs of the 2018 SRC calls;
  o A smaller version is foreseen; TAS-UK is performing a trade-off for the housing;
  o FPGA acceleration is also foreseen: depending on the algorithm chosen for pre-processing, it will be implemented for PULSAR.
- The HR camera:
  o To perform visual servoing at short range;
  o The Basler HR camera is proposed specifically for the demonstration as it does not require an acquisition board, making accommodation at the end of the robotic arm easier.
- The pattern projector:
  o It serves two purposes: first as an illumination device for the scene, and second, coupled with the HR camera, it can provide 3D measurements.
- The stereo camera, as a back-up:
  o It can be used on the platform side to monitor the assembled mirror.

Table 3-28: I3DS trade-off table for PULSAR
I3DS element | Chosen for PULSAR | Justification
ICU | YES | Must-have to interface with the platform and provide the pre-processing
Cosine HR camera | NO | Requires an extra acquisition board
Basler HR camera | YES | Easier interface, also with the ICU
Stereo camera | YES, as back-up | Can be used for perception functions
TIR camera | NO | Not needed for the assembly
ToF camera | NO | Usually a longer-range sensor
LIDAR | NO | Usually a longer-range sensor
Star Tracker | NO | Inertial sensor not needed for the assembly
Radar | NO | Long-range sensor
IMU | NO | Inertial sensor not needed for the assembly
F/T and Contact | NO | The tiles' mechanical assembly is ensured by the docking interface (SIROM or HOTDOCK)
Pattern Projector | YES | Serves as both an illumination device and, with the HR camera, a 3D sensor
Wide angle torch | NO | Covered by the pattern projector illumination

In the interest of providing better value (i.e. focusing on algorithms and proving that the system works as a whole: data fusion, navigation functions, etc.), it is recommended that COTS cameras be used wherever possible. The Basler HR camera is an excellent drop-in replacement for the Cosine offering. The Basler ToF camera was not validated as compatible with the ICU during OG4, and so it carries a risk, plus the additional cost of third-party software to interface with it.


Replacement COTS cameras for the stereo camera and LIDAR (if downselected for use) should be considered. A great deal of time was spent on the extrinsic calibration of the Stereo Camera in OG4. COTS stereo cameras such as the Zed camera from Stereolabs provide a factory-calibrated camera with a standard USB 3.0 connection that can be supported by the ICU directly. Software support is also provided for ARM and specifically for the UltraScale+, making it a natural fit. In comparison, the Stereo Camera and HR camera from OG4 will require the purchase of an additional license for the Pleora SDK in order to avoid the overlay of a watermark on the stereo image pairs.

It is to be noted that none of the I3DS elements are waterproof or designed to be operated underwater. Therefore, if underwater testing is needed, a special shielding needs to be foreseen and developed for PULSAR. Another solution is to use readily available underwater cameras as a drop-in replacement. The retained solution shall be evaluated during the design phase.

3.4 Standard Interface

3.4.1 Standard Interface in PULSAR

In PULSAR, the standard interface is a robotic device providing mechanical, data and power transfer between different components of the system. During assembly of the telescope, the standard interfaces are used at the end-effector of the robotic manipulator to manipulate the individual segmented tiles, with manipulator-to-tile and tile-to-tile connections. Once assembled, during operation, the mechanical connection provides the mechanical integrity of the segmented telescope. At the same time, the integrated connectors enable data and power transmission between the different tiles and with the OBC. In the context of this project, different configurations of the standard interface can be envisaged (in order to reduce integration and manufacturing costs):
● Active: full-feature standard interface (mechanical + actuation + control);
● Passive: mechanical interface + data and power pass-through, no actuation, mechanical connection managed by another active interface;
● Mechanical: mechanical interface without data or power transfer, no actuation, mechanical connection managed by another active interface;
● Dummy: visual only, no capability of mechanical transfer.

The following section describes the available standard interface technologies that are candidates for PULSAR. The OG5-SIROM interface was developed during the first call of the SRC and was initially planned to be re-used in the second call. However, due to some anticipated weaknesses in addressing the requirements of the second call, two other technologies have also been investigated: HOTDOCK and iBOSS. After the description of each technology, an assessment of their suitability for the PULSAR application is presented.

3.4.2 OG5-SIROM Interface

The SIROM interface is the product of the OG5 activity, one of the building blocks of the first SRC call. SIROM is a standardised robotic interface consisting of mechanical, electrical, data, thermal and control interfaces. It aims at connecting payloads to payloads or payloads to structures, with the capability to transfer mechanical loads, data, power and heat. With an androgynous design, it can mate and couple with another SIROM interface on one side. On the other side, it can be connected to an Active Payload Module (APM) or the end-effector of a robotic manipulator. The SIROM interface is composed of the following components, described in the next sections:


● The mechanical interface, which provides the SIROM connection capability as well as the housing of the system;
● The electrical interface (EIS), which allows the transfer of electrical power in both directions through SIROM, as well as the power supply for the SIROM controller;
● The data interface, which supports SpaceWire (SpW) as a high-bandwidth data transfer protocol (pass-through) as well as a CAN bus interface for SIROM TM/TC;
● The thermal interface, which provides heat exchange capability between two attached APMs;
● The SIROM controller, which implements the communication and control algorithms for the operations and monitoring of the interface.

Figure 3-76: SIROM interface design and components

Apart from the thermal interface, which was tested in a parallel track, all the other components have been integrated, tested and validated in two demonstration scenarios (orbital and planetary) during the OG5 activity.

3.4.2.1 Mechanical Interface

The SIROM housing is a cylinder with an external diameter of 120 mm and a total height of around 76 mm (30 mm and 45 mm protruding respectively outside and inside the APM). The housing includes all the mechanical and drive components as well as the internal harnessing and the connector plate. The controller and the electrical interface boards are not included in this envelope. The housing features protruding guiding petals to guide the system during the last stage of the connection process (they do not support the approach stage).

The connection is based on a latching system, formed by three capture hooks (or latches) evenly distributed 120° apart, which enter the pockets of the opposite SIROM and retract (Figure 3-77). The retraction of the latches preloads the capture tabs of the opposite SIROM, resulting in the approximation and compression of both SIROMs. Each latch consists of a titanium four-bar linkage moved by its own pinion, synchronised by an internal gear and connected to a DC actuator. Once the latching process is finalised, the DC actuator is used to extract the SIROM connector plate from the envelope in order to allow the connection to the other SIROM connector plate. The connector plate of the second SIROM can be activated by three rotating spindles from the first SIROM; the two SIROMs are then referred to as active and passive. The connector plate features all the data, power and thermal interfaces. It has no design symmetry (unlike the mechanical housing), limiting the possibility of orienting the two connected SIROMs differently. In the current state of the design, the cover lids have not been integrated, meaning that there is no specific dust protection on the system.


Figure 3-77 : SIROM Latching Mechanism Concept

3.4.2.2 Electrical Interface

The SIROM Electrical Interface Subsystem (EIS) allows electrical power to be transferred and controlled in both directions on two buses of 24 V and 100 V, for a total power rating of 150 W. The EIS implements overcurrent and undervoltage protection with automatic triggering, as well as the possibility to monitor and control power transfer on each line (ON/OFF TM/TC). The interface also provides unprotected 5 V and 24 V lines for the SIROM controller and actuator. The interface is implemented on a 120 mm x 120 mm board and is not integrated directly in the SIROM housing (Figure 3-78).

Figure 3-78 : Electrical Interface Subsystem (EIS) Board

3.4.2.3 Data Interface

The SIROM data interface supports two data buses. The SpaceWire (SpW) interface is implemented as a high-bandwidth data transfer interface (e.g. video streaming), and the CAN bus interface provides data transfer and control of SIROM interfaces and APMs. The SpW data interface is composed of 2 links (nominal and redundant) supporting full-duplex operation and link rates of up to 100 Mbps; its electrical link characteristics comply with the European SpaceWire standard ECSS-E-ST-50-12C. The CAN bus data interface is also composed of 2 redundant lines supporting baud rates of up to 1 Mbps and complies with the European standard ECSS-E-ST-50-15C. It implements the CANopen protocol, including automatic switching to the redundant CAN bus and a flying-master capability in order to support the change of the master CAN node in case of SIROM disconnection. Each SIROM is also equipped with a mechanical switch to engage the CAN termination resistor in order to always have a valid bus configuration in the case of system re-configuration.


3.4.2.4 SIROM Controller

The SIROM controller is an avionics component, part of the SIROM system, connected as a slave device to a dual-redundant CAN bus. It receives its commands from a master device remotely connected to the bus (the OBC of a spacecraft, a planetary rover or an APM). The controller implements the algorithms that monitor and control:
● The locking and coupling mechanisms of the SIROM, including the actuator and the monitoring of the sensors (Hall-effect position and temperature);
● The Electrical Interface System (EIS), responsible for switching the 100 V and 24 V power lines across the SIROM;
● The dynamic switching of the SIROM controller to the redundant CAN bus in case of nominal bus failure, in compliance with the ECSS-E-ST-50-15C CAN bus extension protocol and the flying-master management.

The SIROM I/F controller consists of 4 main components (Figure 3-79):
● A Raspberry Pi Zero, which controls the overall operation of the SIROM interface (communication, control and monitoring), running a full embedded Linux;
● A Teensy 2.0, which provides additional I/O interfaces (including analogue I/O) and controls I/O operations such as reading the sensors and the latching switch status, and controlling the motor;
● A Teensy 2.0 adaptor board, a simple board consisting of passive components and connectors for interfacing and conditioning of the actuator/sensors;
● A PiCAN2 Duo, which provides the CAN bus communication.

The current status of the SIROM controller is considered to be at breadboard level, used to validate and analyse the needs for future developments. Due to the use of COTS components and the current low level of integration, the avionics hardware was not integrated in the SIROM housing.

Figure 3-79 : SIROM controller architecture and hardware implementation
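For context, the sketch below shows what commanding a CANopen slave such as the SIROM controller looks like from a Linux master using SocketCAN – here an NMT "start remote node" command. The interface name ("can0") and node ID (0x20) are assumptions for illustration; the actual SIROM node IDs and object dictionary are defined by the OG5 design.

```cpp
// Minimal SocketCAN sketch: send a CANopen NMT "start remote node"
// command. Node ID and interface name are illustrative assumptions.
#include <cstring>
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) return 1;

    ifreq ifr{};
    std::strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { close(s); return 1; }

    sockaddr_can addr{};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(s);
        return 1;
    }

    // CANopen NMT frame: COB-ID 0x000, data = {command, target node ID}.
    can_frame frame{};
    frame.can_id  = 0x000;
    frame.can_dlc = 2;
    frame.data[0] = 0x01;  // NMT "start remote node"
    frame.data[1] = 0x20;  // assumed SIROM node ID

    write(s, &frame, sizeof(frame));
    close(s);
    return 0;
}
```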

3.4.2.5 OG5- SIROM Specifications

The following table provides the SIROM specifications as presented by the OG5 consortium at the end of the OG5 project. The total mass of the standard interface is around 1 kg (not including the EIS and controller electronics).


It has to be noted that, to our knowledge, some of these specifications are theoretical and have not been physically tested. This includes the force/torque capabilities (estimated from simulations), the operational temperature and the number of cycles. Also, the current design does not feature dust protection. During OG5, the SIROM interface was validated, in the context of the orbital scenario, with a KUKA LWR robotic arm in a vertical manipulation and connection scenario. Unfortunately, the device could not be validated in the context of the planetary scenario due to the unavailability of the robotic platform.

Table 3-29: OG5 SIROM Specifications

3.4.3 HOTDOCK Interface

HOTDOCK is a standard robotic interface supporting mechanical, data, power and thermal transfer. Under development by Space Applications Services, its design was initiated to answer the highlighted weaknesses of the SIROM design in the context of the second call of OGs, while keeping the same targeted applications.


Figure 3-80 : HOTDOCK interface concept before and after coupling

3.4.3.1 Mechanical Interface

HOTDOCK is designed in a coaxial manner to reach maximum packing density. Featuring an androgynous design with 90-degree symmetry, its form-fit contour has been specifically designed to support off-axis (through mechanical guidance) and diagonal engagement. Approach angles up to 110 degrees are supported, allowing up to three orthogonal HOTDOCK connections to be coupled simultaneously for complex spacecraft assembly. It is equipped with a mechanism which allows mating or demating using a single drive unit.

A unique (patent-pending) coupling mechanism, based on industrial design practice, has been developed (Figure 3-81). It is based on balls pressing on an external locking ring, offering self-locking and powerful load transfer through mechanical transmission along the whole circumference. The active motion of the system is provided by an integrated motor drive associated with a mechanical state machine and a motion pattern generator. For safety and reliability, if one HOTDOCK is inoperable (e.g. in case of a power failure), the other one is still able to drive the pair of interfaces alone to complete the coupling or decoupling process. Depending on the required features and specifications of the mission, the HOTDOCK design is scalable.

The design of HOTDOCK addresses both orbital and planetary applications. In both open and connected modes, the design offers good dust protection. The internal motion mechanism is connected to dust-protection petals that open the connector plate once the two interfaces are connected. After coupling, the external ring configuration offers good dust protection during operation. The system will be equipped with several sensors for temperature, for motion monitoring and control of the drive unit, and for detection of correct alignment (e.g. to validate before enabling the connection) and coupling of the interfaces. The mechanical design is foreseen to integrate the control and communication electronics, and possibly the power electronics, as a function of the power and data requirements of the mission.


Figure 3-81 : HOTDOCK internal configuration

3.4.3.2 Power Interface

The power interface of HOTDOCK allows bidirectional power transmission on two buses of 24 V and 100 V, with an envisaged power rating of up to 1 kW. The interface can integrate switching and protection capabilities on the different lines. The 24 V line is also used to power the HOTDOCK interface itself (controller, actuation). The power is transmitted through POGO connectors mounted on a movable connector plate, fully embedded and protected in the structure in the open configuration. They are arranged to support the androgynous, 90-degree symmetric characteristic of the interface.

3.4.3.3 Data Interface

The data interface of HOTDOCK is based on the expertise and technology developed during the OG5 activity. HOTDOCK features a redundant CAN interface for interface control and monitoring. The CAN bus data interface complies with the European standard ECSS-E-ST-50-15C. It implements the CANopen protocol, including automatic switching to the redundant CAN bus and a flying-master capability in order to support the change of the master CAN node in case of HOTDOCK disconnection. For high-rate data communication, it natively supports SpaceWire and Ethernet technologies. As for the power interface, the data interface implements the same kind of connectors.

3.4.3.4 HOTDOCK Controller

The HOTDOCK controller supports the low-level monitoring and control of the interface as well as the CAN bus communication (the data interface being a pass-through, although other configurations could be adopted). Based on the expertise built during the OG5 project, it will be integrated in an optimised design, offering the capability to be embedded inside the mechanical structure. The components will be selected from a family providing customisation for rad-hard and rad-tolerant applications, simplifying the transition to space missions.


3.4.3.5 Thermal Interface

HOTDOCK will offer limited thermal transfer through conduction (20-50 W). Under the MOSAR project, investigations are currently ongoing to upgrade and integrate the OG5 development for active thermal transfer through the HOTDOCK interface. The hollow-shaft coaxial design offers the possibility of having concentric tubes for liquid exchange, while keeping the symmetry characteristic.

3.4.4 iSSI (IBOSS) Interface

The iSSI interface has been designed by the iBOSS consortium, and a laboratory version is commercially available from iBOSS GmbH. The multifunctional interface iSSI combines four subassemblies for power, data, heat and mechanical load transfer (Figure 3-82). The main parts of the mechanical interface include the coupling and guiding elements as well as the positioning pins, which are arranged around the data interface. Around the mechanism, the thermal interface consists of a carbon-nanotube copper-alloy composite material (the mechanical interface can be implemented without the thermal one). The system features an androgynous design with 90° rotational symmetry. The mechanical interface is remotely actuated by a brushless DC motor driving a barrel-cam mechanism. It allows coupling and decoupling with a passive device, as for SIROM and HOTDOCK. Power transfer is provided through the alignment pins, with a capability of 2.5 kW; switching and protection capabilities are offered by an external power interface board. The system implements a redundant data interface: the main interface is realised over a short-range optical communication link, accommodated in the centre of the iSSI and implementing the TTEthernet protocol, while, as a backup solution, an optical CAN interface on the circumference supports low-data-rate communication. Control electronics are provided through an external controller box to be embedded in the payload structure.

Figure 3-82: iBOSS iSSI Interface (with thermal ring)


3.4.5 Interface Assessment

This section provides our analysis of the adequacy of the different interfaces in the context of the PULSAR project. The following table provides a high-level feature comparison between the different envisaged interfaces.

Table 3-30: Standard Interfaces Specifications Comparison

3.4.5.1 Mechanical Interface and Mechanisms

Mass and volume characteristics are important features for an application where several interfaces need to be integrated in the same module, as is the case for the segmented tiles in PULSAR. They also have an evident influence on mission costs, as well as on the constraints for the demonstrator integration in the project. Regarding volume and mass, the three interfaces are in the same range, bearing in mind that they do not offer the same set of functionalities (e.g. form/fit, force/torque capabilities, symmetry, active/passive switching, electronics integration).

There is no distinction of function between the interfaces connecting the tiles, meaning that the SI needs to implement an androgynous design to enable connection between any pair of interfaces, which is the case for the three options. For PULSAR, there is no requirement related to the rotational symmetry of the interface, as tiles are not rotated along the axis of the SI but kept in the plane of the mirror structure. SI symmetry could nevertheless bring some benefit for the manipulation of the tiles by the robotic manipulator, to simplify and limit the required workspace of the robot; this is strongly related to the specification of the manipulator.

In the context of the PULSAR demonstration scenarios, we can foresee three configurations of connections:
● Single connection, when only one set of interfaces is involved;
● Double connection, when two sets of interfaces are involved;
● Triple connection, when three sets of interfaces are involved.


Figure 3-83: Example of a triple connection with the Standard Interface

The need for triple connections shall be confirmed by the scenario analysis in the PULSAR project, but at least double connections are mandatory to assemble the mirror (with a specific sequence). In each case, this imposes a minimal approach angle that the interface shall be able to achieve without interference between the mechanical components (0° for a single connection (vertical), 30° for a double and 60° for a triple). Due to the current design of the envelope and petals and the presence of the latching mechanism, SIROM does not allow diagonal engagement. From our analysis, the current SIROM design is therefore not compatible with the PULSAR application and demonstration. HOTDOCK and iBOSS provide an adequate design for this aspect.

Before starting the connection process, there is a need to ensure that both interfaces involved are well aligned. This has to be supported either by the interface or by the robot manipulating the standard interface (and potentially other components) during the precise approach phase, and is directly related to the required precision of the manipulator. In the case of HOTDOCK, the external envelope provides form-fit guidance which, in conjunction with a compliant manipulator, will automatically self-align the interfaces when they are pressed together. This reduces the required precision of the manipulator (up to 15 mm in translation). In the case of SIROM, guidance is in theory provided by the petals; however, the latching mechanism requires an accurate alignment before the petals begin to play their role (3-5 mm). The petals ensure the alignment during the latching process, which is why we speak of locking support by form fit. In the case of iBOSS, there is no support for mechanical alignment; only when the electrical pins extend from the structure (the first step of the connection) is some final adjustment possible. An external means, such as visual servoing, is then needed to ensure the alignment before connection.

The three interfaces all have some means to force a passive side to lock/unlock. For SIROM, during the connection, spindles from the active side can drive the passive side. To our knowledge this only affects the connector plate motion (the last step of the connection) and not the latching mechanism; this can be an issue in an unlocking scenario if both latching mechanisms were initially engaged. In the case of iBOSS, one active side can grasp and attach to another side, with the capability to invert the roles for the central connection. However, the connection pins are driven by the actuator, so there is no means to retract them if the interface is passive (the other, active side would have no effect); this can be problematic if several planes are implemented, as in the PULSAR configuration. The HOTDOCK interface enables full operation from one active side towards a passive interface, with the possibility to reverse the roles; this feature is obtained by the implementation of a mechanical state machine engaged by driving rods. For the demonstration, the three interfaces provide means to have active, passive and mechanical configurations of the SI. For the mission, this has no effect on the nominal assembly and operations, but having the full capability of reversing operations increases the reliability of the system.

Regarding load transfer requirements, the PULSAR demonstration presents two principal load cases:
● Manipulation and connection of a tile by the robotic manipulator;
● Structural integrity of the telescope under gravity conditions.

For the manipulation case, with a tile mass of 7 kg, and depending on the size and COG position, we currently estimate the transversal force at 84 N and the bending moment between 30 Nm and 50 Nm. Structural integrity is strongly related to the configuration of the demonstrator (e.g. horizontal or vertical), which is not yet defined. To our knowledge, only the iBOSS values have been tested (provided here at rupture level); the SIROM and HOTDOCK values correspond to simulations. HOTDOCK theoretically offers higher bending moment transmission thanks to the load transfer through the circumference.

Regarding SIROM maturity, we consider that the current electromechanical system requires additional maturation and an update of the design. From testing activities, we highlighted some issues with the latching mechanism due to play in the transmission. The connector plate also needs to be re-worked and the connectors adapted to improve the robustness of the connections (besides the possibility of achieving connection symmetry). In comparison, HOTDOCK is currently in the prototyping phase, with a recursive approach for the testing and validation of the mechanism and connector transmission. iBOSS is the most mature technology, with, to our current knowledge, no major concern about the reliability of the mechanical connection.

3.4.5.2 Power Interface

The OG5 power interface (EIS board) provides the features required for the PULSAR application (e.g. bi-directional transfer, switching, protection); however, it presents limitations regarding the transmitted power rating and the physical size of the board. The PULSAR approach requires multiple interfaces to be integrated in the same SMT, multiplying the number of boards. Moreover, one interface could need to transfer the full power of several tiles, depending on the power architecture of the system. For this reason, through the HOTDOCK development, we aim to re-implement this interface, targeting a reduction of the size and an increase of the power rating. It is also currently envisaged, at concept level, to delocalise the power interface of the individual tiles to a centralised component on the tile, to gain in volume, mass and complexity.

3.4.5.3 Data Interface

In the context of the PULSAR application, there are two types of communication:
 TM/TC of the different SIs, to and from the OBC, to control the manipulation and connection operations of the SMT;
 TM/TC of the SMT payloads, to and from the OBC, to control the mirror activity.

The three envisaged interfaces propose similar data communication protocols, with a high- and a low-bandwidth protocol. It was highlighted during the OG5 activity that the CAN interface was not the best option for the assembled star configuration. For PULSAR, we would target implementing the high data rate bus for data exchange between SMTs (e.g. Ethernet or SpaceWire), both for payload communication and for the control of the connected SIs. The CAN interface would then be implemented locally on each tile between the connected SIs, with a data translation between the two buses.
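As an illustration of this bus-translation concept, the minimal sketch below forwards local CAN frames onto a UDP/Ethernet link (the reverse direction would be symmetric). It is a hypothetical gateway written against Linux SocketCAN via the python-can package; the channel name, peer address and framing are placeholder assumptions, not PULSAR design choices.

```python
import socket
import struct

import can  # python-can, assuming a Linux SocketCAN backend

PEER = ("192.168.1.20", 5005)  # hypothetical address of the neighbouring tile

def run_gateway() -> None:
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        msg = bus.recv(timeout=1.0)  # local SI telemetry on the tile CAN bus
        if msg is None:
            continue
        # Naive framing: 4-byte CAN identifier + 1-byte length + payload
        frame = struct.pack("!IB", msg.arbitration_id, msg.dlc) + bytes(msg.data)
        udp.sendto(frame, PEER)  # forward onto the inter-tile Ethernet link

if __name__ == "__main__":
    run_gateway()
```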

3.4.5.4 Controller

The existing controller architecture and selection of components would adequately fit the functional requirements of SIROM control and telemetry in the new OGs. However, the current design is cumbersome and heterogeneous, with two physical components requiring specific software and a communication link in between. Through the next iteration of projects, we recommend adapting the current controller design with the following goals:
● Merge the hardware components into a single element providing a low-level interface to the system mechatronics and CAN communication;
● Improve the compactness of the controller sub-system, for possible integration in the mechanical housing;
● Optimize internal harnessing and power consumption;
● Ensure hardware compatibility (e.g. a similar product family) with space-qualified components.

4 Software Components

Figure 4-1: PULSAR Concept Architecture

4.1 ESROCOS Framework

4.1.1 Introduction

The experience at ESA has shown that robotic systems, such as the ExoMars rover or the European Robotic Arm (ERA) developed for the International Space Station (ISS), require significant software engineering effort when compared with other satellite space missions. This is due to the complexity introduced by the robotic application, together with the lack of software heritage. Little or no software commonality and reuse exists across missions that are not directly related [RD-6].

To mitigate this lack of reuse and develop robotics software in a more cost-effective manner, the usage of Robot Control Operating System (RCOS) frameworks is an obvious solution. However, existing frameworks are in general not suited for use in space applications. The most popular open-source frameworks have not been developed with critical applications in mind, and lack the Reliability, Availability, Maintainability and Safety (RAMS) characteristics required by space software. On the other hand, systems currently used in industrial or critical applications are normally proprietary and tied to specific robot platforms [RD-6].

In the past, efforts to develop a standard space robot control software at European level have succeeded in their immediate objectives but failed to gain traction and to be adopted in operational missions due, among other reasons, to their proprietary origin, the lack of a sizable user community, and a design for a particular type of robot [RD-6]. For these reasons, it was decided to build the European Space Robotics Control and Operating System (ESROCOS), an RCOS specifically designed for space robotics, as a software building block for future missions. The ESROCOS framework is a set of tools and software components that support the development of robotics applications with demanding RAMS requirements [RD-6].

The main objectives set for ESROCOS were [RD-6]:
 To develop a space-oriented RCOS, considering the RAMS attributes, the avionics environment and the communication protocols characteristic of space systems.
 To integrate advanced modelling technologies to model both robots and software systems, facilitating the development of correct-by-construction software.
 To allow for the integration of software components of different criticality and real-time requirements, in order to address the needs of complex robotics applications.
 To prevent vendor lock-in by releasing the RCOS as open-source software and avoiding dependencies on proprietary components.
 To leverage existing technologies, frameworks and tools and benefit from their maturity and usage track record.
 To interoperate with existing robotics frameworks and facilitate the integration of legacy software with newly-developed algorithms and functions.

4.1.2 Reference Implementation

ESROCOS is a framework for developing robot control software applications. It includes a set of tools that support different aspects of the development process, from architectural design to deployment and validation. It is in fact at the same time a Robot Control Operating System (RCOS) and an RCOS Development Environment (RDEV). The RCOS provides a runtime framework to support the execution of robotics applications, including an operating system, communications middleware and runtime services (or libraries) for common robotics functionalities. The RDEV, on the other hand, provides a set of tools, such as model editors, code generators or data visualizers, to support the development and validation of robotics applications [RD-6, RD-3].

The main elements and characteristics of the ESROCOS framework are illustrated in Figure 4-2. The tools for robot, software and failure modelling are illustrated at the top of the figure. These are supported by the common data types for component interfacing and basic libraries for robotic functions, logging and telecommand. The middleware layer allows for the management and communication of software components at runtime. A mixed-criticality layer isolates application components at runtime: ESROCOS can be used to model applications using time and space partitioning, in order to build mixed-criticality systems in which components with different RAMS levels can safely coexist. These applications can be deployed on a SPARC (LEON) platform using the AIR hypervisor and deployed in space-quality systems. Finally, the applications may run on three different environments according to the desired software quality level: laboratory, high reliability and space quality.

The boxes on the sides of the figure represent orthogonal capabilities of the framework. Firstly, ESROCOS integrates third-party tools and frameworks to support different activities and facilitate the reuse of existing code. Secondly, ESROCOS supports the configuration and deployment of complex applications via continuous integration. Finally, the figure highlights the open-source nature of ESROCOS, in order to encourage usage and contributions from the community [RD-6].

Figure 4-2: ESROCOS Components (source: RD-6)

The individual software components of the ESROCOS framework are listed in Table 4-1. The duality of the ESROCOS framework mentioned at the beginning of the section can be noticed here as well: for instance, the kinematic chains modelling software is a modelling tool (RDEV) that generates code that is integrated in the application (RCOS) [RD-3]. Additionally, the individual components in Table 4-1 are classified according to their scope into laboratory and space quality components. Laboratory components are intended for use in non-critical systems and run on a regular Linux workstation. Space quality components are tools or libraries targeted at critical systems; they are developed to a higher level of quality, in line with ECSS standards [RD-3]. Finally, the ESROCOS framework combines existing and newly developed components. Depending on the scope of the work foreseen for each component, the design distinguishes between new development, extension and integration of existing software, and integration of existing software; this classification is reflected in Table 4-1 [RD-3].

Table 4-1: ESROCOS Software Components (source: RD-3)

Model robots:
 Robot kinematic chains modelling tools: solvers for all possible kinematics and dynamics transformations of lumped-parameter robot chains, generated from formal and semantically validatable models (new development).

Model and analyse real-time systems:
 TASTE: framework for model-driven SW development and analysis of real-time distributed systems. The main components are the Orchestrator, the ASN.1 compiler, Ocarina, editors, PolyORB-HI (middleware), the HW library, the SDL tools and RTEMS (extension and integration of existing SW).
 BIP compiler: compiler tool for generating C++ code from BIP models (extension and integration of existing SW).
 BIP engine: runtime for executing C++ code generated from BIP models (extension and integration of existing SW).
 TASTE2BIP: generation of BIP models from TASTE models (new development).
 SMC-BIP: statistical model-checker for BIP models (extension and integration of existing SW).

Common robotics functions:
 Base robotics data types: elementary data types for robotics applications (extension and integration of existing SW).
 OpenCV: computer vision library (integration of existing SW).
 Eigen: linear algebra library (integration of existing SW).
 Transformer component: library to support developers with geometric transformations (new development).
 Stream aligner component: library to support developers with temporal alignment of time-stamped data streams (new development).
 PUS services: implementation of the following PUS services in TASTE: TC verification, housekeeping TM, event management, function management, time management, connection test, timeline-based scheduling, OBCP, parameter management, file management (new development).

Deploy and run:
 AIR: ARINC-653 hypervisor (extension and integration of existing SW).
 HAIR: AIR emulator and tools (extension and integration of existing SW).
 CAN bus driver: driver for the GR CAN controller for RTEMS (extension and integration of existing SW).
 Ethernet driver: driver for the GR Ethernet controller for RTEMS (extension and integration of existing SW).
 SpaceWire driver: driver for the GR SpaceWire controller for RTEMS (extension and integration of existing SW).
 EtherCAT: support for an EtherCAT driver in RTEMS, TASTE and AIR (new development).

Monitor, debug, test:
 Data logger: tool for logging data from TASTE components (new development).
 vizkit3d: 3D data visualization (integration of existing SW).
 RVIZ: data and image visualization (integration of existing SW).
 Gazebo: robot simulator (integration of existing SW).
 PUS console: GUI application to show PUS communication on the control PC (new development).

Integrate legacy SW:
 Middleware bridges: tools and libraries to support a runtime bridge between TASTE and the ROS/ROCK environments (new development).
 Framework import tools: tools and libraries to support the importing of components from the ROS/ROCK frameworks into ESROCOS (extension and integration of existing SW, new development).
 Framework export tools: tools and libraries to support the exporting of components from the ESROCOS framework to ROS/ROCK (extension and integration of existing SW, new development).

Manage build and dependencies:
 Autoproj: software package management and build tool (integration of existing SW).
 ESROCOS development scripts: collection of development tools for setting up and editing projects in the ESROCOS environment (extension and integration of existing SW).

For more information about each component, please consult the RD-3 and RD-4 documents of the ESROCOS project.

4.1.3 Capabilities and Limitations

The evaluation and validation of the capabilities of the framework took place in two separate steps. First, the individual components of the framework were tested and evaluated during the “Unitary Testing” phase. Then, the framework was validated in three real-world scenarios. The space reference scenarios took place in two test facilities provided by the EU FACILITATORS project: the BRIDGET rover and the Mars Yard at Airbus DS in Stevenage (UK), and the platform-art© orbital simulation facility at GMV in Madrid (Spain). The nuclear reference scenario was implemented in the International Thermonuclear Experimental Reactor (ITER) robotics test facility at VTT in Tampere (Finland) [RD-6].

The focus of the validations was the development and testing of robotics applications that involve different elements of ESROCOS. The validation use-cases focused on the functional layer of three robotics systems, i.e. a wheeled rover and two manipulator arms. The aim of the tests was not to test the autonomy of the systems, but to demonstrate the software capabilities that, in combination with the other building blocks developed within the PERASPERA projects, would allow autonomous operations in the future [RD-6]. The main capabilities and limitations of the framework, in the context of its objectives, are summarised hereafter.

Develop a space-oriented RCOS

The ESROCOS framework globally addresses the RCOS objective. It provides a runtime and execution environment for robotics applications in the target domain. To this aim, the framework provides the following capabilities [RD-5, RD-6]:
 A component model defined by TASTE and interfaces using the common robotics data types defined in ASN.1, with a limitation in handling large data structures (e.g. images, point clouds), although a workaround has been provided (see https://github.com/ESROCOS/tools-imagetransfer).
 PolyORB-HI as the runtime environment of an application, provided by TASTE, and the possibility to communicate with ROS and ROCK environments via middleware bridges.
 Integration of tools needed for robotics application development and testing, such as the Gazebo simulator or the RVIZ and vizkit3d visualizers.
 Improved documentation and traceability of the AIR hypervisor according to the ECSS standards.
 RAMS properties of newly developed components, such as PolyORB-HI drivers or the Packet Utilization Standard (PUS) library, with limitations since the achieved RAMS attributes are at the moment not high enough to tackle the development of critical systems.
 Enhancement of the capabilities of TASTE by integrating the Behavior, Interaction, Priority (BIP) tools:
o TASTE2BIP: generation of BIP models from TASTE (Specification and Description Language (SDL) functions), with limitations since the model verification is not fully automated.
o BIP compiler: generation of C++ code from BIP models.
o BIP engine: runtime for executing BIP models.
o SMC-BIP: statistical model-checker for BIP models.
 BIP-based Failure Detection, Isolation and Recovery (FDIR) modelling and generation capabilities developed in the ERGO project.
 Telemetry/Telecommand (TM/TC) services addressed by implementing a PUS services component that can be integrated as a standalone library or as reusable TASTE functions.
 Controller Area Network (CAN) bus (GRCAN), Ethernet (GRETH) and SpaceWire (GRSPW2) drivers for the latest version of RTEMS, targeting the GR740 system.
 CAN bus (GRCAN), Ethernet (GRETH) and SpaceWire (GRSPW2) drivers for the AIR hypervisor, based on RTEMS.
 A PolyORB-HI driver for SpaceWire on Linux using a USB-SpaceWire adaptor.
 The Simple Open Ethernet for Control Automation Technology (EtherCAT) Master (SOEM) library for RTEMS, with limitations since the testing involved only an ARM architecture, due to hardware availability, instead of the GR740 platform with a LEON4 processor.

Integrate advanced modelling technologies

The ESROCOS framework supports model-based development of applications in two main dimensions [RD-6]:
 The software architecture is modelled using TASTE, which describes the system using four views: data, interface, deployment and concurrency.
 The robot architecture is modelled using the kin-gen tools, which model robot kinematics, define queries and generate kinematics solvers. Being based on model composability, this can nevertheless be extended to cover other aspects of the robot, such as dynamics and interaction with the environment.

Allow integration of complex robotics applications

ESROCOS provides the AIR hypervisor, which allows components with different criticality or real-time properties to be deployed in separate partitions that share the computer resources in a deterministic way, hence allowing for the safe coexistence of the different parts of the system. To this aim, the framework provides the following capabilities [RD-5, RD-6]:
 An AIR hypervisor adapted to the selected hardware platform (GR740) and the latest version of RTEMS selected by ESA for qualification and use in future projects.
 Device drivers for Ethernet, CAN and SpaceWire on the GR740 board for RTEMS and AIR.
 An AIR hypervisor integrated in the TASTE framework, with limitations since its integration has been performed at the level of the TASTE editor but not yet in the build support infrastructure, thus allowing AIR systems with multiple partitions to be modelled, but not distributed systems that combine AIR nodes with other types of nodes.
 An Autoproj configuration, which should facilitate the development of complex applications combining new, existing and third-party components, since it automates the creation of a dedicated workspace and the management of the build and software dependencies.

Leverage existing assets

The ESROCOS framework is largely based on existing tools and frameworks that have been updated, where necessary, and integrated into a consistent framework [RD-6]. Existing assets that have been improved or evolved and integrated in ESROCOS are: the TASTE framework, the BIP tools, the AIR hypervisor, the RTEMS operating system and the SOEM EtherCAT library [RD-6].

Other existing components have been integrated in ESROCOS without modification. In some cases, the software is used as is, and in others it has required a specific configuration or a software interface layer. The most important of these assets are: the ROCK framework (specifically, the Autoproj build system, the basic data types, and certain components like the transformer and stream aligner), the ROS framework, the Gazebo simulator, the RVIZ and vizkit3d visualizers, the Eigen linear algebra library, and the OpenCV image processing library [RD-6].

Ease the development of robotics systems

A number of existing assets from the ROCK and ROS ecosystems, including simulation and visualization tools, have been integrated in ESROCOS as described in the previous subsections [RD-6]. In addition, the ESROCOS framework includes specific tools that have been developed to allow interoperation with these frameworks [RD-6]:
 A set of tools that translate the robotics data types defined in ROCK (C++) and ROS (IDL) to ASN.1, the data modelling language of the TASTE framework. In the case of the ROCK types, the tool also generates type conversion functions (ROS type conversions are handled at runtime).
 A set of tools to create bridge components that connect, at runtime, the TASTE PolyORB-HI middleware with the ROCK and ROS middleware, allowing ESROCOS (TASTE), ROCK and ROS components to run together and exchange messages to fulfil the purpose of the application.
 A tool that exports a TASTE model to ROCK, intended for ROCK users to become familiar with the ESROCOS modelling approach.

For more information about the capabilities and limitations of the framework, please consult the RD-4 document of the ESROCOS project.

4.1.4 Conclusions

ESROCOS is a framework for developing robot control software applications. It includes a set of tools that support different aspects of the development process, from architectural design to deployment and validation. In addition, it provides a set of core functions that are often used in robotics or space applications [RD-6]. More specifically, the framework offers:
 Development elements:
o Modelling tools for software (TASTE, BIP) and robot kinematics (kin-gen), with code generators.
o A dependency management and build system (Autoproj).
 Runtime elements:
o A middleware (PolyORB-HI) and data types (ASN.1) that run components modelled in TASTE.
o Runtime components for robotics (e.g. stream aligner) and space systems (e.g. PUS, OBCP).
o Runtime platforms: Linux, RTEMS and the AIR hypervisor, with TASTE/PolyORB-HI support.
 Operation elements: simulation and visualization tools (e.g. vizkit3d, Gazebo) and bridges to interface with ROS and ROCK components.

The framework is intended to support the development of software following the ECSS standards for space software. It does not by itself cover all the development phases and verification steps, but it facilitates certain activities and ensures that the software built can be made compatible with the Reliability, Availability, Maintainability and Safety (RAMS) requirements of critical systems. To fulfil this purpose, the framework combines a set of characteristics in a novel way [RD-6]:

 It is built specifically for the needs of the space robotics community, with inputs from the community.
 It is provided under open-source licenses in order to facilitate its adoption.
 It integrates a Time and Space Partitioning hypervisor, which is proposed as a possible way to integrate non-deterministic algorithms in critical systems with real-time constraints.
 It relies on model-based and formal approaches, which are relevant for space and other critical application domains.
 It is compatible with existing, widely-used robotics frameworks.

Nevertheless, ESROCOS currently supports a limited set of target platforms, and there is a discrepancy in the maturity of the provided tools. Moreover, scalability and performance might be an issue, considering the suboptimal memory usage of moderately complex applications developed with TASTE for resource-constrained platforms such as the GR740 [RD-5]. Therefore, a trade-off between the benefits of the model-driven approach and application fidelity on the one hand, and the additional effort and cost necessary to adapt the framework for supplementary platforms and robots on the other, is needed to identify which subsets of the ESROCOS framework could be used in the PULSAR use cases.

4.2 ERGO Autonomy Framework

4.2.1 Introduction

The ERGO architecture is composed of two main components:
 A functional layer, which performs the actions requested by the executive layer. The functional layer is instantiated as a set of TASTE functions (software components) providing the interface with the hardware.
 An ERGO agent, which controls the execution of the functional layer. This agent encloses a set of control loops, which can implement deliberative or reactive behaviours, and a central agent which ensures the correct interaction among the different control loops. The interface among the different components of the ERGO agent is based on goals (actions or states desired to be achieved) and observations (sensor data, or internal state deduced from the functional layer information).

The instantiation of ERGO is a process that can be decomposed into five different tasks:
 Architectural design, which consists in defining the interfaces between the functional layer components and the ERGO agent. This task is done via TASTE.
 Planning modelling, which consists in describing the planning model and implementing the external functions. This definition is done via PDDL files.
 Agent development, which consists in defining the reactors that compose the executive and deliberative layers.
 Functional layer development, which consists in developing a set of TASTE functions.
 Agent and functional layer integration, which consists in generating executables based on TASTE.

4.2.2 Agent Reactors

One of the most important components of ERGO is the main agent controller, based on T-REX. The controller follows the “Sense-Plan-Act” paradigm. Time is discretized, and it is the responsibility of the controller, or agent, to coordinate a set of control loops, also known as “reactors”, to perform the execution.

4.2.2.1 Reactor Definition

Reactors can be reactive or deliberative. Reactive reactors execute goals immediately and do not need to deliberate. Deliberative reactors need time to deliberate and may be busy for more than one tick. A timeline definition is needed for a reactor to work and communicate with the system. Reactors can have internal timelines and external timelines. A set of rules and guidelines applies to the timelines:
 A timeline is owned (internal) by one and only one reactor.
 Only the reactor that owns a timeline can post observations (facts) on it.
 All reactors that are subscribed (external) to a timeline will receive these observations and can post goals as requests for actions on the timeline.
 A reactor should be subscribed to all the timelines whose state it needs and all the ones it may want to command.
 Internal timelines are used to publish the state of the reactor and to provide an interface to accept goals from other reactors.

Timelines work like state machines, where only one predicate (state) can be true at any moment. The states are set by the observations published on the timeline. The definition of a new internal timeline starts by defining the state machine; each of the states is a predicate, composed of parameters. All the timelines are defined in a timeline configuration file. A minimal sketch of these ownership and subscription rules is given below.
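The following Python sketch illustrates the timeline semantics described above (single owner, one true predicate at a time, observations pushed to subscribers). It is a didactic model only, not the T-REX or ERGO API; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Observation:
    predicate: str                     # the state that becomes true, e.g. "Moving"
    args: Dict[str, float] = field(default_factory=dict)

@dataclass
class Timeline:
    name: str
    owner: str                         # internal to exactly one reactor
    subscribers: List[Callable[[str, Observation], None]] = field(default_factory=list)
    state: Optional[Observation] = None

    def post(self, reactor: str, obs: Observation) -> None:
        # Rule: only the owning reactor may post observations on its timeline.
        if reactor != self.owner:
            raise PermissionError(f"{reactor} does not own timeline {self.name}")
        self.state = obs               # only one predicate is true at any tick
        for callback in self.subscribers:
            callback(self.name, obs)   # external reactors receive the observation

# Example: an arm dispatcher publishes its state; a planner reactor observes it.
arm_tl = Timeline(name="arm_state", owner="arm_dispatcher")
arm_tl.subscribers.append(lambda tl, o: print(f"[planner] {tl} -> {o.predicate}"))
arm_tl.post("arm_dispatcher", Observation("Moving", {"joint1": 0.3}))
```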

4.2.2.2 Reactors Instantiation

The main functionality of a new reactor can be implemented as a completely separate library, which eases the testing of the functionality. All the reactors already available in ERGO are divided into functionality and reactor. The base class for all reactors is the teleoreactor class, which has a set of methods that interface with the agent and a set of methods that have to be overloaded when defining a new reactor. Four types of reactors have been tailored from the generic component in the ERGO framework:
 The Ground Control interface reactor, which handles use-case specific telemetry and telecommands. It is able to process direct telecommands (E1), time-tagged commands (E2), event-driven actions (E3) and goal commanding (E4), via the mission planner.
 The Mission Planner reactor, which processes high-level commands (E4) to generate a mission plan. This mission plan, as generated by the planner, contains a set of sub-goals to be executed at given times, together with a set of constraints to be matched. The mission planner reactor uses a specific PDDL domain and problem.
 The Scientific detector (GODA) reactor, which uses the GODA component provided by SCISYS and receives high-level goals in order to detect serendipitous events in the context of planetary missions. This reactor can analyse collected data to send a new goal to the mission planner.
 The Command Dispatcher reactors, which interface directly with the functional layer. These receive low-level goals from the mission planner and receive observations from the functional layer that indicate the results of the execution.

4.2.3 ERGO Orbital Case

The OG2 consortium developed two distinct reference implementations of the ERGO toolkit: the Planetary and Orbital cases. These implementations are aimed at deploying real-world demonstrators of the capabilities of ERGO.

The Orbital use case is the most relevant to the context of PULSAR, as illustrated in Figure 4-3. It will serve as a first basis for the reuse of ERGO in the PULSAR demonstrators. A large part of the work will consist in adapting and extending the functional layer in order to support the various hardware and software components of our demonstrators:
 Robotic arm motion planning and control, which is for now foreseen to be implemented as an independent component built upon existing technologies. This component would then expose a command and monitoring API to the ERGO functional layer.
 Sensor acquisition, which would use the I3DS framework and ICU, and then provide a command and monitoring API to the functional layer.
 AOCS sensors, controller and actuators, when applicable.
 SMT command and monitoring, in order to communicate with and control the delocalised control unit on each mirror tile.
 InFuse data fusion functions.

Then, some adaptations will have to be made to the Agent in order to implement our various scenarios, probably instantiating a new Agent for each scenario. First, new command dispatcher reactors will be needed for each new functional layer component. Some uncertainty remains at this stage about the need for a full-fledged deliberative reactor, as the sequencing of our scenarios may be well known right from the start, negating the need for replanning or online goal optimization. Depending on the level of autonomy of our scenarios (still TBD), new mission planner reactors will need to be added to properly implement the desired assembly sequence. It is also possible that the assembly sequence could be autonomously determined by the RAS planning functionality. In that case, this part of the autonomy could be delegated to the functional layer, giving the Agent only the high-level responsibility of monitoring the execution of the mission. All these design considerations are very preliminary and are still being studied by the PULSAR partners. They will be refined in upcoming project development phases.

Figure 4-3: ERGO Orbital Use Case Architecture

4.3 InFuse

4.3.1 Introduction

InFuse provides software components for perception, navigation and data fusion functions (OG3). The architecture of the Common Data Fusion Framework (CDFF) is composed of a collection of core libraries with data types compatible with the sensors provided by OG4-I3DS. OG3 is integrated with OG2-ERGO through mechanisms that allow interaction and communication between the two projects, enabling autonomy for orbital and planetary robotic applications. To this end, complete processing chains are available in InFuse to process raw sensor data and return high-level perception results by reusing the core functions of the CDFF. The following figure describes the integration of the InFuse CDFF with respect to the related projects.

Figure 4-4: InFuse CDFF architecture

Numerous data acquisitions have been performed in OG3 to provide the space robotics community with a common base for algorithm performance and accuracy benchmarks. The perception functions suitable to fulfil the needs of the current project are presented in the next section, together with the corresponding results on the collected InFuse datasets.

4.3.2 OG3-InFuse perception functions

A review of the OG3-InFuse CDFF is needed to list the relevant core functions suitable for the PULSAR scope, as well as the missing functionalities. Performance and accuracy numbers are reported when available, according to the InFuse V&V activities and results report. The focus is on perception functions that will allow on-orbit precise assembly of a very large structure by an autonomous robotic system. In the proposed scenario, multiple segmented mirror tiles will be assembled to form the final primary mirror of the satellite. Real-time 3D reconstruction of the structure is also targeted, to be able to monitor and detect incidents during the assembly process.

Reference implementations for model-based visual tracking

For on-orbit servicing, a model-based visual tracking method is used to accurately localize a target satellite in a rendezvous scenario. Precise 3D localization is possible thanks to the knowledge of the target CAD model, by minimizing the registration error between the visual features detected in the current image and the model projected at the estimated pose. This processing chain can be directly reused for our needs, where the mirror tiles and the final structure must be accurately localized for the on-orbit structure assembly scenario. Two reference implementations are provided in the InFuse framework, by the DLR and Magellium partners. The first one is a proprietary implementation based on the work of [Oumer16]. The second one is based on the model-based tracking method implemented in the ViSP open-source library [Marchand05]. Both methods have been tested on the OOS-SIM dataset, which consists of the combination of the following cases:
 Translational motion
 Rotational motion
 Various lighting conditions (eclipse condition, suboptimal condition, optimal condition)

The sensors used are the stereo-cameras, with an image resolution of 528x406 and image acquisition at 3 Hz.

Verification and acceptance conditions have been defined through the following criteria:
 A maximum position deviation from the ground truth of 5% at mid-range (i.e. from 0.1 m to 0.05 m)
 A maximum angular deviation from the ground truth from 10 deg to 5 deg
 An update rate > 1 Hz

Both model-based trackers passed the tests, with the following performance numbers for DLR.

Processing time: 40-60 ms for all three motion types.
Accuracy: 17-22 mm (translational motion); 1.7-2 deg and 22-30 mm (rotational motion); 2-3 deg and 25-30 mm (roto-translational motion).
Pass/Fail: passed for all three motion types.

Table -3 – DLR model-based tracker InFuse results

For the ViSP model-based tracker, the authors have reported an average tracking rate of 30 Hz.

The main difficulty for these trackers is the lighting conditions. In degraded conditions, shadows or over-illumination lead to poor visual-feature tracking: some parts of the object contours become hardly distinguishable and, since these algorithms rely mainly on contour or edge tracking, tracking drift can occur. An external light projector can be used to avoid issues with under-illumination. To cope with over-illumination, control of the shutter speed and aperture is required.

With regard to the current use case, the accuracy needed to perform the structure assembly should be around one millimetre in translation and less than one degree in rotation. Vision-based pose accuracy depends on the distance to the target. Another key factor is the quality of the visual-feature tracking, which depends on the scene and on the object to track. In good conditions (lighting, object accurately tracked, camera image quality, ...), we could expect to reach the required accuracy when close to the object. A higher camera image resolution will improve the tracking accuracy but will increase the tracking time at the same time; this is a trade-off and a parameter to be tuned experimentally.

Input: depending on the method, RGB images, RGB-D data or stereo-images.
Output: object-to-camera pose, i.e. the rotation and translation that transform a 3D point from the object coordinate system to the camera coordinate system.
Advantages: good processing time; good pose accuracy thanks to the knowledge of the CAD model of the object to track.
Disadvantages: needs an initial pose; difficulties in detecting a tracking drift.
InFuse V&V results: passed the tests; good lighting conditions are required.
Remarks: none.

Table -4 – Summary of the model-based tracking method

Reference implementation for pointcloud tracking

This CDFF module tracks a 3D model in a pointcloud, with an emphasis on LIDAR sensors. A Kalman filter is used internally to predict the relative pose transformation for better convergence of the 3D-3D registration method. Since the LIDAR data acquired in the OOS-SIM environment are not directly exploitable, the author conducted experiments with simulated data.

Input: pointcloud (asn1SccPointcloud).
Output: state vector (asn1SccRigidBodyState): position, orientation, linear velocity, angular velocity, covariance matrix.
Advantages: pose prediction should improve the basin of convergence of the ICP and reduce the convergence time.
Disadvantages: a linear Kalman filter is currently implemented, with the classical singularities associated with Euler angles.
InFuse V&V results: one passed test (static target, average velocity 4.7 mm/s and 0.355°/s around the Z axis); one failed test (rotational movement of the target at 1°/s around the Y axis, average velocity 3.33 mm/s).
Remarks: LIDAR data are emulated from a Velodyne HDL-32E at a frequency of 1 Hz.

Table -5 – Summary of the pointcloud tracking method

Fiducial markers for object pose estimation

Fiducial markers are frequently used in robotics and in space robotics [Huntsberger05] to determine the 6D object pose with respect to a camera. They greatly facilitate complex cooperative vision-based applications such as grasping or positioning tasks. The most popular fiducial markers are the AprilTag [Krogius19], [Wang16] library, the ArUco [Ramirez18] library and the Chilitags [Bonnard13] library. Circular fiducial markers also exist, such as the WhyCon [Nitsche15] system or the CCTag [Calvet16] method, but they require multiple tags or a specific tag encoding to fully estimate the 6DoF tag pose. Accuracy and detection speed are crucial elements: fast detection is a prerequisite for real-time vision processing, while good pose accuracy and detection robustness are necessary for precise manipulation or positioning tasks in various and challenging conditions.

Fiducial markers can play a role in multiple scenarios in the current context. A marker positioned beside the mirror tile loader can allow the robotic arm to accurately grasp the mirror element. Small tags located near the SIROM interface can be used to guide and assist the robot end-effector during the mirror tile assembly task. Preliminary tests should be conducted to anticipate the challenging conditions of the orbital scenario. Some methods are not robust to partial marker occlusion, and a trade-off between marker size, the number of markers and the detection range will be needed (see the illustrative sketch after Table -6).

Input: RGB images.
Output: object-to-camera pose, i.e. the rotation and translation that transform a 3D point from the object coordinate system to the camera coordinate system.
Advantages: the target pose is returned directly; no tracking drift issue (tags are detected anew in each image).
Disadvantages: markers must be printed and glued to the object of interest; most methods require the tag to be fully visible (no occlusion) for proper detection.
InFuse V&V results: none (see remarks).
Remarks: this feature is not available in the OG3 CDFF. Special care must be taken when selecting the material used for marker printing, to avoid issues with glare or reflection. Experiments should be conducted to evaluate the required minimum tag size w.r.t. the working range, and to select the best library in terms of detection precision, false positives, processing time and ease of integration.

Table -6 – Summary of the fiducial marker detection method
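By way of illustration, the sketch below detects ArUco markers and recovers the marker-to-camera pose with OpenCV's aruco module (from opencv-contrib) and solvePnP; the intrinsics, marker size and file name are placeholder assumptions, not PULSAR values.

```python
import cv2
import numpy as np

K = np.array([[2182.0, 0, 1024], [0, 2182.0, 1024], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)                    # assume negligible lens distortion
marker_size = 0.05                    # assumed tag side length [m]

# 3D marker corners in the marker frame (z = 0 plane), in ArUco detection order
h = marker_size / 2.0
obj_pts = np.array([[-h, h, 0], [h, h, 0], [h, -h, 0], [-h, -h, 0]], dtype=np.float32)

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(img, dictionary)

for tag_corners in corners:
    # Solve the marker-to-camera pose as a 4-point planar PnP problem
    ok, rvec, tvec = cv2.solvePnP(obj_pts, tag_corners.reshape(4, 2), K, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if ok:
        print("rotation (Rodrigues):", rvec.ravel(), "translation [m]:", tvec.ravel())
```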

Object pose estimation

Object pose estimation aims to solve the following problem:

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \,[R \;\; t]\, \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$

that is, to find the rotation $R$ and translation $t$ that transform 3D points expressed in the object coordinate system into the camera coordinate system, assuming known camera intrinsic parameters $K$.

A first approach consists in using sparse feature matching in order to solve the perspective-n-point (PnP) problem [Lepetit04]. It requires a prior training step in order to establish the correspondence between extracted salient 3D object point coordinates and descriptive information. To recover the object pose from a new image, the principle is to find the learned salient points in the current image through a feature matching process. The PnP pose estimation problem can then be solved, using the 3D information extracted during the training step and the 2D information from the keypoints detected in the current image. A robust scheme such as RANSAC is classically used to avoid using outliers from badly matched keypoints in the pose estimation process.

With the availability of low-cost 3D sensors such as the Kinect, RGB-D information can be exploited to take full advantage of the 3D structure of the scene. Object pose retrieval from a template-based approach has been proposed in [Hinterstoisser12], with multiple modalities extracted from RGB and depth images. The closest pose from a set of training poses is used to initialize a pose refinement method such as the classical ICP [Besl92] algorithm. In the same manner as keypoints can be detected in RGB images, 3D features can be extracted from RGB-D data and the object pose retrieved after a 3D-3D registration [Choi12].

Following the advances in deep neural networks, multiple methods [Tekin18], [Tremblay18] have recently been proposed to infer the 6DoF object pose from RGB images. The core idea consists in inferring the 2D location of the projection of the 3D object bounding box coordinates in the current image: the classical tasks of keypoint detection, descriptor extraction and keypoint matching are replaced by a deep neural network that predicts the object bounding box location. Well-known PnP pose estimation methods are then used to retrieve the 6DoF pose from the 2D-3D correspondences. In [Li18], a novel deep neural network architecture has been proposed to iteratively refine the object pose by matching a rendered image with the observed image. In the current context, a fast and reliable object pose estimation method can be used to initialize the model-based tracker or to provide redundancy.

Reference implementations for 3D pointcloud processing

Complete processing chains are available in the OG3 CDFF to perform stereo-reconstruction from raw left and right images. Basic building blocks for 3D data exploitation are directly available:
 3D features extraction
 3D descriptors
 Robust 3D features matching
 Pointcloud transformation, assembly and filtering
 3D-3D pointcloud registration with several ICP variants
 Bundle adjustment methods

A complete pipeline for 3D dense reconstruction from stereo-images and pointcloud assembly has been tested on various orbital and planetary datasets. The summary shows mixed results, with one passed test, two tests close to acceptance and three failed tests. Nevertheless, there is still room for improvement, for instance by exploring better stereo-reconstruction algorithms. Dense pointcloud reconstruction can then be used to monitor the primary mirror assembly process, by comparing the reconstructed pointcloud with a reference pointcloud. Processing time is not crucial here; accuracy of the reconstruction is more important.

Input: depends on the method: stereo-images for 3D reconstruction from stereo-cameras, or pointclouds returned by LIDAR and depth sensors.
Output: dense pointcloud of the object of interest.
Advantages: the 3D structure of the scene becomes available for other perception algorithms.
Disadvantages: processing time needed for stereo reconstruction.
InFuse V&V results: mixed results.
Remarks: none.

Table -7 – Summary of 3D pointcloud reconstruction

4.3.3 InFuse CDFF building blocks

Several core CDFF building blocks can provide relevant outputs for the vision functionalities needed in the PULSAR project. A short description of the necessary input, the output data type and the available implementations in the OG3 CDFF are listed in the tables below. The ASN.1 data format is used as a common base and for interoperability between the OG frameworks.

 2D features detectors

Keypoints are detected at salient locations in an image, typically corner-like or blob-like locations. Good properties for an interest point detector are invariance to scale, rotation and viewpoint change. Repeatability measures and different metrics [Mikolajczyk05] can be used to assess the quality and accuracy of a feature detection method. Detection speed is also crucial, especially for vision applications that require real-time performance, such as visual odometry.

Input: RGB image (asn1SccFrame).
Output: detected features (asn1SccVisualPointFeatureVector2D).
Available implementations: Harris [Harris88] features detector; ORB [Rublee11] features detector.

Table -8 – Summary of 2D features detection methods available in InFuse

 2D features descriptors extractors

Feature descriptors encode the information about a keypoint location in a compact way. A typical approach consists in building a histogram of oriented gradients extracted from a patch at the keypoint location, assembled into a 1D vector. A robust descriptor method must exhibit a high similarity measure (or low descriptor distance) between two keypoints seen from different viewpoints, and the opposite when two keypoints are not related.

Input: RGB image (asn1SccFrame); detected features (asn1SccVisualPointFeatureVector2D).
Output: descriptors computed for the input 2D features (asn1SccVisualPointFeatureVector2D).
Available implementations: ORB [Rublee11] descriptors extractor.

Table -9 – Summary of 2D features descriptors extraction methods available in InFuse

 2D features matching

This step is the process of matching the features detected in a source image with those detected in a target image. The simplest methods use a brute-force approach and a nearest-neighbour criterion. Optimized data structures (e.g. kd-trees or the FLANN library [Muja09]) are often used to speed up the process, as brute-force matching has O(n²) complexity. Different metrics can be used to compare feature descriptors, such as the L2 distance, the Hamming distance for binary descriptors, or the cosine similarity measure.

Input: features with descriptors computed for the source image (asn1SccVisualPointFeatureVector2D); features with descriptors computed for the target image (asn1SccVisualPointFeatureVector2D).
Output: list of matched features (asn1SccCorrespondenceMap2D).
Available implementations: FLANN-based matcher.

Table -10 – Summary of 2D features matching methods available in InFuse
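As an illustration of the detection-description-matching chain covered by Tables -8 to -10, the sketch below uses OpenCV's ORB detector/descriptor with a brute-force Hamming matcher (rather than the CDFF blocks themselves, whose ASN.1 interfaces are not reproduced here); the file names are placeholders.

```python
import cv2

# Load a source/target image pair (placeholder file names)
src = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)
dst = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary ORB descriptors on both images
orb = cv2.ORB_create(nfeatures=1000)
kp_src, des_src = orb.detectAndCompute(src, None)
kp_dst, des_dst = orb.detectAndCompute(dst, None)

# Brute-force matching with the Hamming distance (suited to binary descriptors),
# with cross-checking as a simple nearest-neighbour consistency filter
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_src, des_dst), key=lambda m: m.distance)
if matches:
    print(f"{len(matches)} matches, best distance: {matches[0].distance}")
```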

 Perspective-n-Point algorithms

Perspective-n-Point (PnP) algorithms aim to estimate the pose of a calibrated camera from n 3D-to-2D point correspondences.

Input: 3D object points (asn1SccPointcloud); corresponding projected 2D image points (asn1SccVisualPointFeatureVector2D).
Output: pose of the object in the camera coordinate system (asn1SccPose).
Available implementations: iterative PnP solver, initialized internally from a DLT or a planar pose estimation.

Table -11 – Summary of Perspective-n-Point methods available in InFuse
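A hedged sketch of a robust PnP solution is given below, using OpenCV's RANSAC variant (an illustration of the technique, not the CDFF implementation); the intrinsics and the synthetic correspondences are placeholder assumptions.

```python
import cv2
import numpy as np

K = np.array([[2182.0, 0, 1024], [0, 2182.0, 1024], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)                                               # assume no distortion

# Synthetic but consistent data: project random 3D points with a known pose
object_points = np.random.uniform(-0.2, 0.2, (20, 3)).astype(np.float32)
rvec_true, tvec_true = np.array([0.1, -0.2, 0.05]), np.array([0.0, 0.0, 1.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)
image_points = image_points.reshape(-1, 2)

# RANSAC-based PnP: badly matched correspondences would be rejected as outliers
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist, reprojectionError=3.0)
if ok:
    print("pose:", rvec.ravel(), tvec.ravel(), "inliers:", len(inliers))
```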

 3D features detectors

Similarly, 3D features can be detected in pointclouds.

Input: pointcloud (asn1SccPointcloud).
Output: detected 3D features (asn1SccVisualPointFeatureVector3D).
Available implementations: corner detector; Harris detector; Intrinsic Shape Signatures (ISS) [Zhong09] keypoints detector.

Table -12 – Summary of 3D features detection methods available in InFuse

 3D features descriptors extractors

Various algorithms have been proposed to describe the local geometry in the neighbourhood of a 3D feature point. They can be used to estimate the rigid transformation between two sets of sparse features, and therefore provide an initial estimate for iterative registration algorithms such as the Iterative Closest Point (ICP) method.

Input: pointcloud (asn1SccPointcloud); detected 3D features (asn1SccVisualPointFeatureVector3D); surface normals (asn1SccPointcloud).
Output: descriptors computed for the input 3D features (asn1SccVisualPointFeatureVector3D).
Available implementations: Point Feature Histogram (PFH) [Rusu08]; Signature of Histograms of OrienTations (SHOT) [Tombari10].

Table -13 – Summary of 3D features descriptors extraction methods available in InFuse

 3D features matching

The rigid transformation between two pointclouds can be estimated through various 3D registration methods, for instance the well-known ICP algorithm. A robust scheme can be employed to reject outliers. The implemented methods listed below expect 3D features as input rather than dense pointclouds; the main advantage of using 3D interest points is to decrease the influence of outliers (moving objects, locations with bad 3D reconstruction, ...).

Input: 3D features with descriptors computed for the source pointcloud (asn1SccVisualPointFeatureVector3D); 3D features with descriptors computed for the target pointcloud (asn1SccVisualPointFeatureVector3D).
Output: pose of the source pointcloud in the target pointcloud coordinate system (asn1SccPose); success flag (bool).
Available implementations: best matches algorithm; ICP (PCL implementation); RANSAC-based pose estimation.

Table -14 – Summary of 3D features matching methods available in InFuse

 Dense registration

Several variants of the ICP method are available in the CDFF. Contrary to the implementations of the previous section, the following implementations take dense pointclouds as input.

Input: source pointcloud (asn1SccPointcloud); target pointcloud (asn1SccPointcloud); transform guess (optional) (asn1SccPose); use-guess flag (bool).
Output: pose of the source pointcloud in the target pointcloud coordinate system (asn1SccPose); success flag (bool).
Available implementations: ICP (PCL implementation); ICP (CloudCompare implementation); ICP (libpointmatcher implementation).

Table -15 – Summary of ICP methods available in InFuse
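For illustration, the sketch below performs the same dense registration with an optional initial guess, using the Open3D library rather than the PCL/CloudCompare/libpointmatcher implementations listed above; the file names and the correspondence distance are placeholder assumptions.

```python
import numpy as np
import open3d as o3d

# Placeholder pointclouds; in the PULSAR context these would come from
# stereo reconstruction or the LIDAR (asn1SccPointcloud converted to Open3D)
source = o3d.io.read_point_cloud("source.ply")
target = o3d.io.read_point_cloud("target.ply")

init_guess = np.eye(4)  # optional transform guess (identity if unused)

# Point-to-point ICP with a 5 cm correspondence distance (assumed value)
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, init_guess,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)        # fraction of inlier correspondences
print("pose:\n", result.transformation)  # source-to-target 4x4 transform
```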

 Image processing toolbox

Multiple image processing techniques are available to perform common low-level image operations.

Input: raw image (asn1SccFrame).
Output: processed image (asn1SccFrame).
Available implementations: background extraction; Canny edge detection; color conversion; derivative edge detection; image degradation; image rectification; image undistortion; K-means clustering; normal vector extraction.

Table -16 – Summary of image processing methods available in InFuse
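Two of these low-level operations (undistortion and Canny edge detection) are illustrated below with their OpenCV equivalents; the calibration values are placeholder assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[2182.0, 0, 1024], [0, 2182.0, 1024], [0, 0, 1]])  # assumed intrinsics
dist = np.array([-0.1, 0.02, 0.0, 0.0, 0.0])                     # assumed distortion

undistorted = cv2.undistort(img, K, dist)  # image undistortion
edges = cv2.Canny(undistorted, 50, 150)    # Canny edges (hysteresis thresholds)
```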

 Stereo-reconstruction

Stereo-reconstruction algorithms are implemented to compute the disparity map from a stereo-image pair.

Input: left image (asn1SccFrame); right image (asn1SccFrame).
Output: disparity image (asn1SccFrame), or pointcloud (asn1SccPointcloud), depending on the implementation.
Available implementations: stereo block matching; stereo semi-global block matching [Hirschmuller08]; stereo matching (Edres); adaptive cost 2-pass scanline optimization stereo matching [Wang06].

Table -17 – Summary of stereo reconstruction methods available in InFuse

 Disparity to pointcloud

Two methods are available to reconstruct the pointcloud from the disparity map.

Input: disparity image (asn1SccFrame); intensity image (asn1SccFrame).
Output: pointcloud (asn1SccPointcloud).
Available implementations: disparity to pointcloud; disparity to pointcloud with intensity.

Table -18 – Summary of disparity to pointcloud methods available in InFuse
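The two steps covered by Tables -17 and -18 are illustrated together below with OpenCV's semi-global block matcher and disparity reprojection; the Q matrix would normally come from stereo rectification (cv2.stereoRectify), and all parameter values are assumptions.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching [Hirschmuller08]; disparities are returned
# as 16.4 fixed-point values, hence the division by 16
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=256, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Disparity-to-depth reprojection matrix, normally obtained from
# cv2.stereoRectify; here built by hand for f = 2182 px, baseline B = 5 cm
f_px, B = 2182.0, 0.05
Q = np.array([[1, 0, 0, -1024],   # -cx (assumed principal point)
              [0, 1, 0, -1024],   # -cy
              [0, 0, 0, f_px],
              [0, 0, 1.0 / B, 0]])
points = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 pointcloud [m]
```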

 Pointcloud toolbox

Homogeneous transformation of 3D points is possible with the following module.

Input: source pointcloud (asn1SccPointcloud); pose transformation (asn1SccPose).
Output: transformed source pointcloud.
Available implementations: Cartesian system transformation.

Table -19 – Summary of pointcloud transformation methods available in InFuse

 Transformation toolbox

Finally, various estimation methods are implemented to solve for the transformation between two sets of points.

Input: matches (asn1SccCorrespondenceMaps3DSequence or correspondenceMapsSequence); optional guess poses (asn1SccPosesSequence); optional guess pointcloud (asn1SccPointcloud).
Output: estimated poses (asn1SccPosesSequence).
Available implementations: bundle adjustment (Ceres solver or SVD decomposition); least squares minimization (Ceres solver).

Table -20 – Summary of 3D transformation estimation methods available in InFuse

4.3.4 Available sensors from I3DS

Multiple sensors have been used during the OG3-InFuse project. For the experiments with the OOS-SIM testbed, stereo-cameras, LIDAR, IMU and force-torque sensors have been exploited to collect data for the different benchmarks.

OG4-I3DS provides multiple sensors and components, for both the Planetary and Orbital scenarios.

Figure 4-5: I3DS sensor suite components

More in-depth details are given below about the sensors used for the Orbital use case that are relevant to the current framework.

Figure 4-6: I3DS sensors for the Orbital scenario

 High-resolution cameras

The acquisition board is provided by COSINE, with a model specifically designed and qualified for the space environment. The COSINE EP12 Acquisition Board Engineering Model embeds an FPGA and a CPU with a debug interface. A Basler HRC is used in the I3DS Orbital sensor suite, with a Basler Ace acA2040-25gmNIR camera head.

Figure 4-7: Basler Ace acA2040-25gmNIR

Specification of the acA2040-25gmNIR:
 Resolution (H x V): 2048 x 2048 pixels (4 Mpx)
 Sensor type: CMOSIS CMV4000-2E12M, progressive scan CMOS, global shutter
 Optical size: 1"
 Effective sensor diagonal: 15.9 mm
 Pixel size (H x V): 5.5 μm x 5.5 μm
 Max. frame rate: 25 fps at full resolution
 Mono/Color: Mono (NIR)
 Image data interface: Fast Ethernet (100 Mbit/s), Gigabit Ethernet (1000 Mbit/s)
 Pixel formats: Mono 12, Mono 12p (Mono 12 Packed), Mono 8, YCbCr422_8 (YUV422_8)
 Synchronization: via hardware trigger, via software trigger, via free-run
 Exposure time control: via hardware trigger, or programmable via the camera API
 Camera power requirements: Power over Ethernet (PoE, 802.3af compliant) supplied via the Ethernet connector, or 12 VDC supplied via the I/O connector; ≈3.1 W when using PoE, ≈2.6 W at 12 VDC via the I/O connector
 I/O lines: 1 opto-coupled input line, 1 opto-coupled output line
 Lens mount: C-mount
 Size (L x W x H): 42 x 29 x 29 mm without lens mount or connectors
 Weight: <90 g
 Housing temperature during operation: 0-50 °C (32-122 °F); humidity 20-80% relative, non-condensing
 Housing temperature max.: 70 °C (158 °F) according to UL 60950-1(*)
 Ambient temperature max.: 30 °C (86 °F) according to UL 60950-1(*)

Table -21 – Basler acA2040-25gmNIR specifications
(*) UL 60950-1 test conditions: no lens attached to the camera; no heat dissipation measures; ambient temperature kept at 30 °C (86 °F).

The optic used is a Zeiss Interlock Compact 2.8/21 lens with a 21 mm focal length.

 Focal length: 21 mm
 Field of view: 30.73° x 30.73°
 Instantaneous field of view: 54 arcsec
 F#: 2.8 - 22
 Min. working distance: 0.25 m
 GSD @ 4 m: 1 mm
 Weight: ~500 g

Table -22 – Zeiss Interlock Compact 2.8/21 specifications

 Stereo-cameras

The acquisition board is the same COSINE EP12 model. The two camera heads are the COSINE MO6X, comprising:
 A monochrome global shutter 4 Mpx sensor (CMOSIS CMV4000), sensitive in the visible and near-infrared regions;
 Front-end electronics;
 An aluminium housing and a C-mount lens interface for each camera head. The left (L) and right (R) camera heads are equipped with (non-motorized) 12 mm Kowa LM12JC optics (COTS).

Figure 4-8: I3DS stereo cameras

Specifications of the CMOSIS CMV4000 sensor:

Sensor size: 4 Mpx, 2048 (H) x 2048 (V)
Pixel pitch: 5.5 x 5.5 µm
Optical format: 1"
Shutter type: global shutter
Max. frame rate: 15 fps
Sensitivity: 5.56 V/lux·s
Conversion gain: 0.075 LSB/e-
Full well charge: 13500 e-
Dark noise: 13 e- (RMS)
Dynamic range: 60 dB
Dark current: 125 e-/s (25 °C)
Power: 600 mW
Fixed pattern noise: <1 LSB (<0.1 % of full swing)
Operating temperature: -30 to +70 °C

Table 4-23 – CMOSIS CMV4000 specifications


Specifications for the COSINE HRC Optics:

Focal length: 21 mm
Field of view: 30.73° x 30.73°
Instantaneous field of view: 54 arcsec
F#: 2.8 – 22
Min. working distance: 0.25 m
GSD @ 4 m: 1 mm
Weight: ~500 g

Table 4-24 – COSINE HRC optics specifications

In order to select an appropriate baseline with respect to the expected workspace range, the theoretical stereo performance is analyzed for different parameter sets. We are especially interested in the stereo reconstruction range and in the reconstruction error.

Sensor size | Sensor resolution | Pixel binning | Focal length | Baseline | Disparity range | Expected disparity error | Min depth distance | Depth error @ min depth
1" | 1024x1024 | 1x1 | 12 mm | 5 cm | 256 px | 0.25 px | 21.3 cm | 0.02 cm
1" | 2048x2048 | 2x2 | 12 mm | 5 cm | 256 px | 0.25 px | 21.3 cm | 0.02 cm
1" | 2048x2048 | 1x1 | 12 mm | 5 cm | 256 px | 0.25 px | 42.6 cm | 0.04 cm
1" | 2048x2048 | 1x1 | 12 mm | 5 cm | 128 px | 0.25 px | 85.5 cm | 0.17 cm
1" | 2048x2048 | 1x1 | 12 mm | 5 cm | 256 px | 0.5 px | 42.6 cm | 0.09 cm
1" | 2048x2048 | 1x1 | 12 mm | 8 cm | 256 px | 0.25 px | 68.1 cm | 0.07 cm
1" | 2048x2048 | 1x1 | 15 mm | 5 cm | 256 px | 0.25 px | 53.2 cm | 0.05 cm

Table 4-25 – Comparison between different stereo parameters

Increasing the sensor resolution increases the minimum depth, i.e. the closest distance at which depth can be reconstructed; pixel binning reduces the effective image resolution and therefore has the opposite effect. Increasing the baseline distance allows seeing farther, but at the expense of a larger minimum depth distance. Finally, the disparity range and the expected disparity error are parameters of, or depend on, the stereo-reconstruction algorithm: increasing the disparity range improves the minimum depth distance but increases the processing time, while the expected disparity error depends mostly on the accuracy of the stereo-reconstruction method. The more accurate the method, the smaller the depth error.


Figure 4-9: Theoretical depth error w.r.t. the depth distance

The figure shows the theoretical depth error with respect to the distance to the object. The depth error increases quadratically with the true depth:

ΔZ = (Z² / (B·f)) · Δd

where Z is the depth, B the stereo baseline, f the focal length expressed in pixels, and Δd the disparity error.
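As an illustration, the figures of Table 4-25 can be reproduced with a few lines of Python. This is a minimal sketch assuming an ideal pinhole model and a rectified pair; the focal length is converted to pixels through the 5.5 µm pixel pitch, and the function and variable names are ours.

def stereo_figures(pixel_pitch_um, f_mm, baseline_m, disp_range_px, disp_err_px):
    # focal length expressed in pixels
    f_px = (f_mm * 1e-3) / (pixel_pitch_um * 1e-6)
    # closest depth that the disparity search range can reconstruct
    z_min = baseline_m * f_px / disp_range_px
    # quadratic depth error law: dZ = Z^2 / (B * f_px) * dd
    dz = z_min ** 2 / (baseline_m * f_px) * disp_err_px
    return z_min, dz

# third row of Table 4-25: 2048 x 2048, no binning, 12 mm lens, 5 cm baseline
z_min, dz = stereo_figures(5.5, 12, 0.05, 256, 0.25)
print(f"min depth = {z_min * 100:.1f} cm, error @ min depth = {dz * 100:.2f} cm")
# prints: min depth = 42.6 cm, error @ min depth = 0.04 cm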

 LIDAR
The LIDAR model is the L3CAM, manufactured by BEAMAGINE. The scanning LIDAR fits within a box of 15 x 15 x 14 cm, with an overall mass of 2 kg. The scanning system can run at up to 3 Hz in the customized I3DS design, while adaptations can either increase the number of points measured per point cloud and reduce the measurement frequency or, on the contrary, increase the measurement frequency by reducing the number of points per point cloud. The following table summarizes the LIDAR performance with respect to the target distance.


 Structured light sensor
The structured light pattern projection is provided by SINTEF. It must be coupled with the high-resolution cameras in order to retrieve the depth of the scene. When the system is properly calibrated, a priori knowledge of the projected pattern and of the camera/projector geometry allows 3D measurements through triangulation. This technique is known as structured-light 3D reconstruction.

Figure 4-10: I3DS structured light sensor

The pattern projector emits in the infrared (860 nm) at high intensity. A sequence of patterns following a Gray coding with phase shifting can be used; the phase-shifting method increases the resolution of the reconstruction. An overview of the pattern images is shown below.


Figure 4-11: Gray coding pattern
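The binary-reflected Gray code underlying these patterns is straightforward to generate. The following Python sketch (our own illustration, not SINTEF's implementation) builds the vertical-stripe bit planes that such a projector would display; the phase-shifted patterns would be added on top for sub-stripe resolution.

import numpy as np

def gray_code_patterns(width=1024, height=768):
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)              # Gray code of each projector column
    n_bits = int(np.ceil(np.log2(width)))
    patterns = []
    for b in range(n_bits - 1, -1, -1):    # most significant bit plane first
        stripe = ((gray >> b) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns                        # list of height x width images

patterns = gray_code_patterns()
print(len(patterns), patterns[0].shape)    # 10 bit planes of 768 x 1024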

The dimensions of the projector are the following.

Figure 4-12: Structured light sensor dimension

The performance of the pattern projector combined with the high-resolution camera proved unsatisfactory on the datasets acquired in OG6: the projected patterns were not sufficiently visible, due to the intensity of the Sun spot and of the local wide illumination, and to the lack of a dedicated filter during the experiments.

 Wide illumination
The wide illuminator provided by SINTEF is used to ensure close-to-optimal lighting conditions for the different vision tasks. The illumination wavelength ranges from 400 nm to 700 nm. A trigger signal, sent through an RS232 serial link, synchronizes the illumination device with a camera.


Figure 4-13: I3DS wide illumination projector

Information about the 3D structure of the scene can therefore be retrieved in multiple ways. A dense pointcloud can be computed with the stereo-reconstruction algorithms available in OG3-InFuse. The pattern projector combined with the high-resolution camera can be exploited to get depth information with structured-light technology. Finally, LIDAR technology provides a pointcloud whose density depends on the number of beams.

Vision sensors will be mounted on the robot end-effector. This configuration enables eye-in-hand visual servoing. The cameras must be wisely located so that they can always focus on the object of interest. The camera locations and the operational space of the robotic arm in this assembly scenario are necessary information to choose the camera focal length accordingly.

4.3.5 Calibration tools

Calibration tools are required for the following operations:
 Monocular and stereo camera calibration, to estimate the intrinsic camera parameters, the distortion coefficients and, in the stereo case, the extrinsic parameters. Open-source tools already exist, or the DLR calibration tools (DLR CalDe - CalLab [Strobl]) can be used.
 Hand-eye calibration [Tsai89], to estimate the rigid transformation between the camera frame and the robot end-effector frame (a minimal sketch is given after this list).
 Extrinsic calibration between the different sensors, e.g. between the LIDAR frame and the camera frame.
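As an illustration of the hand-eye step, the sketch below exercises OpenCV's calibrateHandEye function (available from OpenCV 4.1) on synthetic poses, from which the random ground-truth transform is recovered. The helper names and numeric values are ours.

import numpy as np
import cv2

rng = np.random.default_rng(0)

def rand_pose():
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, (3, 1)))
    return R, rng.uniform(-0.3, 0.3, (3, 1))

def compose(Ra, ta, Rb, tb):   # Ta @ Tb on (R, t) pairs
    return Ra @ Rb, Ra @ tb + ta

def invert(R, t):
    return R.T, -R.T @ t

X = rand_pose()                # ground-truth camera-to-gripper transform
T = rand_pose()                # fixed calibration target pose in the base frame

Rg, tg, Rc, tc = [], [], [], []
for _ in range(10):
    G = rand_pose()            # gripper pose in the base frame
    # target seen by the camera: inv(X) @ inv(G) @ T
    C = compose(*compose(*invert(*X), *invert(*G)), *T)
    Rg.append(G[0]); tg.append(G[1]); Rc.append(C[0]); tc.append(C[1])

R_est, t_est = cv2.calibrateHandEye(Rg, tg, Rc, tc,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, X[0], atol=1e-6), np.allclose(t_est, X[1], atol=1e-6))

In practice, the gripper poses come from the robot forward kinematics, and the target poses from, e.g., chessboard detection with a previously calibrated camera.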

4.3.6 Conclusion

Two vision tasks emerge from the PULSAR scenario. The first involves all the operations necessary to perform the structure assembly. Vision will be omnipresent to guide the robotic arm, from mirror tile grasping to the precise positioning of the mirror tile into the structure assembly. Model-based tracking should be a key element to accurately retrieve the pose of the object of interest; this component should be reusable with little modification from the OG3 CDFF. It is also planned to use fiducial markers as a complementary solution for robustness and redundancy, or as a substitute depending on the working field of view. This tool is not available in OG3-InFuse, but standard libraries exist and should be integrated into the project. Object pose estimation methods are yet to be defined in order to initialize the model-based tracker. Finally, methods to detect tracking drift and to automatically reinitialize the model-based tracker must be investigated and tested to provide reliability.

The second task consists in monitoring the primary mirror assembly process. To this end, the current pointcloud of the scene will be analyzed and compared to references. A dense pointcloud can be reconstructed from stereo images, computed directly by a dedicated system (high-resolution camera + pattern projector, or LIDAR), or obtained from a combination of these systems. The main core components to perform dense stereo-reconstruction already exist in the InFuse CDFF; the precision and accuracy of these methods need to be improved to satisfy the requirements.

4.3.7 References

[Besl92] P. J. Besl and N. D. McKay, “A method for registration of 3-d shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, Feb 1992.

[Bonnard13] Q. Bonnard, S. Lemaignan, G. Zufferey, A. Mazzei, S. Cuendet, N. Li, A. Özgür, and P. Dillenbourg, “Chilitags 2: Robust fiducial markers for augmented reality and robotics.” 2013.

[Calvet16] L. Calvet, P. Gurdjos, C. Griwodz, and S. Gasparini, “Detection and accurate localization of circular fiducials under highly challenging conditions,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 562–570.

[Choi12] C. Choi and H. I. Christensen, “3d pose estimation of daily objects using an rgb-d camera,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct 2012, pp. 3342–3349.

[Harris88] C. Harris and M. Stephens, “A combined corner and edge detector,” in In Proc. of Fourth Alvey Vision Conference, 1988, pp. 147–151.

[Hinterstoisser12] S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab, “Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes,” 2012.

[Hirschmuller08] H. Hirschmuller, “Stereo processing by semiglobal matching and mutual information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 328–341, Feb 2008.


[Huntsberger05] T. Huntsberger, A. Stroupe, and B. Kennedy, "System of systems for space construction," vol. 4, pp. 3173–3178, November 2005.

[Krogius19] M. Krogius, A. Haggenmiller, and E. Olson, "Flexible layouts for fiducial tags," under review.

[Lepetit04] V. Lepetit, J. Pilet, and P. Fua, “Point matching as a classification problem for fast and robust object pose estimation,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., vol. 2, June 2004, pp. II–II.

[Li18] Y. Li, G. Wang, X. Ji, Y. Xiang, and D. Fox, “Deepim: Deep iterative matching for 6d pose estimation,” in European Conference Computer Vision (ECCV), 2018.

[Marchand05] E. Marchand, F. Spindler, and F. Chaumette, “ViSP for visual servoing: a generic software platform with a wide class of robot control skills,” IEEE Robotics and Automation Magazine, vol. 12, no. 4, pp. 40–52, 2005, special issue on Software Packages for Vision- Based Control of Motion, P. Oh, D. Burschka (Eds.).

[Mikolajczyk05] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615– 1630, Oct. 2005.

[Muja09] M. Muja and D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in International Conference on Computer Vision Theory and Application VISSAPP’09). INSTICC Press, 2009, pp. 331–340.

[Nitsche15] M. Nitsche, T. Krajník, P. Čížek, M. Mejail, and T. Duckett, "Whycon: An efficient, marker-based localization system," in IROS Workshop on Open Source Aerial Robotics, 2015.

[Oumer16] N. W. Oumer, “Visual tracking and motion estimation for an on-orbit servicing of a satellite.”

[Ramirez18] F. Romero-Ramirez, R. Muñoz-Salinas, and R. Medina-Carnicer, "Speeded up detection of squared fiducial markers," Image and Vision Computing, vol. 76, June 2018.

[Rublee11] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient alternative to sift or surf,” in 2011 International Conference on Computer Vision, Nov 2011, pp. 2564– 2571.

[Rusu08] R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, “Aligning point cloud views using persistent feature histograms,” in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2008, pp. 3384–3391.

[Strobl] K. H. Strobl, W. Sepp, S. Fuchs, C. Paredes, M. Smisek, and K. Arbter. DLR CalDe and DLR CalLab. Institute of Robotics and Mechatronics, German Aerospace Center (DLR). Oberpfaffenhofen, Germany.


[Tekin18] B. Tekin, S. N. Sinha, and P. Fua, “Real-time seamless single shot 6d object pose prediction,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2018, pp. 292–301.

[Tombari10] F. Tombari, S. Salti, and L. D. Stefano, “Unique signatures of histograms for local surface description,” in In Proc. of the European Conf. on Computer Vision (ECCV), Heraklion, Greece, September 5-11 2010.

[Tremblay18] J. Tremblay, T. To, B. Sundaralingam, Y. Xiang, D. Fox, and S. Birchfield, “Deep object pose estimation for semantic robotic grasping of household objects,” in Conference on Robot Learning (CoRL), 2018.

[Tsai89] R. Y. Tsai and R. K. Lenz, "A new technique for fully autonomous and efficient 3d robotics hand/eye calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345–358, June 1989.

[Wang06] L. Wang, M. Liao, M. Gong, R. Yang, and D. Nister, "High-quality real-time stereo using adaptive cost aggregation and dynamic programming," in Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06), June 2006, pp. 798–805.

[Wang16] J. Wang and E. Olson, “AprilTag 2: Efficient and robust fiducial detection,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016.

[Zhong09] Y. Zhong, “Intrinsic shape signatures: A shape descriptor for 3d object recognition,” in 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Sep. 2009, pp. 689–696.

4.4 RAS Motion Control

How are space robotic arms controlled? The robot arms described in the previous section differ significantly in their current or intended applications. The ISS robot systems Canadarm, Canadarm2, JEMRMS and ERA are manually controlled: there is always a human operator in the loop, either on the ISS or on the ground. Control from the ground is mainly designed to reduce crew workload as much as possible, since the tasks include operations conducted over long periods of time, such as payload transfer. The ERA is the only manipulator that can be controlled during an EVA beyond contingency operations (e.g., driving a joint with a tool); an operator can command the ERA from a man-machine interface that is either internal or external to the station. Both Canadarm2 and the JEMRMS are controlled from dedicated workstations within the ISS, using translational and rotational hand controllers. These two manipulators are 'flown' by the operator with the hand controllers: the operator commands the end-effector motion and depends on camera views for situational awareness. This requires a higher level of training, skill and dexterity from the operators and leaves more room for operator error. Similar to the ERA, the JEMRMS and Canadarm2 can be controlled via auto-sequences, and joints can be operated individually when needed. For safety reasons, the redundancy of the 7-DOF robot arms is not used: the ERA designers have designated a nominal operation with the shoulder yaw locked, while Canadarm2 allows the mission designer or operator to choose which joint to lock.

4.4.1 Dextre, SFA

Dextre was originally designed to be operated by astronauts from inside the International Space Station (ISS). The Canadian Space Agency revised Dextre's software and worked with NASA on a series of tests (called On-Orbit Checkout Requirements) to ensure that Dextre could be operated safely from the ground. Today, Dextre's activities are prepared by robotics planners on the ground, who program all the tasks and the software needed to get the job done. Dextre is operated by robotics controllers both at NASA's Johnson Space Center in Houston and at the Canadian Space Agency's headquarters in Saint-Hubert.

The MA (Main Arm) and SFA (Small Fine Arm) are operated from the RMS console by a crew member in the JEM-PM. Both the MA and the SFA have six joints, providing enough degrees of freedom for human-arm-like movements. The robotic control workstation, known as the JEMRMS console, is used to manipulate the JEMRMS. TV cameras are mounted on the arms, so the crew members can manipulate the JEMRMS while watching the camera images on the TV monitor of the JEMRMS console inside the PM (Pressurized Module).

4.4.2 DEXARM

DEXARM foresees a number of control strategies for free space and contact operations, among which:  Implicit impedance control;  Explicit impedance control;  Implicit hybrid control;  Operational space control.

In the implicit control schemes, the Cartesian-level loop generates joint position set points, while in the explicit control schemes the Cartesian-level loop generates joint torque set points. The DEXARM architecture allows the incorporation of any of these schemes, because each joint is equipped with both position and torque sensors, allowing the selection between joint position control and joint torque control. Mixed control schemes controlling the joint impedance can also be envisaged, in which the Cartesian-level loop generates position, torque and impedance set points.
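A hedged one-dimensional sketch of the two families is given below (our illustration, not the DEXARM flight code): the implicit scheme integrates the desired impedance dynamics into a motion set point tracked by the joint position loop, while the explicit scheme maps the impedance force directly to a joint torque set point. All gains are placeholders.

K, D, M = 500.0, 40.0, 1.0     # illustrative stiffness, damping, virtual mass

def implicit_impedance_step(x_cmd, dx_cmd, f_ext, x_d, dt):
    # admittance-like update: integrate the desired spring-damper-mass
    # dynamics; x_cmd is sent to the joint *position* controller (via IK)
    ddx = (f_ext + K * (x_d - x_cmd) - D * dx_cmd) / M
    dx_cmd += ddx * dt
    x_cmd += dx_cmd * dt
    return x_cmd, dx_cmd

def explicit_impedance_torque(x, dx, x_d, jac):
    # impedance force mapped to a joint *torque* set point: tau = J^T f
    f = K * (x_d - x) - D * dx
    return jac * f             # scalar Jacobian in this 1-D sketch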

4.4.3 Orbital Express Robotics

All robotic operations were scripted prior to execution and performed autonomously as part of increasingly complex mission scenarios. The arm was commanded to perform its operations by either direct command from the ground, or autonomously by the ASTRO Mission Manager software. Scenarios in the early phases of flight operations incorporated a number of Authority to Proceed (ATP) pause points, which required a signal to be sent from the ground to authorize the ASTRO Mission Manager to continue the sequence. This allowed the ground operations team to verify that the scenario was proceeding as planned before continuing to the next step.


Later scenarios incorporated fewer ATPs. The final scenarios were compound autonomous sequences, performing rendezvous, capture, ORU transfer and fluid transfer without any ATPs.

4.4.4 Front-end Robotics Enabling Near-term Demonstration (FREND) Robotic Arm

For the RSGS mission, the NRL engineers developed the robotic control software necessary for safe and efficient on-orbit autonomous and ground-controlled robotic operations in space. The software covers all modes, from direct teleoperation to full autonomy. The latter requires sufficient on-board computing resources, as the control loops cannot be closed via ground resources.

Robotic control methodologies:
 Scripting
o The flight processor controls the robot via precomputed trajectories that are cued by ground operators
o Used wherever possible
 Partial autonomy
o The flight processor controls the robot via feedback from onboard sensors (end-effector cameras, F/T sensor, etc.)
o Includes supervision from the ground via stop/go commanding
o Used during contact or fine-alignment operations within an engineered environment
 Full autonomy
o Same as partial autonomy, except that the stop/go authority is given to the onboard software
o Used only during time-critical or communication-denied operations (e.g. robotic grapple or release)
 Tele-operation
o A human operator controls the robot in real time via hand controllers such as joysticks and "standard" telemetry/video displays
o Used only during manipulation of "non-engineered" environments, e.g. emplacing a new payload on an RSO or freeing a stuck deployable

4.4.5 Dragonfly

The robot is commanded from the standard GEO ComSat Mission Control Center (MCC).

 The robot is controlled through a combination of pre-programmed scripts and an in-situ registration process that matches the trajectories to the actual geometric environment.
 A set of visual situational-awareness cameras provides compressed imagery to ground-based operators, who monitor the operations and issue simple asynchronous commands such as "trigger next sequence", "bump two steps left, then continue script" and "re-register now".
 A boresight camera provides imagery for proximity and contact alignment, as well as verification of the tool operation.

4.4.6 Compliant Assistance and Exploration SpAce Robot (CAESAR)

The control of the robot system can be roughly divided into control on the Cartesian level and control on the joint level. The latter is completely executed in the electronic boxes. The software for the Cartesian control, the robot applications and the task-directed programming is executed on a separate Robot Control Unit (RCU).

DLR's light-weight robots are controlled by a cascaded structure of current, joint and Cartesian-level controllers. The modular joints are equipped with motor-side position sensors and link-side torque sensors; additional link-side position sensors are available for safety checks and referencing. The collocated design of the joint, with actuator and sensor in close proximity, is advantageous from a control point of view and enables robust, passivity-based control approaches.

The fastest and innermost loop is the space-vector-modulated PI current control of the RoboDrive BLDC motors. It runs at 20 kHz on the local floating-point MDSP for every joint and ensures high bandwidth for the upper control layers. The middle layer of the control structure is the joint control. It runs at 3 kHz on the floating-point CDSP on the JCU for two joints. The joint controller is a state-feedback controller that uses the motor position, the link-side torque and their derivatives as state. The feedback of the torque signal allows active vibration damping of the flexible robot joints. Additionally, friction and other disturbances such as Harmonic Drive and motor ripple can be observed and compensated online. Depending on the application, the higher-level controller can adapt the feedback gains and tune the behavior of the joint seamlessly from compliant torque control to stiff high-performance position control.
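A minimal sketch of such a full-state joint feedback is given below (our illustration with placeholder gains, not DLR's implementation): motor position theta, link-side torque tau and their derivatives form the state, and shifting weight between the position and torque feedback terms moves the joint between stiff position control and compliant torque control.

def joint_state_feedback(theta, dtheta, tau, dtau, theta_d, tau_ff,
                         Kp=120.0, Kd=8.0, Kt=0.6, Ks=0.02):
    # torque feedback (Kt, Ks) actively damps the flexible joint; the
    # feed-forward tau_ff comes from the higher-level dynamic model
    return (tau_ff
            - Kp * (theta - theta_d) - Kd * dtheta
            - Kt * (tau - tau_ff) - Ks * dtau)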

Figure 4-14 Structure of joint level control

The high-level control of the complete robot system is performed on an external control computer, the Robot Control Unit (RCU), running at 500 Hz. Due to the light-weight design and the elasticity of the Harmonic Drive gearboxes and of the torque sensors, a flexible-joint robot model is considered for the controller design. The high-level control calculates the dynamic model of the robot to provide proper feed-forward terms for the joint control. Depending on the desired behavior, it also adapts the gains of the joint control loop for optimal performance in position control, or implements a compliant control law such as Cartesian impedance control.


Figure 4-15 Structure of Cartesian impedance control

The following controllers are typically used for applications:

 Position control - a full state-feedback position controller for high-performance movements in free space, for example for transfer motions. The feedback includes an integral term for high accuracy, and the torque feedback actively dampens the vibrations of the structure. The feedback gains are adapted online according to the current load and robot configuration.
 Cartesian impedance control - a Cartesian controller that mimics the behavior of a mechanical multidimensional spring-damper-mass system with user-defined stiffness and damping. In contrast to conventional admittance-controlled industrial robots, the robust passivity-based implementation allows a wide range of desired stiffness settings, down to zero, and a stable execution even under rigid contacts and contact transitions. This mode is especially useful for all movements in contact with the environment, for applying forces to workpieces, or for aligning parts in on-orbit assembly applications.

The precise knowledge of the dynamic model and the availability of torque measurements in the joints allow the detection of all interaction forces with the environment, and therefore a sensitive collision detection. As the control mode can be changed from one control cycle to the next (within 2 ms), an immediate reaction to collisions with the environment is possible. This way, CAESAR combines the advantages of high-performance position control and sensitive compliant behavior in assembly, on-orbit servicing and physical human-robot interaction scenarios.

The robot software on the RCU covers all aspects of communication, Cartesian commanding, housekeeping data management and the mission applications. The EtherCAT master designed for CAESAR specifies and manages the data packages communicated between the joints, the gripper and the application software; the type and profile of a data package can vary from cycle to cycle. The Cartesian impedance, force and position control offer the user/application programmer a powerful interface for individual mission applications and needs. The robot is commanded by giving the needed impedance, force or position of the tool center point (TCP) in the working area of the manipulator. The corresponding joint values are automatically generated, not only for the target configuration but also for a safe trajectory reaching the requested Cartesian target position. Depending on the application, software for e.g. visual data processing and visual servoing for autonomous grasping of objects is also executed on the RCU.
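A minimal sketch of the residual-based detection described above is given below (our illustration: inverse_dynamics stands in for the robot's rigid-body model, and the threshold is a placeholder). Checked every control cycle, the flag can trigger an immediate switch to a compliant control mode.

import numpy as np

def collision_check(tau_measured, q, dq, ddq, inverse_dynamics, threshold=1.0):
    # model prediction: gravity + Coriolis/centrifugal + inertial torques
    tau_model = inverse_dynamics(q, dq, ddq)
    tau_ext = tau_measured - tau_model     # estimated external joint torques
    return np.any(np.abs(tau_ext) > threshold), tau_ext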


4.5 AOCS

4.5.1 AOCS Functions

AOCS has to be designed to support all the mission's needs, from the launch and early orbit phase (LEOP) to satellite disposal. The satellite must implement at least the following operational phases: launch phase, transfer phase, deployment phase, mission phase and deorbit phase. Each phase is supported by one or more AOCS modes. The transition between modes is carried out by the on-board software (OBS) upon receiving a remote control command from the ground or when triggered by a timer. More details on the functional requirements and hardware architecture of these modes are presented in [1]. In the context of PULSAR, our main concern is to design efficient controllers for the deployment and mission phases.

The deployment phase is normally entered when the satellite has reached its target orbit. In this phase, all the satellite's appendages (i.e. arrays, antennas, instruments) are deployed. The AOCS mode used to deploy the appendages is selected considering the dynamic conditions and the thermal and power constraints. The main issue consists in preserving a reasonable pointing accuracy of the satellite in order to maintain the link with the ground station.

The mission phase begins after deployment and is maintained up to the end of the mission, before satellite disposal. In this phase, the AOCS has to support all the nominal operations required by the satellite. Basically, the nominal pointing mode (i.e. mission mode) is designed to achieve the attitude pointing accuracy and knowledge necessary for correct payload operations. Specific functions linked to the payload requirements, such as agile pointing or steering capability, are implemented in this phase. For a space telescope, pointing stability is the driving requirement for high-quality imaging, as presented by Blackmore et al. in [2].

4.5.2 AOCS Requirements

The frequency content of the pointing stability requirement is critical in determining the spacecraft design used to achieve the requirement. Figure 4-16 (top), presented in [2], shows a conceptual spectrum of possible damping approaches. At low frequencies, the spacecraft AOCS can be used to correct pointing errors. Above a few hertz, the actuation capabilities of the AOCS no longer have sufficient bandwidth to correct attitude errors; in this range, passive techniques such as vibration isolators must instead be used to ensure that disturbances do not affect the instrument. Different disturbance sources can affect the attitude of a spacecraft; they are represented in Figure 4-16 (bottom). Low-frequency disturbances are generally external to the spacecraft and come from sources such as solar pressure or atmospheric drag. During deployment, the torques induced by the robotic arm motion also have to be considered in this range. High-frequency disturbances come from internal sources, such as the reaction wheels, the thrusters or the payload cooling system.

Each disturbance source excites different structural modes of the spacecraft, as shown in Figure 4-16 (middle). At the far left of the spectrum (steady state) are the rigid-body modes of the spacecraft, while on the right are the flexible-body dynamics. These modes will progressively change during the tile deployment:
- displacement of the center of mass and variation of the inertia of the whole system,
- a progressive decrease of the flexible structural modes of the primary mirror.

Figure 4-16: Conceptual frequency spectra of disturbance sources (bottom), spacecraft structural modes (middle) and strategies for damping these modes using either active or passive means (top) [2]

An effective AOCS should therefore involve two specific controllers, for the deployment and the observation phases. The critical aspects in the first stage are to manage simultaneously the varying dynamic properties of the system induced by the tile deployment and the disturbance torque induced by the robotic arm movements. The AOCS controllers may thus include a robust estimator of the Robotic Arm System states and adaptive gains to fit the current deployment state of the primary mirror. During the observation phase, the central issue will consist of ensuring an enhanced pointing accuracy despite the low-frequency flexible modes of the assembled structure.

Attitude knowledge is usually provided by Star Tracker Assemblies (STAs) and gyros. Tight pointing stability requirements at low frequencies may cause a mission to require more accurate attitude knowledge than can be achieved using a conventional STA. The mission can then use dedicated sensors on the payload to determine the pointing error relative to a celestial target; this is known as payload assist. This functionality gives direct knowledge of the telescope boresight error; by providing higher-frequency attitude error information, payload assist increases the bandwidth of the AOCS. Actuation for attitude control is usually achieved using Reaction Wheel Assemblies (RWAs). The maximum torque and angular momentum that the wheels can provide are limited [3]; these saturations constrain the spacecraft maneuvers and may require a specific controller [4]. Finally, an additional challenge is to ensure that the real-time implementation of the designed AOCS controllers satisfies the mission requirements and the stringent on-board computational constraints. Delays, limited sensor bandwidth and sampling times strongly affect the AOCS performance.


In this project, we aim to develop a tool chain going from the AOCS requirements to the controller implementation. To fulfil this objective we need:
 representative models of the system, describing the uncertainties on the system dynamics and on the acquisition chain,
 a robust control design framework taking into account the system requirements and the system uncertainties,
 efficient auto-coding tools to easily iterate between the design process and the hardware integration.

4.5.3 Modelling tools

In this project, two types of models will be used: accurate non-linear models as references, and uncertain/varying linear models for control design.

4.5.4 Non-linear model for simulation purposes

The reference model of the full system requires a simulation core engine able to accurately simulate:
- the disturbances induced by the non-linear dynamics of the robotic arm and the contact forces involved at the different connection points,
- the flexible behavior of the structural dynamics and its evolution induced by the progressive connection of the mirror tiles.

4.5.5 SDT model for control purposes

The main idea of this task is to build uncertain/varying linear models of the satellite dynamics in the form of Linear Fractional Transformations (LFT) for robust control synthesis and analysis purposes.

The modelling procedure must be sufficiently generic to take into account torques induced by the robotic arm motions and different kinds of appendages such as flexible solar arrays and the primary mirror of the telescope.

Constraints of genericity, modularity and ease of obtaining parametrized models lead to a multi-body modelling approach. Developments have thus been carried out to build a parametric linear model of a satellite composed of a rigid hub, rigid appendages and/or flexible appendages; they have already proved their efficiency in many space studies aiming at the synthesis of robust attitude control laws. The key idea, detailed in [17-20], is to compute, for each substructure, its inverse dynamic model. This transfer matrix has:
• an input vector composed of the external forces and torques (wrench) applied on the substructure,
• an output vector composed of the linear and angular accelerations (derivative of the twist vector) at a point of the substructure.
A cantilevered connection of a (flexible or not) appendage on a satellite hub results in a simple feedback, since the appendage is subjected to a force opposite to that which the hub undergoes. It is also at this connection that reference points and reference frames can be changed, or that different kinds of connection (pivot, ...) can be considered (see Figure 4-17). All couplings are then taken into account.

Figure 4-17 Cantilevered connection of (rigid or flexible) appendages on a rigid hub

Such an approach allows one to split the geometric and dynamic parameters of each substructure and each link into specific blocks. Moreover, as each substructure is modelled separately, the physical parameters are repeated minimally, which is another advantage of the approach: it helps to perform sensitivity analyses optimally. Parametric uncertainties can indeed be taken into account: the modelling procedure then directly gives a minimum-order uncertain model (i.e. a Linear Fractional Representation with an uncertain block of minimum size) compatible with the Matlab Robust Control Toolbox. An existing toolbox, the Satellite Dynamics Toolbox (SDT), is available [21]: it is a MATLAB package containing basic functions to compute the (direct and indirect) linear dynamic models of spacecraft.
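As a toy numeric illustration of this feedback interconnection (not the SDT itself), the Python sketch below couples one cantilevered appendage mode to a rigid hub and forms the torque-to-attitude transfer; it assumes the python-control package, and all numerical values are placeholders.

import control as ct

J_hub = 80.0                     # hub inertia about the control axis [kg m^2]
L = 6.0                          # modal participation factor of the mode
w0 = 2 * 3.14159 * 0.4           # cantilevered mode at 0.4 Hz [rad/s]
zeta = 0.005                     # structural damping ratio

s = ct.tf('s')
Dm = s**2 + 2 * zeta * w0 * s + w0**2    # appendage modal dynamics
# appendage in feedback on the hub: eta = -L s^2 theta / Dm and
# J theta'' + L eta'' = T give the coupled transfer theta(s)/T(s):
G = Dm / (s**2 * (J_hub * Dm - L**2 * s**2))
print(G)                         # double integrator plus the shifted flexible mode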

These models are sufficient to describe all the configurations of the satellite with the primary mirror partially or completely assembled and the robotic arm linked only to the main hub. In these cases we need:
• the definition of the different frames: S/C reference frame, appendage reference frames, robotic arm reference frame;
• the characteristics of the main body (hub):
o its total mass,
o its centre of mass in the S/C reference frame,
o its moments of inertia at the S/C centre of mass in the S/C reference frame;
• the characteristics of the appendages (solar array, the part of the primary mirror already assembled):
o their mass,
o the position of their centre of mass in their reference frame,
o their moments of inertia at their centre of mass in their reference frame,
o their connection point to the main body in the S/C reference frame,
o the number of flexible modes, with the matrix of modal participation factors w.r.t. the interface point of the appendage (participations in translations and rotations), the frequencies of the (cantilevered) modes and their damping;
• an estimation of the torques applied by the robotic arm on the main hub.

An extension of these first developments, namely the Two-Input Two-Output Port (TITOP) modelling technique [22-23], must be applied to consider a flexible substructure connected to two other mechanical substructures through two different connection points. The appendage block of Figure 4-17 then becomes the two-port block described in Figure 4-18, which allows the interconnection of the appendage within a chain-like assembly.

Figure 4-18 Block diagram of the TITOP system

As presented in [19], the TITOP concept is clearly adapted to the modelling of flexible multi-component appendages which are potentially actuated or subjected to forces and torques.

This TITOP concept can provide the model of the spacecraft when the robotic arm is linked to both the main body and the primary mirror. New data are then needed: the estimation of the torques applied by the robotic arm on the primary mirror, where a new tile is being assembled.

4.5.6 Controller design strategies and tools

As clarified above, the AOCS design problem can be divided into two parts: the deployment phase and the observation phase.

4.5.7 Control design for the deployment phase

This phase is certainly the most challenging as far as the control design problem is concerned. During this time period, as already observed in the early work of [5] for example, it is important to stabilize the attitude with a reasonable accuracy to keep the communication link despite the torque perturbations generated by the robotic arm. What is even more challenging in the current application is that the robotic arm is used to build the primary mirror from tiles that are progressively deployed from the main body. As a result, the inertia of the total satellite varies slowly but significantly during this deployment phase. Many different strategies have been developed in the literature over the past thirty years to handle attitude control problems in the presence of time-varying inertia. Among possible approaches, one can mention adaptive control techniques for linear time-varying systems, initially developed in [6] and recently revisited in [7] and [8]. The central difficulty with adaptive control techniques is to obtain a guaranteed performance level; this is why alternative LPV-based methods are often preferred [9]. Moreover, when the variations of the inertia matrix remain sufficiently small, the latter can be viewed as time-varying uncertainties and robust control techniques become applicable. In [10], for example, such a method, based on a smoothed sliding-mode control strategy, is applied to provide a robust attitude controller using reaction wheels. Sliding-mode control techniques are indeed very interesting since they exhibit strong robustness properties and are well suited to nonlinear systems; however, they tend to generate aggressive control inputs which often cannot be realized by limited reaction wheel systems. In this respect, a better compromise is generally reached by robust control techniques mixing the LPV concept and the Hinfinity design framework [11].

Based on results presented in [12], our approach to solve the problem will rely on a multi-model Hinfinity design framework from which a robust and possibly parameter-varying attitude controller will be obtained. As stated in the AOCS requirements, the weighting functions in the Hinfinity design phase will be chosen adequately in order to enforce perturbation rejection properties in specific frequency ranges. In addition, the perturbations that cannot be rejected without a significant performance loss (typically those generated by the onboard manipulator) will be estimated online by a torque observer, as shown in Figure 4-19. The latter can be designed using either the Hinfinity [13] or the LPV framework [14].

Figure 4-19 AOCS control structure
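To fix ideas, a minimal mixed-sensitivity sketch of such a weighted Hinfinity design is given below, using the python-control package (which requires slycot for the synthesis). The rigid single-axis plant and the weights are purely illustrative, not the PULSAR design: W1 shapes the sensitivity for low-frequency perturbation rejection, W2 penalizes the control torque to respect the wheel limits.

import control as ct

J = 100.0                          # illustrative spacecraft inertia [kg m^2]
s = ct.tf('s')
G = 1 / (J * s**2 + 0.5 * s)       # torque -> attitude, slightly damped to
                                   # keep the example numerically well posed

W1 = (s / 2 + 0.5) / (s + 1e-4)    # sensitivity weight: low-freq rejection
W2 = ct.tf([0.05], [1])            # control weight: wheel torque penalty

P = ct.augw(G, W1, W2)             # weighted generalized plant
K, CL, gam, rcond = ct.hinfsyn(P, 1, 1)
print("achieved Hinfinity norm:", gam)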

4.5.8 Control design for the observation phase

During the observation phase, the inertia matrix stops varying significantly. However, the desired pointing accuracy becomes more stringent, and the fully deployed structure tends to generate poorly damped and rather low-frequency torque perturbations. The main control design issue will then consist in an enhanced tuning of the weighting functions, to optimize the compromise between a reasonable pointing accuracy and disturbance rejection. The general framework used during this phase will be essentially the same as above. Moreover, a similar structure will be used to facilitate the control switching from the deployment phase to the observation phase. Control design and analysis will be performed with the help of the tools implemented in the SMAC toolbox [15,16].

4.5.9 AOCS Implementation

According to the control design strategies and tools exposed in the previous paragraphs, a continuous-time AOCS control structure will be designed. For implementation purposes, both the controller and the observer have to be discretized. It is well known that this step, seen as an approximation problem (approximation of a continuous-time representation by a discrete-time one), can degrade the performance of the closed loop. For the operating sampling period, the discretization of the AOCS control will be addressed and validated by Matlab/Simulink simulations, leading to a relevant Matlab/Simulink representation of the AOCS structure. This Simulink model will then be used as input to the TASTE tool-chain in the ERGO framework for embedded code generation.

Figure 4-20 : ERGO process [24]

TASTE includes backends giving Simulink models (among others) access to ASN.1 types [25] and allows the integration of a Simulink scheme as part of a TASTE system. In this case, a blank Simulink skeleton can be automatically generated with respect to the inputs/outputs and data types of the TASTE component. The AOCS structure will be included in the skeleton, and C code will be generated using the Matlab Embedded Coder toolbox.

4.5.10 References

[1] Mazzini, L. (2015). Flexible Spacecraft Dynamics, Control and Guidance. Springer, Rome.

[2] Blackmore, L., Murray, E., Scharf, D. P., Aung, M., Bayard, D., Brugarolas, P. & Kang, B. (2011). Instrument pointing capabilities: past, present, and future.

[3] Markley, F. L., Reynolds, R. G., Liu, F. X., & Lebsock, K. L. (2010). Maximum torque and momentum envelopes for reaction wheel arrays. Journal of Guidance, Control, and Dynamics, 33(5), 1606-1614.


[4] Burlion, L., Biannic, J. M., & Ahmed-Ali, T. (2017). Attitude tracking control of a flexible spacecraft under angular velocity constraints. International Journal of Control, 1-17.

[5] M. Oda, "Coordinated control of spacecraft attitude and its manipulator," in Proceedings of the 1996 IEEE International Conference on Robotics and Automation, Minneapolis, Minnesota, April 1996.

[6] R. H. Middleton and G. C. Goodwin, "Adaptive Control of Time-Varying Linear Systems," IEEE Transactions on Automatic Control, vol. 33, no. 2, February 1988.

[7] D. Thakur and M. R. Akella, "Adaptive Attitude-Tracking Control of Spacecraft with Uncertain Time-Varying Inertia Parameters," Journal of Guidance, Control, and Dynamics, vol. 38, no. 1, January 2015.

[8] K. Chen and A. Astolfi, "Adaptive Control of Linear Systems with Time-Varying Parameters," 2018 Annual American Control Conference (ACC), June 27-29, 2018, Wisconsin Center, Milwaukee, USA.

[9] R. Jin, X. Chen, Y. Geng and Z. Hou. LPV gain-scheduled attitude control for satellite with time-varying inertia. Aerospace Science and Technology, Vol. 80. pp 424-432. 2018.

[10] L. Shi, J. Katupitiya and N. Kinkaid, "A Robust Attitude Controller for a Spacecraft Equipped with a Robotic Manipulator," 2016 American Control Conference (ACC), Boston, MA, USA.

[11] D. Navarro-Tapia, A. Marcos, S. Bennani and C. Roux, "Structured H-infinity and Linear Parameter Varying Control Design for the Launch Vehicle," 7th European Conference for Aeronautics and Aerospace Sciences (EUCASS).

[12] J-M. Biannic, C. Roos and J. Lesprier, "Nonlinear structured Hinfinity controllers for parameter-dependent uncertain systems with application to aircraft landing," AerospaceLab Journal #13, special issue on design & validation of aerospace control systems, http://www.aerospacelab-journal.org, November 2017.

[13] A. Bourdelle, J-M. Biannic, H. Evain, C. Pittet, S. Moreno and L. Burlion, "Propellant Sloshing Torque H-infinity-based Observer Design for Enhanced Attitude Control," submitted to the IFAC 21st Symposium on Automatic Control in Aerospace (ACA), 2019.

[14] J-M. Biannic, A. Bourdelle, L. Burlion, H. Evain and S. Moreno, "On robust LPV-based observation of fuel slosh dynamics for attitude control design," submitted to the IFAC 3rd Workshop on Linear Parameter Varying Systems (LPVS), 2019.

[15] J-M. Biannic, F. Demourant, G. Ferreres, G. Hardier and C. Roos, "The SMAC Toolbox: a collection of libraries for Systems Modeling, Analysis and Control," June 2016, available online at http://w3.onera.fr/smac/.

[16] C. Roos, "Systems Modeling, Analysis and Control (SMAC) toolbox: an insight into the robustness analysis library," in Proceedings of the IEEE Multiconference on Systems and Control, Hyderabad, India, August 2013, pp. 176-181, available with the SMAC toolbox at http://w3.onera.fr/smac/smart.

[17] D. Alazard, Ch. Cumer and K. Tantawi, "Linear dynamic modeling of spacecraft with various flexible appendages and on-board angular momentums," 7th International ESA Conference on Guidance, Navigation & Control Systems, June 2008, Tralee, Ireland.


[18] N. Guy, D. Alazard, Ch. Cumer and C. Charbonnel, "Dynamic modeling and analysis of spacecraft with variable tilt of flexible appendages," Journal of Dynamic Systems, Measurement and Control, vol. 136 (2), 2014.

[19] D. Alazard, Th. Loquen, H. De Plinval and Ch. Cumer, "Avionics/Control co-design for large flexible space structures," AIAA Guidance, Navigation, and Control Conference, Boston, United States, 2013.

[20] F. Ankersen, L. Massotti, M. Arcioni, P. Silvestrin and M. Casasco, "Modern Attitude Control and Co-Design for the BIOMASS Satellite," presentation slides from the IFAC Symposium on Automatic Control in Aerospace, Würzburg.

[21] D. Alazard and Ch. Cumer, SDT_V1.3 Satellite Dynamics Toolbox, https://personnel.isae-supaero.fr/daniel-alazard/matlab-packages/satellite-dynamics-toolbox.html

[22] D. Alazard, J.A. Perez Gonzalez, Th. Loquen and Ch. Cumer, "Two-input two-output port model for mechanical systems," 53rd AIAA Aerospace Sciences Meeting, Kissimmee, United States, 2015.

[23] J.A. Perez Gonzalez, D. Alazard, Th. Loquen, Ch. Pittet and Ch. Cumer, "Flexible Multibody System Linear Modeling for Control Using Component Modes Synthesis and Double-Port Approach," Journal of Dynamic Systems, Measurement, and Control, vol. 138 (12), 2016.

[24] European Robotic Goal-Oriented Autonomous Controller (ERGO) user manual, https://www.h2020-ergo.eu/wp-content/uploads/ERGO_D4_3_User_Manual_V2.0.pdf

[25] M. Perrotin, E. Conquet, J. Delange, A. Schiele and T. Tsiodras, "TASTE: A Real-Time Software Engineering Tool-Chain Overview, Status, and Future," in SDL 2011: Integrating System and Software Modeling, Springer Berlin Heidelberg, 2012.

[26] L. Flückiger, K. Browne, B. Coltin, J. Fusco, T. Morse and A. Symington, "Astrobee Robot Software: Enabling Mobile Autonomy on the ISS."

[27] N. Koenig and A. Howard, "Design and use paradigms for Gazebo, an open-source multi-robot simulator," in 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), vol. 3, pp. 2149-2154, IEEE.

4.6 Simulation Environments

4.6.1 Unity3D

The possibilities of Unity3D are substantial, thanks in part to the large community behind the engine. Plugins for orbital simulation already exist on the platform. A great freedom of interaction is possible, which allows the desired simulator to be built easily, although some limitations can be pointed out:
- The language used by Unity3D is C#, which does not interface well with C++, especially on Linux.
- Using Unity3D on Linux is risky: the only version available on this platform is a so-called "experimental" version. We therefore expose ourselves to risks of instability, unless the simulator is built under Windows, which will not necessarily be the most practical solution for this project.


3D engine: Unity
Physics engine: Unity engine / Bullet
3D modeler: internal
Main language: C#
Model formats supported: ???
Middleware support: ???
Supported sensors: odometry, IMU, collision, GPS, monocular cameras, stereo cameras, depth cameras, 2D laser scanners, 3D laser scanners
License: ???
Inverse kinematics: yes
Soft bodies: yes, with Unity Physics or Bullet
Working with Linux?: experimental version
Programming language: C#
Interface: possibility of C++ interfacing, but it is not optimized for that; can work with files, but C# does not work with ROS
Existing plugins?: many orbital plugins
Business model: free if the revenue does not exceed $100k per year; otherwise ~$25 per month for additional advantages
Visual and physics possibilities (orbital?): possibility to simulate orbital physics
Accessibility: Unity is very friendly to use; many tutorials

4.6.2 Unreal Engine 4

Unreal Engine 4 is an equivalent of Unity3D, with the difference that the language used is C++.

This environment therefore has the same qualities as Unity3D, i.e. great possibilities, an active community (and therefore already existing plugins to work on orbital scenarios) and a more interesting optimization than other platforms. One of the most interesting points about UE4 is its realistic rendering, which can help to obtain representative stereo results, for example. However, there is a problem with Linux: as for Unity3D, the version proposed by UE4 for Linux does not seem to be stable. No executable is provided; it is necessary to retrieve the sources from the Epic Games GitHub and compile the software oneself. An entire afternoon (at least) is therefore needed to obtain an operational environment.


If the choice is made for this software, preliminary tests will also have to be carried out. Indeed, UE4 seems to have problems with the Vulkan driver when used on Linux: launched without any option, the software makes the computer crash, so the "-opengl" flag has to be added to override the Vulkan driver. There are possibilities to do orbital simulation, but also to reuse data simulated in another environment (Simulink for example): since the code is in C++, input files can be read and the simulator used as a simple visual interface. Another problem is that every sensor would have to be rewritten, as UE4 is not made for engineering use. The Flex plugin can simulate soft bodies but, for the same reason as the missing sensors, we can assume that it is not accurate.

3D engine: Unreal Engine 4
Physics engine: PhysX 3.3 (possibility to use PhysX 4, which is more sophisticated)
3D modeler: internal
Main language: C++
Model formats supported: FBX or OBJ files can be imported into the scene
Middleware support: ??? (since the language is C++, it should work with at least ROS)
Supported sensors: none; they have to be integrated
License: ???
Inverse kinematics: yes
Soft bodies: yes, with NVIDIA Flex (possibly not accurate)
Working with Linux?: experimental version; problem with the Vulkan driver
Programming language: C++
Interface: possible to read files to get positions, etc.
Existing plugins?: yes
Business model: free under $3,000 of revenue, then 5% of revenue
Visual and physics possibilities (orbital?): possibility to simulate orbital physics
Accessibility: same as Unity3D; many tutorials

4.6.3 Gazebo

Gazebo is a balanced candidate to realize the simulator.


Gazebo [2] is a general-purpose robotics simulator developed and maintained by the Open Source Robotics Foundation (OSRF), and it partially meets these requirements. It uses OGRE for the visualization rendering and offers the option of interfacing with four physics engines (ODE, Bullet, Simbody and DART), which can simulate the relevant disturbances (ODE for contact forces and DART for the manipulator). Gazebo also has bindings to ROS, making the extension of the simulation to the ROS middleware, and thus to ESROCOS as presented in section 2.2.1, quite straightforward. The platform allows the development of custom plugins for new features and modules for sensors, environment and dynamics using its C++ API, which allows a high degree of modularization of the simulation. The interface between the different plugins is implemented in a socket-based fashion using Google Protocol Buffers, allowing the server to run on a different machine than the GUI. Although it offers no specific applications for space robotics, implementations of plugins extending Gazebo's functionalities to this purpose have been published in [1]. Furthermore, the Gazebo project has a clear roadmap and release schedule, as well as a clear code and integration standard for contributions delivered by developers external to OSRF. This ensures that Gazebo is continuously improved, with new features and bug fixes officially released every six months. The main drawback is the lack of official plugins for the computation of distributed flexible behavior. This feature should be added by implementing model plugins, using the Gazebo API, that simulate additional forces acting on the rigid degrees of freedom of the system.
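A language-agnostic sketch of the force law such a model plugin would apply at each simulation step is given below (written in Python for consistency with the other examples; an actual Gazebo model plugin would implement the same update in C++ against the Gazebo API). The modal parameters are placeholders.

class FlexibleMode:
    """One cantilevered appendage mode fed back on a rigid degree of freedom."""
    def __init__(self, w0, zeta, L, dt):
        self.w0, self.zeta, self.L, self.dt = w0, zeta, L, dt
        self.eta = 0.0                 # modal coordinate
        self.deta = 0.0                # modal velocity

    def step(self, rigid_accel):
        # modal oscillator driven by the acceleration of its attachment point
        ddeta = (-2 * self.zeta * self.w0 * self.deta
                 - self.w0**2 * self.eta - self.L * rigid_accel)
        self.deta += ddeta * self.dt
        self.eta += self.deta * self.dt
        return -self.L * ddeta         # reaction force applied on the rigid DOF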

For modelling 3D objects, it is possible to import several formats (OBJ, etc.); one can therefore prepare the 3D models in Blender, for example. However, the rendering is not realistic, which can be a problem for visual processing as good shadows and lighting cannot be obtained.

Many sensors are easily implemented in the simulation. Finally, if we choose to work with Gazebo, it still has to be checked whether open/closed kinematic loops can be handled with ODE or Bullet.

3D engine: OGRE
Physics engine: ODE (default), Bullet, Simbody, DART
3D modeler: internal (but Blender or another 3D modeler should be preferred)
Main language: C++
Model formats supported: SDF, URDF, Collada
Middleware support: ROS, Player, sockets (through protobuf messages)
Supported sensors: odometry, IMU, collision, GPS, monocular cameras, stereo cameras, depth cameras, 2D laser scanners, 3D laser scanners
License: Apache 2.0
Inverse kinematics: yes
Soft bodies: no; Bullet can do it, but that is not compatible with Gazebo. Local flexible behavior can however be simulated through custom plugins (see above).
Working with Linux?: yes, stable release version
Programming language: C++
Interface: possible to link the simulator with ROS, read files, ...
Existing plugins?: no
Business model: free (open source)
Visual and physics possibilities (orbital?): hard (or impossible) to develop an orbital physics simulator
Accessibility: harder to access than Unity3D and UE4, but many tutorials exist on the net

4.6.3.1 DART

DART (Dynamic Animation and Robotics Toolkit) is a collaborative, cross-platform, open source library created by the Graphics Lab and Humanoid Robotics Lab. The library provides data structures and algorithms for kinematic and dynamic applications in robotics and computer animation. DART gives full access to internal kinematic and dynamic quantities, such as the mass matrix, Coriolis and centrifugal forces, transformation matrices and their derivatives. DART also provides an efficient computation of Jacobian matrices for arbitrary body points and coordinate frames. Contacts and collisions are handled using an implicit time-stepping, velocity-based LCP (linear complementarity problem) to guarantee non-penetration, directional friction, and approximated Coulomb friction cone conditions.

Its main features are :  Open source under BSD license written in C++.  Fully integrated with Gazebo.  Support models described in URDF and SDF formats.  Support comprehensive recording of events in simulation history.  Provide extensible API to interface with various optimization problems such as nonlinear programming and multi-objective optimization.  Support multiple collision detectors: FCL, Bullet, and ODE.  Support numerous types of Joint.  Support numerous primitive and arbitrary body shapes with customizable inertial and material properties.  Support flexible skeleton modeling: cloning and reconfiguring skeletons or subsections of a skeleton.  Provide comprehensive access to kinematic states (e.g. transformation, position, velocity, or acceleration) of arbitrary entity and coordinate frames  Support flexible conversion of coordinate frames.  A fully modular inverses kinematics framework.  Support both rigid and soft body nodes.
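As an illustration of this API, the following minimal sketch loads a model and queries the mass matrix, the Coriolis terms and a body Jacobian. It assumes DART 6.x; the URDF file "arm.urdf" and the body name "ee_link" are hypothetical placeholders.

#include <iostream>
#include <dart/dart.hpp>
#include <dart/utils/urdf/urdf.hpp>

int main()
{
  // Load a skeleton from a URDF description (hypothetical file name).
  dart::utils::DartLoader loader;
  auto skel = loader.parseSkeleton("arm.urdf");

  // Set an arbitrary joint configuration.
  skel->setPositions(Eigen::VectorXd::Zero(skel->getNumDofs()));

  // Direct access to internal dynamic quantities.
  Eigen::MatrixXd M = skel->getMassMatrix();      // mass matrix
  Eigen::VectorXd C = skel->getCoriolisForces();  // Coriolis/centrifugal terms

  // Jacobian of an arbitrary body frame (hypothetical body name).
  auto* ee = skel->getBodyNode("ee_link");
  dart::math::Jacobian J = skel->getJacobian(ee);

  std::cout << "DoFs: " << skel->getNumDofs()
            << ", M is " << M.rows() << "x" << M.cols() << std::endl;
  return 0;
}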


4.6.3.2 ODE

ODE (Open Dynamics Engine) is an open source, high-performance library for simulating rigid body dynamics. It is fully featured, stable, mature and platform-independent, with an easy-to-use C/C++ API. It offers advanced joint types and integrated collision detection with friction. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures, and is currently used in many computer games, 3D authoring tools and simulation tools.

Its main features are:
- Rigid bodies with arbitrary mass distribution.
- Joint types: ball-and-socket, hinge, slider (prismatic), hinge-2, fixed, angular motor, linear motor, universal.
- Collision primitives: sphere, box, cylinder, capsule, plane, ray, triangular mesh and convex.
- Collision spaces: quad tree, hash space and simple.
- Simulation method: the equations of motion are derived from a velocity-based multiplier model due to Trinkle/Stewart and Anitescu/Potra.
- A first-order integrator is used; it is fast, but not yet accurate enough for quantitative engineering. Higher-order integrators may come later.
- Choice of time-stepping methods: either the standard "big matrix" method or the newer iterative QuickStep method can be used.
- Contact and friction model: based on the Dantzig LCP solver described by Baraff, although ODE implements a faster approximation to the Coulomb friction model.
- Native C interface (even though ODE is mostly written in C++).
- C++ interface built on top of the C one.
- Many unit tests, with more being written all the time.
- Platform-specific optimizations.
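The following minimal sketch shows the typical usage pattern of the C API (all numerical values are illustrative): create a world, attach a body with its mass properties, then advance the simulation with the iterative QuickStep method.

#include <ode/ode.h>
#include <cstdio>

int main()
{
  dInitODE();
  dWorldID world = dWorldCreate();
  dWorldSetGravity(world, 0.0, 0.0, -9.81);

  // One rigid body with a spherical mass distribution.
  dBodyID body = dBodyCreate(world);
  dMass m;
  dMassSetSphereTotal(&m, 1.0, 0.1);   // total mass 1 kg, radius 0.1 m
  dBodySetMass(body, &m);
  dBodySetPosition(body, 0.0, 0.0, 2.0);

  // Advance 1 s of simulated time with the iterative QuickStep solver.
  for (int i = 0; i < 100; ++i)
    dWorldQuickStep(world, 0.01);

  const dReal* pos = dBodyGetPosition(body);
  std::printf("z after 1 s of free fall: %f\n", pos[2]);

  dWorldDestroy(world);
  dCloseODE();
  return 0;
}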

4.6.3.3 Bullet

Bullet Physics is a professional open source collision detection, rigid body and soft body dynamics library written in portable C++. PyBullet also exists for Python environments. The library is primarily designed for use in games, visual effects and robotics simulation, and is free for commercial use under the zlib license.

Its main features are:
- Discrete and continuous collision detection, including ray and convex sweep tests.
- Maximal-coordinate six-degree-of-freedom rigid bodies connected by constraints, as well as generalized-coordinate multi-bodies connected by mobilizers using the articulated body algorithm.
- Fast and stable rigid body dynamics constraint solver.
- Soft body dynamics for cloth, rope and deformable volumes with two-way interaction with rigid bodies, including constraint support.
- Open source C++ code under the zlib license, free for any commercial use.
- Maya Dynamica plugin, Blender integration, etc.
- Quickstart guide, Doxygen documentation, wiki, etc.
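A minimal sketch of the standard Bullet setup (collision configuration, dispatcher, broadphase, constraint solver and dynamics world) is given below; all numerical values are illustrative.

#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main()
{
  // Standard world setup.
  btDefaultCollisionConfiguration config;
  btCollisionDispatcher dispatcher(&config);
  btDbvtBroadphase broadphase;
  btSequentialImpulseConstraintSolver solver;
  btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
  world.setGravity(btVector3(0, 0, -9.81));

  // One dynamic sphere (mass 1 kg, radius 0.1 m).
  btSphereShape shape(0.1);
  btVector3 inertia(0, 0, 0);
  shape.calculateLocalInertia(1.0, inertia);
  btDefaultMotionState motion(
      btTransform(btQuaternion::getIdentity(), btVector3(0, 0, 2)));
  btRigidBody::btRigidBodyConstructionInfo info(1.0, &motion, &shape, inertia);
  btRigidBody body(info);
  world.addRigidBody(&body);

  // Advance 1 s of simulated time.
  for (int i = 0; i < 100; ++i)
    world.stepSimulation(1.0f / 100.0f, 1);

  btTransform t;
  body.getMotionState()->getWorldTransform(t);
  std::printf("z after 1 s: %f\n", t.getOrigin().z());

  world.removeRigidBody(&body);
  return 0;
}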


4.6.4 Conclusion

The objective of this section was to present a few simulation environments which could serve the PULSAR project. The MARS robotic simulator is not of interest because its focus is on planetary exploration robots. Unity3D initially looked like a good candidate, but the constraint of C# turned out to be too severe given the possibilities offered elsewhere; the environment nevertheless remains very interesting, the community has done a very good job, there are many useful plugins, and Unity3D is compatible with physics engines that can yield a precise simulator. Unreal Engine 4, contrary to what we expected, can provide decent results in areas other than video gaming. Unfortunately, little work has been done there in the field of simulation: no sensor is integrated out of the box (and there does not seem to be any community plugin for this), and the available physics engines (PhysX, FleX) do not provide the precision necessary for accurate simulation work. On the other hand, the graphical realism is very interesting for image processing work. UE4 is not yet a mature environment for simulation, but it remains worth watching, as NVIDIA (the developers of PhysX) seems intent on evolving its tools towards industrial use. That leaves Gazebo, the most widespread environment in the simulation field. Its possibilities are great: ROS compatibility, many sensors already implemented, powerful physics engines, and a large and active community. Gazebo therefore seems to be the best environment in which to build the PULSAR simulator. Nevertheless, two points remain to be explored:
- Work on soft bodies, which is complex in Gazebo. Work has been done to incorporate flexibility in Gazebo, but it is not yet conclusive given our needs.
- The ability to switch between open loop and closed loop with ODE. The transition from one to the other will have to be seamless, and the achievable performance on this point is not yet known.

4.6.5 References

[1] L. Flückiger, K. Browne, B. Coltin, J. Fusco, T. Morse and A. Symington, "Astrobee Robot Software: Enabling Mobile Autonomy on the ISS".
[2] N. Koenig and A. Howard, "Design and use paradigms for Gazebo, an open-source multi-robot simulator", in 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, pp. 2149-2154. IEEE.


5 Conclusion

All the software and hardware technologies reviewed in this document have been highlighted because they were identified as relevant to address some of the technical requirements of the PULSAR project. This review will be useful during the preliminary design phase, when some of these technologies will be used in unitary prototypes in order to confirm their suitability within the scope of the project.


-- END OF DOCUMENT --
