University of Colorado Department of Aerospace Engineering Sciences Senior Projects - ASEN 4018 Visual In-Situ Sensing for Inertial Orbits of NanoSats (VISION) Conceptual Design Document

Monday 30th September, 2019

1. Information

1.1. Project Customers

Name: Dr. Penina Axelrad
Email: [email protected]
Phone: (303) 492-6872

1.2. Group Members

Name: Max Audick, Email: [email protected], Phone: 858-603-2275
Name: Cameron Baldwin, Email: [email protected], Phone: 304-282-2037
Name: Adam Boylston, Email: [email protected], Phone: 303-720-9755
Name: Zhuoying Chen, Email: [email protected], Phone: 303-264-7129
Name: Tanner Glenn, Email: [email protected], Phone: 720-219-4960
Name: Ben Hagenau, Email: [email protected], Phone: 650-701-4092
Name: Adrian Perez, Email: [email protected], Phone: 720-250-7538
Name: Andrew Pfefer, Email: [email protected], Phone: 303-519-8777
Name: Ian Thomas, Email: [email protected], Phone: 843-670-1593
Name: Bao Tran, Email: [email protected], Phone: 720-288-5845
Name: Theodore Trozinski, Email: [email protected], Phone: 973-349-8821
Name: Mathew van den Heever, Email: [email protected], Phone: 970-402-0332

Contents

1 Information 1
1.1 Project Customers 1
1.2 Group Members 1

2 Project Description 4
2.1 Purpose 4
2.2 Objectives / Levels of Success 4
2.3 Concept of Operations (CONOPS) 5
2.4 Functional Block Diagram (FBD) 6
2.5 Functional Requirements 7

3 Design Requirements 7

4 Key Design Options Considered 11
4.1 Sensor Suite 11
4.1.1 Wide Angle Visual and ToF Camera 11
4.1.2 Telephoto Visual and ToF Camera 12
4.1.3 Wide Angle Visual, Telephoto Visual and ToF Camera 13
4.1.4 Stereoscopic Cameras 13
4.1.5 Single Visual Camera 14
4.1.6 Staggered Dual Visual 15
4.2 Structural Interface Method 15
4.2.1 Internal Mounting 16
4.2.2 External Mounting 16
4.2.3 Launch Tube Replacement 17
4.2.4 Modular Internal/External Mounting 18
4.3 Embedded Systems 18
4.3.1 Nvidia Jetson Nano 19
4.3.2 Intel NUC8i7HVK 20
4.3.3 Intel NUC8i7HNK 20
4.3.4 Intel NUC8i5BEH 21
4.3.5 Xilinx Zynq-7000 SoC ZC702 22
4.3.6 Xilinx Zynq UltraScale+ MPSoC 22
4.4 Flight Programming Language 23
4.4.1 MATLAB 23
4.4.2 Python 25
4.4.3 C++ 26
4.5 State Estimation Method 26
4.5.1 Weighted Least Squares 27
4.5.2 Recursive Least Squares 27
4.5.3 Classical Kalman Filter 28
4.5.4 Extended Kalman Filter 28
4.5.5 H∞ Filter 29
4.5.6 Dynamics Model Considerations 29

5 Trade Study Process and Results 31
5.1 Sensor Suite Trade Study 32
5.1.1 Trade Study Results 33
5.2 Structural Interface Method Trade Study 34
5.2.1 Metric Weight Determination 34
5.2.2 Trade Study Results 35
5.3 Embedded Systems Trade Study 35
5.3.1 Measure of Weighting 35
5.3.2 Trade Study Background 36

09/30/19 2 of 46 CDD

University of Colorado Boulder
5.3.3 Trade Study Results 37
5.4 Flight Software Programming Language Trade Study 37
5.4.1 Metric and Weight Determination 37
5.4.2 Trade Study Results 38
5.5 State Estimation Method 39
5.5.1 Robustness 39
5.5.2 Optimizability 40
5.5.3 Learnability 40
5.5.4 Computational Expense 41
5.5.5 Weight Assignments 41
5.5.6 Trade Study Results 41

6 Selection of Baseline Design 42
6.1 Sensor Suite Baseline Design 42
6.2 Structural Interfacing Method Design 42
6.3 Embedded System Baseline Design 42
6.4 Flight Software Programming Language 42
6.5 State Estimation Baseline Design 43
6.6 Summary 43

7 Appendix 44

8 Appendix A Trade Study Theory 46
8.1 General Relative Orbit Equations of Motion 46
8.2 Linearized General Relative Orbit Equations of Motion 46
8.3 Hill-Clohessy-Wiltshire Equations 46
8.4 Tschauner-Hempel Equations 46


2. Project Description

2.1. Purpose
The motivation behind the VISION (Visual In-Situ Sensing for Inertial Orbits of NanoSats) project is to improve Space Situational Awareness (SSA) by developing a modular in-situ CubeSat tracking system. Using optical sensors, VISION observes CubeSats deployed from a variety of launch providers to estimate and deliver preliminary, inertial two-line elements (TLEs). With the ever-increasing number of NanoSats being launched in batches, keeping track of the numerous satellites is difficult, expensive, and inefficient, and has led to lost vehicles [3]. The current method of tracking orbiting satellites is to create a series of tracks for each satellite as it passes over ground-based surveillance systems. This can only be done when a satellite passes through the field of view (FOV) of a surveillance site. Despite the large number of these sites around the globe, this method has proved impractical for predicting and identifying orbits of small satellites deployed in groups. The available resolution of ground-based systems hinders their ability to track closely-clustered objects. Multi-satellite deployments, especially of NanoSats in low Earth orbit (LEO), can take 5-10 days or more to identify and track using ground-based tracking stations. The vast majority of these mass-deployed satellites have no on-board propulsion or maneuvering systems [4], and may remain in these clusters for days. A successful VISION project will supply launch providers with a preliminary tracking system to estimate the states of individual CubeSats immediately following deployment. These initial estimates will later be used to augment ground-based tracking efforts, increasing the probability of successful identification and tracking. VISION will serve as the intermediate platform between the CubeSat customer and verification of mission success. More importantly, a successful project will reduce the number of unidentified objects in Earth orbit.
This will ultimately improve and support space situational awareness, enabling flexible tasking and rapid decision making to avoid potential collisions between space objects. This year's purpose is to advance last academic year's VANTAGE project by developing its proof of concept into a working prototype. The system will be designed to integrate with at least one CubeSat deployment platform, where it will visually monitor up to 6 individual CubeSats after deployment. The system will undergo ground simulation and testing to ensure project success as defined by the levels in the following subsection. VISION's goal is to achieve a NASA Technology Readiness Level (TRL) of 4.

2.2. Objectives / Levels of Success
The following tables present the three levels of success of the VISION project. Each category has increasing levels of success by which the team intends to measure the end results of the project.

Structures/Mechanical

Level 1
• Mission-ready chassis that meets material requirements, verified by tests.
• Chassis can interface with one deployer.
• All components fit within the volumetric and mass constraints of the chassis/mechanical structure.

Level 2
• Chassis interfaces with multiple launch deployers.
• Component testing in relevant environment (launch and mission environment component testing, TRL-5).

Level 3
• System testing in relevant environment (launch and mission environment system testing, TRL-6).

Software/Dynamics

Level 1
• Estimates TLEs of deployed CubeSat-shaped objects using the known deployer state and initial relative position, velocity, and attitude.

Level 2
• Estimates TLEs of deployed CubeSat-shaped objects using simulated optical sensor data and simulated deployer position and velocity, to a position accuracy of 10 centimeters at a distance of 10 meters.
• Assigns identification markers to each individual CubeSat-shaped object.

Level 3
• Estimates TLEs of deployed CubeSat-shaped objects using experimental optical sensor data and experimental deployer position, velocity, and attitude.
• Attaches a coordinate system according to the markers for attitude estimation, for up to 6 CubeSats.
• Follows and tracks the assigned coordinate system on each CubeSat-shaped object.


Electronics/Sensors

Level 1
• Interfaces with the deployer's electrical and telemetry systems.
• On-board sensors provide visual verification of successful deployment with a minimum resolution of 1080x1920 pixels.
• On-board embedded system interfaces with and receives data from the optical sensors for processing.

Level 2
• Complies with the deployer's EMI/EMC emitter regulations.
• Data buffer sized to downlink/relay data to the deployer within 15 minutes.
• Stores proof-of-deployment evidence and processed data on-board for the duration of the data processing and downlink period.

Level 3
• Interfaces with multiple deployers' electrical and telemetry systems.
• On-board electronics are interchangeable and compatible with space-grade counterparts.
• Component and system testing in relevant environment (launch and mission environment system testing, TRL-4/5).

System Testing

Level 1
• Test state estimation using simulated sensor data.

Level 2
• Test state estimation using ground-based physical testing systems.

Level 3
• Integrated simulation and physical testing.

2.3. Concept of Operations (CONOPS)
The Concept of Operations (CONOPS) diagrams in Figures 1 and 2 distinguish the mission objectives from the team's capabilities. To reach mission operation, there must first be a ground architecture in which each component and system is critically defined and its performance evaluated. The team CONOPS differs by showing the structure that VISION considers feasible within the two-semester period given to produce a product.

Figure 1. VISION Mission Concept of Operations

Figure 1 shows the overall mission overview of the whole operation, from launch and deployment through data processing and ground-track. The greyed steps (3, 7 & 8) are not included in the scope of this project, leaving the black steps (1, 2 & 4-6) as the responsibility of VISION. Figure 2 shows the team mission overview, which begins at boot-up as the start of the test and completes when all the data is processed and relayed back to the launch provider. The process is split into two parallel operations: the software mission and the physical mission. The software test system will simulate the deployment of the CubeSats, then use the sensor data from the simulation to calculate the TLEs of the tracked objects. The physical test system will use a physical apparatus to replicate the deployment of the CubeSats. The on-board sensors will then produce data that will be processed using on-board computational hardware, which will calculate the relative position and velocity of the CubeSats.

Figure 2. VISION Team Concept of Operations

2.4. Functional Block Diagram (FBD)
The functional block diagram (FBD) in Fig. 3 displays a systems-level architecture of VISION, describing how the principal elements interact and highlighting elements within the design scope of this project. VISION can be broken into five portions: chassis, adaptable electro-mechanical interface, electrical power system (EPS), sensor suite, and command & data handling (CDH). The adaptable mechanical interface enables VISION to be compatible with different CubeSat deployment systems, allowing VISION to be modified to match the data, electrical, and mechanical interfaces of different deployment systems. VISION's EPS receives power from the deployment system and distributes it through a power distribution board (PDB) to all VISION systems that require power. The PDB also monitors the power draw of each dependent system and relays the information to CDH. VISION's sensor suite contains optical tracking sensors as well as an on-board GPS used for inertial positioning. The optical tracking sensors used for CubeSat state estimation include the time-of-flight (ToF) camera and the visual imaging camera. This position data is used to calculate the inertial position of the deployed CubeSats. VISION's CDH system has three primary functions: command handling, data storage, and processing of the science data produced by the sensor suite. The data processing unit receives data from the optical sensors to estimate deployed satellite position, velocity, attitude, and TLEs. GPS data will be used in conjunction with the CubeSat relative position and velocity estimates to determine the inertial position and velocity of each CubeSat, which will then be used to determine individual CubeSat TLEs. VISION will also estimate the attitude and attitude rates of the deployed satellites using ToF camera and image data.
In addition to performing these estimations, VISION stores all sensor data, intermediate calculated values, and final inertial estimations in on-board memory so that they can be packaged and relayed to the deployment provider. The command handling portion of VISION's CDH system communicates with the deployment vehicle, EPS, and sensor suite to control and monitor operations.
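The relative-to-inertial conversion described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the flight implementation: the function name and frames are hypothetical, and transport terms from deployer rotation are neglected.

```python
import numpy as np

def relative_to_inertial(r_dep_eci, v_dep_eci, R_body_to_eci, r_rel, v_rel):
    """Map a CubeSat state measured relative to the deployer (body frame)
    into the inertial frame, using the deployer's GPS-derived state and
    attitude. Transport terms from deployer rotation (omega x r) are
    neglected here for brevity."""
    r_eci = r_dep_eci + R_body_to_eci @ r_rel
    v_eci = v_dep_eci + R_body_to_eci @ v_rel
    return r_eci, v_eci

# With an identity attitude, the relative offset simply adds on:
r, v = relative_to_inertial(np.array([7000e3, 0.0, 0.0]), np.array([0.0, 7.5e3, 0.0]),
                            np.eye(3), np.array([10.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0]))
```

In the real system the rotation matrix would come from the deployer's attitude solution, and the deployer's angular rate would also contribute to the CubeSat's inertial velocity.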

Figure 3. VISION Functional Block Diagram

2.5. Functional Requirements
The following are the functional requirements for project VISION. These represent the high-level requirements given by the customer in order to ensure a successful project. They have been approved by the customer and will be referenced in the following sections as FR. to illustrate the flow-down process. Derived requirements, denoted DR. , are developed from the functional requirements. Motivation for all functional and derived requirements can be found in the next section.

FR.1: The tracking system shall observe up to 6 separate CubeSats from the deployment platform.
FR.2: VISION shall report Two-Line Element (TLE) estimates of deployed CubeSats.
FR.3: VISION shall report proof of deployment.
FR.4: VISION shall integrate the functionality of both software and hardware within a single package.
FR.5: The system shall integrate with a deployment system defined by an Interface Control Document (ICD).
FR.6: Components within VISION shall be space-grade or interchangeable with comparable space-grade components.

3. Design Requirements

The motivations and methods of validation for each functional requirement (FR) and resulting derived requirement (DR) can be found below. Derived requirements are created in order to verify their parent requirements. Most values are derived from customer desires or from technical documents of possible launch providers such as NanoRacks. Some requirements refer to an Interface Control Document (ICD), which the team will develop to contain all technical details for integration with launch providers.

FR-1: The tracking system shall observe up to 6 separate CubeSats from the deployment platform.
Motivation: Customer requirement. The customer wants to be able to track an entire deployment from one NanoRacks bay. Currently a NanoRacks bay is 6U, so the maximum number it can deploy is six 1U CubeSats.


DR-1.1: VISION shall characterize and differentiate up to 6 CubeSats ranging in size from 1U to 3U.
Motivation: Objects are closest together immediately after deployment, so it is crucial for VISION to differentiate each object from the others in order to enable the tracking process. This means that the program must be able to detect the edges of each object and calculate its centroid in the first few frames of data.
Validation: Testing from simulation as well as physical experimental tests. Simulated sensor data, as well as a physical testing system, will allow images to be taken and used as inputs. The output shall be the edges and centroid of each object.
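The centroid computation this requirement describes can be illustrated with a minimal sketch. The thresholded-centroid approach over a synthetic frame below stands in for the real multi-object segmentation pipeline; the function name, threshold, and frame are hypothetical examples.

```python
import numpy as np

def centroid_of_bright_object(frame, threshold=0.5):
    """Estimate the pixel centroid of a single bright CubeSat against a
    dark background: threshold the frame, then average the coordinates of
    the remaining pixels. A real pipeline would first segment the frame
    into one region per CubeSat and compute a centroid per region."""
    rows, cols = np.nonzero(frame > threshold)
    return rows.mean(), cols.mean()

# A synthetic 2x2-pixel "CubeSat" whose centroid is (5.5, 7.5):
frame = np.zeros((10, 12))
frame[5:7, 7:9] = 1.0
print(centroid_of_bright_object(frame))  # -> (5.5, 7.5)
```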

DR-1.2: VISION shall utilize a deployment manifest provided by the deployer to identify each object.
Motivation: In order to provide unique state estimations for the deployed CubeSats, the system must be able to identify each CubeSat and collect motion data in order to estimate TLEs. By utilizing a deployment manifest, VISION will be able to identify each object, allowing them to be tracked independently by ground-based tracking systems.
Validation: Testing from simulation as well as physical experimental tests. An input file containing a deployment manifest will be read into the program, which will assign unique identifiers to each object.

FR-2: VISION shall report Two-Line Element (TLE) estimates for tracked CubeSats.
Motivation: Customer requirement. The customer wants the orbital estimation in the form of a Two-Line Element set (TLE) because it is a generally accepted form of orbital expression in industry. Furthermore, the TLE is the result of data processing and therefore requires much less bandwidth to downlink from the launch provider to a ground station than the entirety of the data captured by VISION. TLEs allow more timely and accurate ground tracking of individual CubeSats, decreasing the likelihood of losing any after deployment.

DR-2.1: VISION shall estimate the relative position and velocity of each CubeSat using data collected while the CubeSats are within 100 meters of the deployment system.
Motivation: The relative estimation algorithms will be designed to perform CubeSat state estimation for as long as the sensors can accurately observe the deployed CubeSats. As prescribed by the customer, the trackable range will be improved upon from the current capabilities, which act as a minimum range over which estimation can occur.
Validation: This requirement will be validated through demonstration that, using only telemetry gained from data collection inside the 100-meter mark, the system can generate predictions of both the velocity and position of each CubeSat.

DR-2.2: VISION shall estimate the relative position of each CubeSat to within 1 standard deviation of 0.1 meters at a range of 10 meters.
Motivation: The listed accuracy requirement at this short-range distance baselines the capabilities of the previous year's model and sets a floor for the estimation process of VISION.
Validation: Using both test and analysis, VISION will ground-test the system's ability not only to detect the CubeSats, but to produce estimates from telemetry provided by the sensors subsystem that will directly translate into improving future orbit predictions.

DR-2.3: VISION shall estimate the relative position of each CubeSat to within 1 standard deviation of 10 meters at a range of 100 meters after deployment.
Motivation: The listed accuracy requirement at this long-range distance baselines the capabilities of the previous year's model and sets a floor for the estimation process of VISION.
Validation: Using both test and analysis, VISION will ground-test the system's ability not only to detect the CubeSats, but to produce estimates from telemetry provided by the sensors subsystem that will directly translate into improving future orbit predictions.

DR-2.4: VISION shall record its own inertial position during deployment of the CubeSats.
Motivation: Together with the relative measurements and state estimates of the deployed CubeSats, the deployer's inertial position and velocity are required to calculate the CubeSats' inertial states.
Validation: VISION will demonstrate this capability during ground testing. The verification of this metric is vital to generating TLE estimates.

DR-2.5: VISION shall estimate CubeSat TLEs such that they can be used to calculate CubeSat position and velocity.
Motivation: The customer has explicitly required that CubeSat TLEs are produced. TLEs are the industry-standard method of packaging the information required to fully describe CubeSat orbits and ephemerides. This information is required to enable ground-based tracking capabilities quickly after deployment.
Validation: Testing. TLEs will be validated in software simulation using known orbital mechanics. Physical test data will then be used in place of simulated sensor data to validate the TLE estimation process in the loop with hardware.

DR-2.6: VISION shall calculate and package TLE estimates within 15 minutes of the end of the deployment sequence.
Note: Relevant data collection during the deployment sequence is considered finished when all CubeSats have traveled farther than 100 linear meters from the deployment system.
Motivation: In order for the data to be useful, the estimations shall be delivered to the deployer as soon as possible after deployment. NanoRacks deploys a tube of up to 6 CubeSats at 90-minute intervals, so the system shall finish processing and downlinking before the next deployment begins.
Validation: Analysis. A data file simulating the data that will be collected during the deployment of 6 CubeSats will be loaded into the system. The system will be tasked with transmitting the data via a USB 2.0 port to a PC simulating the NanoRacks use-case system. The requirement will be considered verified if the data transfer occurs in under 15 minutes.
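The 15-minute downlink budget above can be checked with a back-of-the-envelope calculation. The 30 MB/s effective USB 2.0 throughput and the 10 GB data volume below are illustrative assumptions, not measured values.

```python
def transfer_time_minutes(data_bytes, throughput_bytes_per_s=30e6):
    """Estimated transfer time over the USB 2.0 link. USB 2.0 high-speed
    signals at 480 Mbit/s theoretical; 30 MB/s is an assumed effective
    throughput after protocol overhead."""
    return data_bytes / throughput_bytes_per_s / 60.0

# An assumed 10 GB of stored imagery and telemetry:
print(transfer_time_minutes(10e9))  # roughly 5.6 minutes, within the budget
```

Under these assumptions the link itself leaves ample margin, so the 15-minute requirement is driven mainly by on-board processing time rather than the transfer.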

FR-3: VISION shall report proof of deployment.
Motivation: Customer requirement. Proof of deployment is required in order to make VISION commercially viable. Future launch customers want verification of a successful launch; proof of deployment will show launch customers the initial status of their product.

DR-3.1: VISION shall deliver a still image of each individual CubeSat in a deployment.
Motivation: The deployment company, as well as the CubeSat operators, need to know whether all of the CubeSats have been jettisoned properly. Still images showing all of the CubeSats outside of the deployer are able to verify basic deployment success. Each customer will receive an image with their CubeSat in clear view (at the back of the deployment).
Validation: VISION will verify that deployment proof can be communicated through demonstration during an interfacing test. This test will encompass all components of interfacing with a potential launch provider, including data and power transmission.

FR-4: VISION shall integrate the functionality of both software and hardware within a single package.
Motivation: As project maturation occurs, full system functionality depends on VISION's ability to combine component-level operations. Instead of testing individual tasks under isolated conditions, VISION will incorporate multiple inputs and conditions in order to better evaluate operational success. This consolidation of component/subsystem functions will allow VISION to test both hardware and software in a manner that would not be possible using solely the respective component's capabilities. As an example, if test data measured from the visual sensors can be communicated or input into testing of the state estimation algorithms, then VISION will have a better understanding of system compatibility and overall cohesion when performing on-orbit operations.

DR-4.1: The chassis envelope shall enclose all components, excluding protruding instrument sensors.
Motivation: The testing and operation of the system requires an integrated hardware and software package to allow for end-to-end testing. This is opposed to a grouping of hardware and software systems where data is transferred manually in a FlatSat configuration.
Validation: Through visual inspection and measurement, VISION will show that the mechanical chassis contains the entire packaged product and its components.

DR-4.2: The system shall operate with no more than 120 VDC, 3 Vpp ripple voltage, and 5 A.
Motivation: In order to keep the system operating, it shall accept the power supplied by the deployer.
Validation: Ground testing. The electronics subsystem will be connected to a variable power source set to 120 VDC with 3 Vpp ripple and 5 A. The requirement will be verified if the electrical subsystem successfully operates through a full mock deployment cycle and reports data while receiving power within this range.

DR-4.3: The system shall draw no more than 520 Watts.
Motivation: The system cannot exceed the power available to keep it operating, and this is the power level preferred by NanoRacks.
Validation: Ground testing. The system will be tested at maximum power capability under the worst-case conditions available to the team. The requirement is verified if, over the course of all tests, the system does not draw more than the listed maximum.

DR-4.4: The system shall store images, raw data, and estimates of one deployment cycle on-board for the duration of the data processing and downlink period.
Motivation: The on-board sensors will collect large amounts of data that need to be stored prior to processing and transfer to the deployer. (FS 1.5)
Validation: The data storage device will be tested with images, videos, and data collected over up to 135 minutes to verify that the storage device has enough physical memory.
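The storage sizing this validation implies can be sketched as follows. The frame rate, compressed frame size, and overhead factor are assumptions chosen only to illustrate the calculation, not measured sensor characteristics.

```python
def storage_required_gb(minutes, fps=5, bytes_per_frame=1e6, overhead=1.2):
    """Rough on-board storage sizing for one deployment cycle: total frames
    times an assumed compressed frame size, plus 20% margin for telemetry
    and intermediate products."""
    frames = minutes * 60 * fps
    return frames * bytes_per_frame * overhead / 1e9

# The full 135-minute processing and downlink period:
print(storage_required_gb(135))  # roughly 49 GB under these assumptions
```

Even modest changes to frame rate or compression move this estimate substantially, which is why the validation exercises the device with real captured data rather than a paper calculation alone.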

FR-5: The system shall integrate with a deployment system defined by an Interface Control Document (ICD).
Motivation: Integration with the deployment system is critical for operation of VISION. VISION must be able to communicate with the launch deployer to report TLEs and proof of deployment, as well as receive power from the launch deployer.

DR-5.1: The chassis shall mechanically integrate with one deployment system, as defined in the Mechanical Interface Control Document (MICD).
Motivation: The customer has explicitly stated that the system needs to mechanically interface with at least one deployment system. All information for the interface will be documented by VISION.
Validation: Due to limited access to proprietary information on potential deployment systems (ICDs), a physical interface cannot be verified without a better definition. Inspection of a preliminary CAD model will suffice to verify this mission need.

DR-5.2: The system shall have a volume of less than 6000 cm³.
Motivation: The system needs to meet the volumetric requirements of a 6U CubeSat in order to ensure interfacing is possible in the case of an internal or modular interfacing design.
Validation: CAD and hardware will be inspected to verify that all components and the system meet this requirement.

DR-5.3: The system shall have a mass of less than 8 kg.
Motivation: The system needs to meet the mass requirements of a 6U CubeSat in order to ensure interfacing is possible in the case of an internal or modular interfacing design.
Validation: The mass budget will be inspected to verify that all components and the system meet this requirement.

DR-5.4: The electrical power distribution subsystem shall interface with a PC.
Note: This interface will simulate all data and power communications with a potential deployment system.
Motivation: The system will be constrained based on power, data, and mechanical interfaces. This requirement defines the need to create a simulation procedure in order to test the system's capabilities.
Validation: A test will be used to verify the system's ability to communicate over a power interface.

DR-5.5: The system shall comply with the deployer's EMI/EMC emitter regulations.
Motivation: Customer requirement. The electrical system must function acceptably in its electromagnetic environment, with the EMI/EMC emitter regulations limiting the unintentional generation, propagation, and reception of electromagnetic energy which may cause unexpected effects.
Validation: Detecting EM waves. To detect the electric fields, a conducting rod is used. The fields cause electrons to accelerate back and forth along the rod, creating a potential difference that oscillates at the frequency of the EM wave with an amplitude proportional to the amplitude of the wave.

FR-6: Components within VISION shall be space-grade or interchangeable with comparable space-grade components.
Motivation: As required by the customer, the TRL of VISION needs to advance to near or complete space-readiness, equivalent to TRL-4 or TRL-5. However, due to budgetary constraints, it is unlikely that all components can be space-grade. Therefore, as much as possible, VISION will be space-grade, or TRL-4 equivalent, and for expensive components, a comparable component will be used.

DR-6.1: The chassis structure shall reach an advanced TRL certification, or equivalent.
Motivation: This is a customer requirement: that the system advance in TRL from the 2018-2019 VANTAGE project. An advanced TRL certification or equivalent is an achievable goal within the given budgetary, time, and resource constraints.
Validation: Component-level testing in relevant environmental conditions (vibration, thermal vacuum) will provide verification of a TRL classification.

DR-6.2: The chassis structure shall maintain structural integrity with a safety factor of 2.
Motivation: The chassis structure must maintain structural integrity in order to perform mission operations and meet the TRL, or equivalent, rating. A factor of safety of 2 was chosen as this is general engineering practice.
Validation: Finite element method analysis will provide verification that the structural integrity meets the mission needs.

DR-6.3: The electrical components shall have similar protocols and functions as space-rated components.
Motivation: Customer requirement. The system needs to be ready for the flight environment, but the team does not have the budget to purchase space-rated components, so the hardware shall be interchangeable with space-rated hardware.
Validation: Analysis. Each electrical component must have flown in space, or a similar device must have.

4. Key Design Options Considered

4.1. Sensor Suite
The design of the sensor array on VISION is critical to the overall mission. The initial relative velocity and position measurements must be accurate per the functional requirements. The sensor array is the driving factor determining the error in these estimates; therefore, this trade study values accuracy as it relates to the desired measurements. A number of different design options that may satisfy this requirement are considered. These alternatives are presented in the following subsections.

4.1.1. Wide Angle Visual and ToF Camera

Figure 4. Option 1: Wide Angle Visual and ToF Camera

Pros:
• Existing Software Framework
• Existing Test Suite
• Accurate Relative Measurements
Cons:
• High Power Draw
• Large Volume Requirement


The first possible configuration, shown in figure 4, is based on the legacy equipment from VANTAGE. It uses data from one wide-angle optical sensor and a Time-of-Flight (ToF) camera. A ToF camera uses pulses of infrared (IR) light to determine the distance to an object from the time the IR pulse takes to leave the sensor, bounce off the object, and return to the sensor. The wide-angle optical camera will use visual data to augment the data from the ToF camera to provide a more accurate estimate. This system will be easier to implement, as the work done by VANTAGE will significantly lower its development time. However, the wide-angle visual camera provides little help for observations beyond the first few meters. After this point the ToF camera will be the main observer, but with a limited resolution compared to standard visual cameras. Additionally, the ToF camera has a high power draw.
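The time-of-flight principle described above reduces to halving the round-trip light path; the timing value below is an illustrative example.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s):
    """Range from a time-of-flight measurement: the IR pulse traverses the
    sensor-to-object distance twice, so halve the round-trip path length."""
    return C * round_trip_s / 2.0

# A ~67 ns round trip corresponds to roughly 10 m:
print(tof_distance_m(66.7e-9))
```

The nanosecond timescales involved are one reason ToF sensors trade resolution and power draw against the simpler visual cameras discussed below.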

4.1.2. Telephoto Visual and ToF Camera

Figure 5. Option 2: Telephoto Visual and ToF Camera

Pros:
• Similar to Existing Software Framework
• Existing Test Suite
• Accurate Relative Measurements
Cons:
• High Power Draw
• Large Volume Requirement

In order to better complement the data gathered from the existing ToF camera, an optical camera with a narrower field of view (FoV) may be considered in place of the wide-angle optical camera. This approach is illustrated in figure 5. A narrower FoV will allow the sensor to observe the CubeSats for a longer time, providing better relative measurements, but the observation will begin later. This may cause issues differentiating between CubeSats due to compression, where the distances between objects appear shorter as the focal length is increased. This also makes the system sensitive to overlap, which may limit the more accurate observations to just the trailing CubeSat. This system also suffers from a relatively high power draw due to the ToF camera. Higher accuracy may give this suite an edge over the wide-angle solution used last year. The zoom camera will require a development time similar to the wide-angle/ToF approach used by the VANTAGE team.
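The FoV trade above follows directly from the pinhole camera model; the sensor width and focal lengths below are assumed values chosen only to illustrate how strongly focal length narrows the field of view.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole camera: 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# For an assumed 6.4 mm wide sensor, a longer focal length narrows the FoV,
# delaying first observation but keeping CubeSats resolvable for longer:
print(horizontal_fov_deg(6.4, 8))   # wide angle, ~43.6 degrees
print(horizontal_fov_deg(6.4, 50))  # telephoto, ~7.3 degrees
```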

09/30/19 12 of 46 CDD

4.1.3. Wide Angle Visual, Telephoto Visual and ToF Camera

Figure 6. Option 3: Wide Angle Visual, Telephoto Visual and ToF Camera

Pros: High Accuracy at Short and Long Range; Existing Test Suite; Accurate Relative Measurements
Cons: High Power Draw; Large Volume Requirement; High Development Time

The third option is a three-sensor system, where the ToF camera is supplemented by both a wide angle camera and a narrow FoV camera. This option provides the best visual coverage, as displayed in Figure 6, combining excellent initial wide angle differentiation of CubeSats with the longer narrow FoV observation. Two visual-spectrum cameras, combined with the ToF sensor, best capture the entire deployment process. This system will provide the best accuracy at both short and long range, at the cost of development time and power draw.

4.1.4. Stereoscopic Cameras

Figure 7. Option 4: Two Stereoscopic Visual Cameras

Pros: Low Power Draw; Small Volume Requirement
Cons: Inaccurate Measurements of Overlapping CubeSats; High Development Time; Poor Low Light Vision

The fourth option worth considering is a stereoscopic system of two identical visual cameras offset by a known distance, as shown in Figure 7. The data from both cameras, together with the known separation, are used to triangulate a relative


distance based on the different viewing angles of each camera. This method is proven in two very common applications. Cars with dynamic or adaptive cruise control use (in part) data from two visual sensors to determine the distance and relative velocity of the car(s) ahead. Additionally, cell phones in portrait mode use two cameras to differentiate between the foreground and the background of a nearby subject in order to blur the background. This method will be useful for detecting and eliminating the background behind the CubeSats, and may provide useful short-distance data. However, as the distance increases (>10 m), the capability of a stereoscopic camera system becomes severely limited. It will also take longer to develop the necessary software for this determination. The visual cameras may also be limited in certain lighting conditions, potentially hurting accuracy. Despite these drawbacks, this alternative is still worth investigating due to its accuracy advantage over a single-camera system.
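The triangulation geometry can be sketched with the standard parallel-camera disparity relation; the focal length and baseline below are illustrative assumptions, not VISION hardware values:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (m) from the pixel disparity between two parallel cameras."""
    return focal_px * baseline_m / disparity_px

f_px, baseline = 1000.0, 0.1               # assumed focal length (px) and camera separation (m)
print(stereo_depth(f_px, baseline, 10.0))  # 10.0 m from a 10 px disparity

# Why accuracy degrades beyond ~10 m: at that range the disparity is only
# 10 px, so a 0.5 px matching error already shifts the estimate by ~0.5 m.
print(stereo_depth(f_px, baseline, 9.5))   # ~10.53 m
```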

4.1.5. Single Visual Camera

Figure 8. Option 5: Single Visual Camera

Pros: Low Power Draw; Small Volume Requirement; Low Development Time
Cons: Inaccurate Measurements of Overlapping CubeSats; Poor Low Light Vision

The simplest, smallest, and lowest power solution to consider is a single calibrated camera, displayed in Figure 8. With a known object size, such as the length of a side of the CubeSat, and a known FoV, the distance from a calibrated camera can be determined. This software is relatively simple and will have a low development time, as there is no sensor fusion. However, a single camera means the observation range is limited to one FoV, and certain lighting scenarios may render the data useless. In order to better differentiate between CubeSats, it will have to observe the short-distance range, requiring a wide FoV. This trade-off limits the time of observation, so the camera will be inaccurate at longer distances.
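The calibrated single-camera range estimate follows from the pinhole model; the focal length below is an assumed illustrative value, with the 10 cm side of a 1U CubeSat as the known object size:

```python
def range_from_size(focal_px: float, true_size_m: float, apparent_px: float) -> float:
    """Pinhole-model range: distance at which an object of known physical
    size spans the observed number of pixels."""
    return focal_px * true_size_m / apparent_px

CUBESAT_SIDE = 0.1   # 1U CubeSat side length, m
f_px = 1000.0        # assumed calibrated focal length, in pixels

# A side spanning 50 px implies the CubeSat is 2 m away; at 10 m the same
# side spans only 10 px, so each pixel of error costs far more range accuracy.
print(range_from_size(f_px, CUBESAT_SIDE, 50.0))  # 2.0
print(range_from_size(f_px, CUBESAT_SIDE, 10.0))  # 10.0
```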


4.1.6. Staggered Dual Visual

Figure 9. Option 6: Staggered Dual Visual Cameras

Pros: Good Coverage of CubeSats at Short and Long Range; Low Power Draw; Small Volume Requirement
Cons: Inaccurate Measurements of Overlapping CubeSats; Poor Low Light Vision

The final design option considered is a system of two staggered cameras, illustrated in Figure 9. This system works similarly to the single camera solution, but with two sensors covering two separate fields of view. A wide angle sensor will be accurate at short range, and a narrow FoV camera will take over as the CubeSats get farther away. This is anticipated to be fairly simple to implement, and offers better short and long distance accuracy than a single camera or a stereoscopic approach. Again, this solution may be limited by lighting conditions.

4.2. Structural Interface Method
In order to ensure the system is equipped with the most effective method of interfacing with deployment systems, the team explored multiple integration options. The system must structurally interface with the deployment system in a way that allows the sensor suite to observe the deployed CubeSats, while maintaining structural integrity and minimizing the deployer's opportunity cost of giving up space that could otherwise be sold to customers.


4.2.1. Internal Mounting

Figure 10. Internal Mounting Mock-Up

Pros: Clearly defined interfacing from Interface Control Documents; Low development time
Cons: High opportunity cost for deployer; Volumetric constraints; Self-contained radiator not possible for thermal control

In this configuration, VISION is located inside a launch bay as a 6U or smaller CubeSat, modeled in Fig. 10. It will remain in its tube during CubeSat deployment, allowing it to observe the payloads ejected from the other bays. This configuration is potentially universal to all deployment systems that support 1U-6U CubeSats and provides ample volume for all components. However, it would cost the deployer, as one launch tube is consumed by the system, and it does not allow proper heat dissipation via a radiator.

4.2.2. External Mounting

Figure 11. External Mounting Mock-Up


Pros: Low opportunity cost for deployer; Volume constraints are flexible; Self-contained radiator possible for thermal control
Cons: Complex interface with launch deployer; May not be possible for multiple deployment systems

This configuration attaches the system externally to a launch bay, as shown in Fig. 11. This is a favorable configuration, as it does not reduce the number of launched CubeSats, decreasing the deployer's opportunity cost, and it has very liberal volumetric constraints. It also allows for a radiator for heat dissipation. However, this configuration is much more complex in terms of interface design. It also may not be very universal, as a bespoke mount would need to be created for each launch provider. It may even be impractical, as some deployment platforms may not allow an external mount.

4.2.3. Launch Tube Replacement

Figure 12. Tube Replacement Mock-Up [1]

Pros: Flexible volumetric constraint; Low structural complexity from legacy work; Self-contained radiator possible for thermal control
Cons: Very high opportunity cost for deployer; May not be modular to multiple deployment systems; Requires launch provider to make alterations

This configuration has VISION replacing a launch tube entirely, as shown in Fig. 12. It is advantageous in that there is more available volume, it is less complex to integrate, and a radiator is available. This was the configuration used for the previous iteration of this project, which makes its development easier; because the 2018-2019 team implemented this configuration, there is applicable carry-over information. However, this configuration costs the deployer an entire launch tube and would be difficult to implement across multiple deployment systems.


4.2.4. Modular Internal/External Mounting

Figure 13. Modular Mounting Mock-Up

Pros: Modular to multiple deployment systems; Radiator for thermal control is possible for external attachment; Flexible volumetric constraints for external attachment
Cons: Extremely complex development; Radiator for thermal control is not possible for internal deployment; Size limitation on VISION due to internal mounting

This configuration gives VISION the ability to interface either externally via an attachment or internally as a 6U or smaller CubeSat, modeled in Fig. 13. It would be very universal: for any deployer that does not allow external mounting, the system can be mounted internally. It also retains the possibility of a radiator and of saving the deployer opportunity cost. However, this configuration would be very complex, and the system's size may be limited by the interfacing attachment.

4.3. Embedded Systems
VISION's embedded system design choices will directly affect the software aspects of the project. Since the team has based its design options on the NanoRacks deployment system, the avionics will be required to integrate via a USB 2.0 connection. This allows the team to explore multiple options to best suit the needs of VISION, considering a few different metrics based on the demands of the sensor suite and the software package. Due to the team's lack of experience with developing embedded systems, VISION has chosen to evaluate existing development boards and micro-controllers rather than building its own.


4.3.1. Nvidia Jetson Nano

Figure 14. Nvidia Jetson Nano

Pros: Inexpensive; Low power requirement; High level language capability
Cons: Not space grade; High heat generation; Thermally unstable; Requires extra cooling method; Limited documentation

The Nvidia Jetson Nano is a system on a module and reference carrier board meant for high processing power, machine learning, and vision/video processing applications. It comes equipped with a quad-core ARM Cortex-A57 MPCore processor and 4 GB of 64-bit LPDDR4 memory at 1600 MHz and 25.6 GB/s, with up to 16 GB of eMMC storage. Nvidia designed this model to shorten the learning curve, and it is accompanied by tutorial videos and user-friendly manuals. This is an attractive characteristic, as is its low market price of $99. The board generates a lot of heat; on Earth it can be coupled with a fan, but VISION targets space environments, where convective cooling is not applicable, so the team would need to determine other methods to cool the system in mission environmental conditions.


4.3.2. Intel NUC8i7HVK

Figure 15. Intel NUC8i7HVK

Pros: Fast; Reliable; Support for high level language; Low development time; Low power requirement; Graphics processor
Cons: High heat generation; Not space grade; Expensive; Needs extra cooling method

The Intel NUC8i7HVK is a small-form-factor personal computer built around an Intel Core i7-8809G processor with front and rear (F+R) HDMI 2.0a graphics outputs. This device has a lower thermal design power than a conventional PC and a high image processing speed. It can also run a Windows or Linux operating system, so many high-level languages can be used on this embedded system. This option also offers a wide range of design potential due to its sensor integration capability and processing power. However, the device is not easily transferable to a space-grade system. It uses airflow to cool the processor, which produces a significant amount of heat; in space this cooling method would not be practical, and another would need to be used.

4.3.3. Intel NUC8i7HNK

Figure 16. Intel NUC8i7HNK


Pros: Fast; Reliable; Support for high level language; Less development time; Graphics processor
Cons: High heat generation; Not space grade; Expensive; Requires extra cooling method

The Intel NUC8i7HNK (HNK) is a small-form-factor personal computer just like the NUC8i7HVK (HVK), but with a slightly scaled-down graphics card. This allows it to operate at a lower wattage (65 W compared to 100 W). It will experience the same issues as the HVK but may produce somewhat less heat while drawing less power, which is why it is included as an alternative. The complaints are the same as with the HVK model: suitability for space is limited.

4.3.4. Intel NUC8i5BEH

Figure 17. Intel NUC8i5BEH

Pros: Support for high level languages; Faster; Reliable; Low development time
Cons: Requires extra cooling method; Not space grade; Limited configurations

The Intel NUC8i5BEH is in the same family as the previous two Intel NUC models, but its graphics capabilities are scaled down significantly. This version has a higher CPU benchmark and a lower thermal design power (28 W) than the previous two. It is also what the VANTAGE team chose for the heritage project, which will make it much easier to develop. However, it is not space rated, so the team must find other methods to cool the device.


4.3.5. Xilinx Zynq-7000 SoC ZC702

Figure 18. Xilinx Zynq-7000 SoC ZC702

Pros: Fast; Reliable; History of being used in space; Low power
Cons: Expensive; No operating system support; Very little experience/resources

The Xilinx Zynq-7000 is a space-rated board containing two Arm Cortex-A9 MPCore application processors, AMBA interconnects, internal memories, external memory interfaces, and peripherals. It has been used in space missions before, so it could carry over to the "space ready" iteration of this project. Unlike the previously mentioned options, this board is built around an FPGA, so it runs faster, with shorter signal paths and lower power consumption. However, the FPGA does not support an operating system, meaning the team would have to build the design in a hardware description language (HDL), which takes considerable development and implementation time.

4.3.6. Xilinx Zynq UltraScale+ MPSoC

Figure 19. Xilinx Zynq UltraScale+ MPSoC


Pros: High data transfer rates; Space grade capability; Low power consumption
Cons: Long development time; Long integration time; Expensive

The Xilinx Zynq UltraScale+ MPSoC is a multi-processor system-on-chip. The chip, shown in Figure 19, includes an FPGA, a dual-core Cortex-R5 real-time processing unit, and a dual-core Cortex-A53 application processor. It is widely accepted and used throughout commercial, civil, and military applications. The board is capable of real-time processing that would help the VISION team obtain velocity and position measurements faster, an important feature when considering the timing aspect of the mission. In addition, this board has already passed in-orbit mission radiation performance validation by the US Government and would transfer easily to a space-ready system. Unfortunately, this board is programmed with a hardware description language (HDL), and the VISION team lacks experience with this workflow. Learning it would significantly increase the software and electrical teams' workload, and for this reason both Xilinx Zynq models were not considered further in the embedded systems trade study.

4.4. Flight Software Programming Language
This trade study highlights the advantages and disadvantages of each programming language in the aspects that relate to VISION's mission scope. The main capabilities considered are image processing, resource ecosystem, readability/development time, embedded system implementation, and run-time. The chosen language will be used for the flight (on-board) software and will ensure that VISION achieves an optimized balance between development time, run-time speed, and fulfillment of design requirements.

4.4.1. MATLAB

Figure 20. Detecting overlapping circular objects from Image Processing Toolbox [19]


Figure 21. Integrated design flow for embedded software and hardware [20]

Pros: Concise language for math and matrix operations; Robust image processing toolboxes; Embedded Coder generation to C/C++; Test algorithms without compiling
Cons: Lack of open resource ecosystem; Very slow run-time; Not common in commercial applications

MATLAB is a high-performance language for technical computing, specifically matrix operations. Since VISION's software will consist of complex state estimation algorithms and image processing, MATLAB is a qualified language for these tasks. MATLAB is a very well-documented language, which makes prototyping and debugging fast and easy. Simulations and algorithm tests in MATLAB are quick and efficient because it is an interpreted environment and does not need to be compiled. MATLAB's toolboxes are robust because of extensive development and verification by credible sources, which reduces development and implementation time for the user. Toolboxes such as the Image Processing Toolbox are very powerful because of MATLAB's fast numerical matrix manipulation, the main component of image processing; an example is shown in Figure 20. However, because MATLAB is a high-level language and interactive environment, it has one of the slowest overall run-times due to the complexity of its built-in interpreter. This results in high computational cost (RAM) and memory allocation overhead, which is not ideal for product development. Another downside is that it lacks an open resource ecosystem, so the user is limited to the toolboxes and packages available. However, MATLAB and Simulink can generate code with Embedded Coder that can be implemented and run on an embedded system. This tool can prepare, generate, and test C/C++ code and run it on hardware, as shown in Figure 21. Embedded Coder can translate basic MATLAB operations but does not support external functionality like the Image Processing Toolbox.


4.4.2. Python

Figure 22. Community interests of programming languages over time [10]

Figure 23. Filtered edge detection using skimage package [8]

Pros: Open source with significant community development; Compact and readable language; Extensive libraries/packages for various purposes; Powerful object-oriented programming; Optimized library calls and compilers
Cons: Lacks code generation for embedded systems; Slower run-time due to being an interpreted language

Python is one of the most popular and widely used programming languages. Like MATLAB, it is an interpreted language, but it is intended for general purpose programming rather than heavy technical computing. Python has been the fastest growing language in the computer science community, as shown in Figure 22. Because of this popularity, it is open source and community developed, with extensive support libraries. In recent years the number of data manipulation and visualization packages for Python has increased exponentially. Examples are skimage (Figure 23), NumPy, SciPy, Scikit-learn, and Matplotlib, which can work in conjunction with OpenCV for a powerful


Computer Vision and Machine Learning environment. Python is also built for readability through block-based indentation, simple syntax, and augmented assignments. This greatly improves the programmer's productivity and development time because very little time is wasted on organization, readability, and debugging efforts. The main downside of Python compared to C++ is its speed, because it is not a compiled language. However, C/C++ extensions for Python can be used via Cython, providing speed increases of up to 100x over native Python. Another alternative is the PyPy Just-In-Time (JIT) compiler, which works alongside an interpreter to generate compiled machine instructions and can increase Python's execution speed by nearly a factor of two. Python is also known for simple and flexible object-oriented programming (OOP) because of its ability to handle and manipulate data structures with ease through libraries such as Pandas. Another downside to Python is its lack of code generation for embedded systems compared to C/C++. However, Python is also one of the fastest growing languages for embedded computing. One of Python's best features is the ability to communicate with an embedded system, allowing the user to automate testing and receive data for analysis. The remaining 5% of embedded systems code that is not C/C++ is in Python, and in the next few years it could outgrow C/C++'s dominance in embedded systems programming [10].

4.4.3. C++

Figure 24. OpenCV Canny edge detection [21]

Pros: Fastest overall run-time; Access to OpenCV source code; Most commonly used in embedded systems
Cons: Difficult debugging; Poor readability; Requires more learning/development time

C++ is a compiled language and one of the oldest programming languages still in wide use. A compiled language has many advantages, such as compact and fast run-time code. This makes it an optimal choice in mobile computing, which is why 95% of embedded system programs are in C/C++ [10]. Additionally, C++ is used in many production-grade Computer Vision systems because it has excellent libraries such as OpenCV, which performs well and is easy to put into a production environment. OpenCV is also written in C/C++, which means developers can modify the source library for a specific need; an example of edge detection using OpenCV is shown in Figure 24. The biggest downside of C++ is that it is slow to write, error prone, and frequently hard to read. Since it is a difficult environment to visualize and debug, it is not an ideal language for beginners and/or group collaboration. This would dramatically increase the team's development and learning time, which is a major disadvantage.

4.5. State Estimation Method
The capability to accurately estimate each deployed CubeSat's position and velocity relative to the deployer is a critical element of VISION's functionality. Relative state estimates are used with deployer inertial states to approximate CubeSat inertial states, which in turn are used to calculate each CubeSat's TLE. Relative CubeSat state estimates will be computed using relative position data extracted from VISION's sensor images via image processing methods. This data inherently includes uncertainties from the sensors themselves and from the image


processing. In order to effectively minimize the uncertainties introduced by these sensors, the following state estimation methods are considered.

4.5.1. Weighted Least Squares

Pros: Very easy to understand; Very simple to implement; Can handle different levels of sensor reliability
Cons: Only handles linear systems; Not very robust

The method of least squares is a standard approach in regression analysis for approximating solutions to systems that contain more equations than unknowns. Least squares requires that the form of the fitted equation be known, so it is only applicable to linearized systems. This section considers a batch implementation of the approach; that is, all of the recorded data is processed at the same time. Least squares minimizes the sum of the squared residuals, where a residual is the difference between each observed value and the fitted value at the same time step. This direct dependence on residuals will adversely affect the accuracy of the regression if the measurement noise is not zero-mean, white, and Gaussian. Additionally, the regression requires that the measurements contain some amount of noise, as the residual matrix otherwise becomes singular, preventing the algorithm from functioning. Weighted least squares (WLS) considers the case where some sensors are more reliable than others and their relative reliability is generally known. This method modifies the least squares algorithm to assign each observation a weight dependent on the sensor that made it, and then minimizes the sum of the squared weighted residuals. More reliable sensors therefore have a greater impact on the fit, while measurements from less reliable sensors can still be considered. However, if the reliability weights do not accurately reflect the sensors, noise can be amplified, reducing the accuracy of the results below those of a non-weighted regression. The linearized dynamics models considered in this document all include sinusoidal functions in their systems of equations. It is possible to use least squares regression to fit a sinusoidal function; however, this requires that the frequency of the function be known.
Generally, this frequency can be determined via frequency domain analysis of the observed data. However, because VISION only records satellite motion over a period of time that is significantly shorter than the cycle period of the system, the efficacy of this method may be adversely affected.
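As a concrete sketch, the weighted normal equations for a two-parameter linear fit can be solved directly; the data and weights here are made up for illustration:

```python
def wls_line(t, y, w):
    """Weighted least squares fit of y ~ a + b*t, minimizing
    sum_i w_i * (y_i - a - b*t_i)**2 via the 2x2 normal equations."""
    S   = sum(w)
    St  = sum(wi * ti for wi, ti in zip(w, t))
    Stt = sum(wi * ti * ti for wi, ti in zip(w, t))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    det = S * Stt - St * St
    a = (Stt * Sy - St * Sty) / det   # intercept
    b = (S * Sty - St * Sy) / det     # slope
    return a, b

# Noise-free data from y = 1 + 2t; the more trusted sensor gets larger weights.
t = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
a, b = wls_line(t, y, [1.0, 1.0, 4.0, 4.0])
print(a, b)  # 1.0 2.0 (data are exact, so the weights do not change the fit)
```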

4.5.2. Recursive Least Squares

Pros: Fairly easy to understand; Fairly simple to implement
Cons: Only handles linear systems; Not very robust; Can't handle different levels of sensor reliability

The recursive least squares (RLS) method modifies least squares by removing the requirement for batch processing. This means it can process data in real time, reducing computational storage requirements. The update algorithm calculates an unbiased estimator and a measurement noise covariance to find the estimation-error covariance, and then attempts to minimize that estimation-error covariance. This adds consideration of process noise in addition to measurement noise; however, as long as the two are independent, the method is generally considered viable. RLS can produce better results than WLS for certain noise profiles. RLS does not have the ability to weight measurements by reliability. That said, the lack of dependence on an accurate sensor reliability profile means that RLS is more generalized, requires less tuning, and removes the potential of over-fitting the sensor reliability profile.
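A minimal scalar version of the recursive update illustrates the idea, estimating a constant from a stream of measurements without storing them; the initial values are made up for illustration:

```python
class ScalarRLS:
    """Recursively estimate a constant x from measurements y_k = x + v_k,
    without storing the measurement history."""
    def __init__(self, x0: float, p0: float, r: float):
        self.x = x0   # current estimate
        self.p = p0   # estimation-error covariance
        self.r = r    # measurement noise covariance
    def update(self, y: float) -> float:
        k = self.p / (self.p + self.r)   # gain: trust in the new measurement
        self.x += k * (y - self.x)       # correct the estimate
        self.p *= (1.0 - k)              # error covariance shrinks each step
        return self.x

est = ScalarRLS(x0=0.0, p0=1.0, r=0.1)
for _ in range(200):
    latest = est.update(5.0)   # noiseless measurements of the true value 5
print(round(latest, 2))        # -> 5.0 (estimate converges toward 5)
```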


4.5.3. Classical Kalman Filter

Pros: Fairly robust; Can be modified to be more robust for specific scenarios; Can somewhat adapt to incorrect assumptions made in the model
Cons: Somewhat difficult to understand; Somewhat difficult to implement; Only handles linear systems

The Classical Kalman filter (CKF) propagates the mean and covariance of the system's observed state through time, which allows it to adapt to and compensate for model errors. The filter's most basic implementation requires a linearized system and measurement noise that is mostly zero-mean, white, and uncorrelated. However, modifications can be made to the input data and elements of the filter algorithm to increase accuracy for non-zero-mean or colored noise. Smoothing methods are another way of reducing the effects of noise, but they are computationally expensive and can introduce new optimization issues. CKFs are generally considered to produce the best solutions to linearized systems, so if the system under consideration is linear, the CKF would be considered the best overall solution. However, this comes at a computational cost, as the Kalman filter has significantly more overhead than a recursive least squares filter. Additionally, the filter can produce divergent results if the arithmetic precision of the measurements is low, the model is highly inaccurate, or the initial state is unreasonable. Kalman filters have no single implementation and are highly dependent on the system being considered. Care must be taken to select viable and compatible methods of calculating the covariance and state prediction matrices, and the individuals doing so must have a good understanding of the theory and operation of the filter methods being applied.
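A minimal one-dimensional sketch shows the predict/update cycle; all noise values are made-up tuning parameters, and a real implementation would use the full matrix form:

```python
class ScalarKalman:
    """1-D Kalman filter for a nearly-constant state:
    dynamics x_{k+1} = x_k + w_k, measurement y_k = x_k + v_k."""
    def __init__(self, x0: float, p0: float, q: float, r: float):
        self.x, self.p = x0, p0   # state estimate and its covariance
        self.q, self.r = q, r     # process and measurement noise covariances
    def step(self, y: float) -> float:
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (y - self.x)       # update with the innovation
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman(x0=0.0, p0=1.0, q=0.01, r=0.1)
for _ in range(50):
    estimate = kf.step(3.0)   # noiseless measurements of the true value 3
print(round(estimate, 3))     # -> 3.0
```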

4.5.4. Extended Kalman Filter

Pros: Can handle nonlinear systems; Very robust; Can be modified to be more robust for specific scenarios; Can somewhat adapt to incorrect assumptions made in the model
Cons: Difficult to understand; Difficult to implement

The Extended Kalman filter (EKF) is capable of approximating solutions to nonlinear systems. This is done by linearizing the system around the current estimate, then using the linearized system to calculate the covariance and the next estimate. The EKF has many of the same advantages as the CKF, and all of the modifications applicable to the CKF apply similarly to the EKF. However, the EKF is significantly more complex, and thus is more difficult to implement and has a higher computational cost. Another nonlinear filter considered for the project was the Unscented Kalman filter (UKF). The UKF reduces the errors introduced by linearization in the EKF and is much better at handling systems whose dynamics cannot be expressed, or are not easily found, analytically. The systems under consideration for this project already have analytical representations, reducing the advantage of the UKF over the EKF. Additionally, far more resources are readily available for EKF implementation in orbital dynamics than for UKF implementation. Therefore, the team decided not to add the UKF to the trade study at this time. However, should a later stage of the project determine that a nonlinear dynamics system is required, the UKF and its application to the project will be reconsidered.
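The relinearization step can be sketched in one dimension with an arbitrary nonlinear measurement model h(x) = x²; this toy model and all numbers are illustrative only, not VISION's dynamics:

```python
class ScalarEKF:
    """Minimal 1-D EKF: static state x with nonlinear measurement y = h(x) + v.
    Each update relinearizes h(x) = x**2 about the current estimate."""
    def __init__(self, x0: float, p0: float, q: float, r: float):
        self.x, self.p = x0, p0
        self.q, self.r = q, r
    def update(self, y: float) -> float:
        self.p += self.q                 # small process noise keeps the gain alive
        h = self.x ** 2                  # predicted measurement
        H = 2.0 * self.x                 # Jacobian dh/dx at the current estimate
        k = self.p * H / (H * self.p * H + self.r)
        self.x += k * (y - h)            # correct using the linearized model
        self.p *= (1.0 - k * H)
        return self.x

ekf = ScalarEKF(x0=1.5, p0=1.0, q=1e-4, r=0.01)
for _ in range(100):
    est = ekf.update(4.0)   # noiseless measurements of h(2) = 4
print(round(est, 3))        # -> 2.0 (estimate converges to the true state)
```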


4.5.5. H∞ Filter

Pros: Can handle nonlinear systems; Extremely robust; Can be modified to be more robust for specific scenarios; Can adapt to incorrect assumptions made in the model
Cons: Very difficult to understand; Very difficult to implement

The H∞ filter, or minimax filter, attempts to minimize the error of worst-case estimates. This makes it extremely robust, but it also errs heavily on the side of measurement pessimism, which can reduce the effect of new accurate observations on the model estimate. The H∞ filter is ideal for systems where the model is uncertain, not well known, inaccurate, or changes unpredictably. Because VISION's model is generally well known, this advantage is unlikely to be a significant reason to select this filter. The H∞ filter also limits the frequency response of the estimator better than the EKF. However, the H∞ filter is highly sensitive to its design parameters, so implementation requires a good understanding of the filter's theory and operation. Since this filter is very complex and its theory abstract, implementation will likely be very difficult.

4.5.6. Dynamics Model Considerations
The dynamics of the system being estimated will drive whether the state estimation method needs to be linear or nonlinear, and directly affect the conditions under which the method will produce the most accurate results. The system VISION will estimate is defined by relative orbit dynamics. This section discusses the implications of selecting different dynamics models on state estimation selection. The models considered are the general relative orbit equations of motion, the Hill-Clohessy-Wiltshire (HCW) equations, and the Tschauner-Hempel (TH) equations. The differential equations that define these systems can be found in Appendix B. Figure 25 depicts the relative orbit dynamics system with one chief and one deputy, where the deputy state is defined in the chief's local-vertical-local-horizontal (LVLH) frame.

Figure 25. Relative Orbital Dynamics Diagram

General Relative Orbit Equations of Motion: The general relative orbit equations of motion describe the motion of a deputy relative to a chief while both orbit the same central body. The derivation requires that the chief is in a Keplerian orbit, meaning that it travels unperturbed along a conic section. This motion is described

09/30/19 29 of 46 CDD

by a set of three coupled second-order nonlinear differential equations expressed in the rotating chief LVLH frame. Numerical integration of these equations requires knowledge of the chief's true anomaly rate, the chief's inertial position and velocity, and the deputy's inertial position. To implement this set of equations in a linear filter, a linearized form is derived using a first-order Taylor series expansion. The linearization assumes that the chief and deputy are in close proximity; it removes the dependence on the deputy's inertial position and, in turn, decouples the third equation from the other two. The linearized set still requires the chief's inertial states and true anomaly rate at each integration step. The need for these values at each step makes this model significantly more difficult to implement with real-time measurements, because estimates of these values must be obtained from inertial chief measurements.

Hill-Clohessy-Wiltshire Equations: Starting with the general linearized model, the HCW equations can be derived by further assuming that the chief is in a circular orbit. This greatly reduces the relative equations of motion to a set of second-order linear differential equations, in which the first two are coupled and the third is uncoupled. This model requires only the mean orbit rate of the chief, significantly less information than the general set of equations. Because the mean orbit rate is constant for an unperturbed circular orbit, it can be estimated to relatively high accuracy from a small set of measurements and does not need to be updated at each integration step. A curvilinear version of the CW equations was considered but quickly ruled out, because VISION does not produce curvilinear measurements and that modified set requires the chief's inertial position at each integration step.
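For reference, the HCW equations discussed above take the following standard form, with x radial, y along-track, and z cross-track in the chief LVLH frame, and n the chief mean orbit rate (the project's exact formulation appears in Appendix B):

```latex
\begin{align}
\ddot{x} &= 3n^{2}x + 2n\dot{y} \\
\ddot{y} &= -2n\dot{x} \\
\ddot{z} &= -n^{2}z
\end{align}
```

Consistent with the discussion above, the first two equations are coupled through the Coriolis terms, while the cross-track equation is an independent harmonic oscillator.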
Tschauner-Hempel Equations: Starting from the general linearized relative orbit equations of motion, the TH equations can be derived under the assumption that the chief is in a Keplerian orbit. This set of equations takes a similar form to the HCW equations; however, they are nondimensionalized and can handle chief orbits of general eccentricity (u, v, and w are the non-dimensional relative coordinates). Implementing these equations requires the chief's true anomaly and orbit eccentricity. The eccentricity is constant under the Keplerian assumption, so it can be estimated with few measurements if used only over a short period. The true anomaly, however, must be estimated at each integration step, which introduces computational overhead and new sources of uncertainty into the CubeSats' relative state estimates.

The primary dynamics model characteristic driving state estimation method selection is whether the system can be accurately described by a set of linear differential equations. To quantify the error introduced by a model's linearity, the CW equations were integrated using a 4th-order variable-time-step Runge-Kutta integrator and compared to the spacecraft's inertial motion, found by integrating Newton's gravitational equation in a similar fashion. This analysis was performed in MATLAB. The motion was integrated for a chief in a circular low Earth orbit (LEO), with the deputy deployed from the chief at a relative velocity of 2 meters per second down-track. Due to camera resolution limitations, VISION will collect data out to about 100 meters, so model performance over that distance is what matters. Figure 26 depicts the motion of both models propagated over five chief orbits in the chief LVLH frame. The slow drift of the linear model from the actual relative position is expected over this many orbits, as the linear assumption introduces significant error at large separations.

Figure 26. Relative Motion in Chief LVLH Frame
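As an illustration of the kind of propagation used in this analysis, the following sketch integrates the HCW equations with a fixed-step RK4 integrator and cross-checks the result against the closed-form HCW solution. This is a simplified stand-in for the document's MATLAB study (which used a variable-step RK4 and a full nonlinear comparison); the orbit altitude and constants here are assumed for illustration only.

```python
import numpy as np

MU = 3.986004418e14           # Earth's gravitational parameter, m^3/s^2
A_CHIEF = 6_778_000.0         # circular chief orbit radius, ~400 km altitude (assumed)
N = np.sqrt(MU / A_CHIEF**3)  # mean orbit rate of the chief, rad/s

def hcw_deriv(state):
    """HCW equations: x radial, y along-track, z cross-track (chief LVLH frame)."""
    x, y, z, vx, vy, vz = state
    ax = 3 * N**2 * x + 2 * N * vy
    ay = -2 * N * vx
    az = -N**2 * z
    return np.array([vx, vy, vz, ax, ay, az])

def rk4_propagate(state, dt, steps):
    """Fixed-step 4th-order Runge-Kutta integration of the HCW equations."""
    for _ in range(steps):
        k1 = hcw_deriv(state)
        k2 = hcw_deriv(state + 0.5 * dt * k1)
        k3 = hcw_deriv(state + 0.5 * dt * k2)
        k4 = hcw_deriv(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

def hcw_closed_form(state0, t):
    """Analytic HCW solution, used to cross-check the numeric propagation."""
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = np.sin(N * t), np.cos(N * t)
    x = (4 - 3 * c) * x0 + (vx0 / N) * s + (2 * vy0 / N) * (1 - c)
    y = 6 * x0 * (s - N * t) + y0 + (2 * vx0 / N) * (c - 1) + vy0 * (4 * s - 3 * N * t) / N
    z = z0 * c + (vz0 / N) * s
    return np.array([x, y, z])

# Deputy deployed down-track at 2 m/s relative to the chief, as in the analysis above
state0 = np.array([0.0, 0.0, 0.0, 0.0, 2.0, 0.0])
t, dt = 60.0, 0.1
numeric = rk4_propagate(state0, dt, int(t / dt))[:3]
analytic = hcw_closed_form(state0, t)
print(numeric, analytic)
```

With this deployment velocity the deputy drifts roughly 120 m down-track in the first minute, matching the separation scale discussed above.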

On a scale more relevant to VISION, the error introduced by the linear assumption is significantly smaller. The model error over the first 120 meters is depicted in Fig. 27 below.


Figure 27. Model Error

The error introduced by the linear model over 120 meters is very small, meaning that using a linear model to estimate CubeSat relative positions will introduce uncertainties significantly smaller than those expected from VISION's sensors. A secondary point of interest is how the CW equations perform when the assumption of a circular chief orbit is broken. Figure 28 depicts the error introduced by the HCW equations at a 100 meter relative distance for increasing chief orbit eccentricities.

Figure 28. HCW Equation Eccentricity Uncertainty

The model error introduced by the HCW equations over 100 meters grows very significantly as eccentricity increases. The amount of acceptable error will need to be better understood before deciding whether the HCW or TH model is needed, should a linear state estimation method be pursued. Of the two linearized sets, the HCW equations are preferable because they do not depend on any time-varying values when calculating relative motion. With a better understanding of how the different dynamics models affect the selection of a state estimation method, the following pros and cons for each estimation method are defined with reference to the project objectives.

5. Trade Study Process and Results

In order to develop useful trade studies, the VISION team identified the largest decisions affecting the Critical Project Elements (CPEs) identified in the team's PDD. Each CPE was considered in order to increase the likelihood of project success. The team wrote and verified the functional and derived requirements of the project before choosing the metrics by which to conduct each trade study, so that VISION could choose metrics directly derived from the


requirements verified by the customer. After the metrics were chosen, the team assigned a weight to each metric by deciding which metrics were most important to meeting the CPEs and contributed the most to mission success. Metrics more important to the customer were also given higher weights.

VISION decided that cost would not be a metric for comparing alternatives in the trade studies. The rationale was that the team wanted to determine the best engineering solution to each CPE. The prices of the solutions were compared afterwards, and the final decision was based on what was feasible given the budgetary constraints. If a chosen solution were outside the scope of the budget, a less optimal alternative would have to be chosen.

Once all the metrics and weights were chosen, the performance values of the alternatives were entered. The team converted the performance values to a scale of 0 to 10 [13]: the highest-performing alternative received a score of 10, the lowest-performing alternative a score of 0, and each intermediate alternative received a score interpolated linearly between the best and worst alternatives. This is described by Equation (1).

Score = 10 × (intermediate performance − lowest performance) / (highest performance − lowest performance)    (1)

The resulting scores were then multiplied by the applicable weight for the given metric to yield the points for that metric, and the points from all metrics were totalled for each alternative. The team decided that the highest point sum would be the best option for the project; the next-best options would only be chosen in the instance of a budgetary constraint.
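The scoring and weighting scheme described above can be sketched as follows; the metric names, raw values, and weights in the example are hypothetical, not taken from the actual trade matrices.

```python
def trade_score(value, worst, best):
    """Linearly map a raw performance value onto the 0-10 scale of Equation (1).

    'worst' and 'best' are the raw values of the lowest- and highest-performing
    alternatives; for metrics where smaller is better (e.g. volume), 'worst' is
    simply the larger raw number.
    """
    return 10.0 * (value - worst) / (best - worst)

def weighted_total(scores, weights):
    """Multiply each 0-10 score by its metric weight and sum the points."""
    return sum(scores[m] * weights[m] for m in weights)

# Hypothetical example: volume in cm^3 (smaller is better), power in W
weights = {"volume": 0.15, "power": 0.15}
scores = {
    "volume": trade_score(350.0, worst=600.0, best=100.0),  # -> 5.0
    "power":  trade_score(4.0, worst=10.0, best=2.0),       # -> 7.5
}
total = weighted_total(scores, weights)  # 0.15 * 5.0 + 0.15 * 7.5 = 1.875
```

Note that the same function handles both "larger is better" and "smaller is better" metrics, since the worst raw value is always mapped to 0 and the best to 10.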

5.1. Sensor Suite Trade Study

The choice of sensor suite is integral to the success of the project. Driven by requirements FR.1, FR.2, FR.3, FR.5, and FR.6, there is clearly a heavy reliance on the accuracy of the system, and there is a trade-off between accuracy on the one hand and power and size on the other. Accurate choice of weights is therefore critical to the validity of this trade study. Additionally, more complex systems, such as the Dual Visual and ToF approach, often deliver higher accuracy or coverage; these benefits are hindered by their exceedingly long development time, which could prevent VISION from completing the requirements. These trade-offs are displayed in Figures 29 and 30. Table 5.1 shows the rationale and measurement determination for each metric used in the trade study.

Metric: Volume — Driving Req's: DR-5.1, DR-5.2, DR-5.3, DR-6.1 — Weight: 15%
Rationale: The space the sensors occupy must be considered early in the design process. The entire system must fit within a single small package. Larger sensor systems are not preferred, as they may take available space away from other valuable subsystems, such as the computing unit or other electronics. Volume is measured in cm3, with data taken from VANTAGE's ToF and visual sensors. A safety factor was applied to the lens choice for the wide angle and visual options to account for models that vary greatly in size.

Metric: Power — Driving Req's: DR-5.4, DR-6.1, DR-6.3 — Weight: 15%
Rationale: Sensors will account for a large amount of the system's power draw during the data gathering process. The power draw from the sensor array must be minimized to meet the requirements of as many launch providers as possible. Additionally, higher-power sensors give off more heat, which must be dissipated. Power is measured in Watts, with data taken from VANTAGE's ToF [22] and sensor data sheets [23].


Metric: Short Distance Accuracy — Driving Req's: DR-1.1, DR-1.2, DR-2.1, DR-2.2, DR-2.3, DR-2.5, DR-2.6, DR-3.1 — Weight: 30%
Rationale: Observations must be made soon after deployment so that the CubeSats may be differentiated before overlap occurs. As the CubeSats drift farther away, compression makes them appear closer together, which hurts the system's ability to identify individual CubeSats and increases the error in velocity measurements. Additionally, short distance observations will aid deployment verification. The metric for short distance accuracy depends on the sensors' ability to detect the distance of overlapping CubeSats, and on the overall accuracy for a single CubeSat at a distance of 3 meters.

Metric: Long Distance Accuracy — Driving Req's: DR-2.1, DR-2.2, DR-2.3, DR-2.5, DR-3.1 — Weight: 15%
Rationale: Measurements at a longer range will increase the time over which observation can occur. This will improve the overall accuracy of the system by providing more data points. Long distance accuracy is an approximate measure of the sensor suite's ability to detect the relative position of a CubeSat at 40 meters.

Metric: Implementation Time — Driving Req's: n/a — Weight: 25%
Rationale: In order to have a successful project, development must take place within a reasonable time period. This is approximately measured in hours to develop, based on algorithm complexity and the resources available, such as the work inherited from VANTAGE.

Table 1. Weighting of Sensor Suite Trade Study Metrics

5.1.1. Trade Study Results

Figure 29. Sensor Suite Trade Study Results (Part 1)

Figure 30. Sensor Suite Trade Study Results (Part 2)


For the sensor suite trade study, each metric was evaluated on a linear scale, where the worst and best options were assigned scores of 0 and 10 respectively. For example, the best volume choice was the single visual camera, determined to have an approximate volume of 100 cm3 [23], while the worst option was the Dual Visual with a ToF camera, which required approximately 600 cm3 [22][23]. The Single Visual option was assigned the best possible score of 10, the Dual Visual with ToF a score of 0, and every other design a score on a linear scale between the two, according to the technique described in the preceding section.

The best design option according to the sensor suite trade study is the Wide Angle Visual Camera with Time of Flight. It is the easiest to implement, as the inherited VANTAGE system was based on this design, and it scores highly in short distance accuracy. These two metrics carry the highest weights, so scoring highly there benefits this solution. It does, however, require a large amount of space compared to a single small visual camera, and it has a high power draw. A single visual camera came in a very close second, scoring highly for volume and power. However, it loses out on accuracy, as this approach requires no overlap between CubeSats. The staggered dual visual option was anticipated to score higher, but it suffered from the same drawback as the single visual: it is crucial for visual sensors to see an entire edge to determine distance. This drawback causes these two approaches to lose out to the Time of Flight solutions, which are more robust.

Finally, the Stereoscopic, Dual Visual with Time of Flight, and Telephoto Visual with Time of Flight options ended up very low on the list and are eliminated from further consideration. Stereoscopic scored the lowest on implementation time and on both long and short distance accuracy.
Dual Visual with Time of Flight uses too much power and is very large, making it a cumbersome solution to the problem. Telephoto Visual with Time of Flight also lost out due to its low short distance accuracy, because a CubeSat would have to move far from the deployer before entering the camera's field of view.

5.2. Structural Interface Method Trade Study

5.2.1. Metric Weight Determination

Table 5.2.1 shows how each metric was derived from the design requirements. The table explains and justifies each metric, indicates which derived requirements drive it, and gives the weight determined for each metric.

Metric: Required Volume — Driving Req'ts: DR.4.1, DR.5.2 — Weight: 10%
Rationale: The chassis must have a volume that can contain all components necessary to operation. However, the design must not be too voluminous, since potential launch providers have limited space availability.

Metric: Tolerancing and Manufacturability — Driving Req'ts: DR.5.1, DR.5.2 — Weight: 5%
Rationale: In order to meet the requirements of interfacing with deployers, the manufacturing of this chassis needs to be within ±0.005" tolerance. The team must also consider cost restraints when looking at manufacturing complexity, and anticipates limiting manufacturing to available resources.

Metric: Development Time — Driving Req'ts: DR.4.1, DR.5.1, DR.6.1, DR.6.2 — Weight: 20%
Rationale: Short development time is important, as designs need to be done by the end of the semester.

Metric: Universality — Driving Req'ts: DR.5.1 — Weight: 20%
Rationale: How easily can this design be changed to fit multiple launch providers?

Metric: Structural Integrity — Driving Req'ts: DR.5.1, DR.6.1, DR.6.2 — Weight: 15%
Rationale: The design must advance TRL, so it must be structurally stable during launch and in the operational environment.

Metric: Opportunity Cost — Driving Req'ts: DR.5.1 — Weight: 15%
Rationale: How much does the design cost the deployer (e.g., taking up a launch tube)? How easy is it to interface?

Metric: Thermal Management — Driving Req'ts: DR.4.1, DR.5.1 — Weight: 15%
Rationale: Heat generated by the electronic components must be managed to prevent overheating.


5.2.2. Trade Study Results

Figure 31 displays the key options for the physical design of the structural chassis. Each design alternative was evaluated for its performance on the assigned metrics. The overall best design option is an externally mounted chassis. This design is much more versatile and could be incorporated with multiple launch providers without as much design change as the tube replacement design. Furthermore, the external mount would not take up payload space from deployment providers, which makes it more attractive to possible partners. The second-highest design option is the tube replacement, but it falls short on opportunity cost, as it would displace up to six 1U CubeSat customers. Internally mounted is the third-best option. This design would be limited in both the volume that would be usable and the tolerances that the system would have to meet. The system would sit inside the deployment provider and would therefore have limited ability to use a radiator to manage its temperature. The lowest-performing option is a design that could mount either internally or externally. This design combines the most negative aspects of the other options, as it shares their limits; in particular, its development time is a combination of the other designs'. This alternative would carry the most risk and complexity and is eliminated from consideration.

Figure 31. Structural Interfacing Method Trade Study Results

5.3. Embedded Systems Trade Study

5.3.1. Measure of Weighting

The critical elements driving the embedded system trade study are image processing and communication with the deployer. Therefore, CPU performance, operating system, and hardware components are significant to this project. The weights are determined based on the rationale shown in the table below.

Metric: Integration — Driving Req's: DR 5.4 — Weight: 15%
Rationale: Integration refers to the amount of time required to integrate the embedded system with the sensors, power, and the data interface with the deployer. It is assigned a relatively high weight because the hardware should be ready before the first test, giving the software more time to accommodate or change.


Metric: Performance (RAM, CPU and GPU) — Driving Req's: DR 5.5, DR 4.2 — Weight: 30%
Rationale: This project is software heavy; its main function is to take images from the sensors and process them to estimate the TLEs. The success of this project therefore depends heavily on the performance of the embedded system. This metric was broken into sub-metrics to evaluate the overall performance of each board. The system requires a great amount of processor memory, data memory, and processing speed: with a quick processor and a large data memory bank, the system can run fast with multiple variables and temporary storage. Processing speed also reduces software development time by enabling faster iterations on debugging and image processing. Performance is therefore highly weighted.

Metric: Storage Capability — Driving Req's: DR 4.4 — Weight: 10%
Rationale: The system needs to provide proof of deployment at the customer's request, so it must be able to store multiple images and videos of at least one deployment cycle for transmission to the deployer. High-quality evidence requires a large amount of memory; if storage is limited, the team can store fewer images or videos. Storage capability is therefore assigned a low-middling weight.

Metric: Heat Generation — Driving Req's: DR 5.5 — Weight: 15%
Rationale: A major requirement for this project is readiness for the flight experiment. In the space environment, generated heat cannot be dissipated through the air as it is on Earth, and if heat generation is too high, the electronic components and sensors are at risk, which could compromise the whole system. Heat generation is therefore assigned a high-middling weight.

Metric: Power Consumption — Driving Req's: DR 4.3, DR 4.2 — Weight: 10%
Rationale: The embedded system is expected to be one of, if not the, greatest power draws of the project payload, while NanoRacks has informed the team that 520 W is the maximum allowable power draw. This relatively low weight reflects the team's large power budget.
Metric: Interchangeability — Driving Req's: DR 6.3 — Weight: 20%
Rationale: A major requirement for this project is readiness for the flight experiment at the customer's request, so the embedded system needs to be space-rated. However, the team is not able to purchase space-rated hardware within the limited budget. Therefore, interchangeability with space-rated hardware is highly weighted.

5.3.2. Trade Study Background

Each measure of success for the embedded system is scored on a range from 0 to 10 using the system described in the introduction. As mentioned before, one of the most significant factors in choosing the embedded system is its performance in processing images and transferring deliverables to the deployer. This measure of success is driven by random access memory (RAM), GPU, and CPU performance, so the performance metric is broken into three sub-categories, each with its own scoring system. Each sub-category is equally influential to the board's overall effectiveness, so the three are averaged to obtain an overall performance score. The first sub-category ranges from 0 GB to 32 GB of RAM; 32 GB was chosen because it is currently the highest RAM available on the market. The GPU sub-category uses a rank based on a professional website that ranks GPU performance [17]; the best and worst performances are set to the minimum and maximum ranks, from 1 to 468. The CPU sub-category uses the GeekBench benchmark, a program that runs a series of tests on a processor and times how long they take to complete; the quicker the CPU completes the tests, the higher the GeekBench score [18]. The range is based on the highest and lowest CPU benchmark scores among the options. Another important


metric is storage capability. An ideal board would contain more than 128 GB of memory, giving the software team enough space to record and store high-quality video over a deployment cycle. Additionally, heat generation is a significant factor for a system operating in space. Although the team will devise a method to dissipate heat through a radiator, the device with minimal heat generation is still preferred; the range is based on the highest and lowest heat generation of the four trade study options. The same holds for power consumption: the device that draws the least power is preferred, and the range is half of the power draw described by the customer's requirement. The last measure of success is interchangeability with space-rated hardware. The ideal board would already be space-rated and would not need modification before being integrated into a future space-ready system; the least desirable board would be one that meets all requirements but could not be converted to a space-ready system.

Metric               Relevant Units   Worst Performance (0)                    Best Performance (10)
Integration          Weeks            5                                        1
Performance: RAM     GB               0                                        32
Performance: GPU     Rank             468                                      1
Performance: CPU     Benchmark        3661                                     5732
Storage Capability   GB               0                                        128
Heat Generation      W                100                                      10
Power Consumption    W                260                                      0
Interchangeability   Capabilities     Inconsistent with space-rated hardware   Known to be space-rated

Table 2. Trade Study Background
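The performance sub-metric averaging described above can be sketched as follows, using the ranges from Table 2. The board values in the example are hypothetical, not those of any candidate in the trade study.

```python
def linear_score(value, worst, best):
    """Map a raw value onto 0-10, with 'worst' -> 0 and 'best' -> 10 (Eq. 1)."""
    return 10.0 * (value - worst) / (best - worst)

# Sub-metric ranges from Table 2
RAM_RANGE = (0.0, 32.0)        # GB
GPU_RANK_RANGE = (468.0, 1.0)  # a lower rank number is better
CPU_RANGE = (3661.0, 5732.0)   # GeekBench score

def performance_score(ram_gb, gpu_rank, cpu_bench):
    """Average the three equally weighted performance sub-scores."""
    sub_scores = [
        linear_score(ram_gb, *RAM_RANGE),
        linear_score(gpu_rank, *GPU_RANK_RANGE),
        linear_score(cpu_bench, *CPU_RANGE),
    ]
    return sum(sub_scores) / len(sub_scores)

# Hypothetical board: 16 GB RAM, GPU ranked 100th, GeekBench score of 5000
score = performance_score(16.0, 100.0, 5000.0)
```

Because the worst and best raw values are passed explicitly, the same mapping handles sub-metrics where smaller raw values are better (GPU rank) and where larger values are better (RAM, CPU benchmark).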

5.3.3. Trade Study Results

The following table presents the individual scores for each measure of success for each design option in the trade study. The results are presented in the same manner as the previous trade studies for consistency. Again, the highest point sum is the preferred choice for VISION.

Figure 32. Electrical Trade Study

5.4. Flight Software Programming Language Trade Study

5.4.1. Metric and Weight Determination

The rationale for the weighting of the metrics is shown in Table 5.4.1. Execution time refers to the overall run-time of a specific function in each language. Learning time covers the team's effort in learning a language's syntax and operations. Development time considers factors like debugging, readability, and writability. For a short-term project such as this, development time is more important than run time: a finished product is better than a fast but incomplete one. Libraries/packages are the resources available in a language's ecosystem, which can greatly


aid the research and development process when information is easily obtained and referenced. Embedded system compatibility refers to the programming language's ability to generate and implement code on a mobile computational system, which is the main processing component of VISION.

Metric: Execution Time — Driving Req's: FR.2, FR.3 — Weight: 25%
Rationale: Execution time is a critical part of the mission. Due to the requirement to send deliverables to the deployer before the subsequent deployment, the faster the code can run, the better.

Metric: Learning Time — Driving Req's: FR.2 — Weight: 15%
Rationale: Learning time only affects the first few weeks of code development, so it is weighted less heavily than other factors. Similarity to languages already known by the team will decrease the time spent learning the language and allow more time for writing the actual code.

Metric: Development Time — Driving Req's: FR.2 — Weight: 30%
Rationale: Development time is the biggest factor in VISION's ability to succeed and fulfill the functional requirements. This focuses on the ease of working with the language and will affect the speed at which code is written for the duration of the project. Factors such as readability and visualization, debugging, and compatibility with different environments affect this parameter.

Metric: Libraries — Driving Req's: FR.1, FR.2 — Weight: 15%
Rationale: The number of libraries available in each language is very important to the capabilities of the code, and Python and C++ simply have more libraries available than MATLAB. However, all three languages can accomplish this project with the available packages, so since this metric is not mission critical it is only weighted 15%.

Metric: Embedded System Compatibility — Driving Req's: FR.4, FR.6 — Weight: 15%
Rationale: Only certain embedded boards are capable of running MATLAB (through Simulink), while most can run Python and C++. The code being able to run on the computing board is very important; however, the electronics requirements will drive the choice of computer, which makes compatibility a smaller software consideration.

Table 3. Weighting of Programming Language Measures

5.4.2. Trade Study Results

The scores for each metric were assigned by giving the best language in each category 10 points, the worst 0 points, and the third a value in between. Quantifiable performance values were hard to find for the metrics driving this trade study, but sufficient research was done to give each language an accurate score. Each score is then multiplied by its weight to get the points for that metric, and the points are summed for the overall score. The same process is repeated for every metric, and the total reflects the team's decision on the flight software programming language. All of this is shown in Figure 33.

For execution speed, C++ has the best overall run-time, receiving a score of 10, multiplied by the weight of 25%, for 2.5 points. Python has the second-best run-time and is somewhat closer to C++ in this category, so it receives a score of 6 (1.5 points). MATLAB has the worst run-time, leaving it with a score of 0 and 0 points. MATLAB has the best learning time because the team is proficient in the language and little to no time would be spent learning it. C++ has the worst learning time because it is a compiled language known for its complex syntax and difficult readability, and it is prone to errors and bugs for those reasons. Python syntax is fairly similar to MATLAB, though debugging is slightly harder; Python is also an interpreted language and structures code with block indentation, further differences from MATLAB that contribute to a learning time score of 7.


Development time is the biggest factor and carries the heaviest weight. Python has straightforward, readable syntax and can be written fairly quickly. C++ has the worst development time for the same reasons as its slow learning time. Learning a new programming language and coding a whole project in 9 months is a lofty goal, and the team's time would be better spent actually writing code. MATLAB's programming environment is very well documented and easy to work with; however, anything outside numerical computation (such as data structure handling) is very inefficient and difficult to implement in MATLAB.

Python has the biggest open source ecosystem, which means it has the most available libraries and packages for almost any purpose. Popular and useful packages include NumPy, SciPy, scikit-learn, Cython, and Matplotlib; these can give Python an advantage over other languages in specific areas. MATLAB has the fewest available libraries because it does not operate in an open source ecosystem and cannot easily extend its capabilities beyond mathematical matrix computation. C++ sits between the two: it is open source, but has fewer libraries than Python.

Finally, C/C++ are the most commonly used languages in embedded systems because of their fast run-time, dominating roughly 95% of the embedded systems market. While Python cannot run as fast, it is the fastest-growing language in embedded computing due to its popularity, which means many commercial off-the-shelf (COTS) computing boards can run Python. Even though MATLAB is not a production-ready language, it can translate code into compact languages such as C, C++, and HDL that can run on embedded systems. However, these conversions mainly work on the algorithmic portions of scripts and not on other functions such as image processing.
This defeats the purpose of writing in a different language and leaves MATLAB as the worst choice for the embedded system.

Figure 33. Programming Language Trade Study Results

5.5. State Estimation Method

One of VISION's primary objectives is to report a TLE for each deployed CubeSat. To perform this function, VISION will use its own inertial state and the relative states of the CubeSats. Obtaining accurate relative state estimates starts with the data collected by VISION's sensors, from which relative positions are extracted using image processing techniques. Both of these processes introduce significant uncertainty into the calculated relative position, which grows as the CubeSats drift farther from the deployer. This error directly impacts the accuracy of the calculated TLEs and, if large enough, can cause the TLEs to impede ground-based tracking capabilities by providing incorrect orbit information. To minimize the error introduced by VISION's sensors and image processing algorithms, a more robust state estimation method is required. This will enable VISION to take advantage of a priori knowledge of the system dynamics and to filter all sensor data to minimize the effects of sensor noise. The filter used will greatly affect the computational resources required by VISION, the optimality of the estimate, and the resources invested by the software team to develop a state estimation algorithm. For these reasons, a range of state estimation algorithms and filtering methods is explored [9]. To quantify the efficacy of each state estimation method against VISION's requirements, the following metrics are considered: robustness, optimizability, learnability, and computational expense. A discussion of each estimation method with respect to the different metrics follows.

5.5.1. Robustness

Robustness is defined as a filter's ability to produce accurate estimates despite noisy measurements and model error. Measurement error can take the form of Gaussian noise, colored noise, or bounded noise, and the type of noise associated

09/30/19 39 of 46 CDD

with the measurement affects how it is handled by the estimation algorithm. Model error is introduced by using a dynamical model that does not perfectly describe the measured system, for example by assuming a nonlinear system is linear or that the chief orbit is circular when it is in fact not. Some estimation methods account for model error more effectively than others. Robustness is closely coupled with dynamics model selection and sensor selection.

Weighted and Recursive Least Squares: Least squares methods are not technically robust. The filter's design cannot distinguish between types of noise, so certain noise profiles have the potential to cause large errors, especially when the noise is not zero-mean and Gaussian. Therefore, these filters are assigned very low scores in this category.

Classical Kalman Filter (CKF): The CKF can be fairly robust for linear processes and somewhat workable for certain nonlinear processes. Sensor noise profiles must be verified to ensure appropriate filter design. Additionally, divergence-correction methods may need to be applied, which would increase the complexity of implementation. Therefore, the CKF is assigned a fair score in this category.

Extended Kalman Filter (EKF): The EKF is similarly robust to the CKF, but its robustness extends to nonlinear systems, giving it a slight edge over the CKF in its range of applicability. Therefore, the EKF is assigned a high score in this category.

H∞ Filter: The H∞ filter is extremely robust, applicable to nonlinear systems, and able to handle poorly-known dynamics. However, since our system is fairly well known, this advantage is not given much consideration. The filter's ability to limit the frequency response of the estimator also gives it a slight advantage over the EKF. Therefore, the H∞ filter is assigned a very high score in this category.
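The sensitivity of least squares methods to non-zero-mean noise can be made concrete with a minimal sketch: a constant bias on the measurements passes straight into the least squares estimate of a constant state. The numbers below are arbitrary illustrative values, not VISION sensor characteristics.

```python
# Least squares estimate of a constant state from biased measurements.
# For a constant, the least squares solution reduces to the sample mean,
# so any measurement bias appears one-for-one in the estimate.
true_state = 5.0
bias = 0.4                                  # hypothetical non-zero-mean noise
meas = [true_state + bias for _ in range(100)]
estimate = sum(meas) / len(meas)            # biased by exactly +0.4
```

A filter that models the noise (or a robust method) is needed to reject this kind of error; plain least squares cannot.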

5.5.2. Optimizability
Optimizability is defined as the state estimation algorithm's ability to provide an optimal estimate. Optimality is difficult to quantify, but different estimation methods can provide what is considered an optimal estimate for different types of systems. The method's ability to calculate gains that minimize the estimate covariance is a major factor in its optimizability. Method optimality will affect dynamics model selection and statistical confidence in relative state estimates.

Weighted Least Squares: Weighted Least Squares is highly optimal for linear systems where the measurement noise is zero-mean, white, and Gaussian. Its accuracy can be significantly increased if the relative sensor reliability profile is reasonable. Therefore, Weighted Least Squares is assigned a fair score in this category.

Recursive Least Squares: Recursive Least Squares is also highly optimal for linear systems where the measurement noise is zero-mean, white, and Gaussian. However, the filter has very little modification and tuning potential and cannot be tailored to specific use cases. Therefore, Recursive Least Squares is assigned a low score in this category.

Classical Kalman Filter: The CKF is the technically optimal solution for linear systems where the measurement noise is zero-mean, white, and Gaussian. It is also technically the optimal linear solution for nonlinear systems with zero-mean, white noise. Therefore, the CKF is assigned a high score in this category.

Extended Kalman Filter: The EKF is the technically optimal solution for nonlinear systems with zero-mean, white noise. It produces very similar results to the CKF for linear systems. However, this category does not consider range of applicability across types of systems. Therefore, the EKF is also assigned a high score in this category.

H∞ Filter: The H∞ filter produces highly optimal solutions for nonlinear systems with significant noise, making it more tunable to the use case than the EKF, and it therefore generally produces more optimal solutions. Therefore, the H∞ filter is assigned a very high score in this category.
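As an illustration of the batch approach discussed above, the following sketch computes the weighted least squares estimate for a linear measurement model y = Hx + v with per-measurement weights. The measurement matrix, weights, and state values are arbitrary illustrative choices, not VISION's actual sensor model.

```python
import numpy as np

def weighted_least_squares(H, y, w):
    """Batch WLS estimate: x_hat = (H^T W H)^{-1} H^T W y.

    H : (m, n) measurement matrix
    y : (m,)   stacked measurements
    w : (m,)   per-measurement weights, e.g. 1/sigma_i^2
    """
    W = np.diag(w)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

# Toy example: recover a 2-state vector from three noiseless measurements
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
x_true = np.array([5.0, -2.0])
y = H @ x_true                       # noiseless synthetic data
w = np.array([1.0, 4.0, 1.0])        # hypothetical sensor reliabilities
x_hat = weighted_least_squares(H, y, w)
```

With noiseless data the estimate is exact regardless of the weights; the weights matter only in how noise on each measurement is traded off.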

5.5.3. Learnability
Learnability is defined as the software team's ability to learn and develop the state estimation method being used. It is critical to address the team's lack of experience with advanced state estimation methods and to incorporate that into the selection process. Learnability is quantified by the expected development time required to produce a functioning algorithm and by the time required to learn the fundamental principles needed to design the specific algorithm. Learnability is important in ensuring that the method used is scoped appropriately.

Weighted Least Squares: As a batch method, WLS processes all of the data at once. This makes it very simple to implement; most of the setup complexity comes from properly assessing the weights assigned to each sensor rather than from the algorithm itself.

Recursive Least Squares: RLS builds on the basic least squares method by making it recursive, which increases the complexity of the algorithm.

Classical Kalman Filter: The CKF builds on RLS by adding the ability to adapt to incoming data by propagating the mean and covariance through time, further increasing the complexity of the algorithm.


Extended Kalman Filter: The EKF builds on the CKF by adapting it to handle nonlinear systems, linearizing them at each step, thus increasing complexity.

H∞ Filter: The H∞ filter is an extension of the EKF supported by complex and abstract theory. Thus, this filter is even more complex than the EKF.
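To make the jump in complexity from batch to recursive methods concrete, the sketch below implements a single recursive least squares update for a scalar measurement. The prior covariance, measurement vectors, and noise variance are illustrative placeholders, not values from this study.

```python
import numpy as np

def rls_update(x, P, h, y, r):
    """One RLS step for a scalar measurement y = h @ x + v, Var(v) = r."""
    S = h @ P @ h + r              # innovation variance (scalar)
    K = (P @ h) / S                # gain vector
    x_new = x + K * (y - h @ x)    # correct estimate with new measurement
    P_new = P - np.outer(K, h @ P) # shrink covariance
    return x_new, P_new

# Sequentially estimate a constant 2-vector from scalar measurements
x_true = np.array([3.0, -1.0])
x = np.zeros(2)
P = np.eye(2) * 1e6                # large prior uncertainty
for h in [np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([1.0, 1.0])]:
    y = h @ x_true                 # noiseless synthetic measurement
    x, P = rls_update(x, P, h, y, r=1e-6)
```

Unlike the batch WLS form, each measurement is folded in as it arrives, which is the structural step the CKF then builds on.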

5.5.4. Computational Expense
Computational expense is defined as the time required to compute the state estimates and the memory required to store intermediate calculations and measurement data during the estimation process. Different methods require varying amounts of computational resources. For example, batch methods require all measurement data to be collected and stored in memory prior to processing, while nonlinear filters require the linearization of the model equations of motion at each step, requiring more computation time. Computational expense will directly affect the processing unit selected and the amount of memory it must have.

Weighted Least Squares: The WLS algorithm is very simple. However, it requires that all of the data be saved before processing begins, greatly increasing its overall computational expense.

Recursive Least Squares: The RLS algorithm is more complex than WLS. However, because it does not require saving all of the data, its overall computational expense will generally be lower.

Classical Kalman Filter: The CKF algorithm builds on RLS, increasing complexity and computational expense.

Extended Kalman Filter: The EKF algorithm builds on the CKF, further increasing complexity and computational expense.

H∞ Filter: The H∞ filter is extremely complex, and therefore very computationally expensive.
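The memory difference between batch and recursive processing can be sketched with a toy problem, estimating a constant from a stream of measurements. Both produce the same estimate, but only the batch version must hold the entire data set in memory before processing.

```python
def batch_mean(measurements):
    """Batch estimate: every sample must be stored before processing."""
    data = list(measurements)      # O(N) memory
    return sum(data) / len(data)

def recursive_mean(measurements):
    """Recursive estimate: constant memory, updated as data arrives."""
    est, k = 0.0, 0
    for y in measurements:         # O(1) memory
        k += 1
        est += (y - est) / k       # incremental update, no history kept
    return est

data = list(range(1, 101))
```

The two estimates agree, so the choice between them is driven by memory and latency, exactly the trade discussed above.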

5.5.5. Weight Assignments
The following table provides the rationale for how weights were assigned to the metrics used. Weights are assigned to each metric such that the metric with the highest impact on the mission objectives carries the most weight in the trade study calculations.

Metric: Robustness
Driving Req's: DR-2.2, DR-2.3, DR-2.5
Weight: 30%
Rationale: Robustness carries one of the heaviest weights because a robust estimation method enables a larger selection of feasible sensors and gives more flexibility in choosing among the available dynamics models, including a linear model.

Metric: Optimizability
Driving Req's: DR-2.2, DR-2.3, DR-2.5
Weight: 20%
Rationale: Optimizability is weighted based on the accuracy needed to produce TLE that can aid ground-based tracking of deployed CubeSats. Though the estimates do need to be accurate, they do not need to reach the accuracy most methods can be optimized to, as those methods are designed for navigation algorithms requiring higher accuracy than VISION needs.

Metric: Learnability
Driving Req's: DR-2.1
Weight: 30%
Rationale: Learnability is weighted to reflect the importance of reasonably scoping the work required to design and implement the selected estimation method. This is largely influenced by the high complexity of some methods and the team's experience.

Metric: Computational Expense
Driving Req's: DR-2.6
Weight: 20%
Rationale: Computational expense is weighted to reflect the importance of providing fast TLE using a reasonable amount of memory. Though it is critical to provide TLE quickly, this is not the leading design driver because most methods can process the data within a time conducive to mission success.
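The weighted-sum scoring implied by the table can be sketched as follows. The per-metric scores shown are illustrative placeholders, not the study's actual values; only the weights come from the table above.

```python
# Metric weights from the trade study (must sum to 1).
WEIGHTS = {
    "robustness": 0.30,
    "optimizability": 0.20,
    "learnability": 0.30,
    "computational_expense": 0.20,
}

def trade_score(scores):
    """Combine 0-10 per-metric scores into a single weighted score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

# Hypothetical example scores for one candidate method.
example_scores = {
    "robustness": 6,
    "optimizability": 8,
    "learnability": 6,
    "computational_expense": 7,
}
```

A candidate scoring 10 on every metric would score exactly 10 overall, which is a quick sanity check on the weight normalization.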

5.5.6. Trade Study Results
The trade study found that the classical Kalman filter will most likely be the optimal method of state estimation for this project. The next-highest scores are held by the EKF and H∞ filter. Should a nonlinear dynamics model be required, the EKF (or another nonlinear Kalman filter variant) will likely become the optimal method of state estimation.


Figure 34. State Estimation Trade Study

6. Selection of Baseline Design

This section will analyze the trade studies performed in the previous section to determine the baseline design for the VISION project. It is important to note that cost was not taken into account when determining the baseline selection so that options could be assessed strictly on performance capabilities. A cost analysis will be performed as the project moves into the preliminary design stage. If the project is at risk of exceeding budget constraints, the cost analysis will be used to develop a contingency plan.

6.1. Sensor Suite Baseline Design
The sensor suite trade study reveals that the wide angle visual sensor paired with a Time of Flight (ToF) camera is the best design moving forward. It scored highly in short-distance accuracy and is adequate in long-distance accuracy as well. Its power draw is non-ideal, but the ToF camera's accuracy is worth the power cost. A close second is the single visual camera, which uses an entirely different approach to estimating distance, lowering its accuracy scores. A large reason the visual-and-ToF approach won is the work previously done by last year's VANTAGE team: their algorithms and testing were all developed with this approach, so development time and component integration for the VISION team are minimized. A major benefit of adding the ToF camera to the single visual camera is its ability to sense the depth of a partially visible CubeSat. During the deployment of multiple CubeSats, the first few CubeSats deployed will be partially obscured by those deployed after them. The ToF camera can provide an accurate distance measurement to any visible corner, whereas the visual camera must see an entire edge to determine distance, lowering its ability to capture an entire deployment.

6.2. Structural Interfacing Method Design
The first option, based on the trade study results, is to externally mount VISION onto the deployment system. This method allows for a very liberal volume constraint, costs the deployer very little in opportunity cost, and can use a thermal management system. This configuration also allows more freedom in camera and sensor placement, as it is not constrained by the volume of the launch deployment system. However, much is still unknown about the structural interfacing opportunities for this external method and how universal it is, if at all. Therefore, the external method will be the baseline design until more information is received from deployers; in the case it is not possible, an internal system will be the off-ramp design choice, because it is easier to manufacture a standard CubeSat chassis than an entirely custom one.

6.3. Embedded System Baseline Design
The chosen option from the embedded system trade study is the Intel NUC8i7BEH, with a score of 6.595, compared with the Intel NUC8i7HNK's 6.07, the Intel NUC8i7HVK's 5.545, and the NVIDIA Jetson Nano's 5.32. Its success can be attributed to relatively good performance, excellent power consumption and heat generation, and a short development time. VISION will move forward with the Intel NUC8i7BEH as the selection for a baseline design.

6.4. Flight Software Programming Language
Python handily won the programming language trade study. As the trade study shows, there are no major downsides to Python; however, there are some weaknesses, especially compared to C++. The biggest negative of Python was its


questionable embedded system compatibility; however, this can be negated quite easily by selecting a computing board known to be compatible with Python. To make the execution time of Python more comparable to that of C++, a language called Cython can be used. Cython brings the underlying speed of C/C++ to Python by producing extension modules in C that can be loaded into Python, avoiding much of the overhead inherent to Python. There is not much that can improve Python's learning-time score; however, it was not very low in the first place, coming in at 7. The team is comfortable taking on the commitment of learning Python due to its similarity to MATLAB and popularity amongst casual developers. While Python may hold the fewest top scores in this trade study, its high averages across each metric yield a result that is hard to argue against. The team is happy with this outcome and feels that Python is the right choice for this project.

6.5. State Estimation Baseline Design
The classical Kalman filter won the state estimation trade study. The primary challenges of selecting the Kalman filter are the need for a linear dynamics model and moderately difficult learnability. Using a linear dynamics model introduces very little model error over the data collection distances. The Kalman filter will be more difficult to learn and develop than the least squares methods; however, it provides significant gains over those options when it comes to robustness and optimizability. For these reasons the extra development time is expected to be worth the effort. The extended Kalman filter and the H∞ filter provide little gain in robustness and optimizability for VISION's application relative to their increased computational expense and difficult learnability. Using the classical Kalman filter leaves two options for the dynamics model: the HCW equations or the TH equations. The HCW equations would be the ideal model because they do not depend on inertial values that need to be updated at each time step. The limitation is that as the eccentricity of the deployer's orbit increases from zero, the HCW equations introduce significant model error. To accommodate general eccentricities the TH equations can be used; however, this comes with the drawback of needing the true anomaly at every estimation step. The implications of handling general eccentricities must be explored further leading up to PDR. The selected state estimation baseline design is to use the classical Kalman filter with either the HCW equations or the TH equations as the dynamics model.
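A minimal sketch of the baseline approach is given below: a discrete-time classical Kalman filter that propagates a relative state with HCW dynamics and updates with a camera-derived position measurement. The mean motion, time step, noise covariances, and the first-order (Euler) discretization of the dynamics are illustrative placeholders, not VISION's actual tuning; a flight implementation would likely use the exact HCW state transition matrix.

```python
import numpy as np

n = 0.0011   # mean motion [rad/s], roughly LEO (assumed value)
dt = 1.0     # filter time step [s] (assumed value)

# HCW dynamics matrix for state [x, y, z, vx, vy, vz]
A = np.array([
    [0,      0, 0,       1,    0,   0],
    [0,      0, 0,       0,    1,   0],
    [0,      0, 0,       0,    0,   1],
    [3*n**2, 0, 0,       0,    2*n, 0],
    [0,      0, 0,      -2*n,  0,   0],
    [0,      0, -n**2,   0,    0,   0],
])
Phi = np.eye(6) + A * dt                       # first-order discretization
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # camera measures relative position
Q = np.eye(6) * 1e-8                           # process noise (placeholder)
R = np.eye(3) * 1e-4                           # measurement noise (placeholder)

def kf_step(x, P, y):
    """One classical Kalman filter cycle: predict with HCW, update with position."""
    # Predict
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Noiseless demo: truth follows the same discretized dynamics
truth = np.array([1.0, 0.0, 0.5, 0.0, -0.002, 0.0])
x, P = np.zeros(6), np.eye(6)
for _ in range(50):
    truth = Phi @ truth
    x, P = kf_step(x, P, H @ truth)
```

With a matched model and noiseless measurements the position estimate converges onto the truth, which is a useful unit-test pattern before tuning Q and R against real sensor data.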

6.6. Summary
The baseline design consists of an Intel NUC8i7BEH central processing board. This board will run Python-based software and a wide angle visual sensor with a Time of Flight camera as the sensor suite. The VISION package will use a classical Kalman filter for state estimation, and the structure will be designed to externally integrate with the NanoRacks deployment system. Moving into the preliminary design stage of the project, the baseline design will be analyzed and evaluated for feasibility. In the event that any complications prevent VISION from executing this baseline design, other options will be considered from the previously completed trade studies.


7. Appendix

References

[1] Aboaf, A., Renninger, N., and Lufkin, L., "Design of an In-Situ Sensor Package to Track CubeSat Deployments," Proceedings of the AIAA/USU Conference on Small Satellites, FJR Student Paper Competition, 141, 2019. http://digitalcommons.usu.edu/smallsat/2019/all2019/141/

[2] Boylston, A., Gaebler, J.A., and Axelrad, P., "Extracting CubeSat Relative Motion Using In Situ Deployment Imagery," Proceedings of the 42nd Annual AAS Guidance and Control Conference, Breckenridge, CO, AAS 19-016, Feb. 2019.

[3] Fitzgerald, Joe, "Why Is There So Much TLE Confusion When New Cubesats Are Launched?" AMSAT, February 20, 2018. www.amsat.org/why-is-there-so-much-tle-confusion-when-new-cubesats-are-launched/

[4] Foust, Jeff. “More Startups Are Pursuing CubeSats with Electric Thrusters”. SpaceNews. July 23, 2018, from https://spacenews.com/more-startups-are-pursuing-cubesats-with-electric-thrusters/

[5] Gaebler, J.A., and Axelrad, P., "Improving Orbit Determination of Clustered CubeSat Deployments Using Camera-Derived Observations," Proceedings of the 42nd Annual AAS Guidance and Control Conference, Breckenridge, CO, AAS 19-041, February 2019.

[6] Jackson, Jelliffe, "Project Definition Document (PDD)," University of Colorado Boulder, retrieved August 29, 2019, from https://canvas.colorado.edu/

[7] Lan, W., Poly Picosatellite Orbital Deployer Mk. III Rev. E User Guide, California Polytechnic State University, revised March 4, 2014.

[8] Pandey, Parul, "10 Python image manipulation tools," opensource.com, March 18, 2019, from https://opensource.com/article/19/3/python-image-manipulation-tools

[9] Schaub, H., and Junkins, J.L., Analytical Mechanics of Space Systems, 2003.

[10] Wong, William G., "Python's Big Push into the Embedded Space," ElectronicDesign, August 29, 2018, from https://www.electronicdesign.com/embedded-revolution/python-s-big-push-embedded-space

[11] Wu, Elaine, "NVIDIA Jetson Nano Developer Kit Detailed Review," Seeed Studio Blog, April 3, 2019, from https://www.seeedstudio.com/blog/2019/04/03/nvidia-jetson-nano-developer-kit-detailed-review/

[12] “CubeSat Concept and the Provision of Deployer Services”. eoPortal Directory. Retrieved from: https://directory.eoportal.org/web/eoportal/satellite-missions/c-missions/cubesat-concept

[13] Van Atten, W., Systems Engineering Lectures, Sep. 2019.

[14] NVIDIA Jetson Nano User Guide. From: https://elinux.org/Jetson_Nano

[15] Intel NUC (Intel NUC8i7BEH, Intel NUC8i7HNK, Intel NUC8i7HVK) User Guides. From: https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits.html

[16] Xilinx FPGA Boards (Xilinx Zynq-7000 SoC ZC702, Xilinx Zynq UltraScale+ MPSoC) User Guides. From: https://www.mouser.com/Xilinx/Embedded-Solutions/Engineering-Tools/Programmable-Logic-IC-Development-Tools/_/N-cxcznZ1yzvvqx?P=1yzohtwZ1y8efd7&FS=True

[17] GPU benchmark rankings. From: https://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html

[18] "GeekBench" CPU benchmark. From: https://browser.geekbench.com/


[19] Image Processing Toolbox, MATLAB. From: https://www.mathworks.com/discovery/digital-image-processing.html

[20] Embedded Coder, MATLAB. From: http://msdl.cs.mcgill.ca/people/mosterman/presentations/date07/tutorial.pdf

[21] C++ OpenCV edge detection. From: https://docs.opencv.org/trunk/da/d22/tutorial_py_canny.html

[22] O3D313 Time of Flight Camera, ifm electronic, 2017. www.ifm.com/us/en/product/O3D313

[23] uEye Camera Manual, IDS. en.ids-imaging.com/manuals-ueye-software.html


8. Appendix A Trade Study Theory

8.1. General Relative Orbit Equations of Motion

\ddot{x} - 2\dot{f}\left(\dot{y} - y\frac{\dot{r}_c}{r_c}\right) - x\dot{f}^2 - \frac{\mu}{r_c^2} = -\frac{\mu}{r_d^3}(r_c + x) \quad (2)

\ddot{y} + 2\dot{f}\left(\dot{x} - x\frac{\dot{r}_c}{r_c}\right) - y\dot{f}^2 = -\frac{\mu}{r_d^3}\,y \quad (3)

\ddot{z} = -\frac{\mu}{r_d^3}\,z \quad (4)

8.2. Linearized General Relative Orbit Equations of Motion

\ddot{x} - x\dot{f}^2\left(1 + 2\frac{r_c}{p}\right) - 2\dot{f}\left(\dot{y} - y\frac{\dot{r}_c}{r_c}\right) = 0 \quad (5)

\ddot{y} + 2\dot{f}\left(\dot{x} - x\frac{\dot{r}_c}{r_c}\right) - y\dot{f}^2\left(1 - \frac{r_c}{p}\right) = 0 \quad (6)

\ddot{z} + \frac{r_c}{p}\,\dot{f}^2 z = 0 \quad (7)

8.3. Hill-Clohessy-Wiltshire Equations

\ddot{x} - 2n\dot{y} - 3n^2 x = 0 \quad (8)

\ddot{y} + 2n\dot{x} = 0 \quad (9)

\ddot{z} + n^2 z = 0 \quad (10)
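The HCW equations admit a well-known closed-form solution, which makes a convenient sanity check for any dynamics implementation. The sketch below codes that solution; the mean motion and initial conditions in the usage are arbitrary illustrative values.

```python
import math

def hcw_solution(x0, y0, z0, vx0, vy0, vz0, n, t):
    """Closed-form solution of the HCW equations for mean motion n.

    Returns the relative position (x, y, z) at time t given the
    initial relative position and velocity.
    """
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3*c)*x0 + (s/n)*vx0 + (2/n)*(1 - c)*vy0
    y = 6*(s - n*t)*x0 + y0 - (2/n)*(1 - c)*vx0 + (4*s - 3*n*t)/n*vy0
    z = c*z0 + (s/n)*vz0
    return x, y, z
```

Two quick checks follow from the equations: a pure cross-track offset oscillates at the mean motion (z flips sign after half an orbit), and a radial offset x0 produces the classic secular along-track drift of -12*pi*x0 per orbit.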

8.4. Tschauner-Hempel Equations

u'' - 2v' - \frac{3u}{1 + e\cos f} = 0 \quad (11)

v'' + 2u' = 0 \quad (12)

w'' + w = 0 \quad (13)

