
Expanded Tradespace Analysis and Operational Considerations for Reconfigurable Satellite Constellations

by Alexandra N. Straub

Submitted to the Department of Aeronautics and Astronautics in partial fulfillment of the requirements for the degree of Master of Science in Aeronautics and Astronautics at the Massachusetts Institute of Technology, May 2020.

© Massachusetts Institute of Technology 2020. All rights reserved.

Author: Department of Aeronautics and Astronautics, May 19, 2020
Certified by: David W. Miller, Professor of Aeronautics and Astronautics, Thesis Supervisor
Certified by: Daniel E. Hastings, Department Head of Aeronautics and Astronautics, Thesis Supervisor

Accepted by: Sertac Karaman, Chairman, Department Committee on Graduate Theses

Submitted to the Department of Aeronautics and Astronautics on May 19, 2020, in partial fulfillment of the requirements for the degree of Master of Science in Aeronautics and Astronautics

Abstract

Earth observation (EO) satellites provide helpful imagery to a variety of applications ranging from weather monitoring to agricultural support. Disaster response imaging is an essential but difficult application for EO to support. Reconfigurable satellite constellations (ReCon) provide a flexible solution to the challenge of quickly providing imagery of unknown locations. ReCon leverages the natural shift of ground tracks due to the disparity between the Earth's rotation and the period of an orbit to maneuver a constellation into repeating ground track (RGT) orbits. This maneuvering strategy relies on altitude changes to vary an orbit's ascending node and mean anomaly, which both dictate the location of a satellite's ground track. Altitude changes require significantly less fuel than plane changes. Using ReCon allows smaller constellations to provide high-performance imagery for disaster response at a lower cost.

This work explores additional trades for consideration when developing ReCon designs. The following explores satellite image scheduling techniques to further the efficacy of an Earth observation constellation. The scheduler incorporates agile satellites to add imaging targets outside of the satellite's nadir field of view. The ability of a satellite to slew to off-nadir targets is incredibly important when leveraging RGTs. Another design trade considered for ReCon is the propulsion system incorporated on the satellites. Performance and cost trades invoked when using electric propulsion instead of chemical propulsion are presented within the ReCon framework. This work presents recommendations and future considerations to inform future designers. An investigation into the potential use of staged and responsive launch options further explores flexible options. Using alternative launch strategies allows a program to leverage dropping launch costs and adapt to uncertain imagery demand.
The use of flexible options for EO satellite constellation design is vital in low Earth orbit as satellite technology improves and space becomes a more crowded domain.

Thesis Supervisor: David W. Miller Title: Professor of Aeronautics and Astronautics

Thesis Supervisor: Daniel E. Hastings Title: Department Head of Aeronautics and Astronautics

Disclaimer: The views expressed are those of the author and do not reflect the official policy or position of the United States Air Force, the Department of Defense, or the United States Government.

Acknowledgments

This research was conducted primarily under the support of the Massachusetts Institute of Technology Department of Aeronautics and Astronautics through a research assistantship. This research was also made possible through the support of the United States Air Force. I would like to thank the department for its support and both Dr. David Miller and Dr. Daniel Hastings for their guidance throughout the research process. Much thanks is given to the entire ReCon team and fellow members of the Space Systems Lab for their immeasurable help in navigating the research experience. This work would not have been possible without the endless support from my parents and sisters. Finally, I would like to thank my husband for his patience and support in my educational journey throughout his own rigorous training.

Contents

1 Introduction 15 1.1 History of Earth Observation Satellites ...... 16 1.2 ReCon Background ...... 18 1.3 Motivation ...... 29 1.4 Approach ...... 31

2 Literature Review 35 2.1 Constellation Scheduling ...... 35 2.2 Propulsion Systems ...... 42 2.3 Deployment Options ...... 44 2.4 Research Gap ...... 45

3 Scheduling Image Collection for ReCon Designs 47 3.1 Scheduling Background ...... 47 3.2 Methodology ...... 52 3.2.1 Imaging Metrics ...... 52 3.2.2 Multiple Pass Approaches ...... 53 3.3 Scheduler Results ...... 59 3.4 Implementation with Original ReCon Code ...... 65 3.5 Wildfire Case Study ...... 70

4 Implementation of Low-Thrust Propulsion Systems in ReCon Framework 75

4.1 Low-Thrust Propulsion Background ...... 75 4.2 Low-Thrust Reconfiguration Performance ...... 81 4.2.1 ROM to GOM ...... 83 4.2.2 Mass Tradeoffs ...... 88 4.3 Discussion on Feasibility ...... 94

5 Deployment Strategies for Reconfigurable Constellations 97 5.1 Flexible Systems ...... 97 5.2 Sensitivities of ReCon ...... 99 5.3 Staged Deployment of ReCon ...... 107 5.4 Responsive Launch for ReCon ...... 114 5.5 Challenges in Implementing Engineering Options ...... 119

6 Conclusion 121 6.1 Conclusions and Contributions ...... 121 6.2 Future Work ...... 124

List of Figures

1-1 Illustration of Classical Orbital Elements ...... 19 1-2 Results of Applying Varying Amounts of ∆푉 at 450 km Altitude Orbit 21 1-3 Ground Track Shifts in Response to Regional Event ...... 24 1-4 Illustration of Maneuver from GOM to ROM ...... 26 1-5 Performance of ReCon Designs against Equivalent Static Counterparts [34]...... 27 1-6 MIT ReCon Simulation Structure [34] ...... 28 1-7 Revisit Times for a Variety of Remote Sensing Applications [48] . . . 29 1-8 Thesis Areas of Investigation ...... 30

2-1 Satellite FOV vs. FOR ...... 36

3-1 Original Results vs. Replicated Results of Single Pass Capture Code [7] 49 3-2 Effects of Changing the Time Step-Size on Scheduler Performance . 50 3-3 Effects of Changing Imaging Integration Time on Scheduler Performance 51 3-4 Effects of Satellite Agility on Scheduler Performance . . . . . 51 3-5 Effects of Satellite Constraints on Average Revisit Times . . 53 3-6 Example of Multiple Pass Solution (Roll-Restricted) ...... 54 3-7 Computational Time Required for Exhaustive Approach ...... 55 3-8 Benefits of Using Folding Time Horizon ...... 58 3-9 ReCon Simulation Target Locations ...... 59 3-10 Image Captures for ReCon vs. Static Design ...... 60 3-11 Average Revisit Time for Regional Target ...... 60 3-12 Average Passes Before Revisit for all Targets ...... 61

3-13 Lower Bound of 95% Confidence Interval for Chance of Finding Optimal Path Given Varying Convergence Criteria ...... 63 3-14 Iterations Required to Achieve Varying Convergence Criteria . . . . . 64 3-15 Performance of Different Supernode Locations Given Specified Target Distribution ...... 67 3-16 Performance of Varying RGT Designs Using Random Target Locations with Same Supernode ...... 68 3-17 NASA Identification of Wildfire Locations: January 14th, 2020 [42]. 70 3-18 Mapping Fire Locations to Longitude and Latitude Points ...... 71 3-19 Capture Regions for Agile Imaging ...... 72 3-20 Capture Regions for Push-Broom Imaging ...... 72 3-21 Target Captures for Different Imaging Techniques ...... 73

4-1 Burn Sequence of Low-Thrust Transfer ...... 78 4-2 Relative Performance of Low-Thrust Propulsion Against Hohmann Transfers ...... 82 4-3 Phasing Angle Illustration ...... 84 4-4 Total Transfer Times for Various Phase Angles ...... 85 4-5 Propulsion System Tradespace for 60° Phasing Angle ...... 87 4-6 Mass Changes Based on RGT Altitude ...... 91 4-7 Mass Trades for Using Low-Thrust Propulsion ...... 92 4-8 Determining 훽 Requirements ...... 94

5-1 Use of Satellite Imagery in Emergencies [61] ...... 101 5-2 Probability Distribution of Average Demand for Imagery Per Year . . 102 5-3 Performance Sensitivities of Nominal Constellation Design ...... 105 5-4 Average Fuel Usage with Varying Response Times ...... 106 5-5 Fuel Usage for Different Sized Constellations ...... 107 5-6 Fuel Consumption of Constellation Comparison ...... 108 5-7 History of Large Launch Vehicle Prices [29] ...... 110 5-8 Projected Small Launch Vehicle Prices [45] ...... 111

5-9 Distribution of Constellation Cost ...... 113 5-10 Time of 45° Shift in Λ Given Varying ∆푉 Usage ...... 117 5-11 Average Revisit Time As Constellation Size Increases (moving down the graph) and Time to Reconfigure Increases ...... 118

6-1 Thesis Areas of Investigation Conclusions ...... 124

List of Tables

3.1 Results of 3 Day Schedule for Fire Case Study ...... 74

4.1 Summary of Current Propulsion Systems for Small Satellites [43] . . 77 4.2 Test Thruster Parameters ...... 77

5.1 Launch Provider Options in Analysis ...... 104 5.2 Cost Breakdown for Example Case ...... 109 5.3 Cost Variations in Example Case ...... 110

Chapter 1

Introduction

Earth Observation (EO) is one of the primary purposes of satellites in space [20]. These satellites provide important imagery and other observational data that positively influence multiple aspects of daily life, from predicting weather patterns to providing disaster relief [30, 55]. The increasingly global access to EO data supports a large range of applications important for solving global problems, such as improving agricultural efforts, tracking changes in land use, and monitoring climate change [58]. Disaster management is another commonly cited use of EO, but responders rarely rely upon it as a primary data source in relief efforts [19, 26]. While emergency responders appreciate the data they can get from satellite sources, it is often too expensive, slow, and inconsistent to be a primary source of mapping and imaging for real-time relief efforts. These delays and inconsistencies do not accurately reflect the potential capabilities of satellite constellations. Designing towards responsiveness, through reconfigurability, is a promising solution to this problem [7, 34, 47]. Although previous research shows reconfigurable designs yield higher performance than their traditional counterparts, there are still several open areas for exploration in this design space [34].

1.1 History of Earth Observation Satellites

The history of EO predates the first artificial satellites, presently the most common collectors of such imagery [20]. It was actually a hot air balloon that took the first aerial photographs of the Earth in 1858. The interest in taking pictures of the Earth from above continued with imagery taken from aircraft in conjunction with military campaigns in 1914. Shortly after, in 1929, Robert Goddard was able to image clouds by strapping a camera onto a rocket [22]. Yet, EO, as it is known today, originates with the launch of Sputnik 1 in 1957. The potential capabilities a bird's eye view from space provided were immediately evident. When the National Aeronautics and Space Administration (NASA) began in 1958, one of the explicit benefits of an orbiting satellite was "it can look down and see the earth as it has never been seen before" [57]. The first image of the Earth from an orbiting satellite came from Explorer 6 in 1959. In 1960, the Television and Infrared Observation Satellite 1 (TIROS 1) was launched as the first satellite entirely dedicated to one of the most common applications of EO: observing the weather [20].

One of the most prolific examples of an EO mission is the Landsat program, originally launched in 1972 with Landsat 1 [5]. This satellite is largely considered the first with an EO mission dedicated to global land cover observation. Through 2013, 34 sovereign nations have followed suit in launching comparable land cover observation systems. In their work, Lauer et al. identify five forces that drive this inherent interest in EO capabilities: the need for better information, national security, commercial opportunities, international cooperation, and international law [33]. Of course, the most significant drivers have shifted over time. However, each is still relevant to national actors as a whole. The case study of Landsat shows the long-lasting impacts of these capabilities. It created an entire field of science in remote sensing by providing a new need to decipher every detail contained in the images captured from above. With its 18-day revisit period, researchers could now evaluate change much more consistently than with images traditionally captured from aircraft. Landsat is the longest continuously running observational program and is referenced in thousands of scientific publications

[5]. The impact of this program is just one example of the consistent and wide-reaching effects of EO.

While governments have dominated the realm of producing satellite imagery for the majority of spaceflight's short history, recent trends have put private entities at the forefront. Planet, a commercial satellite imagery company, launched the first of its Dove satellites in 2013 and continues to build its constellation through the present day [27]. Today their constellation achieves daily revisit times across the globe, providing imagery with a 3-5 m ground sampling distance (GSD) resolution. They also offer products with even better resolution by request, though with revisit times not comparable to the Dove constellation. These satellites are much smaller than traditional EO satellites launched into higher orbits, but provide an alternative to the exquisite observational satellites that often require massive budgets.

Despite advances in technology that help bring costs down and improve image quality, the ability of a satellite to change its position in space is still ultimately constrained by Newton's laws [54]. A satellite remains in orbit by maintaining enough kinetic energy to prevent it from plummeting down to Earth. The specific energy (휖) is a function of the specific kinetic and potential energy of the satellite on orbit. The kinetic energy is a function of its current velocity, 푉. The potential energy is a function of the effects of gravity, which are characterized by a standard gravitational parameter, 휇. This parameter is unique to any individual mass, but for Earth, 휇 ≈ 398,600.5 km³/s². The potential energy of the system is also dependent on the distance from the center of the gravitational body to the satellite, 푅. The high resulting energy of these objects traveling several km/s requires the satellites to follow orbits based on the geometry of orbital dynamics.

\epsilon = \frac{V^2}{2} - \frac{\mu}{R} \qquad (1.1)
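Equation 1.1 can be checked numerically. A minimal sketch in Python (the function names are illustrative, not taken from the ReCon codebase); for a circular orbit the result reduces to the familiar 휖 = −휇/2푎:

```python
import math

MU_EARTH = 398600.5  # Earth's standard gravitational parameter, km^3/s^2

def specific_energy(v_kms: float, r_km: float) -> float:
    """Specific orbital energy (Eq. 1.1): epsilon = V^2/2 - mu/R, in km^2/s^2."""
    return v_kms ** 2 / 2.0 - MU_EARTH / r_km

def circular_velocity(r_km: float) -> float:
    """Speed of a circular orbit of radius R: V = sqrt(mu/R), in km/s."""
    return math.sqrt(MU_EARTH / r_km)

# Example: a 450 km circular LEO (Earth radius ~6378.137 km)
r = 6378.137 + 450.0
v = circular_velocity(r)       # ~7.64 km/s
eps = specific_energy(v, r)    # negative => bound orbit; equals -mu/(2a) for a circle
```

The negative sign of 휖 confirms the orbit is bound; a maneuver that adds energy raises the semi-major axis.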

While many industries have been able to take advantage of the boom in observational technology, there are still many shortfalls that arise when relying on satellites for imagery. One notable issue is the frequency at which imaging occurs. Geosynchronous satellites are at an extremely high altitude, 35,786 km above the Earth's surface, which puts them in sync with the Earth's rotation. This orbit equates to complete persistence. The problem with relying on information from these particular satellites is their distance. Again, constrained by the laws of physics, the optics required to achieve adequate image quality at such distances are costly and could lead to large, exquisite, and expensive projects. Therefore, when designing architectures, engineers often have to weigh the tradeoff between persistence and resolution. Invoking this trade has been the traditional way of thinking for many years. Yet, recent work in the past decade has introduced a new mode of operation, known as reconfigurable satellite constellations (ReCon) [34, 47]. Traditionally, satellites only maneuver to compensate for drifts in their desired orbits [63]. However, ReCon accounts for these perturbations, 퐽2 in particular, when generating favorable orbits. ReCon uses fuel-efficient transfers to move a satellite's ground track to a more favorable position when an event occurs [34]. Other research has also investigated using drag in low altitude orbits to achieve desired ground tracks [2]. This thesis looks at orbits designed for constellations with lifetimes on the order of several years and satellites weighing several hundred kilograms, which removes the possibility of relying on drag for several reconfigurations. In a time when massive satellite proliferation in low Earth orbit (LEO) seems to be the newest strategy for increasing coverage, having this kind of capability on a satellite is an advantage. Not only would the satellite be designed to respond to taskings, but it would also carry plenty of excess fuel for potential conjunction avoidance in what may become a very crowded LEO regime.

1.2 ReCon Background

Satellite geometry is commonly defined by six orbital elements: 푎, 푒, 푖, Ω, 휔, and 휈. Figure 1-1 illustrates the definition for each of these elements, which are further described below.

Figure 1-1: Illustration of Classical Orbital Elements

The first two of these six represent the shape of the orbit. The semi-major axis, 푎, defines the size of the orbit, and the eccentricity, 푒, defines how elliptical the orbit is. An eccentricity of 0 equates to a circular orbit, and the closer 푒 gets to 1, the more elliptical the orbit becomes. Elliptical orbits have an apogee, the farthest point in the orbit from Earth, and a perigee, the closest point in the orbit to Earth. ReCon uses circular orbits, where 푒 = 0, which allows for both simplification of the orbital dynamics and consistent orbital velocity. Using a circular orbit renders the argument of perigee, 휔, which defines the location of perigee in the orbit relative to the ascending node, obsolete. The inclination of an orbit, 푖, defines the angle between the orbital plane and the equator. This element determines the boundary of a ground track's latitude band. The target deck fed into the ReCon optimization often dictates the minimum inclination of the design based on the northern or southern extreme of the proposed targets. A satellite never passes over a target location whose latitude is farther north or south than the orbit's inclination. The right ascension of the ascending node, RAAN or Ω, defines where the orbit passes through the equator as the satellite moves from the southern hemisphere to the northern hemisphere. RAAN is sometimes referred to as the "swivel" of the orbit and is the major orbital element of interest in ReCon. The final orbital element is true anomaly, 휈, which represents the satellite's instantaneous position in the orbit relative to perigee. Since in ReCon there is no perigee, the satellite's position is instead measured from the ascending node. This element is the argument of latitude (AOL), 푢. Another measure of the satellite's location is its mean anomaly, 푀. Mean anomaly relates the satellite's position to its mean motion and can be calculated using Equation 1.2. 퐸 represents the eccentric anomaly, defining a satellite's location in its elliptical orbit considering its eccentricity. For circular orbits, 푀 = 휈.

M = E - e \sin E \qquad (1.2)

Where:

\cos E = \frac{e + \cos\nu}{1 + e \cos\nu}

When satellites maneuver into different orbits, their ground track over the Earth's surface changes. Maneuvers fired in plane change an orbit's size, and maneuvers fired out of plane can change the plane of a satellite, shifting its inclination or RAAN, for example. However, plane changes are notoriously fuel-intensive maneuvers compared to in-plane maneuvers. The amount of fuel needed to complete a maneuver is related to the resulting change in velocity, or ∆푉. One of the most common and efficient maneuvers is a Hohmann transfer. During this transfer, a satellite uses two burns to maneuver from one orbit to another through an intermediary transfer orbit of size a_transfer. The following equations dictate how ∆V_Hohmann is calculated for a Hohmann transfer from one circular orbit, a_0, to a new circular orbit of size a_1.

a_{transfer} = \frac{a_0 + a_1}{2} \qquad (1.3)

V_0 = \sqrt{\frac{\mu}{a_0}}, \quad V_1 = \sqrt{\frac{\mu}{a_1}} \qquad (1.4)

V_{transfer_0} = \sqrt{2\left|\frac{\mu}{a_0} - \frac{\mu}{2 a_{transfer}}\right|}, \quad V_{transfer_1} = \sqrt{2\left|\frac{\mu}{a_1} - \frac{\mu}{2 a_{transfer}}\right|} \qquad (1.5)

\Delta V_{Hohmann} = |V_0 - V_{transfer_0}| + |V_1 - V_{transfer_1}| \qquad (1.6)
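Equations 1.3-1.6 translate directly into code. A small sketch, assuming circular start and end orbits (function and variable names are illustrative):

```python
import math

MU = 398600.5   # Earth's gravitational parameter, km^3/s^2

def hohmann_delta_v(a0_km: float, a1_km: float) -> float:
    """Total delta-V (km/s) for a two-burn Hohmann transfer between
    circular orbits of radius a0 and a1 (Eqs. 1.3-1.6)."""
    a_t = (a0_km + a1_km) / 2.0                             # Eq. 1.3: transfer semi-major axis
    v0 = math.sqrt(MU / a0_km)                              # Eq. 1.4: circular velocities
    v1 = math.sqrt(MU / a1_km)
    vt0 = math.sqrt(2 * abs(MU / a0_km - MU / (2 * a_t)))   # Eq. 1.5: vis-viva on the ellipse
    vt1 = math.sqrt(2 * abs(MU / a1_km - MU / (2 * a_t)))
    return abs(v0 - vt0) + abs(v1 - vt1)                    # Eq. 1.6

# Example: raise a 450 km circular orbit to 500 km (radii include Earth's 6378.137 km)
re = 6378.137
dv = hohmann_delta_v(re + 450.0, re + 500.0)   # a few tens of m/s
```

Small altitude changes of this kind are the building blocks of the ReCon drift maneuvers discussed later in the chapter.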

In comparison, the ∆푉 needed to make a simple plane change (∆V_plane) is directly related to the velocity of the satellite (푉0) and the angle of the plane change (휃). This is why plane changes in long-duration missions are performed at much higher altitudes, where the satellite has a lower velocity, and not in the LEO regime.

\Delta V_{plane} = 2 V_0 \sin\left(\frac{\theta}{2}\right) \qquad (1.7)

Figure 1-2 illustrates the dramatic differences in the application of the same amount of ∆푉 for a Hohmann transfer compared to a simple plane change. A 4,000 km increase in altitude requires the same amount of ∆푉 as a 10° plane change in LEO.

Figure 1-2: Results of Applying Varying Amounts of ∆푉 at 450 km Altitude Orbit
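The comparison in the figure can be reproduced approximately from Equations 1.3-1.7. A sketch, assuming a 450 km circular starting orbit; the two ∆푉 values come out at a comparable magnitude (roughly 1.55 vs. 1.33 km/s), consistent with the rough equivalence the figure illustrates:

```python
import math

MU = 398600.5   # km^3/s^2
RE = 6378.137   # km

def circular_v(a):                      # circular orbit speed, km/s
    return math.sqrt(MU / a)

def hohmann_dv(a0, a1):                 # two-burn transfer between circular orbits (Eqs. 1.3-1.6)
    at = (a0 + a1) / 2.0
    vt0 = math.sqrt(2 * abs(MU / a0 - MU / (2 * at)))
    vt1 = math.sqrt(2 * abs(MU / a1 - MU / (2 * at)))
    return abs(circular_v(a0) - vt0) + abs(circular_v(a1) - vt1)

def plane_change_dv(a, theta_deg):      # Eq. 1.7: dV = 2 V0 sin(theta/2)
    return 2.0 * circular_v(a) * math.sin(math.radians(theta_deg) / 2.0)

a0 = RE + 450.0
dv_altitude = hohmann_dv(a0, a0 + 4000.0)   # ~1.55 km/s for a 4,000 km raise
dv_plane = plane_change_dv(a0, 10.0)        # ~1.33 km/s for a 10 deg plane change
```

Either maneuver consumes well over a kilometer per second of ∆푉, whereas the tens of m/s altitude changes ReCon relies on are orders of magnitude cheaper.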

Because adding the capability to maneuver requires the addition of fuel to the satellite, which makes the satellite heavier, maneuvering as part of a concept of operations is unusual. Even if maneuvering is to be executed, plane changes are extremely rare. However, making plane changes does have the added benefit of changing the observational capabilities of a constellation.

Orbits experience effects that alter their motion. One of the most notable effects is atmospheric drag, which in general acts to take energy out of the orbit and decreases its semi-major axis. The other common effect is known as 퐽2, an effect of the Earth's nonuniform gravity field caused by its oblateness. The 퐽2 effects on Ω, 휔, and 푀 are constant over time. The effects are directly related to the size, inclination, and eccentricity of the orbit. The semi-latus rectum (푝) and mean motion (푛) of the satellite are alternative variables used to characterize the satellite's orbit. The semi-latus rectum is an alternative way of describing an orbit's shape, and the mean motion describes the satellite's average angular rate throughout the orbit.

p = a(1 - e^2) \qquad (1.8)

n = \sqrt{\frac{\mu}{a^3}} \qquad (1.9)

Equations 1.10-1.12 illustrate the effects of 퐽2 on the orbital elements, where the radius of the Earth (푅푒) is 6378.137 km and 퐽2 ≈ 0.0010826269 [63].

\dot{\Omega} = -\frac{3 n R_e^2 J_2}{2 p^2} \cos i \qquad (1.10)

\dot{\omega} = \frac{3 n R_e^2 J_2}{4 p^2} \left(5\cos^2 i - 1\right) \qquad (1.11)

\dot{M} = \frac{3 n R_e^2 J_2}{4 p^2} \left(3\cos^2 i - 1\right)\sqrt{1 - e^2} \qquad (1.12)
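The secular rates can be evaluated for a representative orbit. A sketch using the standard J2 secular-rate convention, in which RAAN regresses westward for prograde orbits (helper names are illustrative):

```python
import math

MU = 398600.5   # km^3/s^2
RE = 6378.137   # km
J2 = 0.0010826269

def j2_secular_rates(a_km: float, e: float, i_deg: float):
    """Secular J2 rates (rad/s) on RAAN, argument of perigee, and
    mean anomaly, following Eqs. 1.10-1.12."""
    i = math.radians(i_deg)
    n = math.sqrt(MU / a_km ** 3)        # mean motion (Eq. 1.9)
    p = a_km * (1.0 - e ** 2)            # semi-latus rectum (Eq. 1.8)
    k = 3.0 * n * RE ** 2 * J2 / p ** 2
    raan_dot = -(k / 2.0) * math.cos(i)
    argp_dot = (k / 4.0) * (5.0 * math.cos(i) ** 2 - 1.0)
    m_dot = (k / 4.0) * (3.0 * math.cos(i) ** 2 - 1.0) * math.sqrt(1.0 - e ** 2)
    return raan_dot, argp_dot, m_dot

# Example: 450 km circular orbit at 51.6 deg inclination
raan_dot, _, _ = j2_secular_rates(RE + 450.0, 0.0, 51.6)
deg_per_day = math.degrees(raan_dot) * 86400.0   # roughly -5 deg/day of westward nodal drift
```

A few degrees of free nodal drift per day is exactly the effect ReCon exploits to reposition ground tracks without plane-change burns.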

Satellite operators either accept the precession that occurs from the 퐽2 effect or use a propulsion system for periodic adjustments that keep the constellation in the correct configuration. The ReCon concept uses specific aspects of orbital geometry to generate favorable orbits for region-specific observations while minimizing the amount of fuel needed for maneuvering. Although there may be many favorable orbits that look at one particular region, MIT's ReCon simulation restricts the search space to repeating ground tracks (RGT). These are orbits in which a satellite completes an integer number of revolutions (푁표) around the Earth in an integer number of days (푁푑) [41]. These values are related by the nodal period of the orbit (푇Ω) and the nodal period of Greenwich (푇Ω퐺). The period of RGT orbits (푇푅퐺푇) is related to these ratios, which can be used to derive the semi-major axis of the orbit.

T_{RGT} = N_o T_\Omega = N_d T_{\Omega G} \qquad (1.13)

푇Ω퐺 is the amount of time it takes for the right ascension of the ascending node, Ω, to return to the same position relative to the Earth. This return is due to both the rotation of the Earth at 휔푒 ≈ 7.29211585530 × 10⁻⁵ rad/s and Ω̇, the change in Ω due to 퐽2 effects defined in Equation 1.10.

T_{\Omega G} = \frac{2\pi}{\omega_e - \dot{\Omega}} \qquad (1.14)

푇Ω is related to the change in mean anomaly, 푀, and argument of perigee, 휔, due to perturbations.

T_\Omega = \frac{2\pi}{\dot{M} + \dot{\omega}} \qquad (1.15)

Substituting Equations 1.10-1.12 into Equations 1.14 and 1.15 results in the period of an RGT being a function of the orbit's semi-major axis, inclination, eccentricity, and desired RGT ratio [34].

T_{RGT} = \frac{2\pi N_d}{\omega_e N_o} \left(1 + 2\xi \frac{n}{\omega_e} \cos i\right)^{-1} \chi \qquad (1.16)

Where:

\chi = 1 + \xi \left[4 + 2\sqrt{1 - e^2} - \left(5 + 3\sqrt{1 - e^2}\right)\sin^2 i\right]

\xi = \frac{3 R_e^2 J_2}{4 a^2 (1 - e^2)^2}

T_{RGT} = \frac{2\pi}{n}

ReCon designs use circular orbits, dropping the eccentricity terms when solving for the RGT. Therefore, given an inclination and a desired RGT 푁푑/푁표 ratio, one can solve for the semi-major axis of the orbit. These orbits exist at specific altitudes for a given inclination, and they use the precession of Ω due to 퐽2 to ensure the same location is passed over after a given integer number of days.
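Because 푛, 휉, and 휒 all depend on the semi-major axis, Equation 1.16 must be solved together with 푇푅퐺푇 = 2휋/푛. A fixed-point iteration is one simple way to do this for the circular-orbit case; the scheme below is an illustrative choice, not necessarily the solver used in the ReCon code:

```python
import math

MU = 398600.5          # km^3/s^2
RE = 6378.137          # km
J2 = 0.0010826269
W_E = 7.29211585530e-5  # Earth rotation rate, rad/s

def rgt_semi_major_axis(nd: int, no: int, i_deg: float, tol_km: float = 1e-6) -> float:
    """Iteratively solve Eq. 1.16 for the semi-major axis (km) of a circular
    repeating ground track orbit completing `no` revolutions in `nd` days."""
    i = math.radians(i_deg)
    a = (MU * (nd / (no * W_E)) ** 2) ** (1.0 / 3.0)    # Keplerian first guess
    for _ in range(100):
        n = math.sqrt(MU / a ** 3)
        xi = 3.0 * RE ** 2 * J2 / (4.0 * a ** 2)        # e = 0, so (1 - e^2) terms drop out
        chi = 1.0 + xi * (6.0 - 8.0 * math.sin(i) ** 2)  # chi at e = 0
        t_rgt = (2.0 * math.pi * nd) / (W_E * no) * chi / (1.0 + 2.0 * xi * (n / W_E) * math.cos(i))
        a_new = (MU * (t_rgt / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)   # from T = 2*pi/n
        if abs(a_new - a) < tol_km:
            return a_new
        a = a_new
    return a

# Example: a 15-revolutions-per-day RGT at 51.6 deg inclination sits near 500 km altitude
a_rgt = rgt_semi_major_axis(1, 15, 51.6)
alt = a_rgt - RE
```

The J2 correction shifts the solution tens of kilometers from the purely Keplerian guess, which is why the perturbation terms cannot be neglected when designing RGT constellations.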

The ReCon concept operates by deploying a constellation into a global observational mode (GOM), which meets a specified global revisit criterion. In the nominal case, ReCon is to achieve complete coverage, within its latitude band, every 24 hours. While the ReCon concept can use a variety of satellite architectures, each with its advantages, the following work focuses on symmetric Walker Delta patterns [62]. Walker Delta patterns evenly space orbits' ascending nodes at intervals of 2휋/푁푝, where 푁푝 is the number of planes in the design. Each satellite within a plane is evenly distributed at an interval of 2휋/푁푠, where 푁푠 is the number of satellites per plane in the design.
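The Walker Delta spacing described above is straightforward to generate. A sketch; the `phasing` argument corresponds to the standard Walker F parameter, which offsets satellites between adjacent planes and is not discussed in the text:

```python
import math

def walker_delta(n_planes: int, sats_per_plane: int, phasing: int = 0):
    """Generate (RAAN, argument-of-latitude) pairs in radians for a symmetric
    Walker Delta pattern: planes spaced 2*pi/Np in RAAN, satellites spaced
    2*pi/Ns within each plane. `phasing` = 0 means no inter-plane phasing."""
    n_total = n_planes * sats_per_plane
    pattern = []
    for p in range(n_planes):
        raan = 2.0 * math.pi * p / n_planes
        for s in range(sats_per_plane):
            u = 2.0 * math.pi * s / sats_per_plane + 2.0 * math.pi * phasing * p / n_total
            pattern.append((raan, u % (2.0 * math.pi)))
    return pattern

# Example: a 4-plane, 6-satellites-per-plane pattern (24 satellites total)
sats = walker_delta(4, 6)
```

The even spacing in both RAAN and argument of latitude is what gives the GOM its uniform global coverage before any reconfiguration occurs.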

Once an event occurs, the constellation maneuvers to a regional observation mode (ROM) to meet more frequent revisit requirements, such as once per hour, for a smaller region. Figure 1-3 shows how the ground tracks of the constellation shift from being distributed across the globe in the left-hand image to being aligned with a regional target after an event occurs in the right-hand image.

Figure 1-3: Ground Track Shifts in Response to Regional Event

The resulting coverage is much better over the desired region but suffers in the areas not covered by the RGTs. The ReCon simulation code models the reconfiguration process and the resulting performance of the maneuvering satellites. An assignment process allocates the satellites within the constellation to either change their orbital altitudes or remain in place. In Figure 1-3 the top grouping of satellites maneuvered into an RGT, which passes over the region from north to south. The middle grouping maneuvered into a ground track moving from south to north over the region. The bottom grouping does not maneuver, which occurs if too much fuel is required to maneuver, if the maneuver is infeasible for the required response constraint, or if the assigned maneuvers achieve the required revisit metrics without needing full reconfiguration.

Figure 1-4 illustrates the four-burn maneuver to place a satellite into ROM. An initial, instantaneous burn places the satellite into a Hohmann transfer orbit to intercept the desired drift orbit. This transfer lasts half of the period of the transfer orbit (∆푇1), which is typically approximately 45 minutes for a LEO transfer. A change in altitude into a drift orbit causes the satellite's ground track to shift. Moving into a specific drift orbit allows for control of the rate of the ground track shift. The satellite remains in a drift orbit for ∆푇2 seconds until it phases into the designated Ω and 푢 values, directly related to the desired sub-satellite point (SSP), which is the target's location. Changing the relationship between the period of the orbit and the length of a sidereal day allows for the change in the SSP without executing a plane change burn. After proper placement, the satellite then performs a second Hohmann transfer into an RGT, entering ROM after ∆푇3 seconds to complete the maneuver.

The SSP is the latitude (훿푆푆푃) and longitude (Ψ푆푆푃) point on the Earth the satellite instantaneously passes over. It is defined by inclination, AOL, and RAAN. The SSP changes throughout the entire reconfiguration. Equations 1.17 and 1.18 show this change using the final values of 푢 and Ω as calculated using the perturbation equations shown above [40]. The rate of change in 푢 is directly related to the size of the drift orbit. The ReCon simulation models different drift orbits to minimize the amount of time it takes to position the SSP in the correct location. The initial orientation of the Earth is represented by Ω푒푡0, the right ascension of Greenwich at epoch.

Figure 1-4: Illustration of Maneuver from GOM to ROM

\delta_{SSP} = \sin^{-1}\left(\sin i \sin u\right) \qquad (1.17)

\Psi_{SSP} = \tan^{-1}\left(\frac{\cos i \sin u}{\cos u}\right) - \omega_e\left(\Delta T_1 + \Delta T_2 + \Delta T_3\right) + \Omega - \Omega_{et_0} \qquad (1.18)
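Equations 1.17 and 1.18 can be sketched as follows; a four-quadrant `atan2` replaces the bare arctangent to keep the longitude term in the correct quadrant, an implementation choice not stated in the text:

```python
import math

W_E = 7.29211585530e-5   # Earth rotation rate, rad/s

def sub_satellite_point(i, u, raan, dt_total, raan_greenwich_epoch=0.0):
    """Latitude and longitude (rad) of the sub-satellite point per
    Eqs. 1.17-1.18. `dt_total` is the elapsed maneuver time
    dT1 + dT2 + dT3 in seconds; all angles are in radians."""
    lat = math.asin(math.sin(i) * math.sin(u))               # Eq. 1.17
    lon = math.atan2(math.cos(i) * math.sin(u), math.cos(u)) \
          - W_E * dt_total + raan - raan_greenwich_epoch     # Eq. 1.18
    lon = (lon + math.pi) % (2.0 * math.pi) - math.pi        # wrap to (-pi, pi]
    return lat, lon

# Example: the peak latitude equals the inclination at u = 90 deg
lat, _ = sub_satellite_point(math.radians(51.6), math.pi / 2, 0.0, 0.0)
```

The example also illustrates the latitude-band property noted earlier: since |sin 푖 sin 푢| ≤ sin 푖, the SSP latitude can never exceed the orbit's inclination.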

In the ReCon simulation, the allocation process, which uses a dynamic programming approach, chooses enough satellites to reconfigure to meet a given revisit requirement. The nominal use case is set at a 1-hour revisit requirement during daylight hours. The performance of this constellation can be just as good as an equivalent traditional static constellation. A static constellation, when referred to in the context of the ReCon concept, is a constellation following classical orbital mechanics. It does not maneuver to respond to an event. Instead of designing a constellation that achieves a high revisit frequency globally, the ReCon concept allows a smaller number of satellites to achieve the same revisit frequency regionally on demand. The advantage of using ReCon is that it can provide high-quality coverage to many locations without requiring intensive global coverage. Figure 1-5 shows the performance increase achieved by using ReCon designs compared to static constellations of equivalent cost. In the performance evaluation, the y-axis evaluates the performance of the constellation against perfect potential performance in terms of the revisit and resolution requirements stipulated.

Figure 1-5: Performance of ReCon Designs against Equivalent Static Counterparts [34]

Adding this capability to satellites is not unreasonably expensive. ReCon achieves superior performance over five years with the addition of only 300 m/s of ∆푉. While adding extra weight to a satellite increases launch costs, these costs are continuing to decrease. The value added by making the satellites heavier at launch is significant, especially since this reduces the total number of satellites on orbit, and thus the number of launches needed in the long run. The responsiveness of these satellites is a critical component of an ever-increasing interest in making space a flexible, resilient environment. Having the flexibility to comfortably maneuver satellites to new orbital positions can extend the use of satellites and help them become a more responsive asset. Instead of relying on aircraft, crewed or remotely piloted, to fly dangerously into fires and hurricanes, satellites could quickly reposition to provide high-quality information to those who need it. There are orbits other than repeating ground tracks that have benefits. If users need to track an event, they could generate an orbit with parameters that drift from perturbations at a rate that follows a moving target.

The goal of ReCon is to maximize the performance of EO constellations while minimizing the overall cost. The ReCon simulation code searches to find the smallest maneuvers that lead to the greatest marginal performance. This vast search space requires the use of an optimization tool to converge on the best solutions. The MIT ReCon simulation relies on a structure shown in Legge's 2014 thesis, and here in Figure 1-6.

Figure 1-6: MIT ReCon Simulation Structure [34]

The full ReCon simulation is composed of three layers. The outer layer is the multi-objective optimization layer, which evaluates the performance of the constellation designs based on the quality of imaging, frequency of overpasses, and cost of the designs. This layer uses genetic algorithms to design constellations. The designs are tested using Monte Carlo methods, sampling a variety of target locations based on historical locations of natural disasters, weighted by economic impact [19]. The Monte Carlo sampling passes these locations into the simulation of the actual reconfigurations. The assignment process for each reconfiguration uses dynamic programming. The ability of the satellites to reconfigure is based on their location when the events occur and the amount of fuel needed to meet response constraints. Once the assignment process determines the optimal maneuvers, the outer layer of the ReCon simulation evaluates the performance of the constellation against other designs. MIT's Supercloud system runs the entirety of this tool. Its original objective was to generate designs suitable for use in a responsive constellation. Since its creation in 2014, MIT researchers have used the tool to look at specific applications and extensions of the ReCon concept.

1.3 Motivation

EO satellites are used for a variety of purposes and carry a variety of sensors. Vegetation and water systems can change dramatically throughout a single day, requiring a high revisit frequency [48]. In contrast, seasonal and climate changes need observation much less frequently. The ReCon concept applies to all of these missions. With its adaptable revisit cadences, ReCon can go from globally observing climate change to hourly monitoring earthquake relief efforts within days. Figure 1-7 shows a variety of these applications.

Figure 1-7: Revisit Times for a Variety of Remote Sensing Applications [48]

However, the primary use of ROM is to respond to hazards or natural disasters, characterized by a need for imagery at least every other day. Most other applications can be satisfied using GOM. Natural disasters, on the other hand, occur in unknown locations and require an immediate response with high-quality, persistent imagery [26]. The most commonly cited reasons for not using satellite imagery to help with a disaster response are cost and the time needed to obtain imagery. In a survey of state emergency responders, 70% answered that imagery received 72 hours after an event is already too late [26]. Using whatever imagery is conveniently available is not enough to help emergency responders. However, a constellation dedicated to monitoring for disasters and quickly providing imagery can go a long way toward helping disaster response. This imagery is especially important in the first few days following a disaster, but it is also helpful in the several weeks that follow in the most extreme cases. ReCon provides this solution to those who need it.

The focus of this thesis is to expand the ReCon concept in three different dimensions. As shown in Figure 1-8, these include refining the operational capabilities of the satellites, adding emerging technologies to the design space in the form of electric propulsion (EP), and exploring deployment strategies to reduce cost.

Figure 1-8: Thesis Areas of Investigation

ReCon observations have limitations if the design uses a traditional push-broom style of observational data. The purpose of ReCon is to collect high temporal resolution imagery of a distinct target location. However, ReCon positions satellites into RGTs to achieve these high-frequency revisits. Using RGTs prevents the user from specifically tailoring an overpass's geometry. If the user is interested in a location larger than a satellite's field of view, there is no guarantee the entire region of interest exists along the ground track of the satellite. Therefore, the capability to slew to achieve off-nadir viewing angles is vital to making a reconfigurable constellation effective. When considering this functionality, the operational design needs to integrate a scheduler to achieve a high revisit frequency across all targets of interest, not solely the one reconfiguration point. Using the scheduler presented, which implements a hybrid algorithm detailed in Chapter 3, the user can now generate schedules based on a variety of metrics. The weighted metrics include minimizing the total slew of the satellite, maximizing the number of images per target, maximizing revisit frequency, and minimizing the longest revisit period.

The biggest limiting factor in ReCon is the fuel needed to implement the concept. While there are immense fuel savings when comparing drifting the ground tracks with simple plane changes, the satellites still need to carry extra fuel to respond to events. The biggest penalty for adding fuel comes in the form of the extra mass added to each satellite. However, one way to reduce the mass added is by using more efficient propulsion systems. Low-thrust technology is becoming more commonplace as propulsive capabilities improve. Using this technology, either in place of or in conjunction with chemical propulsion, introduces important tradeoffs. Exploring these options allows a designer to decide which option is best for their particular mission, considering both the current and future states of satellite technology.

Another way to alleviate the costs associated with adding fuel to the satellites is to explore alternative launch and deployment options. Staging the deployment of satellites is very effective for long-duration programs. As shown by the fifty-year lifetime of the Landsat mission, the utility of EO imagery has remained strong and will likely remain strong for decades. Using a staged deployment strategy allows satellites to carry less fuel initially and the user to launch fewer satellites upfront. The user can wait to observe what the demand for their constellation actually is and add satellites accordingly over time. As the landscape of the space enterprise continues to shift, alternative launch strategies make the concept of ReCon more adaptable to the uncertain demands of EO imagery.

1.4 Approach

ReCon, when referenced in this thesis, generally refers to the work completed by Dr. Legge in his 2014 thesis. His complete optimization and design code was the foundation for much of this testing and further research. The concepts of scheduling, low-thrust propulsion, and staged deployment were all tested through the lens of this simulation. ReCon is a layered simulation, which uses dynamic programming to optimally determine the best reconfiguration strategy for a constellation and genetic algorithms to mutate the design of a constellation to improve its performance-to-cost ratio. However, the final product of ReCon is the recommendation of a constellation design. The testing presented in the following chapters always relies on precomputed designs on the Pareto front. However, the methods and techniques apply to any constellation design presented. The designs chosen for testing were in the high-performance range of the generated designs, which showed less variation in their parameters than the low-performance region. This allowed for more confidence in the design being conducive to reconfiguration. Specifically, the constellations often converged on one-satellite-per-plane Walker constellation designs as the performance difference between the ReCon and static designs widened [34].

A current limitation of ReCon is its ability to reconfigure to only one specific ground location. One of the goals in applying ReCon to a more realistic scenario is to consider multiple targets. While use for multiple locations across the globe requires a more in-depth look at the assignment process for the reconfiguration, use for multiple locations surrounding the target location represents a more confined problem. An agile, multiple-pass scheduler tested the impact of requiring ReCon to view multiple points in a region during its pass. This scheduler builds on previous work completed by Joseph Bogosian [7]. It integrated Bogosian's approach of using dynamic programming with other scheduling techniques, including genetic algorithms, simulated annealing, and ant colony algorithms, to create a multiple-pass scheduler. The parameters of these algorithms were tuned to push towards high-performance solutions with the smallest computational footprint. After the testing of this scheduler, an investigation was performed into how best to couple the scheduler into ReCon. While it could be incorporated directly into the simulation, its performance is often highly correlated with the performance of ReCon. The more often a region is seen, the more often the ground targets are seen. Instead of a full integration, a function predetermines the value of a constellation design to weight the value of different inclination and RGT combinations. This value change was the most useful information to provide ReCon within the design loop. The updated ReCon simulation evaluated the performance of a design by coupling a utility function related to targets imaged using the full scheduler with the original ReCon simulation utility function.

A similar strategy analyzed the potential for low-thrust propulsion in ReCon. The analysis used methods adapted from a paper written by Ciara McGrath and Malcolm Macdonald in 2019 [40]. Their techniques were compatible with the current ReCon code structure. After structural transformations, the simulation tested a constellation with hundreds of regional reconfigurations for both the Hohmann transfer and continuous, low-thrust models. The following analysis compares the performance of both of these constellations to show the lack of response capabilities for the low-thrust model. While continuous transfers are ineffective for a rapid response application, they may be effective in relaxing the constellation from ROM back to GOM. Moving to GOM is not an urgent maneuver, and taking more time to complete this relaxation is acceptable to ReCon operations. The exploration of this tradeoff provides a user with the ability to decide which propulsion system would be best. Finally, an analysis explored the actual benefits of using low-thrust propulsion in terms of potential mass savings.

While incorporating scheduling and options for alternative propulsion systems are important additional flexible options, a third is exploring the use of staged and responsive launches. Staged deployments have numerous advantages, including the ability to defer costs and to observe actual demand. First, an analysis explored the sensitivity of ReCon's nominal design. Then, a constructed decision rule determined when to launch satellites in response to observed demand. A comparison of the nominal and staged deployments' cumulative frequency distributions of the performance per dollar spent shows the difference between the strategies. Legge investigated the use of responsive launch in his original 2014 thesis. He found that it is not a useful alternative to ReCon. Although his reasoning holds, there are some additional considerations to explore.

Each of these three flexible additions to ReCon is incorporated to make ReCon a more applicable concept to the real world of satellite design. Scheduling, propulsion options, and launch options are all important trades to consider when making decisions regarding constellation design.

Chapter 2

Literature Review

2.1 Constellation Scheduling

Scheduling satellite observations is a challenging, NP-hard problem requiring integer optimization techniques, which are computationally intensive. Integer optimization problems follow the format shown in Equation 2.1 [8]. In this equation, $c$ represents the relative values of the $n$ states in $x$. Both $a$ and $b$ relate the states of the problem to the constraints of the system.

\[
\begin{aligned}
\text{Maximize} \quad & \sum_{j=1}^{n} c_j x_j & (2.1)\\
\text{subject to:} \quad & \sum_{j=1}^{n} a_{ij} x_j = b_i \quad (i = 1, 2, \ldots, m),\\
& x_j \geq 0 \quad (j = 1, 2, \ldots, n),\\
& x_j \ \text{integer (for some or all } j = 1, 2, \ldots, n)
\end{aligned}
\]

The image scheduling problem is comparable to what is known as a multi-knapsack problem [6]. The basic premise of the problem is figuring out how to pack objects optimally into different "knapsacks" subject to feasibility constraints. In this case, the objects are images, and the knapsacks are satellites. In the problem formulation, a variable is assigned to each image, and a value is assigned to each variable. A set of binary constraints dictates whether or not a target capture is feasible. The objective is to find the combination of image-to-satellite assignments that maximizes the cumulative value of the images collected by the satellites. Finding optimal solutions can sometimes take longer than the horizon the schedule covers, which makes finding global schedules inadvisable. This complexity is why a rolling timeline is generally preferred.

As agile satellites enter the environment, the already difficult scheduling problem becomes more difficult because the number of potential solutions increases. This problem considers a satellite slewing across its plane of motion to target different ground locations within its field of regard (FOR), but outside of its field of view (FOV). Figure 2-1 represents the satellite traversing through its FOR, capturing different targets (represented by the gold stars) within its FOV.
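As an illustration only (not the solver used in this work), the assignment formulation above can be sketched as a brute-force search over a toy instance; the image values, feasibility sets, and the per-satellite capacity of two images are all invented for the example:

```python
from itertools import product

def best_assignment(values, feasible):
    """Exhaustively assign each image to a satellite (or skip it) to
    maximize total value, subject to per-image feasibility.
    values[i]   -- value of capturing image i
    feasible[i] -- set of satellite indices that can capture image i
    """
    n = len(values)
    best_val, best_plan = 0, [None] * n
    # Each image goes to one feasible satellite or is skipped (None).
    choices = [list(feasible[i]) + [None] for i in range(n)]
    for plan in product(*choices):
        # Illustrative capacity constraint: each satellite captures
        # at most two images during the pass.
        loads = {}
        for sat in plan:
            if sat is not None:
                loads[sat] = loads.get(sat, 0) + 1
        if any(c > 2 for c in loads.values()):
            continue
        val = sum(values[i] for i in range(n) if plan[i] is not None)
        if val > best_val:
            best_val, best_plan = val, list(plan)
    return best_val, best_plan

# Three images, two satellites; image 2 is visible only to satellite 1.
values = [5, 3, 4]
feasible = [{0, 1}, {0}, {1}]
val, plan = best_assignment(values, feasible)
```

A real scheduler replaces this exhaustive enumeration with the integer programming and heuristic techniques discussed below, since the search space grows exponentially with the number of images.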

Figure 2-1: Satellite FOV vs. FOR

Four traditional methods can create high-quality solutions to the agile satellite scheduling problem. These include constraint programming, local search algorithms, greedy algorithms, and dynamic programming [35]. Each of these techniques has its own advantages. Constraint programming is a model-based approach in which the problem is detailed and input into a solver, which allows for significant flexibility. However, this flexibility creates a vast search space, which requires a significant amount of computation to explore without breaking down the problem. Local search algorithms are valuable in difficult problems with large search spaces but do not guarantee optimality. These include gradient descent search, simulated annealing, and genetic algorithms. Local search requires detailing the constraints of the problem explicitly and requires tuning of the algorithms.

The greedy and dynamic programming methods offer fast results but are limited in their ability to incorporate multiple objectives over several states. Greedy algorithms find the best solution given only the information pertaining to the immediate next step. These are very fast schedulers but are rarely optimal, as there is no consideration of how a decision at $t = 1$ could affect performance at $t = 20$. Dynamic programming is a search method that can quickly find solutions when provided with linear criteria. More complex objectives and constraints require extensive memory usage and cause a loss of efficiency in the algorithm [35]. Researchers have implemented several types of algorithms in the effort to solve the satellite scheduling problem. In general, it is effective to represent the schedule as a permutation and use heuristics to alter the schedule in ways that drive to a final optimal solution [38, 65].

The following analysis builds on a scheduler built by Joseph Bogosian in MIT's Space Systems Lab for optimizing the capture of images over a single pass [7]. The objective is to optimally schedule the imaging of ground locations within a satellite's FOR, but outside of its FOV. The imaging scheduler used two different techniques. The first was a depth-first branch and bound graph search. This technique found the optimal path but was incredibly computationally expensive and, therefore, not very effective on large target sets. Its structure also restricted the satellite to only rolling across the in-track direction; it could not pitch forward and backward. This structure restricted the true maximum performance of satellites with 3-axis pointing control and is referred to as the roll-restricted method. The second method used dynamic programming. This method operated much faster and allowed the satellite to both pitch and roll. Bogosian developed the dynamic programming scheduler later due to its complexity. This scheduler can also restrict the pitch of the satellites to mirror the performance of the roll-restricted technique [7]. The dynamic programming scheduler's superior performance makes it the method of choice for the following analysis.

Dynamic programming is a process implemented on multi-stage decision-making problems such as scheduling, investment strategies, or purchasing policies [4]. These types of problems are traditionally difficult to solve and are considered NP-hard, requiring the investigation of a large search space. Dynamic programming is an integer optimization method for finding optimal paths in multi-stage problems. Solving integer optimization problems is challenging due to the inability to apply traditional linear programming approaches and convex optimization. Dynamic programming is a general approach to optimization but is not an explicit algorithm. Instead, it is a way of breaking down a problem to search the space, relying on the fact that decisions made at any particular time do not require knowledge of decisions made at different stages. Dynamic programming can be difficult to implement without a thorough knowledge of a problem's structure, and it also must be formulated in very specific ways to be computationally efficient.

The distinct parts of a dynamic programming problem include states, decisions, and start and end nodes [4]. At any given time, a system is in a state dictated by state parameters, and the decisions made act as a transformation of the state variables. The overall goal is to choose decisions that maximize the objective in the end state. The process of dynamic programming involves splitting a problem up into smaller, more manageable pieces, dictated by the phases of the problem. The potential solutions within each phase are fully explored and then pruned when compared against superior solutions within that phase. This approach only works when each transition is entirely independent of all other transitions. An optimal path found after a given state remains optimal, regardless of previous transitions, due to the concurrent pruning of suboptimal solutions. Without transition independence, the pruning process that makes dynamic programming efficient would be unimplementable. Because of this lack of state dependence, the solutions found are the same regardless of the initial conditions of the problem, although traditionally dynamic programming problems are solved from the end state to the starting state.
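A minimal sketch of this backward, stage-by-stage pruning follows; the stage structure, reachability rule, and state values are all invented for illustration and are not the ReCon implementation:

```python
def dp_max_value(stages, reachable, value):
    """Backward dynamic program over a staged decision problem.
    stages[k]       -- list of states available at stage k
    reachable(s, t) -- True if the system can move from state s to t
    value[t]        -- reward for occupying state t
    Returns the maximum total value of any feasible start-to-end path.
    """
    # best[s] = best value obtainable from state s through to the end.
    best = {s: value[s] for s in stages[-1]}
    for k in range(len(stages) - 2, -1, -1):
        new_best = {}
        for s in stages[k]:
            future = [best[t] for t in stages[k + 1] if reachable(s, t)]
            # Prune: keep only the best continuation from each state;
            # dead-end states cannot complete a path.
            new_best[s] = value[s] + (max(future) if future else float("-inf"))
        best = new_best
    return max(best.values())

# Toy instance: from 'start' the satellite can slew to A but not B.
stages = [["start"], ["A", "B"], ["end"]]
value = {"start": 0, "A": 5, "B": 3, "end": 0}
reach = lambda s, t: (s, t) != ("start", "B")
best = dp_max_value(stages, reach, value)
```

Because each stage keeps only the best continuation per state, the work grows with the number of states per stage rather than the number of complete paths, which is the source of dynamic programming's efficiency.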

When applying dynamic programming to target scheduling, the entire sequence is discretized in time using a specified step size [7]. Each of these steps is a state in the system. The agility of the satellite determines its possible paths given the satellite's position at the previous state. Each node of the path represents a target. The satellite is not required to pass through each node, but it must start at the initial node and end at the final node. These are dummy targets in this scheduler and not actual targets for the analysis. They are far enough away from the other targets to prevent them from imposing a constraint. The overall metric to maximize in this problem is the total value captured.

Like the annealing process metals undergo as they are shaped, simulated annealing mimics this same idea of shaping and cooling a path to find an optimal solution [31]. The iterative process starts with a random path and generates a neighboring solution with minimal changes. If the new path is better than the old path, the scheduler accepts the new path as the current reference solution. The process is then run again, generating a neighboring solution from the current reference path. The scheduler introduces annealing through the generation of a random variable compared to the quality of an answer. When a new solution is worse than the current reference path, its relative performance ($\Delta E$) is passed into an acceptance probability function, Equation 2.2. This function scales the probability of accepting the new path as the current reference path in accordance with the "temperature" ($T$) of the process and the quality of the new solution.

\[
P(\Delta E) = e^{\Delta E / T} \quad (2.2)
\]

The probability calculated is compared to a random number generated between 0 and 1. If the probability returned from Equation 2.2 is higher than this random number, the solution becomes the reference solution; if not, it is discarded. Therefore, a solution that may not be superior to the current reference solution, but is close, and is found early in the annealing process has a high probability of being accepted as the reference path. As the process runs, the temperature "cools," decreasing the value of $T$ at a rate of $\alpha$, meaning the chance of accepting a worse solution than the reference solution shrinks.

\[
T_{i+1} = \alpha \, T_i \quad (2.3)
\]
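The acceptance rule of Equation 2.2 and the cooling schedule of Equation 2.3 can be sketched as follows; the initial temperature, cooling rate, iteration counts, and toy objective are all arbitrary illustrative choices, not values used in this work:

```python
import math
import random

def simulated_annealing(path, neighbor, score,
                        T=100.0, alpha=0.95, T_min=1e-3, iters_per_T=20):
    """Generic simulated-annealing skeleton following Eqs. 2.2-2.3.
    path     -- initial candidate solution
    neighbor -- function returning a small perturbation of a solution
    score    -- objective to maximize (higher is better)
    """
    ref, ref_score = path, score(path)
    while T > T_min:
        for _ in range(iters_per_T):
            cand = neighbor(ref)
            dE = score(cand) - ref_score
            # Accept improvements outright; accept worse solutions with
            # probability P = exp(dE / T) (Eq. 2.2, dE < 0 in that case).
            if dE > 0 or random.random() < math.exp(dE / T):
                ref, ref_score = cand, ref_score + dE
        T *= alpha  # cooling schedule of Eq. 2.3
    return ref, ref_score

# Toy use: maximize -(x - 3)^2 over integer states.
random.seed(0)
best, val = simulated_annealing(
    0,
    neighbor=lambda x: x + random.choice([-1, 1]),
    score=lambda x: -(x - 3) ** 2,
)
```

Early on, the high temperature lets the search hop out of local optima; by the final iterations the loop behaves like pure hill climbing on the reference solution.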

This acceptance function allows the generated solutions to jump out of local minima in the early iterations, but not to stray far from a good solution towards the end of the process. There are different parameters to tune in this process, including the initial temperature, the lower threshold temperature, the rate of cooling, and the number of iterations at each temperature. The tuning process dictates how much the user would like the process to explore wide-ranging solution spaces or to pursue promising paths deeply.

The ant colony algorithm also derives its name from the physical world. It acts in the same way ants behave when finding an optimal path from their colony to a food source [13]. At first, the ants do not know where the food is and randomly search for their objective until it is found. Once they find their objective, they return to the colony following the same path. The ants deposit a pheromone along their path that other ants follow to get to the food source. Small variations in the paths occur in an attempt to find a shorter route. The paths that allow for the fastest travel have pheromones deposited more often, and therefore more ants are likely to take these paths. As a path is used less and less, the pheromone evaporates, and that path becomes less desirable. The ant colony algorithm follows this same structure. Initially, the algorithm generates several random paths, passing their quality into a transition probability matrix.

These probabilities ($p_{ij}$) are calculated based on the previous values of paths ($\tau_{ij}$) that followed these transitions and the nominal value of making the transition ($\eta_{ij}$). The probability function weights these two values with the exponential constants $\alpha$ and $\beta$, chosen by the user [13].

\[
p_{ij} = \frac{[\tau_{ij}]^{\alpha} \, [\eta_{ij}]^{\beta}}{\sum_{j=1}^{n} [\tau_{ij}]^{\alpha} \, [\eta_{ij}]^{\beta}} \quad (2.4)
\]

These values are tuned to maintain a good balance between relying on previous solutions and allowing for the easiest path to be selected. The user can tune not only these parameters but also the number of ants used. After the ants generate their solutions, the scheduler updates the transition matrix with the best path found for that particular grouping of ants. The new transition matrix influences the generation of a new path for each ant, and the process continues.
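Equation 2.4 can be illustrated with a small sketch; the pheromone and heuristic values below, and the choice of $\alpha = 1$, $\beta = 2$, are invented for the example:

```python
def transition_probabilities(tau, eta, alpha=1.0, beta=2.0):
    """Row-wise transition probabilities of Eq. 2.4.
    tau[i][j] -- pheromone deposited on transition i -> j
    eta[i][j] -- nominal (heuristic) value of transition i -> j
    """
    probs = []
    for t_row, e_row in zip(tau, eta):
        weights = [(t ** alpha) * (e ** beta) for t, e in zip(t_row, e_row)]
        total = sum(weights)
        probs.append([w / total for w in weights])
    return probs

# Two candidate next targets from node 0: equal pheromone, but the
# second is heuristically twice as attractive.
tau = [[1.0, 1.0]]
eta = [[1.0, 2.0]]
p = transition_probabilities(tau, eta)  # beta = 2 squares the heuristic
```

With equal pheromone, the heuristic term dominates: the second transition receives probability $4/5$ because $\beta = 2$ squares its desirability.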

Again sticking with the theme of observing the surrounding world for inspiration, genetic algorithms derive their techniques from the concept of passing genes down a genetic line [32]. The algorithm runs with an optimal solution stored as a reference solution. This solution acts as a parent to other solutions found. As the algorithm searches for solutions, it returns the best solution of a set of many children solutions. The new high-performing solution and the old reference solution exchange properties between themselves to produce "offspring" that are again tested for their fitness in the algorithm. The formulation for this analysis uses a partially mapped crossover, where sections of the path are traded between the two parents [32]. This crossover allows an algorithm to make intentional next guesses towards optimal solutions instead of searching a wide search space at random. Another property of genetic algorithms is the introduction of genetic mutations. Just as inbreeding causes issues in the natural world, there is potential for the genetic pairings to become cyclical. Therefore, it is also important to add a mutation to the new solutions. The mutations also help to avoid remaining in a local minimum while testing.
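A minimal sketch of a partially mapped crossover with a swap mutation follows; the parent schedules and cut points are invented, and this is a generic textbook PMX rather than the specific implementation used here:

```python
import random

def pmx(parent1, parent2, i, j):
    """Partially mapped crossover on permutation-encoded schedules.
    Copies parent1[i:j] into the child and fills the remaining slots
    from parent2, chasing the segment mapping to avoid duplicates."""
    n = len(parent1)
    child = [None] * n
    child[i:j] = parent1[i:j]
    mapping = {parent1[k]: parent2[k] for k in range(i, j)}
    for k in list(range(0, i)) + list(range(j, n)):
        gene = parent2[k]
        while gene in child[i:j]:      # conflict with the copied segment
            gene = mapping[gene]       # follow the mapping until clear
        child[k] = gene
    return child

def mutate(path, rng):
    """Swap two positions to keep the gene pool from going cyclical."""
    a, b = rng.sample(range(len(path)), 2)
    path[a], path[b] = path[b], path[a]
    return path

p1 = [1, 2, 3, 4, 5]
p2 = [3, 4, 5, 1, 2]
child = pmx(p1, p2, 1, 3)  # keeps parent1's middle segment [2, 3]
```

The mapping chase is what distinguishes PMX from a naive splice: it guarantees the offspring remains a valid permutation, so every target still appears exactly once in the schedule.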

When looking at a global constellation design, an optimizer may desire an imaging schedule for a lengthy time horizon. For the application of ReCon, the default mission length is two weeks. However, satellites could observe an area for much longer than that, depending on the circumstances. These long-duration missions are difficult to optimize upfront due to the exponential growth of the path planning problem with the addition of new path sequences. The authors of Automated Planning for Earth Observation Spacecraft looked at dividing an entire mission sequence into observational patches and front-loading computation to reduce the search space of the sequence planning [39]. Instead of optimizing over the entire mission life, the authors suggested patching together smaller sequences, with at least one common pass of overlap. This is known as a folding horizon. Using a folding horizon eliminates the compounding effects seen when making schedules longer and longer. The algorithm runs through different combinations of patches, each with a different number of passes, to find the best separation points and return the overall optimal solution. If the scheduler runs to completion, this algorithm searches the entire path space, but the computational time is exceptionally high. However, this method can be interrupted at any time and the current best solution returned. Restricting the potential number of patches helps reduce the load on each horizon and can lead to a faster solution without sacrificing too much optimality. The authors also found that cutting off the optimization after just a few minutes still led to a good solution [39]. Chapter 3 explores all of these techniques further in the creation of a multiple-pass scheduler in the ReCon context.
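As a rough sketch of the patching idea, the loop below stitches overlapping patches into one plan. It is deliberately simplified: the patch boundaries are fixed (whereas the authors search over separation points), and an identity "solver" stands in for a real per-patch scheduler:

```python
def folding_horizon(passes, patch_len, overlap, schedule_patch):
    """Schedule a long pass sequence as overlapping patches.
    passes         -- ordered list of pass identifiers
    patch_len      -- number of passes per patch
    overlap        -- shared passes between consecutive patches
                      (assumes 1 <= overlap < patch_len)
    schedule_patch -- solver applied independently to each small patch
    """
    step = patch_len - overlap
    plan, start = [], 0
    while start < len(passes) - overlap:
        solved = schedule_patch(passes[start:start + patch_len])
        if start + patch_len >= len(passes):
            plan.extend(solved)         # final patch: keep everything
        else:
            plan.extend(solved[:step])  # drop the shared overlap tail
        start += step
    return plan

passes = list(range(10))
plan = folding_horizon(passes, patch_len=4, overlap=1,
                       schedule_patch=lambda p: p)
```

Each patch is small enough to solve quickly, and the overlapping pass lets consecutive solutions hand off state to one another, avoiding the exponential blow-up of a single whole-mission optimization.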

2.2 Propulsion Systems

The history of electric propulsion (EP) mostly involves either stationkeeping or final orbital insertion of satellites [14]. In the past, the only satellites that used electric propulsion as the primary system were part of science missions. However, the majority of the satellites launched into geosynchronous Earth orbit (GEO) now use electric propulsion, showing the major technology shifts that have occurred in the 21st century [37]. Electric propulsion is typically broken down into three categories: electrothermal, electrostatic, and electromagnetic. Electrothermal thrusters are the most commonly flown of the EP systems. Electrothermal thrusters function by applying heat to a fluid to increase the exhaust velocity. Electrostatic and electromagnetic thrusters each offer individual advantages but do not fly as often at the current state of technology [14]. The focus of this research is on the implementation of current technology, with the caveat that as technology improves, the tradespace presented will likely continue to shift in favor of the implementation of EP systems.

The original focus in the history of EP research looked at ion thrusters, pulsed plasma thrusters, resistojets, and Hall thrusters. Resistojets drew the closest comparison to traditional chemical thrusters and were therefore developed the most thoroughly early on. In 1980, the use of EP, through resistojets with hydrazine monopropellant, emerged as a competitive option for stationkeeping in GEO. High-power resistojets are commonly used today on several GEO and LEO satellites. The next big leap in the use of EP was in 1993 with the emergence of arcjets in the commercial sector.

Arcjets offered much greater efficiency than conventional chemical thrusters, making them a viable option for reducing the mass of a spacecraft. This mass decrease allowed for either additional payload mass or an overall lighter satellite, lowering launch costs. The USSR developed Hall thrusters during the time the United States began investigating EP. It was only after the Cold War that this technology spread to the US, based on substantial research already completed in the USSR. Although EP systems gained traction in the 1990s, it was not until the early 2000s that it seemed this technology could be useful for more than stationkeeping. Due to the success demonstrated in the early 2000s by a few missions supplementing maneuvers with EP, the United States Air Force and Lockheed Martin developed AEHF, which relied heavily on Hall thrusters for its insertion into GEO. In 2015, Boeing delivered the first satellites to use only EP thrusters for on-orbit maneuvering without carrying a backup chemical system. This decision allowed for mass savings, which created the opportunity to launch on smaller, less expensive launch vehicles [37].

LEO is becoming a more popular choice for satellite constellations. It is also an environment where the missions are often unique and drive different requirements for propulsion systems. EP systems initially showed promise for use in LEO due to their mass savings potential, but as conventional hydrazine thrusters became more versatile and better suited for LEO operations, EP systems lost their appeal because of higher complexity and less flight experience. However, the idea of using EP in LEO more prominently is starting to reemerge [37]. The primary drawback of using EP in LEO is the high power demand that EP places on a satellite. Large GEO satellites can support the power requirements of EP thrusters, but LEO satellites, which tend to be much smaller, have to be redesigned to do so. LEO satellites started by using resistojets for small maneuvers, such as deorbiting, but have slowly been shifting to the use of Hall thrusters as the technology improves. Of all LEO satellites launched with EP since 1981, 74% have used a resistojet. However, from 2009-2018, only 39% used a resistojet and 42% used a Hall thruster, indicating a shift in preference [37].

In general, the shift to EP has not been as substantial in LEO as in GEO due to the lack of incentives to invest in EP technology development. Until government and research agencies are able to improve the performance of EP thrusters, it will be difficult for them to proliferate through LEO operations. Most EP satellites that are primarily for operational, instead of technology demonstration, purposes still use resistojet thrusters due to their simplicity and low power demands. Where the maneuverability demands on a satellite are higher, Hall thrusters are preferred for their better performance. This thruster is the particular EP system investigated further in Chapter 4. Another growth area for EP is in small spacecraft, which have been an area of growth in the last decade. Over 200 small satellites, weighing less than 50 kg, were launched from 2010 to 2018, with the numbers projected to keep growing. There are several problems with using EP on small satellites, including mass, volume, and power limitations. However, the number of small satellites launched with a micro EP system on board is growing as the capabilities of the satellites improve and more ∆V is required for their missions [37].

2.3 Deployment Options

A challenge in the commercial satellite industry is the upfront capital required to design, build, and launch a satellite. Developers often spend much of the money upfront before they can provide the actual services from a constellation. Iridium, Globalstar, and most recently, OneWeb are examples of companies whose spending outpaced the demand for their systems [16, 24]. While there are always forecasts of projected demand, the reality is always uncertain. When considering traditional satellite designs, the designers use a tradespace to make trades between cost and performance to generate a Pareto curve. A design on the Pareto front is known as a non-dominated design: a design for which there is no design of equal cost with better performance and no design of equal performance at a lower cost [56]. However, instead of generating this front in terms of an estimated demand, it is better to use a variety of demands represented by a probability density function. The optimization code already considers some uncertainty when testing ReCon designs, using various target locations and timings to test their performance [34].

The problem with sizing a satellite constellation to a single demand projection is its inability to adapt to a drastically different reality. In the event the demand is drastically higher, the constellation could not support the higher demand. If the demand is drastically lower, the upfront costs were still the same, but now there is no way to recoup the initial investment. A proposed solution to this problem is using a flexible approach, allowing for a staged deployment of a constellation. Previous work focuses mostly on using a staged deployment for communication satellite constellations [17]. Research has shown that using a staged deployment, which would allow users to add capacity over time, could save the owners of communication satellite constellations 20% over the lifetime of the constellation. It does require some upfront redesign on the part of the designers, who would need to build out their constellation to allow for this flexibility instead of picking a design on the Pareto front. One of the challenges in using a staged deployment is the difficulty presented by the restricted orbits that launch vehicles can reach. Because a launch vehicle typically reaches only one plane, all of the satellites in that same plane launch on the same rocket. The designers need to pick the stages of the constellation carefully with this constraint in mind.

2.4 Research Gap

While these three concepts individually each have significant literature, the focus of this thesis is on the effects of applying these ideas to the ReCon concept. This analysis includes examining how the optimal ReCon designs change with the implementation of a scheduler into the system. Although the development of agile satellite schedulers is not novel, merging them with ReCon makes the problem challenging and introduces additional performance considerations. Introducing additional propulsion and deployment options likewise injects new options into the tradespace. It is this previous low-thrust transfer and flexible option research that makes a hybridization with the ReCon concept possible. In some cases, adding these design options makes sense, and in others, it does not. This thesis explores the previously described scenarios in order to further enhance the breadth of the reconfigurable satellite constellation design space.

Chapter 3

Scheduling Image Collection for ReCon Designs

3.1 Scheduling Background

The goal of ReCon is to achieve high-quality, responsive coverage metrics without paying for an exceptionally large or exquisite constellation. To further enhance ReCon's effectiveness, there must be an acknowledgment of the limitations the use of RGTs places on the system. After a constellation reconfigures, its ground tracks align to pass over a single location on the ground repeatedly. However, the geometry of the orbit determines the exact path the satellite travels through a point. The inclination and semi-major axis are predetermined elements from the constellation design. Depending on the desired imaging locations, this predetermined pass geometry could cause undesirable imaging scenarios. Lower latitude targets experience a faster rotation of the Earth's surface relative to the ground pass of the satellite. These passes move over the ground targets in a much more north-south direction than at higher latitude locations, which see much more east-west passes due to the slower relative speed of the satellite. This geometry could present a challenge to users who are interested in targets with an unfavorable orientation.

While RGTs are extremely important for the execution of ReCon, an optical system pointing only at nadir views the same narrow band of imagery over and over again. The user cannot choose this band after the orbit has been designed, as it is a function of the inclination and altitude of the RGTs. If the user is looking at an area exclusively within the satellite's FOV, this is not a significant concern. However, when imagery for a broader region is requested, such as for events with large devastating effects like wildfires or floods, the ability to slew the satellite off of its nadir pointing is critical. Scheduling how a satellite slews to capture images is a computationally difficult problem, especially when extrapolated over an entire constellation. This problem is valuable to explore because adding the capability for satellites to slew and retarget to multiple points on the ground increases ReCon's flexibility.

To ensure proper analysis of the original Bogosian capture sequence, the initial results presented in his thesis were first replicated, as shown in Figure 3-1 [7]. Although the exact locations of the targets used in the testing are unknown, some parameters were explicitly listed. There were 30 targets scattered over a region of 12,000 km by 2,000 km. The minimum elevation angle was set at 30°. Each target was assigned a random value between 1 and 10. The results comparing the roll-restricted and roll-relaxed cases show a similar order of performance and a similar level of improvement between the two methods when compared to the Bogosian results. Since this was the only solution provided, this comparison, along with the use of the code found in his appendix, ensured the process was running as it should.

Figure 3-1: Original Results vs. Replicated Results of Single Pass Capture Code [7]

There are a few factors in the code that can be varied to show the effects of certain choices. The first is the size of the time discretization in the dynamic programming. Figure 3-2 shows results averaged over 50 different scenarios with 10 randomly placed targets of equal weight. The x-axis shows the step-size of the discretization, and the y-axis shows the percent of value captured throughout the path planning. In general, the average trends downward. However, in individual testing it occasionally jumps up again. This discontinuity is due to the coincidental opportunity for the discretization to line up with the proper imaging opportunities. For the duration of testing, the analysis used a step-size of 1 second to balance performance with computational time.

Figure 3-2: Effects of Changing the Time Step-Size on Scheduler Performance

A second factor affecting performance is the integration time for an image, which is the amount of time the satellite must track a target before moving on to the next one. Figure 3-3 shows the percent of value captured during a pass against integration time, in seconds. Part of the value decrease seen is due to the slant range penalty imposed in the code. This penalty diminishes the value of a target if it is seen at long range. In Equation 3.1, the image value v diminishes as the distance from the target to the satellite, d, increases relative to the maximum range acceptable within the satellite's field of regard, r_x.

v = v * (1 − 0.1 * |d / r_x|)    (3.1)
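Equation 3.1 translates directly into a small helper; the function name and sample numbers are illustrative, not taken from the thesis code.

```python
def slant_range_value(v, d, r_max):
    """Discount image value v by slant range d relative to the maximum
    acceptable range r_max within the field of regard (Equation 3.1):
    v * (1 - 0.1 * |d / r_max|)."""
    return v * (1.0 - 0.1 * abs(d / r_max))

# A target seen at the very edge of the field of regard loses 10% of its value:
print(slant_range_value(10.0, 500.0, 500.0))  # 9.0
# A target seen at nadir keeps its full value:
print(slant_range_value(10.0, 0.0, 500.0))    # 10.0
```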

At a certain point, the satellite misses the target altogether, and large jumps in value occur. This change is seen clearly after 360 seconds of integration, where the satellite can no longer capture targets. Image integration time is an important factor in deciding how many targets are realistic for a satellite to capture in one single pass. However, it is set to 0 seconds for the duration of testing to remove this constraint.

Figure 3-3: Effects of Changing Imaging Integration Time on Scheduler Performance

Finally, the effects of the satellite's actuation authority were analyzed, as shown in Figure 3-4. One functionality of the scheduler is to read in experimentally gathered data regarding the slew and settle time of a spacecraft and apply its performance to the scheduler. However, to simplify this analysis, a rigid-body spacecraft is assumed. Figure 3-4 shows the effects of changing actuation authority, referred to as the agility of the satellite, in terms of degrees per second. As a point of reference, DigitalGlobe's WorldView-3 satellite, which is 2800 kg and in a 617 km altitude orbit, has an actuation authority of about 1.5°/sec [18].

Figure 3-4: Effects of Satellite Agility on Scheduler Performance

3.2 Methodology

Incorporating an image scheduler into ReCon meant adjusting some of the objectives and constraints used. These additions allowed the priorities of a multi-pass scheduler to match those of ReCon. The following investigates these metrics and how they affect performance. The metrics were used in different algorithm formulations to find optimal schedules quickly. These algorithms were adapted from previous work and put into the context of the image capture problem.

3.2.1 Imaging Metrics

A challenge in spacecraft operations is balancing the priorities of a variety of objectives. The single pass solution considers a few different mission characteristics, but overall the objective is to maximize the cumulative value of images viewed by the satellite. In this case, the designer assumes the designated value of each image correlates to its importance to the user. This value also diminishes when the image is taken at a large angle, reducing the image quality. The constraints implemented are rooted in the agility of the satellite, derived from its inertial properties and actuation authority. However, the following analysis explores additional metrics that are important when scheduling imaging. The objective of maximizing image value captured remained in place. A new objective was to minimize the slew of the satellite as it traverses its path. This objective is important for satellite resource conservation. An additional resource constraint was a limit on how many images the satellite could capture in any given pass. This constraint may not seem useful in a case with only 10 images available per pass. However, when scaling up to more complex schedules, a satellite may become memory limited if downlinks are unavailable. For this analysis, the assumption is that a satellite can downlink before it comes around on the next pass and is only memory-constrained on each individual pass.

Specifically, the purpose of this analysis is to expand the single pass scenario into a multiple pass scenario. When considering multiple passes, metrics such as revisit frequency become important. The scheduler included an objective to minimize the average revisit times for each target. If a target is not meeting its average revisit frequency requirement, the utility of imaging it increases for subsequent passes. The first of the plots in Figure 3-5 shows how increased agility allows the capture sequence to meet frequency requirements more easily. The second shows the effect of limiting the memory capacity of the satellite. In this figure, the x-axis expresses the limiting capacity as a percentage of the total number of targets to be imaged. The y-axis expresses the average number of passes that occur before a target is revisited, with the objective set at capturing an image during every pass. Both of these constraints are important factors in the scheduler's performance and should be taken into account in the design stages of a satellite.

(a) Effects of Satellite Agility (b) Effects of Memory Limits

Figure 3-5: Effects of Satellite Constraints on Average Revisit Times

3.2.2 Multiple Pass Approaches

Expanding from a single pass problem to a multiple pass problem introduces an additional layer of complexity. Figure 3-6 shows an example of a multiple pass scenario. Each square is a target. The x-axis represents the along-track direction of a satellite's ground track, while the y-axis represents the cross-track direction. The red squares show targets not yet seen, the blue squares are targets seen on the current pass, and the green squares are targets seen on a previous pass. This scenario shows an example where the scheduler required five passes before the constellation observed all targets for the first time.

Figure 3-6: Example of Multiple Pass Solution (Roll-Restricted)

The single pass problem relies on the additive properties of dynamic programming. The utility of each next target captured is independent of the previously viewed targets. However, this assumption breaks down when expanded to a multiple pass solution. With each successive pass, the utility of the targets changes, which makes dynamic programming unscalable to this problem.

ReCon addresses a particular mission that differs from most imaging missions' concepts of operations. Traditionally, static constellations capture images based on predetermined needs and pass opportunities. While most schedulers optimize to see as much as possible, the ReCon scenario considers a user who wants to see certain regions as often as possible. Therefore, traditional techniques with set viewing objectives do not allow for opportunistic image capturing.

Trying to brute force the solution leads to exceptionally high computational times. An exhaustive solution would require allocating subsets of targets to each satellite. Yet, to find the value of any given allocation, the capture code would need to be run for each subset. The scheduler outputs the performance of the satellite given that specific subset of targets. Figure 3-7 shows the amount of time, in seconds, it takes to find the optimal subsets of paths for two separate passes given a number of targets. This search was completed using a laptop with an Intel i7-8550U processor and 8 GB of RAM. It is important to note that this is only for two subsets and would increase by a factor of n targets as the number of passes increases. Therefore, a brute force approach to allocating targets to satellites, even with higher quality computers, is not viable for this process.
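To see why the exhaustive allocation scales so badly, consider assigning each of n targets to one of s pass subsets. The exact allocation semantics of the thesis code are not specified, so the sketch below simply illustrates the exponential growth in the number of allocations that would each require a capture-code run.

```python
from itertools import product

def allocation_count(n_targets, n_passes):
    """Number of ways to assign each target to one of n_passes subsets.
    Every assignment requires running the capture code per subset, so the
    exhaustive search cost grows as n_passes ** n_targets."""
    return n_passes ** n_targets

def all_allocations(targets, n_passes):
    """Explicitly enumerate the assignments (feasible only for tiny cases)."""
    for assignment in product(range(n_passes), repeat=len(targets)):
        yield assignment

print(allocation_count(10, 2))  # 1024
print(allocation_count(30, 2))  # 1073741824 -- already over a billion
```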

Figure 3-7: Computational Time Required for Exhaustive Approach

There are several challenges presented by scheduling satellite movements. Satellites are constrained by the dynamics of their motion, making the feasibility of transitions dependent on the path the satellite takes. Due to these difficulties, combined with the potentially urgent need for imaging schedules, schedulers often use greedy algorithms to find solutions to satellite scheduling problems. The integer qualities of the optimization make the problem difficult to formulate as a traditional convex optimization problem.

The ultimate goal of this analysis is to combine the abilities of an all-encompassing scheduling algorithm with ReCon's genetic algorithm search, which runs hundreds of thousands of evaluations in a large search space. To achieve this integration, the structure of the collection process favors fast evaluations over a guarantee of optimality. Greedy solutions are the fastest route to a solution. However, they have no sense of future potential schedules and therefore are not ideal for a situation where each pass can be characterized before it occurs. As situations develop, there may be a need to reprioritize tasking, another factor considered when building the collection algorithm, which favors partial schedules.

The greedy approach to the multiple pass problem operates as follows. Each pass sequentially runs through the single pass capture code. After a satellite completes a pass in the model, the scheduler returns the targets captured. The algorithm compares the results with the desired frequency at which the satellite views the targets. The value (v) of a target not meeting this frequency requirement increases relative to the ratio between the desired viewing frequency (τ) and the actual viewing frequency (T), raised to a power p, to make the target more valuable on the next pass.

v_{i+1} = v_i * (τ / T)^p    (3.2)
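Equation 3.2 can be sketched as a small update function. The exponent p is not given a value in the text, so the value used below is illustrative.

```python
def update_value(v_i, tau, T, p):
    """Equation 3.2: v_{i+1} = v_i * (tau / T) ** p.

    tau is the desired viewing frequency and T the actual viewing frequency.
    A target viewed less often than desired (T < tau) gains value for the
    next pass; an over-served target (T > tau) loses value.
    """
    return v_i * (tau / T) ** p

# A target seen half as often as desired quadruples in value when p = 2:
print(update_value(5.0, 1.0, 0.5, 2))  # 20.0
# A target seen twice as often as desired is strongly de-prioritized:
print(update_value(5.0, 1.0, 2.0, 2))  # 1.25
```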

After the single capture code runs through all passes, the objective function evaluates the entire process as a whole, including the total slew of the capture sequence, the total images captured, and the frequency requirements met. Yet, the results are dependent on the order in which the capture code processes the passes and the relative initial values of the targets. Therefore, overlaying search techniques on the greedy algorithm, to change the order in which the scheduler processes the passes and the initial relative values of the targets, can allow a user to improve performance at the sacrifice of computational time.

Three common techniques for solving scheduling problems were considered: simulated annealing, ant colony optimization, and genetic algorithms, as well as a hybrid approach modified from Chen et al., who developed an algorithm that combined these three techniques [12]. In this application, each pass and initial target value exist as a node to allow for the application of traditional scheduling algorithms. The frequency of the capture of each target up until a given point drives the dynamic programming solution. Therefore, rearranging the passes, or changing the initial values of the targets, is the equivalent of rearranging the nodes of a path.

Since this problem closely mirrors a traditional traveling salesman problem, the code used the following parameters, derived from previous simulated annealing literature [31]. The initial temperature was N/2, N being the number of nodes, with the final temperature set at 0.1. The decay rate was set at α = 0.9. The change in performance was scaled so the performance metric was of the same order as N.
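A minimal simulated annealing skeleton using these parameters (T0 = N/2, T_final = 0.1, α = 0.9) might look as follows. The toy objective stands in for the multi-pass capture value and is not the thesis scheduler.

```python
import math
import random

def simulated_annealing(nodes, score, seed=0):
    """Anneal over orderings of `nodes`, maximizing `score(order)`, with
    the cooling schedule from the text: T0 = N/2, T_final = 0.1, alpha = 0.9."""
    rng = random.Random(seed)
    order = list(nodes)
    best = list(order)
    T, T_final, alpha = len(nodes) / 2.0, 0.1, 0.9
    while T > T_final:
        # Mutation: swap two nodes in the current ordering.
        i, j = rng.sample(range(len(order)), 2)
        cand = list(order)
        cand[i], cand[j] = cand[j], cand[i]
        # Accept improvements always; accept regressions with probability
        # exp(delta / T), which shrinks as the temperature cools.
        delta = score(cand) - score(order)
        if delta > 0 or rng.random() < math.exp(delta / T):
            order = cand
        if score(order) > score(best):
            best = list(order)
        T *= alpha
    return best

# Toy objective: reward orderings whose neighbors are close in value.
nodes = [3, 1, 4, 1, 5, 9, 2, 6]
score = lambda o: -sum(abs(a - b) for a, b in zip(o, o[1:]))
result = simulated_annealing(nodes, score)
```

With eight nodes the temperature decays from 4.0 to 0.1 in roughly 35 iterations, which is why the method is cheap but offers no optimality guarantee.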

The challenge in implementing the ant colony algorithm with the multiple pass problem described is the difficulty in calculating the value for η in Equation 2.4. This value in the traditional sense is the inverse of the distance between one city and another in the traveling salesman problem, a value that exists in a lookup table. In the problem formulation described, computing η is equivalent to calculating the performance of a path-target combination, which defeats the purpose of reducing the number of times the code computes the path value. Therefore, in the formulation implemented, η is set to 1.

The third method implemented a genetic mutation in conjunction with simulated annealing. This method maintained the same cooling conditions for the acceptance criteria. The difference lies in how new paths are explored: while simulated annealing alone implemented a single mutation, the genetic algorithm uses parent solutions to generate several offspring.

All three of these methods can be very useful. In typical path-finding problems, the ant colony algorithm performs much better than simulated annealing. Yet, the ant colony algorithm relies partially on the ability to predict the fitness of the next move before traversing it. Due to the nature of this image collection problem, the values of future transitions are dependent on previous ones and do not exist in a lookup table. Instead, the scheduler must calculate the path value. Therefore, the hybrid structure incorporates the ant colony algorithm to reduce the randomness of the new paths for the search algorithm but relies on simulated annealing to converge on a solution and genetic algorithms to enhance the vastness of the search space.

The hybrid structure, adapted from Chen et al., uses all three of these methods for the search. The outer loop consists of a simulated annealing problem, cooling the acceptance criteria as the search continues. The ant colony algorithm generates new paths for the search space, and these paths determine one of the parents for the genetic algorithm. After the best path from the ant colony algorithm is determined, this path and the reference path become parents for the new offspring in the genetic algorithm. The best solution out of the three options remains as the reference solution.

Figure 3-8 shows the importance of using a folding horizon. The multiple pass capture code showed significantly greater improvement when it runs for the same raw number of searches while attempting to schedule fewer passes. This improvement correlates with the previous literature cited. Due to the exponentially increasing complexity of larger problems, chunking the problem into smaller pieces makes better solutions much easier to find, even if they may not be optimal for the entire schedule.

Figure 3-8: Benefits of Using Folding Time Horizon
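The outer-loop structure of the hybrid can be sketched as follows. This is a heavily simplified illustration of the Chen et al. style combination, not the thesis implementation: the pheromone model is reduced to a single weight per node (early placement in an accepted path raises the weight), and the genetic step is a basic order crossover between the ant path and the reference path.

```python
import math
import random

def hybrid_search(nodes, score, seed=0):
    """Toy hybrid of ant colony, genetic crossover, and simulated annealing.
    Assumes distinct node labels (needed by the order crossover)."""
    rng = random.Random(seed)
    n = len(nodes)
    pheromone = {v: 1.0 for v in nodes}
    ref = list(nodes)                       # reference (parent) path
    best = list(ref)
    T, T_final, alpha = n / 2.0, 0.1, 0.9   # same cooling as plain SA
    while T > T_final:
        # Ant colony step: pheromone-biased shuffle proposes a new path.
        ant = sorted(nodes, key=lambda v: -pheromone[v] * rng.random())
        # Genetic step: order crossover of the ant path and the reference.
        cut = rng.randrange(1, n)
        head = ant[:cut]
        child = head + [v for v in ref if v not in head]
        # Simulated annealing governs acceptance of the stronger candidate.
        cand = max((ant, child), key=score)
        delta = score(cand) - score(ref)
        if delta > 0 or rng.random() < math.exp(delta / T):
            ref = cand
        if score(ref) > score(best):
            best = list(ref)
        # Reinforce nodes placed early in the accepted reference path.
        for rank, v in enumerate(ref):
            pheromone[v] += (n - rank) / n
        T *= alpha
    return best

# Toy objective: reward paths that keep node values in ascending order.
random.seed(1)
start = list(range(8))
random.shuffle(start)
score = lambda path: sum(1 for a, b in zip(path, path[1:]) if a < b)
result = hybrid_search(start, score)
```

The key structural point mirrors the text: the ant step supplies a non-random parent, the crossover widens the search around the reference, and the annealing schedule decides what survives as the reference solution.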

3.3 Scheduler Results

The ReCon simulation runs against target decks that pull from a target distribution based on the economic impact of natural disasters across the globe [34]. Figure 3-9 shows the target points plotted across the globe to represent the distribution of disasters. These 10,000 target points are representative of this distribution.

Figure 3-9: ReCon Simulation Target Locations

Figure 3-10 shows the greedy version of the collection code used on both a static constellation covering an area for ReCon and on a reconfigured constellation. In this figure, each block on the bar graph represents one of the ten different targets in the area. The y-axis shows the number of images taken each day, and the x-axis marches forward through the observational period, which in this case is two weeks long. It is interesting to note that the static configuration, seen on the right, collects more images on certain days than the reconfigured constellation. However, the reconfigured constellation dramatically evens out the performance, making it consistent across all targets throughout the entire observational period. Consistency, not necessarily a lot of data all at once, is the key to persistence.

Figure 3-10: Image Captures for ReCon vs. Static Design

Figure 3-11 breaks down the revisit metrics as a moving average of the revisit time to the target throughout the time period. The solid black line indicates the 1-hour daytime revisit requirement imposed for the scenario. While the static design does meet the requirement at times, ReCon allows for consistency throughout the entire scenario.

Figure 3-11: Average Revisit Time for Regional Target

When further breaking down this data to look at actual revisit metrics, Figure 3-12 shows the number of eligible passes that occur before the targets are revisited. Eligible passes are passes that are within range of the target area and that occur during daylight hours, between 6:00 AM and 6:00 PM local time. The reconfigurable constellation consistently revisits the individual targets in fewer passes throughout the entire time period.

Figure 3-12: Average Passes Before Revisit for all Targets

When testing the algorithms, a toy problem was used to test the different algorithm designs quickly. The problem had a similar structure to the image collection problem, in that the different paths had different values, but the transition values were unknown before the scheduler built the path. The results below look at three different factors: the number of paths searched before the scheduler found the solution, the percentage of tests in which the scheduler found the optimal solution, and the final convergence value when the scheduler found the final solution. Convergence is a standard metric used in evaluating optimization algorithms. The speed of convergence is defined as the rate at which the error (e_k) between the current solution (x_k) and the optimal solution (x*) approaches zero [3].

e_k = ||x_k − x*||    (3.3)

e_k → 0    (3.4)

Since the optimal schedule is unknown in the ReCon scenario, in the following analysis convergence was defined as in Equation 3.5. In this equation, x is the current optimal solution, k is the number of paths searched between successive optimal solutions, and n is the total number of optimal solutions found up to that point in the search. The solution is updated at least every 10 steps in the search process to check convergence over time.

e_k = (1/n) * Σ_{i=1}^{n} ||x_{i+1} − x_i|| / k_{i+1}    (3.5)

Figure 3-13 shows the effects of different convergence criteria on the different algorithms tested. The algorithms continued to search until the convergence value was less than the required convergence criterion shown on the x-axis. The best path found at that point was output as the optimal solution. Since, in this test case, the optimal solution was known, the analysis recorded the percentage of times the algorithm found the optimal solution. This percentage follows a binomial distribution in which the algorithm either does or does not find the optimal solution. Equation 3.6, known as a Wilson score interval, can be used to establish a confidence interval for this probability [64]. This equation calculates a 95% confidence interval, bounded by the p̂ values, where p is the proportion of times the algorithm found the optimal solution and n is the number of trials.

p̂ = p ± 1.96 * √(p(1 − p) / n)    (3.6)
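Equation 3.6 translates directly into code; the sample numbers below are illustrative, not results from the thesis testing.

```python
import math

def confidence_bounds(p, n, z=1.96):
    """95% confidence interval for a binomial proportion as in Equation 3.6:
    p_hat = p +/- z * sqrt(p * (1 - p) / n)."""
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return p - half_width, p + half_width

# Suppose the algorithm found the optimal solution in 95 of 100 trials:
lo, hi = confidence_bounds(0.95, 100)
print(round(lo, 3), round(hi, 3))  # 0.907 0.993
```

The lower bound is the conservative quantity plotted in Figure 3-13: even with a 95% hit rate over 100 trials, one can only claim roughly 91% with 95% confidence.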

Figure 3-13 shows the lower bound of the 95% confidence interval of the percentage of the time each algorithm finds the optimal solution given different convergence requirements. In this case, finding the optimal solution 95% of the time is a reasonable requirement. The closer to 100% the requirement becomes, the more computationally expensive finding the optimal solution is. Due to the considerable computational loads on ReCon already, increasing this requirement above 95% is unnecessary. While optimal global schedules can be helpful for initial planning, running a scheduler as close to the imaging events as possible generates better schedules for uncertain situations [39]. Updating a schedule over time also allows the user to reprioritize their imaging objectives. A perfect schedule is nonessential for the initial performance estimates.

Figure 3-13: Lower Bound of 95% Confidence Interval for Chance of Finding Optimal Path Given Varying Convergence Criteria

By imposing this 95% requirement, each algorithm has different convergence criteria. These criteria correspond to the smallest value at which the confidence in an optimal solution drops below 95% probability, using the lower bound of the confidence interval. Although, in some cases, the algorithms show better results at higher convergence factors, this is due to noise in the testing data. Therefore, the most conservative result is used.

Figure 3-14 shows this convergence factor translated to an upper bound on how many paths the algorithm searched to reach the convergence criteria. Again, this value can vary significantly depending on the initial guesses of the algorithm, so the value shown in the figure is the 95th percentile of path lengths required. This analysis uses the 95th percentile to evaluate the performance of the different algorithms by assuming the worst-case scenario. The analysis shows that although simulated annealing seemed to need fewer paths than the other algorithms, its performance relative to the convergence factor drops off quickly. The hybrid algorithm's performance decline is much more gradual, and its higher convergence value translates to the fewest paths required to search to have confidence that the algorithm found the best solution.

Figure 3-14: Iterations Required to Achieve Varying Convergence Criteria

The modified ant colony algorithm had the poorest performance, requiring the largest number of paths to reach the required confidence level. The modifications made to the algorithm hindered its ability to make greedy decisions about the next step to be taken in the path. Because of this modification, the algorithm was rather ineffective at high confidence levels. Simulated annealing had the highest sensitivity to the convergence criteria. The confidence in the optimal solution quickly dropped off as the convergence factor increased, mirrored by a drastic decrease in the number of paths required. This algorithm excels at providing good solutions quickly but takes much longer to find optimal solutions due to the randomness in the search. Therefore, it is a recommended approach for initial searches. The simulated annealing with the genetic mutation had about the same performance at the 95% probability level but performed better in the higher performance regimes and worse in the lower performance regimes. Using different mutation techniques could also improve this algorithm, which could prove to be more effective. Finally, the hybrid algorithm performed the best in the high performance regimes.

It is important to note that there is an increase in the computational time required when performing the entire hybrid algorithm. However, while the computational time of an algorithm is extremely important when evaluating its effectiveness, the algorithms were tested against a fast lookup table. When applied to the full collection algorithm, the dynamic programming search must run for each pass, which makes each individual iteration expensive. Therefore, it was more important to identify the algorithms that required a small pool of paths to search before converging on a solution.

3.4 Implementation with Original ReCon Code

Since the original ReCon code simulation responds to only one event at a time, the ultimate goal of this analysis was to use the scheduler to inform the ReCon design of additional imaging considerations. In initial integration testing, the collection code evaluated the imaging capabilities of the reconfigured satellites in post-processing, after the ReCon code determined the satellite maneuvers. For example, Figures 3-11 and 3-12 used a ReCon test design, which maneuvered a constellation of 18 satellites to respond to an event. The output of this data included the COEs for all 18 satellites, and the capture code scheduler extrapolated the ground tracks from this information. These ground tracks were then used in the scheduler to determine image capture assignments for the constellation. However, this does not affect the actual design of the constellation, only the imaging schedule after reconfiguration.

A second option was to evaluate each design's performance by running the collection code over each new design as it was developed in the ReCon simulation. This option would have some minor effects on the optimal designs but would be computationally intensive. The utility metrics used in deciding optimal reconfiguration assignments leverage the RGTs' known revisit frequencies. Adding the additional computational steps of integrating the collection code into the optimization loop would prove to be exponentially more time-intensive. There would also be little benefit to adding the collection code metrics directly into the reconfiguration decisions. The ReCon process optimizes to find a high revisit frequency combination of maneuvers, which in turn generates high-performance metrics in the collection code as well.

Therefore, the implementation method chosen was to provide supplementary information to the ReCon optimizer based on the RGT parameters of the design. Since inclination and ROM altitude drive the location of an RGT, the collection code can find the performance of a constellation in preprocessing, instead of evaluating the reconfigurations individually. When using ReCon in conjunction with the scheduler, a user would first define the system's inputs. These inputs include the design constraints for the constellation. In this case, the collection algorithm is particularly sensitive to the inclination and the RGT altitude options. The user would also designate their entire target set for a region, not just one reconfiguration point per region. Search and rescue strategies use "supernode" locations to distribute assets adequately [52]. The supernodes are chosen as central locations relative to several other points of interest and help to organize complicated rescue efforts. The collection code reads in the targets for each region and outputs one supernode location per region to feed into ReCon. Since RGTs are used in this process, the ground track of any given reconfiguration can be predicted given the altitude and inclination of the track. The collection code can quickly run a greedy, multiple pass solution for each of the supernodes in the list using this RGT information. The optimal supernode is then matched to each combination of inclination and RGT altitude, along with a performance metric for the passes. Different supernodes were generated based on 10 randomly placed targets, which are the white points in Figure 3-15. The supernode is the point over which the ReCon code maneuvers the satellites' RGTs directly.
The line in Figure 3-15 represents the ground track a satellite takes through the region when in ROM. The scheduler calculated the relative performance of the passes for each different supernode location. The contour plot in Figure 3-15 shows the performance of different supernode locations, with blue indicating superior performance.

Figure 3-15: Performance of Different Supernode Locations Given Specified Target Distribution

The star in this contour plot shows the mean target location, based on the latitude and longitude of all targets in the region. For randomly placed targets, this star fell within the highest performance level of the contour in every test. Therefore, to reduce computation time, it is acceptable to use the mean location as the supernode location to pass into ReCon. However, a user could run through each supernode location if desired to ensure this location provides the optimal performance, as the distribution list of targets used was not exhaustive.

After the supernode is determined, the different constellation designs can be evaluated against different regional target sets quickly. This creates a relative performance scaling factor that can be applied upfront to any given constellation design. When ReCon reads in the inclination and altitude of the current design, it can evaluate the predicted performance of this design for all of the regional target sets, using the supernode for each set as the reconfiguration point. Figure 3-16 shows contour plots of the performance of different combinations of RGT altitudes and inclinations for six different regional target sets.
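The mean-location supernode just described can be sketched as a simple averaging step; the coordinates below are invented for illustration.

```python
def mean_supernode(targets):
    """Mean-location supernode: average the latitudes and longitudes of the
    regional targets. This ignores longitude wrap-around at +/-180 degrees,
    which is acceptable for a compact region."""
    lats = [lat for lat, lon in targets]
    lons = [lon for lat, lon in targets]
    return sum(lats) / len(lats), sum(lons) / len(lons)

# Hypothetical regional targets as (latitude, longitude) pairs:
targets = [(34.0, -118.2), (36.2, -115.1), (33.4, -112.1), (35.1, -120.4)]
lat, lon = mean_supernode(targets)  # roughly (34.675, -116.45)
```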

Figure 3-16: Performance of Varying RGT Designs Using Random Target Locations with Same Supernode

Each plot has the same supernode but different target distributions. These plots show that having the same supernode does not equate to the same performance for a given constellation design. The performance calculated with the collection algorithm must therefore be taken into account when determining the final performance of a ReCon constellation design. For this testing, the supernode sits at the equator; however, the optimal inclination is rarely equatorial. This lack of correlation is due to the orientation of the targets influencing the optimal ground track. Although it is hard to predict the best inclination without seeing the target distribution, higher altitudes do yield better results.

ReCon continues to output its performance metric, which it uses to assign certain satellites to maneuver. This performance metric is related to how often a satellite passes over the supernode itself. However, in the final evaluation of a design, this performance metric must be scaled by the design's performance in the collection code. The collection code determines the imaging capabilities of the constellation for all the targets in the region, and the regional collection performance is calculated upfront, prior to reconfiguration. The optimization weighs the utility of the ReCon code (U_ReCon) against the utility of the collection code (U_Collection) using the constants α and β, as shown in Equation 3.7. A final utility metric (U) is compared against the cost of a constellation design to return the final Pareto front. It is strongly suggested to weight the performance of ReCon at a level of 0.8 or higher to account for its significant positive influence on the performance of the collection code as it currently stands.

U = \alpha U_{ReCon} + \beta U_{Collection}, \quad \text{where } \alpha + \beta = 1 \qquad (3.7)
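A minimal sketch of this weighting, assuming only the constraint α + β = 1 from Equation 3.7; the function name and the default weight of 0.8 (the recommended lower bound from the text) are illustrative, not from the thesis code.

```python
def total_utility(u_recon, u_collection, alpha=0.8):
    """Equation 3.7: U = alpha * U_ReCon + beta * U_Collection,
    with beta = 1 - alpha so the weights always sum to one."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1] so that beta = 1 - alpha is valid")
    beta = 1.0 - alpha
    return alpha * u_recon + beta * u_collection
```

Sweeping alpha over [0.8, 1.0] would reproduce the recommended weighting range when building the final Pareto front.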

This final step of weighting the two performance metrics against each other is critical because the use of the collection code introduces an additional trade to consider in the design of the constellation. ReCon can choose lower altitudes for some solutions in order to use smaller optics on the satellites. However, the lower altitude RGT orbits perform significantly worse in the collection code when compared to the higher altitude RGT orbits. The scheduler would drive the ReCon optimization code to pick higher altitude RGT orbits, as the overall capture performance is better per pass. However, other effects, including aperture size and launch considerations, also drive the choice of RGT altitude. Therefore, this performance metric is used to inform the quality of design choices, but not to override the original metrics used in the ReCon formulation.

This analysis also points back to the potential use of elliptical orbits. There are additional challenges in properly assigning satellites to the correct orbital slots when using elliptical orbits, which require perigee or apogee to be lined up in the proper position relative to the Earth depending on imaging requirements. However, since there seem to be benefits both to a high, slower pass, allowing the satellite to image more targets, and to a low, fast pass, achieving higher quality resolution, there is likely value in combining the two. An ideal operational scenario would include some high passes to survey the entire site, with those satellites passing the information on to lower altitude satellites, which can take high-resolution images. However, due to the complexity of implementing these decisions, this kind of analysis is left as future work in choosing orbits for multiple image capturing scenarios.

3.5 Wildfire Case Study

From late 2019 through early 2020, Australia experienced many wildfires, burning an estimated 40 million acres of land, focused on the southeast region of the country [10]. The effects of this devastation included hundreds of millions of animals killed, thousands of homes destroyed, and dozens of lives lost. Wildfires are a challenging natural disaster, as they can affect large areas, move and change quickly, and are extremely dangerous to control. During the fight against the Australian fires, three aircrew members and three firefighters were killed [9, 44, 51]. With the recent rise in global temperature, extreme wildfires are becoming more commonplace and continue to have devastating effects [60]. Satellite imagery helped to identify the locations of the fires as they burned. The satellites used to identify these locations had a GSD of several hundred meters and could only show locations of fires, but could not provide detailed mapping information. Figure 3-17 is an image showing the locations of fires on January 14th, 2020¹ [42].

Figure 3-17: NASA Identification of Wildfire Locations: January 14th, 2020 [42]

¹The author acknowledges the use of data and imagery from LANCE FIRMS, operated by NASA's Earth Science Data and Information System (ESDIS), with funding provided by NASA Headquarters.

Each block shown in Figure 3-17 is 25 km by 25 km, with the color intensity indicating how many wildfires remote sensing identified in the region. The red blocks show areas with several hundred fires identified. There are 66 blocks in the region that correspond to the densest fire locations. These were translated to their longitude and latitude points and numbered, as shown in Figure 3-18.

Figure 3-18: Mapping Fire Locations to Longitude and Latitude Points

Since these points are 25 km blocks, a satellite with a large enough FOV could capture a few of these targets at once. Figure 3-19 shows a breakdown into 100 km x 100 km regions, the assumed FOV of the constellation for this scenario. This delineation leaves the satellite with 14 distinct imaging locations to capture. Although the satellite could not view all 14 imaging points in a single pass, using ReCon would enable high-resolution imagery to be captured of these high-value targets. This imagery could provide important information in near real-time to firefighters trying to determine the extent of the damage in a particular region they are entering.
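The grouping of 25 km blocks into 100 km capture regions can be sketched as simple grid binning. This is a sketch under stated assumptions: the block centers below are hypothetical planar x, y coordinates in kilometers, and a real implementation would first project latitude/longitude into a local planar frame.

```python
import math

def capture_regions(block_centers_km, fov_km=100.0):
    """Bin block centers (planar x, y in km, an assumed local projection)
    into the set of distinct fov_km x fov_km capture regions they occupy."""
    return {(math.floor(x / fov_km), math.floor(y / fov_km))
            for x, y in block_centers_km}

# Hypothetical block centers: five 25 km blocks collapse into three
# 100 km x 100 km capture regions
blocks = [(10, 10), (30, 40), (120, 20), (140, 60), (210, 210)]
regions = capture_regions(blocks)
```

In the wildfire scenario, the analogous binning reduces the 66 blocks to the 14 distinct imaging locations shown in Figure 3-19.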

Figure 3-19: Capture Regions for Agile Imaging

Figure 3-20 shows the importance of implementing agile imaging techniques. The techniques described in Section 3.4 chose the optimal ground track for collection. If the constellation used push-broom style imaging techniques, only pointing nadir, the satellites would miss a significant number of the target locations entirely. Using RGTs means the constellation would not capture targets outside of the ground track's path. An agile satellite, which can point off-nadir, allows the satellite to view the remaining targets.

Figure 3-20: Capture Regions for Push-Broom Imaging

The scheduler optimized over the parameters described above in Section 3.2. Figure 3-21 shows one of these metrics: the number of times the constellation imaged each of the 66 targets over one week of imaging every hour during daylight hours. The push-broom imaging captures a large number of images, but only of the fire locations directly in its path. This technique could be useful for a well-defined region, but the volatility of natural disasters makes such knowledge challenging to maintain over the course of a week. The agile greedy solution better distributes imaging resources across all of the targets, with each target viewed a median of 11 times over the week.

Figure 3-21: Target Captures for Different Imaging Techniques

Overall, the constellation cannot achieve hourly revisit of all 66 target locations. However, in well-defined regions lying directly along a satellite's ground track, high-frequency, high-resolution imagery is achievable. This specific scenario also considers a large number of targets across a vast landmass. Balancing field of view, resolution, and revisit requirements is important in determining the actual feasible revisit frequencies over a region.

The full scheduler found a solution for a 3-day schedule of this scenario. Table 3.1 below compares the optimal solution found by the scheduler with the baseline greedy solution. The scheduler tracked the total number of images captured, the total slew required, the mean average revisit across the targets, and the standard deviation (σ) of the average revisit across all targets.

Table 3.1: Results of 3 Day Schedule for Fire Case Study

Parameter                     Images Captured   Slew [rad]   Average Revisit [hours]   Average Revisit σ [hours]
Greedy Solution (Baseline)    365               4.53         5.55                      1.75
Optimal Solution              349               3.74         5.36                      1.54

Notice that this optimal solution does not show superior performance in the total images captured category. This is a common trade when minimizing the average revisit metric across all targets and the standard deviation of that metric. Also note that the greedy schedule achieves a high level of performance. For this reason, a greedy schedule is recommended for initial schedule generation. The optimization of the schedule allows the user to find Pareto optimal schedules and to choose solutions that exhibit higher performance in the user's preferred metrics. The example shown in Table 3.1 favored a low average revisit over the number of images captured; however, the inverse solutions also exist for the user. Overall, using a schedule that allows for off-nadir imaging is extremely important for performance, and optimizing a schedule gives the user the opportunity to pick a schedule that matches their mission priorities.
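A minimal sketch of a greedy pass scheduler in this spirit is shown below. It is illustrative only: the thesis scheduler also weighs slew cost and visibility geometry, which are omitted here, and the opportunity structure is an assumed simplification.

```python
def greedy_schedule(opportunities):
    """One-pass greedy scheduling sketch. Each opportunity is a pair
    (time, visible_target_ids); at each one, capture the visible target
    that has gone longest without an image, which naturally spreads
    revisits across the target set rather than re-imaging easy targets."""
    last_seen = {}
    schedule = []
    for time, visible in opportunities:
        if not visible:
            continue  # no target in view at this opportunity
        # never-seen targets compare as -inf, so they are imaged first
        target = min(visible, key=lambda t: last_seen.get(t, float("-inf")))
        last_seen[target] = time
        schedule.append((time, target))
    return schedule
```

A Pareto-style optimizer would then perturb this baseline schedule, trading total captures against average revisit and its standard deviation as in Table 3.1.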

Chapter 4

Implementation of Low-Thrust Propulsion Systems in ReCon Framework

4.1 Low-Thrust Propulsion Background

Before diving into the potential uses of low-thrust propulsion, it is important to note that the term covers several types of propulsion. Specifically, this analysis is most concerned with electric propulsion (EP) systems, which include electrothermal, electrostatic, and electromagnetic propulsion [14]. Electrothermal propulsion is the most commonly used. Resistojets are one such system and are limited to specific impulses (a measure analogous to fuel efficiency) near 350 seconds due to the limitations imposed by the use of hydrazine. Arcjets achieve specific impulses up to 600 seconds, although they are half as efficient as their resistojet counterparts. Electromagnetic propulsion systems use magnetic fields to accelerate plasma to generate thrust. Electrostatic propulsion systems, which include ion and Hall thrusters, have very high specific impulses, up to 4,000 seconds, but they generate so little thrust that they are not typically used as a primary propulsion system.

EP systems are emerging as a more common part of space missions as the technology matures. One of the most important benefits of EP technology is the potential mass savings it provides for a satellite. This mass savings generally allows for either an upgrade of payload capabilities or a reduction of overall mass in general, which correlates to the overall constellation cost. Referencing the rocket equation, Equation 4.1, the specific impulse (Isp) acts as an efficiency characteristic of a thruster. In this equation, m_0 is the initial mass of the satellite, which is the sum of the final mass (m_f) and the propellant mass (m_p). The mass ratio relates to the ΔV capabilities of the satellite: the more ΔV required for a mission, the more impactful a high Isp becomes in terms of fuel savings [50]. ReCon is a situation in which adding ΔV adds performance value to the satellite. In the following equation, g = 9.81 m/s².

\frac{m_0}{m_f} = \frac{m_f + m_p}{m_f} = e^{\Delta V / (I_{sp} \, g)} \qquad (4.1)
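A small numerical sketch of Equation 4.1, rearranged to solve for propellant mass, illustrates the fuel leverage of a high Isp. The dry mass, ΔV, and chemical Isp values below are assumed for illustration; the 1,600 s Isp matches the Table 4.2 test thruster.

```python
import math

G0 = 9.81  # m/s^2, as used in Equation 4.1

def propellant_mass(m_f, delta_v, isp):
    """Rearranged Equation 4.1: propellant mass m_p = m_0 - m_f needed to
    give a satellite of final (dry) mass m_f a delta-V capability at Isp."""
    return m_f * (math.exp(delta_v / (isp * G0)) - 1.0)

# Illustrative comparison: a hypothetical 200 kg dry satellite that
# needs 300 m/s of delta-V over its lifetime
chemical = propellant_mass(200.0, 300.0, 230.0)   # hydrazine-class Isp
electric = propellant_mass(200.0, 300.0, 1600.0)  # Table 4.2 Hall thruster
```

With these assumed numbers, the chemical system carries several times the propellant of the EP system for the same ΔV budget.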

However, merely increasing Isp to the largest extent possible is not always ideal, as driving up Isp increases the additional mass needed for an electrical power system that can accommodate EP thrusters. In electrical power systems, the mass of the power system is proportional to the power required, and at constant thrust, the power required is proportional to the Isp. Therefore, although a higher Isp reduces the propellant mass required, the mass of the electrical power system increases as a result. In general, the optimal Isp depends on the ratio of the mass of the power system to the rest of the satellite.

Low-thrust technology spans a variety of developmental stages. In December 2018, NASA released its State of the Art Small Spacecraft Technology report, which contains a brief overview of the status of propulsion systems for small satellites. Table 4.1 below is a summary from the NASA report [43].

Table 4.1: Summary of Current Propulsion Systems for Small Satellites [43]

Product                                   Thrust          Specific Impulse [s]   TRL Status
Hydrazine                                 0.5 - 30.7 N    200 - 235              9
Cold Gas                                  10 mN - 10 N    40 - 70                GN2/Butane/R236fa 9
Alternative (Green) Propulsion            0.1 - 27 N      190 - 250              HAN 6, ADN 9
Pulsed Plasma and Vacuum Arc Thrusters    1 - 1300 μN     500 - 3000             Teflon 7, Titanium 7
Electrospray Propulsion                   10 - 120 μN     500 - 5000             7
Hall Effect Thrusters                     10 - 50 mN      1000 - 2000            Xenon 7, Iodine 7
Ion Engines                               1 - 10 mN       1000 - 3500            Xenon 7, Iodine 4
Solar Sails                               0.25 - 0.6 mN   N/A                    85 m² 6, 35 m² 7

The previous MIT ReCon analysis modeled a hydrazine monopropellant propulsion system, arguing for the use of impulsive thrust systems instead of low-thrust systems. Initial ReCon analysis showed that high-thrust, impulsive maneuvers, like the ones provided by this system, provide the most significant performance gains from reconfigurability when compared to low-thrust systems [34]. However, the initial analysis did not explore a sufficient search space to find low-thrust transfer solutions. A more thorough investigation into low-thrust systems shows there are potential use cases for EP, including the return to global observational mode (GOM). However, the mass trades invoked by implementing such a system do not compensate for the decreased performance. The following analysis models a satellite with a Hall effect thruster with the parameters listed in Table 4.2 [63]:

Table 4.2: Test Thruster Parameters

Thrust (T)       80 mN
Efficiency (η)   0.5
Power (P)        1350 W
Isp              1600 s
g                9.81 m/s²

As described in Chapter 1, the original implementation of ReCon used instantaneous Hohmann transfer burns to place the satellites into drift orbits, shifting their ground tracks at a differential rate compared to the ground track shift in GOM. However, even using Hohmann transfers, the satellites still take days to reconfigure, sometimes even more than a week. This delay exists because orbital precession can only go so fast; the satellite spends most of that maneuvering period waiting in a drift orbit. A paper by McGrath and Macdonald showed that a low-thrust satellite could potentially maneuver to an event within a similar time period using a comparable amount of ΔV [40]. This work is expanded below to integrate low-thrust techniques into the ReCon framework. Instead of using a four-burn maneuver, as the original ReCon framework does, the low-thrust technique uses two burns. The first burn either increases or decreases the orbit's semi-major axis, burning for a total of t_1 seconds. The satellite then waits in its drift orbit for t_2 seconds before burning into the final desired orbital slot for t_3 seconds. This process is illustrated in Figure 4-1.

Figure 4-1: Burn Sequence of Low-Thrust Transfer

The methods used to implement low-thrust technology in ReCon were derived from McGrath and Macdonald's general perturbation method, which provides a closed-form solution for the final orbital properties of a low-thrust transfer. The authors evaluate their technique against a numerical calculation of subsatellite points for a nonmaneuvering satellite over time. The mean difference between the analytical and numerical solutions was less than 15 km after two weeks of propagation, well within typical field of view ranges. However, this discrepancy grows with time and should be noted when applying the following techniques to the longer duration propagations that this analysis considers.

The two driving equations leveraged by the general perturbation method are the rate of change of RAAN and the rate of change of AOL with respect to the change in semi-major axis, shown in Equations 4.2 and 4.3. These equations can be integrated with respect to a, and a detailed description of the resulting equations can be found in McGrath and Macdonald's work [40]. The change in semi-major axis with respect to time is derived from the thrust (T) of the satellite. Using the parameters in Table 4.2, the acceleration of the satellite can be found using Newton's second law, under the assumption that the change in mass during the transfer is small enough that the acceleration (A) remains constant throughout a maneuver. This process is shown in Equations 4.4 and 4.5. Since propellant mass is a small fraction of the satellite's mass when using EP systems, this is a valid assumption for the purposes of this work.

\frac{d\Omega}{da} = \frac{-3 \, \bar{n} \, \bar{n}' R_e^2 J_2}{4 \bar{a}^2 A} \cos i \qquad (4.2)

\frac{du}{da} = \frac{\bar{n} \, \bar{n}'}{2A} \left( 1 + \frac{3 R_e^2 J_2}{4 \bar{a}^2} \left( 4 - 5 \sin^2 i \right) \right) \qquad (4.3)

A = T / m_{satellite} \qquad (4.4)

\frac{d\bar{a}}{dt} = \left( \frac{d\bar{a}}{dt} \right)_{J_2} + \left( \frac{d\bar{a}}{dt} \right)_{thrust} = \frac{2}{\bar{n}} A \qquad (4.5)

Where:

\bar{n}' = n \left[ 1 - \frac{3 R_e^2 J_2}{4 a^2} \left( 3 \sin^2 i - 2 \right) \right] \qquad (4.6)

It is important to note that these equations use mean orbital elements. In the case where the osculating semi-major axis is given as an input, the conversion in Equation 4.7 could be used to calculate the mean semi-major axis. However, this step prevents the use of an analytical solution, and the authors show there is minimal impact in assuming ā = a for short duration propagation.

\bar{a} = a - \frac{3 J_2 R_e^2}{2a} \sin^2(i) \cos(2u) \qquad (4.7)
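Equations 4.2 through 4.6 can be evaluated numerically as below. This is a sketch under stated assumptions: tangential thrust, mean elements treated as osculating (ā ≈ a), and an example sun-synchronous-like orbit and acceleration chosen purely for illustration.

```python
import math

MU = 3.986004418e14   # m^3/s^2, Earth gravitational parameter
RE = 6378137.0        # m, Earth equatorial radius
J2 = 1.08263e-3       # Earth oblateness coefficient

def drift_rates(a, i, accel):
    """Evaluate Equations 4.2-4.6 for a near-circular orbit of mean
    semi-major axis a [m], inclination i [rad], and constant tangential
    acceleration accel [m/s^2]. Returns (dRAAN/da, dAOL/da, da/dt)."""
    n = math.sqrt(MU / a**3)  # mean motion
    # Eq 4.6: J2-perturbed rate factor
    n_prime = n * (1.0 - 3.0 * RE**2 * J2 / (4.0 * a**2) * (3.0 * math.sin(i)**2 - 2.0))
    # Eq 4.2: RAAN change per unit semi-major-axis change during the burn
    draan_da = -3.0 * n * n_prime * RE**2 * J2 * math.cos(i) / (4.0 * a**2 * accel)
    # Eq 4.3: AOL change per unit semi-major-axis change during the burn
    daol_da = n * n_prime / (2.0 * accel) * (
        1.0 + 3.0 * RE**2 * J2 / (4.0 * a**2) * (4.0 - 5.0 * math.sin(i)**2))
    # Eq 4.5 (thrust term): semi-major axis rate under tangential thrust
    da_dt = 2.0 * accel / n
    return draan_da, daol_da, da_dt
```

For an assumed 550 km, 97.6° orbit and an 80 mN thruster on a 200 kg satellite (A = 4e-4 m/s²), the semi-major axis climbs on the order of a meter per second of burn time.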

The solutions to Equations 4.2 and 4.3 are used to find the total change in RAAN and AOL. In the following equations, index 0 represents the initial orbit, index 1 represents the first low-thrust transfer, index 2 represents the drift orbit, and index 3 represents the final low-thrust transfer.

\Omega_{total} = \Omega_0 + \Delta\Omega_1 + \Delta\Omega_2 + \Delta\Omega_3 \qquad (4.8)

u_{total} = u_0 + \Delta u_1 + \Delta u_2 + \Delta u_3 \qquad (4.9)

The total time required for the coast (t_2) is found by subtracting the maneuver times (t_1 and t_3) from a currently unknown total transfer time (t_total), as shown in Equation 4.10. The maneuver times are directly related to the change in semi-major axis taking place and the thrust of the satellite, and are found using Equation 4.11.

t_2 = t_{total} - t_1 - t_3 \qquad (4.10)

t_1 = \frac{\sqrt{\mu} \left( a_0^{5/2} \left\{ 20 a_1^2 + 3 J_2 R_e^2 \left[ 2 - 3 \sin^2 i \right] \right\} + 3 a_1^{5/2} J_2 R_e^2 \left[ 3 \sin^2 i - 2 \right] - 20 a_1^{5/2} a_0^2 \right)}{20 \, a_0^{5/2} a_1^{5/2} A} \qquad (4.11)

Equation 4.11 can also be used to calculate t_3 by substituting a_2 for a_0 and a_3 for a_1. Now Ω_total and u_total can be expressed as functions of t_total and a_1. Equation 4.12 uses the values of Ω and u to define an RGT's location on the Earth through the value Λ. For testing, a 15/1 RGT was used, but there are several other options at alternative altitudes. The final desired Λ, which corresponds to the desired ground track location, is a function of Λ's initial location at the start of the event and how much it has shifted during the reconfiguration. The desired final AOL is directly related to this Λ value, the final RAAN, and the desired RGT.
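A sketch of Equation 4.11 as reconstructed here, with an internal sanity check: setting J_2 = 0 collapses the expression to the classic tangential low-thrust result t_1 = |v_c(a_1) − v_c(a_0)| / A, the circular-velocity difference divided by the acceleration. The example orbits and acceleration are assumed values, not from the thesis.

```python
import math

MU = 3.986004418e14  # m^3/s^2
RE = 6378137.0       # m
J2 = 1.08263e-3

def burn_time(a0, a1, i, accel, j2=J2):
    """Equation 4.11 (as reconstructed from the extracted text): duration of
    the continuous tangential burn moving the semi-major axis from a0 to a1.
    The magnitude is returned so raising and lowering burns are handled alike."""
    s2 = math.sin(i)**2
    num = math.sqrt(MU) * (
        a0**2.5 * (20.0 * a1**2 + 3.0 * j2 * RE**2 * (2.0 - 3.0 * s2))
        + 3.0 * a1**2.5 * j2 * RE**2 * (3.0 * s2 - 2.0)
        - 20.0 * a1**2.5 * a0**2
    )
    return abs(num / (20.0 * a0**2.5 * a1**2.5 * accel))
```

With J_2 included, the correction to the J_2-free burn time is well under one percent for a 50 km raise in low Earth orbit, which is consistent with J_2 being a small perturbation.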

The original ReCon code tries a variety of lengths of time in drift orbits, t_2, to find which will produce the fastest reconfiguration, solving backwards for ΔΩ_2 and Δu_2. Although reformulating the equations above to fit properly into the ReCon assignment structure breaks down the analytical solution, it is easy to replace ΔΩ_1,Hohmann and Δu_1,Hohmann with ΔΩ_1,LowThrust and Δu_1,LowThrust. Using Equations 4.12-4.14, the only unknowns are ΔΩ_2 and Δu_2, which are directly related to t_2.

\Lambda = N_o \Omega + N_d u \qquad (4.12)

\Lambda_f = \Lambda_0 + \left( N_o \dot{\Omega}_{ROM} + N_d \dot{u}_{ROM} \right) t_{total} \qquad (4.13)

u_{total} = \frac{\Lambda_f - N_o \Omega_{total}}{N_d} \qquad (4.14)

The total drift time is also affected by the total time spent maneuvering, which is determined by the ΔV expenditure required to reach the corresponding drift orbit. There are multiple solutions for the amount of drift time that satisfies these equations.

t_2 = \frac{u_{total} - \Lambda + \left( N_o \dot{\Omega}_{ROM} + N_d \dot{u}_{ROM} \right) (t_1 + t_3) - \frac{N_o}{N_d} \left( \Delta\Omega_1 + \Delta\Omega_3 + \Omega_0 \right)}{\left( N_o \dot{\Omega}_{ROM} - N_o \dot{\Omega}_2 + N_d \dot{u}_{ROM} \right) / N_d + \dot{u}_2} \qquad (4.15)

Since u_2 is unknown in this scenario, a variety of revolutions spent in the drift orbit are attempted in order to find the combination that returns the lowest positive value for t_2.
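Because each candidate revolution count yields its own closed-form drift time, the search reduces to keeping the smallest strictly positive candidate. A sketch follows; the evaluator is a caller-supplied stand-in for the closed-form t_2 solution above, and the default search width mirrors the ±20-orbit modulation used elsewhere in ReCon.

```python
def best_drift_time(t2_of_revs, max_revs=20):
    """Scan candidate numbers of modulation revolutions in the drift orbit,
    forward and backward, and return the pair (revs, t2) with the smallest
    strictly positive coast time. Negative candidates would require coasting
    backwards in time and are discarded; returns None if no candidate works."""
    best = None
    for k in range(-max_revs, max_revs + 1):
        t2 = t2_of_revs(k)
        if t2 > 0 and (best is None or t2 < best[1]):
            best = (k, t2)
    return best
```

In the full code, `t2_of_revs` would wrap Equation 4.15 evaluated for the drift orbit implied by each revolution count.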

4.2 Low-Thrust Reconfiguration Performance

When finding solutions for the optimal transfer time, the traditional Hohmann transfer solution searches through drift orbits that are 10 km apart. It also modulates the mean anomaly forward or backward by up to 20 orbits, which represents the amount of time the satellite may have to wait in the drift orbit to correctly phase the ground track. Using this same search space, however, it is more difficult to find transfers with a low-thrust system. A much greater range of both drift orbits and modulating orbits was needed to find solutions when the transfer times were less than three weeks. Increasing both the modulating orbits and the number of drift orbits to search through had a significant impact on the code's runtime. It also increased the memory usage required, which reached capacity for long duration maneuvers lasting several weeks when the drift orbit discretization fell below 0.1 km. Expanding the modulation search space had less of an effect than expanding the number of drift orbits to search through: the modulation is one step of matrix multiplication to expand the solution space after the transfer properties are calculated, whereas the drift orbits are manipulated dozens of times throughout the code.

Figure 4-2 shows that using EP to reconfigure to an event in the same way Hohmann transfers are used yields substantially poorer performance, especially for high-urgency events. This figure shows how much utility is added by maneuvering using continuous, low-thrust propulsion compared to using impulsive Hohmann transfers.

Figure 4-2: Relative Performance of Low-Thrust Propulsion Against Hohmann Transfers

The performance is evaluated against a range of response time requirements, up to 60 days. Each curve represents a different discretization of the low-thrust search space. A finer resolution shows a distinct increase in performance for response times under 20 days. However, as the time to reconfigure increases, the search space also increases, and the performance of the different discretization values converges. Using a larger search space to find low-thrust solutions, in terms of the drift orbits examined, is therefore only necessary for high-urgency events. Although the expansion of the search space affects the performance of the code, its contribution appears mostly in the low-performance areas, where low-thrust is least likely to be used. Even after two months of reconfiguration time, the low-thrust solution fails to achieve 70% of the value added by the Hohmann transfer solution, and it takes 3 weeks before even half of the performance achieved by the Hohmann transfer solution is seen. Using a low-thrust propulsion system would still allow a constellation to reconfigure, but it removes the capability of timely responsiveness from the constellation.

4.2.1 ROM to GOM

The reconfigurable constellation design's main objective is to respond to events on demand. However, providing coverage of the entire globe between events allows the constellation to remain continuously useful. The constraint modeled in the ReCon simulation is to achieve a global revisit time of at most 24 hours within a latitude band that can be specified by the user. This implied value in achieving global coverage necessitates that the constellation return to a GOM configuration after an event ends. However, if this is a low priority for the user, other architectures may provide a more distinct advantage than the constellations that the ReCon code currently designs.

When the satellites return to GOM, they return to their original mean anomaly slot, but not the same RAAN. This decision is due to the relatively small drifts in RAAN that occur post reconfiguration, which have minor effects on the placement of the satellite. The randomness in the assignment process also pushes these small differences to cancel out over the duration of the simulation. An analysis completed by Legge showed that after twenty reconfigurations, the largest shifts from the desired RAAN were less than two degrees [34]. For this reason, the extra propellant and time required to correct for a difference in RAAN are unwarranted. These same assumptions apply to using a continuously thrusting, low-thrust transfer. The equations shown in Section 4.1 determine the respective change in mean anomaly that occurs relative to the target orbit throughout a maneuver.

The following analysis assumes that the ROM orbit has a lower altitude than the GOM orbit; however, the same principles hold when the ROM orbit is above the GOM orbit. When satellites make in-plane rendezvous, this analysis considers two main maneuvering options. The first option is to immediately initiate a transfer that minimizes the total time it takes for a satellite to rendezvous with a target location. This transfer is commonly known as a super Hohmann transfer. A super Hohmann transfer requires the satellite to transfer into an orbit much larger than the desired final orbit so that the phasing time equals one period of the transfer orbit. However, these transfer orbits are typically fuel-inefficient unless the target orbit is much larger than the initial orbit. Fuel is an essential resource in ReCon and should be conserved wherever possible, which is what makes the second option more attractive. In this case, the satellite waits in ROM until its instantaneous phase angle is such that, when the satellite completes the transfer, it inserts into GOM at the correct position. Figure 4-3 illustrates this concept, showing what a spiral transfer would look like when using low-thrust propulsion.

Figure 4-3: Phasing Angle Illustration

It is important to note that while Hohmann transfers are relatively fast maneuvers, requiring only a transfer time equal to half the period of the transfer orbit, the entire rendezvous may take hours to days, because the satellite may have to wait for the proper phase angle. This wait time correlates with the differential rate between the angular velocities of the satellites in ROM and GOM, which is proportional to the difference in the semi-major axes of the two orbits. Therefore, the wait time to achieve proper phasing between two orbits that are vastly different in size is much shorter than the wait time for two orbits that are very close together. Figure 4-4 below shows this effect, illustrating how the total transfer time to go from ROM to GOM changes as the altitude difference between ROM and GOM changes. Each line of both the continuous thrust transfers (the higher, green grouping) and the Hohmann transfers (the lower, blue grouping) represents a different required phase angle.

Figure 4-4: Total Transfer Times for Various Phase Angles

First, examine the Hohmann transfer grouping. As the altitude change between ROM and GOM increases, the total time to complete the transfer decreases. This change is due to the increase in the relative mean motion as the difference in altitude increases. It is also important to note that the effects of different phase angles are less pronounced in the larger differential regions; that is, being in the worst-case position does not have as much of an impact in the high differential region as in the low differential region.

Now compare this with the continuous thrust grouping. In the low differential region, there is little difference between the continuous thrust solution and the Hohmann transfer solution, due to the short time required to make a small altitude change but the long wait time needed to phase correctly. However, as the altitude difference grows, the continuous thrust solution does not decrease at the same rate as the Hohmann transfer solution. While the Hohmann maneuver time remains approximately constant along the x-axis, being half the period of the transfer orbit, the maneuver time for the continuous thrust solution grows with the altitude change. After approximately 50 km of altitude change, this maneuver time becomes the dominant factor, and the total transfer time for the continuous thrust transfer starts to increase.

For any given phase angle, the analysis uses this information to discover trades in the system. Figure 4-5 represents the trades made in deciding which type of transfer to use. A user could also use this tradespace to consider how far apart to place ROM and GOM. It is important to note that these solutions assume equivalent fuel usage in terms of ΔV, except for the super Hohmann transfer region, shown in gray. For a satellite to complete a transfer in this regime, the satellite must expend additional fuel, and this therefore represents an undesirable solution. The user of ReCon should reference any requirements they may have about the utility of GOM to decide whether potentially waiting over a week to return to GOM is acceptable.
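The wait-plus-transfer trade behind the impulsive (Hohmann) grouping of Figure 4-4 can be sketched as follows. This is a simplified model, not the thesis implementation: it assumes circular coplanar orbits, a single fixed phase error, and neglects J_2; the orbit sizes and the 60° phase angle are illustrative.

```python
import math

MU = 3.986004418e14  # m^3/s^2, Earth gravitational parameter

def hohmann_rendezvous_time(a_rom, a_gom, phase_deg):
    """Total ROM-to-GOM rendezvous time for an impulsive transfer:
    wait in ROM until the phase error closes at the differential mean
    motion, then fly half of the Hohmann transfer ellipse."""
    n_rom = math.sqrt(MU / a_rom**3)   # mean motions [rad/s]
    n_gom = math.sqrt(MU / a_gom**3)
    # half the period of the transfer ellipse (semi-major axis = average)
    t_transfer = math.pi * math.sqrt(((a_rom + a_gom) / 2.0)**3 / MU)
    # phase error closes at the differential mean motion
    t_wait = math.radians(phase_deg) / abs(n_rom - n_gom)
    return t_wait + t_transfer
```

With these assumptions, a 50 km ROM-GOM separation leaves the satellite waiting on the order of a day for a 60° phase error, while a 200 km separation closes the same error several times faster, matching the downward trend of the blue grouping in Figure 4-4.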

Figure 4-5: Propulsion System Tradespace for 60° Phasing Angle

The ramifications of this tradeoff are important to consider. Using an EP system with a 1,000 s Isp instead of a 250 s Isp chemical propulsion system means that a satellite could carry roughly a fourth of the fuel usually needed for transfers back to GOM. These transfers account for half of the transfers throughout a satellite's lifetime and could therefore account for a 50% fuel mass savings for the satellites under perfect conditions. As will be addressed further in Chapter 5, the mass of these satellites due to the extra propellant is a significant driver of constellation cost due to its effect on the choice of launch vehicle. A designer could also choose to use these mass savings to enhance the satellite's payload. In the case presented throughout this thesis, the primary payload is an optical sensor whose performance correlates to its aperture size. A larger aperture allows a satellite to capture equivalent quality imagery from a higher orbital altitude. This trade is extremely important because, as shown in Chapter 3, higher altitudes provide a satellite with more opportunities to see targets on the ground when slewing. Yet the mass tradeoffs are not as simple as a direct relationship between fuel mass and Isp: adding an EP system adds power loads to the satellite, which has downstream effects on the satellite's mass.
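The quarter-fuel figure follows directly from Equation 4.1 in the low-ΔV limit, where propellant scales roughly as ΔV/(Isp·g). A quick check, with the 100 m/s transfer ΔV being an assumed illustrative value:

```python
import math

G0 = 9.81  # m/s^2

def propellant_fraction(delta_v, isp):
    """Propellant mass as a fraction of final (dry) mass, from Equation 4.1:
    m_p / m_f = exp(delta_v / (Isp * g)) - 1."""
    return math.exp(delta_v / (isp * G0)) - 1.0

# For a single return-to-GOM transfer, a 1,000 s Isp EP system needs
# roughly a quarter of the propellant of a 250 s chemical system:
savings_ratio = propellant_fraction(100.0, 250.0) / propellant_fraction(100.0, 1000.0)
```

The ratio is slightly above the 4x predicted by the linear approximation because the exponential penalizes the lower Isp a bit more.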

4.2.2 Mass Tradeoffs

To investigate whether using EP is useful in ReCon, the tradeoff between the mass of the propulsion system and the mass of the power system must be considered. The mass tradeoff analysis considers the same propulsion system detailed earlier in Table 4.2. While the tradeoffs presented vary depending on the propulsion system chosen, this thruster is representative of the current state of technology.

First, the model must consider the added power load by using the efficiency factor found in Table 4.2. The efficiency factor (η) represents the proportion of the power provided to the thruster (P_s) that the system turns into actual propulsive power (P_j).

P_s = \frac{P_j}{\eta} \qquad (4.16)

This power, P_s, is an additional load on the satellite and is added to the total average power of the system. Although the thruster does not fire throughout the entire maneuver, it could be firing for days continuously, and therefore P_s is added to the average power load. The total dry mass of the system is a sum of the payload mass (m_pl) and the inert mass (m_inert) of the EP system.

$$m_f = m_{pl} + m_{inert} \tag{4.17}$$

The inert mass is directly related to the power required through a β term, which is a characteristic of the power system. This general parameter represents how much mass is needed for an electrical power system (EPS) to generate a required amount of power. In this case, the satellite model uses a solar thermal dynamic EPS design, which has a β range of 0.06-0.1 kg/W [63]. This analysis uses a β of 0.06 kg/W.

$$m_{inert} = \beta P_s \tag{4.18}$$

The initial mass (m_i) of the system is a sum of the mass of the fuel, payload, and inert propulsion mass. In this case, m_pl includes everything on the satellite that is not related to the power or propulsion systems.

$$m_i = m_{prop} + m_{pl} + m_{inert} \tag{4.19}$$

The rocket equation, defined above in Equation 4.1, relates the resultant change in velocity to the change in the system's mass and its Isp. In general, increasing thrust decreases the efficiency of the system if the power levels remain the same. For the purposes of this analysis, thrust and power remain constant. However, it is clear that there exists an optimum Isp for a given mission duration that balances both m_prop and m_inert. Instead of optimizing the Isp, this analysis assumes the thruster's capabilities are fixed and examines what other factors are necessary for its use to be beneficial. As mentioned above, the mass-saving benefits of using an EP system are not inherently obvious due to the additional power loads imposed on the system. Ideally, a design would apply any mass savings to making the aperture of the satellite larger, allowing for either increased image quality at the design altitude or equivalent image quality at an increased altitude. Since meeting a GSD resolution requirement drives the design of the reconfigurable constellation, there is no utility added in the system's performance metrics for achieving better resolution than required. It is better for the system to design for a smaller aperture in an effort to save on costs. However, as Chapter 3 addressed, there is a benefit to increasing the altitude of the constellation if a larger aperture is achievable. Higher-altitude orbits have a lower orbital velocity. A longer pass time means more opportunity for the satellite to slew across a region of interest to capture many unique point images. The larger satellites require more control authority to slew than a lighter counterpart, but the already increased power supply could alleviate this new need. The ADCS would also originally be designed with the original altitude and mass in mind, and would only be able to improve its performance. The challenge arises in pairing this potential altitude increase with the use of RGT orbits. At any given inclination, these orbits typically exist 100-200 km apart from one another due to their unique properties. Using RGTs means that if the user is unable to change inclination due to other constraints on the system, such as a mandated latitude band, the design must increase the aperture size in large steps to get the same resolution effects. The design cannot move incrementally to a higher altitude.
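The tradeoff described above can be sketched by combining Equations 4.16-4.19 with the rocket equation. The jet power, efficiency, payload mass, and ΔV values below are assumed for illustration and are not taken from Table 4.2:

```python
import math

G0 = 9.80665   # standard gravity, m/s^2
BETA = 0.06    # kg/W for the solar thermal dynamic EPS [63]

def initial_mass(m_pl, dv, isp, m_inert=0.0):
    """Eq. 4.19 via the rocket equation: m_i = (m_pl + m_inert) * exp(dv/(Isp*g0))."""
    return (m_pl + m_inert) * math.exp(dv / (isp * G0))

# EP power penalty: jet power P_j and efficiency eta are assumed values
p_j, eta = 1350.0, 0.55
p_s = p_j / eta               # Eq. 4.16: power the bus must supply
m_inert_ep = BETA * p_s       # Eq. 4.18: EPS mass added for the thruster

m_pl = 400.0                  # kg of non-propulsion satellite mass (assumed)
for dv in (500.0, 2000.0):    # m/s of lifetime reconfiguration ΔV
    m_chem = initial_mass(m_pl, dv, isp=250.0)
    m_ep = initial_mass(m_pl, dv, isp=1000.0, m_inert=m_inert_ep)
    print(f"dV={dv:.0f} m/s: chemical {m_chem:.0f} kg, EP {m_ep:.0f} kg")
```

With these assumed numbers, the EP satellite comes out heavier at 500 m/s and lighter only at multi-km/s ΔV levels, illustrating why the added power-system mass can erase the Isp advantage for low-ΔV missions.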

To examine the tradeoffs, a 1-m GSD resolution requirement was set to size the power and dry mass of the satellite using chemical propulsion. Figure 4-6 shows the power and dry mass of the satellite at different RGT altitudes at a 60° inclination. The dry mass was determined from Legge's previous modeling, a sum of Equations 4.21 and 4.22, which are functions of aperture diameter (D). The aperture diameter is sized based on the resolution requirement (GSD), the altitude of the satellite (h), and the wavelength of the observations (λ), assumed to be in the visible spectrum at 500 nm [34].

$$D = \frac{2.44 \lambda h}{GSD} \tag{4.20}$$

$$m_{payload} = 498.82D^2 - 190.17D + 57.11 \tag{4.21}$$

$$m_{bus} = 1639.2D^2 + 13.78D + 96.47 \tag{4.22}$$

The power was derived from the payload power equation from Legge’s original work in Equation 4.23. This was assumed to be 46% of the total average power, which is a representative value for LEO satellites with propulsion from SMAD [63].

$$P_{payload} = 594.86D^{2.041} \tag{4.23}$$
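Equations 4.20-4.23 chain together into a simple sizing sketch; the altitudes below are assumed example values:

```python
LAM = 500e-9  # observation wavelength, m (visible spectrum)
GSD = 1.0     # ground sample distance requirement, m

def aperture(h):
    """Eq. 4.20: required aperture diameter in m (h in m)."""
    return 2.44 * LAM * h / GSD

def dry_mass(d):
    """Eqs. 4.21 + 4.22: payload mass plus bus mass, kg."""
    m_payload = 498.82 * d**2 - 190.17 * d + 57.11
    m_bus = 1639.2 * d**2 + 13.78 * d + 96.47
    return m_payload + m_bus

def payload_power(d):
    """Eq. 4.23: payload power, W."""
    return 594.86 * d**2.041

for h_km in (400, 800, 1000):  # assumed example RGT altitudes
    d = aperture(h_km * 1000.0)
    print(f"h={h_km} km: D={d:.2f} m, dry mass={dry_mass(d):.0f} kg, "
          f"payload power={payload_power(d):.0f} W")
```

Because both mass fits are dominated by their D² terms while D grows linearly with h, dry mass grows roughly with the square of altitude, which is why the 800-to-1000 km step costs far more mass than the 200-to-400 km step.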

Although the aperture scales linearly with altitude, the mass and power scale with the square of the diameter. Moving from a 200 km to a 400 km altitude RGT may only require about 50 kg of mass to be added to the bus, but moving from 800 km to 1000 km increases the mass by several hundred kg. Figure 4-6 shows the mass and power changes at each RGT, with the red points on the top figure representing the power system mass.

Figure 4-6: Mass Changes Based on RGT Altitude

The real trade comes when looking at the total mass added for propulsion in the low-Isp, chemical propulsion design vs. the total mass added for power and propulsion in the high-Isp, EP design. For this analysis, several values of ΔV were used and compared across different altitudes. The standard rocket equation calculated the propellant mass added for both the chemical and EP systems. The analysis then recalculated the average power required for the satellite to determine the mass added for the electrical power system of the Hall thruster. Figure 4-7 shows the difference between the EP design and the chemical propulsion design for several ΔV values and RGT altitudes.

Figure 4-7: Mass Trades for Using Low-Thrust Propulsion

The red shaded region in Figure 4-7, above 0, is where the mass of the EP satellite is greater than that of the chemical propulsion satellite. This region is always undesirable. In the yellow region between 0 and the black cutoff line, the EP satellite weighs less than the chemical propulsion satellite, but the mass savings are not enough to make the jump to the next RGT altitude. Therefore, it would be better to remove mass from the satellite and create a lighter version than to increase the RGT altitude and increase the payload diameter. Below the cutoff line, in the white region, the mass savings are sufficient that switching to EP allows the altitude jump to the next highest RGT altitude. Note that the 500 m/s ΔV solution falls within the red, undesirable region for all RGT altitudes. As is addressed further in Chapter 5, the ReCon design code typically does not add more than 500 m/s of fuel for reconfigurability onto a satellite to fulfill its 5-year mission lifetime. This fuel is enough to satisfy a constellation's reconfiguration for 20-25 events, which corresponds to the projected frequency of high-impact natural disasters [34, 19]. However, there may be instances where reconfiguring much more often is desirable, and higher ΔV cases are of use. Even then, most of these solutions fall into the yellow region of the graph between 0 and the cutoff line, unable to jump to the next RGT. The only solutions able to make the jump were those moving from 650 km to 850 km with greater than 3.5 km/sec of ΔV on board. This trade shows how important a factor the ΔV requirements are for making design decisions.

The factors that most influence the effectiveness of EP when it comes to making mass trades are ΔV and β. Given a ΔV, one can find the required β that leads to an equivalent mass tradeoff between the electrical and chemical propulsion satellites. Figure 4-8 shows the different β requirements for different ΔV values. The dashed lines show the upper range of current capabilities of large solar electric power systems, as well as near-term target values [1]. As the satellite's altitude increases, the diameter of the payload and the power required for the payload increase as well. By the nature of the rocket equation, a satellite with higher ΔV must carry a higher proportion of its total mass as fuel. This leads to greater savings when switching from heavier chemical fuels to light EP fuels. At the minimum point of these curves, the relative amount of mass added for fuel is smallest compared to the change in mass due to payload and power requirements. This difference is why the higher ΔV options reach their lowest β requirements at lower altitudes. The relative amount of fuel added to maintain the same amount of ΔV is higher at each altitude jump for the higher ΔV options. Yet, the power requirements depend on altitude alone and are fairly similar across ΔV designs. The increase in fuel mass occurs at a much faster rate for the higher ΔV designs, which is why these designs have their strictest β requirements at lower altitudes compared to the lower ΔV designs. Regardless, carrying more fuel makes EP systems more advantageous.

The more ΔV needed, the less strict the β requirement. Increasing power efficiency is an extremely high priority for NASA. The current β capabilities are between 0.01-0.02 kg/W for the best technology; however, further improving this number is extremely important not only to those interested in the space industry but to many other technology industries as well [1]. This analysis did not consider a cost trade associated with changing to a system with a lower β. A designer must consider whether or not switching to a more power-efficient system still allows cost savings through a decrease in mass before pursuing this change in the propulsion system. These values are again reflective of the values needed to make large altitude jumps. However, there is always the option to remove mass in general at a lower altitude to increase a satellite's ability to slew quickly by reducing the inertia of the satellite while maintaining the same control authority. Unlike the changes in altitude, this can occur incrementally and is the recommended approach for a constellation already deployed in a higher-altitude RGT orbit.

Figure 4-8: Determining β Requirements

4.3 Discussion on Feasibility

Low-thrust propulsion is typically most beneficial for long-duration missions that require a large amount of ΔV. These applications include transfers from LEO to GEO, which require several km/s of ΔV. However, the entire intent of ReCon is to minimize the use of fuel altogether. With the current state of battery and solar technology, it is not recommended to use low-thrust propulsion for either responsive reconfiguration or returning to GOM. However, as these thrusters improve in efficiency and electrical systems improve their energy capture and storage capabilities, the ability of EP technology to provide mass savings for lower ΔV missions will improve.

It should also be noted that the analysis for this chapter used the same specifications as were dictated in the early iterations of ReCon. This methodology drove the size of the satellite to several hundred kg to accommodate an optical payload that could observe at a GSD of 1 m. If a user were interested in deploying a constellation of CubeSats, on which a chemical propulsion system may be impossible to install, using EP becomes much more useful. Chemical propulsion systems prove challenging to put onto CubeSats due to the stored chemical energy limits placed on the secondary payload positions CubeSats occupy [36]. Hydrazine monopropellant, the propulsion system used in the standard ReCon modeling, requires strict handling procedures, driving up its cost. Since the primary purpose of CubeSats is to be an inexpensive alternative to larger satellites, the qualities of hydrazine make it a poor match for CubeSat missions. Green monopropellants offer an alternative to hydrazine, the most common being hydroxylammonium nitrate and ammonium dinitramide-based propellants. Despite having higher performance qualities than hydrazine, their higher combustion temperatures drive prohibitively expensive thermal and power requirements. EP on CubeSats is in the early stages of development and comes with its own set of challenges, including increased power requirements, thermal management issues, and electromagnetic interference. However, several new systems show promise in providing an inexpensive alternative to the cold gas thrusters typically placed on board CubeSats, if they have a propulsion system at all. There are also limits to how long one of these systems could operate. A reconfigurable constellation of CubeSats would be best suited for one reconfiguration throughout its lifetime. Using ReCon for CubeSats may be best executed in the form of shifting to a different observational mode based on seasonal events.
However, multiple, responsive, effective reconfigurations are outside of the scope for CubeSats at this time.

Chapter 5

Deployment Strategies for Reconfigurable Constellations

5.1 Flexible Systems

The systems engineering process drives satellite design in a requirements-based approach. In a need-based mission, the first step is defining objectives and constraints, which end up driving important design trades further along in the process [63]. However, when working on timelines that can be decades long, important performance indicators can change significantly. These changes can lead to great frustration for designers who tailor their product to the requirements initially provided. The later in a system's lifetime requirements are changed or added, the more pronounced the effect on a systems engineer's workload [49]. Most systems engineers generate a design that optimizes a satellite's performance within a given tradespace when applied to a predefined mission. However, optimizing for one given group of priorities and then applying the product to a completely different scenario can lead to exceptionally poor performance. A common example cited in satellite constellation literature of overconstraining a project is the failure of the Iridium constellation to adapt to a lack of demand from consumers. The designers sized the communications project to provide cellular satellite connectivity for one million customers in its first year, and they failed to create a contingency plan in case their projections were inaccurate.

In reality, the demand never materialized, which ultimately led to bankruptcy [16].

The assumed conditions which drive requirements rarely match the ever-changing nature of the real-world environment. Even in situations where the environment can be well defined, project operators and managers turn over as time passes. New objectives and interests emerge, and the objectives originally optimized for in the old system may no longer be desirable. Therefore, there is significant value in introducing flexibility into a design [16, 28]. While sensitivity analyses show how an "optimal" design can withstand changes in the original assumptions, a sensitivity analysis is still a post-design analysis: it applies the projected uncertainties to test the final design instead of designing around the uncertainties at the beginning of the process. This analysis often fails to recognize the ability of management to respond to unexpected events in real time. Flexible designs account for this uncertainty. Flexible systems may perform worse than an optimized design in the "most likely" forecasted scenarios that drove the original design optimization. However, the advantage of the flexible design is that it performs well in a large range of scenarios by giving management the ability to execute a variety of options when operating the system [16].

Traditional constellation designs place satellites in specific orbits to fulfill requirements dictated by the customer's mission objectives. In cases where a user has a precise area of interest, observing traffic tendencies in New York City for example, a designer would choose orbits that maximize observational coverage of this area. In these well-defined cases, the choices for high-performance solutions are limited. However, this is not a practical approach to system design in a world full of uncertainty. As technology improves and the users of satellite imagery shift, the applications for a constellation on orbit may change. If users design a constellation for a specific area or type of observation without building in flexibility, they may need to launch an entirely new constellation to meet changing requirements.

Previous literature identifies five different forms of flexibility to respond to the uncertain environment where satellites operate: reconfiguration, retasking, replenishing, expanding, and upgrading [34]. Reconfiguration is the primary subject of this thesis and has been justified thoroughly in previous chapters. Retasking occurs when the purpose of a satellite changes: when an old system is no longer state of the art for its primary purpose, the operator could still use the satellite for a secondary mission if it is still operational. Replenishing refers to refueling depleted supplies on board a satellite, including fuel or power resources. Expanding a constellation adds to the total number of satellites operating in the constellation. Finally, upgrading a constellation adds new technology to the satellites to enhance their performance capabilities. Many of these options have been explored individually, yet picking one strategy is rarely the optimal solution. Researchers have explored both the design of reconfigurable constellations and the deployment of a constellation in stages to distribute cost over time [17, 34]. However, being able to replenish and expand a constellation that designers optimized for reconfiguration adds additional flexibility that is valuable for fielding an EO constellation for decades instead of a few years.

5.2 Sensitivities of ReCon

The original ReCon design specifically looked at how to improve the performance of an EO constellation without increasing the cost. The primary source of uncertainty considered in ReCon is the ground locations of the targets to be observed using the constellation. When the locations of the events are unknown before the launch of the constellation, it is more cost-effective to reposition satellites already on orbit than to launch an entirely new constellation or to launch an excess of satellites to be prepared for all scenarios, as shown in Legge's 2014 thesis [34]. Legge dictated a few design specifications for the mission. The mission had a 1-m GSD resolution requirement, which sized the diameter of the payload and drove the altitude of the RGT orbits. The ability to respond to ground targets following the distribution of natural disasters around the world, weighted in probability by economic impact, drove the orbital attributes of the constellation. The mission had a 5-year lifetime designation, and the total number of events tested was between 20 and 25 throughout this lifetime. The amount of fuel added to a satellite to compensate for reconfigurations was a design variable that adjusted to satisfy this demand and typically converged to around 300-400 m/s worth of ΔV. This means the fuel on board could adjust the satellite's velocity, and in turn the satellite's orbit size, by 300-400 m/s throughout the satellite's lifetime.

The design presented in the following analysis considers a longer lifetime, 15 years instead of 5, and expands the number of events considered. With the "democratization of space" emerging internationally and analytics proving extremely valuable to almost every industry, it is hard to predict what the demand for this information may actually be [59]. Perhaps there are users who want regional persistence to monitor climate change effects over long periods of time, making demand much higher than initially expected. Or perhaps there is little desire for these high-resolution images, and customers prefer a cheaper alternative, making demand lower than expected. Beyond the uncertainty in target ground locations, many other ambiguous factors can affect the value of a reconfigurable constellation. Some of these factors include the value of satellite imagery compared to other forms of data, demand fluctuations for satellite data, and price changes in satellite manufacturing, launch, or operations.

The overall ReCon mission is the high-quality observation of natural disasters. Figure 5-1 shows that although the International Disaster Database, EM-DAT, has recorded a decrease in disasters each year, more and more satellite imagery is being requested. Between two and five different constellations respond to an event, depending on its scale. The most significant concern is not necessarily the quantity of imagery, but rather the speed at which it can reach emergency responders. Estimates show that about two days are needed before a satellite can be retasked to take a first image of an event [61]. In these situations, a reconfigurable constellation with dedicated procedures in place for retasking and maneuvering would offer a significant advantage in providing the first image to responders the fastest. Ideally, responders desire real-time imagery from satellites, which is currently not possible. However, ReCon would offer the closest thing to this with its exceptional temporal resolution capabilities.

Figure 5-1: Use of Satellite Imagery in Emergencies [61]

The variation in imagery value can influence the effectiveness of any constellation. Some events may have extremely high value to certain customers but not to others. Some may desire different sensors than the visible spectrum used on these satellites. Therefore, the design must consider the actual value of the event when evaluating the performance of an on-demand constellation like the one proposed. If getting high-resolution imagery of a natural disaster is worth a nominal value of 1000, then getting high-resolution imagery of a typical watershed may have a value of 100, with the importance of events in between varying depending on the user. These values loosely correlate with the urgency and frequency of high-quality imagery. For example, during a natural disaster, high-quality, continuous imagery needs to be delivered within days of the event, without any prior warning. Yet, tracking the effects of climate change on an area of land is required at most monthly and can be scheduled well in advance. Therefore, a variety of different EO satellites could complete this mission set, and it does not carry the same level of value [58]. EO imagery may become more valuable over time, for example, if there is an increased interest in climate research, or less valuable if alternative providers exist. It is also possible that the value increases while the demand decreases if, for example, imaging is extremely valuable for a specific subset of rare events. The analysis models these value shifts to go as low as 25% of the predicted value, and as high as 200% as satellite imagery becomes more common in all sectors. The forecast used as a reference shows a slight increase in the value of this imagery, worth 110% of its original value after 15 years.

Reconfiguring to satisfy a customer's regional needs has many potential applications, but specifically, the modeled demand for this constellation matches natural disaster frequency. The analysis derived demand from historical data, but natural disasters are challenging events to predict accurately [19]. Users are also interested in using persistent satellite imagery for slower-developing events, such as when operators tasked satellites to provide data for the Ebola outbreak in 2015 or for droughts in the Horn of Africa in 2011 [61]. Even if requests for observations occur at the nominal rate modeled, about four per year, other factors could influence demand. It may be that alternatives exist to the data this constellation provides, making tasking unnecessary. On the flip side, demand could increase as more industries find value in satellite imagery. This variability in demand significantly impacts the design of the system. The analysis models demand to be as little as zero events per year, or as often as one event per month, with the distribution shown in Figure 5-2.

Figure 5-2: Probability Distribution of Average Demand for Imagery Per Year

The cost model used to design the satellites in this constellation is the same as the one used in the original ReCon design found in Legge's thesis [34]. This model drew from models in the space mission engineering textbook Space Mission Analysis and Design (SMAD), and the following analysis applies a 5% learning curve [63]. Using this model, the cost of manufacturing typically levels out to approximately $30 million per satellite when applying the 1-m resolution requirement in a 15/1 RGT orbit. The prices used in this analysis assume very high-quality imaging satellites, but there is no reason a similar performance analysis cannot translate to cheaper CubeSat missions. It is also important to note that as more companies enter the satellite market, they introduce new solutions that drive manufacturing costs down. However, most of these innovations are currently happening at a CubeSat or small communication satellite scale [53]. Since there is not much data to support a substantial decrease in costs for optical satellites, the model uses a consistent price as a conservative estimate for manufacturing. The bounds on this cost forecast are between 90% and 175% to reflect typical satellite project overruns.

The most interesting forecast is the future of launch providers in the aerospace industry. There are dozens of companies proposing new launch vehicles for use in the next two to three years. Companies are offering significantly cheaper options and a variety of payload capabilities, providing more ways to get a satellite into orbit [45]. Five of these probable providers are used for this simulation, shown in Table 5.1. The analysis models the costs of each of the constellation options using these five options, but a forecast factor adjusts the costs to reflect launch costs in future years. Although lower or higher launch costs would likely take the form of new launch vehicles entering the market or old vehicles leaving the market, a forecasting adjustment factor is used to reflect these changes.
The launch cost forecasts are modeled to go as low as 50% in the next twenty years and as high as 120% due to unforeseen circumstances.

Table 5.1: Launch Provider Options in Analysis

Launch Vehicle          Mass to LEO [kg]    Cost Per Launch [$M]    Cost Per Kg [$k]
Electron                170                 5                       29.6
                        6,200               70                      11.29
Falcon-9 (Reusable)     22,800              43.7                    1.92
LauncherOne (2020)      400                 10.06                   25.16
Firefly Alpha (2020)    400                 12.58                   31.45
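The cost-per-kg column follows from the other two, and the per-launch cost is what matters once a constellation's total mass fixes the number of launches. A small sketch using the named rows of Table 5.1 (the 3,000 kg constellation mass is an assumed example):

```python
import math

# (vehicle, mass to LEO in kg, cost per launch in $M), from Table 5.1
vehicles = [
    ("Electron", 170, 5.0),
    ("Falcon-9 (Reusable)", 22_800, 43.7),
    ("LauncherOne (2020)", 400, 10.06),
    ("Firefly Alpha (2020)", 400, 12.58),
]

def launch_cost(total_kg, capacity_kg, cost_per_launch_m):
    """Number of launches (rounded up) times per-launch cost, in $M."""
    return math.ceil(total_kg / capacity_kg) * cost_per_launch_m

total_kg = 3_000  # assumed constellation mass
for name, cap, cost in vehicles:
    print(f"{name}: {1000 * cost / cap:.2f} $k/kg, "
          f"{launch_cost(total_kg, cap, cost):.1f} $M for {total_kg} kg")
```

The cheapest per-kg vehicle is not always the cheapest choice: a single Falcon-9 lofts the whole 3,000 kg for $43.7M, while the small vehicles each need several launches, so fleet mass and launch granularity interact.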

A final factor that provides uncertainty is operations cost. Operations include the cost of the employees needed to run the satellites and perform maintenance on the satellites' software. Since there are no operational reconfigurable constellations currently, the estimation for operations is set at a medium-complexity satellite system in terms of SMAD costing approximations [63]. However, the reality of this cost could vary, since there is a fair amount of uncertainty in this value [34].

When considering all of these uncertainties, one must evaluate the value of a reconfigurable constellation. In this case, the value comes from regional observations on demand. Ideally, the constellation should maximize coverage of requested regions for minimal cost. When looking at a nominal deployment strategy, meaning launching the full constellation up front, the effects of these uncertainties have varying impacts on the value per dollar of the constellation.

Figure 5-3 shows the sensitivity of the system to these different uncertainties. The metric used to evaluate performance is the amount of value captured per cost of the design, in millions of dollars. This analysis values the constellation in terms of Net Present Cost (NPC). This term is more often encountered as Net Present Value (NPV), which subtracts costs over time from revenue over time. However, there is no estimated revenue in this model. While several commercial products could apply the ReCon concept, it could also be fielded by nonprofits or governments who do not evaluate value in terms of monetary compensation. Therefore, this analysis evaluates the capital spent on the project in terms of NPC. Using NPC involves applying a discount rate to capital spent in the future. Discount rates represent the added value of having capital on hand now when a designer chooses to defer costs to the future. Discount rates are very high in high-risk industries, such as start-up businesses, yet very low in government [16]. The tornado diagram below shows the variation from the expected nominal value per NPC when the extremes of the forecasts come to fruition. In Figure 5-3, the cost category is the sum of all three cost factors, and even then, it is clear that demand and value have the most significant impact on a constellation's performance per dollar. The design can capitalize on some of the upside provided by higher-value imagery, but very little of the upside caused by increased demand, due to the satellites running out of fuel. However, both have significant downside effects on the low-value and low-demand sides. Due to the ability of ReCon to scale relatively equally to low and high-value scenarios, and the smaller effects of costs, the staged deployment strategy developed was targeted specifically at the uncertainty in demand.
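Net Present Cost can be sketched as a discounted sum of yearly expenditures. The cash-flow profiles below reuse the $1.6B up-front and $1.2B staged constellation costs from Section 5.3, with the expansion cost and timing assumed for illustration:

```python
def npc(costs_by_year, r):
    """Net Present Cost: cost incurred in year t is discounted by (1 + r)^t."""
    return sum(c / (1 + r) ** t for t, c in enumerate(costs_by_year))

upfront = [1600]                   # $M: full 21-satellite constellation now
staged = [1200, 0, 0, 0, 0, 200]   # $M: 15 satellites now, expand in year 5
                                   # (expansion cost and timing are assumed)
for r in (0.03, 0.10):             # low (government-like) vs higher discount rate
    print(f"r={r:.0%}: upfront NPC={npc(upfront, r):.0f} $M, "
          f"staged NPC={npc(staged, r):.0f} $M")
```

The higher the discount rate, the more heavily the deferred expansion is discounted, which is one reason deferring capital is attractive to high-discount-rate operators.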

Figure 5-3: Performance Sensitivities of Nominal Constellation Design

The biggest constraint on meeting demand is the amount of fuel on board the satellites. The ReCon simulation uses a variety of factors when deciding how to maneuver satellites and which ones to maneuver. It maneuvers as many satellites as needed to meet the temporal resolution requirement set, but also attempts to minimize the fuel cost of doing so (C_R). This cost considers both the fuel needed for reconfiguration (ΔV_R) and a penalty for fuel imbalances across the constellation (ΔV_pen). The penalty factor (G_pen) scales the cost of being out of balance as a proportion of the difference between satellite i's fuel remaining and the average fuel remaining across the constellation of size N_T [34].

$$C_R = \Delta V_R + \Delta V_{pen} \tag{5.1}$$

$$\Delta V_{pen,i} = -\min\left(0,\ \Delta V_{sat,i} - \frac{1}{N_T}\sum_{k=1}^{N_T} \Delta V_{sat,k}\right) G_{pen} \tag{5.2}$$

When there is a time constraint imposed on the constellation to respond to an event, the amount of fuel used per maneuver increases substantially. However, typically the amount of time a constellation has to respond is set to two weeks, and giving the constellation this amount of time to reconfigure allows the fuel usage per maneuver to level out, as shown in Figure 5-4. Forcing faster maneuvers imposes an additional constraint that adds another layer of complexity when establishing the demand capacity of a constellation. Giving the constellation time to reconfigure allows its true capabilities to be shown without compounding effects.
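A minimal sketch of the assignment cost in Equations 5.1-5.2, with the fleet fuel states and G_pen as assumed values:

```python
def reconfig_cost(dv_r, fuel_i, fleet_fuel, g_pen=2.0):
    """Eq. 5.1: maneuver ΔV plus the fuel-imbalance penalty of Eq. 5.2.

    Only satellites below the fleet-average remaining fuel are penalized,
    in proportion to their deficit scaled by g_pen.
    """
    avg = sum(fleet_fuel) / len(fleet_fuel)
    dv_pen = -min(0.0, fuel_i - avg) * g_pen
    return dv_r + dv_pen

fleet = [300.0, 250.0, 150.0, 300.0]  # m/s ΔV remaining per satellite (assumed)
# The same 40 m/s maneuver is cheaper to assign to a fuel-rich satellite:
print(reconfig_cost(40.0, fuel_i=300.0, fleet_fuel=fleet))  # -> 40.0
print(reconfig_cost(40.0, fuel_i=150.0, fleet_fuel=fleet))  # -> 240.0
```

The penalty steers the scheduler toward satellites with above-average fuel, keeping the constellation's fuel load balanced over its lifetime.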

Figure 5-4: Average Fuel Usage with Varying Response Times

Figure 5-5 shows the rate at which different-sized constellations deplete their fuel, averaged across the entire constellation. The x-axis shows the target number the constellation is responding to in sequential order, and the y-axis shows the amount of fuel remaining on average per satellite in the constellation in terms of m/s of ΔV. The rate of depletion is fairly linear until the constellation begins to run out of fuel. In the scenario depicted, the temporal resolution requirement was a six-hour revisit frequency during daylight hours. This requirement means a satellite must view the area only twice per day, which is an achievable revisit time for a constellation of a small size, used to reduce the computational complexity of the problem in this analysis. The event capacity of each constellation increases as the constellation size increases. The constellation always saves some fuel for disposal at the end of its life, so the average fuel never quite reaches zero. However, once it drops below approximately 25 m/s on average, the constellation can no longer adequately respond to events.
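The roughly linear depletion in Figure 5-5 implies a simple capacity estimate; the starting fuel, per-event usage, and disposal reserve below are assumed illustrative values consistent with the 300-400 m/s designs described earlier:

```python
def event_capacity(fuel0, per_event, reserve=25.0):
    """Events supportable before average fuel falls below the ~25 m/s
    threshold at which the constellation can no longer respond."""
    events = 0
    fuel = fuel0
    while fuel - per_event >= reserve:
        fuel -= per_event
        events += 1
    return events

# assumed: 400 m/s loaded, ~15 m/s consumed per reconfiguration event
print(event_capacity(fuel0=400.0, per_event=15.0))
```

With these assumed numbers the capacity lands in the 20-25 event range quoted for the nominal designs.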

Figure 5-5: Fuel Usage for Different Sized Constellations

5.3 Staged Deployment of ReCon

Adding fuel to the satellites in a constellation at all is already one way in which flexibility is implemented in this design, allowing them to maneuver to meet demand. However, the design is largely tailored to the expected demand for the satellites' imaging. One way to relax this constraint is to add even more fuel to each satellite. At some point this becomes impractical, and the upper limit for fuel added for reconfigurations is 500 m/s in this analysis. Nevertheless, the ability to add more propellant would be an extremely valuable way to address increased demand. In the case where fuel is

limited, another option is to launch additional satellites into the constellation as they are needed. In this scenario, the constellation can grow as demand occurs and value changes, instead of the constellation having an excess of resources or running out of fuel too quickly. An example of this process is detailed in Figure 5-6. This scenario shows two launched constellations. The first one, in red, has 21 satellites. It is sized to respond to approximately 100 different events. This event capacity would be almost seven events per year, significantly more than the event frequency of the nominal ReCon designs. Launching and manufacturing this constellation would cost approximately $1.6 billion. The second constellation, shown in blue, is launched with only 15 satellites. This constellation costs $1.2 billion to manufacture and launch. It can respond to approximately 60 events in its lifetime. The situation shown below demonstrates the value of a staged deployment scenario. After the constellation responds to 20 different events, two more satellites are launched into the constellation.

Figure 5-6: Fuel Consumption of Constellation Comparison

Three different values show the cost of this expansion, detailed in Table 5.2. The first is considering a full redesign. In the satellite cost model, the first satellite unit

costs significantly more than the rest due to the testing considerations needed to launch a new unit into space. Therefore, if the two additional satellites launched to supplement the constellation are new designs, the first unit has to go through testing again. In this case, the launch would cost $0.25 billion. However, if these units were exactly the same as the previous units, the learning curve would remain in effect, driving the cost down to about $0.13 billion. It is also important to note that while learning curve effects are powerful, there are always some discontinuities introduced when there is a break in production, and the true cost likely lies between these two bounds. The discount rate used for this project is 5% over three years. The Federal Reserve publishes a recommended discount rate, which was around 2% in early 2020, making the 5% rate over three years a reasonable approximation for this analysis [46]. The final value represents the cost of expanding with a 5% discount rate over three years. Using this discount rate drives the cost down to about $0.11 billion per expansion. Table 5.2 illustrates these different cost options.

Table 5.2: Cost Breakdown for Example Case

                                     Year 3    Year 6    Year 9    Year 12
Launch Upfront                       $1.6 B    $1.6 B    $1.6 B    $1.6 B
Staged Launch: Full Re-design        $1.2 B    $1.45 B   $1.7 B    $1.95 B
Staged Launch: Same Design           $1.2 B    $1.33 B   $1.46 B   $1.7 B
Staged Launch, 5% Discount Rate      $1.2 B    $1.32 B   $1.44 B   $1.55 B
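The discounted option in Table 5.2 can be approximated with a short present-value calculation. This is a sketch under stated assumptions: a $1.2 B initial tranche, same-design expansions of $0.13 B at years 3, 6, and 9, and a 5% annual discount rate. The thesis' cost model applies the discounting on its own schedule, so the result lands near, not exactly on, the tabulated $1.55 B.

```python
def staged_cost(initial, expansion, expansion_years, rate=0.05):
    """Present value of a staged deployment: the initial tranche is paid
    upfront; each expansion is discounted back from the year it occurs."""
    return initial + sum(expansion / (1.0 + rate) ** y for y in expansion_years)

# Assumed inputs: $1.2 B upfront, $0.13 B same-design expansions at
# years 3, 6, and 9, discounted at 5% per year.
total = staged_cost(1.2, 0.13, [3, 6, 9])
print(round(total, 2))  # -> 1.49, in the neighborhood of Table 5.2's $1.55 B
```

The point of the sketch is the mechanism, not the exact figure: each deferred expansion contributes less than its sticker price, which is why the discounted row grows more slowly than the same-design row.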

Table 5.2 shows that when the realized demand requires the use of the entire constellation's fuel capacity, the total cost differs by only $50 million between the staged launch and the upfront launch. However, Table 5.3 below better illustrates that the true value of a staged deployment is the ability to size for the worst-case scenario initially and save significantly if that scenario comes to fruition, while the decision maker still retains the ability to expand and satisfy high demand. If the constellation only receives a demand of 60 events throughout its lifetime, it requires no additional staging, and the total cost is only $1.2 billion in the staged launch plan. However, the cost for the nominal situation is $1.6 billion, regardless of

Table 5.3: Cost Variations in Example Case

Number of Events    60        80        100
Launch Upfront      $1.6 B    $1.6 B    $1.6 B
Staged Launch       $1.2 B    $1.44 B   $1.55 B

the ultimate demand. This scenario results in a $400 million cost savings, making the program 25% cheaper if the original estimated demand was too high. This savings is a significant reason to consider a staged launch concept of operations. Staged deployment is only feasible because of the variety of launch options now available. Due to the changing landscape of the launch environment, a variety of providers can efficiently accommodate the launch of a single satellite or of dozens. There are two simultaneous movements in the launch industry, both the result of increased commercial interest in the space industry. As more launches are needed by government and commercial customers alike, there is increased opportunity for new companies to break into the launch industry, driving an increase in competition. This new competitive environment has produced revolutionary developments, such as reusable first-stage boosters, which in turn have made getting payloads on orbit more economically feasible. Figure 5-7 below shows how the average cost per kilogram has decreased over time, making it cheaper and cheaper to put mass into orbit [29].

Figure 5-7: History of Large Launch Vehicle Prices [29]

The second movement in the launch industry is the staggering increase in the number of small launch vehicle providers. While small launch vehicles generally have a higher cost per kg to put a satellite into orbit, the alternative for a customer who does not need the large lift capacity of the big rockets is to ride-share on a launch. Using a rideshare could place the customer in an undesirable orbit and forces their schedule and launch to accommodate the primary payload. Therefore, it may be advantageous for a company to purchase a rocket with poorer cost-per-kg performance but a profile better matched to its mission needs. Figure 5-8 shows the cost for current and future small launch vehicles. The trend shows that the current price is about $30,000 per kg to launch into LEO using smaller vehicles, which is more expensive than today's new large launch vehicles shown in Figure 5-7. From 2015 to 2019, an annual small launch vehicle survey went from identifying 22 launch vehicle efforts to 49 efforts [45]. Not all of these efforts will become operational; however, as in the large launch vehicle space, the increased competition will continue the trend of making launching small satellites more affordable. These price changes are especially important for a staged deployment, as the satellites need to deploy into separate planes. The original deployment strategies allow the satellites to slowly disperse to the proper orbits, but this is not a luxury that operators can provide to satellites launching into an operational constellation.

Figure 5-8: Projected Small Launch Vehicle Prices [45]

It is important to note that building satellites is not a trivial matter, and therefore the decision timeline for this project is on a three-year cycle. In this design, satellite manufacturing takes two years to complete, and satellites cannot launch until the following year. These decision-making opportunities are not rolling but instead occur every three years, although this could be adjusted in the future. In a simulation of a 15-year mission lifetime, the remaining fuel across the constellation informs the decision to launch every three years. The simulation assumed that the average demand would remain constant throughout a given three-year period, and it projected the value observed in the first year of the period forward to calculate the average remaining ∆푉 on each spacecraft. When this value dropped below 500 divided by the current staging period number (푖), as in Equation 5.3, the simulation triggered the manufacturing of two more satellites to be launched. The first staging period is when the initial constellation launches, so the decision rule used to launch more satellites starts at period 2 and continues until the last staging opportunity (푛).

\Delta V_{proj} < \frac{500}{i}, \qquad i = 2, 3, \ldots, n \qquad (5.3)

A Monte Carlo analysis showed the effectiveness of using a staged deployment instead of launching an entire constellation at once. The simulation ran over a 15 year lifetime. The simulation used the uncertainty distribution in Figure 5-2 for the average demand in each three year period. The nominal constellation launched 19 satellites upfront with 500 m/s of ∆푉 on board the satellites and did not launch again throughout the mission’s lifetime. The staged deployment started with only 15 satellites launched and could expand by launching two satellites every three years if the decision rule dictated that the fuel on board would likely not be sufficient for the mission.
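The staging loop just described can be caricatured in a few lines. The 500/푖 trigger follows Equation 5.3, but every other number below (per-event fuel cost, fleet sizes, dollar figures, demand levels) is an illustrative placeholder, not the thesis' Figure 5-2 demand model or its cost model.

```python
def simulate_staged(demands, dv0=500.0, dv_per_event=75.0,
                    sat0=15, sat_add=2, cost0=1.2, cost_add=0.13):
    """Walk through the staging periods, expanding the fleet whenever the
    projected average remaining delta-V violates the 500/i rule (Eq. 5.3).

    demands -- events per three-year staging period (placeholder numbers);
    returns (total capital in $B, final fleet size).
    """
    dv_avg, n_sats, cost = dv0, sat0, cost0
    for i, events in enumerate(demands, start=2):   # rule applies from period 2
        # More satellites share the workload, so each burns less on average.
        # (Toy model: dv_avg may go negative; the real simulation would not.)
        dv_avg -= events * dv_per_event / n_sats
        if dv_avg < 500.0 / i:                      # decision rule, Eq. 5.3
            n_sats += sat_add                       # build and launch two more
            cost += cost_add                        # same-design unit cost, $B
    return cost, n_sats

# Low demand never trips the rule; sustained high demand trips it three times:
assert simulate_staged([10] * 5) == (1.2, 15)
cost, sats = simulate_staged([30] * 5)
assert sats == 21 and round(cost, 2) == 1.59
```

A Monte Carlo run then amounts to drawing the `demands` list from a demand distribution many times and histogramming the returned cost, which is the shape of the analysis summarized next.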

Figure 5-9 shows the result of a Monte Carlo simulation. The chart on the left is the most significant. It shows the relative frequency of the capital required for each design. The nominal design has the same capital expenditure regardless of the

resulting demand scenario. This consistency is because the entire cost is upfront, regardless of the outcome. On rare occasions, when demand was extremely high, the staged deployment expended as much capital as the nominal solution. However, the majority of the solutions saved about $100 million compared to the nominal solution. This cost savings comes with equal performance compared to the nominal solution. The middle chart shows the performance of the constellation in terms of the value it captured per million spent. The lines represent the cumulative frequency of occurrence, and the black line, representing the performance of the staged solution, shows slightly better performance overall, as it shifts to the right of the nominal solution, shown in gray. The final graph shows the percent of the demand actually captured. The constellations performed equally in this metric, meaning the staged deployment was not missing targets despite being deployed slowly. Even in the worst-case scenarios, neither the staged nor the nominally deployed satellites captured fewer than 85% of all demand seen.

Figure 5-9: Distribution of Constellation Cost

This decision rule in Equation 5.3 was relatively conservative, ensuring performance was not lost using staged deployment. However, decision makers with different priorities can use different decision rules. For example, to reduce the risk of a loss of a large initial capital investment, a smaller constellation upfront would extend the tails of the cost distribution histogram. Low-demand environments would then require less upfront capital, but more capital would be needed to catch up if demand ran high. Decision rules could also be changed to reflect the current cost of launching and manufacturing satellites. If costs are low, it is more beneficial to launch

more satellites to meet potential demand, but if costs are high, it may be beneficial to wait if the current constellation can meet demand. Regardless of the exact priorities of the decision maker, it is important to give managers the ability to execute options throughout a project's lifetime. A final note on using a staged deployment in general is the potential benefit of upgrading the technology of a system when launching more satellites over time. These upgrades may have some detriments to the learning curve effects mentioned earlier in this chapter. However, the value of the imagery captured would be higher, and thus the value per cost would still increase. For longer mission lifetimes, like the 15-year timeline previously discussed, the ability to upgrade technology every few years could prove invaluable.

5.4 Responsive Launch for ReCon

The staged deployment method assumes a three-year window to go from the decision to manufacture additional satellites to putting them on orbit. However, it would be greatly beneficial if satellites could be manufactured upfront and simply launched on-demand at a moment’s notice. Responsive launch is a capability that is currently unavailable but is under constant consideration. In 2014 the United States Congress requested an update from the Department of Defense on its potential responsive launch capabilities. When the Government Accountability Office released its report, it identified several programs across the DoD that are pursuing the development of launch on-demand, but none of the programs could develop the technology at the time [11]. Now, DARPA appears to be the farthest along in the process in their Launch Challenge. The DARPA Launch Challenge asked companies to launch two satellites into orbit with only a few weeks notice. The winning prize was $10 million, and a high likelihood of receiving future contracts from the government [23]. However, of the three finalists announced in the spring of 2019, no teams were able to complete the challenge. Vox Space dropped out to focus on other products in their company, and

Vector withdrew due to financial difficulties. The final company made it to the launch in March 2020 with two weeks' notice of the location and payload details. However, they scrubbed the launch with only a minute left in the countdown due to technical and weather-related issues [15, 23]. While it currently takes years to go from scheduling to performing a launch, the government is becoming increasingly interested in moving this timeline to weeks or days [11]. However, this is largely out of reach at the moment, due not only to the operational but also the regulatory challenges of short-notice rocket launches [21]. United Launch Alliance CEO Tory Bruno expects that future launch requirements could include a requirement for responsive launch; however, this capability is currently unavailable [25].

An additional factor in the replenishment of ReCon is the use of responsive launch to supplement the performance of the constellation. In the analysis in Section 5.3, the expansion options occurred on a three-year timeline, but reconfiguring satellites is not an instant action. The standard time constraint on the constellation is two weeks. Using this time constraint, the fuel usage per satellite remains approximately constant across a variety of constellation sizes. As the allowable reconfiguration time decreases, it can affect both the number of satellites that are reconfiguring and the amount of fuel used. Some satellites may not be able to reconfigure in time, even with dramatic fuel usage.

RGTs can be characterized by a Λ term that relates to their longitudinal shift in ground track. This term is defined between 0° and 360°, where a value of 0° is the same as 360°. This relationship was described in Equation 4.12. Λ places the ground track onto the Earth and is the term ReCon shifts to respond to an event. Both Ω and 푢 determine this value. The worst-case reconfiguration scenario would be if the optimal Λ position were 180° away from a satellite's current position. For the following analysis, the revisit requirement used is once every six hours throughout an entire 24-hour cycle. The constellation cannot achieve this requirement with fewer than four satellites evenly spaced in a 15/1 repeating ground track. For four satellites evenly distributed in four different planes, the largest required change in Λ would potentially be 45°. Equation 4.12 is shown again below as a reference, using

푀 in place of 푢.

\Lambda = N_o \Omega + N_d M \qquad (4.12)

Using a 15/1 ground track gives a much higher weight to a change in Ω than to a change in 푀. The following equation shows the relative change in Ω that occurs through the 퐽2 effect. To calculate the time (푇) needed to achieve a given change in Ω using the 퐽2 effect, one needs to consider the inclination of the orbit (푖), the final semi-major axis (푎_푓), and the semi-major axis of the drift orbit (푎_{푡푟푎푛푠푓푒푟}), which is directly related to ∆푉 usage.

T = \frac{2\,\Delta\Omega}{-3 J_2 R_e^2 \sqrt{\mu}\,\cos i \left( \dfrac{1}{\left(a_f + \Delta a_{transfer}\right)^{7/2}} - \dfrac{1}{a_f^{7/2}} \right)} \qquad (5.4)
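Equation 5.4 can be evaluated together with the Hohmann-transfer relation of Equation 5.5 (introduced just below) to estimate how long a node shift takes for a given fuel budget. The sketch below assumes a circular orbit near the 15/1 RGT altitude (a ≈ 6934 km) and a 50° inclination; both values, like the specific constants, are illustrative only.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0        # Earth's equatorial radius, m
J2 = 1.08263e-3       # Earth's second zonal harmonic

def transfer_sma(a_f, dv):
    """Invert the Hohmann relation of Eq. 5.5 (equal initial and final SMA)
    to find the drift-orbit semi-major axis a delta-V budget (m/s) buys."""
    return MU / (math.sqrt(MU / a_f) - dv / 2.0) ** 2

def drift_time(d_omega, a_f, dv, inc):
    """Eq. 5.4: time (s) of differential J2 nodal drift needed to shift the
    ascending node by d_omega radians while coasting in the drift orbit."""
    a_t = transfer_sma(a_f, dv)
    denom = -3.0 * J2 * RE**2 * math.sqrt(MU) * math.cos(inc) * (
        a_t**-3.5 - a_f**-3.5)
    return 2.0 * d_omega / denom

# Illustrative numbers: ~15/1 RGT altitude (a ~ 6934 km), 50 deg inclination,
# the full 500 m/s budget, and a 3 deg node shift.
t = drift_time(math.radians(3.0), 6934e3, 500.0, math.radians(50.0))
print(t / 86400.0)  # a few days, consistent with the scale of Figure 5-10
```

Spending less ∆푉 buys a smaller altitude separation, so the differential drift rate shrinks and the waiting time grows, which is the trade the figure below sweeps.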

Equation 5.5 relates ∆푉 and 푎푡푟푎푛푠푓푒푟 when assuming the initial and final orbits have the same semi-major axis. This equation assumes a Hohmann transfer.

\Delta V_{total} = 2 \left| \sqrt{\frac{\mu}{a_f}} - \sqrt{\frac{\mu}{a_{transfer}}} \right| \qquad (5.5)

Figure 5-10 illustrates the relationship between ∆푉 use and the time to make a 3° plane change. This would be the change required to shift Λ by 45° while maintaining 푀. The second line, in red, shows the amount of time required to shift 푀 while maintaining Ω. Note how the units of this shift in 푀 are in hours instead of days. This further illustrates the effect of ReCon in increasing revisit performance without requiring a plane change. The 500 m/s point is marked on this plot to illustrate the point of maximum fuel usage. However, the challenge in ReCon is accounting for the simultaneous changes in Ω and 푀. It is difficult to analytically determine how the satellites will respond to an event. Larger constellations can often reconfigure faster due to their higher chance of already being in a good position for overflights. Figure 5-11 shows the results of a ReCon simulation where the constellation had limited time to reconfigure to the final position to meet a six-hour average revisit requirement. The top line in this chart

Figure 5-10: Time of 45° Shift in Λ Given Varying ∆푉 Usage

shows a constellation size of four satellites; each descending line after that is a larger constellation size, with the bottom line showing a constellation of size 13. Figure 5-11 only extends out to seven days; after this point, constellations of all sizes fail to increase their performance. This convergence is due to the smaller search space ReCon is designed for, which highly values fuel conservation. Figure 5-10 shows the results when a constellation is willing to use all of its fuel. Since the nominal ReCon design uses a five-year mission lifetime and transfers to higher altitudes are limited, the aggressive maneuvers needed are not in the solution space. A constellation of size 13 achieves an average revisit time of less than six hours without reconfiguring. The maximum amount of time needed, according to Figure 5-10, would be a matter of hours if an operator were willing to burn all fuel on board to alter the ground track. For responsive launch to be competitive with a reconfigurable constellation, it must be able to put fewer satellites into orbit than ReCon requires. In this scenario, as seen in Figure 5-11, to respond to a situation immediately with a six-hour revisit time, a minimum of 13 satellites is needed. The total cost of building and launching 13 satellites is $547 million using the same cost model as discussed previously. This model assumes all satellites launch on a single Falcon 9 rocket. Conversely, to have satellites immediately launched into orbit, only 4 satellites are needed to get a six-hour average revisit time. The model approximates the

Figure 5-11: Average Revisit Time as Constellation Size Increases (moving down the graph) and Time to Reconfigure Increases

manufacturing of these satellites as $249 million. This price leaves the user a remaining $297 million for responsive launch. This amount would be more than enough to launch four Firefly Alpha rockets to separate, desirable planes. However, additional expenses are guaranteed to come with the ability to launch satellites into space responsively, including the loading, launching, and deployment time required for a responsive launch to get on orbit. This comparison also does not take into consideration that the satellites already on orbit in the reconfigurable constellation will almost certainly obtain imagery of the target before they reach their final configuration.
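The budget comparison above is plain arithmetic, restated below as a sketch. The four-launch split follows the scenario in the text; whether roughly $75 million buys a dedicated small-vehicle launch into a specific plane is an assumption, and the remaining-budget figure the text quotes reflects rounding of the underlying costs.

```python
# Figures from the text: building and launching a 13-satellite ReCon fleet
# costs $547 M on one Falcon 9; building 4 satellites to hold on the ground
# for responsive launch costs $249 M, leaving the rest for the launches.
recon_total = 547e6
responsive_build = 249e6
launch_budget = recon_total - responsive_build
per_launch = launch_budget / 4    # four dedicated responsive launches assumed
print(per_launch / 1e6)           # -> 74.5 ($M available per launch)
```

At current small-vehicle prices that margin is workable, but it leaves nothing for the standing infrastructure (crews, pads, regulatory readiness) that launch-on-demand would actually require.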

Although launch costs are indeed dropping, and having four launches into four separate planes for a reasonable price is realistic, launch costs are also dropping for large launch vehicles. This drop means that the benefit of launching all of the satellites at once grows larger and larger. It is also important to note that this comparison is for the case of the satellites using all of their fuel in the worst-case scenario to get to a new position. Realistically, the value of ReCon is the ability to reconfigure multiple times, not just for one event. Although there is a responsiveness hit from saving fuel, it is more effective than launching separate satellites for each event.

Responsive launch could only really be considered if used in conjunction with a staged deployment strategy already in place, reducing the delay time for expansion from years to months, or even to weeks or days.

5.5 Challenges in Implementing Engineering Options

There are several implementation challenges involved in the staged deployment approach. Constraints in the staged simulation ensure a built-in delay between deciding to build additional satellites and getting them on orbit. However, keeping a team continuously available to build satellites could prove challenging. Securing the most cost-efficient launch manifests could also be challenging. It may be easier to distinctly separate the development and operation stages for the sake of the workforce, despite the financial incentive to build as needed. However, staged constellations are not a novel concept, and the commercial sector is leading the way in shortening the timeline needed to build satellites with the same designs. Before its financial difficulties, OneWeb opened its mass production facility in Florida, which could build many satellites efficiently. Although OneWeb was unable to reach its goals, facilities like this may become available for use by other entities for a fee, which could be easier and cheaper than a company maintaining its own facilities and workforce [24, 53]. This implementation also assumes that each replenishment uses the same design. This consistency keeps testing costs down, but there will likely be a temptation to upgrade as time goes on. The problem with upgrading designs is the testing that follows from design changes in satellites. Additional trades would be required to decide whether a design upgrade is worth the cost for the potential increase in imagery value. The flexible approach to design is more beneficial in situations that have long timelines or fast turnaround for the flexibility options. The space industry is a traditionally slow business, often with exquisite satellites that can last decades. However, this paradigm is shifting, with manufacturing and launch costs coming down, favoring larger constellations with shorter individual lifetimes. This analysis relies on some of

those effects but is still operating at a smaller scale than the companies proposing mega-constellations in LEO. However, the positive effects from the increases in launch and manufacturing tempos only make this idea more, not less, viable. In situations where the system is slow to respond to demand, flexible decision making may be difficult to implement, which is why it is not very prevalent in today's space market. The main conclusion of this chapter is that applying a second form of flexibility on top of a first can add to a system's performance, especially in the face of uncertainty. The solution to many problems is often not one strategy or the other, but a mix of both.

Chapter 6

Conclusion

The purpose of this thesis was to explore the design of reconfigurable constellations in ways that previous work did not fully analyze. Important questions remained for the potential implementation of ReCon. Although this work is not a design of a single constellation for a single mission, it provides context to decision makers. This work informs the consideration of additional trades when designing a responsive constellation. In general, this thesis showed that considering the potential layout of targets and a satellite's imaging capabilities is very important. Analysis showed that the use of electric propulsion is not yet worthwhile, given its current state. Finally, an engineering options analysis showed that mission planners should consider alternative deployment strategies for ReCon, but only if they are willing to make changes to the traditional process of satellite development. The preceding chapters provided the context and analysis behind these conclusions.

6.1 Conclusions and Contributions

This work began by laying out the history and importance of EO missions. The use of this information has become increasingly valuable with the introduction of data pro- cessing technologies and advances in artificial intelligence. As payloads increase their capabilities, launch becomes more agile, and buses become more variable, reinventing the execution of space missions becomes more important. ReCon is an alternative

concept to traditional constellation designs. This thesis discussed ReCon’s methods and benefits upfront to build the framework that the remainder of the work explored.

This thesis explored three specific areas concerning ReCon. These areas included image collection at the operational level, the use of low-thrust propulsion at the design level, and launch strategies at the execution level. After the discussion of ReCon as an idea, Chapter 2 explored the past work in each of these three areas. This section discussed scheduling techniques and showed the computational challenge that comes from attempting this satellite scheduling problem. It also discussed how there is not necessarily a one-size-fits-all algorithm that can be universally applied when it comes to image scheduling. This chapter provided background for the current state and uses of low-thrust technologies, which have been used primarily at GEO and beyond but not yet explored as a primary propulsion system in LEO. Finally, the section explored previous work that looked at staged deployment and responsive launch systems to show the potential benefits of flexible strategies. However, the landscape of launch has changed dramatically in recent years, and the previous tradespace analysis did not reflect this change.

Chapter 3 introduced the techniques already developed for single-pass scheduling using a dynamic programming approach. This section explored the sensitivities of the current approach. It also introduced the metrics used for performance throughout the analysis. Four different algorithms were adapted for use in conjunction with ReCon. The analysis of the four algorithms showed how the hybrid algorithm provided the most promising results using the fewest iterations. A convergence factor, correlated with a 95% confidence in the scheduler finding the optimal solution, was defined to stop the scheduler. Finally, the integration of the algorithm with the total ReCon design showed that when using RGTs, the geometry of the constellation can predetermine the effectiveness of the design in the context of the scheduler. The effects of implementing the scheduler were important, as they showed an increase in performance for traditionally high-cost designs.

The following chapter explored the potential use of low-thrust propulsion in ReCon. Although work had been done previously on the analysis of the use of low-thrust

propulsion, the full effects of switching propulsion systems were unquantified. This chapter showed that after a two-month window for reconfiguration to an event, the EP constellation failed to reach 70% of the performance provided by the traditional solution. The use of EP was understandably going to require a sacrifice in performance, but it did show some promise for the return to GOM. In cases where a user could wait two weeks for a relaxation to GOM, the EP system proved to have adequate performance. However, further analysis showed that ReCon does not benefit from using an EP system, as its ∆푉 requirements fail to reach high enough values for the mass savings from fuel reduction to balance the mass increases due to higher power loads. This trade could change as additional fuel requirements emerge from an increase in expected maneuvers or as technology continues to drive down the mass required per unit of power. Finally, different deployment strategies were explored through the lens of engineering options. Chapter 5 acknowledged the performance limitations of ReCon when the fuel on board is limited and the true demand for reconfigurations is unknown. This section showed how the use of staged launches does not significantly decrease the performance of a constellation. However, it can defer costs to the future, when the demand picture is more refined and launch and manufacturing costs are likely below where they are presently. Although the original ReCon analysis refuted the use of responsive launch systems, this section also showed the potential responsiveness of ReCon with maximum fuel usage. The analysis showed that if a satellite cannot be in the correct position within hours, there is no benefit to a perfectly functional responsive launch compared to using a reconfigurable constellation. In general, it is difficult to execute both of these options due to the typically rigid structure of satellite programs.

Figure 6-1 summarizes the main findings and recommendations of this thesis. In general, this thesis provides a roadmap on how to explore trades within the ReCon framework. The conclusions and recommendations for each of the three areas of interest are only valid for as long as the performance and cost parameters used remain accurate. The primary takeaway from the analysis is that the answers to questions in the context of complicated designs are often nonintuitive. However, the trades

explored and laid out in the previous chapters provide a context in which a designer could question the efficacy of implementing a new scheduler, propulsion system, or launch strategy.

Figure 6-1: Thesis Areas of Investigation Conclusions

6.2 Future Work

A challenge in exploring design optimization and tradespace analyses is the depth required to achieve a complete analysis. This thesis answers significant questions about ReCon but also leaves some for further exploration. The field of scheduling is vast but important. With the ever-increasing performance of on-board computing, it is recommended that further investigation consider the use of distributed schedulers across satellite constellations. Future work should look at using on-board scheduling informed by cross-links instead of schedules processed on the ground. In general, one should refine the hybrid algorithm to leverage alternative advanced scheduling techniques, including a complete redesign of the scheduler, which could allow dynamic programming to be overlaid on the entire mission instead of a single pass.

In terms of the implementation of new technologies, it is recommended that further work look at alternative means of mass reduction. This should include the use of on-orbit servicers, which could refuel, repair, and upgrade reconfigurable constellations. While low-thrust propulsion does not currently prove beneficial to ReCon, it is recommended to revisit this idea as technology improves. While this work explored different launch strategies, it did not investigate the optimal strategy. Further work should include finding the optimal decision rule for staged launch given different observed demands. Future work should also include refined cost models for staged deployment and responsive launch decision making. Any further work should more thoroughly research the process of implementing a continuous production line for satellites and the operational costs for ReCon.
