THE MODELING, SIMULATION, AND OPERATIONAL CONTROL OF AEROSPACE COMMUNICATION NETWORKS

by

BRIAN JAMES BARRITT

Submitted in partial fulfillment of the requirements

For the degree of Doctor of Philosophy

Thesis Adviser: Dr. Francis L. Merat

Department of Electrical Engineering and Computer Science

CASE WESTERN RESERVE UNIVERSITY

August, 2017

The Modeling, Simulation, and Operational Control of Aerospace Communication Networks

Case Western Reserve University Case School of Graduate Studies

We hereby approve the thesis1 of

BRIAN JAMES BARRITT

for the degree of

Doctor of Philosophy

Frank Merat

Committee Chair, Adviser June 30, 2017 Department of Electrical, Computer, and Systems Engineering

Michael Rabinovich

Committee Member June 30, 2017 Department of Computer Science

Daniel Saab

Committee Member June 30, 2017 Department of Electrical, Computer, and Systems Engineering

Mark Allman

Committee Member June 30, 2017 Department of Computer Science

1We certify that written approval has been obtained for any proprietary material contained therein.

Dedicated to Sharon Barritt, who gave me roots and wings...

Table of Contents

List of Tables vi

List of Figures vii

Acknowledgements x

Abstract xi

Chapter 1. Introduction 1

Chapter 2. Modeling and Simulation 5

Approach 9

Software 18

Trade Studies 19

Conclusions and Future Work 22

Chapter 3. Temporospatial SDN 24

Software-Defined Networking 24

Temporospatial SDN 27

Implementation 34

Conclusions 42

Chapter 4. Use Case: LEO Constellations 43

TS-SDN vs. Dynamic Routing 50

Next Steps and Future Work 58

Conclusions 59

Chapter 5. Use Case: High Altitude Platforms 60

Network Architecture 61

Mesh & Backhaul Challenges 63

Temporospatial SDN 65

SDN and Distributed Routing in HAPS 65

Conclusions 72

Chapter 6. Suggested Future Research 74

NBI and CDPI Standardization 74

Network Functions Virtualization 75

Mobile, 5G, and CORD 76

Delay Tolerant Networking 78

Complete References 80

List of Tables

4.1 OLSR Configuration Tuning Parameters 48

4.2 Simulation Results Summary 57

5.1 Comparison of Three Mesh Routing Protocols 66

5.2 Comparison time at startup (in seconds) 69

List of Figures

2.1 An example protocol stack diagram for a LEO constellation that

provides Internet access. A high fidelity virtual network model

can be built by modeling the ground segment topology, network

protocols, and network application traffic in a network simulator

while modeling the space-to-ground links, inter-satellite links, and

air interface physical layers (highlighted) in a spatial temporal

information system, such as STK. 10

2.2 STK Networking Architecture 14

2.3 Screenshot from STK’s Report & Graph Manager showing return-link

TCP throughput over time for an example mission scenario of a

Cubesat downloading science data during communication through

a ground station in Poker Flat, Alaska. TCP’s slow-start algorithm for

congestion avoidance is evident in the slow ramp-up in throughput in

the beginning of the plot. 19

2.4 End-to-end voice application latency is observed to change over time,

with a substantial increase occurring after the TDRS hand-off.1 20

2.5 The saw-tooth pattern in application packet latency is due to the

insertion of "idle frames" into the Advanced Orbiting Systems (AOS)

space data link protocol.1 21

3.1 A high-level overview of the software-defined networking

architecture2 25

3.2 The network data model represents the nodes (vertices) and links

(edges) in the topology and includes all accessible wired and wireless

links. The data model is time-dynamic; each directional edge is

associated with a set of time intervals of predicted accessibility with

predicted link metrics throughout each accessible interval. 37

3.3 The network topology is annotated with required end-to-end packet

connectivity and provisioned flow capacities. 37

3.4 A traffic engineered solution is created by finding a satisficing

subgraph or spanning tree. 38

3.5 Our implementation includes a topology and routing service, which

jointly optimizes the wireless network topology and routing while

transitioning it through phases in time. 40

4.1 Example polar orbiting LEO constellation, from the ns-2.35 manual, generated using SaVi3. 47

4.2 Simplified Simulation Network Topology 51

4.3 A TCP time-sequence plot, generated using the tcptrace tool4,

showing multiple costly retransmission timeouts (RTOs) resulting

from the soft handover event. 54

4.4 A TCP time-sequence plot, generated using the tcptrace tool4,

showing that additional RTO events occur with the hard handovers. 55

5.1 While it is relatively common for Internet access networks to

accommodate mobility in the Access Layer, high-altitude platform

systems and LEO satellite networks must also accommodate mobility

in their Distribution Layer. 61

5.2 Visualization of the simulated network topology. 67

5.3 Probability distribution function of mesh routing protocol startup

convergence times for all nodes to find a route to a designated EPC

node. 70

5.4 Probability distribution function of mesh routing protocol

convergence times in repairing a single route upon link failure. 70

Acknowledgements

I would like to express my sincere gratitude to my advisor, Prof. Frank Merat, for his continuous support of my Ph.D. study and related research. I'd also like to thank Dr.

Kul Bhasin, for fostering and supporting this area of research and development at NASA

Glenn Research Center; Wes Eddy, who was co-author on several of my publications in this field and co-inventor of Astrolink Protocol; and David Mandle, who co-designed and completely implemented the Topology and Routing Service in our Temporospatial

SDN implementation. And I'd like to thank Google and X, the Moonshot Factory, for supporting my work in this field and for approving publications disclosing our application and implementation of this research on projects there.

Additional acknowledgement, appreciation, and credit are due to Mike Fuentes, Tatiana Kichkaylo, Nicolas Thiébaud, You Han, Ketan Mandke, Adam Zalcman, and Victor

Lin for their collaboration on the ideas, network simulations, implementations, and paper publications related to this research.

Abstract

The Modeling, Simulation, and Operational Control of Aerospace Communication Networks

Abstract

by

BRIAN JAMES BARRITT

A paradigm shift is taking place in aerospace communications. Traditionally, aerospace systems have relied upon circuit-switched communications; geostationary communications satellites act as bent-pipe transponders and are not burdened with packet processing and the complexity of mobility in the network topology. But factors such as growing mission complexity and NewSpace development practices are driving the rapid adoption of packet-based network protocols in aerospace networks. Meanwhile, several new aerospace networks are being designed to provide either low-latency, high-resolution imaging or low-latency Internet access while operating in non-geostationary orbits – or even lower, in the upper atmosphere. The need for high data-rate communications in these networks is simultaneously driving greater reliance on beamforming, directionality, and narrow beamwidths in RF communications and free-space optical communications.

This dissertation explores the challenges and offers novel solutions in the modeling, simulation, and operational control of these new aerospace networks. In the concept, design, and development phases of such networks, the dissertation motivates the use of network simulators to model network protocols and network application traffic instead of relying solely on link budget calculations. It also contributes a new approach to network simulation that can integrate with spatial temporal information systems for high-fidelity modeling of time-dynamic geometry, antenna gain patterns, and wireless signal propagation in the physical layer. And towards the operational control of such networks, the dissertation introduces Temporospatial Software Defined Networking (TS-SDN), a new approach that leverages predictability in the propagated motion of platforms and high-fidelity wireless link modeling to build a holistic, predictive view of the accessible network topology and provides SDN applications with the ability to optimize the network topology and routing through the direct expression of network behavior and requirements. This is complemented by enhancements to the southbound interface to support synchronized future enactment of state changes in order to tolerate varying delay and disruption in the control plane. A high-level overview of an implementation of Temporospatial SDN at Alphabet is included. The dissertation also describes and demonstrates the benefits of the application of TS-SDN in Low Earth Orbiting (LEO) satellite constellations and High Altitude Platform Systems (HAPS).


1 Introduction

For the last two decades there has been a tremendous amount of work on the use of commercial networking protocols for aerospace communications. Today, three major forces continuing to drive the use of networking protocols in aerospace are (1) mission complexity5, (2) increased adoption of modern consumer electronics technologies and standards, and (3) new potential applications for aerospace platforms in providing broadband Internet service to those who lack economical access.

The mission complexity expected for new flagship aerospace systems, with multiple on-board instruments and payloads, flight computers, and other avionics systems, is driving the industry towards networked platforms, which utilize packet or frame structures for their mission-internal communications. Networking technologies are essential to enabling automation and scalable mission control in complex mission architectures where communications may need to be relayed, and management of the communications paths is necessary in addition to management of the mission platforms themselves.

Additionally, NewSpace6 companies and their philosophy require building cheaper and easier-to-operate systems based on existing consumer electronics technology and standards, which are leveraged wherever possible in order to control costs. For telemetry,

command, and control applications, this generally implies software systems built on the Internet protocols. These can be successfully used for communications with (and between) low cost aerospace platforms such as small unmanned aerial vehicles (UAVs), balloons, CubeSats, hosted payloads, or between fleets of larger UAVs or satellite constellations.

Beyond this, the use of aerospace platforms has recently (re)emerged as a potential solution for providing Internet access to rural populations in emerging economies.

High-Altitude Platform Systems (HAPS), which can act as atmospheric pseudo-satellites, are being developed for this purpose by Facebook and by Google's parent company,

Alphabet. Large, credible Low-Earth Orbiting (LEO) constellation plans for providing

Internet services have also recently been announced by OneWeb, SpaceX, Telesat, and

Boeing7. These systems will have networking as their reason for being, and they require improvements in the state of aerospace network management due to the magnitude of their size, geometry, and handling of packet-based communications rather than circuit-switched techniques.

The modeling, simulation, and control of these networks differ enormously from those of current aerospace systems, geostationary orbit (GEO) Internet services, and even existing non-geostationary orbiting (NGSO) constellations (like Iridium and O3b).

With network retransmission protocols and/or dynamic routing in use, the performance of such systems is not linearly or directly assessable via link budgets or other simplistic techniques. And operating these networks also requires improved automation of functions, including multi-hop relay configuration, relay failover, selection of optimal network routes, multicast distribution, and perhaps even store-and-forward capabilities.

This dissertation is organized as follows:

• Chapter 2 explains why link budgets alone are insufficient for assessing the performance of network applications in these aerospace systems, and it motivates the use of network simulators to model network protocols and application traffic. It then contributes a new approach to simulating aerospace networks based on integrating a discrete-event network simulator with high-fidelity, time-dynamic physical layer models8. It also describes the author's implementation of new open-source and commercial software based on the approach.

• Chapter 3 contributes a solution to the operational control of aerospace networks called Temporospatial Software Defined Networking (TS-SDN). TS-SDN leverages predictability in the propagated motion of mission platforms, combined with high-fidelity link modeling, to build a holistic, predictive view of the available link set, thereby providing SDN applications with the ability to optimize the network topology and routing through the direct expression of intended network behavior or requirements9. This gives network operators more flexible and programmable control of their infrastructure, including the ability to avoid packet loss around predictable changes in link accessibility in the dynamic topology. The chapter also offers a high-level overview of the design and implementation of the first TS-SDN network operating system at Google's parent company, Alphabet.

• Chapter 4 motivates the application of TS-SDN in LEO satellite constellations and uses modeling and simulation to compare the performance of TS-SDN in such networks to that of traditional, distributed routing protocols. It demonstrates that TS-SDN can avoid the significant packet loss that would otherwise occur with the use of distributed routing protocols during hand-off events between satellites and user terminals10.

• Chapter 5 motivates the application of TS-SDN in HAPS networks and uses modeling and simulation to explore the application of distributed mesh routing protocols to autonomously repair any disruption of the TS-SDN control plane. While TS-SDN can establish and maintain a wireless mesh/backhaul topology that is more robust and stable than relying solely on distributed mesh protocols, it concludes that it is still desirable to fall back to a distributed routing protocol when a HAP node is disconnected from the controller in the case of an unpredictable link failure11.

• Chapter 6 concludes by proposing areas for further research in the modeling, simulation, and operational control of aerospace networks.

2 Modeling and Simulation

The use of networking protocols in aerospace systems, and analysis of their configuration and operation, differs significantly from classical telemetry and command methods. For instance, instead of periodically coalescing sampled data into telemetry frames that are downlinked at a fixed interval, in a networked system, telemetry packets can vary in size, allowing flexibility in sampling rates, and can be downlinked over either reliable or unreliable protocols. With network retransmission protocols and/or dynamic routing in use, the performance of such systems is not linearly or directly assessable via link budgets or other simplistic techniques. Detailed protocol modeling is necessary to reveal and evaluate important metrics, such as:

reveal and evaluate important metrics, such as:

End-to-end packet or frame-loss rates:
These are related to, but not directly assessable from, individual link bit error rate (BER) or the energy per bit to noise power spectral density ratio (Eb/N0).

End-to-end packet or frame latencies:
These depend on application traffic load on the system, statistical multiplexing, link quality, and transmission mechanisms and their configurations; they are not directly assessable from the signal propagation delays.

End-to-end packet throughput:
The throughput achieved by Transmission Control Protocol (TCP)12 and other connection-oriented transport layer protocols is dependent upon end-to-end packet or frame loss rates and latencies.

Jitter (variability in latency) introduced in the end-to-end packet stream:
This depends on the latencies of subsequent packets within a user's stream. Jitter as a metric is very important to packet-based voice services, teleoperation and haptic feedback, and for users who are sending real-time video data, as the amount of buffering required to mitigate jitter reduces the real-time nature of a data stream.

Network simulators can be employed to quantify these metrics and characterize the performance of frame- and packet-based systems.
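To illustrate how such metrics are typically derived from simulator output, the following sketch computes end-to-end loss rate, latency, jitter, and throughput from a hypothetical packet trace. The trace values and packet size are invented for illustration and are not taken from any experiment in this dissertation.

```python
from statistics import mean, pstdev

# Hypothetical trace: (send_time_s, receive_time_s or None if lost) per
# packet, as might be exported from a network simulator run.
trace = [(0.00, 0.270), (0.02, 0.292), (0.04, None), (0.06, 0.331), (0.08, 0.352)]

delivered = [(tx, rx) for tx, rx in trace if rx is not None]
loss_rate = 1 - len(delivered) / len(trace)    # end-to-end packet loss rate
latencies = [rx - tx for tx, rx in delivered]  # per-packet end-to-end latency
jitter = pstdev(latencies)                     # one common jitter measure

packet_bytes = 1500
duration = max(rx for _, rx in delivered) - min(tx for tx, _ in trace)
throughput_bps = len(delivered) * packet_bytes * 8 / duration

print(round(loss_rate, 2), round(mean(latencies), 3), round(throughput_bps))
```

None of these values follow directly from a link budget: the loss rate depends on queueing and retransmission behavior, and the latencies include queueing delay on top of propagation delay.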

2.0.1 Network Simulation

Network simulation is a technique in which software is used to model the behavior of the network protocols and applications used in a system, which are instantiated on virtual hosts whose interfaces exchange packets in a virtual network. Through this technique, the behavior of the network can be observed, and various parameters, design choices, or environmental conditions can be modified to perform trade studies and experimentation in a controlled and cost-effective manner.

When network simulation software is employed in a network simulation, software models of network applications are used as the source of network traffic and packet generation. As such, the time required to run a network simulation experiment is only bounded by the computational resources required to model each discrete event in the

simulation. A subtle but important distinction is made with network emulation, in which one or more virtual interfaces in the virtual network are bound to physical, real-world network interfaces. Network emulations run in real time (computational resources permitting), since network interface packet captures and packet playback are used as the sources of network traffic in these experiments. Network simulation allows experimenters to quickly obtain quantitative data, while network emulation allows for qualitative/experiential assessment of network performance and for integration with hardware in the loop.

Network simulation software is available in both open-source and commercial pack- ages. ns-313 is a popular open-source network simulator with over 70,000 downloads in

2016 and 9,570 posts to the ns-3 users group in 201614. In addition to network simulation, ns-3 supports network emulation through the Network Simulation Cradle (nsc) and Direct Code Execution (DCE) framework, which allows the use of Linux kernel network stacks within ns-3 simulations15. Popular commercial network modeling & simulation software packages include Riverbed Modeler (formerly OPNET Modeler)16 and

SCALABLE Network Technologies’ QualNet (simulation) and EXata (emulation) software packages17.

Network simulation software packages are designed to model all layers of the Open

Systems Interconnection (OSI) model, including the physical layer. Modeling the physical layer also requires modeling of transmitters, receivers, antennas, signal propagation, and mobility. However, the fidelity of physical-layer modeling in existing network simulators is likely to be insufficient for most aerospace communication systems; their propagation and mobility models are mostly tailored to model RF path loss at sea level over propagation distances characteristic of Wireless LAN and cellular systems. In contrast,

aerospace systems may have communications links that, in part or in whole, operate in the upper atmosphere or the vacuum of space, with platforms whose positions and orientations move through these environments according to ballistics, orbital trajectories, or other laws of motion, and thus require more advanced propagation loss, time-dynamic geometry, and mobility models. Furthermore, platform attitude and dynamics, antenna patterns, gimbal articulation capabilities, and structural blockages due to other parts of a spacecraft are not easy to model, and are not supported off-the-shelf, in most network simulators.
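The scale gap alone is instructive. The standard free-space path loss formula (in dB, for distance in km and frequency in GHz) shows why propagation models tuned to WLAN-scale distances do not transfer to aerospace links; the distances and carrier frequency below are illustrative, not drawn from the text.

```python
from math import log10

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB, with distance in km and frequency in GHz."""
    return 20 * log10(distance_km) + 20 * log10(freq_ghz) + 92.45

# Illustrative numbers: a 50 m WLAN-scale link vs. a ~2000 km LEO slant
# range, both at a 5 GHz carrier.
wlan_loss = fspl_db(0.05, 5.0)
leo_loss = fspl_db(2000.0, 5.0)
extra_loss = leo_loss - wlan_loss   # ~92 dB more loss at the same frequency
```

And this roughly 92 dB gap is before the atmospheric absorption, rain, and scintillation effects that aerospace links additionally experience.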

2.0.2 Spatial Temporal Information Systems

A spatial temporal information system explains the dynamics of objects’ interactions, from signal analysis to trajectory design, spatial modeling, and other spatial analytics18.

Systems Tool Kit (STK) by Analytical Graphics, Inc. (AGI)19 is a well-known example of such software. Other spatial temporal information systems include FreeFlyer by AI

Solutions, NASA Goddard’s GMAT, and SaVi (open source)3,20,21.

In these tools, users can model the time-dynamic position and orientation of vehicles; and, given these dynamic positions and orientations, along with the modeled characteristics and pointing of sensor, communications, and other payloads aboard the assets, the time-dynamic spatial relationships between all of the objects can be determined19. Examples of time-dynamic spatial relationships that can be evaluated include line of sight, transmitter and receiver antenna gains along the link direction vector given their modeled antenna gain patterns, signal path loss, and signal propagation delay. Environmental effects, such as weather, thermal noise, and atmospheric density are often also incorporated in assessing certain relationships, such as communications link budgets19. STK is able to model time-dynamic wireless signal propagation losses

due to atmospheric absorption, rain, clouds, fog, and tropospheric scintillation; its set of optional propagation loss models include the ITU-R P676 Atmospheric Absorption

Model22, ITU-R P618 Rain Model23, and ITU-R P840 Cloud and Fog Model24. It implements Alion's Terrain Integrated Rough Earth Model (TIREM)25 for better modeling of

RF propagation loss over irregular terrain and seawater for ground-based and airborne transmitters and receivers at lower elevations. It can even model RF propagation in 3D urban environments via Remcom's Wireless InSite Real Time embedded module26.

Spatial temporal information systems can therefore model the physical layer and mobility of satellite and airborne platforms with high fidelity. However, to our knowledge, none of the existing spatial temporal information systems are capable of modeling packet- or frame-based network communications or the applications and resulting traffic patterns from network services on these systems.

In order to maximize the fidelity in computing service-oriented network performance metrics, a unified approach integrating both network stack and spatial temporal information systems is needed1.

2.1 Approach

Modern communications stacks are built as a series of layers. The end-to-end communications flow traces down and up the layers implemented in each node (e.g. as shown in figure 2.1). At the bottom of the stack, at each hop, is the physical layer, which represents the physical medium connecting the nodes. The physical layers between terrestrial nodes are generally Ethernet or other wired technologies that can be reasonably modeled within a network simulator. The physical layers between aerospace nodes (or between aerospace and terrestrial nodes) are much more difficult to accurately model in a network simulator, as they require detailed motion propagation, antenna, orbit, attitude, and other flight dynamics data processing to be done at appropriate time-steps.

Figure 2.1. An example protocol stack diagram for a LEO constellation that provides Internet access. A high fidelity virtual network model can be built by modeling the ground segment topology, network protocols, and network application traffic in a network simulator while modeling the space-to-ground links, inter-satellite links, and air interface physical layers (highlighted) in a spatial temporal information system, such as STK.

Although physical-layer models for WiFi and other terrestrial wireless links are readily available in network simulators27, these are significantly different from aerospace link physical layers, and they use different coordinate systems and geometry primitives.

For this reason, it is not reasonable to try to bring detailed physical-layer simulation into network simulator packages directly. The amount of work and maintenance required would be too great in comparison to the volume of people who presently need this kind of analysis. Since detailed physical-layer simulation capabilities, including communications links (e.g. STK/Communications19), already exist in spatial temporal information systems used for mission analysis, it is very sensible to leverage these in modeling the physical-layer properties of aerospace links within an end-to-end network simulation28. Note that while the network in figure 2.1 is overly simplistic, the complexity of real-world multi-node and multi-hop aerospace networks is an even bigger driver for this approach.

An additional benefit of this approach is that it supports direct reuse of models of aerospace platforms and their motion that may already be developed and maintained by mechanical engineers, systems engineers, and other disciplines within a mission team. These can have important influence on antenna performance, pointing capabilities, structural blockages, and other aircraft or spacecraft design aspects that would be extremely difficult to include natively in a network simulator – and which may change often as the design of a mission or system evolves.

In the course of this research, a new framework was developed for this approach.

The implementation consists of a lightweight middleware that allows for the network simulator and spatial temporal information system employed in a cosimulation to be

flexibly interchanged. We named the cosimulation protocol the Astrolink Protocol. Interest in its application for commercial purposes later led the author to develop an implementation as a third-party add-on to the STK product suite called STK Networking.

2.1.1 Prior Work

Researchers of space networks have long known that the prevailing methods of modeling the physical and link layers were insufficient for modeling OSI layer 3 and above. The need for a new approach to network simulation/emulation was also noted during evaluation of new Internet Protocol (IP) based architectures for NASA deep space communications29.

Early work on NASA's Space Communications and Navigation (SCaN) Simulator30 used pre-computed STK link properties, which were saved as a standard STK report text file and imported into QualNet network simulations through a custom module31. This approach worked well for a small number of space links but was not very flexible or scalable in terms of modeling large systems of systems. It did not support dynamic logic in

the spatial temporal information system based on the behavior of the network. For instance, if a command message was to actuate a spacecraft maneuver after delivery, precision in that timing would not have been possible to orchestrate, since the simulators were "ships in the night" that never communicated with one another directly.
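The pre-computed approach amounts to a one-way, time-indexed lookup table, which the following sketch illustrates. The report values are invented; the point is that the lookup is read-only, so the physics side can never react to what the network simulation does.

```python
import bisect

# Hypothetical pre-computed link report, as might be exported from a
# spatial temporal information system:
# (time_s, path_loss_db, propagation_delay_s) samples.
link_report = [
    (0.0, 165.2, 0.0041),
    (10.0, 165.8, 0.0043),
    (20.0, 167.1, 0.0047),
]

times = [t for t, _, _ in link_report]

def link_properties(sim_time_s):
    """Step-wise lookup of the most recent pre-computed sample.

    The lookup is one-way: the physics side never learns what the network
    simulation did, so behavior-dependent geometry (e.g. a commanded
    spacecraft maneuver) cannot be modeled this way.
    """
    i = bisect.bisect_right(times, sim_time_s) - 1
    _, loss_db, delay_s = link_report[max(i, 0)]
    return loss_db, delay_s

print(link_properties(12.5))   # sample at t=10 s applies
```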

The GEMINI (Glenn Environment for Modeling Integrated Network Infrastructure) simulator, which the author helped to develop in prior work at the NASA Glenn Research

Center, is an example of how STK and its communications link analysis modules can be dynamically interfaced with the QualNet network simulator1. GEMINI followed the general approach that is outlined above; however, through the use of GEMINI the developers discovered some points that could be improved. The following lessons learned by the author in developing and using GEMINI, and earlier tools, have been addressed in

STK Networking:

(1) In GEMINI, mappings between STK and QualNet objects needed to be done manually within the QualNet XML scenario files. This included embedding the strings that identify STK objects, which can be rather long, as they represent paths within the organization of the STK objects. This is also brittle to changes in how the STK objects are organized and labeled, which means that simple changes to the STK model could drive the need to make manual changes to the QualNet XML scenario file. This drove us to adopt a design for STK Networking that decouples the STK object paths and names from the network simulator objects; and, rather, allows for flexibly "mapping" the objects between simulators without embedding identifiers from one tool inside the other.

(2) GEMINI performance was much slower than realtime due to the QualNet model's reliance on remote procedure calls using the STK/Connect interface for every individual link event. Only part of this delay can be attributed to the STK computations; performance hits also come from network communications between the different simulator machines and the marshalling of data between machine integers and the STK/Connect API text calls. This spurred us to design STK Networking such that the spatial temporal information system calculations could be cached, allowing the number of RPC transactions to be greatly reduced, as well as enabling us to develop a pre-fetching capability, which is necessary for using the network simulator to perform near-realtime link emulation with actual hardware or software in-the-loop. We also improved performance by embedding an STK Engine instance within the application instead of using the text-based STK/Connect socket API.

(3) The design and implementation in GEMINI was very specific to supporting only the STK and QualNet simulators. The techniques used were not flexible enough to allow for the use of alternative spatial temporal information systems or network simulators. We realized that supporting N physics simulators and M network simulators would require N-by-M integration tools, and it would not be reasonable to create or maintain such a suite of software. This drove the use of a middleware-like concept for STK Networking in order to make integrating a diverse set of simulation tools scalable and manageable by a small team.
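The caching and pre-fetching idea from lesson (2) can be sketched as follows. The class, parameter names, and the fetch function are invented for illustration; the real STK Networking design is not disclosed in this text. The key point is that one RPC prefetches a window of future link samples, so per-event lookups mostly hit a local cache.

```python
# Minimal sketch of the caching idea: link properties are fetched from the
# physics side in batches covering a window of future simulation time, so
# most lookups hit the local cache instead of issuing one RPC per event.
class LinkPropertyCache:
    def __init__(self, fetch_window, window_s=60.0):
        self.fetch_window = fetch_window   # RPC: (link, t0, t1) -> {t: props}
        self.window_s = window_s
        self.cache = {}                    # (link, bucketed_time) -> props
        self.rpc_count = 0

    def get(self, link, t, step=1.0):
        key = (link, round(t / step) * step)
        if key not in self.cache:
            # One RPC prefetches a whole window of future samples.
            t0 = key[1]
            samples = self.fetch_window(link, t0, t0 + self.window_s)
            self.rpc_count += 1
            for ts, props in samples.items():
                self.cache[(link, ts)] = props
        return self.cache[key]

def fake_fetch(link, t0, t1, step=1.0):
    # Stand-in for the spatial temporal information system: loss grows with t.
    return {t0 + i * step: {"loss_db": 160 + 0.1 * (t0 + i * step)}
            for i in range(int((t1 - t0) / step))}

cache = LinkPropertyCache(fake_fetch)
for t in range(0, 120):                 # 120 per-second link events...
    cache.get("sat1->gs", float(t))
print(cache.rpc_count)                  # ...served by only 2 RPCs
```

With a 60-second prefetch window, 120 per-event lookups collapse into two RPC round trips, and the same window mechanism naturally supports the near-realtime pre-fetching needed for hardware-in-the-loop emulation.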

2.1.2 STK Networking Architecture

Figure 2.2 describes the STK Networking system architecture at a high level. The key component is the Astrolink Server, which intermediates communications between all other elements. Multiple user interface options exist in order to interact with the server.

Figure 2.2. STK Networking Architecture

It accepts commands and exports status and other data using a gRPC interface, which allows either a web-browser-based application, or use of gRPC and Google Protocol Buffer

APIs, available for all common languages, in order to provide a user interface. One user interface that we have developed is a plug-in that integrates within the STK Graphical

User Interface (GUI). This chapter mainly focuses on the Astrolink Protocol and the design of the server and network modules, so the user interface options are not the focus of the rest of this chapter.

Many spatial temporal information systems support interfaces for integration with external code. For instance, many models or components of models are prototyped or maintained in Matlab or other programming languages. Being able to import and export data from a spatial temporal information system, beyond just file-based I/O, is a critical feature. However, there are no standard cross-platform, cross-vendor, cross-tool methods for doing this in "real-time" during a simulation.

The Astrolink Server communicates with spatial temporal information systems using each of their own native mechanisms. Meanwhile, it communicates with network simulators using the Astrolink Protocol described in the next section. This architecture was chosen because the major network simulators to date are designed in order

to be extensible with new protocols and models; they easily support drop-in of code to add Astrolink physical layer models, which retrieve data external to the simulator rather than computing physical-layer properties from an internal model. On the other hand, it is not as easy to add code to spatial temporal information systems, and so we have N

(or more) interface methods for N spatial temporal information system packages, even though there is only 1 interface (the Astrolink Protocol) used in common to all supported network simulator packages.

2.1.3 Astrolink Protocol

The Astrolink Protocol is a relatively simple TCP-based application protocol. It runs on port 27876. Each Astrolink network simulator module opens a listening TCP socket when a network simulation is started. Using simulator-specific methods, the network simulation is immediately paused to wait for a control signal from the Astrolink Server.

The Astrolink Server is configured with the addresses of hosts running network simulators, and it opens TCP connections to each of them. These hosts can be located on the same computer as the Astrolink Server and/or the spatial temporal information system but, for scalability, can also be one or more separate computers or high-performance computing clusters.

After the TCP connections are established, remote procedure call (RPC) messages are exchanged between the server and network simulator modules. Astrolink uses the gRPC framework32; each message is a simple Protocol Buffer message in serialized wire format. Over time, the Astrolink Protocol can be updated simply by extending the Protocol Buffer descriptor with additional messages and fields. Protocol procedures allow older code to gracefully indicate failures when receiving newer, unsupported messages.

There are presently a number of different messages implemented to support basic simulator integration and state queries, as well as export of network flow data from the network simulator, and other types of event notification. The core messages needed for a basic network simulation include:

GetNetworkInterfaceListRequest:

This message is sent from the server to a network module to request a list of the data structures representing the simulated aerospace link network interfaces. These can then be mapped directly to objects (e.g. transmitters, receivers, antennas) in the spatial temporal information system. Each has a textual name assigned by the user when the network simulation is created, as well as other useful data (MAC addresses, IP addresses, etc.).

GetNetworkInterfaceListResponse:

This message is returned from a network module to the Astrolink Server, and it enumerates the set of network interfaces in the network simulation that use Astrolink physical layer models. These interfaces need to be mapped through to objects within the spatial temporal information system.

AstrolinkReadyNotification:

After the mappings have been completed between spatial temporal information system objects and network simulator interfaces (typically through user interaction), this message is sent from the Astrolink Server to the network modules, allowing them to resume the network simulation.

GetLinkPhyStateRequest:

During the course of a network simulation, this message is sent from the network simulator modules to the Astrolink Server to request computation of relevant link properties (one-way latency, modeled signal propagation losses, and antenna gains along the link direction vector). The queries indicate the network interfaces between which the link properties need to be evaluated, as well as the network simulation clock time (which is relative to an epoch within the spatial temporal information system).

GetLinkPhyStateResponse:

This message is a reply from the Astrolink Server to the network modules answering a GetLinkPhyStateRequest. It contains all of the necessary link parameters computed by the spatial temporal information system, which allows the network simulation to continue.

In the network simulator, only links simulating aerospace platform connections use the Astrolink physical layer models. In the GetNetworkInterfaceListResponse, only network interfaces pertaining to these aerospace links are returned and available to map to antenna objects in the spatial temporal information system. All other links in the simulation (e.g. between terrestrial nodes, or between nodes onboard the same aircraft or spacecraft) are kept local to the network simulators, and packet flow between them does not require Astrolink GetLinkPhyStateRequest operations.

The network simulator modules can be tuned to cache requests for channel state information from the spatial temporal information system; instead of allowing the network simulator to issue a GetLinkPhyStateRequest for each event on a link, it can cache data for configurable amounts of simulator clock time. Since spatial temporal information systems, like STK, perform analysis over some configured step size (a 1 second step size is common for communication link analysis), it is reasonable to cache channel state information per link over the same interval. To support near-realtime emulation (e.g. with hardware/software in the loop) rather than strict non-realtime event-based simulation, queries can also be made predictively, in advance, for a window of time, so that answers are available immediately.
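The per-step caching idea can be illustrated with a short sketch. This is not the module's actual implementation; it is a minimal Python model, assuming a fixed analysis step size, in which every link event falling inside one step reuses a single query to the Astrolink Server.

```python
# Illustrative per-link channel-state cache keyed by analysis step.
# Assumes a fixed step size (the text cites 1 s as common); all names
# are hypothetical, not the real module's API.

class LinkPhyStateCache:
    def __init__(self, step_size_s=1.0):
        self.step_size_s = step_size_s
        self._cache = {}   # (link id, step index) -> cached phy state
        self.misses = 0    # counts queries that would reach the server

    def get(self, link, sim_time_s, compute):
        # Quantize the query time to an analysis step so that every
        # event on the link within one step shares one lookup.
        step = int(sim_time_s // self.step_size_s)
        key = (link, step)
        if key not in self._cache:
            self.misses += 1  # would issue a GetLinkPhyStateRequest here
            self._cache[key] = compute(link, step * self.step_size_s)
        return self._cache[key]
```

With a 1 s step, two packet events on the same link at t = 0.2 s and t = 0.9 s trigger only one round trip to the server, while an event at t = 1.1 s triggers a fresh one.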

2.2 Software

The code that implements the STK Networking architecture is divided between an Astrolink Server, written in Java for either Linux or Windows; an STK Networking user interface, written in C# for Windows; and an Astrolink network module for ns-3, written in C++ for Windows or Linux.

Testing STK Networking with even simple example mission scenarios is sufficient to observe and verify interesting effects of running network protocols rather than typical fixed-rate telemetry and transfer protocols. For instance, with TCP-based flows, it is possible to see the characteristic behavior of TCP's slow-start algorithm, as shown in Figure 2.3.

But the benefits of this unified approach to aerospace network modeling and simulation are best illustrated by its application in real-world trade studies.

Figure 2.3. Screenshot from STK’s Report & Graph Manager showing return-link TCP throughput over time for an example mission scenario of a Cubesat downloading science data through a ground station in Poker Flat, Alaska. TCP’s slow-start algorithm for congestion avoidance is evident in the slow ramp-up in throughput at the beginning of the plot.

2.3 Trade Studies

NASA applied this new approach in a simulation of the end-to-end latency and throughput of the Orion spacecraft. In this study, Orion sent and received G.729 RTP33 Voice-over-IP (VoIP) flows over the NASA Space Network in a one-hour scenario that included a hand-off from a Tracking and Data Relay Satellite (TDRS) associated with the White Sands Complex (WSC) to a different TDRS associated with a terminal in Guam. In the scatter plot of end-to-end packet latency over time shown in Figure 2.4, we can see that the curvature of the plot is due to the shortening and then lengthening of the total propagation distance as the spacecraft travels along its orbit.

Figure 2.4. End-to-end voice application latency is observed to change over time, with a substantial increase occurring after the TDRS hand-off.1

The thickness of the apparent line in Figure 2.4 is due to the observed jitter in the voice application packet stream, which is shown in Figure 2.5. Closer inspection of the data revealed that the saw-tooth pattern in application packet latency was due to the insertion of “idle frames” into the Advanced Orbiting Systems (AOS) space data link protocol. Due to the limited data rate of the simulated voice streams, the link layer protocol onboard Orion occasionally finds that there are no network layer packets ready for transmission. As a result, an “idle frame” is sent to keep the synchronous space link active and maintain symbol synchronization at the demodulator; any outbound packets arriving in the link layer queue during transmission of this idle frame must wait for its completion before they can be transmitted. This contribution to end-to-end latency had not been captured in NASA’s previous analytical evaluations of network performance, which relied upon spreadsheets and other techniques instead of full protocol stack simulation1.

Figure 2.5. The saw-tooth pattern in application packet latency is due to the insertion of “idle frames” into the Advanced Orbiting Systems (AOS) space data link protocol.1

GEMINI, the predecessor to STK Networking, was used by the author in a different NASA trade study that focused on whether VoIP requirements could be met from the Orion spacecraft and Ares I launch vehicle during launch and ascent through main engine cut-off (MECO). The study focused on the minutes before MECO, when the Near-Earth Network (NEN) link to NASA’s Wallops Flight Facility on Chincoteague Island, Virginia would be at a very low elevation angle, and sought to understand whether an additional ground station would need to be built at a more northern latitude in order to satisfy VoIP performance requirements34. Because the antenna gain pattern of the Ares rocket was fixed relative to the body axis of the launch vehicle, and given a roll tolerance of ±10° in the launch vehicle, an integrated network simulation with QualNet and ns-3 was run at each 1° step from -10° to +10° off of the nominal roll in the STK scenario. The engineers concluded that the need for an additional ground station could be mitigated by tightening the roll tolerance of the Ares I rocket. The ability to link VoIP application performance to the time-dynamic effects of the link direction vector on the measured antenna gain pattern and the launch vehicle’s roll specification (in its local reference frame) would not have been possible without the integration of a spatial temporal information system with a discrete-event network simulator.

2.4 Conclusions and Future Work

Aerospace missions in the near future will utilize networking protocols to cope with increasing complexity and to reduce costs. Since the performance of networking protocols can exhibit non-linearities or unexpected (e.g. second-order) effects, accurate network system simulation will be an important part of mission design.

We have contributed the first-ever integration of a temporospatial information system with a discrete-event network simulator, resulting in higher-fidelity network simulation for aerospace systems. We learned from our initial implementation, GEMINI, and applied those lessons in a new, more general implementation called STK Networking. The STK Networking architecture enables broad software reuse by integrating existing simulation packages using middleware and custom network simulator modules.

At present, the ns-3 Astrolink modules developed for this research are available as open-source software, and a beta release of STK Networking is available from AGI as an integrated, commercial solution that includes an Astrolink Server and user interface.

This initial release of STK Networking requires users to define the network topology, protocol stacks, and network applications in ns-3; its user interface merely facilitates the binding of network interfaces to STK antenna objects and running the co-simulation.

The following is a list of future work proposed for later releases of STK Networking:

• Reducing the learning curve by enabling STK users to define network simulation scenarios and drive an ns-3 co-simulation entirely from within the STK graphical user interface.

• Improved support for near-realtime hardware/software-in-the-loop testing to move from simulation to emulation.

• The ability to import, export, and synchronize simulated network configurations with real network configurations. This is necessary for reducing the user burden of large-scale simulation and emulation, and it provides an interesting way to do configuration management of the simulation system, thereby ensuring high fidelity. Additionally, it would allow discrete changes from an understood operational baseline to be tested in simulation prior to deploying those changes in the field. This is extremely important for aerospace systems that could wind up lost or degraded if a poorly-tested or poorly-analyzed change is deployed.

3 Temporospatial SDN

The previous chapter described a new approach to the modeling and simulation of aerospace communication networks. This approach can be used to assess the predicted performance of network applications in aerospace networks, which is important for systems engineers and protocol researchers to understand as early as possible in project lifecycles. However, in the course of that research, we realized that high-fidelity, time-dynamic modeling of aerospace communication networks could also be applied to solve key challenges in the operation of aerospace networks.

3.1 Software-Defined Networking

Software-Defined Networking (SDN) decouples the control and data planes of networking devices35. SDN enables the implementation of services and applications that manage and control the network through a software abstraction layer on a centralized controller, which provides a holistic view of the entire underlying network infrastructure.

The SDN Controller handles the control logic of the network and interacts with network elements through a southbound interface called the Control-to-Data-Plane Interface (CDPI). OpenFlow is a well-known example of a CDPI protocol, and it has become widely supported across the current generation of terrestrial router and switch hardware, as well as in software switch platforms36.

Figure 3.1. A high-level overview of the software-defined networking architecture2

Beyond the capabilities of legacy control plane protocols (such as intradomain routing protocols), SDN gives network owners and operators more flexible and programmable control of their infrastructure. This allows them to use standard cross-vendor interfaces to pursue customization and optimization, which in turn reduces overall capital and operational costs. As an example, Google and its parent company, Alphabet, have been deploying and enjoying the benefits of SDN in their datacenter networks for more than a decade37.

Recently, new requirements have led to proposals to extend this concept to Software-Defined Wireless Networks (SDWN)38,39, which decouple radio control functions, such as spectrum management, mobility management, and interference management, from the radio data plane. SDWN has a number of advantages in wireless backhaul networks40.

The aerospace industry tends to be risk-averse; and, with most aerospace communications systems still using legacy circuit-switched technology, there has been little adoption of SDN to date. But a study by Honeywell concluded that SDN is a promising technology that can offer many benefits to aerospace41. In proposing an architecture for NASA’s future space communications infrastructure, Clark, Eddy, et al. concluded that SDN platforms will be crucial to enabling cognitive networking42. Some advantages that SDN and SDWN can offer in aerospace communications include:

• Slicing of flows and network resources in order to implement multiple isolated virtual networks on top of the same physical infrastructure. For aerospace platforms such as UAVs and spacecraft, this can permit isolating experiment data and instrument flows from flight telemetry, tracking, and control data while achieving size, weight, and power (SWaP) efficiencies by sharing the same RF links, communications systems, and onboard buses. For wireless aerospace networks, this can permit radio system reuse and sharing between multiple payloads (e.g. in a hosted payload). Within a fleet, constellation, or mission involving multiple platforms, it can permit more efficient and aggressive sharing of spectrum.

• Enabling a more flexible network control structure involving either single or distributed controllers in multiple arrangements. Adjustments to the control plane composition are possible during and throughout operations and allow operators to trade off link efficiencies, fault tolerance, and other factors dynamically – and to bring in or remove additional control-plane systems for diagnosis, debugging, or other temporary tasks without impacting the rest of the network.

• Traffic engineering within the wireless network in order to perform functions analogous to those supported by traffic engineering in wireline networks. This includes optimizing the cost of selected paths, the loading of individual resources, and meeting other policy objectives. This could also support automated shifting of network flows and spectrum-agile radios between different spectrum bands (e.g. Ka-band to Ku-band), for instance.

Beyond the basic SDWN features applicable to aerospace networks, however, there are additional benefits that can be obtained by using SDN techniques to control a network with additional information about the physical platforms and their environment. In this section, we propose Temporospatial SDN, which extends the SDN/SDWN paradigm even further to enable SDN applications to make network control decisions based on the location, motion, and orientation of assets in space (i.e. position vector, velocity vector, and attitude); the relationships between those assets and their constraints (e.g. pointing angle limits and planetary, structural, or other occlusions); and the quality of wireless communications as assets move through space and time (e.g. sources of RF interference, mutual interference, atmospheric or space weather affecting signal attenuation, etc.).

3.2 Temporospatial SDN

In traditional networks, wired or wireless, failure is mostly unpredictable. Such unpredictable outages may be caused by hardware failure, wireless signal fading, or occluded links. This is typically addressed by building redundancy into the network and/or reactively repairing breakages when they occur.

In aerospace networks, however, the properties of the candidate network topology change constantly, but in a somewhat predictable manner. While it is possible to treat such changes the same way as “traditional” failures, doing so leads to inefficient use of resources and to user-visible network disruptions.

A better solution is to extend the SDN concept again, this time by adding a temporospatial aspect9. In this extended model, called Temporospatial SDN (TS-SDN), the holistic view of the topology provided to SDN applications is annotated with the predicted, time-dynamic properties of its network links and nodes, and network changes are scheduled rather than executed as soon as possible. In TS-SDN, the SDN controllers utilize knowledge of the physical position and trajectory of each platform and its antennas to make predictions about the future state of the lower-level network9. For instance, the state and performance of current or potential future line-of-sight wireless links in a UAV or satellite network is relatively easy to predict accurately by embedding software libraries for spatial temporal analysis, such as STK Components by Analytical Graphics, Inc. (AGI)43, into the SDN controller layer.

In contemporary SDN, the SDN Controller is responsible for compiling the requirements from the SDN Applications down to stateful changes to the actual network infrastructure, and it is also responsible for providing SDN Applications with an abstract view of the network that may include statistics and event reporting. In Temporospatial SDN, high-fidelity, time-dynamic, predictive wireless link modeling is embedded in the SDN Controller. This enables the TS-SDN controller to provide TS-SDN Applications with a predictive, time-dynamic view of the network topology that includes not only wired links but also the link metrics and accessible time intervals of current and all candidate wireless links.

Work on Temporospatial SDN has led to the identification of several types of aerospace SDN applications that are fundamentally enabled by the addition of temporospatial knowledge and are not otherwise possible with contemporary SDN and SDWN technology.

3.2.1 Topology Management

The use of centralized controllers in aerospace network operations is not new. In satellite communications, scheduling controllers are commonly used to reserve ground stations and to task antenna gimbals to track satellites. These controllers rely on satellite orbit propagation models to predict and deconflict access intervals for communications.

Establishing wireless links between mechanically steered and highly directional antennas is difficult using decentralized, distributed protocols, and this motivates the use of a controller with a holistic view of ground segment resources and space segment users to direct the establishment of directional, point-to-point links.

In the Temporospatial SDN paradigm, scheduling controllers are implemented as SDN applications; they have access to a holistic view of the time-dynamic modeled mobility of network elements (satellites, rovers, aircraft, balloons, ships, etc.) in order to drive operational decisions about the time-dynamic topology of the network (the antenna tasking schedule). Computing feasible schedules may involve models that include many aspects of the physical systems, including gimbal rates for slewing antennas, attitude control system parameters, the ability to propagate orbits with tolerable accuracy, etc. None of these capabilities are part of existing SDN systems, but they are fundamental to Temporospatial SDN, and they enable controllers to produce and distribute network directives that match asset capabilities. STK Components libraries make it relatively easy to add time-dynamic modeling of physical systems to existing SDN controllers that are based on either Java or C#43.

In addition to modeling time-dynamic mobility, time-dynamic prediction of network link accessibility and performance metrics through high-fidelity antenna pattern modeling, wireless signal propagation, and network protocol behavior is also possible8. Based on trajectory predictions and mission planning data, software can be used to model the time-dynamic position and orientation of aerospace vehicles; and, given these dynamic positions and orientations, along with the modeled characteristics and pointing of sensor, communications, and other payloads aboard the vehicles, the time-dynamic spatial relationships between all of the objects can be determined. This includes predictions of line-of-sight visibility times, propagation delay, predicted wireless signal strength and signal-to-noise ratio, and other communications system performance metrics. Environmental effects, such as weather, thermal noise due to beta angle, sun transit solar radiation, and atmospheric density may also be incorporated in assessing certain relationships, such as in determining communications link budgets43. By incorporating this entire corpus of information and predictions into the holistic view of the network provided by the Temporospatial SDN control layer, our SDN applications are able to better optimize wireless network topology management and use automation to intelligently plan which subset of the many possible paths through the potential future networks should be utilized.
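The simplest of the link predictions discussed above can be illustrated with a free-space link budget. The sketch below is a minimal Python example of the kind of per-timestep calculation a TS-SDN controller's modeling layer performs; real tools such as STK Components model far more (antenna patterns, rain and atmospheric attenuation, interference), and the function names here are illustrative.

```python
import math

# Minimal free-space link-budget sketch. The production modeling layer
# is far richer; this shows only the basic physics of one prediction.

C = 299_792_458.0  # speed of light in vacuum, m/s

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c), in dB."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

def one_way_latency_s(distance_m):
    """Propagation delay over the slant range."""
    return distance_m / C

def received_power_dbw(tx_power_dbw, tx_gain_db, rx_gain_db,
                       distance_m, freq_hz):
    """Link budget: EIRP plus receive gain minus path loss."""
    return (tx_power_dbw + tx_gain_db + rx_gain_db
            - free_space_path_loss_db(distance_m, freq_hz))
```

Evaluating these functions at each analysis step along a predicted trajectory yields the time-dynamic link metrics that annotate the controller's view of the topology. For example, at 1 km and 2.4 GHz the free-space path loss is roughly 100 dB.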

Links that are physically feasible but not needed can be pruned, and power, spectrum, momentum, fuel, or other resources can be conserved. Failures can be proactively modeled or predicted (e.g. based on trending and analysis data) and their impact can be assessed automatically, with alternative routes pre-computed and ready in case a failure occurs. The combination of the physical information provided through Temporospatial SDN control software and the network programmability of SDN hardware fundamentally enables topology management functionality far beyond what existing satellite network control and scheduling systems support.

3.2.2 Packet Forwarding

A number of aerospace systems are currently being developed for the purpose of providing Internet access. For example, in contrast to previous GEO systems, and even existing LEO constellations, some proposed Internet access systems are attempting to operate at lower orbits (or even on aerospace platforms within the atmosphere) as a means to avoid the significant end-to-end packet latency that is known to encumber GEO systems44. However, lower altitudes usually come at the expense of greater mobility and a higher rate of change in the network topology. Consider that the coverage area of an individual spot-beam on a Low-Earth Orbiting (LEO) satellite may traverse a municipality in a matter of seconds or minutes. Network path control via distributed routing protocols (such as OSPF, IS-IS, or any of the mobile ad-hoc networking protocols) is generally reactive to changes in the physical environment after they have occurred and been detected. These protocols do not take into account known degradations that will occur, and they do not proactively re-route traffic ahead of disturbances or changes to the topology. If these protocols are unable to keep up with the rate of change in the network topology, unacceptably long periods of packet loss could occur, resulting in exponential back-off or broken sockets for Transmission Control Protocol (TCP) application flows and dropped calls or extreme delays in voice and video applications.

Temporospatial SDN can solve such problems. By predicting the near-term trajectory of aerospace network platforms in the SDN control layer, using this to continuously refine wireless link accessibility and performance predictions, and providing this information to the SDN application layer, the SDN routing application can anticipate topology changes and route breakages before they occur. It can then utilize modeled link metrics to proactively select a new route and modify the flow rules accordingly, throughout the network, at the appropriate time.
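The proactive behavior described above can be sketched in a few lines. This is not the production routing application; it is a toy Python model, with hypothetical names, in which each candidate link carries predicted accessibility intervals and flow-rule changes are emitted at the predicted transition times rather than after a failure is detected.

```python
# Toy proactive-rerouting sketch over a time-annotated topology.
# links: {link name: [(t_start, t_end), ...]} of predicted
# accessibility intervals, in simulation seconds.

def links_up_at(links, t):
    """Set of links predicted to be usable at time t."""
    return {name for name, intervals in links.items()
            if any(start <= t < end for start, end in intervals)}

def schedule_flow_changes(links, horizon_s, step_s=1.0):
    """Walk the prediction horizon and emit (time, active link set)
    entries only at instants where the predicted topology changes;
    these are the times at which flow rules would be pushed."""
    schedule, previous = [], None
    t = 0.0
    while t < horizon_s:
        up = links_up_at(links, t)
        if up != previous:
            schedule.append((t, up))
            previous = up
        t += step_s
    return schedule
```

For two links predicted accessible over [0, 5) and [3, 10), the schedule contains entries at t = 0, 3, and 5: the controller reroutes onto the second link before the first one drops, instead of reacting to the loss.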

This is possible even in large systems, with hundreds or thousands of moving platforms, because their positions and communications capabilities can typically be predicted in advance by propagating the orbital elements of spacecraft, the near-term flight paths of aircraft and balloons, etc., and combining knowledge of their future positions with other environmental data. And because each computation of the propagated motion of a platform or wireless link model is independent of every other computation, the computations can be parallelized across multiple compute cores and machines for scalable implementation on cloud compute infrastructure.
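Because the per-platform computations are independent, the fan-out is embarrassingly parallel. A minimal Python sketch, with a stand-in `propagate` function in place of a real orbit or flight-path propagator:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch only: propagate() is a placeholder for a real propagator, and
# a production system would distribute this across machines, not just
# threads in one process.

def propagate(platform_id):
    """Stand-in for propagating one platform's motion; returns a
    (platform id, predicted state) pair."""
    return (platform_id, f"state-of-{platform_id}")

def predict_all(platform_ids, workers=4):
    """Fan the independent propagation jobs out across workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(propagate, platform_ids))
```

The same map-style decomposition carries over to processes or cloud workers, since no propagation job depends on another's result.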

Temporospatial SDN is also more efficient than relying on routing protocols because there is no overhead from control plane signalling between the network elements to probe and detect predictable changes to link metrics. Instead, it relies on efficient control plane messaging between the Temporospatial SDN control system and the specific network elements whose state is scheduled to change based on a change in the intended state of the network.

When unpredictable link outages inevitably occur, a stranded network node may need to fall back to an alternate mechanism in order to restore control plane connectivity to the TS-SDN controller. For example, a satellite system may be designed to use an omnidirectional S-band telemetry, tracking, and command (TT&C) link as a backup control plane channel. And TS-SDN does not preclude the use of a backup distributed routing protocol, utilized when packets “fall through” the higher-priority routing table entries installed by the TS-SDN controller.

Many different control plane architectures are feasible for TS-SDN. While OpenFlow generally operates remotely, between switches and the servers hosting the controller software, over a network, controllers can also be co-located with switches (e.g. onboard a relay spacecraft). TS-SDN allows centralized intelligence to completely orchestrate the future network state, distribute timed directives to localized relay controllers, and have those directives executed at set times – even if control plane connectivity to the centralized intelligence is later broken.

3.2.3 Interference Avoidance

Aerospace communication systems often rely on regulated RF spectrum. In order to be permitted to operate a new aerospace communication system, the operator must demonstrate that the new system will not interfere with incumbent systems. For example, OneWeb’s priority spectrum allocation from the International Telecommunications Union (ITU) carries the constraint that their use of the spectrum must not cause interference with GEO satellites45.

Temporospatial SDN can be used to respect spectrum constraints and to avoid interference. The SDN control layer can model constraints on link accessibility and designate hypothetical links as inaccessible during windows of time in which their use would otherwise violate RF spectrum allocation rules. Any other SDN application, such as those responsible for topology and route management, would then react to reorganize the network topology and routing around the inaccessible link intervals while still meeting business objectives.

Additional sources of interference, such as cellular infrastructure, radar sites, and solar and cosmic objects, can be inferred as well based on their presence in the line-of-sight cones of antenna objects. Additionally, since their actions are coordinated by the Temporospatial SDN control software, interference that might occur due to the orientation of the aerospace network objects among themselves can be predicted and avoided, since most assets within a constellation or other cooperative system will be operating within the same range of spectrum.
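Mechanically, marking links inaccessible during predicted interference windows amounts to interval subtraction. The sketch below is an illustrative Python implementation (names and interval representation are assumptions): each blocked window, e.g. a period in which transmitting would violate a GEO protection rule, is removed from a link's predicted access intervals before the intervals are handed to topology and routing applications.

```python
# Illustrative interval subtraction for interference avoidance.
# access and blocked are lists of (start, end) half-open intervals
# in simulation time; the representation is an assumption.

def subtract_windows(access, blocked):
    """Return the access intervals with every blocked window removed."""
    result = list(access)
    for block_start, block_end in blocked:
        trimmed = []
        for start, end in result:
            if block_end <= start or block_start >= end:
                trimmed.append((start, end))   # no overlap: keep whole
                continue
            if start < block_start:
                trimmed.append((start, block_start))  # part before window
            if block_end < end:
                trimmed.append((block_end, end))      # part after window
        result = trimmed
    return result
```

An access interval [0, 10) with a blocked window [3, 5) becomes [0, 3) and [5, 10); route management then simply never sees the link as available during the protected period.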

3.3 Implementation

The implementation of a TS-SDN operating system at Alphabet follows the general architecture depicted in Figure 3.1. Its central component is the Control Layer, which interfaces northbound with a set of SDN applications in the Application Layer and southbound with a set of Control Data Plane Interface (CDPI) agents running on the network devices in the Infrastructure Layer.

The Control Layer employs a network data model to expose an abstract, mutable, high-level view of the network to the SDN applications. The data model consists of a number of typed entities that describe both networking and physical attributes of the network nodes and links. Examples of supported network node attributes include:

Unique ID:

The network controller references a globally unique identifier string for each network node in the topology. The source of this identifier is project dependent and may be derived from a serial number or other existing identifier.

IP addressing:

The IPv4 or IPv6 addresses and subnets assigned to each logical network interface on the network node.

Physical addressing:

The MAC address of each network adapter.

Typed attributes:

Several logical network interface types are enumerated, such as wired, virtual/tunnel, or wireless. Supported attributes for wireless interfaces include a list of supported wireless channels, channel access methods, transmit power ranges, and a table relating received signal levels to IP-layer datarates based on the expected modulation & coding scheme that would be used (depending on the use of static vs. adaptive modulation and coding). Interference constraints may also be included.

Each network node may optionally reference a physical platform with its own set of attributes:

Unique ID:
The network controller references a globally unique identifier for each platform in its universe. The source of this identifier is project dependent and may be derived from a callsign, transponder ID, satellite catalog ID, etc.

Time dynamic coordinates:
These are attributes necessary to model the position and orientation of the platform over time. For a satellite ground station or teleport, this may simply be the GPS coordinates and reference frame of the facility. For a balloon or aircraft, it may contain its modeled trajectory and attitude along a flight path. And for a satellite, it may contain the latest Two-line Element (TLE) ephemeris and attitude for the controller to perform its own orbit propagation.

Aperture models:
These include antenna radiation patterns or models of free-space optics apertures, whether or not the antenna can steer, and any constraints on its steering (maximum or minimum azimuth/elevation angles, angular velocities, angular accelerations, sensitivity to sun outages, etc.) for mechanically or electrically steered beams.

Target Acquisition Info:
Additional information necessary to target the platform or its antennas, such as an aviation transponder identifier.
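The flavor of these typed entities can be suggested with a small sketch. This is hypothetical and heavily simplified; the class and field names are invented for illustration and do not reflect the actual production data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class WirelessInterface:
    name: str
    supported_channels: List[int]           # e.g. channel indices
    tx_power_range_dbm: Tuple[float, float]
    # Received signal level (dBm) -> achievable IP-layer datarate (Mbps),
    # reflecting the static or adaptive modulation & coding scheme in use.
    rssi_to_datarate: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class Platform:
    unique_id: str                          # e.g. callsign or satellite catalog ID
    tle: Optional[str] = None               # ephemeris for controller-side propagation
    aperture_models: List[str] = field(default_factory=list)

@dataclass
class NetworkNode:
    unique_id: str
    ip_addresses: List[str] = field(default_factory=list)
    mac_addresses: List[str] = field(default_factory=list)
    interfaces: List[WirelessInterface] = field(default_factory=list)
    platform: Optional[Platform] = None     # optional: a ground router needs no model

node = NetworkNode(unique_id="sat-0042",
                   ip_addresses=["2001:db8::42/64"],
                   platform=Platform(unique_id="NORAD-99999"))
print(node.platform is not None)  # True
```

The optional `platform` reference mirrors the point made below: temporospatial modeling is attached only to nodes that need it.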

A reference to a physical platform is optional because temporospatial modeling is not required for all nodes in a network. For example, consider a ground segment router or switch in an aerospace network. We may need to include this network node in the SDN controller’s model of the wired network topology and may want to control and program its forwarding information base; however, its exact cartographic coordinates are probably irrelevant to the operational control of an aerospace network.

For each link in the data model, the SDN controller provides the current link state (hypothetical candidate, pending installation, installed, failed, withdrawing, etc.), a collection of time intervals indicating predicted link accessibility over time, and time-dynamic quality metrics, such as expected data rate and bit-error rate.
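A minimal sketch of such a link entity, assuming the state names from the prose above and an invented representation for accessibility intervals:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

# State names follow the prose; everything else is an illustrative assumption.
class LinkState(Enum):
    HYPOTHETICAL_CANDIDATE = 1
    PENDING_INSTALLATION = 2
    INSTALLED = 3
    FAILED = 4
    WITHDRAWING = 5

@dataclass
class Link:
    src: str
    dst: str
    state: LinkState = LinkState.HYPOTHETICAL_CANDIDATE
    # Predicted accessibility as (start, end) times, in seconds since epoch.
    accessible_intervals: List[Tuple[float, float]] = field(default_factory=list)

    def accessible_at(self, t: float) -> bool:
        return any(start <= t < end for start, end in self.accessible_intervals)

link = Link("sat-1", "gs-7", accessible_intervals=[(100.0, 400.0), (5800.0, 6100.0)])
print(link.accessible_at(250.0), link.accessible_at(1000.0))  # True False
```

In the real model, each interval would also carry the predicted data rate and bit-error rate over its duration.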

Figure 3.2 shows an example of a network graph with different types of nodes and their candidate links. These links are accessible according to models but are not yet established in the network.

Figure 3.2. The network data model represents the nodes (vertices) and links (edges) in the topology and includes all accessible wired and wireless links. The data model is time-dynamic; each directional edge is associated with a set of time intervals of predicted accessibility with predicted link metrics throughout each accessible interval.

Figure 3.3. The network topology is annotated with required end-to-end packet connectivity and provisioned flow capacities.

In addition to describing the current state of the network and its properties, the model also allows SDN applications to express network configuration objectives, such as a request to establish end-to-end packet connectivity between a pair of network nodes or interfaces, or to provision and reserve a minimum, time-varying network capacity for transiting network flows. Figure 3.3 shows an example in which the data model is expressing that four nodes in the network need to establish end-to-end packet connectivity with certain minimum capacities to a gateway node.

Figure 3.4. A traffic engineered solution is created by finding a satisficing subgraph or spanning tree.

The network data model serves as a shared state as well as an API between SDN applications, managers, and services – each of which consumes and produces data of certain kinds. For example, one Control Layer service produces link quality data based on telemetry from vehicles, weather data, and physics models. Another job in the SDN Application Layer sets connectivity objectives as platforms enter and exit service regions. Together, this data specifies the input problem to be solved by a separate SDN App that is responsible for jointly optimizing the network topology and routing as shown in Figure 3.4.

In the initial graph, each vertex v ∈ V represents a network node, and each directional edge e ∈ E represents a network link that is accessible for some collection of time intervals extending into the future. Let each directional edge e_n ∈ E be a directional link whose source is a network interface n ∈ N, where N is the set of network interfaces on node v. Let t_n be the number of simultaneous targets supported by a given network interface n. This value may be non-uniform throughout the network; for example, a mechanically steered parabolic antenna on a satellite ground station or teleport may only be able to target a single satellite at a time; however, other types of wireless network interfaces and apertures may support multiple concurrent targets or associations. In the initial graph, it is possible that |E_n| > t_n, where E_n ⊆ E is the set of candidate edges originating at interface n. The problem of jointly optimizing the network topology and routing therefore maps to a non-uniform degree-bound spanning subgraph or spanning tree problem, which is a well-studied NP-hard problem with methods for approximation46. The number of network nodes and edges in all current and envisioned aerospace networks is sufficiently small (we assume < 100,000 nodes) to make this a scalable approach.
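One simple greedy heuristic for this degree-bounded selection problem can be sketched as follows. This is illustrative only, not the production algorithm; the edge utilities and interface names are invented, and directionality is simplified to undirected edge selection.

```python
def select_topology(candidate_edges, targets):
    """Greedy degree-bounded edge selection.

    candidate_edges: list of (utility, interface_a, interface_b) tuples.
    targets: dict mapping interface -> max simultaneous targets t_n.
    Returns chosen edges; each interface appears at most t_n times.
    """
    remaining = dict(targets)
    chosen = []
    # Consider the highest-utility candidate links first.
    for utility, a, b in sorted(candidate_edges, reverse=True):
        if remaining.get(a, 0) > 0 and remaining.get(b, 0) > 0:
            chosen.append((a, b))
            remaining[a] -= 1
            remaining[b] -= 1
    return chosen

# A parabolic ground antenna (t_n = 1) can keep only its best link; a
# phased-array interface (t_n = 2) keeps its two best.
edges = [(0.9, "gs-ant", "sat-a"), (0.7, "gs-ant", "sat-b"),
         (0.8, "array", "sat-a"), (0.6, "array", "sat-b"), (0.5, "array", "sat-c")]
print(select_topology(edges, {"gs-ant": 1, "array": 2,
                              "sat-a": 2, "sat-b": 2, "sat-c": 1}))
# → [('gs-ant', 'sat-a'), ('array', 'sat-a'), ('array', 'sat-b')]
```

A real implementation would additionally score candidate solutions against the provisioned-capacity objectives and plan across future time phases, as described below.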

The SDN application responsible for jointly optimizing the topology and routing in our implementation finds and maintains a satisficing subgraph over time in a manner that avoids disruption to network flows whenever possible. For example, consider a set of platforms (aircraft, satellites, ships, etc.) that are physically moving in and out of wireless link accessibility to a set of other platforms and user terminals. In this example, assume that the description of the predicted motion, and the atmospheric weather affecting signal propagation, are being revised at some rate, and so our TS-SDN controller would be refreshing the candidate link graph (Figure 3.2) at an even faster rate. Now consider the case in which each user terminal in this example is provisioned in the topology model for a certain amount of network capacity between it and some gateway node, which peers with the Internet, as shown in Figure 3.3. Our topology and routing service would find a satisficing, non-uniform, degree-bound spanning subgraph that optimizes the ability to satisfy the requests to provision capacity (Figure 3.4) for current and future phases of the network, based on current forecasting. It then manages the transition of the network through future phases, all the while reevaluating and adjusting its future plans in response to changes in the time-dynamic candidate link graph, as shown in Figure 3.5. This allows it to foresee a predictable degradation or outage to wireless links (due to signal fading, geometric obstruction, etc.) and to proactively schedule transitions in the network wireless topology and/or routing in order to maintain undisrupted transit for the provisioned network flows.

Figure 3.5. Our implementation includes a topology and routing service, which jointly optimizes the wireless network topology and routing while transitioning it through phases in time.

Other SDN applications in our implementation are much simpler and handle business or project-specific logic. For example, an SDN application may inhibit candidate link creation for nodes when their power is too low. Another SDN application may adjust attributes associated with the prioritization of provisioned network capacity between endpoints based on customized mission requirements.

Our experience with developing these and other SDN applications highlights the ease with which the TS-SDN operating system can be extended to include complex and powerful network control logic.

The abstract, high-level intents of SDN applications are continuously translated by the Control Layer to the low-level network node state information exchanged with CDPI agents in the Infrastructure Layer. The information exchanged consists of state change requests sent to the CDPI agents and state notifications sent back in the opposite direction. For example, a CDPI message may request that a rule be added or deleted from the Forwarding Information Base (FIB) on a specified node at a specified time. In traditional SDN systems, this exchange uses one of the existing CDPI protocols, such as OpenFlow.

In our TS-SDN, however, existing CDPI protocols turned out to be insufficient for two reasons. First, potentially intermittent connectivity between the Control Layer and the CDPI agents, and the need to synchronize state changes across affected nodes over aerospace links with potentially very long signal propagation delays, impose the requirement that state change requests be scheduled for a specified future time. Second, while existing CDPI protocols only support modification of forwarding rules in the Infrastructure Layer, the wireless and temporospatial features of TS-SDN require the control of a broader range of parameters, such as steerable beam tasking and cognitive radio control. Therefore, we have developed a new, OpenFlow-inspired CDPI protocol that facilitates fully synchronized control over topology, routing, radio parameters, and wireless link establishment.
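The shape of such a time-scheduled request might be sketched as follows. The message and field names here are hypothetical; the actual OpenFlow-inspired protocol is not specified in this text.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical operation types: FIB changes plus the broader wireless
# parameters (e.g. beam steering) that TS-SDN must control.
class Op(Enum):
    ADD_FIB_RULE = 1
    DELETE_FIB_RULE = 2
    STEER_BEAM = 3

@dataclass(frozen=True)
class ScheduledRequest:
    node_id: str
    op: Op
    enact_at: float   # future time (s) at which the agent applies the change
    payload: dict     # e.g. {"prefix": ..., "next_hop": ...} or beam angles

# Synchronize a route swap across two nodes at the same instant, despite
# long and uneven propagation delays to each of them.
t = 1_700_000_000.0
reqs = [
    ScheduledRequest("sat-12", Op.ADD_FIB_RULE, t,
                     {"prefix": "10.1.0.0/16", "next_hop": "sat-13"}),
    ScheduledRequest("sat-11", Op.DELETE_FIB_RULE, t,
                     {"prefix": "10.1.0.0/16", "next_hop": "sat-12"}),
]
print(all(r.enact_at == t for r in reqs))  # True
```

Scheduling the enactment time, rather than applying changes on receipt, is what tolerates intermittent control connectivity and long propagation delays.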

3.4 Conclusions

In this chapter, we described our contribution, Temporospatial SDN, which extends the SDN/SDWN paradigm by adding temporospatial information and high-fidelity wireless link modeling in the SDN topology – and adds delay and disruption tolerance to the southbound CDPI. We explained how TS-SDN enables SDN applications for aerospace network topology management, packet routing, and interference avoidance. Temporospatial SDN also enables powerful SDN applications that can jointly optimize the wireless network topology and routing. Yet the framework is flexible enough to allow such SDN applications to coexist with other, simpler SDN applications that handle project- or mission-specific business logic.

We also provided a high-level overview of the first known implementation of an SDWN or Temporospatial SDN, which is now used in production at Google and X, the Moonshot Factory, for some of its projects.

In the following two chapters, we explore the application of TS-SDN in two different types of aerospace networks (LEO satellite constellations and HAPS) and demonstrate the concrete benefits that it offers over existing solutions.

4 Use Case: LEO Constellations

Networking services through GEO relays are commonplace today. Service provider architectures based on GEO relays have many strong advantages over alternative relay orbits; however, there are also some weaknesses fundamental to GEO communications.

For network users, latency is a primary issue. GEO round-trip propagation delays of approximately half a second hamper interactive applications including voice calling, video chat, and online competitive gaming. For instance, ITU recommendations have used 150 milliseconds as an upper bound for mouth-to-ear latency, beyond which conversation quality degrades47. The one-way latency of a network path through a GEO satellite exceeds 250 milliseconds due to propagation alone.
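These propagation figures follow directly from GEO geometry; a quick sanity check (idealized: propagation delay only, ignoring processing, queueing, and coding):

```python
import math

C_KM_S = 299_792.458       # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786.0      # GEO altitude above the equator
EARTH_RADIUS_KM = 6_371.0  # mean Earth radius

# Best case: both endpoints at the sub-satellite point (nadir),
# so the one-way path is up plus down at the GEO altitude.
nadir_one_way = 2 * GEO_ALT_KM / C_KM_S

# Worst case: both endpoints near the edge of coverage; the slant range
# to the horizon is sqrt((Re + h)^2 - Re^2).
slant_km = math.sqrt((EARTH_RADIUS_KM + GEO_ALT_KM) ** 2 - EARTH_RADIUS_KM ** 2)
edge_one_way = 2 * slant_km / C_KM_S

print(f"one-way via GEO: {nadir_one_way * 1000:.0f}-{edge_one_way * 1000:.0f} ms")
# → one-way via GEO: 239-278 ms
```

So a single one-way traversal already sits near or above the 150 ms mouth-to-ear budget, and a round trip approaches half a second.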

LEO relays, with much lower altitudes, can reduce delays by an order of magnitude. OneWeb has been cited as providing services with expectations of no more than 50 milliseconds of latency48. Additionally, the smaller footprints of LEO antennas can reduce the need for large numbers of spot-beams and complex spot-beam switching – as well as simplifying the channel access protocols and bandwidth sharing mechanisms within a beam. For many types of network services and applications, there are compelling benefits to LEO relays or more complex networks with mixtures of LEO and possibly other GEO, MEO, or high-altitude airborne platforms.

Several LEO constellations have been proposed recently. Press articles have discussed the OneWeb system design, consisting of over 700 LEO spacecraft. Some articles also cite SpaceX constellation plans involving around 4,000 spacecraft49. Additionally, Facebook has made statements about its intentions to use LEO (and other) platforms to provide services to areas of the world without terrestrial infrastructure. A Boeing proposal discusses between 1,396 and 2,956 spacecraft at 1,200 kilometers50.

Existing LEO constellations in operation are much simpler. The legacy Iridium constellation, for instance, uses 66 active spacecraft at around 781 kilometers. Orbcomm uses 29 spacecraft at around 775 kilometers.

Increases in the number of spacecraft and the use of cross-links pose a challenge for route control in future large LEO constellations. The number of potential routes is greater, and many routing algorithms scale poorly. Additionally, legacy designs are based primarily on circuit-switching architectures and have a limited number of ground stations. There may be a much richer interconnection of future LEO constellations with supplemental relays in other orbits, with mixtures of ground stations and high-altitude aerial platforms, and other types of network nodes. The routing techniques employed for providing packet services through future large LEO constellations are a timely research topic as the system designs continue to advance.

This chapter discusses the challenges of employing traditional Internet routing techniques, using typical routing protocols like RIP, OSPF, IS-IS, EIGRP, OLSR, and others. In contrast, we discuss the advantages of Software Defined Networking (SDN) and specific Temporospatial SDN (TS-SDN) enhancements that can be used in the control plane instead of traditional, dynamic routing. The remainder of this section describes related work in this area, as there are applicable studies even prior to the 1990s, when earlier large LEO constellation proposals were topical. Subsequent sections discuss the application of SDN for LEO routing, Temporospatial SDN, and its benefits over dynamic routing.

4.0.1 Related Work

Fundamentally, many systems have been circuit switched in the space segment, with circuits pre-computed and controlled via the ground segment. This is appropriate for services such as Iridium voice and low-rate data channels, but packet switching on board relay spacecraft is much more desirable for a number of reasons, including:

(1) Forwarding without double hops into the ground segment

(2) Multiplexing traffic more efficiently than circuit switching

(3) Efficient support for IP multicast forwarding

(4) Providing IP-based Quality of Service forwarding

IP routing in the space segment has been studied extensively. Henderson proposed a geographic-based routing algorithm for LEO constellations and described challenges due to i) overlapping footprints, ii) changes to the ISL mesh in the polar region, and iii) routing across the seam (in a star constellation) between counter-rotating planes51.

Wood analyzed the use of several approaches, including tunneling across the space segment, network address translation, and using separate routing protocols in space and ground segments to isolate the different concerns52. However, forwarding based on Multi-Protocol Label Switching (MPLS), rather than IP routing across the space segment, was strongly recommended by Wood. MPLS switching can be done by the space segment using label stacks to indicate paths computed on the ground and applied to packets before ingress to the space segment. In some ways, SDN can operate similarly to MPLS by distributing flow table entries from the ground, per-flow, rather than attaching label stacks per-packet, and SDN can support MPLS forwarding, using flow-table rules that operate on the MPLS labels. In this light, the TS-SDN methods presented in this chapter could be viewed as an extension of this earlier work. TS-SDN takes advantage of the ability to predict, precompute, and orchestrate solutions to the distributed routing problems that Henderson outlined.

4.0.2 Dynamic Routing Protocol Issues for LEO Routing

LEO satellite constellations include different types of inter-satellite links (ISLs). The constellations are divided into separate rotating planes of spacecraft, with phasing within each plane close enough to make handoffs without loss of service, and phasing between planes intended to cover the globe. Intraplane ISLs are relatively stable through most of an orbit, as the spacecraft within a plane follow the same trajectory and have a stable separation. Interplane ISLs are more dynamic, since the separation between planes changes significantly between equatorial and polar regions of the orbit. In “Walker star” constellations, interplane ISLs crossing the “seam” between counter-rotating planes are even more dynamic. ISLs may be disabled selectively in the polar regions, where the orbital planes converge, in order to avoid interference. This is a vast simplification of LEO constellation geometry given the scope available in this chapter. Other references include detailed descriptions of typical geometries such as Walker star, Walker / rosette, and other constellation designs52.

Since the ground track of each plane in a LEO constellation only cuts a relatively narrow swath of terrestrial footprint, there may not always be a ground station site for terrestrial backhaul available via only intraplane ISLs. In this case, routing must take the more dynamic interplane ISLs. Additionally, there may be more optimal paths reachable via interplane ISLs depending on the traffic flows being supported.

Figure 4.1. Example polar orbiting LEO constellation, from the ns-2.35 manual, generated using SAVI3.

Dynamic routing protocols developed for the Internet are good at sensing which links are up, finding neighboring nodes, sharing routing information with neighbors, and computing the most desirable paths. This computation takes place in parallel on each router. Algorithms for analyzing routes across graphs, such as Dijkstra’s algorithm used in OSPF, can have inefficient (super-linear) scaling as the number of links grows and link state is ever changing. The network routes may never actually converge in practice. For large LEO networks, this is especially problematic, since the computational and memory burden on each in-space relay should be minimized. Additionally, the signalling (control plane) traffic needed to exchange link-state notifications and routing information can be wasteful of the ISL bandwidth.

Each dynamic routing protocol has a number of configuration options and settings that can be varied and tuned to adjust performance, stability, bandwidth usage, and other properties of the protocol operations. As an example, some adjustable properties of the Optimized Link State Routing (OLSR)53 protocol1 are summarized in Table 4.1.

Table 4.1. OLSR Configuration Tuning Parameters

Parameter | Default Value | Effects
Hello Interval | 2 seconds | Determines latency to discover neighbors and form routing relationships.
Refresh Interval | 2 seconds | Also determines latency to discover neighbors.
TC Interval | 5 seconds | Topology Control (TC) messages impact speed of reaction to link failures.
MID Interval | 5 seconds | Impacts (re)computation of routes when nodes use multiple interfaces.
HNA Interval | 5 seconds | Impacts (re)computation of routes for prefixes outside the OLSR domain.
Neighbor Hold Time | 3 × Refresh Interval | Trades off tolerance to packet loss and reaction time to link changes.
Other hold times | various | Trades off tolerance to packet loss and reaction to routing information changes.

Proper tuning of the dynamic routing protocol parameters depends on many factors, including the expected stability of links, the expected packet loss rates, the rate of handovers between spot beams or relays, and the numbers of relays, network interfaces, and user prefixes being routed. There are default values but not formulaic prescriptions for optimal settings. Network operators trying to use dynamic routing can tune parameters and measure performance under nominal and off-nominal conditions; but, this can be time-consuming and error-prone, plus different failure scenarios can vary in performance impact.

1OLSR is typically used as an ad-hoc routing protocol for MANET scenarios, but it can also be used in infrastructure-like scenarios, such as LEO constellations, with less of an ad-hoc nature.
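For concreteness, the aggressively “tuned” timer settings examined later in this chapter could be expressed in an olsrd-style configuration roughly as follows. This is an illustrative fragment only; the interface name is hypothetical, and the ns-3 OLSR model used in our simulations exposes analogous attributes rather than reading such a file.

```
# Illustrative olsrd.conf fragment; "isl0" is a hypothetical interface name.
# Emission intervals tuned to 100 ms and validity (hold) times to 300 ms,
# versus the multi-second defaults summarized in Table 4.1.
Interface "isl0"
{
    HelloInterval        0.1
    HelloValidityTime    0.3
    TcInterval           0.1
    TcValidityTime       0.3
    MidInterval          0.1
    MidValidityTime      0.3
    HnaInterval          0.1
    HnaValidityTime      0.3
}
```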

Additionally, the ideal configuration parameters might not be identical across the system and might differ across relay nodes, vary over time, or even vary per ISL within a node. For instance, along the seam, interplane signalling should be more frequent in order to quickly detect neighbor changes and react; however, intraplane and interplane signalling opposite to the seam does not require as frequent signalling, and timers could be tuned completely differently for these ISLs. Unfortunately, these settings in OLSR are generally made on a per-node basis (affecting all interfaces running OLSR on the node) rather than per-interface. The same is generally true of other dynamic routing protocols.

4.0.3 Temporospatial SDN for LEO Routing

SDN defines the Control-to-Dataplane Interface (CDPI), which is the control plane interface between the SDN controller and forwarding plane elements; but, it does not prescribe any standards or algorithms for the SDN Applications that control the forwarding decisions. For instance, the OpenFlow CDPI protocol is well defined and implemented by many switches and controller software packages; but, each SDN controller or application accessing a controller’s API includes its own logic for creating, updating, and deleting network routing paths.

The Temporospatial SDN (TS-SDN) concept was developed to capture the potential for SDN controllers to utilize knowledge of physics to make predictions about the future state of the lower-level network9. For instance, the state of intraplane, interplane, and ground station links is relatively easy to accurately predict using modeling and simulation tools such as STK43. Applications of TS-SDN relevant to LEO networks include:

Topology Management:
Since the entire set of potential links and paths available at any given time can be accurately predicted, optimal link sets can be selected, and unnecessary links (e.g. redundant cross-links that are not part of an optimal path) can be disabled. This can provide power savings, among other advantages.

Packet Routing:
Routes programmed via TS-SDN can be adjusted in advance of handovers or other events that would temporarily disrupt a path. Since the physical timing of link events is highly predictable, packet routes at the network layer can be updated in direct accordance with the predicted physical layer events. This can avoid user service impacts including dropped packets, reduced throughput, multimedia stream disruptions, etc.

Interference Avoidance:
Negative impacts to service due to interference in the spectrum can be predicted and avoided, thereby improving quality of service for users.

Leveraging TS-SDN for these features can be especially useful for large LEO constellations. For instance, other research demonstrated that the battery lifetime of Iridium spacecraft could be doubled simply by using an alternative routing algorithm54. With TS-SDN, the routing algorithm can be updated in centralized ground control systems, at any time, and the entire constellation benefits from improvements or changes – with no flight software or firmware updates necessary.

4.1 TS-SDN vs. Dynamic Routing

We used STK Networking8 to evaluate the impact of routing protocol issues on user applications over a simplified LEO constellation model with in-space routing. The topology model is sketched in Figure 4.2. User terminals are connected to relays in one plane and flow data over the LEO constellation, to and from the Internet, via a service provider ground station connected in another plane. For this demonstration, the constellation configuration is similar to Iridium55 in terms of the number of ISLs (2 intraplane and 2 interplane per spacecraft) and the simulated propagation delays. However, Iridium narrowband channels are not suitable for today’s Internet services, so we use a 10 Mbps user channel instead. In the simulation script, we have control over handover timing and overlap of spot beams in order to facilitate make-before-break operations.

Figure 4.2. Simplified Simulation Network Topology.

In our simulation scenario, probe packets are sent every 10 milliseconds in order to measure the network delays and detect losses. As a baseline, we ran a simulation covering several minutes, with connectivity dynamics disabled, and verified that there was no loss of the probe packets. This used the OLSR implementation in ns-3 in order to support dynamic routing in the space segment. In the baseline scenario, the OLSR configuration uses the default values from Table 4.1. We waited for a one-minute “settling time” after starting the simulation before taking measurements in order to avoid any startup issues while OLSR is forming adjacencies and establishing routes. In total, the baseline simulation lasts for ten minutes, comparable to an average Iridium relay view period. Over this time, an example single TCP connection across the network, using standard congestion control and ns-3 default settings, is able to achieve 4.068 Mbps average throughput.

Introducing link changes makes the impact of the dynamic routing protocol apparent. In additional simulation runs, we modify the baseline scenario to include handovers one-third and two-thirds of the way through the simulation. The first handover impacts the provider ground station, and the second impacts the user terminal. In order to support potential make-before-break handovers in the simulation, we assume the coverage footprints overlap well enough within an orbital plane to support 30 seconds of dual-coverage between neighboring relays within the same plane. The OLSR behavior during this period walks through a set of states. Other dynamic routing protocols, e.g. OSPF, go through a similar progression:

(1) Single Coverage - The OLSR instances between a ground node and relay have established a symmetric relationship and are forwarding for one another.

(2) Detecting Dual Coverage - The physical and link layer come up with a new relay as it enters view of the ground node. OLSR instances send Hello messages over the link and establish first asymmetric and then symmetric and forwarding relations with one another.

(3) Dual Coverage - The ground node has two possible uplinks that can be used, and the OLSR algorithms select an appropriate link based on packet destination address using the HNA information received. Traffic towards the ground node through the constellation can take multiple paths and utilize either downlink.

(4) Detecting Loss of Coverage - As the first relay leaves view, the physical and link layer outages may trigger immediate link state updates to OLSR, or the loss of a neighbor may be detected through timeouts. Link state triggers are much more timely and are assumed for this scenario. Reducing the time in this state is crucial, because uplink traffic may be incorrectly routed or dropped, and traffic will not reach the proper downlink until the relay’s link state change has propagated throughout the constellation.

(5) Single Coverage - At this point, the scenario looks similar to the beginning; however, the relay that the ground station has a symmetric forwarding OLSR relationship established with is changed.

With link changes added to the simulation, as described for a soft handover, there are 541 probe packets lost during the handover events. The losses are clustered in time corresponding to the handovers. This would result in temporary issues for realtime applications. Figure 4.3 shows the impact on a TCP connection time-sequence plot. The outage caused by the handover is significant enough to prevent the fast retransmission mechanism from being triggered, and multiple costly retransmission timeouts (RTOs) result.

For comparison, we also ran simulations with the OLSR emission timers tuned more aggressively to 100 milliseconds each – and with the hold timers tuned to 300 milliseconds. In this case, with soft handovers, there were only 30 probe packets lost by the test application, which represents a significant improvement from the 541 lost with the default settings; but, it’s still a significant burst loss.


Figure 4.3. A TCP time-sequence plot, generated using the tcptrace tool4, showing multiple costly retransmission timeouts (RTOs) resulting from the soft handover event.

In addition to the make-before-break soft handovers, we ran simulations with ideal, hard handovers – where link-down and link-up events on old and new links are simultaneous (i.e. with no time spent re-acquiring, authenticating, etc. at the physical or link layer). In a hard handover setting, there were 1543 probe packets lost – nearly triple the soft handover amount with default OLSR settings. For TCP performance, Figure 4.4 shows an additional RTO backoff experienced in the hard handoff scenario; where there were 3 RTOs following a soft handover, there were now 4 with the hard handovers using OLSR with default timer settings.


Figure 4.4. A TCP time-sequence plot, generated using the tcptrace tool4, showing that additional RTO events occur with the hard handovers.

With the more aggressive “tuned” OLSR settings, and hard handovers, there were only 73 probe packets lost. This is far less than with default settings; but, such bursts of packet loss are still going to detract from user experience. Furthermore, to achieve this improvement, OLSR goes from using about 400 bps of capacity for signalling (with default settings) up to 9.6 kbps (with tuned settings). OLSR bandwidth utilization is comparable for both hard and soft handover scenarios; and, in our simulations, it varied primarily due to the configuration settings in use.

Finally, we ran the same handover simulation with OLSR turned off and a simulated TS-SDN mechanism in place to coordinate time-triggered route changes across the constellation and ground segment. In this case, the problematic periods, in which the network is adjusting to the loss of the original links and converging to use of the new links, are not just minimized – they are eliminated entirely. The link-down event is never experienced, because it is predicted ahead of time, and route changes across the entire network are synchronized in advance to avoid using the link prior to its predicted down-time. In our simulation, this switch-over occurs midway through the period of dual coverage.
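The switch-over logic can be sketched as follows. This is an illustrative reconstruction, not the simulation code; the function names and the route-change tuple format are invented.

```python
def plan_switchover(old_link_down, new_link_up):
    """Pick the instant, midway through dual coverage, at which static routes
    flip from the old uplink to the new one (times in simulation seconds)."""
    if new_link_up >= old_link_down:
        raise ValueError("no dual-coverage overlap; hard handover required")
    return (new_link_up + old_link_down) / 2.0

def build_schedule(nodes, switch_time, old_next_hop, new_next_hop):
    # One synchronized route change per node, all enacted at the same instant,
    # so no node forwards toward a link after its predicted down-time.
    return [(switch_time, node, old_next_hop, new_next_hop) for node in nodes]

# 30 s of dual coverage: new relay in view at t=200 s, old relay lost at t=230 s.
t_switch = plan_switchover(old_link_down=230.0, new_link_up=200.0)
print(t_switch)  # 215.0
schedule = build_schedule(["sat-1", "sat-2", "gs-0"], t_switch,
                          "relay-old", "relay-new")
print(len(schedule))  # 3
```

Because the handover times are known from the orbital geometry, no link-state signalling is needed; the schedule is distributed in advance over the CDPI.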

To compare TS-SDN against the OLSR results, we ran a set of simulations in which the dynamic routing was replaced by time-triggered control of static routing table entries, with the timing based on the known handover times. In the case of make-before-break (soft) handovers, where coverage overlaps, TS-SDN achieves the same zero-loss performance as the baseline scenario; during the overlapping coverage, packets are flushed from the network down working links, and traffic fully transitions to the new links well in advance of the original link going away. With hard handovers, however, TS-SDN experiences 4 lost probe packets and 3 lost probe response packets, attributable to packets that were queued for transmission on links that went down at the time of the handover. The simulator network device model discards these packets rather than re-routing them on the link-down event; if the packets were re-routed, the losses could be zero even in the case of hard handovers. While the OLSR traffic was small in other scenarios, it should be noted that in the TS-SDN scenarios there is zero control plane traffic, so this undesirable overhead is minimized as well. In reality, there would be a small amount of traffic to transfer CDPI directives and maintain that connection; but, since it operates controller-to-switch, rather than peer-to-peer among all routers, the signalling overhead will be much smaller for TS-SDN.
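The time-triggered control of static routing-table entries described above can be sketched as a small scheduler that enacts pre-computed route changes at their planned times. The class and names below are illustrative assumptions for this sketch, not the implementation used in the simulations:

```python
import heapq

class TimeTriggeredRouteScheduler:
    """Sketch of TS-SDN style control: routing-table changes are enacted
    at pre-computed times rather than in reaction to link-down events."""

    def __init__(self):
        self._events = []  # min-heap of (time, node, dest, next_hop)

    def schedule(self, at, node, dest, next_hop):
        heapq.heappush(self._events, (at, node, dest, next_hop))

    def advance_to(self, now, tables):
        """Apply all changes scheduled at or before `now` to the
        per-node routing tables (dict: node -> {dest: next_hop})."""
        while self._events and self._events[0][0] <= now:
            _, node, dest, next_hop = heapq.heappop(self._events)
            tables.setdefault(node, {})[dest] = next_hop


# Handover from relay A to relay B: coverage overlaps during [100, 110] s,
# so the switch-over is scheduled midway through the overlap, at t = 105 s.
sched = TimeTriggeredRouteScheduler()
sched.schedule(105.0, "ground-1", "10.0.0.0/8", "relay-B")

tables = {"ground-1": {"10.0.0.0/8": "relay-A"}}
sched.advance_to(104.9, tables)
assert tables["ground-1"]["10.0.0.0/8"] == "relay-A"  # old link still in use
sched.advance_to(105.0, tables)
assert tables["ground-1"]["10.0.0.0/8"] == "relay-B"  # switched before link-down
```

Because every node applies its change at the scheduled instant, no node ever forwards onto a link after its predicted down-time.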

Table 4.2 summarizes simulation results across all of the example scenarios discussed in this chapter and clearly shows the impacts of default versus tuned OLSR parameters, and of soft versus hard handovers, in each case.

Table 4.2. Simulation Results Summary

Scenario                     Lost Probes   Lost Probe Responses
Baseline                     0             0
OLSR, Soft Handover          541           14
Tuned OLSR, Soft Handover    30            14
OLSR, Hard Handover          1543          671
Tuned OLSR, Hard Handover    73            21
TS-SDN, Soft Handover        0             0
TS-SDN, Hard Handover        4             3

In this discrete-event simulation, it is possible for the changes to be perfectly synchronized on the same nanosecond (the simulator has nanosecond granularity). In reality, this may not be the case, and systems may only be able to synchronize within microseconds, or worse, depending on the architecture and the availability and resolution of a common clock source (such as Global Positioning System time). Methods to deal with imperfect synchronization are important future work for TS-SDN research. During the period in which the network is not fully changed over to the new routes, there could be misrouted or dropped packets; but this period, being on the order of perhaps milliseconds, is still significantly better than the several seconds that even a tuned dynamic routing protocol might take to converge.
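A back-of-envelope bound makes the comparison concrete. Assuming (as a simplifying model, not a result from the simulations) that the network is in a mixed routing state for at most twice the clock skew, and a hypothetical traffic rate, the packets at risk can be estimated:

```python
import math

def worst_case_disruption(skew_s, pkt_rate_pps):
    """Upper bound on the mixed-state window and packets at risk when
    nodes enact a scheduled route change within +/- skew_s seconds of
    each other (illustrative model: inconsistency lasts at most 2*skew_s)."""
    window = 2.0 * skew_s
    return window, window * pkt_rate_pps

# Microsecond-scale sync (e.g., GPS-disciplined clocks) at 10,000 pkt/s:
window_us, at_risk_us = worst_case_disruption(50e-6, 10_000)

# Millisecond-scale sync is worse, but still far better than the several
# seconds a tuned dynamic routing protocol may need to converge:
window_ms, at_risk_ms = worst_case_disruption(5e-3, 10_000)

assert math.isclose(at_risk_us, 1.0)    # roughly one packet at risk
assert math.isclose(at_risk_ms, 100.0)  # vs. tens of thousands over seconds
```

Even with millisecond-level synchronization, the exposure window is orders of magnitude shorter than dynamic-routing convergence.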

4.2 Next Steps and Future Work

The results presented in this chapter are simplified in order to clearly show the impact of the routing protocol. In reality, there are a number of further complications, some of which we have studied in other, confidential simulations of commercial systems. For instance, in addition to handovers between relays, there are typically multiple spot beams per relay, and there will be handovers between spot beams even more frequently than handovers between relays. Depending on the communications system design, these spot beam handovers can have similar impact to handovers between relays. While not discussed further in this chapter, TS-SDN is capable of handling mobility between spot beams with no changes.

Additionally, more accurate modeling of ISL behavior in polar regions, and investigation of TS-SDN's capability to deal with it, would be relatively simple to explore. This would involve more complex mobility or physics models for the LEO relays, via STK/Networking's ability to interface with ns-3 via the Astrolink module8.

There are more complex satellite network architectures incorporating MEO, GEO, and other relay orbits in cooperation with LEO relays, and there may be even stronger advantages to using TS-SDN in these scenarios.

4.3 Conclusions

TS-SDN is a promising technique that blends concepts from legacy centralized spacecraft network control with the modern packet switching capabilities supported by SDN. TS-SDN avoids the probing, neighbor, and link-state detection heuristics that dynamic routing protocols rely upon. By time-sequencing the instantiation of flow table rules, TS-SDN supports fine-grained network control orchestrated system-wide across flight and ground segment nodes.

Incremental deployment is possible, so an entire network does not need to fully support TS-SDN, and upgrades can be facilitated. Because it builds upon existing mature technologies (physics models, SDN protocols and implementations, etc.), TS-SDN can be relatively easily introduced into ground networks today. This can be appropriate even when the space segment uses bent-pipe forwarding and does not perform onboard routing or switching. As constellations with onboard routing capabilities in the space segment are deployed, the TS-SDN techniques can also be applied to them, as long as their systems support SDN. Note that the size, weight, and power (SWaP) requirements on routing or switching payload hardware in the TS-SDN approach are less than those of other approaches. We found that, in general, commercial system-on-chip (SoC) hardware that supports MPLS tended to target enterprise applications and to have higher power requirements than SoCs that support the minimal set of TS-SDN forwarding requirements (L3 switching).

The simulation results and analysis in this chapter provide encouragement for further work applying TS-SDN to LEO constellations, and they show compelling advantages in comparison to traditional dynamic routing protocols.

5 Use Case: High Altitude Platforms

The use of unmanned aerial vehicles (UAVs) and other high-altitude platform systems (HAPS) has recently emerged as a potential solution for providing Internet access to rural populations in emerging economies. Titan Aerospace, which Google acquired in 2014, designed high-altitude, long-endurance UAVs for this purpose. A network of stratospheric balloons to extend LTE access is being developed in X, the moonshot factory at Alphabet (formerly Google[x]). Facebook has been developing a high-altitude, long-endurance UAV called Aquila after acquiring the British company Ascenta.

Other past and present efforts to develop airborne platforms that could be used for similar purposes include Airbus's Zephyr program, Thales Alenia Space's Stratobus, the Vulture Program's SolarEagle, NASA's Helios, and joint US DoD and DARPA projects such as the ISIS Airship, among others56.

This chapter explores the unique challenges presented by these networks and the constraints that they impose on the distribution layer network control mechanisms.

Various distributed approaches to routing in HAPS-powered networks are discussed and compared to a centralized approach based on Temporospatial SDN. Using network simulations, we show that the unfavorable trade-offs between convergence time and cost in terms of network overhead make a TS-SDN approach generally preferable; however, distributed routing protocols constitute a viable fallback mechanism to help nodes recover from the loss of connectivity to the TS-SDN controller.

Figure 5.1. While it is relatively common for Internet access networks to accommodate mobility in the Access Layer, high-altitude platform systems and LEO satellite networks must also accommodate mobility in their Distribution Layer.

5.1 Network Architecture

We use the hierarchical internetworking model57 to describe the high-level architecture of a network that provides 4G LTE mobile Internet service from airborne platforms.

As depicted in Figure 5.1, the Access Layer consists of an LTE base station (eNodeB) on each airborne platform, which connects directly to many end-user handsets, tablets, etc., referred to as user equipment, or UEs. In accordance with 3GPP standards, each eNodeB establishes a stateful connection (S1 interface) to an Evolved Packet Core (EPC) in the Core Layer of the network for data plane flows and control plane signaling. The EPC peers with the Internet, performs billing and subscriber management functions, and handles Access Layer mobility management.

The Distribution Layer connects the eNodeB to the Core Network infrastructure. In traditional LTE networks, this is accomplished via static, wired (typically fiber) infrastructure. Our Distribution Layer, however, consists of a time-dynamic, multi-hop wireless mesh/backhaul network, which is significantly more complex.

Because these networks are designed to provide Internet service to underserved populations of users, we cannot assume that terrestrial fiber backhaul infrastructure is available within the coverage area of a given HAP. As such, bent-pipe architectures, in which the HAP acts as an analog transponder for LTE service, cannot be used. Instead, we favor a packet-based, multi-hop wireless mesh network for our backhaul.

Terrestrial ground stations must be located in regions with sufficient terrestrial broadband backhaul connectivity and may be beyond line-of-sight (BLOS) for some HAPs. Because of the long distances between peers, to keep transmit power within safe and realizable limits, ground stations and HAPs must use mechanically or electronically steered beams (gimbaled parabolic antennas, phased arrays, or steerable free-space optics assemblies) to establish highly-directional links, even to nearby peers. Different platforms in the network may have varying constraints on the number of mesh links that they can form with peers. For example, a ground station may only be able to track a single HAP, while each HAP may be able to simultaneously steer multiple beams towards several other platforms. Different link types may be used within the same mesh, e.g., optical links between airborne platforms and radio links to ground stations. These links in aggregate form a mesh network that enables multi-hop wireless communication between non-LoS peers.

In some architectures, HAPs and ground stations may also establish links with platforms at even higher altitudes, such as satellites. The highly-directional nature of the Distribution Layer motivates the use of such links as an additional out-of-band control channel to bootstrap the network. That is, since narrow-beam link establishment requires both peers to know that they must point at each other at the same time, an out-of-band control channel is required to provide a disconnected node with its initial target acquisition information to bootstrap its connectivity in the narrow-beam mesh topology. This control channel may be a very low data rate and relatively expensive resource. In many systems, existing control and non-payload communication (CNPC) links may be suitable for this purpose.

5.2 Mesh & Backhaul Challenges

The use of highly-directional, narrow beams in a mobile, multi-hop, wireless Distribution Layer creates unique challenges. The mobility of network platforms and changing weather conditions impart time-dynamic properties to the availability and fidelity of wireless mesh/backhaul links. Additional complexity in control is caused by the fact that beam steering (whether mechanical or electrical) results in non-zero link acquisition times. Any reconfiguration of the Distribution Layer topology therefore risks disrupting connection-oriented network protocols that need to transit this time-dynamic network.

These challenges impart the following constraints:

Centralized Topology Control: Constructing a mesh topology that may include narrow, mechanically steered beams (with slew and link acquisition times on the order of at least several seconds) is prohibitively difficult using distributed protocols. A centralized controller is therefore required to coordinate link establishment between peers.

No Addressable Hierarchy: In traditional 3GPP standards-based radio access networks (RANs), the S1 interface between the EPC and eNodeB uses IP at layer 3, and both the EPC and each eNodeB could reasonably assume that the other has a static IP address. The hierarchy inherent in a traditional RAN's topology allows routers in the Distribution Layer to use routing prefixes in the forwarding information base (FIB), which helps to minimize the size of forwarding tables. In contrast, the lack of implicit hierarchy in a HAP-based Distribution Layer network means that the assignment of static addresses or address blocks to access points limits our ability to prefix and aggregate routes in the FIB of network routers and switches.

Disruption Avoidance: These networks are designed to provide Internet access, and most user traffic uses the TCP protocol. TCP throughput backs off exponentially in the face of packet loss, and prolonged disruptions (on the order of several seconds to minutes, depending on the configuration) can result in socket closures. These can cause users to experience call drops in interactive voice and video, cancelled file transfers, and other problems. It is therefore important that we avoid disruption to the S1 user plane connection between the eNodeB and EPC.

Traffic Engineering: These networks need to support sufficient capacity to carry all traffic between eNodeB and EPC, or between eNodeBs. For example, the routing algorithm should consider use of all available links, instead of always using only the shortest-path route.

5.3 Temporospatial SDN

As previously mentioned, the TS-SDN concept was developed to capture the potential for SDN controllers to utilize knowledge of physics and physical relationships to make predictions about the future state of the lower-level network9. Whether in multi-hop wireless mesh/backhaul networks consisting of HAPs and ground stations, or in low-earth orbiting (LEO) satellite networks10, the predictably evolving, time-dynamic nature of highly-directional steerable links presents a significant opportunity to benefit from the application of TS-SDN.

In chapter 4, we demonstrated the advantages of TS-SDN in a LEO satellite network with highly-predictable handover events. However, the motion of balloon- or UAV-based HAPS may be more chaotic and less predictable. The remainder of this chapter compares SDN-based routing to existing, distributed mesh routing protocols and explores how both types might be leveraged in HAPS.

5.4 SDN and Distributed Routing in HAPS

In SDN, the Control Layer gathers and integrates state information from network nodes to form a coherent, abstract view of the entire network, which is a highly centralized

Table 5.1. Comparison of Three Mesh Routing Protocols

                    AODV           DSDV           OLSR
Route Format        table driven   table driven   table driven
Route Discovery     reactive       proactive      proactive
Route Maintenance   periodic       periodic       periodic
Core Algorithm      searching      Bellman-Ford   Dijkstra

approach. As explained in prior chapters of this dissertation, a centralized controller is required to coordinate wireless link establishment in a network that may consist of narrow, highly-directional, mechanically-steered beams. However, while the establishment and maintenance of an optimal network topology is tightly coupled with the establishment and maintenance of an optimal packet routing or switching solution, the use of a centralized controller for SDN-style forwarding is not strictly required. An alternative is to embed sufficient intelligence into the network nodes themselves so that their behavior and communication with other nodes leads to the emergence of an adequate switching/routing solution across the whole network.

5.4.1 Mesh Routing Protocols

In the past two decades, many dynamic routing protocols have been proposed for wireless ad-hoc networks. However, we compare TS-SDN routing with only the three popular ones implemented in the ns-3 network simulator13, namely AODV58,59, DSDV60, and OLSR61. Table 5.1 shows a comparison of the three protocols from four perspectives: route format, route discovery mechanism, route maintenance mechanism, and the core algorithm for finding routes.

Figure 5.2. Visualization of the simulated network topology.

5.4.2 Simulation Scenarios

To study the performance of these three routing protocols, we defined a plausible simulation scenario in the ns-3 network simulator. The scenario consists of a network topology of 487 HAPs and 38 ground stations with no mobility (the relative motion of HAPs, such as high-altitude UAVs and balloons, tends to be slow, since the platforms are designed to be as geostationary as possible). In our scenario, each HAP may establish wireless mesh/backhaul links with up to 3 other nodes, while each ground station node may connect to at most 1 HAP. Ground stations are also connected via a wired, point-to-point packet connection to a network node representing the EPC and Core Layer of the network. Figure 5.2 provides a visualization of the simulated network topology.
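The per-node beam limits above (up to 3 mesh links per HAP, at most 1 per ground station) make topology construction a degree-constrained selection problem. A minimal greedy sketch, preferring shorter candidate links and respecting each node's beam count, illustrates the idea; this is an assumption for illustration, not the topology algorithm used in the simulation:

```python
def greedy_topology(candidate_links, degree_cap):
    """Greedily select links (shortest range first) without exceeding
    each node's beam count. candidate_links: [(range_km, a, b), ...];
    degree_cap: {node: max simultaneous links}."""
    degree = {}
    chosen = []
    for rng, a, b in sorted(candidate_links):
        if degree.get(a, 0) < degree_cap[a] and degree.get(b, 0) < degree_cap[b]:
            chosen.append((a, b))
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1
    return chosen

# Hypothetical mini-scenario: two HAPs (3 beams each), one ground station (1 beam).
caps = {"hap1": 3, "hap2": 3, "gs1": 1}
links = [(80, "gs1", "hap1"), (120, "gs1", "hap2"), (95, "hap1", "hap2")]
topo = greedy_topology(links, caps)
assert ("gs1", "hap1") in topo      # nearest HAP gets the ground station
assert ("gs1", "hap2") not in topo  # ground station limited to one link
```

A real controller would also weigh predicted link lifetime and slew time, but the degree constraint is the structural core.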

We consider the performance of the three protocols in two scenarios: startup and link failure. In the first scenario, the three routing protocols start from empty routing tables, and we study how long it takes them to find routes to the EPC node for all the HAPs. In the second scenario, we first let the routing protocols run for the same period of time, during which they can build routing tables. Then, we break some links and study how long it takes the routing protocols to recover from the link failures. In this scenario, therefore, the metric used to evaluate performance of the routing protocols is convergence time, i.e., the average period of time it takes the routing protocols to find routes to the EPC node for all the HAPs. As such, the definition of "convergence time" used here is different from the traditional one, which is the time until the routing protocols find routes for every node to all the other nodes. Here, we say the routing protocol converges for a single HAP if the EPC node receives at least one packet from that HAP.

We used the existing AODV, DSDV, and OLSR models implemented in ns-3. However, since these models are hard-coded for wireless broadcast networks (e.g., WiFi), we modified the source code to make them act like point-to-point links in our network model.

Since the convergence time of the two proactive protocols (i.e., DSDV and OLSR) is determined by the route update period, we set the update periods to small values to be comparable with the convergence time of AODV. Specifically, in DSDV, the periodic updating interval is set to 1 second and the settling time is set to zero. In OLSR, the Topology Control (TC), Multiple Interface Declaration (MID), Host and Network Association (HNA), and Hello message intervals are all set to 0.5 seconds. The trade-off is that a smaller route updating period results in more control overhead throughout the network.

In the startup simulation scenario, we let all the HAPs send packets to the EPC node at time 0 at a constant bit rate of 50 Kbps. To forward the packets to the EPC node, the routing protocols needed to first search for routes for all the HAPs. In the link failure scenario, we first let the routing protocols run for 800 msec, and then we break some links. Then, at 800 msec into the simulation, all HAPs begin sending packets to the EPC node at a constant bit rate of 50 Kbps. On the EPC node, we measure the time required to receive packets from every HAP in the topology.

Table 5.2. Convergence time at startup (in seconds)

Protocol   Convergence Time
AODV       0.0238 s
DSDV       0.0579 s
OLSR       2.1358 s

5.4.3 Simulation Results

Startup. For the startup scenario, Table 5.2 shows the average convergence time of the three protocols, from which we can see that AODV converges the fastest. One reason is that AODV only searches for routes in an on-demand manner; that is, in our simulation, all HAPs only search for the route to the EPC node. In contrast, in both DSDV and OLSR, each HAP must find routes to all other HAP, ground station, and EPC nodes. The second reason is that the convergence time of the two proactive routing protocols is determined by their topology update period. Shorter update periods can achieve faster convergence, but they can also generate more control overhead. Recall that in our simulation, we set the periodic update interval of DSDV to 1 second, and we set the TC updating interval of OLSR to 0.5 seconds.

Figure 5.3 illustrates the probability density function (PDF) of convergence times in the topology. We can see that, given the aforementioned simulation parameters, AODV achieves the shortest convergence time while OLSR requires the longest time to converge.

Figure 5.3. Probability density function of mesh routing protocol startup convergence times for all nodes to find a route to a designated EPC node.

Figure 5.4. Probability density function of mesh routing protocol convergence times in repairing a single route upon link failure.

Link Failure. Figure 5.4 shows the PDF of convergence time for the three protocols. We can see that AODV also achieves the shortest convergence time due to its on-demand feature. In addition, we can see that DSDV usually converges in less than 200 msec in our simulated topology. In contrast, when the TC interval of OLSR is set to 0.5 seconds, its route recovery time is much longer than that of the other two protocols.

Our simulation considered the use of a centralized controller for topology management, while relying on distributed mesh routing protocols for forwarding (absent TS-SDN). The results show a layer 3 disruption of the S1 interface between the airborne eNodeB and EPC of between 20 msec and 2 seconds, depending on the choice of mesh routing protocol. Note, however, that any reconfiguration of the Distribution Layer topology can lengthen disruptions due to mechanical beam steering and link establishment time. Nonetheless, the performance of distance-vector based mobile ad-hoc network protocols is quite good in our simulation. Why do we need something better?

5.4.4 Advantages of TS-SDN

Simulations that are run with a static, highly-directional topology that is provided a priori exclude the problem of constructing and maintaining that topology over time. While it is possible to construct and maintain the wireless topology using algorithms based on graph robustness or resiliency, TS-SDN can establish and maintain a wireless mesh/backhaul topology that is more robust and stable than one maintained solely by distributed mesh protocols, because it computes a routing solution with global knowledge of the present and predicted future state of the network. TS-SDN can also co-optimize routing around its own reconfiguration of the network topology. As such, TS-SDN can completely eliminate disruptions for any predictable changes in link accessibility by proactively rerouting provisioned network flows around reconfiguration of the Distribution Layer topology.

5.4.5 Reaction in TS-SDN

However, given that the simulation results clearly demonstrate that distributed routing algorithms can quickly react to find an alternate route to the SDN controller (and EPC) in the event of an unexpected link failure, we should leverage distributed routing algorithms and protocols when a TS-SDN node has its routes disrupted by an unpredictable link failure. The distributed routing algorithm can provide routing solutions for the network while the TS-SDN controller learns the current state of the network and re-programs it for a better solution. This hybrid approach can minimize the service interruption and also avoid using expensive, out-of-band CNPC or TT&C communications channels for network reconstitution.

Nominally, in a purely predictive mode, the TS-SDN topology and routing service described in chapter 3 would only include wireless links in the topology that are being utilized by one or more programmed routes. But to take advantage of this hybrid approach, which relies on distributed routing algorithms during unpredictable route disruptions, we may want additional wireless links installed in the topology for robustness. The flexibility afforded by the Temporospatial SDN framework is helpful here: an additional SDN Application, with a resource reservation priority lower than that of the primary topology and routing service, can be chartered to analyze the predicted accessibility and lifetime of other accessible wireless links and install them for added resilience, while considering competing project goals and costs, such as energy budgets.
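The hybrid mode-switching logic can be sketched as a small state machine: scheduled TS-SDN rules govern normally, and the node falls back to a distributed protocol only when a link fails at a time the controller did not predict. Names and structure below are illustrative assumptions:

```python
class HybridControlPlane:
    """Sketch of the hybrid approach: time-triggered TS-SDN rules
    normally; fall back to a distributed protocol when a link fails
    at a time the controller did not predict."""

    def __init__(self, predicted_downs):
        self.predicted = set(predicted_downs)   # (link, time) pairs known in advance
        self.mode = "ts-sdn"

    def on_link_down(self, link, t):
        if (link, t) not in self.predicted:
            self.mode = "distributed-fallback"  # e.g., enable AODV locally
        return self.mode

    def on_controller_reprogram(self):
        self.mode = "ts-sdn"                    # controller re-learned state
        return self.mode

cp = HybridControlPlane(predicted_downs={("hap1-hap2", 105.0)})
assert cp.on_link_down("hap1-hap2", 105.0) == "ts-sdn"  # predicted: no fallback
assert cp.on_link_down("hap1-gs1", 130.0) == "distributed-fallback"
assert cp.on_controller_reprogram() == "ts-sdn"
```

The fallback keeps the node reachable without resorting to the expensive out-of-band CNPC channel while the controller recomputes routes.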

5.5 Conclusions

This chapter described a variety of challenges posed by networks with nodes carried aboard HAPs. The challenges originate in the desire to manage scarce network resources in an optimal fashion in the face of a range of disruptive, but generally predictable, events, such as changes in vehicle position and orientation over time and the effect that these have on various relational constraints on wireless link accessibility between nodes in the network.

We discussed the constraints imposed in this environment on the network and its control mechanisms, and we proposed a solution based on extending SDN architecture to account for time-dynamic properties of the network.

Finally, we compared and contrasted our centralized SDN-based approach to network routing control with the distributed approach. We found that the unfavorable trade-offs between convergence time and cost in terms of network overhead make a TS-SDN approach generally preferable. However, distributed routing protocols constitute a viable fallback mechanism to improve network resiliency while the TS-SDN recovers from any unpredictable link failures in HAPS.

6 Suggested Future Research

This chapter suggests future work related to the operational control of aerospace communication networks and Temporospatial SDN. For suggested future work related to the modeling and simulation of aerospace networks, see chapter 2, section 2.4.

6.1 NBI and CDPI Standardization

In the SDN reference model, the Northbound Interface (NBI) refers to the set of APIs used by SDN Applications to obtain a holistic view of the network topology and to control the network through a high level of abstraction. The Control-to-Dataplane Interface (CDPI) refers to the set of APIs between the SDN Controller and the actual network elements in the network infrastructure. Open source SDN frameworks, such as OpenDaylight62 and the Open Network Operating System (ONOS)35, have their own proprietary NBIs; but the use of OpenFlow36 for the CDPI has become the de facto standard for contemporary SDN.

The definition and standardization of the CDPI for Temporospatial SDN (TS-SDN) represents an area for further work. Existing releases of OpenFlow are not suitable for TS-SDN because they do not provide a mechanism for the network controller to instruct a network node to modify its control state at some future time (an important feature in aerospace networks, which may exhibit significant and highly-variable control plane latency between ground and air/space segments). OpenFlow also lacks support for wireless control plane functions.

Google developed an entirely new CDPI for its initial implementation of TS-SDN; however, it would be worthwhile to explore a different approach that extends and incorporates OpenFlow as part of a hybrid TS-SDN CDPI. OpenFlow could be extended to support scheduled future enactment and withdrawal of flow rule modifications, and a separate, dedicated CDPI could be developed and standardized for the wireless and cognitive radio control plane functions. Leveraging OpenFlow in the TS-SDN CDPI would likely help to reduce migration and adoption costs in the ground segments of aerospace networks, which may already include OpenFlow-capable routers or switches from commercial network equipment vendors.
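The proposed extension can be sketched abstractly: a flow-rule modification carrying enactment and withdrawal times, which the switch applies locally by its own clock. The field names below are purely hypothetical; no OpenFlow release defines such a message:

```python
from dataclasses import dataclass

@dataclass
class ScheduledFlowMod:
    """Hypothetical scheduled flow-mod, as the text proposes extending
    OpenFlow; field names are illustrative, not part of any OpenFlow release."""
    match: str          # e.g. "dst=10.1.0.0/16"
    action: str         # e.g. "output:port3"
    enact_at: float     # future time at which the switch installs the rule
    withdraw_at: float  # future time at which the switch removes it

def active_rules(mods, now):
    """Rules a switch should have installed at time `now`."""
    return [m for m in mods if m.enact_at <= now < m.withdraw_at]

# A route change delivered long in advance, tolerating control-plane latency:
mods = [ScheduledFlowMod("dst=10.1.0.0/16", "output:port3", 100.0, 200.0),
        ScheduledFlowMod("dst=10.1.0.0/16", "output:port7", 200.0, 300.0)]
assert [m.action for m in active_rules(mods, 150.0)] == ["output:port3"]
assert [m.action for m in active_rules(mods, 200.0)] == ["output:port7"]
```

Because the schedule is carried in the message, the rule change lands on time even if the controller-to-switch path itself has seconds of latency.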

6.2 Network Functions Virtualization

Network Functions Virtualization (NFV) is an initiative to virtualize the network services that are now being carried out by proprietary, dedicated hardware. The OpenDataPlane (ODP) project is an open-source, cross-platform NFV framework providing application programming interfaces (APIs) for the networking data plane. Major providers of network packet processing silicon, including Cavium, Freescale, Marvell, and Texas Instruments, have added support for ODP APIs63. This has enabled Open vSwitch, a production-quality virtual switch distributed under the open source Apache 2.0 license, to abstract away the underlying hardware while still taking advantage of any hardware acceleration for packet processing operations64. Existing support for the popular OpenFlow SDN CDPI in Open vSwitch makes it relatively easy for networks with ODP-compliant hardware to adopt NFV and SDN.

Google and its parent company, Alphabet, recently announced their own edge node infrastructure for NFV and carrier network integration65. Alphabet's implementation of TS-SDN described in chapter 3 has a CDPI integration with this proprietary edge node infrastructure, which provides a highly-scalable platform for custom packet processing and tunnel encap/decap at the ingress from the packet core to the nodes comprising the aerospace network ground segment in the backhaul layer.

While much progress has been made on SDN and NFV for packet processing and forwarding functions, the field of Software Defined Wireless Networks (SDWN) is still emerging38, and the extension of NFV paradigms to wireless networking is an open area of research66. Development and standardization of a common hardware abstraction layer, analogous to ODP, for steerable beam tasking and cognitive radio control may be an interesting avenue for future research.

6.3 Mobile, 5G, and CORD

Those aerospace networks designed with the purpose of providing Internet access are required to implement or integrate with a third party's core network, which acts as a gateway to the Internet and often performs a variety of other functions, such as user authentication, access authorization, and mobility management. These functions are performed by an Evolved Packet Core (EPC) in the latest 3GPP-based mobile network architectures67. There are equivalent systems in the Central Office of fixed broadband network operators that provide residential Internet access via Digital Subscriber Line (DSL) or cable modems.

CORD (Central Office Re-architected as a Datacenter) is a relatively new project that combines NFV, SDN, and open hardware designs to bring datacenter software agility to the Telco Central Office68. CORD lets network operators manage their Central Offices using a declarative modeling language as the SDN Northbound Interface (NBI) to the Open Network Operating System (ONOS) for agile, real-time configuration of customer services35. Major service providers like AT&T, SK Telecom, Verizon, China Unicom, and NTT Communications are already supporting CORD69. But CORD's mission is not limited to mobile and fixed-broadband access services; it seeks to be a general-purpose platform for any type of control plane (SDN) and data plane (NFV) services.

As such, CORD is a compelling platform for the future development of an open Temporospatial SDN operating system. CORD includes the ONOS SDN operating system, an open-source Java implementation of SDN designed for multiple, pluggable Control-to-Dataplane Interfaces (CDPIs). The use of Java would ease integration of STK Components technology [19] for time-dynamic geometry, wireless communications, and signal propagation loss modeling. And while it is possible to run CORD in the Cloud, the CORD Pod reference implementation is designed to run on-premises in one or more racks at the network edge. The ability to operate a Temporospatial SDN entirely on-premises, and avoid dependencies on Internet access to cloud infrastructure, is a highly desirable attribute for the operational control of aerospace networks in the military and defense sectors.
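To make the declarative-NBI idea concrete, the sketch below models an operator request as plain data, connectivity between two endpoints with service constraints, leaving link selection and scheduling to the controller. This is a hypothetical Python illustration (the actual CORD/XOS modeling language and the ONOS intent APIs differ); names such as `FlowIntent` and `IntentStore` are invented for this example.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowIntent:
    """A declarative request submitted through the NBI: the operator states
    what service is wanted, not how to configure individual devices."""
    src: str                  # endpoint identifier, e.g. a ground station
    dst: str                  # endpoint identifier, e.g. an aerospace node
    min_bandwidth_mbps: float
    max_latency_ms: float


class IntentStore:
    """Minimal NBI endpoint: validates and records operator intents for the
    controller to compile into a topology and routing plan."""

    def __init__(self):
        self._intents = []

    def submit(self, intent: FlowIntent) -> None:
        if intent.min_bandwidth_mbps <= 0 or intent.max_latency_ms <= 0:
            raise ValueError("service constraints must be positive")
        self._intents.append(intent)

    def pending(self):
        return list(self._intents)
```

The design choice worth noting is that intents are immutable value objects: the controller can re-evaluate the same declarative inputs against a changing temporospatial topology without the operator restating them.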

6.4 Delay Tolerant Networking

Delay Tolerant Networking (DTN) and the associated Bundle Protocol allow packets to be stored and forwarded in transit through bundle agents on networks that lack continuous connectivity, such as deep space networks [70]. There are several different approaches to routing in DTNs [71]. Simplistic techniques, such as Epidemic routing, flood the network with control plane signaling. As an alternative, human network operators will sometimes manually schedule static routes in a DTN based on a mission plan and scheduled contacts [72]. Schedule-Aware Bundle Forwarding (also known as Contact Graph Routing) attempts to automate this by providing the bundle agents with a schedule of the communication "contacts" (future time intervals for scheduled wireless communication with a given peer) in order to reduce control plane overhead and increase total data return while at the same time reducing mission operations cost and risk. It takes advantage of the fact that space flight mission communication operations are planned in detail, so the communication routes between any pair of bundle agents in a DTN can be inferred from those plans. This alleviates the need to discover bundle agents via control plane signaling, which is in any case impractical given the signal propagation delays in deep space communications [72].
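The core of this idea can be illustrated with a small earliest-arrival search over a contact plan. This is a simplified sketch of Contact Graph Routing; production implementations (such as ION's) also track contact capacity, queueing delay, and route lists, all omitted here.

```python
import heapq


def earliest_arrival(contacts, src, dst, t0):
    """Earliest bundle arrival time at `dst` for a bundle created at `src`
    at time t0, given a contact plan. Each contact is a tuple
    (from_node, to_node, start, end, owlt), where owlt is the one-way
    light time (propagation delay) in seconds. Returns None if no route
    exists within the plan."""
    best = {src: t0}              # best known arrival time per node
    heap = [(t0, src)]            # Dijkstra-style frontier ordered by time
    while heap:
        t, node = heapq.heappop(heap)
        if node == dst:
            return t
        if t > best.get(node, float("inf")):
            continue              # stale heap entry
        for (u, v, start, end, owlt) in contacts:
            if u != node:
                continue
            depart = max(t, start)    # store the bundle until the contact opens
            if depart > end:
                continue              # contact closes before we can transmit
            arrive = depart + owlt
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(heap, (arrive, v))
    return None
```

For example, with contacts `[("A", "B", 10, 20, 2), ("B", "C", 25, 40, 3), ("A", "C", 100, 200, 1)]`, a bundle created at A at t=0 waits for the A-B contact, arrives at B at t=12, waits again for the B-C contact, and reaches C at t=28, well before the direct A-C contact even opens. The store-and-wait step (`max(t, start)`) is what distinguishes this from ordinary shortest-path routing.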

Schedule-Aware Bundle Forwarding does not prescribe how the wireless links (contacts) in the network topology are planned, yet it relies on their predetermination in order to inform distributed route calculation in a DTN. However, in Temporospatial SDN, the set of all possible wireless links over time, and the provisioned network flows between endpoints in the network, are represented in the same network graph and data model. This enables the co-optimization of a wireless network topology and routing plan that evolves over time. The modeled signal propagation delay time is also included in the network graph and data model. It should therefore be relatively straightforward to communicate the forwarding schedule to bundle agents via the Temporospatial SDN CDPI. This is a compelling area for future work, which would also include adding store-and-forward capabilities, and associated Bundle Agent configuration parameters, to the set of attributes associated with network nodes in the Temporospatial SDN data model. The TS-SDN control layer could also help to abstract away network management differences between DTN and non-DTN nodes in heterogeneous networks by maintaining a generic SDN North-Bound Interface (NBI) to SDN applications. Future work could also include standardization of the TS-SDN NBI and CDPI in collaboration with the Consultative Committee for Space Data Systems (CCSDS) [73].

Complete References

[1] Brian Barritt, Kul Bhasin, Wesley Eddy, and Seth Matthews. Unified approach to modeling & simulation of space communication networks and systems. In Systems Conference, 2010 4th Annual IEEE, pages 133–136. IEEE, 2010.

[2] OME Committee et al. Software Defined Networking: The new norm for networks. Open Networking Foundation, April 2012.

[3] Lloyd Wood. SaVi: Satellite constellation visualization. In First Annual CCSR Research Symposium (CRS 2011), Centre for Communication Systems Research, 2012.

[4] Shawn Ostermann. tcptrace - official homepage. http://www.tcptrace.org, 2000.

[5] Kul Bhasin and Jeffrey Hayden. Developing architectures and technologies for an evolvable NASA space communication infrastructure. In 22nd AIAA International Communications Satellite Systems Conference & Exhibit 2004 (ICSSC), page 3253, 2004.

[6] Wikipedia. NewSpace — Wikipedia, the free encyclopedia, 2017. [Online; accessed 5-July-2017].

[7] SpaceNews archive: Mega constellations. http://spacenews.com/tag/mega-constellations. [Online; accessed 4-June-2017].

[8] Brian J Barritt and Wesley M Eddy. Astrolink for modeling, simulation, and operation of aerospace communication networks. In 32nd AIAA International Communications Satellite Systems Conference, page 4438, 2014.

[9] Brian Barritt and Wesley Eddy. Temporospatial SDN for aerospace communications. In AIAA SPACE 2015 Conference and Exposition, page 4656, 2015.

[10] Brian Barritt and Wesley Eddy. SDN enhancements for LEO satellite networks. In 34th AIAA International Communications Satellite Systems Conference, page 5755, 2016.

[11] Brian Barritt, Tatiana Kichkaylo, Ketan Mandke, Adam Zalcman, and Victor Lin. Operating a UAV mesh & Internet backhaul network using Temporospatial SDN. In 2017 IEEE Aerospace Conference, 2017.

[12] J. Postel. Transmission Control Protocol. RFC 793 (Internet Standard), September 1981. Updated by RFCs 1122, 3168, 6093, 6528.

[13] George F Riley and Thomas R Henderson. The ns-3 network simulator. In Modeling and Tools for Network Simulation, pages 15–34. Springer Berlin Heidelberg, 2010.

[14] ns-3 statistics. https://www.nsnam.org/overview/statistics/. [Online; accessed 4-June-2017].

[15] Hajime Tazaki, Frédéric Urbani, and Thierry Turletti. DCE cradle: Simulate network protocols with real stacks for better realism. In Proceedings of the 6th International ICST Conference on Simulation Tools and Techniques, SimuTools ’13, pages 153–158, ICST, Brussels, Belgium, 2013. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).

[16] Riverbed. Riverbed Modeler | Datasheet. [Online; accessed 4-June-2017].

[17] Scalable Network Technologies. EXata | Datasheet. [Online; accessed 4-June-2017].

[18] Linda M McNeil and TS Kelso. Spatial Temporal Information Systems: An Ontological Approach Using STK®. CRC Press, 2013.

[19] Analytical Graphics, Inc. Systems Tool Kit (STK) | Datasheet. [Online; accessed 4-June-2017].

[20] AI Solutions. FreeFlyer software. https://ai-solutions.com/freeflyer/freeflyer, 2016. [Online; accessed 4-June-2017].

[21] Moriba Jah, Steven Hughes, Matthew Wilkins, and Tom Kelecy. The General Mission Analysis Tool (GMAT): A new resource for supporting debris orbit determination, tracking and analysis. In Fifth European Conference on Space Debris, volume 672, 2009.

[22] Attenuation by atmospheric gases. ITU P.676, September 2016.

[23] Propagation data and prediction methods required for the design of Earth-space telecommunication systems. ITU P.618, July 2015.

[24] Attenuation due to clouds and fog. ITU P.840, September 2013.

[25] Jack R Powell et al. Terrain Integrated Rough Earth Model (TIREM). ECAC Annapolis, MD, Rep. TN–83-002, 1983.

[26] Ronald Eichenlaub, Clark Valentine, Stephen Fast, and Sam Albarano. Fidelity at high speed: Wireless InSite® Real Time Module™. In Military Communications Conference, 2008. MILCOM 2008. IEEE, pages 1–7. IEEE, 2008.

[27] ns3::SpectrumPhy class reference. https://www.nsnam.org/doxygen/classns3_1_1_spectrum_phy.html. [Online; accessed 2-July-2017].

[28] Brian Barritt, Wesley Eddy, Seth Matthews, and Kul Bhasin. Integrated approach to architecting, modeling, and simulation of complex space communication networks. In SpaceOps 2010 Conference: Delivering on the Dream, Hosted by NASA Marshall Space Flight Center and Organized by AIAA, page 1941, 2010.

[29] Shaun Endres, Michael Griffith, Behnam Malakooti, Kul Bhasin, and Allen Holtz. Space based internet network emulation for deep space mission applications. In 22nd AIAA International Communications Satellite Systems Conference & Exhibit 2004 (ICSSC), page 3210, 2004.

[30] Esther Jennings, Richard Borgen, Sam Nguyen, John Segui, Tudor Stoenescu, Wang Shin-Ywan, Simon Woo, Brian Barritt, Christine Chevalier, and Wesley Eddy. Space Communication and Navigation (SCaN) network simulation tool development and its use cases. In AIAA Modeling and Simulation Technologies Conference, Chicago, IL, 2009.

[31] Esther H Jennings, John S Segui, and Simon Woo. MACHETE: Environment for space networking evaluation. In AIAA International Conference on Space Operations (SpaceOps), Huntsville, AL, 2010.

[32] gRPC | a high performance, open-source universal RPC framework. http://www. grpc.io/. [Online; accessed 2-July-2017].

[33] A. Sollaud. RTP Payload Format for the G.729.1 Audio Codec. RFC 4749 (Proposed Standard), October 2006. Updated by RFC 5459.

[34] Kar-Ming Cheung, Christian Ho, Anil Kantak, Charles Lee, Robert Tye, Edger Richards, Catherine Sham, Adam Schlesinger, and Brian Barritt. Wallops’ low elevation link analysis for the Constellation launch/ascent links. In Aerospace Conference, 2011 IEEE, pages 1–15. IEEE, 2011.

[35] Pankaj Berde, Matteo Gerola, Jonathan Hart, Yuta Higuchi, Masayoshi Kobayashi, Toshio Koide, Bob Lantz, Brian O’Connor, Pavlin Radoslavov, William Snow, et al. ONOS: Towards an open, distributed SDN OS. In Proceedings of the third workshop on Hot topics in software defined networking, pages 1–6. ACM, 2014.

[36] Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, and Jonathan Turner. OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review, 38(2):69–74, 2008.

[37] Amin Vahdat. SDN @ Google: Why and how, June 2013. Remarks and associated blog post by Amin Vahdat at the Open Networking Summit.

[38] S. Costanzo, L. Galluccio, G. Morabito, and S. Palazzo. Software Defined Wireless Networks: Unbridling SDNs. In Proceedings of the 2012 European Workshop on Software Defined Networking (EWSDN), pages 1–6, October 2012.

[39] Carlos J Bernardos, Antonio De La Oliva, Pablo Serrano, Albert Banchs, Luis M Contreras, Hao Jin, and Juan Carlos Zúñiga. An architecture for Software Defined Wireless Networking. IEEE Wireless Communications, 21(3):52–61, 2014.

[40] Osianoh Glenn Aliu, Senka Hadzic, Christian Niephaus, and Mathias Kretschmer. Towards adoption of software defined wireless backhaul networks. In Internet of Things. IoT Infrastructures: Second International Summit, IoT 360◦ 2015, Rome, Italy, October 27-29, 2015, Revised Selected Papers, Part II, pages 521–529. Springer, 2016.

[41] KA ThangaMurugan. Software Defined Networking (SDN) for aeronautical communications. In Digital Avionics Systems Conference (DASC), 2013 IEEE/AIAA 32nd, pages 1–20. IEEE, 2013.

[42] Gilbert J Clark, Wesley Eddy, Sandra K Johnson, David E Brooks, and James L Barnes. Architecture for cognitive networking within NASA’s future space communications infrastructure. In 34th AIAA International Communications Satellite Systems Conference, page 5725, 2016.

[43] Analytical Graphics, Inc. STK Components for Java. [Online; accessed 5-June-2017].

[44] Thomas R. Henderson and Randy H. Katz. TCP performance over satellite channels. Technical report, University of California at Berkeley, Electrical Engineering and Computer Science Department, Berkeley, CA, USA, 1999.

[45] Ward A Hanson. In their own words: OneWeb’s Internet constellation as described in their FCC Form 312 application. New Space, 4(3):153–167, 2016.

[46] Jochen Könemann. Approximation Algorithms for Minimum-Cost Low-degree Subgraphs. PhD thesis, Graduate School of Industrial Administration, Carnegie Mellon University, Schenley Park, Pittsburgh, PA 15213, U.S.A., 2003.

[47] One-way transmission time. ITU G.114, May 2003.

[48] Peter B. de Selding. SpaceNews: European governments boost satcom spending. http://spacenews.com/european-governments-boost-satcom-spending, January 2016. [Online; accessed 6-June-2017].

[49] Klint Finley. Internet by satellite is a space race with no winners. http://www.wired.com/2015/06/elon-musk-space-x-satellite-internet, June 2015. [Online; accessed 6-June-2017].

[50] Peter B. de Selding. SpaceNews: Boeing proposes big satellite constellations in V- and C-bands. http://spacenews.com/boeing-proposes-big-satellite-constellations-in-v-and-c-bands, June 2016. [Online; accessed 6-June-2017].

[51] Thomas R Henderson and Randy H Katz. On distributed, geographic-based packet routing for LEO satellite networks. In Global Telecommunications Conference, 2000. GLOBECOM’00. IEEE, volume 2, pages 1119–1123. IEEE, 2000.

[52] Lloyd Wood. Internetworking with Satellite Constellations. PhD thesis, University of Surrey, 2001.

[53] T. Clausen and P. Jacquet. Optimized Link State Routing Protocol (OLSR). RFC 3626 (Experimental), October 2003.

[54] Mohammed Hussein, Gentian Jakllari, and Béatrice Paillassa. On routing for extending satellite service life in LEO satellite networks. In Global Communications Conference (GLOBECOM), 2014 IEEE, pages 2832–2837. IEEE, 2014.

[55] S. Pratt, R. Raines, C. Fossa, and M. Temple. An operational and performance overview of the Iridium satellite system. IEEE Communications Surveys, 1999.

[56] Flavio Araripe d’Oliveira, Francisco Cristovão Lourenço de Melo, and Tessaleno Campos Devezas. High-altitude platforms - present situation and technology trends. Journal of Aerospace Technology and Management, 8(3):249–262, 2016.

[57] K. Raza and M. Turner. Cisco network topology and design. In Large-scale IP Network Solutions, CCIE professional development. Cisco Press, February 2002.

[58] C. Perkins, E. Belding-Royer, and S. Das. Ad hoc On-demand Distance Vector (AODV) routing. RFC 3561 (Experimental), July 2003.

[59] Ian D Chakeres and Elizabeth M Belding-Royer. AODV routing protocol implementation design. In Distributed Computing Systems Workshops, 2004. Proceedings. 24th International Conference on, pages 698–703. IEEE, 2004.

[60] Charles E Perkins and Pravin Bhagwat. Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers. In ACM SIGCOMM computer communication review, volume 24, pages 234–244. ACM, 1994.

[61] Philippe Jacquet, Paul Muhlethaler, Thomas Clausen, Anis Laouiti, Amir Qayyum, and Laurent Viennot. Optimized link state routing protocol for ad hoc networks. In Multi Topic Conference, 2001. IEEE INMIC 2001. Technology for the 21st Century. Proceedings. IEEE International, pages 62–68. IEEE, 2001.

[62] Jan Medved, Robert Varga, Anton Tkacik, and Ken Gray. OpenDaylight: Towards a model-driven SDN controller architecture. In A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on, pages 1–6. IEEE, 2014.

[63] OpenDataPlane. https://www.opendataplane.org. Accessed: 2017-05-28.

[64] Ben Pfaff, Justin Pettit, Teemu Koponen, Ethan J Jackson, Andy Zhou, Jarno Rajahalme, Jesse Gross, Alex Wang, Jonathan Stringer, Pravin Shelar, et al. The design and implementation of Open vSwitch. In Proceedings of the 12th USENIX Conference on Networked Systems Design and Implementation, pages 117–130. USENIX Association, 2015.

[65] Hassan Sipra, Ankur Jain, and Bok Knun Randolph Chung. Distributed software defined wireless packet core system, September 2016. US Patent App. 15/270,831.

[66] Panagiotis Demestichas, Andreas Georgakopoulos, Dimitrios Karvounas, Kostas Tsagkaris, Vera Stavroulaki, Jianmin Lu, Chunshan Xiong, and Jing Yao. 5G on the horizon: Key challenges for the radio-access network. IEEE Vehicular Technology Magazine, 8(3):47–53, 2013.

[67] Magnus Olsson, Stefan Rommer, Catherine Mulligan, Shabnam Sultana, and Lars Frid. SAE and the Evolved Packet Core: Driving the Mobile Broadband Revolution. Academic Press, 2009.

[68] Larry Peterson, Ali Al-Shabibi, Tom Anshutz, Scott Baker, Andy Bavier, Saurav Das, Jonathan Hart, Guru Parulkar, and William Snow. Central office re-architected as a data center. IEEE Communications Magazine, 54(10):96–101, 2016.

[69] Open CORD. http://opencord.org. [Online; accessed 3-June-2017].

[70] V. Cerf, S. Burleigh, A. Hooke, L. Torgerson, R. Durst, K. Scott, K. Fall, and H. Weiss. Delay-Tolerant Networking Architecture. RFC 4838 (Informational), April 2007.

[71] Evan PC Jones, Lily Li, Jakub K Schmidtke, and Paul AS Ward. Practical routing in Delay-Tolerant Networks. IEEE Transactions on Mobile Computing, 6(8), 2007.

[72] Scott Burleigh. Interplanetary Overlay Network: An implementation of the DTN Bundle Protocol. In Consumer Communications and Networking Conference, 2007. CCNC 2007. 4th IEEE, pages 222–226. IEEE, 2007.

[73] Consultative Committee for Space Data Systems. Consultative Committee for Space Data Systems — Wikipedia, the free encyclopedia, 2016. [Online; accessed 3-June-2017].