arXiv:1605.01944v2 [cs.NI] 15 May 2016

SDNsec: Forwarding Accountability for the SDN Data Plane

Takayuki Sasaki† ([email protected]), Christos Pappas∗, Taeho Lee∗, Torsten Hoefler∗, Adrian Perrig∗
†NEC Corporation    ∗ETH Zürich, {pappasch, kthlee, htor, aperrig}@inf.ethz.ch

ABSTRACT
SDN promises to make networks more flexible, programmable, and easier to manage. Inherent security problems in SDN today, however, pose a threat to the promised benefits. First, the network operator lacks tools to proactively ensure that policies will be followed or to reactively inspect the behavior of the network. Second, the distributed nature of state updates at the data plane leads to inconsistent network behavior during reconfigurations. Third, the large flow space makes the data plane susceptible to state exhaustion attacks.
This paper presents SDNsec, an SDN security extension that provides forwarding accountability for the SDN data plane. Forwarding rules are encoded in the packet, ensuring consistent network behavior during reconfigurations and limiting state exhaustion attacks due to table lookups. Symmetric-key cryptography is used to protect the integrity of the forwarding rules and to enforce them at each switch. A complementary path validation mechanism allows the controller to reactively examine the actual path taken by the packets. Furthermore, we present mechanisms for secure link-failure recovery and multicast/broadcast forwarding.

1. INTRODUCTION
Software Defined Networking (SDN) and its current realization – OpenFlow [1] – promise to revolutionize networking by centralizing network administration and eliminating vendor lock-in. Rapid service deployment, simplified network management, and reduced operational costs are some of the promised benefits. Furthermore, SDN serves as a building block to mitigate network security issues [2, 3, 4]. Ironically though, the security of SDN itself is a neglected issue, and SDN is rife with vulnerabilities at the data plane.
Compromised switches [5, 6, 7] can redirect traffic over unauthorized paths to perform eavesdropping, man-in-the-middle attacks, or to bypass security [8]. Furthermore, they can disrupt availability by launching state exhaustion attacks against other switches [8, 9, 10] or by simply dropping packets. In addition, next-generation botnets, consisting of compromised hosts and switches, could unleash an unprecedented firepower against their victims.
There are latent vulnerabilities in the SDN data plane that make these attacks feasible. The first problem lies in the adversary model for the SDN data plane: all network devices are trusted to correctly follow the specified network policies. Thus, the data plane lacks accountability mechanisms to verify that forwarding rules are correctly applied. Specifically, it does not provide guarantees that the policies will be followed (enforcement) nor proof that policies have not been violated (validation). Once one or more switches get compromised, forwarding policies can be violated without getting caught by other switches or the controller.
Another problem is the lack of consistency guarantees when the forwarding plane is reconfigured [11]. During reconfigurations, packets can follow paths that do not comply with the policy, leading to isolation violations in multitenant environments or link flooding. This is an inherent problem in distributed systems, because a new policy is correctly applied only after all affected switches have been updated. However, an attacker can exploit the problem by forcing reconfigurations through a compromised switch.
Our goal is to build an SDN security extension which ensures that the operator's policies are correctly applied at the data plane through forwarding accountability mechanisms. That is, the extension should provide a means for operators to enforce network paths, ensure consistent policy updates, and reactively inspect how traffic has been forwarded.
There are only a few proposals dealing with SDN data-plane security. A recent security analysis of OpenFlow [9] proposes simple patch solutions (rate limiting, event filtering, and packet dropping) to counter resource exhaustion attacks. SANE [12], a pre-SDN era proposal, proposes a security architecture to protect enterprise networks from malicious switches. However, it lacks a validation mechanism to ensure that a path was indeed followed; failure recovery is pushed to the end hosts. Another class of proposals examines policy violations by checking certain network invariants; checks can be performed in real time during network reconfigurations [3, 13, 14] or by explicitly requesting the state of the data plane [15].
Contributions. This paper proposes an SDN security extension, SDNsec, to achieve forwarding accountability for the SDN data plane. Consistent updates, path enforcement, and path validation are achieved through additional information carried in the packets. Cryptographic markings computed by the controller and verified by the switches construct a path enforcement mechanism; and cryptographic markings computed by the switches and verified by the controller construct a path validation mechanism. Furthermore, we describe mechanisms for secure failure recovery. Finally, we implement the SDNsec data plane on software switches and show that state exhaustion attacks are confined to the edge of the network.
2. PROBLEM DESCRIPTION
We consider a typical SDN network with a forwarding plane that implements the operator's network policies through a logically centralized controller. Network policies of the operator dictate which flows are authorized to access the network and which paths are authorized to forward traffic for the corresponding flows.
Our goal is to design an extension that makes a best-effort attempt to enforce network policies at the forwarding plane, and to detect and inform the controller in case of policy violations.

2.1 Adversary Model
The goal of the attacker is to subvert the network policies of the operator (e.g., by forwarding traffic over unauthorized paths) or to disrupt the communication between end hosts. To this end, we consider the following attacks:
Path deviation. A switch causes packets of a flow to be forwarded over a path that has not been authorized for the specific flow. This attack can take the following forms (Figure 1):
• Path detour. A switch redirects a packet to deviate from the original path, but later the packet returns to the correct next-hop downstream switch.
• Path forging. A switch redirects a packet to deviate from the original path, but the packet does not return to a downstream switch of the original path.
• Path shortcut. A switch redirects a packet and skips other switches on the path; the packet is forwarded only by a subset of the intended switches.
Packet replay. A switch replays packet(s) to flood a host or another switch.
Denial-of-Service. We consider state exhaustion attacks against switches, which disrupt communication of end hosts.

Figure 1: Forms of path deviation attacks that do not follow the authorized path from S to D (path detour, path forging, and path shortcut).

We consider an adversary that can compromise infrastructure components and hosts, and can exploit protocol vulnerabilities. Furthermore, compromised components are allowed to collude.
We do not consider payload modification attempts by switches, as hosts do not trust the network and use end-to-end integrity checks to detect any unauthorized changes. In addition, controller security is out of the scope of this paper, since our goal is to enforce the controller policies at the forwarding plane.

2.2 Assumptions
We make the following assumptions:
• Cryptographic primitives are secure, i.e., hash functions cannot be inverted, signatures cannot be forged, and encryptions cannot be broken.
• The communication channel between the controller and benign switches is secure (e.g., TLS can be used, as in OpenFlow [1]).
• End hosts are authenticated to the controller and cannot spoof their identity (e.g., port-based Network Access Control can be used [16]).

3. OVERVIEW
In SDNsec, the controller computes network paths and the corresponding forwarding information. The switches at the edge of the network receive this forwarding information over a secure channel and embed it into packets that enter the network. Switches at the core of the network forward packets according to the forwarding information carried in the packets; and the last switch on the path removes the embedded information before forwarding the packet to the destination. Figure 2 shows the network model for SDNsec. We stress that end hosts do not perform any additional functionality (e.g., communicate with the controller), i.e., the network stack of the hosts is unmodified.
We describe and justify our main design decisions and present an overview of the control and data plane.

3.1 Central Ideas
We identify three main problems that undermine network policies in today's SDN networks and describe our corresponding design decisions.
Consistent Updates. In SDN, the distributed nature of updating the forwarding plane can cause inconsistencies among switches. Specifically, a new policy is correctly applied only after all affected switches have been reconfigured; however, during state changes the forwarding behavior may be ill-defined. Although solutions have been proposed to counter this problem [17, 18], they require coordination between the controller and all the involved switches in order to perform the updates.
In SDNsec, packets encode the forwarding information for the intended path. This approach guarantees that once a packet enters the network, the path to be followed is fixed and cannot change under normal operation (i.e., without link failures). Hence, a packet cannot encounter a mixture of old and new forwarding policies, leading to inconsistent network behavior. Forwarding tables exist only at the entry and exit points of the network, simplifying network reconfiguration: only the edge of the network must be updated and coordination among all forwarding devices is not needed.
The packet overhead we have to pay for this approach provides additional benefits: guaranteed loop freedom, since we eliminate asynchronous updates; and minimum state requirements for switches, since forwarding tables are not needed in most of the switches (see Section 3.3). The lack of forwarding tables confines the threat of state exhaustion attacks.
Path Enforcement. In SDN, the controller cannot obtain guarantees that the forwarding policies will be followed, since the forwarding plane lacks enforcement mechanisms. Ideally, when a switch forwards packets out of the wrong port, the next-hop switch detects the violation and drops the packet.
We incorporate a security mechanism that protects the integrity of the forwarding information in order to detect deviations from the intended path and drop the traffic. However, this mechanism by itself is insufficient to protect from replaying forwarding information that has been authorized for other flows.
Path Validation. In SDN, the controller has no knowledge of the actual path that a packet has taken due to the lack of path validation mechanisms.
We design a reactive security mechanism that checks if the intended path was followed. The combination of path enforcement and path validation provides protection against strong colluding adversaries.

Figure 2: The SDNsec network model: the ingress and egress switches store forwarding tables; and the controller has a shared secret (K0, ..., Kn) with every switch (S0, ..., Sn) at the data plane.

3.2 Controller
The controller consists of two main components: a path computation component (PCC) and a path validation component (PVC). Furthermore, the controller generates and shares a secret key with every switch at the data plane; the shared key is communicated over the secure communication channel between them.

3.2.1 Path Computation Component
The PCC computes the forwarding information for paths that are authorized for communication. Specifically, for each flow that is generated, a path is computed. We do not impose restrictions on the flow specification; for interoperability with existing deployments, we adopt the 13-tuple flow specification of OpenFlow [19].
The computed forwarding information for a flow is embedded in every packet of the flow. For each switch on the path, the PCC calculates the egress interface that the packet should be forwarded on (we assume a unique numbering assignment for the ports of a switch). Hence, the ordered list of interfaces specifies the end-to-end path that the packets should follow. Furthermore, each flow and its corresponding path is associated with an expiration time (ExpTime) and a flow identifier (FlowID). The expiration time denotes the time at which the flow becomes invalid, and the flow identifier is used to optimize flow monitoring in the network (Section 4.4).
Furthermore, the forwarding information contains cryptographic primitives that realize path enforcement. Each forwarding entry (FE(Si)) for switch Si contains a Message Authentication Code (MAC) that is computed over the egress interface of the switch (egr(Si)), the flow information (ExpTime and FlowID), and the forwarding entry of the previous switch (FE(Si-1)); the MAC is computed with the shared key (Ki) between the controller and the corresponding switch on the path. Equation 1 and Figure 2 illustrate how the forwarding information is computed recursively for switch Si (for 1 ≤ i ≤ n).

    B = FlowID || ExpTime
    FE(Si) = egr(Si) || MAC(Si)                                  (1)
    MAC(Si) = MAC_Ki( egr(Si) || FE(Si-1) || B )

Furthermore, a forwarding entry for switch S0 is inserted into the packet to be used by S1 for correct verification of its own forwarding information; FE(S0) is not used by the first-hop switch and is computed as follows: FE(S0) = B.
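As a minimal illustration of the recursive computation in Equation 1 (not part of the paper's implementation), the following Python sketch builds the chained forwarding entries on the controller side. It uses truncated HMAC-SHA256 from the standard library as a stand-in for the paper's AES-based CBC-MAC, and the field widths of Section 4.1 (1-byte egress interface, 7-byte MAC, 3-byte FlowID, 4-byte ExpTime); the function names are illustrative.

    # Controller-side sketch of the recursive forwarding-entry computation (Equation 1).
    # Assumption: truncated HMAC-SHA256 stands in for the paper's AES-based CBC-MAC.
    import hmac, hashlib

    MAC_LEN = 7  # bytes per FE MAC (Section 4.1)

    def mac(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()[:MAC_LEN]

    def compute_fes(keys, egress_ifs, flow_id: int, exp_time: int):
        """keys[i] is the key K_i shared with switch S_i; egress_ifs[i] is egr(S_i)."""
        b = flow_id.to_bytes(3, "big") + exp_time.to_bytes(4, "big")   # B = FlowID || ExpTime
        fes = [b]                                                      # FE(S_0) = B
        for i in range(1, len(keys)):
            egr = egress_ifs[i].to_bytes(1, "big")
            m = mac(keys[i], egr + fes[i - 1] + b)    # MAC(S_i) over egr(S_i) || FE(S_i-1) || B
            fes.append(egr + m)                       # FE(S_i) = egr(S_i) || MAC(S_i)
        return fes

    # Example: a path with an ingress switch S_0 and three downstream switches.
    keys = [b"K0" * 8, b"K1" * 8, b"K2" * 8, b"K3" * 8]
    fes = compute_fes(keys, egress_ifs=[0, 2, 5, 1], flow_id=42, exp_time=1_700_000_000)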

3.2.2 Path Validation Component
The PVC is a reactive security mechanism that provides feedback/information about the path that a packet has taken. The controller can then detect attacks that have bypassed path enforcement and reconfigure the network accordingly. Path validation is achieved through two mechanisms: a path validation field in the packet and flow monitoring.
Each switch embeds a proof in every packet that it has indeed forwarded the packet. Hence, the collective proof from all on-path switches forms a trace for the path that the packet has taken. The controller can instruct any switch to report packet headers and thus inspect the path that was taken.
The path validation field of a switch (PVF(Si)) contains a MAC that is computed over the PVF of the previous switch (PVF(Si-1)), flow related information (FlowID), and a sequence number (SeqNo). The SeqNo is used to construct mutable information per packet, ensuring different PVF values for different packets; this detects replay attacks of the PVFs. The MAC is computed with the shared key between the switch and the controller (for ease of exposition, the MAC of the PVF is computed with the same key as the MAC of the FE; in a real deployment, these two keys would be different). Equation 2 shows how the PVF is computed:

    C = FlowID || SeqNo
    PVF(S0) = MAC_K0( C )                                        (2)
    PVF(Si) = MAC_Ki( PVF(Si-1) || C ),  1 ≤ i ≤ n

Given the FlowID and PVF in the packet header, the controller can detect path deviations. The controller knows the path for the given flow, and thus the keys of the switches on the path. Thus, the controller can recompute the correct value for the PVF and compare it with the reported one. However, this mechanism cannot detect dishonest switches that do not report all packet headers when requested.
Monitoring and flow statistics are additional mechanisms to detect false reporting (monitoring is an essential tool for other crucial tasks as well, e.g., traffic engineering). The controller can instruct arbitrary switches to monitor specific flows and obtain their packet counters. Inconsistent packet reports indicate potential misbehavior and further investigation is required. For instance, if all switches after a certain point on the path report a lower packet count, then packets were possibly dropped. However, if only a switch in the middle of the path reports fewer packets, it indicates a dishonest report. The controller combines flow monitoring with the PVF in the packet headers to detect policy violations.
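The following sketch illustrates the PVF chain of Equation 2 and the controller-side recomputation, under the same assumptions as the previous sketch (truncated HMAC-SHA256 stand-in, 8-byte PVF, 3-byte FlowID, 3-byte SeqNo); the helper names are illustrative, not the paper's API.

    # Sketch of the per-switch PVF update and the controller-side check (Equation 2).
    import hmac, hashlib

    PVF_LEN = 8  # bytes (Section 4.1)

    def pvf_mac(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()[:PVF_LEN]

    def update_pvf(prev_pvf: bytes, key: bytes, flow_id: int, seq_no: int) -> bytes:
        """Executed by switch S_i: PVF(S_i) = MAC_Ki(PVF(S_i-1) || C)."""
        c = flow_id.to_bytes(3, "big") + seq_no.to_bytes(3, "big")
        return pvf_mac(key, prev_pvf + c)

    def expected_pvf(keys, flow_id: int, seq_no: int) -> bytes:
        """Controller-side recomputation over the authorized path (keys K_0..K_n)."""
        c = flow_id.to_bytes(3, "big") + seq_no.to_bytes(3, "big")
        pvf = pvf_mac(keys[0], c)                    # PVF(S_0) = MAC_K0(C)
        for key in keys[1:]:
            pvf = pvf_mac(key, pvf + c)              # PVF(S_i) = MAC_Ki(PVF(S_i-1) || C)
        return pvf

    # A deviation is flagged when the reported PVF differs from the expected one.
    keys = [b"K0" * 8, b"K1" * 8, b"K2" * 8]
    pvf = pvf_mac(keys[0], (42).to_bytes(3, "big") + (7).to_bytes(3, "big"))  # ingress switch
    for k in keys[1:]:
        pvf = update_pvf(pvf, k, flow_id=42, seq_no=7)                        # on-path switches
    assert pvf == expected_pvf(keys, flow_id=42, seq_no=7)                    # controller check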

3.3 Data Plane
The data plane of SDNsec consists of edge and core switches (Figure 2). Edge switches operate at the edge of the network and serve as the entry and exit points to the network. Core switches operate in the middle of the network and forward packets based on the forwarding information in the packets.

3.3.1 Edge Switches
Edge switches are directly connected to network hosts and perform different operations when acting as an entry point (ingress switch) and when acting as an exit point (egress switch). Edge switches, as opposed to core switches, have flow tables in order to forward packets.
Ingress Switch. An ingress switch receives packets from source hosts and uses a forwarding table to look up the list of forwarding entries for a specific flow. In case of a lookup failure, the switch consults the controller and obtains the corresponding forwarding information. Next, the switch creates a packet header and inscribes the forwarding information in it. Furthermore, for every packet of a flow, the switch inscribes a sequence number to enable replay detection of the PVF. Finally, the switch inscribes PVF(S0), and forwards the packet to the next switch.
Egress Switch. An egress switch receives packets from a core switch and forwards them to the destination. To forward a packet, the egress switch uses a forwarding table in the same way as the ingress switch.
Having a forwarding table at the egress switch is a design decision that limits the size of forwarding tables at ingress switches. It allows rule aggregation at ingress switches at the granularity of an egress switch. Without a forwarding table at the egress switch, a separate flow rule for every egress port of an egress switch would be needed. The egress switch has the egress interface encoded in its FE, but it does not consider it when forwarding the packet; the FE is still used to verify the correct operation of the previous hop.
Upon packet reception, the switch removes the additional packet header and forwards the packet to the destination. If requested, it reports the packet header, together with its PVF, to the controller.

3.3.2 Core Switches
Core switches operate in the middle of the network and perform minimal operations per packet. They verify the integrity of their corresponding forwarding entry and forward the packet out of the specified interface. In case of a verification failure, they drop the packet and notify the controller.
Furthermore, each core switch stores a list of failover paths that are used in case of a link failure (Section 4.2) and keeps state only for multicast/broadcast traffic (Section 4.3) and flow monitoring (Section 4.4).
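The per-packet steps of a core switch can be summarized with the following sketch, which reuses the mac() and update_pvf() helpers from the sketches above. The packet representation is hypothetical; the paper's own core switch is a DPDK-based implementation in C (Section 6).

    # Sketch of core-switch processing (Section 3.3.2): verify the FE, extend the PVF,
    # and forward on the encoded egress interface. Illustrative structure only.
    def core_switch_process(pkt, key: bytes):
        i = pkt["fe_ptr"]                         # which FE this switch must examine
        b = pkt["flow_id"].to_bytes(3, "big") + pkt["exp_time"].to_bytes(4, "big")
        egr, fe_mac = pkt["fes"][i][:1], pkt["fes"][i][1:]
        # Path enforcement: recompute MAC(S_i) over egr(S_i) || FE(S_i-1) || B.
        if fe_mac != mac(key, egr + pkt["fes"][i - 1] + b):
            return None                           # verification failure: drop and notify controller
        # Path validation: extend the PVF chain with this switch's key.
        pkt["pvf"] = update_pvf(pkt["pvf"], key, pkt["flow_id"], pkt["seq_no"])
        pkt["fe_ptr"] += 1                        # the next-hop switch examines the next FE
        return int.from_bytes(egr, "big")         # egress interface to forward on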

4. DETAILS
First, we present the SDNsec packet header. Then, we describe link-failure recovery, multicast/broadcast forwarding, and monitoring.

4.1 SDNsec Packet Header
The packet header (Figure 3) encodes the forwarding information (Equation 1), the PVF (Equation 2), and additional information that enables the switches to parse the header (e.g., a pointer to the correct forwarding entry). We present the packet-header fields categorized by their use.

Figure 3: SDNsec packet header for unicast traffic (Packet Type, Do Not Detour flag, Link Failure Counter, FE Ptr, Expiration Time, Flow ID, Egress Switch ID, Sequence Number, Path Validation Field, and one forwarding entry – egress interface and MAC – per switch, followed by the L3 payload).

4.1.1 Fields for Forwarding and Path Enforcement
• Packet Type (PktType): PktType indicates whether the packet is a multicast/broadcast or a unicast packet. A single bit is used as a boolean flag to indicate the packet type.
• FE Ptr: A pointer that points to the FE that a switch on the path must examine. During packet processing, each switch increments the pointer so that the next-hop switch examines the correct FE. One byte is allocated for the FE Ptr, which means that SDNsec can support up to 255 switches for a single path. This upper bound does not raise practical considerations even for large topologies, since the network diameter is typically much shorter.
• Expiration Time (ExpTime): ExpTime indicates the time after which the flow becomes invalid. Switches discard packets with expired forwarding information. ExpTime is expressed at the granularity of one second, and the four bytes can express up to 136 years.
• Forwarding Entry (FE): A FE for switch Si consists of the egress interface of switch Si (egr(Si)) and the MAC (MAC(Si)) that protects the integrity of the partial path that leads up to switch Si. One byte is used for egr(Si), allowing each switch to have up to 255 interfaces; and 7 bytes are used for MAC(Si). In Section 5.1, we justify why a 7-byte MAC is sufficient to ensure path integrity.

4.1.2 Fields for Path Validation
• Path Validation Field (PVF): Each switch that forwards the packet inserts a cryptographic marking on the PVF according to Equation 2, and the controller uses the PVF for path validation. SDNsec reserves 8 bytes for the PVF, and in Section 5.1, we justify that 8 bytes provide sufficient protection against attacks.
• Sequence Number (SeqNo): The ingress switch inserts a monotonically increasing packet counter in every packet it forwards. Specifically, a separate counter is kept for every flow entry at the ingress switch. The SeqNo is used to randomize the PVF and to detect replay attacks against the path validation mechanism, in which a malicious switch replays valid PVFs to validate a rogue path. The 24-bit sequence number can identify more than 16 million unique packets for a given flow. For the average packet size of 850 bytes in data centers [20], the 24 bits suffice for a flow size of 13 GB; Benson et al. report that the maximum flow size is less than 100 MB for the 10 data centers studied [21]. Hence, it is highly unlikely that the sequence number wraps around. Even if the sequence number wraps around, under normal operation the same values would appear a few times, whereas in an attack scenario typically a high repetition rate of certain values would be observed.
• Flow ID (FlowID): FlowID is an integer that uniquely identifies a flow. FlowIDs are used to index flow information, enabling SDNsec entities (controller and switches) to efficiently search for flow information; the 3 bytes can index over 16 million flows. The active flows in four data centers, as observed from 7 switches in the network, do not exceed 100,000 [21].

4.1.3 Fields for Link-Failure Recovery
• Link Failure Counter (LFC): LFC indicates the number of failed links that a packet has encountered throughout its journey towards the destination. SDNsec reserves 6 bits for the LFC, which means that up to 63 link failures can be supported (see Section 4.2).
• Egress Switch ID (EgressID): The EgressID identifies the egress switch of a packet. Although the FEs in the packet dictate the sequence of switches that a packet traverses, the core switches on the path cannot determine the egress switch (except for the penultimate core switch) from the FEs. However, the egress switch information is necessary when a core switch suffers a link failure and needs to determine an alternate path to the egress switch. To this end, the SDNsec header contains the EgressID. With 2 bytes, it is possible to uniquely identify 65,536 switches, which is sufficient even for large data centers.
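The field widths listed above add up to the fixed 22-byte unicast header reported in Section 6.2, followed by one 8-byte FE per switch. The sketch below packs such a header; the exact ordering of the three bit-fields within the first byte is an assumption based on Figure 3, and the function name is illustrative.

    # Sketch of the fixed 22-byte unicast header followed by 8-byte FEs (Section 4.1).
    import struct

    def pack_header(pkt_type, do_not_detour, lfc, fe_ptr, exp_time,
                    flow_id, egress_id, seq_no, pvf: bytes, fes):
        flags = (pkt_type & 1) << 7 | (do_not_detour & 1) << 6 | (lfc & 0x3F)
        fixed = struct.pack("!BBI", flags, fe_ptr, exp_time)        # 1 + 1 + 4 = 6 bytes
        fixed += flow_id.to_bytes(3, "big")                         # 3 bytes
        fixed += struct.pack("!H", egress_id)                       # 2 bytes
        fixed += seq_no.to_bytes(3, "big")                          # 3 bytes
        fixed += pvf                                                # 8 bytes -> 22 bytes total
        return fixed + b"".join(fes)                                # plus 8 bytes per FE

    hdr = pack_header(0, 0, 0, 1, 1_700_000_000, 42, 7, 1, b"\x00" * 8, [b"\x00" * 8] * 3)
    assert len(hdr) == 22 + 3 * 8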
4.2 Link-Failure Recovery
The design decision that packets encode the forwarding information for the intended path makes link-failure recovery challenging: the intended path for packets that are already in the network is no longer valid. Dropping all ill-fated packets does not compromise the security guarantees, but degrades network availability until the controller reconfigures the network.
We design a temporary solution to account for the ill-fated packets until a new path is specified at the corresponding ingress switches or until the failure is fixed. Furthermore, the temporary solution must satisfy the three requirements for SDNsec. First, it must ensure update consistency, i.e., only one temporary policy must be used on one packet for one link failure. Second, it must provide path enforcement, i.e., deviations from the intended temporary path should lead to packet dropping by a benign switch. Third, it must enable path validation, i.e., the controller must be able to verify the path – including the switches of the temporary policy – that a packet has taken.
Our recovery mechanism uses a failover path. A failover path is a temporary path that detours around the failed link and leads to the same egress switch as the original path. The forwarding information of the failover path is encoded in the packet as described in Equation 1. That is, the failover path contains the list of egress interfaces of the switches that are on the detour path; the integrity of the list is protected with MACs that are computed with the corresponding keys of these switches. When a link failure is detected, the switch inserts the appropriate pre-computed failover path into the packet and forwards the packet to the appropriate next hop, as specified by the failover path. Each switch on the failover path updates the PVF as it would do for a normal path. Since the failover path is constructed identically to the original path, the forwarding procedure (Section 3.3) needs only minor modifications (Section 4.2.1).
This solution satisfies the mentioned requirements. First, update consistency is satisfied since the forwarding information of the failover path is encoded in the SDNsec header. Second, the authenticated forwarding information provides path enforcement. Third, the controller can perform path validation – including the failover path – with minor changes.
One shortcoming of the recovery mechanism is the requirement to store state at core switches for the pre-computed failover paths. To balance the tradeoff between fine-grained control and state requirements, core switches store per-egress-switch failover paths. Alternative solutions could store per-flow or per-link failover paths. Per-flow failover paths provide very fine-grained control, since the operator can exactly specify the path for a flow in case of a failure. However, core switches would have to store failover paths for every flow they serve. Per-link failover paths minimize the state requirements, but provide minimal control to the operator (per-link failover paths would detour the ill-fated packets to the next-hop switch of the original path, but over another temporary path). Furthermore, a path to the egress switch might exist, even if a path around the failed link to the next-hop switch does not.
Storing per-egress-switch failover paths may not satisfy the strict isolation requirements for certain flows. For example, the failover path to the egress switch may traverse an area of the network that should be avoided for specific flows. To this end, we define a do-not-detour flag. If set, the switch drops the packet instead of using the failover path. In other words, the flag indicates whether security or availability prevails in the face of a link failure. Note that failover paths are temporary fixes to increase availability, while the controller computes a permanent solution to respond to the failure.

Figure 4: Modifications to the SDNsec packet header for link-failure recovery; additional and modified fields are highlighted (the expiration time is replaced, the FailoverPathID and a new SeqNo are appended below the original flow information, and the original FEs are replaced by the FEs of the failover path).

4.2.1 Forwarding with Failover Paths
Packet Header. Figure 4 shows how a switch changes the packet header of an ill-fated packet when a failover path is used. The FEs of the original path are replaced with those of the failover path. Furthermore, the switch changes the expiration time field to ExpTimeFailoverPath and appends the information of the failover path (i.e., FailoverPathID, SeqNo) below that of the original path. Hence, the packet contains the flow information of the original and the failover paths, followed by the FEs of the failover path.
Then, the switch resets the FE Ptr to one so that the next-hop switch on the failover path can correctly determine the FE that it needs to examine.
Lastly, the switch increments the LFC by one to indicate that a link failure has occurred. The LFC field counts the number of failover paths that a packet has taken and enables multiple link failures to be handled without additional complexity.
Forwarding Procedure. Three changes are made to the forwarding procedure to accommodate link failures. First, since additional forwarding information is inserted into the packet if there is a detour, a switch identifies the correct FE by computing the following byte offset from the beginning of the SDNsec packet header: 6 + (LFC + 2) * 8 + FEPtr * 8 bytes. Second, when computing the PVF, the switch uses Equation 3 if there is a detour. The input C is determined by looking at the FlowID field of the most recent forwarding information, which is identified by taking the byte offset of 6 + LFC * 8 bytes.

    C = FailoverPathID || SeqNo                                  (3)
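The two offsets above follow directly from the header layout (6 fixed bytes, then 8-byte blocks of flow information and PVF, then 8-byte FEs). A small sketch, with illustrative function names:

    # Sketch of the byte-offset arithmetic used in the failover forwarding procedure.
    def fe_offset(lfc: int, fe_ptr: int) -> int:
        """Offset of the FE a switch must examine: 6 + (LFC + 2) * 8 + FEPtr * 8."""
        return 6 + (lfc + 2) * 8 + fe_ptr * 8

    def current_flow_info_offset(lfc: int) -> int:
        """Offset of the most recent flow information block: 6 + LFC * 8."""
        return 6 + lfc * 8

    # Without a detour (LFC = 0), the first FE sits right after the 22-byte header.
    assert fe_offset(lfc=0, fe_ptr=0) == 22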

4.2.2 Path Validation
Path validation accounts for the switches on the original path and the failover path. The controller obtains the switches of the path that the packet should have traversed by referring to the FlowID field(s) of the forwarding information in the header. Then, using Equation 2 for the original path and Equation 3 for the failover path(s), the controller computes the expected PVF value and compares it with the PVF value in the packet header.

4.3 Multicast/Broadcast
We describe our design for multicast/broadcast forwarding that adheres to the three requirements for SDNsec (update consistency, path enforcement, and path validation). For simplicity, we refer to multicast/broadcast forwarding as multicast.
A strawman solution for multicast is to leverage unicast forwarding: the ingress switch replicates each packet of a multicast group and uses the unicast forwarding mechanism to send it to every egress switch that is on the path of a receiving host. This approach comes with two benefits: all three requirements are satisfied; and the unicast forwarding mechanism can be used without modifications. However, this solution is inefficient with respect to bandwidth overhead.
An alternative approach to implement multicast is to encode the multicast tree in the packet. Bloom filters can be used to efficiently encode the links of the tree [22]. For each link, the switch checks if the Bloom filter returns a positive answer and forwards the packet along the corresponding links. However, the false positives of Bloom filters become a limitation: loops can be formed; and more importantly, forwarding a packet to an incorrect switch violates network isolation.
We thus adopt a stateful multicast distribution tree to forward multicast traffic. To implement forwarding along the specified tree, the forwarding decisions are stored in forwarding tables at switches. A multicast tree is represented by a two-tuple: an integer that identifies the tree (TreeID) and an expiration time (ExpTime) that indicates when the tree becomes invalid.
The controller computes a multicast tree and assigns it a unique TreeID. Then, it sends to each switch on the tree the two-tuple and the list of egress interfaces. Upon receiving a multicast packet, the ingress switch determines the correct multicast tree (based on the packet's information) and inserts the TreeID, ExpTime, and a sequence number (SeqNo) in the packet (Figure 5). Each core switch that receives a multicast packet looks up the forwarding information based on the TreeID in the packet and forwards it according to the list of specified interfaces.

Figure 5: SDNsec packet header for multicast traffic (Packet Type, ExpTime, TreeID, SeqNo, PVF, and FailOverPathID, followed by the L3 payload).

The main challenge with this stateful approach is policy consistency, i.e., ensuring that a packet is not forwarded by two different versions of a multicast tree. To this end, we require that a tree is never updated; instead, a new tree is created. However, this alone is not sufficient to guarantee drop freedom: if the ingress switch forwards packets of a newly created tree while core switches are being updated, then the packets with the new TreeID may get dropped by core switches. To solve the problem, we add a safeguard when switches are updated with a new multicast tree: an ingress switch is not allowed to use the new tree, i.e., insert the TreeID into incoming packets, until all other switches on the tree (core and egress switches) have been updated with the new tree information. Ingress switches can use the new tree only after an explicit notification by the controller.
Path enforcement is implemented implicitly, since only switches on the multicast tree learn the two-tuple information of the tree; packets with unknown TreeIDs are dropped. Hence, if a malicious switch incorrectly forwards a packet to an incorrect next-hop switch, then that switch will drop the packet. Tampering with the TreeID in the packet is detected through path validation.
The path validation information for multicast is similar to unicast forwarding. Each switch on the tree computes a MAC for the PVF using its shared key with the controller. The only difference is that the TreeID, instead of the FlowID, becomes an input to the MAC (Equation 4).

    C = TreeID || SeqNo                                          (4)
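The stateful multicast handling and the update safeguard described above can be pictured with the following sketch; the data structure and method names are illustrative, not the paper's implementation.

    # Sketch of multicast state (Section 4.3): core switches keep a TreeID table, and an
    # ingress switch uses a new tree only after an explicit controller notification.
    class MulticastState:
        def __init__(self):
            self.trees = {}         # TreeID -> (exp_time, [egress interfaces])
            self.usable = set()     # TreeIDs cleared for use (relevant at ingress switches)

        def install_tree(self, tree_id, exp_time, egress_ifs):
            self.trees[tree_id] = (exp_time, list(egress_ifs))  # trees are never updated in place

        def activate(self, tree_id):
            self.usable.add(tree_id)                            # controller notification

        def forward(self, tree_id, now):
            entry = self.trees.get(tree_id)
            if entry is None or now > entry[0]:
                return []                                       # unknown or expired tree: drop
            return entry[1]                                     # replicate on these interfaces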
4.4 Monitoring
Network monitoring is an essential tool for traffic engineering and security auditing. For instance, network operators can steer traffic away from traffic hot spots or identify switches that drop packets.
In SDNsec, monitoring is performed at the granularity of a flow, similar to OpenFlow. Switches maintain a monitoring table that stores packet counters for the flows that they serve. Specifically, ingress switches have flow tables to look up the FEs, hence, an additional field is required for packet counters. Core switches need an additional data structure to accommodate flow statistics.
Designing monitoring for the core network is based on two principles. First, to prevent state exhaustion attacks, the controller instructs switches explicitly which flows they should monitor. Since switches do not monitor all flows, an attacker cannot generate flows randomly to exhaust the monitoring table. Second, to minimize the impact of monitoring on forwarding performance, we use an exact-match lookup table: the FlowID in the packet header serves as the key to the entry. Avoiding more heavyweight lookups (e.g., longest prefix matching) that require multiple memory accesses and often linear search operations (e.g., flow-table lookups in software switches) mitigates attacks that target the computational complexity of the lookup procedure.
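A minimal sketch of the controller-mandated, exact-match monitoring table described above (illustrative structure only):

    # Sketch of flow monitoring at a core switch (Section 4.4): only FlowIDs installed
    # by the controller are counted, and the lookup is a single exact match.
    class MonitoringTable:
        def __init__(self):
            self.counters = {}                    # FlowID -> packet count

        def install(self, flow_id):
            self.counters[flow_id] = 0            # the controller decides what is monitored

        def count(self, flow_id):
            if flow_id in self.counters:          # unmonitored flows add no state
                self.counters[flow_id] += 1

        def report(self):
            return dict(self.counters)            # returned to the controller on request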
5. SECURITY ANALYSIS
We start by justifying our design choice of short MACs, and then we describe how SDNsec protects from the attacks described in Section 2.1.

5.1 On the Length of MACs
The path enforcement and path validation mechanisms require MAC computations and verifications at every switch. We argue that the length of the MACs – 7 bytes for FEs and 8 bytes for the PVF – is sufficient to provide the security guarantees we seek.
The main idea is that the secret keys used by other switches are not known to the attacker, which means that an attacker can at best randomly generate MACs without a way to check their validity. Consequently, the attacker would have to inject an immense amount of traffic even for a single valid FE (2^56 attempts are required). Furthermore, to forge FEs for n hops requires 2^(56*n) attempts, which becomes computationally infeasible even for n = 2. Hence, such traffic injection with incorrect MACs is easily detectable.
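A quick back-of-the-envelope check of the numbers above (a 56-bit MAC is guessed with probability 2^-56 per attempt):

    # Expected number of injected packets to forge MACs of the given lengths.
    attempts_one_fe = 2 ** 56            # ~7.2e16 attempts for a single valid 7-byte FE MAC
    attempts_two_hops = 2 ** (56 * 2)    # ~5.2e33 attempts to forge FEs on two hops
    print(f"{attempts_one_fe:.1e}", f"{attempts_two_hops:.1e}")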
5.2 Path Deviation Attacks
Path deviation attacks – in which packets follow a path not authorized by the controller – can take different forms, as described in Section 2.1.
The security properties of chained MACs with respect to path validation have been formalized and verified for a decentralized setting [23]. The centralized control in SDN simplifies key management, since the controller sets up the shared symmetric keys with the switches; sophisticated key-establishment protocols are not needed. However, an important difference is that we consider the ingress and egress switch – not the hosts – as the source and destination, respectively. In Section 7, we discuss the security implications of this decision.
Path enforcement is the first line of defense against path deviation attacks. It prevents path forging and path detours from a malicious switch that generates forged FEs. The next benign switch on the path will drop the packet due to a MAC verification failure. However, a more sophisticated attacker can replay forwarding information of other paths that it is part of, but which are not authorized for the diverted flow.
Path validation is the second line of defense against path deviation attacks. Since each switch is inscribing a MAC value in the packet, the packet carries information about the presence or absence of switches on the path. The controller can reactively inspect this information and obtain a guarantee about the traversed switches and their order. SDNsec provides this guarantee because the attacker does not possess the secret keys of other switches. Note that path validation also catches attacks from malicious ingress switches that embed in the packets FEs of other flows. The controller knows the forwarding information for every flow (based on the flow tuple) and can detect the misbehavior. Changing the information that defines a flow would break communication between the end hosts; Section 7 discusses such cases in more detail.
Furthermore, sequence numbers are used to prevent replay of the path validation information. A malicious switch could replace the PVF value in a packet with a value from a previously seen packet, obfuscating the actual path taken by the packet to avoid being detected by the controller. The replay is detected through a high repetition frequency of certain sequence numbers; under normal operation each sequence number would appear at most a few times (Section 4.1).
The path enforcement and validation properties of SDNsec can be compromised in the case of multiple adjacent malicious switches. For example, if a malicious on-path switch has multiple malicious adjacent switches (not on the path), then the packets can be forwarded along the malicious path segment and back. The on-path malicious switch can then reinject the packets along the initial intended path; this attack cannot be detected, as pointed out by prior work [23].

5.3 Denial-of-Service
Network devices typically store state (e.g., forwarding tables) on fast memory (e.g., SRAM), which is a limited resource. This becomes the target of attackers by populating the memory with bogus data that replaces legitimate information.
In SDNsec, the state exhaustion attack vector is confined to the edge of the network. Only edge switches keep forwarding tables and thus they are susceptible to a state exhaustion attack by malicious hosts that originate bogus flows. Core switches keep forwarding state only for broadcast/multicast traffic, but these entries are preconfigured by the controller with the valid tree IDs and, thus, cannot be populated with bogus entries. In Section 6.3.2, we compare the performance between an edge switch and a core switch under a state exhaustion attack.
Furthermore, each switch keeps state to monitor forwarded traffic at the granularity of flows. An attacker could generate random flow IDs in order to exhaust the monitoring table. This resource is protected by having the switches monitor only flow IDs that the controller mandates. Thus, the controller can securely adapt the resources according to the device's capabilities.

6. IMPLEMENTATION AND EVALUATION
We implement the SDNsec data-plane functionality on a software switch, and evaluate performance on a commodity server machine. Furthermore, we analyze the path validation and bandwidth overhead for the network.

6.1 Software Switch Prototype
To achieve high performance, our implementation leverages the Data Plane Development Kit (DPDK) [24] and the Intel AES-NI instruction set [25]. DPDK is an open-source set of libraries and drivers for packet processing in user space. DPDK comes with zero-copy Network Interface Card (NIC) drivers that leverage polling to avoid unnecessary interrupts. Intel AES-NI is an instruction set that uses hardware cryptographic engines built into the CPU to speed up the AES block cipher.
To compute and verify the required MACs, we use the Cipher Block Chaining mode (CBC-MAC) with AES as the block cipher. The input lengths to the MACs for a FE and a PVF are 15 and 14 bytes, respectively. Note that for both cases the input fits in one AES block (16 bytes) and that the input length is fixed and independent of the path length (CBC-MAC is vulnerable when used for variable-length messages). Furthermore, we use 128-bit encryption keys and truncate the output to the required number of bits (Section 4.1).
Furthermore, we optimize forwarding in the following ways. First, we store four FEs in different xmm registers (xmm0-xmm3) and issue four encryption instructions with the preloaded round key (stored in xmm4). Since each AES engine can simultaneously perform 4 AES operations, a switch can process four packets in parallel on each CPU core. The assembly code snippet is given below:

    aesenc xmm0,xmm4 //Round 1 for Packet 1
    aesenc xmm1,xmm4 //Round 1 for Packet 2
    aesenc xmm2,xmm4 //Round 1 for Packet 3
    aesenc xmm3,xmm4 //Round 1 for Packet 4

Second, a dedicated CPU core is assigned to a NIC port and handles all the required packet processing for the port. Each physical core has a dedicated AES-NI engine and thus packets received on one port are served from the AES-NI engine of the physical core assigned to that port.
Third, we create per-core data structures to avoid unnecessary cache misses. Each NIC is linked with a receive queue and a transmit queue, and these queues are assigned to a CPU core to handle the NIC's traffic. Furthermore, we load balance traffic from one NIC over multiple cores, depending on the system's hardware. For this purpose, we leverage Receive Side Scaling (RSS) [26] as follows: each NIC is assigned multiple queues, and each queue can be handled by another core. RSS is then used to distribute traffic among the queues of a NIC.
Our implementation of the edge switch is based on the DPDK vSwitch [27]. The DPDK vSwitch is a fork of the open-source Open vSwitch [28] running on DPDK for better performance. Open vSwitch is a multilayer switch that is used to build programmable networks and can run within a hypervisor or as a standalone control stack for switching devices. Edge switches in SDNsec use the typical flow matching rules and forwarding tables to forward a packet and therefore we chose to augment an existing production-quality solution. We augment the lookup table to store forwarding information for a flow in addition to the output port. The ingress switch increases the size of the packet header and inputs the additional information (FEs, sequence number, and its PVF).
We implement core switches from scratch due to the minimal functionality they perform. A core switch performs two MAC computations (it verifies its FE and computes its PVF value), updates the flow's counters (if the flow is monitored), and forwards the packet from the specified port.
6.2 Packet Overhead
The security properties of SDNsec come at the cost of increased packet size. For each packet, the ingress switch creates an additional packet header with its size depending on the path length: 8 bytes/switch (including the egress switch) and a constant amount of 22 bytes/packet.
To put the packet overhead into context, we analyze two deployment scenarios for SDNsec: a data-center deployment and a research-network deployment. Furthermore, to evaluate the worst case for SDNsec, we consider the diameter of the network topologies, i.e., the longest shortest path between any two nodes in the network. We also evaluate the packet overhead for the average path length in the research-network case.
For the data-center case, we consider two common data center topologies: a leaf-spine topology [29] and a 3-tier topology (access, aggregation, and core layer) [30]. The diameter for the leaf-spine topology is 4 links (i.e., 3 switches) and for the 3-tier topology 6 links (i.e., 5 switches); our reported path lengths include the links between the hosts and the switches. In addition, to relate the overhead to realistic data center traffic, we use the findings of two studies: the average packet size in data centers is 850 bytes [20], and packet sizes are concentrated around the values of 200 and 1400 bytes [21]. Table 1 shows the overhead for the different topologies and path lengths.

Table 1: Packet overhead for data center traffic patterns and topologies.
Packet Size   200 B    850 B    1400 B
Leaf-Spine    19.0%    4.5%     2.7%
3-Tier        27.0%    6.4%     3.9%

For the research network deployment, we analyze the topology of the Internet2 network [31], which is publicly available [32]; we consider only the 17 L3 and 34 L2 devices in the topology – not the L1 optical repeaters – and find a diameter of 11 links (i.e., 10 switches). Furthermore, for the Internet2 topology we calculate an average path length of 6.62 links (i.e., 6 switches). To relate the overhead to actual traffic, we analyze three 1-hour packet traces from CAIDA [33] and calculate the respective packet overhead for the mean and median packet lengths (Table 2).

Table 2: Packet overhead for the average path length (A) and the diameter (D) of the Internet2 topology and the mean and median packet sizes from 3 CAIDA 1-hour packet traces.
        Trace 1          Trace 2           Trace 3
        747 B    463 B   906 B    1420 B   691 B    262 B
A       8.3%     13.4%   6.8%     4.4%     9.0%     23.7%
D       12.6%    20.3%   10.4%    6.6%     14.0%    35.9%

Our results indicate a moderate packet overhead for the average path length in Internet2 and a considerable packet overhead for the worst case (high path lengths) in both deployment scenarios. This analysis provides an insight about the price of security and robustness for policy enforcement and validation of the SDN data plane. Furthermore, we observe that the packet overhead is more significant for ISP topologies because they typically have longer paths than data center networks: data center networks are optimized with respect to latency and cabling length, leading to shorter path lengths. Novel data center topologies demonstrate even shorter path lengths compared to the more common topologies we analyzed [34]. This path-length optimization leads to a lower packet overhead for a data center deployment of SDNsec.
This analysis provides DPDK vSwitch, we populate a flow table with 64k en- an insight about the price of security and robustness tries; for the SDNsec edge switch, the flow table holds for policy enforcement and validation of the SDN data forwarding entries for a path with 5 switches. Flows plane. Furthermore, we observe that the packet over- are defined based on the destination MAC address – all head is more significant for ISP topologies because they other fields remain constant. have typically longer paths than data center networks: data center networks are optimized with respect to la- 6.3.1 Normal Operation tency and cabling length leading to shorter path lengths. Novel data center topologies demonstrate even shorter For normal operation, we generate packets with a destination MAC address in the range of the addresses path lengths compared to the more common topologies we analyzed [34]. This path-length optimization leads stored in the flow table of the switch. Figure 6b shows the average latency per packet, and Figure 6a shows the to a lower packet overhead for a data center deployment of SDNsec. switching performance for a 60-second measurement in- terval. 6.3 Performance Evaluation The ingress switch demonstrates a higher latency com- pared to DPDK vSwitch because the SDNsec header We compare the forwarding performance of edge and must be added to every packet: the packet size increases core switches with the DPDK vSwitch under two sce- and the longer entries in the lookup table cause addi- narios: normal operation and a state exhaustion attack. tional cache misses that increase latency. Furthermore, We run the SDNsec software switch on a commodity the latency of the core switch is the same as the DPDK server machine. The server has a non-uniform mem- baseline latency, demonstrating the minimal processing ory access (NUMA) design with two Intel Xeon E5- overhead at the core switches. 2680 CPUs that communicate over two QPI links. Each We observe a considerable performance decrease for NUMA node is equipped with four banks of 16 GB DD3 the ingress switch compared to the DPDK vSwitch. RAM. Furthermore, the server has 2 dual-port 10 GbE This decrease is a side-effect of the packet overhead 6Our reported path lengths include the links between (Section 6.2): the outgoing traffic volume of an ingress the hosts and the switches. switch is higher than the incoming volume. Thus, when the incoming links are fully utilized, packets get dropped and the throughput is lower (assuming that the aggre- Packet Size 200 B 850 B 1400 B gate ingress and egress capacity of the switch is the same). This comparison captures the effect of packet Leaf-Spine 19.0% 4.5% 2.7% 3-Tier 27.0% 6.4% 3.9% 7We exclude 64-byte packets because the minimum Table 1: Packet overhead for data center traffic patterns packet size in the core of the network is higher because the additional information in SDNsec does not fit in the and topologies. minimum-sized Ethernet packet. 80 800 70 Line Rate DPDK DPDK vSwitch 60 ) 600 vSwitch

6.3 Performance Evaluation
We compare the forwarding performance of edge and core switches with the DPDK vSwitch under two scenarios: normal operation and a state exhaustion attack.
We run the SDNsec software switch on a commodity server machine. The server has a non-uniform memory access (NUMA) design with two Intel Xeon E5-2680 CPUs that communicate over two QPI links. Each NUMA node is equipped with four banks of 16 GB DDR3 RAM. Furthermore, the server has 2 dual-port 10 GbE NICs (PCIe Gen2 x8), providing a unidirectional capacity of 40 Gbps.
We utilize a Spirent SPT-N4U-220 to generate traffic. We specify IPv4 as the network-layer protocol, and we vary Ethernet packet sizes from 128 to 1500 bytes; we exclude 64-byte packets because the minimum packet size in the core of the network is higher, as the additional SDNsec information does not fit in the minimum-sized Ethernet packet. For a given link capacity, the packet size determines the packet rate and hence the load on the switch. For example, for 128-byte packets and one 10 GbE link, the maximum packet rate is 8.45 million packets per second (Mpps); for all 8 NIC ports it is 67.6 Mpps. These values are the physical limits and represent the theoretical peak throughput.
Furthermore, for the SDNsec edge switch and the DPDK vSwitch, we populate a flow table with 64k entries; for the SDNsec edge switch, the flow table holds forwarding entries for a path with 5 switches. Flows are defined based on the destination MAC address – all other fields remain constant.

6.3.1 Normal Operation
For normal operation, we generate packets with a destination MAC address in the range of the addresses stored in the flow table of the switch. Figure 6b shows the average latency per packet, and Figure 6a shows the switching performance for a 60-second measurement interval.
The ingress switch demonstrates a higher latency compared to the DPDK vSwitch because the SDNsec header must be added to every packet: the packet size increases, and the longer entries in the lookup table cause additional cache misses that increase latency. Furthermore, the latency of the core switch is the same as the DPDK baseline latency, demonstrating the minimal processing overhead at the core switches.
We observe a considerable performance decrease for the ingress switch compared to the DPDK vSwitch. This decrease is a side-effect of the packet overhead (Section 6.2): the outgoing traffic volume of an ingress switch is higher than the incoming volume. Thus, when the incoming links are fully utilized, packets get dropped and the throughput is lower (assuming that the aggregate ingress and egress capacity of the switch is the same). This comparison captures the effect of the packet overhead and not the processing overhead. In contrast to the ingress switch, the core switch outperforms the other switches and achieves the baseline performance for all packet sizes.
Our experiments under normal operation demonstrate a performance decrease at the edge of the network; however, the core of the network can handle significantly more traffic, compared to today's SDN realization.

Figure 6: Switching performance under normal operation: (a) throughput (Mpps) and (b) latency (µs) versus packet size (128 to 1500 bytes) for the DPDK vSwitch, the SDNsec ingress switch, and the SDNsec core switch, together with the line rate.

6.3.2 State Exhaustion Attack
To analyze the switching performance of the switch under a state exhaustion attack, we generate traffic with random destination MAC addresses. The destination addresses are randomly drawn from a pool of 2^32 (~4 billion) addresses to prevent the switches from performing any optimization, such as caching flow information. Figure 7 shows the switching performance.
We observe a considerable decrease (i.e., over 100 times slower than the DPDK baseline) in throughput for both the DPDK vSwitch and the ingress switch (Figure 7a). This decrease is due to cache misses when performing flow-table lookups: the switches are forced to perform memory lookups, which are considerably slower than cache lookups, in order to determine the forwarding information to process the incoming packets. The latency plot in Figure 7b tells a similar story: both the DPDK vSwitch and the ingress switch take considerably longer to process packets.
However, for the core switches the switching performance remains unaffected compared to normal operation. This is because the core switches do not perform any memory lookup when processing packets.

6.4 Path Validation Overhead
Path validation introduces processing overhead for the controller and bandwidth overhead for the network. The controller has to recompute the PVFs for the reported packets, and the egress switches have to report the PVFs in the packet headers to the controller.
We estimate the overheads based on information for large data centers: 80k hosts [35] with 10G access links that are utilized at 1% (in each direction) [36] and send average-sized packets of 850 bytes [20]. Due to lack of knowledge about traffic patterns, we consider the worst case: all traffic is inter-rack and the path consists of 5 switches (the worst-case path length for a 3-tier data center); also, all egress switches report all the packet headers. Overall, the aggregate packet rate for this setup is 1176 Mpps.
We implement the PVC, which reads the packet headers, fetches the corresponding shared keys with the switches, and recomputes the PVFs. For the previous setup, an 8-core CPU can validate 17 Mpps. For the whole data-center traffic (1176 Mpps), 69 CPUs would be required.
For the bandwidth overhead, the data size to validate one packet is 14 bytes (3 bytes for the FlowID, 3 bytes for the SeqNo, and 8 bytes for the PVF). For the previous setup, we estimate the bandwidth overhead at 115 Gbps, which accounts for 1.6% of the whole data-center traffic.
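The aggregate packet rate and CPU estimate above can be reproduced from the stated inputs (80k hosts, 10G links at 1% utilization, 850-byte packets, 17 Mpps validated per 8-core CPU); a quick sketch:

    # Back-of-the-envelope reproduction of the Section 6.4 estimate.
    hosts, link_bps, util, pkt_bytes = 80_000, 10e9, 0.01, 850
    pkt_rate = hosts * link_bps * util / (pkt_bytes * 8)   # ~1.176e9 pps (~1176 Mpps)
    cpus = pkt_rate / 17e6                                  # ~69 eight-core CPUs for validation
    print(f"{pkt_rate / 1e6:.0f} Mpps, {cpus:.0f} CPUs")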
7. DISCUSSION
One attack we have not considered is packet dropping by a malicious switch. Flow statistics through monitoring provide a basic defense perimeter for such attacks. The controller can instruct switches to periodically report packet counters for certain flows and then inspect if packets are dropped at a certain link. Furthermore, dishonest reports would result in inconsistent reports that pinpoint the misbehavior to a certain link between two switches (it is not possible to identify the exact switch) [37]. However, packet dropping by a malicious ingress or egress switch cannot be detected through monitoring. This is the side-effect of a design decision in SDNsec.
We have made the deliberate design decision that the network stack of the host should not be modified. This design choice provides a smoother incremental deployment path for SDNsec, since hosts do not perform any additional functionality. This can be beneficial also for a data-center deployment, when tenants have control over their operating system (e.g., in the Infrastructure-as-a-Service model).
This design decision, however, has implications for the security properties of SDNsec and enables certain attacks. For example, a malicious egress switch can transfer packets out of an incorrect interface, replay packets, or drop packets; without feedback from the end host it is not possible to detect such attacks. Furthermore, a malicious ingress switch can replay packets without being detected, since the ingress switch can inscribe different sequence numbers; again, the transport layer of the destination host – and not the network – can detect the replay.


Figure 7: Switching performance under a state exhaustion attack: (a) throughput (Mpps) and (b) latency (µs) versus packet size (128 to 1500 bytes) for the DPDK vSwitch, the SDNsec ingress switch, and the SDNsec core switch, together with the line rate.

8. RELATED WORK
We briefly describe recent research proposals that are related to data-plane security and state reduction in SDN.
Data-plane security. There are only a few proposals accounting for compromised switches at the data plane. The most closely related work to ours is SANE [12]: the controller hands out capabilities to end hosts – not to switches, as in SDNsec – in order to enforce network paths. This approach requires modification of end hosts in order to perform additional tasks. Namely, every host must communicate with the controller in order to establish a shared symmetric key and obtain capabilities. Failure recovery is pushed to the host, which has to detect the failure and then explicitly ask the controller for a new path. In addition, SANE cannot provide protection against stronger adversaries that collude and perform a wormhole attack: a malicious switch can replay capabilities by prepending them to the existing forwarding information in the packet and thus can diverge traffic over another path; a colluding switch removes the prepended capabilities and forwards packets to a downstream switch of the original path. SDNsec provides path validation to deal with such attacks. Finally, SANE does not consider broadcast/multicast forwarding.
Jacquin et al. [38] take another approach, using trusted computing to attest remotely that network elements use an approved software version. Being a first step in this direction, there are unaddressed challenges with respect to scalability: processing overhead (overall attestation time), bandwidth overhead (extra traffic due to attestation), and management overhead (the number of different software versions deployed).
OPT [39] provides path validation on top of a dynamic key-establishment protocol that enables routers to re-create symmetric keys with end hosts. In SDNsec, key management is simplified, since each switch shares a key only with the controller. Thus, we do not involve the host in any key establishment and avoid the overhead of key establishment in the presented protocols.
ICING [40] is another path validation protocol that leverages cryptographic information in the packets. Each router on the path verifies cryptographic markings in the packet that were inserted by the source and each upstream router. ICING comes with a high bandwidth overhead due to large packet sizes, demonstrating a 23.3% average packet overhead. Furthermore, ICING requires pairwise symmetric keys between all entities on a path.
State reduction for SDN. Another class of proposals focuses on state reduction for the SDN data plane. Source routing is a commonly used approach to realize this goal, and recent work shows that source routing not only decreases the forwarding table size, but also provides a higher and more flexible resource utilization [41]. In SourceFlow [42], packets carry pointers to action lists for every core switch on the path. Hence, core switches only store action tables that encode potential actions for packets and are indexed by a pointer in the packet. Segment Routing [43] is based on source routing and combines the benefits of MPLS [44] with the centralized control of SDN. An ingress switch adds an ordered list of instructions into the packet header, and each subsequent switch inspects such an instruction. These approaches are similar to SDNsec in that they reduce state at core switches by embedding information in packet headers. However, the use of source routing without corresponding security mechanisms opens a bigger attack vector compared to legacy hop-by-hop routing: a single compromised switch can modify the forwarding information and steer a packet over a non-compliant path.
ICING [40] is another path validation protocol that leverages cryptographic information in the packets. Each router on the path verifies cryptographic markings in the packet that were inserted by the source and each upstream router. ICING comes with a high bandwidth overhead due to large packet sizes, demonstrating a 23.3% average packet overhead. Furthermore, ICING requires pairwise symmetric keys between all entities on a path.
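The following minimal sketch illustrates the general pattern of per-hop cryptographic markings that such path-validation schemes rely on. It is a simplified illustration with hypothetical names, not the ICING or SDNsec packet format, and it uses controller-shared keys rather than ICING's pairwise keys: each hop computes a MAC over a packet identifier and the previous marking, and the controller later recomputes the chain to verify the path actually taken.

    import hmac, hashlib

    def mac(key: bytes, data: bytes) -> bytes:
        # Truncated MAC, since packet headers favor short fields.
        return hmac.new(key, data, hashlib.sha256).digest()[:16]

    def mark_path(packet_id: bytes, hop_keys: list) -> list:
        """Each switch appends a MAC chained over the packet ID and the previous marking."""
        markings, prev = [], b""
        for k in hop_keys:
            prev = mac(k, packet_id + prev)
            markings.append(prev)
        return markings

    def controller_validate(packet_id: bytes, hop_keys: list, markings: list) -> bool:
        """The controller, knowing every switch key, recomputes the expected chain."""
        return markings == mark_path(packet_id, hop_keys)

    # Example: a three-hop path; the keys would come from the controller's key store.
    keys = [b"key-sw1", b"key-sw2", b"key-sw3"]
    m = mark_path(b"flow42|seq7", keys)
    assert controller_validate(b"flow42|seq7", keys, m)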
State reduction for SDN. Another class of proposals focuses on state reduction for the SDN data plane. Source routing is a commonly used approach to realize this goal, and recent work shows that source routing not only decreases the forwarding table size, but also provides higher and more flexible resource utilization [41]. In SourceFlow [42], packets carry pointers to action lists for every core switch on the path. Hence, core switches only store action tables that encode potential actions for packets and are indexed by a pointer in the packet. Segment Routing [43] is based on source routing and combines the benefits of MPLS [44] with the centralized control of SDN. An ingress switch adds an ordered list of instructions into the packet header, and each subsequent switch inspects such an instruction. These approaches are similar to SDNsec in that they reduce state at core switches by embedding information in packet headers. However, the use of source routing without corresponding security mechanisms opens a bigger attack vector compared to legacy hop-by-hop routing: a single compromised switch can modify the forwarding information and steer a packet over a non-compliant path.
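To make the shared idea concrete, the sketch below shows how an ingress switch can embed an ordered instruction list and a pointer in the packet header, so that core switches forward packets without any flow-table lookup. The field names are hypothetical and do not correspond to SourceFlow's or Segment Routing's wire formats.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PacketHeader:
        instructions: List[int]  # ordered egress ports, written by the ingress switch
        ptr: int = 0             # index of the current hop's instruction
        payload: bytes = b""

    def forward_at_core_switch(pkt: PacketHeader) -> int:
        """A core switch reads its own entry and advances the pointer; it keeps no per-flow state."""
        port = pkt.instructions[pkt.ptr]
        pkt.ptr += 1
        return port

    # Ingress switch encodes the path as egress ports 3 -> 1 -> 7.
    pkt = PacketHeader(instructions=[3, 1, 7])
    assert [forward_at_core_switch(pkt) for _ in range(3)] == [3, 1, 7]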
9. CONCLUSION

Security in SDN remains a neglected issue and could raise deployment hurdles for security-concerned environments. We have presented a security extension to achieve forwarding accountability for the SDN data plane, i.e., to ensure that the operator's policies are correctly applied to the data plane. To this end, we have designed two mechanisms: path enforcement to ensure that the switches forward the packets based on the instructions of the operator, and path validation to allow the operator to reactively verify that the data plane has followed the specified policies. In addition, SDNsec guarantees consistent policy updates such that the behavior of the data plane is well defined during reconfigurations. Lastly, minimizing the amount of state at the core switches confines state exhaustion attacks to the network edge. We hope that this work assists in moving towards more secure SDN deployments.

10. REFERENCES

[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: Enabling Innovation in Campus Networks," SIGCOMM Comput. Commun. Rev., 2008.
[2] S. Shin, P. A. Porras, V. Yegneswaran, M. W. Fong, G. Gu, and M. Tyson, "FRESCO: Modular Composable Security Services for Software-Defined Networks," in Proc. of NDSS, 2013.
[3] A. Khurshid, W. Zhou, M. Caesar, and P. B. Godfrey, "VeriFlow: Verifying Network-wide Invariants in Real Time," in Proc. of ACM HotSDN, 2012.
[4] P. Porras, S. Shin, V. Yegneswaran, M. Fong, M. Tyson, and G. Gu, "A Security Enforcement Kernel for OpenFlow Networks," in Proc. of ACM HotSDN, 2012.
[5] "Cisco Routers Compromised by Malicious Code Injection," http://bit.ly/1KtUoTs, Sep. 2015.
[6] "Juniper ScreenOS Authentication Backdoor," http://bit.ly/1Nx8J5i, Dec. 2015.
[7] "Snowden: The NSA planted backdoors in Cisco products," http://bit.ly/1PKtbQW, May 2015.
[8] P.-W. Chi, C.-T. Kuo, J.-W. Guo, and C.-L. Lei, "How to Detect a Compromised SDN Switch," in Proc. of IEEE NetSoft, 2015.
[9] R. Kloti, V. Kotronis, and P. Smith, "OpenFlow: A Security Analysis," in Proc. of IEEE NPSec, 2013.
[10] M. Antikainen, T. Aura, and M. Särelä, "Spook in Your Network: Attacking an SDN with a Compromised OpenFlow Switch," in Secure IT Systems, 2014.
[11] M. Reitblatt, N. Foster, J. Rexford, C. Schlesinger, and D. Walker, "Abstractions for Network Update," in Proc. of ACM SIGCOMM, 2012.
[12] M. Casado, T. Garfinkel, A. Akella, M. J. Freedman, D. Boneh, N. McKeown, and S. Shenker, "SANE: A Protection Architecture for Enterprise Networks," in Proc. of USENIX Security, Aug 2006.
[13] P. Kazemian, G. Varghese, and N. McKeown, "Header Space Analysis: Static Checking for Networks," in Proc. of USENIX NSDI, 2012.
[14] P. Kazemian, M. Chang, H. Zeng, G. Varghese, N. McKeown, and S. Whyte, "Real Time Network Policy Checking Using Header Space Analysis," in Proc. of USENIX NSDI, 2013.
[15] H. Mai, A. Khurshid, R. Agarwal, M. Caesar, P. B. Godfrey, and S. T. King, "Debugging the data plane with anteater," in Proc. of ACM SIGCOMM, 2011.
[16] "IEEE Standard for Local and metropolitan area networks - Port-Based Network Access Control," IEEE Std 802.1X-2004, Dec 2004.
[17] R. Mahajan and R. Wattenhofer, "On Consistent Updates in Software Defined Networks," in Proc. of ACM HotNets, 2013.
[18] X. Jin, H. H. Liu, R. Gandhi, S. Kandula, R. Mahajan, M. Zhang, J. Rexford, and R. Wattenhofer, "Dynamic Scheduling of Network Updates," in Proc. of ACM SIGCOMM, 2014.
[19] Open Networking Foundation, "OpenFlow Switch Specification Version 1.5.0," http://bit.ly/1Rdi6Yg, 2014.
[20] T. Benson, A. Anand, A. Akella, and M. Zhang, "Understanding Data Center Traffic Characteristics," SIGCOMM Comput. Commun. Rev., 2010.
[21] T. Benson, A. Akella, and D. A. Maltz, "Network Traffic Characteristics of Data Centers in the Wild," in Proc. of ACM IMC, 2010.
[22] P. Jokela, A. Zahemszky, C. Esteve Rothenberg, S. Arianfar, and P. Nikander, "LIPSIN: Line Speed Publish/Subscribe Inter-networking," in Proc. of ACM SIGCOMM, 2009.
[23] F. Zhang, L. Jia, C. Basescu, T. H.-J. Kim, Y.-C. Hu, and A. Perrig, "Mechanized Network Origin and Path Authenticity Proofs," in Proc. of ACM CCS, 2014.
[24] "Data Plane Development Kit," http://dpdk.org.
[25] S. Gueron, "Intel Advanced Encryption Standard (AES) New Instructions Set," 2012.
[26] S. Goglin and L. Cornett, "Flexible and extensible receive side scaling," 2009.
[27] "Open vSwitch accelerated by DPDK," https://github.com/01org/dpdk-ovs.
[28] "Open vSwitch," www.openvswitch.org.
[29] M. Alizadeh and T. Edsall, "On the Data Path Performance of Leaf-Spine Datacenter Fabrics," in Proc. of IEEE HOTI, 2013.
[30] Cisco, "Data Center Multi-Tier Model Design," http://bit.ly/23vtt5u.
[31] "Internet2," https://www.internet2.edu.
[32] "Internet2 Network NOC," http://bit.ly/1JHulh0.
[33] "CAIDA: Center for Applied Internet Data Analysis," http://www.caida.org.
[34] A. Singla, C.-Y. Hong, L. Popa, and P. B. Godfrey, "Jellyfish: Networking Data Centers Randomly," in Proc. of USENIX NSDI, 2012.
[35] "Inside Amazon's Cloud Computing Infrastructure," http://bit.ly/1JHulh0, 2015.
[36] A. Roy, H. Zeng, J. Bagga, G. Porter, and A. C. Snoeren, "Inside the social network's (datacenter) network," in Proc. of ACM SIGCOMM, 2015.
[37] S. Goldberg, D. Xiao, E. Tromer, B. Barak, and J. Rexford, "Path-quality Monitoring in the Presence of Adversaries," in Proc. of ACM SIGMETRICS, 2008.
[38] L. Jacquin, A. Shaw, and C. Dalton, "Towards trusted software-defined networks using a hardware-based Integrity Measurement Architecture," in Proc. of IEEE NetSoft, 2015.
[39] T. H.-J. Kim, C. Basescu, L. Jia, S. B. Lee, Y.-C. Hu, and A. Perrig, "Lightweight Source Authentication and Path Validation," in Proc. of ACM SIGCOMM, 2014.
[40] J. Naous, M. Walfish, A. Nicolosi, D. Mazières, M. Miller, and A. Seehra, "Verifying and Enforcing Network Paths with ICING," in Proc. of ACM CoNEXT, 2011.
[41] S. A. Jyothi, M. Dong, and P. B. Godfrey, "Towards a Flexible Data Center Fabric with Source Routing," in Proc. of ACM SOSR, 2015.
[42] Y. Chiba, Y. Shinohara, and H. Shimonishi, "Source Flow: Handling Millions of Flows on Flow-based Nodes," in Proc. of ACM SIGCOMM, 2010.
[43] "Segment Routing Architecture," https://tools.ietf.org/html/draft-filsfils-rtgwg-segment-routing.
[44] E. Rosen, A. Viswanathan, and R. Callon, "Multiprotocol Label Switching Architecture," RFC 3031 (Proposed Standard), Internet Engineering Task Force, Jan. 2001.