Stardust: Divide and Conquer in the Data Center Network

Noa Zilberman, University of Cambridge; Gabi Bracha and Golan Schzukin, Broadcom

This paper appeared in the Proceedings of the 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI '19), February 26–28, 2019, Boston, MA, USA. ISBN 978-1-931971-49-2. https://www.usenix.org/conference/nsdi19/presentation/zilberman

Abstract

Building scalable data centers, and network devices that fit within these data centers, has become increasingly hard. With modern switches pushing at the boundary of manufacturing feasibility, being able to build suitable and scalable network fabrics becomes of critical importance. We introduce Stardust, a fabric architecture for data-center-scale networks, inspired by network-switch systems. Stardust combines packet switches at the edge and disaggregated cell switches in the network fabric, using scheduled traffic. Stardust is a distributed solution that attends to the scale limitations of network-switch design, while also offering improved performance and power savings compared with traditional solutions. With ever-increasing networking requirements, Stardust predicts the elimination of packet switches, replaced by cell switches in the network and smart network hardware at the hosts.

1 Introduction

For the last ten years, cloud computing has relentlessly grown in size [28]. Nowadays, data centers can host tens of thousands [75] of servers or more. The complexity of the data center network (DCN) has grown together with the scaling of data centers. While scale has a direct effect on the bisection bandwidth, it also affects latency, congestion, manageability and reliability. To cope with these demands, network switches have grown in capacity by more than two orders of magnitude in less than two decades [85].

The computing community faced the end of Dennard scaling [35] and the slowdown of Moore's law [60] over a decade ago, prompting a move to multi-core and many-core CPU design [68]. Similar challenges are faced by the networking community today. In this paper, we discuss limitations on network device scalability, and assert that in order to continue to scale with DCN requirements, data center network devices need to be significantly simplified. We introduce Stardust, a DCN architecture based on the implementation of network-switch systems on a data center scale.

In Stardust, we divide the network into two classes of devices: top-of-rack (ToR) devices maintain classic packet-switch functionality, while any other device in the network is a simple and efficient cell switch. We refer to the part of the network created by these simple switches as the network fabric. Devices within the network fabric do not require complex header processing and large lookup tables, and have minimal queueing and reduced network interface overheads. Their data path requires a minimal clock frequency, regardless of packet size. Creating a network fabric made entirely of a single type of simple switch is a radical move from common approaches, whereby each network device is a full-fledged, autonomous packet switch. To conquer the DCN scalability and performance requirements, we build upon a number of insights, drawn from the silicon level through the network level:

• The DCN can be compared to a large-scale network-switch system, where complex routing decisions are taken at the edge and simple forwarding is applied within the fabric. By using only basic routing mechanisms within the core of the DCN, significant network-switch resources can be saved, while maintaining functionality.

• The increase in port rates, utilizing an aggregation of N serial links per port as a norm, limits the scalability of the DCN. By using discrete, rather than aggregated, links, the scale of Fat-tree networks can improve by O(N²).

• Unnecessary packet transmissions can eventually lead to packet loss. Credit-based egress scheduling can prevent packet loss due to congestion, and increase fairness between competing flows.

• Congestion can happen within underutilized networks due to imperfect balancing of flows across available paths. By evenly distributing traffic across all links, optimum load balancing can be achieved.

• Network-switch silicon is over-designed: it is under-utilized for small packet sizes and for packet sizes not aligned to the internal data path. By batching packets together and chopping them into cells that perfectly match the internal data-path width, maximum data-path utilization can be achieved (a sketch follows this list).
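To make the last insight concrete, the sketch below illustrates the batching-and-cell-chopping idea in a few lines of Python. It is an illustration only, not Stardust's actual cell format: the 256-byte cell payload is an assumed data-path width, and real cells would also carry a small header for routing and reassembly, which the sketch omits.

```python
# Illustrative sketch only (not Stardust's cell format): batch variable-size
# packets heading to one destination, then carve the batch into fixed-size
# cells that match an assumed internal data-path width, padding only the
# final cell of the batch.

CELL_PAYLOAD = 256  # bytes; an assumed data-path width, not a Stardust constant


def batch_to_cells(packets: list[bytes], cell_payload: int = CELL_PAYLOAD) -> list[bytes]:
    """Concatenate packets for the same egress and slice them into cells."""
    stream = b"".join(packets)
    cells = []
    for offset in range(0, len(stream), cell_payload):
        cell = stream[offset:offset + cell_payload]
        if len(cell) < cell_payload:          # only the last cell may be padded
            cell = cell.ljust(cell_payload, b"\x00")
        cells.append(cell)
    return cells


if __name__ == "__main__":
    pkts = [b"A" * 64, b"B" * 1500, b"C" * 300]   # a small, a full, and a mid-size packet
    cells = batch_to_cells(pkts)
    # Every cell is exactly CELL_PAYLOAD bytes, so the data path is fully
    # utilized regardless of the original packet sizes.
    print(len(cells), {len(c) for c in cells})
```

Because every cell is exactly one data-path word wide, the pipeline never handles a partially filled word more than once per batch, which is the utilization argument behind the bullet above.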
Stardust has no sense of flows, and it does not require route installation. The network fabric is completely distributed, with no need for complex software-defined networking (SDN) management, central control or scheduling. Stardust is also protocol agnostic, reducing silicon-level requirements by 33%. The resulting network provides full bisection bandwidth. It is lossless, load balanced, and provides significant power and cost savings.
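The losslessness stems from the credit-based egress scheduling listed among the insights above: ingress devices hold traffic in virtual output queues and transmit toward an egress only after that egress has granted credit, so congestion is pushed back to the sources rather than causing drops inside the fabric. The sketch below is a minimal illustration of such a credit loop under our own simplifying assumptions (fixed per-round byte budgets, credit split evenly across backlogged ingresses); it is not the scheduler implemented in Stardust's silicon.

```python
# Minimal credit-loop sketch (illustration only): an ingress may transmit a
# packet toward an egress only after the egress has granted it enough credit,
# so a congested output holds traffic back at the sources instead of dropping.

from collections import deque


class Egress:
    def __init__(self, rate_bytes_per_round: int):
        self.rate = rate_bytes_per_round      # credit budget generated each round

    def grant(self, requests: dict[str, int]) -> dict[str, int]:
        """Split this round's credit budget evenly across backlogged ingresses."""
        backlogged = [name for name, need in requests.items() if need > 0]
        if not backlogged:
            return {}
        share = self.rate // len(backlogged)
        return {name: min(share, requests[name]) for name in backlogged}


class Ingress:
    def __init__(self, name: str):
        self.name = name
        self.voq = deque()                    # packet sizes queued for this egress
        self.credits = 0

    def request(self) -> int:
        return sum(self.voq) - self.credits   # outstanding bytes not yet covered

    def send(self) -> list[int]:
        sent = []
        while self.voq and self.voq[0] <= self.credits:
            pkt = self.voq.popleft()          # transmit only against held credit
            self.credits -= pkt
            sent.append(pkt)
        return sent


if __name__ == "__main__":
    egress = Egress(rate_bytes_per_round=3000)
    a, b = Ingress("A"), Ingress("B")
    a.voq.extend([1500] * 4)                  # A is heavily backlogged
    b.voq.extend([500] * 2)                   # B has little to send
    for _ in range(4):                        # a few scheduling rounds
        grants = egress.grant({i.name: i.request() for i in (a, b)})
        for ingress in (a, b):
            ingress.credits += grants.get(ingress.name, 0)
        print({i.name: i.send() for i in (a, b)})
```

Nothing is ever dequeued without credit, and the even split of each round's budget is what gives the fairness between competing sources mentioned in the insight list.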
Switch-silicon vendors have produced, and shipped in volume, chipsets such as we exploit in Stardust (e.g., [20, 61]); we do not claim these designs as a contribution. Similarly, switch vendors have produced single-system switches that internally use such chipsets to produce the illusion of a single many-port packet switch (e.g., [14, 43, 26]). Our contribution, in this paper, is to architect and implement our approach on a data center scale, using such commodity chipsets.

To summarize, we make the following contributions:

• We explore performance and scalability limitations in current DCN network devices (§2), motivating the need for simpler network devices.

• We introduce Stardust, an architecture offering a better approach to realizing the DCN (§3, §4), and discuss its advantages on a data center scale (§5).

• We evaluate Stardust (§6): experimental measurements in O(1K)-port environments, event simulations of O(10K)-port systems, and an analysis of large-scale O(100K)-port networks, illustrating the advantages of Stardust.

• We propose a future vision of DCNs (§8), where ToR functionality is reduced to fit within network interface cards, with ToR switches turned into cell switches.

2 Motivating Observations

The design of scalable network devices has hit a wall. If a decade ago the main question was "how can we implement this new feature?", now the question is "is this feature worth implementing given the design constraints?". We discuss three types of limitations to network device scalability: resources, I/O, and performance. Our observations are orthogonal, each providing a trajectory toward improved scalability.

2.1 The Resource Wall

Every network device is limited in its die size and its power consumption. Chip architects need to balance the design to avoid exceeding these resource limitations; die size is not just a function of cost, but also of manufacturing and operating feasibility, and of reliability.

Specializations and simplifications enabled DCNs to evolve considerably from being an Internet-in-miniature using complex router and switch architectures. In this respect, DCNs resemble room-wide HPC interconnects, or multi-chassis systems interconnecting interface line-cards and control processors. Such platforms (e.g., [27, 43]) implement advanced packet operations on the line-cards, interconnected using a simple fabric with minimal packet queuing or processing. In the DCN, the ToR switches take the role of the line-cards, while the "interconnect fabric" is the spine and leaf switches interconnecting all ToR devices. This model of the DCN equally applies to Stardust.

Fig. 1 illustrates the difference between the two approaches: in both cases, the first switch (the ToR) provides packet processing, queueing, and full network interfaces. However, in the typical data center network approach, traversing a Fat-tree includes going through all functions (ingress and egress processing, queueing, scheduling, ...) at each and every hop. In contrast, with Stardust, scaled up from the single-system approach, these functions are used only at edge devices. By eliminating stages, logic and memory that consume considerable silicon area [19], significant resources can be saved. While the I/O remains the same, the network interface is considerably smaller; there is no need to support multiple standards (e.g., 100GE, 400GE), and a single MAC functionality is sufficient. Stardust benefits include reducing both network-wide latency and network-management complexity, as it practically presents as a single (large) switch. Shallow buffering is not entirely eliminated (see §6). With fabric latency on the order of microseconds [16, 69] rather than milliseconds, a Stardust DCN behaves as a single hop (albeit across a larger backplane fabric), allowing applications to continue operating unchanged.

Observation: Significant resources can be saved by simplifying the network fabric and adopting a scaled-up single-system approach.

2.2 The I/O Wall

The maximum number of network interfaces on network devices has grown by an order of magnitude over the last decade, climbing from 24 ports [31] to 260 ports [64]. Device packages are big, 55mm × 55mm [79] or more, bigger than high-end CPUs [44]. Such big packages raise concerns about manufacturing feasibility, cost and reliability. It is unlikely that the number of I/Os will continue to scale at the same rate.

The second part of the I/O problem is that a network port is not necessarily a single physical link. A link bundle (l) is the number of serial links connecting two hops. For example, connecting two switches using a 100GE CAUI4 port, four lanes at 25Gbps each, is a link bundle of four.
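The link bundle l is what drives the O(N²) scaling claim made in the Introduction (with N corresponding to l). The back-of-envelope sketch below compares a conventional 3-tier Fat-tree, whose fabric radix is counted in l-lane ports, with a fabric whose radix is counted in individual serial links. The model (ToRs = radix²/2, hosts attached through unchanged l-lane ports) and the numbers (K = 128 serial links per switch, l = 4) are our own simplifications for illustration, not the paper's analysis.

```python
# Back-of-envelope sketch (assumed model and numbers, not from the paper):
# how the link bundle l limits a 3-tier Fat-tree. Every switch has K serial
# links; hosts attach through l-lane ports in both cases, and the standard
# Fat-tree count of ToRs (radix**2 / 2) is assumed for the two fabric tiers
# above the ToR layer.

def fat_tree_hosts(K: int, l: int, discrete_fabric: bool) -> int:
    fabric_radix = K if discrete_fabric else K // l   # fabric degree per switch
    tors = fabric_radix ** 2 // 2                      # ToRs the 2-tier fabric interconnects
    hosts_per_tor = K // (2 * l)                       # host-facing l-lane ports per ToR
    return tors * hosts_per_tor


if __name__ == "__main__":
    K, l = 128, 4                                      # e.g., 128 serdes lanes, 4-lane ports
    bundled = fat_tree_hosts(K, l, discrete_fabric=False)
    discrete = fat_tree_hosts(K, l, discrete_fabric=True)
    print(bundled, discrete, discrete // bundled)      # 8192 131072 16 -> a factor of l**2
```

Only the two fabric tiers above the ToR benefit from the finer radix, each by a factor of l, which is why the gain in this simplified model is l² rather than l³.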
