NetVM: High Performance and Flexible Networking using Virtualization on Commodity Platforms

Jinho Hwang†  K. K. Ramakrishnan∗  Timothy Wood†
†The George Washington University  ∗WINLAB, Rutgers University

Abstract

NetVM brings virtualization to the network by enabling high bandwidth network functions to operate at near line speed, while taking advantage of the flexibility and customization of low cost commodity servers. NetVM allows customizable data plane processing capabilities such as firewalls, proxies, and routers to be embedded within virtual machines, complementing the control plane capabilities of Software Defined Networking. NetVM makes it easy to dynamically scale, deploy, and reprogram network functions. This provides far greater flexibility than existing purpose-built, sometimes proprietary, hardware, while still allowing complex policies and full packet inspection to determine subsequent processing. It does so with dramatically higher throughput than existing software router platforms.

NetVM is built on top of the KVM platform and the Intel DPDK library. We detail many of the challenges we have solved, such as adding support for high-speed inter-VM communication through shared huge pages and enhancing the CPU scheduler to prevent overheads caused by inter-core communication and context switching. NetVM allows true zero-copy delivery of data to VMs, both for packet processing and for messaging among VMs within a trust boundary. Our evaluation shows how NetVM can compose complex network functionality from multiple pipelined VMs and still obtain throughputs up to 10 Gbps, an improvement of more than 250% compared to existing techniques that use SR-IOV for virtualized networking.

1 Introduction

Virtualization has revolutionized how data center servers are managed by allowing greater flexibility, easier deployment, and improved resource multiplexing. A similar change is beginning to happen within communication networks with the development of virtualization of network functions, in conjunction with the use of software defined networking (SDN). While the migration of network functions to a more software based infrastructure is likely to begin with edge platforms that are more "control plane" focused, the flexibility and cost-effectiveness obtained by using common off-the-shelf hardware and systems will make migration of other network functions attractive. One main deterrent is the achievable performance and scalability of such virtualized platforms compared to purpose-built (often proprietary) networking hardware or middleboxes based on custom ASICs.

Middleboxes are typically hardware-software packages that come together on a special-purpose appliance, often at high cost. In contrast, a high throughput platform based on virtual machines (VMs) would allow network functions to be deployed dynamically at nodes in the network at low cost. Further, the shift to VMs would let businesses run network services on existing cloud platforms, bringing multiplexing and economy-of-scale benefits to network functionality. Once data can be moved to, from, and between VMs at line rate for all packet sizes, we approach the long-term vision where the line between data centers and network-resident "boxes" begins to blur: both software and network infrastructure could be developed, managed, and deployed in the same fashion.

Progress has been made by network virtualization standards and SDN to provide greater configurability in the network [1–4]. SDN improves flexibility by allowing software to manage the network control plane, while the performance-critical data plane is still implemented with proprietary network hardware. SDN allows for new flexibility in how data is forwarded, but its focus on the control plane prevents dynamic management of the many types of network functionality that rely on the data plane, for example on the information carried in the packet payload. This limits the types of network functionality that can be "virtualized" into software, leaving networks reliant on relatively expensive network appliances based on purpose-built hardware.

Recent advances in network interface cards (NICs) allow high throughput, low-latency packet processing using technologies like Intel's Data Plane Development Kit (DPDK) [5]. This software framework allows end-host applications to receive data directly from the NIC, eliminating the overheads inherent in traditional interrupt-driven, OS-level packet processing. Unfortunately, the DPDK framework has a somewhat restricted set of options for support of virtualization, and on its own cannot support the type of flexible, high performance functionality that network and data center administrators desire.

To improve this situation, we have developed NetVM, a platform for running complex network functionality at line speed (10 Gbps) using commodity hardware. NetVM takes advantage of DPDK's high throughput packet processing capabilities, and adds to it abstractions that enable in-network services to be flexibly created, chained, and load balanced. Since these "virtual bumps" can inspect the full packet data, a much wider range of packet processing functionality can be supported than in frameworks that rely on existing SDN-based controllers manipulating hardware switches. As a result, NetVM makes the following innovations:

1. A virtualization-based platform for flexible network service deployment that can meet the performance of customized hardware, especially for services involving complex packet processing.

2. A shared-memory framework that truly exploits the DPDK library to provide zero-copy delivery to VMs and between VMs (see the sketch at the end of this section).

3. A hypervisor-based switch that can dynamically adjust a flow's destination in a state-dependent (e.g., for intelligent load balancing) and/or data-dependent manner (e.g., through deep packet inspection).

4. An architecture that supports high speed inter-VM communication, enabling complex network services to be spread across multiple VMs.

5. Security domains that restrict packet data access to only trusted VMs.

We have implemented NetVM using the KVM and DPDK platforms; all the aforementioned innovations are built on top of DPDK. Our results show how NetVM can compose complex network functionality from multiple pipelined VMs and still obtain line rate throughputs of 10 Gbps, an improvement of more than 250% compared to existing SR-IOV based techniques. We believe NetVM will scale to even higher throughputs on machines with additional NICs and processing cores.
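To make the shared-memory and inter-VM communication ideas (innovations 2 and 4) concrete, the following is a minimal sketch of the underlying zero-copy pattern. It is not NetVM's actual code, and the names (pkt_desc, desc_ring, map_shared_region) are hypothetical: packet data stays in a huge-page region mapped by both the hypervisor switch and a guest, and only small descriptors holding offsets into that region cross a lock-free ring, so the payload itself is never copied.

    #include <fcntl.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define RING_SLOTS 256                /* power of two keeps indexing cheap */

    struct pkt_desc {                     /* what actually crosses the ring */
        uint64_t offset;                  /* packet start, relative to region base */
        uint32_t len;                     /* packet length in bytes */
    };

    struct desc_ring {                    /* single-producer/single-consumer ring */
        _Atomic uint32_t head;            /* next slot the producer fills */
        _Atomic uint32_t tail;            /* next slot the consumer drains */
        struct pkt_desc slots[RING_SLOTS];
    };

    /* Both sides map the same hugetlbfs file, so an offset into the region
     * is meaningful in either address space even though the virtual base
     * addresses differ. */
    static void *map_shared_region(const char *hugetlbfs_path, size_t bytes)
    {
        int fd = open(hugetlbfs_path, O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return NULL;
        ftruncate(fd, (off_t)bytes);
        void *base = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return base == MAP_FAILED ? NULL : base;
    }

    /* Producer side (e.g., the hypervisor switch): hand a packet that the
     * NIC has already DMAed into the shared region to a VM without copying. */
    static int ring_enqueue(struct desc_ring *r, uint64_t off, uint32_t len)
    {
        uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_SLOTS)
            return -1;                    /* ring full: caller drops or retries */
        r->slots[head % RING_SLOTS] = (struct pkt_desc){ .offset = off, .len = len };
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return 0;
    }

Restricting which VMs may map a given region is what the security domains of innovation 5 build on: a VM outside the trust boundary simply never receives a mapping of the packet pages.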
2 Background and Motivation

This section provides background on the challenges of providing flexible network services on virtualized commodity servers.

2.1 High-Speed COTS Networking

Software routers, SDN, and hypervisor based switching technologies have sought to reduce the cost of deployment and increase flexibility compared to traditional network hardware. However, these approaches have been stymied by the performance achievable with commodity servers [6–8]. These limitations on throughput and latency have prevented software routers from supplanting custom designed hardware [9–11].

There are two main challenges that prevent commercial off-the-shelf (COTS) servers from being able to process network traffic at line rates. First, interrupt-driven packet processing is expensive on modern CPUs, which employ long pipelines, out-of-order and speculative execution, and multi-level memory systems, all of which tend to increase the penalty paid by an interrupt in terms of cycles [12, 13]. When the packet reception rate increases further, the achieved (receive) throughput can drop dramatically in such systems [14]. Second, existing operating systems typically read incoming packets into kernel space and then copy the data to user space for the application interested in it. These extra copies can incur an even greater overhead in virtualized settings, where it may be necessary to copy an additional time between the hypervisor and the guest operating system. These two sources of overhead limit the ability to run network services on commodity servers, particularly ones employing virtualization [15, 16].

The Intel DPDK platform tries to reduce these overheads by allowing user space applications to directly poll the NIC for data. This model uses Linux's huge pages to pre-allocate large regions of memory, and then allows applications to DMA data directly into these pages. Figure 1 shows the DPDK architecture, which runs in the application layer. The poll mode driver allows applications to access the NIC directly without involving kernel processing, while the buffer and ring management systems resemble the memory management systems typically employed within the kernel for holding sk_buffs.

[Figure 1: DPDK's run-time environment over Linux. User applications sit on DPDK's data plane libraries (buffer management, ring management, poll mode drivers, packet flow classification) and an Environmental Abstraction Layer, which run over Linux on the hardware platform.]
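As a rough illustration of this receive path, here is a minimal poll-mode sketch against DPDK's public C API, in the style of DPDK's sample applications rather than code from NetVM; the single port, queue sizes, and pool sizing are assumptions for brevity, and error handling is omitted.

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        struct rte_eth_conf port_conf = {0};    /* default port configuration */
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t port = 0;

        rte_eal_init(argc, argv);               /* maps huge pages, probes NICs */

        /* Packet buffers live in huge-page memory so the NIC DMAs directly
         * into them; this pool plays the role of the kernel's sk_buff pools. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
            8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

        rte_eth_dev_configure(port, 1, 1, &port_conf);  /* 1 RX, 1 TX queue */
        rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(), NULL, pool);
        rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(), NULL);
        rte_eth_dev_start(port);

        for (;;) {
            /* Poll the NIC: a burst arrives with no interrupt and no kernel copy. */
            uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb; i++) {
                /* A network function would inspect the packet via
                 * rte_pktmbuf_mtod(bufs[i], ...) here. */
                rte_pktmbuf_free(bufs[i]);      /* return the buffer to the pool */
            }
        }
    }

Because the mbuf pool already resides in huge pages, the same buffers can in principle be exposed to guest VMs, which is the property that shared-huge-page designs like NetVM's build on.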
While DPDK enables high throughput user space applications, it does not yet offer a complete framework for constructing and interconnecting complex network services. Further, DPDK's passthrough mode, which provides direct DMA to and from a VM, can have significantly lower performance than native I/O¹. For example, DPDK supports Single Root I/O Virtualization (SR-IOV²) to allow multiple VMs to access the NIC, but packet "switching" (i.e., demultiplexing or load balancing) can only be performed based on the L2 address. As depicted in Figure 2(a), when using SR-IOV, packets are switched on

¹ Until Sandy Bridge, the performance was close to half of native performance.