Packet Processing on the GPU

Matthew Mukerjee, David Naylor, Bruno Vavala
Carnegie Mellon University

ABSTRACT

Packet processing in routers is traditionally implemented in hardware; specialized ASICs are employed to forward packets at line rates of up to 100 Gbps. Recently, however, improving hardware and an interest in complex packet processing have prompted the networking community to explore software routers; though they are more flexible and easier to program, achieving forwarding rates comparable to hardware routers is challenging.

Given the highly parallel nature of packet processing, a GPU-based router architecture is an inviting option (and one which the community has begun to explore). In this project, we explore the pitfalls and payoffs of implementing a GPU-based software router.

1. INTRODUCTION

There are two approaches to implementing packet processing functionality in routers: bake the processing logic into hardware, or write it in software. The advantage of the former is speed; today's fastest routers perform forwarding lookups in dedicated ASICs and achieve line rates of up to 100 Gbps (the current standard line rate for a core router is 40 Gbps) [2]. On the other hand, the appeal of software routers is programmability; software routers can more easily support complicated packet processing functions and can be reprogrammed with updated functionality.

Traditionally, router manufacturers have opted for hardware designs since software routers could not come close to achieving the same forwarding rates. Recently, however, software routers have begun gaining attention in the networking community for two primary reasons:

1. Commodity hardware has improved significantly, making it possible for software routers to achieve line rates. For example, PacketShader forwards packets at 40 Gbps [2].

2. Researchers are introducing increasingly more complex packet processing functionality. A good example of this trend is Software Defined Networking [5], wherein each packet is classified as belonging to one or more flows and is forwarded accordingly; moreover, the flow rules governing forwarding behavior can be updated on the order of minutes [5] or even seconds [3].

Part of this new-found attention for software routers has been an exploration of various hardware architectures that might be best suited for supporting software-based packet processing. Since packet processing is naturally a SIMD application, a GPU-based router is a promising candidate. The primary job of a router is to decide, based on a packet's destination address, through which output port the packet should be sent. Since modern GPUs contain hundreds of cores [8], they could potentially perform this lookup on hundreds of packets in parallel. The same goes for more complex packet processing functions, like flow rule matching in a software-defined network (SDN).

This project is motivated by two goals: first, we seek to implement our own GPU-based software router and compare our experience to that described by the PacketShader team (see §2). Given time (and budget!) constraints, it is not feasible to expect our system to surpass PacketShader's performance. Our second goal is to explore scheduling the execution of multiple packet processing functions (e.g., longest prefix match and firewall rule matching).

The rest of this paper is organized as follows: first we review related work that uses the GPU for network processing in §2. Then we explain the design of our system in §3. §4 describes our evaluation setup and presents the results of multiple iterations of our router. Finally, we discuss our system and future work in §5 and conclude in §6.
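To make the parallelism argument concrete, the sketch below assigns one CUDA thread to each packet in a batch; every thread independently looks up the output port for its packet. This is an illustration only, not code from any of the systems discussed here: the kernel name and the flat 65,536-entry next-hop table (indexed by the top 16 bits of the destination address, a gross simplification of real forwarding tables) are our own assumptions.

    #include <stdint.h>

    // Illustrative kernel: one thread per packet, each performing an
    // independent output-port lookup. The flat 2^16-entry next-hop table
    // is a simplifying assumption, not a realistic forwarding structure.
    __global__ void lookup_ports(const uint32_t *dst_addrs, // one per packet
                                 const uint8_t  *next_hop,  // 2^16 entries
                                 uint8_t        *out_ports, // one per packet
                                 int num_packets)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < num_packets)
            out_ports[i] = next_hop[dst_addrs[i] >> 16];
    }

Launched as lookup_ports<<<(n + 255) / 256, 256>>>(...), a batch of n packets is spread across the GPU's cores with no interaction between threads; this is exactly the SIMD shape that makes packet processing a natural fit for the GPU.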
2. RELATED WORK

We are by no means the first to explore the use of a GPU for packet processing. We briefly summarize here the contributions of some notable projects.

PacketShader [2] implements IPv4/IPv6 forwarding, IPsec encryption, and OpenFlow [5] flow matching on the GPU using NVIDIA's CUDA architecture. They were the first to demonstrate the feasibility of multi-10Gbps software routers.

A second key contribution from PacketShader is a highly optimized packet I/O engine. By allocating memory in the NIC driver for batches of packets at a time rather than individually, and by removing unneeded meta-data fields (parts of the Linux networking stack needed by endhosts are never used in a software router), they achieve much higher forwarding speeds, even before incorporating the GPU. Due to time constraints, we do not make equivalent optimizations in our NIC driver, and so we cannot directly compare our results with PacketShader's.

Gnort [11] ports the Snort intrusion detection system (IDS) to the GPU (also using CUDA). Gnort uses the same basic CPU/GPU workflow introduced by PacketShader (see §3); its primary contribution is implementing fast string pattern-matching on the GPU.

Hermes [12] builds on PacketShader by implementing a CPU/GPU software router that dynamically adjusts batch size to simultaneously optimize multiple QoS metrics (e.g., bandwidth and latency).

3. BASIC SYSTEM DESIGN

We use the same system design presented by PacketShader and Gnort. Pictured in Figure 1, packets pass through our system as follows: (1) As packets arrive at the NIC, they are copied to main memory via DMA. (2) Software running on the CPU copies these new packets to a buffer until a sufficient number have arrived (see below) and (3) the batch is transferred to memory on the GPU. (4) The GPU processes the packets in parallel and fills a buffer of results on the GPU, which (5) the CPU copies back to main memory when processing has finished. (6) Using these results, the CPU instructs the NIC(s) where to forward each packet in the batch; (7) finally, the NIC(s) fetch(es) the packets from main memory with another DMA and forwards them.

[Figure 1: Basic System Design. (1) Incoming packets are DMAed from the NIC to main memory; (2) the CPU fills a buffer of packets; (3) the batch is copied to GPU global memory; (4) the GPU's streaming multiprocessors (SMs) process the packets; (5) results are copied back to main memory; (6) the CPU instructs the NIC where to forward each packet; (7) outgoing packets are DMAed back to the NIC.]

3.1 GPU Programming Issues

Pipelining. As with almost any CUDA program, ours employs pipelining for data transfers between main memory and GPU memory. While the GPU is busy processing a batch of packets, we can utilize the CPU to copy the results from the previous batch of packets from the GPU back to main memory and copy the next batch of packets to the GPU (Figure 2). In this way, we never let CPU cycles go idle while the GPU is working.

[Figure 2: Pipelined Execution. On a shared timeline, the CPU's host-to-device and device-to-host memory copies for neighboring batches overlap with the GPU's processing of the current batch.]
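Figure 2's overlap is what CUDA streams express: operations issued to different streams may execute concurrently, so the copies for one batch can proceed while the kernel for another runs. The following double-buffered loop is a minimal sketch under our own assumptions (a hypothetical process_batch kernel, caller-provided pinned buffers, arbitrary launch dimensions); it is not our actual code, and error checking is omitted.

    // Double-buffered pipelining with two CUDA streams (sketch).
    __global__ void process_batch(const char *pkts, char *results);

    void run_pipeline(char *h_pkts[2],    char *d_pkts[2],     // pinned host /
                      char *h_results[2], char *d_results[2],  // device buffers
                      size_t batch_bytes, size_t result_bytes, int num_batches)
    {
        cudaStream_t stream[2];
        cudaStreamCreate(&stream[0]);
        cudaStreamCreate(&stream[1]);

        for (int b = 0; b < num_batches; b++) {
            int s = b % 2;                     // alternate buffers and streams
            cudaStreamSynchronize(stream[s]);  // wait until buffer s is free
            // (here the host would refill h_pkts[s] with the next batch)
            cudaMemcpyAsync(d_pkts[s], h_pkts[s], batch_bytes,
                            cudaMemcpyHostToDevice, stream[s]);
            process_batch<<<128, 256, 0, stream[s]>>>(d_pkts[s], d_results[s]);
            cudaMemcpyAsync(h_results[s], d_results[s], result_bytes,
                            cudaMemcpyDeviceToHost, stream[s]);
        }
        cudaStreamSynchronize(stream[0]);
        cudaStreamSynchronize(stream[1]);
    }

Note that pinned host memory (e.g., allocated with cudaMallocHost) is required for the copies to be truly asynchronous; with pageable memory the copies are staged and effectively synchronous.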
Batching. Although the GPU can process hundreds of packets in parallel (unlike the CPU), there is, of course, a cost: non-trivial overhead is incurred in transferring packets to the GPU and copying results back from it. To amortize this cost, we process packets in batches. As packets arrive at our router, we buffer them until we have a batch of some fixed size BATCH_SIZE, at which point we copy the whole batch to the GPU for processing.

Though batching improves throughput, it can have an adverse effect on latency. The first packet of a batch arrives at the router and is buffered until the remaining BATCH_SIZE - 1 packets arrive, rather than being processed right away. To ensure that no packet suffers unbounded latency (i.e., in case there is a lull in traffic and the batch doesn't fill), we introduce another parameter: MAX_WAIT. If a batch doesn't fill after MAX_WAIT milliseconds, the partial batch is transferred to the GPU for processing. We evaluate this tradeoff in §4.

Mapped Memory. Newer GPUs (such as ours) have the ability to directly access host memory that has been pinned and mapped, eliminating steps (3) and (5). Though data clearly still needs to be copied to the GPU and back, using mapped memory rather than explicitly performing the entire copy at once allows CUDA to copy individual lines "intelligently" as needed, overlapping copies with execution. We evaluate the impact of using mapped memory in §4 as well.
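Concretely, CUDA's mapped (zero-copy) memory works as sketched below: host buffers are allocated pinned and mapped, and the kernel is handed device-side aliases for them, after which reads and writes travel over the bus on demand. The buffer names, sizes, and process_batch kernel are our own illustrative assumptions, not our system's actual code.

    #include <stdint.h>

    struct Packet { char data[1514]; };   // illustrative packet layout
    __global__ void process_batch(const Packet *pkts, uint8_t *results, int n);

    void process_with_mapped_memory(int batch_size)
    {
        cudaSetDeviceFlags(cudaDeviceMapHost);  // enable mapping up front

        // Pinned + mapped host buffers (replaces explicit steps (3) and (5)).
        Packet *h_pkts;  uint8_t *h_results;
        cudaHostAlloc((void **)&h_pkts, batch_size * sizeof(Packet),
                      cudaHostAllocMapped);
        cudaHostAlloc((void **)&h_results, batch_size, cudaHostAllocMapped);

        // Device-side pointers aliasing the same physical host memory.
        Packet *d_pkts;  uint8_t *d_results;
        cudaHostGetDevicePointer((void **)&d_pkts, h_pkts, 0);
        cudaHostGetDevicePointer((void **)&d_results, h_results, 0);

        // The kernel's loads and stores are serviced from host memory as
        // needed, overlapping transfer with execution.
        process_batch<<<128, 256>>>(d_pkts, d_results, batch_size);
        cudaDeviceSynchronize();
    }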
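Returning to batching: BATCH_SIZE and MAX_WAIT together define a simple flush policy on the host, sketched below. Packet, rx_poll (a non-blocking receive), and dispatch_to_gpu (standing in for steps (3) through (5)) are hypothetical, and the constants are illustrative rather than the values we evaluate in §4.

    #include <chrono>

    struct Packet { char data[1514]; };            // illustrative
    constexpr int BATCH_SIZE = 1024;               // illustrative values
    constexpr int MAX_WAIT   = 1;                  // milliseconds

    bool rx_poll(Packet *p);                       // hypothetical receive
    void dispatch_to_gpu(const Packet *b, int n);  // hypothetical: steps (3)-(5)

    void batching_loop()
    {
        using Clock = std::chrono::steady_clock;
        static Packet batch[BATCH_SIZE];
        int count = 0;
        Clock::time_point first_arrival;

        for (;;) {
            Packet p;
            if (rx_poll(&p)) {
                if (count == 0)                    // the batch timer starts
                    first_arrival = Clock::now();  // with its first packet
                batch[count++] = p;
            }
            bool full    = (count == BATCH_SIZE);
            bool expired = count > 0 &&
                Clock::now() - first_arrival >= std::chrono::milliseconds(MAX_WAIT);
            if (full || expired) {                 // flush full or stale batch
                dispatch_to_gpu(batch, count);
                count = 0;
            }
        }
    }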
3.2 Packet Processing

A router performs one or more packet processing functions on each packet it forwards. This includes deciding where to forward the packet (via a longest prefix match lookup on the destination address or matching against a set of flow rules) and might also include deciding whether or not to drop the packet (by comparing the packet header against a set of firewall rules, or comparing the header and payload to a set of intrusion detection system rules). In our system we implement two packet processing functions: longest prefix match and firewall rule matching.

3.2.2 Firewall Rule Matching

A firewall rule is defined by a set of five values (the "5-tuple"): source and destination IP address, source and destination port number, and protocol. The protocol identifies who should process the packet after IP (e.g., TCP, UDP, or ICMP). Since most traffic is either TCP or UDP [10], the source and destination port numbers are also part of the 5-tuple even though they are not part of the network-layer header. If the protocol is ICMP, these fields can instead be used to encode the ICMP message type and code.

Each rule is also associated with an action (usually ACCEPT or REJECT). If a packet matches a rule (that is, the values in the packet's 5-tuple match the corresponding values in the rule), the rule's action is applied to the packet.
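As an illustration of how naturally this maps onto the GPU, the sketch below checks each packet's 5-tuple against the rule list with one thread per packet and a linear, first-match-wins scan. The structs, the mask convention (a zero mask means "match any"), and the kernel are our own assumptions, not our system's actual data layout; real rule sets also support prefixes and port ranges, which this sketch ignores.

    #include <stdint.h>

    struct FiveTuple {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;  // or ICMP type/code when proto is ICMP
        uint8_t  proto;
    };

    struct Rule {
        FiveTuple val;     // field values, assumed stored pre-masked
        FiveTuple mask;    // per-field masks; all-zero = wildcard
        uint8_t   action;  // e.g., ACCEPT or REJECT
    };

    __device__ bool matches(const FiveTuple &p, const Rule &r)
    {
        return (p.src_ip   & r.mask.src_ip)   == r.val.src_ip   &&
               (p.dst_ip   & r.mask.dst_ip)   == r.val.dst_ip   &&
               (p.src_port & r.mask.src_port) == r.val.src_port &&
               (p.dst_port & r.mask.dst_port) == r.val.dst_port &&
               (p.proto    & r.mask.proto)    == r.val.proto;
    }

    // One thread per packet; the first matching rule determines the action.
    __global__ void firewall(const FiveTuple *pkts, int num_packets,
                             const Rule *rules, int num_rules,
                             uint8_t *actions, uint8_t default_action)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= num_packets) return;

        actions[i] = default_action;
        for (int r = 0; r < num_rules; r++) {
            if (matches(pkts[i], rules[r])) {
                actions[i] = rules[r].action;
                break;
            }
        }
    }

Because each packet is checked independently, a batch of packets again parallelizes trivially across GPU threads.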