
Evaluating Network Processing Efficiency with Processor Partitioning and Asynchronous I/O

Tim Brecht, G. (John) Janakiraman, Brian Lynn, Vikram Saletore, Yoshio Turner
Enterprise Software and Systems Laboratory
HP Laboratories Palo Alto
HPL-2005-211(R.1)
April 14, 2006*

Keywords: network processing, asynchronous I/O, TCP/IP, multi-core processors

* Internal Accession Date Only. Published in and presented at EuroSys 2006, 18-21 April 2006, Leuven, Belgium. Approved for External Publication. © Copyright 2006 ACM


Evaluating Network Processing Efficiency with Processor Partitioning and Asynchronous I/O

Tim Brecht‡, G. (John) Janakiraman§, Brian Lynn§, Vikram Saletore∗∗, Yoshio Turner§
§Hewlett Packard Laboratories   ∗∗Intel® Corporation   ‡University of Waterloo

ABSTRACT

Applications requiring high-speed TCP/IP processing can easily saturate a modern server. We and others have previously suggested alleviating this problem in multiprocessor environments by dedicating a subset of the processors to perform network packet processing. The remaining processors perform only application computation, thus eliminating contention between these functions for processor resources. Applications interact with packet processing engines (PPEs) using an asynchronous I/O (AIO) programming interface which bypasses the operating system. A key attraction of this overall approach is that it exploits the architectural trend toward greater thread-level parallelism in future systems based on multi-core processors. In this paper, we conduct a detailed experimental performance analysis comparing this approach to a best-practice configured Linux baseline system.

We have built a prototype system implementing this architecture, ETA+AIO (Embedded Transport Acceleration with Asynchronous I/O), and ported a high-performance web server to the AIO interface. Although the prototype uses modern single-core CPUs instead of future multi-core CPUs, an analysis of its performance can reveal important properties of this approach. Our experiments show that the ETA+AIO prototype has a modest advantage over the baseline Linux system in packet processing efficiency, consuming fewer CPU cycles to sustain the same throughput. This efficiency advantage enables the ETA+AIO prototype to achieve higher peak throughput than the baseline system, but only for workloads where the mix of packet processing and application processing approximately matches the allocation of CPUs in the ETA+AIO system, thereby enabling high utilization of all the CPUs. Detailed analysis shows that the efficiency advantage of the ETA+AIO prototype, which uses one PPE CPU, comes from avoiding multiprocessing overheads in packet processing, lower overhead of our AIO interface compared to standard sockets, and reduced cache misses due to processor partitioning.

Categories and Subject Descriptors
D.4.4 [Operating Systems]: Communications Management—Network communication; C.4 [Performance of Systems]: Design Studies; C.5.5 [Computer System Implementation]: Servers

General Terms
Performance, Design, Experimentation

Keywords
Network Processing, Asynchronous I/O, TCP/IP

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
EuroSys'06, April 18–21, 2006, Leuven, Belgium.
Copyright 2006 ACM 1-59593-322-0/06/0004 ...$5.00.

1. INTRODUCTION

Network protocol processing can saturate servers when running high-speed TCP/IP-based applications. High overhead for packet processing can leave few CPU cycles available for actual application processing. This problem is worsening as servers deploy higher bandwidth 10 GigE interfaces. One solution that has been proposed is to offload TCP/IP processing from the host CPUs to the network interface to accelerate network processing and free host cycles for application processing. However, prior TCP offload investigations [9, 46, 47, 33, 21] have not demonstrated such improvements and point to the limited capabilities of the TCP Offload Engine (TOE) hardware. While future TOE designs will likely show improvements, the processing elements in TOEs are likely to always be behind the performance curve of mainstream processors. Furthermore, memory limitations on TOE devices can force them to rely on host support in high-load scenarios, degrading performance.

Several evolutionary trends in server hardware and networking software stacks have renewed focus on improving the packet processing efficiency of servers without resorting to wholesale TCP offload [50, 42, 43, 41, 12]. The higher degree of parallelism enabled by multi-core, multi-threaded processors is a good fit for concurrently processing multiple network streams. Increasing cache sizes and shared caches on multi-core chips provide greater potential to hide memory latency by transferring data between processors through the caches (e.g., between the application and the network stack). Memory bandwidth and I/O bandwidth will scale with the number of processing elements as server designs transition to high-bandwidth point-to-point links (from multi-drop shared buses) for memory and I/O [11]. Furthermore, memory controller interfaces and network interfaces will be integrated much closer to processor cores, reducing memory and I/O overheads and bottlenecks. Concurrent with these hardware developments, the socket/TCP/IP network stack is evolving [4] to adopt concepts such as OS-bypass and asynchronous I/O from APIs such as VIA [17] and InfiniBand [26] to reduce operating system overhead and increase I/O concurrency. Our research is aimed at leveraging these trends to support scalable network performance on standard server platforms.

In this direction, we and others [50, 42, 43, 41] have suggested that network packet processing efficiency can be improved by dedicating a subset of the server's processors for network processing and by using asynchronous I/O for communication between the network processing engine and the application. In our architecture, the Embedded Transport Acceleration (ETA) [42, 50] component (developed at Intel® Labs) dedicates ("sequesters") one or more processors or hardware threads in the server to perform all network processing. These dedicated Packet Processing Engines (PPEs) avoid context switches and cache conflicts with application processes, and they avoid costly interrupts by polling applications and network interfaces. PPEs can poll efficiently since they are fully integrated in the system's cache coherence domain. ETA also exposes VIA-style [17] user-level communication to applications, reducing data copies and bypassing the operating system for most I/O operations [26, 3]. The second component of our architecture exposes Asynchronous I/O (AIO) based socket semantics to applications [7, 6, 4, 23]. The APIs support asynchronous versions of traditional socket operations.

...ing event notifications. ETA's polling-based dedicated PPE architecture avoids interrupt-related overheads, for example by substituting softirq mechanisms with lighter-weight in-line function calls, and it results in reduced cache misses by eliminating cache conflicts between the application code and the network stack.

Our second main contribution is to provide a detailed description of the changes required and issues that arose when modifying the event-driven userver to take full advantage of the asynchronous socket I/O model. Our experience with extending the userver to use the AIO API illustrates several new aspects that should be factored in when developing AIO-based applications. In particular, the use of an AIO model necessitates additional steps for registering and deregistering (and, therefore, managing) buffer memory, simplifies the application by eliminating the need to handle partially completed operations, exposes new considerations in the sequencing of the application's socket operations, and enables applications to process events and issue operations using application-specific policies to improve resource management and performance. In addition, given the similarity of our AIO API to the Sockets Extension API [4] being developed in the industry, our implementation of AIO over ETA is potentially useful as an example implementation for the Sockets Extension API.

The remainder of this paper is organized as follows. We discuss related work in Section 2. Section 3 provides an