An Analysis of TCP Processing Overhead

David D. Clark, Van Jacobson, John Romkey, and Howard Salwen

Originally published in IEEE Communications Magazine, June 1989, Volume 27, Number 6

AUTHOR'S INTRODUCTION

The Internet's Transmission Control Protocol, or TCP, has proved remarkably adaptable, working well across a wide range of hardware and operating systems, link capacities, and round trip delays. None the less, there has been a background chorus of pessimism predicting that TCP is about to run out of steam, that the next throughput objective will prove its downfall, or that it cannot be ported to the next smaller class of processor. These predictions sometimes disguise the desire to develop an alternative, but they are often triggered by observed performance limitations in the current generation of TCP. However, unless one looks carefully, one may mistake the source of these limitations, and accuse the protocol itself of problems when the actual limit is something else: the memory bandwidth of the host system, a poor implementation, or just a tuning problem.

Innovation proceeds so fast in our field that we are often tempted to jump to something new and supposedly better without understanding what we built the last time. Computer Science leaves a trail of half-analyzed artifacts behind in its rush to the future. This is especially true in performance, when Moore's law offers us a factor-of-two improvement if we just take the next 18 months off. But unless we study what we now have, we are prey to false myths and superstition about the cause of poor performance, which can lead us in fruitless directions when we innovate.

The project in this article had a very simple goal. We wanted to try to understand one aspect of TCP performance: the actual costs that arise from running the TCP program on the host processor, the cost of moving the bytes in memory, and so on. These costs of necessity depend on the particular system, but by taking a typical situation - the Berkeley BSD TCP running on the Unix operating system on an Intel processor - we could at least establish a relevant benchmark.

What we showed was that the code necessary to implement TCP was not the major limitation to overall performance. In fact, in this tested system (and many other systems subsequently evaluated by others) the throughput is close to being limited by the memory bandwidth of the system. We are hitting a fundamental limit, not an artifact of poor design. Practically, other parts of the OS had larger overheads than TCP.

Since this article, we have seen a maturation in the understanding of TCP performance, we have seen better network "science" - a willingness to study and understand what has been built - and we have seen a more realistic set of expectations about performance, both in the research context and in the deployed systems we have to live with every day.

While networks, especially local area networks, have been getting faster, perceived throughput at the application has not always increased accordingly. Various performance bottlenecks have been encountered, each of which has to be analyzed and corrected. One aspect of networking often suspected of contributing to low throughput is the transport layer of the protocol suite. This layer, especially in connectionless protocols, has considerable functionality, and is typically executed in software by the host processor at the end points of the network. It is thus a likely source of processing overhead.

While this theory is appealing, a preliminary examination suggested to us that other aspects of networking may be a more serious source of overhead. To test this proposition, a detailed study was made of a popular transport protocol, the Transmission Control Protocol (TCP) [1]. This article provides results of that study. Our conclusions are that TCP is in fact not the source of the overhead often observed in packet processing, and that it could support very high speeds if properly implemented.

TCP

TCP is the transport protocol from the Internet protocol suite. In this set of protocols, the functions of detecting and recovering lost or corrupted packets, flow control, and multiplexing are performed at the transport level. TCP uses sequence numbers, cumulative acknowledgment, windows, and software checksums to implement these functions.
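The "software checksums" mentioned above are the standard Internet checksum: a 16-bit one's-complement sum that the host processor must compute over every byte it sends or receives. As a rough illustration (a minimal sketch of that well-known algorithm, not the Berkeley code the authors measured; the function name is ours), in C:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * One's-complement sum of 16-bit words, the style of software checksum
     * used by TCP and IP.  Illustrative sketch only: carries are folded back
     * into the low 16 bits and the result is complemented.
     */
    uint16_t internet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        /* Sum the payload as a sequence of 16-bit big-endian words. */
        while (len > 1) {
            sum += (uint32_t)p[0] << 8 | p[1];
            p   += 2;
            len -= 2;
        }

        /* An odd trailing byte is treated as padded with a zero byte. */
        if (len == 1)
            sum += (uint32_t)p[0] << 8;

        /* Fold the carries back in until the sum fits in 16 bits. */
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;
    }

A loop of this kind touches every byte of data, so checksumming, like copying, is a per-byte cost; this is one reason the authors' introduction points at memory bandwidth, rather than protocol logic, as the eventual limit.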
TCP is used on top of a network-level protocol called the Internet Protocol (IP) [2]. This protocol, which is a connectionless or datagram packet delivery protocol, deals with host addressing and routing, but the latter function is almost totally the task of the Internet-level packet switch, or gateway. IP also provides the ability for packets to be broken into smaller units (fragmented) on passing into a network with a smaller maximum packet size. The IP layer at the receiving end is responsible for reassembling these fragments. For a general review of TCP and IP, see [3] or [4].

Under IP is the layer dealing with the specific network technology being used. This may be a very simple layer in the case of a local area network, or a rather complex layer for a network such as X.25. On top of TCP sits one of a number of application protocols, most commonly for remote login, file transfer, or mail.

THE ANALYSIS

This study addressed the overhead of running TCP and IP (since TCP is never run without IP) and that of the operating system support needed by them. It did not consider the cost of the driver for some specific network, nor did it consider the cost of running an application.

The study technique is very simple: we compiled a TCP, identified the normal path through the code, and counted the instructions. However, more detail is required to put our work in context.

The TCP we used is the currently distributed version of TCP for UNIX from Berkeley [5]. By using a production quality TCP, we believe that we can avoid the charge that our TCP is not fully functional.

While we used a production TCP as a starting point for our analysis, we made significant changes to the code. To give TCP itself a fair hearing, we felt it was necessary to remove it from the UNIX environment, lest we be confused by some unexpected operating system overhead. For example, the Berkeley implementation of UNIX uses a buffering scheme in which data is stored in a series of chained buffers called mbufs. We felt that this buffering scheme, as well as the scheme for managing timers and other system features, was a characteristic of UNIX rather than of the TCP, and that it was reasonable to separate the cost of TCP from the cost of these support functions. At the same time, we wanted our evaluation to be realistic. So it was not fair to altogether ignore the cost of these functions.
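For readers unfamiliar with mbufs: a packet in Berkeley UNIX is held as a linked chain of small buffers that must be walked, and often copied, to assemble or consume the data. The structure below is a deliberately simplified sketch of such a chained-buffer scheme; the names and sizes are illustrative and do not match the real 4.3BSD mbuf definition.

    #include <stddef.h>

    #define BUF_DATA_SIZE 112          /* illustrative payload size per buffer */

    /*
     * Simplified chained-buffer in the spirit of the BSD mbuf: each small
     * buffer holds part of a packet and points to the next one in the chain.
     */
    struct chainbuf {
        struct chainbuf *next;         /* next buffer holding this packet */
        size_t           off;          /* offset of valid data in data[]  */
        size_t           len;          /* number of valid bytes           */
        char             data[BUF_DATA_SIZE];
    };

    /* Total length of a packet spread over a chain of buffers. */
    size_t chain_length(const struct chainbuf *b)
    {
        size_t total = 0;
        for (; b != NULL; b = b->next)
            total += b->len;
        return total;
    }

Managing chains of this kind is precisely the sort of operating-system support cost the experiment set out to separate from TCP proper.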
Our approach was to take the Berkeley TCP as a starting point, and modify it to better give a measure of intrinsic costs. One of us (Romkey) removed from the TCP code all references to UNIX-specific functions such as mbufs, and replaced them with working but specialized versions of the same functions. To ensure that the resulting code was still operational, it was compiled and executed. Running the TCP in two UNIX address spaces and passing packets by an interprocess communication path, the TCP was made to open and close connections and pass data. While we did not test the TCP against other implementations, we can be reasonably certain that the TCP that resulted from our test was essentially a correctly implemented TCP.

The compiler used for this experiment generated reasonably efficient code for the Intel 80386. Other experiments we have performed tend to suggest that for this sort of application the number of instructions is very similar for an 80386, a Motorola 68020, or even a RISC chip such as the SPARC.

FINDING THE COMMON PATH

One observation central to the efficient implementation of TCP is that while there are many paths through the code, there is only one common one. While opening or closing the connection, or after errors, special code will be executed. But none of this code is required for the normal case. The normal case is data transfer, while the TCP connection is established. In this state, data flows in one direction, and acknowledgment and window information flows in the other.

In writing a TCP, it is important to optimize this path. In studying the TCP, it was necessary to find and follow it in the code. Since the Berkeley TCP did not separate this path from all the other cases, we were not sure if it was being executed as efficiently as possible. For this reason, and to permit a more direct analysis, we implemented a special "fast path" TCP. When a packet was received, some simple tests were performed to see whether the connection was in
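The tests referred to here are, in the usual style of a TCP fast path, a handful of cheap comparisons that decide whether an arriving segment is the simple, common case and can therefore take the streamlined code. The following is a hedged sketch of such a check in C; the structures, field names, and exact conditions are invented for illustration and are not taken from the Berkeley sources.

    #include <stdbool.h>
    #include <stdint.h>

    enum tcp_state { TCP_ESTABLISHED = 1 /* other states omitted */ };

    struct tcb {                       /* per-connection control block (simplified) */
        enum tcp_state state;
        uint32_t       rcv_nxt;        /* next sequence number expected */
        uint16_t       rcv_wnd;        /* receive window we advertised  */
    };

    struct tcp_seg {                   /* fields already parsed from the header */
        uint32_t seq;
        uint16_t len;
        uint8_t  flags;
    };

    #define TH_SYN 0x02
    #define TH_RST 0x04
    #define TH_URG 0x20

    /*
     * A few cheap comparisons: established connection, no unusual control
     * flags, in-sequence data that fits the offered window.  Anything else
     * falls through to the general-purpose code.
     */
    static bool fast_path_ok(const struct tcb *tp, const struct tcp_seg *seg)
    {
        return tp->state == TCP_ESTABLISHED
            && (seg->flags & (TH_SYN | TH_RST | TH_URG)) == 0
            && seg->seq == tp->rcv_nxt
            && seg->len <= tp->rcv_wnd;
    }

The point of such a predicate is that the common case is recognized in a few instructions, so the cost of all the special cases is paid only when they actually occur.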
