Packet Clustering Introduced by Routers: Modeling, Analysis and Experiments

Chiun Lin Lim1, Ki Suh Lee2, Han Wang2, Hakim Weatherspoon2, Ao Tang1
1 School of Electrical and Computer Engineering, Cornell University
2 Department of Computer Science, Cornell University
[email protected], kslee, hwang, [email protected], [email protected]

Abstract

Utilizing a highly precise network measurement device, we investigate a router's inherent variation in packet processing time and its effect on interpacket delay and packet clustering. We propose a simple pipeline model incorporating the inherent variation, together with two metrics, one to measure packet clustering and one to quantify inherent variation. To isolate the effect of the inherent variation, we begin our analysis with no cross traffic and step through setups where the input streams have different data rates and packet sizes and go through different numbers of hops. We show that a homogeneous input stream with a sufficiently large interpacket gap emerges at the router's output with interpacket delays that are negatively correlated between adjacent values and have symmetrical distributions. We show that for smaller interpacket gaps, the change in packet clustering is smaller. It is also shown that the degree of packet clustering can in fact decrease for a clustered input. We generalize our results by adding cross traffic, and we apply these results to demonstrate how jitter can be reduced by minimizing the interpacket gap. All the results predicted by the model are validated through experiments with real routers.

I. Introduction

For real-world network traffic, the observation that packets tend to cluster together or become bursty after passing through one or multiple routers is well-documented across several timescales [6], [4], [16], [13].
On longer timescales, proposed explanations for burstiness center on input traffic characteristics such as the distribution of users' idle and active times [27], the distribution of file sizes [9], and TCP congestion control [13], [25]. At the packet level, clustering can be attributed to contention and scheduling with cross traffic at the switching fabric and to timing variation in routing table lookup. Is there any other factor that can cause bursty traffic, perhaps on an even finer timescale? Consider an experimental setup where none of the above factors is present: a single packet stream with fixed packet size, constant interpacket gap, and identical destination goes through a single isolated and idle router with no cross traffic. One would expect the router's output packet stream to have the same constant interpacket gap as the input stream. However, it has been observed that the interpacket gap at the output is not constant but instead exhibits variation on the order of 100 ns, and when the experiment is repeated for various interpacket delay constants, the variation is sufficient to induce packet clustering in some experiments [12]. Moreover, if the input stream goes through multiple routers, the clustering effect becomes more prominent. In the absence of external factors, such variation can only be explained by factors inherent in the router design itself. Possible explanations include clock drift, the buffering scheme, and the quantization of packets into cells [7]. Our goal in this paper, though, is not to identify the causes of the variation, but to characterize the effect of the router's inherent variation on interpacket delay and packet clustering. Such a fine-scale investigation was not feasible experimentally prior to [12], as network measurement devices had significant measurement error, see e.g. [20].
BIFOCALS, as introduced in [12], allows exact timing measurement of network packet streams by directly capturing the physical-layer symbol stream in real time and time-stamping it in off-line post-processing. The time-stamps are exact since the precision of the device is smaller than the width of a single symbol. Our investigation can have potential impact on works and applications related to packet clustering as well as interpacket delay and its variation, i.e. jitter. Packet clustering or bursty traffic affects network performance measures such as queuing delay [22] and packet loss rate [20]. It is also an important consideration for resource or bandwidth allocation [19], particularly for protocols involving pre-negotiated resources such as Diffserv [3]. Interpacket delay, on the other hand, has been used to detect congestion [5], estimate bandwidth with packet trains [18] and trace encrypted connections [26]. Lastly, jitter is an important metric for multimedia and real-time protocols such as RTP [23]. As our research focus of interpacket delay variation is directly related to jitter, we will in fact extend and apply the results obtained and show how jitter can be controlled.

Ethernet Frame   After 64/66b      Nominal Data   IPD      IPG
Size [bytes]     Encoding [bits]   Rate [Gbps]    [bits]   [bits]
1526             12588             1              125928   113340
520              4288              6              7194     2906
72               594               3              1980     1386
72               594               6              990      396
72               594               9              660      66

Table I: Interpacket delay and interpacket gap for various packet sizes and data rates. All values except for the Ethernet frame size are measured from the physical layer of 10 Gigabit Ethernet.

In this work, we couple analysis with experiments by proposing a simple model, firmly grounded in experimental validation, to further our understanding of the input-output characteristics of a router with an inherent variation. In particular, we focus on the change in interpacket delay and on how the router's inherent variation is able to induce packet clustering.
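The quantities in Table I can be checked mechanically from the definitions used in this paper: IPG = IPD − packet size (all in bits on the physical layer), and each bit corresponds to 1/10.3125 Gbps ≈ 97 ps on the wire. The following sketch (table values copied from Table I; the helper and constant names are our own, not from the paper) verifies the relationship and converts IPDs to wall-clock time:

```python
# Consistency check of Table I: IPG = IPD - packet size, with all values
# in bits measured on the 10 GbE physical layer after 64/66b encoding.

LINE_RATE_BPS = 10.3125e9  # effective 10 GbE line rate: 10 Gbps * 66/64

# Rows of Table I:
# (frame size [bytes], encoded packet size [bits], data rate [Gbps],
#  IPD [bits], IPG [bits])
TABLE_I = [
    (1526, 12588, 1, 125928, 113340),
    (520,   4288, 6,   7194,   2906),
    (72,     594, 3,   1980,   1386),
    (72,     594, 6,    990,    396),
    (72,     594, 9,    660,     66),
]

def bits_to_ns(bits):
    """Convert a duration expressed in line-rate bits to nanoseconds."""
    return bits / LINE_RATE_BPS * 1e9

for frame, pkt, rate, ipd, ipg in TABLE_I:
    # IPG is the IPD minus the encoded packet length, by definition.
    assert ipg == ipd - pkt
    print(f"{frame}B {rate}G: IPD = {ipd} bits = {bits_to_ns(ipd):.1f} ns")
```

Every row satisfies IPG = IPD − packet size exactly, which is a useful sanity check when reading off the table.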
The paper contributes to the knowledge of networking in the following aspects:
• We propose a simple device-independent model incorporating a router's inherent variation. The model admits a pipeline structure. Using the model, we make several predictions on the properties of the interpacket delay at the router's output: adjacent interpacket delays are negatively correlated (Theorem 3) and the histogram of the interpacket delay is symmetrical in shape (Theorem 4). We also provide a benchmark to quantify the inherent variation of a router.
• A metric is proposed to quantify packet clustering. We show how packet clustering, in terms of this metric, changes as a stream passes through one or multiple routers (Theorem 7 and Section V-B). We also develop conditions under which the output is less clustered than the input (Corollary 8).
• In terms of analysis, we approach the problem by carefully categorizing different regions (Figures 4 and 8), whose packet clustering behavior depends intricately on both data rate and packet size.
• We generalize our results by incorporating cross traffic (Lemma 10 and Theorem 13). We interpret the results obtained in the context of jitter and show how to control jitter.
• We provide qualitative and quantitative explanations for the observed phenomena of [12]. We also verify all the model results with repeatable experiments using SoNIC [15], a software-defined network interface card that achieves the same level of precision as BIFOCALS.

II. Motivation: Observed Phenomena

In this section, we motivate our study by presenting the key phenomena observed in the experiments of [12]. The data presented here are obtained by replicating the experiments using SoNIC [15]. Readers interested in further details should refer to either paper for more information. We first establish some terminology and conventions before describing the various experimental setups and their associated observations.
We start with the basic measurements. The interpacket delay (IPD) is the space, in bits, between the first bit of a packet and the first bit of the subsequent packet. The interpacket gap (IPG) is the space between the last bit of a packet and the first bit of the subsequent packet, i.e. IPG = IPD − packet size. Both IPD and IPG are measured on the physical layer of 10 Gigabit Ethernet (10 GbE), so to be precise, packet size from here on refers to the length of the Ethernet frame after preamble bits and control characters are inserted by 64/66b encoding (see Clause 49 of IEEE 802.3 [2]). So while Ethernet frame sizes range from 72 to 1526 bytes, the actual packet size is longer after 64/66b encoding. Table I shows various Ethernet frame sizes and their corresponding packet sizes. Note that due to 64/66b encoding, the actual capacity of 10 GbE is 10 Gbps × 66/64 = 10.3125 Gbps. For convenience, we will often refer to time in units of bits instead, where 1 bit is equivalent to 97 ps (1/10.3125 Gbps). We say that a stream of packets is homogeneous if all the packets have the same size, the same interpacket delay, and are heading for the same destination. We use shorthand such as 1526B 3G to refer to a homogeneous packet stream with 1526-byte Ethernet frames and 3 Gbps data rate.

The first experiment of [12] sends a homogeneous packet stream through an isolated Cisco 6500 and measures the IPD of the resulting output stream. We repeat the experiment and plot the IPD histogram in Figure 1a. The x-axis is the IPD measured in bits, while the y-axis is the count of each IPD on a log scale.

[Figure 1: Histograms of 1 million observed interpacket delays at the output of a Cisco 6500 for various combinations of data rate and packet size: (a) 1526-byte packets and 1 Gbps data rate, (b) 72-byte packets and 3 Gbps data rate, (c) 520-byte packets and maximum data rate. The dotted red line marks the interpacket delay of the homogeneous packet stream.]
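The histograms above are built from IPDs derived from exact packet-start timestamps. As an illustrative sketch only (this is not SoNIC's actual interface, and the uniform perturbation standing in for the router's inherent variation is an assumption for illustration, not the paper's model), the derivation from timestamps to IPDs looks like this:

```python
# Illustrative only: derive interpacket delays (IPDs) from a sequence of
# packet-start timestamps measured in bits, as would be done before
# binning them into a histogram like Figure 1.
import random

random.seed(0)
NOMINAL_IPD = 125928   # 1526B frames at 1 Gbps (Table I), in bits
N_PACKETS = 10000

# Synthetic timestamps: a homogeneous stream plus a small per-packet
# perturbation (an assumed stand-in for the router's inherent variation).
t, timestamps = 0, []
for _ in range(N_PACKETS):
    timestamps.append(t)
    t += NOMINAL_IPD + random.randint(-1000, 1000)

# IPD[i] = start-of-packet(i+1) - start-of-packet(i), in bits.
ipds = [b - a for a, b in zip(timestamps, timestamps[1:])]
mean_ipd = sum(ipds) / len(ipds)
print(f"mean IPD = {mean_ipd:.0f} bits, "
      f"spread = [{min(ipds)}, {max(ipds)}] bits")
```

The mean IPD stays at the nominal value while individual IPDs scatter around it, which is exactly the shape of the histograms in Figure 1.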