Flowgrind: A TCP Traffic Generator for Developers

Arnd Hannemann (credativ GmbH)

05.10.2016

Overview

Introduction

Flowgrind Architecture

Example measurements

Summary

Overview

Introduction (Motivation)

Flowgrind Architecture

Example measurements

Summary

Measuring

Background: Wireless Mesh Networks
- Creating load anywhere in the network
- Measuring TCP performance between any two nodes
- Testing TCP variants

Tool requirements
- Extensive list of TCP metrics
- Separation of control and test traffic

[Figure: wireless mesh network topology - wired Internet backbone, backbone mesh gateways, backbone mesh routers, routing and non-routing mesh clients; wired, wireless mesh, and wireless access-point connections]

Related works

Feature (Iperf, Iperf3, Netperf, Thrulay, TTCP, NUTTCP; X = supported, # = partial, – = no)

TCP: X X X X X X
UDP: X X X X X X
SCTP: – X X – – –
Other protocols: – – X – – –
Kernel statistics: – – – – X
Interval reports: X X X X # – X
Concurrent tests against same hosts: X X X X – X
Concurrent tests against different hosts: – X – – – –
Distributed tests: – – – – –
Bidirectional test connections: – – – – – #
Test scheduling: – # – – – – –
Traffic generation: – – – – –
Control/test data separation: – – – # – – X

Motivation for a new tool

Shortcomings of existing tools

- Client-server architecture ⇒ hard to generate cross-traffic
- No separation of control and test traffic

Overview

Introduction

Flowgrind Architecture (Architecture, Client-server architecture, RPC)

Example measurements

Summary

Flowgrind

- A distributed network performance measurement tool
- Focuses on TCP testing and debugging
- Knobs to test TCP variants against each other
- Dumps packet headers with libpcap
- Gathers TCP statistics from the kernel (Linux/FreeBSD)

Terminology in Flowgrind

Flows

- One data connection for each flow
- Flows have a source and a destination endpoint
- Test data can be sent in either direction
- Scheduling: flows can run sequentially, in parallel, or can overlap (see the sketch below)
- Individual parameters for each flow

[Figure: wireless mesh network topology with a Flowgrind controller driving a Flowgrind daemon; the RPC control connection is separate from the test connection]
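A minimal scheduling sketch, using only options that also appear in the example commands later in the deck (hostA-hostD are placeholder host names): flow 1 starts 10 s after flow 0 and runs for 20 s, so it overlaps with flow 0's 30 s run.

# two flows with individual endpoints, durations (-T) and an initial delay (-Y) for flow 1
flowgrind -n 2 \
  -F 0 -H s=hostA,d=hostB -T s=30 \
  -F 1 -H s=hostC,d=hostD -T s=20 -Y s=10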

Problems with client-server architecture

Wireless multi-hop network

[Figure: client and server nodes in a wireless multi-hop network]

Client-server architecture

Overview

- Tools like iperf: split into client and server
- Flows can only be established between a client and a server, not between servers
- Architecture implemented in older versions of Flowgrind

Problems with client-server architecture

- For multiple clients: external synchronization of the test start is needed
- Potentially different data handling in client and server (e.g. Thrulay)

Distributed architecture

Controller (flowgrind)
- Parses the test parameters
- Configures all involved daemons
- Presents the results

Daemon (flowgrindd)
- Started on every test node
- Performs the actual tests
- Measures performance metrics

[Figure: wireless mesh network with the Flowgrind controller driving Flowgrind daemons; RPC control connections are separate from the test connections]
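A minimal usage sketch of this split (nodeA and nodeB are placeholder host names; daemon defaults are assumed):

# start the daemon on every node that takes part in tests
flowgrindd

# run the controller anywhere that can reach the daemons, e.g. one flow from nodeA to nodeB
flowgrind -H s=nodeA,d=nodeB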

Remote Procedure Calls (RPC)

RPC in Flowgrind

- Uses XML-RPC
- All calls are initiated by the controller; no RPC between daemons
- Can employ a different IP address / interface to separate control and test traffic

During a test
- The controller periodically queries all daemons for interval results
- Results are formatted and printed as soon as they are received
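For illustration only, what a single hand-rolled XML-RPC call to a daemon could look like on the wire; the control port 5999, the /RPC2 endpoint and the method name get_version are assumptions, not taken from these slides:

curl -s http://daemon-host:5999/RPC2 \
  -H 'Content-Type: text/xml' \
  --data '<?xml version="1.0"?><methodCall><methodName>get_version</methodName><params/></methodCall>'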

Overview

Introduction

Flowgrind Architecture

Example measurements (Wireless Multi-Hop Network with Cross Traffic; AWS example: congestion control algorithms)

Summary

Cross-Traffic in a Wireless Multihop Network

Test scenario

- Measurement performed on a testbed
- Two flows between two distinct pairs of nodes
- Routes overlap; one bottleneck link
- Second flow started after a delay and stopped earlier

Topology

[Figure: topology with nodes A, B, C, D, E, F; C and D are connected by the bottleneck link]

- Flow 1 between nodes A and E
- Flow 2 between nodes B and F


WMN example: Flowgrind arguments

flowgrind -n 2 -i 5 -O b=TCP_CONG_MODULE=reno \
  -F 0 -H s=wlan0.mrouter16/mrouter16,d=wlan0.mrouter8/mrouter8 -T b=900 \
  -F 1 -H s=wlan0.mrouter17/mrouter17,d=wlan0.mrouter9/mrouter9 -T b=300 -Y b=300
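A rough reading of the options, following the flowgrind(1) manual:
- -n 2: run two test flows; -i 5: print interval reports every 5 seconds
- -O b=TCP_CONG_MODULE=reno: select the reno congestion control module on both endpoints of the flow
- -F 0 / -F 1: the options that follow apply only to flow 0 / flow 1
- -H s=HOST/CTRL,d=HOST/CTRL: test traffic uses HOST (the wlan0 addresses), while the RPC control connection uses CTRL (the wired host names), keeping control traffic off the mesh
- -T b=900 / -T b=300: flow duration in seconds (900 s for flow 0, 300 s for flow 1)
- -Y b=300: flow 1 starts only after an initial delay of 300 s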

WMN example: Output

#  ID  begin    end      through    RTT        RTT        RTT        IAT       IAT       IAT
#  ID  [s]      [s]      [Mbit/s]   min        avg        max        min       avg       max
S  0   375.011  380.004  0.288782   12916.913  14135.647  15035.946  30.069    183.367   969.321
R  0   375.008  380.001  0.446299   5378.736   7304.811   8322.028   12.080    138.115   1206.780
S  1   375.008  380.009  0.157245   1284.537   2348.903   3978.513   70.058    418.893   2341.099
R  1   375.009  380.010  0.026211   11766.836  11766.836  11766.836  2919.213  2919.213  2919.213
S  0   380.004  385.000  0.288551   13335.203  14015.217  15029.046  63.087    269.419   1427.218
R  0   380.001  385.003  0.406170   7380.097   8201.946   9628.294   16.043    191.917   987.361

cwnd     ssth  uack  sack  lost  retr  fack  reor  rtt       rttvar   rto       castate  mss   mtu   status
83.000   59    83    0     0     0     0     3     3276.500  50.000   4940.000  open     1448  1500  (n/n)
128.000  107   128   0     0     0     0     3     2879.000  6.000    4252.000  open     1448  1500  (n/n)
44.000   7     44    0     0     0     0     3     2880.500  256.000  4208.000  open     1448  1500  (n/n)
8.000    5     8     0     0     0     0     3     2832.500  149.000  3848.000  open     1448  1500  (n/n)
86.000   59    86    0     0     0     0     3     3654.500  190.000  5072.000  open     1448  1500  (n/n)
142.000  107   142   0     0     0     0     3     3388.500  65.000   4520.000  open     1448  1500  (n/n)

WMN example: Goodput

[Figure: goodput in Mbit/s over time (0-900 s) for Flow 0 (node 16 to node 8) and Flow 1 (node 17 to node 9)]

WMN example: Congestion Window

[Figure: congestion window and slow-start threshold in segments over time (0-900 s)]

Test of congestion control algorithms in AWS

Test scenario

- Measurement performed in a VPC
- Four flows between a pair of nodes
- Four different congestion control algorithms

AWS: Flowgrind arguments

flowgrind -n 4 -H s=172.30.0.122,d=172.30.0.123 -T s=900 \
  -F 0 -O s=TCP_CONGESTION=yeah \
  -F 1 -O s=TCP_CONGESTION=cubic \
  -F 2 -O s=TCP_CONGESTION=highspeed \
  -F 3 -O s=TCP_CONGESTION=htcp
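Here all four flows share the same host pair and send for 900 s (-T s=900); the -O s=TCP_CONGESTION=... options set the Linux TCP_CONGESTION socket option on the sending endpoint of each flow, so every flow runs a different congestion control algorithm (YeAH, CUBIC, Highspeed, H-TCP).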

AWS example: Output

# ID 0 S: 172.30.0.138 (Linux 4.6.0-1-amd64), random seed: 1611955119,
#   sbuf = 12582912/0 [B] (real/req), rbuf = 12582912/0 [B] (real/req),
#   SMSS = 8949 [B], PMTU = 9001 [B], Interface MTU = 9001 (unknown) [B],
#   CC = yeah, duration = 900.003/900.000 [s] (real/req),
#   through = 5.758049/0.000000 [Mbit/s] (out/in),
#   request blocks = 79075/0 [#] (out/in)

# ID 0 D: 172.30.0.139 (Linux 4.6.0-1-amd64), random seed: 1611955119,
#   sbuf = 12582912/0 [B] (real/req), rbuf = 12582912/0 [B] (real/req),
#   SMSS = 1448 [B], PMTU = 9001 [B], Interface MTU = 9001 (unknown) [B],
#   through = 0.000000/5.684553 [Mbit/s] (out/in),
#   request blocks = 0/78065 [#] (out/in),
#   IAT = 0.004/11.529/281.197 [ms] (min/avg/max),
#   delay = 18.708/11481.539/27539.894 [ms] (min/avg/max)
...

AWS example: Goodput

[Figure: goodput in Mbit/s over time (0-900 s) for the four flows: YeAH-TCP, CUBIC, TCP Highspeed, H-TCP]

Overview

Introduction

Flowgrind Architecture

Example measurements

Summary

Summary

Feature (Iperf, Iperf3, Netperf, Thrulay, TTCP, NUTTCP, Flowgrind; X = supported, # = partial, – = no)

TCP: X X X X X X X
UDP: X X X X X X –
SCTP: – X X – – – –
Other protocols: – – X – – – –
Kernel statistics: – – – – X X
Interval reports: X X X X # – X X
Concurrent tests with same hosts: X X X X – X X
Concurrent tests with different hosts: – X – – – – X
Distributed tests: – – – – – X
Bidirectional traffic: – – – – – # X
Test scheduling: – # – – – – – X
Traffic generation: – – – – – X
Control/test data separation: – – – # – – X X

Summary

Flowgrind

- Distributed architecture well suited for complex test scenarios
- Extensive TCP metrics
- Advanced traffic generation features
- https://github.com/flowgrind/flowgrind

Possible future improvements

- Easier multi-core support, better performance
- Support for TCP Fast Open
- Add support for other protocols: UDP/DCCP/SCTP

Thanks for listening. Questions?
