
CAIA Testbed for TEACUP Experiments

Sebastian Zander, Grenville Armitage
Centre for Advanced Internet Architectures, Swinburne University of Technology
Melbourne, Australia
[email protected], [email protected]

CAIA Technical Report 140314B, March 2014

Abstract—This technical report describes how we set up a TEACUP v0.4 testbed to conduct automated TCP experiments. Over the last few decades several TCP congestion control algorithms were developed in order to optimise TCP's behaviour in certain situations. While TCP was traditionally used mainly for file transfers, more recently it is also becoming the protocol of choice for streaming applications, for example YouTube or Netflix [1], [2]. Now there is even an ISO standard called Dynamic Adaptive Streaming over HTTP (DASH) [3]. However, the impact of different TCP congestion control algorithms on TCP-based streaming flows (within a mix of other typical traffic) is not well understood. TEACUP experiments in a controlled testbed allow us to shed more light on this issue.

Index Terms—TCP, experiments, testbed

I. INTRODUCTION

Over the last few decades several TCP congestion control algorithms were developed in order to optimise TCP's behaviour in certain situations. While TCP was traditionally used mainly for file transfers, more recently it is also becoming the protocol of choice for streaming applications, for example YouTube or Netflix [1], [2]. Now there is even an ISO standard called Dynamic Adaptive Streaming over HTTP (DASH) [3]. However, the impact of different TCP congestion control algorithms on TCP-based streaming flows (within a mix of other typical traffic) is not well understood. Experiments in a controlled testbed allow us to shed more light on this issue.

This technical report describes how we set up a TEACUP¹ [4] testbed to conduct automated TCP experiments using real machines. Note that in this report we have replaced all host names and public IP addresses used in our testbed with fake names and private IP addresses for security reasons. The design and implementation of the TEACUP v0.4 software itself is described in more detail in [4].

¹ "TCP Experiment Automation Controlled Using Python"

The rest of the report is organised as follows. Section II describes the overall design of the testbed. Section III describes our PXE boot setup. Section IV describes the setup of the testbed hosts that are used as traffic sources and sinks. Section V describes the setup of the testbed router. Section VI describes the setup of other hardware that is part of the testbed, such as power controllers. Section VII lists a few known issues. Section VIII concludes and outlines future work.

II. OVERALL TESTBED DESIGN

Our testbed is installed in a single rack. It consists of the following hardware:

• Six PC-based testbed hosts that can be used as traffic sources or sinks.
• A PC-based router that routes between the testbed networks.
• A managed switch to which all PCs are connected.
• A PC used for testbed control and data collection.
• Two power controllers to control the power of the testbed hosts and router.
• An 8-port IP KVM that allows remote access to the consoles of the testbed hosts and router.

The six testbed hosts are installed as triple-boot machines that can run Linux (openSUSE 12.3 64bit), FreeBSD 9.2 64bit, and Windows 7 64bit. This allows us to experiment with all major TCP congestion control variants. The testbed router is dual-boot and runs Linux (openSUSE 12.3 64bit) or FreeBSD 9.2 64bit. We mainly focus on a Linux router, since only Linux supports all the Active Queue Management (AQM) techniques we are interested in. However, we also want to have a FreeBSD router as comparison for the AQM techniques FreeBSD supports. To automatically boot machines with the required operating system (OS) we use a PXE-based booting approach.
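Since every experiment depends on each machine having booted the right OS, it can be useful to verify this from the control PC. The following is a minimal sketch, assuming SSH access to the hosts and a Cygwin-style environment on the Windows installs; the booted_os helper is our own, not part of TEACUP:

```shell
#!/bin/sh
# Hypothetical helper: map `uname -s` output to the OS variants
# installed on the triple-boot testbed hosts (names from the text).
# Windows hosts are assumed to report via Cygwin, hence CYGWIN*.
booted_os() {
    case "$1" in
        Linux)   echo "Linux (openSUSE 12.3 64bit)" ;;
        FreeBSD) echo "FreeBSD 9.2 64bit" ;;
        CYGWIN*) echo "Windows 7 64bit" ;;
        *)       echo "unknown" ;;
    esac
}

# Usage, e.g.: booted_os "$(ssh testhost1 uname -s)"
```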
Figure 1 shows a logical picture of the testbed. All testbed hosts, the testbed router and the data and control server are connected to a control network (192.168.1.0/24). The control network is connected to the Internet. Three testbed hosts are connected to the test network 172.16.10.0/24 and three hosts are connected to the test network 172.16.11.0/24. The testbed router routes between the two test networks. The three different networks are switched Gigabit Ethernet networks, each mapped to a different VLAN on our managed switch. For the sake of brevity, in Figure 1 we have omitted the power controllers and the KVM, which are also connected to 192.168.1.0/24.

III. PXE BOOT SETUP

To automatically switch between OSes on the hosts and the router we set up a PXE-based boot solution. All machines are installed with the different OSes on different hard disk partitions. All machines boot via PXE, and the PXE boot configuration determines the hard disk partition to boot. The PXE configuration can be changed on the fly by our automated test script system TEACUP [4].

The overall PXE boot process works as follows. The PXE-enabled client contacts the local DHCP server. The DHCP server will hand out an IP address and the address of a local TFTP server that provides the boot image. The client will download the boot image from the TFTP server and then boot it. In our case the boot image is GRUB4DOS and the GRUB4DOS configuration selects the hard disk partition to boot from.

We first describe the setup of the different components and then describe the boot process in more detail.

A. DHCP server

Our DHCP server runs on FreeBSD. For setting up a DHCP server refer to the technical report [5]. The following configuration was added to our DHCP server. The example shows the entry for testbed host testhost1; the entries for other hosts differ only in the MAC and IP addresses.

    host testhost1 {
        hardware ethernet 00:1e:0b:9d:8f:fb;
        fixed-address 192.168.1.2;
        next-server 192.168.1.254;
        if exists user-class and option user-class = "iPXE" {
            filename "conf.ipxe";
        } else {
            filename "undionly.kpxe";
            option routers 192.168.1.1;
        }
    }

A host is identified by its MAC address (hardware ethernet) and handed out a static IP address (fixed-address). The next-server statement specifies the TFTP server that serves the boot files, in our case the control host 192.168.1.254. If the client is an iPXE [6] client, it is given the name of the configuration file to download from the TFTP server (conf.ipxe). If the client is not an iPXE client, it is given the name of the iPXE ROM to download from the TFTP server and is also told its gateway. Since our testbed hosts and router are not iPXE clients, they will first download the iPXE ROM, reboot, execute the iPXE ROM and then download conf.ipxe from the TFTP server.

B. TFTP server

The control host is a FreeBSD machine and runs the TFTP server. We enabled the TFTP server as follows. In /etc/inetd.conf we removed the # in front of the line:

    tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /tftpboot

The directory /tftpboot is the directory with the boot files. In /etc/rc.conf we added inetd_enable="YES" to start inetd at boot time.

For downloads from within the iPXE configuration, we actually decided to use an HTTP server because it is faster and more configurable. We installed thttpd from the ports tree, in /etc/rc.conf we added thttpd_enable="YES" to start thttpd at boot time, and we configured it as follows (/usr/local/etc/thttpd.conf):

    user=www
    dir=/tftpboot
    port=8080
    chroot
    logfile=/var/log/thttpd.log
    pidfile=/var/run/thttpd.pid

The iPXE ROM image undionly.kpxe and the iPXE configuration files are placed in /tftpboot (the iPXE image can be downloaded from [6]). We use GRUB for DOS [7] to select the boot partition. The file grub.exe must also be placed in /tftpboot.
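The boot chain thus depends on three files being present under /tftpboot: undionly.kpxe, the iPXE configuration, and grub.exe. A quick sanity check could look as follows; the check_bootfiles helper is a sketch of ours, not part of TEACUP:

```shell
#!/bin/sh
# Hypothetical sanity check: verify that the boot files referenced by
# the DHCP and iPXE configuration exist in the TFTP/HTTP document
# root. Prints each missing file and returns non-zero if any is absent.
check_bootfiles() {
    root="$1"   # document root, e.g. /tftpboot
    rc=0
    for f in undionly.kpxe conf.ipxe grub.exe; do
        if [ ! -f "$root/$f" ]; then
            echo "missing: $root/$f"
            rc=1
        fi
    done
    return $rc
}

# Usage, e.g.: check_bootfiles /tftpboot
```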
Figure 1. Testbed overview

C. Clients

We enabled PXE booting for all hosts in the BIOS. For most of our PCs this was straightforward; we just needed to put network boot first in the boot options list. For the testbed router, which is a different type of machine, we first had to enable an option "Enable Option ROM". Then after a reboot a new network boot entry appeared in the boot devices list, and we set this as the first entry.

D. Boot process

Assuming PXE boot is enabled in the BIOS, first the client machine will contact the DHCP server. The DHCP server will hand out the client's IP address and tell the client the IP address of a TFTP server. The client then downloads and executes the iPXE ROM and fetches conf.ipxe, which tries to download a host-specific configuration file from the HTTP server, where ${net0/mac} is substituted by the client's MAC address. If this file is not present the client will download and use grub.exe to boot from the first hard disk. This will then activate GRUB2 installed in the MBR, which allows the OS to be selected via the GRUB2 menu, if one has terminal access. By default, after a timeout, Linux will be booted.

The host-specific ipxe files look as follows. The following is the file for the host testhost1 (conf-macaddr_00:1e:0b:9d:8f:fb.ipxe):

    #!ipxe
    chain http://192.168.1.254:8080/grub.exe --config-file="root (hd0,2); chainloader +1"

In this case the client first downloads grub.exe, which then chain-loads the boot loader installed on partition (hd0,2).
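Since each host needs one such file and the files differ only in MAC address and boot partition, they can easily be generated. The following is a minimal sketch reusing the server address and the conf-macaddr_<mac>.ipxe naming from the example above; the make_host_ipxe helper and the output-directory parameter are our own assumptions:

```shell
#!/bin/sh
# Hypothetical generator for host-specific iPXE files like the one
# shown above. The grub.exe URL and file naming follow the text;
# everything else is an assumption.
make_host_ipxe() {
    mac="$1"    # client MAC address, e.g. 00:1e:0b:9d:8f:fb
    part="$2"   # GRUB partition to boot, e.g. "(hd0,2)"
    dir="$3"    # output directory, e.g. /tftpboot
    cat > "$dir/conf-macaddr_${mac}.ipxe" <<EOF
#!ipxe
chain http://192.168.1.254:8080/grub.exe --config-file="root ${part}; chainloader +1"
EOF
}

# Usage, e.g.: make_host_ipxe 00:1e:0b:9d:8f:fb "(hd0,2)" /tftpboot
```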