
A Performance Comparison of TCP/IP and MPI on FDDI, Fast Ethernet, and Ethernet

Saurab Nog and David Kotz
Department of Computer Science, Dartmouth College, Hanover, NH
saurab@cs.dartmouth.edu    dfk@cs.dartmouth.edu

Dartmouth Technical Report PCS-TR, November 1995. Revised January 1996.

Abstract

Communication is a very important factor affecting distributed applications. Getting a close handle on network performance, both bandwidth and latency, is thus crucial to understanding overall application performance. We benchmarked some of the metrics of network performance using two sets of experiments, namely roundtrip and datahose. The tests were designed to measure a combination of network latency, bandwidth, and contention. We repeated the tests for two protocols, TCP/IP and MPI, and three networks: 100 Mbit/s FDDI (Fiber Distributed Data Interface), 100 Mbit/s Fast Ethernet, and 10 Mbit/s Ethernet. The performance results provided interesting insights into the behaviour of these networks under different load conditions, and into the software overheads associated with an MPI implementation (MPICH). This document presents details about the experiments, their results, and our analysis of the performance.

Keywords: Network Performance, FDDI, Fast Ethernet, Ethernet, MPI

Revised January 1996 to emphasize our use of a particular MPI implementation, MPICH.

Copyright 1995 by the authors.

Introduction

This document presents benchmarking tests that measured and compared the performance of TCP/IP and MPI (Message Passing Interface) implementations on different networks, and their results. We chose these protocols, TCP/IP and MPI, because:

TCP/IP:
- Greater flexibility and user control
- Higher performance
- Wide usage and support

MPI:
- Programming ease
- A widely accepted standard for distributed computing applications

Since our MPI implementation, MPICH, runs on top of p4, which in turn uses TCP/IP, a performance comparison of TCP/IP and MPI gives a good estimate of the overheads and advantages associated with using the higher-level abstraction of MPI as opposed to TCP/IP sockets.

We also compared three different networks to understand the inherent hardware and software limitations of our setup. We repeated the same set of experiments on:
- 100 Mbit/s FDDI (Fiber Distributed Data Interface) token ring
- 100 Mbit/s Fast Ethernet
- 10 Mbit/s Ethernet

In most cases we cross-checked our results with results from the publicly available ttcp package. The results seem to be in very close agreement, including the presence of spikes and other anomalies.

Setup Details

We ran two different tests, roundtrip and datahose, on all six combinations of protocol (TCP/IP or MPI) and network (FDDI, Fast Ethernet, or Ethernet). Although our experiments were not specific to any implementation of the hardware or software standards, the results were necessarily dependent on the specific implementations. Other implementations may have different performance.

Hardware and Software

The FDDI and Ethernet experiments used IBM RS/6000 workstations running AIX. The FDDI was an isolated network, and the Ethernet experiments were run when the Ethernet was lightly loaded, usually overnight. The Fast Ethernet tests used Pentium-based PCs running FreeBSD on an isolated network. Both the RS/6000s and the PCs ran MPICH (the July 1995 release) on top of p4.

Environment

All tests except those involving Ethernet were conducted in an isolated mode. For the Ethernet experiments we did not disconnect ourselves from the rest of the world, so we were careful to choose quiet times of the day for testing. This minimized the risk of interference from external traffic. We repeated each test several times, running a few thousand iterations of each test every time. We chose the best values (maximum for bandwidths and minimum for times) as our final result. Thus our numbers approximate best-case performance results.

All TCP/IP tests used the TCP_NODELAY option, except the Fast Ethernet datahose (FreeBSD often crashed when flooded with small messages). Normally the TCP/IP protocol delays small-packet transmission in the hope of gaining an advantage by coalescing a large number of small packets. This delay results in artificially high latency times for small packets. We used TCP_NODELAY to overcome this behaviour and eliminate the wait period. We also set both the sending-side and receiving-side kernel buffers to the maximum size allowed on the RS/6000s (the FreeBSD Pentiums allow even larger buffers). The increased buffers were prompted by the observation that kernel buffers are the bottleneck for most network operations. For MPI we used the default configuration in all cases.
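As an illustration of the socket setup described above, the fragment below sketches how TCP_NODELAY and the enlarged kernel buffers are typically requested with setsockopt(). It is a minimal sketch, not the actual test code; configure_test_socket and the 64 KB buffer size are placeholders (the experiments used the largest size each kernel would accept).

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */

    /* Illustrative buffer size; the tests used the platform maximum. */
    #define SOCKET_BUFFER_SIZE (64 * 1024)

    int configure_test_socket(int sock)
    {
        int one = 1;
        int bufsize = SOCKET_BUFFER_SIZE;

        /* Disable Nagle's algorithm so small messages are sent immediately. */
        if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
            perror("TCP_NODELAY");
            return -1;
        }
        /* Enlarge the kernel send and receive buffers. */
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0 ||
            setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0) {
            perror("SO_SNDBUF/SO_RCVBUF");
            return -1;
        }
        return 0;
    }

Such a routine would be called once on each connected socket before entering the timing loops.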
Performance Measures

In these tests we were interested in the following measures of network performance:

- Total Bandwidth: the sum of the individual bandwidths of the concurrent processes. It signifies the effective bandwidth usage for the set of all participating hosts.
- Bandwidth per Pair of Processes: the average bandwidth each individual pair of processes was able to use.
- Time per Iteration: the average time it takes to complete one iteration of roundtrip or datahose.

Roundtrip

The roundtrip test is designed to measure the following parameters:
- network latency
- network contention overhead
- synchronization overheads

All machines were connected to the same network but were paired for communication purposes. Each host talks to its designated partner only. There is a master-slave relationship between the two processes in each pair. The master initiates the communication and then measures the time interval it takes for the slave to receive the message and send it back. This completes one roundtrip iteration. Our test does many roundtrip iterations in a burst to determine the average time per iteration and the network bandwidth achieved. Many iterations are necessary to overcome clock granularity, cold-cache effects, and accuracy problems.

All processes synchronize before switching to the next message size. This serves two major purposes:

- Each message size is timed separately, restricting timing and other errors to the phase in which they occurred.
- Synchronization prevents processes from running ahead of or lagging behind the group.

Roundtrip is a store-and-forward test. Each slave process receives the complete message from its master and then sends it back. Since at any given time only one process in a pair is writing data to the network, there is no contention between the master and slave processes of a pair.

We ran roundtrip on 1, 2, 3, and 4 pairs of processors simultaneously (up to 3 pairs on FreeBSD) to determine how contention influences network performance. Results for the various combinations of network and protocol are presented in the following graphs.

The pseudocode for roundtrip is as follows:

    for all interesting message sizes {
        set message_size
        synchronize all master and slave processes

        if (master) {
            /* I am the master */
            start_timing();                 /* initialize timer */
            /* do a large number of iterations */
            for (iteration_count = 0; iteration_count < MAX_ITERATIONS; iteration_count++) {
                /* initiate communication with the slave */
                write(slave, message, message_size);
                /* wait for the reply from the slave */
                read(slave, buffer, message_size);
            }
            stop_timing();                  /* stop timer */
        } else {
            /* I am the slave */
            /* do a large number of iterations */
            for (iteration_count = 0; iteration_count < MAX_ITERATIONS; iteration_count++) {
                /* wait for the master to initiate communication */
                read(master, buffer, message_size);
                /* reply back to the master with the same message */
                write(master, buffer, message_size);
            }
        }   /* end if-else */
    }   /* end for all message sizes */
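For comparison, the same master/slave exchange maps directly onto MPI point-to-point calls. The sketch below is an illustrative reconstruction, not the code used in these experiments; it assumes a single pair with rank 0 as master and rank 1 as slave, and that buffer, message_size, and max_iterations are set up as in the pseudocode above.

    #include <stdio.h>
    #include <mpi.h>

    /* Sketch of one timed phase of roundtrip using MPI (illustrative only). */
    void roundtrip_phase(char *buffer, int message_size, int max_iterations)
    {
        int rank, i;
        double start, elapsed;
        MPI_Status status;

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);            /* synchronize before timing */

        if (rank == 0) {                        /* master */
            start = MPI_Wtime();
            for (i = 0; i < max_iterations; i++) {
                MPI_Send(buffer, message_size, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buffer, message_size, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &status);
            }
            elapsed = MPI_Wtime() - start;
            printf("%d bytes: %f micro-sec/iteration\n",
                   message_size, 1e6 * elapsed / max_iterations);
        } else {                                /* slave */
            for (i = 0; i < max_iterations; i++) {
                MPI_Recv(buffer, message_size, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(buffer, message_size, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
    }

MPICH layers such calls on p4 and ultimately on TCP/IP sockets, which is why the gap between the TCP/IP and MPI curves reflects the MPI software overhead.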
[Figure: Roundtrip, TCP/IP, FDDI -- Total Bandwidth (Mbits/sec) vs. Message Size (bytes), 1-4 pairs]
[Figure: Roundtrip, TCP/IP, FDDI -- Bandwidth per Pair of Processes, Master+Slave (Mbits/sec) vs. Message Size (bytes)]
[Figure: Roundtrip, TCP/IP, FDDI -- Average Time per Iteration (micro-sec) vs. Message Size (bytes)]
[Figure: Roundtrip, MPI, FDDI -- Total Bandwidth (Mbits/sec) vs. Message Size (bytes), 1-4 pairs]
[Figure: Roundtrip, MPI, FDDI -- Bandwidth per Pair of Processes, Master+Slave (Mbits/sec) vs. Message Size (bytes)]
[Figure: Roundtrip, MPI, FDDI -- Average Time per Iteration (micro-sec) vs. Message Size (bytes)]
[Figure: Roundtrip, TCP/IP, Fast Ethernet -- Total Bandwidth (Mbits/sec) vs. Message Size (bytes), 1-3 pairs]
[Figure: Roundtrip, TCP/IP, Fast Ethernet -- Bandwidth per Pair of Processes, Master+Slave (Mbits/sec) vs. Message Size (bytes)]
[Figure: Roundtrip, TCP/IP, Fast Ethernet -- Average Time per Iteration (micro-sec) vs. Message Size (bytes)]
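As an aid to reading the three kinds of graphs (total bandwidth, bandwidth per pair, and time per iteration), the routine below shows one plausible way the plotted quantities can be derived from the raw timings. It is our own reconstruction, not the authors' measurement code, and it assumes that both directions of each round trip count toward bandwidth.

    #include <stdio.h>

    /* elapsed[p] is pair p's measured time in seconds for `iterations`
     * round trips of `message_size` bytes each.  Counting both directions
     * of the round trip toward bandwidth is an assumption. */
    void report_metrics(const double *elapsed, int npairs,
                        long message_size, long iterations)
    {
        double total_bw = 0.0;                              /* Mbits/sec */
        for (int p = 0; p < npairs; p++) {
            double bytes = 2.0 * message_size * iterations; /* out and back */
            total_bw += (bytes * 8.0) / (elapsed[p] * 1.0e6);
        }
        printf("total bandwidth    : %.2f Mbits/sec\n", total_bw);
        printf("bandwidth per pair : %.2f Mbits/sec\n", total_bw / npairs);
        printf("time per iteration : %.1f micro-sec\n",
               1.0e6 * elapsed[0] / iterations);            /* first pair */
    }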