KSII Transactions on Internet and Information Systems, Vol. 6, No. 11, Nov. 2012. Copyright © 2012 KSII

Online Games Traffic Multiplexing: Analysis and Effect in Access Networks

Jose Saldana, Julián Fernández-Navajas, José Ruiz-Mas and Luis Casadesus
Communications Technologies Group (GTC) – Aragon Inst. of Engineering Research (I3A)
Dpt. IEC. Ada Byron Building. EINA Univ. Zaragoza, 50018 Zaragoza, Spain
[e-mail: {jsaldana, navajas, jruiz, luis.casadesus}@unizar.es]
*Corresponding author: Jose Saldana

Received August 13, 2012; revised October 24, 2012; revised November 13, 2012; accepted November 13, 2012; published November 30, 2012

Abstract

Enterprises that develop online games have to deploy supporting infrastructures, including hardware and bandwidth resources, in order to provide a good service to users. First Person Shooter games generate high rates of small UDP packets from the client to the server, so the overhead is significant. This work analyzes a method that saves bandwidth by the addition of a local agent which queues packets, compresses headers and uses a tunnel to send a number of packets within a multiplexed packet. The behavior of the system has been studied, showing that significant bandwidth savings can be achieved. For certain titles, up to 38% of the bandwidth can be saved for IPv4. This percentage increases to 54% for IPv6, as this protocol has a bigger overhead. The cost of these bandwidth savings is the addition of a new delay, whose upper bound can be modified. So there is a tradeoff: the greater the added delay, the greater the bandwidth savings. Significant reductions in the number of packets per second generated can also be obtained. Tests have been deployed in an emulated scenario matching an access network, showing that if the number of players is big enough, the added delays can be acceptable in terms of user experience.

Keywords: gaming, multiplexing, compressing, network games, quality of experience, first person shooter

A preliminary version of this work was published in "Bandwidth Efficiency Improvement for Online Games by the use of Tunneling, Compressing and Multiplexing Techniques," Proc. International Symposium on Performance Evaluation of Computer and Telecommunication Systems SPECTS 2011, pp. 227-234, The Hague, Netherlands, June 2011. Some other parts were published in the article "Influence of the Router Buffer on Online Games Traffic Multiplexing," presented at the same conference, pp. 253-258. The study has been extended to another FPS game. The study of the saving in packets per second has been expanded through analytical calculations and their inclusion in the comparisons between analytical and simulation results. The exponential distribution has been added and studied. This work has been partially financed by the CPUFLIPI Project (MICINN TIN2010-17298), the European Social Fund, the MBACToIP Project of the Aragon I+D Agency and Ibercaja Obra Social, and the NDCIPI-QQoE Project of the Catedra Telefonica, Univ. of Zaragoza.

http://dx.doi.org/10.3837/tiis.2012.10.010

1. Introduction

In recent years, online gaming via the Internet has become more and more popular. Some titles have millions of users [1], so the enterprises that develop these games have to face a difficult problem whenever a new title is released: they need hardware and bandwidth resources in order to avoid saturation of their infrastructure. As the success of a new title is not easily predicted, they may have to overprovision these resources to ensure that users who buy the game receive a good service [2]. In [3] a study of the behavior of gamers was presented. It was shown that they are very difficult to satisfy. If they have connection problems, they usually leave and never return, and they tend not to be loyal to a specific server.

Two of the most popular genres of online games are MMORPGs (Massively Multiplayer Online Role Playing Games) and FPSs (First Person Shooters). Reference [4] studied the traffic of MMORPGs, concluding that they share some characteristics such as periodicity, locality, and self-similarity. Another conclusion is that they have less stringent bandwidth and real-time requirements than FPSs.

In FPSs, the actions of the players have to be propagated to the server and to the rest of the players in a very short time, so network delays are very critical. These games produce a high rate of small UDP packets (some tens of bytes) from the client to the server, so the overhead caused by IP/UDP headers is significant. Server-to-client packets are typically bigger.

The design of these games takes into account that the access network is usually the narrowest bottleneck. In fact, in [5] it was shown that some titles were designed "to saturate the narrowest last-mile link," taking into account the maximum available bandwidth in these networks at the moment the game was released. Most access technologies, as in the case of cable or DSL, present an asymmetrical bandwidth, and the uplink is normally the most stringent one.

In order to reduce the workload on the central game server, and to ensure the provision of a good service, some studies [6], [7] have proposed the inclusion of network elements (proxies) next to the access network, which could deploy different tasks and thus alleviate the central server processing workload. On the other hand, techniques for multiplexing packets and compressing headers [8] have also been defined and extensively used so as to reduce overhead in other real-time services, such as VoIP.

Merging these two ideas, we can think about a proxy or a local agent which queues packets, compresses headers and sends multiple packets within a bigger packet, at the cost of adding new delays, mainly caused by the retention time in the queue. Two main benefits can be obtained: a reduction of the number of packets per second the router has to manage, and bandwidth savings, given that small packets have a significant overhead. This proxy could be distributed with the game application in the same way as local servers are distributed with certain titles.

Regarding the possible scenarios where this technique can be applied, network aggregation points where a number of flows are present are the most interesting. A small multiplexing delay may be enough to merge a large number of packets, thus obtaining a significant bandwidth reduction. The first scenario could be the infrastructure of a game provider (Fig. 1a), where proxies can be used in order to transfer workload to network borders. The same technique could be used in LAN parties, where large numbers of players share the same path.
On the other hand, Internet cafés, which have given many users the opportunity of accessing Internet services since the middle of the 1990s, are also an interesting scenario. Nowadays, they still represent an important means of connection for users in some countries [9]. The profile of their users has been studied [10], and gaming has been reported as an important activity. Internet cafés are present all over the world, but they have a special significance in developing countries [11]. This scenario, where many computers share the same Internet connection, is subject to substantial variability: different network technologies depending on the telecommunications infrastructure of the country in question, different routing equipment and network topologies, etc. Bandwidth is considered a scarce resource, which has to be well administrated. It is common for a group of people to go to a café to play a game, so the traffic of these groups of players could be compressed and multiplexed in order to save bandwidth, taking into account that the access network is usually the narrowest bottleneck. The local agent can be placed in different locations in the scenario: it can be included in a local machine (Fig. 1b) or in the computer of one of the players (Fig. 1c). It could even be embedded in the router (Fig. 1d). Thus, it could be able to measure the current traffic distribution of the access network and to use that information to properly tune the multiplexing parameters.

Fig. 1. Scenarios where many players share the same path: a) traffic between proxy servers of the same game; b) players sharing an access network, using a local agent; c) players using the computer of another player in order to create the tunnel; d) local agent embedded in the router. Thick lines represent the traffic of a number of players.

At the other end of the communication, the server would have to implement the demultiplexer and decompressor, which would imply some processing capacity, and some space for the context of each flow, i.e. the information necessary to rebuild the compressed headers (some tens of bytes, as we will see [12]). This does not involve a scalability problem, as the server already stores the state of the game for each player. On the other hand, the savings in terms of bandwidth and packets per second may be beneficial for the server.

If the number of players is sufficiently large, at the expense of adding small delays, a large number of packets could be multiplexed into a larger packet. Bandwidth saving not only affects the gaming traffic, but it can also be beneficial for the background traffic which shares the access with it. Furthermore, multiplexing has another advantage: the number of packets per second the router has to manage will be reduced. There are other applications and scenarios where many real-time flows share the same path, e.g. VoIP trunking, and the use of multiplexing and compressing techniques has been proposed and even standardized [8] for these. Many online games have similar traffic patterns, generating a high rate of tiny packets, and thus presenting a big overhead.

Only client-to-server traffic will be considered here, as bigger savings can be obtained and in many scenarios (e.g. DSL) this traffic is transmitted via the most restrictive link. We have to measure the network impairments that determine the quality experienced by users, in order to properly tune the parameters that define the tradeoff between bandwidth saving and quality.

Although this technique could be applied to other game genres, we will consider FPS traffic in this work, as FPS games have very stringent real-time requirements. The subjective quality mainly depends on delay and packet loss [13]. The System Response Time (SRT: the time the system needs to detect a user event, to process it and to send the updated game state to the local output device) has to be maintained under a certain limit.

A technique named TCM (Tunneling, Compressing and Multiplexing) was presented in [14] and tested by means of simulation. This technique compresses headers, multiplexes packets from different flows and sends them using a tunnel. The traffic of eight FPS games was generated using real traces or synthetic models, and the bandwidth savings were presented. By the addition of small delays, it is possible to obtain bandwidth savings of 30% for the client-to-server traffic of many games, and up to 50% for some others. Another effect of the technique is that packet size increases, depending on the number of merged packets.

The current paper presents a theoretical analysis of the savings and compares this with results obtained using simulation. Taking into account that many scenarios are access networks, a further step has been taken: the study has been extended not only to bandwidth savings, but also to the way these savings can be translated into QoS parameter improvements. The traffic has been sent in an emulated access network scenario, using different buffer policies in the access router. A big buffer is tested, and also a time-limited one, showing its capability for limiting the latency. The results in terms of delay and packet loss for the FPS traffic and also for the background traffic are presented and analyzed.

The rest of the work is organized as follows. The following section describes related work. Section 3 explains and analyzes the tunneling, compressing and multiplexing method. Section 4 details the results. The paper ends with the conclusions.

2. Related Work

Various related topics are considered in this study. First, we consider online gaming traffic. We then study the problem of the access router. The supporting infrastructures of online games are subsequently summarized and, finally, different traffic optimization methods are described.

2.1 Online Gaming Traffic

There is a significant amount of literature regarding the traffic of online games. We only consider active traffic, i.e. traffic generated once the game has started. This traffic has two different characteristics. First, the client application is responsible for communicating the actions of the players to the server, using small packets with a small period. Second, the server calculates the new state of the game and broadcasts it to all the players, using bigger packets whose size depends on the number of players. Ref. [15] presented a method to extrapolate server-to-client traffic, obtained from empirical measurements. Size distributions were obtained for an N-player game from the measured traffic of 2 and 3 players. The client-to-server size distribution was found to be independent of the number of players.

In [5] a 500 million packet trace of Counter Strike was analyzed. It was concluded that the game is designed to saturate the bottleneck, which is the last-mile link. In [16] the characteristics of many online games were analyzed in terms of packet size and inter-packet time. In [17] a survey of different traffic models for 17 popular games can be found. These studies show that such games generate a high rate of small packets. This produces a big overhead, so bandwidth savings can be achieved by means of header compression and multiplexing. In [5] it is also said that the main bottleneck is frequently the number of packets per second that a router can manage, and not only the bandwidth of the access line. The reason for this is that routers are usually designed for big packets, and can experience problems when managing bursts of small ones.

2.2 Access Networks and Size of the Router Buffer

The scenario we are considering may include very different technologies, and the access router is no exception: there is a big variety of them. The problem of sizing the buffer of a router has been studied in depth. In the review presented in [18], Dhamdhere and Dovrolis explained that some years ago the "rule of thumb" of using the bandwidth-delay product was contested by the so-called "Stanford Model", which uses smaller buffers. In the same work, the authors also proposed a time-limited buffer which discards the packets that spend more than a certain time in the queue. This buffer penalizes big packets, but is interesting for real-time multimedia flows, as it maintains the delay under an upper bound. In this paper we compare this approach with the use of bigger buffers.

In [19] a large number of residential accesses were measured, and it was found that many of them had big queues that may add delays of hundreds of milliseconds. In the event of traffic amounts that fill the queue, the delay is sufficient to prevent interactive applications from having an acceptable quality for the user. This can happen if peer-to-peer applications, which tend to saturate the access link, are used at the same time as interactive games. As a consequence, the size of the router buffer has to be studied as an important element when considering the possibility of playing interactive games.

2.3 Infrastructure for Supporting Online Games

The problem of the infrastructure that supports games has also been studied in some works. From the point of view of the user, Ref. [20] presented an algorithm to allow the client to adaptively select the best server for a certain online game. This could allow a group of users to play on the same server, and use multiplexing techniques.

From the point of view of the server, there are two architectures to support the service: centralized and distributed. In the first, there is a server that maintains the state of the game and distributes it to the players. The problem is that the server represents a bottleneck. In distributed architectures [21] there is no need for a central server, as the players exchange the information. But this architecture is generally not used in commercial games. A recent simulation study using the traffic of a popular MMORPG [22] has concluded that P2P propagation schemes are not suitable for this kind of game, taking into account the current status of access networks.

The problem of the scalability of the required infrastructure for these games was also studied by Mauve et al. [6]. They proposed the use of proxies in order to provide robustness and congestion control, to reduce delays and to avoid cheating. Some proxies could be located close to the players, avoiding workload on the central server. Ref. [7] also proposed the use of booster boxes, which would be next to the router and could be aware of the state of the network, providing network support to the applications. As said in the introduction, the solution proposed in the current work could even run in the machine of a player.

2.4 Traffic Optimization Methods

Some IETF protocols for header compression were developed many years ago. First, VJHC [23] presented a method to compress IP/TCP headers. Some years later, IPHC [12] was also able to compress UDP and IPv6 headers. At the same time, CRTP was developed in order to compress IP/UDP/RTP headers. Some years later it was enhanced and named ECRTP. However, these are not suitable for compressing gaming traffic, as games do not use RTP.

These algorithms compress headers in a hop-by-hop way, using the high redundancy of IP, TCP and UDP header fields in order to avoid the sending of some of them. A context is defined, which is first transmitted from the sender to the destination with the first headers. The different header fields are classified into no-change, random, delta and inferred. The first are only sent in full headers. Random fields are sent without compression, while delta fields are codified using fewer bytes than the original size of the field. Finally, inferred fields can be obtained from the fields of other layers, e.g. the length of the packet can be obtained from the corresponding level 2 field (see the sketch at the end of this subsection).

ROHC [24] is a more recent standard, able to compress both IP/UDP/RTP headers and IP/UDP headers. It reduces the impact of context desynchronization by providing a feedback mechanism from the decompressor to the compressor. It uses three different compression levels, and the header can be compressed to one byte [25]. The use of these techniques makes the implementation more difficult [26], and may add higher processing delays.

Real-time services, such as VoIP, videoconferencing or online gaming, have very strict delay requirements. This means that the applications generate high rates of small packets, representing a substantial overhead. Multiplexing solutions can be combined with header compression in order to optimize traffic. Different options have been proposed [8], [27], of which one was standardized [8] for scenarios where many VoIP real-time flows share the same path, as occurs in VoIP trunking. A large number of samples can be included in a single packet while only adding the retention delay corresponding to the inter-packet time. Thus, the greater the number of flows, the better the bandwidth efficiency. This technique is currently being adapted [14], [28] for use with the traffic of FPS games and other interactive services. An adaptation was necessary since the games do not generate RTP packets. The effect of the additional delay and jitter, in terms of a subjective quality estimator, was studied in [29], showing that these techniques can be used without harming user experience.

The advantages of merging packets of interactive services have been shown in other studies. [22] concluded that message aggregation before transmission, adding a small delay, can reduce both bandwidth and global latency in both client-server and P2P schemes. In addition, [30] considered the possibility of multiplexing TCP packets of an MMORPG. These studies tested the results when multiplexing a number of packets from a single user. However, the method proposed in this paper has to be deployed in network aggregation points where a high number of flows may be present. This may allow a substantial bandwidth reduction while adding small delays.
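As a toy illustration of the field classification described above, the sketch below shows compressor-side handling of one header. This is our own minimal example, assuming a hypothetical two-field header and a made-up delta encoding; the real IPHC formats (RFC 2507) are considerably richer:

```python
# Minimal sketch of compressor-side field handling, in the spirit of
# IPHC (RFC 2507). Field names and encodings here are hypothetical.

def compress_header(context, header):
    """Return the bytes sent on the wire for one header."""
    out = bytearray()
    # NO-CHANGE fields (addresses, ports): omitted entirely; the
    # decompressor restores them from the context established by the
    # first full headers.
    # RANDOM fields (e.g. the UDP checksum): sent as-is, uncompressed.
    out += header["udp_checksum"].to_bytes(2, "big")
    # DELTA fields (e.g. IPv4 Identification): only the difference from
    # the previous value is sent, usually in fewer bytes than the field.
    delta = (header["ip_id"] - context["ip_id"]) & 0xFFFF
    out += delta.to_bytes(1, "big") if delta < 255 else b"\xff" + delta.to_bytes(2, "big")
    context["ip_id"] = header["ip_id"]
    # INFERRED fields (e.g. total length): never sent; rebuilt at the
    # receiver from other layers, here the layer-2 frame length.
    return bytes(out)

ctx = {"ip_id": 100}
print(compress_header(ctx, {"udp_checksum": 0x1A2B, "ip_id": 101}).hex())  # 1a2b01
```

Three bytes travel instead of the 28-byte IPv4/UDP header, which is the order of magnitude assumed for compressed headers in Section 3.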

3. Compressing and Multiplexing Method

In this section we analyze the proposed compressing and multiplexing method in terms of packets per second and bandwidth efficiency. Some graphical results are included. The section ends with a description of the different delays affecting game traffic.

3.1 Tunneling, Compressing and Multiplexing Algorithm

In RFC 4170 [8], the IETF described Tunneled Compressed RTP (TCRTP) for the compression and multiplexing of RTP flows. First, ECRTP header compression is applied, and many packets are combined into one larger packet using PPPMux. An L2TP tunnel is then used in order to send the whole multiplexed packet end-to-end.

The proposed solution uses a similar scheme, but in this case the traffic is not RTP. We can therefore only compress IP/UDP headers, using IPHC or ROHC. We will call this method Tunnel-Compress-Multiplex (TCM hereafter). Fig. 2 shows the protocol stack and the structure of a TCM packet, which can be divided into the following parts (a size sketch follows the figure):

• Common header (CH): corresponding to the IP, L2TP and PPP headers.
• PPPMux header (MH): included at the beginning of each compressed packet.
• Reduced header (RH): corresponding to the IP/UDP compressed header of each original packet.
• Payload (P): the UDP payload of the original packets generated by the application.

Fig. 2. TCM: a) protocol stack and b) scheme of a multiplexed packet.
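As a quick numerical companion to Fig. 2, the following sketch (ours) computes the size of one multiplexed packet from the four parts above, using the IPv4 header sizes quantified later in Section 3.2; it is a size calculation only, not an implementation of the encapsulation:

```python
# Size of a TCM packet: one common header, then MH + RH + P per
# native packet (IPv4 values from Section 3.2).
CH = 25   # common header: 20 (IP) + 4 (L2TP) + 1 (PPP)
MH = 2    # PPPMux header, one per multiplexed packet
RH = 4    # average compressed IP/UDP header (IPHC)

def tcm_packet_size(payload_sizes):
    """Bytes of one multiplexed packet carrying the given payloads."""
    return CH + sum(MH + RH + p for p in payload_sizes)

# Ten 41-byte client-to-server payloads in one multiplexed packet:
print(tcm_packet_size([41] * 10))   # 495 bytes vs 10*(28+41) = 690 native
```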

3.2 Theoretical Analysis of the Proposed Method

We now present an analysis of the savings which can be achieved by the use of this technique. As stated in the introduction, packet delay is of considerable importance for this service. We have therefore used a multiplexing policy that maintains packet delay under an upper bound. Two policies were compared in [31]. We will use the policy based on a period, since it adds less delay and jitter. A period, named T, is defined in the multiplexer. A packet including all the arrived packets is sent at the end of each period (Fig. 3). There are two exceptions: if there is no packet to multiplex, nothing will be sent; and if there is only one packet, it will be sent in its native form, as the use of a tunnel would make it bigger.

Fig. 3. Behavior of the multiplexing policy.
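The policy itself is simple; a compact sketch (our own rendering, not the authors' Matlab code) could look as follows:

```python
# Period-based multiplexing policy of Fig. 3: flush the queue every T
# seconds; send nothing if it is empty, and send a lone packet natively.
def run_policy(arrivals, T):
    """arrivals: list of (time, packet); returns (time, [packets]) sends."""
    out, queue, deadline = [], [], T
    for t, pkt in sorted(arrivals):
        while t >= deadline:              # one or more periods have ended
            if queue:                     # mux, or native if len(queue)==1
                out.append((deadline, queue))
                queue = []
            deadline += T
        queue.append(pkt)
    if queue:
        out.append((deadline, queue))
    return out

# Three packets with T = 25 ms: the first two are multiplexed together.
print(run_policy([(0.003, "a"), (0.010, "b"), (0.030, "c")], 0.025))
```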

We will refer to the packets generated by the application as native, in contrast to multiplexed (mux) packets. With this policy, the average retention delay will be T/2, and its upper bound will be T.

As we have seen in the Related Work section, the reduction in terms of packets per second is an interesting advantage of multiplexing. We will first calculate the multiplexed packet rate, taking into account that a packet will be sent in every period in which a native packet has arrived at the multiplexer. The expression tends to the inverse of the period as the number of players grows:

pps_mux = Pr(k > 0) / T    (1)

Next, in order to calculate the bandwidth reduction, we first have to obtain the expressions for the number of bytes sent in a period if the traffic is native or multiplexed. We will denote the number of arrived packets in a period as k, and NH denotes the size of an IP/UDP header. The expected value of the sum of the sizes of all the E[k] native packets arrived will be:

S_native = E[k] (NH + E[P])    (2)

In order to calculate the number of bytes sent in a period when multiplexing is applied, we have to distinguish the case of having one packet, in which the size will be the same as in (2), and the case of having more than one, in which the multiplexing scheme will be applied. We will have a common header CH plus the size of a number of compressed packets (MH + E[RH] + E[P]). So we can obtain an expression for the size of a multiplexed packet:

S_mux = Pr(k=1) (NH + E[P]) + Pr(k>1) (CH + E[k|k>1] (MH + E[RH] + E[P]))    (3)

In order to illustrate the bandwidth reduction, the results are presented in terms of the bandwidth relationship BWR, which is the ratio of the mux to the native bandwidth. Thus, taking into account that the period is the same for native and mux packets, we can obtain BWR dividing the bandwidths (BW_mux and BW_native), and using (3) and (2):

BWR = BW_mux / BW_native = (S_mux / T) / (S_native / T) = S_mux / S_native =
    = Pr(k=1) / E[k]
    + Pr(k>1) · CH / (E[k] (NH + E[P]))
    + Pr(k>1) · (E[k|k>1] / E[k]) · (MH + E[RH] + E[P]) / (NH + E[P])    (4)

The first term is a result of the decision of not multiplexing when having only one packet.

The second term expresses how the common header is shared by the whole packet, and it becomes smaller as the number of multiplexed packets increases. The third term depends on the compressing algorithm, and on the average packet size generated by the application.

So, if we have a big number of users, or a long period, the number of multiplexed packets will be big, and the first and second terms will become negligible. Regarding the third term, Pr(k>1) ≈ 1, and also E[k|k>1] / E[k] ≈ 1. We can thus obtain the expression for an asymptote for BWR:

BWR_a = (MH + E[RH] + E[P]) / (NH + E[P])    (5)

We observe that the smaller the value of E[P], the smaller the value of the asymptote. The technique should therefore provide a good performance in applications that generate a high rate of small packets, as FPS games do. Logically, it is expected that the greater the number of players, the better the performance, as the same number of packets can be multiplexed with smaller added delays. An increase of T will also be beneficial for BWR, but we cannot increase it indefinitely, as players are very sensitive to delay.

In order to obtain some preliminary numerical results, we now use the real parameters of some commercial games and those used in the proposed protocols:

• NH: 28 bytes for IPv4/UDP and 48 bytes for IPv6/UDP.
• CH: 25 bytes for IPv4 (20 correspond to IP, 4 to L2TP and 1 to the PPP header). For IPv6, CH = 45 bytes.
• MH: 2 bytes, corresponding to PPPMux.
• E[P]: the value of the UDP payload depends on the application used.
• E[k]: the average number of packets arriving at the multiplexer in a period, determined by the rates of the N players of the game.
• E[RH]: in this example, as a worst-case scenario, we have considered IPHC compressing UDP headers to 2 bytes, by using only 8 bits for the CID field and avoiding the optional checksum. IPv4 and IPv6 headers can also be compressed to 2 bytes. So we will consider an average of 4 bytes for compressed headers, and 28 or 48 bytes for full headers, which are sent every 5 seconds (the default F_MAX_TIME parameter of IPHC [12]).

Using these values, we obtain the percentages shown in Table 1. These are the values of the asymptote, i.e. the best BWR that can be achieved if the number of users and the period are sufficiently large. They are obtained for IPv4 and IPv6. We have selected some popular games, and the specific values have been obtained from [16] and [17]. The values for Halo 2 refer to a console with only one user [32]. The values obtained are significant: all the games allow bandwidth savings above 30% for IPv4, and this saving can increase to 54% if IPv6 is used in some titles.

Table 1. BWR Asymptote Values for Different Games

Game               Engine      E[P] (bytes)   BWR_a IPv4   BWR_a IPv6
Unreal T 2003      Unreal 2.0     29.5           62%          46%
Quake III          IdTech3        36.15          65%          50%
Quake II           IdTech2        37             66%          51%
Counter Strike 1   GoldSrc        41.09          68%          53%
Halo 2             Halo 2         43.2           69%          54%
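The Table 1 values follow directly from equation (5); the short check below (ours) reproduces them from the E[P] column, up to a one-point rounding difference for Quake III:

```python
# Equation (5): BWR asymptote for the games of Table 1.
MH, RH = 2, 4                      # PPPMux header, avg. compressed header
NH = {"IPv4": 28, "IPv6": 48}      # native IP/UDP header sizes

def bwr_asymptote(EP, version):
    return (MH + RH + EP) / (NH[version] + EP)

for game, EP in [("Unreal T 2003", 29.5), ("Quake III", 36.15),
                 ("Quake II", 37.0), ("Counter Strike 1", 41.09),
                 ("Halo 2", 43.2)]:
    print(f"{game:17s} {bwr_asymptote(EP, 'IPv4'):.0%}  "
          f"{bwr_asymptote(EP, 'IPv6'):.0%}")
```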

3.3 Analysis for Different Traffic Distributions

Once we have obtained the value for the asymptote, we must calculate the values of E[k|k>1], Pr(k=0), Pr(k=1) and Pr(k>1). As they may vary depending on the behavior of each game, in this subsection we will calculate them for different traffic distributions. In order to obtain an expression for E[k|k>1], we can first express E[k] as:

E[k] = Pr(k=0) E[k|k=0] + Pr(k=1) E[k|k=1] + Pr(k>1) E[k|k>1]    (6)

And taking into account that E[k|k=0] = 0 and E[k|k=1] = 1, we obtain:

E[k|k>1] = (E[k] − Pr(k=1)) / Pr(k>1)    (7)

In the previous analysis, we defined k as the total number of packets arriving at the multiplexer, i.e. the sum of the packets from each player. Now, we define l as the number of packets arriving from a single player. We consider that the N different players' packet arrivals are independent, so E[k] = N E[l] = NλT.

The most common statistical distributions used in the literature for modeling inter-packet times are exponential, deterministic, normal, lognormal and extreme [32]. We will obtain the expressions for the exponential and deterministic distributions. For the latter, the special case of having two possible values for inter-packet times will also be considered.

3.3.1 Exponential packet arrival

In this case, we can obtain Pr(k=0) and Pr(k=1) as:

Pr(k=0) = e^(−NλT)        Pr(k=1) = NλT e^(−NλT)    (8)

As a consequence, Pr(k>1) can be obtained as:

Pr(k>1) = 1 − Pr(k=0) − Pr(k=1) = 1 − e^(−NλT) (1 + NλT)    (9)

So we can use (7) to find E[k|k>1]:

E[k|k>1] = (E[k] − Pr(k=1)) / Pr(k>1) = (NλT − NλT e^(−NλT)) / (1 − e^(−NλT) (1 + NλT))    (10)

3.3.2 Deterministic packet arrival

In this subsection, we will consider a constant packet rate, as occurs in many games [17]. Let t be the inter-packet time. We will consider T < 2t, in order to avoid big added delays, so the maximum value of l is 2, and consequently:

E[l] = λT = Pr(l=1) + 2 Pr(l=2)    (11)

If we have T ≤ t, then Pr(l=2) = 0, so:

Pr(l=1) = E[l] = λT        Pr(l=0) = 1 − Pr(l=1) = 1 − λT    (12)

And if we have T > t, then Pr(l=0) = 0, so knowing that the sum of the probabilities is 1, and using (11), we obtain:

Pr(l=1) = 2 − E[l] = 2 − λT        Pr(l=2) = 1 − Pr(l=1) = λT − 1    (13)

As the players are independent:

Pr(k=0) = [Pr(l=0)]^N    (14)

If there is more than one player and T > t, then Pr(k=1) will be null, as every player will have sent at least one packet during the period. And if T ≤ t, then Pr(k=1) will be:

Pr(k=1)|_(T≤t) = N Pr(l=1) [Pr(l=0)]^(N−1)    (15)

Once Pr(k=0) and Pr(k=1) have been obtained, Pr(k>1) can be calculated in the same way as in (9), and E[k|k>1] can be obtained using (7).

3.3.3 Deterministic arrival with two possible values

In this subsection we consider the case of a game that generates packets using two different inter-packet times, as occurs in some games [17]. Let t1 be the smallest time and t2 the biggest, and p1 and p2 the respective probabilities of having t1 and t2. In this case, we have the following value of λ:

λ = 1 / (p1 t1 + p2 t2)    (16)

We will now distinguish two cases. First, if we have T < t1, then we have the same case as (12), since Pr(l=2) = 0. In the case that t1 ≤ T < t2, the probability of having no packets in a period T will be the probability of the period beginning during the first t2 − T seconds of an inter-packet time of duration t2. We have considered that T < 2t1 and t2 < 2t1, and that consecutive inter-packet times are independent:

Pr(l=0) = p2 (t2 − T) / (p1 t1 + p2 t2) = p2 λ (t2 − T)    (17)

And now, using (11) and knowing that the sum of the probabilities has to be 1, we are able to obtain:

Pr(l=1) = λ [T − 2 p1 (T − t1)]        Pr(l=2) = p1 λ (T − t1)    (18)

Finally, (14) and (15) can be used to obtain the probabilities of the different values of k.
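To make these expressions concrete, the sketch below (ours) plugs the exponential-arrival case of Section 3.3.1 into equation (4). The sizes are the IPv4 values of Section 3.2, and the per-player rate is the Counter Strike 1 average of Section 3.4 (equation (16) gives λ = 1/41.5 ms); the exponential model is used here purely as an illustration:

```python
import math

# BWR of equation (4) under exponential arrivals (Section 3.3.1).
def bwr_exponential(N, lam, T, EP, NH=28, CH=25, MH=2, RH=4):
    Ek = N * lam * T                 # E[k] = N*lambda*T
    p0 = math.exp(-Ek)               # Pr(k=0), equation (8)
    p1 = Ek * math.exp(-Ek)          # Pr(k=1), equation (8)
    pg1 = 1.0 - p0 - p1              # Pr(k>1), equation (9)
    Ek_g1 = (Ek - p1) / pg1          # E[k|k>1], equation (7)
    return (p1 / Ek                                          # 1st term
            + pg1 * CH / (Ek * (NH + EP))                    # 2nd term
            + pg1 * (Ek_g1 / Ek) * (MH + RH + EP) / (NH + EP))  # 3rd term

# 20 players, lambda = 1/41.5 ms, T = 25 ms, E[P] = 41.09 bytes:
print(round(bwr_exponential(20, 1 / 0.0415, 0.025, 41.09), 3))  # ~0.71
```

The result, around 0.71, falls in the 0.70-0.75 zone identified in the next subsection.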

3.4 Analytical Results

In order to depict some graphs that illustrate the savings in terms of packets per second and bandwidth, a specific game has to be selected. We chose Half-Life Counter Strike 1, due to its popularity and the availability of many studies of its behavior [5], [33]. In OpenGL mode, it presents two possible deterministic values for the inter-packet time: 33 ms and 50 ms, each with the same probability. So we can calculate λ using equation (16), obtaining an average inter-packet time of 41.5 ms.

First, Fig. 4a presents the theoretical amount of packets per second, as a function of the period and the number of players. We can see that it tends to the inverse of the period. In Fig. 4b we have depicted BWR for IPv4 as a function of the number of players and the period. If we fix the number of players, we obtain Fig. 4c, and if we fix the value of the period, we obtain Fig. 4d. The asymptotic behavior can be observed for both parameters, so the most interesting zone is where the bandwidth relationship is between 0.70 and 0.75. As an example, if we look at the 20 players graph in Fig. 4c, once the value 0.75 is reached, increasing the delay in order to improve the bandwidth saving will only achieve a small benefit.

Fig. 4. a) Packets per second; b) BWR as a function of the number of players and the period; c) BWR as a function of the period; d) BWR as a function of the number of players.

It can also be seen that the increase in the number of players has an influence. Logically, if there are more players, the same value of E[k] can be achieved using smaller values of T. So we confirm that an increase in the number of players is always beneficial. In fact, if there are only 5 players, perhaps it would be better to maintain the value of BWR at around 0.80. Logically, the value of the network delay will have an influence on the decision of the value of T. If the network is fast, we can add a bigger delay, thus increasing the bandwidth saving.

3.5 System Delays

In this subsection we study the impact of the proposed method on the SRT, and we describe the different delays affecting the traffic of the game. In Fig. 5 we show a scheme of the system with the delays that are added, in order to obtain the SRT.

Fig. 5. Delays of the system.

• T_retention is the time a packet is retained in the queue of the multiplexer.
• T_process represents the time spent in both the multiplexer and the demultiplexer. In [27] an RTP traffic multiplexer was implemented, and the processing time was less than 1 ms.
• T_queue is the time spent in the queue of the access router.
• T_network is the network delay, which is not affected.

The only significant delay which is added is T_retention (an average of T/2). In [13] it is said that latency tolerance is between 150 and 180 ms for Quake III, and above 200 ms for Counter Strike, so this retention delay can easily be assimilated.

In this paper we have not considered the possibility of modifying the application. However, if this could be done, a first synchronization phase could be implemented in order to make all the computers in the same game generate their packets at the same moment. The delay could then be significantly reduced for the games that use a fixed inter-packet time.

4. Tests and Results

In this section we show some results obtained with real game traces. First, we describe the method used in order to generate the traffic for the tests. Next, a comparison is drawn between the analytical and simulation results. Finally, we describe some emulation tests deployed so as to study the influence of the router buffer.

4.1 Simulation of Traffic Multiplexing

In order to compare the simulation with the theoretical results, we use two different popular FPS games: Half-Life Counter Strike 1 in OpenGL mode, and Quake III. Traffic traces have been obtained from the CAIA project (e.g. the trace for 5 players of Counter Strike 1 appears in [34]). Traces are available for 2 to 9 players. The first 10,000 packets are skipped as an offset, and only the next 5,000 × number_of_players packets are included, to ensure that all the packets correspond to active game traffic, which is what we are studying.

Counter Strike 1 is an example of a game with two possible values for inter-packet times: we use 33 and 50 ms for the theoretical model, with a 50% probability for each one. Quake III can be considered as an example of deterministic inter-packet time, with a value of 11.6 ms.

In order to obtain traces for different numbers of players, we have added some of them together. For example, we have obtained a trace of 20 players by the addition of the traces of 9, 6 and 5 players in the same scenario. This can be done due to a property of client-to-server traffic, whose distribution is independent of the number of players [15], [33]: the client-to-server traffic of a 20-player game will be similar to the addition of three games of 9, 6 and 5 players. Logically, we have cut the time of the traces to the shortest one. Generation time, user identifier and the size of each packet are extracted from the real traces, and used as input for the tests.

Simulations using Matlab have been conducted in order to obtain the compressed and multiplexed traffic traces, as illustrated in Fig. 6; a condensed sketch of the per-period grouping appears after the figure. First, the original trace is split into the individual traces of the different players, and server-to-client traffic is eliminated, since the simulations only use client-to-server traffic. Next, IP/UDP compression is applied to each flow. Finally, using the period T, the sizes and times of the multiplexed packets are calculated. Thus, we can compare the native and multiplexed traces in terms of bandwidth and packets per second. These results are presented in the following subsection.

Fig. 6. Method used to process the traces.
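Although the traces were processed in Matlab, the per-period grouping at the core of this pipeline is compact enough to sketch. The following Python version is ours, with a hypothetical trace format of (time, player, payload_size) tuples and the IPv4 sizes of Section 3.2; it ignores the periodic full-header refreshes:

```python
# Condensed sketch of the Fig. 6 pipeline for client-to-server traffic.
CH, MH, RH, NH = 25, 2, 4, 28      # IPv4 sizes from Section 3.2

def multiplex_trace(trace, T):
    """Return (native_bytes, mux_bytes) for a trace and a period T."""
    native = sum(NH + size for _, _, size in trace)
    periods = {}                    # period index -> payload sizes
    for t, _, size in trace:
        periods.setdefault(int(t // T), []).append(size)
    mux = 0
    for sizes in periods.values():
        if len(sizes) == 1:         # lone packet: sent in native form
            mux += NH + sizes[0]
        else:
            mux += CH + sum(MH + RH + s for s in sizes)
    return native, mux

trace = [(0.001, 1, 41), (0.004, 2, 40), (0.030, 1, 42)]
native, mux = multiplex_trace(trace, 0.025)
print(native, mux, round(mux / native, 2))   # 207 188 0.91
```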

4.2 Comparison of Analytical and Simulation Results

The amount of packets per second obtained in the simulations is compared to the theoretical value in Fig. 7. We observe that, as the period increases, the amount of packets per second is reduced. The packets-per-second amount tends to the inverse of the period, irrespective of the number of players. It can be seen that Quake III generates a very large amount of packets per second and, consequently, the saving can be greater than that obtained for Half-Life Counter Strike 1. It can also be observed that the theoretical and simulation values fit well.

Fig. 7. Packets per second managed by the access router: a) Half-Life Counter Strike 1; b) Quake III.

Fig. 8 compares the theoretical values of BWR with those obtained in the simulations. It can be observed that the values obtained are similar, except for minor differences when the period and the number of players are small. The cause of this is that there is a slight difference between real inter-packet times and those of the statistical model. First, we can observe that the values for the BWR asymptote are the expected ones (Table 1). Quake III obtains better results because its average packet size is smaller than that of Half-Life Counter Strike 1. Another difference is that Quake III has values close to the asymptote for smaller values of the period. The reason for this is that the amount of packets per second is larger, so the number of packets arriving at the multiplexer will be greater for the same values of the period. If we have 20 players, a period greater than 10 or 15 ms makes no sense for this game, since the bandwidth saving will only slightly increase, at the cost of more added delay. This fact is interesting, since the added delays will be small, not harming subjective quality.

Fig. 8. Comparison of the theoretical and simulation results of BWR: a) Counter Strike 1; b) Quake III.

4.3 Influence of the Router Buffer

Access networks are one of the most common scenarios in which FPS games are played. The players' traffic is first sent via the access router. This is the most stringent bottleneck of the path to the game server, as described in [19], which shows that the buffer of the access router is of primary importance for real-time services. Thus, in this subsection we study the mutual influence of the buffer policy and TCM. Although the buffer does not appear to have a direct influence on TCM, as packets are first multiplexed and then sent to the router, there is a relationship: the longer the period, the greater the size of the packets sent to the buffer, and the bigger the bandwidth saving. The behavior of the packets in the router will depend on the buffer implementation. The added delays and losses may vary depending on the packet size.

In this subsection, the traffic traces of Half-Life Counter Strike 1 with 20 players, obtained in previous sections, will be used as the game traffic under study. The scenario used for the tests (Fig. 9) emulates an access network, and includes three machines. First, the traffic of the game and the background traffic are sent from a host. Next, a router with different implementations sends the traffic to the Internet, and the traffic is captured at the end. This part is carried out in a testbed [35]. In order to include network and processing delays, the traffic trace is processed offline to obtain the final results.

Fig. 9. Access network measurement scheme.

The traffic is sent using the JTG generator [36], which is able to send the traces exactly as they were originally, as it reads packet sizes and inter-departure times from a file. A traffic model was therefore not necessary. The size distribution of the background traffic is as follows [37]: 50% of the packets are of 40 bytes, 10% of 576 bytes, and 40% of 1,500 bytes. For each point of the graphs, 810 seconds of traffic have been sent. The traffic of the game and the background traffic share the same access link, which is emulated by a machine running tc (Traffic Control). This tool limits the bandwidth at the Ethernet level and implements different buffer policies. The bandwidth limit has been set to 1 Mbps. The burst parameter of tc has been set to 5,000 bytes. The buffer size is defined by limiting the maximum delay of a packet, which is the same as limiting the buffer size, as the two parameters are related by the link speed.

As mentioned above, in [19] an extensive study of a large number of residential Internet accesses was conducted. One of the conclusions was that many ISPs add big queuing delays, which can be of hundreds of milliseconds. As a consequence, we use two different buffer sizes: a high-capacity buffer with a maximum delay of 500 ms and a time-limited buffer with a maximum delay of 50 ms.

The network delay, which is added offline, is the sum of a fixed delay of 20 ms, corresponding to the geographical distance, and a lognormal delay with an average of 20 ms and a variance of 5 [38]. A processing delay is also added. As previously stated, in [27] a multiplexing scheme for VoIP was implemented and its processing delay was about 1 ms. In order to include the effect of multiplexing and demultiplexing, we have added a fixed delay of 5 ms.

In [39] a study of Half-Life concluded that players would not play when latencies were above 225-250 ms. More recent studies [13] have concluded that an acceptable quality can be perceived with 200 ms of delay for certain titles. So the delays added by TCM can be accepted by players. Regarding packet loss, the behavior depends on the game: while some of them stop working with a packet loss of about 4%, others can work properly with this parameter at about 3-5% [13].

Next, we present some graphs of One-Way Delay (OWD) and packet loss for both buffers, using different amounts of background traffic in order to saturate the access router. Logically, multiplexing will only be interesting when the traffic of the game has to compete with large amounts of background traffic. For each buffer we have used three traffic types: the native one, in which no multiplexing is applied, another using T = 25 ms, and a third with T = 50 ms.

Fig. 10 shows the results for the high-capacity buffer. It should be noted that a small increase in the delay is introduced when multiplexing, due to the retention (half the period) and the processing time in the multiplexer. The native bandwidth is 319 kbps at the Ethernet level. When the total traffic exceeds the limit, the delays increase dramatically. It can also be seen that the bandwidth saving (about 120 kbps) is translated into a greater amount of background traffic that can be supported while maintaining acceptable delays.
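A side note on the two buffer configurations above: limiting the maximum delay of a packet and limiting the queue size in bytes are interchangeable, as both are tied by the link rate. Assuming the 1 Mbps limit used here, the conversion (our own quick computation) is:

```python
# Queue size implied by a maximum queuing delay on a 1 Mbps link.
RATE_BPS = 1_000_000

def buffer_bytes(max_delay_s, rate_bps=RATE_BPS):
    return rate_bps * max_delay_s / 8

print(buffer_bytes(0.500))   # high-capacity buffer: 62500.0 bytes
print(buffer_bytes(0.050))   # time-limited buffer:   6250.0 bytes
```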

Fig. 10. High-capacity buffer: a) One-Way Delay; b) packet loss.

Fig. 11 shows the results using the time-limited buffer. If we compare it with Fig. 10, we can see that the effects on the delay are the same as those observed for the high-capacity buffer. However, the use of the time-limited buffer has the advantage of maintaining the delay below 160 ms, irrespective of the amount of background traffic. There is a zone in the graph where the delay obtained when multiplexing is smaller than the native delay. This is an interesting result, taking into account the hard real-time constraints of FPS games.

Another interesting phenomenon is that, for the native traffic, the packet loss rate decreases as the background traffic grows from 850 to 925 kbps. This happens because the bandwidth limit is reached, so the first packets to be discarded are the big ones (1,500 bytes), as they have a greater probability of not having a place in the queue. This represents a benefit for native packets, as they are very small. The multiplexed traffic does not show this effect, because its packets are bigger.

Fig. 11. Time-limited buffer: a) One-Way Delay; b) packet loss.

There is a further remarkable effect regarding packet loss. While the use of T = 25 ms achieves better results than native traffic, due to the bandwidth saving, it also achieves better results than the use of T = 50 ms. We can explain this surprising result by looking at the 20 players graph in Fig. 4c. The values of BWR for 25 and 50 ms are very similar, as they are near the asymptote. In fact, the difference in terms of bandwidth is smaller than 6 kbps. But if we calculate the average packet size, we can see that in the first case it is 608 bytes and in the second it is 1,192 bytes. So, as the buffer policy penalizes big packets, it is better not to use a long period.

Above 925 kbps of background traffic, it can be seen that native traffic suffers less packet loss than multiplexed traffic for both buffers. This is because smaller packets have less probability of being discarded. Therefore, there are some situations in which multiplexing can increase packet loss. Multiplexing will affect the delay and packet loss of the game in a different manner depending on the router buffer policy.
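The two average sizes quoted above can be roughly checked against the analytical model. The estimate below (ours) uses the IPv4 sizes of Section 3.2 and ignores full-header refreshes and the single-packet exception, which is why it lands slightly under the measured 608 and 1,192 bytes:

```python
# Approximate average multiplexed packet size for 20 Counter Strike 1
# players (lambda = 1/41.5 ms, E[P] = 41.09 bytes, IPv4 sizes).
CH, MH, RH = 25, 2, 4
EP, RATE = 41.09, 1 / 0.0415        # bytes, packets/s per player

def avg_mux_size(players, T):
    Ek = players * RATE * T         # expected packets per period
    return CH + Ek * (MH + RH + EP)

print(round(avg_mux_size(20, 0.025)))   # ~592 bytes (measured: 608)
print(round(avg_mux_size(20, 0.050)))   # ~1160 bytes (measured: 1192)
```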

5. Conclusions

This work has analyzed a tunneling, compressing and multiplexing method which can be used to achieve bandwidth savings by compressing headers and grouping packets into bigger ones. It can be useful for reducing the overhead of online gaming traffic, as these applications usually generate a high rate of small packets. The method uses an IP/UDP header compression protocol, PPPMux multiplexing and L2TP tunneling in order to work end-to-end.

Enterprises which develop games could be interested in reducing the bandwidth and also the number of packets per second they have to manage. The bandwidth savings can also be of interest in order to obtain a better performance in access networks with limited bandwidth. The method has been tested with the traffic of FPS games, because these applications have very stringent temporal constraints, as players demand a high level of interactivity. Simulations have been conducted in order to study the bandwidth savings, and the results show that the bandwidth can be reduced by up to 38% for IPv4 and over 50% for IPv6. The added multiplexing delays can remain small if the number of players sharing the same path is sufficiently large.

Taking into account that access networks are a scenario where these games are frequently used, a comparison of the performance of native and TCM-multiplexed FPS flows has to be carried out depending on the router buffer size. It is shown that the best multiplexing solution is not necessarily the one that achieves the best bandwidth saving. Packet size also has to be considered, as some buffer policies penalize big packets. The TCM technique can help us to adapt the traffic to the network conditions. If the network is better prepared for low rates of big packets, we can modify the traffic in order to adapt it to the underlying technology.

References
[1] P. Svoboda, W. Karner and M. Rupp, "Traffic Analysis and Modeling for World of Warcraft," IEEE Int. Conference on Communications, pp. 1612-1617, 2007. Article (CrossRef Link).
[2] G. Huang, M. Ye and L. Cheng, "Modeling system performance in MMORPG," IEEE Global Telecommunications Conference Workshops, pp. 512-518, 2004. Article (CrossRef Link).
[3] C. Chambers, W. Feng, S. Sahu and D. Saha, "Measurement-based Characterization of a Collection of Online Games," Proc. 5th ACM SIGCOMM Conference on Internet Measurement, USENIX Association, Berkeley, 2005. Article (CrossRef Link).
[4] K. Chen, P. Huang and C. Lei, "Game traffic analysis: An MMORPG perspective," in Proc. International Workshop on Network and Operating Systems Support for Digital Audio and Video, pp. 19-24, ACM, New York, 2005. Article (CrossRef Link).
[5] W. Feng, F. Chang, W. Feng and J. Walpole, "Provisioning On-line Games: A Traffic Analysis of a Busy Counter-Strike Server," SIGCOMM Comput. Commun. Rev. 32, p. 18, 2002. Article (CrossRef Link).
[6] M. Mauve, S. Fischer and J. Widmer, "A Generic Proxy System for Networked Computer Games," in Proc. of 1st Workshop on Network and System Support for Games, pp. 25-28, ACM, New York, 2002. Article (CrossRef Link).
[7] D. Bauer, S. Rooney and P. Scotton, "Network Infrastructure for Massively Distributed Games," in Proc. of 1st Workshop on Network and System Support for Games, pp. 36-43, ACM, New York, 2002. Article (CrossRef Link).
[8] B. Thompson, T. Koren and D. Wing, RFC 4170: "Tunneling Multiplexed Compressed RTP (TCRTP)," 2005.
[9] S.H. Batool and K. Mahmood, "Entertainment, communication or academic use? A survey of Internet cafe users in Lahore, Pakistan," Information Development, vol. 26, pp. 141-147, 2010. Article (CrossRef Link).
[10] M. Gurol and T. Sevindik, "Profile of Internet Cafe users in Turkey," Telematics and Informatics, vol. 24, issue 1, pp. 59-68, 2007. Article (CrossRef Link).
[11] B. Furuholt, S. Kristiansen and F. Wahid, "Gaming or gaining? Comparing the use of Internet cafes in Indonesia and Tanzania," The Intern. Inform. & Library Review, vol. 40, issue 2, pp. 129-139, 2008. Article (CrossRef Link).
[12] M. Degermark, B. Nordgren and D. Pink, RFC 2507: "IP Header Compression," 1999.
[13] S. Zander and G. Armitage, "Empirically Measuring the QoS Sensitivity of Interactive Online Game Players," Australian Telecommunications Networks & Applications Conf., Sydney, 2004.
[14] J. Saldana, J. Fernández-Navajas, J. Ruiz-Mas, J.I. Aznar, E. Viruete and L. Casadesus, "First Person Shooters: Can a Smarter Network Save Bandwidth without Annoying the Players?," IEEE Communications Magazine, vol. 49, no. 11, pp. 190-198, 2011. Article (CrossRef Link).
[15] P. Branch and G. Armitage, "Extrapolating Server To Client IP traffic From Empirical Measurements of First Person Shooter games," Proc. 5th ACM SIGCOMM Workshop on Network and System Support for Games (NetGames '06), ACM, NY, USA, 2006. Article (CrossRef Link).
[16] W. Feng, F. Chang, W. Feng and J. Walpole, "A Traffic Characterization of Popular On-Line

Games," IEEE/ACM Trans. Networking, pp. 488-500, 2005. Article (CrossRef Link).
[17] S. Ratti, B. Hariri and S. Shirmohammadi, "A Survey of First-Person Shooter Gaming Traffic on the Internet," IEEE Internet Computing, vol. 14, no. 5, pp. 60-69, 2010. Article (CrossRef Link).
[18] A. Dhamdhere and C. Dovrolis, "Open issues in router buffer sizing," Comput. Commun. Rev., vol. 36, no. 1, pp. 87-92, 2006. Article (CrossRef Link).
[19] M. Dischinger, A. Haeberlen, K.P. Gummadi and S. Saroiu, "Characterizing residential broadband networks," in Proc. 7th ACM SIGCOMM Conference on Internet Measurement, ACM, New York, NY, USA, pp. 43-56, 2007. Article (CrossRef Link).
[20] K. Lee, B. Ko and S. Calo, "Adaptive Server Selection for Large Scale Interactive Online Games," in Proc. 14th International Workshop on Network and Operating Systems Support for Digital Audio and Video, pp. 152-157, ACM, New York, 2004. Article (CrossRef Link).
[21] G. Reina, E. Biersack and C. Diot, "Quiver: a Middleware for Distributed Gaming," 22nd ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, Toronto, Canada, Jun. 2012.
[22] J.L. Miller and J. Crowcroft, "The near-term feasibility of P2P MMOG's," in Proc. 9th Annual Workshop on Network and Systems Support for Games, Piscataway, NJ, USA, Art. 5, 6 pag., 2010.
[23] V. Jacobson, RFC 1144: "Compressing TCP/IP Headers for Low-Speed Serial Links," 1990.
[24] L-E. Jonsson, G. Pelletier and K. Sandlund, RFC 4995: "The RObust Header Compression (ROHC) Framework," 2007.
[25] A. Couvreur, L.M. Le Ny, A. Minaburo, G. Rubino, B. Sericola and L. Toutain, "Performance analysis of a header compression protocol: The ROHC unidirectional mode," Telecommunication Systems, vol. 31, no. 6, pp. 85-98, 2006. Article (CrossRef Link).
[26] E. Ertekin and C. Christou, "Internet protocol header compression, robust header compression, and their applicability in the global information grid," IEEE Communications Magazine, vol. 42, pp. 106-116, 2004. Article (CrossRef Link).
[27] H. Sze, C. Liew, J. Lee and D. Yip, "A Multiplexing Scheme for H.323 Voice-Over-IP Applications," IEEE Journal Select. Areas Commun., vol. 20, pp. 1360-1368, 2002. Article (CrossRef Link).
[28] J. Saldana, D. Wing, J. Fernandez-Navajas, M.A.M. Perumal and F. Pascual, "draft-saldana-tsvwg-tcmtf-03, Tunneling Compressed Multiplexed Traffic Flows," July 2012.
[29] J. Saldana, J. Fernandez-Navajas, J. Ruiz-Mas, E. Viruete Navarro and L. Casadesus, "Influence of Online Games Traffic Multiplexing and Router Buffer on Subjective Quality," in Proc. CCNC 2012 IEEE Workshop DENVECT, pp. 482-486, Las Vegas, Jan. 2012. Article (CrossRef Link).
[30] C. Griwodz and P. Halvorsen, "The fun of using TCP for an MMORPG," in Proc. Int. Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV '06), ACM, New York, NY, USA, Article 1, pp. 7, 2006. Article (CrossRef Link).
[31] J. Saldana, J. Fernández-Navajas, J. Ruiz-Mas, J.I. Aznar, L. Casadesus and E. Viruete, "Comparative of Multiplexing Policies for Online Gaming in terms of QoS Parameters," IEEE Communications Letters, vol. 15, no. 10, pp. 1132-1135, 2011. Article (CrossRef Link).
[32] S. Zander and G. Armitage, "A traffic model for the game Halo 2," in Proc. of International Workshop on Network and Operating Systems Support for Digital Audio and Video, ACM, New York, NY, USA, pp. 13-18, 2005. Article (CrossRef Link).
[33] T. Lang, G. Armitage, P. Branch and H. Choo, "A Synthetic Traffic Model for Half-Life," Australian Telecommunications, Networks and Applications Conference, Melbourne, Australia, 2003.
[34] L. Stewart and P. Branch, HLCS, Map: dedust, 5 players, 13 Jan 2006. Centre for Advanced Internet Architectures SONG Database, http://caia.swin.edu.au/sitcrc/hlcs_130106_1_dedust_5_fragment.tar.gz, 2006. Accessed 5 April 2011.
[35] J. Saldana, E. Viruete, J. Fernández-Navajas, J. Ruiz-Mas and J.I. Aznar, "Hybrid Testbed for Network Scenarios," SIMUTools, 3rd International Conference on Simulation Tools and Techniques, Torremolinos, Málaga, Spain, 2010. Article (CrossRef Link).
[36] J. Manner, JTG, http://www.cs.helsinki.fi/u/jmanner/software/jtg/, Accessed 11 August 2012.
[37] Cooperative Association for Internet Data Analysis, NASA Ames Internet Exchange Packet Length Distributions.

[38] S. Kaune, K. Pussep, C. Leng, A. Kovacevic, G. Tyson and R. Steinmetz, "Modeling the internet delay space based on geographical locations," in Proc. of 17th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, 2009. Article (CrossRef Link).
[39] T. Henderson, "Latency and User Behaviour on a Multiplayer Game Server," Lect. Notes Comp. Science, Springer Berlin/Heidelberg, pp. 1-13, vol. 2233, 2001. Article (CrossRef Link).

Jose Saldana received his B.S. and M.S. in Telecommunications Engineering from the University of Zaragoza, in 1998 and 2008, respectively. He received his PhD in Information Technologies in 2011. He is currently a research fellow in the Department of Engineering and Communications of the same University. His research interests focus on Quality of Service in Real-time Multimedia Services, such as VoIP and networked online games.

Julián Fernández-Navajas received the Telecommunications Engineering degree from the Polytechnic University of Valencia, in 1993, and the Ph.D. degree from the University of Zaragoza, Spain, in 2000. He is currently an Associate Professor in the Centro Politécnico Superior, Universidad de Zaragoza. His professional research interests are in Quality of Service (QoS), Network Management, Telephony over IP, Mobile Networks, online gaming and other related topics.

José Ruiz Mas received the Engineering of Telecommunications degree from the Universitat Politècnica de Catalunya (UPC), Spain, in 1991 and the Ph.D. degree from the University of Zaragoza in 2001. He worked as a software engineer at the company TAO Open Systems from 1992 to 1994. In 1994 he joined the Centro Politécnico Superior as an Assistant Professor until 2003, when he became an Associate Professor. At present he is a member of the Aragón Institute of Engineering Research (I3A) and his research activity lies in the area of Quality of Service in Multimedia Services, with special emphasis on the provision of methodologies and tools to assess the perception of the end-user (Quality of Experience, QoE).

Luis Casadesus received his B.S. in Computer Science from the University of La Habana in 2001, and his M.S. in Mobile Networks from the University of Zaragoza, in 2009. He is currently a Ph.D. candidate at the Communications Technologies Group of the University of Zaragoza. His research interests focus on Quality of Service and Quality of Experience in Multimedia Services, like video streaming, videoconferencing and networked online games.