Purdue University
Purdue e-Pubs: Open Access Theses, Theses and Dissertations
2013

A Comparative Analysis Of Open Source Storage Area Networks With ESXi 5.1

Robert M. Trinkle, Purdue University
Recommended Citation:
Trinkle, Robert M., "A Comparative Analysis Of Open Source Storage Area Networks With ESXi 5.1" (2013). Open Access Theses. 97.
https://docs.lib.purdue.edu/open_access_theses/97
Graduate School ETD Form 9 (Revised 12/07)

PURDUE UNIVERSITY GRADUATE SCHOOL
Thesis/Dissertation Acceptance
This is to certify that the thesis/dissertation prepared

By: Robert M. Trinkle

Entitled: A Comparative Analysis of Open Source Storage Area Networks With ESXi 5.1

For the degree of: Master of Science

Is approved by the final examining committee:

P.T. Rawles, Chair
Raymond Hansen
Thomas Hacker

To the best of my knowledge and as understood by the student in the Research Integrity and Copyright Disclaimer (Graduate School Form 20), this thesis/dissertation adheres to the provisions of Purdue University's "Policy on Integrity in Research" and the use of copyrighted material.

Approved by Major Professor(s): P.T. Rawles

Approved by: Jeffrey L. Whitten, Head of the Graduate Program
Date: 11/25/2013
A COMPARATIVE ANALYSIS OF OPEN SOURCE STORAGE AREA NETWORKS WITH ESXI 5.1

A Thesis

Submitted to the Faculty

of

Purdue University

by

Robert M. Trinkle

In Partial Fulfillment of the

Requirements for the Degree

of

Master of Science

December 2013

Purdue University

West Lafayette, Indiana
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
GLOSSARY
ABSTRACT
CHAPTER 1. INTRODUCTION
  1.1 Background
  1.2 Statement of Problem
  1.3 Significance of Problem
  1.4 Purpose of Research
  1.5 Research Question
  1.6 Assumptions
  1.7 Limitations
  1.8 Delimitations
  1.9 Summary
CHAPTER 2. LITERATURE REVIEW
  2.1 iSCSI Target Servers
    2.1.1 iSCSI Enterprise Target
    2.1.2 SCST and LIO
    2.1.3 ISTGT
  2.2 Storage Alignment
  2.3 Network Construction
  2.4 Iometer
  2.5 Iperf
  2.6 Summary
CHAPTER 3. METHODOLOGY
  3.1 Framework
  3.2 Testing Methodology
    3.2.1 Experiments
  3.3 Analyzing Data
  3.4 Summary
CHAPTER 4. RESULTS AND DISCUSSIONS
  4.1 Local RAID and Network Results
  4.2 IET iSCSI Target Results
    4.2.1 Physical and Virtual Comparison
    4.2.2 MTU Size Comparison
  4.3 SCST iSCSI Target Results
    4.3.1 Physical and Virtual Comparison
    4.3.2 MTU Size Comparison
  4.4 LIO iSCSI Target Results
    4.4.1 Physical and Virtual Comparison
    4.4.2 MTU Size Comparison
  4.5 ISTGT iSCSI Target Results
    4.5.1 Physical and Virtual Comparison
    4.5.2 MTU Size Comparison
  4.6 iSCSI Target Server Comparisons
  4.7 Summary
CHAPTER 5. CONCLUSIONS AND FUTURE WORK
  5.1 iSCSI Target Server Conclusions
  5.2 Future Work
REFERENCES
APPENDICES
  Appendix A: iSCSI Network Topology
  Appendix B: IET SAN Configuration File
  Appendix C: SCST SAN Configuration File
  Appendix D: LIO SAN Configuration
  Appendix E: ISTGT SAN Configuration
  Appendix F: HP Procurve 2950 Configuration
  Appendix G: Raw iSCSI Average Values
LIST OF TABLES

Table 3.1 Dell PowerEdge 2950 Specifications
Table 3.2 Dell Optiplex 990 Specifications
Table 4.1 Throughput Tests
Table 4.2 IOPS Tests
Appendix Table

Table G.1 IET Virtual IOPS With Standard and Jumbo Frames
Table G.2 IET Physical IOPS With Standard and Jumbo Frames
Table G.3 IET Virtual MBps With Standard and Jumbo Frames
Table G.4 IET Physical MBps With Standard and Jumbo Frames
Table G.5 SCST Virtual IOPS With Standard and Jumbo Frames
Table G.6 SCST Physical IOPS With Standard and Jumbo Frames
Table G.7 SCST Virtual MBps With Standard and Jumbo Frames
Table G.8 SCST Physical MBps With Standard and Jumbo Frames
Table G.9 LIO Virtual IOPS With Standard and Jumbo Frames
Table G.10 LIO Physical IOPS With Standard and Jumbo Frames
Table G.11 LIO Virtual MBps With Standard and Jumbo Frames
Table G.12 LIO Physical MBps With Standard and Jumbo Frames
Table G.13 ISTGT Virtual IOPS With Standard and Jumbo Frames
Table G.14 ISTGT Physical IOPS With Standard and Jumbo Frames
Table G.15 ISTGT Virtual MBps With Standard and Jumbo Frames
Table G.16 ISTGT Physical MBps With Standard and Jumbo Frames
LIST OF FIGURES

Figure 2.1 Unaligned Virtual File System
Figure 2.2 Aligned Virtual File System
Figure 3.1 Logical Test Environment
Figure 4.1 IET Physical and Virtual MB/s
Figure 4.2 IET Physical and Virtual IOPS
Figure 4.3 IET Physical and Virtual MTU
Figure 4.4 IET Physical and Virtual MTU IOPS
Figure 4.5 SCST Physical and Virtual MB/s
Figure 4.6 SCST Physical and Virtual IOPS
Figure 4.7 SCST Physical and Virtual MTU
Figure 4.8 SCST Physical and Virtual MTU IOPS
Figure 4.9 LIO Physical and Virtual MB/s
Figure 4.10 LIO Physical and Virtual IOPS
Figure 4.11 LIO Physical and Virtual MTU MB/s
Figure 4.12 LIO Physical and Virtual MTU IOPS
Figure 4.13 ISTGT Physical and Virtual MB/s
Figure 4.14 ISTGT Physical and Virtual IOPS
Figure 4.15 ISTGT Physical and Virtual MTU MB/s
Figure 4.16 ISTGT Physical and Virtual MTU IOPS
Figure 4.17 iSCSI Target Server Virtual Maximum Throughput
Figure 4.18 iSCSI Target Server Virtual Maximum IOPS
Figure 4.19 iSCSI Target Server Physical Maximum Throughput
Figure 4.20 iSCSI Target Server Physical Maximum IOPS
Figure 4.21 iSCSI Target Server Physical and Virtual Throughput
Figure 4.22 iSCSI Target Server Physical and Virtual IOPS

Appendix Figure

Figure E.1 FreeNAS Target Global Configuration
Figure E.2 FreeNAS Portals Configuration
Figure E.3 FreeNAS Targets Configuration
Figure E.4 FreeNAS Extents Configuration
Figure E.5 FreeNAS Associated Targets
LISTOFABBREVIATIONS
IEEE:InstituteofElectricalandElectronicsEngineers
IET:iSCSIEnterpriseTarget
IOPS:Input/OutputOperationsPerSecond
LUN:LogicalUnitNumber
MB/s:MegaBytePerSecond
MTU:MaximumTransmissionUnit
NIC:NetworkInterfaceCard
OSI:OpenSystemsInterconnect
PERC:PowerEdgeRaidController
RAID:RedundantArrayofIndependentDisks
SATA:SerialAdvancedTechnologyAttachment
SCSI:SmallComputerSystemInterface
SCST:GenericSCSITargetSubsystemforLinux
TCP/IP:TransmissionControlProtocol/InternetProtocol
VM:VirtualMachine vSwitch:VirtualSwitch
GLOSSARY

Internet Small Computer Systems Interface (iSCSI): A transport protocol which allows systems to communicate with storage devices over TCP/IP (Satran, Meth, Sapuntzakis, Chadalapaka, & Zeidner, 2004).

Iometer: Open source software which is capable of running multiple tests to benchmark the IOPS of storage solutions.

IOPS: The number of input and output operations performed on a storage disk per second. The theoretical maximum number of disk operations can be estimated with a formula based on average latency and seek times (Lowe, 2010).

Maximum Transmission Unit (MTU): The size of the largest Ethernet frame payload within which data can be sent.

Storage Area Network (SAN): A device providing network-attached, block-level data storage which appears as a local resource to an operating system. This storage is presented as a SCSI subsystem encapsulated in a TCP/IP connection (Aiken & Grunwald, 2003).
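The theoretical maximum IOPS figure mentioned in the glossary can be estimated as the reciprocal of the average time a disk needs to service one operation. The sketch below assumes the common form of that formula (average rotational latency plus average seek time); the sample numbers are illustrative assumptions, not measurements from this research.

```python
def theoretical_max_iops(avg_latency_ms: float, avg_seek_ms: float) -> float:
    """Estimate a single disk's theoretical maximum IOPS.

    One I/O is assumed to cost the average rotational latency plus the
    average seek time; IOPS is the reciprocal of that service time.
    """
    service_time_s = (avg_latency_ms + avg_seek_ms) / 1000.0
    return 1.0 / service_time_s

# Illustrative values for a 7200 RPM SATA disk (assumed, not measured):
# ~4.17 ms average rotational latency, ~9 ms average seek time.
print(round(theoretical_max_iops(4.17, 9.0)))  # roughly 76 IOPS
```

For a RAID 0 array, the per-disk estimates are commonly summed, since reads and writes are striped across all members.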
ABSTRACT

Trinkle, Robert M. M.S., Purdue University, December 2013. A Comparative Analysis of Open Source Storage Area Networks With ESXi 5.1. Major Professor: P.T. Rawles.

Storage Area Networks have increased in popularity with the advancement of virtualization technologies. SANs consist of a central repository of hard disks allowing multiple clients to access and share data over a computer network. In recent years, multiple commercial and open source SAN technologies have been introduced to the market. Unlike commercial products, the open source SAN technologies lack formal documentation and research, making implementation best practices scarce. This research analyzed the performance of different SAN architectures and implementation strategies. In addition, this paper expands upon previous research by using current hardware and software technologies. The test results of prominent open source SAN technologies and an analysis of the acquired data have added to the body of knowledge regarding SAN best practices.
CHAPTER 1. INTRODUCTION
This chapter is an introduction to the research conducted for this thesis. First, the problem and its significance are addressed. In addition, the scope, questions pertaining to the research, assumptions, limitations, and delimitations are examined.

1.1 Background

Storage Area Networks (SANs) and virtualization have become a growing trend in datacenters. The combination of these two technologies can be used to consolidate and enhance hardware. The fast adoption of SANs and virtualization has introduced new business practices among individuals and businesses alike.

The term SAN is often used synonymously with IP SAN. IP SANs are block-level storage area networks which communicate over TCP/IP using the iSCSI protocol (Yoder, Carlson, Thiel, Deel, & Hibbard, 2012). From a physical aspect, SANs are normally composed of multiple disks which are redundantly arranged for failover and throughput benefits. The redundant array of independent disks (RAID) is exposed to physical servers through the network fabric and appears as local storage to connected servers. Simply put, SANs allow multiple servers to connect to a shared storage repository across a computer network.

IBM invented and commercialized mainframe Virtual Machines (VMs) many decades ago. However, VMs did not make the leap to commodity hardware until the late 1990s. During this period, VMware pioneered efficient virtualization on x86 platforms (Rosenblum & Waldspurger, 2011). Without much mainstream adoption until the early 2000s, virtualization technology stayed relatively dormant before becoming popular in datacenters soon after. Virtualization allows operating systems to be abstracted from the hardware layer and run virtually as VMs. One of the largest benefits of virtualization technology is the ability to consolidate multiple energy-inefficient physical servers onto one powerful server utilizing virtualization. VMware has become one of the top contributors to this market and is referred to as the leading virtualization company by many IT professionals.

Other virtualization products also exist and are prominent in datacenters and workplaces. Microsoft's virtualization solution, Hyper-V, can be implemented as bare-metal or installed on top of an existing operating system. While powerful, Hyper-V lacks some of the robustness other solutions offer, such as being limited to fewer resources per virtual machine. Xen, a bare-metal hypervisor, is used to power Amazon's Elastic Compute Cloud and offers a robust virtualization solution for consumers. According to research, Xen is the leading virtualization platform behind VMware (Csaplar, 2012).

When considering virtualization technologies, Storage Area Networks are a complementary technology. SANs allow virtualization products to take advantage of robust features such as migrating virtual machines across hardware without any disruption in service. This process can also be performed automatically in case of hardware failure. In addition, multiple hypervisors can connect to and share data from a single SAN target.
Major information technology manufacturers have released commercial SAN solutions certified for use alongside virtualization software (Liu & Ai-shaikh, 2009). Some of the most prominent downsides to these manufactured SAN solutions are the hardware limitations as well as the high price tag. As with many products, commercial SANs are created with generalized hardware suitable for most implementations. This generalized hardware can introduce constraints when additional customizations are needed. Utilizing open source SAN software implemented on project-specific hardware, a comparable SAN can be built for a much lower monetary cost (Intel, 2012). The open source SAN technologies, however, often lack instructions and documentation for installation and configuration. The lack of research and documentation regarding best practices and optimization of these open source technologies presents many issues when attempting to implement the most effective solution.

1.2 Statement of Problem

Open source Storage Area Networks have become increasingly popular with the advancement and adoption of VMware ESXi. SANs allow centralized storage to appear as local storage to virtual machines. With a wide variety of open source iSCSI target drivers and network protocols available, implementation best practices are scarce due to a lack of benchmarks.

1.3 Significance of Problem

Virtualization technologies have been a primary driver of Storage Area Network adoption in datacenters. Customarily, datacenters have stored operating systems as well as pertinent data files locally on physical disks connected to a server. It is estimated that the cost of powering the U.S.'s datacenters will exceed $15 billion over the next decade (Ren, Wang, Urgaonkar, & Sivasubramaniam, 2012). Due to the increased demand to reduce carbon footprints and energy consumption, resource-rich servers utilizing virtualization have increased in popularity. These powerful servers are used to consolidate older servers which utilize more energy and provide less functionality. Alongside taking advantage of virtualization technologies, physical disks have also been removed from servers and replaced by SANs.

Implementing an open source based SAN introduces multiple configuration options. A multitude of open source iSCSI target engines have been created or altered in the last few years. The primary significance of the increase in iSCSI engines is the lack of published research and throughput specifications among the choices. These configuration options create different scenarios for throughput and data rates. As with other new technologies, it is important that these configuration options and their effect on network throughput are measured.
Datacenters are constantly implementing IP-based services which can rely heavily on virtualization and storage technologies. Implementing storage and virtualization technologies effectively is necessary when considering services utilized by a vast majority of network users. Due to the variety of options, it is important to have data benchmarks in place to further measure and fine-tune network performance.

1.4 Purpose of Research

The purpose of this research is to examine and analyze the effect different technologies, such as virtualization and network protocols, have on open source Storage Area Network throughput. The research will help datacenters determine the level of overhead virtualization introduces as well as which network metrics hold the most validity when implementing SAN solutions.

1.5 Research Question

This thesis will answer the primary research questions:

- What are the average throughput rates utilizing four different open source Storage Area Network target servers?
  - What effect on throughput rates does ESXi 5.1 introduce compared to a physical initiator?
- What effect does MTU frame size have on four different open source Storage Area Network target servers?
  - What effect does MTU frame size have on physical and virtual iSCSI initiators?

1.6 Assumptions

The assumptions of this project include:

- The SAS hardware and related hard drives are available and work correctly with Fedora.
- Metrics obtained by one initiator accessing a single target are representative of larger implementations.
- The underlying file systems are properly aligned.
- The test methodology is sufficient in representing a general use case.

1.7 Limitations

The limitations of this research include:

- The performance testing will be limited to the built-in functionality of Iometer.
- The research is limited to the hardware, software, and local area network used.

1.8 Delimitations

The delimitations of this research include:

- This research does not address other available open source iSCSI target servers apart from IET, SCST, LIO, and ISTGT.
- This research only examines the iSCSI protocol.
- This research does not address throughput measuring tools other than Iometer.
- This research does not utilize virtualization technologies other than ESXi 5.1.
- This research does not examine hardware iSCSI initiators.
- This research does not take into account iSCSI target security.
- This research does not take into account storage redundancy or failover.
- Only one iSCSI initiator will be used during each test.

1.9 Summary

While the fundamental technologies behind SANs and virtualization have been examined extensively, their recent modernization and adoption in datacenters across the globe has sparked great interest in the technologies. Reducing energy use, becoming more efficient, and reducing total cost of ownership have been some of the primary factors pushing forth the virtualization movement. Due to the increased interest, it is important that these technologies be evaluated and compared.

This chapter began by discussing the background of storage area networks and virtualization. The problem statement as well as the significance of the problem was then introduced and defined. In addition, the purpose of the experiment along with the research question was addressed. Finally, the limitations, delimitations, and assumptions were addressed.
CHAPTER 2. LITERATURE REVIEW

During this review of literature, it was found that formal benchmarking experiments regarding current open source Storage Area Network (SAN) technologies were outdated. Due to the lack of current experimental information comparing modern open source iSCSI protocols, aspects important to obtaining credible data measurements for this research will be reviewed. Pertinent topics relating to this research are the different SAN target servers, network construction, storage optimization, and virtualization, all of which have been reviewed considerably.

Although the focus of this review and research is open source products, previous studies of commercial SAN products were also analyzed when pertinent. The analyzed commercial SAN products are simply used for basic comparisons of underlying technologies. Although a clear difference exists between commercial and open source based SANs, published research that examines commercial SAN products holds merit.

The analysis of literature has been completed using a variety of sources. The primary methods for research utilized Google Scholar and numerous scholarly databases including Compendex and IEEE Xplore. Due to the lack of published research around modern open source iSCSI target engines, various affiliated web pages belonging to these technologies were utilized. Websites of software vendors related to this study are also used.

2.1 iSCSI Target Servers

Internet Small Computer Systems Interface (iSCSI) is a protocol which utilizes TCP/IP. The iSCSI protocol aims to be fully compliant with the standardized SCSI architecture model (Satran et al., 2004). Compliance with the SCSI architecture model is important because iSCSI transports SCSI commands through TCP to the initiator and communicates with the underlying system as a local SCSI disk. Currently, iSCSI is one of the primary protocols used in commercial and open source SAN solutions. While many open source target server protocols are available, linux-iscsi.org (LIO) has emerged as the current Linux kernel standard (Torvalds, n.d.).

The iSCSI protocol works on the basis of an initiator and a target. An initiator accesses a target and exchanges SCSI block data over an IP network. The iSCSI target exposes disks to the initiators addressed by logical unit numbers (LUNs). An initiator is generally a client computer or server which sends iSCSI requests to the target SAN server. The iSCSI initiator requests are processed by software or hardware components of the system. Software initiators utilize the system kernel and resources to process the iSCSI traffic, while hardware initiators have separate physical offloading capabilities to process iSCSI traffic.

Target engines used in iSCSI have evolved throughout the advancement of Linux and Unix. In previous versions of the Linux and Unix kernels, the SCSI Target Framework (STGT/TGT) was the default engine. Recently, major open source software distributions, such as Openfiler, natively utilized the iSCSI Enterprise Target (IET) engine in their open source SAN products. The pseudo-kernel successor after IET was the Generic SCSI Target Subsystem (SCST). SCST was a strong contender as the next standard Linux kernel target engine, but has ultimately been replaced by the current standard, LIO. LIO has been included in the Linux kernel beginning with version 2.6 (Linus Torvalds, 2011). FreeBSD, a Unix-based operating system, has adopted a developing target named ISTGT.

2.1.1 iSCSI Enterprise Target

The iSCSI Enterprise Target (IET) was a result of splitting away from a previous target implementation, Ardis, because of certain shortcomings. Ardis lacked several functionalities which were solved by IET. The most notable issues corrected with the creation of IET were support for 64-bit architectures and Linux 2.6 kernel support, among others ("The iSCSI Enterprise Target Project," n.d.). During the creation of IET, advanced Linux kernel support was important because IET was designed to run in user space as opposed to kernel space. Targets implemented in kernel space allow for direct communication with the physical hardware and resources. Among other features, IET also supports multiple targets and initiators simultaneously. IET is able to provide regular files, block devices, and virtual block devices to initiators ("The iSCSI Enterprise Target Project," n.d.).

2.1.2 SCST and LIO

The Generic SCSI Target Subsystem (SCST) is currently found in many Linux-based storage solutions. SCST was a split of the previously discussed IET framework. SCST came to fruition because IET was thought to have violated many iSCSI standards. These violations presented critical issues such as possible data corruption, rendering the target engine unfit for production environments ("Generic SCSI Target Subsystem for Linux," n.d.). During the course of SCST's maturity, another target engine named linux-iscsi.org (LIO) was presented as an additional alternative to IET. Supporting most modern network fabrics, LIO and SCST share some similarities.

SCST and LIO both reside in the Linux kernel and support iSCSI (Rodrigues, 2011). The most significant similarity between the two targets is the fact that both support persistent reservations. Persistent reservations in relationship to iSCSI encompass the ability for clustered storage solutions to participate in the takeover of failed network connections. Persistent reservations can maintain consistent throughput speeds during network failures and prevent issues with multiple hosts accessing the same Logical Unit Number (LUN).

The first difference between SCST and LIO is the way in which the protocols handle communication between the initiator and target. According to the SCST home web page, SCST allows for automatic session reassignment once communicated to the initiator ("Generic SCSI Target Subsystem for Linux," n.d.). In addition, SCST has listed specifications stating the ability to dynamically adjust or protect against iSCSI commands with incorrect transfer sizes or directions ("Generic SCSI Target Subsystem for Linux," n.d.). While the published characteristics listed for SCST are scarce, the protocol is described as handling varying transmission sizes better than other target engines, which could lead to an increase in IOPS measurements using different network designs.

Second, LIO has the ability to have multiple connections per session (MC/S). Multiple connections, or multiple paths to an initiator, can be used in a variety of ways. MC/S can establish an additional connection through another network path in case of primary path failure. This additional connection is seamless to the prior connection and does not terminate the initial session. Also, MC/S is able to load balance traffic across multiple links, which in turn can increase throughput with compatible hardware.

Finally, LIO and SCST handle errors within an iSCSI connection at different levels. According to RFC 3720, iSCSI connection issues can occur at the session, digest, or connection level (Satran et al., 2004). The different connection issues within iSCSI categorize the way target drivers handle errors sent from the initiator due to a broken connection or other error. Depending on the process handling, the error may be carried forth to the SCSI driver. LIO supports a maximum error recovery level of 2, which means it can recover from errors in all three previously mentioned areas ("The Linux SCSI target wiki," 2013). In contrast, SCST only specifies a maximum error recovery level of 0, which entails that all connection errors are passed along to the SCSI driver ("Generic SCSI Target Subsystem for Linux," n.d.).

2.1.3 ISTGT

ISTGT is a kernel-level driver which shares many similarities with LIO and SCST. ISTGT supports persistent reservations and also has the ability to utilize MC/S and MPIO. Multiple iSCSI initiators have been tested with ISTGT, including Windows Server 2008 R2 and ESXi 5.1, among many others ("An iSCSI target implementation for multipath failover cluster nodes," n.d.). ISTGT is included by default in versions 8.1 and newer of the FreeNAS open source storage utility; however, documentation for this driver is lacking.
The main development site for this driver is in Japanese and the documentation is limited.

2.2 Storage Alignment

Storage alignment of file system volumes to the underlying storage architecture can increase data transfer rates for certain types of applications. Unaligned formatted storage volumes can cause multiple storage data chunks to be accessed for a single disk read operation from the operating system. Storage alignment in regards to SANs and virtual machines can occur in three areas: SAN LUNs, VMFS volumes, and VMDK files.

Figure 2.1 below (adapted from VMware Figure 1) depicts an incorrectly aligned virtual file system in relationship to its underlying SAN architecture (VMware, 2009). The SAN LUN can be created using RAID or a single disk.

Figure 2.1 Unaligned Virtual File System

The SAN LUN is divided into chunks which each contain multiple sectors. I/O requests from file systems which only request a sector actually read the entire chunk to which the sector belongs. Figure 2.1 depicts an unaligned storage architecture where a read of the third cluster actually spans two VMFS blocks, which request three SAN chunks. This request of multiple chunks for a single read can introduce measurable overhead depending on the application. Figure 2.2 below (adapted from VMware Figure 2) depicts a correctly aligned virtual file system in relation to its underlying SAN architecture (VMware, 2009).
Figure 2.2 Aligned Virtual File System

This depiction shows properly aligned clusters, blocks, and chunks. Proper file system alignment ensures the request of a single cluster does not span multiple chunks of the underlying storage. In this case, accessing the third cluster only requests a single block and chunk, optimizing efficiency.

In a storage alignment test from VMware, the results of sequential and random reads using an aligned and unaligned architecture were calculated using Iometer with varying I/O sizes. Correctly aligning the VMFS3 file system yielded an increase of roughly 20 MB/s during sequential reads for larger I/O sizes (VMware, 2009). Correctly aligning the VMFS3 file system also yielded an increase of roughly 15-20 MB/s during sequential writes for larger I/O sizes.

The current version of ESXi 5.1 properly aligns the VMFS volume blocks to the SAN chunks upon file system creation. ESXi 5.1 automatically aligns VMFS3 or VMFS5 partitions along the 1 MB boundary, alleviating the previous need for manual user alignment (VMware, 2012). While previous versions of operating systems introduced cluster alignment issues, newer versions of Windows and Linux distributions automatically align the boot and data partitions to the underlying file system.

To summarize, modern operating systems along with current versions of VMware ESXi align file systems based on a default 1 MB boundary. Storage alignment fine-tuning can be completed by manually adjusting VMFS block sizes and VMDK cluster sizes if needed, but is not recommended. Using the guided installation methods provided with most applications, manual storage alignment is no longer required. In this research, ESXi 5.1, Windows Server 2008 R2, and the VMware I/O Analyzer automatically align their respective partitions along a 1 MB boundary. The RAID 0 configuration on the SAN used a chunk size of 1024 KB. Because the chunk size matches the 1 MB VMFS and VMDK sizes, the storage is correctly aligned for maximum throughput values.
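The alignment argument above reduces to simple arithmetic: a guest cluster is aligned when its starting offset falls on a chunk boundary, so a single read never straddles two chunks. The sketch below illustrates that arithmetic; it is not code used in this research, and all sizes are illustrative byte counts.

```python
def is_aligned(offset_bytes: int, chunk_bytes: int) -> bool:
    """A volume is aligned when its start offset falls on a chunk boundary."""
    return offset_bytes % chunk_bytes == 0

def chunks_touched(offset_bytes: int, length_bytes: int, chunk_bytes: int) -> int:
    """Number of SAN chunks a single read of length_bytes must fetch."""
    first = offset_bytes // chunk_bytes
    last = (offset_bytes + length_bytes - 1) // chunk_bytes
    return last - first + 1

CHUNK = 1024 * 1024  # 1024 KB RAID chunk, as on the SAN in this research

# A 4 KB cluster starting on a 1 MB boundary touches exactly one chunk:
print(chunks_touched(3 * CHUNK, 4096, CHUNK))        # 1
# The same cluster starting 512 bytes before a boundary straddles two chunks:
print(chunks_touched(4 * CHUNK - 512, 4096, CHUNK))  # 2
```

The second case is the unaligned read depicted in Figure 2.1: one small request forces extra chunk fetches from the underlying storage.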
2.3 Network Construction

The standard method to transport iSCSI data is by utilizing Transmission Control Protocol/Internet Protocol (TCP/IP). Likewise, data communication between computers or devices on a Local Area Network (LAN) is normally handled utilizing TCP/IP. While TCP focuses on communication between applications, IP focuses on passing information between computers or devices.

Commonly discussed at the transport layer of the Open Systems Interconnection (OSI) model, TCP separates data into chunks depending on the Maximum Transmission Unit (MTU) size of the TCP packet. After the data is separated into packets, it is passed along to the networking layer to be transported to its destination. An acknowledgement mechanism is built into TCP which ensures reliable transmission of information sent over the network. Due to this reliability and wide adoption, iSCSI primarily utilizes TCP/IP to pass traffic.

Although not standardized by IEEE, jumbo frames have consistently been referred to as any Ethernet frame with a payload larger than the standard 1500 bytes. IEEE has determined not to support or define jumbo frames due to concerns around vendor and equipment interoperability (Faustini et al., 2009). Because there is no standardization, frame sizes above 1500 bytes are only limited by the capabilities of all available hardware. Even though jumbo frame sizes are not standardized, common sizes referenced beyond 1500 bytes are 5000, 7200, and 9000 bytes.

In prior studies utilizing 500 MHz processors, larger MTU sizes increased bandwidth rates in iSCSI throughput tests over Gigabit Ethernet (GbE) implementations by as much as 60%. These tests were performed using a prototype iSCSI target engine residing in the Linux kernel, similar to LIO and SCST. While fibre channel was also used during some experiments, GbE was the primary fabric of focus.

2.4 Iometer

Developed in 2001 by Intel, Iometer is an open source application that can measure performance metrics of hard drives. The measurements from Iometer are normally presented in the form of IOPS and overall bandwidth. Iometer runs locally on a machine and utilizes a client and server model to measure storage devices. The server, or graphical user interface of Iometer, controls the threaded applications which perform the read and write operations on disks. These threaded applications are run by a service named Dynamo which carries out the simulated read and write operations. According to the Iometer user guide, the software application specializes in measuring system-level hard drive and network performance as well as the throughput of attached drives ("Iometer User's Guide," 2003).

Iometer, among other benchmarking tools, has been used multiple times in previous benchmarking studies. Iometer has software ports for most major operating systems including Linux, Windows, and OS X. According to the download statistics on SourceForge.net, Iometer was downloaded over 290,000 times in the year 2012.

Performance analyses of prior SAN test environments have primarily used Iometer. An experiment with commodity SAN systems utilized Iometer with varying seek and write schedules to measure CPU utilization and storage throughput (Aiken & Grunwald, 2003). This experiment determined the throughput difference between local SCSI operations and network iSCSI operations was visibly apparent. A similar experiment conducted using Iometer and a test environment also found iSCSI introduced significant overhead compared to local tests (Zhang, Yang, Guo, & Jia, 2005). In addition, another experiment utilized Iometer and ESX 2.0 to depict the negligible difference between virtual machines and native machines using a variety of storage mediums including SANs (Ahmad, Anderson, Holler, Kambo, & Makhija, 2003).
Scott Drummonds (2008), a performance analyst at VMware, stated Iometer is the standard software utility and is recommended for IOPS measurement and analysis in a virtual environment. Measuring disk performance from a virtual machine on a hypervisor can introduce issues if guests generate high CPU utilization. Guests utilizing over 30% of the available CPU resources on the hypervisor can introduce time-based measurement inaccuracies (Drummonds, 2008). While the virtualization software is becoming more mature, eliminating possible timing issues, it is important to keep virtual machine CPU utilization at a minimum when performing benchmarking tests in order to obtain the most accurate results. VMware has created a testing appliance built with Iometer called I/O Analyzer which addresses some of the potential timing issues and shortcomings when using Iometer from virtual machines. I/O Analyzer can efficiently generate I/O loads utilizing the VMware VI SDK to remotely collect storage performance statistics (VMware, 2013).

2.5 Iperf

Iperf is an open source network measuring tool used to analyze the bandwidth between two endpoints. Iperf is a command line tool with software packages for Linux, Unix, and Windows, among others. This bandwidth monitoring tool has been analyzed and compared to other similar tools and used in many other throughput research studies (Kolahi, Narayan, Nguyen, & Sunarto, 2011). Because Iperf has open source packages for all operating systems in this research, it has been selected as the primary bandwidth monitoring tool for these experiments.

2.6 Summary

Multiple experiments have been conducted measuring the performance of virtualization and networking metrics. Although valid, some results from published research offer varying conclusions. Iometer has been the pseudo-standard when measuring disk utilization among many different storage areas. Although many open source iSCSI target engines are available, the amount of comparative measurement among the different iSCSI targets is lacking.

In summary, this chapter provided a review of four iSCSI target engines and the similarities and differences each shares. Also, jumbo frames were examined in addition to their effect on network traffic. Finally, Iometer and associated best practices were summarized. In general, this review of literature explains key technologies which will be used throughout this research.
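Part of the jumbo-frame benefit reviewed in Section 2.3 comes from amortizing fixed per-frame overhead over a larger payload. The sketch below assumes typical per-frame costs for Ethernet (14-byte header, 4-byte FCS, 8-byte preamble, 12-byte inter-frame gap) plus 20-byte IP and 20-byte TCP headers; actual gains, such as the 60% improvement reported in the prior studies cited, also depend heavily on per-packet CPU and interrupt cost, which this arithmetic does not capture.

```python
# Assumed fixed per-frame costs in bytes (not figures from this thesis).
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20

def tcp_payload_efficiency(mtu: int) -> float:
    """Fraction of on-the-wire bytes that carry TCP payload (e.g. iSCSI data)."""
    payload = mtu - IP_TCP_HEADERS
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

# The common jumbo frame sizes named in Section 2.3:
for mtu in (1500, 5000, 7200, 9000):
    print(mtu, round(tcp_payload_efficiency(mtu), 3))
# An MTU of 1500 yields roughly 0.949 payload efficiency versus roughly
# 0.991 at 9000, so raw header savings explain only a few percent; the
# larger measured gains come mostly from reduced per-packet processing.
```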
CHAPTER 3. METHODOLOGY
This chapter discusses the research methodology which was used to address the primary research questions:

- What are the average throughput rates utilizing four different open source Storage Area Network target servers?
  - What effect on throughput rates does ESXi 5.1 introduce compared to a physical initiator?
- What effect does MTU frame size have on four different open source Storage Area Network target servers?
  - What effect does MTU frame size have on physical and virtual iSCSI initiators?

As an overview, the technologies used in this experiment, the framework of the methodology, and the data intended to be acquired will be discussed.

The research conducted and the metrics obtained from this methodology are quantitative in nature. The primary metrics obtained and analyzed throughout the experiment are recorded as IOPS and throughput (MB/s) values. Each independent storage disk and RAID array has a theoretical maximum IOPS value. The purpose is to create an environment and examine different iSCSI implementations while measuring the effects networking protocols and virtualization have on a set of baseline storage values.

3.1 Framework

In order to accurately obtain throughput metrics for this experiment, it was necessary to create a practical test environment representative of what is found in datacenters. The standard testing environments were created and duplicated for each iSCSI target server. The test environments consisted of an iSCSI target server, a physical and virtual client machine, and a network switch. It was necessary to create multiple environments to separate the iSCSI target server implementations. It was also determined that only a single iSCSI initiator would be connected to an iSCSI target at a time. The networking equipment utilized 1GbE interfaces and Cat5e rated Ethernet cables. In Figure 3.1 below, the test environments consisting of a physical and a virtual environment are shown. Environment 1 shown in Figure 3.1 depicts the topology created and used to examine a physical iSCSI initiator implementation. Environment 2 shown in Figure 3.1 was used to examine a virtualized iSCSI initiator running on a hypervisor.
[Figure 3.1 diagrams the two logical test environments. Environment 1: iSCSI initiator (Dell Optiplex 990, Windows Server 2008 R2) connected through a network switch (HP 2900-24G) to the iSCSI target (Dell PowerEdge 2950, Fedora 18/FreeBSD). Environment 2: a hypervisor iSCSI initiator (Dell Optiplex 990, ESXi 5.1) hosting a VMware I/O Analyzer virtual machine, connected through the same switch model to the same target.]

Figure 3.1 Logical Test Environment
The complete network topology and architecture can be found in Appendix A.
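For context, a back-of-the-envelope sketch (not part of the thesis apparatus) of what the 1GbE links can carry: 1 Gb/s is 125 MB/s before protocol overhead, and Ethernet/IP/TCP headers reduce the usable payload rate further. The header sizes below are assumptions for a typical TCP/IPv4-over-Ethernet frame:

```python
# Rough ceiling for iSCSI throughput over Gigabit Ethernet.
LINK_BITS_PER_SEC = 1_000_000_000
raw_mb_per_sec = LINK_BITS_PER_SEC / 8 / 1_000_000  # 125.0 MB/s before overhead

# Per-frame overhead with a standard 1500-byte MTU (assumed header sizes:
# 18 B Ethernet + 20 B IP + 20 B TCP; preamble/IFG ignored for simplicity).
payload = 1500 - 20 - 20
efficiency = payload / (1500 + 18)
assert raw_mb_per_sec == 125.0
assert round(raw_mb_per_sec * efficiency, 1) == 120.2  # practical payload ceiling
```

Measured throughput falls below even this figure once iSCSI PDU headers and protocol pacing are included, which is consistent with the ~112 MB/s Iperf result reported in Chapter 4.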
For the IET, SCST, and LIO iSCSI target servers, the Linux distribution Fedora 18 was installed and configured with default settings on a Dell PowerEdge 2950. The PowerEdge 2950 was configured with a single 500GB SATA Seagate Barracuda hard drive used for the operating system installation. In addition, four 500GB SAS Barracuda drives were implemented in a RAID 0 configuration for use as the iSCSI backing store. The hardware RAID configuration was created using the PERC 6/i controller in the PowerEdge 2950. Additionally, an Intel Gigabit Ethernet NIC was installed in the PowerEdge 2950. It was necessary to use separate NICs to keep management traffic and iSCSI traffic on separate VLANs. Table 3.1 below shows detailed specifications for the Dell PowerEdge 2950 used.

Table 3.1 Dell PowerEdge 2950 Specifications

Processor:       Intel Xeon L5335 @ 2.00GHz
Networking:      Dual embedded Broadcom NetXtreme II 5708 Gigabit Ethernet NIC;
                 additional PCI-Express Intel 893647 Ethernet 10/100/1000
Memory:          16GB 533MHz
Hard Drive(s):   System: 500GB SATA Seagate Barracuda ST500DM002;
                 RAID: 4X Barracuda ES.2 SAS 500GB ST3500620SS
RAID Controller: PERC 6/i

Fedora 18 was chosen as the Linux distribution because it natively supports two of the three Linux based iSCSI target servers which were tested. On a single installation of Fedora 18, the latest kernel available at the time, linux-3.9.4, was implemented and modified to support SCST. Modifying the kernel was performed due to recommendations from the SCST documentation ("Generic SCSI Target Subsystem for Linux," n.d.). The last tested target server, ISTGT, was implemented on a separate SATA hard drive. ISTGT was configured using FreeNAS 9, an open source storage utility built on FreeBSD. The pertinent configuration files modified for each iSCSI target server can be referenced in Appendix B through Appendix E.
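RAID 0 stripes data across all member drives, so sequential throughput and IOPS scale roughly with the member count. The sketch below is an idealized estimate with illustrative per-drive figures (assumptions, not measurements from this thesis):

```python
# Idealized RAID 0 scaling: throughput and IOPS grow roughly linearly with
# member count (ignoring controller and bus limits). Per-drive figures are
# illustrative assumptions for a 7200 RPM SAS disk, not measured values.
def raid0_estimate(n_drives, drive_mbps, drive_iops):
    return n_drives * drive_mbps, n_drives * drive_iops

mbps, iops = raid0_estimate(4, 107.5, 150)
assert mbps == 430.0  # consistent with the ~430 MB/s hdparm result reported later
assert iops == 600
```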
The iSCSI initiators used in the architecture were built using Dell Optiplex 990s. The first iSCSI initiator configured utilized the software iSCSI initiator built into Windows Server 2008 R2. The Windows Server operating system was installed directly on the local hard drive of the Optiplex 990. The Optiplex 990 was modified with an additional Intel Gigabit Ethernet network interface card to separate management traffic from iSCSI traffic. In order to measure throughput metrics on Windows Server 2008 R2, it was necessary to install the latest version of Iometer, 2006.07.27. This environment is referred to in later portions of this thesis as the physical initiator environment. Additional specifications of the Optiplex 990s are shown in Table 3.2 below.

Table 3.2 Dell Optiplex 990 Specifications

Processor:     Intel 2nd Generation Core i7 2600 with Intel vPro Technology
Networking:    Integrated Intel 82579LM Ethernet 10/100/1000;
               additional PCI-Express Intel 893647 Ethernet 10/100/1000
Memory:        4X 4GB Non-ECC dual-channel 1333MHz DDR3 SDRAM
Hard Drive(s): 3.5" 250GB 7200RPM SATA 3.0Gb/s

A second Optiplex 990 with the same resources shown in Table 3.2 was utilized for the second iSCSI initiator, referred to in later portions of this thesis as the virtual initiator environment. VMware ESXi 5.1u1 was installed to the hard drive of the Optiplex 990 with default settings. It should be noted that a custom version of ESXi was utilized to support the onboard Intel 82579LM NIC chipset on the Optiplex 990. After installation, ESXi was configured to utilize two separate vSwitches. One vSwitch was used primarily for machine management while the second vSwitch was used solely for iSCSI traffic. Because the native installation of ESXi 5.1u1 cannot accurately generate iSCSI traffic to measure throughput, it was necessary to configure a virtual machine to generate the workload. In order to create the workload, VMware's I/O Analyzer 1.5.1 was deployed on the hypervisor. This virtual machine was deployed with the default settings initially and then modified to match the resources of the physical iSCSI initiator. The actual amount of RAM, number of CPU cores, and virtual hard drive space for the VM was slightly less than its physical counterpart due to the resource allocations needed to run ESXi 5.1u1. The VMware I/O Analyzer was configured with one virtual CPU with eight cores and 14GB of RAM. Additionally, a second thick provisioned 60GB hard drive was added to the virtual machine. This second hard drive was necessary to ensure the metric tests were not cached while running on the SAN (VMware, 2013). Both virtual drives were created on the iSCSI target.

The network switch connecting the initiator and targets remained constant throughout the architecture as an HP ProCurve 2900-24G. The ProCurve switch was chosen because of hardware availability which supported jumbo frames. The ProCurve 2900 was divided into two separate VLANs. Ports 1-12 were configured on VLAN 304, used for management traffic during the experiments. Ports 13-24 were configured on VLAN 900, which was used solely for iSCSI traffic. Each iSCSI port was configured to accept a maximum frame size of 9014 bytes. The complete configuration for the HP ProCurve 2900-24G switch is shown in Appendix F.

3.2 Testing Methodology

It was first necessary to determine the maximum throughput values of the iSCSI RAID array. To measure the maximum disk throughput, hdparm was used locally on the SAN servers and run against the iSCSI RAID array. In addition, it was important to verify that network connectivity from initiator to target was performing as expected. To obtain maximum network throughput values, Iperf was run from the client initiators to the target servers.

To compare initiators and different targets, it was necessary to create a standard set of tests. The VMware I/O Analyzer has preset tests to simulate different applications. Each test has a predetermined block size, read/write percentage, and random/sequential percentage to reflect characteristics of different scenarios or applications. A variety of tests was selected from the available presets as the testing methodology. These tests were also replicated on the physical Windows Server 2008 R2 machine running Iometer and saved as an .icf file for continued use. The series of tests performed during each experiment is shown below in Table 3.3. The methodology remained consistent throughout the course of each experiment.

Table 3.3 Iometer Test Methodology

Test Description           Block Size  Read %  Write %  Random %  Outstanding I/O
Maximum Throughput         512k        100     0        0         32
Maximum IOPS               512b        100     0        0         32
Maximum Write Throughput   512k        0       100      100       32
Maximum Write IOPS         512b        0       100      100       32
Exchange 2003              4k          60      40       80        12
Exchange 2007              8k          55      45       80        12
SQL 16K                    16k         66      34       100       16
SQL 64K                    64k         66      34       100       16
Web Server                 8k          95      5        75        4
Workstation                8k          80      20       80        4

3.2.1 Experiments
Each iSCSI SAN target server was evaluated using a standard testing procedure. To begin, Iometer running on the physical iSCSI initiator was tested while connected to the IET iSCSI target server. Each test from the methodology outlined previously in Table 3.3 was performed for five minutes. Each test in the methodology was performed three times and the results were averaged. After the physical iSCSI initiator tests were completed, the steps were reproduced using the VMware I/O Analyzer on the initiator running ESXi 5.1u1. After both initiators were tested, the MTU size was altered on the ESXi iSCSI vSwitch, the iSCSI network interface on Windows Server 2008 R2, and on the iSCSI network interface of the IET iSCSI target server. The MTU size on the vSwitch and IET iSCSI interface was set to 9000 bytes while the Windows Server 2008 R2 iSCSI NIC was set to a value of 9014 bytes. The same methodology was performed again on both iSCSI initiators.

Once all tests utilizing standard and jumbo frames were completed, the series of tests was replicated using the SCST, LIO, and ISTGT iSCSI target servers, for a total of four different experiments. The primary metrics obtained from each set of tests were recorded as IOPS and throughput measured in MBps.

3.3 Analyzing Data

After the series of experiments was completed on each iSCSI target server, the data collected was analyzed to answer the research questions. First, the data collected for each test was examined and compared to the maximum network and disk values obtained from the Iperf and hdparm tests previously conducted. Next, the values obtained from the physical Windows server and the VMware virtual machines were compared to these values. Finally, the physical and virtual metrics obtained using standard frames were compared to results from jumbo frames. This series of comparative analyses was repeated for each experiment conducted. The success of each SAN was determined from the test sets. A successful SAN or test set is defined as having a majority of higher throughput or IOPS values compared to other tests.

3.4 Summary

This chapter explained the importance of creating an accurate framework architecture to test four different iSCSI target engines. In addition, the testing methodology was discussed, which explained the test set used. Finally, a description of the experiments conducted was explained and detailed.

To summarize, the methodology used to test the different architectures was explained. This methodology consists of a series of 10 tests. The test set was performed from two different iSCSI initiators using the storage performance measurement tool, Iometer. Each initiator test set was performed three times. The values of each group of test sets were averaged to obtain the final values.
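The reduction step described above is simple arithmetic. As a minimal sketch (the trial readings are hypothetical, for illustration only), averaging three five-minute trials per test:

```python
# Average three trial results per test, as described in the methodology.
def average_trials(trials_by_test):
    return {test: sum(vals) / len(vals) for test, vals in trials_by_test.items()}

# Hypothetical MB/s readings for two of the ten tests, three trials each.
trials = {
    "Maximum Throughput": [111.2, 111.9, 111.4],
    "SQL 64K": [24.4, 24.8, 24.6],
}
final = average_trials(trials)
assert round(final["Maximum Throughput"], 1) == 111.5
assert round(final["SQL 64K"], 1) == 24.6
```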
CHAPTER 4. RESULTS AND DISCUSSIONS
During the course of the experiments, a large amount of data was collected. The data sets include three trials of each testing methodology performed four times per experiment. The multiple sets of data were averaged together to provide finalized values, which were used to compare the scenarios proposed in the research questions. This chapter analyzes the collected data and draws conclusions from the data trends.

While collecting data, it was observed that certain tests from the methodology are better compared when grouped together. To measure throughput (MBps), the subset of tests which do not reflect maximum IOPS measurements is grouped and compared. The throughput measuring tests have also been logically subdivided into two different categories: maximum throughput tests and application specific throughput tests.

The maximum throughput tests simply measure the highest value obtainable by the iSCSI initiator. The application specific test set provides an overview of how the initiators handle application specific traffic. When observing obtainable IOPS, the two tests which discretely measure maximum IOPS values are grouped and compared. The tests which were used to compare throughput are shown below in Table 4.1. Additionally, the tests which were used to compare IOPS are shown below in Table 4.2.
Table 4.1 Throughput Tests

Test Description           Block Size  Read %  Write %  Random %  Outstanding I/O
Maximum Throughput         512k        100     0        0         32
Maximum Write Throughput   512k        0       100      100       32
Exchange 2003              4k          60      40       80        12
Exchange 2007              8k          55      45       80        12
SQL 16K                    16k         66      34       100       16
SQL 64K                    64k         66      34       100       16
Web Server                 8k          95      5        75        4
Workstation                8k          80      20       80        4

Table 4.2 IOPS Tests

Test Description    Block Size  Read %  Write %  Random %  Outstanding I/O
Maximum IOPS        512b        100     0        0         32
Maximum Write IOPS  512b        0       100      100       32

Each series of experiments and test sets has been used to analyze and compare the iSCSI target servers. In addition, the data has been used to compare the effect virtualization has on iSCSI traffic relative to a physical machine utilizing iSCSI. Finally, the effect of different MTU sizes and the resulting benefit or degradation of throughput values have been examined.

Each comparative analysis performed was based on the tests with the highest values. After a comparative analysis of each iSCSI target was completed, all four experiments were compared and analyzed in an attempt to determine the fastest server in regards to the testing environment and methodology used.

4.1 Local RAID and Network Results

The Iperf utility was used to determine the network speed between initiator and target. It was discovered that the average network throughput between initiator and target was 112 MB/s. This value is near the theoretical maximum bandwidth of Gigabit Ethernet at 125 MB/s. It was determined that 112 MB/s would stand as the maximum achievable throughput for each experiment.

To ensure the Dell PowerEdge 2950, the underlying hardware for the iSCSI target servers, was not a bottleneck, hdparm was used to measure the read and write speeds of the local RAID array. Using hdparm, it was discovered that the maximum total speed of the RAID 0 iSCSI array was 430 MB/s. This data confirms the SAN server hardware was performing as expected and that throughput speeds are limited by the network fabric.

4.2 IET iSCSI Target Results

To compare physical and virtual architectures against the IET iSCSI target server, a series of tests was performed. First, the throughput and IOPS from both initiator architectures were gathered and compared. Then, the same process was performed with 9000 byte MTU sizes and compared against the initial results.

4.2.1 Physical and Virtual Comparison
The throughput differences between physical and virtual iSCSI initiators in regards to the IET iSCSI target server were negligible. Figure 4.1 below displays the throughput results of the test set.
[Figure 4.1 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual  Physical
Maximum Throughput        111.50   112.58
Maximum Write Throughput  109.60   109.02
Exchange 03               2.34     1.97
Exchange 07               4.05     3.62
SQL 16K                   8.31     7.54
SQL 64K                   24.62    14.57
Web Server                2.48     3.74
Workstation               2.51     3.66

Figure 4.1 IET Physical and Virtual MB/s
The results for both tests regarding the maximum level of read/write throughput for this target server were near the maximum fabric rate found in the previous Iperf results. In five of the eight results shown, the virtual machine appliance yielded slightly higher throughput rates; however, the actual rate difference in most results is negligible. The greatest difference between physical and virtual architectures was during the SQL 64K test, which resulted in about a 10 MB/s difference, or 68%.

During the same test set, IOPS were also measured and compared. Figure 4.2 below displays the IOPS results obtained from the test set.
[Figure 4.2 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual   Physical
Maximum IOPS        45291.26  49157.36
Maximum Write IOPS  45382.79  53568.77

Figure 4.2 IET Physical and Virtual IOPS

As shown above, the physical operating system with the software iSCSI initiator yielded higher IOPS values. Physical read IOPS values were higher by about 3800 IOPS, or about 8.5%. Physical maximum write IOPS values were higher by about 8000 IOPS, or 18%. The difference between physical and virtual initiators in this case is significant because the average of the virtual initiator does not fall within the standard deviation of the physical initiator.

4.2.2 MTU Size Comparison
After comparing physical and virtual architectures, the MTU size was changed on each initiator and on the IET target server NIC. The results comparing throughput (MB/s) between initiators are shown below in Figure 4.3. The figure graphically depicts the previous results of physical and virtual iSCSI initiators using standard frame sizes as well as the same testing architecture utilizing jumbo frames.
[Figure 4.3 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum Throughput        111.50        106.59        112.58         75.75
Maximum Write Throughput  109.60        113.39        109.02         109.71
Exchange 03               2.34          2.16          1.97           2.00
Exchange 07               4.05          4.11          3.62           3.55
SQL 16K                   8.31          8.65          7.54           6.96
SQL 64K                   24.62         25.46         14.57          14.57
Web Server                2.48          2.62          3.74           3.67
Workstation               2.51          2.44          3.66           3.26

Figure 4.3 IET Physical and Virtual MTU
The first two tests in Figure 4.3 yielded varying results. The first test, measuring maximum read throughput, favored standard MTU sizes in both physical and virtual architectures. The introduction of a 9000 byte MTU degraded read throughput performance. The maximum write throughput tests resulted in line speeds across all architectures. The remaining application specific tests varied between architectures. The virtual architecture resulted in higher throughput values when using jumbo frames in four of the six tests. The physical architecture, however, resulted in higher application specific throughput values when using jumbo frames in two of the six tests.

The final test performed analyzed maximum IOPS values at standard and jumbo frame sizes. The data from the test performed is shown below in Figure 4.4.
[Figure 4.4 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum IOPS        45291.26      42592.12      49157.36       39856.17
Maximum Write IOPS  45382.79      44133.03      53568.77       51462.26

Figure 4.4 IET Physical and Virtual MTU IOPS

In the MTU tests above, the previous results utilizing standard frames are compared to the results obtained using jumbo frames. Standard frame sizes yielded higher values regardless of the architecture examined. This pattern is similar to the results found when comparing throughput across varying MTU sizes. Standard frame sizes in a virtual environment exceeded jumbo frame results by 6.3% and 2.6% for read and write values, respectively. The difference between standard and jumbo frame sizes for virtual initiators concerning maximum write IOPS is about 1200 IOPS, or 3%. Additionally, the differences between standard and jumbo frame sizes for physical initiators in regards to maximum read and write values were 23% and 4.0%, respectively.
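The significance criterion used in this chapter — a difference is treated as significant when one initiator's average lies outside the other's standard deviation band — can be sketched as follows. The mean and deviation inputs are hypothetical, for illustration only:

```python
# The thesis's informal significance test: a difference counts as significant
# when one architecture's mean lies outside the other's mean +/- stddev band.
def significant(mean_a, mean_b, std_b):
    return abs(mean_a - mean_b) > std_b

# Hypothetical trial statistics (IOPS): the means differ by far more than the spread.
assert significant(45291.26, 49157.36, 900.0) is True
assert significant(45291.26, 45400.00, 900.0) is False
```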
4.3 SCST iSCSI Target Results

To compare physical and virtual architectures against the SCST iSCSI target server, a standard series of tests was performed to obtain values. First, the throughput and IOPS of both systems were compared. Then, the same process was performed with 9000 byte MTU sizes and compared against the initial results.

4.3.1 Physical and Virtual Comparison
The test set utilizing physical and virtual architectures with the SCST iSCSI target provided mixed results. Maximum throughput values were higher using the physical architecture during initial tests. The virtual architecture application specific results were higher in four of the six tests. The results from this test set are shown below in Figure 4.5.
[Figure 4.5 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual  Physical
Maximum Throughput        93.82    111.81
Maximum Write Throughput  110.43   112.61
Exchange 03               1.85     1.48
Exchange 07               2.67     2.15
SQL 16K                   3.88     3.27
SQL 64K                   9.17     7.88
Web Server                1.79     2.11
Workstation               1.77     2.17

Figure 4.5 SCST Physical and Virtual MB/s

The data from Figure 4.5 shows only slight throughput differences in the last six tests. However, the first two tests, concerning maximum read and write throughput, favor the physical machine over the virtual appliance. Throughput read speeds differ by about 18 MB/s while write speeds only differ by about 2 MB/s.

The results in Figure 4.6 below also vary. Read IOPS values favor the virtual architecture while write IOPS values favor the physical architecture. Observing the results from the data shows a clear difference between the architectures. The maximum read IOPS between architectures differ by about 3,600 IOPS while the maximum write IOPS differ by about 5,100 IOPS. The percentage differences between physical and virtual initiators for read and write IOPS are 7.7% and 13.3%.
[Figure 4.6 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual   Physical
Maximum IOPS        50476.36  46843.24
Maximum Write IOPS  38446.20  43568.61

Figure 4.6 SCST Physical and Virtual IOPS

4.3.2 MTU Size Comparison
The MTU sizes of standard and jumbo frames were tested using the same SCST iSCSI SAN. Once the data was collected, the results were compared to determine if jumbo frames increased throughput speeds for virtual and physical architectures. The data obtained is shown in Figure 4.7 below.
[Figure 4.7 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum Throughput        93.82         88.56         111.81         85.12
Maximum Write Throughput  110.43        114.15        112.61         74.69
Exchange 03               1.85          1.77          1.48           1.48
Exchange 07               2.67          2.88          2.15           2.14
SQL 16K                   3.88          4.17          3.27           3.25
SQL 64K                   9.17          10.06         7.88           7.72
Web Server                1.79          1.91          2.11           2.05
Workstation               1.77          1.87          2.17           2.11

Figure 4.7 SCST Physical and Virtual MTU
The data in Figure 4.7 provides varying results. In the first two tests, comparing maximum throughput scenarios, standard frame sizes produced higher values in all physical architecture instances. The virtual architecture produced higher values with standard frames during the maximum read throughput test and higher values with jumbo frames during the maximum write throughput test. The only test using jumbo frames which saturated the fabric link was the virtual maximum write throughput test. The remaining application specific tests produced mixed results for each architecture. The physical architecture favored standard frames in all six tests. The virtual architecture, however, produced higher values in five of six tests utilizing jumbo frames. It should be noted that the actual differences between standard and jumbo frame sizes are negligible.

Following the throughput tests, IOPS were also measured using standard and jumbo frame sizes. The virtual architecture using standard frame sizes yielded higher results when measuring read IOPS values and lower results when measuring write IOPS values. Alternately, the physical architecture slightly favored jumbo frame sizes in both read and write tests. The complete data set is shown graphically in Figure 4.8 below.
[Figure 4.8 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum IOPS        50476.36      47314.26      46843.24       48434.33
Maximum Write IOPS  38446.20      38689.51      43568.61       45580.44

Figure 4.8 SCST Physical and Virtual MTU IOPS

4.4 LIO iSCSI Target Results

To compare physical and virtual architectures against the LIO iSCSI target server, a series of tests was performed to obtain and compare values. First, the throughput and IOPS of both systems were compared. Then, the same process was performed with 9000 byte MTU sizes and compared against the initial results.
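Jumbo frames change the header-to-payload ratio rather than the link rate, which is one reason the gains observed in these MTU comparisons are small. A rough sketch of that ratio, assuming 58 bytes of Ethernet/IP/TCP overhead per frame (illustrative figures, not from the thesis):

```python
# Payload efficiency of standard vs jumbo frames, assuming 18 B Ethernet,
# 20 B IP, and 20 B TCP overhead per frame (illustrative; real iSCSI adds
# its own PDU headers on top of this).
def payload_efficiency(mtu):
    tcp_payload = mtu - 40           # MTU minus IP and TCP headers
    return tcp_payload / (mtu + 18)  # divide by on-wire frame size

std = payload_efficiency(1500)
jumbo = payload_efficiency(9000)
assert round(std, 3) == 0.962
assert round(jumbo, 3) == 0.994  # only a few percent more usable bandwidth
```

Under this model, the best case for jumbo frames is roughly a 3% bandwidth gain plus reduced per-packet CPU work, so large swings in either direction point at implementation behavior rather than framing arithmetic.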
4.4.1 Physical and Virtual Comparison
After comparing the physical and virtual initiator results from the test set, it was determined that the differences between the two architectures were negligible. The maximum throughput tests, both read and write, resulted in near line speeds for both architectures. In addition, the last six tests simulating different applications did not exhibit a notable difference. The results are depicted graphically in Figure 4.9 below.
[Figure 4.9 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual  Physical
Maximum Throughput        112.07   109.22
Maximum Write Throughput  112.01   112.05
Exchange 03               1.53     1.55
Exchange 07               2.99     3.02
SQL 16K                   5.81     5.27
SQL 64K                   15.48    13.71
Web Server                2.05     3.08
Workstation               1.86     2.87

Figure 4.9 LIO Physical and Virtual MB/s

The results measuring IOPS between virtual and physical initiators are shown below in Figure 4.10. The results show the virtual architecture with higher values. The actual differences between physical and virtual architectures for read and write values are about 5000 IOPS and 1600 IOPS. Read differences equate to nearly 19%, while write differences were minimal at 5%.
[Figure 4.10 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual   Physical
Maximum IOPS        30782.40  25694.18
Maximum Write IOPS  34416.29  32813.13

Figure 4.10 LIO Physical and Virtual IOPS

4.4.2 MTU Size Comparison
Altering the MTU frame size produced minimally varying results. The maximum throughput tests produced higher virtual architecture values when using jumbo frames. The physical architecture, however, produced higher values using standard frames for both maximum throughput tests. It should be noted that the maximum throughput values for the virtual initiator are all at line speed. The physical environment, however, exhibited a decrease in throughput of over 10 MB/s when using jumbo frames. From both a virtual and physical initiator aspect, standard frames yielded slightly higher values in all application specific tests. The complete set of data comparing MTU sizes and architectures is depicted below in Figure 4.11.
[Figure 4.11 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum Throughput        112.07        117.51        109.22         95.51
Maximum Write Throughput  112.01        114.51        112.05         101.88
Exchange 03               1.53          1.08          1.55           1.55
Exchange 07               2.99          2.94          3.02           3.03
SQL 16K                   5.81          3.70          5.27           4.22
SQL 64K                   15.48         9.88          13.71          13.64
Web Server                2.05          1.95          3.08           2.84
Workstation               1.86          1.85          2.87           2.77

Figure 4.11 LIO Physical and Virtual MTU MB/s
Following the throughput tests, IOPS were then measured and compared. In three out of four comparisons, standard frame sizes produced higher IOPS values. The physical initiator write test was the only case in which the jumbo frame values were slightly higher than the standard frame results. Because the averages overlap the standard deviations, the differences between frame sizes are negligible. The complete set of results measuring IOPS at standard and jumbo frame sizes is depicted graphically below in Figure 4.12.
[Figure 4.12 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum IOPS        30782.40      30278.11      25694.18       24623.21
Maximum Write IOPS  34416.29      32470.79      32813.13       34156.42

Figure 4.12 LIO Physical and Virtual MTU IOPS

4.5 ISTGT iSCSI Target Results

The final experiment tested the ISTGT iSCSI target server. To compare physical and virtual architectures, a series of tests was performed to obtain metrics. To begin, the throughput and IOPS of both architectures were compared. Then, the same process was performed with 9000 byte MTU sizes and compared against the initial results.

4.5.1 Physical and Virtual Comparison
The comparison of physical and virtual architectures utilizing the ISTGT SAN is depicted below in Figure 4.13. The results favor the physical architecture in application specific tests with notable throughput differences. While the application specific tests show varying differences, the first two tests, which measure maximum throughput speeds, are comparable in both architectures. The actual difference between physical and virtual maximum read throughput is about 6 MB/s, or 6%. The actual difference between virtual and physical maximum write throughput is negligible. It should be noted that the physical initiator produced application specific results with values skewed due to caching. The physical initiator and Iometer encountered a limitation which did not allow Iometer to properly saturate the memory in the SAN. These cached results are apparent in the remaining physical initiator application specific results referencing ISTGT.

[Figure 4.13 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual  Physical
Maximum Throughput        106.08   112.19
Maximum Write Throughput  95.93    94.87
Exchange 03               0.68     3.22
Exchange 07               1.29     5.73
SQL 16K                   1.65     10.06
SQL 64K                   10.31    28.52
Web Server                4.18     73.92
Workstation               1.16     6.49

Figure 4.13 ISTGT Physical and Virtual MB/s

Tests comparing IOPS values produced higher results for the virtual architecture. In both the read and write IOPS tests, the virtual architecture was notably higher than the physical initiator. The difference in maximum read values is about 11,000 IOPS, or 45%. Write values differed by about 5000 IOPS, or 23%. The large variance in values suggests a notable difference between physical and virtual initiators. The results are depicted graphically in Figure 4.14 below.
[Figure 4.14 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual   Physical
Maximum IOPS        35404.79  24408.89
Maximum Write IOPS  26245.88  21170.53

Figure 4.14 ISTGT Physical and Virtual IOPS

4.5.2 MTU Size Comparison
Increasing the MTU size and performing the test set resulted in degraded performance compared to standard frames. In general, the tests measuring maximum read/write throughput showed that standard frame sizes produced higher values than jumbo frames. In some cases, such as the maximum throughput tests, jumbo frames degraded performance significantly in the virtual architecture, where the maximum read and write throughput speeds differ by almost 35 MB/s, or 47%. In addition, standard frame sizes in the virtual and physical architectures produced higher values during the application specific tests. The complete results are shown below in Figure 4.15.
[Figure 4.15 is a bar chart; its data values in MBps are tabulated below.]

Test                      Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum Throughput        106.08        71.86         112.19         106.32
Maximum Write Throughput  95.93         59.59         94.87          94.06
Exchange 03               0.68          0.44          3.22           1.19
Exchange 07               1.29          0.92          5.73           2.26
SQL 16K                   1.65          1.58          10.06          3.50
SQL 64K                   10.31         4.22          28.52          9.68
Web Server                4.18          1.06          73.92          66.96
Workstation               1.16          1.04          6.49           2.50

Figure 4.15 ISTGT Physical and Virtual MTU MB/s
Similarly, the tests measuring IOPS values also favored standard frame sizes over jumbo frames. The difference between frame sizes using the virtual architecture varied by over 15,000 IOPS, or 75%, in the maximum read test. The maximum write test varied by nearly 17,000 IOPS, which is a significant difference. The results for the physical initiator were much more consistent and the differences were negligible. The complete results are shown below in Figure 4.16.
[Figure 4.16 is a bar chart; its data values in IOPS are tabulated below.]

Test                Virtual 1500  Virtual 9000  Physical 1500  Physical 9000
Maximum IOPS        35404.79      20173.33      24408.89       23942.18
Maximum Write IOPS  26245.88      9527.63       21170.53       21281.24

Figure 4.16 ISTGT Physical and Virtual MTU IOPS

4.6 iSCSI Target Server Comparisons

After all results were obtained from the four experiments and their test sets, the results from each SAN target were compared. It was determined that the tests used to compare the SANs would consist only of standard frame sizes. Standard frame sizes were selected due to the consistent values of the previous tests compared to the results from jumbo frames. In addition, some test sets exhibited notable differences between physical and virtual architectures with standard frames. Due to the variation in values, physical and virtual architectures are both used to compare the SANs. The final comparisons consist only of comparing maximum throughput and IOPS values among physical and virtual architectures.

The results from standard frame sizes measuring virtual iSCSI initiator throughput are depicted graphically in Figure 4.17 below. Although most values do not exhibit much variation, the highest values in the maximum throughput tests were from the LIO iSCSI target. LIO, SCST, and IET all had values which were close to the line speed of the network fabric. ISTGT nearly approached line speed at 106.08 MB/s. The LIO iSCSI target did not yield the highest values compared to the other SANs in the application specific tests. Additionally, the results from LIO were comparable to the highest application specific values, which were from IET.
[Figure 4.17 is a bar chart; its data values in MBps are tabulated below.]

Test                      LIO 1500  SCST 1500  IET 1500  ISTGT 1500
Maximum Throughput        112.07    93.82      111.50    106.08
Maximum Write Throughput  112.01    110.43     109.60    95.93
Exchange 03               1.53      1.85       2.34      0.68
Exchange 07               2.99      2.67       4.05      1.29
SQL 16K                   5.81      3.88       8.31      1.65
SQL 64K                   15.48     9.17       24.62     10.31
Web Server                2.05      1.79       2.48      4.18
Workstation               1.86      1.77       2.51      1.16

Figure 4.17 iSCSI Target Server Virtual Maximum Throughput
The next comparison evaluated the IOPS values of the virtual environment among the four different SANs. The results show IET and SCST as the targets with the highest IOPS values. The complete comparison is shown below in Figure 4.18. The largest separation in read IOPS values was between LIO and SCST at nearly 20,000 IOPS, or 64%. The largest separation between write IOPS values was between IET and ISTGT at nearly 19,000 IOPS, or 73%.
iSCSITargetServerVirtualMaximumIOPS 60000.00 55000.00 50000.00 45000.00 40000.00 35000.00 30000.00 IOPS 25000.00 20000.00 15000.00 10000.00 5000.00 0.00 MaximumIOPS MaximumWriteIOPS LIO1500 30782.40 34416.29 SCST1500 50476.36 38446.20 IET1500 45291.26 45382.79 ISTGT1500 35404.79 26245.88
Figure 4.18 iSCSI Target Server Virtual Maximum IOPS

A complete reference of actual and percentage differences is shown below in Table 4.3.

Table 4.3 Virtual Maximum Read and Write IOPS Differences

Virtual Maximum Read IOPS
Comparison      Actual Difference   Percentage Difference
SCST to IET      5185.10            11%
SCST to LIO     19693.96            64%
SCST to ISTGT   15071.58            43%

Virtual Maximum Write IOPS
Comparison      Actual Difference   Percentage Difference
IET to SCST      6936.60            18%
IET to LIO      10966.51            32%
IET to ISTGT    19136.91            73%
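The Percentage Difference columns of Tables 4.3 and 4.4 appear to express the absolute IOPS gap relative to the smaller of the two values; this is an inference from the published numbers, not a convention stated in the text. A minimal check under that assumption:

```python
# Reproduce one row of Table 4.3 under the assumed convention:
# percentage difference = (gap between values) / (smaller value) * 100.
def iops_difference(a: float, b: float) -> tuple[float, float]:
    hi, lo = max(a, b), min(a, b)
    return hi - lo, (hi - lo) / lo * 100

# Virtual maximum read IOPS, SCST (50476.36) vs. LIO (30782.40):
actual, pct = iops_difference(50476.36, 30782.40)
print(f"{actual:.2f} IOPS, {pct:.0f}%")   # 19693.96 IOPS, 64%
```

The same function reproduces the other rows of both tables, which supports the relative-to-smaller reading.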
Additional comparisons consisted of the throughput and IOPS values of the physical architecture among all SANs tested. The throughput comparison data is shown in Figure 4.19 below. The figure shows that the maximum read throughput values came from IET, SCST, and ISTGT, while the maximum write throughput values came from LIO, SCST, and IET. The application-specific tests clearly favored ISTGT; however, several results, such as the web server test, suggest those values were due to caching.
iSCSI Target Server Physical Maximum Throughput (MBps)

             Maximum     Maximum Write  Exchange03  Exchange07  SQL16K  SQL64K  WebServer  Workstation
             Throughput  Throughput
LIO 1500     109.22      112.05         1.55        3.02         5.27   13.71    3.08      2.87
SCST 1500    111.81      112.61         1.48        2.15         3.27    7.88    2.11      2.17
IET 1500     112.58      109.02         1.97        3.62         7.54   14.57    3.74      3.66
ISTGT 1500   112.19       94.87         3.22        5.73        10.06   28.52   73.92      6.49

Figure 4.19 iSCSI Target Server Physical Maximum Throughput
Regarding IOPS values between SANs from a physical initiator, IET and SCST again yielded the highest values. The largest separation in read IOPS values was between IET and ISTGT at nearly 24,700 IOPS, over a 100% difference. The largest separation in write IOPS values was also between IET and ISTGT, at nearly 32,000 IOPS, again over a 100% difference. The differences between the two highest SANs, SCST and IET, were much smaller: maximum read IOPS values between the targets differed by about 2,300 IOPS, or 4.9%, and the difference between IET and SCST write values was about 10,000 IOPS, or 23%. The complete comparison of IOPS values between SANs can be seen below in Figure 4.20.
iSCSI Target Server Physical Maximum IOPS

             Maximum IOPS  Maximum Write IOPS
LIO 1500     25694.18      32813.13
SCST 1500    46843.24      43568.61
IET 1500     49157.36      53568.77
ISTGT 1500   24408.89      21170.53
Figure 4.20 iSCSI Target Server Physical Maximum IOPS

A complete reference of actual and percentage differences is shown below in Table 4.4.
Table 4.4 Physical Maximum Read and Write IOPS Differences

Physical Maximum Read IOPS
Comparison      Actual Difference   Percentage Difference
IET to SCST      2314.12             5%
IET to LIO      23463.18            91%
IET to ISTGT    24748.47           101%

Physical Maximum Write IOPS
Comparison      Actual Difference   Percentage Difference
IET to SCST     10000.16            23%
IET to LIO      20755.65            63%
IET to ISTGT    32398.25           153%

The overall comparison between iSCSI targets and both initiators has been narrowed to the maximum throughput and IOPS test sets. The maximum read and write throughput is compared, as well as the maximum read and write IOPS. This data can be used to infer which iSCSI SAN target was the most consistent and yielded the highest values.

In Figure 4.21 below, physical and virtual iSCSI initiators are shown from all four experiments. The data shows that throughput speeds in MB/s for all initiator and target combinations are mostly comparable. All test values are near the maximum line speed of 112 MB/s previously determined, with the exception of the ISTGT results and the SCST virtual read results.
iSCSI Target Server Physical and Virtual Throughput (MBps)

            Maximum Throughput  Maximum Write Throughput
LIO PHY     109.22              112.05
LIO VM      112.07              112.01
SCST PHY    111.81              112.61
SCST VM      93.82              110.43
IET PHY     112.58              109.02
IET VM      111.50              109.60
ISTGT PHY   112.19               94.87
ISTGT VM    106.08               95.93
Figure 4.21 iSCSI Target Server Physical and Virtual Throughput

Finally, IOPS values were compared with the same test sets across physical, virtual, and SAN architectures. The data points in Figure 4.22 below show a large variation between SAN targets. While the previous throughput tests showed most iSCSI initiators as comparable to line speed, the IOPS values are more diverse. IET and SCST had the highest values in both tests, with ISTGT the lowest. For maximum read IOPS, LIO, SCST, and ISTGT produced higher virtual values than their respective physical results; the only SAN target yielding higher physical read IOPS was IET. For maximum write IOPS, LIO and ISTGT resulted in
higher virtual values than physical values, while SCST and IET both had higher physical values.
iSCSI Target Server Physical and Virtual IOPS

            Maximum IOPS  Maximum Write IOPS
LIO PHY     25694.18      32813.13
LIO VM      30782.40      34416.29
SCST PHY    46843.24      43568.61
SCST VM     50476.36      38446.20
IET PHY     49157.36      53568.77
IET VM      45291.26      45382.79
ISTGT PHY   24408.89      21170.53
ISTGT VM    35404.79      26245.88
Figure 4.22 iSCSI Target Server Physical and Virtual IOPS

4.7 Summary
From observing and analyzing the data acquired from the experiments, no architecture emerged with the overall highest throughput values. Depending on the SAN target analyzed, the physical and virtual architectures produced varying results; in general, no single architecture consistently produced higher values than the others across all four experiments. The entire data set from all experiments conducted is shown in Appendix G. The data collected suggests each SAN target handles initiators differently, regardless of architecture.
Likewise, the data collected and analyzed regarding IOPS metrics did not reveal a clear pattern from which to conclude which architecture was most consistent. In five of the eight total IOPS tests, the virtual architecture produced higher IOPS values than the physical architecture. In some IOPS tests, differences among architectures ranged from about 1,800 to 30,000 IOPS. Relating the associated MB/s to the respective variances in IOPS values, a difference of 20,000 IOPS corresponds to roughly 9-10 MB/s of throughput.

The MTU size of frames also did not produce uniform results. Depending on the SAN target tested, the two initiator architectures produced varying throughput and IOPS results. In most cases, jumbo frames produced results that were more sporadic and less consistent. In addition, there were instances where jumbo frames produced lower values than standard frames.
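The IOPS-to-throughput relation cited above, 20,000 IOPS corresponding to roughly 9-10 MB/s, is consistent with a very small transfer size in the maximum-IOPS test pattern. The sketch below assumes a 512-byte access size; that size is an inference from the numbers, not a value stated in the text:

```python
# Sequential throughput is IOPS multiplied by the per-I/O transfer size.
# A 512-byte access size is assumed here (typical for max-IOPS patterns).
def iops_to_mbps(iops: float, block_bytes: int = 512) -> float:
    return iops * block_bytes / 1e6   # decimal megabytes per second

print(iops_to_mbps(20_000))   # 10.24 MB/s
```

At 512 bytes per I/O, a 20,000 IOPS gap maps to about 10 MB/s, matching the range quoted above.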
CHAPTER 5. CONCLUSIONS AND FUTURE WORK
5.1 iSCSI Target Server Conclusions
While some iSCSI targets produced higher throughput or IOPS values in certain tests, the data did not indicate a clearly superior target. The analyzed data and series of experiments, however, provided insight into iSCSI overhead and SAN performance. Similar to the previously discussed experiments conducted by Aiken and Grunwald, iSCSI introduced a noticeable performance difference relative to local SCSI traffic due to network bottlenecks.

While using a single iSCSI initiator, the difference between physical and virtual architectures did not produce consistent results. In most cases, the maximum throughput measured differed by only one or two MB/s. The minor variation between the two architectures was unexpected: running the entire virtual operating system from the iSCSI target was expected to produce more overhead than the physical counterpart. The exception was SCST, whose overall virtual read throughput value was approximately 18 MB/s, or 19%, less than the physical implementation.

Additionally, the IOPS results obtained during the experiments were in most cases higher for virtual instances than for physical instances. LIO and ISTGT consistently produced higher IOPS values in both read and write tests, while SCST only had higher read IOPS values. Although a physical-to-virtual difference was established in the IOPS values, the actual difference between architectures was again minimal in most cases. The most noticeable difference between architectures involved ISTGT for maximum read IOPS, where the difference was nearly 11,000 IOPS, or 45%.
Regarding the research question proposed previously, virtualization did not introduce a measurable amount of overhead compared to the physical initiator. Previous research and testing suggested virtualization would introduce a measurable amount of overhead with regard to iSCSI throughput. While some virtualization throughput values exceeded their physical counterparts, the results were largely dependent on the type of SAN tested.

The second research question referenced the effect jumbo frames had on a given architecture. Altering the MTU size did not produce a measurable difference in most cases. This is contrary to previous experiments discussed in the literature review: in experiments from Simitci evaluating iSCSI performance, jumbo frames increased throughput values by 60% (Simitci, Malakapalli, & Gunturu, 2001). During the set of experiments in this thesis with modern iSCSI targets and hardware, jumbo frames only marginally improved some application-specific tests, and the overall throughput decreased in some instances. Implementing jumbo frames also introduces additional overhead when configuring network equipment and hardware. As previously stated, jumbo frames are not an IEEE standard, so different hardware manufacturers can have different implementations. While MTU size generally did not introduce large throughput differences, if any, some iSCSI targets such as ISTGT slightly benefited from jumbo frames during application-specific tests.

The trend of the data suggests that the older iSCSI targets, IET and SCST, produced higher throughput values, while the more modern iSCSI targets, LIO and ISTGT, produced comparable values in throughput tests but lower values in IOPS tests. When selecting the most practical iSCSI target server for an environment, it is necessary to consider other factors besides throughput metrics.

As discussed previously, newer iSCSI target servers such as LIO have advanced error handling. Although not tested here, SANs generally have multiple initiators connected at a single time, and multiple initiators increase the chance of iSCSI transport errors. LIO is stated to have the most advanced error-handling logic, which may provide higher throughput rates during error conditions than the other targets.
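The marginal best-case gain from jumbo frames noted above can be sketched with a quick framing-overhead calculation. This is a sketch assuming standard Ethernet framing (38 bytes of header, FCS, preamble, and inter-frame gap per frame) and IPv4/TCP headers (40 bytes); iSCSI PDU headers and retransmissions are deliberately ignored:

```python
# Payload efficiency of a TCP stream at a given MTU: the fraction of
# wire time spent carrying payload rather than per-frame overhead.
def efficiency(mtu: int) -> float:
    payload = mtu - 40        # subtract IPv4 (20) + TCP (20) headers
    return payload / (mtu + 38)  # add Ethernet framing overhead per frame

# Best-case relative throughput gain from moving MTU 1500 -> 9000.
gain = efficiency(9000) / efficiency(1500) - 1
print(f"{gain:.1%}")   # ~4.4% best-case payload gain
```

Because the per-frame overhead is already small at MTU 1500, the theoretical ceiling for jumbo frames is only a few percent higher, which is consistent with the marginal (and sometimes negative) gains measured in these experiments.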
ISTGT also has advanced error handling and hardware acceleration support for virtual environments. Of the four targets tested, ISTGT was the only target which allowed for hardware acceleration. Hardware acceleration uses the SAN hardware to decrease the time certain virtualization tasks, such as cloning, take. This relieves network traffic and decreases the amount of time needed for virtualization tasks.

All iSCSI targets tested, besides ISTGT, were created from a standard installation of Fedora 18. These basic implementations of iSCSI targets were extremely limited in support and documentation. These targets are often built into software bundles such as OpenFiler (IET) and Open-E (SCST). These products have a support community which can provide troubleshooting assistance as well as paid support. The bundled SAN solutions also provide easier installation and implementation with graphical or text-based installers.

Data centers implementing open source SAN solutions have a variety of metrics to take into account. In most instances, the modern SAN targets tested suggest jumbo frames introduce more overhead with minimal gain. Additionally, a virtual environment alone does not introduce a significant amount of overhead. From the acquired data, implementation best practices depend on the primary use case intended for a SAN.

The results of this research add to the body of knowledge of open source SAN implementations. As previously discussed, open source SAN documentation and installation methods can be scarce. The results contained in this research can assist others who seek to implement one of the tested SAN targets. Also, the metrics tested, such as architecture and MTU size, can be applied to SAN targets other than IET, SCST, LIO, and ISTGT. Additionally, the research has reinforced the finding that virtualization introduces minimal overhead compared to physical machines and in some instances can outperform them. Tests utilizing modern SAN targets and computer hardware have determined that altering the MTU size may be beneficial in certain application-specific implementations but generally creates more overhead and varied results. Using this research as a guide, open source SAN implementations can be better
planned for optimal performance. Finally, this research introduces additional areas for others to expand upon and research further.

5.2 Future Work
Throughout the course of these experiments, a large amount of data was obtained and analyzed. In analyzing the data, several areas for future work became apparent. Future work is defined here as additional relevant areas which have the potential to expand on the present research. The areas discussed in this section include multiple iSCSI initiators, additional virtualization tasks, failover times, and increased hardware specifications.

The first area for future work is using multiple initiators in a single test as opposed to only one. While the tests in this research focused on maximum obtainable speeds, SANs normally have multiple initiators passing data at the same time. Multiple iSCSI initiators passing traffic could introduce additional overhead which targets could handle differently. Additional research with up to three initiators, conducted after the primary experiments previously discussed, suggested a slight decrease in overall throughput. Providing additional stress on iSCSI targets may impact overall performance.

The next area for future work involves focusing solely on additional virtualization aspects. As discussed previously in this thesis, virtualization has often been associated with and complemented by SANs. VMware and other virtualization products introduce additional aspects which could affect overall throughput and IOPS, such as cloning virtual machines and virtual machine suspension times. Performing similar throughput tests or timed tests regarding these factors is an additional area of interest.

The final area of additional research revolves around different equipment which would not limit throughput rates or introduce bottlenecks. The data collected in the previous experiments was limited by the throughput speed of the network. Additional research utilizing 10 GbE network fabric or MPIO could be used to determine whether iSCSI targets have more variance in maximum throughput values. Similarly
to the OpenFiler tests achieving 100,000 random IOPS performed by Intel, utilizing equipment limited only by the protocol itself could provide more insight into iSCSI target differences (Intel, 2012).
REFERENCES
Ahmad, I., Anderson, J. M., Holler, A. M., Kambo, R., & Makhija, V. (2003). An analysis of disk performance in VMware ESX server virtual machines. 2003 IEEE International Conference on Communications (Cat. No. 03CH37441), 65–76. doi:10.1109/WWC.2003.1249058

Aiken, S., & Grunwald, D. (2003). A performance analysis of the iSCSI protocol. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1194849

An iSCSI target implementation for multipath failover cluster nodes. (n.d.). Retrieved from http://www.peach.ne.jp/archives/istgt/

Csaplar, D. (2012). Is the hypervisor market expanding or contracting? Retrieved from http://blogs.aberdeen.com/it-infrastructure/is-the-hypervisor-market-expanding-or-contracting/

Drummonds, S. (2008). VMware communities: Storage system performance analysis with Iometer. Retrieved April 14, 2013, from http://communities.vmware.com/docs/DOC-3961

Faustini, A., Solder, C., Scheibe, T., Law, D., Ayandeh, S., Kohl, B., & Multanen, E. (2009). Ethernet jumbo frames, 1–10.

Generic SCSI Target Subsystem for Linux. (n.d.). Retrieved April 04, 2013, from http://scst.sourceforge.net/target_iscsi.html

Intel. (2012). Meeting the demands of virtual storage in the cloud (p. 3). Retrieved from https://download.openfiler.com/Intel_FrankenSAN_Case_Study_3-16-12.pdf

Iometer User's Guide. (2003). Retrieved from http://iometer.cvs.sourceforge.net/viewvc/iometer/iometer/Docs/Iometer.pdf

Kolahi, S. S., Narayan, S., Nguyen, D. D. T., & Sunarto, Y. (2011). Performance monitoring of various network traffic generators. 2011 UkSim 13th International Conference on Computer Modelling and Simulation, 501–506. doi:10.1109/UKSIM.2011.102
Torvalds, L. (2011). [Linux kernel source commit]. Retrieved from http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=38567333a6dabd0f2b4150e9fb6dd8e3ba2985e5

Liu, E., & Al-shaikh, R. (2009). SAN performance evaluation test bed (pp. 1–5).

Lowe, S. (2010). Calculate IOPS in a storage array. TechRepublic. Retrieved February 26, 2013, from http://www.techrepublic.com/blog/datacenter/calculate-iops-in-a-storage-array/2182

Ren, C., Wang, D., Urgaonkar, B., & Sivasubramaniam, A. (2012). Carbon-aware energy capacity planning for datacenters, 391–400. Retrieved from http://ieeexplore.ieee.org.ezproxy.lib.purdue.edu/xpl/articleDetails.jsp?arnumber=6298199

Rodrigues, G. (2011). A tale of two SCSI targets. Retrieved from http://lwn.net/Articles/424004/

Rosenblum, M., & Waldspurger, C. (2011). I/O virtualization: Decoupling a logical device from its physical implementation offers many compelling advantages. 9(11), 30. doi:10.1145/2063166.2071256

Satran, J., Meth, K., Sapuntzakis, C., Chadalapaka, M., & Zeidner, E. (2004). Internet Small Computer Systems Interface (iSCSI) - RFC 3720.

Simitci, H., Malakapalli, C., & Gunturu, V. (2001). Evaluation of SCSI over TCP/IP and SCSI over Fibre Channel connections, 87–91. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=946698&tag=1

The iSCSI Enterprise Target Project. (n.d.). Retrieved April 04, 2013, from http://iscsitarget.sourceforge.net/

The Linux SCSI target wiki. (2013). Retrieved June 04, 2013, from http://linux-iscsi.org/wiki/Target

Torvalds, L. (n.d.). [Linux kernel source commit]. Retrieved from http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=38567333a6dabd0f2b4150e9fb6dd8e3ba2985e5

VMware. (2009). Recommendations for aligning VMFS partitions, 1–10. Retrieved from http://www.vmware.com/pdf/esx3_partition_align.pdf
VMware. (2012). Performance best practices for VMware vSphere 5.1. Retrieved from http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf

VMware. (2013). VMware I/O Analyzer installation and user's guide, 1–34. Retrieved from http://download3.vmware.com/software/vmw-tools/ioanalyzer/IOAnalyzerGuide_1.5.1_20130412.pdf

Yoder, A. G., Carlson, M., Thiel, D., Deel, D., & Hibbard, E. (2012). The 2012 SNIA dictionary, v. 2012.1.E.

Zhang, J., Yang, B., Guo, J., & Jia, H. (2005). Study and performance analysis of IP-based storage systems. (D. Xu, K. A. Schouhamer Immink, & K. Shono, Eds.), 5643, 150–159. doi:10.1117/12.571020
APPENDICES
Appendix A iSCSI Network Topology
[Node details recovered from the Figure A.1 diagram labels:]

Physical iSCSI Initiator: Dell Optiplex 990, Windows Server 2008 R2
    eth0 (MGMT/SRV): 10.3.4.21/24
    eth1 (iSCSI): 192.168.1.21/24

iSCSI Target Server: Dell PowerEdge 2950, Fedora 18 / FreeNAS
    eth0 (MGMT/SRV): 10.3.4.200/24
    eth1 (iSCSI): 192.168.1.200/24

Network Switch: HP ProCurve 2900
    MGMT: 10.3.4.5/24
    VLAN 304: MGMT
    VLAN 900: iSCSI

VMware I/O Analyzer
    vmnic1 (MGMT/SRV): 10.3.4.58/24

Hypervisor (virtual iSCSI Initiator): Dell Optiplex 990, ESXi 5.1
    vmnic1 (MGMT/SRV): 10.3.4.10
    vmnic2 (iSCSI): 192.168.1.10

Figure A.1 IP Network Topology iSCSI Architecture
Appendix B IET SAN Configuration File
[root@IETsan ~]# cat /etc/tgt/targets.conf

# This is a sample config file for tgt-admin.
# The "#" symbol disables the processing of a line.

# Set the driver. If not specified, defaults to "iscsi".
default-driver iscsi

# Set iSNS parameters, if needed
#iSNSServerIP 192.168.111.222
#iSNSServerPort 3205
#iSNSAccessControl On
#iSNS On

# Continue if tgtadm exits with non-zero code (equivalent of
# --ignore-errors command line option)
#ignore-errors yes

# provided device as an iSCSI target
backing-store /dev/vg_target00/lv_target00

# iSCSI initiator's IP address you allow to connect
initiator-address 192.168.1.0/24

# provided device as an iSCSI target
backing-store /dev/vg_target03/lv_target03

# iSCSI initiator's IP address you allow to connect
initiator-address 192.168.1.0/24
Appendix C SCST SAN Configuration File
# /etc/scst.conf configuration file

HANDLER vdisk_fileio {
    DEVICE disk01 {
        filename /dev/sda1
        nv_cache 1
    }
    DEVICE disk02 {
        filename /dev/sda2
        nv_cache 1
    }
}

TARGET_DRIVER iscsi {
    enabled 1
    TARGET iqn.2013-01.test.lcl:scst-vm {
        LUN 1 disk01
        enabled 1
    }
    TARGET iqn.2013-02.test.lcl:scst-phy {
        LUN 2 disk02
        enabled 1
    }
}
Appendix D LIO SAN Configuration
[root@LioSAN ~]# targetcli
targetcli shell version 2.1.26
Copyright 2011 by RisingTide Systems LLC and others.
For help on commands, type 'help'.

/> ls
o- / .............................................................. [...]
  o- backstores ................................................... [...]
  | o- block ....................................... [Storage Objects: 2]
  | | o- lio1 ............ [/dev/sda1 (200.0GiB) write-thru activated]
  | | o- lio2 ............ [/dev/sda2 (200.0GiB) write-thru activated]
  | o- fileio ...................................... [Storage Objects: 0]
  | o- pscsi ....................................... [Storage Objects: 0]
  | o- ramdisk ..................................... [Storage Objects: 0]
  o- iscsi ................................................ [Targets: 2]
  | o- iqn.2013-01.test.lcl:Lio2950-vm ......................... [TPGs: 1]
  | | o- tpg1 ................................................ [enabled]
  | |   o- acls .............................................. [ACLs: 1]
  | |   | o- iqn.1998-01.com.vmware:esxi-592ddc91 ..... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ..................... [lun0 block/lio1 (rw)]
  | |   o- luns .............................................. [LUNs: 1]
  | |   | o- lun0 ............................. [block/lio1 (/dev/sda1)]
  | |   o- portals ........................................ [Portals: 1]
  | |     o- 192.168.1.220:3260 .................................. [OK]
  | o- iqn.2013-02.test.lcl:lio2950-phy ........................ [TPGs: 1]
  |   o- tpg1 ................................................ [enabled]
  |     o- acls .............................................. [ACLs: 1]
  |     | o- iqn.1991-05.com.microsoft:win-v32tlnb77vj  [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ..................... [lun0 block/lio2 (rw)]
  |     o- luns .............................................. [LUNs: 1]
  |     | o- lun0 ............................. [block/lio2 (/dev/sda2)]
  |     o- portals ........................................ [Portals: 1]
  |       o- 192.168.1.220:3260 .................................. [OK]
  o- loopback ............................................. [Targets: 0]
  o- vhost ................................................ [Targets: 0]
Appendix E ISTGT SAN Configuration

Figure E.1 FreeNAS Target Global Configuration

Figure E.2 FreeNAS Portals Configuration
Figure E.3 FreeNAS Targets Configuration
Figure E.4 FreeNAS Extents Configuration

Figure E.5 FreeNAS Associated Targets
Appendix F HP ProCurve 2900 Configuration
ProCurve Switch 2900-24G# show run

Running configuration:

; J9049A Configuration Editor; Created on release #T.13.71
hostname "ProCurve Switch 2900-24G"
module 1 type J86xxA
module 3 type J90XXA
ip default-gateway 10.3.4.1
vlan 1
   name "DEFAULT_VLAN"
   untagged A1-A4
   ip address dhcp-bootp
   tagged 24
   no untagged 1-23
   exit
vlan 304
   name "Management"
   untagged 1-12
   ip address 10.3.4.5 255.255.255.0
   exit
vlan 900
   name "iSCSI"
   untagged 13-24
   no ip address
   jumbo
   exit
jumbo max-frame-size 9018
ip route 10.0.0.0 255.0.0.0 10.3.4.1
snmp-server community "public" Unrestricted
Appendix G Raw iSCSI Average Values
Table G.1 IET Virtual IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          44730.55  45297.46  45845.78  45291.26  455.31
Maximum Write IOPS    45240.57  45906.26  45001.55  45382.79  382.80

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          42216.12  42897.59  42662.66  42592.12  282.64
Maximum Write IOPS    45128.85  44255.27  43014.98  44133.03  867.30
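The Average and STD columns of the Appendix G tables can be reproduced from the three raw test runs. The published STD values appear to match a population standard deviation (divisor N) rather than a sample standard deviation (divisor N-1); that is an observation inferred from the numbers, not a method stated in the text.

```python
# Reproduce the Average and STD columns of Table G.1 (maximum IOPS,
# standard frames) from the three raw runs using population statistics.
from statistics import mean, pstdev

runs = [44730.55, 45297.46, 45845.78]   # Table G.1, Test 1-3

print(round(mean(runs), 2))     # 45291.26
print(round(pstdev(runs), 2))   # 455.31
```

Substituting `statistics.stdev` (the sample deviation) yields about 557 here, which does not match the published 455.31, supporting the population-deviation reading.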
Table G.2 IET Physical IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          49754.35  48725.28  48992.45  49157.36  436.00
Maximum Write IOPS    53516.81  53623.37  53566.15  53568.77  43.54

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          39483.66  38100.29  41984.56  39856.17  1607.47
Maximum Write IOPS    51462.59  51298.60  51625.59  51462.26  133.49
Table G.3 IET Virtual MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          111.30  112.10  111.10  111.50   0.43
Maximum Write Throughput    109.43  109.72  109.65  109.60   0.12
Exchange03                    2.72    1.85    2.46    2.34   0.37
Exchange07                    3.48    4.47    4.19    4.05   0.42
SQL16K                        8.11    8.59    8.24    8.31   0.20
SQL64K                       24.98   24.87   24.00   24.62   0.44
WebServer                     2.55    2.41    2.49    2.48   0.06
Workstation                   2.49    2.55    2.49    2.51   0.03

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          106.54  106.39  106.84  106.59   0.19
Maximum Write Throughput    112.94  113.78  113.46  113.39   0.35
Exchange03                    2.00    2.48    2.01    2.16   0.22
Exchange07                    4.01    4.10    4.21    4.11   0.08
SQL16K                        8.62    8.65    8.70    8.65   0.03
SQL64K                       24.70   25.98   25.69   25.46   0.55
WebServer                     2.59    2.55    2.72    2.62   0.07
Workstation                   2.58    2.35    2.40    2.44   0.10
Table G.4 IET Physical MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          112.95  112.25  112.53  112.58   0.29
Maximum Write Throughput    109.13  108.90  109.04  109.02   0.10
Exchange03                    1.94    1.94    2.04    1.97   0.05
Exchange07                    3.59    3.59    3.70    3.62   0.05
SQL16K                        7.52    7.52    7.58    7.54   0.03
SQL64K                       14.67   14.67   14.38   14.57   0.14
WebServer                     3.74    3.74    3.74    3.74   0.00
Workstation                   3.66    3.67    3.65    3.66   0.01

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput           76.12   75.32   75.80   75.75   0.33
Maximum Write Throughput    109.34  109.80  109.99  109.71   0.27
Exchange03                    2.10    2.34    1.57    2.00   0.32
Exchange07                    3.84    3.26    3.55    3.55   0.24
SQL16K                        7.34    6.54    6.99    6.96   0.33
SQL64K                       14.87   14.22   14.62   14.57   0.27
WebServer                     3.21    3.91    3.89    3.67   0.33
Workstation                   3.79    3.10    2.90    3.26   0.38

Table G.5 SCST Virtual IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          52305.12  51398.67  47725.29  50476.36  1980.19
Maximum Write IOPS    39547.13  36859.23  38932.23  38446.20  1149.89

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          46737.27  46530.27  48675.23  47314.26  966.06
Maximum Write IOPS    38356.23  38756.28  38956.00  38689.51  249.36
Table G.6 SCST Physical IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          45629.19  46098.24  48802.29  46843.24  1398.43
Maximum Write IOPS    44116.23  42648.11  43941.50  43568.61  654.79

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          48434.53  48332.62  48535.85  48434.33  82.97
Maximum Write IOPS    45129.75  45523.66  46087.91  45580.44  393.22

Table G.7 SCST Virtual MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput           93.48   92.81   94.83   93.82   1.01
Maximum Write Throughput    110.97  110.23  110.09  110.43   0.39
Exchange03                    1.45    1.53    2.56    1.85   0.51
Exchange07                    2.76    2.26    2.98    2.67   0.30
SQL16K                        3.87    3.90    3.87    3.88   0.01
SQL64K                        8.53    9.86    9.12    9.17   0.54
WebServer                     2.11    1.56    1.69    1.79   0.23
Workstation                   2.54    1.55    1.23    1.77   0.56

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput           87.98   88.76   88.95   88.56   0.42
Maximum Write Throughput    114.21  115.23  113.01  114.15   0.91
Exchange03                    1.45    1.95    1.91    1.77   0.23
Exchange07                    2.87    3.56    2.20    2.88   0.56
SQL16K                        4.32    4.20    3.99    4.17   0.14
SQL64K                        9.45   11.63    9.10   10.06   1.12
WebServer                     1.42    1.76    2.54    1.91   0.47
Workstation                   1.62    2.87    1.12    1.87   0.74
Table G.8 SCST Physical MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          112.57  111.17  111.70  111.81   0.58
Maximum Write Throughput    111.87  112.98  112.98  112.61   0.52
Exchange03                    1.39    1.53    1.53    1.48   0.06
Exchange07                    2.05    2.19    2.20    2.15   0.07
SQL16K                        3.15    3.41    3.25    3.27   0.11
SQL64K                        7.97    8.10    7.58    7.88   0.22
WebServer                     2.11    2.11    2.10    2.11   0.01
Workstation                   2.10    2.10    2.29    2.17   0.09

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput           84.10   85.12   86.14   85.12   0.83
Maximum Write Throughput     74.57   74.96   74.55   74.69   0.19
Exchange03                    1.41    1.52    1.51    1.48   0.05
Exchange07                    2.04    2.20    2.18    2.14   0.07
SQL16K                        3.15    3.34    3.26    3.25   0.08
SQL64K                        7.45    7.56    8.15    7.72   0.31
WebServer                     2.10    2.05    2.01    2.05   0.04
Workstation                   2.08    2.08    2.16    2.11   0.04

Table G.9 LIO Virtual IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          28778.93  35892.28  27675.99  30782.40  3641.18
Maximum Write IOPS    34956.78  32957.98  35334.10  34416.29  1042.62

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          28146.28  34148.96  28539.09  30278.11  2741.80
Maximum Write IOPS    33085.49  30156.56  34170.32  32470.79  1695.28
Table G.10 LIO Physical IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          26932.55  24225.29  25924.69  25694.18  1117.19
Maximum Write IOPS    31695.28  34291.82  32452.28  32813.13  1090.31

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          25483.24  23956.29  24430.11  24623.21  638.15
Maximum Write IOPS    34111.79  34176.56  34180.92  34156.42  31.61

Table G.11 LIO Virtual MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          111.46  112.45  112.30  112.07   0.44
Maximum Write Throughput    112.87  111.09  112.08  112.01   0.73
Exchange03                    1.54    1.29    1.76    1.53   0.19
Exchange07                    2.45    3.54    2.99    2.99   0.44
SQL16K                        5.44    6.34    5.64    5.81   0.39
SQL64K                       16.23   15.21   15.01   15.48   0.53
WebServer                     2.08    1.98    2.09    2.05   0.05
Workstation                   1.98    1.61    1.98    1.86   0.17

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          118.42  117.10  117.02  117.51   0.64
Maximum Write Throughput    115.18  114.26  114.10  114.51   0.47
Exchange03                    1.03    1.16    1.05    1.08   0.06
Exchange07                    2.91    2.98    2.92    2.94   0.03
SQL16K                        4.56    3.57    2.98    3.70   0.65
SQL64K                        9.87    9.79    9.97    9.88   0.07
WebServer                     1.99    1.91    1.95    1.95   0.03
Workstation                   1.76    1.80    1.98    1.85   0.10
Table G.12 LIO Physical MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          106.65  109.01  112.01  109.22   2.19
Maximum Write Throughput    112.23  111.90  112.01  112.05   0.14
Exchange03                    1.55    1.97    1.13    1.55   0.34
Exchange07                    2.99    3.04    3.02    3.02   0.02
SQL16K                        5.01    5.69    5.10    5.27   0.30
SQL64K                       13.60   14.04   13.50   13.71   0.23
WebServer                     3.29    3.08    2.89    3.08   0.16
Workstation                   2.92    2.89    2.81    2.87   0.04

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput           94.70   95.80   96.02   95.51   0.58
Maximum Write Throughput     97.65  108.66   99.34  101.88   4.84
Exchange03                    1.53    1.56    1.55    1.55   0.01
Exchange07                    2.84    3.21    3.03    3.03   0.15
SQL16K                        4.12    4.53    4.01    4.22   0.22
SQL64K                       13.64   13.37   13.91   13.64   0.22
WebServer                     2.65    3.21    2.67    2.84   0.26
Workstation                   2.67    3.42    2.21    2.77   0.50

Table G.13 ISTGT Virtual IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          34529.12  34926.24  36759.00  35404.79  971.20
Maximum Write IOPS    25748.97  25359.67  27629.00  26245.88  990.84

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          19122.18  21387.58  20010.23  20173.33  932.01
Maximum Write IOPS    10748.28   9362.38   8472.23   9527.63  936.51
Table G.14 ISTGT Physical IOPS With Standard and Jumbo Frames

Standard Frames       Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          23498.53  26456.27  23271.86  24408.89  1450.68
Maximum Write IOPS    20192.23  23139.23  20180.12  21170.53  1392.09

Jumbo Frames          Test 1    Test 2    Test 3    Average   STD
Maximum IOPS          24708.46  24623.08  22495.00  23942.18  1023.91
Maximum Write IOPS    21133.86  21404.05  21305.81  21281.24  111.66

Table G.15 ISTGT Virtual MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          106.75  106.30  105.19  106.08   0.66
Maximum Write Throughput     95.68   96.53   95.57   95.93   0.43
Exchange03                    0.56    0.98    0.49    0.68   0.22
Exchange07                    1.01    1.23    1.62    1.29   0.25
SQL16K                        1.65    1.89    1.41    1.65   0.20
SQL64K                        9.85   10.10   10.98   10.31   0.48
WebServer                    10.01    1.56    0.97    4.18   4.13
Workstation                   1.12    0.86    1.49    1.16   0.26

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput           64.65   71.06   79.87   71.86   6.24
Maximum Write Throughput     53.98   63.48   61.32   59.59   4.07
Exchange03                    0.32    0.76    0.23    0.44   0.23
Exchange07                    1.15    0.87    0.74    0.92   0.17
SQL16K                        1.34    1.87    1.54    1.58   0.22
SQL64K                        3.45    5.12    4.10    4.22   0.69
WebServer                     1.04    1.12    1.01    1.06   0.05
Workstation                   1.07    0.98    1.08    1.04   0.05
Table G.16 ISTGT Physical MBps With Standard and Jumbo Frames

Standard Frames             Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          112.75  112.12  111.70  112.19   0.43
Maximum Write Throughput     93.98   93.66   96.98   94.87   1.49
Exchange03                    3.31    3.26    3.10    3.22   0.09
Exchange07                    6.09    5.43    5.68    5.73   0.27
SQL16K                        7.90   11.76   10.53   10.06   1.61
SQL64K                       28.56   29.87   27.12   28.52   1.12
WebServer                    75.47   81.26   65.02   73.92   6.72
Workstation                   5.96    5.59    7.91    6.49   1.02

Jumbo Frames                Test 1  Test 2  Test 3  Average  STD
Maximum Throughput          107.56  105.20  106.19  106.32   0.97
Maximum Write Throughput     94.90   95.18   92.09   94.06   1.39
Exchange03                    0.91    0.80    1.86    1.19   0.48
Exchange07                    1.12    2.87    2.79    2.26   0.81
SQL16K                        2.34    3.48    4.70    3.50   0.97
SQL64K                        9.65   10.29    9.10    9.68   0.49
WebServer                    53.79   67.89   79.21   66.96  10.40
Workstation                   2.30    1.96    3.25    2.50   0.55