A Comparative Analysis of Open Source Storage Area Networks with ESXi 5.1 (Robert M. Trinkle)

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Comparative Analysis of Open Source Storage Area Networks with ESXi 5.1
Robert M. Trinkle, Purdue University

Purdue e-Pubs, Open Access Theses, Theses and Dissertations, 2013.

Recommended Citation: Trinkle, Robert M., "A Comparative Analysis Of Open Source Storage Area Networks With ESXi 5.1" (2013). Open Access Theses. 97. https://docs.lib.purdue.edu/open_access_theses/97

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information.

Graduate School ETD Form 9 (Revised 12/07)
PURDUE UNIVERSITY GRADUATE SCHOOL, Thesis/Dissertation Acceptance

This is to certify that the thesis/dissertation prepared by Robert M. Trinkle, entitled "A Comparative Analysis of Open Source Storage Area Networks With ESXi 5.1", for the degree of Master of Science, is approved by the final examining committee: P.T. Rawles (Chair), Raymond Hansen, Thomas Hacker.

To the best of my knowledge and as understood by the student in the Research Integrity and Copyright Disclaimer (Graduate School Form 20), this thesis/dissertation adheres to the provisions of Purdue University's "Policy on Integrity in Research" and the use of copyrighted material.

Approved by Major Professor(s): P.T. Rawles
Approved by: Jeffrey L. Whitten, Head of the Graduate Program, 11/25/2013

A COMPARATIVE ANALYSIS OF OPEN SOURCE STORAGE AREA NETWORKS WITH ESXI 5.1

A Thesis Submitted to the Faculty of Purdue University by Robert M. Trinkle, In Partial Fulfillment of the Requirements for the Degree of Master of Science, December 2013, Purdue University, West Lafayette, Indiana

TABLE OF CONTENTS

LIST OF TABLES ... v
LIST OF FIGURES ... vii
LIST OF ABBREVIATIONS ... ix
GLOSSARY ... x
ABSTRACT ... xi

CHAPTER 1. INTRODUCTION ... 1
1.1 Background ... 1
1.2 Statement of Problem ... 3
1.3 Significance of Problem ... 3
1.4 Purpose of Research ... 4
1.5 Research Question ... 4
1.6 Assumptions ... 4
1.7 Limitations ... 5
1.8 Delimitations ... 5
1.9 Summary ... 5

CHAPTER 2. LITERATURE REVIEW ... 7
2.1 iSCSI Target Servers ... 7
2.1.1 iSCSI Enterprise Target ... 8
2.1.2 SCST and LIO ... 9
2.1.3 ISTGT ... 10
2.2 Storage Alignment ... 11
2.3 Network Construction ... 13
2.4 Iometer ... 14
2.5 Iperf ... 15
2.6 Summary ... 15

CHAPTER 3. METHODOLOGY ... 17
3.1 Framework ... 17
3.2 Testing Methodology ... 21
3.2.1 Experiments ... 22
3.3 Analyzing Data ... 23
3.4 Summary ... 23

CHAPTER 4. RESULTS AND DISCUSSIONS ... 25
4.1 Local RAID and Network Results ... 27
4.2 IET iSCSI Target Results ... 27
4.2.1 Physical and Virtual Comparison ... 27
4.2.2 MTU Size Comparison ... 29
4.3 SCST iSCSI Target Results ... 32
4.3.1 Physical and Virtual Comparison ... 32
4.3.2 MTU Size Comparison ... 33
4.4 LIO iSCSI Target Results ... 35
4.4.1 Physical and Virtual Comparison ... 36
4.4.2 MTU Size Comparison ... 37
4.5 ISTGT iSCSI Target Results ... 39
4.5.1 Physical and Virtual Comparison ... 39
4.5.2 MTU Size Comparison ... 41
4.6 iSCSI Target Server Comparisons ... 43
4.7 Summary ... 50

CHAPTER 5. CONCLUSIONS AND FUTURE WORK ... 52
5.1 iSCSI Target Server Conclusions ... 52
5.2 Future Work ... 55

REFERENCES ... 57

APPENDICES
Appendix A: iSCSI Network Topology ... 60
Appendix B: IET SAN Configuration File ... 61
Appendix C: SCST SAN Configuration File ... 63
Appendix D: LIO SAN Configuration ... 65
Appendix E: ISTGT SAN Configuration ... 67
Appendix F: HP Procurve 2950 Configuration ... 70
Appendix G: Raw iSCSI Average Values ... 72

LIST OF TABLES
Table 3.1 Dell Optiplex 2950 Specifications ... 19
Table 3.2 Dell Optiplex 990 Specifications ... 20
Table 4.1 Throughput Tests ... 26
Table 4.2 IOPS Tests ... 26

Appendix Tables
Table G.1 IET Virtual IOPS With Standard and Jumbo Frames ... 72
Table G.2 IET Physical IOPS With Standard and Jumbo Frames ... 72
Table G.3 IET Virtual MBps With Standard and Jumbo Frames ... 73
Table G.4 IET Physical MBps With Standard and Jumbo Frames ... 74
Table G.5 SCST Virtual IOPS With Standard and Jumbo Frames ... 74
Table G.6 SCST Physical IOPS With Standard and Jumbo Frames ... 75
Table G.7 SCST Virtual MBps With Standard and Jumbo Frames ... 75
Table G.8 SCST Physical MBps With Standard and Jumbo Frames ... 76
Table G.9 LIO Virtual IOPS With Standard and Jumbo Frames ... 76
Table G.10 LIO Physical IOPS With Standard and Jumbo Frames ... 77
Table G.11 LIO Virtual MBps With Standard and Jumbo Frames ... 77
Table G.12 LIO Physical MBps With Standard and Jumbo Frames ... 78
Table G.13 ISTGT Virtual IOPS With Standard and Jumbo Frames ... 78
Table G.14 ISTGT Physical IOPS With Standard and Jumbo Frames ... 79
Table G.15 ISTGT Virtual MBps With Standard and Jumbo Frames ... 79
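The Appendix G tables report average IOPS and MBps for each iSCSI target under standard and jumbo frames. As a hedged illustration of the kind of averaging behind such tables (the sample values and data structure below are hypothetical, not taken from the thesis), a minimal Python sketch:

```python
# Illustrative sketch only: how per-target averages like those in the
# Appendix G tables might be computed from repeated Iometer runs.
# Target names match the thesis; the sample values are invented.
from statistics import mean

# (target, MTU in bytes) -> MBps samples from repeated runs
raw_mbps = {
    ("IET", 1500): [98.2, 97.5, 99.1],
    ("IET", 9000): [109.4, 110.2, 108.8],
    ("SCST", 1500): [101.3, 100.7, 102.0],
    ("SCST", 9000): [112.9, 113.4, 111.8],
}

def summarize(samples):
    """Average the repeated runs for each (target, MTU) combination."""
    return {key: round(mean(vals), 2) for key, vals in samples.items()}

for (target, mtu), avg in summarize(raw_mbps).items():
    frame = "jumbo" if mtu > 1500 else "standard"
    print(f"{target:5s} {frame:8s} frames: {avg:7.2f} MBps")
```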
Recommended publications
  • Summit™ M5x Protocol Analyzer / Jammer for PCI Express® 5.0
The Summit M5x is Teledyne LeCroy’s highest performance PCI Express analyzer, offering both inline probing and direct tapping approaches to protocol capture - the best overall diversity in probing methods.

Inline operation provides advanced features such as: real-time error injection (jammer) for the PCI Express 4.0 specification; support for data rates of 2.5 GT/s, 5.0 GT/s, 8.0 GT/s, and 16.0 GT/s; full data capture on bidirectional link widths of x1, x2, x4, x8 and x16 for up to 16 GT/s; and up to 128GB of trace memory.

Direct tapping operation supports PCIe® 5.0 protocol analysis with speeds up to 32 GT/s at up to x8 link widths. The Summit M5x is ideal for high-performance protocol development of add-in boards, switches, servers and workstations, and for customers currently working on PCIe. It is also a flexible hardware analyzer for CXL (Compute Express Link) links: support is provided for CXL.io, CXL.mem and CXL.cache, with full decoding from the FLIT layer to the CXL Message layers. The Summit M5x PCIe 5.0 Protocol Analyzer is a high-end analyzer that offers important analysis features for new application development.

Key Features:
• Find errors fast: one-button error check, fast upload speed, large trace memory, powerful triggering/filtering
• See and understand the traffic: get useful information, more choices of data views, more ways to analyze data, custom decoding and reports
• Data capture: 100% data capture at 32.0 GT/s, deep memory buffer
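As a side note (not part of the Teledyne LeCroy datasheet), the quoted transfer rates map directly to link bandwidth via lane count and line encoding; a minimal sketch, assuming 8b/10b encoding at 2.5/5.0 GT/s and 128b/130b at 8.0 GT/s and above:

```python
# Rough PCIe bandwidth estimate per direction for the rates the analyzer
# supports. Line-encoding overhead: 8b/10b below Gen 3, 128b/130b from Gen 3 on.
def pcie_gbps(gt_per_s: float, lanes: int) -> float:
    efficiency = 8 / 10 if gt_per_s <= 5.0 else 128 / 130
    return gt_per_s * lanes * efficiency  # gigabits per second

for rate in (2.5, 5.0, 8.0, 16.0, 32.0):
    print(f"x16 @ {rate:4.1f} GT/s ~ {pcie_gbps(rate, 16) / 8:6.1f} GB/s per direction")
```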
  • Storage Administration Guide, SUSE Linux Enterprise Server 12 SP4
SUSE Linux Enterprise Server 12 SP4 Storage Administration Guide

Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com

Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”. For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

Contents: About This Guide (Available Documentation; Giving Feedback; Documentation Conventions; Product Life Cycle and Support; Support Statement for SUSE Linux Enterprise Server; Technology Previews); Part I: File Systems and Mounting; Chapter 1: Overview
  • Fibre Channel Interface
Fibre Channel Interface

©2006, Seagate Technology LLC. All rights reserved. Publication number: 100293070, Rev. A, March 2006. Seagate and Seagate Technology are registered trademarks of Seagate Technology LLC. SeaTools, SeaFONE, SeaBOARD, SeaTDD, and the Wave logo are either registered trademarks or trademarks of Seagate Technology LLC. Other product names are registered trademarks or trademarks of their owners. Seagate reserves the right to change, without notice, product offerings or specifications. No part of this publication may be reproduced in any form without written permission of Seagate Technology LLC.

Revision status summary sheet: Revision A, 03/08/06, C. Chalupa/J. Coomes, all sheets affected.

Contents:
1.0 Contents
2.0 Publication overview
  2.1 Acknowledgements
  2.2 How to use this manual
  2.3 General interface description
3.0 Introduction to Fibre Channel
  3.1 General information
  3.2 Channels vs. networks
  3.3 The advantages of Fibre Channel
4.0 Fibre Channel standards
  4.1 General information
    4.1.1 Description of Fibre Channel levels (FC-0, FC-1, FC-1.5, FC-2, FC-3, FC-4)
    4.1.2 Relationship between the levels
    4.1.3 Topology standards
    4.1.4 FC Implementation Guide (FC-IG)
    4.1.5 Applicable Documents
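Since the outline above enumerates the Fibre Channel levels, a compact reference table may help. The descriptions below are common summaries of the standard, not text from this Seagate excerpt; the FC-1.5 entry appears in the manual's outline without a description, so it is left unspecified here:

```python
# Quick-reference table for the Fibre Channel functional levels named in the
# manual's outline. Descriptions are paraphrased general summaries, not quotes.
FC_LEVELS = {
    "FC-0": "Physical interface: media, transmitters, receivers",
    "FC-1": "Transmission protocol: 8b/10b encoding, link control",
    "FC-1.5": "Listed in the manual's outline (description not in this excerpt)",
    "FC-2": "Signaling/framing: frames, sequences, exchanges, flow control",
    "FC-3": "Common services across multiple ports",
    "FC-4": "Upper-layer protocol mapping (e.g., SCSI over FC)",
}

for level, role in FC_LEVELS.items():
    print(f"{level:7s} {role}")
```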
  • SAS Enters the Mainstream
SAS enters the mainstream. By the InfoStor staff. http://www.infostor.com/articles/article_display.cfm?Section=ARTCL&C=Newst&ARTICLE_ID=295373&KEYWORDS=Adaptec&p=23

Although adoption of Serial Attached SCSI (SAS) is still in its infancy, the next 12 months bode well for proponents of the relatively new disk drive/array interface. For example, in a recent InfoStor QuickVote reader poll, 27% of the respondents said SAS will account for the majority of their disk drive purchases over the next year, although Serial ATA (SATA) topped the list with 37% of the respondents, followed by Fibre Channel with 32%. Only 4% of the poll respondents cited the aging parallel SCSI interface (see figure). However, surveys of InfoStor’s readers are skewed by the fact that almost half of our readers are in the channel (primarily VARs and systems/storage integrators), and the channel moves faster than end users in terms of adopting (or at least kicking the tires on) new technologies such as serial interfaces.

To get a more accurate view of the pace of adoption of serial interfaces such as SAS, consider market research predictions from firms such as Gartner and International Data Corp. (IDC). Yet even in those firms’ predictions, SAS is coming on surprisingly strong, mostly at the expense of its parallel SCSI predecessor. For example, Gartner predicts SAS disk drives will account for 16.4% of all multi-user drive shipments this year and will garner almost 45% of the overall market in 2009 (see figure on p. 18).
  • AMD Opteron™ Shared Memory MP Systems (Ardsher Ahmed, Pat Conway, Bill Hughes, Fred Weber)
Agenda (Hot Chips 14, September 22, 2002):
• Glueless MP systems
• MP system configurations
• Cache coherence protocol
• 2-, 4-, and 8-way MP system topologies
• Beyond 8-way MP systems

[Slide figure: AMD Opteron™ processor architecture. DRAM attaches to the integrated memory controller (MCT) over a 128-bit interface at 5.3 GB/s; the CPU, system request queue (SRQ), and crossbar (XBAR) connect to three HyperTransport™ (HT) links at 3.2 GB/s per direction each; the per-link MHz data-rate labels are garbled in this extraction.]

[Slide figure: Glueless MP system. Four Opteron nodes, each with local DRAM and MCT, connected by coherent HyperTransport (cHT) links, with non-coherent HyperTransport links to I/O devices.]

MP Architecture:
• Programming model of memory is effectively SMP: the physical address space is flat and fully coherent; the far-to-near memory latency ratio in a 4P system is designed to be < 1.4; the latency difference between remote and local memory is comparable to the difference between a DRAM page hit and a DRAM page conflict; DRAM locations can be contiguous or interleaved; no processor affinity or NUMA tuning is required.
• MP support designed in from the beginning: lower overall chip count results in outstanding system reliability; the memory controller and XBAR operate at the processor frequency; the memory subsystem scales with frequency improvements.

MP Architecture (contd.): Integrated Memory Controller
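The latency claim above (far-to-near ratio designed to be < 1.4 in a 4P system) can be made concrete with a toy model. The numbers below are illustrative assumptions, not AMD figures, and the model deliberately ignores hop-count differences between remote nodes:

```python
# Toy NUMA latency model for a 4-socket glueless MP system.
# Illustrative numbers only; the slide states the remote/local ratio is < 1.4.
local_ns = 80.0          # hypothetical local DRAM latency
ratio = 1.4              # far-to-near latency ratio upper bound from the slide
remote_ns = local_ns * ratio

# With a flat, fully coherent address space interleaved across 4 nodes,
# a uniformly random access lands on local memory 1/4 of the time.
expected_ns = 0.25 * local_ns + 0.75 * remote_ns
print(f"local {local_ns} ns, remote {remote_ns} ns, "
      f"uniform-mix average {expected_ns:.1f} ns")
```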
  • Gen 6 Fibre Channel Technology
WHITE PAPER: Better Performance, Better Insight for Your Mainframe Storage Network with Brocade Gen 6

Table of Contents: Overview (1); Technology Highlights (2); Gen 6 Fibre Channel Technology Benefits for z Systems and Flash Storage (7); Summary (9)

Brocade and the IBM z Systems IO product team share a unique history of technical development, which has produced the world’s most advanced mainframe computing and storage systems. Brocade’s technical heritage can be traced to the late 1980s, with the creation of channel extension technologies to extend data centers beyond the “glass-house.” IBM revolutionized the classic “computer room” with the invention of the original ESCON Storage Area Network (SAN) of the 1990s, and, in the 2000s, it facilitated geographically dispersed FICON® storage systems. Today, the most compelling innovations in mainframe storage networking technology are the product of this nearly 30-year partnership between Brocade and IBM z Systems.

As the flash era of the 2010s disrupts the traditional mainframe storage networking mindset, Brocade and IBM have released a series of features that address the demands of the data center. These technologies leverage the fundamental capabilities of Gen 5 and Gen 6 Fibre Channel, and extend them to the applications driving the world’s most critical systems. [...] between the business teams allows both organizations to guide the introduction of key technologies to the market place, while the integration between the system test and qualification teams ensures the integrity of those products. Driving these efforts are the deep technical relationships between the Brocade and IBM z Systems IO architecture and development teams.
  • Architecture and Application of Infortrend InfiniBand Design
Architecture and Application of Infortrend InfiniBand Design. Application Note, Version 1.3, updated October 2018.

Abstract: Focusing on the architecture and application of InfiniBand technology, this document introduces the architecture, application scenarios, and highlights of the Infortrend InfiniBand host module design.

Contents: What is InfiniBand (Overview and Background; Basics of InfiniBand: Hardware, Architecture); Application Scenarios for HPC; Current Limitation; Infortrend InfiniBand Host Board Design
  • Serial Attached SCSI (SAS)
Storage: A New Technology Hits the Enterprise Market: Serial Attached SCSI (SAS)

A New Interface Technology Addresses New Requirements. Enterprise data access and transfer demands are no longer driven by advances in CPU processing power alone. Beyond the sheer volume of data, information routinely consists of rich content that increases the need for capacity. High-speed networks have increased the velocity of data center activity. Networked applications have increased the rate of transactions. Serial Attached SCSI (SAS) addresses the technical requirements for performance and availability in these more I/O-intensive, mission-critical environments.

Still, IT managers are pressed on the other side by cost constraints and the need for flexibility and scalability in their systems. While application requirements are the first measure of a storage technology, systems based on SAS deliver on this second front as well. Because SAS is software compatible with parallel SCSI and is interoperable with Serial ATA (SATA), SAS technology offers the ability to manage costs by staging deployment and fine-tuning a data center’s storage configuration on an ongoing basis. When presented in a small form factor (SFF) 2.5” hard disk drive, SAS even addresses the increasingly important facility considerations of space, heat, and power consumption in the data center. The backplanes, enclosures, and cabling are less cumbersome than before: connectors are smaller, cables are thinner, easier to route, and impede airflow less, and both SAS and SATA HDDs can share a common backplane. Getting this new technology to the market in a workable, compatible fashion takes various companies coming together.
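The interoperability point above reduces to an asymmetric rule: SAS controllers and backplanes accept both SAS and SATA drives, while SATA controllers accept only SATA. A minimal sketch of that rule (the function and names are ours, not from the white paper):

```python
# Asymmetric SAS/SATA compatibility rule described in the white paper:
# a SAS controller/backplane accepts both drive types; SATA accepts SATA only.
def drive_supported(controller: str, drive: str) -> bool:
    compatible = {
        "SAS": {"SAS", "SATA"},
        "SATA": {"SATA"},
    }
    return drive in compatible.get(controller, set())

for ctrl in ("SAS", "SATA"):
    for drv in ("SAS", "SATA"):
        print(f"{drv} drive on {ctrl} controller: {drive_supported(ctrl, drv)}")
```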
  • Chapter 6 MIDI, SCSI, and Sample Dumps
Chapter 6: MIDI, SCSI, and Sample Dumps. SCSI Guidelines.

The following sections contain information on using SCSI with the K2600, as well as specific sections dealing with the Mac and the K2600.

Disk Size Restrictions: The K2600 accepts hard disks with up to 2 gigabytes of storage capacity. If you attach an unformatted disk that is larger than 2 gigabytes, the K2600 will still be able to format it, but only as a 2-gigabyte disk. If you attach a formatted disk larger than 2 gigabytes, the K2600 will not be able to work with it; you could reformat the disk, but this, of course, would erase the disk entirely.

Configuring a SCSI Chain: Here are some basic guidelines to follow when configuring a SCSI chain:

1. According to the SCSI Specification, the maximum SCSI cable length is 6 meters (19.69 feet). You should limit the total length of all SCSI cables connecting external SCSI devices with Kurzweil products to 17 feet (5.2 meters). To calculate the total SCSI cable length, add the lengths of all SCSI cables, plus eight inches for every external SCSI device connected. No single cable length in the chain should exceed eight feet.

2. The first and last devices in the chain must be terminated. There is a single exception to this rule, however: a K2600 with an internal hard drive and no external SCSI devices attached should have its termination disabled. If you later add an external device to the K2600’s SCSI chain, you must enable the K2600’s termination at that time.
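The cable-length guideline is easy to misapply by hand, so a small checker may help; a minimal sketch that encodes the rule exactly as stated above (the 17-foot Kurzweil budget, eight inches added per external device, and the eight-foot single-cable cap):

```python
# Validate a SCSI chain against the K2600 guidelines quoted above:
# total = sum of cable lengths + 8 inches per external device, max 17 feet;
# no single cable may exceed 8 feet.
def check_scsi_chain(cable_lengths_ft, external_devices):
    problems = []
    total_ft = sum(cable_lengths_ft) + external_devices * (8 / 12)
    if total_ft > 17:
        problems.append(f"total length {total_ft:.1f} ft exceeds the 17 ft limit")
    for i, length in enumerate(cable_lengths_ft, 1):
        if length > 8:
            problems.append(f"cable {i} is {length} ft; max single cable is 8 ft")
    return problems

# Example: three 6-foot cables connecting three external devices.
issues = check_scsi_chain([6, 6, 6], external_devices=3)
print(issues or "chain is within the guidelines")
```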
  • HPC Technology Through SC20 (SC20를 통해 본 HPC 기술 동향)
Electronics and Telecommunications Trends. "HPC Technology Through SC20" (SC20를 통해 본 HPC 기술 동향).

Authors: I.S. Eo (어익수), [email protected], Principal Researcher, Supercomputing Technology Research Center; H.S. Mo (모희숙), [email protected], Principal Researcher, Supercomputing Technology Research Center; Y.M. Park (박유미), [email protected], Principal Researcher and Center Director, Supercomputing Technology Research Center; W.J. Han (한우종), [email protected], Principal Researcher and Research Fellow, Artificial Intelligence Research Laboratory.

ABSTRACT: High-performance computing (HPC) is the underpinning for many of today’s most exciting new research areas, from big science to new ways of fighting disease, to artificial intelligence (AI), to big data analytics, to quantum computing. This report summarizes the 9-day program of presentations, keynotes, and workshops at the SC20 conference, one of the most prominent events for sharing ideas and results in HPC technology R&D. Because of the exceptional situation caused by COVID-19, the conference was held entirely online from 11/9 to 11/19 2020, and it notably drew attention to using HPC to make breakthroughs in vaccines and cures for COVID-19. The program brought together 103 papers from 21 countries, along with 163 presentations in 24 workshop sessions. The event covered several key areas in HPC technology, including new memory hierarchies and interconnects for different accelerators, evaluation of parallel programming models, and simulation and modeling in traditional science applications. Notably, there was increasing interest in AI and big data analytics as well. With this summary of recent HPC trends, readers may find useful information to guide R&D directions for challenging new technologies and applications in HPC.
  • DesignWare IP for Cloud Computing SoCs
    DesignWare IP for Cloud Computing SoCs Overview Hyperscale cloud data centers continue to evolve due to tremendous Internet traffic growth from online collaboration, smartphones and other IoT devices, video streaming, augmented and virtual reality (AR/VR) applications, and connected AI devices. This is driving the need for new architectures for compute, storage, and networking such as AI accelerators, Software Defined Networks (SDNs), communications network processors, and solid state drives (SSDs) to improve cloud data center efficiency and performance. Re-architecting the cloud data center for these latest applications is driving the next generation of semiconductor SoCs to support new high-speed protocols to optimize data processing, networking, and storage in the cloud. Designers building system-on-chips (SoCs) for cloud and high performance computing (HPC) applications need a combination of high-performance and low-latency IP solutions to help deliver total system throughput. Synopsys provides a comprehensive portfolio of high-quality, silicon-proven IP that enables designers to develop SoCs for high-end cloud computing, including AI accelerators, edge computing, visual computing, compute/application servers, networking, and storage applications. Synopsys’ DesignWare® Foundation IP, Interface IP, Security IP, and Processor IP are optimized for high performance, low latency, and low power, while supporting advanced process technologies from 16-nm to 5-nm FinFET and future process nodes. High-Performance Computing Today’s high-performance computing (HPC) solutions provide detailed insights into the world around us and improve our quality of life. HPC solutions deliver the data processing power for massive workloads required for genome sequencing, weather modeling, video rendering, engineering modeling and simulation, medical research, big data analytics, and many other applications.
  • EMC’s Perspective: A Look Forward
The Performance Impact of NVM Express and NVM Express over Fabrics. Live: November 13, 2014. Presented by experts from Cisco, EMC and Intel.

Webcast presenters: J Metz, R&D Engineer for the Office of the CTO, Cisco; Amber Huffman, Senior Principal Engineer, Intel; Steve Sardella, Distinguished Engineer, EMC; Dave Minturn, Storage Architect, Intel.

SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced in their entirety without modification; the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK.

What This Presentation Is: A discussion of a new way of talking to Non-Volatile